Research Statement
My training before beginning my PhD was in natural science and mathematics, and I carry these perspectives into my research. Most notably, I am not an engineer, either by training or by temperament. As a result, my work tends to focus on "how does..." questions rather than "how can..." questions; I do far more basic research than applied research.
I characterize the essence of my approach to reinforcement learning with the term "mathematical reinforcement learning." The term contrasts my work with RL theory, which tends to focus on analyzing algorithms that approach optimality, whereas my specialty is mathematical analysis of learning as it is actually practiced. My emphasis is thus on understanding how and why modern methods find effective suboptimal policies, and on discovering and describing novel structures that elucidate the reinforcement learning problem and may improve the performance of these methods.
I am also very interested in the discrepancy between learning methods as we describe them mathematically and as we implement them in digital computers. This interest has led me to study the effects of function approximator expressivity on learning. One practical outcome of this line of research has been my work on AltNet, a resetting method that maintains plasticity over long training horizons.
My belief is that by improving our understanding of learning itself, and of the structures we use as substrates for learning, we can make far more durable progress in understanding natural and artificial intelligence than a totalizing focus on optimization techniques allows. I hope that my research can contribute to such an understanding.
If you'd like to talk more about this kind of research, please reach out to me.