I’m a PhD candidate in statistics advised by Daniel Roy at the University of Toronto, supported by the Vector Institute and an NSERC Doctoral Canada Graduate Scholarship. I received my BSc in financial mathematics from Western University in 2018.
My research is broadly in statistical machine learning, with a focus on theoretical performance guarantees for sequential decision making. Some problems I’m currently thinking about are:
Adaptivity: How can we characterize the difficulty of learning from data beyond classical stationary dependence structures, and design algorithms that adapt to these notions of difficulty?
Statistical Complexities: Existing notions of predictor and algorithm complexity may be vacuous or suboptimal for modern machine learning systems. What are the right notions of complexity, ones that lead to theoretical guarantees matching empirical performance?
Transfer Learning: Empirical performance has far outpaced the theory of learning from data with few or no labels, and of using such data to make predictions on out-of-distribution data. What are the correct theoretical formalizations of these tasks, ones that yield sample complexity guarantees representative of empirical performance?
To view my curriculum vitae, click here.