I’m a PhD candidate in statistics advised by Daniel Roy at the University of Toronto and the Vector Institute. From January to March 2020 I will be visiting the Institute for Advanced Study in Princeton as part of their Special Year on Optimization, Statistics, and Theoretical Machine Learning.
My research area is broadly theoretical machine learning, where I work on generalization error, online learning theory, and statistical complexity. Recently, I have been working to derive online regret bounds that adapt to both adversarial and benign environments for non-Lipschitz losses.
In general, I believe that closing the large gap between theory and practice in machine learning is necessary to improve the reliability and interpretability of modern algorithms, which is crucial for deploying them in decisions where human lives are at stake. Consequently, I am interested in results that are consistent with, or explain, the behaviour practitioners actually observe.
I have also worked on queueing theory with David Stanford, and I remain actively involved in those projects as they develop.
To view my curriculum vitae, click here.
PhD, Statistics, 2023 (expected)
University of Toronto
BSc, Financial Modelling, 2018
If you have a problem to solve, or simply an idea you find interesting, please email me the details. I would be especially eager to collaborate if the problem relates to fundamental machine learning.
I also offer more formal consulting on traditional statistics problems such as model fitting, inference, and interpretation. Past consulting projects include identifying distributional differences between training and test data, predicting travel preferences for app users, and modelling residential real estate prices.
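As an illustration of the first kind of project, one simple way to check whether a single feature's distribution differs between training and test data is a two-sample Kolmogorov–Smirnov test. The sketch below is purely illustrative (the data are simulated, not from an actual consulting engagement) and assumes NumPy and SciPy are available:

```python
# Illustrative sketch: detecting a distributional difference between a
# training feature and a test feature with a two-sample KS test.
# The data here are simulated; in practice these arrays would be one
# column of the training and test design matrices.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=1000)  # training feature
test = rng.normal(loc=0.5, scale=1.0, size=1000)   # mean-shifted test feature

stat, p_value = ks_2samp(train, test)
shift_detected = p_value < 0.05  # reject "same distribution" at the 5% level
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}, "
      f"shift detected: {shift_detected}")
```

In practice one would run such a test per feature (with a multiple-testing correction) or use a multivariate method, such as training a classifier to distinguish the two samples; the univariate test above is just the simplest version of the idea.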