I'm a Research Scientist at Google DeepMind (formerly Google Brain), mostly doing "full-stack" research on language modeling, from pre-training to fine-tuning/post-training. Before that, I worked as a software engineer developing and deploying machine learning models in production to millions of users (e.g. as machine learning lead for Gmail Spam). Academically, I was fortunate to get my start in machine learning at the University of Toronto.
Interns: Abigail See, Jamie Murdoch, Ethan Steinberg, Eric Chu, Yu-An Chung, Jingqing Zhang (hosted by Yao Zhao), Reinald Amplayo, Jason Phang, Kundan Krishna, Yixin Liu
Residents: Colin Raffel, Jonas Kemp, Jie Ren
Reviewing for: ICLR, ICML, NeurIPS, ACL, EMNLP, ARR
Conference area chair for: ACL (Summarization), ACL (Large Language Models), EMNLP (Language Modeling)