Jonathan Huggins

PhD Candidate in Computer Science

Computer Science and Artificial Intelligence Laboratory (CSAIL)
Massachusetts Institute of Technology


research

My current research interests are in developing better algorithms for probabilistic and statistical inference and in improving our theoretical understanding of existing inference algorithms. My focus is on Bayesian inference, learning theory, and the interplay between efficient learning and inference. I am particularly interested in the learning-theoretic properties of inference algorithms for hierarchical models, probabilistic programs, and other rich model classes, as well as in algorithms for large-scale Bayesian inference.

recent news

SEPTEMBER 2017
PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference, coauthored with Ryan Adams and Tamara Broderick, has been accepted with a spotlight talk to the Conference on Neural Information Processing Systems.

FEBRUARY 2017
We have updated our preprint Truncated Random Measures, which has been completely rewritten to include more general truncation bounds and applications to a wider range of sequential representations.

JANUARY 2017
Quantifying the Accuracy of Approximate Diffusions and Markov Chains, coauthored with James Zou, has been accepted to the Conference on Artificial Intelligence and Statistics.

OCTOBER 2016
James Zou and I have updated our preprint Quantifying the Accuracy of Approximate Diffusions and Markov Chains, which now includes an application of our approach to piecewise deterministic Markov processes, including zig-zag processes.

AUGUST 2016
Coresets for Scalable Bayesian Logistic Regression, coauthored with Trevor Campbell and Tamara Broderick, has been accepted to the Conference on Neural Information Processing Systems.

Academic Bio

I am a senior PhD student at MIT EECS/CSAIL, where I'm co-advised by Josh Tenenbaum and Tamara Broderick. During my PhD I have also spent time in Ryan Adams's group at Harvard and worked with Lester Mackey at Microsoft Research. Before coming to MIT, I was a mathematics major at Columbia University, where I worked with Frank Wood on Bayesian nonparametric modeling and with Liam Paninski on statistical methods for neuroscience.

Collaborators

Ryan P. Adams, Tamara Broderick, Trevor Campbell, Jonathan P. How, Matthew J. Johnson, Lester Mackey, Vikash K. Mansinghka, Karthik Narasimhan, Ari Pakman, Liam Paninski, Eftychios A. Pnevmatikakis, Kamiar Rahnama Rad, Dan Roy, Cynthia Rudin, Ardavan Saeedi, Carl Smith, Josh B. Tenenbaum, Frank Wood, James Zou

preprints and working papers

Truncated Random Measures
Trevor Campbell*, Jonathan H. Huggins*, Jonathan P. How, Tamara Broderick
*Contributed equally
[slides from BNP 2017]

recent publications and research articles

Sequential Monte Carlo as Approximate Sampling: bounds, adaptive resampling via ∞-ESS, and an application to Particle Gibbs
Jonathan H. Huggins, Daniel M. Roy
Bernoulli, In press.

PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference
Jonathan H. Huggins, Ryan P. Adams, Tamara Broderick
In Proc. of the 31st Annual Conference on Neural Information Processing Systems (NIPS), 2017.
[code]

Quantifying the Accuracy of Approximate Diffusions and Markov Chains
Jonathan H. Huggins, James Zou
In Proc. of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.

Coresets for Scalable Bayesian Logistic Regression
Jonathan H. Huggins, Trevor Campbell, Tamara Broderick
In Proc. of the 30th Annual Conference on Neural Information Processing Systems (NIPS), 2016.
[spotlight video] [AABI NIPS workshop talk video] [code]

Risk and Regret of Hierarchical Bayesian Learners
Jonathan H. Huggins, Joshua B. Tenenbaum
In Proc. of the 32nd International Conference on Machine Learning (ICML), 2015.

JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes
Jonathan H. Huggins*, Karthik Narasimhan*, Ardavan Saeedi*, Vikash K. Mansinghka
*Contributed equally
In Proc. of the 32nd International Conference on Machine Learning (ICML), 2015.
[code]

Unpublished Work

Detailed Derivations of Small-variance Asymptotics for some Hierarchical Bayesian Nonparametric Models
Jonathan H. Huggins, Ardavan Saeedi, Matthew J. Johnson
arXiv:1501.00052 [stat.ML], 2014

Infinite Structured Hidden Semi-Markov Models
Jonathan H. Huggins, Frank Wood
arXiv:1407.0044 [stat.ME], 2014

In Fall 2011, for a final project in Stephen Edwards' compilers class, David Hu, Hans Hyttinen, Harley McGrew and I created YAPPL (Yet Another Probabilistic Programming Language). The final report includes a short tutorial and the language reference manual. The code for the compiler is written in OCaml, which is one of my favorite programming languages.
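
For a taste of what such programs look like, here is a minimal sketch in plain OCaml rather than YAPPL's own syntax (which is defined in the language reference manual; the function names below are illustrative only). It draws from a geometric distribution by recursively flipping biased coins and forms a Monte Carlo estimate of the mean:

    (* Draw from a Bernoulli(p) distribution. *)
    let bernoulli p = Random.float 1.0 < p

    (* Draw from a Geometric(p) distribution: count coin flips
       until the first success. *)
    let rec geometric p = if bernoulli p then 1 else 1 + geometric p

    (* Monte Carlo estimate of the mean of Geometric(p),
       whose exact value is 1/p. *)
    let estimate_mean p n =
      let total = ref 0 in
      for _ = 1 to n do
        total := !total + geometric p
      done;
      float_of_int !total /. float_of_int n

    let () =
      Random.self_init ();
      Printf.printf "estimated mean: %f (exact: %f)\n"
        (estimate_mean 0.25 100_000) (1.0 /. 0.25)

A probabilistic programming language makes the random primitives and the recursion over them first-class, so the compiler can handle sampling and inference rather than the programmer.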

contact info

Email:  jhuggins -at- mit edu
Curriculum Vitæ: PDF
Bitbucket: jhhuggins
Google Scholar: profile

Office:
Stata Center, Room G451
32 Vassar Street
Cambridge, MA 02139