Tommi S. Jaakkola, Ph.D.
Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society
MIT Computer Science and Artificial Intelligence Laboratory
Stata Center, Bldg 32-G470, Cambridge, MA 02139
tommi at csail dot mit dot edu
[home] [papers] [research] [people]
Machine learning methods are commonly used across engineering and the sciences, from computer systems to physics. Commercial enterprises such as search engines, recommender systems, advertisers, and financial institutions employ machine learning algorithms for content recommendation, modeling customer behavior, compliance, and risk. As a discipline, machine learning seeks to design and understand computer programs that learn from experience for the purpose of prediction or control. In this course, you will learn about principles and algorithms for turning data into effective automated predictions. We will cover concepts such as representation, over-fitting, regularization, and generalization; topics such as clustering, classification, and probabilistic modeling; and methods such as on-line algorithms, support vector machines, hidden Markov models, and Bayesian networks.
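To make the over-fitting/regularization/generalization trio above concrete, here is a minimal self-contained sketch (the data, degree, and penalty weight are my own illustrative choices, not course material): a high-degree polynomial is fit to a few noisy samples with and without an L2 (ridge) penalty, and training versus test error are compared.

```python
import numpy as np

# Toy data (an assumption for illustration): 12 noisy samples of a sine curve.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, x_train.size)
x_test = np.linspace(0.0, 1.0, 100)
y_test = np.sin(2 * np.pi * x_test)

def fit_poly(x, y, degree, lam=0.0):
    """Polynomial fit with an L2 penalty, via augmented least squares:
    minimizes ||Xw - y||^2 + lam * ||w||^2."""
    X = np.vander(x, degree + 1)
    A = np.vstack([X, np.sqrt(lam) * np.eye(degree + 1)])
    b = np.concatenate([y, np.zeros(degree + 1)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def mse(w, x, y):
    """Mean squared prediction error of coefficient vector w on (x, y)."""
    return float(np.mean((np.vander(x, w.size) @ w - y) ** 2))

w_plain = fit_poly(x_train, y_train, degree=9)            # no penalty
w_ridge = fit_poly(x_train, y_train, degree=9, lam=1e-3)  # ridge penalty

# The unpenalized fit achieves the lowest possible training error; the
# ridge fit trades a little training error for a smoother function that
# typically generalizes better to the held-out test points.
print("train MSE:", mse(w_plain, x_train, y_train), mse(w_ridge, x_train, y_train))
print("test  MSE:", mse(w_plain, x_test, y_test), mse(w_ridge, x_test, y_test))
```

The course develops the principles behind this trade-off (model selection, generalization bounds) far more carefully; this sketch only shows the phenomenon the terms refer to.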
The need to study human languages from a computational perspective has never been greater. Much of the vast amount of information available today is in textual form, requiring us to develop automated tools to search, extract, translate, and summarize the data. This course on natural language processing (NLP) focuses on exactly such problems, covering syntactic, semantic, and discourse processing models, and their applications to information extraction, machine translation, and text summarization. As a new feature this year, the course will emphasize deep learning techniques for NLP, introducing them in parallel with, and in comparison to, more traditional approaches.
This graduate subject covers principles, techniques, and algorithms in machine learning from the point of view of statistical inference, emphasizing methods that are broadly useful across engineering and the sciences. Topics include representation, generalization, and model selection, as well as methods such as linear/additive models, active learning, boosting, support vector machines, non-parametric Bayesian methods, and graphical models.