Andrew McCallum

University of Massachusetts Amherst

Andrew McCallum is a Professor and Director of the Information Extraction and Synthesis Laboratory in the School of Computer Science at the University of Massachusetts Amherst. He has published over 250 papers in many areas of AI, including natural language processing, machine learning, data mining, and reinforcement learning, and his work has received over 34,000 citations. He obtained his PhD from the University of Rochester in 1995 under Dana Ballard and held a postdoctoral fellowship at CMU with Tom Mitchell and Sebastian Thrun. In the early 2000s he was Vice President of Research and Development at WhizBang Labs, a 170-person start-up company that used machine learning for information extraction from the Web. He is an AAAI Fellow and the recipient of the UMass Chancellor's Award for Research and Creative Activity, the UMass NSM Distinguished Research Award, the UMass Lilly Teaching Fellowship, and research awards from Google, IBM, Yahoo, and Microsoft. He was the General Chair for the International Conference on Machine Learning (ICML) 2012, and is the current president of the International Machine Learning Society as well as a member of the editorial board of the Journal of Machine Learning Research. For the past ten years, McCallum has been active in research on statistical machine learning applied to text, especially information extraction, entity resolution, semi-supervised learning, topic models, and social network analysis. His work on open peer review can be found at http://openreview.net. McCallum's web page is http://www.cs.umass.edu/~mccallum.

Title: Representation and Reasoning with Universal Schema Embeddings

Abstract:

Work in knowledge representation has long struggled to design schemas of entity and relation types that capture the desired balance of specificity and generality while also supporting reasoning and information integration from various sources of input evidence. In our "universal schema" approach to knowledge representation we operate on the union of all input schemas (from structured KBs to OpenIE textual patterns) while also supporting integration and generalization by learning vector embeddings whose neighborhoods capture semantic implicature. In this talk I will briefly review our past work on a knowledge graph with universal schema relations and entity types, then describe new research on multi-sense embeddings, Gaussian embeddings that capture uncertainty and asymmetry, and logical implicature of new relations through multi-hop relation paths compositionally modeled by recursive neural tensor networks.
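To make the universal-schema idea concrete, the following is a minimal illustrative sketch (not the actual system): every entity pair and every relation, whether a structured-KB relation or an OpenIE textual pattern, gets a vector, and the plausibility that a relation holds for a pair is modeled as the sigmoid of their dot product. The entity pairs, relation names, and random vectors here are hypothetical placeholders; in the real approach the vectors are learned from co-occurrence data.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# The union of input schemas: a KB relation alongside an OpenIE textual
# pattern live in the same embedding space (toy examples, not real data).
entity_pairs = ["(Obama, Hawaii)", "(Einstein, Ulm)"]
relations = ["born_in", "X was raised in Y"]

# In practice these vectors are learned; random vectors stand in here.
pair_vecs = {p: rng.normal(size=dim) for p in entity_pairs}
rel_vecs = {r: rng.normal(size=dim) for r in relations}

def score(pair: str, relation: str) -> float:
    """Sigmoid of the dot product: modeled plausibility that the
    relation holds for the entity pair."""
    z = pair_vecs[pair] @ rel_vecs[relation]
    return float(1.0 / (1.0 + np.exp(-z)))

for p in entity_pairs:
    for r in relations:
        print(f"score({p}, {r!r}) = {score(p, r):.3f}")
```

Because KB relations and textual patterns share one space, nearby relation vectors can transfer evidence between schemas, which is the "integration and generalization" the abstract refers to.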