Demand for real-world applications
Scientists increasingly turn to data-driven solutions for real-world problems in areas such as the cognitive sciences, biology, finance, physics, and the social sciences.
Challenge for domain experts
However, current technologies offer cumbersome solutions along multiple dimensions. These include: interacting with messy, naturally occurring data; the need for extensive programming; the need to exploit various learning paradigms and techniques; and extensive experimental exploration for model selection, feature selection, and parameter tuning, owing to the lack of theoretical evidence about the effectiveness of various models.
High-level goal and implied directions
The DeLBP workshop aims to highlight the issues and challenges that arise in building a declarative, data-driven problem-solving paradigm. Such a paradigm would facilitate and simplify the design and development of intelligent real-world applications that learn from data and reason over knowledge. It also highlights the challenges in making machine learning accessible to domain experts and application programmers.
Conventional programming languages were not designed to address the challenges above. To achieve the DeLBP goals, we need to go beyond tools for classic machine learning, designing innovative new abstractions and enriching existing solutions and frameworks with the ability to:
- Specify the requirements of the application at a high level of abstraction;
- Exploit expert knowledge in learning;
- Deal with uncertainty in data and knowledge across the layers of the application program;
- Use representations that support flexible relational feature engineering;
- Use representations that support flexible reasoning and structure learning;
- Reuse, combine, and chain models, and perform flexible inference over complex models or pipelines of decision making;
- Integrate a range of learning and inference algorithms;
- Close the loop of moving from data to knowledge and exploiting knowledge to generate data;
- Provide a unified programming environment for designing application programs.
Related communities
Over the last few years the research community has tried to address these problems from multiple perspectives, most notably through approaches based on probabilistic programming (PP), logic programming (LP), constrained conditional models (CCMs), and integrated paradigms such as probabilistic logic programming (PLP) and statistical relational learning (SRL). These paradigms and their associated languages aim at learning over probabilistic structures and exploiting knowledge in learning. Moreover, in recent years several deep learning tools have provided easy-to-use abstractions for programming the model configurations of deep architectures. We aim to motivate further research toward a unified framework in this area, building on the key paradigms above as well as related research on first-order query languages, database management systems (DBMS), deductive databases (DDB), hybrid optimization, and deep architectures for learning from data and knowledge. We are interested in connecting these ideas toward a Declarative Learning Based Programming paradigm and in investigating the types of languages, representations, and computational models required to support it.
Highlight
Though the theme of this workshop remains as general as in past editions, we will emphasize ideas and opinions on conceptual representations of deep learning architectures that connect various computational units to the semantics of declarative data and knowledge representations. We also encourage submissions on learning to learn programs.
Topics Summary
The main research questions and topics of interest include the following, considered in the context of an integrated learning-based paradigm:
- New programming abstractions and modularity levels towards a unified framework for (deep/structured) learning and reasoning.
- Frameworks/Computational models to combine learning and reasoning paradigms and exploit accomplishments in AI from various perspectives.
- Flexible use of structured and relational data from heterogeneous resources in learning.
- Data modeling (relational/graph-based databases) issues in such a new integrated framework for learning based on data and knowledge.
- Exploiting knowledge, such as expert knowledge and common-sense knowledge expressed in multiple formalisms, in learning.
- Closing the loop of acquiring knowledge from data and data from knowledge, towards life-long learning and reasoning.
- Using declarative domain knowledge to guide the design of learning models, including feature extraction, model selection, dependency structure, and deep model architecture.
- Automation of hyper-parameter tuning.
- Design and representation of complex learning and inference models.
- The interface for learning-based programming, whether in the form of programming languages, declarations, frameworks, libraries, or graphical user interfaces.
- Storage and retrieval of trained learning models in a flexible way to facilitate incremental learning.
- Related applications in natural language processing, computer vision, bioinformatics, computational biology, multi-agent systems, etc.
- Learning to learn programs.
Schedule
| Time | Session | Presenter(s) |
|---|---|---|
| 9:00-9:15 | Workshop overview: DeLBP aims and challenges | Parisa Kordjamshidi |
| 9:15-10:05 | Keynote talk: Scruff: A Deep Probabilistic Cognitive Architecture | Avi Pfeffer |
| 10:05-10:25 | Accepted paper: Fairness-aware Relational Learning and Inference | Golnoosh Farnadi, Behrouz Babaki and Lise Getoor |
| 11:00-11:50 | Keynote talk: Reading and Reasoning with Neural Program Interpreters | Sebastian Riedel |
| 11:50-12:10 | Accepted paper: Image Classification Using Deep Learning and Prior Knowledge | Michelangelo Diligenti, Soumali Roychowdhury and Marco Gori |
| 2:10-3:00 | Keynote talk: Probabilistic Logics and Declarative Statistical Learning | William Cohen |
| 3:00-3:30 | Invited paper: Snorkel: Rapid Training Data Creation with Weak Supervision | Alex Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, Christopher Ré |
| 4:00-4:50 | Keynote talk: Pyro: Programmable Probabilistic Programming with Python and PyTorch | Eli Bingham |
Keynote Speakers
- Sebastian Riedel, University College London
- William Cohen, Carnegie Mellon University
- Avi Pfeffer, Charles River Analytics
- Eli Bingham, Uber AI Labs
Reading and Reasoning with Neural Program Interpreters
Abstract. We are getting better at teaching end-to-end neural models how to answer questions about content in natural language text. However, progress has been mostly restricted to extracting answers that are directly stated in the text. In this talk, I will present our work towards teaching machines not only to read but also to reason with what was read and to do this in an interpretable and controlled fashion. Our main hypothesis is that this can be achieved by the development of neural abstract machines that follow the blueprint of program interpreters for real-world programming languages. We test this idea using two languages: an imperative (Forth) and a declarative (Prolog/Datalog) one. In both cases, we implement differentiable interpreters that can be used for learning reasoning patterns. Crucially, because they are based on interpretable host languages, the interpreters also allow users to easily inject prior knowledge and inspect the learnt patterns. Moreover, on tasks such as math word problems and relational reasoning, our approach compares favourably to state-of-the-art methods.
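To make the idea of a differentiable interpreter concrete, here is a deliberately tiny, hypothetical sketch (not code from the talk): a one-instruction "program" whose discrete choice between two operations is relaxed into a trainable softmax mixture, so the right instruction can be recovered from input/output examples by gradient descent. All names and the operation set are made up for illustration.

```python
import numpy as np

# Candidate "instructions" the toy interpreter can execute (hypothetical).
ops = [lambda a, b: a + b, lambda a, b: a * b]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Trainable logits over instructions; softmax turns the discrete
# instruction choice into a differentiable "soft" choice.
theta = np.zeros(len(ops))

# Input/output examples produced by the hidden target instruction (mul).
data = [(2.0, 3.0, 6.0), (4.0, 5.0, 20.0), (1.5, 2.0, 3.0)]

lr = 0.5
for _ in range(200):
    w = softmax(theta)
    grad = np.zeros(len(ops))
    for a, b, y in data:
        outs = np.array([op(a, b) for op in ops])
        pred = w @ outs              # soft execution: mixture of op outputs
        # d(pred)/d(theta_j) = w_j * (outs_j - pred) via the softmax Jacobian,
        # so the squared-error gradient is:
        grad += 2 * (pred - y) * w * (outs - pred)
    theta -= lr * grad / len(data)

# After training, the soft choice has collapsed onto multiplication.
learned = ops[int(np.argmax(softmax(theta)))]
print(learned(2.0, 3.0))  # 6.0
```

Real differentiable interpreters relax far more machinery (stacks, memory, control flow), but the principle is the same: every discrete execution step becomes a differentiable mixture, so reasoning patterns can be trained end to end while prior knowledge can be injected as fixed code.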
Bio. Sebastian Riedel is a reader in Natural Language Processing and Machine Learning at University College London (UCL), where he leads the Machine Reading lab. He is also head of research at Bloomsbury AI and an Allen Distinguished Investigator. He works at the intersection of Natural Language Processing and Machine Learning, focusing on teaching machines how to read and reason. He was educated in Hamburg-Harburg (Dipl. Ing.) and Edinburgh (MSc, PhD), and worked at the University of Massachusetts Amherst and Tokyo University before joining UCL.
Probabilistic Logics and Declarative Statistical Learning
Abstract. TensorLog is a simple probabilistic first-order logic in which logical queries can be compiled into differentiable functions in a neural network infrastructure, such as TensorFlow or Theano. This leads to a close integration of probabilistic logical reasoning with deep learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for learning the parameters of a probabilistic logic. We show how TensorLog's integration with deep learners allows one to express logical constraints on learning for tasks such as question answering against a knowledge base and semi-supervised learning over network data.
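The compilation idea can be illustrated with a small, hypothetical sketch in plain NumPy (the entities, relations, and rule below are invented, and real TensorLog handles much more, such as soft rule weights and recursion): a chain rule over binary relations becomes a product of adjacency matrices, and answering a query becomes a vector-matrix product that a framework like TensorFlow can differentiate.

```python
import numpy as np

# Hypothetical toy knowledge base over four entities.
entities = ["ann", "bob", "carl", "dana"]
n = len(entities)
idx = {e: i for i, e in enumerate(entities)}

def one_hot(name):
    v = np.zeros(n)
    v[idx[name]] = 1.0
    return v

# Each binary relation is a (soft) adjacency matrix:
# M[i, j] = confidence that relation(entity_i, entity_j) holds.
parent = np.zeros((n, n))
parent[idx["bob"], idx["ann"]] = 1.0     # parent(bob, ann)
brother = np.zeros((n, n))
brother[idx["carl"], idx["bob"]] = 1.0   # brother(carl, bob)

# The rule  uncle(X, Y) :- brother(X, Z), parent(Z, Y)
# compiles to a matrix product over the body relations.
uncle = brother @ parent

# Query uncle(carl, ?): a vector of scores over all entities.
scores = one_hot("carl") @ uncle
print(entities[int(np.argmax(scores))])  # ann
```

Because the query is now a differentiable function of the relation matrices, those matrices (or weights attached to rules) can be trained by gradient descent inside an ordinary deep learning framework, which is the integration the abstract describes.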
Bio. William Cohen received his bachelor's degree in Computer Science from Duke University in 1984 and a PhD in Computer Science from Rutgers University in 1990. From 1990 to 2000 Dr. Cohen worked at AT&T Bell Labs and later AT&T Labs-Research, and from April 2000 to May 2002 he worked at Whizbang Labs, a company specializing in extracting information from the web. Dr. Cohen is a past president of the International Machine Learning Society. He has served as an action editor for the AI and Machine Learning series of books published by Morgan Claypool, for the journal Machine Learning, the journal Artificial Intelligence, the Journal of Machine Learning Research, and the Journal of Artificial Intelligence Research. He was General Chair for the 2008 International Machine Learning Conference, held July 6-9 at the University of Helsinki, Finland; Program Co-Chair of the 2006 International Machine Learning Conference; and Co-Chair of the 1994 International Machine Learning Conference. Dr. Cohen was also Co-Chair of the 3rd International AAAI Conference on Weblogs and Social Media, held May 17-20, 2009 in San Jose, and Co-Program Chair of the 4th International AAAI Conference on Weblogs and Social Media. He is an AAAI Fellow, and won the 2008 SIGMOD "Test of Time" Award for the most influential SIGMOD paper of 1998 and the 2014 SIGIR "Test of Time" Award for the most influential SIGIR paper of 2002-2004.
Scruff: A Deep Probabilistic Cognitive Architecture
Abstract. Probabilistic programming is able to build rich models of systems that combine prior knowledge with the ability to learn from data. One of the reasons for the success of deep learning is the ability to discover hidden features of the domain through complex, multi-layered, nonlinear functions; another is the ability to learn and reason effectively about these functions in a scalable way. Our goal is to develop generative probabilistic programs that have the same properties. Recent trends in cognitive science view perception and action in a unified framework based on downward prediction using a generative probabilistic model and upward propagation of errors. Scruff is intended to be a probabilistic programming cognitive architecture based on this idea. Scruff provides many different mechanisms for accomplishing intelligent behavior, all within a neat generative probabilistic framework. Scruff uses Haskell's rich type system to create a library of models, where each kind of model is able to support certain kinds of inference efficiently. The type system ensures that only compatible models can be linked together. Current mechanisms include learning parameters via gradient ascent backpropagation (as in deep neural networks), reinforcement learning to perform inference, conditioning on various kinds of evidence, and different ways of computing probabilities. Using Scruff, we are exploring a range of new kinds of deep models, such as deep noisy-or networks, deep probabilistic context-free grammars, and deep conditional linear Gaussian networks.
Bio. Dr. Avi Pfeffer is Chief Scientist at Charles River Analytics. Dr. Pfeffer is a leading researcher on a variety of computational intelligence techniques including probabilistic reasoning, machine learning, and computational game theory. Dr. Pfeffer has developed numerous innovative probabilistic representation and reasoning frameworks, such as probabilistic programming, which enables the development of probabilistic models using the full power of programming languages, and statistical relational learning, which provides the ability to combine probabilistic and relational reasoning. He is the lead developer of Charles River Analytics’ Figaro probabilistic programming language. As an Associate Professor at Harvard, he developed IBAL, the first general-purpose probabilistic programming language. While at Harvard, he also produced systems for representing, reasoning about, and learning the beliefs, preferences, and decision-making strategies of people in strategic situations. Prior to joining Harvard, he invented object-oriented Bayesian networks and probabilistic relational models, which form the foundation of the field of statistical relational learning. Dr. Pfeffer serves as Action Editor of the Journal of Machine Learning Research and served as Associate Editor of Artificial Intelligence Journal and as Program Chair of the Conference on Uncertainty in Artificial Intelligence. He has published many journal and conference articles and is the author of a text on probabilistic programming. Dr. Pfeffer received his Ph.D. in computer science from Stanford University and his B.A. in computer science from the University of California, Berkeley.
Accepted and Invited Papers
- Alex Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, Christopher Ré, Snorkel: Rapid Training Data Creation with Weak Supervision.
- Golnoosh Farnadi, Behrouz Babaki and Lise Getoor, Fairness-aware Relational Learning and Inference.
- Michelangelo Diligenti, Soumali Roychowdhury and Marco Gori, Image Classification Using Deep Learning and Prior Knowledge.
Submissions
We encourage contributions in the form of a technical paper (AAAI style, 6 pages excluding references), a position statement (AAAI style, 2 pages maximum), or an abstract of previously published work. AAAI style files are available here. Please make submissions via EasyChair, here.
- Submission Deadline: October 20th, 2017 (extended to October 31st, 2017)
- Notification: November 13th, 2017
- Workshop Days: February 3, 2018
- Tulane University, IHMC
- University of Pennsylvania
- University of California, Los Angeles
- University of California, Irvine
- Vrije Universiteit Brussel
- Amazon, Cambridge, UK
- University of California, Santa Barbara
- Technical University of Dortmund
- University of California, Los Angeles
- University of Oxford
- Charles River Analytics