|Time|Session|Speakers|
|---|---|---|
|9:10-10:10|Invited talk: Discrete Probabilistic Programming from First Principles|Guy Van den Broeck|
|10:10-10:30|Accepted paper 1: Implicitly Learning to Reason in First-Order Logic [Paper]|Vaishak Belle and Brendan Juba|
|11:00-11:20|Accepted paper 2: Logical inference as cost minimization in vector spaces [Paper]|Taisuke Sato and Ryosuke Kojima|
|11:20-11:40|Accepted paper 3: From Ontologies to Learning-Based Programs [Paper]|Quan Guo, Andrzej Uszok and Parisa Kordjamshidi|
|11:40-12:00|Accepted paper 4: Learning Relational Representations with Auto-encoding Logic Programs [Paper]|Sebastijan Dumancic, Tias Guns, Wannes Meert and Hendrik Blockeel|
|12:00-12:20|Accepted paper 5: Efficient Search-Based Weighted Model Integration [Paper]|Zhe Zeng and Guy Van den Broeck (presenter: Paolo Morettin)|
|14:10-15:10|Invited talk: Exploiting Document Intent for Deep Understanding of Text: Case Studies in Law and Molecular Biology|Leora Morgenstern|
|15:10-15:30|Accepted paper 7: Query-driven PAC-learning for reasoning [Paper]|Brendan Juba|
|16:00-16:20|Accepted paper 8: LTL and Beyond: Formal Languages for Goal Specification in Reinforcement Learning [Paper]|Alberto Camacho, Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano and Sheila McIlraith|
Invited speakers
- Guy Van den Broeck, University of California Los Angeles
- Leora Morgenstern, Systems & Technology Research
Title: Discrete Probabilistic Programming from First Principles
Abstract: This talk will build up semantics and probabilistic reasoning algorithms for discrete probabilistic programs from first principles. We begin by explaining simple semantics for imperative probabilistic programs, highlighting how they are different from classical representations of uncertainty in AI, and the possible pitfalls along the way. Then we dive into algorithms for reasoning about such programs and exploiting their structure, either through abstraction of the probabilistic program, or by compilation into a tractable representation for inference.
Bio: Guy Van den Broeck is an Assistant Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning (Statistical Relational Learning, Tractable Learning, Probabilistic Programming), Knowledge Representation and Reasoning (Probabilistic Graphical Models, Lifted Probabilistic Inference, Knowledge Compilation, Probabilistic Databases), and Artificial Intelligence in general. Guy is the recipient of the IJCAI-19 Computers and Thought Award. His work has been recognized with best paper awards from key artificial intelligence venues such as UAI, ILP, and KR, and an outstanding paper honorable mention at AAAI. Guy also serves as Associate Editor for the Journal of Artificial Intelligence Research (JAIR). Website: http://web.cs.ucla.edu/~guyvdb/
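The abstract above mentions reasoning algorithms for discrete probabilistic programs. As a purely illustrative sketch (not taken from the talk), the simplest such algorithm is exact inference by enumerating every possible world of a small program, here two fair coin flips conditioned on an observation; all names below are hypothetical:

```python
from itertools import product

def enumerate_posterior():
    """Exact inference by enumeration for a tiny discrete program:
    a ~ Bernoulli(0.5); b ~ Bernoulli(0.5); observe(a or b); query a."""
    weights = {}
    for a, b in product([True, False], repeat=2):
        p = 0.5 * 0.5                # prior weight of this world
        if a or b:                   # observe(a or b): keep only consistent worlds
            weights[(a, b)] = p
    z = sum(weights.values())        # normalizing constant P(a or b) = 0.75
    # Posterior probability that a is true, given the observation.
    return sum(p for (a, _), p in weights.items() if a) / z

print(enumerate_posterior())         # posterior P(a | a or b) = 2/3
```

Enumeration is exponential in the number of variables; the compilation techniques the talk refers to exist precisely to exploit program structure and avoid this blow-up.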
Title: Exploiting Document Intent for Deep Understanding of Text: Case Studies in Law and Molecular Biology
Abstract: Traditional machine reading systems focus on extracting a set of relations (fixed or flexible; provided or learned) and their arguments from text. Such systems do reasonably well at extracting explicit facts from documents, but they also miss a great deal of information, sometimes the most important information in a document. I argue in this talk that a system able to recognize document intent can extract much more information from both unstructured and semi-structured documents, because understanding document intent surfaces information that is only implicit in the text. This implicit information can help guide relation and entity extraction and support further inferences from extracted triples. I discuss two systems developed using this methodology: (1) the TAILCM system, which extracts populated templates from complex regulatory financial text, and (2) the LTR system, which extracts protein-protein reactions from tables of unpredictable format in the molecular biology literature. I then suggest how this methodology can be extended to other complex documents.
Nowadays, to solve real-world problems in many areas such as cognitive science, biology, finance, physics, and the social sciences, scientists increasingly turn to data-driven solutions. However, current technologies and tools offer cumbersome solutions in the following cases:
- When the data is messy and naturally occurring, that is, when converting the data to vector/tensor representations is not straightforward;
- When we need to exploit the structure of the data beyond flat vectors;
- When we need to exploit domain knowledge in various forms on top of the data;
- When we want to combine various learning paradigms and techniques in the above-mentioned cases.
Conventional programming languages were not designed to address the above-mentioned challenges. The DeLBP workshop aims at highlighting the issues and challenges that arise in building a declarative, data-driven problem-solving paradigm. This paradigm aims at facilitating and simplifying the design and development of intelligent real-world applications that learn from data and reason over knowledge. It highlights the challenges in making machine learning accessible to domain experts and application programmers, particularly in the above-mentioned scenarios. To achieve the DeLBP goals, we need to go beyond tools for classic machine learning, toward new, innovative abstractions, and to enrich existing solutions and frameworks with capabilities for:
- Specifying the requirements of the application at a high level of abstraction;
- Exploiting expert knowledge in learning;
- Dealing with uncertainty in data and knowledge at various layers of the application program;
- Using representations that support flexible relational feature engineering;
- Using representations that support flexible reasoning and structure learning;
- Reusing, combining, and chaining models, and performing flexible inference on complex models or pipelines of decision making;
- Integrating a range of learning and inference algorithms;
- Closing the loop of moving from data to knowledge and exploiting knowledge to generate data;
- And, finally, providing a unified programming environment to design application programs.
Related communities
Over the last few years, the research community has tried to address these problems from multiple perspectives, most notably through approaches based on Probabilistic Programming (PP), Logic Programming (LP), Constrained Conditional Models (CCM), and integrated paradigms such as Probabilistic Logic Programming (PLP) and Statistical Relational Learning (SRL). These paradigms and their associated languages aim at learning over probabilistic structures and exploiting knowledge in learning. Moreover, in recent years several deep learning tools have created easy-to-use abstractions for programming model configurations for deep architectures, which also connects to differentiable programming. We aim at motivating further research toward a unified framework in this area, building on the above-mentioned paradigms as well as related research on first-order query languages, deductive databases (DDB), hybrid optimization, deep architectures for learning from data and knowledge, and differentiable programming in our sense of learning-based programs. We are interested in connecting these ideas under a Declarative Learning-Based Programming paradigm and in investigating the types of languages, representations, and computational models required to support it.
Highlight
Though the theme of this workshop remains as general as in past editions, we will emphasize ideas and opinions on incorporating domain knowledge into statistical and deep learning architectures, and particularly on program representations for expressing data and knowledge in machine learning models.
Topics Summary
The main research questions and topics of interest include the following, in the context of an integrated learning-based paradigm:
- New programming abstractions and modularity levels toward a unified framework for (deep/structured) learning and reasoning.
- Frameworks/computational models to combine learning and reasoning paradigms and exploit accomplishments in AI from various perspectives.
- Flexible use of structured and relational data from heterogeneous resources in learning.
- Data modeling (relational/graph-based databases) issues in such a new integrated framework for learning based on data and knowledge.
- Exploiting knowledge, such as expert knowledge and commonsense knowledge expressed via multiple formalisms, in learning.
- The ability to close the loop of acquiring knowledge from data and data from knowledge, toward life-long learning and reasoning.
- Using declarative domain knowledge to guide the design of learning models, including feature extraction, model selection, dependency structure, and deep model architecture.
- Design and representation of complex learning and inference models.
- The interface for learning-based programming, whether in the form of programming languages, declarations, frameworks, libraries, or graphical user interfaces.
- Storage and retrieval of trained learning models in a flexible way to facilitate incremental learning.
- Related applications in natural language processing, computer vision, bioinformatics, computational biology, multi-agent systems, etc.
- Learning to learn programs and program synthesis, from our specific perspective of learning-based programs.
Accepted papers
- Vaishak Belle and Brendan Juba, Implicitly Learning to Reason in First-Order Logic.
- Brendan Juba, Query-driven PAC-learning for reasoning.
- Taisuke Sato and Ryosuke Kojima, Logical inference as cost minimization in vector spaces.
- Tal Friedman and Guy Van den Broeck, Towards Complex Querying of Probabilistic Classifiers.
- Quan Guo, Andrzej Uszok and Parisa Kordjamshidi, From Ontologies to Learning-Based Programs.
- Alberto Camacho, Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano and Sheila McIlraith, LTL and Beyond: Formal Languages for Goal Specification in Reinforcement Learning.
- Sebastijan Dumancic, Tias Guns, Wannes Meert and Hendrik Blockeel, Learning Relational Representations with Auto-encoding Logic Programs.
- Zhe Zeng and Guy Van den Broeck, Efficient Search-Based Weighted Model Integration.
Submissions
We encourage contributions of either a technical paper (IJCAI style, 6 pages excluding references), a position statement (IJCAI style, 2 pages maximum), or an abstract of published work. IJCAI style files are available here. Please submit via EasyChair, here.
- Submission Deadline: April 30th, extended to May 12th
- Notification: May 20th, extended to June 12th, 2019
- Workshop Day: August 12, 2019
|Tulane University, IHMC|
|University of Washington|
|University of Pennsylvania|
|University of California, Los Angeles|
|Charles River Analytics|
|Vrije Universiteit Brussel (VUB)|
|University College London|
|Uber AI Labs|
|University of California, Santa Cruz|
|Katholieke Universiteit Leuven|
|University of Oxford|
|The University of Queensland|
|Örebro University, University of Bremen|
|University College London|
|Katholieke Universiteit Leuven|