NEWS: From the papers originally submitted to the DeLBP workshop, we selected two best papers:
  1. Implicitly Learning to Reason in First-Order Logic; Vaishak Belle and Brendan Juba
  2. Logical inference as cost minimization in vector spaces; Taisuke Sato and Ryosuke Kojima
The first paper has been accepted for publication at NeurIPS-2019, and the second will be our representative paper for the IJCAI workshops' best-paper selection.


9:00-9:10 Opening remarks. Organizers
9:10-10:10 Invited talk. Discrete Probabilistic Programming from First Principles. Guy Van den Broeck
10:10-10:30 Accepted paper 1. Implicitly Learning to Reason in First-Order Logic. [Paper] Vaishak Belle and Brendan Juba
10:30-11:00 Coffee Break.
11:00-11:20 Accepted paper 2. Logical inference as cost minimization in vector spaces. [Paper] Taisuke Sato and Ryosuke Kojima
11:20-11:40 Accepted paper 3. From Ontologies to Learning-Based Programs. [Paper] Quan Guo, Andrzej Uszok and Parisa Kordjamshidi
11:40-12:00 Accepted paper 4. Learning Relational Representations with Auto-encoding Logic Programs. [Paper] Sebastijan Dumancic, Tias Guns, Wannes Meert and Hendrik Blockeel
12:00-12:20 Accepted paper 5. Efficient Search-Based Weighted Model Integration. [Paper] Zhe Zeng and Guy Van den Broeck. (Presenter: Paolo Morettin)
12:20-14:10 Lunch Break.
14:10-15:10 Invited talk. Exploiting Document Intent for Deep Understanding of Text: Case Studies in Law and Molecular Biology. Leora Morgenstern
15:10-15:30 Accepted paper 7. Query-driven PAC-learning for reasoning. [Paper] Brendan Juba
15:30-16:00 Coffee Break.
16:00-16:20 Accepted paper 8. LTL and Beyond: Formal Languages for Goal Specification in Reinforcement Learning. [Paper] Alberto Camacho, Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano and Sheila McIlraith
16:20-17:20 Panel discussion.

Invited Speakers


Nowadays, to solve real-world problems in many areas such as the cognitive sciences, biology, finance, physics, and the social sciences, scientists turn to data-driven solutions. However, current technologies and tools offer only cumbersome solutions in the following cases:

  • when the data is messy and naturally occurring, so that converting it to vector/tensor representations is not straightforward;
  • when we need to exploit the structure of the data beyond flat vectors;
  • when we need to exploit domain knowledge, in various forms, on top of the data;
  • when we want to exploit various learning paradigms and techniques in the above-mentioned cases.

Conventional programming languages were not primarily designed to help with the above-mentioned challenges. The DeLBP workshop aims at highlighting the issues and challenges that arise in creating a declarative, data-driven problem-solving paradigm. This paradigm aims at facilitating and simplifying the design and development of intelligent real-world applications that learn from data and reason over knowledge. It highlights the challenges in making machine learning accessible to domain experts and application programmers, particularly in the above-mentioned scenarios. To achieve the DeLBP goals, we need to go beyond designing tools for classic machine learning, toward new and innovative abstractions, and to enrich existing solutions and frameworks with the following capabilities:

  • specifying the requirements of the application at a high level of abstraction;
  • exploiting expert knowledge in learning;
  • dealing with uncertainty in data and knowledge at various layers of the application program;
  • using representations that support flexible relational feature engineering;
  • using representations that support flexible reasoning and structure learning;
  • reusing, combining, and chaining models, and performing flexible inference on complex models or pipelines of decision making;
  • integrating a range of learning and inference algorithms;
  • closing the loop of moving from data to knowledge and exploiting knowledge to generate data;
  • and, finally, having a unified programming environment in which to design application programs.

Related communities

Over the last few years the research community has tried to address these problems from multiple perspectives, most notably with approaches based on Probabilistic Programming (PP), Logic Programming (LP), Constrained Conditional Models (CCM), and integrated paradigms such as Probabilistic Logic Programming (PLP) and Statistical Relational Learning (SRL). These paradigms and their associated languages aim at learning over probabilistic structures and at exploiting knowledge in learning. Moreover, in recent years several deep learning tools have introduced easy-to-use abstractions for configuring deep architectures, which also connects them to differentiable programming. We aim at motivating further research toward a unified framework in this area, building on the key paradigms above as well as on related research such as first-order query languages, deductive databases (DDB), hybrid optimization, deep architectures for learning from data and knowledge, and differentiable programming in our sense of learning-based programs. We are interested in connecting these ideas under the Declarative Learning-Based Programming paradigm and in investigating the types of languages, representations, and computational models required to support it.
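These paradigms share no single syntax, but the core probabilistic-programming idea behind several of them can be sketched in a few lines: a model is an ordinary program with random choices, and inference conditions that program on observed evidence. A minimal sketch in plain Python (the toy model, its probabilities, and the function names are invented for illustration; real PP languages provide far more efficient inference than rejection sampling):

```python
import random

random.seed(0)

def model():
    # Generative knowledge written as a program:
    # a document is about "sports" with prior probability 0.3.
    topic = "sports" if random.random() < 0.3 else "politics"
    # Likelihood: the word "goal" appears more often in sports texts.
    mentions_goal = random.random() < (0.8 if topic == "sports" else 0.1)
    return topic, mentions_goal

def infer(evidence, n=100_000):
    # Rejection sampling: keep only samples consistent with the observation,
    # then estimate the posterior from the surviving samples.
    kept = [t for t, g in (model() for _ in range(n)) if g == evidence]
    return kept.count("sports") / len(kept)

# Posterior probability that the topic is sports, given "goal" was observed.
p = infer(evidence=True)
print(round(p, 2))
```

The same separation of model (the generative program) from query (the conditioning) is what PLP and SRL systems offer declaratively, with logical structure in place of hand-written control flow.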


Though the theme of this workshop remains as broad as in past editions, we will emphasize ideas and opinions on incorporating domain knowledge into statistical and deep learning architectures, and in particular on program representations for expressing the data and knowledge used by machine learning models.

Topics Summary

The main research questions and topics of interest include the following, in the context of an integrated learning-based paradigm:

Accepted Papers


We encourage contributions of either a technical paper (IJCAI style, 6 pages excluding references), a position statement (IJCAI style, 2 pages maximum), or an abstract of previously published work. IJCAI style files are available here. Please submit via EasyChair, here.

Important Dates

Organizing Committee

  • Parisa Kordjamshidi, Tulane University, IHMC
  • Hannaneh Hajishirzi, University of Washington
  • Quan Guo, Tulane University
  • Nikolaos Vasiloglou II, RelationalAI
  • Kristian Kersting, TU Darmstadt
  • Dan Roth, University of Pennsylvania


Program Committee

  • Guy Van den Broeck, University of California, Los Angeles
  • Avi Pfeffer, Charles River Analytics
  • Rodrigo de Salvo Braz, SRI International
  • Tias Guns, Vrije Universiteit Brussel (VUB)
  • Christopher Ré, Stanford University
  • Pasquale Minervini, University College London
  • Eli Bingham, Uber AI Labs
  • Alexander Ratner, Stanford University
  • Golnoosh Farnadi, University of California, Santa Cruz
  • Behrouz Babaki, Katholieke Universiteit Leuven
  • Mark Kaminski, University of Oxford
  • Aneesha Bakharia, The University of Queensland
  • Nantia Makrynioti, AUEB
  • Dan Goldwasser, Purdue University
  • Mehul Bhatt, Örebro University, University of Bremen
  • Stephen Bach, Brown University
  • Matko Bošnjak, University College London
  • Sebastijan Dumancic, Katholieke Universiteit Leuven