The main goal of the Declarative Learning Based Programming (DeLBP) workshop is to investigate the issues that arise when designing and using programming languages that support learning from data and knowledge.
DeLBP aims to facilitate and simplify the design and development of intelligent real-world applications that use machine learning and reasoning, by addressing the following commonly observed challenges:
- Interacting with messy, naturally occurring data;
- Specifying the requirements of the application at a high level of abstraction;
- Dealing with uncertainty in data and knowledge across the various layers of the application program;
- Using representations that support flexible relational feature engineering;
- Using representations that support flexible reasoning and structure learning;
- Integrating a range of learning and inference algorithms;
- Addressing all of the above in one unified programming environment.
Conventional programming languages offer no help to application programmers who attempt to design and develop applications that make use of real-world data and reason about it in ways that involve learning interdependent concepts from data, incorporating existing models, and reasoning about existing and trained models and their parametrization. Over the last few years the research community has tried to address these problems from multiple perspectives, most notably through approaches based on probabilistic programming, logic programming, and integrated paradigms. The goal of this workshop is to present and discuss current related research and the ways in which these challenges have been addressed. We aim to motivate the need for further research toward a unified framework in this area, building on the key existing paradigms: Probabilistic Programming (PP); Logic Programming (LP); Probabilistic Logic Programming (PLP); first-order query languages, database management systems (DBMS), and deductive databases (DDB); and Statistical Relational Learning (SRL) and related languages; and to connect these to the ideas of Learning Based Programming.
We aim to discuss and investigate the kinds of languages and representations that facilitate modeling complex learning models, probabilistic or not, and that provide the ability to combine, chain, and perform flexible inference with existing models while exploiting first-order background knowledge.
- Data modeling (relational or graph-based)
- First-order knowledge representation
- Relational feature engineering
- Design and representation of complex learning and inference models
- Probabilistic programming
- Probabilistic logical learning and reasoning
- Declarative languages
- Automation of hyper-parameter tuning
- Applications in natural language processing, computer vision, and bioinformatics
| Time | Session | Speakers |
|------|---------|----------|
| 9:00-9:30 | Introductory Remarks | Dan Roth |
| 9:30-10:30 | Invited Talk: The Democratization of Optimization | Kristian Kersting |
| 11:00-12:00 | Invited Talk: Probabilistic Soft Logic: A Scalable, Declarative Approach to Structured Prediction from Noisy Data | Lise Getoor |
| 12:00-12:20 | Paper 1: Constructive Geometric Constraint Solving as a General Framework for KR-Based Commonsense Spatial Reasoning | Carl Schultz and Mehul Bhatt |
| 2:00-2:20 | Paper 2: JudgeD: a Probabilistic Datalog with Dependencies | Brend Wanders, Maurice van Keulen and Jan Flokstra |
| 2:20-2:40 | Paper 3: On declarative modeling of structured pattern mining | Tias Guns, Sergey Paramonov and Benjamin Negrevergne |
| 2:40-3:00 | Paper 4: Learning Constraints and Optimization Criteria | Samuel Kolb |
| 3:00-3:20 | Paper 5: RELOOP: A Python-Embedded Declarative Language for Relational Optimization | Martin Mladenov, Danny Heinrich, Leonard Kleinhans, Felix Gonsior and Kristian Kersting |
| 3:20-4:10 | Break + Demos | paper1-demo (Carl Schultz); paper2-demo (Brend Wanders, JudgeD: the Mystery of the Phantom Flame); paper5-demo (Martin Mladenov, RELOOP); Wolfe-demo (Sameer Singh); Saul-demo (Parisa Kordjamshidi) |
- Kristian Kersting, Technical University of Dortmund
- Lise Getoor, University of California Santa Cruz
Title: The Democratization of Optimization
ABSTRACT. Democratizing data does not mean dropping a huge spreadsheet on everyone's desk and saying, "good luck"; it means making data mining, machine learning, and AI methods usable in such a way that people can easily instruct machines to have a "look" at the data and help them understand and act on it. A promising approach is the declarative "Model + Solver" paradigm that was, and is, behind many revolutions in computing in general: instead of outlining how a solution should be computed, we specify what the problem is using some modeling language and solve it using highly optimized solvers. Analyzing data, however, involves more than just the optimization of an objective function subject to constraints. Before optimization can take place, a large effort is needed not only to formulate the model but also to put it in the right form. We must often build models before we know what individuals are in the domain and, therefore, before we know what variables and constraints exist. Hence modeling should facilitate the formulation of abstract, general knowledge. This concerns not only the syntactic form of the model but also the abilities of the solvers; the efficiency with which the problem can be solved is to a large extent determined by the way the model is formalized. In this talk, I shall review our recent efforts on relational optimization. It can reveal the rich logical structure underlying many AI and data mining problems at both the formulation and the optimization level. Ultimately, it will make optimization several times easier and more powerful than current approaches and is a step towards achieving the grand challenge of automated programming as sketched by Jim Gray in his Turing Award Lecture. Joint work with Martin Mladenov and Pavel Tokmakov, and based on previous joint works together with Babak Ahmadi, Amir Globerson, Martin Grohe, Fabian Hadiji, Marion Neumann, Aziz Erkal Selman, and many more.
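To make the "Model + Solver" separation concrete, here is a minimal, self-contained sketch (illustrative only; not RELOOP or any system presented at the workshop). The model states *what* is wanted, an objective and constraints over named variables, and a generic solver decides *how* to find the answer; the exhaustive search below stands in for the highly optimized solvers the abstract refers to.

```python
from itertools import product

def solve(variables, domain, constraints, objective):
    """Generic solver: best assignment over `domain` satisfying all constraints.

    The caller only declares the problem; the search strategy (here, brute
    force over a small finite domain) is entirely the solver's concern.
    """
    best = None
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            if best is None or objective(assignment) > objective(best):
                best = assignment
    return best

# Declarative model: maximize 3x + 2y subject to x + y <= 10, with x, y in 0..9.
model = solve(
    variables=["x", "y"],
    domain=range(0, 10),
    constraints=[lambda a: a["x"] + a["y"] <= 10],
    objective=lambda a: 3 * a["x"] + 2 * a["y"],
)
print(model)  # -> {'x': 9, 'y': 1}
```

Swapping the brute-force loop for an LP or ILP solver would change the "how" without touching the model, which is the point of the paradigm.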
BIOGRAPHY. Kristian Kersting is an Associate Professor at TU Dortmund University, Germany. His research interests are in machine learning, artificial intelligence, and data science. He received a PhD in Computer Science from the University of Freiburg, Germany, in 2006. After a PostDoc at MIT, he was with the Fraunhofer IAIS and the University of Bonn. He is the author or co-author of over 130 technical publications. He is a winner of the ECCAI Dissertation Award 2006, the ECML-2006 Best Student Paper Award, the GIS-2011 Best Poster Award, the AAAI-2013 Outstanding PC Member Award, and the AIIDE-2015 Best Paper Award. He is an action editor of the Machine Learning journal, the Data Mining and Knowledge Discovery journal, the Artificial Intelligence journal, and the Journal of Artificial Intelligence Research. He was program co-chair of ECML PKDD-2013, SRL-2009, and MLG-2007, co-founded the international workshop on Statistical Relational AI (StarAI), and has served on numerous program committees.
Title: Probabilistic Soft Logic: A Scalable, Declarative Approach to Structured Prediction from Noisy Data
ABSTRACT. A fundamental challenge in developing impactful artificial intelligence technologies is balancing the ability to model rich, structured domains with the ability to scale to big data. Many important problem areas are both richly structured and large scale, including social and biological networks, knowledge graphs and the Web, computer vision, and natural language. In this talk, I will introduce Probabilistic Soft Logic (PSL), a declarative probabilistic programming language that is able to both capture rich structure and scale to big data. The mathematical framework upon which PSL is based, hinge-loss Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model that generalizes three different approaches to inference. The three views come from the randomized algorithms community (randomized rounding), the graphical models community (local consistency relaxations), and the fuzzy logic community (Lukasiewicz t-norms). I will show that all three views lead to the same inference objective, and that this inference objective is convex, leading to highly efficient inference algorithms. I will describe extensions to learning that support latent variables, which enable HL-MRFs to capture even richer dependencies. Along the way, I will describe results in a variety of different domains. Joint work with Stephen Bach, Bert Huang, Matthias Broecheler, and other members of the LINQs research group.
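The fuzzy-logic view mentioned above can be sketched in a few lines. The following is a simplified illustration of the Lukasiewicz relaxation behind PSL's hinge-loss potentials, not the PSL implementation itself: truth values are continuous in [0, 1], and each ground rule `body -> head` contributes a weighted penalty proportional to its distance to satisfaction, `max(0, body - head)`. The Friends/Votes rule is a hypothetical example grounding.

```python
def luk_and(a, b):
    """Lukasiewicz t-norm: soft conjunction of two truth values in [0, 1]."""
    return max(0.0, a + b - 1.0)

def distance_to_satisfaction(body, head):
    """How far the ground rule body -> head is from being fully satisfied."""
    return max(0.0, body - head)

def hinge_loss(weight, body, head, squared=False):
    """Weighted hinge-loss potential for one ground rule (exponent 1 or 2)."""
    d = distance_to_satisfaction(body, head)
    return weight * (d * d if squared else d)

# Hypothetical ground rule: Friends(A, B) & Votes(A, P) -> Votes(B, P)
friends_ab, votes_ap, votes_bp = 0.9, 0.8, 0.3
body = luk_and(friends_ab, votes_ap)      # soft truth of the rule body, 0.7
penalty = hinge_loss(2.0, body, votes_bp)  # 2.0 * max(0, 0.7 - 0.3), approx. 0.8
```

Because each potential is a (possibly squared) hinge function of the variables, MAP inference over many such ground rules is a convex optimization problem, which is the source of the scalability claim in the abstract.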
BIOGRAPHY. Lise Getoor is a Professor in the Computer Science Department at UC Santa Cruz. Her research areas include machine learning, data integration, and reasoning under uncertainty, with an emphasis on graph and network data. She is an AAAI Fellow, serves on the Computing Research Association and International Machine Learning Society boards, was co-chair of ICML 2011, and is a recipient of an NSF CAREER Award and ten best paper and best student paper awards. She received her PhD from Stanford University, her MS from UC Berkeley, and her BS from UC Santa Barbara, and was a Professor at the University of Maryland, College Park from 2001 to 2013.
- Martin Mladenov, Danny Heinrich, Leonard Kleinhans, Felix Gonsior and Kristian Kersting. RELOOP: A Python-Embedded Declarative Language for Relational Optimization.
- Tias Guns, Sergey Paramonov and Benjamin Negrevergne. On declarative modeling of structured pattern mining.
- Brend Wanders, Maurice van Keulen and Jan Flokstra. JudgeD: a Probabilistic Datalog with Dependencies.
- Samuel Kolb. Learning Constraints and Optimization Criteria.
- Carl Schultz and Mehul Bhatt. Constructive Geometric Constraint Solving as a General Framework for KR-Based Commonsense Spatial Reasoning.
- Parisa Kordjamshidi (firstname.lastname@example.org), University of Illinois at Urbana-Champaign
- Dan Roth (email@example.com), University of Illinois at Urbana-Champaign
- Avi Pfeffer (firstname.lastname@example.org), Charles River Analytics
- Guy Van den Broeck (email@example.com), University of California, Los Angeles
- Sameer Singh (firstname.lastname@example.org), University of Washington, Seattle
- Vivek Srikumar (email@example.com), University of Utah
- Rodrigo de Salvo Braz (firstname.lastname@example.org), SRI International