
Bayesian Markov Logic Networks - Bayesian Inference for Statistical Relational Learning

Nedbal, Radim; Serafini, Luciano
2018

Abstract

One of the most important foundational challenges of statistical relational learning is the development of a uniform framework in which learning and logical reasoning are seamlessly integrated. State-of-the-art approaches propose to modify well-known machine learning methods based on parameter optimization (e.g., neural networks and graphical models) in order to take into account structural knowledge expressed by logical constraints. In this paper, we follow an alternative direction, considering the Bayesian approach to machine learning. In particular, given partial knowledge of a hybrid domain (i.e., a domain that contains both relational structure and continuous features) in the form of a set 𝒦 of axioms and a stochastic (in)dependence hypothesis ℱ, both encoded in a first-order language ℒ, we propose to model it by a probability distribution function (PDF) p(ω ∣ 𝒦, ℱ) over the ℒ-interpretations ω. The stochastic (in)dependence hypothesis ℱ is represented as a Bayesian Markov Logic Network w.r.t. a parametric undirected graph, interpreted as the PDF. We propose to approximate p(ω ∣ 𝒦, ℱ) by variational inference and show that such an approximation is possible if and only if ℱ satisfies a property called orthogonality. This property can also be achieved by extending ℒ and adjusting 𝒦 and ℱ.
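The paper's construction is not reproduced in this record. As a purely illustrative sketch of the kind of variational approximation the abstract mentions, the following toy example applies mean-field coordinate ascent to a small ground Markov-network-style distribution over binary atoms (the weights, function names, and factorized-Bernoulli form are hypothetical choices for this sketch, not taken from the paper):

```python
import itertools
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def mean_field(unary, pairwise, iters=100):
    """Mean-field approximation q(x) = prod_i Bernoulli(mu_i) of an
    unnormalized binary distribution p(x) ∝ exp(sum_i unary[i]*x_i
    + sum_{i<j} pairwise[i,j]*x_i*x_j), via coordinate ascent:
    mu_i <- sigmoid(unary[i] + sum_j pairwise[i,j]*mu_j)."""
    n = len(unary)
    mu = [0.5] * n
    for _ in range(iters):
        for i in range(n):
            s = unary[i] + sum(
                pairwise.get((min(i, j), max(i, j)), 0.0) * mu[j]
                for j in range(n) if j != i
            )
            mu[i] = sigmoid(s)
    return mu


def exact_marginals(unary, pairwise):
    """Exact marginals P(x_i = 1) by brute-force enumeration,
    feasible only for tiny models; used to check the approximation."""
    n = len(unary)
    states = list(itertools.product([0, 1], repeat=n))
    weights = [
        math.exp(
            sum(unary[i] * x[i] for i in range(n))
            + sum(w * x[i] * x[j] for (i, j), w in pairwise.items())
        )
        for x in states
    ]
    z = sum(weights)
    return [
        sum(w for w, x in zip(weights, states) if x[i]) / z
        for i in range(n)
    ]


# Two ground atoms with one pairwise (dependence) weight:
mu = mean_field([0.5, -0.3], {(0, 1): 0.2})
exact = exact_marginals([0.5, -0.3], {(0, 1): 0.2})
```

With such weak couplings the factorized approximation tracks the exact marginals closely; the orthogonality condition studied in the paper characterizes when a variational approximation of this general flavor is admissible at all.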
ISBN: 978-3-030-03840-3
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11582/316563