Probabilistic Graphical Models with message propagation are a powerful paradigm for building robust models of real phenomena.

Basic methods

A Bayesian directed graph represents our knowledge about the dependence among a set of random variables: nodes represent the variables and arcs the conditional probability functions. A Bayesian graph can be transformed into a Factor Graph (FG), in which variables are branches and blocks are the probability functions. Furthermore, the FG can be transformed into an interconnection of only SISO (Single-Input Single-Output) blocks and diverters, i.e. an FG in reduced normal form (FG-rn). Such architectures are very flexible for handling probability message propagation: the same Bayesian graph can be used for inference, generation and correction, with probability messages traveling bi-directionally in the system. The basic sum-product operations are rigorous translations of Bayes' theorem and marginalization.
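As a minimal illustration (with invented numbers, not code from the papers below), a single SISO block stores a conditional probability matrix; the forward message is a marginalization, and the backward message is an unnormalized application of Bayes' theorem:

```python
import numpy as np

# Toy SISO block: row x of P holds the conditional distribution P(y | x).
P = np.array([[0.9, 0.1],   # P(y | x = 0)
              [0.2, 0.8]])  # P(y | x = 1)

def forward(msg_x):
    # Marginalization: P(y) = sum_x P(y|x) P(x)
    return msg_x @ P

def backward(msg_y):
    # Bayes' theorem (unnormalized): likelihood message toward x
    return P @ msg_y

prior_x = np.array([0.5, 0.5])
evidence_y = np.array([0.0, 1.0])        # y is observed to be 1

posterior_x = prior_x * backward(evidence_y)
posterior_x /= posterior_x.sum()         # normalization at the diverter
```

Combining the backward message with the prior is exactly the product step of the sum-product algorithm; messages can travel in either direction through the same block.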

 

  • Giovanni Di Gennaro, F. A. N. Palmieri, Amedeo Buonanno, "Optimized Realization of Bayesian Networks in Reduced Normal Form using Latent Variable Model," submitted for journal publication, available on arXiv:1901.06201, stat.ML, Jan 18 2019.
  • Giovanni Di Gennaro, F. A. N. Palmieri, Amedeo Buonanno, "Computational Optimization for Normal Form Realization Of Bayesian Model," Proceedings of 2018 workshop on Machine Learning for Signal Processing (MLSP2018), Aalborg, Denmark, 17-20 Sept. 2018, on IEEE Xplore since 01 Nov 2018, DOI: 10.1109/MLSP.2018.8517090, ISSN: 1551-2541
  • F. A. N. Palmieri, "A Comparison of Algorithms for Learning Hidden Variables in Bayesian Factor Graphs in Reduced Normal Form," in IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 11, pp. 2242-2255, Nov. 2016; doi: 10.1109/TNNLS.2015.2477379 (Date of Publication: Sept 23 2015).
  • Buonanno A., Palmieri F.A.N., "Simulink Implementation of Belief Propagation in Normal Factor Graphs". In: Bassis S., Esposito A., Morabito F. (eds) Advances in Neural Networks: Computational and Theoretical Issues. Smart Innovation, Systems and Technologies, vol 37. Springer, Cham (WIRN 2014); DOI: https://doi.org/10.1007/978-3-319-18164-6_2; Publisher Name: Springer, Cham; Print ISBN: 978-3-319-18163-9; Online ISBN: 978-3-319-18164-6.
  • F. A. N. Palmieri, "A Comparison of Algorithms for Learning Hidden Variables in Normal Graphs," submitted for journal publication, available on-line at arXiv: 1308.5576v1 [stat.ML] 26 Aug 2013.
  • F. Palmieri, "Notes on Cutset Conditioning on Factor Graphs with Cycles," IOS Press in the KBIES book series, Neural Nets WIRN09 - Proceedings of the 19th Italian Workshop on Neural Nets, Vietri sul Mare, Salerno, Italy, May 28–30 2009, Edited by Bruno Apolloni, Simone Bassis, Carlo F. Morabito, ISBN 978-1-60750-072-8.
  • F. Palmieri, D. Mattera, P. Salvo Rossi, G. Romano, "La Costruzione di Memorie Associative su Grafi Fattoriali," in Modelli, sistemi e applicazioni di Vita Artificiale e Computazione Evolutiva, WIVACE 2009, a cura di Orazio Miglino, Michela Ponticorvo, Angelo Rega, Franco Rubinacci, Atti del VI Workshop Italiano di Vita Artificiale e Computazione Evolutiva. Napoli 23-25 Novembre 2009, Fridericiana Editrice Universitaria, ISBN: 978-88-8338-091-4.
  • F. Palmieri, "Notes on Factor Graphs," New Directions in Neural Networks with IOS Press in the KBIES book series, Proceedings of WIRN 2008, Vietri sul mare, June 2008; ISBN 978-1-58603-984-4.
  • Francesco A. N. Palmieri, "Learning Non-Linear Functions with Factor Graphs," IEEE Transactions on Signal Processing, Vol.61, N. 17, pp. 4360 - 4371, 2013, T-SP-14461-2012.R2; DOI: 10.1109/TSP.2013.2270463
  • Francesco A. N. Palmieri and Alberto Cavallo, "Probability Learning and Soft Quantization in Bayesian Factor Graphs," in Neural Nets and Surroundings, Proc. of 22nd Italian Workshop on Neural Nets, WIRN 2012, May 17-19, Vietri sul Mare, Salerno, Italy, pp 3-10; DOI: 10.1007/978-3-642-35467-0_1; ISBN: 978-3-642-35466-3; Springer; Series: Smart Innovation, Systems and Technologies, Vol. 19, Apolloni, B.; Bassis, S.; Esposito, A.; Morabito, F.C. (Eds.);
  • F. A. N. Palmieri, D. Ciuonzo, D. Mattera, G. Romano, P. Salvo Rossi, "From Examples to Bayesian Inference," 21st Workshop on Neural Networks, Vietri sul Mare, Salerno, Italy, June 3-5, 2011, IOS Press; ISBN: 978-1-60750-971-4.
  • F. Palmieri, G. Romano, P. Salvo Rossi, D. Mattera, "Building a Bayesian Factor Tree From Examples," Proceedings of the 2nd International Workshop on Cognitive Information Processing, 14-16 June, 2010, Elba Island (Tuscany) - Italy; 2150-4938; ISBN: 978-1-4244-6457-9; DOI: 10.1109/CIP.2010.5604232.

 

Context Analysis

Bayesian networks can also be learned from examples. We have contributed basic theory and applications to this very promising paradigm for the design of artificial intelligence systems.
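For a fully observed discrete network, the simplest form of learning from examples is maximum-likelihood estimation of each conditional probability table by counting; a hypothetical sketch (not the algorithms of the references, which also handle hidden variables):

```python
import numpy as np

# Learn P(child | parent) from observed (parent, child) pairs by counting.
examples = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0)]
n_parent, n_child = 2, 2

counts = np.zeros((n_parent, n_child))
for p, c in examples:
    counts[p, c] += 1

# Each row of the CPT is the normalized count vector for that parent value.
cpt = counts / counts.sum(axis=1, keepdims=True)
```

When some variables are hidden, the counts are replaced by expected counts driven by the propagated messages.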

 

  • Buonanno A., Iadicicco P., Di Gennaro G., Palmieri F.A.N., "Context Analysis Using a Bayesian Normal Graph." Chapter in: Esposito A., Faundez-Zanuy M., Morabito F., Pasero E. (eds) Neural Advances in Processing Nonlinear Dynamic Signals. WIRN 2017. Smart Innovation, Systems and Technologies, vol 102. Springer, Cham; First Online: 22 July 2018; DOI https://doi.org/10.1007/978-3-319-95098-3_8; Print ISBN 978-3-319-95097-6; Online ISBN 978-3-319-95098-3; Also presented at the 27th Workshop on Neural Networks, WIRN 2017, Vietri sul Mare, June 14-16, 2017.

 

  • A. Buonanno, L. di Grazia and F. A. N. Palmieri, "Bayesian Clustering on Images with Factor Graphs in Reduced Normal Form," in S. Bassis, A. Esposito, F.C. Morabito, and E. Pasero, Advances in Neural Networks, (WIRN 2015), vol. 54, pp. 57-65, ISBN: 978-331933746-3; DOI: 10.1007/978-3-319-33747-0_6; Springer Science 2016.

 

Discrete Independent Component Analysis (DICA)

Using this paradigm, several inference tasks (classification, pattern completion, pattern correction) can be performed in a unified framework: evidence is injected into the network at any arbitrary point, messages are allowed to propagate, and the results of the propagation are collected.
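On a chain X → Y → Z, for instance, the same two conditional matrices answer both a prediction query (evidence at X) and a correction/completion query (evidence at Z); the numbers below are hypothetical:

```python
import numpy as np

A = np.array([[0.8, 0.2],   # P(y | x)
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],   # P(z | y)
              [0.4, 0.6]])
prior_x = np.array([0.6, 0.4])

def predict_z():
    # Forward propagation of messages: x -> y -> z
    return (prior_x @ A) @ B

def posterior_x_given_z(z_evidence):
    # Backward propagation: z -> y -> x, combined with the prior on x
    back = A @ (B @ z_evidence)
    post = prior_x * back
    return post / post.sum()
```

The same code, with evidence injected at different points, covers both query types; no separate inference routines are needed.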

 

  • F. A. N. Palmieri and A. Buonanno, "Discrete independent component analysis (DICA) with belief propagation," 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA, Sept. 17-20, 2015, pp. 1-6; doi: 10.1109/MLSP.2015.7324364. Also available on ArXiv http://arxiv.org/abs/1505.06814 since May 2015.

 

Multi-Layer Bayesian Networks

We have applied Latent Variable Models (LVMs) to a large number of applications, such as clustering of images, scene understanding, discrete independent component analysis and text correction. Moreover, the LVM can be seen as a basic building block for Deep Learning architectures.
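A single LVM block ties a hidden variable S to its visible children; a minimal sketch with invented numbers (two children, conditionally independent given S):

```python
import numpy as np

P_s  = np.array([0.5, 0.5])    # prior P(s)
P_x1 = np.array([[0.9, 0.1],   # P(x1 | s)
                 [0.2, 0.8]])
P_x2 = np.array([[0.7, 0.3],   # P(x2 | s)
                 [0.1, 0.9]])

def posterior_s(ev_x1, ev_x2):
    # Product of the incoming backward messages at the diverter for S
    post = P_s * (P_x1 @ ev_x1) * (P_x2 @ ev_x2)
    return post / post.sum()
```

Stacking such blocks, with the posterior of one layer feeding the layer above, yields the multi-layer architectures studied in the references below.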

 

  • A. Buonanno and F. A. N. Palmieri, "Two-dimensional multi-layer Factor Graphs in Reduced Normal Form," 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, 2015, pp. 1-6; doi: 10.1109/IJCNN.2015.7280415
  • A. Buonanno, F. A. N. Palmieri, "Towards Building Deep Networks with Bayesian Factor Graphs," submitted for journal publication, Feb 2015, available at arXiv:1502.04492 [cs.CV]
  • F. A. N. Palmieri and A. Buonanno, "Belief Propagation and Learning in Convolution Multi-layer Factor Graphs," Proceedings of 4th International Workshop on Cognitive Information Processing, CIP2014, May 26-28, 2014, Copenhagen, Denmark. DOI: 10.1109/CIP.2014.6844500.