{"id":422,"date":"2015-03-23T13:52:03","date_gmt":"2015-03-23T12:52:03","guid":{"rendered":"http:\/\/aixia2015.unife.it\/?page_id=422"},"modified":"2019-07-04T11:57:01","modified_gmt":"2019-07-04T10:57:01","slug":"mldm","status":"publish","type":"page","link":"https:\/\/aixia2015.unife.it\/events\/mldm\/","title":{"rendered":"4th Italian Workshop on Machine Learning and Data Mining (MLDM.it 2015)"},"content":{"rendered":"
<p>Following the success of the first three editions of the Italian Workshop on Machine Learning and Data Mining, held at the AI*IA Symposia and the AI*IA Conferences on Artificial Intelligence, this workshop aims to bring together researchers actively involved in machine learning, data mining, pattern recognition, and knowledge discovery. The meeting is organized by the AI*IA Working Group on Machine Learning and Data Mining, whose goal is to promote Italian scientific activity in machine learning and data mining and to foster collaboration between research groups.<\/p>\n<p><strong>Workshop Day: September 22nd, 2015<\/strong><\/p>\n<p>MLDM.it will be held in the <strong>Main Room<\/strong> (First Floor) of <em>Palazzo Trotti Mosti<\/em> (Trotti Mosti Palace), Corso Ercole I d'Este, 37 (see the Building Plan).<\/p>\n<p>The program is downloadable here.<\/p>\n
<p>During the workshop, researchers will have the opportunity to present their recent results and to discuss problems and challenges relevant to the community. Following the tradition of MLDM.it, presentations are by invitation. Attendance at MLDM.it is open to all AI*IA Conference participants.<\/p>\n<h2>Program<\/h2>\n
<table>\n<thead>\n<tr>\n<th><\/th>\n<th><strong>22 September 2015<\/strong><\/th>\n<th><\/th>\n<\/tr>\n<\/thead>\n<tbody>\n
<tr>\n<td>8.50-9.00<\/td>\n<td><em>Opening<\/em><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td>9.00-9.10<\/td>\n<td><em>Welcome<\/em><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td><strong>Section I<\/strong><\/td>\n<td><strong>Complex Data Analysis<\/strong><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td>9.10-9.30<\/td>\n<td><em>Mining Time-Adaptive Interpolative Clusters: Linking Interpolation to Summarization in Geophysical Data Streams<\/em><\/td>\n<td>Annalisa Appice, Donato Malerba<\/td>\n<\/tr>\n
<tr>\n<td>9.30-9.50<\/td>\n<td><em>Visual Detection of small groups and leaders in crowd through structured learning<\/em><\/td>\n<td>Simone Calderara<\/td>\n<\/tr>\n
<tr>\n<td>9.50-10.10<\/td>\n<td><em>Real-Time Topic Detection in Twitter<\/em><\/td>\n<td>Salvatore Gaglio, Giuseppe Lo Re, Marco Morana<\/td>\n<\/tr>\n
<tr>\n<td>10.10-10.30<\/td>\n<td><em>Choice of Training-Validation partitions impacts predictive performance<\/em><\/td>\n<td>Alessandro Zandon\u00e0, Marco Chierici, Giuseppe Jurman, Cesare Furlanello<\/td>\n<\/tr>\n
<tr>\n<td><strong>Coffee break<\/strong><\/td>\n<td><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td><strong>Section II<\/strong><\/td>\n<td><strong>Deep Learning<\/strong><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td>11.00-11.20<\/td>\n<td><em>Two approaches for Recurrent Neural Networks Pre-training<\/em><\/td>\n<td>Luca Pasa, Alessandro Sperduti, Alberto Testolin<\/td>\n<\/tr>\n
<tr>\n<td>11.20-11.40<\/td>\n<td><em>On the Complexity of Neural Network Classifiers: A Comparison between Shallow and Deep Architectures<\/em><\/td>\n<td>Monica Bianchini, Franco Scarselli<\/td>\n<\/tr>\n
<tr>\n<td>11.40-12.00<\/td>\n<td><em>Deep Reservoir Computing<\/em><\/td>\n<td>Claudio Gallicchio, Alessio Micheli<\/td>\n<\/tr>\n
<tr>\n<td>12.00-12.20<\/td>\n<td><em>Tweaking Sum-Product Network Structure Learning<\/em><\/td>\n<td>Antonio Vergari, Nicola Di Mauro, Floriana Esposito<\/td>\n<\/tr>\n
<tr>\n<td><strong>Lunch break<\/strong><\/td>\n<td><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td><strong>Section III<\/strong><\/td>\n<td><strong>Learning Models and Analysis (I)<\/strong><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td>14.30-14.50<\/td>\n<td><em>Distributed Probabilistic Logic Learning<\/em><\/td>\n<td>Elena Bellodi, Giuseppe Cota, Evelina Lamma, Fabrizio Riguzzi, Riccardo Zese<\/td>\n<\/tr>\n
<tr>\n<td>14.50-15.10<\/td>\n<td><em>Truly scalable latent locally linear SVMs<\/em><\/td>\n<td>Ilja Kuzborskij, Barbara Caputo<\/td>\n<\/tr>\n
<tr>\n<td>15.10-15.30<\/td>\n<td><em>A model of online learning as a Linear Quadratic Gaussian (LQG) optimal control problem with random matrices<\/em><\/td>\n<td>Giorgio Gnecco, Alberto Bemporad, Marco Gori, Rita Morisi, Marcello Sanguineti<\/td>\n<\/tr>\n
<tr>\n<td>15.30-15.50<\/td>\n<td><em>Grammatical Inference for Structural Knowledge Extraction<\/em><\/td>\n<td>Pietro Cottone, Salvatore Gaglio, Giuseppe Lo Re, Marco Ortolani<\/td>\n<\/tr>\n
<tr>\n<td><strong>Coffee break<\/strong><\/td>\n<td><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td><strong>Section IV<\/strong><\/td>\n<td><strong>Learning Models and Analysis (II)<\/strong><\/td>\n<td><\/td>\n<\/tr>\n
<tr>\n<td>16.30-16.50<\/td>\n<td><em>RNAsynth: a graph kernel approach to learn constraints for RNA inverse folding<\/em><\/td>\n<td>Fabrizio Costa<\/td>\n<\/tr>\n
<tr>\n<td>16.50-17.10<\/td>\n<td><em>Transducing Sentences to Syntactic Feature Vectors: an Alternative Way to \u201cParse\u201d?<\/em><\/td>\n<td>Fabio Massimo Zanzotto, Lorenzo Dell\u2019Arciprete, Lorenzo Ferrone<\/td>\n<\/tr>\n
<tr>\n<td>17.10-17.30<\/td>\n<td><em>Conceptual change as a cognitive phase transition<\/em><\/td>\n<td>Lorenza Saitta<\/td>\n<\/tr>\n
<tr>\n<td>17.30-17.45<\/td>\n<td><em>Model complexities of shallow neural networks for the approximation of input-output mappings with large variations<\/em><\/td>\n<td>Marcello Sanguineti, Vera Kurkova<\/td>\n<\/tr>\n
<tr>\n<td>17.45-18.00<\/td>\n<td><em>Discussion and Conclusions: The future of MLDM<\/em><\/td>\n<td><\/td>\n<\/tr>\n
<\/tbody>\n<\/table>\n<h2>Organising Committee<\/h2>\n