Contact

Department of Information Engineering
Via G.B. Tiepolo 85
35129 Padova (PD), Italy
Phone: +39-049-827-7831
http://robotics.dei.unipd.it

2022

  1. Stefano Tortora, Roberto Sassi, Ruggero Carli and Emanuele Menegatti. Weighted Shared-Autonomy with Assistance-to-Target and Collision Avoidance for Intelligent Assistive Robotics. In Lecture Notes in Networks and Systems 412. 2022, 580–593. URL, DOI BibTeX

    @conference{10.1007/978-3-030-95892-3_44,
    	id = "10.1007/978-3-030-95892-3_44",
    	pages = "580--593",
    	isbn = "978-3-030-95891-6",
    	doi = "10.1007/978-3-030-95892-3_44",
    	url = "https://link.springer.com/chapter/10.1007/978-3-030-95892-3_44",
    	keywords = "Assistive robotics; Collision avoidance; Human-robot interaction; Robotic manipulator; Shared-autonomy",
    	abstract = "Intelligent Assistive Robotics (IAR) has been recently introduced as a branch of Service Robotics developing semi-autonomous robots helping people with physical disability in daily-living activities. Literature often focuses on the development of assistive robots with a single semi-autonomous behavior, while the integration of multiple assistance is rarely considered. In this paper, we propose a novel shared-autonomy controller integrating the contribution of two semi-autonomous behavioral modules: an assistance-to-target module, adjusting user’s input to simplify the target reaching, and a collision avoidance module, moving the robot away from trajectories leading to possible collisions with obstacles. An arbitration function based on the risk of collision is introduced to prevent conflicts between the two behaviors. The proposed controller has been successfully evaluated both offline and online in a reach-to-grasp task with a simulated robotic manipulator. Results show that the proposed methods significantly reduced not only the time to complete the task with respect to the pure teleoperation or controllers including just one semi-autonomous behavior, but also the user’s workload controlling the manipulator with a wearable interface. The context-awareness employed by the IAR may increase the reliability of the human-robot interaction, pushing forward the use of this technology in complex environments to assist disabled people at home.",
    	booktitle = "Lecture Notes in Networks and Systems",
    	volume = 412,
    	year = 2022,
    	title = "Weighted Shared-Autonomy with Assistance-to-Target and Collision Avoidance for Intelligent Assistive Robotics",
    	author = "Tortora, Stefano and Sassi, Roberto and Carli, Ruggero and Menegatti, Emanuele"
    }
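
     A minimal sketch (not the authors' implementation) of the risk-weighted blending described in the abstract above: the user's velocity command is mixed with an assistance-to-target term and a collision-avoidance term, and the arbitration weight is driven by an assumed collision-risk estimate. All names, gains, and the risk model are hypothetical.

     import numpy as np

     def arbitrate(v_user, v_assist, v_avoid, risk, risk_threshold=0.5):
         """Blend user, assistance, and avoidance velocities.

         risk in [0, 1] is an assumed collision-risk estimate; above the
         threshold the avoidance term progressively dominates."""
         alpha = np.clip((risk - risk_threshold) / (1.0 - risk_threshold), 0.0, 1.0)
         # Assistance-to-target adjusts the user's input when risk is low;
         # collision avoidance takes over as the risk grows.
         v_shared = 0.5 * (v_user + v_assist)
         return (1.0 - alpha) * v_shared + alpha * v_avoid

     # Example: the user pushes toward +x, assistance nudges toward the target,
     # avoidance pushes away from a nearby obstacle.
     print(arbitrate(np.array([0.2, 0.0, 0.0]),
                     np.array([0.15, 0.05, 0.0]),
                     np.array([-0.1, 0.1, 0.0]),
                     risk=0.7))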
    
  2. Daniel Fusaro, Emilio Olivastri, Daniele Evangelista, Marco Imperoli, Emanuele Menegatti and Alberto Pretto. Pushing the Limits of Learning-based Traversability Analysis for Autonomous Driving on CPU. In Proceedings of INTERNATIONAL CONFERENCE ON INTELLIGENT AUTONOMOUS SYSTEMS (IAS-17). 2022. DOI BibTeX

    @conference{10.48550/arXiv.2206.03083,
    	id = "10.48550/arXiv.2206.03083",
    	doi = "10.48550/arXiv.2206.03083",
    	booktitle = "Proceedings of INTERNATIONAL CONFERENCE ON INTELLIGENT AUTONOMOUS SYSTEMS (IAS-17)",
    	year = 2022,
    	title = "Pushing the Limits of Learning-based Traversability Analysis for Autonomous Driving on CPU",
    	author = "Fusaro, Daniel and Olivastri, Emilio and Evangelista, Daniele and Imperoli, Marco and Menegatti, Emanuele and Pretto, Alberto"
    }
    
  3. M Yoshida, G Giruzzi, N Aiba, J F Artaud, J Ayllon-Guerola, L Balbinot, O Beeke, E Belonohy, P Bettini, W Bin, A Bierwage, T Bolzonella, M Bonotto, C Boulbe, J Buermans, M Chernyshova, S Coda, R Coelho, S Davis, C Day, G De Tommasi, M Dibon, A Ejiri, G Falchetto, A Fassina, B Faugeras, L Figini, M Fukumoto, S Futatani, K Galazka, J Garcia, M Garcia-Munoz, L Garzotti, L Giacomelli, L Giudicotti, S Hall, N Hayashi, C Hoa, M Honda, K Hoshino, M Iafrati, A Iantchenko, S Ide, S Iio, R Imazawa, S Inoue, A Isayama, E Joffrin, K Kamiya, Y Ko, M Kobayashi, T Kobayashi, G Kocsis, A Kovacsik, T Kurki-Suonio, B Lacroix, P Lang, P Lauber, A Louzguiti, E De La Luna, G Marchiori, M Mattei, A Matsuyama, S Mazzi, A Mele, F Michel, Y Miyata, J Morales, P Moreau, A Moro, T Nakano, M Nakata, E Narita, R Neu, S Nicollet, M Nocente, S Nowak, F P Orsitto, V Ostuni, Y Ohtani, N Oyama, R Pasqualotto, B Pegourie, E Perelli, L Pigatto, C Piccinni, A Pironti, P Platania, B Ploeckl, D Ricci, P Roussel, G Rubino, R Sano, K Sarkimaki, K Shinohara, S Soare, C Sozzi, S Sumida, T Suzuki, Y Suzuki, T Szabolics, T Szepesi, Y Takase, M Takech, N Tamura, K Tanaka, H Tanaka, M Tardocchi, A Terakado, H Tojo, T Tokuzawa, A Torre, N Tsujii, H Tsutsui, Y Ueda, H Urano, M Valisa, M Vallar, J Vega, F Villone, T Wakatsuki, T Wauters, M Wischmeier, S Yamoto and L Zani. Plasma physics and control studies planned in JT-60SA for ITER and DEMO operations and risk mitigation. PLASMA PHYSICS AND CONTROLLED FUSION 64, 2022. DOI BibTeX

    @article{10.1088/1361-6587/ac57a0,
    	id = "10.1088/1361-6587/ac57a0",
    	doi = "10.1088/1361-6587/ac57a0",
    	keywords = "JT-60SA; plasma control; risk mitigation; scenario development",
    	abstract = "A large superconducting machine, JT-60SA has been constructed to provide major contributions to the ITER program and DEMO design. For the success of the ITER project and fusion reactor, understanding and development of plasma controllability in ITER and DEMO relevant higher beta regimes are essential. JT-60SA has focused the program on the plasma controllability for scenario development and risk mitigation in ITER as well as on investigating DEMO relevant regimes. This paper summarizes the high research priorities and strategy for the JT-60SA project. Recent works on simulation studies to prepare the plasma physics and control experiments are presented, such as plasma breakdown and equilibrium controls, hybrid and steady-state scenario development, and risk mitigation techniques. Contributions of JT-60SA to ITER and DEMO have been clarified through those studies.",
    	volume = 64,
    	journal = "PLASMA PHYSICS AND CONTROLLED FUSION",
    	publisher = "IOP Publishing Ltd",
    	year = 2022,
    	title = "Plasma physics and control studies planned in JT-60SA for ITER and DEMO operations and risk mitigation",
    	author = "Yoshida, M. and Giruzzi, G. and Aiba, N. and Artaud, J. F. and Ayllon-Guerola, J. and Balbinot, L. and Beeke, O. and Belonohy, E. and Bettini, P. and Bin, W. and Bierwage, A. and Bolzonella, T. and Bonotto, M. and Boulbe, C. and Buermans, J. and Chernyshova, M. and Coda, S. and Coelho, R. and Davis, S. and Day, C. and De Tommasi, G. and Dibon, M. and Ejiri, A. and Falchetto, G. and Fassina, A. and Faugeras, B. and Figini, L. and Fukumoto, M. and Futatani, S. and Galazka, K. and Garcia, J. and Garcia-Munoz, M. and Garzotti, L. and Giacomelli, L. and Giudicotti, L. and Hall, S. and Hayashi, N. and Hoa, C. and Honda, M. and Hoshino, K. and Iafrati, M. and Iantchenko, A. and Ide, S. and Iio, S. and Imazawa, R. and Inoue, S. and Isayama, A. and Joffrin, E. and Kamiya, K. and Ko, Y. and Kobayashi, M. and Kobayashi, T. and Kocsis, G. and Kovacsik, A. and Kurki-Suonio, T. and Lacroix, B. and Lang, P. and Lauber, P. and Louzguiti, A. and De La Luna, E. and Marchiori, G. and Mattei, M. and Matsuyama, A. and Mazzi, S. and Mele, A. and Michel, F. and Miyata, Y. and Morales, J. and Moreau, P. and Moro, A. and Nakano, T. and Nakata, M. and Narita, E. and Neu, R. and Nicollet, S. and Nocente, M. and Nowak, S. and Orsitto, F. P. and Ostuni, V. and Ohtani, Y. and Oyama, N. and Pasqualotto, R. and Pegourie, B. and Perelli, E. and Pigatto, L. and Piccinni, C. and Pironti, A. and Platania, P. and Ploeckl, B. and Ricci, D. and Roussel, P. and Rubino, G. and Sano, R. and Sarkimaki, K. and Shinohara, K. and Soare, S. and Sozzi, C. and Sumida, S. and Suzuki, T. and Suzuki, Y. and Szabolics, T. and Szepesi, T. and Takase, Y. and Takech, M. and Tamura, N. and Tanaka, K. and Tanaka, H. and Tardocchi, M. and Terakado, A. and Tojo, H. and Tokuzawa, T. and Torre, A. and Tsujii, N. and Tsutsui, H. and Ueda, Y. and Urano, H. and Valisa, M. and Vallar, M. and Vega, J. and Villone, F. and Wakatsuki, T. and Wauters, T. and Wischmeier, M. and Yamoto, S. and Zani, L."
    }
    
  4. L Tonin, G Beraldo, S Tortora and E Menegatti. ROS-Neuro: An Open-Source Platform for Neurorobotics. FRONTIERS IN NEUROROBOTICS 16, 2022. DOI BibTeX

    @article{10.3389/fnbot.2022.886050,
    	id = "10.3389/fnbot.2022.886050",
    	doi = "10.3389/fnbot.2022.886050",
    	keywords = "brain-machine interface; neural interface; neurorobotics; ROS; ROS-Neuro",
    	abstract = "The growing interest in neurorobotics has led to a proliferation of heterogeneous neurophysiological-based applications controlling a variety of robotic devices. Although recent years have seen great advances in this technology, the integration between human neural interfaces and robotics is still limited, making evident the necessity of creating a standardized research framework bridging the gap between neuroscience and robotics. This perspective paper presents Robot Operating System (ROS)-Neuro, an open-source framework for neurorobotic applications based on ROS. ROS-Neuro aims to facilitate the software distribution, the repeatability of the experimental results, and support the birth of a new community focused on neuro-driven robotics. In addition, the exploitation of Robot Operating System (ROS) infrastructure guarantees stability, reliability, and robustness, which represent fundamental aspects to enhance the translational impact of this technology. We suggest that ROS-Neuro might be the future development platform for the flourishing of a new generation of neurorobots to promote the rehabilitation, the inclusion, and the independence of people with disabilities in their everyday life.",
    	volume = 16,
    	journal = "FRONTIERS IN NEUROROBOTICS",
    	publisher = "Frontiers Media S.A.",
    	year = 2022,
    	title = "ROS-Neuro: An Open-Source Platform for Neurorobotics",
    	author = "Tonin, L. and Beraldo, G. and Tortora, S. and Menegatti, E."
    }
    
  5. D Fusaro, E Olivastri, D Evangelista, P Iob and A Pretto. An Hybrid Approach to Improve the Performance of Encoder-Decoder Architectures for Traversability Analysis in Urban Environments. In 2022 IEEE Intelligent Vehicles Symposium (IV) 2022-June. 2022, 1745–1750. DOI BibTeX

    @conference{10.1109/IV51971.2022.9827248,
    	id = "10.1109/IV51971.2022.9827248",
    	pages = "1745--1750",
    	isbn = "978-1-6654-8821-1",
    	doi = "10.1109/IV51971.2022.9827248",
    	booktitle = "2022 IEEE Intelligent Vehicles Symposium (IV)",
    	volume = "2022-June",
    	year = 2022,
    	title = "An Hybrid Approach to Improve the Performance of Encoder-Decoder Architectures for Traversability Analysis in Urban Environments",
    	author = "Fusaro, D. and Olivastri, E. and Evangelista, D. and Iob, P. and Pretto, A."
    }
    
  6. Loris Nanni, Sheryl Brahnam, Michelangelo Paci and Stefano Ghidoni. Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets. SENSORS 22, 2022. DOI BibTeX

    @article{10.3390/s22166129,
    	id = "10.3390/s22166129",
    	doi = "10.3390/s22166129",
    	keywords = "MeLU variants; activation functions; biomedical classification; convolutional neural networks; ensembles; Diagnostic Imaging; Neural Networks, Computer",
    	abstract = "CNNs and other deep learners are now state-of-the-art in medical imaging research. However, the small sample size of many medical data sets dampens performance and results in overfitting. In some medical areas, it is simply too labor-intensive and expensive to amass images numbering in the hundreds of thousands. Building Deep CNN ensembles of pre-trained CNNs is one powerful method for overcoming this problem. Ensembles combine the outputs of multiple classifiers to improve performance. This method relies on the introduction of diversity, which can be introduced on many levels in the classification workflow. A recent ensembling method that has shown promise is to vary the activation functions in a set of CNNs or within different layers of a single CNN. This study aims to examine the performance of both methods using a large set of twenty activations functions, six of which are presented here for the first time: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The proposed method was tested on fifteen medical data sets representing various classification tasks. The best performing ensemble combined two well-known CNNs (VGG16 and ResNet50) whose standard ReLU activation layers were randomly replaced with another. Results demonstrate the superiority in performance of this approach.",
    	volume = 22,
    	journal = "SENSORS",
    	year = 2022,
    	title = "Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets",
    	author = "Nanni, Loris and Brahnam, Sheryl and Paci, Michelangelo and Ghidoni, Stefano"
    }
    
  7. A Gottardi, S Tortora, E Tosello and E Menegatti. Shared Control in Robot Teleoperation With Improved Potential Fields. IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, pages 1–13, 2022. URL, DOI BibTeX

    @article{10.1109/THMS.2022.3155716,
    	id = "10.1109/THMS.2022.3155716",
    	pages = "1--13",
    	doi = "10.1109/THMS.2022.3155716",
    	url = "https://ieeexplore.ieee.org/abstract/document/9734752",
    	keywords = "Aerospace electronics; Artificial potential fields (APFs) le2; Collision avoidance; collision avoidance; Hidden Markov models; human–robot interaction; Real-time systems; Robots; shared control; soft constraint satisfaction problem (CSP); Task analysis; teleoperation; Trajectory",
    	abstract = "In shared control teleoperation, the robot assists the user in accomplishing the desired task. Rather than simply executing the user’s command, the robot attempts to integrate it with information from the environment, such as obstacle and/or goal locations, and it modifies its behavior accordingly. In this article, we propose a real-time shared control teleoperation framework based on an artificial potential field approach improved by the dynamic generation of escape points around the obstacles. These escape points are virtual attractive points in the potential field that the robot can follow to overcome the obstacles more easily. The selection of which escape point to follow is done in real time by solving a soft-constrained problem optimizing the reaching of the most probable goal, estimated from the user’s action. Our proposal has been extensively compared with two state-of-the-art approaches in a static cluttered environment and a dynamic setup with randomly moving objects. Experimental results showed the efficacy of our method in terms of quantitative and qualitative metrics. For example, it significantly decreases the time to complete the tasks and the user’s intervention, and it helps reduce the failure rate. Moreover, we received positive feedback from the users that tested our proposal. Finally, the proposed framework is compatible with both mobile and manipulator robots.",
    	journal = "IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2022,
    	title = "Shared Control in Robot Teleoperation With Improved Potential Fields",
    	author = "Gottardi, A. and Tortora, S. and Tosello, E. and Menegatti, E."
    }
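
     A rough sketch of an artificial-potential-field velocity command with an additional attractive "escape point", in the spirit of the approach summarised above; the gains, field shapes, and escape-point placement are placeholder assumptions, not the published method.

     import numpy as np

     def apf_velocity(pos, goal, obstacles, escape_point=None,
                      k_att=1.0, k_rep=0.5, k_esc=0.8, d0=1.0):
         """Attractive/repulsive potential-field command with an optional
         attractive escape point placed near an obstacle to ease passage."""
         v = k_att * (goal - pos)                       # attraction toward the goal
         for obs in obstacles:
             d = np.linalg.norm(pos - obs)
             if 1e-6 < d < d0:
                 # classic repulsive gradient, active only within range d0
                 v += k_rep * (1.0 / d - 1.0 / d0) * (pos - obs) / d**3
         if escape_point is not None:
             v += k_esc * (escape_point - pos)          # pull toward the escape point
         return v

     pos, goal = np.array([0.0, 0.0]), np.array([2.0, 0.0])
     obstacles = [np.array([1.0, 0.05])]
     escape = np.array([1.0, 0.6])                      # hypothetical escape point
     print(apf_velocity(pos, goal, obstacles, escape))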
    
  8. Gloria Beraldo, Luca Tonin, José R Millán and Emanuele Menegatti. Shared Intelligence for Robot Teleoperation via BMI. IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2022. URL, DOI BibTeX

    @article{10.1109/THMS.2021.3137035,
    	id = "10.1109/THMS.2021.3137035",
    	doi = "10.1109/THMS.2021.3137035",
    	url = "https://ieeexplore.ieee.org/abstract/document/9682521",
    	abstract = "This article proposes a novel shared intelligence system for brain-machine interface (BMI) teleoperated mobile robots where user’s intention and robot’s intelligence are concurrent elements equally participating in the decision process. We designed the system to rely on policies guiding the robot’s behavior according to the current situation. We hypothesized that the fusion of these policies would lead to the identification of the next, most probable, location of the robot in accordance with the user’s expectations. We asked 13 healthy subjects to evaluate the system during teleoperated navigation tasks in a crowded office environment with a keyboard (reliable interface) and with 2-class motor imagery (MI) BMI (uncertain control channel). Experimental results show that our shared intelligence system 1) allows users to efficiently teleoperate the robot in both control modalities; 2) it ensures a level of BMI navigation performances comparable to the keyboard control; 3) it actively assists BMI users in accomplishing the tasks. These results highlight the importance of investigating advanced human-machine interaction (HMI) strategies and introducing robotic intelligence to improve the performances of BMI actuated devices.",
    	journal = "IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS",
    	year = 2022,
    	title = "Shared Intelligence for Robot Teleoperation via BMI",
    	author = "Beraldo, Gloria and Tonin, Luca and Millán, José del R. and Menegatti, Emanuele"
    }
    
  9. Stefano Tortora, Gloria Beraldo, Francesco Bettella, Emanuela Formaggio, Maria Rubega, Alessandra Del Felice, Stefano Masiero, Ruggero Carli, Nicola Petrone, Emanuele Menegatti and Luca Tonin. Neural correlates of user learning during long-term BCI training for the Cybathlon competition. JOURNAL OF NEUROENGINEERING AND REHABILITATION 19, 2022. URL, DOI BibTeX

    @article{10.1186/s12984-022-01047-x,
    	id = "10.1186/s12984-022-01047-x",
    	doi = "10.1186/s12984-022-01047-x",
    	url = "https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-022-01047-x",
    	keywords = "Brain-computer interface; Cybathlon; Long-term evaluation; Motor imagery; Mutual learning; Riemann geometry; User learning; Brain; Electroencephalography; Humans; Machine Learning; Reproducibility of Results; Brain-Computer Interfaces",
    	abstract = "Brain-computer interfaces (BCIs) are systems capable of translating human brain patterns, measured through electroencephalography (EEG), into commands for an external device. Despite the great advances in machine learning solutions to enhance the performance of BCI decoders, the translational impact of this technology remains elusive. The reliability of BCIs is often unsatisfactory for end-users, limiting their application outside a laboratory environment.",
    	volume = 19,
    	journal = "JOURNAL OF NEUROENGINEERING AND REHABILITATION",
    	year = 2022,
    	title = "Neural correlates of user learning during long-term BCI training for the Cybathlon competition",
    	author = "Tortora, Stefano and Beraldo, Gloria and Bettella, Francesco and Formaggio, Emanuela and Rubega, Maria and Del Felice, Alessandra and Masiero, Stefano and Carli, Ruggero and Petrone, Nicola and Menegatti, Emanuele and Tonin, Luca"
    }
    
  10. A Saviolo, M Bonotto, D Evangelista, M Imperoli, J Lazzaro, E Menegatti and A Pretto. Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks. In Lecture Notes in Networks and Systems 412. 2022, 696–712. DOI BibTeX

    @conference{10.1007/978-3-030-95892-3_52,
    	id = "10.1007/978-3-030-95892-3_52",
    	pages = "696--712",
    	isbn = "978-3-030-95891-6",
    	doi = "10.1007/978-3-030-95892-3_52",
    	keywords = "Deep learning; Foreground segmentation; Human body part segmentation; Semantic segmentation; Synthetic datasets",
    	abstract = "This paper presents a new framework for human body part segmentation based on Deep Convolutional Neural Networks trained using only synthetic data. The proposed approach achieves cutting-edge results without the need of training the models with real annotated data of human body parts. Our contributions include a data generation pipeline, that exploits a game engine for the creation of the synthetic data used for training the network, and a novel pre-processing module, that combines edge response maps and adaptive histogram equalization to guide the network to learn the shape of the human body parts ensuring robustness to changes in the illumination conditions. For selecting the best candidate architecture, we perform exhaustive tests on manually annotated images of real human body limbs. We further compare our method against several high-end commercial segmentation tools on the body parts segmentation task. The results show that our method outperforms the other models by a significant margin. Finally, we present an ablation study to validate our pre-processing module. With this paper, we release an implementation of the proposed approach along with the acquired datasets.",
    	booktitle = "Lecture Notes in Networks and Systems",
    	volume = 412,
    	publisher = "Springer Science and Business Media Deutschland GmbH",
    	year = 2022,
    	title = "Learning to Segment Human Body Parts with Synthetically Trained Deep Convolutional Networks",
    	author = "Saviolo, A. and Bonotto, M. and Evangelista, D. and Imperoli, M. and Lazzaro, J. and Menegatti, E. and Pretto, A."
    }
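
     A small illustration of the kind of pre-processing module described above, combining an edge response map with adaptive histogram equalization before the image reaches the network; the parameters, thresholds, and channel stacking are placeholder choices, not the published pipeline.

     import cv2
     import numpy as np

     def preprocess(bgr_image):
         """Combine CLAHE-equalized intensity with an edge response map so the
         network sees shape cues that are more robust to illumination changes."""
         gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
         clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
         equalized = clahe.apply(gray)                  # adaptive histogram equalization
         edges = cv2.Canny(equalized, 50, 150)          # edge response map
         # Stack equalized intensity and edges as extra channels next to the input.
         return np.dstack([bgr_image, equalized, edges])

     frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)   # stand-in for a real frame
     print(preprocess(frame).shape)                                  # (480, 640, 5)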
    
  11. Gloria Beraldo, Kenji Koide, Amedeo Cesta, Satoshi Hoshino, Jun Miura, Matteo Salvà and Emanuele Menegatti. Shared autonomy for telepresence robots based on people-aware navigation. In Proceedings of IEEE 16th international conference on Intelligent Autonomous System (IAS). 2022. URL, DOI BibTeX

    @conference{10.1007/978-3-030-95892-3_9,
    	id = "10.1007/978-3-030-95892-3_9",
    	doi = "10.1007/978-3-030-95892-3_9",
    	url = "https://link.springer.com/chapter/10.1007/978-3-030-95892-3_9",
    	keywords = "Shared-autonomy, People-aware, Social navigation, Dynamic environments, Telepresence robots",
    	booktitle = "Proceedings of IEEE 16th international conference on Intelligent Autonomous System (IAS)",
    	year = 2022,
    	title = "Shared autonomy for telepresence robots based on people-aware navigation",
    	author = "Beraldo, Gloria and Koide, Kenji and Cesta, Amedeo and Hoshino, Satoshi and Miura, Jun and Salvà, Matteo and Menegatti, Emanuele"
    }
    


2021

  1. G Maguolo, L Nanni and S Ghidoni. Ensemble of convolutional neural networks trained with different activation functions. EXPERT SYSTEMS WITH APPLICATIONS 166, 2021. DOI BibTeX

    @article{10.1016/j.eswa.2020.114048,
    	doi = "10.1016/j.eswa.2020.114048",
    	keywords = "Activation functions; Ensemble; MeLU; Neural networks",
    	abstract = "Activation functions play a vital role in the training of Convolutional Neural Networks. For this reason, developing efficient and well-performing functions is a crucial problem in the deep learning community. The idea of these approaches is to allow a reliable parameter learning, avoiding vanishing gradient problems. The goal of this work is to propose an ensemble of Convolutional Neural Networks trained using several different activation functions. Moreover, a novel activation function is here proposed for the first time. Our aim is to improve the performance of Convolutional Neural Networks in small/medium sized biomedical datasets. Our results clearly show that the proposed ensemble outperforms Convolutional Neural Networks trained with a standard ReLU as activation function. The proposed ensemble outperforms with a p-value of 0.01 each tested stand-alone activation function; for reliable performance comparison we tested our approach on more than 10 datasets, using two well-known Convolutional Neural Networks: Vgg16 and ResNet50.",
    	volume = 166,
    	journal = "EXPERT SYSTEMS WITH APPLICATIONS",
    	publisher = "Elsevier Ltd",
    	year = 2021,
    	title = "Ensemble of convolutional neural networks trained with different activation functions",
    	author = "Maguolo, G. and Nanni, L. and Ghidoni, S.",
    	id = "11577_3389751"
    }
    
  2. Maria Rubega, Emanuela Formaggio, Roberto Di Marco, Margherita Bertuccelli, Stefano Tortora, Emanuele Menegatti, Manuela Cattelan, Paolo Bonato, Stefano Masiero and Alessandra Del Felice. Cortical correlates in upright dynamic and static balance in the elderly. SCIENTIFIC REPORTS 11, 2021. DOI BibTeX

    @article{10.1038/s41598-021-93556-3,
    	doi = "10.1038/s41598-021-93556-3",
    	abstract = "Falls are the second most frequent cause of injury in the elderly. Physiological processes associated with aging affect the elderly's ability to respond to unexpected balance perturbations, leading to increased fall risk. Every year, approximately 30% of adults, 65 years and older, experiences at least one fall. Investigating the neurophysiological mechanisms underlying the control of static and dynamic balance in the elderly is an emerging research area. The study aimed to identify cortical and muscular correlates during static and dynamic balance tests in a cohort of young and old healthy adults. We recorded cortical and muscular activity in nine elderly and eight younger healthy participants during an upright stance task in static and dynamic (core board) conditions. To simulate real-life dual-task postural control conditions, the second set of experiments incorporated an oddball visual task. We observed higher electroencephalographic (EEG) delta rhythm over the anterior cortex in the elderly and more diffused fast rhythms (i.e., alpha, beta, gamma) in younger participants during the static balance tests. When adding a visual oddball, the elderly displayed an increase in theta activation over the sensorimotor and occipital cortices. During the dynamic balance tests, the elderly showed the recruitment of sensorimotor areas and increased muscle activity level, suggesting a preferential motor strategy for postural control. This strategy was even more prominent during the oddball task. Younger participants showed reduced cortical and muscular activity compared to the elderly, with the noteworthy difference of a preferential activation of occipital areas that increased during the oddball task. These results support the hypothesis that different strategies are used by the elderly compared to younger adults during postural tasks, particularly when postural and cognitive tasks are combined. The knowledge gained in this study could inform the development of age-specific rehabilitative and assistive interventions.",
    	volume = 11,
    	journal = "SCIENTIFIC REPORTS",
    	year = 2021,
    	title = "Cortical correlates in upright dynamic and static balance in the elderly",
    	author = "Rubega, Maria and Formaggio, Emanuela and Di Marco, Roberto and Bertuccelli, Margherita and Tortora, Stefano and Menegatti, Emanuele and Cattelan, Manuela and Bonato, Paolo and Masiero, Stefano and Del Felice, Alessandra",
    	id = "11577_3395906"
    }
    
  3. Maria Rubega, Roberto Di Marco, Marianna Zampini, Emanuela Formaggio, Emanuele Menegatti, Paolo Bonato, Stefano Masiero and Alessandra Del Felice. Muscular and cortical activation during dynamic and static balance in the elderly: A scoping review. AGING BRAIN 1, 2021. URL, DOI BibTeX

    @article{10.1016/j.nbas.2021.100013,
    	doi = "10.1016/j.nbas.2021.100013",
    	url = "https://www.sciencedirect.com/science/article/pii/S2589958921000098#ak005",
    	keywords = "Electromyography, Electroencephalography, Balance, Older adults, Postural control",
    	abstract = "Falls due to balance impairment are a major cause of injury and disability in the elderly. The study of neurophysiological correlates during static and dynamic balance tasks is an emerging area of research that could lead to novel rehabilitation strategies and reduce fall risk. This review aims to highlight key concepts and identify gaps in the current knowledge of balance control in the elderly that could be addressed by relying on surface electromyographic (EMG) and electroencephalographic (EEG) recordings. The neurophysiological hypotheses underlying balance studies in the elderly as well as the methodologies, findings, and limitations of prior work are herein addressed. The literature shows: 1) a wide heterogeneity in the experimental procedures, protocols, and analyses; 2) a paucity of studies involving the investigation of cortical activity; 3) aging-related alterations of cortical activation during balance tasks characterized by lower cortico-muscular coherence and increased allocation of attentional control to postural tasks in the elderly; and 4) EMG patterns characterized by delayed onset after perturbations, increased levels of activity, and greater levels of muscle co-activation in the elderly compared to younger adults. EMG and EEG recordings are valuable tools to monitor muscular and cortical activity during the performance of balance tasks. However, standardized protocols and analysis techniques should be agreed upon and shared by the scientific community to provide reliable and reproducible results. This will allow researchers to gain a comprehensive knowledge on the neurophysiological changes affecting static and dynamic balance in the elderly and will inform the design of rehabilitative and preventive interventions.",
    	volume = 1,
    	journal = "AGING BRAIN",
    	publisher = "Elsevier",
    	year = 2021,
    	title = "Muscular and cortical activation during dynamic and static balance in the elderly: A scoping review",
    	author = "Rubega, Maria and Di Marco, Roberto and Zampini, Marianna and Formaggio, Emanuela and Menegatti, Emanuele and Bonato, Paolo and Masiero, Stefano and Del Felice, Alessandra",
    	id = "11577_3388698"
    }
    
  4. M Terreran and S Ghidoni. Light deep learning models enriched with Entangled features for RGB-D semantic segmentation. ROBOTICS AND AUTONOMOUS SYSTEMS 146, 2021. DOI BibTeX

    @article{10.1016/j.robot.2021.103862,
    	doi = "10.1016/j.robot.2021.103862",
    	keywords = "Deep learning; Scene understanding; Semantic segmentation",
    	abstract = "Semantic segmentation is a crucial task in emerging robotic applications like autonomous driving and social robotics. State-of-the-art methods in this field rely on deep learning, with several works in the literature following the trend of using larger networks to achieve higher performance. However, this leads to greater model complexity and higher computational costs, which make it difficult to integrate such models on mobile robots. In this work we investigate how it is possible to obtain lighter performing deep models introducing additional data at a very low computational cost, instead of increasing the network complexity. We consider the features used in the 3D Entangled Forests algorithm, proposing different strategies to integrate such additional information into different deep networks. The new features allow to obtain lighter and performing segmentation models, either by shrinking the network size or improving existing networks proposed for real-time segmentation. Such result represents an interesting alternative in mobile robotics application, where computational power and energy are limited.",
    	volume = 146,
    	journal = "ROBOTICS AND AUTONOMOUS SYSTEMS",
    	publisher = "Elsevier B.V.",
    	year = 2021,
    	title = "Light deep learning models enriched with Entangled features for RGB-D semantic segmentation",
    	author = "Terreran, M. and Ghidoni, S.",
    	id = "11577_3400526"
    }
    
  5. Mattia Guidolin, Razvan Andrei Budau Petrea, Roberto Oboe, Monica Reggiani, Emanuele Menegatti and Luca Tagliapietra. On the accuracy of IMUs for human motion tracking: a comparative evaluation. In Conference Proceedings - IEEE International Conference on Mechatronics. 2021. DOI BibTeX

    @conference{10.1109/ICM46511.2021.9385684,
    	doi = "10.1109/ICM46511.2021.9385684",
    	abstract = "Inertial Measurement Units (IMUs) are becoming more and more popular in human motion tracking applications. Among the other advantages, wearability, portability, limited costs, and accuracy are the main drivers of their increasing use. These devices are nowadays well-established commercially available products, ranging from few to hundreds of Euros. The main purpose of this study is to investigate the potentialities and the limits of IMUs belonging to different commercial segments in providing accurate orientation estimates within the operating conditions characterizing the human motion. These are simulated by means of a direct drive servomotor, in order to ensure accuracy and repeatability of the whole assessment pipeline. Both static and dynamic conditions are analyzed, the latter obtained by varying frequency and amplitude of a sinusoidal motion, thus evaluating the performances for a broad set of movements. IMUs orientations are estimated through proprietary filters, when available, and then compared with the well-established Madgwick's filter, to effectively investigate the performances of the on-board sensors. Results show that the low-cost IMUs are suited for applications requiring low bandwidth, while the comparison through Madgwick's filter did not highlight appreciable differences among the IMUs.",
    	booktitle = "Conference Proceedings - IEEE International Conference on Mechatronics",
    	year = 2021,
    	title = "On the accuracy of IMUs for human motion tracking: a comparative evaluation",
    	author = "Guidolin, Mattia and BUDAU PETREA, RAZVAN ANDREI and Oboe, Roberto and Reggiani, Monica and Menegatti, Emanuele and Tagliapietra, Luca",
    	id = "11577_3363611"
    }
    
  6. Mattia Guidolin, Emanuele Menegatti, Monica Reggiani and Luca Tagliapietra. A ROS driver for Xsens wireless inertial measurement unit systems. In Conference Proceedings - IEEE International Conference on Industrial Technology. 2021, 677–683. DOI BibTeX

    @conference{10.1109/ICIT46573.2021.9453640,
    	pages = "677--683",
    	doi = "10.1109/ICIT46573.2021.9453640",
    	keywords = "Human robot interaction, XSens, IMU, ROS, HiRos, Inertial Measurement Units",
    	abstract = "This paper presents an efficient open-source driver for interfacing Xsens inertial measurement systems (in particular the Xsens MTw Awinda wireless motion trackers) with the Robot Operating System (ROS). The driver supports the simultaneous connection of up to 20 trackers, limit fixed by the Xsens software, to a master PC, and directly streams sensors data (linear accelerations, angular velocities, magnetic fields, orientations) up to 120 Hz to the ROS network through one or multiple configurable topics. Moreover, a synchronization procedure is implemented to avoid possible partial frames where the readings from one (or multiple) trackers are missing. The proposed messages are based on ROS standard ones and comply with the ROS developer guidelines. This guarantees the compatibility of any ROS package requiring as input ROS messages with the proposed driver, thus effectively integrating Xsens inertial measurement systems with the ROS ecosystem. This work aims to push forward the development of a large variety of human-robot interaction applications where accurate real-time knowledge of human motion is crucial.",
    	booktitle = "Conference Proceedings - IEEE International Conference on Industrial Technology",
    	year = 2021,
    	title = "A ROS driver for Xsens wireless inertial measurement unit systems",
    	author = "Guidolin, Mattia and Menegatti, Emanuele and Reggiani, Monica and Tagliapietra, Luca",
    	id = "11577_3363610"
    }
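
     To illustrate the kind of ROS integration the driver above provides, here is a minimal rospy node publishing standard sensor_msgs/Imu messages; the topic name, frame id, and the xsens_read() data source are placeholders, not the driver's actual API.

     #!/usr/bin/env python
     import rospy
     from sensor_msgs.msg import Imu

     def xsens_read():
         """Placeholder for a read from the Xsens SDK: returns orientation
         (x, y, z, w), angular velocity, and linear acceleration."""
         return (0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 9.81)

     def main():
         rospy.init_node("xsens_imu_publisher")
         pub = rospy.Publisher("imu/data", Imu, queue_size=10)
         rate = rospy.Rate(120)                 # MTw Awinda trackers stream up to 120 Hz
         while not rospy.is_shutdown():
             q, gyro, acc = xsens_read()
             msg = Imu()
             msg.header.stamp = rospy.Time.now()
             msg.header.frame_id = "imu_link"
             msg.orientation.x, msg.orientation.y, msg.orientation.z, msg.orientation.w = q
             msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z = gyro
             msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z = acc
             pub.publish(msg)
             rate.sleep()

     if __name__ == "__main__":
         main()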
    
  7. Alberto Bacchin, Gloria Beraldo and Emanuele Menegatti. Learning to plan people-aware trajectories for robot navigation: A genetic algorithm. In Proceedings of IEEE European Conference on Mobile Robots (EMCR). 2021. URL, DOI BibTeX

    @conference{10.1109/ECMR50962.2021.9568804,
    	isbn = "978-166541213-1",
    	doi = "10.1109/ECMR50962.2021.9568804",
    	url = "https://ieeexplore.ieee.org/document/9568804",
    	abstract = "Nowadays, one of the emergent challenges in mobile robotics consists of navigating safely and efficiently in dynamic environments populated by people. This paper focuses on the robot's motion planning by proposing a learning-based method to adjust the robot's trajectories to people's movements by respecting the proxemics rules. With this purpose, we design a genetic algorithm to train the navigation stack of ROS during the goal-based navigation while the robot is disturbed by people. We also present a simulation environment based on Gazebo that extends the animated model for emulating a more natural human's walking. Preliminary results show that our approach is able to plan people-aware robot's trajectories respecting proxemics limits without worsening the performance in navigation.",
    	booktitle = "Proceedings of IEEE European Conference on Mobile Robots (EMCR)",
    	year = 2021,
    	title = "Learning to plan people-aware trajectories for robot navigation: A genetic algorithm",
    	author = "Bacchin, Alberto and Beraldo, Gloria and Menegatti, Emanuele",
    	id = "11577_3402573"
    }
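
     A toy genetic-algorithm loop of the kind described above, here tuning a generic parameter vector against a placeholder fitness; the fitness function standing in for a Gazebo navigation trial, the parameter bounds, and the GA operators are illustrative assumptions.

     import numpy as np

     rng = np.random.default_rng(0)

     def fitness(params):
         """Placeholder for a simulated navigation run: in the paper this would
         score goal-reaching performance and respect of proxemics limits."""
         return -np.sum((params - 0.3) ** 2)            # dummy objective

     def evolve(pop_size=20, n_params=4, generations=30, mut_sigma=0.05):
         pop = rng.uniform(0.0, 1.0, size=(pop_size, n_params))
         for _ in range(generations):
             scores = np.array([fitness(ind) for ind in pop])
             parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
             children = []
             while len(children) < pop_size:
                 a, b = parents[rng.integers(len(parents), size=2)]
                 cut = rng.integers(1, n_params)                   # one-point crossover
                 child = np.concatenate([a[:cut], b[cut:]])
                 child += rng.normal(0.0, mut_sigma, n_params)     # Gaussian mutation
                 children.append(np.clip(child, 0.0, 1.0))
             pop = np.array(children)
         return pop[np.argmax([fitness(ind) for ind in pop])]

     print(evolve())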
    
  8. Stefano Tortora, Maria Rubega, Emanuela Formaggio, Roberto Di Marco, Stefano Masiero, Emanuele Menegatti, Luca Tonin and Alessandra Del Felice. Age-related differences in visual P300 ERP during dual-task postural balance. In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) 2021. 2021, 6511–6514. URL, DOI BibTeX

    @conference{10.1109/EMBC46164.2021.9630088,
    	pages = "6511-6514--6514",
    	isbn = "978-1-7281-1179-7",
    	doi = "10.1109/EMBC46164.2021.9630088",
    	url = "https://ieeexplore.ieee.org/document/9630088",
    	abstract = "Standing and concurrently performing a cognitive task is a very common situation in everyday life. It is associated with a higher risk of falling in the elderly. Here, we aim at evaluating the differences of the P300 evoked potential elicited by a visual oddball paradigm between healthy younger (< 35 y) and older (> 64 y) adults during a simultaneous postural task. We found that P300 latency increases significantly (p < 0.001) when the elderly are engaged in more challenging postural tasks; younger adults show no effect of balance condition. Our results demonstrate that, even if the elderly have the same accuracy in odd stimuli detection as younger adults do, they require a longer processing time for stimulus discrimination. This finding suggests an increased attentional load which engages additional cerebral reserves.",
    	booktitle = "2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)",
    	volume = 2021,
    	year = 2021,
    	title = "Age-related differences in visual P300 ERP during dual-task postural balance",
    	author = "Tortora, Stefano and Rubega, Maria and Formaggio, Emanuela and Di Marco, Roberto and Masiero, Stefano and Menegatti, Emanuele and Tonin, Luca and Del Felice, Alessandra",
    	id = "11577_3409536"
    }
    
  9. R Di Marco, M Rubega, O Lennon, E Formaggio, N Sutaj, G Dazzi, C Venturin, I Bonini, R Ortner, H A C Bazo, L Tonin, S Tortora, S Masiero and A Del Felice. Experimental protocol to assess neuromuscular plasticity induced by an exoskeleton training session. METHODS AND PROTOCOLS 4, 2021. DOI BibTeX

    @article{10.3390/mps4030048,
    	doi = "10.3390/mps4030048",
    	keywords = "Aging; EEG; EMG; Exoskeleton; Neuromuscular plasticity; Rehabilitation; Stroke",
    	abstract = "Exoskeleton gait rehabilitation is an emerging area of research, with potential applications in the elderly and in people with central nervous system lesions, e.g., stroke, traumatic brain/spinal cord injury. However, adaptability of such technologies to the user is still an unmet goal. Despite important technological advances, these robotic systems still lack the fine tuning necessary to adapt to the physiological modification of the user and are not yet capable of a proper human-machine interaction. Interfaces based on physiological signals, e.g., recorded by electroencephalography (EEG) and/or electromyography (EMG), could contribute to solving this technological challenge. This protocol aims to: (1) quantify neuro-muscular plasticity induced by a single training session with a robotic exoskeleton on post-stroke people and on a group of age and sex-matched controls; (2) test the feasibility of predicting lower limb motor trajectory from physiological signals for future use as control signal for the robot. An active exoskeleton that can be set in full mode (i.e., the robot fully replaces and drives the user motion), adaptive mode (i.e., assistance to the user can be tuned according to his/her needs), and free mode (i.e., the robot completely follows the user movements) will be used. Participants will undergo a preparation session, i.e., EMG sensors and EEG cap placement and inertial sensors attachment to measure, respectively, muscular and cortical activity, and motion. They will then be asked to walk in a 15 m corridor: (i) self-paced without the exoskeleton (pre-training session); (ii) wearing the exoskeleton and walking with the three modes of use; (iii) self-paced without the exoskeleton (post-training session). From this dataset, we will: (1) quantitatively estimate short-term neuroplasticity of brain connectivity in chronic stroke survivors after a single session of gait training; (2) compare muscle activation patterns during exoskeleton-gait between stroke survivors and age and sex-matched controls; and (3) perform a feasibility analysis on the use of physiological signals to decode gait intentions.",
    	volume = 4,
    	journal = "METHODS AND PROTOCOLS",
    	publisher = "MDPI AG",
    	year = 2021,
    	title = "Experimental protocol to assess neuromuscular plasticity induced by an exoskeleton training session",
    	author = "Di Marco, R. and Rubega, M. and Lennon, O. and Formaggio, E. and Sutaj, N. and Dazzi, G. and Venturin, C. and Bonini, I. and Ortner, R. and Bazo, H. A. C. and Tonin, L. and Tortora, S. and Masiero, S. and Del Felice, A.",
    	id = "11577_3408171"
    }
    
  10. Gloria Beraldo, Luca Tonin, Amedeo Cesta and Emanuele Menegatti. Brain-Driven Telepresence Robots: A Fusion of User’s Commands with Robot’s Intelligence. In 19th Proceedings of the International Conference of the Italian Association for Artificial Intelligence 12414. 2021, 235–248. URL, DOI BibTeX

    @conference{10.1007/978-3-030-77091-4_15,
    	pages = "235--248",
    	isbn = "978-3-030-77090-7",
    	doi = "10.1007/978-3-030-77091-4_15",
    	url = "https://link.springer.com/chapter/10.1007/978-3-030-77091-4_15",
    	booktitle = "19th Proceedings of the International Conference of the Italian Association for Artificial Intelligence",
    	volume = 12414,
    	year = 2021,
    	title = "Brain-Driven Telepresence Robots: A Fusion of User’s Commands with Robot’s Intelligence",
    	author = "Beraldo, Gloria and Tonin, Luca and Cesta, Amedeo and Menegatti, Emanuele",
    	id = "11577_3395898"
    }
    
  11. Gloria Beraldo, Luca Tonin and Emanuele Menegatti. Shared Intelligence for User-Supervised Robots: From User’s Commands to Robot’s Actions. In 19th Proceedings of the International Conference of the Italian Association for Artificial Intelligence 12414. 2021, 457–465. URL, DOI BibTeX

    @conference{10.1007/978-3-030-77091-4_27,
    	pages = "457--465",
    	isbn = "978-3-030-77090-7",
    	doi = "10.1007/978-3-030-77091-4_27",
    	url = "https://link.springer.com/chapter/10.1007/978-3-030-77091-4_27",
    	booktitle = "19th Proceedings of the International Conference of the Italian Association for Artificial Intelligence",
    	volume = 12414,
    	year = 2021,
    	title = "Shared Intelligence for User-Supervised Robots: From User’s Commands to Robot’s Actions",
    	author = "Beraldo, Gloria and Tonin, Luca and Menegatti, Emanuele",
    	id = "11577_3395897"
    }
    
  12. Gloria Beraldo, Luca Tonin, Amedeo Cesta and Emanuele Menegatti. Shared approaches to mentally drive telepresence robots. In Conference Proceedings - 7th Italian Workshop on Artificial Intelligence and Robotics (AIRO 2020). 2021, 22–27. URL BibTeX

    @conference{11577_3359960,
    	pages = "22--27",
    	url = "https://www.scopus.com/record/display.uri?eid=2-s2.0-85101290718",
    	abstract = "Recently there has been a growing interest in designing human-in-the-loop applications based on shared approaches that fuse the user’s commands with the perception of the context. In this scenario, we focus on user-supervised telepresence robots, designed to improve the quality of life of people suffering from severe physical disabilities or elderly who cannot move anymore. In this regard, we introduce brain-machine interfaces that enable users to directly control the robot through their brain activity. Since this interface is characterized by low bit rate and noise, herein we present different methodologies to augment the human-robot interaction and to facilitate the research and the development of these technologies.",
    	booktitle = "Conference Proceedings - 7th Italian Workshop on Artificial Intelligence and Robotics (AIRO 2020)",
    	year = 2021,
    	title = "Shared approaches to mentally drive telepresence robots",
    	author = "Beraldo, Gloria and Tonin, Luca and Cesta, Amedeo and Menegatti, Emanuele",
    	id = "11577_3359960"
    }
    
  13. A Pretto, S Aravecchia, W Burgard, N Chebrolu, C Dornhege, T Falck, F V Fleckenstein, A Fontenla, M Imperoli, R Khanna, F Liebisch, P Lottes, A Milioto, D Nardi, S Nardi, J Pfeifer, M Popovic, C Potena, C Pradalier, E Rothacker-Feder, I Sa, A Schaefer, R Siegwart, C Stachniss, A Walter, W Winterhalter, X Wu and J Nieto. Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution. IEEE ROBOTICS AND AUTOMATION MAGAZINE 28:29–49, 2021. DOI BibTeX

    @article{10.1109/MRA.2020.3012492,
    	pages = "29--49",
    	doi = "10.1109/MRA.2020.3012492",
    	volume = 28,
    	journal = "IEEE ROBOTICS AND AUTOMATION MAGAZINE",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2021,
    	title = "Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution",
    	author = "Pretto, A. and Aravecchia, S. and Burgard, W. and Chebrolu, N. and Dornhege, C. and Falck, T. and Fleckenstein, F. V. and Fontenla, A. and Imperoli, M. and Khanna, R. and Liebisch, F. and Lottes, P. and Milioto, A. and Nardi, D. and Nardi, S. and Pfeifer, J. and Popovic, M. and Potena, C. and Pradalier, C. and Rothacker-Feder, E. and Sa, I. and Schaefer, A. and Siegwart, R. and Stachniss, C. and Walter, A. and Winterhalter, W. and Wu, X. and Nieto, J.",
    	id = "11577_3392373"
    }
    
  14. N Castaman, E Pagello, E Menegatti and A Pretto. Receding Horizon Task and Motion Planning in Changing Environments. ROBOTICS AND AUTONOMOUS SYSTEMS 145, 2021. DOI BibTeX

    @article{10.1016/j.robot.2021.103863,
    	doi = "10.1016/j.robot.2021.103863",
    	keywords = "Non-static Environments; Robot manipulation; Task and Motion Planning",
    	abstract = "Complex manipulation tasks require careful integration of symbolic reasoning and motion planning. This problem, commonly referred to as Task and Motion Planning (TAMP), is even more challenging if the workspace is non-static, e.g. due to human interventions and perceived with noisy non-ideal sensors. This work proposes an online approximated TAMP method that combines a geometric reasoning module and a motion planner with a standard task planner in a receding horizon fashion. Our approach iteratively solves a reduced planning problem over a receding window of a limited number of future actions during the implementation of the actions. Thus, only the first action of the horizon is actually scheduled at each iteration, then the window is moved forward, and the problem is solved again. This procedure allows to naturally take into account potential changes in the scene while ensuring good runtime performance. We validate our approach within extensive experiments in a simulated environment. We showed that our approach is able to deal with unexpected changes in the environment while ensuring comparable performance with respect to other recent TAMP approaches in solving traditional static benchmarks. We release with this paper the open-source implementation of our method.",
    	volume = 145,
    	journal = "ROBOTICS AND AUTONOMOUS SYSTEMS",
    	publisher = "Elsevier B.V.",
    	year = 2021,
    	title = "Receding Horizon Task and Motion Planning in Changing Environments",
    	author = "Castaman, N. and Pagello, E. and Menegatti, E. and Pretto, A.",
    	id = "11577_3397909"
    }
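
     A schematic of the receding-horizon loop described above, with stubbed planning and execution steps; the window length, the stub functions, and the simple state counter are illustrative assumptions, not the published system.

     def plan_window(state, goal, horizon):
         """Placeholder for the reduced task-and-motion planning problem solved
         over the next `horizon` symbolic actions."""
         return ["action_%d" % i for i in range(horizon)]   # dummy plan

     def execute(state, action):
         """Placeholder for executing one action and observing the (possibly
         changed) environment."""
         print("executing", action)
         return state + 1

     def receding_horizon_tamp(goal, horizon=3, max_steps=5):
         state = 0
         for _ in range(max_steps):
             window = plan_window(state, goal, horizon)     # re-solve over the window
             if not window:
                 break
             state = execute(state, window[0])              # only the first action is run
         return state

     receding_horizon_tamp(goal="assemble")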
    
  15. Matteo Terreran, Daniele Evangelista, Jacopo Lazzaro and Alberto Pretto. Make It Easier: An Empirical Simplification of a Deep 3D Segmentation Network for Human Body Parts. In 2021 13th IEEE International Conference on Computer Vision Systems (ICVS). 2021. DOI BibTeX

    @inbook{10.1007/978-3-030-87156-7_12,
    	isbn = "978-3-030-87156-7",
    	doi = "10.1007/978-3-030-87156-7_12",
    	booktitle = "2021 13th IEEE International Conference on Computer Vision Systems (ICVS)",
    	publisher = "Markus Vincze, Timothy Patten, Henrik I Christensen, Lazaros Nalpantidis, Ming Liu",
    	year = 2021,
    	title = "Make It Easier: An Empirical Simplification of a Deep 3D Segmentation Network for Human Body Parts",
    	author = "Terreran, Matteo and Evangelista, Daniele and Lazzaro, Jacopo and Pretto, Alberto",
    	id = "11577_3400531"
    }
    
  16. M Fawakherji, C Potena, A Pretto, D D Bloisi and D Nardi. Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming. ROBOTICS AND AUTONOMOUS SYSTEMS 146, 2021. DOI BibTeX

    @article{10.1016/j.robot.2021.103861,
    	doi = "10.1016/j.robot.2021.103861",
    	keywords = "Agricultural robotics; cGANs; Crop/weed detection; Semantic segmentation",
    	abstract = "An effective perception system is a fundamental component for farming robots, as it enables them to properly perceive the surrounding environment and to carry out targeted operations. The most recent methods make use of state-of-the-art machine learning techniques to learn a valid model for the target task. However, those techniques need a large amount of labeled data for training. A recent approach to deal with this issue is data augmentation through Generative Adversarial Networks (GANs), where entire synthetic scenes are added to the training data, thus enlarging and diversifying their informative content. In this work, we propose an alternative solution with respect to the common data augmentation methods, applying it to the fundamental problem of crop/weed segmentation in precision farming. Starting from real images, we create semi-artificial samples by replacing the most relevant object classes (i.e., crop and weeds) with their synthesized counterparts. To do that, we employ a conditional GAN (cGAN), where the generative model is trained by conditioning the shape of the generated object. Moreover, in addition to RGB data, we take into account also near-infrared (NIR) information, generating four channel multi-spectral synthetic images. Quantitative experiments, carried out on three publicly available datasets, show that (i) our model is capable of generating realistic multi-spectral images of plants and (ii) the usage of such synthetic images in the training process improves the segmentation performance of state-of-the-art semantic segmentation convolutional networks.",
    	volume = 146,
    	journal = "ROBOTICS AND AUTONOMOUS SYSTEMS",
    	publisher = "Elsevier B.V.",
    	year = 2021,
    	title = "Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming",
    	author = "Fawakherji, M. and Potena, C. and Pretto, A. and Bloisi, D. D. and Nardi, D.",
    	id = "11577_3399543"
    }
    


2020

  1. M Antonello, S Chiesurin and S Ghidoni. Enhancing semantic segmentation with detection priors and iterated graph cuts for robotics. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 90:1–14, 2020. DOI BibTeX

    @article{10.1016/j.engappai.2019.103467,
    	pages = "1--14",
    	doi = "10.1016/j.engappai.2019.103467",
    	keywords = "Mapping; Object detection; Segmentation and categorization; Semantic scene understanding",
    	abstract = "To foster human–robot interaction, autonomous robots need to understand the environment in which they operate. In this context, one of the main challenges is semantic segmentation, together with the recognition of important objects, which can aid robots during exploration, as well as when planning new actions and interacting with the environment. In this study, we extend a multi-view semantic segmentation system based on 3D Entangled Forests (3DEF) by integrating and refining two object detectors, Mask R-CNN and You Only Look Once (YOLO), with Bayesian fusion and iterated graph cuts. The new system takes the best of its components, successfully exploiting both 2D and 3D data. Our experiments show that our approach is competitive with the state-of-the-art and leads to accurate semantic segmentations.",
    	volume = 90,
    	journal = "ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE",
    	publisher = "Elsevier Ltd",
    	year = 2020,
    	title = "Enhancing semantic segmentation with detection priors and iterated graph cuts for robotics",
    	author = "Antonello, M. and Chiesurin, S. and Ghidoni, S.",
    	id = "11577_3333945"
    }
    
  2. Paolo Franceschi, Nicola Castaman, Stefano Ghidoni and Nicola Pedrocchi. Precise Robotic Manipulation of Bulky Components. IEEE ACCESS 8:222476–222485, 2020. DOI BibTeX

    @article{10.1109/ACCESS.2020.3043069,
    	pages = "222476--222485",
    	doi = "10.1109/ACCESS.2020.3043069",
    	volume = 8,
    	journal = "IEEE ACCESS",
    	year = 2020,
    	title = "Precise Robotic Manipulation of Bulky Components",
    	author = "Franceschi, Paolo and Castaman, Nicola and Ghidoni, Stefano and Pedrocchi, Nicola",
    	id = "11577_3360183"
    }
    
  3. Loris Roveda, Nicola Castaman, Paolo Franceschi, Stefano Ghidoni and Nicola Pedrocchi. A Control Framework Definition to Overcome Position/Interaction Dynamics Uncertainties in Force-Controlled Tasks. In Proceedings of 2020 IEEE International Conference on Robotics and Automation (ICRA). 2020, 6819–6825. DOI BibTeX

    @conference{10.1109/ICRA40945.2020.9197141,
    	pages = "6819--6825",
    	isbn = "978-1-7281-7395-5",
    	doi = "10.1109/ICRA40945.2020.9197141",
    	booktitle = "Proceedings of 2020 IEEE International Conference on Robotics and Automation (ICRA)",
    	year = 2020,
    	title = "A Control Framework Definition to Overcome Position/Interaction Dynamics Uncertainties in Force-Controlled Tasks",
    	author = "Roveda, Loris and Castaman, Nicola and Franceschi, Paolo and Ghidoni, Stefano and Pedrocchi, Nicola",
    	id = "11577_3342003"
    }
    
  4. L Nanni, A Lumini, S Ghidoni and G Maguolo. Stochastic selection of activation layers for convolutional neural networks. SENSORS 20:1–15, 2020. DOI BibTeX

    @article{10.3390/s20061626,
    	pages = "1--15",
    	doi = "10.3390/s20061626",
    	keywords = "Activation functions; Convolutional Neural Networks; Ensemble of classifiers; Image classification; Skin detection",
    	abstract = "In recent years, the field of deep learning has achieved considerable success in pattern recognition, image segmentation, and many other classification fields. There are many studies and practical applications of deep learning on images, video, or text classification. Activation functions play a crucial role in discriminative capabilities of the deep neural networks and the design of new “static” or “dynamic” activation functions is an active area of research. The main difference between “static” and “dynamic” functions is that the first class of activations considers all the neurons and layers as identical, while the second class learns parameters of the activation function independently for each layer or even each neuron. Although the “dynamic” activation functions perform better in some applications, the increased number of trainable parameters requires more computational time and can lead to overfitting. In this work, we propose a mixture of “static” and “dynamic” activation functions, which are stochastically selected at each layer. Our idea for model design is based on a method for changing some layers along the lines of different functional blocks of the best performing CNN models, with the aim of designing new models to be used as stand-alone networks or as a component of an ensemble. We propose to replace each activation layer of a CNN (usually a ReLU layer) by a different activation function stochastically drawn from a set of activation functions: in this way, the resulting CNN has a different set of activation function layers. The code developed for this work will be available at https://github.com/LorisNanni.",
    	volume = 20,
    	journal = "SENSORS",
    	publisher = "MDPI AG",
    	year = 2020,
    	title = "Stochastic selection of activation layers for convolutional neural networks",
    	author = "Nanni, L. and Lumini, A. and Ghidoni, S. and Maguolo, G.",
    	id = "11577_3334022"
    }
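
    To make the idea above concrete, here is a purely illustrative PyTorch sketch (not the authors' released code, which is linked in the abstract): every ReLU layer of a stock CNN is replaced in place by an activation drawn at random from a small candidate pool, and several such variants can then be combined into an ensemble. The candidate pool, the base model and the seeds are arbitrary assumptions.

        # Hedged sketch only: stochastic replacement of ReLU layers with
        # randomly drawn activation functions, in the spirit of the entry above.
        import random

        import torch.nn as nn
        import torchvision.models as models

        ACTIVATION_POOL = [nn.ReLU, nn.LeakyReLU, nn.ELU, nn.SiLU, nn.GELU]  # assumed pool

        def randomize_activations(model: nn.Module, rng: random.Random) -> nn.Module:
            """Recursively replace every nn.ReLU with a randomly drawn activation."""
            for name, child in model.named_children():
                if isinstance(child, nn.ReLU):
                    setattr(model, name, rng.choice(ACTIVATION_POOL)())
                else:
                    randomize_activations(child, rng)
            return model

        # Build a small ensemble of ResNet-18 variants differing only in their
        # (stochastically selected) activation layers.
        ensemble = [randomize_activations(models.resnet18(num_classes=10), random.Random(s))
                    for s in range(3)]

    Training each variant separately and averaging their predictions would then give the kind of ensemble the abstract discusses.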
    
  5. K Koide and E Menegatti. Non-overlapping RGB-D camera network calibration with monocular visual odometry. In IEEE International Conference on Intelligent Robots and Systems. 2020, 9005–9011. DOI BibTeX

    @conference{10.1109/IROS45743.2020.9340825,
    	pages = "9005--9011",
    	isbn = "978-1-7281-6212-6",
    	doi = "10.1109/IROS45743.2020.9340825",
    	abstract = "This paper describes a calibration method for RGB-D camera networks consisting of not only static overlapping, but also dynamic and non-overlapping cameras. The proposed method consists of two steps: online visual odometry-based calibration and depth image-based calibration refinement. It first estimates the transformations between overlapping cameras using fiducial tags, and bridges non-overlapping camera views through visual odometry that runs on a dynamic monocular camera. Parameters such as poses of the static cameras and tags, as well as dynamic camera trajectory, are estimated in the form of the pose graph-based online landmark SLAM. Then, depth-based ICP and floor constraints are added to the pose graph to compensate for the visual odometry error and refine the calibration result. The proposed method is validated through evaluation in simulated and real environments, and a person tracking experiment is conducted to demonstrate the data integration of static and dynamic cameras.",
    	booktitle = "IEEE International Conference on Intelligent Robots and Systems",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2020,
    	title = "Non-overlapping RGB-D camera network calibration with monocular visual odometry",
    	author = "Koide, K. and Menegatti, E.",
    	id = "11577_3389558"
    }
    
  6. Y Zhao, T Birdal, J E Lenssen, E Menegatti, L Guibas and F Tombari. Quaternion Equivariant Capsule Networks for 3D Point Clouds. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 12346. 2020, 1–19. DOI BibTeX

    @conference{10.1007/978-3-030-58452-8_1,
    	pages = "1--19",
    	isbn = "978-3-030-58451-1",
    	doi = "10.1007/978-3-030-58452-8_1",
    	keywords = "3D; Disentanglement; Equivariance; Quaternion; Rotation",
    	abstract = "We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points. The operator receives a sparse set of local reference frames, computed from an input point cloud and establishes end-to-end transformation equivariance through a novel dynamic routing procedure on quaternions. Further, we theoretically connect dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving iterative re-weighted least squares (IRLS) problems with provable convergence properties. It is shown that such group dynamic routing can be interpreted as robust IRLS rotation averaging on capsule votes, where information is routed based on the final inlier scores. Based on our operator, we build a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space. Our architecture allows joint object classification and orientation estimation without explicit supervision of rotations. We validate our algorithm empirically on common benchmark datasets.",
    	booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
    	volume = 12346,
    	publisher = "Springer Science and Business Media Deutschland GmbH",
    	year = 2020,
    	title = "Quaternion Equivariant Capsule Networks for 3D Point Clouds",
    	author = "Zhao, Y. and Birdal, T. and Lenssen, J. E. and Menegatti, E. and Guibas, L. and Tombari, F.",
    	id = "11577_3389570"
    }
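
    The abstract above relates dynamic routing between capsules to the Weiszfeld algorithm, the classical iteratively re-weighted least squares (IRLS) scheme for the geometric median. A minimal numpy sketch of that generic textbook scheme (not the paper's quaternion rotation-averaging routing) is:

        # Weiszfeld iteration: IRLS for the geometric median of a point set.
        import numpy as np

        def weiszfeld(points: np.ndarray, n_iters: int = 100, eps: float = 1e-9) -> np.ndarray:
            """Return the geometric median of an (N, D) array of points."""
            estimate = points.mean(axis=0)                    # start from the centroid
            for _ in range(n_iters):
                dists = np.linalg.norm(points - estimate, axis=1)
                weights = 1.0 / np.maximum(dists, eps)        # IRLS weights: nearby "inlier" points count more
                estimate = (weights[:, None] * points).sum(axis=0) / weights.sum()
            return estimate

        votes = np.random.default_rng(0).normal(size=(32, 3))  # e.g. capsule "votes"
        print(weiszfeld(votes))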
    
  7. F Just, O Ozen, S Tortora, V Klamroth-Marganska, R Riener and G Rauter. Human arm weight compensation in rehabilitation robotics: Efficacy of three distinct methods. JOURNAL OF NEUROENGINEERING AND REHABILITATION 17, 2020. URL, DOI BibTeX

    @article{10.1186/s12984-020-0644-3,
    	doi = "10.1186/s12984-020-0644-3",
    	url = "https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-020-0644-3#citeas",
    	keywords = "Arm weight compensation; EMG; Rehabilitation robotics; Stroke; Workspace assessment",
    	abstract = "Background: Arm weight compensation with rehabilitation robots for stroke patients has been successfully used to increase the active range of motion and reduce the effects of pathological muscle synergies. However, the differences in structure, performance, and control algorithms among the existing robotic platforms make it hard to effectively assess and compare human arm weight relief. In this paper, we introduce criteria for ideal arm weight compensation, and furthermore, we propose and analyze three distinct arm weight compensation methods (Average, Full, Equilibrium) in the arm rehabilitation exoskeleton 'ARMin'. The effect of the best performing method was validated in chronic stroke subjects to increase the active range of motion in three dimensional space. Methods: All three methods are based on arm models that are generalizable for use in different robotic devices and allow individualized adaptation to the subject by model parameters. The first method Average uses anthropometric tables to determine subject-specific parameters. The parameters for the second method Full are estimated based on force sensor data in predefined resting poses. The third method Equilibrium estimates parameters by optimizing an equilibrium of force/torque equations in a predefined resting pose. The parameters for all three methods were first determined and optimized for temporal and spatial estimation sensitivity. Then, the three methods were compared in a randomized single-center study with respect to the remaining electromyography (EMG) activity of 31 healthy participants who performed five arm poses covering the full range of motion with the exoskeleton robot. The best method was chosen for feasibility tests with three stroke patients. In detail, the influence of arm weight compensation on the three dimensional workspace was assessed by measuring of the horizontal workspace at three different height levels in stroke patients. Results: All three arm weight compensation methods reduced the mean EMG activity of healthy subjects to at least 49% compared with the no compensation reference. The Equilibrium method outperformed the Average and the Full methods with a highly significant reduction in mean EMG activity by 19% and 28% respectively. However, upon direct comparison, each method has its own individual advantages such as in set-up time, cost, or required technology. The horizontal workspace assessment in poststroke patients with the Equilibrium method revealed potential workspace size-dependence of arm height, while weight compensation helped maximize the workspace as much as possible. Conclusion: Different arm weight compensation methods were developed according to initially defined criteria. The methods were then analyzed with respect to their sensitivity and required technology. In general, weight compensation performance improved with the level of technology, but increased cost and calibration efforts. This study reports a systematic way to analyze the efficacy of different weight compensation methods using EMG. Additionally, the feasibility of the best method, Equilibrium, was shown by testing with three stroke patients. In this test, a height dependence of the workspace size also seemed to be present, which further highlights the importance of patient-specific weight compensation, particularly for training at different arm heights. Trial registration: ClinicalTrials.gov,NCT02720341. Registered 25 March 2016",
    	volume = 17,
    	journal = "JOURNAL OF NEUROENGINEERING AND REHABILITATION",
    	publisher = "BioMed Central Ltd.",
    	year = 2020,
    	title = "Human arm weight compensation in rehabilitation robotics: Efficacy of three distinct methods",
    	author = "Just, F. and Ozen, O. and Tortora, S. and Klamroth-Marganska, V. and Riener, R. and Rauter, G.",
    	id = "11577_3342656"
    }
    
  8. S Tortora, S Ghidoni, C Chisari, S Micera and F Artoni. Deep learning-based BCI for gait decoding from EEG with LSTM recurrent neural network. JOURNAL OF NEURAL ENGINEERING 17:1–14, 2020. DOI BibTeX

    @article{10.1088/1741-2552/ab9842,
    	pages = "1--14",
    	doi = "10.1088/1741-2552/ab9842",
    	keywords = "brain-computer interface (BCI); deep learning; electroencephalography (EEG); locomotion; longshort term memory (LSTM); mobile brain/body imaging (MoBI)",
    	abstract = "Objective. Mobile Brain/Body Imaging (MoBI) frameworks allowed the research community to find evidence of cortical involvement at walking initiation and during locomotion. However, the decoding of gait patterns from brain signals remains an open challenge. The aim of this work is to propose and validate a deep learning model to decode gait phases from Electroenchephalography (EEG). Approach. A Long-Short Term Memory (LSTM) deep neural network has been trained to deal with time-dependent information within brain signals during locomotion. The EEG signals have been preprocessed by means of Artifacts Subspace Reconstruction (ASR) and Reliable Independent Component Analysis (RELICA) to ensure that classification performance was not affected by movement-related artifacts. Main results. The network was evaluated on the dataset of 11 healthy subjects walking on a treadmill. The proposed decoding approach shows a robust reconstruction (AUC > 90%) of gait patterns (i.e. swing and stance states) of both legs together, or of each leg independently. Significance. Our results support for the first time the use of a memory-based deep learning classifier to decode walking activity from non-invasive brain recordings. We suggest that this classifier, exploited in real time, can be a more effective input for devices restoring locomotion in impaired people.",
    	volume = 17,
    	journal = "JOURNAL OF NEURAL ENGINEERING",
    	publisher = "Institute of Physics Publishing",
    	year = 2020,
    	title = "Deep learning-based BCI for gait decoding from EEG with LSTM recurrent neural network",
    	author = "Tortora, S. and Ghidoni, S. and Chisari, C. and Micera, S. and Artoni, F.",
    	id = "11577_3351210"
    }
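
    As a purely illustrative sketch of a memory-based decoder of the kind described above, a minimal PyTorch LSTM classifier over EEG windows could look as follows; the channel count, window length, hidden size and the two gait classes (swing/stance) are assumptions, not the paper's exact architecture, and the preprocessing steps (ASR and RELICA) are omitted.

        # Hedged sketch: LSTM classifier mapping an EEG window to a gait phase.
        import torch
        import torch.nn as nn

        class GaitLSTM(nn.Module):
            def __init__(self, n_channels: int = 32, hidden: int = 64, n_classes: int = 2):
                super().__init__()
                self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)      # e.g. swing vs. stance

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, time, channels); classify from the last hidden state
                _, (h_n, _) = self.lstm(x)
                return self.head(h_n[-1])

        model = GaitLSTM()
        dummy_eeg = torch.randn(8, 200, 32)                   # 8 windows, 200 samples, 32 channels
        logits = model(dummy_eeg)                             # (8, 2) class scores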
    
  9. Matteo Terreran, Elia Bonetto and Stefano Ghidoni. Enhancing Deep Semantic Segmentation of RGB-D Data with Entangled Forests. In 2020 25th IEEE International Conference on Pattern Recognition (ICPR). 2020, 4634–4641. DOI BibTeX

    @conference{10.1109/ICPR48806.2021.9412787,
    	pages = "4634--4641",
    	isbn = "978-1-7281-8809-6",
    	doi = "10.1109/ICPR48806.2021.9412787",
    	keywords = "semantic segmentation, scene understanding, deep learning",
    	abstract = "Semantic segmentation is a problem which is getting more and more attention in the computer vision community. Nowadays, deep learning methods represent the state of the art to solve this problem, and the trend is to use deeper networks to get higher performance. The drawback with such models is a higher computational cost, which makes it difficult to integrate them on mobile robot platforms. In this work we want to explore how to obtain lighter deep learning models without compromising performance. To do so we will consider the features used in the 3D Entangled Forests algorithm and we will study the best strategies to integrate these within FuseNet deep network. Such new features allow us to shrink the network size without loosing performance, obtaining hence a lighter model which achieves state-of-the-art performance on the semantic segmentation task and represents an interesting alternative for mobile robotics applications, where computational power and energy are limited.",
    	booktitle = "2020 25th IEEE International Conference on Pattern Recognition (ICPR)",
    	year = 2020,
    	title = "Enhancing Deep Semantic Segmentation of RGB-D Data with Entangled Forests",
    	author = "Terreran, Matteo and Bonetto, Elia and Ghidoni, Stefano",
    	id = "11577_3373551"
    }
    
  10. Matteo Terreran, Edoardo Lamon, Stefano Michieletto and Enrico Pagello. Low-cost Scalable People Tracking System for Human-Robot Collaboration in Industrial Environment. PROCEDIA MANUFACTURING 51:116–124, 2020. URL, DOI BibTeX

    @article{10.1016/j.promfg.2020.10.018,
    	pages = "116--124",
    	doi = "https://doi.org/10.1016/j.promfg.2020.10.018",
    	url = "http://www.sciencedirect.com/science/article/pii/S2351978920318734",
    	keywords = "People tracking, human-robot collaboration, low-cost industrial",
    	abstract = "Human-robot collaboration is one of the key elements in the Industry 4.0 revolution, aiming to a close and direct collaboration between robots and human workers to reach higher productivity and improved ergonomics. The first step toward such kind of collaboration in the industrial context is the removal of physical safety barriers usually surrounding standard robotic cells, so that human workers can approach and directly collaborate with robots. Anyway, human safety must be granted avoiding possible collisions with the robot. In this work, we propose the use of a people tracking algorithm to monitor people moving around a robot manipulator and recognize when a person is too close to the robot while performing a task. The system is implemented by a camera network system positioned around the robot workspace, and thoroughly evaluated in different industry-like settings in terms of both tracking accuracy and detection delay.",
    	volume = 51,
    	journal = "PROCEDIA MANUFACTURING",
    	year = 2020,
    	title = "Low-cost Scalable People Tracking System for Human-Robot Collaboration in Industrial Environment",
    	author = "Terreran, Matteo and Lamon, Edoardo and Michieletto, Stefano and Pagello, Enrico",
    	id = "11577_3358830"
    }
    
  11. G Giruzzi, M Yoshida, N Aiba, J F Artaud, J Ayllon-Guerola, O Beeke, A Bierwage, T Bolzonella, M Bonotto, C Boulbe, M Chernyshova, S Coda, R Coelho, D Corona, N Cruz, S Davis, C Day, G De Tommasi, M Dibon, D Douai, D Farina, A Fassina, B Faugeras, L Figini, M Fukumoto, S Futatani, K Galazka, J Garcia, M Garcia-Muñoz, L Garzotti, L Giudicotti, N Hayashi, M Honda, K Hoshino, A Iantchenko, S Ide, S Inoue, A Isayama, E Joffrin, Y Kamada, K Kamiya, M Kashiwagi, H Kawashima, T Kobayashi, A Kojima, T Kurki-Suonio, P Lang, Ph Lauber, E Luna, G Marchiori, G Matsunaga, A Matsuyama, M Mattei, S Mazzi, A Mele, Y Miyata, S Moriyama, JOSE' FRANCISCO Morales, A Moro, T Nakano, R Neu, S Nowak, Fp Orsitto, V Ostuni, N Oyama, S Paméla, R Pasqualotto, B Pégourié, E Perelli, L Pigatto, C Piron, A Pironti, P Platania, B Ploeckl, Daniel RICCI PACIFICI, M Romanelli, G Rubino, S Sakurai, K Särkimäki, M Scannapiego, K Shinohara, J Shiraishi, S Soare, C Sozzi, T Suzuki, Y Suzuki, T Szepesi, M Takechi, K Tanaka, H Tojo, M Turnyanskiy, H Urano, M Valisa, M Vallar, J Varje, J Vega, F Villone, T Wakatsuki, T Wauters, M Wischmeier, S Yamoto and R Zagórski. Advances in the physics studies for the JT-60SA tokamak exploitation and research plan. PLASMA PHYSICS AND CONTROLLED FUSION 62, 2020. URL, DOI BibTeX

    @article{10.1088/1361-6587/ab4771,
    	doi = "10.1088/1361-6587/ab4771",
    	url = "https://iopscience.iop.org/article/10.1088/1361-6587/ab4771",
    	volume = 62,
    	journal = "PLASMA PHYSICS AND CONTROLLED FUSION",
    	year = 2020,
    	title = "Advances in the physics studies for the JT-60SA tokamak exploitation and research plan",
    	author = "Giruzzi, G and Yoshida, M and Aiba, N and Artaud, J F and Ayllon-Guerola, J and Beeke, O and Bierwage, A and Bolzonella, T and Bonotto, M and Boulbe, C and Chernyshova, M and Coda, S and Coelho, R and Corona, D and Cruz, N and Davis, S and Day, C and De Tommasi, G and Dibon, M and Douai, D and Farina, D and Fassina, A and Faugeras, B and Figini, L and Fukumoto, M and Futatani, S and Galazka, K and Garcia, J and Garcia-Muñoz, M and Garzotti, L and Giudicotti, L and Hayashi, N and Honda, M and Hoshino, K and Iantchenko, A and Ide, S and Inoue, S and Isayama, A and Joffrin, E and Kamada, Y and Kamiya, K and Kashiwagi, M and Kawashima, H and Kobayashi, T and Kojima, A and Kurki-Suonio, T and Lang, P and Lauber, Ph and de la Luna, E and Marchiori, G and Matsunaga, G and Matsuyama, A and Mattei, M and Mazzi, S and Mele, A and Miyata, Y and Moriyama, S and Morales, JOSE' FRANCISCO and Moro, A and Nakano, T and Neu, R and Nowak, S and Orsitto, Fp and Ostuni, V and Oyama, N and Paméla, S and Pasqualotto, R and Pégourié, B and Perelli, E and Pigatto, L and Piron, C and Pironti, A and Platania, P and Ploeckl, B and RICCI PACIFICI, Daniel and Romanelli, M and Rubino, G and Sakurai, S and Särkimäki, K and Scannapiego, M and Shinohara, K and Shiraishi, J and Soare, S and Sozzi, C and Suzuki, T and Suzuki, Y and Szepesi, T and Takechi, M and Tanaka, K and Tojo, H and Turnyanskiy, M and Urano, H and Valisa, M and Vallar, M and Varje, J and Vega, J and Villone, F and Wakatsuki, T and Wauters, T and Wischmeier, M and Yamoto, S and Zagórski, R",
    	id = "11577_3324033"
    }
    
  12. Matteo Bonotto, Fabio Villone, Yueqiang Liu and Paolo Bettini. Matrix Based Rational Interpolation for New Coupling Scheme Between MHD and Eddy Current Numerical Models. IEEE TRANSACTIONS ON MAGNETICS, pages 1–1, 2020. DOI BibTeX

    @article{10.1109/TMAG.2019.2954648,
    	pages = "1--1",
    	doi = "10.1109/TMAG.2019.2954648",
    	journal = "IEEE TRANSACTIONS ON MAGNETICS",
    	year = 2020,
    	title = "Matrix Based Rational Interpolation for New Coupling Scheme Between MHD and Eddy Current Numerical Models",
    	author = "Bonotto, Matteo and Villone, Fabio and Liu, Yueqiang and Bettini, Paolo",
    	id = "11577_3325022"
    }
    
  13. Silvia Di Battista, Monica Pivetti, Michele Moro and Emanuele Menegatti. Teachers’ opinions towards Educational Robotics for special needs students: an exploratory Italian study. ROBOTICS 9 (3), 2020. URL, DOI BibTeX

    @article{10.3390/robotics9030072,
    	doi = "10.3390/robotics9030072",
    	url = "https://www.mdpi.com/2218-6581/9/3/72/htm",
    	keywords = "educational robotics, special needs students, educational contexts, learning support teachers, attention deficit hyperactivity disorder—ADHD, autism spectrum disorder—ASD, dyspraxia, Down syndrome—DS",
    	abstract = "Research reveals that robotics can be a valuable tool for school students with special needs (SNs). However, to our knowledge, empirical studies on teachers’ attitudes towards educational robotics for SNs students have been very limited and, in general, do not account for the great variability in the existent difficulties of school-aged children. Our aim is to fill this research gap. This post-test empirical study assessed Italian pre-service and in-service learning support teachers’ attitudes towards the application of Educational Robotics—ER with their students with SNs at the end of a 12-h training course. The results generally showed that most teachers perceived ER as a powerful tool for children with numerous SNs, particularly for Attention Deficit Hyperactivity Disorder—ADHD, Autism Spectrum Disorder—ASD, and Dyspraxia. Looking at the differences depending on the school level, kindergarten teachers perceived that ER is mostly helpful for ASD, ADHD, Down Syndrome—DS as well as with psychological or emotional distress or the needs of foreign students. For primary school teachers, ER was mostly helpful with ADHD, Dyspraxia and ASD. For both junior secondary school teachers and high school teachers, ER was mostly helpful with ASD, Dyspraxia, and ADHD.",
    	volume = "9 (3)",
    	journal = "ROBOTICS",
    	year = 2020,
    	title = "Teachers’ opinions towards Educational Robotics for special needs students: an exploratory Italian study",
    	author = "Di Battista, Silvia and Pivetti, Monica and Moro, Michele and Menegatti, Emanuele",
    	id = "11577_3357753"
    }
    
  14. Kenji Koide, Jun Miura and Emanuele Menegatti. Monocular person tracking and identification with on-line deep feature selection for person following robots. ROBOTICS AND AUTONOMOUS SYSTEMS 124:1–11, 2020. DOI BibTeX

    @article{10.1016/j.robot.2019.103348,
    	pages = "1--11",
    	doi = "10.1016/j.robot.2019.103348",
    	keywords = "Person trackingPerson identificationMobile robot",
    	abstract = "This paper presents a new person tracking and identification framework based on solely a monocular camera. In this framework, we first track persons in the robot coordinate space using Unscented Kalman filter with the ground plane information and human height estimation. Then, we identify the target person to be followed with the combination of Convolutional Channel Features (CCF) and online boosting. It allows us to take advantage of deep neural network-based feature representation while adapting the person classifier to a specific target person depending on the circumstances. The entire system can be run on a recent embedded computation board with a GPU (NVIDIA Jetson TX2), and it can easily be reproduced and reused on a new mobile robot platform. Through evaluations, we validated that the proposed method outperforms existing person identification methods for mobile robots. We applied the proposed method to a real person following robot, and it has been shown that CCF-based person identification realizes robust person following in both indoor and outdoor environments.",
    	volume = 124,
    	journal = "ROBOTICS AND AUTONOMOUS SYSTEMS",
    	publisher = "Elsevier",
    	year = 2020,
    	title = "Monocular person tracking and identification with on-line deep feature selection for person following robots",
    	author = "Koide, Kenji and Miura, Jun and Menegatti, Emanuele",
    	id = "11577_3324498"
    }
    
  15. Cristina Forest, Gloria Beraldo, Roberto Mancin, Emanuele Menegatti and Agnese Suppiej. Maturational aspects of visual P300 in children: a research window for pediatric Brain Computer Interface (BCI). In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). 2020, 451–455. URL, DOI BibTeX

    @conference{10.1109/RO-MAN47096.2020.9223586,
    	pages = "451--455",
    	isbn = "978-1-7281-6075-7",
    	doi = "10.1109/RO-MAN47096.2020.9223586",
    	url = "https://ieeexplore.ieee.org/document/9223586/authors#authors",
    	abstract = "The P300 is an endogenous event-related potential (ERP) involved in several cognitive processes, apparently preserved between adults and children. In the pediatric age it shows different age-related characteristics. Its application in Brain Computer Interface (BCI) pediatric research remains to date still unclear. The aim of this paper is to give an overview of the maturational aspects of the visual P300, that could be used for developing BCI paradigms in the pediatric age.",
    	booktitle = "Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)",
    	year = 2020,
    	title = "Maturational aspects of visual P300 in children: a research window for pediatric Brain Computer Interface (BCI)*",
    	author = "Forest, Cristina and Beraldo, Gloria and Mancin, Roberto and Menegatti, Emanuele and Suppiej, Agnese",
    	id = "11577_3354665"
    }
    
  16. G Nicola, L Tagliapietra, E Tosello, N Navarin, S Ghidoni and E Menegatti. Robotic Object Sorting via Deep Reinforcement Learning: A generalized approach. In 29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020. 2020, 1266–1273. DOI BibTeX

    @conference{10.1109/RO-MAN47096.2020.9223484,
    	pages = "1266--1273",
    	isbn = "978-1-7281-6075-7",
    	doi = "10.1109/RO-MAN47096.2020.9223484",
    	abstract = "This work proposes a general formulation for the Object Sorting problem, suitable to describe any non-deterministic environment characterized by friendly and adversarial interference. Such an approach, coupled with a Deep Reinforcement Learning algorithm, allows training policies to solve different sorting tasks without adjusting the architecture or modifying the learning method. Briefly, the environment is subdivided into a clutter, where objects are freely located, and a set of clusters, where objects should be placed according to predefined ordering and classification rules. A 3D grid discretizes such environment: the properties of an object within a cell depict its state. Such attributes include object category and order. A Markov Decision Process formulates the problem: at each time step, the state of the cells fully defines the environment's one. Users can custom-define object classes, ordering priorities, and failure rules. The latter by assigning a non-uniform risk probability to each cell. Performed experiments successfully trained and validated a Deep Reinforcement Learning model to solve several sorting tasks while minimizing the number of moves and failure probability. Obtained results demonstrate the capability of the system to handle non-deterministic events, like failures, and unpredictable external disturbances, like human user interventions.",
    	booktitle = "29th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2020",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2020,
    	title = "Robotic Object Sorting via Deep Reinforcement Learning: A generalized approach",
    	author = "Nicola, G. and Tagliapietra, L. and Tosello, E. and Navarin, N. and Ghidoni, S. and Menegatti, E.",
    	id = "11577_3358161"
    }
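
    The grid-based state representation outlined in the abstract above can be pictured with a small, hypothetical data structure: each cell of a 3D grid carries an object category, an ordering index and a failure-risk probability, and the stacked attributes form the observation fed to the policy network. The field names, grid size and tensor layout are illustrative assumptions, not the paper's actual encoding.

        # Hedged sketch of a per-cell state encoding for the sorting MDP.
        import numpy as np

        class SortingGridState:
            def __init__(self, shape=(4, 4, 2)):
                self.category = np.full(shape, -1, dtype=int)   # -1 = empty cell
                self.order = np.zeros(shape, dtype=int)         # placement priority
                self.risk = np.zeros(shape, dtype=float)        # per-cell failure probability

            def as_tensor(self) -> np.ndarray:
                """Stack the per-cell attributes into one observation for the policy."""
                return np.stack([self.category, self.order, self.risk]).astype(np.float32)

        state = SortingGridState()
        state.category[0, 0, 0] = 2                             # an object of class 2 in the clutter
        state.risk[:, :, 1] = 0.05                              # a riskier cluster layer
        print(state.as_tensor().shape)                          # (3, 4, 4, 2)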
    
  17. Paolo Mistretta, Cecilia Marchesini, Andrea Volpini, Luca Tagliapietra, Tommaso Sciarra, Aldo Lazich, Salvatore Forte, Mauro De Matteis, Emanuele Menegatti and Nicola Petrone. Collection of Kinematic and Kinetic Data of Young and Adult, Male and Female Subjects Performing Periodic and Transient Gait Tasks for Gait Pattern Recognition. PROCEEDINGS 49, 2020. URL, DOI BibTeX

    @article{10.3390/proceedings2020049006,
    	doi = "10.3390/proceedings2020049006",
    	url = "http://doi.org/10.5281/zenodo.3628229",
    	keywords = "database; populations; multiple gait-task; classifier; correlation; virtual IMU",
    	abstract = "The aim of the study was to develop a database of biomechanical data for multiple gait tasks. This database will be used to create a real-time gait pattern classifier that will be implemented in a new-generation active knee prosthesis. With this intent, we collected kinematic and kinetic data of 40 subjects performing 16 gait tasks, categorized as periodic and transient motions. We analyzed four distinct sub-populations, differentiated by age and gender. As the classifier will be based also on inertial data, we chose to synthesize these signals within the motion capture environment. To assess the effects of gender and age we performed a correlation analysis on the signals used as input of the classifier. The results obtained indicate that there is no need to differentiate into four distinct classes for the development of the classifier. Sample data of the dataset are made publicly available.",
    	volume = 49,
    	journal = "PROCEEDINGS",
    	publisher = "Proceedings of The 13th Conference of the International Sports Engineering Association",
    	year = 2020,
    	title = "Collection of Kinematic and Kinetic Data of Young and Adult, Male and Female Subjects Performing Periodic and Transient Gait Tasks for Gait Pattern Recognition",
    	author = "Mistretta, Paolo and Marchesini, Cecilia and Volpini, Andrea and Tagliapietra, Luca and Sciarra, Tommaso and Lazich, Aldo and Forte, Salvatore and Matteis, Mauro De and Menegatti, Emanuele and Petrone, Nicola",
    	id = "11577_3347333"
    }
    
  18. Nicola Castaman, Elisa Tosello, Morris Antonello, Nicola Bagarello, Silvia Gandin, Marco Carraro, Matteo Munaro, Roberto Bortoletto, Stefano Ghidoni, Emanuele Menegatti and Enrico Pagello. RUR53: an unmanned ground vehicle for navigation, recognition, and manipulation. ADVANCED ROBOTICS 35:1–18, 2020. URL, DOI BibTeX

    @article{10.1080/01691864.2020.1833752,
    	pages = "1--18",
    	doi = "10.1080/01691864.2020.1833752",
    	url = "https://www.tandfonline.com/doi/full/10.1080/01691864.2020.1833752",
    	abstract = "This paper proposes RUR53: an Unmanned Ground Vehicle able to navigate through, identify, and reach areas of interest. There, it can recognize, localize, and manipulate work tools to perform both indoor and outdoor complex tasks. Indeed, a wide range of sensors composes the robot and enables it to perceive vast workspaces, reach distant targets, and face the uncertainties of the real world. Precise object detection is also guaranteed, essential to manipulate objects of different shapes and materials. Moreover, a customized 3-finger gripper makes the gripping mode suitable for any lightweight object. Two modalities are proposed: autonomous and teleoperated, letting both unskilled and skilled human operators easily adapt the system to complete personalized tasks. The paper exhaustively describes RUR53 architecture and demonstrates its good performance while executing both indoor and outdoor navigation and manipulation tasks. A specific case study is described where the proposed modular architecture allows to easily switch to a semi-teleoperated mode: the 2017 Mohamed Bin Zayed International Robotics Challenge, where our team ranked third in the Grand Challenge in collaboration with the Czech Technical University in Prague, the University of Pennsylvania, and the University of Lincoln (UK).",
    	volume = 35,
    	journal = "ADVANCED ROBOTICS",
    	publisher = "TAYLOR & FRANCIS",
    	year = 2020,
    	title = "RUR53: an unmanned ground vehicle for navigation, recognition, and manipulation",
    	author = "Castaman, Nicola and Tosello, Elisa and Antonello, Morris and Bagarello, Nicola and Gandin, Silvia and Carraro, Marco and Munaro, Matteo and Bortoletto, Roberto and Ghidoni, Stefano and Menegatti, Emanuele and Pagello, Enrico",
    	id = "11577_3356080"
    }
    
  19. Francesca Agatolio, Calkin Suero Montero, Michele Moro, Andrea Cavazzini and Emanuele Menegatti. Badges Are Back! - Fostering Self-assessment During Personalised Learning in Making and Digital Fabrication. In Educational Robotics in the Context of the Maker Movement 946. 2020, 83–91. URL, DOI BibTeX

    @conference{10.1007/978-3-030-18141-3,
    	pages = "83--91",
    	isbn = "978-3-030-18140-6",
    	doi = "10.1007/978-3-030-18141-3",
    	url = "https://link.springer.com/chapter/10.1007/978-3-030-18141-3_7",
    	keywords = "Technology innovation, Badges in education, Digital fabrication and making",
    	abstract = "Badges have been used in education and leisure activities to accredit knowledge or learning for a long time. Digital badges in particular have been reported to become a gamification element fostering students’ engagement and motivation. In our work, we explore the use of digital badges during digital fabrication and making activities as a way to foster personalised learning paths within the eCraft2Learn learning ecosystem. In this paper, we describe the development of the badge system implemented in the eCraft2Learn digital platform. The system supports students’ self-evaluation and promotes a communicative process between students and teachers for badge allocation. Visual representations of the badge system are provided here alongside the underlying pedagogical justifications behind the system development.",
    	booktitle = "Educational Robotics in the Context of the Maker Movement",
    	volume = 946,
    	year = 2020,
    	title = "Badges Are Back! - Fostering Self-assessment During Personalised Learning in Making and Digital Fabrication",
    	author = "Agatolio, Francesca and Suero Montero, Calkin and Moro, Michele and Cavazzini, Andrea and Menegatti, Emanuele",
    	id = "11577_3361579"
    }
    
  20. Matteo Terreran, Andrea G Tramontano, Jacobus C Lock, Stefano Ghidoni and Nicola Bellotto. Real-time Object Detection using Deep Learning for helping People with Visual Impairments. In IEEE International Conference on Image Processing, Applications and Systems, IPAS. 2020, 89–95. DOI BibTeX

    @conference{10.1109/IPAS50080.2020.9334933,
    	pages = "89--95",
    	isbn = "978-1-7281-7575-1",
    	doi = "10.1109/IPAS50080.2020.9334933",
    	keywords = "Object detection, real-time, electronic travel aid",
    	abstract = "Object detection plays a crucial role in the development of Electronic Travel Aids (ETAs), capable to guide a person with visual impairments towards a target object in an unknown indoor environment. In such a scenario, the object detector runs on a mobile device (e.g. smartphone) and needs to be fast, accurate, and, most importantly, lightweight. Nowadays, Deep Neural Networks (DNN) have become the state-of-the-art solution for object detection tasks, with many works improving speed and accuracy by proposing new architectures or extending existing ones. A common strategy is to use deeper networks to get higher performance, but that leads to a higher computational cost which makes it impractical to integrate them on mobile devices with limited computational power. In this work we compare different object detectors to find a suitable candidate to be implemented on ETAs, focusing on lightweight models capable of working in real-time on mobile devices with a good accuracy. In particular, we select two models: SSD Lite with Mobilenet V2 and Tiny-DSOD. Both models have been tested on the popular OpenImage dataset and a new dataset, named L-CAS Office dataset, collected to further test models’ performance and robustness in a real scenario inspired by the actual perception challenges of a user with visual impairments.",
    	booktitle = "IEEE International Conference on Image Processing, Applications and Systems, IPAS",
    	year = 2020,
    	title = "Real-time Object Detection using Deep Learning for helping People with Visual Impairments",
    	author = "Terreran, Matteo and Tramontano, Andrea G. and Lock, Jacobus C. and Ghidoni, Stefano and Bellotto, Nicola",
    	id = "11577_3373517"
    }
    
  21. Stefano Tortora, Luca Tonin, Carmelo Chisari, Silvestro Micera, Emanuele Menegatti and Fiorenzo Artoni. Hybrid Human-Machine Interface for Gait Decoding Through Bayesian Fusion of EEG and EMG Classifiers. FRONTIERS IN NEUROROBOTICS 14, 2020. URL, DOI BibTeX

    @article{10.3389/fnbot.2020.582728,
    	doi = "10.3389/fnbot.2020.582728",
    	url = "https://www.frontiersin.org/articles/10.3389/fnbot.2020.582728/full",
    	keywords = "EEG, EMG, hybrid BCI, Bayesian inference, LSTM network, mobile brain/body imaging, locomotion",
    	abstract = "Despite the advances in the field of brain computer interfaces (BCI), the use of the sole electroencephalography (EEG) signal to control walking rehabilitation devices is currently not viable in clinical settings, due to its unreliability. Hybrid interfaces (hHMIs) represent a very recent solution to enhance the performance of single-signal approaches. These are classification approaches that combine multiple human-machine interfaces, normally including at least one BCI with other biosignals, such as the electromyography (EMG). However, their use for the decoding of gait activity is still limited. In this work, we propose and evaluate a hybrid human-machine interface (hHMI) to decode walking phases of both legs from the Bayesian fusion of EEG and EMG signals. The proposed hHMI significantly outperforms its single-signal counterparts, by providing high and stable performance even when the reliability of the muscular activity is compromised temporarily (e.g., fatigue) or permanently (e.g., weakness). Indeed, the hybrid approach shows a smooth degradation of classification performance after temporary EMG alteration, with more than 75% of accuracy at 30% of EMG amplitude, with respect to the EMG classifier whose performance decreases below 60% of accuracy. Moreover, the fusion of EEG and EMG information helps keeping a stable recognition rate of each gait phase of more than 80% independently on the permanent level of EMG degradation. From our study and findings from the literature, we suggest that the use of hybrid interfaces may be the key to enhance the usability of technologies restoring or assisting the locomotion on a wider population of patients in clinical applications and outside the laboratory environment.",
    	volume = 14,
    	journal = "FRONTIERS IN NEUROROBOTICS",
    	publisher = "Ganesh R. Naik",
    	year = 2020,
    	title = "Hybrid Human-Machine Interface for Gait Decoding Through Bayesian Fusion of EEG and EMG Classifiers",
    	author = "Tortora, Stefano and Tonin, Luca and Chisari, Carmelo and Micera, Silvestro and Menegatti, Emanuele and Artoni, Fiorenzo",
    	id = "11577_3359191"
    }
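
    A minimal numpy sketch of the kind of Bayesian fusion described above, assuming conditionally independent EEG and EMG classifiers and a known class prior (the paper's exact fusion rule and calibration may differ):

        # Naive-Bayes style fusion of two per-class posteriors.
        import numpy as np

        def fuse_posteriors(p_eeg: np.ndarray, p_emg: np.ndarray, prior: np.ndarray) -> np.ndarray:
            """Combine class posteriors from two classifiers into one posterior."""
            joint = p_eeg * p_emg / prior          # p(c|eeg) p(c|emg) / p(c) is proportional to p(c|eeg,emg)
            return joint / joint.sum()

        prior = np.array([0.5, 0.5])               # e.g. swing / stance
        p_eeg = np.array([0.65, 0.35])             # EEG classifier output
        p_emg = np.array([0.55, 0.45])             # EMG classifier output (e.g. under fatigue)
        print(fuse_posteriors(p_eeg, p_emg, prior))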
    
  22. Gloria Beraldo, Agnese Suppiej, Cristina Forest, Luca Tonin and Emanuele Menegatti. Brain-Computer Interface for Children: State-Of-The-Art and Challenges. In Conference Proceedings - 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020). 2020, 24–29. URL, DOI BibTeX

    @conference{10.1109/SMC42975.2020.9283485,
    	pages = "24--29",
    	isbn = "978-172818526-2",
    	doi = "10.1109/SMC42975.2020.9283485",
    	url = "http://conf.papercept.net/proceedings/SMC20/1155.pdf",
    	abstract = "This work proposes an overview of the recent applications of brain-computer interface (BCI) technology for pediatric populations. Current BCIs have demonstrated the possibility to provide an alternative communication and interaction channel for people suffering from severe motor disabilities. However, to date research has been predominantly conducted in adults, only a few systems have been applied to pediatric population. A survey was carried out to show the ongoing trends of using BCI systems with children. We discuss three areas of applications where BCI might be helpful to children — “Communication & Control”, “BCI Gaming for Neurofeedback Training” and “Rehabilitation” — highlighting the current limitations and the possible future challenges.",
    	booktitle = "Conference Proceedings - 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020)",
    	year = 2020,
    	title = "Brain-Computer Interface for Children: State-Of-The-Art and Challenges",
    	author = "Beraldo, Gloria and Suppiej, Agnese and Forest, Cristina and Tonin, Luca and Menegatti, Emanuele",
    	id = "11577_3359959"
    }
    
  23. Stefano Tortora, Fiorenzo Artoni, Luca Tonin, Carmelo Chisari, Emanuele Menegatti and Silvestro Micera. Discrimination of Walking and Standing from Entropy of EEG Signals and Common Spatial Patterns. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 2020, 2008–2013. URL, DOI BibTeX

    @conference{10.1109/SMC42975.2020.9283212,
    	pages = "2008--2013",
    	isbn = "978-1-7281-8527-9",
    	doi = "10.1109/SMC42975.2020.9283212",
    	url = "http://conf.papercept.net/proceedings/SMC20/0584.pdf",
    	keywords = "Human-Machine Interface, Brain-based Information Communications, Assistive Technology",
    	abstract = "Recently, the complexity analysis of brain activity has shown the possibility to provide additional information to discriminate between rest and motion in real-time. In this work, we propose a novel entropy-based machine learning method to classify between standing and walking conditions from the sole brain activity. The Shannon entropy has been used as a complexity measure of electroencephalography (EEG) signals and subject-specific features for classification have been selected by Common Spatial Patterns (CSP) filter. Exploiting these features with a linear classifier, we achieved >85% of classification accuracy over a long period (~25 min) of standing and treadmill walking on 11 healthy subjects. Moreover, we implemented the proposed approach to successfully discriminate in real-time between standing and over-ground walking on one healthy subject. We suggest that the reliable discrimination of rest against walking conditions achieved by the proposed method may be exploited to have more stable control of devices to restore locomotion, avoiding unpredictable and dangerous behaviors due to the delivery of undesired control commands.",
    	booktitle = "2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)",
    	year = 2020,
    	title = "Discrimination of Walking and Standing from Entropy of EEG Signals and Common Spatial Patterns",
    	author = "Tortora, Stefano and Artoni, Fiorenzo and Tonin, Luca and Chisari, Carmelo and Menegatti, Emanuele and Micera, Silvestro",
    	id = "11577_3359978"
    }
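
    A hedged sketch of the feature idea described above: the Shannon entropy of each EEG channel within a short window is used as a complexity feature and fed to a linear classifier. The CSP spatial-filtering step used in the paper is omitted here for brevity, and the window length, bin count and channel count are arbitrary assumptions (the data below are random placeholders).

        # Hedged sketch: per-channel Shannon entropy features + linear classifier.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def shannon_entropy(window: np.ndarray, bins: int = 32) -> float:
            """Shannon entropy of the amplitude distribution of one channel window."""
            hist, _ = np.histogram(window, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def window_features(eeg: np.ndarray) -> np.ndarray:
            """eeg: (n_windows, n_channels, n_samples) -> (n_windows, n_channels) entropies."""
            return np.array([[shannon_entropy(ch) for ch in win] for win in eeg])

        rng = np.random.default_rng(0)
        X = window_features(rng.normal(size=(100, 16, 256)))    # placeholder EEG windows
        y = rng.integers(0, 2, size=100)                        # standing vs. walking labels
        clf = LinearDiscriminantAnalysis().fit(X, y)
        print(clf.score(X, y))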
    
  24. Gloria Beraldo, Stefano Tortora, Emanuele Menegatti and Luca Tonin. ROS-Neuro: implementation of a closed-loop BMI based on motor imagery. In Conference Proceedings - 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020). 2020, 2031–2037. URL, DOI BibTeX

    @conference{10.1109/SMC42975.2020.9282968,
    	pages = "2031--2037",
    	doi = "10.1109/SMC42975.2020.9282968",
    	url = "http://conf.papercept.net/proceedings/SMC20/0893.pdf",
    	abstract = "The increasing interest of the research community in the intertwined fields of brain-machine interface (BMI) and robotics has led to the development of a variety of brain-actuated devices, ranging from powered wheelchairs and telepresence robots to wearable exoskeletons. Nevertheless, in most cases, the interaction between the two systems is still rudimentary, allowing only an unidirectional simple communication from the BMI to the robot that acts as a mere passive endeffector. This limitation could be due to the lack of a common research framework, facilitating the integration of these two technologies. In this scenario, we proposed ROS-Neuro to overcome the aforementioned limitations by providing a common middleware between BMI and robotics. In this work, we present a working example of the potentialities of ROS-Neuro by describing a full closed-loop implementation of a BMI based on motor imagination. The paper shows the general structure of a closed-loop BMI in ROS-Neuro and describes the specific implementation of the packages related to the proposed motor imagery BMI, already available online with source codes, tutorials and documentations. Furthermore, we show two practical case scenarios where the implemented BMI is used to control a computer game or a telepresence robot with ROS-Neuro. We evaluated the performance of ROS-Neuro by ensuring comparable results with respect to a previous BMI software already validated. Results demonstrated the correct behavior of the provided packages.",
    	booktitle = "Conference Proceedings - 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020)",
    	year = 2020,
    	title = "ROS-Neuro: implementation of a closed-loop BMI based on motor imagery",
    	author = "Beraldo, Gloria and Tortora, Stefano and Menegatti, Emanuele and Tonin, Luca",
    	id = "11577_3359955"
    }
    
  25. C Jeunet, L Tonin, L Albert, R Chavarriaga, B Bideau, F Argelaguet, J R Millan, A Lecuyer and R Kulpa. Uncovering EEG Correlates of Covert Attention in Soccer Goalkeepers: Towards Innovative Sport Training Procedures. SCIENTIFIC REPORTS 10, 2020. URL, DOI BibTeX

    @article{10.1038/s41598-020-58533-2,
    	doi = "10.1038/s41598-020-58533-2",
    	url = "www.nature.com/srep/index.html",
    	abstract = "Advances in sports sciences and neurosciences offer new opportunities to design efficient and motivating sport training tools. For instance, using NeuroFeedback (NF), athletes can learn to self-regulate specific brain rhythms and consequently improve their performances. Here, we focused on soccer goalkeepers’ Covert Visual Spatial Attention (CVSA) abilities, which are essential for these athletes to reach high performances. We looked for Electroencephalography (EEG) markers of CVSA usable for virtual reality-based NF training procedures, i.e., markers that comply with the following criteria: (1) specific to CVSA, (2) detectable in real-time and (3) related to goalkeepers’ performance/expertise. Our results revealed that the best-known EEG marker of CVSA—increased α-power ipsilateral to the attended hemi-field— was not usable since it did not comply with criteria 2 and 3. Nonetheless, we highlighted a significant positive correlation between athletes’ improvement in CVSA abilities and the increase of their α-power at rest. While the specificity of this marker remains to be demonstrated, it complied with both criteria 2 and 3. This result suggests that it may be possible to design innovative ecological training procedures for goalkeepers, for instance using a combination of NF and cognitive tasks performed in virtual reality.",
    	volume = 10,
    	journal = "SCIENTIFIC REPORTS",
    	publisher = "Nature Research",
    	year = 2020,
    	title = "Uncovering EEG Correlates of Covert Attention in Soccer Goalkeepers: Towards Innovative Sport Training Procedures",
    	author = "Jeunet, C. and Tonin, L. and Albert, L. and Chavarriaga, R. and Bideau, B. and Argelaguet, F. and Millan, J. R. and Lecuyer, A. and Kulpa, R.",
    	id = "11577_3330903"
    }
    
  26. Luca Tonin and José R Millán. Noninvasive Brain–Machine Interfaces for Robotic Devices. ANNUAL REVIEW OF CONTROL, ROBOTICS, AND AUTONOMOUS SYSTEMS 4, 2020. DOI BibTeX

    @article{10.1146/annurev-control-012720-093904,
    	doi = "10.1146/annurev-control-012720-093904",
    	volume = 4,
    	journal = "ANNUAL REVIEW OF CONTROL, ROBOTICS, AND AUTONOMOUS SYSTEMS",
    	year = 2020,
    	title = "Noninvasive Brain–Machine Interfaces for Robotic Devices",
    	author = "Tonin, Luca and Millán, José del R.",
    	id = "11577_3360239"
    }
    
  27. Daniele Evangelista, Alexander Walch, Marco Antonelli, Andrea Brandolese, Christian Eitzinger, Alberto Pretto and Emanuele Menegatti. Flexible and reconfigurable robotic inspection in manufacturing. In Proceedings of 2nd I-RIM Conference. 2020. BibTeX

    @conference{11577_3395902,
    	keywords = "inspection robots, machine vision, quality inspection",
    	abstract = "Having a generic and flexible software infrastructure for inspection robots is the main goal of the SPIRIT project. SPIRIT is an EU-funded project whose main tangible result is a software framework that includes an offline and an inline framework. The offline part is used to simulate the robotic workcell and configure the whole inspection task, e.g. select the robot and the inspection sensor and finally compute the robot trajectory. The inline part is instead responsible for reproducing the simulated task on the real robot; it also includes innovative features such as local and global re-planning and real-time data mapping for back-projecting sensor information onto the 3D CAD model of the inspected part. In the project four different demonstrators have been developed, which all belong to different industrial settings, i.e. automotive, aerospace and manufacturing industry. Such a high level of diversity demonstrates how the SPIRIT framework is general and flexible enough to be used in heterogeneous environments without prototyping and developing the entire workcell from scratch each time.",
    	booktitle = "Proceedings of 2nd I-RIM Conference",
    	year = 2020,
    	title = "Flexible and reconfigurable robotic inspection in manufacturing",
    	author = "Evangelista, Daniele and Walch, Alexander and Antonelli, Marco and Brandolese, Andrea and Eitzinger, Christian and Pretto, Alberto and Menegatti, Emanuele",
    	id = "11577_3395902"
    }
    
  28. Mulham Fawakherji, Ali Youssef, Domenico D Bloisi, Alberto Pretto and Daniele Nardi. Crop and Weed Classification Using Pixel-wise Segmentation on Ground and Aerial Images. INTERNATIONAL JOURNAL OF ROBOTIC COMPUTING 2:39–57, 2020. DOI BibTeX

    @article{10.35708/RC1869-126258,
    	pages = "39--57",
    	doi = "10.35708/RC1869-126258",
    	volume = 2,
    	journal = "INTERNATIONAL JOURNAL OF ROBOTIC COMPUTING",
    	year = 2020,
    	title = "Crop and Weed Classification Using Pixel-wise Segmentation on Ground and Aerial Images",
    	author = "Fawakherji, Mulham and Youssef, Ali and Bloisi, Domenico D. and Pretto, Alberto and Nardi, Daniele",
    	id = "11577_3395908"
    }
    
  29. D Evangelista, M Antonelli, A Pretto, C Eitzinger, M Moro, C Ferrari and E Menegatti. SPIRIT - A Software Framework for the Efficient Setup of Industrial Inspection Robots. In 2020 IEEE International Workshop on Metrology for Industry 4.0 and IoT, MetroInd 4.0 and IoT 2020 - Proceedings. 2020, 622–626. DOI BibTeX

    @conference{10.1109/MetroInd4.0IoT48571.2020.9138179,
    	pages = "622--626",
    	isbn = "978-1-7281-4892-2",
    	doi = "10.1109/MetroInd4.0IoT48571.2020.9138179",
    	keywords = "inspection robots; machine vision; quality inspection",
    	abstract = "SPIRIT aims to develop an 'inspection skill' for robots that takes the step from programming of complex inspection tasks to configuring such tasks. Result of the project is a software framework that includes an 'offline framework' with features such as model-based automatic coverage planning for complex parts, automatic robot program generation and an 'inline framework' that deals with sensor data mapping to transfer sensor measurements to the 3D object model. At the heart of the project is an accurate process-specific model that represents the sensor data acquisition process. This representation is sufficiently accurate to allow automatic planning in off-line settings using simulated workcells and then reproducing the inspection procedure on the real one, with some adaptations and corrections. More in detail, the 'offline framework' will include a generic interface to allow the easy exchange of process models (for different inspection technologies), of the CAD model of the part (for a different type of product to be inspected) or of the work-cell model (for a different robot kinematic structure). The generic 'inline framework' will provide the backbone for the execution of the actual inspection process. Relying on such a proven frameworks will reduce the risks of implementing complex inspection tasks and thus help the deployment of inspection robots. In this work initial results from the two main use cases are presented, they consist of inspection tasks for the automotive and aerospace industry respectively.",
    	booktitle = "2020 IEEE International Workshop on Metrology for Industry 4.0 and IoT, MetroInd 4.0 and IoT 2020 - Proceedings",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2020,
    	title = "SPIRIT - A Software Framework for the Efficient Setup of Industrial Inspection Robots",
    	author = "Evangelista, D. and Antonelli, M. and Pretto, A. and Eitzinger, C. and Moro, M. and Ferrari, C. and Menegatti, E.",
    	id = "11577_3360869"
    }
    
  30. D Evangelista, M Terreran, A Pretto, M Moro, C Ferrari and E Menegatti. 3D Mapping of X-Ray Images in Inspections of Aerospace Parts. In IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2020-. 2020, 1223–1226. DOI BibTeX

    @conference{10.1109/ETFA46521.2020.9212135,
    	pages = "1223--1226",
    	isbn = "978-1-7281-8956-7",
    	doi = "10.1109/ETFA46521.2020.9212135",
    	keywords = "3D metrol-ogy; Industrial inspection; Inspection of composite parts; X-ray imaging",
    	abstract = "In this work we present an industrial system for the inspection of composite parts in the aerospace industry, based on X-ray sensors and robotic manipulators. Such system is designed to identify any type of defects such as, missing gluing, core cell deformation, cracks or foreign objects, which may occur between layers of which these objects are composed. The inspection process involves back-projection of X-ray images onto the 3D CAD model of the inspected part, to directly locate the defects on the part itself. The complete system has been implemented in a real industrial workcell that involves two synchronized robots equipped with a X-ray source-detector system. The two robots move autonomously along a pre-computed trajectory without any human intervention, and the back-projection of the acquired images is efficiently performed at run-time using the proposed algorithm. The experiments demonstrate that the X-ray images back-projection is successful and can effectively replace standard manually guided inspections. This has a high impact on the factory automation cycle since it helps to reduce the effort and time needed for each inspection task. This work is part of a EU funded project called SPIRIT.",
    	booktitle = "IEEE International Conference on Emerging Technologies and Factory Automation, ETFA",
    	volume = "2020-",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2020,
    	title = "3D Mapping of X-Ray Images in Inspections of Aerospace Parts",
    	author = "Evangelista, D. and Terreran, M. and Pretto, A. and Moro, M. and Ferrari, C. and Menegatti, E.",
    	id = "11577_3360871"
    }
    


2019

  1. F Ceola, E Tosello, L Tagliapietra, G Nicola and S Ghidoni. Robot task planning via deep reinforcement learning: A tabletop object sorting application. In Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics 2019-. 2019, 486–492. DOI BibTeX

    @conference{10.1109/SMC.2019.8914278,
    	pages = "486--492",
    	isbn = "978-1-7281-4569-3",
    	doi = "10.1109/SMC.2019.8914278",
    	abstract = "This paper proposes a Deep Reinforcement Learning powered approach for tabletop object sorting. Once perceived the environment, the system creates a semantic representation of the scene, describing the pose and category of each recognized object. This image is then provided as input to the trained Deep Neural Network in charge of choosing the correct action to be performed to successfully achieve the sorting task. Obtained results prove the capability of the proposed system, including its intrinsic robustness to failures and unpredictable interactions with humans or other environmental agents. Moreover, the use of semantic images makes the Deep Neural Network independent from the type of objects to be sorted and from their final placement location. Finally, the system is scalable, being capable of sorting as many known objects as recognized by the perception system. Currently, the system can sort objects belonging to two predefined categories while treating all the others as obstacles. Future works will extend the system making it capable of sorting potentially any type and number of object categories.",
    	booktitle = "Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics",
    	volume = "2019-",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2019,
    	title = "Robot task planning via deep reinforcement learning: A tabletop object sorting application",
    	author = "Ceola, F. and Tosello, E. and Tagliapietra, L. and Nicola, G. and Ghidoni, S.",
    	id = "11577_3334033"
    }
    
  2. Loris Nanni, Michelangelo Paci, Gianluca Maguolo and Stefano Ghidoni. Deep learning for actinic keratosis classification. AIMS ELECTRONICS AND ELECTRICAL ENGINEERING 4:47–56, 2019. URL, DOI BibTeX

    @article{10.3934/ElectrEng.2020.1.47,
    	pages = "47--56",
    	doi = "10.3934/ElectrEng.2020.1.47",
    	url = "https://www.aimspress.com/fileOther/PDF/ElectrEng/ElectronEng-04-01-047.pdf",
    	volume = 4,
    	journal = "AIMS ELECTRONICS AND ELECTRICAL ENGINEERING",
    	publisher = "AIMS Press",
    	year = 2019,
    	title = "Deep learning for actinic keratosis classification",
    	author = "Nanni, Loris and Paci, Michelangelo and Maguolo, Gianluca and Ghidoni, Stefano",
    	id = "11577_3334029"
    }
    
  3. Loris Nanni, Sheryl Brahnam, Stefano Ghidoni and Alessandra Lumini. Bioimage Classification with Handcrafted and Learned Features. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, pages 1–1, 2019. URL, DOI BibTeX

    @article{10.1109/TCBB.2018.2821127,
    	pages = "1--1",
    	doi = "10.1109/TCBB.2018.2821127",
    	url = "http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=8857",
    	keywords = "Biomedical imaging; Classification; Databases; Deep Learning; Feature extraction; Machine learning; Microscopy imaging; Support Vector Machine; Support vector machines; Task analysis; Training; Biotechnology; Genetics; Applied Mathematics",
    	journal = "IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2019,
    	title = "Bioimage Classification with Handcrafted and Learned Features",
    	author = "Nanni, Loris and Brahnam, Sheryl and Ghidoni, Stefano and Lumini, Alessandra",
    	id = "11577_3287883"
    }
    
  4. Stefano Ghidoni, Monica Fedeli and Massimiliano Barolo. Sharing Active Learning Practices to Improve Teaching: Peer Observation of Active Teaching in a School of Engineering. Volume 8, pages 199–213, Springer International Publishing, 2019. DOI BibTeX

    @inbook{10.1007/978-3-030-29872-2,
    	pages = "199--213",
    	isbn = "978-3-030-29871-5",
    	doi = "10.1007/978-3-030-29872-2",
    	keywords = "Peer Observation, Active Learning, Higher Education, Faculty Development",
    	abstract = "Promoting faculty development in a School of Engineering is quite unusual within the Italian academic context. Engineering education is strongly 7 content-oriented, and the assumption that—in essence—effective teaching amounts to delivering maximum content is deeply rooted in most engineering instructors. Stated differently, one common assumption among instructors is that whenever one educator masters the content of a course, no need for improvement in teaching is needed or even possible. Under this perspective, a class is seen as the way to feed the students with new content, thus making learning an almost entirely self-guided process to be activated by each student individually outside the classroom. To overcome this instructor-centered educational model and promote modernization of the teaching practice, in 2016 the School of Engineering of the University of Padova (UniPD) pioneered for Italy a faculty development program named Teaching for Learning (T4L). The program kicked off with a two-and-a-half-day retreat workshop for engineering instructors recruited on a voluntary basis. The principles of active learning were introduced and practiced during the workshop under the guidance of national and international experts in adult learning and teaching in higher education. The retreat involved thirty instructors and was extremely successful, to the point that the T4L workshop experience rapidly spread across UniPD, engaging a tenfold greater number of instructors across all disciplines in the subsequent 2 years. The T4L@Engineering program continued in the two following years with the objective of sharing active teaching/learning practices both among the retreat participants and among other engineering instructors who did not participate in the residential retreat. Several half-day workshops were organized, including some dealing with the use of digital technologies to promote active learning. Paralleling these activities was a “peer observation of active teaching” (POAT) process, which was conceptualized, designed, and tested in a small group, and finally proposed to the entire community of engineering instructors through a call for volunteers.",
    	booktitle = "Connecting Adult Learning and Knowledge Management Strategies for Learning and Change in Higher Education and Organizations",
    	volume = 8,
    	address = "Switzerland",
    	publisher = "Springer International Publishing",
    	year = 2019,
    	title = "Sharing Active Learning Practices to Improve Teaching: Peer Observation of Active Teaching in a School of Engineering",
    	author = "Ghidoni, Stefano and Fedeli, Monica and Barolo, Massimiliano",
    	id = "11577_3318609"
    }
    
  5. Alberto DALLA LIBERA, Elisa Tosello, Gianluigi Pillonetto, Stefano Ghidoni and Ruggero Carli. Proprioceptive Robot Collision Detection through Gaussian Process Regression. In 2019 American Control Conference (ACC). 2019, 19–24. DOI BibTeX

    @conference{10.23919/ACC.2019.8814361,
    	pages = "19--24",
    	isbn = "978-153867926-5",
    	doi = "10.23919/ACC.2019.8814361",
    	booktitle = "2019 American Control Conference (ACC)",
    	year = 2019,
    	title = "Proprioceptive Robot Collision Detection through Gaussian Process Regression",
    	author = "DALLA LIBERA, Alberto and Tosello, Elisa and Pillonetto, Gianluigi and Ghidoni, Stefano and Carli, Ruggero",
    	id = "11577_3316493"
    }
    
  6. F Agatolio, M Moro, E Menegatti and M Pivetti. A critical reflection on the expectations about the impact of educational robotics on problem solving capability. In Advances in Intelligent Systems and Computing 867. 2019, 877–888. URL, DOI BibTeX

    @conference{10.1007/978-3-030-01370-7_68,
    	pages = "877--888",
    	isbn = "978-3-030-01369-1",
    	doi = "10.1007/978-3-030-01370-7_68",
    	url = "https://www.researchgate.net/publication/326688761_A_critical_reflection_on_the_expectations_about_the_impact_of_educational_robotics_on_problem_solving_capability",
    	keywords = "Educational robotics; Learning support tool; Problem solving capability",
    	abstract = "This paper discuss the outcomes of an experimental course we run during the 2016-17 school year in two secondary junior schools. The aim of the experiment was to validate the use of educational robotics as a mindtool, investigating the impact of robotics on problem solving capability. The course lasted about four months and it was evaluated through a self-assessment with pre and post-activity questionnaires. The emerged results encouraged us to reflect about the role of metacognition and the importance to take it into account for the evaluation of problem solving. In the paper, the activity with the students is described and the analysis of the pre and post-activity questionnaires are discussed and conclusions are drawn.",
    	booktitle = "Advances in Intelligent Systems and Computing",
    	volume = 867,
    	publisher = "Springer Verlag",
    	year = 2019,
    	title = "A critical reflection on the expectations about the impact of educational robotics on problem solving capability",
    	author = "Agatolio, F. and Moro, M. and Menegatti, E. and Pivetti, M.",
    	id = "11577_3389349"
    }
    
  7. M Terreran, M Antonello and S Ghidoni. Boat hunting with semantic segmentation for flexible and autonomous manufacturing. In 2019 European Conference on Mobile Robots, ECMR 2019 - Proceedings. 2019, 1–8. DOI BibTeX

    @conference{10.1109/ECMR.2019.8870921,
    	pages = "1--8",
    	isbn = "978-1-7281-3605-9",
    	doi = "10.1109/ECMR.2019.8870921",
    	keywords = "Scene understanding, Semantic segmentation, Autonomous manufacturing",
    	abstract = "Customized mass production of boats and other vehicles requires highly complex manufacturing processes that need a high amount of automation. To enhance the efficiency of such systems, sensing is of paramount importance to provide robots with detailed information about the working environment. In this paper, we propose the use of semantic segmentation to detect the key elements involved in production, to boost automation in the production process. Our main focus is on the sanding process of these tools by means of a robot. We demonstrate the potential of these techniques in an industrial environment featuring a lower degree of variability with respect to the domestic scenes typically considered in the literature. In the production environment, however, higher performances are required to address challenging manufacturing operations successfully. In this work, we also show that exploiting contextual cues and multiple points of view can further boost the reliability of our system, which provides useful data to the other robot modules in charge of navigation, work station recognition, and other tasks. All the methods have been thoroughly validated on the IASLAB RGB-D COROMA Dataset, that was created on purpose. It consists of 46589 RGB-D frames, whose annotation was speeded up thanks to our optimized annotation pipeline.",
    	booktitle = "2019 European Conference on Mobile Robots, ECMR 2019 - Proceedings",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2019,
    	title = "Boat hunting with semantic segmentation for flexible and autonomous manufacturing",
    	author = "Terreran, M. and Antonello, M. and Ghidoni, S.",
    	id = "11577_3333357"
    }
    
  8. Stefano Tortora, Stefano Michieletto, Francesca Stival and Emanuele Menegatti. Fast human motion prediction for human-robot collaboration with wearable interface. In 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM). 2019, 457–462. URL, DOI BibTeX

    @conference{10.1109/CIS-RAM47153.2019.9095779,
    	pages = "457--462",
    	isbn = "978-1-7281-3458-1",
    	doi = "10.1109/CIS-RAM47153.2019.9095779",
    	url = "https://ieeexplore.ieee.org/document/9095779",
    	abstract = "In this paper, we propose a novel human-robot interface capable to anticipate the user intention while performing reaching movements on a working bench in order to plan the action of a collaborative robot. The system integrates two levels of prediction: motion intention prediction, to detect movements onset and offset; motion direction prediction, based on Gaussian Mixture Model (GMM) trained with IMU and EMG data following an evidence accumulation approach. Novel dynamic stopping criteria have been proposed to flexibly adjust the trade-off between early anticipation and accuracy. Results show that our system outperforms previous methods, achieving a real-time classification accuracy of 94.3±2.9% after 160.0msec±80.0msec from movement onset. The proposed interface can find many applications in the Industry 4.0 framework, where it is crucial for autonomous and collaborative robots to understand human movements as soon as possible to avoid accidents and injuries.",
    	booktitle = "2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM)",
    	year = 2019,
    	title = "Fast human motion prediction for human-robot collaboration with wearable interface",
    	author = "Tortora, Stefano and Michieletto, Stefano and Stival, Francesca and Menegatti, Emanuele",
    	id = "11577_3342646"
    }
    
  9. Kenji Koide, Jun Miura and Emanuele Menegatti. A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS 16 (2), 2019. URL, DOI BibTeX

    @article{10.1177/1729881419841532,
    	doi = "10.1177/1729881419841532",
    	url = "https://journals.sagepub.com/doi/10.1177/1729881419841532",
    	keywords = "3-D LIDAR, people detection and tracking, behavior analysis",
    	abstract = "It is important to measure and analyze people behavior to design systems which interact with people. This article describes a portable people behavior measurement system using a three-dimensional LIDAR. In this system, an observer carries the system equipped with a three-dimensional Light Detection and Ranging (LIDAR) and follows persons to be measured while keeping them in the sensor view. The system estimates the sensor pose in a three-dimensional environmental map and tracks the target persons. It enables long-term and wide-area people behavior measurements which are hard for existing people tracking systems. As a field test, we recorded the behavior of professional caregivers attending elderly persons with dementia in a hospital. The preliminary analysis of the behavior reveals how the caregivers decide the attending position while checking the surrounding people and environment. Based on the analysis result, empirical rules to design the behavior of attendant robots are proposed.",
    	volume = "16 (2)",
    	journal = "INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS",
    	year = 2019,
    	title = "A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement",
    	author = "Koide, Kenji and Miura, Jun and Menegatti, Emanuele",
    	id = "11577_3357770"
    }
    
  10. G Beraldo, E Menegatti, V De Tommasi, R Mancin and F Benini. A preliminary investigation of using humanoid social robots as non-pharmacological techniques with children. In Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO 2019-. 2019, 393–400. URL, DOI BibTeX

    @conference{10.1109/ARSO46408.2019.8948760,
    	pages = "393--400",
    	isbn = "978-1-7281-3176-4",
    	doi = "10.1109/ARSO46408.2019.8948760",
    	url = "https://ieeexplore.ieee.org/abstract/document/8948760",
    	abstract = "This paper presents an observational preliminary study to examine the potentialities of the humanoid social robots as non-pharmacological techniques for children. The aim of the robots is to entertain the patients, interact through voice, gestures and funny animations, in order to decrease their negative feelings, and, at the same time, to promote positive emotions. This study exploited two different humanoid social robots, Pepper and Sanbot Elf, and involved twenty-eight hospitalized patients, age 3-19, assigned randomly to one of them. We analysed the feelings of the patients in terms of several different positive and negative moods, focusing on their variation in three different times: before, during and after the procedure. We found a significant decrease of the negative feelings by a factor between 31.25% and 53.65%, and an increase of positive emotions up to 73.33%. Generally both the robots were appreciated by the most of the children: the 78.57% of the participants evaluated the experience positive and 85.71% declared that would like to play again with the robots, if they had to repeat the procedure.",
    	booktitle = "Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO",
    	volume = "2019-",
    	publisher = "IEEE Computer Society",
    	year = 2019,
    	title = "A preliminary investigation of using humanoid social robots as non-pharmacological techniques with children",
    	author = "Beraldo, G. and Menegatti, E. and De Tommasi, V. and Mancin, R. and Benini, F.",
    	id = "11577_3335626"
    }
    
  11. A Malaguti, M Carraro, M Guidolin, L Tagliapietra, E Menegatti and S Ghidoni. Real-time Tracking-by-Detection of Human Motion in RGB-D Camera Networks. In Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics. 2019, 3198–3204. DOI BibTeX

    @conference{10.1109/SMC.2019.8914539,
    	pages = "3198--3204",
    	isbn = "978-1-7281-4569-3",
    	doi = "10.1109/SMC.2019.8914539",
    	keywords = "Human-Computer Interaction, Human-Machine Interface, Multi-User Interaction",
    	abstract = "This paper presents a novel real-time tracking system capable of improving body pose estimation algorithms in distributed camera networks. The first stage of our approach introduces a linear Kalman filter operating at the body joints level, used to fuse single-view body poses coming from different detection nodes of the network and to ensure temporal consistency between them. The second stage, instead, refines the Kalman filter estimates by fitting a hierarchical model of the human body having constrained link sizes in order to ensure the physical consistency of the tracking. The effectiveness of the proposed approach is demonstrated through a broad experimental validation, performed on a set of sequences whose ground truth references are generated by a commercial marker-based motion capture system. The obtained results show how the proposed system outperforms the considered state-of-the-art approaches, granting accurate and reliable estimates. Moreover, the developed methodology constrains neither the number of persons to track, nor the number, position, synchronization, frame-rate, and manufacturer of the RGB-D cameras used. Finally, the real-time performances of the system are of paramount importance for a large number of real-world applications.",
    	booktitle = "Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2019,
    	title = "Real-time Tracking-by-Detection of Human Motion in RGB-D Camera Networks",
    	author = "Malaguti, A. and Carraro, M. and Guidolin, M. and Tagliapietra, L. and Menegatti, E. and Ghidoni, S.",
    	id = "11577_3332968"
    }
    
  12. Gloria Beraldo, Enrico Termine and Emanuele Menegatti. Shared-Autonomy Navigation for Mobile Robots Driven by a Door Detection Module. In AI*IA 2019: AI*IA 2019 – Advances in Artificial Intelligence 11946. 2019, 511–527. URL, DOI BibTeX

    @conference{10.1007/978-3-030-35166-3_36,
    	pages = "511--527",
    	isbn = "978-3-030-35165-6",
    	doi = "10.1007/978-3-030-35166-3_36",
    	url = "https://link.springer.com/chapter/10.1007/978-3-030-35166-3_36",
    	keywords = "Mobile robots navigation, Robot perception, Human-centered systems",
    	abstract = "Shared-autonomy approaches are the most appealing for what concerns the control of assistive devices such as wheelchairs and mobile robots, designed to aid disabled and elderly people. In this paper, we propose a shared-autonomy navigation for mobile robots, that combines the user’s interaction as well as the robots’ perception and the environment knowledge, with the information of important landmarks, namely the doors. In order to facilitate the control of the robot, our system exploits a door detection module, aiming to detect doors and especially to identify their open/close status, making the robot pass through narrow doorways without any user’s intervention. We tested the proposed system on a real mobile robot to verify the feasibility.",
    	booktitle = "AI*IA 2019: AI*IA 2019 – Advances in Artificial Intelligence",
    	volume = 11946,
    	year = 2019,
    	title = "Shared-Autonomy Navigation for Mobile Robots Driven by a Door Detection Module",
    	author = "Beraldo, Gloria and Termine, Enrico and Menegatti, Emanuele",
    	id = "11577_3335625"
    }
    
  13. J C Lock, A G Tramontano, S Ghidoni and N Bellotto. ActiVis: Mobile object detection and active guidance for people with visual impairments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 11752. 2019, 649–660. DOI BibTeX

    @conference{10.1007/978-3-030-30645-8_59,
    	pages = "649--660",
    	isbn = "978-3-030-30644-1",
    	doi = "10.1007/978-3-030-30645-8_59",
    	keywords = "Active vision; Object detection; Vision impairment",
    	abstract = "The ActiVis project aims to deliver a mobile system that is able to guide a person with visual impairments towards a target object or area in an unknown indoor environment. For this, it uses new developments in object detection, mobile computing, action generation and human-computer interfacing to interpret the user’s surroundings and present effective guidance directions. Our approach to direction generation uses a Partially Observable Markov Decision Process (POMDP) to track the system’s state and output the optimal location to be investigated. This system includes an object detector and an audio-based guidance interface to provide a complete active search pipeline. The ActiVis system was evaluated in a set of experiments showing better performance than a simpler unguided case.",
    	booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
    	volume = 11752,
    	publisher = "Springer Verlag",
    	year = 2019,
    	title = "ActiVis: Mobile object detection and active guidance for people with visual impairments",
    	author = "Lock, J. C. and Tramontano, A. G. and Ghidoni, S. and Bellotto, N.",
    	id = "11577_3342233"
    }
    
  14. Stefano Tortora, Gloria Beraldo, Luca Tonin and Emanuele Menegatti. Entropy-based Motion Intention Identification for Brain-Computer Interface. In Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics. 2019, 2791–2798. DOI BibTeX

    @conference{10.1109/SMC.2019.8914486,
    	pages = "2791--2798",
    	isbn = "978-1-7281-4569-3",
    	doi = "10.1109/SMC.2019.8914486",
    	booktitle = "Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics",
    	year = 2019,
    	title = "Entropy-based Motion Intention Identification for Brain-Computer Interface",
    	author = "Tortora, Stefano and Beraldo, Gloria and Tonin, Luca and Menegatti, Emanuele",
    	id = "11577_3318793"
    }
    
  15. Luca Tonin, Gloria Beraldo, Stefano Tortora, Luca Tagliapietra, Jose R Millan and Emanuele Menegatti. ROS-Neuro: A common middleware for BMI and robotics. The acquisition and recorder packages. In Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics. 2019, 2767–2772. DOI BibTeX

    @conference{10.1109/SMC.2019.8914364,
    	pages = "2767--2772",
    	isbn = "978-1-7281-4569-3",
    	doi = "10.1109/SMC.2019.8914364",
    	booktitle = "Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics",
    	year = 2019,
    	title = "ROS-Neuro: A common middleware for BMI and robotics. The acquisition and recorder packages",
    	author = "Tonin, Luca and Beraldo, Gloria and Tortora, Stefano and Tagliapietra, Luca and Millan, Jose del R. and Menegatti, Emanuele",
    	id = "11577_3318795"
    }
    
  16. C Schneider, M Pereira, L Tonin and J R Millan. Real-time EEG Feedback on Alpha Power Lateralization Leads to Behavioral Improvements in a Covert Attention Task. BRAIN TOPOGRAPHY, 2019. URL, DOI BibTeX

    @article{10.1007/s10548-019-00725-9,
    	doi = "10.1007/s10548-019-00725-9",
    	url = "www.wkap.nl/journalhome.htm/0896-0267",
    	keywords = "Alpha band lateralization; Brain-computer interface; Closed-loop; Covert visuospatial attention; EEG; Hemispatial neglect",
    	journal = "BRAIN TOPOGRAPHY",
    	publisher = "Springer New York LLC",
    	year = 2019,
    	title = "Real-time EEG Feedback on Alpha Power Lateralization Leads to Behavioral Improvements in a Covert Attention Task",
    	author = "Schneider, C. and Pereira, M. and Tonin, L. and Millan, J. R.",
    	id = "11577_3318791"
    }
    
  17. Luca Tonin, Felix Christian Bauer and Jose R Millan. The role of the control framework for continuous teleoperation of a brain–machine interface-driven mobile robot. IEEE TRANSACTIONS ON ROBOTICS 36:78–91, 2019. DOI BibTeX

    @article{10.1109/TRO.2019.2943072,
    	pages = "78--91",
    	doi = "10.1109/TRO.2019.2943072",
    	volume = 36,
    	journal = "IEEE TRANSACTIONS ON ROBOTICS",
    	year = 2019,
    	title = "The role of the control framework for continuous teleoperation of a brain–machine interface-driven mobile robot",
    	author = "Tonin, Luca and Bauer, Felix Christian and Millan, Jose del R",
    	id = "11577_3313388"
    }
    
  18. D Evangelista, M Imperoli, E Menegatti and A Pretto. FlexSight - A Flexible and Accurate System for Object Detection and Localization for Industrial Robots. In 2019 IEEE International Workshop on Metrology for Industry 4.0 and IoT, MetroInd 4.0 and IoT 2019 - Proceedings. 2019, 58–63. DOI BibTeX

    @conference{10.1109/METROI4.2019.8792902,
    	pages = "58--63",
    	isbn = "978-1-7281-0429-4",
    	doi = "10.1109/METROI4.2019.8792902",
    	keywords = "Deep Learning; Object Detection; Stereo Matching; Structured light cameras; Texture-less Objects",
    	abstract = "We present a novel smart camera - the FlexSight C1 - designed to enable an industrial robot to detect and localize several types of objects and parts in an accurate and reliable way. The C1 integrates all the sensors and a powerful mini computer with a complete Operating System running robust 3D reconstruction and object localization algorithms on-board, so it can be directly connected to the robot that is guided directly by the device during the production cycle without any external computers in the loop.In this paper, we describe the FlexSight C1 hardware configuration along with the algorithms designed to face the model based localization problem of textureless objects, namely: (1) an improved version of the PatchMatch Stereo matching algorithm for depth estimation; (2) an object detection pipeline based on deep transfer learning with synthetic data. All the presented algorithms have been tested on publicly available datasets, showing effective results and improved runtime performance.",
    	booktitle = "2019 IEEE International Workshop on Metrology for Industry 4.0 and IoT, MetroInd 4.0 and IoT 2019 - Proceedings",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2019,
    	title = "FlexSight - A Flexible and Accurate System for Object Detection and Localization for Industrial Robots",
    	author = "Evangelista, D. and Imperoli, M. and Menegatti, E. and Pretto, A.",
    	id = "11577_3335634"
    }
    
  19. Stefano Tortora, Michele Moro and Emanuele Menegatti. Dual-Myo Real-Time Control of a Humanoid Arm for Teleoperation. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2019, 624–625. URL, DOI BibTeX

    @conference{10.1109/HRI.2019.8673259,
    	pages = "624--625",
    	isbn = "978-1-5386-8555-6",
    	doi = "10.1109/HRI.2019.8673259",
    	url = "https://ieeexplore.ieee.org/abstract/document/8673259",
    	keywords = "kinematic models, motion tracking, humanoid",
    	abstract = "In this paper, we propose a ROS-based system to reconstruct the motion of human upper limb based on data collected with two Myo armbands in a hybrid manner. The inertial sensors' information are fused to reconstruct shoulder and elbow kinematics. Electromyographic (EMG) signals are used to estimate wrist kinematics, to fully capture the motion of the 5-DoF (degree of freedom) user's arm. The system shows a good pose estimation accuracy compared to the XSens suit with an average RMSE of 6.61 ° ±3.31 ° and a R 2 of 0.90±0.07.",
    	booktitle = "2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)",
    	year = 2019,
    	title = "Dual-Myo Real-Time Control of a Humanoid Arm for Teleoperation",
    	author = "Tortora, Stefano and Moro, Michele and Menegatti, Emanuele",
    	id = "11577_3297574"
    }
    
  20. Kenji Koide and Emanuele Menegatti. General Hand-Eye Calibration Based on Reprojection Error Minimization. IEEE ROBOTICS AND AUTOMATION LETTERS 4:1021–1028, 2019. URL, DOI BibTeX

    @article{10.1109/LRA.2019.2893612,
    	pages = "1021--1028",
    	doi = "10.1109/LRA.2019.2893612",
    	url = "http://ieeexplore.ieee.org/servlet/opac?punumber=7083369",
    	keywords = "Calibration and identification; industrial robots; Control and Systems Engineering; Human-Computer Interaction; Biomedical Engineering; Mechanical Engineering; Control and Optimization; Artificial Intelligence; Computer Science Applications1707 Computer Vision and Pattern Recognition; 1707",
    	volume = 4,
    	journal = "IEEE ROBOTICS AND AUTOMATION LETTERS",
    	publisher = "Institute of Electrical and Electronics Engineers Inc.",
    	year = 2019,
    	title = "General Hand-Eye Calibration Based on Reprojection Error Minimization",
    	author = "Koide, Kenji and Menegatti, Emanuele",
    	id = "11577_3296422"
    }
    
  21. Elisa Tosello, Nicola Castaman and Emanuele Menegatti. Using robotics to train students for Industry 4.0. In 12th IFAC Symposium on Advances in Control Education ACE 2019 52. 2019, 153–158. URL, DOI BibTeX

    @conference{10.1016/j.ifacol.2019.08.185,
    	pages = "153--158",
    	doi = "10.1016/j.ifacol.2019.08.185",
    	url = "http://www.sciencedirect.com/science/article/pii/S2405896319305221",
    	abstract = {This paper presents the master course on Autonomous Robotics that we offer at the School of Engineering of the University of Padova (Italy). Its novelty is the assignment of a lab project carefully designed to train students on autonomous and industrial robotics in the framework of Industry 4.0: the "Industry 4.0 Robotics Challenge". Students have to program both a manipulator and a mobile robot, together with a 3D vision system, in order to collaborate in the fulfillment of a pick-place-transport industrial task. We adopt a constructionist approach: project-based learning and team-based learning are applied to robotics and Industry 4.0. The project is organized as a challenge to motivate students to propose innovative ideas. A survey on students' satisfaction is reported at the end of the paper. We made the description of both the hardware and software setup, together with tutorials and wikis, publicly available to let other robotics instructors replicate our proposal and make it a point of reference for teaching robotics in the frame of Industry 4.0.},
    	booktitle = "12th IFAC Symposium on Advances in Control Education ACE 2019",
    	volume = 52,
    	publisher = "Elsevier B.V.",
    	year = 2019,
    	title = "Using robotics to train students for Industry 4.0",
    	author = "Tosello, Elisa and Castaman, Nicola and Menegatti, Emanuele",
    	id = "11577_3309363"
    }
    
  22. Gloria Beraldo, Stefano Tortora and Emanuele Menegatti. Towards a Brain-Robot Interface for children. In 2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2019). 2019, 2799–2805. URL, DOI BibTeX

    @conference{10.1109/SMC.2019.8914612,
    	pages = "2799--2805",
    	isbn = "978-1-7281-4570-9",
    	doi = "10.1109/SMC.2019.8914612",
    	url = "https://www.researchgate.net/publication/334965256_Towards_a_Brain-Robot_Interface_for_children",
    	abstract = "Brain-Computer Interface systems have been widely studied and explored with adults demonstrating the possibility to achieve augmentative communication and control directly from the users’ brain. Nevertheless, the study and the exploitation of the BCI in children seems to be limited. In this paper we propose and present for the first time a Brain-Robot Interface enabling children to mentally drive a robot. With this regards, we exploit the combination of a P300-based Brain-Computer Interface and a shared-autonomy approach to achieve a reliable and safe robot navigation. We tested our system in a pilot study involving five children. Our preliminary results highlight the advantages of using an accumulation framework, thanks to which the performance of the children reached the 81.67 % ± 12.7 on average in terms of accuracy. During the experiments, the shared-autonomy approach involved a low-level intelligent control on board of the robot to avoid obstacles, enabling an effective navigation also with a small number of commands.",
    	booktitle = "2019 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2019)",
    	year = 2019,
    	title = "Towards a Brain-Robot Interface for children",
    	author = "Beraldo, Gloria and Tortora, Stefano and Menegatti, Emanuele",
    	id = "11577_3305641"
    }
    
  23. Kenji Koide and Emanuele Menegatti. General Robot-Camera Synchronization based on Reprojection Error Minimization. In Proceeding of ARW & OAGM JOINT WORKSHOP ON “VISION AND ROBOTICS” 2019. 2019, 119–122. DOI BibTeX

    @conference{10.3217/978-3-85125-663-5-23,
    	pages = "119--122",
    	isbn = "978-3-85125-663-5",
    	doi = "10.3217/978-3-85125-663-5-23",
    	abstract = "This paper describes a synchronization method to estimate the time offset between a robot arm and a camera mounted on the robot (i.e., robot-camera synchronization) based on reprojection error minimization. In this method, we detect a calibration pattern (e.g., checkerboard) from camera images while projecting the pattern onto the image space with robot hand poses and forward kinematics. Then, we estimate the delay of the camera data by finding the robot-camera time offset which minimizes the reprojection error between the visually detected and the projected patterns. Since the proposed method does not rely on any camera-specific algorithms, it can be easily applied to any new camera models, such as RGB, infrared, and X-ray cameras, by changing only the projection model. Through experiments on a real system, we confirmed that the proposed method shows a good synchronization accu- racy and contributes to the accuracy of a continuous scan data mapping task.",
    	booktitle = "Proceeding of ARW & OAGM JOINT WORKSHOP ON “VISION AND ROBOTICS” 2019",
    	year = 2019,
    	title = "General Robot-Camera Synchronization based on Reprojection Error Minimization",
    	author = "Koide, Kenji and Menegatti, Emanuele",
    	id = "11577_3298185"
    }
    
  24. Elisa Tosello, Nicola Castaman, Stefano Michieletto and Emanuele Menegatti. Teaching Robot Programming for Industry 4.0. In Educational Robotics in the Context of the Maker Movement. 2019. BibTeX

    @conference{11577_3309398,
    	isbn = "978-3-030-18141-3",
    	booktitle = "Educational Robotics in the Context of the Maker Movement",
    	year = 2019,
    	title = "Teaching Robot Programming for Industry 4.0",
    	author = "Tosello, Elisa and Castaman, Nicola and Michieletto, Stefano and Menegatti, Emanuele",
    	id = "11577_3309398"
    }
    


2018

  1. G Beraldo and E Menegatti. Socially Assistive Robots for Inclusion. In 5th Italian Workshop on Artificial Intelligence and Robotics 2018 (AIRO 2018). November 2018. BibTeX

    @inproceedings{beraldo2,
    	author = "G. Beraldo and E. Menegatti",
    	booktitle = "5th Italian Workshop on Artificial Intelligence and Robotics 2018 (AIRO 2018)",
    	title = "Socially {A}ssistive {R}obots for {I}nclusion",
    	year = 2018,
    	month = "November"
    }
    
  2. G Beraldo, S Di Battista, S Badaloni, E Menegatti and M Pivetti. Sex differences in expectations and perception of a social robot. In 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO). September 2018, 1-6. BibTeX

    @inproceedings{beraldo,
    	author = "G. Beraldo and S. Di Battista and S. Badaloni and E. Menegatti and M. Pivetti",
    	booktitle = "2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO)",
    	title = "Sex differences in expectations and perception of a social robot",
    	year = 2018,
    	pages = "1-6",
    	month = "September"
    }
    
  3. G Beraldo, M Antonello, A Cimolato, E Menegatti and L Tonin. Brain-Computer Interface Meets ROS: A Robotic Approach to Mentally Drive Telepresence Robots. In 2018 IEEE International Conference on Robotics and Automation (ICRA). May 2018, 1-6. DOI BibTeX

    @inproceedings{beraldo1,
    	author = "G. Beraldo and M. Antonello and A. Cimolato and E. Menegatti and L. Tonin",
    	booktitle = "2018 IEEE International Conference on Robotics and Automation (ICRA)",
    	title = "Brain-{C}omputer {I}nterface {M}eets {ROS}: {A} {R}obotic {A}pproach to {M}entally {D}rive {T}elepresence {R}obots",
    	year = 2018,
    	pages = "1-6",
    	keywords = "brain;brain-computer interfaces;collision avoidance;control engineering computing;geriatrics;handicapped aids; learning (artificial intelligence);medical robotics;medical signal processing;mobile robots;operating systems (computers); patient rehabilitation;position control;robot programming;telerobotics;video streaming;noninvasive Brain-Computer Interface; Robot Operating System;telepresence robot;mobile device;human brain signals;severe physical disabilities;elderly people;BCI user; robot position control;obstacle avoidance;video streaming;Navigation;Robot sensing systems;Task analysis;Telepresence; Brain-computer interfaces",
    	doi = "10.1109/ICRA.2018.8460578",
    	issn = "2577-087X",
    	month = "May"
    }
    
  4. G Beraldo, E Menegatti, V De Tommasi, R Mancin and F Benini. Social Robots as Alternative Non-Pharmacological Techniques for procedures in children. In Proc of Workshop on Social Robots in Therapy: Focusing on Autonomy and Ethical Challenges (SREC). March 2018. BibTeX

    @inproceedings{8460578,
    	author = "G. Beraldo and E. Menegatti and V. De Tommasi and R. Mancin and F. Benini",
    	booktitle = "Proc of Workshop on Social Robots in Therapy: Focusing on Autonomy and Ethical Challenges (SREC)",
    	title = "{S}ocial {R}obots as {A}lternative {N}on-{P}harmacological {T}echniques for procedures in children",
    	year = 2018,
    	month = "March"
    }
    
  5. Gloria Beraldo, Nicola Castaman, Roberto Bortoletto, Enrico Pagello, José R Millán, Luca Tonin and Emanuele Menegatti. ROS-health: An open-source framework for neurorobotics. In 2018 IEEE International Conference on Simulation, Modeling, and Programming for Autonomous Robots, SIMPAR 2018, Brisbane, Australia, May 16-19, 2018. 2018, 174–179. URL, DOI BibTeX

    @inproceedings{DBLP:conf/simpar/BeraldoCBPMTM18,
    	author = "Gloria Beraldo and Nicola Castaman and Roberto Bortoletto and Enrico Pagello and Jos{\'{e}} del R. Mill{\'{a}}n and Luca Tonin and Emanuele Menegatti",
    	title = "{ROS}-health: {A}n open-source framework for neurorobotics",
    	booktitle = "2018 {IEEE} International Conference on Simulation, Modeling, and Programming for Autonomous Robots, {SIMPAR} 2018, Brisbane, Australia, May 16-19, 2018",
    	pages = "174--179",
    	year = 2018,
    	crossref = "DBLP:conf/simpar/2018",
    	url = "https://doi.org/10.1109/SIMPAR.2018.8376288",
    	doi = "10.1109/SIMPAR.2018.8376288",
    	timestamp = "Fri, 02 Nov 2018 09:38:59 +0100",
    	biburl = "https://dblp.org/rec/bib/conf/simpar/BeraldoCBPMTM18",
    	bibsource = "dblp computer science bibliography, https://dblp.org"
    }
    
  6. Francesca Stival, Stefano Michieletto, Andrea De Agnoi and Enrico Pagello. Toward a better robotic hand prosthesis control: using EMG and IMU features for a subject independent multi joint regression model. In 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob). 2018. BibTeX

    @inproceedings{Stival2018,
    	abstract = "The interest on wearable prosthetic devices has boost the research for a robust framework to help injured subjects to regain their lost functionality. A great number of solutions exploit physiological human signals, such as Electromyography (EMG), to naturally control the prosthesis, reproducing what happens in the human limbs. In this paper, we propose for the first time a way to integrate EMG signals with Inertial Measurement Unit (IMU) information, as a way to improve subject-independent models for controlling robotic hands. EMG data are very sensitive to both physical and physiological variations, and this is particularly true between different subjects. The introduction of IMUs aims at enriching the subject-independent model, making it more robust with information not strictly dependent from the physiological characteristics of the subject. We compare three different models: the first based on EMG solely, the second merging data from EMG and the 2 best IMUs available, and the third using EMG and IMUs information corresponding to the same 3 electrodes. The three techniques are tested on two different movements executed by 35 healthy subjects, by using a leave-one-out approach. The framework is able to estimate online the bending angles of the joints involved in the motion, obtaining an accuracy up to 0.8634. The resulting joint angles are used to actuate a robotic hand in a simulated environment.",
    	author = "Stival, Francesca and Michieletto, Stefano and {De Agnoi}, Andrea and Pagello, Enrico",
    	booktitle = "7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob)",
    	isbn = 9781538681824,
    	title = "{Toward a better robotic hand prosthesis control: using EMG and IMU features for a subject independent multi joint regression model}",
    	year = 2018
    }
    
  7. Francesca Stival, Stefano Michieletto and Enrico Pagello. Subject Independent EMG Analysis by using Low-Cost hardware. In Systems, Man, and Cybernetics (SMC), 2018 IEEE International Conference on. 2018. BibTeX

    @inproceedings{stival2018subject,
    	author = "Stival, Francesca and Michieletto, Stefano and Pagello, Enrico",
    	booktitle = "Systems, Man, and Cybernetics (SMC), 2018 IEEE International Conference on",
    	organization = "IEEE",
    	title = "Subject Independent EMG Analysis by using Low-Cost hardware",
    	year = 2018
    }
    
  8. Francesca Stival, Stefano Michieletto, Enrico Pagello, Henning Müller and Manfredo Atzori. Quantitative Hierarchical Representation and Comparison of Hand Grasps from Electromyography and Kinematic Data. In Workshop Proceedings of the 15th International Conference on Autonomous Systems IAS-15, Workshop on Learning Applications for Intelligent Autonomous Robots (LAIAR-2018). 2018. BibTeX

    @inproceedings{stival2018quantitative,
    	author = {Stival, Francesca and Michieletto, Stefano and Pagello, Enrico and M{\"{u}}ller, Henning and Atzori, Manfredo},
    	booktitle = "in Workshop Proceedings of the 15th International Conference on Autonomous Systems IAS-15, Workshop on Learning Applications for Intelligent Autonomous Robots (LAIAR-2018)",
    	organization = "ISBN: 978-3-00-059946-0",
    	title = "Quantitative Hierarchical Representation and Comparison of Hand Grasps from Electromyography and Kinematic Data",
    	year = 2018
    }
    
  9. Francesca Stival, Michele Moro and Enrico Pagello. A first approach to a taxonomy-based classification framework for hand grasps. 2018. BibTeX

    @article{stival2018first,
    	title = "A first approach to a taxonomy-based classification framework for hand grasps",
    	author = "Stival, Francesca and Moro, Michele and Pagello, Enrico",
    	year = 2018
    }
    
  10. Mattia Guidolin, Marco Carraro, Stefano Ghidoni and Emanuele Menegatti. A Limb-based Approach for Body Pose Recognition Using a Predefined Set of Poses. In Workshop Proceedings of the 15th International Conference on Autonomous Systems IAS-15. 2018. BibTeX

    @inproceedings{guidolin2018limb,
    	title = "A Limb-based Approach for Body Pose Recognition Using a Predefined Set of Poses",
    	author = "Guidolin, Mattia and Carraro, Marco and Ghidoni, Stefano and Menegatti, Emanuele",
    	booktitle = "Workshop Proceedings of the 15th International Conference on Autonomous Systems IAS-15",
    	year = 2018
    }
    
  11. Stefano Tortora, Stefano Michieletto and Emanuele Menegatti. Synergy-based Gaussian Mixture Model to anticipate reaching direction identification for robotic applications. In Proc. of the IAS-15 Workshop on Learning Applications for Intelligent Autonomous Robots (LAIAR-2018). 2018, 13. BibTeX

    @inproceedings{Tortora2018,
    	author = "Tortora, Stefano and Michieletto, Stefano and Menegatti, Emanuele",
    	booktitle = "Proc. of the IAS-15 Workshop on Learning Applications for Intelligent Autonomous Robots (LAIAR-2018)",
    	pages = 13,
    	title = "{Synergy-based Gaussian Mixture Model to anticipate reaching direction identification for robotic applications}",
    	year = 2018
    }
    
  12. Stefano Tortora, Stefano Michieletto and Emanuele Menegatti. Synergy-based Classification to Anticipate Reaching Direction Identification in Stroke subject for Robotic Arm Teleoperation. In SCHOOL AND SYMPOSIUM ON ADVANCED NEUROREHABILITATION (SSNR2018). 2018, 36. BibTeX

    @inproceedings{tortorasynergy,
    	title = "Synergy-based Classification to Anticipate Reaching Direction Identification in Stroke subject for Robotic Arm Teleoperation",
    	author = "Tortora, Stefano and Michieletto, Stefano and Menegatti, Emanuele",
    	booktitle = "SCHOOL AND SYMPOSIUM ON ADVANCED NEUROREHABILITATION (SSNR2018)",
    	pages = 36,
    	year = 2018
    }
    
  13. Yongheng Zhao and Emanuele Menegatti. MS3D: Mean-Shift Object Tracking Boosted by Joint Back Projection of Color and Depth. In International Conference on Intelligent Autonomous Systems. 2018, 222–236. BibTeX

    @inproceedings{zhao2018ms3d,
    	title = "MS3D: Mean-Shift Object Tracking Boosted by Joint Back Projection of Color and Depth",
    	author = "Zhao, Yongheng and Menegatti, Emanuele",
    	booktitle = "International Conference on Intelligent Autonomous Systems",
    	pages = "222--236",
    	year = 2018,
    	organization = "Springer"
    }
    
  14. Stefano Michieletto, Francesca Stival, Francesco Castelli and Enrico Pagello. Automated and Flexible Coil Winding Robotic Framework. In ISR 2018; 50th International Symposium on Robotics. 2018, 1–4. BibTeX

    @inproceedings{michieletto2016automated,
    	title = "Automated and Flexible Coil Winding Robotic Framework",
    	author = "Michieletto, Stefano and Stival, Francesca and Castelli, Francesco and Pagello, Enrico",
    	booktitle = "ISR 2018; 50th International Symposium on Robotics",
    	pages = "1--4",
    	year = 2018,
    	organization = "VDE"
    }
    


2017

  1. Francesco Castelli, Stefano Michieletto, Stefano Ghidoni and Enrico Pagello. A machine learning-based visual servoing approach for fast robot control in industrial setting. International Journal of Advanced Robotic Systems 14(6), 2017. DOI BibTeX

    @article{Castelli2017,
    	abstract = "Industry 4.0 aims to make collaborative robotics accessible and effective inside factories. Human–robot interaction is enhanced by means of advanced perception systems which allow a flexible and reliable production. We are one of the contenders of a challenge with the intent of improve cooperation in industry. Within this competition, we developed a novel visual servoing system, based on a machine learning technique, for the automation of the winding of copper wire during the production of electric motors. Image-based visual servoing systems are often limited by the speed of the image processing module that runs at a frequency on the order of magnitude lower with respect to the robot control speed. In this article, a solution to this problem is proposed: the visual servoing function is synthesized using the Gaussian mixture model (GMM) machine learning system, which guarantees an extremely fast response. Issues related to data size reduction and collection of the data set needed to properly train the learner are discussed, and the performance of the proposed method is compared against the standard visual servoing algorithm used for training the GMM. The system has been developed and tested for a path following application on an aluminium bar to simulate the real stator teeth of a generic electric motor. Experimental results demonstrate that the proposed method is able to reproduce the visual servoing function with a minimal error while guaranteeing extremely high working frequency.",
    	author = "Castelli, Francesco and Michieletto, Stefano and Ghidoni, Stefano and Pagello, Enrico",
    	doi = "10.1177/1729881417738884",
    	issn = 17298814,
    	journal = "International Journal of Advanced Robotic Systems",
    	keywords = "Gaussian mixture model,Visual learning,computer vision,learning and adaptive systems,robot programming by demonstration,sensor-based control,visual control of robotic systems,visual servoing",
    	number = 6,
    	title = "{A machine learning-based visual servoing approach for fast robot control in industrial setting}",
    	volume = 14,
    	year = 2017
    }
    
  2. Francesca Stival, Stefano Michieletto and Enrico Pagello. How to Deploy a Wire with a Robotic Platform: Learning from Human Visual Demonstrations. Procedia Manufacturing 11:224–232, 2017. DOI BibTeX

    @article{Stival2017,
    	abstract = "In this paper, we address the problem of deploying a wire along a specific path selected by an unskilled user. The robot has to learn the selected path and pass a wire through the peg table by using the same tool. The main contribution regards the hybrid use of Cartesian positions provided by a learning procedure and joint positions obtained by inverse kinematics and motion planning. Some constraints are introduced to deal with non-rigid material without breaks or knots. We took into account a series of metrics to evaluate the robot learning capabilities, all of them over performed the targets.",
    	author = "Stival, Francesca and Michieletto, Stefano and Pagello, Enrico",
    	doi = "10.1016/j.promfg.2017.07.230",
    	issn = 23519789,
    	journal = "Procedia Manufacturing",
    	keywords = "Industry 4.0,Inverse Kinematics,Manipulators,Programming by demonstration,Robot learning",
    	pages = "224--232",
    	title = "{How to Deploy a Wire with a Robotic Platform: Learning from Human Visual Demonstrations}",
    	volume = 11,
    	year = 2017
    }
    
  3. Morris Antonello, Andrea Gobbi, Stefano Michieletto, Stefano Ghidoni and Emanuele Menegatti. A fully automatic hand-eye calibration system. In 2017 European Conference on Mobile Robots, ECMR 2017. 2017. DOI BibTeX

    @inproceedings{Antonello2017,
    	author = "Antonello, Morris and Gobbi, Andrea and Michieletto, Stefano and Ghidoni, Stefano and Menegatti, Emanuele",
    	booktitle = "2017 European Conference on Mobile Robots, ECMR 2017",
    	doi = "10.1109/ECMR.2017.8098681",
    	isbn = 9781538610961,
    	title = "{A fully automatic hand-eye calibration system}",
    	year = 2017
    }
    
  4. Stefano Michieletto, Francesca Stival, Francesco Castelli, Mony Khosravi, Alberto Landini, Stefano Ellero, Roberto Landò, Nicolò Boscolo, Stefano Tonello, Bogdan Varaticeanu, Paul Minciunescu and Enrico Pagello. Flexicoil: Flexible robotized coils winding for electric machines manufacturing industry. In ICRA workshop on Industry of the future: Collaborative, Connected, Cognitive. 2017. BibTeX

    @inproceedings{michieletto2017flexicoil,
    	title = "Flexicoil: Flexible robotized coils winding for electric machines manufacturing industry",
    	author = "Michieletto, Stefano and Stival, Francesca and Castelli, Francesco and Khosravi, Mony and Landini, Alberto and Ellero, Stefano and Landò, Roberto and Boscolo, Nicolò and Tonello, Stefano and Varaticeanu, Bogdan and Minciunescu, Paul and Pagello, Enrico",
    	booktitle = "ICRA workshop on Industry of the future: Collaborative, Connected, Cognitive",
    	year = 2017
    }
    
  5. Fabian Just, Özhan Özen, Stefano Tortora, Robert Riener and Georg Rauter. Feedforward model based arm weight compensation with the rehabilitation robot ARMin. In Rehabilitation Robotics (ICORR), 2017 International Conference on. 2017, 72–77. BibTeX

    @inproceedings{just2017feedforward,
    	title = "Feedforward model based arm weight compensation with the rehabilitation robot ARMin",
    	author = {Just, Fabian and {\"O}zen, {\"O}zhan and Tortora, Stefano and Riener, Robert and Rauter, Georg},
    	booktitle = "Rehabilitation Robotics (ICORR), 2017 International Conference on",
    	pages = "72--77",
    	year = 2017,
    	organization = "IEEE"
    }
    
  6. Yongheng Zhao, Marco Carraro, Matteo Munaro and Emanuele Menegatti. Robust multiple object tracking in RGB-D camera networks. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2017, 6625–6632. BibTeX

    @inproceedings{zhao2017robust,
    	title = "Robust multiple object tracking in RGB-D camera networks",
    	author = "Zhao, Yongheng and Carraro, Marco and Munaro, Matteo and Menegatti, Emanuele",
    	booktitle = "2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)",
    	pages = "6625--6632",
    	year = 2017,
    	organization = "IEEE"
    }
    


2016

  1. Marco Carraro, Matteo Munaro and Emanuele Menegatti. Skeleton estimation and tracking by means of depth data fusion from depth camera networks. Robotics and Autonomous Systems 110, October 2018. DOI BibTeX

    @article{article3,
    	author = "Carraro, Marco and Munaro, Matteo and Menegatti, Emanuele",
    	doi = "10.1016/j.robot.2018.09.009",
    	journal = "Robotics and Autonomous Systems",
    	month = 10,
    	title = "Skeleton estimation and tracking by means of depth data fusion from depth camera networks",
    	volume = 110,
    	year = 2018,
    	bdsk-url-1 = "http://dx.doi.org/10.1016/j.robot.2018.09.009"
    }
    
  2. Luca Tonin, Andrea Cimolato and Emanuele Menegatti. Do not Move! Entropy Driven Detection of Intentional Non-control During Online SMR-BCI Operations. Volume 15, pages 989–993, October 2017. DOI BibTeX

    @inbook{cite-key,
    	author = "Tonin, Luca and Cimolato, Andrea and Menegatti, Emanuele",
    	date = "2017/10/01",
    	date-added = "2019-01-22 13:17:38 +0000",
    	date-modified = "2019-01-22 13:17:38 +0000",
    	doi = "10.1007/978-3-319-46669-9{\_}160",
    	isbn = "978-3-319-46668-2",
    	journal = "Biosystems and Biorobotics",
    	month = 10,
    	pages = "989--993",
    	title = "Do not Move! Entropy Driven Detection of Intentional Non-control During Online SMR-BCI Operations",
    	ty = "CHAP",
    	volume = 15,
    	year = 2017,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-46669-9%7B%5C_%7D160"
    }
    
  3. Matteo Comin and Morris Antonello. On the comparison of regulatory sequences with multiple resolution Entropic Profiles. BMC Bioinformatics 17, December 2016. DOI BibTeX

    @article{article4,
    	author = "Comin, Matteo and Antonello, Morris",
    	doi = "10.1186/s12859-016-0980-2",
    	journal = "BMC Bioinformatics",
    	month = 12,
    	title = "On the comparison of regulatory sequences with multiple resolution Entropic Profiles",
    	volume = 17,
    	year = 2016,
    	bdsk-url-1 = "http://dx.doi.org/10.1186/s12859-016-0980-2"
    }
    
  4. Marco Carraro, Matteo Munaro and Emanuele Menegatti. A Powerful and Cost-Efficient Human Perception System for Camera Networks and Mobile Robotics. July 2016. DOI BibTeX

    @inproceedings{inproceedings0,
    	author = "Carraro, Marco and Munaro, Matteo and Menegatti, Emanuele",
    	doi = "10.1007/978-3-319-48036-7_35",
    	journal = "Advances in Intelligent Systems and Computing",
    	month = 07,
    	title = "A Powerful and Cost-Efficient Human Perception System for Camera Networks and Mobile Robotics",
    	volume = 531,
    	year = 2016,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-48036-7_35"
    }
    
  5. F Stival, S Michieletto and E Pagello. Online subject-independent modeling of sEMG signals for the motion of a single robot joint. In 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob). June 2016, 1110-1116. DOI BibTeX

    @inproceedings{7523780,
    	author = "F. Stival and S. Michieletto and E. Pagello",
    	booktitle = "2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob)",
    	title = "Online subject-independent modeling of sEMG signals for the motion of a single robot joint",
    	year = 2016,
    	volume = "",
    	number = "",
    	pages = "1110-1116",
    	keywords = "electromyography;Gaussian processes;medical robotics;medical signal processing;mixture models;motion control;regression analysis;trajectory control;wavelet transforms;online subject-independent modeling;sEMG signals;single robot joint motion;robotic devices;physiological human signals;robotic platform;multiuser interface;robot motion trajectory;regression technique;Gaussian mixture model;GMM;surface electromyography;wavelet transform;simulated hand robot;humanoid robot;human motion remapping;Electromyography;Robots;Wavelet transforms;Data models;Training;Adaptation models",
    	doi = "10.1109/BIOROB.2016.7523780",
    	issn = "2155-1782",
    	month = "June"
    }
    
  6. N Castaman, E Tosello and E Pagello. A Sampling-Based Tree Planner for Navigation Among Movable Obstacles. In Proceedings of ISR 2016: 47th International Symposium on Robotics. June 2016, 1-8. BibTeX

    @inproceedings{7559130,
    	author = "N. Castaman and E. Tosello and E. Pagello",
    	booktitle = "Proceedings of ISR 2016: 47st International Symposium on Robotics",
    	title = "A Sampling-Based Tree Planner for Navigation Among Movable Obstacles",
    	year = 2016,
    	volume = "",
    	number = "",
    	pages = "1-8",
    	keywords = "",
    	doi = "",
    	issn = "",
    	month = "June"
    }
    
  7. Marco Carraro, Matteo Munaro and Emanuele Menegatti. Cost-efficient RGB-D smart camera for people detection and tracking. Journal of Electronic Imaging 25:041007, April 2016. DOI BibTeX

    @article{article0,
    	author = "Carraro, Marco and Munaro, Matteo and Menegatti, Emanuele",
    	doi = "10.1117/1.JEI.25.4.041007",
    	journal = "Journal of Electronic Imaging",
    	month = 04,
    	pages = "041007",
    	title = "Cost-efficient RGB-D smart camera for people detection and tracking",
    	volume = 25,
    	year = 2016,
    	bdsk-url-1 = "http://dx.doi.org/10.1117/1.JEI.25.4.041007"
    }
    
  8. Filippo Basso, Riccardo Levorato, Matteo Munaro and Emanuele Menegatti. A Distributed Calibration Algorithm for Color and Range Camera Networks. Volume 625, February 2016. DOI BibTeX

    @inbook{inbook1,
    	author = "Basso, Filippo and Levorato, Riccardo and Munaro, Matteo and Menegatti, Emanuele",
    	doi = "10.1007/978-3-319-26054-9_16",
    	month = 02,
    	title = "A Distributed Calibration Algorithm for Color and Range Camera Networks",
    	volume = 625,
    	year = 2016,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-26054-9_16"
    }
    
  9. Stefano Michieletto, Francesca Stival, Francesco Castelli and Enrico Pagello. Teaching door assembly tasks in uncertain environment. In 47th International Symposium on Robotics, ISR 2016. 2016, 638–644. BibTeX

    @conference{11577_3256688,
    	abstract = "The paper describes our experience in the benchmarking phase of the European Robotics Challenges project. The main focus is on the original solution proposed for solving a door assembly task. The proposal has to deal with tolerances in the door and module positions, never seen before doors, fast and usable human-machine interfaces, legacy hardware in industrial scenarios, and valuable results in benchmarking activities.",
    	author = "Michieletto, Stefano and Stival, Francesca and Castelli, Francesco and Pagello, Enrico",
    	booktitle = "47th International Symposium on Robotics, ISR 2016",
    	keywords = "Artificial Intelligence; Human-Computer Interactio",
    	pages = "638--644",
    	publisher = "VDE Verlag GmbH",
    	title = "{Teaching door abembly tasks in uncertain environment}",
    	year = 2016
    }
    
  10. Francesca Stival, Stefano Michieletto and Enrico Pagello. Online subject-independent modeling of sEMG signals for the motion of a single robot joint. In Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics 2016-July. 2016, 1110–1116. DOI BibTeX

    @inproceedings{Stival2016,
    	author = "Stival, Francesca and Michieletto, Stefano and Pagello, Enrico",
    	booktitle = "Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics",
    	doi = "10.1109/BIOROB.2016.7523780",
    	isbn = 9781509032877,
    	issn = 21551774,
    	pages = "1110--1116",
    	title = "{Online subject-independent modeling of sEMG signals for the motion of a single robot joint}",
    	volume = "2016-July",
    	year = 2016
    }
    
  11. Elisa Tosello, Stefano Michieletto and Enrico Pagello. Training master students to program both virtual and real autonomous robots in a teaching laboratory. In IEEE Global Engineering Education Conference, EDUCON 10-13-Apri. 2016, 621–630. DOI BibTeX

    @inproceedings{Tosello2016,
    	author = "Tosello, Elisa and Michieletto, Stefano and Pagello, Enrico",
    	booktitle = "IEEE Global Engineering Education Conference, EDUCON",
    	doi = "10.1109/EDUCON.2016.7474615",
    	isbn = 9781467386333,
    	issn = 21659567,
    	keywords = "Aldebaran NAO,Constructivism,Educational Robotics,Humanoid,Lego Mindstorms NXT,Project-Based Learning,ROS,Vstone Robovie-X",
    	pages = "621--630",
    	title = "{Training master students to program both virtual and real autonomous robots in a teaching laboratory}",
    	volume = "10-13-Apri",
    	year = 2016
    }
    
  12. Stefano Michieletto, Elisa Tosello, Enrico Pagello and Emanuele Menegatti. Teaching humanoid robotics by means of human teleoperation through RGB-D sensors. Robotics and Autonomous Systems 75:671–678, 2016. DOI BibTeX

    @article{Michieletto2016,
    	abstract = "This paper presents a graduate course project on humanoid robotics offered by the University of Padova. The target is to safely lift an object by teleoperating a small humanoid. Students have to map human limbs into robot joints, guarantee the robot stability during the motion, and teleoperate the robot to perform the correct movement. We introduce the following innovative aspects with respect to classical robotic classes: i) the use of humanoid robots as teaching tools; ii) the simplification of the stable locomotion problem by exploiting the potential of teleoperation; iii) the adoption of a Project-Based Learning constructivist approach as teaching methodology. The learning objectives of both course and project are introduced and compared with the students' background. Design and constraints students have to deal with are reported, together with the amount of time they and their instructors dedicated to solve tasks. A set of evaluation results are provided in order to validate the authors' purpose, including the students' personal feedback. A discussion about possible future improvements is reported, hoping to encourage further spread of educational robotics in schools at all levels.",
    	author = "Michieletto, Stefano and Tosello, Elisa and Pagello, Enrico and Menegatti, Emanuele",
    	doi = "10.1016/j.robot.2015.09.023",
    	isbn = "0921-8890",
    	issn = 09218890,
    	journal = "Robotics and Autonomous Systems",
    	keywords = "Constructivism,Educational robotics,Humanoid,Kinect,NAO,Project-Based Learning,ROS,Robovie-X,Teleoperation",
    	pages = "671--678",
    	title = "{Teaching humanoid robotics by means of human teleoperation through RGB-D sensors}",
    	volume = 75,
    	year = 2016
    }
    
  13. Roberto Bortoletto, Stefano Michieletto, Enrico Pagello and Davide Piovesan. Human muscle-tendon stiffness estimation during normal gait cycle based on Gaussian mixture model. In Advances in Intelligent Systems and Computing 302. 2016, 1185–1197. DOI BibTeX

    @inproceedings{Bortoletto2016,
    	author = "Bortoletto, Roberto and Michieletto, Stefano and Pagello, Enrico and Piovesan, Davide",
    	booktitle = "Advances in Intelligent Systems and Computing",
    	doi = "10.1007/978-3-319-08338-4_86",
    	isbn = 9783319083377,
    	issn = 21945357,
    	keywords = "Gait cycle,Gaussian mixture model,Muscle stiffness",
    	pages = "1185--1197",
    	title = "{Human muscle-tendon stiffness estimation during normal gait cycle based on Gaussian mixture model}",
    	volume = 302,
    	year = 2016
    }
    
  14. Stefano Michieletto, Francesca Stival, Francesco Castelli and Enrico Pagello. Automated and Flexible Coil Winding Robotic Framework. In ISR 2018; 50th International Symposium on Robotics. 2016, 1–4. BibTeX

    @inproceedings{michieletto2016automated,
    	title = "Automated and Flexible Coil Winding Robotic Framework",
    	author = "Michieletto, Stefano and Stival, Francesca and Castelli, Francesco and Pagello, Enrico",
    	booktitle = "ISR 2018; 50th International Symposium on Robotics",
    	pages = "1--4",
    	year = 2016,
    	organization = "VDE"
    }
    
  15. Matteo Munaro, Radu Rusu and Emanuele Menegatti. 3D robot perception with Point Cloud Library. Robotics and Autonomous Systems 78, January 2016. DOI BibTeX

    @article{article1,
    	author = "Munaro, Matteo and Rusu, Radu and Menegatti, Emanuele",
    	doi = "10.1016/j.robot.2015.12.008",
    	journal = "Robotics and Autonomous Systems",
    	month = 01,
    	title = "3D robot perception with Point Cloud Library",
    	volume = 78,
    	year = 2016,
    	bdsk-url-1 = "http://dx.doi.org/10.1016/j.robot.2015.12.008"
    }
    
  16. Matteo Munaro, Filippo Basso and Emanuele Menegatti. OpenPTrack: Open source multi-camera calibration and people tracking for RGB-D camera networks. Robotics and Autonomous Systems 75, November 2015. DOI BibTeX

    @article{article2,
    	author = "Munaro, Matteo and Basso, Filippo and Menegatti, Emanuele",
    	doi = "10.1016/j.robot.2015.10.004",
    	journal = "Robotics and Autonomous Systems",
    	month = 11,
    	title = "OpenPTrack: Open source multi-camera calibration and people tracking for RGB-D camera networks",
    	volume = 75,
    	year = 2015,
    	bdsk-url-1 = "http://dx.doi.org/10.1016/j.robot.2015.10.004"
    }
    
  17. Morris Antonello, Matteo Munaro and Emanuele Menegatti. Efficient Measurement of Fibre Orientation for Mapping Carbon Fibre Parts with a Robotic System. July 2014. BibTeX

    @inproceedings{inproceedings1,
    	author = "Antonello, Morris and Munaro, Matteo and Menegatti, Emanuele",
    	month = 07,
    	title = "Efficient Measurement of Fibre Orientation for Mapping Carbon Fibre Parts with a Robotic System",
    	year = 2014
    }
    


2015

  1. Riccardo Valentini, Stefano Michieletto, Federico Toffano, Fabiola Spolaor, Zimi Sawacha and Enrico Pagello. EMG signal analysis for online estimation of a joint angle validated through human kinematic. September 2015, S13-S14. DOI BibTeX

    @inproceedings{article3,
    	author = "Valentini, Riccardo and Michieletto, Stefano and Toffano, Federico and Spolaor, Fabiola and Sawacha, Zimi and Pagello, Enrico",
    	doi = "10.1016/j.gaitpost.2015.07.034",
    	journal = "Gait & Posture",
    	month = 09,
    	pages = "S13-S14",
    	title = "EMG signal analysis for online estimation of a joint angle validated through human kinematic",
    	volume = 42,
    	year = 2015,
    	bdsk-url-1 = "http://dx.doi.org/10.1016/j.gaitpost.2015.07.034"
    }
    
  2. Matteo Munaro, Edmond Wai Yan So, Stefano Tonello and Emanuele Menegatti. Efficient Completeness Inspection Using Real-Time 3D Color Reconstruction with a Dual-Laser Triangulation System. Advances in Computer Vision and Pattern Recognition 48:201-225, September 2015. DOI BibTeX

    @article{article1,
    	author = "Munaro, Matteo and Wai Yan So, Edmond and Tonello, Stefano and Menegatti, Emanuele",
    	doi = "10.1007/978-1-4471-6741-9_7",
    	journal = "Advances in Computer Vision and Pattern Recognition",
    	month = 09,
    	pages = "201-225",
    	title = "Efficient Completeness Inspection Using Real-Time 3D Color Reconstruction with a Dual-Laser Triangulation System",
    	volume = 48,
    	year = 2015,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-1-4471-6741-9_7"
    }
    
  3. M Antonello, S Ghidoni and E Menegatti. Autonomous robotic system for thermographic detection of defects in upper layers of carbon fiber reinforced polymers. In 2015 IEEE International Conference on Automation Science and Engineering (CASE). August 2015, 634-639. DOI BibTeX

    @inproceedings{7294149,
    	author = "M. Antonello and S. Ghidoni and E. Menegatti",
    	booktitle = "2015 IEEE International Conference on Automation Science and Engineering (CASE)",
    	title = "Autonomous robotic system for thermographic detection of defects in upper layers of carbon fiber reinforced polymers",
    	year = 2015,
    	volume = "",
    	number = "",
    	pages = "634-639",
    	keywords = "blades;carbon fibre reinforced plastics;infrared imaging;robots;autonomous robotic system;thermographic defect detection;carbon fiber reinforced polymers;pulsed phase thermography;car side blades;unsharp masking;Carbon;Brightness;Cameras;Bonding;Noise;Robot vision systems",
    	doi = "10.1109/CoASE.2015.7294149",
    	issn = "2161-8070",
    	month = "Aug"
    }
    
  4. Loris Nanni, Matteo Munaro, Stefano Ghidoni, Emanuele Menegatti and Sheryl Brahnam. Ensemble of Different Approaches for a Reliable Person Re-identification System. Applied Computing and Informatics 24, March 2015. DOI BibTeX

    @article{article0,
    	author = "Nanni, Loris and Munaro, Matteo and Ghidoni, Stefano and Menegatti, Emanuele and Brahnam, Sheryl",
    	doi = "10.1016/j.aci.2015.02.002",
    	journal = "Applied Computing and Informatics",
    	month = 03,
    	title = "Ensemble of Different Approaches for a Reliable Person Re-identification System",
    	volume = 24,
    	year = 2015,
    	bdsk-url-1 = "http://dx.doi.org/10.1016/j.aci.2015.02.002"
    }
    
  5. Riccardo Valentini, Stefano Michieletto, Fabiola Spolaor, Zimi Sawacha and Enrico Pagello. Processing of sEMG signals for online motion of a single robot joint through GMM modelization. In IEEE International Conference on Rehabilitation Robotics 2015-Septe. 2015, 943–949. DOI BibTeX

    @inproceedings{Valentini2015,
    	abstract = "This paper evaluates the use of Gaussian Mixture Model (GMM) trained through Electromyography (EMG) signals to online estimate the bending angle of a single human joint. The parameters involved in the evaluation are the number of Gaussian components, the channel used for model, the feature extraction method, and the size of the training set. The feature extraction is performed through Wavelet Transform by investigating several kind of configuration. Two set of experimental data are collected to validate the proposed framework from 6 different healthy subjects. Trained GMMs are validated by comparing the joint angle estimated through Gaussian Mixture Regression (GMR) with the one measured on new unseen data. The goodness of the estimated date are evaluated by means of Normalized Mean Square Error (NMSE), while the time performances of the retrieval system are measured at each phase in order to analyze possible critical situations. Achieved results show that our framework is able to obtain high performances in both accuracy and computation time. The whole procedure is tested on a real humanoid robot by remapping the human motion to the robotic platform in order to verify the proper execution of the original movement.",
    	author = "Valentini, Riccardo and Michieletto, Stefano and Spolaor, Fabiola and Sawacha, Zimi and Pagello, Enrico",
    	booktitle = "IEEE International Conference on Rehabilitation Robotics",
    	doi = "10.1109/ICORR.2015.7281325",
    	isbn = 9781479918072,
    	issn = 19457901,
    	pages = "943--949",
    	title = "{Processing of sEMG signals for online motion of a single robot joint through GMM modelization}",
    	volume = "2015-Septe",
    	year = 2015
    }
    
  6. Francesca Stival, Stefano Michieletto and Enrico Pagello. Subject-Independent Modeling of sEMG Signals for the Motion of a Single Robot Joint. In Workshop of Robotics: Science and Systems 2015 on Combining AI Reasoning and Cognitive Science with Robotics. 2015. URL BibTeX

    @conference{11577_3168471,
    	author = "Stival, Francesca and Michieletto, Stefano and Pagello, Enrico",
    	booktitle = "Workshop of Robotics: Science and Systems 2015 on Combining AI Reasoning and Cognitive Science with Robotics",
    	title = "{Subject-Independent Modeling of sEMG Signals for the Motion of a Single Robot Joint}",
    	url = "http://cogrobo.sabanciuniv.edu/wp-content/uploads/AI-CogSci-Robo{\_}2015{\_}poster{\_}1.pdf",
    	year = 2015
    }
    
  7. A Bisson, Stefano Michieletto, V Ferrara, Fabrizio Romanelli and Emanuele Menegatti. An intuitive teleoperation of industrial robots: Approach manipulators by using visual tracking over a distributed system. pages 243-261, January 2015. DOI BibTeX

    @inbook{article4,
    	author = "Bisson, A and Michieletto, Stefano and Ferrara, V and Romanelli, Fabrizio and Menegatti, Emanuele",
    	doi = "10.4018/978-1-4666-8693-9.ch009",
    	month = 01,
    	pages = "243-261",
    	title = "An intuitive teleoperation of industrial robots: Approach manipulators by using visual tracking over a distributed system",
    	year = 2015,
    	bdsk-url-1 = "http://dx.doi.org/10.4018/978-1-4666-8693-9.ch009"
    }
    
  8. Matteo Comin and Morris Antonello. Fast Alignment-free Comparison for Regulatory Sequences using Multiple Resolution Entropic Profiles. In BIOINFORMATICS. 2015. BibTeX

    @inproceedings{Comin2015FastAC,
    	title = "Fast Alignment-free Comparison for Regulatory Sequences using Multiple Resolution Entropic Profiles",
    	author = "Matteo Comin and Morris Antonello",
    	booktitle = "BIOINFORMATICS",
    	year = 2015
    }
    
  9. Marco Carraro, Morris Antonello, Luca Tonin and Emanuele Menegatti. An Open Source Robotic Platform for Ambient Assisted Living. In AIRO@AI*IA. 2015. BibTeX

    @inproceedings{Carraro2015AnOS,
    	title = "An Open Source Robotic Platform for Ambient Assisted Living",
    	author = "Marco Carraro and Morris Antonello and Luca Tonin and Emanuele Menegatti",
    	booktitle = "AIRO@AI*IA",
    	year = 2015
    }
    
  10. Matteo Lora, Stefano Ghidoni, Matteo Munaro and Emanuele Menegatti. A geometric approach to multiple viewpoint human body pose estimation. 2015 European Conference on Mobile Robots (ECMR), pages 1-6, 2015. BibTeX

    @article{Lora2015AGA,
    	title = "A geometric approach to multiple viewpoint human body pose estimation",
    	author = "Matteo Lora and Stefano Ghidoni and Matteo Munaro and Emanuele Menegatti",
    	journal = "2015 European Conference on Mobile Robots (ECMR)",
    	year = 2015,
    	pages = "1-6"
    }
    
  11. S Zennaro, Matteo Munaro, Simone Milani, Pietro Zanuttigh, A Bernardi, Stefano Ghidoni and Emanuele Menegatti. Performance evaluation of the 1st and 2nd generation Kinect for multimedia applications. 2015 IEEE International Conference on Multimedia and Expo (ICME), pages 1-6, 2015. BibTeX

    @article{Zennaro2015PerformanceEO,
    	title = "Performance evaluation of the 1st and 2nd generation Kinect for multimedia applications",
    	author = "S. Zennaro and Matteo Munaro and Simone Milani and Pietro Zanuttigh and A. Bernardi and Stefano Ghidoni and Emanuele Menegatti",
    	journal = "2015 IEEE International Conference on Multimedia and Expo (ICME)",
    	year = 2015,
    	pages = "1-6"
    }
    


2014

  1. Matteo Munaro and Emanuele Menegatti. Fast RGB-D people tracking for service robots. Autonomous Robots 37(3):227–242, October 2014. URL, DOI BibTeX

    @article{Munaro2014_2,
    	author = "Munaro, Matteo and Menegatti, Emanuele",
    	title = "Fast RGB-D people tracking for service robots",
    	journal = "Autonomous Robots",
    	year = 2014,
    	month = "Oct",
    	day = 01,
    	volume = 37,
    	number = 3,
    	pages = "227--242",
    	abstract = "Service robots have to robustly follow and interact with humans. In this paper, we propose a very fast multi-people tracking algorithm designed to be applied on mobile service robots. Our approach exploits RGB-D data and can run in real-time at very high frame rate on a standard laptop without the need for a GPU implementation. It also features a novel depth-based sub-clustering method which allows to detect people within groups or even standing near walls. Moreover, for limiting drifts and track ID switches, an online learning appearance classifier is proposed featuring a three-term joint likelihood. We compared the performances of our system with a number of state-of-the-art tracking algorithms on two public datasets acquired with three static Kinects and a moving stereo pair, respectively. In order to validate the 3D accuracy of our system, we created a new dataset in which RGB-D data are acquired by a moving robot. We made publicly available this dataset which is not only annotated by hand, but the ground-truth position of people and robot are acquired with a motion capture system in order to evaluate tracking accuracy and precision in 3D coordinates. Results of experiments on these datasets are presented, showing that, even without the need for a GPU, our approach achieves state-of-the-art accuracy and superior speed.",
    	issn = "1573-7527",
    	doi = "10.1007/s10514-014-9385-0",
    	url = "https://doi.org/10.1007/s10514-014-9385-0"
    }
    
  2. Nicolo Boscolo, Elisa Tosello, Stefano Tonello, Matteo Finotto, Roberto Bortoletto and Emanuele Menegatti. A Constraint Based Motion Optimization System for Quality Inspection Process Improvement. October 2014. DOI BibTeX

    @inproceedings{Constraint,
    	author = "Boscolo, Nicolo and Tosello, Elisa and Tonello, Stefano and Finotto, Matteo and Bortoletto, Roberto and Menegatti, Emanuele",
    	doi = "10.1007/978-3-319-11900-7_46",
    	month = 10,
    	title = "A Constraint Based Motion Optimization System for Quality Inspection Process Improvement",
    	year = 2014,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-11900-7_46"
    }
    
  3. Roberto Bortoletto, Enrico Pagello and D Piovesan. Lower Limb Stiffness Estimation during Running: The Effect of Using Kinematic Constraints in Muscle Force Optimization Algorithms. October 2014. DOI BibTeX

    @inproceedings{limbs,
    	author = "Bortoletto, Roberto and Pagello, Enrico and Piovesan, D",
    	doi = "10.1007/978-3-319-11900-7_31",
    	month = 10,
    	title = "Lower Limb Stiffness Estimation during Running: The Effect of Using Kinematic Constraints in Muscle Force Optimization Algorithms",
    	volume = 8810,
    	year = 2014,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-11900-7_31"
    }
    
  4. Riccardo Levorato. Probabilistic 2D Acoustic Source Localization Using Direction of Arrivals in Robot Sensor Networks. October 2014. DOI BibTeX

    @inproceedings{Acoustic,
    	author = "Levorato, Riccardo",
    	doi = "10.1007/978-3-319-11900-7_40",
    	month = 10,
    	title = "Probabilistic 2D Acoustic Source Localization Using Direction of Arrivals in Robot Sensor Networks",
    	year = 2014,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-11900-7_40"
    }
    
  5. Matteo Munaro, Morris Antonello, Michele Moro, Carlo Ferrari, Giorgio Clemente, Enrico Pagello and Emanuele Menegatti. FibreMap: Automatic Mapping of Fibre Orientation for Draping of Carbon Fibre Parts. July 2014. BibTeX

    @inproceedings{FiberMap,
    	author = "Munaro, Matteo and Antonello, Morris and Moro, Michele and Ferrari, Carlo and Clemente, Giorgio and Pagello, Enrico and Menegatti, Emanuele",
    	year = 2014,
    	month = 07,
    	pages = "",
    	title = "FibreMap: Automatic Mapping of Fibre Orientation for Draping of Carbon Fibre Parts"
    }
    
  6. Zhengjie Fan, Elisa Tosello, Michele Palmia and Enrico Pagello. Applying Semantic Web Technologies to Multi-Robot Coordination. July 2014. BibTeX

    @inproceedings{Zhengiie,
    	author = "Fan, Zhengjie and Tosello, Elisa and Palmia, Michele and Pagello, Enrico",
    	month = 07,
    	title = "Applying Semantic Web Technologies to Multi-Robot Coordination",
    	year = 2014
    }
    
  7. Stefano Michieletto, Luca Tonin, Mauro Antonello, Roberto Bortoletto, Fabiola Spolaor, Enrico Pagello and Emanuele Menegatti. GMM-based Single-joint Angle Estimation using EMG signals. July 2014. DOI BibTeX

    @inproceedings{single,
    	author = "Michieletto, Stefano and Tonin, Luca and Antonello, Mauro and Bortoletto, Roberto and Spolaor, Fabiola and Pagello, Enrico and Menegatti, Emanuele",
    	doi = "10.1007/978-3-319-08338-4_85",
    	month = 07,
    	title = "GMM-based Single-joint Angle Estimation using EMG signals",
    	volume = 302,
    	year = 2014,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-08338-4_85"
    }
    
  8. Roberto Bortoletto, Enrico Pagello and D Piovesan. How different human muscle models affect the estimation of lower limb joint stiffness during running. July 2014. BibTeX

    @inproceedings{Limbs2,
    	author = "Bortoletto, Roberto and Pagello, Enrico and Piovesan, D",
    	month = 07,
    	title = "How different human muscle models affect the estimation of lower limb joint stiffness during running",
    	year = 2014
    }
    
  9. Roberto Bortoletto, Stefano Michieletto, Enrico Pagello and D Piovesan. Human Muscle-Tendon Stiffness Estimation During Normal Gait Cycle Based on Gaussian Mixture Model. Volume 302, pages 1185–1197, July 2014. DOI BibTeX

    @inbook{BUBU,
    	author = "Bortoletto, Roberto and Michieletto, Stefano and Pagello, Enrico and Piovesan, D",
    	doi = "10.1007/978-3-319-08338-4_86",
    	month = 07,
    	pages = "pp 1185-1197",
    	title = "Human Muscle-Tendon Stiffness Estimation During Normal Gait Cycle Based on Gaussian Mixture Model",
    	volume = 302,
    	year = 2014,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-319-08338-4_86"
    }
    
  10. Filippo Basso, Riccardo Levorato and Emanuele Menegatti. Online Calibration for Networks of Cameras and Depth Sensors. June 2014. BibTeX

    @inproceedings{Calibration,
    	author = "Basso, Filippo and Levorato, Riccardo and Menegatti, Emanuele",
    	journal = "OMNIVIS 2014",
    	month = 06,
    	title = "Online Calibration for Networks of Cameras and Depth Sensors",
    	year = 2014
    }
    
  11. M Munaro, S Ghidoni, D T Dizmen and E Menegatti. A feature-based approach to people re-identification using skeleton keypoints. In 2014 IEEE International Conference on Robotics and Automation (ICRA). May 2014, 5644-5651. DOI BibTeX

    @inproceedings{6907689,
    	author = "M. Munaro and S. Ghidoni and D. T. Dizmen and E. Menegatti",
    	booktitle = "2014 IEEE International Conference on Robotics and Automation (ICRA)",
    	title = "A feature-based approach to people re-identification using skeleton keypoints",
    	year = 2014,
    	volume = "",
    	number = "",
    	pages = "5644-5651",
    	keywords = "image colour analysis;image recognition;image resolution;mobile robots;video surveillance;feature-based approach;people re-identification;skeleton keypoints;skeletal information;skeleton joints;compact feature-based signature;visible joint;state-of-the-art 2D feature descriptor;3D feature descriptor;public datasets;RGB-D sensor;public video surveillance dataset;resolution images;recognition accuracy;mobile robotics;Joints;Three-dimensional displays;Target tracking;Robots;Testing;Training",
    	doi = "10.1109/ICRA.2014.6907689",
    	issn = "1050-4729",
    	month = "May"
    }
    
  12. M Munaro, A Basso, A Fossati, L Van Gool and E Menegatti. 3D reconstruction of freely moving persons for re-identification with a depth sensor. In 2014 IEEE International Conference on Robotics and Automation (ICRA). May 2014, 4512-4519. DOI BibTeX

    @inproceedings{6907518,
    	author = "M. Munaro and A. Basso and A. Fossati and L. Van Gool and E. Menegatti",
    	booktitle = "2014 IEEE International Conference on Robotics and Automation (ICRA)",
    	title = "3D reconstruction of freely moving persons for re-identification with a depth sensor",
    	year = 2014,
    	volume = "",
    	number = "",
    	pages = "4512-4519",
    	keywords = "bone;cameras;image classification;image colour analysis;image sensors;object tracking;orthopaedics;RGB-D reidentification;skeleton feature descriptor;warped point cloud;skeletal tracking algorithm;long-term person reidentification;consumer depth sensor;3D freely moving person reconstruction;Three-dimensional displays;Training;Standards;Joints;Shape;Solid modeling",
    	doi = "10.1109/ICRA.2014.6907518",
    	issn = "1050-4729",
    	month = "May"
    }
    
  13. F Basso, A Pretto and E Menegatti. Unsupervised intrinsic and extrinsic calibration of a camera-depth sensor couple. In 2014 IEEE International Conference on Robotics and Automation (ICRA). May 2014, 6244-6249. DOI BibTeX

    @inproceedings{6907780,
    	author = "F. Basso and A. Pretto and E. Menegatti",
    	booktitle = "2014 IEEE International Conference on Robotics and Automation (ICRA)",
    	title = "Unsupervised intrinsic and extrinsic calibration of a camera-depth sensor couple",
    	year = 2014,
    	volume = "",
    	number = "",
    	pages = "6244-6249",
    	keywords = "calibration;cameras;estimation theory;image sensors;parameter estimation;reliability;unsupervised intrinsic calibration;unsupervised extrinsic calibration;camera-depth sensor couple;availability;RGB camera;Microsoft Kinect;robot;extrinsic parameter estimation;intrinsic parameter estimation;3D structure estimation;Robot sensing systems;Calibration;Three-dimensional displays;Cameras;Estimation;Systematics",
    	doi = "10.1109/ICRA.2014.6907780",
    	issn = "1050-4729",
    	month = "May"
    }
    
  14. Elisa Tosello, Roberto Bortoletto, Stefano Michieletto, Enrico Pagello and Emanuele Menegatti. An Integrated System to approach the Programming of Humanoid Robotics. In Proc. of Workshops of 13th Intelligent Autonomous Systems Conference. 2014, 93–100. URL BibTeX

    @conference{11577_2835795,
    	abstract = "This paper describes a set of laboratory experiences focused on humanoid robots offered at the University of Padua. Instructors developed an integrated system through which students can work with robots. The aim is to improve the educational experience introducing a new learning tool, namely a humanoid robot, and the Robots Operating System (ROS) in a constructivist framework. This approach to robotics teaching lets students exploiting up-to-date robotic technologies and to deal with multidisciplinary problems, applying a scientic approach. By using humanoid robots, students are able to compare human movements to robot motion. The comparison brings out human/robot similarities, pushing students to solve complex motion problems in a more natural way while discovering robot limitations. In this paper, the learning objectives of the project, and the tools used by the students are presented. A set of evaluation results are provided in order to validate the authors' purpose. Finally, a discussion about designed experiences and possible future improvements is reported, hoping to encourage further spread of educational robotics in schools at all levels.",
    	author = "Elisa, Tosello and Roberto, Bortoletto and Stefano, Michieletto and Enrico, Pagello and Emanuele, Menegatti",
    	booktitle = "Proc. of Workshops of 13th Intelligent Autonomous Systems Conference",
    	keywords = "Simulation; Humanoid Robots; Teaching Robotics; RO",
    	pages = "93--100",
    	publisher = "IT+Robotics srl",
    	title = "{An Integrated System to approach the Programming of Humanoid Robotics}",
    	url = "http://www.terecop.eu/TRTWR-RIE2014/files/00{\_}WFr1/00{\_}WFr1{\_}12.pdf",
    	year = 2014
    }
    
  15. Stefano Michieletto, Elisa Tosello, Fabrizio Romanelli, Valentina Ferrara and Emanuele Menegatti. ROS-I Interface for COMAU Robots. In Simulation, Modeling, and Programming for Autonomous Robots 8810. 2014, 243–254. DOI BibTeX

    @conference{11577_3156335,
    	abstract = "The following paper presents the ROS-I interface developed to control Comau manipulators. Initially, the Comau controller allowed users to command a real robot thanks to motion primitives formulated through a Comau motion planning library. Now, either a ROS or a non ROS -compliant platform can move either a real or a virtual Comau robot using any motion planning library. Comau modules have been wrapped within ROS and a virtual model of a Comau robot has been created. The manufacturer controller has been innovatively used to drive both the real and the simulated automata.",
    	author = "Stefano, Michieletto and Elisa, Tosello and Fabrizio, Romanelli and Valentina, Ferrara and Emanuele, Menegatti",
    	booktitle = "Simulation, Modeling, and Programming for Autonomous Robots",
    	doi = "10.1007/978-3-319-11900-7_21",
    	pages = "243--254",
    	publisher = "Davide Brugali, Jan F. Broenink, Torsten Kroeger, Bruce A. MacDonald",
    	title = "{ROS-I Interface for COMAU Robots}",
    	volume = 8810,
    	year = 2014
    }
    
  16. Stefano Ghidoni, Salvatore M Anzalone, Matteo Munaro, Stefano Michieletto and Emanuele Menegatti. A distributed perception infrastructure for robot assisted living. Robotics and Autonomous Systems 62(9):1316–1328, 2014. DOI BibTeX

    @article{Ghidoni2014,
    	abstract = "This paper presents an ambient intelligence system designed for assisted living. The system processes the audio and video data acquired from multiple sensors spread in the environment to automatically detect dangerous events and generate automatic warning messages. The paper presents the distributed perception infrastructure that has been implemented by means of an open-source software middleware called NMM. Different processing nodes have been developed which can cooperate to extract high level information about the environment. Examples of implemented nodes running algorithms for people detection or face recognition are presented. Experiments on novel algorithms for people fall detection and sound classification and localization are discussed. Eventually, we present successful experiments in two test bed scenarios. {\textcopyright} 2014 Elsevier B.V. All rights reserved.",
    	author = "Ghidoni, Stefano and Anzalone, Salvatore M. and Munaro, Matteo and Michieletto, Stefano and Menegatti, Emanuele",
    	doi = "10.1016/j.robot.2014.03.022",
    	issn = 09218890,
    	journal = "Robotics and Autonomous Systems",
    	keywords = "Ambient intelligence,Assisted living,Autonomous robots,Camera network,Distributed sensing,Intelligent autonomous systems",
    	number = 9,
    	pages = "1316--1328",
    	title = "{A distributed perception infrastructure for robot assisted living}",
    	volume = 62,
    	year = 2014
    }
    
  17. Stefano Michieletto, Stefano Ghidoni, Enrico Pagello, Michele Moro and Emanuele Menegatti. Why Teach Robotics Using ROS? Journal of Automation, Mobile Robotics & Intelligent Systems 8(1):60–68, 2014. URL BibTeX

    @article{Michieletto2014,
    	abstract = "This paper focuses on the key role played by the adoption of a framework in teaching robotics with a computer science approach in the master in Computer Engineering. The framework adopted is the Robot Operating System (ROS), which is becoming a standard de facto inside the robotics community. The educational activities proposed in this paper are based on a constructionist approach. The Mindstorms NXT robot kit is adopted to trigger the learning challenge. The ROS framework is exploited to drive the students programming methodology during the laboratory activities and to allow students to exercise with the major computer programming paradigms and the best programming practices. The major robotics topics students are involved with are: acquiring data from sensors, connecting sensors to the robot, and navigate the robot to reach the final goal. The positive effects given by this approach are highlighted in this paper by comparing the work recently produced by students with the work produced in the previous years in which ROS was not yet adopted and many different software tools and languages were used. The results of a questionnaire are reported showing that we achieved the didactical objectives we expected as instructors. [ABSTRACT FROM AUTHOR]",
    	author = "Michieletto, Stefano and Ghidoni, Stefano and Pagello, Enrico and Moro, Michele and Menegatti, Emanuele",
    	issn = 18978649,
    	journal = "Journal of Automation, Mobile Robotics {\&} Intelligent Systems",
    	keywords = "COMPUTER engineering,COMPUTER programming,COMPUTER science,COMPUTER software,Educational robotics,LEGO NXT robot,ROBOTICS in education,ROS,teaching robotics",
    	number = 1,
    	pages = "60--68",
    	title = "{WHY TEACH ROBOTICS USING ROS?}",
    	url = "10.14313/JAMRIS{\_}1-2014/8{\%}5Cnhttp://search.ebscohost.com/login.aspx?direct=true{\&}db=a9h{\&}AN=94617426{\&}lang=es{\&}site=ehost-live",
    	volume = 8,
    	year = 2014
    }
    
  18. Elisa Tosello, Stefano Michieletto, Andrea Bisson and Enrico Pagello. A Learning from Demonstration Framework for Manipulation Tasks. In ISR/Robotik 2014; 41st International Symposium on Robotics. 2014, 1–7. BibTeX

    @conference{11577_3156336,
    	abstract = "This paper presents a Robot Learning from Demonstration (RLfD) framework for teaching manipulation tasks in an industrial environment: the system is able to learn a task performed by a human demonstrator and reproduce it through a manipulator robot. An RGB-D sensor acquires the scene (human in action); a skeleton tracking algorithm extracts the useful information from the images acquired (positions and orientations of skeleton joints); and this information is given as input to the motion re-targeting system that remaps the skeleton joints into the manipulator ones. After the remapping, a model for the robot motion controller is retrieved by applying first a Gaussian Mixture Model (GMM) and then a Gaussian Mixture Regression (GMR) on the collected data. Two types of controller are modeled: a position controller and a velocity one. The former was presented in [10] inclusive of simulation tests, and here it has been upgraded extended the proves to a real robot. The latter is proposed for the first time in this work and tested both in simulation and with the real robot. Experiments were performed using a Comau Smart5 SiX manipulator robot and let to show a comparison between the two controllers starting from natural human demonstrations.",
    	author = "Elisa, Tosello and Stefano, Michieletto and Andrea, Bisson and Enrico, Pagello",
    	booktitle = "ISR/Robotik 2014; 41st International Symposium on Robotics; Proceedings of",
    	pages = "1--7",
    	title = "{A Learning from Demonstration Framework for Manipulation Tasks}",
    	year = 2014
    }
    
  19. G Pozzato, S Michieletto, E Menegatti, F Dominio, G Marin, L Minto, S Milani and P Zanuttigh. Human-Robot Interaction with Depth-Based Gesture Recognition. In Proceedings of Real-Time gesture recognition for human-robot interaction workshop. 2014, 379–383. BibTeX

    @conference{11577_2926699,
    	abstract = "Human robot interaction is a very heterogeneous research field and it is attracting a growing interest. A key building block for a proper interaction between humans and robots is the automatic recognition and interpretation of gestures performed by the user. Consumer depth cameras (like MS Kinect) have made possible an accurate and reliable interpretation of human gestures. In this paper a novel framework for gesture- based human-robot interaction is proposed. Both hand gestures and full-body gestures are recognized through the use of depth information, and a human-robot interaction scheme based on these gestures is proposed. In order to assess the feasibility of the proposed scheme, the paper presents a simple application based on the well-known rock-scissors-paper game.",
    	author = "Pozzato, G and Michieletto, S and Menegatti, E and Dominio, F and Marin, G and Minto, L and Milani, S and Zanuttigh, P",
    	booktitle = "Proceedings of Real-Time gesture recognition for human-robot interaction workshop",
    	pages = "379--383",
    	publisher = "IT+Robotics srl",
    	title = "{Human-Robot Interaction with Depth-Based Gesture Recognition}",
    	year = 2014
    }
    
  20. Matteo Munaro, Andrea Fossati, Alberto Basso, Emanuele Menegatti and Luc Van Gool. One-Shot Person Re-identification with a Consumer Depth Camera. In Shaogang Gong, Marco Cristani, Shuicheng Yan and Chen Change Loy (eds.), Person Re-Identification, pages 161–181, Springer London, 2014. URL, DOI BibTeX

    @inbook{Munaro2014,
    	author = "Munaro, Matteo and Fossati, Andrea and Basso, Alberto and Menegatti, Emanuele and Van Gool, Luc",
    	editor = "Gong, Shaogang and Cristani, Marco and Yan, Shuicheng and Loy, Chen Change",
    	title = "One-Shot Person Re-identification with a Consumer Depth Camera",
    	booktitle = "Person Re-Identification",
    	year = 2014,
    	publisher = "Springer London",
    	address = "London",
    	pages = "161--181",
    	abstract = "In this chapter, we propose a comparison between two techniques for one-shot person re-identification from soft biometric cues. One is based upon a descriptor composed of features provided by a skeleton estimation algorithm; the other compares body shapes in terms of whole point clouds. This second approach relies on a novel technique we propose to warp the subject's point cloud to a standard pose, which allows to disregard the problem of the different poses a person can assume. This technique is also used for composing 3D models which are then used at testing time for matching unseen point clouds. We test the proposed approaches on an existing RGB-D re-identification dataset and on the newly built BIWI RGBD-ID dataset. This dataset provides sequences of RGB, depth, and skeleton data for 50 people in two different scenarios and it has been made publicly available to foster advancement in this new research branch.",
    	isbn = "978-1-4471-6296-4",
    	doi = "10.1007/978-1-4471-6296-4_8",
    	url = "https://doi.org/10.1007/978-1-4471-6296-4_8"
    }
    
  21. Matteo Munaro, Alex Horn, Randy Illum, Jeff Burke and Radu Bogdan Rusu. OpenPTrack: People Tracking for Heterogeneous Networks of Color-Depth Cameras. 2014. BibTeX

    @inproceedings{Munaro2014OpenPTrackP,
    	title = "OpenPTrack : People Tracking for Heterogeneous Networks of Color-Depth Cameras",
    	author = "Matteo Munaro and Alex Horn and Randy Illum and Jeff Burke and Radu Bogdan Rusu",
    	year = 2014
    }
    
  22. Matteo Comin and Morris Antonello. Fast Entropic Profiler: An Information Theoretic Approach for the Discovery of Patterns in Genomes. IEEE/ACM Transactions on Computational Biology and Bioinformatics 11:500-509, 2014. BibTeX

    @article{Comin2014FastEP,
    	title = "Fast Entropic Profiler: An Information Theoretic Approach for the Discovery of Patterns in Genomes",
    	author = "Matteo Comin and Morris Antonello",
    	journal = "IEEE/ACM Transactions on Computational Biology and Bioinformatics",
    	year = 2014,
    	volume = 11,
    	pages = "500-509"
    }
    
  23. Mauro Antonello, Alberto Pretto and Emanuele Menegatti. Fast incremental objects identification and localization using cross-correlation on a 6 DoF voting scheme. 2014 International Conference on Computer Graphics Theory and Applications (GRAPP), pages 1-6, 2014. BibTeX

    @article{Antonello2014FastIO,
    	title = "Fast incremental objects identification and localization using cross-correlation on a 6 DoF voting scheme",
    	author = "Mauro Antonello and Alberto Pretto and Emanuele Menegatti",
    	journal = "2014 International Conference on Computer Graphics Theory and Applications (GRAPP)",
    	year = 2014,
    	pages = "1-6"
    }
    
  24. David Tedaldi, Alberto Pretto and Emanuele Menegatti. A robust and easy to implement method for IMU calibration without external equipments. 2014. DOI BibTeX

    @inproceedings{Robust,
    	author = "Tedaldi, David and Pretto, Alberto and Menegatti, Emanuele",
    	doi = "10.1109/ICRA.2014.6907297",
    	journal = "Proceedings - IEEE International Conference on Robotics and Automation",
    	title = "A robust and easy to implement method for IMU calibration without external equipments",
    	year = 2014,
    	bdsk-url-1 = "http://dx.doi.org/10.1109/ICRA.2014.6907297"
    }
    


2013

  1. Laksono Kurnianggoro, Van-Dung Hoang and Kang-Hyun Jo. Calibration of Rotating 2D Laser Range Finder Using Circular Path on Plane Constraints. In David Camacho, Sang-Wook Kim and Bogdan Trawiński (eds.), New Trends in Computational Collective Intelligence, pages 155–163, Springer International Publishing, 2015. URL, DOI BibTeX

    @inbook{Kurnianggoro2015,
    	abstract = "Rotating a 2D laser range finder is a common way to obtain the 3D data of a scene.In the real system, the rotation axis is not always coincide with the sensor's axis. Due to this matter, it is important to know the rotation axis of a rotating 2D laser range finder system to obtains an accurate 3D measurement.In this paper, a method for determining the rotational axis and its relation to the sensor's position is presented. The rotational axis is approximated by analyzing the surface normal of several sensor's positions with different rotation angle.The sensor's positions are projected to the rotational plane to obtain a circular path. The circular path is approximated using a circular fitting method. A comprehensive analysis from a simulation system is presented in this paper to show the performance of the proposed calibration method.",
    	address = "Cham",
    	author = "Kurnianggoro, Laksono and Hoang, Van-Dung and Jo, Kang-Hyun",
    	booktitle = "New Trends in Computational Collective Intelligence",
    	doi = "10.1007/978-3-319-10774-5_15",
    	editor = "Camacho, David and Kim, Sang-Wook and Trawi{\'{n}}ski, Bogdan",
    	isbn = "978-3-319-10774-5",
    	pages = "155--163",
    	publisher = "Springer International Publishing",
    	title = "Calibration of Rotating 2D Laser Range Finder Using Circular Path on Plane Constraints",
    	url = "https://doi.org/10.1007/978-3-319-10774-5_15",
    	year = 2015,
    	bdsk-url-1 = "https://doi.org/10.1007/978-3-319-10774-5_15",
    	bdsk-url-2 = "http://dx.doi.org/10.1007/978-3-319-10774-5_15"
    }
    
  2. Francesco Ferrati, Roberto Bortoletto, Emanuele Menegatti and Enrico Pagello. Socio-economic impact of medical lower-limb Exoskeletons. November 2013. DOI BibTeX

    @inproceedings{arso,
    	author = "Ferrati, Francesco and Bortoletto, Roberto and Menegatti, Emanuele and Pagello, Enrico",
    	doi = "10.1109/ARSO.2013.6705500",
    	journal = "Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO",
    	month = 11,
    	title = "Socio-economic impact of medical lower-limb Exoskeletons",
    	year = 2013,
    	bdsk-url-1 = "http://dx.doi.org/10.1109/ARSO.2013.6705500"
    }
    
  3. A Pretto, S Tonello and E Menegatti. Flexible 3D localization of planar objects for industrial bin-picking with monocamera vision system. In 2013 IEEE International Conference on Automation Science and Engineering (CASE). August 2013, 168-175. DOI BibTeX

    @inproceedings{6654067,
    	author = "A. Pretto and S. Tonello and E. Menegatti",
    	booktitle = "2013 IEEE International Conference on Automation Science and Engineering (CASE)",
    	title = "Flexible 3D localization of planar objects for industrial bin-picking with monocamera vision system",
    	year = 2013,
    	volume = "",
    	number = "",
    	pages = "168-175",
    	keywords = "CAD;cameras;control engineering computing;industrial robots;materials handling;object detection;pose estimation;production engineering computing;robot vision;flexible 3D localization;planar objects;industrial bin-picking;monocamera vision system;flexible vision system;industrial robots;conveyor belt;planar shape;2D image analysis;6 degrees-of-freedom pose;single camera solution;industrial 3D cameras;laser triangulation systems;laser range finders;localization algorithm;CAD data;computer aided design;localization software;candidates selection step;voting scheme;best match selection step;refinement step;robust iterative optimize-and-score procedure;search-in-the-stack strategy;production plants;Image edge detection;Cameras;Three-dimensional displays;Solid modeling;Search problems;Optimization;Feature extraction",
    	doi = "10.1109/CoASE.2013.6654067",
    	issn = "2161-8070",
    	month = "Aug"
    }
    
  4. Loris Nanni, Sheryl Brahnam, Stefano Ghidoni and Emanuele Menegatti. Automated crowd detection in stadium arenas. April 2013. BibTeX

    @inproceedings{crowd,
    	author = "Nanni, Loris and Brahnam, Sheryl and Ghidoni, Stefano and Menegatti, Emanuele",
    	month = 04,
    	title = "Automated crowd detection in stadium arenas",
    	year = 2013
    }
    
  5. Matteo Munaro, Gioia Ballin, Stefano Michieletto and Emanuele Menegatti. 3D flow estimation for human action recognition from colored point clouds. Biologically Inspired Cognitive Architectures 5:42–51, 2013. URL, DOI BibTeX

    @article{11577_2668256,
    	abstract = "Motion perception and classification are key elements exploited by humans for recognizing actions. The same principles can serve as a basis for building cognitive architectures which can recognize human actions, thus enhancing challenging applications such as human robot interaction, visual surveillance, content-based video analysis and motion capture. In this paper, we propose an autonomous system for real-time human action recognition based on 3D motion flow estimation. We exploit colored point cloud data acquired with a Microsoft Kinect and we summarize the motion information by means of a 3D grid-based descriptor. Finally, temporal sequences of descriptors are classified with the Nearest Neighbor technique. We also present a newly created public dataset for RGB-D human action recognition which contains 15 actions performed by 12 different people. Our overall system is tested on this dataset and on the dataset used in Ballin, Munaro, and Menegatti (2012), showing the effectiveness of the proposed approach in recognizing about 90{\%} of the actions.",
    	author = "Matteo, Munaro and Gioia, Ballin and Stefano, Michieletto and Emanuele, Menegatti",
    	doi = "10.1016/j.bica.2013.05.008",
    	journal = "BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES",
    	keywords = "Action recognition; Colored point clouds; RGB-D da",
    	pages = "42--51",
    	publisher = "Elsevier Ltd",
    	title = "{3D flow estimation for human action recognition from colored point clouds}",
    	url = "http://www.sciencedirect.com/science/article/pii/S2212683X13000431",
    	volume = 5,
    	year = 2013
    }
    
  6. Matteo Munaro, Stefano Michieletto and Emanuele Menegatti. An evaluation of 3D motion flow and 3D pose estimation for human action recognition. In RSS Workshops: RGB-D: Advanced Reasoning with Depth Cameras. 2013. URL BibTeX

    @conference{11577_2718093,
    	abstract = "Modern human action recognition algorithms which exploit 3D information mainly classify video sequences by extract- ing local or global features from the RGB-D domain or classifying the skeleton information provided by a skeletal tracker. In this paper, we propose a comparison between two techniques which share the same classification process, while differing in the type of descriptor which is classified. The former exploits an improved version of a recently proposed approach for 3D motion flow estimation from colored point clouds, while the latter relies on the estimated skeleton joints positions. We compare these methods on a newly created dataset for RGB-D human action recognition which contains 15 actions performed by 12 different people.",
    	author = "Matteo, Munaro and Stefano, Michieletto and Emanuele, Menegatti",
    	booktitle = "RSS Workshops: RGB-D: Advanced Reasoning with Depth Cameras.",
    	keywords = "service robotics; Action recognition; 3D motion fl",
    	title = "{An evaluation of 3D motion flow and 3D pose estimation for human action recognition}",
    	url = "http://www.cs.washington.edu/ai/Mobile{\_}Robotics/rgbd-workshop-2013/papers/Munaro{\_}et{\_}al.pdf",
    	year = 2013
    }
    
  7. Stefano Michieletto, Alberto Rizzi and Emanuele Menegatti. Robot learning by observing humans activities and modeling failures. In IROS workshops: Cognitive Robotics Systems (CRS2013). 2013. URL BibTeX

    @conference{11577_2717878,
    	abstract = "People needs are varied and different. Service robotics aims to help people to satisfy these needs, but not all the feasible programs can be preloaded into a robot. The robot have to learn new tasks depending on the circumstances. A solution to this challenge could be Robot Learning from Demonstration (RLfD). In this paper, a RLfD framework is described in its entire pipeline. The data are acquired from a low cost RGB-D sensor, so the user can act naturally with no need of additional hardware. The information are subsequently elaborated to adapt to the robot structure and modeled to overcome the differences between human and robot. Experiments are performed using input data coming from a publicly available dataset of human actions, and a humanoid robot, an Aldebaran NAO, is shown to successfully replicate an action based on human demonstrations and some further trials automatically generated from the learned model.",
    	author = "Stefano, Michieletto and Alberto, Rizzi and Emanuele, Menegatti",
    	booktitle = "IROS workshops: Cognitive Robotics Systems (CRS2013)",
    	keywords = "Robot Learning from Demonstration; RGB-D sensor; G",
    	title = "{Robot learning by observing humans activities and modeling failures}",
    	url = "http://crs2013.org/papers/Michieletto.pdf",
    	year = 2013
    }
    
  8. S Michieletto, S Ghidoni, E Pagello, M Moro and E Menegatti. Why teach robotics using ROS. In Proceedings of the 4th International Conference on Robotics in Education. 2013, 145–151. URL BibTeX

    @conference{11577_2836673,
    	abstract = "This paper focuses on the key role played by the adoption of a framework in teaching robotics with a computer science approach in the master in Computer Engineeering. The framework adopted is the Robot Operating System (ROS), which is becoming a standard de facto inside the robotics community. The educational activities proposed in this paper are based on a constructionist approach. The Mindstorms NXT robot kit is adopted to trigger the learning challenge. The ROS framework is exploited to drive the students programming methodology during the laboratory activities and to allow students to exercise with the major computer programming paradigms and the best programming practices. The major robotics topics students are involved with are: acquiring data from sensors, connecting sensors to the robot, and navigate the robot to reach the final goal. The positive effects given by this approach are highlighted in this paper by comparing the work recently produced by students with the work produced in the previous years in which ROS was not yet adopted and many different software tools and languages were used. The results of a questionnaire are reported showing that we achieved the didactical objectives we expected as instructors.",
    	author = "Michieletto, S and Ghidoni, S and Pagello, E and Moro, M and Menegatti, E",
    	booktitle = "Proceedings of the 4th International Conference on Robotics in Education",
    	keywords = "Educational robotics; ROS; LEGO Mindstorms NXT; te",
    	pages = "145--151",
    	publisher = "Faculty of Electrical, Electronic, Computer and Control Engineering, {\L}{\'{o}}d{\'{z}}, Poland",
    	title = "{Why teach robotics using ROS}",
    	url = "http://rie2013.eu/",
    	year = 2013
    }
    
  9. Stefano Michieletto, Davide Zanin and Emanuele Menegatti. NAO robot simulation for service robotics purposes. In Proc. of European Modelling Symposium EMS2013 (EMS2013). 2013, 448–453. DOI BibTeX

    @conference{11577_2718088,
    	abstract = "Humanoids playing soccer are required to solve a great variety of tasks: from perception to body motion, from decision making to team coordination. On the other hand, results from this community are sometimes underestimated or unexploited because of the dedicated software developed. In particular simulators are often designed for a specific robotics platform or in some other cases the integration with existing software and frameworks is hard to implement and time consuming. In this paper we introduce a novel virtual model to simulate the humanoid robot Aldebaran NAO. The URDF (Unified Robot Description Format) standard has been followed in order to maintain the model as general purpose as possible. Related plug-ins to make it works in Gazebo and V-REP simulation environments were also developed in order to test the model under ROS (Robot Operating System), a very common robotics framework.",
    	author = "Stefano, Michieletto and Davide, Zanin and Emanuele, Menegatti",
    	booktitle = "Proc. of European Modelling Symposium EMS2013 (EMS2013)",
    	doi = "10.1109/EMS.2013.80",
    	keywords = "computer simulations; Aldebaran NAO; virtual model",
    	pages = "448--453",
    	publisher = "IEEE",
    	title = "{NAO robot simulation for service robotics purposes}",
    	year = 2013
    }
    
  10. Bisson Andrea, Busatto Andrea, Michieletto Stefano and Menegatti Emanuele. Stabilize Humanoid Robot Teleoperated by a RGB-D Sensor. In CEUR WORKSHOP PROCEEDINGS 1107. 2013, 97–102. URL BibTeX

    @conference{11577_2717682,
    	abstract = "An easy way to let a robot execute complex actions is to let the robot copy human moves. Useful information are read by sensors and elaborated to convert them into robot movements. This work focuses on keeping the robot balanced while it is performing an action: grasp an object laying on the ground in front of the robot. Experiments are performed with a human user moving in front of the sensor using a humanoid robot performing the same action, the Vstone Robovie-X.",
    	author = "Andrea, Bisson and Andrea, Busatto and Stefano, Michieletto and Emanuele, Menegatti",
    	booktitle = "CEUR WORKSHOP PROCEEDINGS",
    	keywords = "RGB-D sensor; Vstone Robovie-X; humanoids; stabili",
    	pages = "97--102",
    	title = "{Stabilize Humanoid Robot Teleoperated by a RGB-D Sensor}",
    	url = "http://ceur-ws.org/Vol-1107/paper12.pdf",
    	volume = 1107,
    	year = 2013
    }
    
  11. Edmond Wai Yan So, Matteo Munaro, Stefano Michieletto, Stefano Tonello and Emanuele Menegatti. 3DComplete: Efficient completeness inspection using a 2.5D color scanner. Computers in Industry 64(9):1237–1252, 2013. DOI BibTeX

    @article{YanSo2013,
    	abstract = "In this paper, we present a low-cost and highly configurable quality inspection system capable of capturing 2.5D color data, created using off-the-shelf machine vision components, open-source software libraries, and a combination of standard and novel algorithms for 2.5D data processing. The system uses laser triangulation to capture 3D depth, in parallel with a color camera and a line light projector to capture color texture, which are then combined into a color 2.5D model in real-time. Using many examples of completeness inspection tasks that are extremely difficult to solve with current 2D-based methods, we demonstrate how the 2.5D images and point clouds generated by our system can be used to solve these complex tasks effectively and efficiently. Our system is currently being integrated into a real production environment, showing that completeness inspection incorporating 3D technology can be readily achieved in a short time at low costs. {\textcopyright} 2013 Elsevier B.V. All rights reserved.",
    	author = "{Yan So}, Edmond Wai and Munaro, Matteo and Michieletto, Stefano and Tonello, Stefano and Menegatti, Emanuele",
    	doi = "10.1016/j.compind.2013.03.014",
    	issn = 01663615,
    	journal = "Computers in Industry",
    	keywords = "3D reconstruction,Completeness inspection,Image and range data fusion,Laser triangulation",
    	number = 9,
    	pages = "1237--1252",
    	title = "{3DComplete: Efficient completeness inspection using a 2.5D color scanner}",
    	volume = 64,
    	year = 2013
    }
    
  12. Edmond Wai Yan So, Matteo Munaro, Stefano Michieletto, Mauro Antonello and Emanuele Menegatti. Real-time 3D model reconstruction with a dual-laser triangulation system for assembly line completeness inspection. In Advances in Intelligent Systems and Computing 194 AISC(VOL. 2). 2013, 707–716. DOI BibTeX

    @inproceedings{So2013,
    	author = "So, Edmond Wai Yan and Munaro, Matteo and Michieletto, Stefano and Antonello, Mauro and Menegatti, Emanuele",
    	booktitle = "Advances in Intelligent Systems and Computing",
    	doi = "10.1007/978-3-642-33932-5_66",
    	isbn = 9783642339318,
    	issn = 21945357,
    	number = "VOL. 2",
    	pages = "707--716",
    	title = "{Real-time 3D model reconstruction with a dual-laser triangulation system for assembly line completeness inspection}",
    	volume = "194 AISC",
    	year = 2013
    }
    
  13. Matteo Munaro, Filippo Basso and Stefano Michieletto. A software architecture for RGB-D people tracking based on ROS framework for a mobile robot. Frontiers of Intelligent …, 2013. URL BibTeX

    @article{Munaro2013a,
    	abstract = "This paper describes the software architecture of a distributed multi-people tracking algorithm for mobile platforms equipped with a RGB-D sensor. Our approach features an efficient point cloud depth-based clus-tering, an HOG-like classification to robustly initialize a person tracking and a person classifier with online learning to drive data association. We explain in details how ROS functionalities and tools play an important role in the possibility of the software to be real time, distributed and easy to configure and debug. Tests are presented on a challenging real-world indoor environment and track-ing results have been evaluated with the CLEAR MOT metrics. Our algo-rithm proved to correctly track 96{\%} of people with very limited ID switches and few false positives, with an average frame rate above 20 fps. We also test and discuss its applicability to robot-people following tasks and we re-port experiments on a public RGB-D dataset proving that our software can be distributed in order to increase the framerate and decrease the data ex-change when multiple sensors are used.",
    	author = "Munaro, Matteo and Basso, Filippo and Michieletto, Stefano",
    	journal = "Frontiers of Intelligent {\ldots}",
    	keywords = "1 introduction and related,autonomous service robots have,data,in dynamic and populated,mobile robots,people tracking,real-time,rgb-d,robot operating system,to move and act,work",
    	title = "{A software architecture for RGB-D people tracking based on ros framework for a mobile robot}",
    	url = "http://link.springer.com/chapter/10.1007/978-3-642-35485-4{\_}5",
    	year = 2013
    }
    
  14. Filippo Basso, Matteo Munaro, Stefano Michieletto, Enrico Pagello and Emanuele Menegatti. Fast and robust multi-people tracking from RGB-D data for a mobile robot. In Advances in Intelligent Systems and Computing 193 AISC(VOL. 1). 2013, 265–276. DOI BibTeX

    @inproceedings{Basso2013,
    	abstract = "This paper proposes a fast and robust multi-people tracking algorithm for mobile platforms equipped with a RGB-D sensor. Our approach features an efficient point cloud depth-based clustering, an HOG-like classification to robustly initialize a person tracking and a person classifier with online learning to manage the person ID matching even after a full occlusion. For people detection, we make the assumption that people move on a ground plane. Tests are presented on a challenging real-world indoor environment and results have been evaluated with the CLEAR MOT metrics. Our algorithm proved to correctly track 96{\%} of people with very limited ID switches and few false positives, with an average frame rate of 25 fps. Moreover, its applicability to robot-people following tasks have been tested and discussed.",
    	author = "Basso, Filippo and Munaro, Matteo and Michieletto, Stefano and Pagello, Enrico and Menegatti, Emanuele",
    	booktitle = "Advances in Intelligent Systems and Computing",
    	doi = "10.1007/978-3-642-33926-4_25",
    	isbn = 9783642339257,
    	issn = 21945357,
    	keywords = "People tracking,RGB-D data,mobile robots,real-time",
    	number = "VOL. 1",
    	pages = "265--276",
    	title = "{Fast and robust multi-people tracking from RGB-D data for a mobile robot}",
    	volume = "193 AISC",
    	year = 2013
    }
    
  15. Pozzato Gabriele, Michieletto Stefano and Menegatti Emanuele. Towards Smart Robots: Rock-Paper-Scissors Gaming versus Human Players. In CEUR WORKSHOP PROCEEDINGS 1107. 2013, 89–95. URL BibTeX

    @conference{11577_2717681,
    	abstract = "In this project a human robot interaction system was developed in order to let people naturally play rock-paper-scissors games against a smart robotic opponent. The robot does not perform random choices, the system is able to analyze the previous rounds trying to forecast the next move. A Machine Learning algorithm based on Gaussian Mixture Model (GMM) allows us to increase the percentage of robot victories. This is a very important aspect in the natural interaction between human and robot, in fact, people do not like playing against “stupid” machines, while they are stimulated in confronting with a skilled opponent.",
    	author = "Gabriele, Pozzato and Stefano, Michieletto and Emanuele, Menegatti",
    	booktitle = "CEUR WORKSHOP PROCEEDINGS",
    	keywords = "Gaussian Mixture Model; Machine Learning; human ro",
    	pages = "89--95",
    	title = "{Towards Smart Robots: Rock-Paper-Scissors Gaming versus Human Players}",
    	url = "http://ceur-ws.org/Vol-1107/paper11.pdf",
    	volume = 1107,
    	year = 2013
    }
    
  16. Michieletto Stefano, Chessa Nicola and Menegatti Emanuele. Learning how to approach industrial robot tasks from natural demonstrations. In 2013 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO). 2013, 255–260. DOI BibTeX

    @conference{11577_2718084,
    	abstract = "In the last years, Robot Learning from Demonstration (RLfD) has become a major topic in robotics research. The main reason for this is that programming a robot can be a very difficult and time spending task. The RLfD paradigm has been applied to a great variety of robots, but it is still difficult to make the robot learn a task properly. Often the teacher is not an expert in the field, and viceversa an expert could not know well enough the robot to be a teacher. With this paper, we aimed at closing this gap by proposing a novel motion re-targeting technique to make a manipulator learn from natural demonstrations. A RLfD framework based on Gaussian Mixture Models (GMM) and Gaussian Mixture Regressions (GMR) was set to test the accuracy of the system in terms of precision and repeatability. The robot used during the experiments is a Comau Smart5 SiX and a novel virtual model of this manipulator has also been developed to simulate an industrial scenario which allows valid experimentation while avoiding damages to the real robot.",
    	author = "Stefano, Michieletto and Nicola, Chessa and Emanuele, Menegatti",
    	booktitle = "2013 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO)",
    	doi = "10.1109/ARSO.2013.6705538",
    	keywords = "Robot Learning from Demonstration; industrial mani",
    	pages = "255--260",
    	title = "{Learning how to approach industrial robot tasks from natural demonstrations}",
    	year = 2013
    }
    
  17. Stefano Michieletto, Nicola Chessa and Emanuele Menegatti. Learning how to approach industrial robot tasks from natural demonstrations. In Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO. 2013, 255–260. DOI BibTeX

    @inproceedings{Michieletto2013,
    	abstract = "In the last years, Robot Learning from Demon-$\backslash$nstration (RLfD) has become a major topic in robotics research. The main reason for this is that programming a robot can be a very difficult and time spending task.$\backslash$nThe RLfD paradigm has been applied to a great variety of robots, but it is still difficult to make the robot learn a task properly. Often the teacher is not an expert in the field, and viceversa an expert could not know well enough the robot to be a teacher.$\backslash$nWith this paper, we aimed at closing this gap by proposing a novel motion re-targeting technique to make a manipulator learn from natural demonstrations. A RLfD framework based$\backslash$non Gaussian Mixture Models (GMM) and Gaussian Mixture Regressions (GMR) was set to test the accuracy of the system in terms of precision and repeatability.$\backslash$nThe robot used during the experiments is a Comau Smart5 SiX and a novel virtual model of this manipulator has also been developed to simulate an industrial scenario which allows valid$\backslash$nexperimentation while avoiding damages to the real robot.",
    	author = "Michieletto, Stefano and Chessa, Nicola and Menegatti, Emanuele",
    	booktitle = "Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts, ARSO",
    	doi = "10.1109/ARSO.2013.6705538",
    	isbn = 9781479923694,
    	issn = 21627568,
    	pages = "255--260",
    	title = "{Learning how to approach industrial robot tasks from natural demonstrations}",
    	year = 2013
    }
    
  18. E So, M Munaro, S Michieletto, S Tonello and E Menegatti. 3DComplete: Efficient Completeness Inspection using a 2.5D Color Scanner. COMPUTERS IN INDUSTRY 64:1237–1252, 2013. URL, DOI BibTeX

    @article{11577_2574234,
    	abstract = "In this paper, we present a low-cost and highly configurable quality inspection system capable of capturing 2.5D color data, created using off-the-shelf machine vision components, open-source software libraries, and a combination of standard and novel algorithms for 2.5D data processing. The system uses laser triangulation to capture 3D depth, in parallel with a color camera and a line light projector to capture color texture, which are then combined into a color 2.5D model in real- time. Using many examples of completeness inspection tasks that are extremely difficult to solve with current 2D-based methods, we demonstrate how the 2.5D images and point clouds generated by our system can be used to solve these complex tasks effectively and efficiently. Our system is currently being integrated into a real production environment, showing that completeness inspection incorporating 3D technology can be readily achieved in a short time at low costs.",
    	author = "So, E and Munaro, M and Michieletto, S and Tonello, S and Menegatti, E",
    	doi = "10.1016/j.compind.2013.03.014",
    	journal = "COMPUTERS IN INDUSTRY",
    	keywords = "Completeness inspection; 3D reconstruction; Image",
    	pages = "1237--1252",
    	publisher = "Elsevier",
    	title = "{3DComplete: Efficient Completeness Inspection using a 2.5D Color Scanner}",
    	url = "http://link.springer.com/chapter/10.1007/978-1-4471-6741-9{\_}7",
    	volume = 64,
    	year = 2013
    }
    
  19. Matteo Comin and Morris Antonello. Fast Computation of Entropic Profiles for the Detection of Conservation in Genomes. In Alioune Ngom, Enrico Formenti, Jin-Kao Hao, Xing-Ming Zhao and Twan Laarhoven (eds.). Pattern Recognition in Bioinformatics. 2013, 277–288. BibTeX

    @inproceedings{10.1007/978-3-642-39159-0_25,
    	author = "Comin, Matteo and Antonello, Morris",
    	editor = "Ngom, Alioune and Formenti, Enrico and Hao, Jin-Kao and Zhao, Xing-Ming and van Laarhoven, Twan",
    	title = "Fast Computation of Entropic Profiles for the Detection of Conservation in Genomes",
    	booktitle = "Pattern Recognition in Bioinformatics",
    	year = 2013,
    	publisher = "Springer Berlin Heidelberg",
    	address = "Berlin, Heidelberg",
    	pages = "277--288",
    	abstract = "The information theory has been used for quite some time in the area of computational biology. In this paper we discuss and improve the function Entropic Profile, introduced by Vinga and Almeida in [23]. The Entropic Profiler is a function of the genomic location that captures the importance of that region with respect to the whole genome. We provide a linear time linear space algorithm called Fast Entropic Profile, as opposed to the original quadratic implementation. Moreover we propose an alternative normalization that can be also efficiently implemented. We show that Fast EP is suitable for large genomes and for the discovery of motifs with unbounded length.",
    	isbn = "978-3-642-39159-0"
    }
    
  20. Daisuke Kurabayashi, Yosuke Takahashi, Ryo Minegishi, Elisa Tosello, Enrico Pagello and Ryohei Kanzaki. Property Investigation of Chemical Plume Tracing Algorithm in an Insect Using Bio-machine Hybrid System. In Nathan F Lepora, Anna Mura, Holger G Krapp, Paul F M J Verschure and Tony J Prescott (eds.). Biomimetic and Biohybrid Systems. 2013, 131–142. BibTeX

    @inproceedings{10.1007/978-3-642-39802-5_12,
    	abstract = "In this study, we investigated an aspect of the chemical plume tracing behavior of an insect by using a bio-machine hybrid system. We implemented an experimental system by which an insect brain was connected to a robot body. We observed th neural responses to external disturbances and transitions at changes in the motor gain of the robot body. Based on the results of the experiments, we identified a simple control model for the angular velocity of the behavior. We subsequently investigated the effect of the rotational velocity by using information entropy in computer simulations.",
    	address = "Berlin, Heidelberg",
    	author = "Kurabayashi, Daisuke and Takahashi, Yosuke and Minegishi, Ryo and Tosello, Elisa and Pagello, Enrico and Kanzaki, Ryohei",
    	booktitle = "Biomimetic and Biohybrid Systems",
    	editor = "Lepora, Nathan F. and Mura, Anna and Krapp, Holger G. and Verschure, Paul F. M. J. and Prescott, Tony J.",
    	isbn = "978-3-642-39802-5",
    	pages = "131--142",
    	publisher = "Springer Berlin Heidelberg",
    	title = "Property Investigation of Chemical Plume Tracing Algorithm in an Insect Using Bio-machine Hybrid System",
    	year = 2013
    }
    
  21. Francesco Ferrati, Roberto Bortoletto and Enrico Pagello. Virtual Modelling of a Real Exoskeleton Constrained to a Human Musculoskeletal Model. In Nathan F Lepora, Anna Mura, Holger G Krapp, Paul F M J Verschure and Tony J Prescott (eds.). Biomimetic and Biohybrid Systems. 2013, 96–107. BibTeX

    @inproceedings{10.1007/978-3-642-39802-5_9,
    	abstract = "Exoskeletons represent one of the most important examples of human-oriented robotic devices. This paper describes an existing lower-limb exoskeleton designed to assist people with lower extremity paralysis or weakness during the movements of standing up and walking. Starting from the analysis of a real system developed about seven years ago, a virtual multibody model was realized in order to deeply understand how the device worked and find out some potential improvements in the actuators control and in the kinematic design. The virtual device was properly constrained to a human musculoskeletal model in order to simulate a real operating condition. The analysis of the simulation results suggested a kinematic modification of the system and a new dynamic model was developed in order to test the new design through the comparison of four different models.",
    	address = "Berlin, Heidelberg",
    	author = "Ferrati, Francesco and Bortoletto, Roberto and Pagello, Enrico",
    	booktitle = "Biomimetic and Biohybrid Systems",
    	editor = "Lepora, Nathan F. and Mura, Anna and Krapp, Holger G. and Verschure, Paul F. M. J. and Prescott, Tony J.",
    	isbn = "978-3-642-39802-5",
    	pages = "96--107",
    	publisher = "Springer Berlin Heidelberg",
    	title = "Virtual Modelling of a Real Exoskeleton Constrained to a Human Musculoskeletal Model",
    	year = 2013
    }
    
  22. M Antonello, F Gabrieli, Simonetta Cola and Emanuele Menegatti. Automated landslide monitoring through a low-cost stereo vision system. CEUR Workshop Proceedings 1107:37-41, January 2013. BibTeX

    @article{ceur,
    	author = "Antonello, M and Gabrieli, F and Cola, Prof. phD. Eng. Simonetta and Menegatti, Emanuele",
    	journal = "CEUR Workshop Proceedings",
    	month = 01,
    	pages = "37-41",
    	title = "Automated landslide monitoring through a low-cost stereo vision system",
    	volume = 1107,
    	year = 2013
    }
    


2012

  1. Salvatore Anzalone, Y Yoshikawa, Hiroshi Ishiguro, Emanuele Menegatti, Enrico Pagello and Rosario Sorbello. A Topic Recognition System for Real World Human-Robot Conversations. Intelligent Autonomous Systems 12 194:383-391, January 2013. DOI BibTeX

    @article{Anza,
    	author = "Anzalone, Salvatore and Yoshikawa, Y and Ishiguro, Hiroshi and Menegatti, Emanuele and Pagello, Enrico and Sorbello, Rosario",
    	doi = "10.1007/978-3-642-33932-5_36",
    	journal = "Intelligent Autonomous Systems 12",
    	month = 01,
    	pages = "383-391",
    	title = "A Topic Recognition System for Real World Human-Robot Conversations",
    	volume = 194,
    	year = 2013,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-642-33932-5_36"
    }
    
  2. Gioia Ballin, Matteo Munaro and Emanuele Menegatti. Human Action Recognition from RGB-D Frames Based on Real-Time 3D Optical Flow Estimation. In Antonio Chella, Roberto Pirrone, Rosario Sorbello and Kamilla Rún Jóhannsdóttir (eds.). Biologically Inspired Cognitive Architectures 2012. 2013, 65–74. BibTeX

    @inproceedings{10.1007/978-3-642-34274-5_17,
    	abstract = "Modern advances in the area of intelligent agents have led to the concept of cognitive robots. A cognitive robot is not only able to perceive complex stimuli from the environment, but also to reason about them and to act coherently. Computer vision-based recognition systems serve the perception task, but they also go beyond it by finding challenging applications in other fields such as video surveillance, HCI, content-based video analysis and motion capture. In this context, we propose an automatic system for real-time human action recognition. We use the Kinect sensor and the tracking system in [1] to robustly detect and track people in the scene. Next, we estimate the 3D optical flow related to the tracked people from point cloud data only and we summarize it by means of a 3D grid-based descriptor. Finally, temporal sequences of descriptors are classified with the Nearest Neighbor technique and the overall application is tested on a newly created dataset. Experimental results show the effectiveness of the proposed approach.",
    	address = "Berlin, Heidelberg",
    	author = "Ballin, Gioia and Munaro, Matteo and Menegatti, Emanuele",
    	booktitle = "Biologically Inspired Cognitive Architectures 2012",
    	editor = "Chella, Antonio and Pirrone, Roberto and Sorbello, Rosario and J{\'o}hannsd{\'o}ttir, Kamilla R{\'u}n",
    	isbn = "978-3-642-34274-5",
    	pages = "65--74",
    	publisher = "Springer Berlin Heidelberg",
    	title = "Human Action Recognition from RGB-D Frames Based on Real-Time 3D Optical Flow Estimation",
    	year = 2013
    }
    
  3. Salvatore Anzalone, Yuichiro Yoshikawa, Hiroshi Ishiguro, Emanuele Menegatti, Enrico Pagello and Rosario Sorbello. Towards Partners Profiling in Human Robot Interaction Contexts. November 2012, 4-15. DOI BibTeX

    @inproceedings{Anza2,
    	author = "Anzalone, Salvatore and Yoshikawa, Yuichiro and Ishiguro, Hiroshi and Menegatti, Emanuele and Pagello, Enrico and Sorbello, Rosario",
    	doi = "10.1007/978-3-642-34327-8_4",
    	journal = "Simulation, Modeling, and Programming for Autonomous Robots",
    	month = 11,
    	pages = "4-15",
    	title = "Towards Partners Profiling in Human Robot Interaction Contexts",
    	volume = 7628,
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-642-34327-8_4"
    }
    
  4. Salvatore Anzalone, Yuichiro Yoshikawa, Hiroshi Ishiguro, Emanuele Menegatti, Enrico Pagello and Rosario Sorbello. Towards Partners Profiling in Human Robot Interaction Contexts. November 2012, 4-15. DOI BibTeX

    @inproceedings{Anza3,
    	author = "Anzalone, Salvatore and Yoshikawa, Yuichiro and Ishiguro, Hiroshi and Menegatti, Emanuele and Pagello, Enrico and Sorbello, Rosario",
    	doi = "10.1007/978-3-642-34327-8_4",
    	journal = "Simulation, Modeling, and Programming for Autonomous Robots",
    	month = 11,
    	pages = "4-15",
    	title = "Towards Partners Profiling in Human Robot Interaction Contexts",
    	volume = 7628,
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-642-34327-8_4"
    }
    
  5. Edmond Wai Yan So, Stefano Michieletto and Emanuele Menegatti. Calibration of a dual-laser triangulation system for assembly line completeness inspection. November 2012, 138-143. DOI BibTeX

    @inproceedings{Wai,
    	author = "Wai Yan So, Edmond and Michieletto, Stefano and Menegatti, Emanuele",
    	doi = "10.1109/ROSE.2012.6402621",
    	isbn = "978-1-4673-2705-3",
    	journal = "2012 IEEE International Symposium on Robotic and Sensors Environments, ROSE 2012 - Proceedings",
    	month = 11,
    	pages = "138-143",
    	title = "Calibration of a dual-laser triangulation system for assembly line completeness inspection",
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1109/ROSE.2012.6402621"
    }
    
  6. Konstantine Tsotsos, Alberto Pretto and Stefano Soatto. Visual-inertial ego-motion estimation for humanoid platforms. November 2012, 704-711. DOI BibTeX

    @inproceedings{inproceedings,
    	author = "Tsotsos, Konstantine and Pretto, Alberto and Soatto, Stefano",
    	doi = "10.1109/HUMANOIDS.2012.6651597",
    	month = 11,
    	pages = "704-711",
    	title = "Visual-inertial ego-motion estimation for humanoid platforms",
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1109/HUMANOIDS.2012.6651597"
    }
    
  7. Matteo Munaro, Filippo Basso and Emanuele Menegatti. Tracking people within groups with RGB-D data. October 2012, 2101-2107. DOI BibTeX

    @inproceedings{basso,
    	author = "Munaro, Matteo and Basso, Filippo and Menegatti, Emanuele",
    	doi = "10.1109/IROS.2012.6385772",
    	isbn = "978-1-4673-1737-5",
    	journal = "Proceedings of the ... IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems",
    	month = 10,
    	pages = "2101-2107",
    	title = "Tracking people within groups with RGB-D data",
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1109/IROS.2012.6385772"
    }
    
  8. Andrea Bardella, Matteo Danieletto, Emanuele Menegatti, Andrea Zanella, Alberto Pretto and Pietro Zanuttigh. Autonomous robot exploration in smart environments exploiting wireless sensors and visual features. Annals of telecommunications 67:1-15, August 2012. DOI BibTeX

    @article{Bardella,
    	author = "Bardella, Andrea and Danieletto, Matteo and Menegatti, Emanuele and Zanella, Andrea and Pretto, Alberto and Zanuttigh, Pietro",
    	doi = "10.1007/s12243-012-0305-z",
    	journal = "Annals of telecommunications",
    	month = 08,
    	pages = "1-15",
    	title = "Autonomous robot exploration in smart environments exploiting wireless sensors and visual features",
    	volume = 67,
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/s12243-012-0305-z"
    }
    
  9. Roberto Bortoletto, Massimo Sartori, Fuben He and Enrico Pagello. Simulating an Elastic Bipedal Robot Based on Musculoskeletal Modeling. July 2012. DOI BibTeX

    @inproceedings{Bipede,
    	author = "Bortoletto, Roberto and Sartori, Massimo and He, Fuben and Pagello, Enrico",
    	doi = "10.1007/978-3-642-31525-1_3",
    	month = 07,
    	title = "Simulating an Elastic Bipedal Robot Based on Musculoskeletal Modeling",
    	volume = 7375,
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-642-31525-1_3"
    }
    
  10. Nicolo Boscolo, Riccardo De, Matteo Munaro, Alessandro Farinelli and Enrico Pagello. A Distributed Kinodynamic Collision Avoidance System under ROS. June 2012. DOI BibTeX

    @inproceedings{Bosco,
    	author = "Boscolo, Nicolo and De, Riccardo and Munaro, Matteo and Farinelli, Alessando and Pagello, Enrico",
    	doi = "10.1007/978-3-642-33932-5_47",
    	journal = "Advances in Intelligent Systems and Computing",
    	month = 06,
    	title = "A Distributed Kinodynamic Collision Avoidance System under ROS",
    	volume = 194,
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-642-33932-5_47"
    }
    
  11. Stefano Ghidoni, Grzegorz Cielniak and Emanuele Menegatti. Texture-Based Crowd Detection and Localisation. Volume 193, June 2012. DOI BibTeX

    @inbook{Crowd2,
    	author = "Ghidoni, Stefano and Cielniak, Grzegorz and Menegatti, Emanuele",
    	doi = "10.1007/978-3-642-33926-4_69",
    	journal = "Advances in Intelligent Systems and Computing",
    	month = 06,
    	title = "Texture-Based Crowd Detection and Localisation",
    	volume = 193,
    	year = 2012,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-642-33926-4_69"
    }
    
  12. Basso Filippo, Munaro Matteo, Michieletto Stefano, Pagello Enrico and Menegatti Emanuele. Fast and Robust Multi-people Tracking from RGB-D Data for a Mobile Robot. In ADVANCES IN INTELLIGENT SYSTEMS AND COMPUTING 193. 2012, 265–276. URL, DOI BibTeX

    @conference{11577_2533616,
    	abstract = "This paper proposes a fast and robust multi-people tracking algorithm for mobile platforms equipped with a RGB-D sensor. Our approach features an efficient point cloud depth-based clustering, an HOG-like classification to robustly initialize a person tracking and a person classifier with online learning to manage the person ID matching even after a full occlusion. For people detection, we make the assumption that people move on a ground plane. Tests are presented on a challenging real-world indoor environment and results have been evaluated with the CLEAR MOT metrics. Our algorithm proved to correctly track 96{\%} of people with very limited ID switches and few false positives, with an average frame rate of 25 fps. Moreover, its applicability to robot-people following tasks have been tested and discussed.",
    	author = "Filippo, Basso and Matteo, Munaro and Stefano, Michieletto and Enrico, Pagello and Emanuele, Menegatti",
    	booktitle = "ADVANCES IN INTELLIGENT SYSTEMS AND COMPUTING",
    	doi = "10.1007/978-3-642-33926-4_25",
    	pages = "265--276",
    	publisher = "Springer Berlin Heidelberg",
    	title = "{Fast and Robust Multi-people Tracking from RGB-D Data for a Mobile Robot}",
    	url = "http://link.springer.com/chapter/10.1007{\%}2F978-3-642-33926-4{\_}25",
    	volume = 193,
    	year = 2012
    }
    
  13. Edmond Wai Yan So, Munaro Matteo, Michieletto Stefano, Antonello Mauro and Menegatti Emanuele. Real-Time 3D Model Reconstruction with a Dual-Laser Triangulation System for Assembly Line Completeness Inspection. In ADVANCES IN INTELLIGENT SYSTEMS AND COMPUTING 194. 2012, 707–716. URL, DOI BibTeX

    @conference{11577_2533672,
    	abstract = "In this paper, we present an improved version of our Dual Laser Triangulation System, a low-cost color 3D model acquisition system built with commonly available machine vision products. The system produces a color point cloud model of scanned objects that can be used to perform completeness inspection tasks on assembly lines. In particular, we show that model acquisition and reconstruction can be achieved in real-time using such a low-cost solution. Our results demonstrate that 3D-based inspection can be achieved readily and economically in a real industrial production environment.",
    	address = "Berlin",
    	author = "{Edmond Wai Yan So} and Matteo, Munaro and Stefano, Michieletto and Mauro, Antonello and Emanuele, Menegatti",
    	booktitle = "ADVANCES IN INTELLIGENT SYSTEMS AND COMPUTING",
    	doi = "10.1007/978-3-642-33932-5_66",
    	pages = "707--716",
    	publisher = "Springer Verlag",
    	title = "{Real-Time 3D Model Reconstruction with a Dual-Laser Triangulation System for Assembly Line Completeness Inspection}",
    	url = "http://link.springer.com/chapter/10.1007{\%}2F978-3-642-33932-5{\_}66",
    	volume = 194,
    	year = 2012
    }
    
  14. Edmond Wai Yan So, Stefano Michieletto and Emanuele Menegatti. Calibration of a dual-laser triangulation system for assembly line completeness inspection. In 2012 IEEE International Symposium on Robotic and Sensors Environments, ROSE 2012 - Proceedings. 2012, 138–143. DOI BibTeX

    @inproceedings{So2012,
    	author = "So, Edmond Wai Yan and Michieletto, Stefano and Menegatti, Emanuele",
    	booktitle = "2012 IEEE International Symposium on Robotic and Sensors Environments, ROSE 2012 - Proceedings",
    	doi = "10.1109/ROSE.2012.6402621",
    	isbn = 9781467327046,
    	keywords = "conveyor calibration,dual lasers,laser calibration,laser triangulation,misalignment",
    	pages = "138--143",
    	title = "{Calibration of a dual-laser triangulation system for assembly line completeness inspection}",
    	year = 2012
    }
    
  15. Stefano Michieletto and Emanuele Menegatti. Human action recognition oriented to humanoid robots action reproduction. In CEUR Workshop Proceedings 860. 2012, 35–40. BibTeX

    @inproceedings{Michieletto2012,
    	abstract = "Our research aims at providing a humanoid robot with the ability of observing, learning, and reproducing actions performed by humans in order to acquire new skills. In other words, we want to apply artificial intelligence techniques to automatically recognize a human activity in order to make a humanoid robot able to reproduce it.This system has not only to distinguish between different actions, but also to represent them in a proper manner to allow a robot to reproduce the motion trajectories the demonstrator showed and learn new skills. Since the final system is going to be integrated in an autonomous humanoid robot (specifically model Aldebran Nao), we are working with an RGB-D sensor (Microsoft Kinect) that can be easily applied to it. This objective introduces also strict real-time constrains to the action recognition algorithm: we have opted for a probabilistic approach that offers good modeling and fast recognition performances.",
    	author = "Michieletto, Stefano and Menegatti, Emanuele",
    	booktitle = "CEUR Workshop Proceedings",
    	issn = 16130073,
    	keywords = "Action recognition,Humanoid robots,Imitation learning,Programming by demonstration",
    	pages = "35--40",
    	title = "{Human action recognition oriented to humanoid robots action reproduction}",
    	volume = 860,
    	year = 2012
    }
    
  16. Roberto Bortoletto, Massimo Sartori, Fuben He and Enrico Pagello. Modeling and Simulating Compliant Movements in a Musculoskeletal Bipedal Robot. In Itsuki Noda, Noriaki Ando, Davide Brugali and James J Kuffner (eds.). Simulation, Modeling, and Programming for Autonomous Robots. 2012, 237–250. BibTeX

    @inproceedings{10.1007/978-3-642-34327-8_23,
    	abstract = "This paper describes the modeling and the simulation of a novel Elastic Bipedal Robot based on Human Musculoskeletal modeling. The geometrical organization of the robot artificial muscles is based on the organization of human muscles. In this paper we study how the robot active and passive elastic actuation structures develop force during selected motor tasks, and how we can model the contact between feet and ground. We then compare the robot dynamics to that of the human during the same motor tasks. The motivation behind this study is to reduce the development time by using a simulation environment for the purpose of developing a bipedal robot that takes advantage of the mechanisms underlying the human musculoskeletal dynamics for the generation of natural movement.",
    	address = "Berlin, Heidelberg",
    	author = "Bortoletto, Roberto and Sartori, Massimo and He, Fuben and Pagello, Enrico",
    	booktitle = "Simulation, Modeling, and Programming for Autonomous Robots",
    	editor = "Noda, Itsuki and Ando, Noriaki and Brugali, Davide and Kuffner, James J.",
    	isbn = "978-3-642-34327-8",
    	pages = "237--250",
    	publisher = "Springer Berlin Heidelberg",
    	title = "Modeling and Simulating Compliant Movements in a Musculoskeletal Bipedal Robot",
    	year = 2012
    }
    
  17. Stefano Tonello, Guido Piero Zanetti, Matteo Finotto, Roberto Bortoletto, Elisa Tosello and Emanuele Menegatti. WorkCellSimulator: A 3D Simulator for Intelligent Manufacturing. In Itsuki Noda, Noriaki Ando, Davide Brugali and James J Kuffner (eds.). Simulation, Modeling, and Programming for Autonomous Robots. 2012, 311–322. BibTeX

    @inproceedings{10.1007/978-3-642-34327-8_29,
    	abstract = "This paper presents WorkCellSimulator, a software platform that allows to manage an environment for the simulation of robot tasks. It uses the most advanced artificial intelligence algorithms in order to define the production process, by controlling one or more robot manipulators and machineries present in the work cell. The main goal of this software is to assist the user in defining customized production processes which involve specific automated cells. It has been developed by IT+Robotics, a spin-off company of the University of Padua, founded in 2005 from the collaboration between young researchers in the field of Robotics and a group of professors from the Department of Information Engineering, University of Padua.",
    	address = "Berlin, Heidelberg",
    	author = "Tonello, Stefano and Zanetti, Guido Piero and Finotto, Matteo and Bortoletto, Roberto and Tosello, Elisa and Menegatti, Emanuele",
    	booktitle = "Simulation, Modeling, and Programming for Autonomous Robots",
    	editor = "Noda, Itsuki and Ando, Noriaki and Brugali, Davide and Kuffner, James J.",
    	isbn = "978-3-642-34327-8",
    	pages = "311--322",
    	publisher = "Springer Berlin Heidelberg",
    	title = "WorkCellSimulator: A 3D Simulator for Intelligent Manufacturing",
    	year = 2012
    }
    
  18. Stefano Michieletto and Emanuele Menegatti. Human action recognition oriented to humanoid robots action reproduction. CEUR Workshop Proceedings 860:35-40, January 2012. BibTeX

    @article{EM1,
    	author = "Michieletto, Stefano and Menegatti, Emanuele",
    	journal = "CEUR Workshop Proceedings",
    	month = 01,
    	pages = "35-40",
    	title = "Human action recognition oriented to humanoid robots action reproduction",
    	volume = 860,
    	year = 2012
    }
    
  19. P Donaggio and Stefano Ghidoni. Fast calibration method for active cameras. VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications 2:79-82, January 2012. BibTeX

    @article{Pino,
    	author = "Donaggio, P and Ghidoni, Stefano",
    	journal = "VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Applications",
    	month = 01,
    	pages = "79-82",
    	title = "Fast calibration method for active cameras",
    	volume = 2,
    	year = 2012
    }
    
  20. Massimo Sartori, Monica Reggiani, Antonie J van den Bogert and David G Lloyd. Estimation of musculotendon kinematics in large musculoskeletal models using multidimensional B-splines. Journal of biomechanics 45(3):595–601, 2011. DOI BibTeX

    @article{2229eb24542146839c404e8a201539a5,
    	title = "Estimation of musculotendon kinematics in large musculoskeletal models using multidimensional B-splines",
    	author = "Massimo Sartori and Monica Reggiani and {van den Bogert}, {Antonie J} and Lloyd, {David G}",
    	year = 2011,
    	doi = "10.1016/j.jbiomech.2011.10.040",
    	language = "English",
    	volume = 45,
    	pages = "595--601",
    	journal = "Journal of biomechanics",
    	issn = "0021-9290",
    	publisher = "Elsevier",
    	number = 3
    }
    
  21. A Broggi, P Cerri, L Gatti, P Grisleri, H G Jung and J Lee. Scenario-driven search for pedestrians aimed at triggering non-reversible systems. In 2009 IEEE Intelligent Vehicles Symposium. June 2009, 285-291. DOI BibTeX

    @inproceedings{5164292,
    	author = "A. Broggi and P. Cerri and L. Gatti and P. Grisleri and H. G. Jung and J. Lee",
    	booktitle = "2009 IEEE Intelligent Vehicles Symposium",
    	title = "Scenario-driven search for pedestrians aimed at triggering non-reversible systems",
    	year = 2009,
    	volume = "",
    	number = "",
    	pages = "285-291",
    	keywords = "image recognition;object detection;traffic engineering computing;scenario-driven pedestrian search;nonreversible system;pedestrian detection;automotive application;pedestrian recognition;laser scanner;camera;brake-by-wire technology;Rain;Cameras;Automotive components;Optical sensors;Intelligent sensors;Optical distortion;Image sensors;Stimulated emission;Mirrors;Automotive engineering",
    	doi = "10.1109/IVS.2009.5164292",
    	issn = "1931-0587",
    	month = "June"
    }
    


2011

  1. M Munaro and A Cenedese. Scene specific people detection by simple human interaction. In 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops). November 2011, 1250-1255. DOI BibTeX

    @inproceedings{6130394,
    	author = "M. {Munaro} and A. {Cenedese}",
    	booktitle = "2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops)",
    	title = "Scene specific people detection by simple human interaction",
    	year = 2011,
    	volume = "",
    	number = "",
    	pages = "1250-1255",
    	keywords = "cameras;image classification;object detection;scene specific people detection;human interaction;static camera;appearance-based people detector;background subtraction;proportion constraint;scene classifier;Detectors;Humans;Training;Positron emission tomography;Cameras;Accuracy;Three dimensional displays",
    	doi = "10.1109/ICCVW.2011.6130394",
    	issn = "",
    	month = "Nov"
    }
    
  2. Salvatore Anzalone, Emanuele Menegatti, Enrico Pagello, Rosario Sorbello, Yuichiro Yoshikawa and Hiroshi Ishiguro. A Multimodal People Recognition System for an Intelligent Environment. September 2011, 451-456. DOI BibTeX

    @inproceedings{AnzaYea!,
    	author = "Anzalone, Salvatore and Menegatti, Emanuele and Pagello, Enrico and Sorbello, Rosario and Yoshikawa, Yuichiro and Ishiguro, Hiroshi",
    	doi = "10.1007/978-3-642-23954-0_46",
    	month = 09,
    	pages = "451-456",
    	title = "A Multimodal People Recognition System for an Intelligent Environment",
    	year = 2011,
    	bdsk-url-1 = "http://dx.doi.org/10.1007/978-3-642-23954-0_46"
    }
    
  3. Massimo Sartori, Monica Reggiani, David Lloyd and Enrico Pagello. A neuromusculoskeletal model of the human lower limb: towards EMG-driven actuation of multiple joints in powered orthoses.. July 2011. DOI BibTeX

    @inproceedings{sartori,
    	author = "Sartori, Massimo and Reggiani, Monica and Lloyd, David and Pagello, Enrico",
    	doi = "10.1109/ICORR.2011.5975441,",
    	journal = "IEEE International Conference on Rehabilitation Robotics, ICORR 2011,",
    	month = 07,
    	title = "A neuromusculoskeletal model of the human lower limb: towards EMG-driven actuation of multiple joints in powered orthoses.",
    	year = 2011,
    	bdsk-url-1 = "http://dx.doi.org/10.1109/ICORR.2011.5975441,"
    }
    
  4. B Lain, S Ghidoni and E Menegatti. Robust object tracking in a dual camera sensor. In 2011 4th International Conference on Human System Interactions, HSI 2011. May 2011, 150-157. DOI BibTeX

    @inproceedings{5937358,
    	author = "B. {Lain} and S. {Ghidoni} and E. {Menegatti}",
    	booktitle = "2011 4th International Conference on Human System Interactions, HSI 2011",
    	title = "Robust object tracking in a dual camera sensor",
    	year = 2011,
    	volume = "",
    	number = "",
    	pages = "150-157",
    	keywords = "image colour analysis;image sensors;object tracking;video signal processing;robust object tracking;dual camera sensor;cooperative system;target tracking;Omnidome;pan-tilt-zoom unit;omnidirectional camera;color histogram analysis;motion detector;Target tracking;Histograms;Pixel;Image color analysis;Cameras;Detectors",
    	doi = "10.1109/HSI.2011.5937358",
    	issn = "2158-2254",
    	month = "May"
    }
    
  5. S Ghidoni, M Finotto and E Menegatti. Self-learning visual inspection system for cable crimping machines. In 2011 IEEE International Conference on Robotics and Automation. May 2011, 1-4. DOI BibTeX

    @inproceedings{5980590,
    	author = "S. {Ghidoni} and M. {Finotto} and E. {Menegatti}",
    	booktitle = "2011 IEEE International Conference on Robotics and Automation",
    	title = "Self-learning visual inspection system for cable crimping machines",
    	year = 2011,
    	volume = "",
    	number = "",
    	pages = "1-4",
    	keywords = "cable jointing;cables (electric);crimping;electric connectors;image colour analysis;inspection;production engineering computing;wires (electric);self-learning visual inspection system;cable crimping machines;connector checking;wire color sequences;color analysis technique;noise filtering",
    	doi = "10.1109/ICRA.2011.5980590",
    	issn = "1050-4729",
    	month = "May"
    }
    
  6. S M Anzalone, E Menegatti, E Pagello, Y Yoshikawa, H Ishiguro and A Chella. Audio-video people recognition system for an intelligent environment. In 2011 4th International Conference on Human System Interactions, HSI 2011. May 2011, 237-244. DOI BibTeX

    @inproceedings{5937372,
    	author = "S. M. {Anzalone} and E. {Menegatti} and E. {Pagello} and Y. {Yoshikawa} and H. {Ishiguro} and A. {Chella}",
    	booktitle = "2011 4th International Conference on Human System Interactions, HSI 2011",
    	title = "Audio-video people recognition system for an intelligent environment",
    	year = 2011,
    	volume = "",
    	number = "",
    	pages = "237-244",
    	keywords = "face recognition;feature extraction;middleware;speech recognition;support vector machines;audio-video people recognition system;intelligent environment;voice detection;face detection;voice recognition;face recognition;voice feature extraction;short time cepstrum analysis;eigenfaces technique;face feature extraction;support vector machine;cross-anchoring learning rule;mutual exclusivity selection principle;middleware software;Feature extraction;Cameras;Humans;Artificial intelligence;Training;Support vector machines;Speech recognition",
    	doi = "10.1109/HSI.2011.5937372",
    	issn = "2158-2254",
    	month = "May"
    }
    
  7. A Pretto, E Menegatti and E Pagello. Omnidirectional dense large-scale mapping and navigation based on meaningful triangulation. In 2011 IEEE International Conference on Robotics and Automation. May 2011, 3289-3296. DOI BibTeX

    @inproceedings{5980206,
    	author = "A. {Pretto} and E. {Menegatti} and E. {Pagello}",
    	booktitle = "2011 IEEE International Conference on Robotics and Automation",
    	title = "Omnidirectional dense large-scale mapping and navigation based on meaningful triangulation",
    	year = 2011,
    	volume = "",
    	number = "",
    	pages = "3289-3296",
    	keywords = "cameras;edge detection;feature extraction;image sensors;mesh generation;motion estimation;navigation;object tracking;SLAM (robots);solid modelling;topology;omnidirectional dense large-scale mapping;dense 3D maps;omnidirectional camera;egomotion estimation;robust tracking;omnidirectional image;triangle mesh subdivision;constrained Delaunay triangulation;edgelet feature extraction;structure parameter estimation;optimization framework;car roof;Three dimensional displays;Cameras;Image edge detection;Feature extraction;Tracking;Optimization;Simultaneous localization and mapping",
    	doi = "10.1109/ICRA.2011.5980206",
    	issn = "1050-4729",
    	month = "May"
    }
    
  8. Munaro Matteo, Michieletto Stefano, So Edmond, Alberton Daniele and Menegatti Emanuele. Fast 2.5D model reconstruction of assembled parts with high occlusion for completeness inspection. In Proc. of World Academy of Science, Engineering and Technology. 2011, 1718–1724. BibTeX

    @conference{11577_2534856,
    	abstract = "In this work a dual laser triangulation system is pre- sented for fast building of 2.5D textured models of objects within a production line. This scanner is designed to produce data suitable for 3D completeness inspection algorithms. For this purpose two laser projectors have been used in order to considerably reduce the problem of occlusions in the camera movement direction. Results of reconstruction of electronic boards are presented, together with a comparison with a commercial system.",
    	author = "Matteo, Munaro and Stefano, Michieletto and Edmond, So and Daniele, Alberton and Emanuele, Menegatti",
    	booktitle = "Proc. of World Academy of Science, Engineering and Technology",
    	keywords = "3D quality inspection; 2.5D reconstruction; laser",
    	pages = "1718--1724",
    	publisher = "World Academy of Science Engineering and Technology",
    	title = "{Fast 2.5D model reconstruction of assembled parts with high occlusion for completeness inspection}",
    	year = 2011
    }
    

July 22nd, 2022 - IAS-Lab team wins the SMACT Call 4 Ideas 2022

The IAS-Lab team (Bacchin, Ghidoni, Menegatti), headed by Emanuele Menegatti, won the first prize of the "SMACT Call 4 Ideas 2022" in the "Zero-waste & green transition" trajectory. The prize is a monetary award of €10,000 for developing the proposed "Waste Sorting Robot based on Deep-Learning". The project is also funded by PON 2021, with the PhD scholarship of Alberto Bacchin.

(Link for more information: https://www.sostenibile.unipd.it/en/news-en/tre-progetti-unipd-selezionati-da-smact-competence-center/ )

Bacchin SMACT

 

 

 

July 6th, 2022 - IAS-Lab team wins the ADAPT competition.

The IAS-Lab team composed of Matteo Terreran (team leader), Daniele Evangelista, Davide Allegro, Emanuele Menegatti and Stefano Ghidoni won the 1st ADvanced Agile ProducTion (ADAPT) field campaign competition, organized by Tampere University, Finland, in the context of the METRICS European project. The team ranked first in both the video track and the live demo track.

Curami

 Curami Matteo

 

 

 

February 7th, 2022 - Prof. Bruno Siciliano visited IAS-Lab

After the visit, Prof. Siciliano gave an invited talk at the Dept. of Information Engineering.

 


 

 

 

June 13-16, 2022 - Zagreb, Croatia - 17th International Conference on INTELLIGENT AUTONOMOUS SYSTEMS (IAS-17)

 


  

Intelligent Autonomous Systems are increasingly applied in areas ranging from industrial applications to professional service and household domains. New technologies and application domains push forward the need for research and development, bringing new challenges that must be overcome before Intelligent Autonomous Systems can be applied in a reliable and user-independent way. Recent advances in the areas of Artificial Intelligence, Machine Learning and Adaptive Control enable autonomous systems with improved robustness and flexibility.

IAS-17 will provide an inspiring forum for scientists, engineers, and students from all around the world to present and discuss the latest scientific results, technologies and ideas enabling Intelligent Autonomous Systems to perform in a safe, skillful and robust manner while handling uncertainty and unforeseen events.

 

 

May 2021 - Intelligenza artificiale & Robotica per l’industria 4.0 (Artificial Intelligence & Robotics for Industry 4.0) - Course for companies

Registration is open for the course for companies Intelligenza artificiale & Robotica per l’industria 4.0 (Artificial Intelligence & Robotics for Industry 4.0), organized by the Department of Information Engineering of the University of Padova.

The course aims to train companies on the most advanced technologies in Artificial Intelligence and Robotics, creating a dialogue between UNIPD and local businesses and presenting the University's expertise in the field. All the lecturers involved belong to a wide European network of professionals, opening up new networking opportunities for research and business.


The program, structured over three meetings, is developed by UniSMART – Fondazione dell’Università degli Studi di Padova, with Prof. Emanuele Menegatti as Scientific Director. It involves lecturers from the Departments of Engineering, Mathematics, Economics and Cultural Heritage who are active in the fields of Artificial Intelligence and Robotics.

Lectures will be held in Italian, with in-class presentations of real-world use cases.

Registration deadline: May 9.

For more information: https://www.unismart.it/corsi/academy/ai-robotics/

 

 

November 16th, 2020 - WHI team wins the Gold Medal again at the BCI Race of Cybathlon 2020

The WHi Team of the IAS-Lab won the gold medal at the BCI Race of the Cybathlon 2020. The dream team includes Francesco Bettella, the pilot, and the IAS-Lab members Gloria Beraldo, Luca Tonin, Stefano Tortora and Emanuele Menegatti.

The 2020 edition of Cybathlon took place in virtual mode on November 13th and 14th and involved more than 60 teams from over 20 countries. All teams safely pre-recorded their races, and the results were kept confidential until the day of the event. In the first edition, in 2016, 66 pilots and 56 teams from 25 nations competed in front of an audience of 4,600 people in the Zurich arena. In 2019, the WHI Team had already participated in the Cybathlon Series held in Graz, winning the gold medal in the Brain-Computer Interface discipline.


WHI TEAM 2

 

Cybathlon is the first international neurorobotic championship where athletes with motor disabilities compete in six disciplines supported by advanced assistive technologies. In the BCI Race discipline, pilots with tetraplegia use their own brain signals to control an avatar via a brain-computer interface (BCI) system. The system acquires the pilot's electroencephalogram and translates it into three different commands to turn the avatar to the right, to the left or to turn the lights on. The pilot must send the correct command according to the point of the track where he is and make the avatar accelerate.
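
To give a concrete feel for how such a three-command protocol can be organized in software, the sketch below shows a minimal control loop in Python. It is only an illustrative outline under assumed interfaces, not the WHI Team's actual system: the decoder object, the motor-imagery class names and the game's send_command method are hypothetical placeholders.

    # Illustrative sketch (not the WHI Team's actual software): a minimal
    # three-command BCI control loop. The decoder, class names and game
    # interface below are hypothetical placeholders.

    from enum import Enum

    class Command(Enum):
        TURN_LEFT = "left"     # steer the avatar to the left
        TURN_RIGHT = "right"   # steer the avatar to the right
        LIGHTS_ON = "lights"   # switch the avatar's lights on

    # Hypothetical mapping from decoded motor-imagery classes to game commands.
    CLASS_TO_COMMAND = {
        "left_hand": Command.TURN_LEFT,
        "right_hand": Command.TURN_RIGHT,
        "both_feet": Command.LIGHTS_ON,
    }

    def control_loop(eeg_windows, decoder, game, threshold=0.7):
        """Send a command to the game only when the decoder is confident.

        eeg_windows: iterable of EEG segments (e.g. one second of samples each)
        decoder:     object whose predict_proba(window) returns a dict
                     mapping each motor-imagery class to a probability
        game:        object with a send_command(Command) method
        """
        for window in eeg_windows:
            probabilities = decoder.predict_proba(window)
            best = max(probabilities, key=probabilities.get)
            # Act only on confident predictions: a wrong command slows the avatar.
            if probabilities[best] >= threshold:
                game.send_command(CLASS_TO_COMMAND[best])

The key design point illustrated here is the confidence threshold: sending no command is usually less costly than sending a wrong one, so the loop stays silent unless the decoded class is sufficiently reliable.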

 

WHI TEAM 1

 

 

October 30th, 2020 - Gloria Beraldo wins Premio Torasso Award

Gloria Beraldo, Ph.D. student at DEI under the supervision of Prof. Emanuele Menegatti and Prof. Luca Tonin, won the "Premio Torasso" Award (PhD student edition) during the conference of the Italian Association for Artificial Intelligence, AIxIA 2020. The award is given for the best video explaining the contribution of the PhD student's thesis to the field of Artificial Intelligence.

 

October 22nd, 2020 - DIGITALmeet on the technologies and implications of creating conscious robots

Prof. Emanuele Menegatti and Prof. Enrico Facco of the University of Padova, Prof. Antonio Chella of the University of Palermo, and Prof. Hiroshi Ishiguro of Osaka University were invited to DIGITALmeet 2020 to discuss the technologies and implications of creating conscious robots.
The debate on AI and consciousness is very lively and vital; the crucial point is whether consciousness can be implemented in a computer or a robot, or whether it is something that escapes computational models and belongs exclusively to the world of biology.
DIGITALmeet, now at its 7th edition, is the biggest Italian festival on digital literacy.


 

October 14th, 2020 - Third prize as Best Student Paper Award at the IEEE SMC 10th Workshop on Brain-Machine Interface (BMI) Systems

The IAS-Lab members Gloria Beraldo, Stefano Tortora, Emanuele Menegatti and Luca Tonin won the third prize as Best Student Paper Award at the IEEE SMC 10th Workshop on Brain-Machine Interface (BMI) Systems, held virtually on October 11-14, for the paper “ROS-Neuro: implementation of a closed-loop BMI based on motor imagery”.

The goal of the Workshop is to provide a forum to present research results and to facilitate interaction and intellectual exchange. The Workshop invites contributions reporting the latest advances, innovations and applications in BMIs, including the integration of BMIs with virtual/augmented reality, affective BMIs, hybrid BMIs, deep learning for BMIs, neurorehabilitation, new neuroimaging modalities and sensor technologies, as well as emerging neurotechnology applications, all topics which offer tremendous opportunities for multi-disciplinary research.

 

September 11th, 2020 - Best Paper Award at ETFA 2020, the conference on emerging technologies

The IAS-Lab members Daniele Evangelista, Matteo Terreran and Emanuele Menegatti, together with professors Michele Moro and Carlo Ferrari, won the Best (Work in Progress) Paper Award in the category of Emerging Technologies at ETFA 2020, the International Conference on Emerging Technologies and Factory Automation, held in Vienna, for the paper “3D Mapping of X-Ray Images in Inspection of Aerospace Parts”.

 

September 17th, 2019 - Gold medal at the BCI Race in Graz for Francesco Bettella and the IAS-Lab team

The WHi Team of the IAS-Lab won the gold medal at the BCI Race of the Cybathlon Series in Graz. The team consists of Francesco Bettella, the pilot, together with the IAS-Lab members Gloria Beraldo, Luca Tonin, Stefano Tortora and Emanuele Menegatti.

Silver medal to the Austrian team Mirage62, and bronze to the British team NeuroCONCISE.

Cybathlon is the first international neurorobotic championship where athletes with motor disabilities compete in six disciplines supported by advanced assistive technologies. In the BCI Race discipline, pilots with tetraplegia use their own brain signals to control an avatar via a brain-computer interface (BCI) system. The system acquires the pilot's electroencephalogram and translates it into three different commands: turn the avatar to the right, turn it to the left, or turn the lights on. The pilot must send the correct command for the point of the track the avatar is on in order to make it accelerate.

The WHi Team is training for the 2020 Cybathlon event taking place in Zurich, Switzerland.

 

 Gold_metal.png

 


 

July 2019 - Emanuele Menegatti new Associate Editor of Frontiers in Neurorobotics

Prof. Emanuele Menegatti has been invited to serve as Associate Editor of the journal Frontiers in Neurorobotics. More information about the journal is available at the following link: https://blog.frontiersin.org/. We wish Emanuele all the best in his new role.

 

June 27th, 2019 - Second prize as Best Poster Award at the workshop From BCI to Human Robot Augmentation

Our work "Towards Neurorobotics: from human intention to action through intelligent assistive robotics" won the second prize as Best Poster Award at the workshop From BCI to Human Robot Augmentation, held in London from 23 to 26 June 2019. Our Ph.D. student Stefano Tortora presented the paper; the co-authors are Gloria Beraldo, Stefano Michieletto, Francesca Stival, and Emanuele Menegatti.

 

June 4th, 2019 - Live on Caterpillar (Rai Radio 2), Emanuele Menegatti presents the Soft Act project

Emanuele Menegatti, interviewed live on national radio by the programme Caterpillar on Rai Radio 2, talks about the 'soft' exoskeleton developed at Harvard University and about the European project Soft Act, which will try to make it move. The Soft Act project, just started, was born in collaboration with Prof. Alessandra De Felice of the Padova Neuroscience Center. You can find the audio of the broadcast here; the interview, which explains very well the goal of the project and the collaboration with Harvard, starts at minute 33:05. Once again, Italy and the University of Padova are at the forefront of robotics projects.

 

September 4 - 6, 2019 - Seventh edition of the teacher training course on Educational Robotics

Registration is open for the 7th edition of the teacher training course Educational Robotics at School, which will take place, as usual, at the Department of Information Engineering of the University of Padova on Wednesday 4, Thursday 5 and Friday 6 September 2019. At this link you can see the programme, where you will also find the registration form.

Connecting with the school of the future is possible!

 

March 25th, 2019 - Educational Robotics at DEI for the Festival della Cultura Creativa

As part of the Festival della Cultura Creativa, supported by BNL and in collaboration with Scuola di Robotica, two lower secondary schools, the M. Fanno of Saonara and the M. Todesco of Padova, were hosted at the Department of Information Engineering of the University of Padova for an educational robotics experience. After a quick introduction on what a robot is and how it can be useful for repetitive or dangerous tasks, Prof. Emanuele Menegatti and Prof. Michele Moro, lecturers at the Department, handed the students Lego Mindstorms kits; the task was to build a street-sweeper robot with the supplied components and to program it to follow a predefined route, with fixed stops to collect the rubbish from the bins.

The idea behind Educational Robotics is learning by doing: the robot is not the goal but the means for active learning, following the principles of Seymour Papert's Constructionism.

 

BNL SdR Occhio di pesce MIN

April 23rd, 2018 - Sixth edition of the teacher training course on Educational Robotics

The teacher training course Introduction of Educational Robotics in Institutional School Teaching will take place on Wednesday 5, Thursday 6 and Friday 7 September 2018 at the Department of Information Engineering of the University of Padova. Robotics can be an extraordinary multidisciplinary teaching tool, and the course is open to teachers of all subjects, including the humanities, working in schools of every type and level. An online help desk will remain available to participants all year to ask questions and discuss real application scenarios. Participation forms must arrive by June 4th. Those selected will be notified personally.

More information at the link http://robotics.dei.unipd.it/index.php/teaching/15-educational-robotics/140-roboticaeducativa2018

Here you can find the form to fill in as a participation request (Registration Form).

 

May 18th, 2017

The 5th edition of the "Lifelong Learning Course for Teachers on Educational Robotics" is about to start (http://robotics.dei.unipd.it/index.php/teaching/15-educational-robotics/125-roboticaeducativa2017). Here you can find the registration form: http://robotics.dei.unipd.it/index.php/teaching/15-educational-robotics/107-scheda-registrazione-education-2017. Registration is open until June 4th, 2017.

 

March 22nd, 2017

Desert Lion, the team of the IAS-Lab, took part in the MBZIRC Challenge and... the IAS-Lab won the bronze medal (3rd place).
Together with the Czech Technical University in Prague, the University of Pennsylvania and the University of Lincoln, the IAS-Lab team took part in the Grand Challenge (http://www.mbzirc.com/challenge).
The joint team scored third (http://www.mbzirc.com/scoreboard/challenge4#viewscore), so we are bronze medalists.
Congratulations to all the IAS-Lab members who worked very hard for this result.

 

September 26th, 2016

On Wednesday, October 19th 2016 at 4 pm, at the Accademia Galileiana (Sala del Guariento, Via Accademia 7), the XVI annual seminar on information technologies will address the new developments of educational robotics as a tool to support learning in the school of the 21st century. The seminar will be given by Prof. Emanuele Menegatti and Prof. Michele Moro of the Department of Information Engineering.

By clicking here you can view the seminar flyer.

 

May 27th, 2016

The workshop "Embodied-Brain System Science and Robotics Engineering for Rehabilitation" took place in Aula Magna A. Lepschy, with contributions from researchers from Tokyo and from the IAS-Lab.

Here is the program of the meeting.

 

May 1st, 2016

IAS-Lab has been selected for participation, and for receiving a sponsorship, in the Mohamed Bin Zayed International Robotics Challenge (MBZIRC), which will take place in Abu Dhabi in February 2017. Here is a list of the selected teams: http://www.mbzirc.com/qualified-teams.

 

March 29th, 2016

Jeff Burke and Matteo Munaro will demonstrate OpenPTrack at the GPU Technology Conference (GTC) on April 6. Their presentation, “Real-Time Person Tracking on Jetson with OpenPTrack”, will provide an overview of OpenPTrack, real-world deployments, and the technology roadmap using the NVIDIA Jetson, for which OpenPTrack has recently been optimized.
GTC is the largest annual event for GPU developers; this year it will be held April 4-7 at the San Jose McEnery Convention Center, San Jose, California.

The recording of the talk is available here: http://on-demand.gputechconf.com/gtc/2016/video/S6742.html.

 

February 16th, 2016

IAS-Lab attended the FOCUS Expert Workshop, as part of the FOCUS project. The meeting was held in Brussels, at the Crowne Plaza Hotel - Le Palace.
Here is the program of the meeting.

 

December 2-3 , 2015

IAS-Lab and Little Lab participated in Genoa in the kick-off meeting of the European project RoboESL (part of the Erasmus+ programme). The aim of RoboESL is to support schools in tackling early school leaving (ESL) by introducing robots into extra-curricular activities. The coordinator of the project is the "Liceo Fermi" of Genoa. The other partners, in addition to the Department of Information Engineering of the University of Padua, are the Department of Education of the University of Latvia, the School of Robotics of Genoa, the European Lab for Educational Technology (Edumotiva) and some schools in Latvia and Greece.

 

RoboESL3

 

May 5th, 2015

Matteo Munaro will give a talk entitled "OpenPTrack: Extending people tracking to RGB-D camera networks" at Microsoft on May the 11th in Redmond and at Google on May the 13th in Mountain View. 

 

April 7th, 2015

OpenPTrack is on Twitter! You can now follow this open source project for multi-camera people tracking also here: @OpenPTrack and @UCLAREMAP.

 

January 26th, 2015

IAS-Lab attended the final conference of the Dodich project at the Interdepartmental Centre for Research in Viticulture and Enology in Conegliano. The speakers' presentations are available here.

 

September 25th, 2014

IAS-Lab presented the EuRoC project and the opportunities it offers to Master's students at the Department of Information Engineering of the University of Padova.

 

July 18th, 2014

At the IAS-13 conference (13th International Conference on Intelligent Autonomous Systems), held at the Congress Centre Padua "Papa Luciani", IAS-Lab presented the robot that recognizes and follows a person's movements. Here is the video published by "Il Mattino di Padova".

 

June 24th, 2014

Michele Palmia won the prize "FONDAZIONE LUCIANO IGLESIAS" for his master thesis "Design and implementation of a system for mutual knowledge among cognition-enabled robots", with supervisor Enrico Pagello and co-supervisor Mauro Antonello.

 

June 16th, 2014

The Department of Computer Science at Brown University (Providence, USA) uses the people detector developed at the IAS-Lab with its PR2 robot. Here is a picture of a people-following demo running our people detector at the World Science Festival 2014 in New York City. Image courtesy of Chad Jenkins and Lingzhu Xiang.

wsfpr2

 

June 4th, 2014

Today IAS-Lab met young engineers from the 1st and 3rd year of the BSc in Information Engineering. We showed them the current research activities of our lab and invited them to join.

meeting

 

June 2nd, 2014

"Mattino di Padova" talked about IAS-Lab and its research activities and future projects.
Printed Article
Online Article

May 12th, 2014

IAS-Lab will attend the 2nd edition of RoboBusiness Europe, which will be held at the LEGOLAND® Hotel & Conference Center in Billund, Denmark. RoboBusiness Europe is a first-of-its-kind robotics event for the continent. Its purpose is to provide a platform where robotics developers and end users can meet, bridging developments in robotics with end users' needs.

 

May 6th, 2014

IAS-Lab will participate in Wired Next Fest 2014 in Milan. We will show demonstrations of our algorithms for people tracking and our robot in action.
Furthermore, on Sunday 18th May at 14:30, the current research activities of the lab will be presented in an oral talk.

 

April 28th, 2014

Elisa Tosello gave an invited talk during the 2nd ROS-I Community Forum Webinar about the non-real-time ROS interface for real-time COMAU controllers developed by the IAS-Lab.

 

April 22nd, 2014

IAS-Lab meets the young innovators of the future by presenting some demos of recent technological advances in robotics at "Sperimentando 2014". An experimental laboratory on "Learning how to program a robot" will be taught by Stefano Michieletto during the same exhibit.

 

February 23rd, 2014

"Corriere delle alpi" interviewed Team EXiM (Francesco Ferrati, Roberto Bortoletto, Emanuele Menegatti and Enrico Pagello) about their future projects.

 

February 8th, 2014

Matteo Munaro and Stefano Michieletto have been selected for participation in the Kinect for Windows Developer Preview Program.

 

February 5th, 2014

The weekly "TuttoScienze" by "La Stampa" interviewed Elisa Tosello about her contribution to the Robotics community.

 

February 2nd, 2014

The RobotWood team, sponsored by IAS-Lab and guided by Stefano Michieletto, won access to the FLL national phase during the Nord-Est interregional qualification.

 

December 7th, 2013

Elisa Tosello won the Dr. Kanako Miura Travel Grant, a prize which aims to assist female Ph.D. and MS students entering the Robotics field with travel to Humanoids 2013 in Atlanta. Read the news in the weekly "D" by "La Repubblica".

 

September 26th, 2013

Team EXiM (Francesco Ferrati, Roberto Bortoletto, Emanuele Menegatti and Enrico Pagello) won first prize (€10,000 prize money) in the business plan competition StartCup Veneto 2013. The business idea concerns an exoskeleton for the assistance of people living with spinal cord injury and the development of a software suite for processing the acquired clinical data. Read the full article published in "Il Gazzettino di Padova".

 

October 3rd, 2013

Workshop on Brain-Computer Interfaces (BCI) in collaboration with g.tec Medical Engineering at the IAS-Lab. Program available here.

 

August 13th, 2013

Matteo Munaro has joined the scientists collaborating with the Open Perception Foundation.
 
 

May 2nd, 2013

Matteo Munaro will give an invited talk at PCL Tutorial during the ICRA 2013 conference in Karlsruhe, Germany.
Title: "People Detection in RGB-D Data".
Date: Friday 10 May 2013, 16:25 - 16:50.
Room: Clubraum.

 

April 25th, 2013

Matteo Munaro has been selected as a developer for a PCL code sprint sponsored by the Southwest Research Institute (SwRI) and the National Institute of Standards and Technology (NIST).
The effort will focus on developing algorithms for human detection and tracking from 2D camera imagery fused with 3D point cloud data (coming from ASUS Xtion PRO cameras).

 

News archive

April 4th, 2013

Matteo Munaro and Filippo Basso released BSD-licensed code for creating target-specific color classifiers with Online AdaBoost (download here), as described in F. Basso, M. Munaro, S. Michieletto and E. Menegatti. Fast and robust multi-people tracking from RGB-D data for a mobile robot. In Proceedings of the 12th Intelligent Autonomous Systems (IAS) Conference, Jeju Island (Korea), 2012.

 

March 10th, 2013

First Lego League Italia: the RobotWood team, sponsored by IAS-Lab and guided by Stefano Michieletto, finished in 8th place and won the Best Robotics Performance Prize.

 

February 20th, 2013

Matteo Munaro contributed to the Point Cloud Library a module for people detection from RGB-D data.
Here is the link to a tutorial explaining how to use it:

 

January 19th, 2013

The RobotWood team, sponsored by IAS-Lab and guided by Stefano Michieletto, won the Centro-Sud interregional qualification and will access the national phase.

 

January 9th, 2013

Emanuele Menegatti will give an invited talk at Workshop Robotica 2013 in Treviso.
Friday 25 January 2013
Auditorium S. Pio X 9.00 - 13.00

 

December 8th, 2012

Gioia Ballin won the prize "inTESI 2012" for her master thesis "Human Action Recognition from RGB-D Frames", with supervisor Emanuele Menegatti and co-supervisor Matteo Munaro.

Martino Minella won the prize "inTESI 2012" for his master thesis "Tecniche di analisi di difetti in pezzi metallici e in fibra di carbonio mediante analisi di immagini termiche" (Defect analysis techniques for metal and carbon-fibre parts through thermal image analysis), with supervisor Emanuele Menegatti and co-supervisor Stefano Ghidoni.

Morris Soragni won the prize "inTESI 2012" for his master thesis "Sistema client-server per la gestione della telecamera Omidome" (Client-server system for managing the Omidome camera), with supervisor Emanuele Menegatti and co-supervisor Stefano Ghidoni.

 

December 4th, 2012

With more than 90 participating organisations in 21 European countries and over 200 robotics-related activities offered to the public, the second European Robotics Week was a tremendous success!
Please refer to the Press Room on www.robotics-week.eu to download pictures and other information.

 

November 18th, 2012

Edmond So, Stefano Michieletto, and Emanuele Menegatti won the Best Paper Award at the IEEE International Symposium on RObotic and Sensors Environments (ROSE 2012) for the paper:

E. So, S. Michieletto, and E. Menegatti. "Calibration of a Dual-Laser Triangulation System for Assembly Line Completeness Inspection"; IEEE International Symposium on RObotic and Sensors Environments (ROSE 2012); Magdeburg, Germany; 2012, pp. 138-143.

 

November 13th, 2012

To all interested parties,

We would like to inform you that on Thursday 29 and Friday 30 November 2012, from 9:00 to 18:00, as part of the 2nd European Robotics Week (http://euroweek.scuoladirobotica.eu), the Intelligent Autonomous Systems Laboratory (IAS-Lab) of the University of Padova will be open to visits from the public, and in particular from secondary school students. During the visit, the research activities in progress in the laboratory will be briefly presented, and it will be possible to see several demonstrations with robots in action, in particular humanoids. Educational robotics will also be discussed, with examples built with Lego Mindstorms NXT and Arduino.

If you wish to take part in the initiative with your class, you must book by sending an email to This email address is being protected from spambots. You need JavaScript enabled to view it. with the following information:
- Name, type and address of the school;
- Class (1st, 2nd, etc.);
- Number of participating students;
- Name of the accompanying teacher(s);
- Preferred date and time (at least 2 options).

If instead you prefer to come with your family, or to suggest a personal visit to individual students, please ask them to book with an email to the same address above, with this information:
- Name and surname;
- City of origin;
- Type of school and class attended;
- Preferred date and time.

If you need more information about the event, please send a request by email to the address above.

Please note that on 30/11/2012 the activity will coincide with the Open Day promoted by the Department of Information Engineering of the University of Padova.

Thank you for your attention, best regards.
Prof. Emanuele Menegatti and Prof. Michele Moro

 

November 12th, 2012

IAS-Lab joined the EuRobotics Week. You can also join our events:
29/11 - AutonomousRobotics@UNIPD
30/11 - Robotics@OpenDEI
or play an active role in the other European Events.

 

October 13th, 2012

Emanuele Menegatti co-organized the workshop BCI and robotics control at ROBOTICA 2012 in Milan.
Chair: Emanuele Menegatti (Università degli studi di Padova)
Chair: Piero Paolo Battaglini (Università degli studi di Trieste)
Friday 9 November 2012
Sala Vukobratovic 10.30 - 17.30

 

September 28th, 2012

IAS-Lab attended VenetoNight to show the demo "Control your robot".

 

controlyourrobot

 

 

June 19th, 2012

The programme of ROBOTICA 2012 has been published!
Robotica 2012

April 27th, 2012

Gioia Ballin, who graduated with the master thesis "Human Action Recognition from RGB-D Frames" (with supervisor Emanuele Menegatti and co-supervisor Matteo Munaro), has been selected for the Google Summer of Code as a developer of the Point Cloud Library.

 

April 20th, 2012

The Service Robotics - Robotics Research Campus website is now online. This is a joint project involving the IAS-Lab of the University of Padova, Telecom Italia, the Robotics Research Group of the Politecnico di Torino and the BioRob Group of the CNR-IEIIT U.O. of Genova.

 

January 11th, 2012

We organized a bilateral scientific meeting between the RACE center of the University of Tokyo - Kashiwa Campus and the IAS-Lab. Here is the meeting program.

 

December 15th, 2011

Carlo Giuliani won the prize "inTESI 2011" for his master thesis "Riconoscimento di facce in un sistema di visione distribuito (Face recognition in a distributed vision system)", with supervisor Emanuele Menegatti and co-supervisor Salvatore Anzalone.

Daniele Alberton won the prize "inTESI 2011" for his master thesis "Sistema di visione industriale per il controllo qualità in 3D (Industrial vision system for 3D quality inspection)", with supervisor Emanuele Menegatti, related to the 3D Complete European Project.

 

November 19th, 2011

Our spin-off IT+Robotics attended the Robotica 2011 and BiMec 2011 fairs in Milan.


 

Abstract

Retrieving the 3D coordinates of an object in the robot workspace is a fundamental capability for industrial and service applications. This can be achieved by means of a camera mounted on the robot end-effector only if the hand-eye transformation is known. The standard calibration process requires viewing a calibration pattern, e.g. a checkerboard, from several different perspectives. This work extends the standard approach by performing calibration pattern localization and hand-eye calibration in a fully automatic way. A two-phase procedure has been developed and tested in both simulated and real scenarios, demonstrating that the automatic calibration reaches the same performance level as the standard procedure, while avoiding any human intervention. As a final contribution, the source code for an automatic and robust calibration is released.
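For readers unfamiliar with the underlying computation, the sketch below shows the two ingredients a hand-eye calibration needs: the checkerboard pose in the camera frame (from pattern detection) and the gripper pose in the robot base frame (from the robot controller). It is a minimal illustration built on OpenCV's generic calibrateHandEye routine, not the hand_eye_calib package itself; the board size, square size, intrinsics and pose lists are placeholders.

```python
# Minimal hand-eye calibration sketch with OpenCV (not the hand_eye_calib package).
# Assumes: checkerboard images and the corresponding gripper-to-base poses are given.
import cv2
import numpy as np

BOARD_SIZE = (9, 6)      # inner corners of the checkerboard (assumed)
SQUARE_SIZE = 0.025      # square side in metres (assumed)
K = np.eye(3)            # camera intrinsics, replace with the real matrix
DIST = np.zeros(5)       # distortion coefficients, replace with the real ones

# 3D coordinates of the board corners in the board frame
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

def target_pose(gray_image):
    """Phase 1: localize the pattern and return its pose in the camera frame."""
    found, corners = cv2.findChessboardCorners(gray_image, BOARD_SIZE)
    if not found:
        return None
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, DIST)
    return (cv2.Rodrigues(rvec)[0], tvec) if ok else None

def calibrate(images, gripper_poses):
    """Phase 2: hand-eye calibration from N views and N gripper poses.

    gripper_poses: list of (R_gripper2base, t_gripper2base) from the robot.
    Returns the camera pose in the gripper frame (R, t).
    """
    R_t2c, t_t2c, R_g2b, t_g2b = [], [], [], []
    for img, (Rg, tg) in zip(images, gripper_poses):
        pose = target_pose(img)
        if pose is None:
            continue                      # skip views where the board is not seen
        R_t2c.append(pose[0]); t_t2c.append(pose[1])
        R_g2b.append(Rg);      t_g2b.append(tg)
    return cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                method=cv2.CALIB_HAND_EYE_TSAI)
```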

 

Software

Here, you can find the ROS project hand_eye_calib.

 

Licence

The software is freely available for academic use.
For questions about the tool, please contact This email address is being protected from spambots. You need JavaScript enabled to view it..

 

Reference

@inproceedings{antonello2017autohandeye,
  title={A Fully Automatic Hand-Eye Calibration System},
  author={Antonello, Morris and Gobbi, Andrea and Michieletto, Stefano and Ghidoni, Stefano and Menegatti, Emanuele},
  booktitle={To appear in Mobile Robots (ECMR), 2017 European Conference on},
  year={2017},
}

The IASLAB-RGBD Fallen Person Dataset consists of several RGB-D frame sequences containing 15 different people. It has been acquired in two different laboratory environments, Lab A and Lab B, and can be divided into two parts: the former acquired from 3 static Kinect One V2 sensors placed on 3 different pedestals; the latter from a Kinect One V2 mounted on our healthcare robot prototype, see "An Open Source Robotic Platform for Ambient Assisted Living" by M. Carraro, M. Antonello et al. in AIRO at AI*IA 2015.

 

Both parts are briefly described in the following. They contain the training/test splits used by our approach to detect fallen people.

STATIC DATASET:

  • Folder "raw": 360 RGB frames and point clouds with the camera calibrations;
  • Folder "segmented_fallen_people": point clouds of the fallen people. They have been manually segmented;
  • Folder "training_with_cad_room_and_nyudv2": random selected positives (70%), 24 point clouds from the Lab A and 31 point clouds from the NYU Depth Dataset V2 by Silberman et al;
  • Folder "test_with_lab_room": random selected positives (30%) and 32 point clouds from the Lab B.

DYNAMIC DATASET:

  • Folder "training": 4 ROS bags with 15932 RGB-D frames in total acquired during 4 robot patrollings of the Lab A;
  • Folder "test": 4 ROS bags with 9391 RGB-D frames in total acquired during 4 robot patrollings of the Lab B. This room is more similar to an apartament: spaces are smaller, it is cluttered and contains a sofa;
  • Folder "maps": 2D maps of the two environments and ground truth positions of the person centroids in the maps.

Download links:

StaticDataset

DynamicDatasetPart1

DynamicDatasetPart2

 

In the figures below, some RGB samples from both environments are reported:

[Four RGB sample frames from Lab A and Lab B]

 

For questions and remarks directly related to the IASLAB-RGBD Fallen Person Dataset, please contact This email address is being protected from spambots. You need JavaScript enabled to view it. and This email address is being protected from spambots. You need JavaScript enabled to view it..

 

Licence

This dataset is freely available for academic use. 

 

References

If you use this dataset, please cite the following work:

@inproceedings{antonello2017fast,
 title={Fast and Robust detection of fallen people from a mobile robot},
 author={Antonello, Morris and Carraro, Marco and Pierobon, Marco and Menegatti, Emanuele},
 booktitle={Intelligent Robots and Systems (IROS), 2017 IEEE/RSJ International Conference on},
 year={2017},
 organization={IEEE}
}


 

IT+Robotics is a spin-off company of the University of Padua. It was created in 2005 by professors working in the field of robotics together with young, brilliant people coming from the Department of Information Engineering of the University of Padua.

Its mission is technology transfer from university to business. Innovation boosted by cutting-edge technology, which only a few years ago was a prerogative of academia, is the right path towards growth and will help counter the current crisis.

The IT+Robotics team, a mix of scientists and developers, is able to provide advanced solutions thanks to the experience gained over years of research and industrial development in the field of autonomous robotics: real-time operating systems, artificial vision systems, software agent management, and highly realistic simulations.

 

 

Screen Shot 2018-12-12 at 12.04.50.png

 

ExiMotion srl is a start-up company created in 2014, born from the experience and knowledge of the IAS-Lab of the University of Padua.

Its main mission is to solve the most common problems that arise when using technological devices in the bio-medical field. ExiMotion offers highly innovative solutions at a low cost for institutions and for the final user. Thanks to the scientific and engineering know-how of its team, ExiMotion s.r.l. is bridging the gap between the world of research and that of production.

ExiMotion s.r.l. has created an Education Department, since it believes that scientific knowledge is crucial to maintaining high quality in scientific research and in the innovation process, both for every single nation and for Europe in general. The main focus is on Educational Robotics and Creative Coding.

IAS-Lab stands for Intelligent Autonomous Systems Laboratory and it is one of the 28 laboratories of the Department of Information Engineering of the University of Padua.

The skills available at the IAS-Lab cover various methodological and algorithmic aspects of intelligent robotics and computer vision applied to robotics. These include: calibration, navigation, 3D reconstruction of environments and the interpretation of sensorial data of autonomous robots such as mobile platforms, industrial manipulators, humanoid robots and drones.

The fields of application range from service robotics to industrial robotics and from neurorobotics to agricultural robotics. 
The IAS-Lab team is composed of 5 professors and permanent researchers, 2 post-docs and 5 Ph.D. students, plus a dozen Master's students. 
 
The equipment available at the IAS-Lab includes several fully sensorized robots (e.g. with RGB-D cameras, IMUs, LiDARs). Among these: 2 mobile manipulators, 4 industrial manipulators and 2 humanoid robots. The laboratory, distributed over two rooms for a total of 250 square meters, is also equipped with a 3D camera network for markerless motion capture.

 



