Intelligent and Secure Systems for Telemedicine Unit

Citel Board Member


The team


Researchers 2020


"Emotion detection using noninvasive low cost sensors," 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), San Antonio, TX, 2017, pp. 125-130, doi: 10.1109/ACII.2017.8273589
D. Girardi, F. Lanubile and N. Novielli
Abstract: Emotion recognition from biometrics is relevant to a wide range of application domains, including healthcare. Existing approaches usually adopt multi-electrode sensors that can be expensive or uncomfortable to use in real-life situations. In this study, we investigate whether we can reliably recognize high vs. low emotional valence and arousal by relying on noninvasive, low-cost EEG, EMG, and GSR sensors. We report the results of an empirical study involving 19 subjects. We achieve state-of-the-art classification performance for both valence and arousal even in a cross-subject classification setting, which eliminates the need for individual training and tuning of classification models.
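The cross-subject setting mentioned above is typically realized with leave-one-subject-out cross-validation, where each fold tests on a subject whose samples never appear in training. A minimal sketch (the data and subject ids below are illustrative, not from the study):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) pairs where each fold holds out
    every sample belonging to exactly one subject."""
    subject_ids = np.asarray(subject_ids)
    for subject in np.unique(subject_ids):
        test_mask = subject_ids == subject
        yield np.where(~test_mask)[0], np.where(test_mask)[0]

# Toy example: 6 samples from 3 subjects (hypothetical data).
ids = [1, 1, 2, 2, 3, 3]
folds = list(leave_one_subject_out(ids))
```

Each fold's classifier is then trained on `train_idx` and scored on `test_idx`, so the reported performance reflects generalization to subjects unseen during training.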
A University-NGO partnership to sustain assistive technology projects. interactions 23, 2 (March + April 2016), 74–77. DOI: https://doi.org/10.1145/2883619
Fabio Calefato, Filippo Lanubile, Roberto De Nicolò, and Fabrizio Lippolis. 2016

Abstract: In this forum we celebrate research that helps to successfully bring the benefits of computing technologies to children, older adults, people with disabilities, and other populations that are often ignored in the design of mass-marketed products.

"Gait analysis for early neurodegenerative diseases classification through the Kinematic Theory of Rapid Human Movements," in IEEE Access, doi: 10.1109/ACCESS.2020.3032202.
V. Dentamaro, D. Impedovo and G. Pirlo

Abstract: Neurodegenerative diseases are particular diseases whose decline can partially or completely compromise the normal course of life of a human being. In order to improve the quality of a patient's life, a timely diagnosis plays a major role. The analysis of neurodegenerative diseases, and their stage, is also carried out by means of gait analysis. Performing early-stage neurodegenerative disease assessment is still an open problem. In this paper, the focus is on modeling the human gait movement pattern by using the kinematic theory of rapid human movements and its sigma-lognormal model. The hypothesis is that the kinematic theory of rapid human movements, originally developed to describe handwriting patterns, can, in conjunction with other spatio-temporal features, discriminate neurodegenerative disease patterns, especially in early stages, while analyzing human gait with 2D cameras. The paper empirically demonstrates its effectiveness in describing neurodegenerative patterns when used in conjunction with state-of-the-art pose estimation and feature extraction techniques. The solution developed achieved 99.1% accuracy using velocity-based, angle-based and sigma-lognormal features and the left walk orientation.
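For reference, the sigma-lognormal model describes each rapid movement (stroke) as a velocity profile with lognormal shape. A sketch of the single-stroke velocity equation, with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def sigma_lognormal_velocity(t, D, t0, mu, sigma):
    """Velocity profile of a single stroke under the sigma-lognormal
    model: a lognormal impulse response scaled by the amplitude D."""
    t = np.asarray(t, dtype=float)
    v = np.zeros_like(t)
    m = t > t0          # the response starts only after onset time t0
    dt = t[m] - t0
    v[m] = D / (sigma * dt * np.sqrt(2 * np.pi)) * \
        np.exp(-((np.log(dt) - mu) ** 2) / (2 * sigma ** 2))
    return v

# Illustrative parameters: D = stroke amplitude, t0 = onset time,
# mu/sigma = log-time delay and response time of the neuromuscular system.
t = np.linspace(0, 2, 20001)
v = sigma_lognormal_velocity(t, D=10.0, t0=0.1, mu=-1.5, sigma=0.4)
```

The profile peaks at t0 + exp(mu - sigma^2) and its time integral equals the stroke amplitude D, which is what makes the extracted parameters usable as movement features.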

"Dynamic Handwriting Analysis for the Assessment of Neurodegenerative Diseases: A Pattern Recognition Perspective," in IEEE Reviews in Biomedical Engineering, vol. 12, pp. 209-220, 2019, doi: 10.1109/RBME.2018.2840679.
D. Impedovo and G. Pirlo

Abstract: Neurodegenerative diseases, for instance Alzheimer's disease (AD) and Parkinson's disease (PD), affect the peripheral nervous system, where nerve cells send messages that control muscles in order to allow movements. Sick neurons cannot control muscles properly. Handwriting involves cognitive planning, coordination, and execution abilities. Significant changes in handwriting performance are a prominent feature of AD and PD. This paper addresses the most relevant results obtained in the field of online (dynamic) analysis of handwritten trials by AD and PD patients. The survey is made from a pattern recognition point of view, so that different phases are described. Data acquisition deals not only with the device, but also with the handwriting task. Feature extraction can deal with function and parameter features. The classification problem is also discussed along with results already obtained. This paper also highlights the most profitable research directions.

Handwriting analysis to support neurodegenerative diseases diagnosis: A review. Pattern Recognition Letters, 121, 37-45 (2019).
De Stefano, C., Fontanella, F., Impedovo, D., Pirlo, G., & di Freca, A. S.

Abstract: Neurodegenerative diseases (NDs) affect millions of people worldwide, with Alzheimer’s and Parkinson’s being the most common ones, and it is expected that their incidence will dramatically increase in the next few decades. Unfortunately, these diseases cannot be cured, but an early diagnosis can help to better manage their symptoms and their evolution. These aspects explain the importance of developing support systems for the early diagnosis of neurodegenerative diseases. Handwriting is one of the abilities that is affected by NDs. For this reason, researchers have also investigated the possibility of using the handwriting alterations caused by NDs as diagnostic signs. This paper presents a review of the literature on handwriting analysis for supporting the diagnosis of Alzheimer’s and Parkinson’s disease as well as of mild cognitive impairment (MCI), with the goal of providing interested researchers with the state of the art. Moreover, with the aim of providing some guidelines on the features to use for representing handwriting and the writing tasks patients should perform, we also review some widely used approaches for modeling handwriting. Finally, open issues are also discussed to identify promising areas for future research.

"A handwritingbased protocol for assessing neurodegenerative dementia", Cognitive Computation, pp. 1-11, 2019
D. Impedovo, G. Pirlo, G. Vessio and M. T. Angelillo

Abstract: Handwriting dynamics is relevant to discriminate people affected by neurodegenerative dementia from healthy subjects. This can be done by administering simple and easy-to-perform handwriting/drawing tasks on digitizing tablets provided with electronic pens. Encouraging results have been recently obtained; however, the research community still lacks an acquisition protocol aimed at (i) collecting different traits useful for research purposes and (ii) supporting neurologists in their daily activities. This work proposes a handwriting-based protocol that integrates handwriting/drawing tasks and a digitized version of standard cognitive and functional tests already accepted, tested, and used by the neurological community. The protocol takes the form of a modular framework which facilitates the modification, deletion, and incorporation of new tasks in accordance with specific requirements. A preliminary evaluation of the protocol has been carried out to assess its usability. Subsequently, the protocol has been administered to more than 100 elderly subjects, including MCI patients and matched controls. The proposed protocol intends to provide a “cognitive model” for evaluating the relationship between cognitive functions and handwriting processes in healthy subjects as well as in cognitively impaired patients. The long-term goal of this research is the development of an easy-to-use and non-invasive methodology for detecting and monitoring neurodegenerative dementia during screening and follow-up.

PK-clustering: Integrating Prior Knowledge in Mixed Initiative Social Network Clustering. IEEE Transactions on Visualization & Computer Graphics (early access), 2020. DOI: 10.1109/TVCG.2020.3030347
A. Pister, P. Buono, J.D. Fekete, C. Plaisant, P. Valdivia

Abstract: In times of economic difficulty, everyone should adopt solutions that provide high-quality training at reduced costs, thanks to the possibilities offered by new Information and Communication Technologies. In medicine, it is very important to perform training in the field without compromising the patient's health. LARE is a system we are currently developing whose aim is to enable surgeons to perform telementoring (i.e. remote tutoring) during laparoscopic surgery. The surgeon in the operating room (learner) is assisted and guided by an expert surgeon (tutor) located in another part of the world, who interacts with the learner via audio and also observes on a screen, in real time and at very high resolution, the images that the learner is seeing in the operating room; s/he can also annotate such images to indicate points on which the learner has to operate. LARE allows many people to attend a surgery live; such people can also write in a chat. So far, only some components of LARE have been implemented in the current system. However, LARE has already been used, in particular during an event on February 9th, 2013, when 300 surgeons attended two surgeries performed under the guidance of a tutor who was about 800 km away from the operating room. The system and the results of this event will be illustrated at the conference.

Scene extraction from telementored surgery videos. In: Proc. of the International Conference on Distributed Multimedia Systems (DMS '13), Brighton (UK), August 8-10, 2013. Knowledge Systems Institute, Skokie, Illinois, USA
Buono, P., Desolda, G., Lanzillotti, R. (2013)

Abstract: The huge amount of videos available for various purposes makes video editing software very important and popular. One use of video in medicine is to store surgical operations for educational or legal purposes. In particular, in telemedicine, the exchange of audio and video plays a very important role. In most cases, surgeons are inexpert in video editing; moreover, the user interface of such software tools is often very complex. This paper presents a tool to extract important scenes from surgery videos. The goal is to enable surgeons to easily and quickly extract scenes of interest.
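One simple way to realize such scene extraction, not necessarily the method used in the paper, is histogram-based shot-boundary detection: consecutive frames whose grey-level histograms differ sharply mark a cut. A sketch with synthetic frames:

```python
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """Flag scene cuts where the L1 distance between the normalized
    grey-level histograms of consecutive frames exceeds a threshold."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    cuts = []
    for i in range(1, len(hists)):
        if np.abs(hists[i] - hists[i - 1]).sum() > threshold:
            cuts.append(i)  # frame i starts a new scene
    return cuts

# Synthetic video: 5 dark frames then 5 bright frames -> one cut at frame 5.
dark = [np.full((8, 8), 20) for _ in range(5)]
bright = [np.full((8, 8), 220) for _ in range(5)]
cuts = detect_cuts(dark + bright)
```

Detected cut indices then delimit candidate scenes that a surgeon can keep or discard, rather than scrubbing through the whole recording.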

"Detecting Clinical Signs of Anaemia From Digital Images of the Palpebral Conjunctiva," in IEEE Access, vol. 7, pp. 113488-113498, 2019, doi: 10.1109/ACCESS.2019.2932274.
G. Dimauro, A. Guarini, D. Caivano, F. Girardi, C. Pasciolla and A. Iacobazzi

Abstract: The potential for visually detectable clinical signs of anaemia, and their correlation with the severity of the pathology, has supported research on non-invasive prevention methods. Physical examination for a suspected diagnosis of anaemia is a practice performed by a specialist to evaluate the pallor of the exposed tissues. The aim of the research presented herein is to quantify and minimize the subjective nature of the examination of the palpebral conjunctiva, suggesting a method for diagnostic support and autonomous monitoring. Here we describe the methodology and system for extracting key data from the digital image of the conjunctiva, which is also based on analysis of the dominant colour classes. Effective features have been used herein to assign each image to a diagnosis probability class for anaemia. The images of the conjunctiva were taken using a new low-cost and easy-to-use device, designed to optimize independence from ambient light. The performance of the system was tested both by manually extracting the palpebral conjunctiva from the images and by extracting it semi-automatically based on the SLIC Superpixel algorithm. Tests were conducted on images obtained from 102 people. The dataset was unbalanced, since many more samples of healthy people were available, as often happens in the medical field. The SMOTE and ROSE algorithms were evaluated to balance the dataset, and some classification algorithms for assessing the anaemic condition were tested, yielding very good results. Taking a photo of the palpebral conjunctiva can aid the decision whether a blood sample is needed, or even whether a patient should inform a physician, considerably reducing the number of candidate subjects for blood sampling. It could also highlight suspected anaemia, allowing screening for anaemia in a large number of people, even in resource-poor settings.
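For context, SMOTE balances a dataset by synthesizing new minority-class samples through interpolation between existing ones. A minimal numpy sketch of the core idea (the 2-D points are illustrative; real use would rely on a library such as imbalanced-learn):

```python
import numpy as np

def smote(minority, n_synthetic, k=3, rng=None):
    """Minimal SMOTE: each synthetic sample is a random point on the
    segment between a minority sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    minority = np.asarray(minority, dtype=float)
    # Pairwise distances within the minority class.
    d = np.linalg.norm(minority[:, None] - minority[None, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a sample is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(minority))
        j = neighbours[i, rng.integers(k)]
        lam = rng.random()               # interpolation factor in [0, 1)
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

# Hypothetical 2-D minority class of 5 samples, oversampled with 20 extras.
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
extra = smote(pts, 20, k=3, rng=0)
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the minority data already occupies.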

Nasal cytology with deep learning techniques. International Journal of Medical Informatics, Volume 122, 2019, Pages 13-19, ISSN 1386-5056, https://doi.org/10.1016/j.ijmedinf.2018.11.010
Giovanni Dimauro, Giorgio Ciprandi, Francesca Deperte, Francesco Girardi, Enrico Ladisa, Sergio Latrofa, Matteo Gelardi

Abstract: In recent years, cytological observations in the field of rhinology have been increasingly utilized. This development has taken place over the last two decades and has proven fundamental in defining new nosological entities and in driving changes to the previous classification of rhinitis. The simplicity of the technique and its low invasiveness make nasal cytology a practical diagnostic tool for all rhino-allergology services. Furthermore, since it allows the monitoring of responses to treatment, this method plays an important role in guiding a more effective and less expensive diagnostic program. Microscopic observation requires prolonged effort by a specialist, but modern scanning systems for cytological preparations and the new affordable digital microscopes allow the design of a software support system, based on deep learning techniques, to relieve the specialist of this tiring activity. By means of the system presented in this paper, it is possible to automatically identify and classify the cells present in a nasal cytological preparation based on a digital image of the preparation itself. Thus, an interesting diagnostic support has been made available to the rhino-cytologist, who can quickly verify that the cells have been correctly classified by the software system: the few unclassified or incorrectly classified cells can be quickly sorted by the specialist, and one or more diagnoses can then be suggested by the system, also taking into consideration the anamnesis of each patient. The final diagnosis can be defined by the specialist, also based on the result of the prick test and the observation of the nasal cavity. In the system presented herein, image processing and image segmentation techniques have been used to find images of cellular elements within the preparation. Cell classification is based on a convolutional neural network composed of three blocks of main layers.
Cell identification (first step, image segmentation) exhibits sensitivity greater than 97%, while cell classification (second step, seven cytotypes) attained a mean accuracy of approximately 99% on the test set and 94% on the validation set. This complete system supports clinicians in the preparation of a rhino-cytogram report.
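To illustrate the structure of such a classifier, here is a toy numpy forward pass of a single conv, ReLU, and global-pooling block followed by a softmax over seven classes; the architecture and weights are illustrative stand-ins, not the paper's network:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(img, kernels, weights):
    """One conv -> ReLU -> global-average-pool block, then a dense
    softmax classifier (a toy stand-in for the paper's three blocks)."""
    feats = np.array([np.maximum(conv2d(img, k), 0).mean() for k in kernels])
    return softmax(weights @ feats)

rng = np.random.default_rng(0)
img = rng.random((12, 12))             # fake grey-level cell crop
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((7, 4))  # 7 cytotypes, as in the paper
probs = forward(img, kernels, weights)
```

The real system stacks three such blocks with learned kernels and operates on segmented cell images; the sketch only shows how convolutional features feed a per-cytotype probability distribution.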