
Detecting dressing failures using temporal–relational visual grammars

Osmani, Venet; Mayora, Oscar
2019-01-01

Abstract

Evaluation of dressing activities is essential in the assessment of the performance of patients with psycho-motor impairments. However, the current practice of monitoring dressing activity, performed by the patient in front of the therapist, has a number of disadvantages, given the personal nature of dressing and the inconsistencies between the recorded performance of the activity and the performance of the same activity carried out in the patient's natural environment, such as their home. As such, a system that can evaluate dressing activities automatically and objectively would alleviate some of these issues. However, a number of challenges arise, including difficulties in correctly identifying garments, their position on the body (partially or fully worn) and their position in relation to other garments. To address these challenges, we have developed a novel method based on visual grammars to automatically detect dressing failures and explain the type of failure. Our method is based on the analysis of image sequences of dressing activities and only requires the availability of a video recording device. The analysis relies on a novel technique, which we call temporal–relational visual grammar, that can reliably recognize temporal dressing failures while also detecting spatial and relational failures. Our method achieves 91% precision in detecting dressing failures performed by 11 subjects. We explain these results and discuss the challenges encountered during this work.
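
The abstract distinguishes three classes of dressing failure: temporal (garments worn in the wrong order), spatial (a garment only partially worn) and relational (a garment in the wrong position relative to other garments). As a purely illustrative aid, the sketch below checks a sequence of per-frame garment observations against hand-written temporal and relational rules. The Observation format, the rule sets and every name used here are assumptions made for this example only; they are not the temporal–relational visual grammar or the detector described in the paper.

```python
"""Minimal illustrative sketch: rule-based checks over per-frame garment
observations. All names, rules and the observation format are assumptions,
not the method described in the paper."""

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Observation:
    """One per-frame observation from a hypothetical garment detector."""
    frame: int
    garment: str               # e.g. "undershirt", "shirt", "jacket"
    state: str                 # "absent", "partially_worn", "fully_worn"
    layer: Optional[int] = None  # estimated layer on the body, innermost = 0


# Assumed temporal rules: a garment may only be fully worn after its
# prerequisites are fully worn (e.g. shirt before jacket).
TEMPORAL_RULES = {"shirt": ["undershirt"], "jacket": ["shirt"]}

# Assumed relational rules: the inner garment must end up on a lower layer.
RELATIONAL_RULES = [("undershirt", "shirt"), ("shirt", "jacket")]


def detect_failures(seq: List[Observation]) -> List[str]:
    failures = []
    fully_worn_at = {}  # garment -> first frame where it is fully worn
    last_layer = {}     # garment -> last observed layer index

    for obs in seq:
        if obs.state == "fully_worn" and obs.garment not in fully_worn_at:
            fully_worn_at[obs.garment] = obs.frame
            # Temporal check: all prerequisite garments must already be worn.
            for prereq in TEMPORAL_RULES.get(obs.garment, []):
                if prereq not in fully_worn_at:
                    failures.append(
                        f"temporal failure: {obs.garment} worn before "
                        f"{prereq} (frame {obs.frame})")
        if obs.layer is not None:
            last_layer[obs.garment] = obs.layer

    # Relational check: inner garment must sit below the outer one.
    for inner, outer in RELATIONAL_RULES:
        if inner in last_layer and outer in last_layer:
            if last_layer[inner] >= last_layer[outer]:
                failures.append(f"relational failure: {inner} not under {outer}")

    # Spatial check: garments that appear but never become fully worn.
    seen = {obs.garment for obs in seq if obs.state != "absent"}
    for garment in seen:
        if garment not in fully_worn_at:
            failures.append(f"spatial failure: {garment} only partially worn")

    return failures
```

In this toy setting, calling detect_failures on a sequence where the jacket reaches the fully worn state before the shirt yields a temporal failure message, while a shirt detected on an outer layer than the jacket yields a relational one.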
Files in this record:

File: Detecting_Dressing_Failures_using_Visual_Grammars.pdf
Access: open access
Type: Post-print document
Licence: DRM not defined
Size: 5.08 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/316138