Doctoral thesis defence in Electronics with Cristian Vilar

Thursday 9 December 2021, 09:00–13:00
Sundsvall
C310 and online via Zoom/YouTube

On 9 December, Cristian Vilar presents his doctoral thesis "Semi-Autonomous Navigation of Power Wheelchairs, 2D/3D Sensing and Positioning Methods". Welcome to attend on site or online. The seminar is given in English.

You can watch Cristian's presentation live via our YouTube channel on 9 December between 09:00 and 10:00. If you want to attend the whole seminar, you must register via the link below.

Main supervisor: Professor Mattias O’Nils

Opponent: Dr. Nicolas Ragot, CESI Ecole d’Ingénieurs, Rouen

Examining committee:

Professor Amund Skavhaug, Norges teknisk-naturvitenskapelige universitet

Professor George Nikolakopoulos, Luleå tekniska universitet

Professor Tingting Zhang, Mittuniversitetet

Registration for the thesis defence

Abstract

Autonomous driving and driver-assistance systems have become a reality in the automotive industry, improving driving safety. Cars use a variety of sensors, cameras and image processing techniques to measure their surroundings and to control steering, braking and speed for obstacle avoidance and autonomous driving.

Like cars, powered wheelchairs require safety systems to ensure safe operation, especially when the user has limited ability to control the chair, and such systems also enable new applications that improve usability. One such application is contactless control of a powered wheelchair, using the position of a caregiver beside it as a control reference. Contactless control can prevent control errors, and it can also provide better and more equal communication between the wheelchair user and the caregiver.

This thesis evaluates the camera requirements for contactless powered wheelchair control and the 2D/3D image processing techniques needed to recognise the caregiver and measure his or her position beside the powered wheelchair.

The research first evaluates the strengths and limitations of different depth camera technologies for detecting the caregiver's feet above the ground plane, in order to select a suitable camera for the application. A hand-crafted 3D object descriptor is then evaluated for caregiver feet recognition and compared with a state-of-the-art deep learning object detector. Both methods perform well, but the hand-crafted descriptor suffers from segmentation errors and is therefore less accurate. After evaluating the depth cameras and image processing techniques, the results show that an RGB camera alone is sufficient to recognise the caregiver and measure his or her relative position.
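To illustrate the kind of depth processing described above, the sketch below removes the dominant ground plane from a depth-camera point cloud and clusters the remaining points into candidate objects such as feet. It is a minimal sketch only, assuming the Open3D library and a hypothetical input file frame.pcd; it is not the pipeline or the descriptor used in the thesis.

import numpy as np
import open3d as o3d

# Load a point cloud captured by the depth camera (file name is an assumption).
pcd = o3d.io.read_point_cloud("frame.pcd")

# Fit the dominant plane (the floor) with RANSAC; points within 2 cm are treated as ground.
plane_model, ground_idx = pcd.segment_plane(distance_threshold=0.02,
                                            ransac_n=3,
                                            num_iterations=1000)

# Keep only the points above the ground plane.
above_ground = pcd.select_by_index(ground_idx, invert=True)

# Cluster the remaining points; compact clusters close to the floor are foot candidates.
labels = np.array(above_ground.cluster_dbscan(eps=0.05, min_points=50))
for label in set(labels) - {-1}:  # -1 marks noise points
    cluster = above_ground.select_by_index(np.where(labels == label)[0])
    centroid = np.asarray(cluster.points).mean(axis=0)
    print(f"candidate object {label}: centroid at {centroid}")

The printed centroids give each candidate's position relative to the camera, which is the kind of relative position measurement the abstract refers to.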

Link to the thesis in DIVA

