Habib Hamam; Daniel LeBlanc; Yacine Ben Ahmed; Sid-Ahmed Selouani; Yassine Bouslimani
Description:
We propose a cost-effective human-machine interface that interprets the user's hand or head gestures. The system targets people with reduced mobility and can be used in computer- and Web-based learning. The design lets the user perform basic mouse operations, such as moving the cursor on a computer screen, with simple head movements. To improve usability, the system adds voice recognition for repetitive operations, such as clicks and double-clicks. We describe two infrared sensor layouts and discuss how the design detects and processes a user's head or hand movements. We also examine the benefits and drawbacks of using voice recognition in this design. The conclusion presents our results and future perspectives for the system.
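To make the interaction model concrete, the sketch below illustrates one way such an interface could map directional infrared sensor readings to cursor displacement and recognized voice commands to mouse operations. The sensor arrangement, threshold, step size, and command vocabulary are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of the gesture/voice interface described in the
# abstract. Four directional IR sensors and the command names are
# assumptions for illustration only.

def cursor_delta(left, right, up, down, threshold=0.5, step=5):
    """Map four directional IR sensor intensities (0.0-1.0) to a cursor
    displacement (dx, dy). A sensor contributes only when its reading
    exceeds `threshold`; opposing sensors cancel each other out."""
    dx = step * ((right > threshold) - (left > threshold))
    dy = step * ((down > threshold) - (up > threshold))
    return dx, dy

def voice_action(command):
    """Map a recognized voice command to a repetitive mouse operation,
    or None if the command is not in the (assumed) vocabulary."""
    actions = {
        "click": "single_click",
        "double": "double_click",
        "drag": "drag",
    }
    return actions.get(command.lower())
```

For example, a strong reading on the left sensor alone, `cursor_delta(0.9, 0.1, 0.2, 0.1)`, yields `(-5, 0)`, moving the cursor left, while `voice_action("Click")` returns `"single_click"`.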