A team of professors and students at Quinnipiac University has developed innovative software that uses artificial intelligence (AI) to help individuals with limited mobility communicate more effectively. The software, named AccessiMove, allows users to control devices entirely through facial gestures, offering a new level of independence.
The initiative was inspired by a personal encounter. At an occupational therapy conference in 2022, Chetan Jaiswal, an associate professor of computer science at Quinnipiac, met a young man in a wheelchair who struggled to communicate with his parents. The experience motivated Jaiswal to create technology that could genuinely assist those in need. “Technology should help people who actually need it,” he stated, underscoring the value of tools that offer practical support.
Jaiswal partnered with colleagues Karen Majeski, an associate professor of occupational therapy, and Brian O’Neill, also an associate professor of computer science. Together with students Michael Ruocco and Jack Duggan, they worked to create the university’s first patented hands-free input system. The software uses a standard webcam to detect head movements, winks, and other facial gestures, enabling users to interact with computers and other devices.
The technology operates through head-tilt detection and facial landmark tracking. Users issue commands by tilting their heads in various directions or blinking to simulate mouse clicks. Jaiswal explained that these simple gestures can accomplish tasks such as opening applications or navigating the internet. “This benefits a lot of people, especially those with disabilities and motor impairment,” he noted.
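The team has not published its implementation, but the general approach Jaiswal describes, landmark tracking plus blink-to-click, can be sketched with off-the-shelf tools. The snippet below is a minimal illustration only, assuming Google's MediaPipe Face Mesh for landmarks and pyautogui for synthetic mouse events; the landmark indices and threshold are assumptions, and AccessiMove itself may work quite differently.

```python
# Hypothetical sketch: blink-to-click from webcam facial landmarks.
# Assumes MediaPipe Face Mesh and pyautogui; the real AccessiMove
# implementation is not public and may differ entirely.
import cv2
import mediapipe as mp
import pyautogui

BLINK_THRESHOLD = 0.20  # eye-openness ratio below this counts as closed (assumed)

def eye_openness(lm):
    """Eyelid gap relative to eye width for one eye.

    Indices follow MediaPipe Face Mesh numbering: 159/145 are the
    upper/lower eyelid, 33/133 the eye corners.
    """
    vertical = abs(lm[159].y - lm[145].y)
    horizontal = abs(lm[33].x - lm[133].x)
    return vertical / horizontal

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)  # any built-in webcam; no special hardware
was_closed = False
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_face_landmarks:
        lm = result.multi_face_landmarks[0].landmark
        closed = eye_openness(lm) < BLINK_THRESHOLD
        if closed and not was_closed:
            pyautogui.click()  # a blink fires a left mouse click
        was_closed = closed
cap.release()
```

A production system would also need to distinguish deliberate blinks from natural ones, for example by requiring the eye to stay closed for several consecutive frames before firing a click.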
The team is actively seeking partnerships and investors to further develop AccessiMove, particularly in healthcare settings. Jaiswal highlighted the value of local partnerships, mentioning institutions such as Yale New Haven Hospital and Hartford Hospital. “We are looking for partners who want to make a difference in the world for people who need it,” he added, emphasizing the importance of collaboration in enhancing patient care.
Majeski elaborated on the software’s functionality, describing how facial gestures serve as inputs similar to a computer mouse. O’Neill clarified that the system focuses on the bridge of the user’s nose, allowing precise directional commands. For example, tilting the head to the left or right can trigger specific actions on the computer. This adaptability extends beyond personal computers; the technology can also be integrated into wheelchairs, enabling users to navigate their environment using facial movements.
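O’Neill’s description suggests a simple mapping from the nose bridge’s displacement to directional commands. The function below is an illustrative sketch, not AccessiMove’s code: the landmark index (168, near the nose bridge in MediaPipe’s numbering), the dead-zone size, and the command names are all assumptions.

```python
# Hypothetical sketch: map nose-bridge displacement to a command.
# Landmark index 168 (near the nose bridge in MediaPipe's numbering),
# the dead-zone size, and the command names are all assumptions.
DEAD_ZONE = 0.04  # normalized image units; small sway is ignored

def direction(lm, center_x, center_y):
    """Offset of the nose bridge from a calibrated rest position."""
    dx = lm[168].x - center_x  # positive: head tilted toward the right
    dy = lm[168].y - center_y  # positive: head tilted downward
    if abs(dx) < DEAD_ZONE and abs(dy) < DEAD_ZONE:
        return None            # inside the dead zone: no command
    if abs(dx) > abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"
```

The dead zone matters for usability: without it, ordinary head sway would fire commands constantly.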
The potential applications are vast, with the technology being suitable for long-term care centers, rehabilitation facilities, and remote learning environments. Jaiswal pointed out that the software is not confined to traditional computing; it can assist individuals in moving their wheelchairs. “If you look up, the chair goes forward; if you look down, the chair goes backward,” he explained, highlighting the ease of use for individuals with disabilities.
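In control terms, the scheme Jaiswal describes maps head pitch onto drive commands. The sketch below is purely hypothetical: the serial port, baud rate, and command bytes are invented for illustration (no real wheelchair protocol is implied), with pyserial standing in for whatever link an actual controller would use.

```python
# Purely hypothetical: translate head pitch into drive commands over a
# serial link. The device path, baud rate, and command bytes are invented
# for illustration; no real wheelchair protocol is implied.
import serial

controller = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)

LOOK_UP = -0.05   # nose-bridge vertical offsets, normalized (assumed)
LOOK_DOWN = 0.05

def drive(dy):
    """Looking up drives forward; looking down drives backward."""
    if dy < LOOK_UP:
        controller.write(b"FWD\n")
    elif dy > LOOK_DOWN:
        controller.write(b"REV\n")
    else:
        controller.write(b"STOP\n")
```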
AI plays a crucial role in the system’s functionality, as it tracks facial gestures in real time. This feature enhances accessibility in various fields, including gaming, where it enables inclusive gameplay for individuals who may have difficulty using conventional controllers. “We are talking about gaming literacy for learning,” Majeski said, underscoring the educational potential of the software for children with mobility challenges.
Trials conducted with students Ruocco and Duggan confirmed the software’s effectiveness, even for users who wear glasses or have limited neck mobility. The system can be calibrated to accommodate an individual’s range of motion, making it versatile for diverse user needs.
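Such calibration can be as simple as recording the extremes of a user’s comfortable movement and scaling trigger thresholds to that personal range. A minimal sketch, with all values assumed:

```python
# Hypothetical calibration: record the extremes of a user's comfortable
# head movement, then derive a per-user trigger threshold, so limited
# neck mobility still reaches every command.
def calibrate(samples):
    """samples: (x, y) nose-bridge positions captured while the user
    sweeps their head as far as is comfortable in each direction."""
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    center = (sum(xs) / len(xs), sum(ys) / len(ys))
    # Trigger at half the smaller observed half-range, so even a small
    # comfortable range of motion spans the full command set.
    half_range = min(max(xs) - min(xs), max(ys) - min(ys)) / 2
    return center, 0.5 * half_range
```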
Despite its promising capabilities, the team acknowledges the financial challenges ahead. “There is money behind bringing it to the market, but we still need funding to make it an open-source option for people with disabilities,” Majeski said, emphasizing the importance of securing adequate resources.
O’Neill highlighted the simplicity of AccessiMove, noting that it does not require specialized hardware. “It is using the webcam built into any tablet or any phone,” he stated, reinforcing the software’s accessibility.
Jaiswal envisions a future where this technology becomes a regular tool for those who need it, as well as for individuals seeking convenience. “The technology is useful in hospital settings,” he said. “Patients can use facial gestures to communicate, especially for those who can’t speak.”
As the team continues to refine AccessiMove, they remain focused on expanding its reach and impact, aiming to improve the quality of life for many individuals facing communication challenges.
