When Google revealed Project Gameface, it was delighted to show off a hands-free, AI-powered gaming "mouse" that, according to the announcement, "enables people to control a computer's cursor using their head movement and facial gestures." While not the first AI-based gaming tool, it was one of the first to put AI in the hands of gamers rather than developers.
The concept was inspired by Lance Carr, a quadriplegic video game streamer who uses a head-tracking mouse as part of his gaming setup. After Carr lost his previous hardware in a fire, Google stepped in to offer an open source, highly adjustable, low-cost alternative, powered by machine learning, to costly replacement hardware. While the wider existence of AI is proving polarizing, we set out to see whether, when used for good, AI could be the future of game accessibility.
To understand how they work in Gameface, it is important first to define AI and machine learning. The two terms are often used interchangeably, but they are not quite the same thing.
“AI is a concept,” says Laurence Moroney, Google’s AI advocacy director and one of the geniuses behind Gameface. “Machine learning is a technique for putting that concept into action.”
Machine learning, along with implementations such as large language models, falls under the umbrella of AI. Whereas familiar systems such as OpenAI's ChatGPT and Stability AI's Stable Diffusion generate output from prompts, machine learning more broadly is distinguished by learning and adapting without explicit instruction, drawing conclusions from observable patterns.
Moroney describes how this is used in a succession of machine learning models in Gameface. “The first was to detect where a face is in an image,” he explains. “The second was to be able to understand where obvious points (eyes, nose, ears, etc.) are once you had an image of a face.”
A third model then maps and interprets gestures from those points and assigns them to mouse inputs.
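The pipeline Moroney describes can be sketched in code. The following is an illustrative example, not Gameface's actual implementation: the `Landmarks` structure, field names, and thresholds are all hypothetical stand-ins for the output of a real face-landmark model (such as MediaPipe Face Mesh, which Gameface builds on), showing only the final stage of turning landmark points into cursor movement and click gestures.

```python
# Illustrative sketch of stage three of the pipeline: turning facial
# landmarks (already detected by upstream models) into mouse inputs.
# Landmark values here are hypothetical, in normalized 0.0-1.0 coordinates.

from dataclasses import dataclass

@dataclass
class Landmarks:
    """Key facial points extracted by a face-landmark model (assumed)."""
    nose: tuple        # (x, y) of the nose tip - drives the cursor
    mouth_open: float  # 0.0 (closed) to 1.0 (wide open) - a gesture signal

def cursor_delta(prev: Landmarks, curr: Landmarks, sensitivity: float = 500.0):
    """Turn head movement (nose displacement between frames) into cursor motion."""
    dx = (curr.nose[0] - prev.nose[0]) * sensitivity
    dy = (curr.nose[1] - prev.nose[1]) * sensitivity
    return dx, dy

def gesture_to_click(curr: Landmarks, threshold: float = 0.5) -> bool:
    """Map a facial gesture (here, an open mouth) to a mouse click."""
    return curr.mouth_open > threshold

# Example frame pair: the head moved slightly right and the mouth opened.
prev = Landmarks(nose=(0.50, 0.50), mouth_open=0.1)
curr = Landmarks(nose=(0.52, 0.50), mouth_open=0.7)
print(cursor_delta(prev, curr))  # roughly (10.0, 0.0)
print(gesture_to_click(curr))    # True - register a click
```

In the real tool, the sensitivity and gesture thresholds are exactly the kind of "highly adjustable" settings Google emphasizes, since different users have very different ranges of motion.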
It is a deliberately assistive AI implementation, as opposed to others that are frequently claimed as rendering human input obsolete. Indeed, Moroney believes that AI should be used to widen “our capacity to do things that were not previously feasible.”
This feeling goes beyond Gameface’s ability to make gaming more accessible. Moroney believes that AI will have a significant impact not only on player accessibility, but also on how developers construct accessibility solutions.
“Anything that allows developers to be orders of magnitude more effective at solving previously intractable classes of problems,” he argues, “can only be beneficial in the accessibility, or any other, space.”
This is something that developers are starting to realize. Perelesoq's creative director, Artem Koblov, wants to see "more resources directed toward solving routine tasks rather than creative invention."
In other words, AI can assist with time-consuming technical operations. With the right applications, AI could enable a leaner, more forgiving development cycle that handles the mechanical implementation of accessibility solutions while giving developers more time to refine them.
“As a developer, you want as many tools as possible to help you make your job easier,” explains Soft Leaf Studios creative director Conor Bradley. He highlights advancements in existing AI implementations in accessibility, such as “real-time text-to-speech and speech-to-text generation, as well as speech and image recognition.” And he sees opportunities for future growth. “Over time, I expect to see more and more games utilizing these powerful AI tools to make our games more accessible.”