Our faces can unlock a smartphone, provide entry to a secure facility, and expedite passport checks at airports, all while validating our identity for a variety of purposes.
A multinational team of researchers from Australia, New Zealand, and India has advanced face recognition technology by using a person’s expressions to operate objects in virtual reality without the need for a handheld controller or touchpad.
In a world-first study led by University of Queensland researcher Dr Arindam Dey, human-computer interaction specialists used neural processing techniques to capture a person’s smile, frown, and clenched jaw, and used each expression to trigger specific actions in virtual reality environments. “The main goal for our effort was to make the metaverse more accessible and inclusive,” Dr Dey explains. “At the same time, facial expressions may be employed to allow activities such as kissing and blowing air inside virtual worlds in a more realistic manner than is now possible.”
Professor Mark Billinghurst of the University of South Australia, one of the researchers participating in the project, said the system is meant to recognize distinct facial emotions using an EEG headset.
“Instead of a handheld controller conducting these activities, a grin was utilized to activate the ‘move’ command, a frown for the ‘stop’ signal, and a clench for the ‘action’ command,” Prof Billinghurst explains.
“Essentially, we are collecting and integrating typical facial emotions such as rage, pleasure, and surprise in a virtual reality environment.”
The researchers created three virtual settings – cheerful, neutral, and frightening – and assessed each person’s cognitive and physiological condition while engaged in each scenario.
They investigated whether changes in the environment caused one of the three expressions, based on emotional and physiological reactions, by duplicating three universal facial expressions: a grin, a frown, and a clench.
In the cheerful environment, for example, users were tasked with crossing a park with a net to collect butterflies. When the user smiled, they moved, and when they frowned, they halted.
Participants in the neutral environment were tasked with traversing a workshop to collect items scattered around. The clenched jaw triggered an action (in this case, picking up each item), while the start and stop movement commands were again issued with a smile and a frown.
The same emotions were used in the terrifying scenario, in which players explored an underground base to shoot zombies.
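The control scheme carried across all three scenarios can be summarized as a simple mapping from a classified expression to a VR command. A minimal sketch of that mapping is below; the names (`Expression`, `dispatch`) and the string commands are illustrative assumptions, not part of the researchers’ actual system, which classifies expressions from EEG signals.

```python
from enum import Enum, auto

class Expression(Enum):
    """Facial expressions the study's EEG headset was trained to detect."""
    SMILE = auto()
    FROWN = auto()
    CLENCH = auto()

# Mapping described in the article: smile -> move, frown -> stop,
# clench -> a context-dependent action (e.g. picking up an item
# in the workshop, or shooting in the zombie scenario).
COMMANDS = {
    Expression.SMILE: "move",
    Expression.FROWN: "stop",
    Expression.CLENCH: "action",
}

def dispatch(expression: Expression) -> str:
    """Translate a detected facial expression into a VR command."""
    return COMMANDS[expression]
```

In each environment only the meaning of the “action” command changes, while the smile/frown movement controls stay fixed, which is why the same three expressions could be reused across all three scenarios.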
“Overall, we anticipated handheld controllers to work better than facial expressions since they are a more intuitive method, yet individuals reported feeling more engrossed in the VR experiences controlled by facial expressions.”
According to Prof Billinghurst, relying on facial expressions in a VR setting is cognitively demanding but gives users a more authentic experience.
“Hopefully, with additional research, we’ll be able to make it more user-friendly,” he adds.
In addition to offering a unique method to engage with VR, the technology will enable individuals with impairments, such as amputees and those with motor neurone disease, to interact hands-free in VR, eliminating the need for devices developed for fully abled persons.
According to the researchers, the technology could also be used to supplement handheld controllers in cases where facial expressions are a more natural form of interaction.
The outcomes of the research were published in the International Journal of Human-Computer Studies.