MIT Researchers Have Developed A Wearable That Predicts The Mood Of Its Wearer

Posted on Jun 3 2017 - 9:21pm by Daniel Fisher

Voice-based artificial intelligence assistants understand human speech better than ever. But while artificial intelligence can decipher our words and context to deliver fast results and actions, it does not understand tone or feelings. Researchers from MIT set out to change that by creating a wearable that can detect the tone of a conversation. In the future, this tool might help people with anxiety or Asperger’s syndrome better handle stressful situations.

A paper published Wednesday by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory describes how the wearable AI analyzes audio and physiological data to detect the tone of a conversation and determine the mood of the speaker. That information can then be used to better understand social situations.
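As a rough illustration of that pipeline, here is a minimal sketch of how audio and physiological features might be combined to train a mood classifier. The feature names, the toy data, and the choice of scikit-learn’s RandomForestClassifier are all assumptions for illustration; the paper’s actual features and model may differ.

```python
# Illustrative sketch only: combine audio and physiological features to
# train a binary mood classifier. Feature names and data are made up;
# the paper's actual features and model may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: one row per conversation segment.
# Columns: pitch_mean, energy, speaking_rate, heart_rate, skin_conductance
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)  # 0 = negative mood, 1 = positive mood

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"toy accuracy: {clf.score(X_test, y_test):.2f}")
```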

mood-predicting wearable

The mood-predicting wearable analyzes a person’s speech patterns and physiological signals to determine the tones and moods expressed in a conversation with 83 percent accuracy. The system records a ‘sentiment score’ every five seconds during a conversation.
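For a sense of what that cadence looks like in practice, here is a minimal sketch of slicing an audio stream into five-second windows for scoring. The sample rate and helper names are assumptions; only the five-second interval comes from the article.

```python
# Minimal sketch: slice an audio stream into the 5-second windows that
# each receive a sentiment score. The sample rate is an assumed value.
SAMPLE_RATE = 16_000      # assumed samples per second
WINDOW_SECONDS = 5        # scoring cadence described in the article

def five_second_windows(samples, rate=SAMPLE_RATE, seconds=WINDOW_SECONDS):
    """Yield consecutive non-overlapping 5-second chunks of audio samples."""
    step = rate * seconds
    for start in range(0, len(samples) - step + 1, step):
        yield samples[start:start + step]

# Toy one-minute recording of silence, just to exercise the generator.
audio = [0.0] * (SAMPLE_RATE * 60)
print(sum(1 for _ in five_second_windows(audio)))  # -> 12 windows
```

Each window would then be passed, together with the matching physiological readings, to a classifier like the one sketched above.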

The MIT researchers used an Apple iPhone 5S in their tests to record the audio portion of the conversations, while each test subject wore a Samsung Simband, the company’s developer-only device platform, which runs Tizen and has room for multiple additional sensors. It isn’t the most elegant of implementations, but the pair built the system with an eye toward incorporating it into a single wearable device with no external help.

However, the system is not yet ready to be deployed for social uses. It currently labels interactions only as positive or negative, but the goal is for the artificial intelligence to determine feelings more precisely than that and identify tense, boring, or exciting moments.
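If the binary labels were extended as the researchers intend, the change might look something like this hypothetical sketch, where only the mood names come from the article and everything else is illustrative.

```python
# Hypothetical extension from positive/negative labels to the finer-grained
# moods mentioned in the article; data and model are again illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LABELS = ["tense", "boring", "exciting"]  # mood names from the article

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))               # same toy features as before
y = rng.integers(0, len(LABELS), size=300)  # one label index per segment

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(LABELS[clf.predict(rng.normal(size=(1, 5)))[0]])
```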

Interestingly, the researchers say all the computing required to analyze the feeling and tone of a conversation is done locally, on the device, to protect privacy. However, a consumer version would need clear protocols for obtaining the consent of the other people involved in a conversation, not just the wearer.