
Camouflage techniques to keep voice assistants away from your emotions

Users of voice-controlled services should be aware that companies such as Apple, Amazon, and Google collect their voice recordings and, without permission, grant third-party contractors access to them. Would you have expected that advances in AI would make it possible to read emotions from your voice? That capability lets each user be matched with suitable ads.


Voice assistants collect user data

The data gathered by voice assistants is extensive enough for AI systems to build precise profiles and predictions based on age, gender, stress level, confidence, physical condition, and personality.

Moreover, a user's feelings can be used to suggest a particular product or restaurant. The fact that AI can read emotions from the tone of a speaker's voice raises real concerns.


Imperial College London's solution

As interest in voice assistants grows, so do the concerns, and understandably so: users worry that large corporations could abuse their voice recordings and violate their privacy. Is it even achievable, in the modern world, to preserve privacy while enjoying technological conveniences?

A group of researchers has proposed separating emotion from the voice by inserting an intermediate layer between the user and the service.

The method, published in the paper Emotionless: Privacy-Preserving Speech Analysis for Voice Assistants, serves as a "shield" for the emotional part of the voice, denying service providers the ability to monitor users' emotions through their speech.


In short, the solution is to purge the recording of emotion before the AI system processes it, and thus provide a dose of privacy for users.
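To make the idea concrete, here is a minimal sketch of such an intermediate layer. The researchers' actual method is a learned model; as a much simpler stand-in, this example flattens the pitch contour (a major carrier of vocal emotion) with the WORLD vocoder before the audio leaves the device. The function name and file names are placeholders.

```python
# A toy "emotion-stripping" preprocessor: flatten the pitch contour
# with the WORLD vocoder before audio is sent to a cloud service.
# NOTE: the published method uses a learned model; this is only a
# simplified illustration of the same idea.
# Requires: pip install pyworld soundfile numpy
import numpy as np
import pyworld as pw
import soundfile as sf

def strip_prosody(in_path: str, out_path: str) -> None:
    x, fs = sf.read(in_path)
    if x.ndim > 1:                       # mix stereo down to mono
        x = x.mean(axis=1)
    x = np.ascontiguousarray(x, dtype=np.float64)

    f0, t = pw.dio(x, fs)                # rough pitch estimate
    f0 = pw.stonemask(x, f0, t, fs)      # refined pitch
    sp = pw.cheaptrick(x, f0, t, fs)     # spectral envelope (keeps the words)
    ap = pw.d4c(x, f0, t, fs)            # aperiodicity

    voiced = f0 > 0
    # Replace the expressive pitch contour with a flat, monotone one.
    flat_f0 = np.where(voiced, np.median(f0[voiced]), 0.0)

    y = pw.synthesize(flat_f0, sp, ap, fs)  # re-synthesize "emotionless" speech
    sf.write(out_path, y, fs)

strip_prosody("command.wav", "command_flattened.wav")
```

The design point is that the words (carried by the spectral envelope) survive intact, while the intonation that betrays mood is normalized away before any provider ever hears the recording.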

In tests, the method cut emotion recognition accuracy on the processed speech files by 96%. However, speech recognition accuracy, measured by average word error rate, also decreased by 35 percent. The Imperial College London researchers maintain that the results will improve if they extend the training.
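For context, word error rate is the standard yardstick here: the number of word substitutions, deletions, and insertions in the recognizer's transcript, divided by the number of words in the reference. A quick illustration with the jiwer library (the sentences are invented for the example):

```python
# Word error rate = (substitutions + deletions + insertions) / reference words.
# Requires: pip install jiwer. Example sentences are invented.
import jiwer

reference  = "turn on the kitchen lights"
hypothesis = "turn on the chicken lights"   # one substitution out of five words

print(jiwer.wer(reference, hypothesis))     # 0.2, i.e. a 20% word error rate
```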

The initiative pioneered by these researchers therefore opens an interesting path to preserving privacy without endangering the promising potential of human-machine interaction.

For this reason, we strongly hope the project succeeds.

DB