In the near future, virtual assistants may be able to make assumptions about their users’ mood and react accordingly. This is a possible application of AI that opens up new opportunities and, at the same time, raises ethical and legal controversies.
It seems ages since we first asked questions of Siri in front of our friends to show off, in awe, its ability to answer them correctly. Indeed, AI and machine learning have progressed dramatically over the past few years. Nowadays, we communicate with Alexa without even noticing, and perhaps without realizing how much her ability to provide correct answers has actually improved. What would happen, though, if in addition to understanding our words she could also infer, from the nuances and pitch of our voice, what our mood is?
The risks of Alexa getting too smart
Joseph Turow, a professor at the Annenberg School for Communication at the University of Pennsylvania, has warned us against the possible risks of such an evolution. In his view, the law should already prohibit companies from analyzing what we say and how we say it for the purpose of customizing advertising or recommending specific products or services. Such an approach is not yet widespread, but it may soon find practical application and could easily become common on a large scale: better, said Turow, to set the record straight from the beginning and set limits before it is too late. According to the scholar, without boundaries such technology may expand beyond marketing and invade other sectors of our lives, jeopardizing not only our privacy but also ethics and the very concept of justice: police may decide whom to arrest, and banks whom to give a mortgage to, on the basis of an interpreted mood or way of being (Should Alexa read our moods?).
The upside: possible applications in healthcare
It has to be said, however, that research into AI has also led to important results beyond marketing, for instance in medicine. Voice profiling makes it possible to identify some serious illnesses, from Alzheimer’s to COVID-19. Already in 2008, the current head of the Amazon Alexa division, Rohit Prasad, ran a DARPA programme that aimed to use AI to assess veterans’ mental conditions from their voices. The goal in that case was to identify early symptoms of mental distress so as to reduce the number of suicides among veterans returning from the Iraq war (Amazon’s Alexa may soon know if you’re happy or sad).
Between present and future
Going back to Turow, the professor clarified that the law should stop the commercial use of these technologies. Some call centers, for instance, are already doing something similar: if the virtual assistant decides that the person on the other end of the line is tense or angry, it can redirect them to staff who may succeed in calming them down. Spotify has generated a huge debate by patenting a technology that allows the system to recommend special selections of songs or genres by monitoring and recording users’ voices, thereby identifying their age, gender and accent, in addition to their mood (New Spotify Patent Involves Monitoring Users’ Speech to Recommend Music).
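The call-center scenario above can be sketched in a few lines of code. This is a minimal, purely illustrative toy: the feature names, thresholds and queue names are assumptions invented for this example, not any vendor’s actual implementation.

```python
# Toy sketch of mood-based call routing, as described above.
# All features, thresholds and queue names are illustrative
# assumptions, not a real product's logic.

def estimate_mood(mean_pitch_hz: float, speech_rate_wps: float) -> str:
    """Guess a caller's mood from two simple voice features.

    Elevated pitch combined with fast speech is treated here as a
    sign of tension; the thresholds are made up for illustration.
    """
    if mean_pitch_hz > 220.0 and speech_rate_wps > 3.0:
        return "tense"
    return "calm"

def route_call(mean_pitch_hz: float, speech_rate_wps: float) -> str:
    """Send callers judged tense to agents trained in de-escalation."""
    if estimate_mood(mean_pitch_hz, speech_rate_wps) == "tense":
        return "deescalation-queue"
    return "standard-queue"

print(route_call(250.0, 3.5))  # high pitch, fast speech -> deescalation-queue
print(route_call(180.0, 2.0))  # relaxed voice -> standard-queue
```

Even in this caricature, the ethical issue Turow raises is visible: the routing decision rests entirely on an inferred emotional state that the caller never consented to share.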
Given this scenario, Turow’s concerns seem well grounded. We need to monitor the diverse spin-offs of a technology that may bring both risks and opportunities.
Do you think that the possible developments of AI and virtual assistants may be more advantageous or riskier for our privacy? Tweet @agostinellialdo
If you liked this post, you may also like “Luxury Stores: now Amazon is investing in on demand luxury”