Do you speak object? The future of human-machine relations

For a long time neglected in the digital world, “spoken” human-machine communication has become a hot topic with the arrival of connected objects. Although generally silent in front of our screens in the past, we have now begun speaking to smartphones, watches, cars and even our homes. Following the example of the now famous Siri, Google Now and Cortana, the aim of human-machine oral communication is to save us time and make our lives easier. So, are you ready to speak “object”? Because just a few years from now it will be the most natural thing in the world!

Welcome to the age of voice control

According to IMS Research, 55% of new vehicles in 2019 will include speech recognition to control the car environment and make it safer. Nuance is working on just such an application, in partnership with large automotive groups like BMW.

Shopping is also likely to become less of a chore in the future thanks to voice command. During user tests of Izy by Chronodrive, 70% of testers preferred voice control over the scan function for filling their shopping basket.

Praised to the skies by professionals and users alike (with a score of 4.5/5), Amazon Echo is poised to enjoy great success in the connected object market. Capable of controlling our home (light, temperature, etc.), ordering a pizza or Uber, playing music, reading information and a large number of other “skills” that are growing by the day, Amazon Echo is fast becoming the “smartphone” of the home, controlled by voice alone!

Me human, you object

Be that as it may, talking to and chatting with an object remains highly conceptual. While major breakthroughs have been made in understanding questions asked in natural language (for example with Amazon Echo and the Josh app), we’re still very much in the prehistoric era! We don’t in fact communicate with objects; we give them orders, which they carry out (if understood correctly!) according to scripts pre-written by humans, for example to indicate where the nearest petrol station is, lower the intensity of the light or tell us today’s weather forecast.
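To make this concrete, here is a minimal sketch of what such scripted command handling looks like in principle. The intents, patterns and canned responses below are entirely hypothetical illustrations, not any real assistant’s API: the point is that the machine matches a phrase against scripts a human anticipated, rather than understanding it.

```python
import re

# Each "skill" is a pre-written script: a pattern a human anticipated,
# mapped to a fixed, canned response. There is no comprehension here,
# only pattern matching. (All intents and responses are made up.)
SCRIPTS = {
    r"\b(weather|forecast)\b": "Today: sunny, 21 degrees.",
    r"\bdim\b.*\blight": "Dimming the lights to 40%.",
    r"\b(petrol|gas) station\b": "The nearest station is 1.2 km away.",
}

def handle(utterance: str) -> str:
    """Return the scripted response for the first matching intent."""
    for pattern, response in SCRIPTS.items():
        if re.search(pattern, utterance.lower()):
            return response
    # Anything outside the pre-written scripts fails, however naturally
    # it is phrased: there is no real understanding to fall back on.
    return "Sorry, I didn't understand that."

print(handle("What's the weather today?"))   # scripted answer
print(handle("Can you dim the living-room lights?"))
print(handle("Let's chat about philosophy"))  # falls through: no script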

Can we learn how to speak “object”? Not really, although you do need a good understanding of how voice command works and what it’s currently capable of in order to put it to optimum use. For this, it’s best to stick to simple, direct orders. For the moment, forget very complex requests and dialogues like those in the movie “Her”.

Establishing a dialogue

Technological advances in human-machine interaction and artificial intelligence are nevertheless occurring rapidly. One thing is sure: in the future we’ll be able to interact with machines more naturally, much as two human beings do.

According to scientific research (Jurafsky 2009, Landragin 2013), three major components of dialogue must be mastered before we can truly communicate with a machine. First, machines need to improve their comprehension skills: they must understand not only the meaning and tone of a conversation but also all languages with their singularities (accents, pronunciation, etc.). Second, they must become better at analysing a series of exchanges, to move beyond mere questions and answers. Finally, machines will have to deliver their responses in more natural, emotional language, to avoid sounding too much like robots. This is precisely what the start-up Viv, founded by the creator of Siri, is currently working on, having already caused a stir at TechCrunch Disrupt 2015 with the announcement of a voice assistant capable of holding a conversation.

More generally, machines will also have to be capable of respecting certain limits and understanding double entendres and trick questions, in order to avoid the kind of problems run into by Microsoft’s Tay, which in under 24 hours had already managed to tweet racist and misogynistic remarks.

The start-up Robin is taking the experiment further, trying to teach machines to adapt to the different personalities they are likely to encounter during their career as voice assistants.

And in the future?

Bearing in mind that 20% of Google searches on Android mobiles in the United States are performed using the Google Now function, voice command looks to have a rosy future. The technology is evolving extremely quickly, and announcements on the subject come frequently. The latest is from Google, announcing the imminent release of Google Assistant, which will be integrated into Google Home (a copy of Amazon Echo) and into its Allo messaging app. This smart assistant will be capable of holding a conversation, answering a series of questions as well as others with no direct link to what came before; in other words, it will be capable of following a rambling conversation.

At Exoskills, we believe voice assistants hold enormous potential and will keep gaining ground, because of the huge impact they could have in numerous personal and professional fields and their ability to make human-machine interaction more natural. Examples include the smart Vi earphones for sports enthusiasts, or Bagel, a smart tape measure that can be used in companies to increase productivity and simplify processes. Unfortunately, human-machine interactions are not always as simple as they appear in the stunning videos created to present these products, and it will be a few more years before they become completely fluid and ready for use by the general public. In fact, it could well be that the technology is ready for the public before the public is ready for the technology. Look no further than the smartwatch, proof positive that habits and reluctance can be hard to overcome!

All the examples provided in this article were discovered in our Digital Chillout. Subscribe now!



Simon Gomez

Digital Producer & Prospectivist at Exoskills. After working at Publicis, this technology enthusiast and blogger shares his vision of new digital practices to spot the innovations and opportunities of tomorrow.
