Some days it feels like we are ‘Living in the Matrix’


While the outrage over the use of Facebook data by Cambridge Analytica during the US election seems to have faded, stories of technology invading ever deeper corners of our lives continue to emerge on a weekly basis. Occasionally, from firsthand experience, I get the feeling we are entering a world ever more deeply intertwined with technology. My nod to the 1990s sci-fi movie ‘The Matrix’ in the title of this blog refers to a world where people think they are living normal lives but are in fact plugged into a massive computer simulation, with occasional quirks giving away the fact that what they are experiencing isn’t real.

 

(Photo source: www.pexels.com)

 

In a personal anecdote, the Google Photos app on my phone recently asked whether I would like to send a few recent photos to the friends who appeared in them. Initially it seemed like a helpful suggestion; then it dawned on me that Google has the power to recognize the individual faces in the photos, associate them with specific people and their contact info, and suggest that they might want a copy as well. What else does Google know about my personal life and my friends’ lives? Probably quite a lot. As the specific images involved cycling and outdoor activities, a database could well be building up about my habits, activity levels and personal interests, as well as who I spend time with and when. Even if Google isn’t yet compiling all this information, by identifying my friends and me and interpreting the date, time, location and other aspects of the images, it clearly could. Combined with the fact that my Nest thermostat (made by a company owned by Alphabet/Google) also knows whether I am home or not, it makes me wonder how much information about me and my personal habits is being collected. Not to sound too paranoid, but it occasionally feels like some of the helpful features technology offers overstep the bounds of privacy. This is all in the name of more accurate search results and targeted advertising, right?
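To make that inference concrete, here is a purely hypothetical Python sketch of how photo metadata and face labels could be rolled up into an activity profile. Every class, field and data point below is invented for illustration; this is not Google’s actual pipeline.

```python
# Hypothetical sketch: aggregating photo metadata plus face labels into
# a profile of when, where, with whom and doing what the owner appears.
from collections import Counter, defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Photo:
    taken_at: datetime       # from the EXIF timestamp
    location: str            # from EXIF GPS, reverse-geocoded
    activity: str            # e.g. the output of an image classifier
    faces: list              # people recognized in the photo

def build_profile(photos):
    """Tally activities, companions, places and times of day across photos."""
    profile = {
        "activities": Counter(),
        "companions": Counter(),
        "places": Counter(),
        "hours": defaultdict(int),
    }
    for p in photos:
        profile["activities"][p.activity] += 1
        profile["places"][p.location] += 1
        profile["hours"][p.taken_at.hour] += 1
        for person in p.faces:
            profile["companions"][person] += 1
    return profile

# Invented sample data
photos = [
    Photo(datetime(2018, 6, 2, 9, 30), "North Shore", "cycling", ["Alex", "Sam"]),
    Photo(datetime(2018, 6, 9, 10, 0), "Stanley Park", "cycling", ["Alex"]),
    Photo(datetime(2018, 6, 16, 8, 45), "Grouse Mountain", "hiking", ["Sam"]),
]
profile = build_profile(photos)
print(profile["activities"].most_common(1))   # [('cycling', 2)]
print(profile["companions"].most_common(1))   # [('Alex', 2)]
```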

 

In defense of the tech giants, after logging into Facebook I was recently prompted with the option to turn off facial recognition in the images that I post. Clearly, I wasn’t the only one who was irked by the power and implications of this technology. Though I don’t think I actually turned it off.

 

Amazon’s Alexa was in the news a few weeks ago over a glitch in which the voice recognition device emailed a transcript of its owners’ conversation to one of their contacts. The device was activated during a normal conversation and misinterpreted what followed as the exact sequence of commands required to send a message. Clearly, having a device listening to your conversations at all times has its drawbacks. Perhaps it is no coincidence that the Alexa device bears an eerie resemblance to the mysterious black monolith that appears at the beginning of the classic sci-fi film ‘2001: A Space Odyssey’.
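As a rough illustration of how such a misfire can happen, here is a toy Python state machine in which a chain of misheard phrases walks a hypothetical device from its wake word all the way to sending a message. The states, phrases and intents are invented for the example and are not Amazon’s actual implementation.

```python
# Toy wake-word / command pipeline: a sequence of ordinary remarks that
# happens to match each trigger in order ends with a message being sent.
def run_dialog(heard_phrases):
    state = "idle"
    recipient = None
    transcript = []
    for phrase in heard_phrases:
        if state == "idle" and "alexa" in phrase:          # false wake-word trigger
            state = "listening"
        elif state == "listening" and "send" in phrase and "message" in phrase:
            state = "awaiting_recipient"
        elif state == "awaiting_recipient":
            recipient = phrase.title()                      # mishears a name in the chatter
            state = "recording"
        elif state == "recording" and phrase == "yes":      # mishears a confirmation
            return f"Sent to {recipient}: {' '.join(transcript)}"
        elif state == "recording":
            transcript.append(phrase)                       # background talk becomes the message
    return "nothing sent"

# Background conversation that happens to contain each trigger in order:
chatter = ["alexa that show was great", "did you send the message to mom",
           "dave", "we should redo the hardwood floors", "yes"]
print(run_dialog(chatter))   # Sent to Dave: we should redo the hardwood floors
```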

 

For your reference - https://en.wikipedia.org/wiki/Monolith_(Space_Odyssey)

 

(Photo source: www.pexels.com)

 

One task that has so far eluded AI programmers is scheduling meetings. The speech people use to describe dates and times is often convoluted and nuanced. For example, “I’ll be there at 7:30 or 8” often comes out in casual speech as “I’ll be there at seven thirty (pause) eight”. Should the computer interpret this as the range of times between 7:30 and 8:00, or as 7:38? And how does it pick up the context of AM versus PM that would be obvious to a person in the conversation? Scheduling meetings and appointments is actually big business.
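A minimal Python sketch of the ambiguity looks something like this; the tiny word list and parser are invented for illustration, and real scheduling assistants rely on far richer language models and calendar context.

```python
# Why "seven thirty eight" is ambiguous: the same word sequence supports
# two readings, and the text alone cannot break the tie.
WORD_TO_NUM = {"seven": 7, "thirty": 30, "eight": 8}

def candidate_times(utterance):
    """Return every plausible reading of a spoken time expression."""
    numbers = [WORD_TO_NUM[w] for w in utterance.split() if w in WORD_TO_NUM]
    readings = []
    if numbers == [7, 30, 8]:
        readings.append("between 7:30 and 8:00")   # "seven thirty ... eight" as a range
        readings.append("at 7:38")                 # "seven thirty-eight" as a single time
    # Without the pause, tone of voice or calendar context (AM vs PM,
    # which day, which time zone), both readings remain on the table.
    return readings

print(candidate_times("i will be there at seven thirty eight"))
# ['between 7:30 and 8:00', 'at 7:38']
```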

 

The demo of Google’s intelligent assistant software Duplex seems to have passed the ‘Turing test’ with flying colors. For those not familiar with it (https://en.wikipedia.org/wiki/Turing_test), the Turing test was proposed by the British mathematician Alan Turing in 1950 as a test of artificial intelligence. The idea is that a computer has reached the point of intelligent behavior when a person testing and interacting with it cannot tell whether they are dealing with a human or a machine.

 

(Photo source: www.pexels.com)

 

Google’s AI Blog provides several examples of the Duplex program in conversation with unsuspecting staff taking dinner reservations or booking hair appointments (https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html).

 

The examples show the Duplex software is even able to interpret thick accents and handle all kinds of human imperfections, such as incorrect grammar and casual speech. The software produces a natural-sounding voice along with human quirks of speech, such as long pauses and contemplative fillers like ‘ummm’ and ‘yeah’. After listening to the demos a few times, and knowing in advance that it is a machine, you can tell it is a computer speaking; but most people answering a cold call from this software would likely be fooled, at least until the conversation went off topic or deeper into issues beyond the limits of the program. The demonstrations open a window into a world where conversations with computers are far more advanced than the simple canned answers and computerized voice that Apple’s Siri and similar smartphone systems provide today.
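As a toy illustration of those quirks of speech, the sketch below sprinkles fillers and a pause marker into an otherwise scripted reply before it would be handed to a text-to-speech engine. This is not Duplex’s actual mechanism, just a sketch of the idea.

```python
# Hypothetical "humanizing" pass over a scripted reply: insert fillers
# at random and append an SSML-style pause marker.
import random

FILLERS = ["um,", "mm-hmm,", "uh,"]

def humanize(reply, filler_rate=0.2, seed=42):
    """Insert fillers and a trailing pause into a scripted reply."""
    rng = random.Random(seed)   # seeded so the example is reproducible
    words = reply.split()
    out = []
    for i, word in enumerate(words):
        if i > 0 and rng.random() < filler_rate:
            out.append(rng.choice(FILLERS))   # occasional contemplative filler
        out.append(word)
    out.append('<break time="400ms"/>')       # long pause before the line ends
    return " ".join(out)

print(humanize("I would like to book a table for four at seven pm"))
```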

 

Computer systems so convincing that humans can’t recognize they are interacting with a computer raise many ethical questions, including whether people should be told at the start of a conversation that they are talking to a machine. Will people feel tricked and angered when they discover that what they took for a human was in fact a computer? Should a computer voice even be programmed to sound so real that it is indistinguishable from a human voice, or should such systems be made to sound obviously computerized so that people know what is happening?

 

The Google demonstration was one of the first times I have seen or heard (outside of science fiction films) a system that is almost indistinguishable from a real person.  As with all technology, things are only going to improve over time and reach a point where computer interactions are indistinguishable from human interactions.  The Google Duplex demonstration shows that this future will soon be a reality. 

 

Moving call centres to low-wage jurisdictions with English-speaking populations has been a cost-saving measure for the last few decades. The advent of competent computer systems capable of handling most call centre tasks could do away with even these low-wage jobs. Just as self-driving transport trucks threaten the employment of truck drivers, call centre workers, receptionists and help-line staff now face a similar threat.

 

To take things a step further, at the Abundance 360 conference earlier this year I tried a demo that let me fly a drone using head-mounted sensors that measured my brainwaves. After about a minute calibrating the device and learning how to direct my focus in a way the headset could detect, I was able to get the drone to take off and fly. My control was relatively poor, and the drone promptly veered off and crashed, but it was impressive to control a physical device with my thoughts after such a brief trial. I’m sure that with a bit of practice and some improvements to the technology, the control would get much better.
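For a sense of how simple the basic idea can be, here is a purely hypothetical Python sketch of threshold-based control: calibrate a baseline focus level, then map sustained concentration above that baseline to a climb command. The sensor readings and drone commands are invented; real headsets and drones each have their own SDKs.

```python
# Hypothetical brainwave-to-drone mapping: a short calibration pass,
# then a simple threshold on the "focus" signal drives the drone.
def calibrate(baseline_samples):
    """Average the user's resting focus level during the ~1 minute calibration."""
    return sum(baseline_samples) / len(baseline_samples)

def control_loop(focus_samples, baseline, margin=0.15):
    """Emit a drone command for each focus reading."""
    commands = []
    for level in focus_samples:
        if level > baseline + margin:
            commands.append("throttle_up")   # sustained concentration -> climb
        else:
            commands.append("hover")         # attention drifts -> hold position
    return commands

baseline = calibrate([0.42, 0.45, 0.40, 0.43])
print(control_loop([0.44, 0.61, 0.66, 0.48], baseline))
# ['hover', 'throttle_up', 'throttle_up', 'hover']
```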

 

Why not skip the whole cumbersome voice recognition and interaction step and simply read people’s brainwaves and thoughts directly?  Then we will be truly ‘Living in the Matrix’. 

 

 

Jeffrey controlling a drone with his brainwaves

(Photo source:  Author)

 

The opinions expressed in this report are the opinions of the author and readers should not assume they reflect the opinions or recommendations of Richardson GMP Limited or its affiliates.

 

Richardson GMP Limited, Member Canadian Investor Protection Fund.

 

Richardson is a trade-mark of James Richardson & Sons, Limited. GMP is a registered trade-mark of GMP Securities L.P. Both used under license by Richardson GMP Limited.