Artificial Intelligence, Machine Learning, Cognitive Computing, Deep Learning, etc. all seem to point to essentially the same thing: computers being able to perform tasks in a way we would consider smart.
Watching my children interact with Google Home got me thinking about how the technology is progressing and the future of work; in particular:
- how quickly our expectations of technology grow; and
- the legal and ethical issues that can come with it.
As background, I was recently in the US and was given a Google Home Mini by a colleague, with the challenge of building something cool with it. Watching the very quick evolution of the way my children interacted with it was a bit of an eye-opener.
As you’d imagine, the initial steps were pretty timid…
- “Hey Google, play music”;
- “Hey Google, what’s the temperature”; and
- “Hey Google, tell me a joke”.
These were basic requests for information about the now. Then they started to test it…
- “Hey Google, what’s the weather for tomorrow”; and
- “Hey Google, play Fine Music FM”.
The latter got my attention: they had connected to an internet radio service and started streaming music. I thought I’d locked it down to just Spotify and my selection of appropriate music.
What really got me thinking was how they then started to mine the internet for information. They are working through some projects this year, so my 9yo and 7yo started asking, “OK Google, what is malaria?”… “OK Google, tell me more about the symptoms”… until finally my 7yo said, “Hey Google, you’re awesome”, to which it replied, “You’re pretty good yourself.”
My 7yo turned to my wife and me and said, “I think she likes me!”
Up until this point I was pretty certain that they knew it was a computer. They’ve interacted with Apple’s Siri a lot in the past, knowing it was a robot, and even complained about the pronunciations and semi-robotic responses. The fact that this voice is fairly realistic, has great intonation and responds in a very human way truly had them baffled; the signs of a well-designed conversation. Even when we asked the same question via Google search on our phones and showed that it was just reading out the first result, it was pretty hard to convince my 7yo that it was a robot.
Growth in Expectation
Watching expectations of the technology rise rapidly, from toy to usable tool, is dramatic. I’ve seen this in the past: when a new piece of technology is deployed well, the expectation of what is possible and what it should do can quickly outstrip the system’s initial capability.
The use of AI and Machine Learning, with the experience learning from interactions, is how we take the initial experience and have it grow with our expectations. Sometimes this means training the system on captured interactions and correct responses, to build the basis of the rules the machine needs to create and the logic it needs to follow. Training an AI can take as few as a dozen samples, as with Google’s Dialogflow, or tens of thousands; it comes down to the algorithm, the complexity of the task and the accuracy you need.
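To make the “a dozen samples” idea concrete, here is a toy sketch of matching an utterance to an intent from a handful of training phrases. The intents and phrases are invented for illustration, and this is a naive bag-of-words matcher, not the Dialogflow API:

```python
from collections import Counter

# A dozen-odd invented training phrases, grouped by intent.
TRAINING = {
    "weather": ["what's the weather for tomorrow", "what's the temperature",
                "is it going to rain"],
    "music":   ["play music", "play Fine Music FM", "put some songs on"],
    "joke":    ["tell me a joke", "say something funny", "make me laugh"],
}

def tokens(text):
    """Bag of lowercased words, apostrophes stripped."""
    return Counter(text.lower().replace("'", "").split())

def classify(utterance):
    """Pick the intent whose best training phrase shares the most words
    with the utterance."""
    words = tokens(utterance)
    def best_overlap(phrases):
        return max(sum((tokens(p) & words).values()) for p in phrases)
    return max(TRAINING, key=lambda intent: best_overlap(TRAINING[intent]))
```

With so few samples, accuracy is fragile; real systems grow the training set from captured interactions, which is exactly where the garbage-in, garbage-out problem below comes from.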
Garbage in, garbage out: One of the biggest issues with learning algorithms is the quality of what goes in. The repositories they draw on, whether data sources or training sources, ultimately affect the outcomes. Back when I was at IBM, building the training question-and-answer sets for a Watson engine required a lot of time and effort to ensure each was validated and tested. Triggering applications on Google Home, I’m finding that some of the responses can be bizarre, especially when I was the one who created the dialogue; nothing like my 3yo telling me that he wants the story app I created to stop telling him his favourite story.
Legal and Ethical Issues
AI has huge potential to remove human error, introduce new levels of efficiency and, by taking people out of the loop, bring the cost of delivering services down significantly. However, with learning algorithms there is risk in what is captured and how it’s used. Watching learning algorithms add to their repertoire is pretty amazing, but biases can creep in very quickly.
Learned bias in the actions of intelligent systems isn’t a new problem, though the example I most easily recall is Microsoft’s Tay becoming racist in under 24 hours and having to be taken offline. Putting guard rails on the algorithms and blacklisting content and behaviours is part of how behaviour is curbed; however, people always find ways to use things in unpredictable ways.
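One of those guard rails can be sketched very simply: screen a candidate response against a blocklist before the assistant is allowed to say it. This is a deliberately minimal illustration with invented placeholder terms; production systems use far more sophisticated classifiers, and exact word matching like this is easy to evade:

```python
# Invented placeholder terms; a real system would use a maintained list
# and a learned classifier rather than exact word matching.
BLOCKLIST = {"badword1", "badword2"}

def guarded_reply(candidate: str) -> str:
    """Return the candidate response unless it trips the blocklist."""
    words = set(candidate.lower().replace("'", "").split())
    if words & BLOCKLIST:
        return "Sorry, I can't help with that."
    return candidate
```

The weakness is obvious: the filter only catches what its authors anticipated, which is why unpredictable users keep finding ways around it.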
As with the example of how my 7yo interacted with the AI assistant, AI will very quickly become indistinguishable from human interaction, and people will make decisions and take action based on the recommendations or logic it provides. What should worry everyone is how misinformation, fake news and people’s agendas can shape the way people interact with and use the advice of these systems.
Today there are laws and guidelines being put in place in certain geographies to prepare for the ethical and liability issues that will eventually ensue. I tend to agree with Elon Musk’s view that there needs to be a lot more effort put into how we control and govern AI because it will quickly become a lot more complex and ingrained into society, moving past the novel toy of today.
Each platform is different in how it captures, stores and potentially re-uses recordings of what you say or do. I love the idea of conveniently asking questions in natural language and being able to data-mine the full depth of the internet, but at what cost? I suppose we won’t really know until something bad happens.
For now, it will stay a toy that is switched on only when I want it, and kept where I can keep an eye on what my kids ask it.