Archive

Posts Tagged ‘collaboration’

Changing expectation of AI and voice interaction

January 26th, 2018
Reading Time: 6 minutes

Artificial Intelligence, Machine Learning, Cognitive Computing, Deep Learning, etc. all seem to point to essentially the same thing: computers being able to perform tasks in a way that we would consider smart.

Watching my children interact with Google Home got me thinking about how the technology is progressing and the future of work, and in particular about two things:

  1. how our expectations of technology grow quickly; and
  2. the legal and ethical issues that can come with it.

 

As background, I was recently in the US and was given a Google Home Mini by a colleague, with the challenge of building something cool with it. Watching the very quick evolution of the way the kids interacted with it was a bit of an eye-opener.

As you’d imagine, the initial steps were pretty timid…

  • “Hey Google, play music”;
  • “Hey Google, what’s the temperature”; and
  • “Hey Google, tell me a joke”.

These were basic requests, asking for information about the now. Then they started to test it…

  • “Hey Google, what’s the weather for tomorrow”; and
  • “Hey Google, play Fine Music FM”.

The latter got my attention, as they were able to connect to an internet radio service and start streaming music; I thought I’d locked it down to just Spotify and my selection of appropriate music.

What really got me thinking was how they then started to mine the internet for information. This year they are working through some projects, and so my 9yo and 7yo started asking “OK Google, what is malaria?”, then “OK Google, tell me more about the symptoms”… until finally my 7yo said “Hey Google, you’re awesome”, to which it replied “you’re pretty good yourself.”

My 7yo turns to my wife and me and says, “I think she likes me!”.

Up until this point I was pretty certain that they knew it was a computer. They’ve interacted with Apple’s Siri a lot in the past, knowing it was a robot, and even complained about the pronunciations and semi-robotic responses. The fact that the Google Home voice is fairly realistic, has great intonation and responds in a very human way truly had them baffled; signs of a well-designed conversation. Even when we asked the same question via Google search on our phones and showed that it was just reading out the first result, it was pretty hard to convince my 7yo that it was a robot.

Growth in Expectation

Seeing expectations of the technology rise rapidly, going from a toy to a usable tool, is dramatic. I’ve seen this in the past when a new piece of technology was deployed: when it was done well, expectations of what is possible and what it should do quickly outstripped the initial capability of the system.

The use of AI and machine learning, and having the experience learn from the interactions, is how we take the initial experience and have it grow with our expectations. Sometimes this means training the system using captured records of interactions and correct responses, to build the basis of the rules the machine needs to create and the logic it needs to follow. Training AIs can take as little as a dozen samples, as with Google’s Dialogflow, or tens of thousands of samples. It tends to come down to the algorithm, the complexity of the task and the accuracy you need.
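
To make the “dozen samples” point concrete, here is a toy sketch, in Python, of an intent matcher driven by a handful of labelled example phrases. It is purely illustrative: the intent names and sample phrases are made up, and production engines like Dialogflow use far richer models than simple word overlap; the point is just how little data it can take to bootstrap a useful interaction.

```python
# Toy intent matcher: a dozen labelled sample phrases and simple word overlap.
# Illustrative only; real engines (Dialogflow, Watson, etc.) do far more.

TRAINING_SAMPLES = {
    "play_music":  ["play music", "play some songs", "put on my playlist",
                    "play Fine Music FM"],
    "get_weather": ["what's the temperature", "what's the weather for tomorrow",
                    "will it rain today", "how hot is it outside"],
    "tell_joke":   ["tell me a joke", "say something funny",
                    "do you know any jokes", "make me laugh"],
}

def words(text):
    """Lower-case bag of words for a quick-and-dirty similarity check."""
    return set(text.lower().replace("?", "").replace(",", "").split())

def match_intent(utterance):
    """Return the intent whose samples share the most words with the utterance."""
    scores = {
        intent: max(len(words(utterance) & words(sample)) for sample in samples)
        for intent, samples in TRAINING_SAMPLES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(match_intent("hey, what's the weather like tomorrow"))  # -> get_weather
print(match_intent("OK, play Fine Music FM"))                 # -> play_music
```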

Garbage in, garbage out: one of the biggest issues with learning algorithms is exactly that, garbage in and garbage out. The repositories they use, be they the data sources or the training sources, will ultimately affect the outcomes. Back when I was at IBM, building the training question and answer sets for a Watson engine required a lot of time and effort to ensure that each pair was validated and tested. Triggering applications in Google Home, I’m finding that some of the responses can be bizarre, especially when I was the one who tried to create the dialogue; nothing like my 3yo telling me that he wants the story app I created to stop telling him his favourite story.
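
I can’t reproduce any of those training sets here, but as a rough, hypothetical illustration of the kind of sanity checking involved, here is a small sketch that flags duplicate questions, empty entries and suspiciously short answers before a set is used; the checks and the sample pairs are invented for the example, not IBM’s actual process.

```python
# Hypothetical sanity checks over a question/answer training set.
# Invented checks and data; just the flavour of validation described above.

from collections import Counter

def validate_qa_pairs(pairs):
    """Return human-readable problems found in a list of (question, answer) pairs."""
    problems = []
    duplicates = Counter(q.strip().lower() for q, _ in pairs)
    for question, count in duplicates.items():
        if count > 1:
            problems.append(f"duplicate question ({count}x): {question!r}")
    for i, (question, answer) in enumerate(pairs):
        if not question.strip():
            problems.append(f"pair {i}: empty question")
        if not answer.strip():
            problems.append(f"pair {i}: empty answer")
        elif len(answer.split()) < 3:
            problems.append(f"pair {i}: suspiciously short answer: {answer!r}")
    return problems

sample_set = [
    ("What is malaria?", "A mosquito-borne infectious disease caused by a parasite."),
    ("What is malaria?", "A disease."),
    ("What are the symptoms of malaria?", ""),
]
for problem in validate_qa_pairs(sample_set):
    print(problem)
```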

 

Legal and Ethical issues

AI has huge potential to remove human error, introduce new levels of efficiency and, by taking people out of the loop, bring the cost of delivering services down significantly. However, with learning algorithms there is risk in what is captured and how it’s used. Watching learning algorithms add to their repertoire is pretty amazing, but biases can creep in very quickly.

Learned biases and the actions of intelligent systems aren’t a new problem; the example I most easily recall is Microsoft’s Tay becoming racist in under 24 hours and having to be taken offline. Putting guide rails on the algorithms and blacklisting content and behaviours is part of how such behaviour is curbed (a rough sketch of the idea follows below); however, people always find a way to use things in unpredictable ways.
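
As a rough, hypothetical illustration of a guide rail, here is a sketch of a response filter that swaps out any generated reply containing blacklisted terms for a safe fallback. The term list and fallback message are invented, and real systems layer many more checks than this, which is exactly why people still find ways around them.

```python
# Hypothetical guide rail: screen a generated reply against a blacklist
# before it reaches the user. Placeholder terms only.

import re

BLACKLISTED_TERMS = {"slur_example", "conspiracy_example"}
FALLBACK_RESPONSE = "I'm not able to help with that."

def apply_guide_rail(reply):
    """Return the reply unchanged, or a safe fallback if it trips the blacklist."""
    tokens = set(re.findall(r"[a-z_']+", reply.lower()))
    if tokens & BLACKLISTED_TERMS:
        return FALLBACK_RESPONSE
    return reply

print(apply_guide_rail("Here's a joke about cats."))               # passes through
print(apply_guide_rail("Let me tell you a conspiracy_example."))   # blocked
```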

As with the example of how my 7yo interacted with the AI assistant, very quickly AI will become indistinguishable from human interaction, and people will make decisions and take action based on the recommendations or logic provided by AI. What should worry everyone is how misinformation, fake news and people’s agendas can shape the way people interact with and use the advice of these systems.

Today there are laws and guidelines being put in place in certain geographies to prepare for the ethical and liability issues that will eventually ensue. I tend to agree with Elon Musk’s view that a lot more effort needs to go into how we control and govern AI, because it will quickly become far more complex and ingrained in society, moving past the novel toy of today.

My final initial thought is on data mining. How is what goes in being used? What’s being recorded? How much is being stored? Who has access to it? What does all of this mean?

Each platform is different in how it captures, stores and potentially re-uses recordings of what you say or do. I love the idea of conveniently asking questions in natural language and being able to mine the full depth of the internet, but at what cost? I suppose we won’t really know until something bad happens.

For now it will stay a toy that is switched on only when I want it, kept where I can keep an eye on what my kids ask it.


Collaboration vs. Co-creation

October 18th, 2015
Reading Time: 3 minutes

I had an interesting conversation the other evening with Markus Andrezak (@markusandrezak). It used music co-creation and collaboration as an analogy for how to interact with your customers in the business world.

I like the analogy of music and music creation, having once, in a previous life, been a musician (or, as the joke goes, the guy who hangs around with musicians). When you create music with fellow musicians it really is both collaboration and co-creation: everyone feeds off each other’s ideas, essentially starting from one person’s base concept. Extending that analogy into business, you can look at a start-up that originally set out to solve its own problem as the basis of the initial product. That same problem-solving “thing” is discovered to work for others, and a new business is formed.

As the start-up grows it starts listening to its initial customers for changes and enhancements, directly adding features and capabilities. Again this works with the music analogy; the musicians listen directly to friends and fans, those who attend the performances, and adjust accordingly: changing tempo, changing key, even playing in a different style. In both instances this is co-creation, because it is a small group able to communicate their needs, wants and expectations. As the music grows into wider distribution, be it online or physical, separating the musicians from direct interaction with their fans, and as an organisation grows to include many more clients, it becomes very hard, almost impossible, to maintain or even regain that initial level of intimacy and co-creation.

What happens when you do reach such scale is that there is a lot of noise that needs to be picked through.

There was a lot of debate as to what was collaboration and what was co-creation. Both are the process of working together for a common end. I’m hard pressed to really distinguish between the two, and could easily argue that trying to differentiate them degenerates into an argument about semantics.

Trying to find other people’s views on this was interesting. Over at this site I found a reference to this paper, A Typology of Customer Co-Creation in the Innovation Process, where they define co-creation as:

“Customer co-creation is an active, creative and social process, based on collaboration between producers (retailers) and users, that is initiated by the firm to generate value for customers” (Piller, Ihl & Vossen – 2010)

Whereas over here they assert that collaboration is co-creation.

Where I think the difference lies is this: collaboration is a structured coming together to address a specific issue or problem, while co-creation is a broader, ongoing engagement. Either way, I think the point is that it’s a good thing and should be embraced, unless of course you just meme-copy and take your strategies from others.

Innovate, innovate, innovate – making time for ideas

March 17th, 2014
Reading Time: 2 minutes

Warning: this is a half thought. I saw this YouTube clip recently by Steven Johnson, titled “Where good ideas come from”.

http://youtu.be/NugRZGDbPFU

The short of it is:

  • Ideas need time to incubate
  • The best ideas and breakthroughs come from a collision of multiple ideas or hunches
  • You need to provide a way to allow contemplative thinking and mingling of people to allow the discussion to happen.

Every day customers, managers and investors are telling us to innovate more. The biggest issue I see is that in the corporate world we don’t make time to think about things. If we do, it is generally in some form of workshop environment where no one has had five minutes to spare to think about it before getting there.

Whilst the internet has made it a lot easier to collaborate, borrow, use or bounce off others’ ideas, finding time to get out there and participate in discussions, as well as making time to reflect and absorb, is becoming increasingly hard.

 

Thoughtlet – Are we moving to a single device?

June 15th, 2013
Reading Time: 7 minutes

This isn’t a fully fleshed out thought. It is the beginning of some musings after looking at the Apple WWDC announcements and how they are building tighter integration between OSX and iOS. It was also spurred on by this article. As users are driven by portability and the gap in feature parity between devices shrinks, and looking at the history and trends of personal computing purchases, are we finally moving to the “single device”? What will this new “single device” look like, and what effect will it have on the current trends in the market?


If you don’t like my picture, there are others to choose from.

Personal computing kicked off in the 1980s with the personal computer. This was the first time that general and flexible computing was available to the average person.

In the 1990s mobile phones took off, as did the personal digital assistant (PDA) in the mid to late ’90s. This took communications and personal computing mobile. Given the limited capabilities of the PDAs at the time, most people still had a desktop PC. Those lucky enough also had access to laptops in the ’90s; these too had limitations and, for the more demanding users, increased the device count further.

In the late 1990s PDAs merged with phones to create the first smartphones, reducing the number of devices a person carried.

The 2000s brought laptops into the mainstream, and the latter part of the decade saw the introduction of netbooks and ultrabooks as a way of increasing the portability of computing. It also saw the paradigm (can’t believe I used paradigm) shift in mobile telephony with the introduction of the iPhone. This new interface changed people’s view of mobile computing forever.

By 2010 tablet computing, on the back of smartphones, came to market and introduced another compromise to computing. This now sees people with three devices, notebook, smartphone and tablet, each needed for a specific purpose: the notebook as the data entry and manipulation device, the smartphone as the all-purpose device, and the tablet as the compromise between the two, meeting somewhere in the middle.

In 2013 we now see a decline in PC sales and an increase in smartphone sales, with tablets of varying specification and size trying to balance capability and portability, as well as smartphones so large that they challenge the smaller tablets on the market. Why? This jostling and positioning is trying to meet consumers’ needs, but what are these needs?

 

I argue that people are trying to get that balance right. Ideally they don’t want both a phone and a tablet, but the phone screen is too small, or the tablet too big, to always have with them. If this is truly the case then the real future is going to look a lot different from where we are now, reaching an almost sci-fi climax.

 

I think what will eventually happen is that the processing power of a mobile phone will be comparable with that of today’s ultrabooks. Once this happens, is there really a need for everyone to have 16 devices? The new device will be like the smartphone of today, with a docking capability to turn it into a powerful data entry and manipulation tool, or a sleeve that gives it a bigger, interactive display like that of a tablet or laptop.


vision of future of personal computing

If this is the case, what are the implications to current enterprise trends?

 

Cloud Services – Today, file sharing tools like Box and Dropbox allow us to share files with others, but most people tend to use them as a way of syncing and backing up their own personal data. In the single device world this won’t need to change. Whilst the sync capability will be less of a concern, the sharing capability will increase as it does today, moving from file sharing to collaborative content creation and manipulation.

BYOD – The bring your own device phenomenon, like cloud, is moving past being a disruptive trend and becoming the norm. With a single device, the only barrier is the compartmentalisation of work and personal. As mobile computing power increases, so will the ability to have capabilities like personas or profiles, allowing seamless switching between work and personal contexts.

Security implications – This will cement the concept of the micro-perimeter (see really crappy Figure 2 below). Mobile computing and secure code execution are becoming more and more mature, as has desktop computing. We’ve moved from the personal firewall and the hypervisor to the micro-visor (see Figure 1 below), providing the ability to secure the execution of the operating system itself, as well as temporary, sandboxed instantiation of applications as they are used. Incorporating the mobile device management (MDM) platform concept into a policy-based micro-visor allows seamless movement from personal device to multifunction device, with employers able to specify policies for the components under their control; a rough sketch of what such a per-persona policy might look like follows the figures below.


Figure 1: Hypervisor to Micro-visor


Figure 2: Evolution of the micro-perimeter
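
To make the policy idea a little more concrete, here is a hypothetical sketch of a per-persona policy an employer might push down to such a micro-visor. Every field name and value is invented for illustration; this is not the schema of any real MDM product.

```python
# Hypothetical per-persona policy for a policy-based micro-visor.
# Field names and values are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class PersonaPolicy:
    name: str
    enforce_disk_encryption: bool = False
    require_vpn: bool = False
    remote_wipe_allowed: bool = False                # scoped to this persona, not the device
    allowed_apps: set = field(default_factory=set)   # empty set = no restriction

WORK = PersonaPolicy(
    name="work",
    enforce_disk_encryption=True,
    require_vpn=True,
    remote_wipe_allowed=True,                        # employer can wipe the work persona only
    allowed_apps={"mail", "calendar", "crm"},
)

PERSONAL = PersonaPolicy(name="personal")            # left entirely under the owner's control

print(WORK)
print(PERSONAL)
```

The point of the split is that the employer’s controls attach only to the work persona, leaving the personal persona untouched on the same physical device.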

I think the trends of today are not going to change much or slow down; each seems to fuel the other in regard to personal computing. There are still niches in the market to help consumers and businesses ease into this new paradigm (there you have it, I used paradigm again)!

UPDATE – 18/6/13: After a brief Twitter exchange with Brian Katz (@bmkatz) and Ian Bray (@appsensetechie) I realised that I conflate the concepts of Mobile Device Management, Mobile Application Management and device data management under the MDM terminology.

I see Mobile Device Management, i.e. device control, as the initial stage in the evolution of dealing with the data management problem. Application management is controlling the conduit to the data by enforcing trusted applications (another potential flaw). Ultimately the data is the only thing that anyone truly cares about. This is an oversimplification of the problem, as there are other concerns and factors that come into it.

UPDATE – 22/6/13: A further comment from Tal Klein (@VirtualTal) reminds me that there will always be a multi-device pull driven by consumption/creation needs, as well as an aggregation and administration drive towards consolidation of devices. I can see that there will continue to be those that have specific needs and require multiple devices (driven by technology adaptation, or scenarios). I’m also guided by watching my family’s adoption: I’m the only one that really has multiple machines; everyone else uses two devices, and only uses the secondary device due to a lack of feature parity on their primary iDevices.

The evolution of the web

September 5th, 2011
Reading Time: 1 minute

I know I’ve had my head in a hole for the last several months, but work can get like that. I’ve finally started to be able to look outside the confines of what I do for a living and stumbled upon this awesome interactive infographic showing web trends since the beginning of the 90s.

You can see the root idea was HTML5 capabilities; it would be interesting to see other complementary technologies and tools mapped onto this as well.

Categories: Technology