It seems like Artificial Intelligence (AI) is suddenly everywhere, from phones to cars to kitchen appliances. If it performs calculations – it’s AI. We’ve already been through three “AI winters” in the past 70 years – yes, our obsession with AI is that old – where the outcomes failed to live up to the hype. Are we heading for another one now, or have we finally cracked it?
My session at this year’s itSMF UK conference (ITSM18) will cover the current state of AI and real-world, practical opportunities for IT service management (ITSM) in more detail. Before that, I’d like to introduce a few of the key themes shaping the industry right now – and help to cut through some of the buzz for added clarity.
What Does it Mean to be Artificially Intelligent?
Let’s start off with some categorization. When we talk about AI, we usually talk about the capability to imitate human behaviour. This is a rather broad definition, and indeed, almost everything that does something for us could thus be called AI. Like a robot dog that fetches our slippers. Or dances. Or opens doors. But what about real dogs who have been trained to do the same? Probably not, because we seem to assume that artificial also means mechanical, and that mechanical, in turn, means uninfluenced by mind or emotions. Anthropomorphized pets do not fit that category, but learning machines do.
The capability of machines to “get smarter” is called Machine Learning (ML), and it can be either supervised (working with labelled data) or unsupervised (working with unlabelled data); a third approach, Reinforcement Learning, learns from feedback loops on the system’s own actions. Unsupervised learning often benefits from what is called Deep Learning (DL), and in many systems the algorithms used for DL are inspired by biological nervous systems – we try to mimic the way (we think) our brain works. There are challenges with and limitations to this approach, but they are beyond the scope of this article.
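To make the supervised/unsupervised distinction concrete, here is a deliberately tiny sketch in plain Python (the data and function names are invented for this illustration): a supervised learner predicts from labelled examples, while an unsupervised one has to discover structure in unlabelled data on its own.

```python
# Minimal illustration of supervised vs unsupervised learning.
# All data here is made up for the example.

def nearest_neighbour_predict(labelled, x):
    """Supervised: predict the label of the closest labelled example."""
    closest = min(labelled, key=lambda pair: abs(pair[0] - x))
    return closest[1]

def two_means_cluster(points, iterations=10):
    """Unsupervised: split unlabelled 1-D points into two groups (k-means, k=2)."""
    a, b = min(points), max(points)  # initial centroids: the extremes
    for _ in range(iterations):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)  # move each centroid to its group's mean
        b = sum(group_b) / len(group_b)
    return group_a, group_b

# Supervised: labelled ticket resolution times (minutes) -> "simple" or "complex"
labelled = [(5, "simple"), (8, "simple"), (40, "complex"), (55, "complex")]
print(nearest_neighbour_predict(labelled, 10))   # -> simple

# Unsupervised: the same numbers without labels; the algorithm
# finds the two groups by itself.
print(two_means_cluster([5, 8, 40, 55]))         # -> ([5, 8], [40, 55])
```

The first function can only answer because a human labelled the history; the second is handed raw numbers and effectively answers a “tell me something interesting” question.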
Most of today’s real-world AI can be described as Artificial Narrow Intelligence (ANI), where the application of the acquired knowledge and skills is limited to a single domain or task type (e.g. face recognition or natural language processing). The goal – or perhaps the dream – is to move towards Artificial General Intelligence (AGI), where the machine could mix-and-match its skills and perform at the same level as any human being. Despite the buzz, we’re not even close to reaching AGI – some say we might get there by 2040, some say the year 2100 is much more likely, and some say it will never happen. There are, of course, people who claim this will happen any moment now, or has already happened …
Utopia and Dystopia
An interesting, and perhaps the scariest, question is what will happen once we do get to AGI. What will an artificial mind that has reached the human level of “reasoning” decide to do next? That is – how long will it take before AGI becomes smarter than humans, before it becomes Artificial Super Intelligence (ASI) with an exponentially scaling skillset? Are we talking about decades, or mere minutes?
There are two main camps of thought when it comes to ASI. The first one believes in utopia – that we will be able to constrain and leverage ASI for maximum benefit for humankind. We’ll be able to find cures for diseases, solutions to the planet’s problems, and eventually probably reach immortality, in whatever form we end up taking (hint: this might not be humanoid).
The second camp is more inclined to see the result of ASI as a dystopia, and believes it will end with our extinction. Several high-profile scientists (e.g. Stephen Hawking) and businessmen (e.g. Elon Musk) are in this camp, and consider AI (in the form of ASI) to be our biggest existential threat.
There is a lot more to be said about the possible futures when it comes to ASI, and I will pick this discussion up in my ITSM18 presentation. In the hopes that the potential extinction is not immediate, I’ll now move on to the topic of ITSM, and what benefits (and challenges) AI brings here.
Is it Time for the “I” in “ITSM” to Stand for “Intelligent”?
One of the pains of ITSM, and a frequently voiced criticism of it, is the amount of manual work that needs to be done. Tickets and requests and approvals and delays … which is also one of the reasons behind the popularity of the DevOps movement, where a lot of focus has been put on automation. It’s not like automation is a foreign concept for ITSM professionals, but the technological capabilities to fully leverage it in 2008 were just a tiny bit different from what we can do in 2018. The industry has started to catch up, though, and automation has become an increasingly important topic in ITSM.
In fact, most of the AI-labelled discussions in ITSM today are about automation (and chatbots; oh so much talk about chatbots). The promoted solutions wouldn’t really be able to pass the Turing Test, and “AI-powered” doesn’t mean “exhibiting AGI-like behaviours” – far from it. Although these developments do make our lives easier, it’s all AI in the broadest (and least useful) sense of the term. Productivity gains are important, yes, but when it comes to, say, Customer Experience (CX), how do we know that replacing a human with a chatbot improves CX, rather than just letting us play with new toys and spend company funds for questionable results?
Following strict predefined scripts for automation can be useful for not wasting time on repetitive (but still important) work, but these tasks give us little insight into how to improve the value created with the services. The intelligence comes from learning, not from following; and for learning, we need data.
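A scripted automation of this kind is, at heart, just a fixed lookup. As a hedged illustration (the rules and ticket wording below are invented for this sketch), such a router handles repetitive work reliably but never learns anything from the tickets that pass through it:

```python
# A hypothetical rule-based ticket router: fixed, predefined rules only.
ROUTING_RULES = {
    "password": "Service Desk",
    "vpn": "Network Team",
    "invoice": "Finance Systems",
}

def route_ticket(summary):
    """Route a ticket by simple keyword match; fall back to a triage queue."""
    text = summary.lower()
    for keyword, queue in ROUTING_RULES.items():
        if keyword in text:
            return queue
    return "Manual Triage"

print(route_ticket("Cannot connect to VPN from home"))   # -> Network Team
print(route_ticket("Laptop screen flickering"))          # -> Manual Triage
```

No matter how many tickets it processes, the rule table stays exactly as it was written; a learning system would instead update its routing behaviour from data about how past tickets were actually resolved.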
Then the “T” Should Stand for “Transparent”
ML algorithms need training data to be able to provide us with new information – either by analysing well-structured, labelled data and answering specific (trend-related) questions, or by working with unlabelled and potentially highly unstructured data and looking for an answer to a “tell me something interesting” question. That is to say, finding detailed answers to specific questions requires a well-structured dataset, and finding meaningful answers requires well-thought-out questions. What is it that we really want to know? This is a bit like formulating search queries in the early days of Google search, before the algorithms were improved for natural language processing.
Some of the challenges lie in the data as well. There have been quite a few examples of an AI-powered system behaving in a way we consider biased and wrong – most recently, in the case of Amazon’s hiring tools. This is often referred to as Algorithmic Bias, but I believe the term obfuscates the challenge and makes it more difficult to find solutions. The result of using an algorithm to comb through the available data and come up with recommendations can indeed be biased, but it’s not the algorithm that is at fault – it’s the data, and behind the data, it’s us.
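The point that the bias lives in the data rather than in the algorithm can be shown with a deliberately simple, entirely hypothetical example: a “model” that merely predicts the most common historical outcome for each group will faithfully reproduce whatever skew the historical records contain, even though the algorithm itself treats every group identically.

```python
from collections import Counter

def train_majority_model(history):
    """'Learn' by predicting the most common historical outcome per group.

    The algorithm is neutral: it applies the same rule to every group.
    Any bias in its output comes entirely from the records it is fed.
    """
    outcomes = {}
    for group, outcome in history:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

# Invented historical hiring records, skewed by past human decisions:
history = [
    ("group_a", "hired"), ("group_a", "hired"), ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "hired"),
]

model = train_majority_model(history)
print(model)  # -> {'group_a': 'hired', 'group_b': 'rejected'}
```

The skew in the training records becomes the “policy” of the model; fixing the output means fixing the data (and the decisions behind it), not blaming the counting.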
The utility of the AI solution we design will be as high (or as low) as the quality of the training data we provide. An easy-sounding solution to this would be to add compensating measures to the algorithm to balance the results, or perhaps even tweak the data, but this is a slippery slope. How do we know what to tweak and how much? And if we believe we know what the ideal desired outcome looks like, then why do we need AI at all? Couldn’t we just make our own decisions?
We also need to keep in mind that data is historical and non-judgmental. It reflects what happened before, without any moral or financial evaluation. It is us who add this dimension to the output of the algorithm, and it is us who decide what feels right and what feels wrong. Even recent history has shown that there can be surprisingly little consensus on some of these matters among the 7.5 billion people living on this planet, and that’s a challenge we can’t ignore.
Working with What we Have
I believe that one of the huge benefits of AI is the ability to shed light on how we have made our decisions in the past. It helps to highlight our biases, and provide us with an opportunity to assess these in the context of our specific frame of reference – be it political, financial, or moral. Biases are part of the mechanism we use to explore and make sense of the world, and it’s impossible for humans to avoid bias – but it is possible to become aware of some of the biases strongly affecting our lives.
This approach also applies to the context of ITSM. To use AI to make better decisions and to improve the quality and value of the services we provide, we need to be transparent about the data we use to find insights and make decisions. If the results of running an algorithm on live data seem disappointing, we need to take a look at the training data to better understand how and why we ended up with these results. We shouldn’t introduce biases with unknown effects into the algorithm – we should aim at providing it with better data.
So, there’s an important aspect to keep in mind when talking to your current or potential ITSM tool provider. Cut through the hype of “AI-powered” and find out what their plans are for leveraging Machine Learning. Find out how they can help you learn from your data.