Selling Problem Management
It can be argued that the combination of skills required by a good problem analyst or problem manager is amongst the most transferable, desirable and unique within any organisation. Critical thinking, problem solving, stakeholder analysis, managing virtual teams… the list is pretty impressive. So why does problem management as a function still find it so difficult to be accepted in many organisations? Why do these teams have trouble justifying their very existence? How can they tell their story in a more effective way?
The ITSMF UK problem management special interest group (PM SIG) has interviewed a number of its members to try to answer these questions. We set out to baseline the perception of problem management within organisations and the reasons for the views held. We looked at how problem management measures itself, alongside the communication tactics it uses to tell its story. Finally, the PM SIG investigated what it believes would be the ideal way for problem management to show its worth.
Our sample organisations ranged from financial services to UK utilities to third sector. From mature problem management to fledgling functions, the PM SIG members we interviewed gave us some remarkable insights that are truly representative of what’s happening today. For good measure, the author added his own views in the name of the PM SIG.
PERCEPTION OF PROBLEM MANAGEMENT
The first area of investigation for the PM SIG concerned the perception of the problem management function. So we asked: how is problem management viewed in your organisation? Does this perception vary between teams or with seniority? And finally, what do you believe are the key reasons for those perceptions?
Perhaps the most interesting response came from an organisation within financial services. Their view was that the name ‘problem management’ did not accurately reflect their activities or their mission. They argued that their outputs (service improvement initiatives, reducing the risk of recurrence of incidents, management information and involvement in the risk and governance lifecycles both internally and externally) were more synonymous with the terms ‘improvement’, ‘service’ and ‘control’. The term problem management, well understood by ITSM professionals in an ITIL context, is taken too literally by colleagues in other departments and thus the role has become stereotyped.
This supports the view expressed by a number of organisations that problem management is perceived as a dumping ground for ‘difficult incidents’. A utility organisation we spoke to even suggested that problem management should be wearing their underwear on the outside like Superman, such are the miracles they are expected to perform. One respondent commented that the lack of an organisation-wide service management mind-set had led to a situation where they had created the problem management function that IT wanted, not one that the business needed. The knock-on effect was that the function and process had little visibility or penetration outside of IT.
One of the more organisationally mature functions we interviewed enthused at being seen as “a trusted professional team with good technical awareness, strong process skills and a thorough business understanding”. Indeed they went on to add that in a multi-supplier environment they were perceived as “the honest broker” by their partners, a team who were well respected and “not in the blame game”. They put a lot of their success down to being proactive in reaching out to their customer, a trend that was echoed by others interviewed. Consistency in communication with suppliers and internal teams alike was also cited as a secret of success, picked up and copied by one team from an ITSMF UK session held some three years ago.
Another financial services provider told us of a function that was well established and credible within the IT space and up and coming in other business areas. They put the fledgling success in other business areas down to good marketing of their skills during involvement in IT problems. The hope is that colleagues outside IT say “These guys did a good job. I like the approach they take. Let’s see what they can do for us…” Their team has gained further credibility by engaging closely with an ICT strategy body that sits above their Change Advisory Board (CAB). Problem management is involved as an equal partner in strategic decisions to target changes for the biggest business impact or risk reduction. They are also considering renaming the function to drop the negative ‘problem’ tag.
One very interesting (if disturbing) view was that problem management was being used as a way to reduce headcount in the service desk/incident management arena. This led to a problem management team fighting a rear-guard action to counter these arguments with visions of redeploying any displaced front-line staff to other value-adding roles. In stark contrast to the view of their lower ranks, the same organisation had wide managerial acceptance of problem management’s contribution right up to board level. They put some of this success down to effective resolution and risk reduction. As a newish (six-month-old) function, they believe managing problem creation and not carrying a huge backlog has helped to drive the efficiency perception.
What’s in a name? Quite a lot we’d say. Many problem management functions would do well to rebrand to include ‘service improvement’ in the name. Not only does that remove negative connotations around the word ‘problem’ but also serves to allay the dumping ground tag that comes from misinterpreting ITIL’s definition of the process. One factor that may seem obvious but merits repeating is that a proactive, service focused approach to service improvement in place of a reactive, technically driven blame game is a much better persona to project from your problem team. Making sure that your team’s problem investigation and service improvement follow a consistent approach with adequate communication and engagement would seem to be an obvious quick win.
Finally, whilst we would never denigrate ITIL education, there is a strong chance that the very limited treatment that the process receives in the Foundation exam is causing collateral damage to what problem management is really trying to achieve in many organisations. For example, the fixation on the words ‘root cause’ in the Foundation delivery fosters an often misguided perception of problem management’s role. I would encourage our problem functions to reach out to those who’ve recently sat ITIL Foundation to put straight some of the sweeping generalisations used to get them through the exam!
MEASURING PROBLEM MANAGEMENT
Measuring current success was our next avenue of investigation. What hard measures (KPIs) does your process use and why? Do you measure the individuals in your problem management team? Finally, do you employ any soft measures (Customer Satisfaction (CSat) etc.) in measuring problem management effectiveness?
There was surprisingly little variation in the responses between organisations that sat at the more mature and better funded end of the scale against those that were ‘moving up’ from the lower end. Outside of the ‘usual’ metrics, a financial services organisation was particularly keen on measuring the outcomes of major incident review action points. They adopt a softer people focus by gauging feedback from the problem process stakeholders; individuals in the team are also measured on their proactiveness and the professional presentation of reports, briefings etc. One measurable objective that has been prescribed for the whole team is getting involved in a business service improvement project outside their own area.
A utility company has abandoned the traditional ‘number of root causes found’ in favour of the ‘number of known errors created’. “What’s the point in finding a root cause if you can’t do anything about it?” they argue. Another utility organisation believes in measuring the teams performing the work. The age profile of problems in specific delivery towers is used to highlight good practices and spot improvement opportunities. As long as the urge to use these metrics for finger pointing is avoided, this can be a very powerful tool. The perception of customers is regularly measured, and the department in question relies heavily on repeat incident trend data provided by service level management (SLM) to drive actions. Indeed their reliance on SLM colleagues extends to broader cross-discipline action plans.
One of our financial service providers started like many organisations, measuring overall average resolution time. They further drilled down into this data to measure individual problem co-ordinators and resolver groups. Other early measurements focused on reduction of incidents and the backlog of problems. Increased maturity and capability now see them able to report on the cost of problems to the IT service and in particular how resolutions have contributed to prevention of revenue loss. They admit that there is difficulty in attributing a financial value to brand and reputation damage but they’re not alone in that. For staff, they still measure productivity but primarily to ascertain the value of the training and education they receive. This has worked well, according to their manager who believes that his current well-trained team of three is now more productive than the five he started with because of the ability to target appropriate education. In line with many organisations, specific CSat data is gathered informally in conversation with stakeholders.
One encouraging aspect of this research is that our less mature departments are measuring ‘what they can’ at the outset (e.g. the number of problems raised and average time to close) but with plans to increase the sophistication of KPIs to reflect more meaningful business outcomes as they develop.
Measuring the outcomes of major incident actions is a great high-profile stat but there is a risk that the incident will recur and leave you with the proverbial egg on your face. We particularly liked the idea of getting individuals involved in ‘alien’ service improvement outside their own area – that seems like a great showcase for the analytical skills they have picked up in IT problem management and service improvement. Given the less than specific nature (from problem management’s perspective) of most CSat surveys it makes absolute sense to use feedback dialogues and on-going engagement to measure the output of improvements made by the team.
Refreshingly, it seems that organisations are becoming less fixated on measuring ‘number of root causes found’. Hooray! There’s little point in finding a root cause if you can’t prevent the incident happening again, reduce the impact if it does, or even spot it happening at an earlier stage. A root cause is a bonus; too often, people mistake problem triggers for root causes.
In the early stages of implementing problem management, it’s no disgrace to measure the basics around average time to close or indeed measuring individual delivery towers as you mature. That’s how you learn about the mechanics of your process and how it works. The more customer focused, outcome based metrics are appropriate and vital but they must be balanced with a view of the nuts and bolts of managing the process.
PROBLEM MANAGEMENT - SELLING OURSELVES
With communication and reporting being a critical success factor for problem management, we asked: how do you communicate your success, to whom, and when?
One of our financial services respondents really throws the kitchen sink at this issue with an array of communications that might leave most green with envy. Their own SharePoint and intranet sites are backed up by a problem management newsletter. The newsletter contains the obvious success stories and updates about on-going investigations. Indeed, making these successes public knowledge has contributed to the momentum to set up non-IT problem management. Now that’s a result.
Sadly, at the other end of the scale one respondent just answered the ‘how do we communicate’ question with “badly”. The fledgling nature of the team means there probably isn’t a huge amount to share so it’s not all doom and gloom. Their aim currently is just improved visibility within the organisation and recognition of work carried out. That means they are very opportunistic in choosing what to communicate and how.
Word of mouth and personal recommendation are used by one team. They spread the word by helping with business IS focus groups and IS roadshows. The whole programme is driven by their SLM people but they make certain that they and their story are a centre piece of the sessions.
Being branded effectively and having high-visibility reporting were very important to another organisation. They report on daily, weekly and monthly cycles dependent on the criticality of what they deal with. “At times I’m getting customers asking questions just ten minutes after sending the e-mails to people. They read them and find them useful,” enthused the function manager. Being very open about problem management has helped too. Their own intranet site with a detailed explanation of the priority matrix is a great example of their ‘bare all’ strategy.
Another less developed team told us of the introverted nature of their communication. They do share KPIs with the customer, but in standard reports. However, they try to break the mould by getting involved in high-profile individual events to communicate success.
There is an obvious direct relationship between the longevity/maturity of the team and the communication they do. One inescapable fact is that people will tend to listen more when you’re credible. Problem management needs time to establish that credibility but functions must start early and not hide their light under a bushel.
Internal case studies are perhaps the most powerful yet inexplicably under-utilised weapon in your armoury. A simple case study might include the problem framed in business language (not techno-babble), a summary of what you did (emphasise team work strongly), the benefits that were realised and a couple of well positioned quotes from people with influence. This can probably be done in two sides of A4 but it’s such a useful educational tool because it’s relevant to your organisation and the people in it.
We like the organisations that are not afraid of getting outside their comfort zone in selling their story. Don’t assume people won’t be interested… they will be, because yours is a rare skillset in any organisation. Combine those skills with a track record of success and you have a winning formula.
We’re also fully supportive of the financial services organisation that believes you should always “tell a story”. Presenting a report as figures or charts sends people to sleep. Tell them a story and they listen, they’re gripped.
THE PROBLEM MANAGEMENT MAGIC WAND
Our final question was the most open. The Utopian dream. With the benefit of a magic wand what would be the ideal way of proving the worth of problem management? Where do you want to take it?
A financial services organisation said they would like to be able to present “management information that tells a story of reduction in risk to the organisation”. They argued that there are hard numbers associated with this. Reduced risk exposure means that the organisation needs to hold less capital in reserve in case ‘something bad happens’. They also want a way to assess their contribution to the whole customer experience and where they add value, whether that’s corporate banking, investments or high street.
A monetary figure seems to be the end game for many. One organisation has set a target of March 2017 to make problem management a part-time function (not process), preferring to empower people with the skills to solve problems themselves. Staying on the money theme, one reply was stark: “That’s easy. We just want to show it in pounds, shillings and pence. After all, money talks.”
Elsewhere in the financial sector it was all about having a robust mechanism to measure the financial consequences of damage to brand and reputation. Integrating reduction of risk into proof of worth was a theme again. Their ‘dream’ was to move away from the pure ITIL definition of problem management. They’ve started by working on the demarcation between incident and problem.
Going right back to perceptions, one organisation was almost apologetic in saying they simply wanted to show that problem management is not just a ‘nice to have’ function. They would be happy to demonstrate the reduction in incident volume leading to more effective use of resource. “Showing where we save resource gives us the ability to acquire the resources to analyse more problems,” they argued. A kind of self-fulfilling prophecy.
As problem management we must display our worth in a number of ways. Firstly, we need something that resonates with the organisation at the highest level to secure our funding… monetary saving seems ridiculously obvious but often this is very hard to achieve and takes time to realise. Verifiable reduction in risk might be an easier one to measure (you can do it qualitatively after all) and it still shows up immediately as a lower number. Secondly, we need a measure to appeal to our customers. Measuring improvements against a customer experience framework is ideal - this is likely to be subjective but not exclusively so. Finally, the stakeholders in the problem/improvement process need a measure that appeals to them. How does problem management make your job easier?
Something as simple as the name of the team has massive effect on perception. There is no getting away from the fact that the word ‘problem’ is negative. So ‘problem management’ ends up as synonymous with dumping ground. Changing the name to something more service improvement focussed reflects what you DO, not the tools you use, i.e. the problem management process.
Try to move away from proving your worth in terms of the uber-traditional “number of root causes found”. Focus on reducing the chances of incidents recurring, on reducing the impact if they do, on spotting incidents earlier. These are much more universally understood KPIs. You’ll find that root causes will follow anyway, sure as night follows day.
In measuring our function we shouldn’t be afraid to change our KPIs as maturity and credibility increase. Don’t be too ambitious when you start; proving your worth in monetary terms might well take focus away from actually improving service. An initial focus on measuring risk reduction could be the starting point you’re looking for.
Finally, communication from your ‘problem’ function should tell an engaging story and not simply be a long string of tables, figures and charts that will frankly send your customers and other stakeholders into a stupor. The service improvement case study is a powerful weapon that will certainly help the problem management function to have its achievements recognised and perhaps - just maybe - be utilised as a wider business function.
The author would like to acknowledge the help in preparing this article of individuals at the following organisations: Centrica, HSBC, Visa Europe, Northumbrian Water, Oxfam, Holistic Service Management, Global Knowledge.