People tend to think the prime purpose of organisational performance management is learning from the past to improve the future. While this is certainly helpful, it is by no means the whole answer. Often overlooked is the other important purpose, namely exploring the future to deliver better outcomes.
Our recent paper (see 12-11-12 MBE Control Tower Paper v2.5 Final) uses the analogy of air traffic controllers managing the flow of planes coming in to land to illustrate how hospitals around the world can optimise appointments for outpatients coming in for treatment by adopting what we call the Control Tower approach.
It also explains how this approach has been derived from generic principles underpinning the optimisation of complex service delivery performance, notably including Lean Thinking and Connected Performance.
For ease of reference, papers presented by Landmark Consulting at previous PMA Conferences and Symposia are downloadable as follows:
(1) PMA Conference 2004 (Edinburgh): Getting the Most Out of Performance Measurement (see Paper for PMA 2004 (Final))
(2) PMA Conference 2006 (London): ‘Plumbed-In Performance Improvement’: Accelerating Improvement and Adaptation in Organisations (see Paper for PMA 2006 – Alan Meekings – v1.1)
(3) PMA Symposium 2010 (Loch Lomond): Goal-Directed Behaviour and Target-Setting: A New Way Forward (see 10-09-28 PMA Symposium Paper v2.6)
(4) PMA Conference 2012 (Cambridge): Connected Performance: A New Approach to Managing and Improving Organisational Performance (see PMA 2012 Paper (Final Version))
Yet little is known about where contact centre managers currently see themselves in the important field of performance measurement and management (PM&M) on a spectrum from poor to outstanding.
Nor is much known about future aspirations or intended timescales for improvement.
So, Simon Povey and I (at Landmark Consulting) and Paul Weald (at ProtoCall One) worked together to design and analyse a specifically tailored survey, based around Landmark Consulting’s PM&M maturity model for contact centres.
A brief summary of our findings is at PM&M Brief Survey Findings.
Details of our maturity model and the survey questions are at PM&M Self-Assessment Questionnaire.
Recently Tina, a new member of the LinkedIn Balanced Scorecard group, enquired if anyone would be willing to share their experience around “how to implement KPIs from Lean processes into BSC successfully?”
This sparked an interesting debate (see http://linkd.in/e0xEQR).
As last weekend was a holiday, I asked for extra time to share my thoughts on how to link operational indicators to a top-level scorecard, which are as follows . . .
Because my approach differs significantly from a traditional Kaplan & Norton (K&N) Balanced Scorecard (BSC) approach, both in terms of underlying philosophy and supporting methodology, I need to start by saying something about my background, and hence why I see things differently.
I became a management consultant in 1988 – an unexpected event, given that my views on management consultants at that time, before I became one, are best not printed.
However, through serendipity, I stumbled across and then joined (as the sixth employee in Europe), a very different sort of management consultancy that had no interest in writing reports and, instead, wanted to work with clients, shoulder-to-shoulder, Monday-to-Friday, to seize opportunities and resolve issues in double-quick time in such a way that clients were also left with a new capability to continue to develop and improve in future.
By 1995, this consultancy had been acquired by Cap Gemini, been re-named as Gemini Consulting and was rated as the leading change management consultancy globally.
Following my recruitment in 1988, I was immediately immersed in designing and delivering some of the most significant and successful transformational change programmes ever seen in the UK.
What was unusual, back then, was that each of these projects was endorsed both by the relevant Chief Executive and their Board. Therefore, we were able to work with our clients on a genuinely systemic basis, without tripping over internal organisational boundaries or sensitive issues formerly deemed to be ‘off-limits’.
This meant we could help our clients paint on a huge (systemic) canvas, rather than on a small piece of paper.
What quickly became clear to me was that the field of performance measurement and management was actually far more important than I’d previously appreciated as a senior manager and former director myself (then aged 42).
Indeed, I often quote Charles Rossotti, Commissioner of Internal Revenue in the US from 1997 to 2002, who wrote in his book Many Unhappy Returns: One Man’s Quest to Turn Around the Most Unpopular Organization in America, “The power of performance measures is vastly underestimated. They have an enormous capacity to change an organization – for better or for worse.”
What I learned, extremely quickly, was that:
(1) Performance measures, and the way they’re used in practice, often act as an unseen ‘glass ceiling’ constraining organisational performance;
(2) This glass ceiling needs to be removed in order to liberate the full potential of organisations; and
(3) What needs to be put in place instead is a more systemic approach that gives future development and improvement full rein.
All the consulting work we did back in the late 80s and early 90s (and, indeed, since then) had two components: one, improving the way the work gets done; and, the other, improving the way the organisation is managed.
However, it was not until early 1992 that I came to see the importance of managing the flow of information and decision-making in both these two dimensions (i.e. how the work gets done and how the organisation is managed) and the power of linking these two perspectives together.
My own personal ‘Eureka moment’ came while I was working at a manufacturing factory in Scotland facing imminent closure if it didn’t dramatically improve its quality, delivery and profitability within less than three months – so quite a challenge.
Working with our joint project team, we did everything we possibly could to improve how the work was done (from raw material purchasing, through machining, plating and assembly, to packing and shipping), such as: re-designing the shop-floor layout; implementing the Visual Factory; cleaning up and using their MRP software to best effect; and, frankly, everything else that would now be termed Lean Thinking.
Yet, we quickly sensed there was something missing. There was another process that needed to be improved, namely how the organisation was managed.
Back then, there were seven levels of management in this factory, each involved to a greater or lesser extent in reviewing performance and making strategic and operational decisions.
Unfortunately, by the time the top team had reviewed the latest weekly figures (and, so on, down the line through seven organisational levels), the next set of weekly figures had been produced. Hence, there was no connectivity up, down or across the organisation.
Essentially, this led to a situation where: (a) there was no clarity around what decisions were expected to be made, by whom, at what level, and why; (b) decisions made at one level were subsequently overruled or ignored; (c) endless time was spent by managers discussing or arguing with each other; and (d) generally speaking, confusion reigned.
Interestingly, this underlying confusion was totally unseen, as everyone in this factory had grown up in this environment, even if they’d worked at other factories or with other organisations previously.
So, in reality, no-one knew any better, and it wasn’t their fault.
When we looked at the flow of management information up, down and across this organisation, it was blindingly obvious that the information itself merited only three levels of performance planning and review, namely: daily, weekly and monthly. In contrast, there were, at the time, seven levels of management in place (albeit this hierarchy existed more for pay grading purposes than for enabling effective performance management).
What we ended up designing, then implementing, was what I would now call a ‘performance architecture’.
This performance architecture answered the key question, “Who needs to come together, to look at what information, why, when, where and how, and how will levels of performance planning link up, down and across the organisation?”
Importantly, this performance architecture necessitated no changes to the formal organisation structure. We simply agreed who needed to come together to look at what, why, when, where and how.
I submitted a paper to the PMA Conference in Edinburgh in 2004 on this issue (see Paper for PMA 2004 (Final)).
Obviously, this answer raises a further question, “OK, Alan, assuming I was convinced by your logic so far, how would I set about developing a performance architecture in my own organisation?”
Here are some key pointers for you to consider, Tina.
To design a performance architecture, tailored to your specific organisation, it’s worth considering the following questions:
(1) Do we have a systemic set of goals at top level, informed both by our strategy and our operating and economic model?
(2) If so, do we understand the key drivers of performance, in the context of these systemic goals, and the key levers that can be pulled to influence performance?
(3) Given that we understand both our key drivers and levers, what metrics do we need to track performance against these key drivers and levers (most of which, hopefully, will already be in place)?
(4) What information is needed to (a) monitor progress; (b) enable managers at all levels to explore their own performance data (either on their own or with the assistance of data analysts), with a view to identifying actionable insights; and (c) show the impact of decisions taken?
(5) How are necessary levels of performance planning going to inter-connect, up, down and across our organisation?
In terms of detailed design, you’ll need to unpick my earlier key question, “Who needs to come together, to look at what information, why, when, where and how, and how will levels of performance planning link up, down and across the organisation?” in the context of your own circumstances. Typical subsidiary questions include:
(1) Who – should performance planning be a one-person activity at any particular level or do other people need to participate; if so, how far, cross-functionally, does participation need to stretch; will this be a good use of specific individuals’ time; etc?
(2) Comes together – how does ‘coming together’ best happen, particularly if people are working shift patterns; are there alternative ways of working, perhaps virtually; etc?
(3) To look at what – what information is needed to make what decisions; how is this information best presented; etc?
(4) Why – who is expected to make what decisions, at what levels in the organisation; do they have the necessary decision-making rights; etc?
(5) When – how frequently is performance planning expected to happen at each performance planning level; what performance indicators are people expected to look at, and when; etc?
(6) Where – where are people actually going to meet; if virtually, how will this be enabled; etc?
(7) How – how can performance planning best be enabled in practice; what coaching is needed to help people understand how to get the best from performance planning; how are issues going to be referred up, down and across the organisation; etc?
You’ll immediately spot that being able to answer these questions is essential to being able to design an optimal performance architecture tailored to your particular circumstance.
Please be assured this doesn’t have to be a long, labour-intensive process.
Indeed, if you already have a top-level BSC in place, then it’s relatively simple to connect this top-level framework downwards, using the approach I’ve described above, not least because you should be starting with a clear understanding both of your explicit strategy and your underlying operating and economic model.
The only other thing I need to emphasise is that performance planning should not happen downwards; it should happen upwards.
To illustrate this point, imagine two scenarios:
(a) An executive team comes together to review and plan performance in a situation where they get the latest data first and meet before everyone else. Obviously, they will look at this data and doubtless spot a number of issues or actionable insights. Unfortunately, there will then be little else they can do other than tell people lower down the hierarchy what they think needs to be done or what explanations they require to be submitted upwards.
(b) An executive team comes together to review and plan performance in a situation where all the relevant data has already been reviewed at each contributory level earlier. This executive team will then either have immediate access to answers to almost all of their emerging questions (given that others will probably already have spotted these issues earlier, as perceptivity is not a talent uniquely gifted to senior managers), or will immediately grasp what they themselves need to do to address the issues that cannot sensibly be addressed at other levels in the organisation. Please note I never use the term ‘lower levels in the organisation’, because each level should add value differentially and distinctively.
Once people have experienced this sort of ‘bottom-up’ approach to performance exploration and decision-making, there’s no going back. It’s a game-changing approach to improving the way organisations are managed.
Obviously there’s not a lot you can do, Tina, unless you can secure permission to work at an overall, systemic level, or you can find someone who’s willing to act as an advocate and coach at director level, or you decide to press ahead with implementing these principles within the area you personally manage.
With reference to implementing a performance architecture of this ilk, there are obviously other questions you’ll need to consider, such as: at what level should we start; how will we progress from there; who will coach the roll-out of performance planning; etc?
Do let me know if you’d like to hear more about designing and implementing a performance architecture.
I’d be happy to share my thoughts with you either on this LinkedIn forum or the new PMA website at http://www.performanceportal.org/.
The advocates and critics of target-setting seem unable to agree on any common ground.
On the one hand, there is incontrovertible evidence of the damaging effects of arbitrary numerical target-setting. Yet, on the other hand, there is a significant body of academic evidence supporting the benefits of goal-directed behaviour.
We have developed a paper showing how these fundamental contradictions can be resolved by taking into account seven important points:
(1) Understanding the typology of targets;
(2) Clarifying the terminology used;
(3) Distinguishing between differing uses of measures;
(4) Adopting a systemic perspective;
(5) Acknowledging the unknown and unknowable;
(6) Charting performance; and
(7) Differentiating managerial time spans of attention and added value from front line to boardroom.
By adopting the approach we propose, it is possible to secure all the benefits of goal-directed behaviour with none of the problems typically associated with target-setting.
An updated version of our draft target-setting paper (as published by Measuring Business Excellence, Vol. 15, No. 3, 2011) is now available at PMA Symposium Target-Setting Paper v1.4.