Bye VFM, Hello BYP

Having seen a range of improvement and scrutiny initiatives applied to public sector money decisions, it is still amazing that some discussions of Value for Money seem to avoid the time dimension. Given that we monitor the passing of time so closely on a daily basis (with time pieces worn on the body as watches) and mark its passage with pomp and circumstance each new year, it often seems less tangibly applied in value for money debates.

After all, the value achieved from any expenditure needs to be assessed over the relevant time period. Sure, some of the better business cases go as far as “net present value” calculations, which standardise costs over explicit time periods. But equally there are VFM discussions which seem to overlook the time dimension entirely. So I propose an alternative: the Value over Time for Money test. No longer VFM but VTM.


This starts to touch on issues of the “lifetime” or “lifecycle” cost of a “purchase” decision. Perhaps we’re getting closer to thinking this way: cars, for example, now come with 5 or even 7 year warranties, and we can see the appeal of that, if more intuitively than rationally. So it is possible.

I expect this also has something to do with it being easier to measure and understand the “money” dimension in the calculation, and less easy to see and quantify the broader value dimensions, which might be more qualitative or even intangible. Which probably still means a tendency towards decisions that minimise cost rather than maximise value. Add into the mix that thinking about the time element is forward looking, and hence needs to be a bit predictive, and things can struggle a little. So it is quite easy to focus on the tangible money aspect.

So all in all, a strong emphasis on money, before value and time. With business cases selling benefits, one would think that value (V) over time (T) would be the prime considerations, then tested against cost (M)….VTM.

Ironically the net present value calculation is actually about the value of the money over time, rather than the value delivered. Perhaps we need to strive for a more telling calculation in any business case…

Value per time per cost or more practically……Benefit per Year per Pound.
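As a quick illustration of what that test might look like in practice, here is a minimal sketch. All of the option names and figures are hypothetical, purely to show the shape of the calculation:

```python
# A minimal sketch of "Benefit per Year per Pound". All figures here are
# hypothetical, purely to illustrate the shape of the calculation.

def benefit_per_year_per_pound(total_benefit, years, cost):
    """Annual benefit delivered for each pound of cost."""
    return (total_benefit / years) / cost

# Option B is the cheaper purchase, but Option A delivers more value
# per pound for each year of its life.
option_a = benefit_per_year_per_pound(total_benefit=500_000, years=5, cost=200_000)
option_b = benefit_per_year_per_pound(total_benefit=150_000, years=3, cost=120_000)

print(f"Option A: {option_a:.3f} benefit per year per pound")  # 0.500
print(f"Option B: {option_b:.3f} benefit per year per pound")  # 0.417
```

A cost-minimising comparison would simply pick the cheaper Option B; the VTM framing favours Option A.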

Age of Analysis

As someone who has been both a custodian and user of national public data – both gamekeeper and poacher – the unfolding story of public data in 2010 has been momentous.

At the end of the year data.gov.uk references over 5,400 data sets, not at all bad in the year since its formal launch on 21st January. Then there was the “Show us a better way” (i.e. use the data) campaign, rewarding applications for local recycling, cycle paths, school catchment areas and postbox locations. So lots about using the data, and loads of super apps built in quick time. But the really strong emphasis to emerge has been around data visualisation.


Perhaps the more subtle barometer is the Guardian Data Blog, and the extent to which the headline visuals for each story have become data visualisations. There are now 800 posts over the last two years. Of the last page of posts to the end of December 2010, nearly 50% of the visuals (7 of 15) were data visualisations. For the last 15 posts of 2009 this was just over 25% (4 posts), and for the first full page of 15 posts at the start of 2009 it was less than 7% (1 post).

So it’s certainly been the year of the visualisation. Perhaps epitomised by the BBC Joy of Stats documentary in December, broadly constructed around the dynamic visualisation of Prof. Hans Rosling, showing the global health and wealth trends of 200 countries over 200 years, using 120,000 numbers, in four minutes. Plus of course all the design visualisations which have become more mainstream, especially those commissioned or produced by the Guardian, extended further by the Flickr fan club with 900 members and nearly 500 publicly produced visualisations to date.

This is all great progress, but not of course an end in itself. The real point is to create better understanding from which better decisions can be made, whether for personal benefit, for the public purse, for probity or for national policy. In the age of austerity decisions are tougher, but they can be made more easily and more confidently with the insight that comes from the right analysis and interpretation of the data.

So for me perhaps the most significant step in the mainstream evolution of our civilisation’s relationship with its data is the publication by the UK Statistics Authority in July of “The value of statistical commentary”. In short – it is actually only a two page statement – it places clear emphasis on the messages that come from the analysis of the data, and holds that statistical commentary is important enough to be public property, just like the data. So for me that’s one step closer to the messages, which are the real treasure in any analysis of any data.

So the real insight and understanding come from a blending of the analysis and visualisation. There's been a great data and visualisation effort, with more to come on the analysis and insight activities. So here's my "effort index" on how I see the relative effort to date.


So if 2010 saw the full Dawn of the Data Designer, then 2011 may well start the Age of the Analyst.

Comprehensive Spending Perplexity

In digging into the Comprehensive Spending Review 2010 - which lays out the government's spending plans over the next five years - I found various categories of spending presented, but it was not clear how these related to each other. So to start to make sense of the data I put together my own profile of the definitions and their corresponding amounts, to have as a necessary reference source.

Slow Roast Data

It’s struck me that there is a range of vocabulary which seems to apply interchangeably to both cooking and data environments.

Perhaps the most longstanding term is “slice and dice”, which for data refers to the process of taking a data set and selecting and analysing smaller sections of it. Even Microsoft uses this language to describe pivot tables in Excel.

There are some real parallels. The data are the ingredients for analysis. The software supplies the tools and utensils. It’s then the skill and experience of the chef (analyst) which combines the ingredients in the right proportions, in the right way, at the right time, and then presents the result delightfully. Perhaps we should think of the analyst as the data chef.

There are also some really close parallels between the cooking and analytical processes. In the kitchen there’s the preparation time, the cooking time and then the plating up. In the data world there’s definitely the preparation time: making sure the data is sufficiently usable, that the ingredients are fit and appropriate to use. Then there’s the analysis, which is the cooking equivalent (but not quite cooking the books). Then finally there’s the plating up, which is all about presenting the final product.

It’s interesting how the analogy might be extended further. I quite like the idea of slow roasting some data: letting that long low heat tease out the subtle nuances that emerge when given enough time, and which would otherwise be entirely lost.

In the data world we are increasingly combining or linking different data sets together. We already have the “mash up”. So here we can evolve more refined approaches. How about blending – a gentle combining – through to whisking – purposefully filling with air.

Precisely Wrong?

Ahh the lure of decimal places. This is the fallacy of precision. Express something in a precise way – make it look very exact – and somehow it seems so much more convincing.

Precisely accurate data can be quite elusive outside the scientific lab. Accuracy and precision are two important – and different – dimensions to at least get a feel for in any analysis; in short, a sort of proxy for statistical confidence. It doesn’t help that these terms are often used interchangeably, even to some extent in their dictionary definitions of each other.

So accuracy is about “faithful measurement or representation of the truth”. In a class of young children (with actual ages between 5 and 7 years) an accurate measure of the average age might be 6 years old. An inaccurate measure would be 9 years old, not faithfully representing the truth.

Precision is about the “exactness of measurement”. In that class of children, a precise average of their ages would be 6.25 years.

So accurate and precise 6.25 years…precisely right

Accurate and imprecise 6 years…generally right

Inaccurate and imprecise 9 years…generally wrong

Inaccurate and precise 9.75 years…precisely wrong
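Those four outcomes can be sketched in code. This is only a toy: it assumes the true class average of 6.25 years, treats anything within a year of that as "accurate", and (purely for illustration) treats any figure quoted beyond whole years as "precise":

```python
# A toy classifier for the four accuracy/precision outcomes from the
# class-age example. The tolerance and the "precise" rule are illustrative.

TRUE_AVERAGE = 6.25  # the actual average age of the class

def describe(estimate, accuracy_tolerance=1.0):
    accurate = abs(estimate - TRUE_AVERAGE) <= accuracy_tolerance
    precise = estimate != round(estimate)  # quoted beyond whole years
    labels = {
        (True, True): "precisely right",
        (True, False): "generally right",
        (False, False): "generally wrong",
        (False, True): "precisely wrong",
    }
    return labels[(accurate, precise)]

for value in (6.25, 6, 9, 9.75):
    print(value, "->", describe(value))
```

Running this reproduces the four labels above, one for each of the example averages.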

One of my favourite examples can be found in a footnote in the formal UK Economic Accounts. “Estimates are given to the nearest million but cannot be regarded as accurate to that degree”. (Quarter 2 2010 Edition 71. Table A1. Note 1). So those final units of millions are in fact of no value at all.

Here’s an alternative overview of accuracy and precision, in a dartboard sort of way, aiming for the bulls eye.


Accurate…… On target. Correct. Close to what you’re aiming for or trying to measure.

Precise…. Tightly clustered. We can consider this as having a small deviation.

So ideally we want something both accurate and precise. This gives us four simple outcomes from an analysis.

Precise and accurate… Exactly right.

Precise and inaccurate… Exactly wrong.

Imprecise and accurate… Generally right

Imprecise and inaccurate… Generally wrong.

Worth recognising that in the world of laboratory science there are rules about what happens to the precision of measurements when different data are combined. In the simplest of terms, when we add or subtract data we add their respective absolute errors, and when we multiply or divide data we add their relative (percentage) errors. There’s also something called significance arithmetic, which tries to simplify this sort of thinking so it can be applied to simple calculations.
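A rough sketch of those combination rules, carrying each measurement as a (value, absolute error) pair. In the standard formulation, absolute errors add under addition and subtraction, and relative errors add under multiplication and division; the measurements below are made up:

```python
# A simplified illustration of error propagation. Each quantity is a
# (value, absolute_error) pair. Absolute errors add under addition;
# relative errors add under multiplication.

def add(a, b):
    (va, ea), (vb, eb) = a, b
    return (va + vb, ea + eb)

def multiply(a, b):
    (va, ea), (vb, eb) = a, b
    value = va * vb
    relative = ea / abs(va) + eb / abs(vb)
    return (value, abs(value) * relative)

length = (10.0, 0.1)  # 10.0 plus or minus 0.1
width = (4.0, 0.2)    # 4.0 plus or minus 0.2

print(add(length, width))       # sum: about 14.0 plus or minus 0.3
print(multiply(length, width))  # area: about 40.0 plus or minus 2.4
```

Note how the area’s relative error (1% + 5% = 6%) is dominated by the less precise measurement.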

This all provides a healthy reminder to think about the levels of accuracy and precision when looking at data. It’s worth pausing to consider whether the data is sufficiently precise and accurate for the job in hand. And certainly not rushing to decisions based on data whose accuracy or precision is insufficient or unknown, especially where it might be generally wrong or even exactly wrong.

There can be some interesting debate about whether precision or accuracy takes priority, but in practice these need to be considered together. Intuitively something which is generally right (accurate but imprecise) seems preferable to something exactly wrong (precise but inaccurate). In shooting, a high degree of precision or clustering is desired. This is about consistency: if the cluster is slightly off target, this can be compensated for by adjusting the sights to make it more accurate. The same applies in manufacturing, where consistency is important and variation is minimised. A whole different debate opens up here about Statistical Process Control, which measures variation in processes to help understand and reduce the different sorts of variation.

So it’s worth having a conscious prod at the accuracy and real precision of the data, rather than taking these at face value.

Analytical Insight Index

This is my propensity for insight measure. Having worked with data, analysis, strategy and performance, I’ve identified six critical factors that need to work together to get the real value from analysis.

These are the six factors which collectively determine the likelihood of getting insight from an analysis. These are:

1. Data

a. Relevance – is it the right data?

b. Quantity – is there enough of it?

c. Quality – is it good enough?

2. Tools

a. Have any?

b. The right ones?

3. Skills

a. Have any?

b. The right ones?

4. Capacity

a. How much?

b. Realistic?

5. Question

a. Have a specific question to answer, or more generally an issue or issues to address?

6. Inclination

a. Have the desire and drive to want to address the issue – personal or corporate.

Like all such indices, this is not a science, nor is it an art; it is more a framework for thinking about where best to target effort, in order to increase the chances of valuable insight.

There are three ways to apply this: firstly as a simple checklist; secondly as an index in which the six factors are scored; and thirdly as a more sensitive algorithm, with a weighting for each of those six factors.

A. Checklist

In order to achieve any insight at all, each of these factors needs to have some effort directed toward it, however small or implicit. Equally, in order to maximise insight, each of these factors needs to be (a) maximised while (b) being balanced with the others.

The simplest way to use this is as a checklist. The key thing here is to check that there is some effort on each of these dimensions, and that the effort is sensibly balanced across them. If there is zero effort on any one of these then the analysis can be doomed from the start. After all, there’s no point buying tools if there’s no capacity to use them, or having loads of great data but not the expert skills necessary to undertake an analysis using the tools. You can line up all of the data, tools, skills and capacity, but without any questions or inclination the exercise can lack purpose and drive, and fall victim to the ‘So what?’ syndrome.

B. Analytical Insight index

This next alternative is simple enough: score 0, 1 or 2 for each of the six factors (see below), then multiply the six scores together. There you have it. The scoring is as follows:

0 – no effort

1 – some effort (or basic requirement)

2 – maximum effort (or enhancement)

So Insight Index = data x skills x tools x capacity x question x inclination. Minimum product of zero (0x0x0x0x0x0) and maximum of 64 (2x2x2x2x2x2=64).

Now here’s the key point….if any of these six numbers is zero, the product of multiplying them together is zero. Which basically means that if some effort (or even maximum effort) is put into five of these dimensions, that effort counts for nothing if the sixth factor is zero (no effort). In the simplest of terms, everything might be in place except the data, so there is no chance of an analysis, let alone insight.

If each of the six factors is scored 1, the product is 1. If each is scored 2, the product is 64, the maximum value for this analytical insight index.

The more one looks at this, the clearer it becomes that it is quite difficult to get a reasonable value. In fact, of the 729 possible combinations of 0, 1 and 2 across the six factors, the most frequent outcome by a big margin is zero, occurring just over 90% of the time. These 729 combinations only produce a small number of products: 0, 1, 2, 4, 8, 16, 32, 64.

This is purposefully designed around integers: the way a single zero can annul all other effort, a 1 as a basic requirement, and a 2 as an enhancement.

There’s an even simpler approach which is simply adding the six scores together.
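Those claims are easy to verify by brute force. Here's a short sketch that enumerates all 729 combinations of scores and tallies the products:

```python
# Enumerate all 3^6 = 729 combinations of scores {0, 1, 2} across the six
# factors, multiply each combination, and count how often each product occurs.

from itertools import product
from collections import Counter

def insight_index(scores):
    result = 1
    for s in scores:
        result *= s
    return result

outcomes = Counter(insight_index(s) for s in product((0, 1, 2), repeat=6))

total = sum(outcomes.values())    # 729 combinations in all
zero_share = outcomes[0] / total  # share that collapse to zero
print(f"{outcomes[0]} of {total} combinations ({zero_share:.1%}) score zero")
print("Possible products:", sorted(outcomes))
```

Only 64 of the 729 combinations (those with no zero anywhere) score above zero, which is where the "just over 90%" figure comes from.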

C. Analytical Insight Index – weighty version.

So here’s where this can get a bit more sensitive to specific circumstances, and can be used to reflect both the strengths and barriers for each of these six factors. This is about weighting each of the factors to some degree.

So based on:

Insight Index = Data x Skills x Tools x Capacity x Question x Inclination

This can be abbreviated to the six factors

II = D x S x T x C x Q x I

And each of these factors has its own weighting.

II = (d)D x (s)S x (t)T x (c)C x (q)Q x (i)I

The basic idea is to give higher relative weighting to those areas which are likely to be the biggest barriers, and hence more important areas into which to put effort.

As a starter, here’s my general take on a set of weights, based on where I’ve seen the challenges over 20 years of analysis projects. I have a favourite cooking analogy to help with this too.

1. Data. Weight = 2

There’s typically no shortage of data, and applying some tests of relevance should still provide something reasonable and relevant.

In the cooking analogy, these are the ingredients. Might be expensive, fresh, free range organic through to those ingredients past their use by date.

2. Tools. Weight = 3

There are some basic spreadsheet-type tools available to all, and the more expert data crunching tools can often be found in corporate environments. It is important to have the right tools for the job in hand.

In my cooking analogy these are the implements, and there is nowhere like the kitchen for the widest range of often specialist tools. Often a few tools are the most flexible ones. And some tools are just not fit for some purposes: it is not physically possible to stir the contents of a saucepan with its lid. Then of course there are those whizzy tools that only get used once, because they are so specialised or just more trouble than they are worth.

3. Skills. Weight = 4

Data and tools are not enough to be able to extract insight. There’s a great temptation to jump straight in, which can be a bit like jumping into a swimming pool of data, with armbands, and then flailing around. There need to be skills which provide (a) the approach to analysis, and (b) effective communication of the messages. These are quite different from simply being able to use some tools.

In an analogy along culinary lines, if the data and tools (and capacity – see below) are the ingredients and equipment, then it’s the skills and experience of the chef which creates a potential culinary masterpiece.

4. Capacity. Weight = 5

Probably the magic ingredient. The fairy dust to sprinkle on the other factors. Capacity should probably be seen in the context of the money spent on data, tools and training.

In the culinary analogy this is all about the preparation, cooking and presentation on the plate. To extend this analogy further, data needs a minimum level of analysis to be robust, the way chicken needs a minimum level of cooking, anything less and there can be significant consequences.

5. Question. Weight = 3

There’s not usually a shortage of questions, and there’ll probably be some relatively clear high priority ones to provide an early focus.

In cooking terms this is about the basic need to consume food on a regular basis to provide the energy to function. Quite functional really; this is probably a sandwich for lunch at the desk.

6. Inclination. Weight = 3

There’s a mix of personal and corporate here. In corporate terms, unless there’s an Executive Team in denial, there’s going to be a desire to be effectively informed by the data which is relevant and important to the corporate purpose. Great, of course, if this directly supports a specific strategy or business plan objective.

In the cooking analogy this is about the degree of appetite, or even hunger for that insight which informs improvement. This is maybe even about trying new foods, recipes, experimenting. This is about that food being part of a broader experience, a great meal shared with friends, that food being a means to a broader end.
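Pulling those starter weights together, the weighted index might be sketched as below. The example scores are hypothetical, chosen to show how a single zero still annuls everything:

```python
# A sketch of the weighted Analytical Insight Index, using the starter
# weights suggested above (Data 2, Tools 3, Skills 4, Capacity 5,
# Question 3, Inclination 3). The example scores are hypothetical.

WEIGHTS = {"data": 2, "tools": 3, "skills": 4,
           "capacity": 5, "question": 3, "inclination": 3}

def weighted_insight_index(scores):
    """Multiply each 0/1/2 score by its factor weight, then multiply together."""
    result = 1
    for factor, weight in WEIGHTS.items():
        result *= weight * scores[factor]
    return result

# Everything in place except capacity: the whole product collapses to zero.
scores = {"data": 2, "tools": 1, "skills": 1,
          "capacity": 0, "question": 2, "inclination": 1}
print(weighted_insight_index(scores))  # 0 - no capacity, no insight

scores["capacity"] = 1
print(weighted_insight_index(scores))  # now a healthy non-zero index
```

As with the unweighted version, the multiplicative design means the weights shift the relative payoff of effort, but a zero anywhere still wipes out the whole index.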

Inflation Choc Index

There has been some recent high level debate about the various measures of inflation. This has included some to-ing and fro-ing between the UK Statistics Authority, the Office for National Statistics and the Royal Statistical Society, culminating in some significant reports and changes.

It's critical and heady stuff. These measures are used for various pensions and benefits calculations for example, and changes are being made. Which variant gets used and for which purpose has an effect on the public, personal and corporate purse. Small changes, big impacts.

There's the Consumer Prices Index (CPI), the Retail Prices Index (RPI), subsidiary indices of the RPI such as RPIX (the RPI excluding mortgage interest payments) and RPIY (RPIX excluding indirect tax changes), and of the CPI, the CPIY (the CPI excluding indirect tax changes). All clear now.

I'd like to add another measure into the mix: the Christmas Chocolate Coin Pound Pence Index, or the CCCPPI. This is the face value of the chocolate coins in those little string bags available around Christmas time. It shows substantial inflation this year. This year's bag contains the usual coins - 8 to be precise - but wait, there are also four notes with a total face value of 130!

That's serious inflation: last year we were measuring in coins, and this year in notes. Well above the latest 3.3% figure from the Bank of England (that's the CPI of course).

Not forgetting the subsidiary measures to the CCCPPI. These are the Homogeneous Ontological, the Heteroscedastic Orthogonal and the Hypothetical Ordinal. So that's the CCCPPI HO-HO-HO.

Joy of Stats

I expect that back in January 2009, when the chief economist at Google (Hal Varian) said that “the sexy job in the next ten years will be statistician”, it may have come as no surprise that nearly two years on the BBC screened this hour-long documentary, “Joy of Stats” (BBC Four, 7 Dec 2010).

Well maybe this is less documentary, and more drama, given the programme description…. takes viewers on a rollercoaster ride….presented by superstar boffin…mind expanding… Using data... so that we can take control of our lives, hold our rulers to account and see the world as it really is. Phew.

Here the purpose of data takes centre stage for civilisation… “essential to monitor governments and societies”… so we end up with “citizens more powerful and authorities more accountable”. But it goes even further, toward a sense of the human being….”make sense of the world…providing a greater understanding of life on earth”. At the same time there’s a data deluge, such that “the data we now have is unimaginably vast”, but of course “data doesn’t tell you anything, you have to analyse it and even make the data sing.” And perhaps the key message is to do this in an attractive and interesting way, in order to engage people effectively.

So this is the showcase for the kind of data visualisation that does just that, specifically that of the presenter Prof. Hans Rosling, from the discipline of global health. Inviting a fact-based view of the world, here he tells the visual story of the world in 200 countries over 200 years, using 120,000 numbers, in four minutes. This is big stuff. Big issues, big graphics, a genuinely global perspective, and of course engaging. Powerful enough to have been viewed over 1.5 million times on YouTube in its first two weeks.

Here the power of the data is used to see five dimensions of analysis at the same time: mortality rate, income, country location, country population size and time. Of course this is about using data to provide information for positive change. For Hans Rosling this is especially about helping us all see a global picture which has changed substantively, and overcoming some of the stereotyping, especially around the outdated labels of the developing or western world.

Some of the historical examples used really exemplify this. There is the Swedish example of the monitoring of births and deaths in the 1800s, revealing (a) that the actual population of the country was 2 million and not 20 million, and (b) that there were high levels of infant mortality, which led to a sustained national drive to reduce this. The other great example is Florence Nightingale, whose “polar area graph” of 1855 showed the extent of preventable hospital deaths from the Crimean War, and led to a new and improved era of hospital hygiene.

And to bring this bang up to date... the Billion Dollar-o-Gram by designer David McCandless, showing examples of the billions of global spending in relation to each other.

This programme also attempts to fit the visualisation work within the context of the more usual statistical material. So something around averages – including the example that the average number of legs per person is 1.999 – and then recognising the variation, described as turning numbers into shapes. This is where the visualisations start to make a mark. And then visualisations as a link to correlation, to help see how data measures vary together.

So where does this take us…..

Perhaps a key test here for visualisations is what I’ll call the “Pretty what” scale, which runs from “pretty nice" through to “pretty useful”. It’s the dual meaning of pretty that works here. Pretty can have the visual meaning - appealing, attractive, beautiful – as well as the functional – considerable, rather, somewhat. So we can think of this scale running from ”nice to look at, but so what?” through to “wow, what are we going to do to improve that?”

This is about knowing that visualisations are a means to an end, rather than an end in themselves. As an end in themselves they may happily qualify for “pretty nice”, and that might be just fine. I would expect this wave of visualisations might even start to move on, from pieces of design to an art form in their own right. The language is evolving too: this is not about graphs, this is graphics, images and animation. But of course the real output is the message. And from that message the ultimate outcome is something changing for the better, which is Hans Rosling's motivation to improve global health by enabling us to understand the issues more clearly. Visualisation is a really valuable tool in the broader analysis process.

On that note, as we’re approaching a new wave of data design, it’s worth doffing a cap to the early explorers who led the first wave in the 1980s. Those such as Edward Tufte and John Tukey, who led some of the simple visualisation thinking, keeping those visualisations well grounded in the real and raw data. Those books by Tufte are now in the coffee table league, visually pleasing browsing. And how about “Visual and Statistical Thinking: Displays of Evidence for Making Decisions”, an early contender for the “pretty useful” visualisation camp.

There’s also something here of a gentle challenge to the more traditional statistical approaches. The example used here is quite light-hearted. It looks at the results of a ‘global health awareness’ test for students, and finds that statistically those students were less likely to get the right answers than random chance, the comparison being the better result (50/50) that would be achieved by monkeys. While this is light-hearted, it is still a challenge to the relative power of the traditional statistical material to inform and convince. So for me another indicator of that shifting balance from the left-brain approach to data - statistical and analytical - to the more right-brain visual and intuitive approach.

In short this is all about telling a story, one message at a time. So here’s another recent TED (Technology, Entertainment, Design) talk from Hans Rosling, along similar lines, this time on child mortality….”The good news of the decade”.


Envisioning Information. Edward R Tufte. 1990.

The visual display of quantitative information. Edward R Tufte. 1983.

Visual and Statistical Thinking: Displays of Evidence for Making Decisions. Edward R Tufte. 1997.

Exploratory Data Analysis. John W. Tukey. 1977.

Fiddle Factor

So what sort of margin of error might we sensibly plan into our data? It might be helpful to take a steer from the Comprehensive Spending Review, which lays out the national spending plans for the five years from 2010/11.

These are serious and big numbers. The total spending for 2010/11 is £697 billion. Digging into the Comprehensive Spending Review recently, I found accounting adjustments, depreciation and reserves (including “special” reserves), all of which are separated out. But looked at collectively, these add up to a not insignificant figure, which in absolute terms is quite striking.


1. Accounting Adjustments.

That total spend of £697bn (which is Total Managed Expenditure in HM Treasury speak) comprises what might be simply described as revenue and capital.

For the revenue part, (Public Sector Current Expenditure) of £637bn, there are accounting adjustments of £14.1bn, which represents 2.2%.

For the capital part (Public Sector Gross Investment) of £59.5bn, the accounting adjustment is minus £7.9bn, or 13.3%.

So when we look at the total spend of £697bn, we are aggregating these two accounting adjustments. Combining £14.1bn and minus £7.9bn we get an accounting adjustment of only £6.2bn, or 0.9% of the £697bn. On the face of it our adjustments have become smaller as we have added them together. That £6.2bn, or 0.9%, belies the underlying bigger adjustments.

We get quite a different picture if we consider the absolute adjustments. So £14.1bn and £7.9bn give an absolute total of £22bn, which makes a more remarkable 3.2% adjustment of the total £697bn spend.

These can be helpfully seen in the context of spending on individual departments. For the revenue spend (Public Sector Current Expenditure of £637bn), that £14.1bn accounting adjustment is more than the settlement for the majority of Government Departments….for example, Intelligence £2.1bn, Transport £6bn, Culture Media and Sport £6.1bn, International Development £6.4bn, Home Office £10.2bn, and indeed the Wales settlement of £13.7bn. And that £14.1bn is still sizeable when compared to the other end of the spectrum, the traditionally big spending departments….Defence £35bn, Education £53bn, Health £98bn and Work and Pensions £159bn.


2. Depreciation.

While this is not specified explicitly, some of it can be deduced from the various totals. It’s £16.1bn. (This comes from a category of resource spending - Departmental Expenditure Limits - reported both with depreciation, £342.7bn, and without, £326.6bn.)

3. Reserves

For the revenue part of £637bn there is a £2bn reserve and a £3.4bn “special reserve” (which does rather sound like a port), which make a collective £5.4bn, or 0.8% of the total revenue.

For the capital part of £59.5bn, the reserve is £2.1bn and the special reserve £0.7bn, making £2.8bn, or 4.7% of total capital.

So collectively these reserves total £8.2bn, which from a total spend of £697bn represents 1.2%.

In the round

So pulling all this together, there’s the £22bn accounting adjustment, £16.1bn depreciation and £8.2bn reserves, which total £46.3bn. That’s 6.6% of the £697 billion total spend for 2010/11. Even if we use the net (£6.2bn) rather than the absolute (£22bn) figure for the accounting adjustments, this is £30.5bn, or 4.4% of that £697bn annual spend.
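Replaying that "in the round" arithmetic directly from the figures quoted above (all in £bn):

```python
# Re-running the "in the round" arithmetic from the CSR 2010 figures
# quoted above. All values in billions of pounds.

total_spend = 697.0
adjustments_absolute = 14.1 + 7.9  # taking absolute values of both adjustments
adjustments_net = 14.1 - 7.9       # netting the two adjustments off
depreciation = 16.1
reserves = 2.0 + 3.4 + 2.1 + 0.7   # reserves plus "special" reserves

absolute_total = adjustments_absolute + depreciation + reserves
net_total = adjustments_net + depreciation + reserves

print(f"Absolute: £{absolute_total:.1f}bn = {absolute_total / total_spend:.1%}")
print(f"Net:      £{net_total:.1f}bn = {net_total / total_spend:.1%}")
```

This reproduces the £46.3bn (6.6%) and £30.5bn (4.4%) figures above.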

So it is not surprising that when digging into the more detailed millions - rather than billions - of the formal UK Economic Accounts, the story continues. The first table of those 204 pages includes the note….. “Estimates are given to the nearest million but cannot be regarded as accurate to that degree”.

So there you have it: the standard set by the national accounting is around the 5% mark. And in real terms - at over £30bn - that’s a lot of accounting adjustment, reserve and depreciation.


HM Treasury. Comprehensive Spending Review 2010. 20 Oct 2010. Statistical Annex. Tables A.1, A.3 (col.1), A.4 (col.1), A.5 (col.1)

UK Economic Accounts. Quarter 2 2010 Edition 71. Table A1. Note 1.

Data Data Everywhere

“…but let’s just stop and think” was the opening title for the panel debate earlier this month, which I and forty or so others attended at the Royal Society, sponsored by the Royal Statistical Society (as part of its Get Stats statistical literacy campaign) and the British Academy.

The panel session was titled “Speed Data-ing: the effects of the rapid rise of the data society. Is the public’s date with data heading for disaster, or could it be a match made in heaven?”

The opener was from David Hand, the current President of the Royal Statistical Society and Professor of Statistics at Imperial College London. The key messages here were around the fact that some data collection is explicit and some more implicit. The more explicit includes the government collection of data to help understand the needs and wishes of the population. The more implicit includes all that personal and collective online purchasing information that is used to make quite targeted recommendations to us.

Perhaps the key message though was around the impact of joining all of the data up. Well, not in fact all of it – joining just some of it will enable new insights. How long before the life insurance premium is informed by the information collected about personal food purchases…

Opening up and explaining the numbers behind the news was the message from Simon Rogers, the Guardian’s Datablog editor. Acting as the bridge between the data and the expert user is a key role for the Datablog, supporting the mutualisation of data. This has led to a significant flow of visualisations of the emerging data, but also a widening of the scope of what might be helpfully visualised, such as mapping the locations in the latest round of WikiLeaks releases. There have also been some frustrations along the way for the Datablog, including the difficulty of getting consistent high-level data from across government departments. But the Guardian still pull together the most publicly accessible and comprehensive spend profile for government.

The benefits and risks of open data were the theme for David Spiegelhalter, Winton Professor of the Public Understanding of Risk at the University of Cambridge. The emphasis here was again on the added value of integrating data together, and the ease with which this can be done, but also an acknowledgement of the risk of over-interpretation. It was interesting to explore where things go wrong, including where the logic of linking data flows but the outcome is flawed or even nonsensical – especially when outcomes are statistically significant but the meaning is nonsensical. Simple tools can be quite enlightening, with the funnel plot providing easy insight in a range of cases.
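As an aside on how simple the funnel plot is: each unit’s rate is plotted against its sample size, with control limits that narrow as the size grows, so only genuinely unusual units fall outside the funnel. A minimal sketch of the limits, using a normal approximation to the binomial (unit names and figures are invented for illustration):

```python
import math

# Hypothetical units: (name, cases, population) -- invented for illustration
units = [("A", 30, 500), ("B", 120, 2000), ("C", 9, 80), ("D", 55, 900)]

# Overall rate across all units, used as the funnel's centre line
overall = sum(c for _, c, _ in units) / sum(n for _, _, n in units)

def funnel_limits(n, p=overall, z=1.96):
    """Approximate 95% control limits for a proportion at sample size n."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

for name, cases, n in units:
    rate = cases / n
    lo, hi = funnel_limits(n)
    flag = "outside" if rate < lo or rate > hi else "inside"
    print(f"{name}: rate={rate:.3f}, limits=({lo:.3f}, {hi:.3f}) -> {flag}")
```

Note how the limits for the small unit are much wider than for the large one – that narrowing funnel shape is what stops small units being wrongly flagged as extreme.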

One of the key questions that emerged for me - and which I put to the panel - was the extent to which open access to data is really just access for the expert public rather than the lay public, and how this might evolve. The Guardian has a datablog, and other broadsheets have data experts, so perhaps one measure of progress will be when red-top newspapers are also driving a data-enabling agenda. There was recognition of the work going on to help identify what works for the lay public, with data experts getting more involved in public documents. Value is now being extracted from the numbers by folk who don’t need to crunch them in the ways that would have been necessary in the past, and we’re still in the early days of open data and visualisation… Also noteworthy is that some of the media data products are already becoming definitive reference points – even for government - such as the Guardian’s visualisation of government departmental spend as seen here…

So where might all this take us….

1. Road Map. This makes me wonder if we now need a new way to think about “data”. It seems we’re missing a macro and accessible way to organise and describe this emerging world, the way we have for our physical world. Perhaps the best analogy for me is “roads” (plural, the way data is the plural of datum). We all have a sense of the UK road structure (road numbers generally increase clockwise out of London) and a hierarchy of roads (M, A, B, minor) with standard characteristics which are generally predictable (which we know having travelled only some) but which are still locally unique. Just as a simple structure like ‘roads’ helps us understand and deal with the real world with some degree of confidence, we might need a framework for the “public understanding of data”. After all, roads get us to a destination the way data enables us to get to a message.

2. Data Stardom. Also, as the volume of data increases, some will reach stardom and others will fall by the wayside. Just as talent is only one factor in stardom, so usefulness will be only one factor for data; there’ll also be right-time, right-place factors. So a world where not all data are equal will be the norm, with some survival of the fittest, but because of those other factors the survivors may not be the most ‘fit for purpose’.

3. So What Test. With the wonderful visualisations that are emerging on a daily basis, there’s a risk that these are seen as an end product. Some visualisations are an attempt to jazz up the standard data tools and in doing so create more complexity, by needing to understand how the visualisation works before being able to work out what it says. Of course that’s still a necessity for the more traditional approaches, which are more generic but currently more familiar. The real challenge is about extracting the messages – the “so what” test - and engaging visualisations will be a big factor to that end.

4. Intuitive Insight. Something quite fundamental is emerging here about how the traditional heady stats are not always providing a meaningful and engaging answer – sometimes even a nonsensical one despite being statistically significant. I sense a trend here towards the more visually intuitive rather than the statistically inductive. After all, the eye and brain team can see quite complex patterns, so a bit more right-side art brain and a bit less left-side science brain might be the new norm.

5. Data Rave. In the same way that two good ideas rubbed together can create a great idea, linking the right two data sets might just create great insights (a bit more ambitious than just the whole being greater than the sum of the parts). And while there’s a law of diminishing returns, the turning point at which value starts to tail off in the data world might be further along than we might initially think. In public data terms we’re only starting to merge small numbers of data sets (even if they are big ones), so there are interesting times ahead. So more like a table for two over a glass of wine, soon to become the big party, then the club and the rave. Roll on the data rave then.

So a great session to tease out some of the new dynamics in the world of data. But like learning to ride a bike, there’s plenty of early wobbling and tumbling before we get to the stage of generating efficient speed and distance.

British Academy's audio coverage of the event.....

What a Performance

If there’s one recurring truth, it’s that good analysis follows a process – and a multidimensional one at that.

The first dimension is to take a measure and see it in its own context. This is the analytical process described below: in short, the process for a basic rounded analysis, to get to an overview quickly and confidently. There are pointers to more advanced options below, but these are not intended to distract from getting to that overview. The second key dimension is to look at a range of measures – assessing measures in the context of other measures - but more of that another time.

The starting point is of course a “Question” for which this analysis provides an answer (as well as new questions). In the most general sense, that question might be “how are we doing?”. Simple examples might be profit, crime rates, sales, customer satisfaction and so on. The broad analysis components are the same, although the emphasis may vary.

So assessing a measure in its own context has three main components, and the analysis of this measure splits into five steps. The components are snapshot – we have a number that is important to us (step 1), trend - to see what’s happening to that number over time (step 2), and benchmark – to see how this compares to others (step 3). This provides the material for a consolidated assessment (step 4), and the foundation for further analysis (step 5).
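As a sketch of how the five steps hang together, here’s a minimal skeleton (the function name, the simplistic last-versus-first trend test and the peer-average benchmark are my own illustrative choices, and assume that higher is better):

```python
def analyse_measure(snapshot, history, peers):
    """Basic rounded analysis of one measure, following the five steps."""
    # Step 1: snapshot -- the current value of the measure
    result = {"snapshot": snapshot}
    # Step 2: trend -- direction over time (crude: last value vs first)
    result["improving"] = history[-1] > history[0]
    # Step 3: benchmark -- position against the peer average
    result["above_benchmark"] = snapshot > sum(peers) / len(peers)
    # Step 4: consolidate -- combine trend and benchmark into one assessment
    result["assessment"] = (
        ("improving" if result["improving"] else "declining")
        + ", "
        + ("above" if result["above_benchmark"] else "below")
        + " the peer average"
    )
    # Step 5: further analysis would build on this foundation
    return result

print(analyse_measure(72, [60, 65, 72], [65, 80, 58]))
```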

Step 1: Snapshot – we have a number that is important to us.

This is about having a relevant, realistic and meaningful measure to analyse. The first step is to have a value on that measure. A simple step, but important to ensure some clear thinking on the fundamental basics. This should be relevant to a specific purpose, and meaningful in that it can be understood with some ease and confidence. This is all about having a starting point: the right measure, which is important to us. After all, the analysis of the wrong measure is mostly pointless.

Our starting point might even be a single data point, but one that we know is important to unpack. That one data point might often seem sufficient in a paragraph, but graphically starts to look rather insufficient...

Step 2: Trend – what’s happening over time?

So what is this measure doing over time? Is it stable, getting higher or lower, or more specifically getting better or worse? Using the same measurement over time, there should be some understanding and confidence that this is measuring the same thing – that definitions have remained stable, for example. Or if not, when and how they changed.

So now we have at the very least a second data point, and ideally more. In short, the more data points, the more clarity. After all, there will be some degree of “noise” in the data.

In fact in many circumstances it can be more probable than not that the data will be different from one time point to the next, because of that “noise” – in short, ‘natural variation’. Hence with two or three data points you might still be seeing noise rather than real signals or messages. In fact in this graphic there are places where the measure goes down from one time period to the next, despite the overall upward trend… using those two data points on their own would point, wrongly, to a trend in the opposite direction.

[More advanced options: this would involve looking at the trend over different time groupings. Having looked at a monthly trend, it may well be that a yearly or weekly perspective tells a different story. See “Trend, what trend?”]
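The point about noise can be illustrated numerically: a series with a clear overall upward trend can still fall between adjacent time points. A minimal sketch with made-up monthly figures:

```python
# Made-up monthly figures: a clear upward trend, but with noise
values = [10, 12, 11, 14, 13, 16, 15, 18, 17, 20]

# Overall trend via a least-squares slope
n = len(values)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(values) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) / \
        sum((x - x_mean) ** 2 for x in xs)
print(f"overall slope: {slope:.2f} per month")  # positive: an upward trend

# Yet several adjacent pairs of months move downwards
down_pairs = [(i, i + 1) for i in range(n - 1) if values[i + 1] < values[i]]
print("downward month-to-month moves:", down_pairs)
```

Picking any one of those downward pairs in isolation would suggest a declining trend, despite the positive overall slope – exactly the trap described above.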

Step 3: Benchmark – how does this measure compare to others?

Having set that snapshot in the context of time, and seen the trend for the data, the next step is to see that snapshot in the context of other data on this measure – typically seeing how it compares to other organisations, for example.

Of course when comparing data with other organisations, we typically do not have the same degree of understanding and confidence about the data as we do with our own, so we need to tread more carefully. If not comparing apples and pears, we might at least be comparing different varieties of apple, or if we’re lucky, just different apples of the same variety.

From this we can see if things look better or worse than others. So is our snapshot measure typical, higher or lower, or perhaps in the higher or lower extremes?

[More advanced options: This would involve looking at different sorts of benchmarks which might tell a different story. This might include looking at national or regional perspectives, or indeed other sectors. See “Benchmark, what benchmark?”]
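Placing our snapshot among its comparators can be as simple as a percentile rank. A minimal sketch (organisation names and figures are invented, and assume higher is better):

```python
# Hypothetical benchmark figures for the same measure across organisations
peers = {"Ours": 72, "Org B": 65, "Org C": 80, "Org D": 58, "Org E": 75, "Org F": 69}

ours = peers["Ours"]
others = [v for k, v in peers.items() if k != "Ours"]

# How many comparators sit below our value?
below = sum(1 for v in others if v < ours)
percentile = 100 * below / len(others)
print(f"our value {ours} is higher than {below} of {len(others)} peers "
      f"({percentile:.0f}th percentile)")
```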

Step 4: Consolidated Assessment

At this point it’s key to consolidate what has been learned to get an overall picture – that basic rounded analysis. It’s tempting to do more detailed trend or benchmark analysis, but this may be to the detriment of getting to a broader picture quicker.

So now we have a sense of how our snapshot compares over time and to others, which we need to bring together. A simple way to do this is to consider the outcomes of both trend and benchmark:

Trend – things getting better or getting worse?

Benchmark – good compared to others, or poor compared to others?

It’s really helpful to approach this graphically (based on an approach called the Boston Box, originally developed by the Boston Consulting Group). Often there is no clear-cut trend or benchmark, but it’s still possible to identify where the data sits in this overall framework.

The best place to be is probably improving and already best in class. The worst place is probably getting worse and already worst in class. The remaining two categories are a bit more subjective… worst in class but getting better might just have the edge on better than others but getting worse.
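The two-by-two consolidation can be sketched in a few lines (the quadrant labels are my own paraphrase of the four categories above):

```python
def boston_box(trend_improving: bool, above_benchmark: bool) -> str:
    """Place a measure in a two-by-two trend/benchmark grid."""
    if above_benchmark and trend_improving:
        return "best in class and improving"
    if above_benchmark and not trend_improving:
        return "better than others but getting worse"
    if not above_benchmark and trend_improving:
        return "worse than others but getting better"
    return "worst in class and getting worse"

print(boston_box(True, True))
print(boston_box(True, False))
```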

Step 5: Further Analysis

The process above can bring us to a rounded, big-picture assessment quite quickly. There are more detailed analysis opportunities along the way – and these are discussed separately - but they should not distract from getting to a quick high-level overview on these trend and benchmark perspectives. For example, if we knew from our trend that performance was heading downward, our response would differ depending on our benchmark – whether, for example, we were best in class or worst in class.

This current picture provides a foundation from which to look ahead, and here’s the quick indicative peek…

5a... look at the trend for the comparison to others [benchmark trend]

5b… look at the trajectory for our measure. [projection]

5c… look at the trajectory for the comparison to others [projection of benchmark trend]

This also now provides the foundation from which targets can be more confidently set, reflecting (a) what’s been achieved in the past, (b) what the trajectory looks like, and (c) how others are performing, and their trajectory.
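As a final sketch, a simple linear projection of our own trend alongside the benchmark trend gives an indicative basis for that kind of target-setting (all figures invented; a least-squares trend line is just one of many reasonable projection choices):

```python
def linear_projection(series, periods_ahead):
    """Project a series forward along its least-squares trend line."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = sum((i - x_mean) * (y - y_mean) for i, y in enumerate(series)) / \
            sum((i - x_mean) ** 2 for i in range(n))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

ours = [60, 62, 65, 67]          # our measure over four periods (invented)
benchmark = [70, 70, 71, 71]     # peer average over the same periods (invented)

# Two periods ahead: where we're heading vs where the peers are heading
print(round(linear_projection(ours, 2), 1))
print(round(linear_projection(benchmark, 2), 1))
```

In this made-up case our steeper trajectory catches the flatter benchmark within two periods – which is exactly the kind of insight a target should reflect.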