If the reflective processes of the cultural organisations involved in this National Test are one vital element of unlocking insights from the quality metrics, another is the overall report on the National Test that the Culture Counts team will be producing. We are giving a lot of thought to how best to aggregate the data and tell a rich story about the results in the round.
 

The wonderful thing about standardised questions (like the quality metrics) is that if you ask them enough times, you can start building a large pool of comparable data, enabling new questions and answers to emerge. 
 

The quality metrics are a core set of questions that aim to capture the quality of artistic work, and they can be used across art forms, presentation mediums and types of experience. It is essential that, when working with an aggregate quality metrics dataset encompassing all the different conditions in which the metrics can be applied, the questions asked of that dataset do not dilute or diminish the rich detail of the underlying survey data.


Therefore the data aggregation analysis in the trial is not about comparing the artistic quality of one event against another; it is about deepening insight into the experience of artistic work as a whole across the National Test. It needs to draw out the dimensions of quality and enable conversations about different reactions to different types of work, interpreted against evidence gathered on a national scale.
 

We’ll keep you posted on how we approach the analysis of the data (applying metadata to the overall data at its most granular level) in order to ensure we tell a powerful overall story without losing the complexity and richness of the underlying data.
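To make the idea of "metadata at the most granular level" concrete, here is a minimal sketch of the general approach: each row-level survey response carries metadata tags, and aggregation is just a grouping over those tags, so the underlying detail is never thrown away. The field names ("artform", "dimension", "score") and the sample figures are purely illustrative assumptions, not the actual Culture Counts schema or results.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical granular records: one row per respondent per metric question.
# These field names and scores are illustrative only.
responses = [
    {"event": "E1", "artform": "theatre", "dimension": "captivation", "score": 0.82},
    {"event": "E1", "artform": "theatre", "dimension": "rigour", "score": 0.74},
    {"event": "E2", "artform": "dance", "dimension": "captivation", "score": 0.91},
    {"event": "E2", "artform": "dance", "dimension": "rigour", "score": 0.88},
    {"event": "E3", "artform": "theatre", "dimension": "captivation", "score": 0.79},
]

def aggregate(records, *keys):
    """Group granular responses by the given metadata keys and
    average their scores; the row-level data itself is untouched."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in keys)].append(r["score"])
    return {k: round(mean(v), 3) for k, v in groups.items()}

# Aggregate by quality dimension across all events (a national picture)...
print(aggregate(responses, "dimension"))
# ...or re-slice the same granular rows by art form and dimension.
print(aggregate(responses, "artform", "dimension"))
```

Because the grouping keys are chosen at query time rather than baked into the stored data, the same granular dataset can answer many different questions without any loss of detail.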