Media scholar Lev Manovich visited Setup today to present his new research project on cultural analytics, in which he uses computing power and screen capacity to analyse large amounts of visual data. The advantage, he argued, is that this work no longer has to be carried out by quantitative researchers who would use only a small selection of the rich amount of data available. Examples: capturing the first and last frame of each shot in a film, along with shot length and the amount of movement in each shot, and displaying them chronologically. Or taking screenshots of a play session of a computer game every three seconds and displaying a composition of their vertical middle lines. No data will be lost anymore.
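To make the second example concrete, here is a minimal sketch of such a "vertical middle line" composite. It is my own illustration, not Manovich's actual tooling: it assumes each screenshot is a simple 2-D grid of pixel values (rows of numbers), and the function names are hypothetical.

```python
def middle_column(frame):
    """Extract the vertical middle column of one frame (a list of rows)."""
    mid = len(frame[0]) // 2
    return [row[mid] for row in frame]

def composite(frames):
    """Place each frame's middle column side by side, chronologically:
    result[y][t] is the middle pixel of frame t at height y."""
    columns = [middle_column(f) for f in frames]
    height = len(columns[0])
    return [[col[y] for col in columns] for y in range(height)]

# Four synthetic 3x5 "screenshots", one every three seconds of play;
# pixel value equals the frame index so the composite is easy to read.
frames = [[[t] * 5 for _ in range(3)] for t in range(4)]
image = composite(frames)  # a 3x4 grid: each column is one moment in time
```

The point of the technique is visible even in this toy: the composite compresses a whole play session into one image, but it keeps only one column per frame and discards everything else.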

Admittedly, the images looked very interesting from the beginning. But my mind started to wander when Manovich presented graph after graph of the same media text. This indicated to me that the same text had to be analysed repeatedly to highlight different aspects of it: perspectives that could not be presented at the same time, even on the enormous screens designed for the task. And much worse: perspectives that each analysed only a part of the text. Because how can a collection of samples of the original ever be as good research material as the original itself?

Display wall visualising analysis of fifty thousand Manga pages.

Not to say that Manovich did not have a point. The new techniques provide a way to actually process all the data a text contains. However, I cannot agree that this solves the problem that qualitative and quantitative research methods face. Whichever way you approach it, researchers still have to decide what intervals to measure, and thereby bias their research. This was best illustrated by Manovich's own example of motion. Moving his hands in an inimitable sequence, he stated that different types of movement are hard to compare. Tracking the speed and reach of movement enables such comparison, he said, forgetting that it loses the richness of the aesthetics, which incorporates many more features. Virtually uncountable ones.
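The reduction I am objecting to is easy to demonstrate. A common crude measure of "amount of movement" is frame differencing, which collapses everything about a gesture into one number per frame pair. This is my own sketch of that idea, not Manovich's method, and it assumes frames are plain 2-D grids of brightness values:

```python
def motion_magnitude(prev, curr):
    """Mean absolute pixel difference between two consecutive frames.
    One scalar stands in for 'amount of movement': direction, rhythm
    and shape of the gesture are all discarded."""
    total = 0
    count = 0
    for row_prev, row_curr in zip(prev, curr):
        for p, c in zip(row_prev, row_curr):
            total += abs(p - c)
            count += 1
    return total / count

# Two 2x2 frames: one pixel changes by 10, the rest stay still.
score = motion_magnitude([[0, 0], [0, 0]], [[10, 0], [0, 0]])
```

A slow sweeping arm and a quick flick of the wrist can produce the same score; that is exactly the loss of aesthetic richness the critique points at.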

Unfortunately, I could not stay until the end to witness the questions from the audience. Reading the new media & digital culture master blog (in Dutch) tells me that a question along the lines of my critique was in fact posed by one of the professors present who organised Manovich’s visit (see Skip Intro). And it looks like Manovich had no concrete answer.

A researcher who has put my fears in clear words is William Gaver, who writes:

  • “Asking unambiguous questions tends to give you what you already know, at least to the extent of reifying the ontology behind the questions. Posing open or absurd tasks, in contrast, ensures that the results will be surprising.”
  • “Summarizing returns tends to produce an ‘average’ picture that may not reflect any individual well, and that filters out the unusual items that can be most inspiring.”
  • “Analyses are often used as mediating representations for raw data: they blunt the contact that designers can have with users.” (p. 7)

Gaver, William W., et al. “Cultural Probes and the Value of Uncertainty.” Interactions 11.5 (2004): 53–56.

More discussion on Manovich’s lecture (also in Dutch) can be read on the website of Setup.