

The posts from 2018-2016 are on our archive site: www.softwarestudies.com.


02 May 2016

The Science of Culture? Social Computing, Digital Humanities, and Cultural Analytics

  • This visualization shows all images that have one or more “Maidan” tags; every image is repeated for each of its tags. For example, if an image has both #euromaidan and #майдан tags, it appears twice. As a result, 1,340 images turn into 2,917. The images are organized by date and time (left to right, top to bottom). From the research project The Exceptional and the Everyday: 144 Hours in Kyiv (2014), an analysis of Instagram images shared in Kyiv during the 2014 Ukrainian Revolution.

  • Map of Kyiv showing the locations of images shared during February 18-22, 2014. From the research project The Exceptional and the Everyday: 144 Hours in Kyiv.

  • Detail of research visualizations for On Broadway (2014), an interactive installation exploring Broadway in NYC using 40 million user-generated images and data points.

  • Screenshot of the installation On Broadway.


Lev Manovich

Download Article

The Science of Culture? Social Computing, Digital Humanities and Cultural Analytics (2015).


I define Cultural Analytics as “the analysis of massive cultural data sets and flows using computational and visualization techniques.” I developed this concept in 2005, and in 2007 we established a research lab, the Software Studies Initiative, to begin working on practical projects. The following are examples of the theoretical and practical questions driving our work.

What does it mean to represent “culture” by “data”? What unique possibilities does computational analysis of large cultural data offer in contrast to the qualitative methods used in the humanities and social sciences? How can we use quantitative techniques to study the key cultural form of our era, interactive media? How can we combine computational analysis and visualization of large cultural data with qualitative methods, including “close reading”? (In other words, how can we combine the analysis of larger patterns with the analysis of individual artifacts and their details?) How can computational analysis do justice to the variability and diversity of cultural artifacts and processes, rather than focusing on the “typical” and “most popular”?

In 2015, eight years later, the work of our lab is only a tiny portion of a very large body of research. Thousands of researchers have published tens of thousands of papers analyzing patterns in massive cultural datasets. First, this is data describing activity on the most popular social networks (Flickr, Instagram, YouTube, Twitter, etc.), user-created content shared on these networks (tweets, images, video, etc.), and users’ interactions with this content (likes, favorites, re-shares, comments). Second, researchers have also started to analyze particular professional cultural areas and historical periods, such as website design, fashion photography, 20th-century popular music, and 19th-century literature. This work is carried out in two newly developed fields: Social Computing and Digital Humanities.

Where does this leave Cultural Analytics? I think it continues to be relevant as an intellectual program. As we will see, Digital Humanities and Social Computing carve out their own domains in relation to the types of cultural data they study, but Cultural Analytics does not have these limitations. We are also not interested in choosing between humanistic and scientific goals and methodologies, or in subordinating one to the other. Instead, we are interested in combining both in the study of cultures: the focus on the particular, interpretation, and the past from the humanities, and the focus on the general, formal models, and predicting the future from the sciences.

In this article I discuss these and other characteristics of both approaches to the study of large cultural datasets as they have developed so far, pointing out opportunities and ideas that have not yet been explored.