I’d like to propose a session to discuss what the digital humanities can (and cannot) do with binary image and sound files.
Part of my frequent dissatisfaction with recent experiments in data mining, computational distant reading, etc., I think, comes from a tension that’s been in place in DH for a long time. The same set of McGann-Druckerites (a group among which I count myself) who were raised up on a critical methodology that emphasizes the centrality to literary interpretation of material texts, bibliographic codes, etc.—those aspects of texts that are extremely difficult to capture in digital surrogate form—now finds itself in the midst of a digital scene in which plain text, whether marked up or not, is king. As often as not, scanned or photographed images more accurately capture the material situation of a book than marked text does—and text files, though they can operate as “phonotext” with the potential to make sound, as Garrett Stewart points out, cannot embody the sounded performance of poetry in the way audio recordings can. “Multimedia” was once the buzzword that best captured the promise of computers in culture, but those messy image, sound, and video files strewn all about the Internet have proven beyond the purview of most text-heavy DH research.
Some recent attempts to deal more readily with image and sound in DH suggest to me that there might be more we can do on this front:
- Alex Gil's work as a Scholars' Lab fellow on an edition of Aimé Césaire, which aimed to meticulously recreate the original placement and appearance of text in Césaire's work;
- Lev Manovich's work on large sets of images, memorably presented at a Scholars' Lab talk in fall 2011;
- and Tanya Clement's work on using digital tools to illuminate the meaning of sound patterns in the work of Gertrude Stein.
I’m surely unaware of lots of great work being done on this front, and one of the purposes of the session would be to have a bit of a show-and-tell of that existing work. I’d also like to have a conversation about the possibilities for and limitations of multimedia data in relation to the digital humanities. How can we conceive of image and sound files not just as black boxes to be surrounded by metadata, but as data (or capta, as the case may be) in their own right? Do such files offer enhanced versions of the texts we work on, or are they in many respects impoverished? And of course, what knowledge can we actually produce by playing around with the multimedia treasure trove of the Internet?
Another question for the session, just tweeted by Lev @manovich: “If 2000s web was dominated by search, what will be the paradigm of 2010s? you can’t search 7 billion photos uploaded on FB every month.”
What a well-crafted and thoughtful proposal. I would love to hear folks’ thoughts on this because (with the possible exception of the tools Manovich is using) I’m completely unaware of the sorts of things one could do using other types of media.
The challenges present themselves immediately. A concept like “word” (incoherent as that concept may already be) has a clearly (well, mostly) extractable representation in plain text: whitespace- or punctuation-delimited, with contractions an unpleasantness to be negotiated. But even separating words within some audio format (another problem: so many competing binary formats) is highly nontrivial.
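To make that concrete, here is a minimal, hypothetical sketch of one tokenization convention in Python: runs of letters delimited by whitespace or punctuation, with an allowance for internal apostrophes, which papers over rather than solves the contraction problem.

```python
import re

def tokenize(text):
    # One convention among many: a "word" is a run of letters,
    # optionally carrying one internal apostrophe ("don't", "it's").
    # Hyphens, numerals, and possessives all remain open questions.
    return re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", text)

print(tokenize("Don't split the baby--or the word."))
```

Even this tiny example has to take a stand on whether “Don't” is one token or two; there is no comparably simple regular expression one could run over an MP3.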
Are there ways simply to end-run around this problem? I’ve tinkered with using ImageMagick to merge image files (of page images) without having to worry about conceptualizing the binary data in any way.
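That end-run looks something like the following sketch using ImageMagick’s command-line tools (the filenames are hypothetical, and the stand-in “pages” are generated first so the commands are self-contained): the images get joined without our ever parsing the binary data ourselves.

```shell
# Generate two tiny stand-in "page images" (placeholders for real scans).
convert -size 20x20 xc:white page1.png
convert -size 20x20 xc:gray  page2.png

# +append joins images left-to-right into a two-page spread;
# -append would stack them vertically instead.
convert page1.png page2.png +append spread.png

# identify reports the merged dimensions without decoding anything by hand.
identify -format "%wx%h\n" spread.png
```

The appeal is exactly that the binary format stays a black box: ImageMagick handles the decoding, and we reason only about pages and layouts.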
In any case, I’d love to have my horizons expanded. Great idea.
I’ve always wondered how TV networks handle these issues: how do they locate relevant clips, sometimes from years ago, sometimes just lots of uses of a particular phrase (I’m thinking of Daily Show montages). Surely these aren’t all tagged or transcribed? Do they use tools we don’t know about? Anything we could appropriate for academic uses?
Just popped up via twitter, not irrelevant:
lab.softwarestudies.com/2012/04/software-studies-initiative-awarded.html