Archive for the ‘General’ Category

  • DH and the Tech Industry


    As someone who hails from Silicon Valley and has a lot of friends working at places like Google, Facebook, LinkedIn, etc., I’ve been able to take advantage of a lot of input and exposure to new technologies as I’ve been exploring the possibilities for DH to inform my own research in Renaissance drama.

    The University typically sets itself apart from the private sector, especially in the humanities. There’s much more back-and-forth in the sciences, though, with professors consulting for industry and industry leaders returning to lecture at the academy. I wonder whether DH should or will provide a similar bridge between private and public: tech companies might produce tools useful for digital humanists, and bibliographers/librarians/archivists/etc. might be able to do consulting for projects such as Google Books.

    This perhaps also relates to Eric’s public humanities session. We all bemoan the awful lack of metadata in the Google Books digitization project, but the truth is those texts will be the ones that the general public is most likely to access – not the carefully curated archives that DH academics have been painstakingly putting together (at the moment anyway). Is partnership with industry our best shot at getting the right (or at least better) information about humanist subjects out there? What are the possibilities or ramifications of flirting with the line between public and private DH?

  • Between capta and data


    My session is on making sure the digital is humanistic, and on a tension I’ve seen running between a couple of lines of argument in DH literature.

    Humanists have long recognized that our needs are rarely met by tools imported from the natural or social sciences. Echoing this longstanding objection, Johanna Drucker argued last year that humanists require fully humanistic representations that glory in ambiguity, in information as taken and interpreted, not given, and that the most common uses of software built for representing data, whether spatial or tabular, rarely meet this goal.

    Yet digital humanists have also been attuned to the ways that new technologies, tools not originally built for humanistic ends, can lead to intellectual breakthroughs. Databases are their own genres to be read. GIS brings to the eye patterns unidentifiable without visualization. These and other technologies yield new interpretations of the phenomena they represent by breaking them into discrete events and units that might be measured, analyzed, seen.

    I would like to see a session that deals with the seeming tension in these two lines of argument: one, that new technologies yield intellectual breakthroughs and contribute to new humanistic knowledge precisely because of their ability to cast information in different light; the other, that these new technologies often fail to bring any insight identifiable as specifically humanistic. So let’s talk about specific technologies and how they might bring new insight because they lead to new digital and humanistic readings.

    Oh, and I also really want to attend Eric’s session.

  • Sharing Data / What Data?


    I attended a symposium earlier this month entitled “Sharing and Sustaining Research Data,” wherein the participants specifically discussed scientific datasets.  One of the presentations that really excited me was David E. Schindel’s talk on “DNA Barcoding and Early Data Release.” During this presentation, Schindel discussed the Fort Lauderdale Principles, 2003, which he described as being a new paradigm for accelerating Cybertaxonomy development (much like the Bermuda Principles, 1996, which continue to encourage the rapid dissemination of genomic data).

    I’ll summarize the Fort Lauderdale Principles as follows: when it comes to sharing data, three groups are responsible for ensuring the success of the “community resource system”:

    1. Funders
    2. Producers
    3. Users

    Furthermore, producers must commit to “making data broadly available prior to publication,” and users should respect the expressed research intents of the producers. This requires that data be published before the research (or, as the case may be in many DH projects, even before the website).

    Therefore, I’d like to discuss how such data-sharing tenets do or do not fit into the current Digital Humanities landscape.  I have seen online discussions and articles, such as Christine Borgman’s “The Digital Future is Now: A Call to Action for the Humanities,” but what have been the results so far?  Or, if I’m completely mistaken and such systems for data sharing already exist (maybe GitHub has become the de facto standard, for instance?), I’d love to learn more about those systems during this THATCamp, too.

  • Possibilities for Local History



    As the librarian for UVa’s School of Architecture, I work with many scholars who are deeply interested in the history of communities.  Often, the interest is local (Charlottesville), but our historians and urban planners are also digging into communities throughout the United States and the world.  While the research is often centered on architecture and urban planning, it extends to interdisciplinary aspects of food planning, use of public space, and many other directions.  In terms of media, it encompasses images, texts, primary documents, maps, oral histories, planning documents, and just about anything else you can imagine.  I find so many wonderful new ways of discovering local history resources, many of which are the direct result of DH technologies like Omeka, GIS, etc.  But, implementation is scattered, and often limited to the silo of a single institutional collection.


    I’d love to create a vision for the ideal local history portal for researchers.  I imagine that it would combine multiple aspects of some of my favorite sites (HistoryPin, WhatWasThere, the NYPL MapWarper, Visualizing Emancipation), along with characteristics of tools like Omeka (and I have a feeling I’ll be adding NeatLine to that list soon).  It would also need to transcend silos of individual institutional collections—bringing together photos, documents, etc. from the local historical society, university archives, local planning and preservation org, public library, and more—while allowing those institutions to promote and “brand” their own resources.


    I’m hopeful that there’s a group of folks that might be interested in playing a game of “Imagine going to one site for a city/town and being able to…” I would guess that many of us will contribute knowledge of projects that are inching us closer to this research utopia, and we might also come up with some “boy, it would be great if someone developed…” ideas as well. At the end, we might walk away with a road map to some amazing possibilities, and hopefully some excited people who might want to collaborate to make that a reality.

  • Proposed session: iBooks author


    I’m proposing a session to work on figuring out / optimizing the use of / discovering an excellent purpose for the iBooks author app.

    With my research assistant Lauren Burr, who has been moving all my teaching materials for the Multimedia course at the Digital Humanities Summer Institute from HTML to iBook e-textbook format, I’ve been trying to explore the potentials and the limitations of this publishing and editing format. There have been some hiccups and some learning along the way, as well as some bug fixes from Apple.

    So, I have a half-done textbook I can share around for us to play with, a real project that has hit some real obstacles, or we can all work on our own stuff together, or we can argue about proprietary formats and iDevices, too. I think it would be really fun to put this app through its paces in a non-hypothetical situation with lots of media (I’ve got galleries and movies and podcasts and such all through my book).

    You’ll need a Mac with the (free) iBooks Author app installed, and an iPad to preview the e-book on.

  • Pedagogy


    The other session I’d like to propose is a show-and-tell pedagogy session about making (better) use of the various digital tools now available, and especially your experiences with them in the classroom.

    At UVA, the writing curriculum is based on the Little Red Schoolhouse curriculum pioneered by Wayne Booth and Greg Colomb, and a number of grad students here are hard at work producing a digital companion to this curriculum. But only with the greatest hesitation have I brought extracurricular digital tools into play, either in first-year writing or lit surveys, leading to abortive-at-best experiments with class Flickr streams, Twitter discussions, and blog posts. My pedagogical toolbox already feels dusty and out of touch, and I haven’t even been at this two full years yet!

    I’d like to hear what has worked (or not worked) for others (wikis? nGrams?), and what seems to hold promise for the near future (Neatline? others?). Basically, I want to figure out ways to shake up my classroom, and others’ as well, preferably in ways that get students more excited than apprehensive.

  • 5K Run (for varying values of “fun”)


    I’m proposing, as one session, participation in the 5K run being hosted by the English Dept’s grad student association. The run will start at 10:30 just outside Alderman Library, not far at all from the Scholars’ Lab. I also believe I’ll be able to get THATCampers access to shower and locker-room facilities nearby, though whether those are directly next to the library or one bus stop away is not yet nailed down.

    The registration link is here:

    Hope to see some of you at the starting line!

  • Visualization Tools for Interdisciplinary Scholarship


    Various types of visualization tools for conceptualizing relationships between different types of media seem particularly hot in the DH community right now, and I’d like to explore the possibilities of these tools further. I’m a PhD student in English currently working on an interdisciplinary dissertation that focuses on connections between early photography and Victorian poetry, and thus my interest is primarily oriented towards networks between art and literature, although this topic could also be productively extended to involve other media (like sound or music, as Eric discusses in an earlier blog post).

    I know that text analysis search interfaces like Voyant Tools and Word Seer can help me execute complex queries that would enable me to analyze the linguistic and rhetorical structures surrounding my search terms within selected databases (and thereby address the literary side of my project). Likewise, data visualization software like ImagePlot can enable me to explore patterns in large collections of images. I’m not sure how to use these tools to their full potential in my research, however, and I’d like to discuss how best to employ these devices in a practical sense. As my research focuses on intersections between literary and visual texts, I’m personally interested in investigating ways to combine these two areas of inquiry productively using existing online tools. (As I mention above, however, this session would certainly be relevant to other types of media.) The potential of these tools for expanding scholarship beyond disciplinary boundaries has not yet been fully utilized, and I’d like to expand the discussion to consider how existing tools might be enhanced to better address the needs of interdisciplinary (and inter-media) research and scholarship.

    In a more theoretical sense, I’d be interested in talking about how to use tools like these as jumping-off points to complex academic arguments about the relationships they represent. How can dynamic interdisciplinary DH applications be integrated within the boundaries of the traditional static article, dissertation, or book?

  • Producing Digital Ethnographies “On the Spot”


    A short and sweet session proposal based on discourse and creation:

    I’d like to propose a session where we choose a place of current global significance outside of the United States. Using Google Maps as a window, I’d like for a group to then gather information and produce, in the limited time frame of the session, a Google Doc ethnography that tries to combine global statistics with real local knowledge and insight (restaurant reviews, local newspapers, and local blogs can all be combined into an ethnographic matrix of ideas). With so much emphasis on the global and the transnational in both literary studies and the digital humanities, this session would be a real test of how these tools can help scholars gather local knowledge and form a starting point for ethnographic engagement with a location.

  • DH Beyond Plain Text


    I’d like to propose a session to discuss what the digital humanities can (and cannot) do with binary image and sound files.

    Part of my frequent dissatisfaction with recent experiments in data mining, computational distant reading, etc., I think, comes from a tension that’s been in place in DH for a long time. The same set of McGann-Druckerites (a group among which I count myself) who were raised up on a critical methodology that emphasizes the centrality to literary interpretation of material texts, bibliographic codes, etc.—those aspects of texts that are extremely difficult to capture in digital surrogate form—now finds itself in the midst of a digital scene in which plain text, whether marked up or not, is king. As often as not, scanned or photographed images more accurately capture the material situation of a book than marked text does—and text files, though they can operate as “phonotext” with the potential to make sound, as Garrett Stewart points out, cannot embody the sounded performance of poetry in the way audio recordings can. “Multimedia” was once the buzzword that best captured the promise of computers in culture, but those messy image, sound, and video files strewn all about the Internet have proven beyond the purview of most text-heavy DH research.

    Some recent attempts to deal more readily with image and sound in DH suggest to me that there might be more we can do on this front:

    • Alex Gil’s work as a Scholars’ Lab fellow on an edition of Aimé Césaire, which aimed to meticulously recreate the original placement and appearance of text in Césaire’s work;
    • Lev Manovich’s work on large sets of images, memorably presented at a Scholars’ Lab talk in fall 2011;
    • and Tanya Clement’s work on using digital tools to illuminate the meaning of sound patterns in the work of Gertrude Stein.

    I’m surely unaware of lots of great work being done on this front, and one of the purposes of the session would be to have a bit of a show-and-tell of that existing work. I’d also like to have a conversation about the possibilities for and limitations of multimedia data in relation to the digital humanities. How can we conceive of image and sound files not just as black boxes to be surrounded by metadata, but as data (or capta, as the case may be) in their own right? Do such files offer enhanced versions of the texts we work on, or are they in many respects impoverished? And of course, what knowledge can we actually produce by playing around with the multimedia treasure trove of the Internet?
