Archive for April, 2012

  • THATCampVA follow-up

    1

    Thank you again for making THATCamp Virginia such an absolutely fantastic event!  I wanted to follow up with just a couple of final things:

    Evaluation
The good folks at THATCamp Central have asked us to ask you to fill out a brief evaluation of your THATCamp experience, which can be found here: www.surveymonkey.com/s/thatcampeval. We’re always interested in feedback so we can make THATCampVA (and all THATCamps) as successful as possible. (You can see anonymous results here.)

    Capturing the THATCamp experience
    Please feel free to continue to make use of the THATCampVA blog for any ongoing post-THATCamp discussion.  And if you post any related thoughts to your own blog, feel free to share a link with us on the THATCampVA blog.  Future scholars of THATCamps appreciate it when tweets, blog posts, Flickr pictures, or other social media get tagged with “#thatcamp”–it makes it easier for them to find the material.  You might also be interested in contributing to the THATCamp documentary project or to the Proceedings of THATCamp.  If you want to make a nomination (self- or otherwise) to be part of the Proceedings, here’s how it’s done:

    1. If it’s a blog post on any registered THATCamp site, whether on or off thatcamp.org, assign it the category #proceedings.
    2. If it’s something else, say a Flickr image, a tweet, or a post on a personal blog, tag and/or categorize it with both #thatcamp and #proceedings.
3. If it’s something that can’t easily be tagged, such as a Google Doc, put a link to it in a blog post and categorize or tag that blog post as described above.

    Again, thank you all so much for coming and making THATCampVA so rewarding, and we hope to see you again next time!

  • Hiding in Plain Sight

    0

[Photo of a footprint] Follow this link to see the question key and results of the THATCamp 2012 Zen Scavenger Hunt. Assembly and presentations took place at 3:30 today. The slide show is rough but the images are cool, and we had a blast doing it.

  • Neatline Hands-On

    2

Following up on the Neatline workshop yesterday afternoon, I propose a more “hands-on” session in the Scholars’ Lab fellows’ lounge (complete with Neatline on large, shiny iMac screens…) in which we can play around with Omeka and Neatline, experiment with Geoserver, and talk about possible use cases and ideas for new features.

As I mentioned in the workshop yesterday, developing “framework” applications like Neatline is a challenge because you’re constantly walking a line between making the software too abstract (not enough features, not quite right for any one project) and too concrete (hyper-specific features that work great for one use case, but not for anything else).

    What kinds of new features in the Neatline application would be most useful? How closely should Neatline be coupled with the underlying Omeka collection? I’d love to sit down and talk about specific project ideas and generally think about the direction for ongoing development.

  • Hack Proposals

    4

My proposal is to host an old-school hackfest covering technologies useful for humanist inquiry. These sessions would be beginner-to-intermediate friendly, though if there is interest, we could also do a much deeper dive into any one of these areas. Here are some ideas, but I would love to hear if others have topics they would like to explore.

    NodeJS

    There is a lot of excitement in various developer communities for a new server-side JavaScript platform named node.js. Built on the Google v8 JavaScript runtime, node.js allows developers to quickly write real-time applications using an evented model. This session will take a look at how one interesting Node application (HUBOT) is constructed, the technologies used, how it’s deployed, and how it can be extended to implement an IRC bot.

    HTML5 Technologies

HTML5 is a buzzword a lot of people talk about, but few actually know what it is. This session would take a look at HTML5 technologies (e.g. WebGL, canvas, audio, video), how people are beginning to use these components in interesting ways, and perhaps even how to put them together in a browser-based action game.
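Stripped of the drawing code, a canvas action game is a loop that advances the world in fixed steps so physics stays deterministic however irregular the animation frames are. The sketch below shows just that update logic (all names are illustrative); in a browser it would be driven by requestAnimationFrame and rendered to a canvas 2D context.

```javascript
// Fixed-timestep game loop core: accumulate elapsed time, then run
// as many whole physics ticks as fit, carrying the remainder over.
const STEP_MS = 16; // one physics tick, roughly 60 per second

function frame(world, dtMs) {
  world.accumulator += dtMs;
  while (world.accumulator >= STEP_MS) {
    world.x += world.speed; // one tick: move the player
    world.accumulator -= STEP_MS;
  }
}

const world = { x: 0, speed: 4, accumulator: 0 };
frame(world, 16); // ordinary frame: exactly one tick
frame(world, 40); // slow frame: two ticks run, 8 ms carried over
console.log(world.x, world.accumulator); // 12 8
```

The carried-over accumulator is what keeps a stuttering browser tab from making the game physics speed up or slow down.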

    Omeka Plugin Development

    Have an idea for a plugin for Omeka? Don’t know where to start? Stuck somewhere? This session would explain the basics of how Omeka’s plugin architecture works, how to get going, and some tricks we’ve learned along the way developing Neatline (and other) plugins.

    Ruby Zotero gem

At last year’s THATCampVA, we started hacking on a Ruby gem to allow developers to work with the Zotero APIs. I have started refactoring this code to work on the 1.9.3 MRI and to use a modular HTTP transport mechanism. This session would hack on adding features and continuing the refactor. You can check out the code on GitHub. If we’re really ambitious, we could even hack on the node client I began experimenting with.
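The “modular HTTP transport” idea can be sketched in a few lines, here in JavaScript in the spirit of that node client. None of this is the gem’s real API, and the Zotero endpoint path is only an example: the point is the seam where the client delegates the actual HTTP call to whatever transport it is handed.

```javascript
// A client that builds URLs and parses responses, but leaves the
// HTTP mechanics to an injected transport object.
class ZoteroClient {
  constructor(transport, base = 'https://api.zotero.org') {
    this.transport = transport; // anything with a get(url) method
    this.base = base;
  }
  items(userId) {
    const body = this.transport.get(`${this.base}/users/${userId}/items`);
    return JSON.parse(body);
  }
}

// A fake transport exercises the client without the network --
// the same seam lets the real client swap HTTP libraries.
const requested = [];
const fakeTransport = {
  get(url) {
    requested.push(url);
    return JSON.stringify([{ title: 'An Example Item' }]);
  },
};

const items = new ZoteroClient(fakeTransport).items('12345');
console.log(items[0].title);                              // An Example Item
console.log(requested[0].endsWith('/users/12345/items')); // true
```

Swapping transports (Net::HTTP, Typhoeus, a test double) then never touches the client code, which is the payoff of the refactor described above.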


  • DH and the Tech Industry

    6

    As someone who hails from Silicon Valley and has a lot of friends working at places like Google, Facebook, LinkedIn, etc., I’ve been able to take advantage of a lot of input and exposure to new technologies as I’ve been exploring the possibilities for DH to inform my own research in Renaissance drama.

The University typically sets itself apart from the private sector, especially in the humanities. There’s much more back-and-forth conversation in the sciences, though, with professors consulting for industry and industry leaders returning to lecture at the academy. I wonder if DH should or will provide a similar bridge between private and public, as tech companies might produce a tool useful for digital humanists, or bibliographers/librarians/archivists/etc. might be able to do consulting for projects such as Google Books.

    This perhaps also relates to Eric’s public humanities session. We all bemoan the awful lack of metadata in the Google Books digitization project, but the truth is those texts will be the ones that the general public is most likely to access – not the carefully curated archives that DH academics have been painstakingly putting together (at the moment anyway). Is partnership with industry our best shot at getting the right (or at least better) information about humanist subjects out there? What are the possibilities or ramifications of flirting with the line between public and private DH?

  • Knowing the Audience

    1

In line with a couple of other posts, I’d love to have a discussion about the audience for digital humanities projects. Do we have any obligations in terms of who we create these for? What are the expectations of them? How can we widen the scope of who is being reached? It’s exciting to think that tools are being created that can be used for instruction in the classroom, to advance scholarly work, and to create communities; I wonder how to balance all these different roles within one project. Or should projects be created with one goal in mind? It’d be great to hear first-hand from those who develop projects to see how these considerations are evaluated and used to shape them. I’d also love to hear what others think about the different environments generating this work. As a student newly entranced by the excitement of digital humanities, I’m still a little unsure of the lay of the land. Perhaps we could talk about the different ways and places in which this work can be encountered.

  • Metadata: describe, view and do

    1

Everyone so far has posted some really cool ideas, and my proposal is to latch on to what you guys are already talking about but delve into the description, access, and output aspects of your collections and projects. As the Metadata Librarian in the Cataloging department here at UVa, my interest is in how you guys describe the stuff you have, like this travel journal and photographs and local history collections made up of lots of different formats, and whether it is possible to get non-experts in on the description (and what it would look like if we did). I’m also interested in exploring how we can use the metadata (and choose the metadata wisely) so that it can do stuff for us, like making nifty visualizations.

  • Mine your own business

    6

    Yesterday afternoon, Brad Pasanek and I decided to play at text-mining. We started working with MALLET and this GUI tool but were soon lost in the mine, buried in code, with nary a respirating canary, shafted.

    Our proposal includes two potential approaches:

(1) A session could look at how a scholar might begin to use topic modeling in the humanities. What do those of us with limited technical nous need to know in order to begin this type of work? We imagine a walk-through, cooking-show-like presentation that goes from A (here are some texts) to B (here is a visualization). Between A and B there are many difficult and perilous interactions with shell scripts, MALLET extrusions, statistics, spreadsheets, and graphing tools. While we two are probably not capable of getting from A to B with elegance, flailing about in a group, roughing out a workflow, getting advice from sundry THATCampers, and making time for questions would be generally instructive—or so we submit.

    (2) An alternative approach assumes some basic success with topic-modeling, and focuses instead on working with the cooked results. How can my-mine-mein data (we would bring something to the session and invite others to do the same) be interpreted, processed, and visualised? This secondary concern may even be included in the visualization session that has already been proposed.

    Both bits assume a willingness to wield the MALLET and do some topic modeling. We aim primarily at a how-to and hack-and-help, and not a discussion of the pros and cons of topic modeling or text-mining in general.
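For the curious, the A-to-B skeleton of the first approach fits in two MALLET commands. Paths and parameters here are illustrative (this assumes a MALLET 2.x install and a texts/ directory with one plain-text file per document); the surrounding wrangling of stopwords, spreadsheets, and graphs is where the session’s real flailing would happen.

```shell
# Step A -> intermediate: turn a directory of plain-text files
# into MALLET's binary feature format.
bin/mallet import-dir \
  --input texts/ \
  --output texts.mallet \
  --keep-sequence \
  --remove-stopwords

# Intermediate -> step B: train a topic model. topic-keys.txt lists
# the top words per topic; doc-topics.txt gives per-document topic
# proportions, ready for a spreadsheet or graphing tool.
bin/mallet train-topics \
  --input texts.mallet \
  --num-topics 20 \
  --num-iterations 1000 \
  --output-topic-keys topic-keys.txt \
  --output-doc-topics doc-topics.txt
```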

  • Between capta and data

    0

My session is on making sure the digital is humanistic, and on a tension I’ve seen running between a couple of lines of argument in DH literature.

Humanists have long recognized that our needs are rarely met by tools imported from the natural or social sciences. Echoing this longstanding objection, Johanna Drucker argued last year that humanists require fully humanistic representations that glory in ambiguity, in information as taken and interpreted, not given, and that the most common uses of software built for representing data, whether spatial or tabular, rarely meet this goal.

    Yet digital humanists have also been attuned to the ways that new technologies, tools not originally built for humanistic ends, can lead to intellectual breakthroughs. Databases are their own genres to be read. GIS brings to the eye patterns unidentifiable without visualization. These and other technologies yield new interpretations of the phenomena they represent by breaking them into discrete events and units that might be measured, analyzed, seen.

    I would like to see a session that deals with the seeming tension in these two lines of argument: one, that new technologies yield intellectual breakthroughs and contribute to new humanistic knowledge precisely because of their ability to cast information in different light; the other, that these new technologies often fail to bring any insight identifiable as specifically humanistic. So let’s talk about specific technologies and how they might bring new insight because they lead to new digital and humanistic readings.

    Oh, and I also really want to attend Eric’s session.

  • [First Beginning]

    3

During the 1980s, as a poet in Alaska, I became the amanuensis of the Dena’ina Athabaskan writer Peter Kalifornsky (1911-1993). He was among the last speakers of his language and was the first to bring it into writing. Indeed, he was considered a literary stylist. Perhaps as interesting, he became the scholar of his language, spoken by the Kenaitze people on the Kenai Peninsula, devising a theory of spelling, and explicating the Old Dena’inas’ theory of knowledge, their poetics, their spiritual cosmos, the power of the animals, law and education, their encounters with the Russians, and much else. He reflected on the very meaning of writing, and on what the language revealed to him as he delved into the very process of writing.

    As he talked, I wrote. He let me ask questions. We conversed over a period of about five years, me writing as he spoke, he taking new thought from my questions. This was the spiritual, intellectual, social history of a people, come down to the mind of one man, the last one of his generation educated (he said) in the old stories.

    Together, we made a two-volume work entitled “From the First Beginning, When the Animals Were Talking.” Very thick, heavy with cross-referencing footnotes, impossible to page through without having to use all your fingers as place-holders. But full of wonders!

    I am organizing a digital edition containing this manuscript, commentaries, digital images of two of his Dena’ina manuscripts, audio files of all his writings, as recorded by himself, and visual matter. And I am experimenting with a demonstration for the iPad, using the new iBooks Author. But my first task — toe in the water — is to put up a work-in-progress site on Omeka.net, to be called First Beginning, a journal of development. Here, I want to learn how to use Neatline, which looks as though it’s going to be a good application for relating and annotating visual, textual, and geographical cross-references.

    Thanks, finally, to the Virginia Foundation for the Humanities, where I began this work several years ago and am still an Affiliated Fellow.
