Archive for the ‘Visualization’ Category

  • Neatline Hands-On

    Following up on the Neatline workshop yesterday afternoon, I propose a more “hands-on” session in the Scholars’ Lab fellows’ lounge (complete with Neatline on large, shiny iMac screens…) in which we can play around with Omeka and Neatline, experiment with Geoserver, and talk about possible use cases and ideas for new features.

    As I mentioned in the workshop yesterday, developing “framework” applications like Neatline is a challenge because you’re constantly walking a line between making the software too abstract (too few features, not quite right for any one project) and too concrete (hyper-specific features that work great for a specific use case, but not for anything else).

    What kinds of new features in the Neatline application would be most useful? How closely should Neatline be coupled with the underlying Omeka collection? I’d love to sit down and talk about specific project ideas and generally think about the direction for ongoing development.

  • Metadata: describe, view and do

    Everyone so far has posted some really cool ideas, and my proposal is to latch onto what you guys are already talking about but delve into the description, access, and output aspects of your collections and projects. As the Metadata Librarian in the Cataloging department here at UVa, my interest is in how you guys describe the stuff you have, like this travel journal and the photographs and local history collections made up of lots of different formats, and whether it is possible to get non-experts in on the description (and what it would look like if we did). I’m also interested in exploring how we can use the metadata (and choose the metadata wisely) so that it can do stuff for us, like making nifty visualizations.
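
    To make the “do stuff for us” part a little more concrete, here is a minimal sketch, in Python, of the sort of thing well-described metadata enables: counting a descriptive field across a handful of invented records and turning the counts into a quick chart. The records, field names, and chart are purely illustrative, not anyone’s real collection.

    ```python
    # Sketch: turning descriptive metadata into a quick visualization.
    # The records below are invented placeholders; real ones might come
    # from an Omeka export, a MARC dump, or a plain spreadsheet.
    from collections import Counter

    import matplotlib.pyplot as plt

    records = [
        {"title": "Travel journal, vol. 1", "format": "manuscript", "date": "1851"},
        {"title": "Main Street photograph", "format": "photograph", "date": "1910"},
        {"title": "County fair broadside", "format": "broadside", "date": "1895"},
        {"title": "Travel journal, vol. 2", "format": "manuscript", "date": "1852"},
    ]

    # Count how many items share each value of the "format" field.
    counts = Counter(r["format"] for r in records)

    # A bar chart of items per format: only possible because the field
    # was described consistently in the first place.
    plt.bar(list(counts.keys()), list(counts.values()))
    plt.ylabel("Number of items")
    plt.title("Items by format")
    plt.show()
    ```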

  • Mine your own business

    Yesterday afternoon, Brad Pasanek and I decided to play at text-mining. We started working with MALLET and this GUI tool but were soon lost in the mine, buried in code, with nary a respirating canary, shafted.

    Our proposal includes two potential approaches:

    (1) A session could look at how a scholar might begin to use topic modeling in the humanities. What do those of us with limited technical nous need to know in order to begin this type of work? We imagine a walk-through, cooking-show-like presentation that goes from A (here are some texts) to B (here is a visualization). Between A and B there are many difficult and perilous interactions with shell scripts, MALLET extrusions, statistics, spreadsheets, and graphing tools. While we two are probably not capable of getting from A to B with elegance, flailing about in a group, roughing out a workflow, getting advice from sundry THATCampers, and making time for questions would be generally instructive, or so we submit.
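
    Not MALLET itself, but to give a rough sense of what the A-to-B pipeline looks like in code, here is a minimal sketch using the gensim library’s LDA implementation as a stand-in; the texts/ folder, the number of topics, and the tokenization are placeholder choices, not a recommended recipe.

    ```python
    # A-to-B sketch: plain-text files in, topic keywords out.
    # gensim's LDA stands in for MALLET; "texts/" is a hypothetical folder.
    import glob

    from gensim import corpora, models

    # A: read some texts and tokenize them very crudely (real work needs
    # better cleaning, stopword removal, and so on).
    documents = []
    for path in glob.glob("texts/*.txt"):
        with open(path, encoding="utf-8") as f:
            documents.append(f.read().lower().split())

    # Build the vocabulary and the bag-of-words corpus.
    dictionary = corpora.Dictionary(documents)
    corpus = [dictionary.doc2bow(doc) for doc in documents]

    # Train a small topic model.
    lda = models.LdaModel(corpus, num_topics=10, id2word=dictionary, passes=10)

    # B, almost: print the top words for each topic; a visualization
    # would start from these weights and the per-document proportions.
    for topic_id, words in lda.show_topics(num_topics=10, num_words=8, formatted=False):
        print(topic_id, [w for w, _ in words])
    ```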

    (2) An alternative approach assumes some basic success with topic modeling, and focuses instead on working with the cooked results. How can my-mine-mein data (we would bring something to the session and invite others to do the same) be interpreted, processed, and visualized? This second concern might even fold into the visualization session that has already been proposed.
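
    For the “cooked results” side, the usual starting point is the document-by-topic matrix that a trained model produces. A minimal sketch of turning those proportions into a simple chart, using made-up numbers rather than real output:

    ```python
    # Sketch: visualizing per-document topic proportions.
    # The matrix below is invented illustrative data; in practice it would
    # come from MALLET's doc-topics output or a library's equivalent.
    import numpy as np
    import matplotlib.pyplot as plt

    # Rows are documents, columns are topic proportions (each row sums to 1).
    doc_topics = np.array([
        [0.6, 0.3, 0.1],
        [0.2, 0.5, 0.3],
        [0.1, 0.2, 0.7],
    ])

    # Stacked bars: one bar per document, segments showing topic shares.
    bottom = np.zeros(len(doc_topics))
    for topic in range(doc_topics.shape[1]):
        plt.bar(range(len(doc_topics)), doc_topics[:, topic], bottom=bottom,
                label=f"Topic {topic}")
        bottom += doc_topics[:, topic]

    plt.xlabel("Document")
    plt.ylabel("Topic proportion")
    plt.legend()
    plt.show()
    ```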

    Both bits assume a willingness to wield the MALLET and do some topic modeling. We aim primarily at a how-to and hack-and-help, and not a discussion of the pros and cons of topic modeling or text-mining in general.

  • Distributed Scholarly Collaboration

    Now that I’m out of the application woods, I’d love to have a conversation about the more difficult DH task I’ve been working on: how to form, organize, and motivate distributed scholarly platforms, like the one I’m contemplating under the “Modernist Letters Project.” I think building the infrastructure for quicker, more transparent, open-source scholarly knowledge creation and review will be one of the major projects for the next decade, as it has already been in the case of NINES. And I tend to think that the new platforms that are successful will be both field- and object-specific (thus, in my field, the Modernist Journals Project, now the Modernist Versions Project, etc.).

    I’ll work through today referencing and organizing this problem, but it seems to me that, first of all, it should be approached by examining the following questions (I’d appreciate others’ thoughts on these, or sources to look at):

    A. What has worked (NINES, Whitman Archive) and why?

    B.  What hasn’t been successful?

    C. What sorts of contracts for collaboration are most successful? What organizational structures, forms? (I know Lynn Siemens has written a good deal on this.)

    D. How does the work get incentivized? How credited? What are good models for developing pedagogical units, etc. (an interest of one of my collaborators)?

    E. How do we include the non-digital (native) scholars in the field?  What sorts of  ongoing mechanisms for peer-review could be included?

    I’ll come back and reference this a bit later, as well, once I’ve gone through some of the available material.  Folks interested in participating in the Modernist Letters Project are particularly welcome to get involved here, of course.

  • Would you like fries with that?

    No, I’m not talking about employment and DH or #alt-ac or anything like that… I’m picking up a conversation that Tom Scheinfeldt addresses in his blog post “Where’s the Beef? Does Digital Humanities Have to Answer Questions?” The post, republished in Debates in the Digital Humanities, likens DH to the role Robert Hooke played for 40 years, until his death in 1703. As someone whose job it was “to prepare public demonstrations of scientific phenomena for the fellows’ meetings,” Hooke demonstrated scientific curiosities that at first had no apparent purpose. Answers did come, eventually, but not until the 18th and 19th centuries.

    I raise this in light of what many of us will be doing in the workshops (and having read about the research other THATCampVAers have discussed: GIS, sound, image modeling, and so on, as well as my own work with visualizations), and I wonder if at some point we all don’t address a similar question: “What do visualizations in the humanities really do?” Are we at a point where we could argue that visualizations produce “new” knowledge?

    I am coming at these questions from two perspectives. First, as someone who uses visualizations to explain ways to reconsider the structural underpinnings of a particular genre of poetry: readers’ expectations of digitally enabled visualizations are often that they should “tell us something new,” and yet most visualizations don’t, not yet anyway; most tell us what we already know, differently. Second, as someone who works in a disciplinary area intimately concerned with the historical tensions between meaning-making in spatial and temporal forms of representation. Western thought sets up a binary relationship between images and words, and images are frequently viewed with suspicion: how do we know what they say? For this reason, images on their own aren’t really considered “scholarship.” That might change, but it hasn’t yet. Still, as we make spatial arguments to address humanities questions, what role can we see visualizations having in the changing climate of scholarly conversation/publication?

    So, I guess what I’m saying (rather circuitously) is that I’d like to have a session in which we think through what visualizations in the humanities do. Considered in conjunction with the workshops and the “show and tell” session on Saturday afternoon, I’m interested in thinking about: What are visual analyses? What can we reasonably assert is their value now, and their potential value? What is the value in displaying humanities data if it doesn’t tell us something we don’t already know? Are visualizations the “fries” to the DH burger, or are they a meal of their own? (Ok, I’ve extended that metaphor *way* too far… and now I’m hungry.)

    Looking forward to seeing everyone this weekend!
