Thank you again for making THATCamp Virginia such an absolutely fantastic event! I wanted to follow up with just a couple of final things:
Evaluation
The good folks at THATCamp Central have asked us to ask you to fill out a brief evaluation of your THATCamp experience, which can be found here: www.surveymonkey.com/s/thatcampeval. We’re always interested in feedback so we can make THATCampVA (and all THATCamps) as successful as possible. (You can see anonymous results here.)
Capturing the THATCamp experience
Please feel free to continue to make use of the THATCampVA blog for any ongoing post-THATCamp discussion. And if you post any related thoughts to your own blog, feel free to share a link with us on the THATCampVA blog. Future scholars of THATCamps appreciate it when tweets, blog posts, Flickr pictures, or other social media get tagged with “#thatcamp”, since that makes the material easier to find. You might also be interested in contributing to the THATCamp documentary project or to the Proceedings of THATCamp. If you want to make a nomination (self- or otherwise) to be part of the Proceedings, here’s how it’s done:
Again, thank you all so much for coming and making THATCampVA so rewarding, and we hope to see you again next time!
Follow this link to see the question key and results of the THATCamp 2012 Zen Scavenger Hunt. Assembly and presentations took place at 3:30 today. The slide show is rough but the images are cool, and we had a blast doing it.
Following up on the Neatline workshop yesterday afternoon, I propose a more “hands-on” session in the Scholars’ Lab fellows’ lounge (complete with Neatline on large, shiny iMac screens…) in which we can play around with Omeka and Neatline, experiment with Geoserver, and talk about possible use cases and ideas for new features.
As I mentioned in the workshop yesterday, developing “framework” applications like Neatline is a challenge because you’re constantly walking a line between making the software too abstract (not enough features, not just right for any project) and too concrete (hyper-specific features that work great for a specific use case, but not for anything else).
What kinds of new features in the Neatline application would be most useful? How closely should Neatline be coupled with the underlying Omeka collection? I’d love to sit down and talk about specific project ideas and generally think about the direction for ongoing development.
My proposal is to host an old-school hackfest covering technologies useful for humanist inquiry. These would be beginner-to-intermediate friendly, though if there is interest, we could also do a much deeper dive into any one of these areas. These are some ideas, but I would love to hear if others have ideas they would like to explore.
There is a lot of excitement in various developer communities for a new server-side JavaScript platform named node.js. Built on the Google v8 JavaScript runtime, node.js allows developers to quickly write real-time applications using an evented model. This session will take a look at how one interesting Node application (HUBOT) is constructed, the technologies used, how it’s deployed, and how it can be extended to implement an IRC bot.
HTML 5 is a buzzword a lot of people talk about, but few actually know what it is. This session would take a look at HTML 5 technologies (e.g., WebGL, canvas, audio, video) and how people are beginning to use these components in interesting ways, perhaps even putting them together in a browser-based action game.
Have an idea for a plugin for Omeka? Don’t know where to start? Stuck somewhere? This session would explain the basics of how Omeka’s plugin architecture works, how to get going, and some tricks we’ve learned along the way developing Neatline (and other) plugins.
At last year’s THATCampVA, we started hacking on a Ruby gem to allow developers to work with the Zotero APIs. I started to refactor this code to work on MRI 1.9.3 and use a modular HTTP transport mechanism. This session would hack on adding features and continue the refactor. You can check out the code on GitHub. If we’re really ambitious, we could even hack on the node client I began experimenting with.
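For anyone who hasn’t poked at the Zotero web API that the gem wraps, here is a rough sketch of the kind of request involved, written in Python with the requests library purely for illustration (it is not the Ruby gem itself). The user ID and key are placeholders, and the endpoint path and “key” parameter are from memory, so check them against the current Zotero API documentation before leaning on them.

```python
# Illustrative only: a single raw call to the Zotero web API with requests.
# The library ID and API key below are placeholders, and the endpoint and
# parameter names should be verified against the official Zotero API docs.
import requests

ZOTERO_USER_ID = "123456"        # hypothetical user library ID
ZOTERO_API_KEY = "your-api-key"  # hypothetical API key

def fetch_items(limit=5):
    """Fetch a handful of items from a Zotero user library."""
    url = f"https://api.zotero.org/users/{ZOTERO_USER_ID}/items"
    resp = requests.get(url, params={"format": "json", "limit": limit,
                                     "key": ZOTERO_API_KEY})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for item in fetch_items():
        print(item.get("data", {}).get("title", "(untitled)"))
```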
As someone who hails from Silicon Valley and has a lot of friends working at places like Google, Facebook, LinkedIn, etc., I’ve been able to take advantage of a lot of input and exposure to new technologies as I’ve been exploring the possibilities for DH to inform my own research in Renaissance drama.
The University typically sets itself apart from the private sector, especially in the humanities. There’s much more back-and-forth in the sciences, though, with professors consulting for industry and industry leaders returning to lecture at the academy. I wonder whether DH should or will provide a similar bridge between private and public: tech companies might produce a tool useful for digital humanists, and bibliographers/librarians/archivists/etc. might be able to do consulting for projects such as Google Books.
This perhaps also relates to Eric’s public humanities session. We all bemoan the awful lack of metadata in the Google Books digitization project, but the truth is those texts will be the ones that the general public is most likely to access – not the carefully curated archives that DH academics have been painstakingly putting together (at the moment anyway). Is partnership with industry our best shot at getting the right (or at least better) information about humanist subjects out there? What are the possibilities or ramifications of flirting with the line between public and private DH?
In line with a couple of other posts, I’d love to have a discussion about the audience for digital humanities projects. Do we have any obligations in terms of who we create these for? What are the expectations of them? How can we widen the scope of who is being reached? It’s exciting to think that tools are being created that can be used for instruction in the classroom, to advance scholarly work, and to create communities; I wonder how to balance all these different roles within one project. Or should they be created with one goal in mind? It’d be great to hear first-hand from those who develop projects to see how these considerations are evaluated and used to shape their work. I’d also love to hear what others think about the different environments generating this work. As a student newly entranced by the excitement of digital humanities, I’m still a little unsure of the lay of the land. Perhaps we could talk about the different ways and places in which people come across this work.
Everyone so far has posted some really cool ideas, and my proposal is to latch onto what you guys are already talking about but delve into the description, access, and output aspects of your collections and projects. As the Metadata Librarian in the Cataloging department here at UVa, my interest is in how you describe the stuff you have, like this travel journal and photographs and local history collections made up of lots of different formats, and whether it is possible to get non-experts in on the description (and what it would look like if we did). I’m also interested in exploring how we can use the metadata (and choose the metadata wisely) so that it can do stuff for us, like making nifty visualizations.
Yesterday afternoon, Brad Pasanek and I decided to play at text-mining. We started working with MALLET and this GUI tool but were soon lost in the mine, buried in code, with nary a respirating canary, shafted.
Our proposal includes two potential approaches:
(1) A session could look at how a scholar might begin to use topic modeling in the humanities. What do those of us with limited technical nous need to know in order to begin this type of work? We imagine a walk-through, cooking-show-like presentation that goes from A (here are some texts) to B (here is a visualization). Between A and B there are many difficult and perilous interactions with shell scripts, MALLET extrusions, statistics, spreadsheets, and graphing tools. While we two are probably not capable of getting from A to B with elegance, flailing about in a group, roughing out a workflow, getting advice from sundry THATCampers, and making time for questions would be generally instructive—or so we submit.
(2) An alternative approach assumes some basic success with topic-modeling, and focuses instead on working with the cooked results. How can my-mine-mein data (we would bring something to the session and invite others to do the same) be interpreted, processed, and visualised? This secondary concern may even be included in the visualization session that has already been proposed.
Both bits assume a willingness to wield the MALLET and do some topic modeling. We aim primarily at a how-to and hack-and-help, and not a discussion of the pros and cons of topic modeling or text-mining in general. (A rough sketch of the kind of A-to-B pipeline we have in mind appears below.)
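Purely as an illustration of the shape of that A-to-B pipeline (and not the MALLET route itself), here is a toy topic-modeling sketch in Python using the gensim library; the documents, preprocessing, and number of topics are all placeholders.

```python
# A toy topic-modeling pipeline, sketched with gensim rather than MALLET,
# just to show the A (texts) to B (topics) shape of the workflow.
# The documents and num_topics value are placeholders, and real preprocessing
# would lowercase, strip stopwords, and perhaps lemmatize.
from gensim import corpora, models

documents = [
    "the canary sang in the coal mine",
    "topic models mine latent themes from texts",
    "poems and novels yield distributions of words",
]

# A: tokenize the texts
texts = [doc.lower().split() for doc in documents]

# Build a vocabulary and a bag-of-words corpus
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Train a small LDA model
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)

# B: inspect the topics; these word lists are what you would then visualize
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```

MALLET’s command-line route produces comparable topic-word lists, which is the point: once you have them, the B step (visualization) can be tackled with ordinary charting or spreadsheet tools.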
My session is about making sure the digital is humanistic, and about a tension I’ve seen running between a couple of lines of argument in the DH literature.
Humanists have long recognized that our needs are rarely met by tools imported from the natural or social sciences. Echoing this longstanding objection, Johanna Drucker argued last year that humanists require fully humanistic representations that glory in ambiguity, in information as taken and interpreted, not given, and that the most common uses of software built for representing data, whether spatial or tabular, rarely meet this goal.
Yet digital humanists have also been attuned to the ways that new technologies, tools not originally built for humanistic ends, can lead to intellectual breakthroughs. Databases are their own genres to be read. GIS brings to the eye patterns unidentifiable without visualization. These and other technologies yield new interpretations of the phenomena they represent by breaking them into discrete events and units that might be measured, analyzed, seen.
I would like to see a session that deals with the seeming tension in these two lines of argument: one, that new technologies yield intellectual breakthroughs and contribute to new humanistic knowledge precisely because of their ability to cast information in a different light; the other, that these new technologies often fail to bring any insight identifiable as specifically humanistic. So let’s talk about specific technologies and how they might bring new insight because they lead to new digital and humanistic readings.
Oh, and I also really want to attend Eric’s session.
During the 1980s, as a poet in Alaska, I became the amanuensis of the Dena’ina Athabaskan writer Peter Kalifornsky (1911-1993). He was among the last speakers of his language and was the first to bring it into writing. Indeed, he was considered a literary stylist. Perhaps as interesting, he became the scholar of his language, spoken by the Kenaitze people on the Kenai Peninsula, devising a theory of spelling, and explicating the Old Dena’inas’ theory of knowledge, their poetics, their spiritual cosmos, the power of the animals, law and education, their encounters with the Russians, and much else. He reflected on the very meaning of writing, and on what the language revealed to him as he delved into the very process of writing.
As he talked, I wrote. He let me ask questions. We conversed over a period of about five years, me writing as he spoke, he taking new thought from my questions. This was the spiritual, intellectual, social history of a people, come down to the mind of one man, the last one of his generation educated (he said) in the old stories.
Together, we made a two-volume work entitled “From the First Beginning, When the Animals Were Talking.” Very thick, heavy with cross-referencing footnotes, impossible to page through without having to use all your fingers as place-holders. But full of wonders!
I am organizing a digital edition containing this manuscript, commentaries, digital images of two of his Dena’ina manuscripts, audio files of all his writings, as recorded by himself, and visual matter. And I am experimenting with a demonstration for the iPad, using the new iBooks Author. But my first task — toe in the water — is to put up a work-in-progress site on Omeka.net, to be called First Beginning, a journal of development. Here, I want to learn how to use Neatline, which looks as though it’s going to be a good application for relating and annotating visual, textual, and geographical cross-references.
Thanks, finally, to the Virginia Foundation for the Humanities, where I began this work several years ago and am still an Affiliated Fellow.
Hailing as I originally do from the museum and library world, I have a particular interest in the more outward-facing aspects of the humanities–and in the digital humanities, the aspects of the field that might particularly be considered “public” or “open.” I’d love to get into a conversation about this stuff. Maybe we can take a look at how audiences are examined in digital projects, or talk about the degree to which digital humanities projects are (or aren’t) by their very nature forms of public scholarship. What makes a scholarly effort “public” in the first place, and is there anything particular to digital work that supports or undermines that idea? Maybe we can talk about crowdsourcing and its role in digital research and scholarship. In short, if the phrase “public humanities” catches your attention, I’d love to chat.
I’m interested in learning about various resources for the curation and exhibition of digital archives and scholarly editions with extensive critical apparatus. While I have my own project I’m looking to start this summer, which I describe below, I’m interested in general discussion of what’s available, what their strengths and weaknesses are, and how they play with other online resources.
In particular, I’m looking to create an electronic edition of the 100-page travel journal and accompanying 200 photographs Walter J. Ong kept during the three years he spent traveling throughout Europe doing research for his dissertation, which he published as Ramus, Method, and the Decay of Dialogue and The Ramus and Talon Inventory. As these three years were formative for Ong’s academic career (the people he met, a series of lectures he gave in France on behalf of the US State Department, insights he had, and the connections he maintained via correspondence), I’d like to use this route book as a framework for presenting and contextualizing the thousands of pages of material in the Walter J. Ong Manuscript Collection dating to this period.
Saint Louis University’s Archives currently use CONTENTdm to host digital materials, and early on, when I was helping process the Collection, we created a web site to make some select items available. I’m finally starting to think about this project seriously, and I’m assuming I want something more flexible and elegant than CONTENTdm. Based upon my preliminary searching, I’m assuming Omeka may be the best resource for my needs.
I attended a symposium earlier this month entitled “Sharing and Sustaining Research Data,” wherein the participants specifically discussed scientific datasets. One of the presentations that really excited me was David E. Schindel’s talk on “DNA Barcoding and Early Data Release.” During this presentation, Schindel discussed the Fort Lauderdale Principles (2003), which he described as a new paradigm for accelerating cybertaxonomy development (much like the Bermuda Principles of 1996, which continue to encourage the rapid dissemination of genomic data).
I’ll summarize the Fort Lauderdale Principles as follows: when it comes to sharing data, there are three groups responsible for ensuring the success of the “community resource system”:
Furthermore, producers must commit to “making data broadly available prior to publication,” and users should respect the expressed research intents of the producers. This requires that data be published before the research (or, as the case may be in many DH projects, even before the website).
Therefore, I’d like to discuss how such data-sharing tenets do or do not fit into the current Digital Humanities landscape. I have seen online discussions and articles, such as Christine Borgman’s “The Digital Future is Now: A Call to Action for the Humanities,” but what have been the results so far? Or, if I’m completely mistaken and such systems for data sharing already exist (maybe GitHub has become the de facto standard, for instance?), I’d love to learn more about those systems during this THATCamp, too.
[NOTE: I proposed this session for THATCamp Prime last summer and it didn’t fly. If at first you don’t succeed…]
According to Urban Dictionary (most credible source EVER!), the phrase “speak truth to power” means:
A phrase coined by the Quakers in the mid-1950s. It was a call for the United States to stand firm against fascism and other forms of totalitarianism; it is a phrase that seems to unnerve the political right, with reason.
or
A vacuous phrase used by some on the political Left, especially the denizens of the Democratic Underground website. Ostensibly, it means to verbally confront or challenge conservative politicians and conservative ideals using the overwhelmingly logical and moral arguments of liberalism. Doing so would, naturally of course, devastate the target individual, leaving them a stuttering, stammering bowl of defeated jelly. That or cause them to experience an epiphany that would have such a profound, worldview-changing effect that they would immediately go out and buy a Che t-shirt and start reading Noam Chomsky. Unfortunately, the individuals who would use this phrase have little or no understanding of either liberalism or conservatism, and the “truth” that they speak consists mainly of epithets and talking points, memorized by rote, which they learned from other, equally vapid liberals. As such “speak truth to power” joins other feel-good but ultimately meaningless gems from Leftist history such as “right on”, “up against the wall”. “question everything” and the ever-popular “fuck you, pig”.
(Well, OK, then…)
Seeking out slightly more credible sources for the origin of the phrase leads one to a Quaker pamphlet from the 1950s. As a “trained” political scientist, I think of Aaron Wildavsky’s book and, more recently, a book by Manning Marable. Across these sources, I believe the phrase is about questioning the reasoning of “the state”; it’s about bringing information (maybe evidence?) to the table with those who are in formal positions of power and who may not want to “hear” it.
I suspect other THATCamp attendees find themselves, as I do, in positions with opportunities to “speak truth to power.” I get coded as “the technology guy” and “volunteered” onto any/all task forces and/or committees (let’s call them task committees) that have any connection at all to technology. Often, those task committees are led by someone with formal decision-making authority who may or may not *really* want to hear what you say.
We all know the perils of committee work, but there are obvious advocacy opportunities presented by this work as well. So I’m proposing a session where we share advocacy strategies. We might discuss our “tactics” within the realm of formal committee work, and even outside it. The overlap with Mark Sample’s ideas around “tactical collaboration” is obvious, so perhaps we can convince Mark to grace us with his presence (and his ideas) as part of the session.
Hey, campers! If you haven’t posted your session idea yet, please remember: these can (and, really, in the best spirit of THATCamp, should) be tentative, informal, even half-baked. Bake ’em on Saturday with your new friends and colleagues! Our suggested length for posts is well under 300 words.
There’s also no need to prepare for THATCamp sessions. If you propose a topic, you should be willing to get the ball rolling with a question or two, or a very quick demo (preferably not even of your own stuff). This keeps the bar for entry low and the ideas flowing — you can give your paper at plenty of other conferences!
The more session ideas we see on the blog over the next day or so, the more fun we’ll all have on Saturday morning, watching THATCampVA staff negotiate with the crowd to create a program that belongs to everybody. We can’t wait to see you in Charlottesville this weekend!
Now that I’m out of the application woods, I’d love to have a conversation about the more difficult DH task I’ve been working on: how to form, organize, and motivate distributed scholarly platforms, like the one I’m contemplating under the “Modernist Letters Project.” I think building the infrastructure for quicker, more transparent, open-source scholarly knowledge creation and review will be one of the major projects of the next decade, as it has already been in the case of NINES. And I tend to think that the new platforms that succeed will be both field- and object-specific (thus, in my field, the Modernist Journals Project, now the Modernist Versions Project, etc.).
I’ll be working today on referencing and organizing this problem, but it seems to me that it should first be approached by examining the following questions (I’d appreciate others’ thoughts about them, or sources to look at):
A. What has worked (NINES, Whitman Archive) and why?
B. What hasn’t been successful?
C. What sorts of contracts for collaboration are most successful? What organizational structures, forms? (I know Lynn Siemens has written a good deal on this.)
D. How does the work get incentivized? How credited? What are good models for developing pedagogical units, etc. (an interest of one of my collaborators)?
E. How do we include the non-digital (native) scholars in the field? What sorts of ongoing mechanisms for peer-review could be included?
I’ll come back and reference this a bit later, as well, once I’ve gone through some of the available material. Folks interested in participating in the Modernist Letters Project are particularly welcome to get involved here, of course.
No, I’m not talking about employment and DH or #alt-ac or anything like that… I’m picking up a conversation that Tom Scheinfeldt addresses in his blog post “Where’s the Beef? Does Digital Humanities Have to Answer Questions?” The post, republished in Debates in the Digital Humanities, likens DH to the role Robert Hooke played for 40 years until his death in 1703. As someone whose job it was “to prepare public demonstrations of scientific phenomena for the fellows’ meetings,” Hooke demonstrated scientific curiosities that at first had no apparent purpose. Answers did come, eventually, but not until the 18th and 19th centuries.
I raise this in light of what many of us will be doing in the workshops (and having read about research other THATCampVAers have discussed–GIS, sound, image modeling, etc.–as well as my own work with visualizations), and I wonder if at some point we all don’t address a similar question: “What do visualizations in the humanities really do?” Are we at a point where we could argue that visualizations produce “new” knowledge? I am coming at these questions from two perspectives. First, as someone who uses visualizations to explain ways to reconsider the structural underpinnings of a particular genre of poetry. Readers’ expectations of digitally enabled visualizations are often that they should “tell us something new.” And yet, most visualizations don’t–not yet, anyway. Most tell us what we already know, differently. Second, I work in a disciplinary area intimately concerned with the historical tensions between meaning-making in spatial and temporal forms of representation. Western thought creates a binary relationship between images and words, and images are frequently viewed with suspicion. How do we know what they say? For this reason, images on their own aren’t really considered “scholarship.” That’s something that might change, but hasn’t yet. However, as we make spatial arguments to address humanities questions, what role can we see visualizations having in the changing climate of scholarly conversation/publication?
So, I guess what I’m saying (rather circuitously) is that I’d like to have a session in which we think through what visualizations in the humanities do. Considered in conjunction with the workshops and the “show and tell” session on Saturday afternoon, I’m interested in thinking about: What are visual analyses? What can we reasonably assert is their value now and their potential value? What is the value in displaying humanities data if it doesn’t tell us something we don’t already know? Are visualizations the “fries” to the DH burger, or are they a meal of their own? (Ok, I’ve extended that metaphor *way* too far… and now I’m hungry.)
Looking forward to seeing everyone this weekend!
Given all the tools most of us use to manage our daily reading habits, I’d like to hear how these could be used in the classroom. Not every class wants to engage with the news of the day, but I can imagine many advantages to having students engage with very recent events. Since we obviously can’t build these into a syllabus beforehand, I’m wondering if anyone has experience with (or wants to brainstorm about) using RSS feeds, or something else, in a class: how do we keep our students and ourselves up to date on recent events? How could we ensure we all have the same focus? Or the same information? Or usefully contrasting information (can some students watch Fox, some the Daily Show)?
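As one concrete (and entirely hypothetical) illustration of the mechanics, a few lines of Python with the feedparser library can pull the latest headlines from whatever feed a class agrees to follow; the feed URL below is only a placeholder.

```python
# A minimal sketch of pulling recent headlines from an RSS feed with feedparser.
# The feed URL is a placeholder; any class-relevant feed would work the same way.
import feedparser

FEED_URL = "https://example.com/news/rss"  # hypothetical feed

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Untitled feed"))
for entry in feed.entries[:5]:
    # Each entry carries at least a title and a link that could be posted
    # to a course blog or discussion thread before class.
    print("-", entry.title, entry.link)
```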
I have three ideas for sessions this year — and will also be keeping one eye on this guy:
Pinterest Wunderkammer: For years, I’ve fantasized about creating the perfect interface for a digital humanities cabinet of wonders, but never had time to follow through. Have they beaten me to it? I didn’t pay much attention to Pinterest at first, but then started to see some startling collections. I especially find the temporal dimension fascinating: if you follow this woman’s feed, you can watch her move through varying aesthetic obsessions over time — coherent washes of color, for instance, even across diverse assemblages. So it’s fluid, performative collection-building — or beautifully diachronic fixing. There’s plenty to read about Wunderkammern, but I’d like to have a conversation with some immediate implications for building.
Quantified Self: At past THATCamps, I’ve co-hosted workshops and conversations on physical computing (especially wearables). I also started a Zotero group for research and inspiration on soft circuits. Now I’m getting interested in the “quantified self” movement (see Wolfram for an extreme example) and am thinking about melding the two. My FitBit has an API. My phone knows where I’ve been. Anybody else interested in the intersection of DH, quantified self, and physical computing?
Rethinking the Graduate “Methods” Course: I wrote this thing. Now I’m hosting these conversations and running this program. I also spend a lot of time thinking about how well qualified lots of these people are to help train the next generation of humanities faculty and knowledge workers. Wanna talk about it?
As the librarian for UVa’s School of Architecture, I work with many scholars who are deeply interested in the history of communities. Often, the interest is local (Charlottesville), but our historians and urban planners are also digging into communities throughout the United States and the world. While the research is often centered on architecture and urban planning, it extends to interdisciplinary aspects of food planning, use of public space, and many other directions. In terms of media, it encompasses images, texts, primary documents, maps, oral histories, planning documents, and just about anything else you can imagine. I find so many wonderful new ways of discovering local history resources, many of which are the direct result of DH technologies like Omeka, GIS, etc. But implementation is scattered, and often limited to the silo of a single institutional collection.
I’d love to create a vision for the ideal local history portal for researchers. I imagine that it would combine multiple aspects of some of my favorite sites (HistoryPin, WhatWasThere, the NYPL MapWarper, Visualizing Emancipation), along with characteristics of tools like Omeka (and I have a feeling I’ll be adding Neatline to that list soon). It would also need to transcend silos of individual institutional collections—bringing together photos, documents, etc. from the local historical society, university archives, local planning and preservation org, public library, and more—while allowing those institutions to promote and “brand” their own resources.
I’m hopeful that there’s a group of folks who might be interested in playing a game of “Imagine going to one site for a city/town and being able to…” I would guess that many of us will contribute knowledge of projects that are inching us closer to this research utopia, and we might also come up with some “boy, it would be great if someone developed…” ideas as well. At the end, we might walk away with a road map to some amazing possibilities, and hopefully some excited people who might want to collaborate to make that a reality.
Hi, my name is Vic. I’m one of the kids coming to THATCamp. My proposal is about Minecraft, which is kind of like a computer game, but it’s also not, really. You can run it on a Mac or a PC, and you can use it to build many cool things, from realism to abstraction. I’ve been playing Minecraft since the start — since it was released on May 17th, 2009 (when I was five years old).
I’m going to bring my Hackintosh and my Dad’s Mac laptop. I run a Minecraft server for me, my Dad, and my friends. I’d be happy to show it to you and teach you some building strategies. You can mine or build anything!
We are attending with our 13-year-old, who loves writing and digital photography. I’d like to know who else is bringing kids this year, and how we might engage them. Because of Mona’s schedule, we will (sadly) be late to or miss the Friday sessions. Maybe we can meet up with other families with kids at the dinner that evening?
I am a writer and writing teacher, and would be happy to host a creative writing workshop in the afternoon for any kids who want to get together to write. In particular (or instead), I am really intrigued by the possibilities for the Zen Scavenger Hunt described here. That seems like it’d be great for any age participant. I don’t know the UVa campus all that well, so I’d like to run it with someone who is more familiar with the surroundings. It is the kind of thing the kids and other participants can engage in all day (since there is no list until later), then gather in the afternoon or early evening before dinner to compare notes.
We could run this as a regular scavenger hunt instead, with a fairly general list that stimulates creativity (“an object that fits in your hand,” “something red,” “something that was once alive”) and/or have kids document with cameras, pen & paper, maps, and phones in case there is concern about collecting actual things (might get out of hand :-)… I would love to hear thoughts about how to put something together that is fun and flexible for families and younger participants to do.
If anyone’s interested, I’d be happy to organize a session introducing basic HTML and CSS. All you’d need to participate in this is a laptop with a good ol’ text editor and a browser. We can cover basic HTML markup and CSS syntax, and maybe end with some presentation/discussion of resources for going further.
If it turns out there isn’t enough time or space to do this, but folks are interested in it, we could always find time to talk in between sessions, or after the event.
I’m proposing a session to work on figuring out / optimizing the use of / discovering an excellent purpose for the iBooks Author app.
With my research assistant Lauren Burr, who has been moving all my teaching materials for the Multimedia course at the Digital Humanities Summer Institute from HTML to the iBook e-textbook format, I’ve been trying to explore the potentials and the limitations of this publishing and editing format. There have been some hiccups and some learning along the way, as well as some bug fixes from Apple.
So, I have a half-done textbook I can share around for us to play with, a real project that has hit some real obstacles, or we can all work on our own stuff together, or we can argue about proprietary formats and iDevices, too. I think it would be really fun to put this app through its paces in a non-hypothetical situation with lots of media (I’ve got galleries and movies and podcasts and such all through my book).
You’ll need a Mac with the (free) iBooks Author app installed, and an iPad to preview the e-book on.
One of the things I love most about digital humanities projects is the opportunity to make more concerted decisions about the design of the project, and about how people use or experience that project. It’s important to consider how the design of a DH project, be it a collection of artifacts, an online exhibit, or some work of digital scholarship, impacts the project’s overall argument or contribution to the field. This opportunity is of course also accompanied by uncertainty about how best to approach the issues of design for DH work.
I’m curious if others are interested in exploring this topic in discussion, maybe better articulating the issues we face in dealing with design and user experience for whatever digital humanities projects folks are working on.
Howdy, all. I’m Erin White, Web Systems Librarian at VCU Libraries in lovely Richmond, VA. I’m new to DH and this is my first THATCamp! So I am looking forward to meeting everyone and talking with you about your work.
I have had a hard time deciding what to propose here, so I am cheating and throwing out multiple proposals.
The other session I’d like to propose is a show-and-tell pedagogy session about making (better) use of the various digital tools now available, and especially your experiences with them in the classroom.
At UVA, the writing curriculum is based on the Little Red Schoolhouse curriculum pioneered by Wayne Booth and Greg Colomb, and a number of grad students here are hard at work producing a digital companion to this curriculum. But only with the greatest hesitation have I brought extracurricular digital tools into play, either in first-year writing or lit surveys, leading to abortive-at-best experiments with class Flickr streams, Twitter discussions, and blog posts. My pedagogical toolbox already feels dusty and out of touch, and I haven’t even been at this two full years yet!
I’d like to hear what has worked–or not worked–for others (wikis? nGrams?), and what seems to hold promise for the near future (Neatline? others?). Basically, I want to figure out ways to shake up my classroom, and others’ as well, and preferably in ways that get students more excited than apprehensive.
I’m proposing, as one session, participation in the 5K run being hosted by the English Department’s grad student association. The run will start at 10:30 just outside Alderman Library, so not far at all from the Scholars’ Lab. I also believe I’ll be able to get THATCampers access to shower and locker-room facilities nearby, though whether that is directly next to the library or one bus stop away is not yet nailed down.
The registration link is here: graduate.engl.virginia.edu/gesa/fivek.html
Hope to see some of you at the starting line!
Greetings, THATCampers!
We’re very much looking forward to seeing you all this weekend. We wanted to mention a few logistical things as we gear up:
Session proposals
As always, we’ll lead off with a reminder to add your brief session proposal to the THATCampVA blog. Only a few more days left! Also, please feel free to browse other posts and add your comments—the conversation there is a great way to get involved early.
Friday afternoon workshops
If you’re planning to come to a workshop on Friday, the workshops will begin at 3:00 and wrap up at about 5:00. Both are held here in the Scholars’ Lab in Alderman Library (directions and parking info). Note too that we’ll have your THATCampVA registration packets ready starting at 2:00 if you’d like to pick yours up on Friday before the workshop.
Friday night
On Friday night, plan to join us for food and libations at a nearby restaurant, Boylan Heights—home of gourmet burgers, salads, and more. The Scholars’ Lab staff will be found in the upstairs area of the restaurant starting at 6:00 p.m. There is a parking deck just behind Boylan Heights and the restaurant validates parking. It’s also within walking distance of the Scholars’ Lab.
Saturday morning
On Saturday, registration will open at 8:00 a.m. here in the Scholars’ Lab (directions and parking info). We’ll have breakfast goodies and coffee out at 8:30. The opening session—in which we’ll collectively organize the day—starts at 9:00 sharp.
Dork Shorts
Finally, during lunch on Saturday: Dork Shorts! These are two-minute elevator-pitch presentations where you’ll have the opportunity to introduce current projects, invite participants into a project, show a cool site, etc. These are even more informal than the regular sessions and the point is to do the introduction—folks can follow up with you afterwards for details. You’ll have access to a computer to show websites. Signups will be available on Saturday morning.
Keep an eye on the THATCampVA blog for more updates.
See you in a couple of days!
Various types of visualization tools for conceptualizing relationships between different types of media seem particularly hot in the DH community right now, and I’d like to explore the possibilities of these tools further. I’m a PhD student in English currently working on an interdisciplinary dissertation that focuses on connections between early photography and Victorian poetry, and thus my interest is primarily oriented towards networks between art and literature, although this topic could also be productively extended to involve other media (like sound or music, as Eric discusses in an earlier blog post).
I know that text analysis search interfaces like Voyant Tools and WordSeer can help me execute complex queries that would enable me to analyze the linguistic and rhetorical structures surrounding my search terms within selected databases (and thereby address the literary side of my project). Likewise, data visualization software like ImagePlot can enable me to explore patterns in large collections of images. I’m not sure how to use these tools to their full potential in my research, however, and I’d like to discuss how best to employ them in a practical sense. As my research focuses on intersections between literary and visual texts, I’m personally interested in investigating ways to combine these two areas of inquiry productively using existing online tools. (As I mention above, however, this session would certainly be relevant to other types of media.) The potential of these tools for expanding scholarship beyond disciplinary boundaries has not yet been fully realized, and I’d like to expand the discussion to consider how existing tools might be enhanced to better address the needs of interdisciplinary (and inter-media) research and scholarship.
In a more theoretical sense, I’d be interested in talking about how to use tools like these as jumping-off points for complex academic arguments about the relationships they represent. How can the use of dynamic interdisciplinary DH applications be integrated within the boundaries of the traditional static article, dissertation, or book?
A short and sweet session proposal based on discourse and creation:
I’d like to propose a session where we choose a place of current global significance outside of the United States. Using Google Maps as a window, I’d like for a group to then gather information and produce, in the limited time frame of the session, a Google Doc ethnography that tries to combine global statistics with real local knowledge and insight (restaurant reviews, local newspapers, and local blogs can all be combined into an ethnographic matrix of ideas). With so much emphasis on the global and on the transnational in both literary studies and the digital humanities, this session would be a real test of how these tools can help scholars gather local knowledge and form a starting point for ethnographic engagement with a location.
I’d like to propose a session to discuss what the digital humanities can (and cannot) do with binary image and sound files.
Part of my frequent dissatisfaction with recent experiments in data mining, computational distant reading, etc., I think, comes from a tension that’s been in place in DH for a long time. The same set of McGann-Druckerites (a group among which I count myself) who were raised up on a critical methodology that emphasizes the centrality to literary interpretation of material texts, bibliographic codes, etc.—those aspects of texts that are extremely difficult to capture in digital surrogate form—now finds itself in the midst of a digital scene in which plain text, whether marked up or not, is king. As often as not, scanned or photographed images more accurately capture the material situation of a book than marked text does—and text files, though they can operate as “phonotext” with the potential to make sound, as Garrett Stewart points out, cannot embody the sounded performance of poetry in the way audio recordings can. “Multimedia” was once the buzzword that best captured the promise of computers in culture, but those messy image, sound, and video files strewn all about the Internet have proven beyond the purview of most text-heavy DH research.
Some recent attempts to deal more readily with image and sound in DH suggest to me that there might be more we can do on this front:
I’m surely unaware of lots of great work being done on this front, and one of the purposes of the session would be to have a bit of a show-and-tell of that existing work. I’d also like to have a conversation about the possibilities for and limitations of multimedia data in relation to the digital humanities. How can we conceive of image and sound files not just as black boxes to be surrounded by metadata, but as data (or capta, as the case may be) in their own right? Do such files offer enhanced versions of the texts we work on, or are they in many respects impoverished? And of course, what knowledge can we actually produce by playing around with the multimedia treasure trove of the Internet?
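To make the “sound files as data” point concrete, here is a minimal sketch (Python with NumPy, mono 16-bit WAV assumed, placeholder filename) that computes a few figures directly from the samples rather than from any surrounding metadata.

```python
# Treating an audio file as data rather than a black box: read the raw samples
# of a (hypothetical) mono 16-bit PCM WAV file and derive a few simple figures.
import wave
import numpy as np

FILENAME = "reading.wav"  # placeholder path to a recorded poetry reading

with wave.open(FILENAME, "rb") as wav:
    rate = wav.getframerate()
    n_frames = wav.getnframes()
    samples = np.frombuffer(wav.readframes(n_frames), dtype=np.int16)

floats = samples.astype(np.float64)
duration = n_frames / rate           # length of the recording in seconds
rms = np.sqrt(np.mean(floats ** 2))  # a rough measure of overall loudness
peak = np.abs(floats).max()          # the loudest single sample

print(f"duration: {duration:.1f} s, RMS amplitude: {rms:.0f}, peak: {peak:.0f}")
```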
For some time, I’ve been interested in the similarities between two given texts. That similarity could be understood as textual (approximate string matching, longest common subsequence, etc.), language-based (translations), semantic (paraphrases, allusions, etc.), or ludic (think Derrida’s Glas). In an effort to resist my tendency to think up Digital Humanities chalupas (e.g., Neatline + Omeka + Juxta + Zotero + Voyant Tools all rolled up into one), I’m trying to imagine the simplest block matcher possible.
Focusing on the textual for a bit, here’s what I want my tool to do for me:
We already have a tool, Juxta, that could provide this functionality if we expand its capability to abstract matches and divorce it from the diff algorithm. The one addition we would need would be the ability to give unique IDs to blocks and visualize from a distance. Anyone up for tweaking Juxta?
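As a strawman for how simple “simple” could be, here is a sketch using Python’s standard-library difflib to find the blocks two short texts share and hand each block a unique ID; the sample strings are made up, and difflib is only a crude stand-in for the abstracted matching imagined above.

```python
# A deliberately simple block matcher: find contiguous blocks shared by two
# texts and assign each block a unique ID. The sample texts are placeholders.
from difflib import SequenceMatcher

text_a = "in the beginning was the word and the word was with the text"
text_b = "the word was with the text in the beginning"

matcher = SequenceMatcher(None, text_a, text_b)
blocks = []
for n, match in enumerate(matcher.get_matching_blocks()):
    if match.size == 0:  # skip the final zero-length sentinel block
        continue
    blocks.append({
        "id": f"block-{n}",
        "a_start": match.a,  # offset of the block in text A
        "b_start": match.b,  # offset of the block in text B
        "text": text_a[match.a:match.a + match.size],
    })

for block in blocks:
    print(block["id"], repr(block["text"]))
```

Swapping difflib for fuzzier or semantic matching (or for Juxta’s own machinery) would not change the basic shape: blocks in, IDs and coordinates out, ready to visualize from a distance.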
I am a medieval historian by training, and also a THATCamp newbie. I currently work as a manuscript specialist on a grant-funded DH project called “The Virtual Libraries of Reichenau and St. Gall” (www.stgallplan.org), now based at the UCLA Center for Medieval and Renaissance Studies, but which in an earlier phase of the project (before I came on board) was based at UVA’s IATH. In a nutshell, this phase of the project reconstructs the intellectual landscape of two of the most important learned communities of Carolingian Europe. We have digitized, or purchased the rights to use, images of about 170 Latin manuscripts that are or were in the Middle Ages owned by the Benedictine monasteries of St. Gall and Reichenau in what is today southwestern Germany / north-central Switzerland. My work on the project mostly entails describing these manuscripts and creating TEI XML metadata for our page-images of them.
Outside of my work on the St. Gall project, I also have used DH applications in my own scholarship, which focuses on the Venerable Bede (672/3-735) and the manuscript transmission of his works. My work in this area has mostly focused on database development, and so in that connection I would be very interested in a discussion of some aspects of linked data and what it means for the future. Specifically, I’m interested in whether linked data will be the answer to what for me has become an old conundrum: namely, whereas to do serious research on medieval textual transmission you used to need to access, say, 1,000 pretty specialized books, since the digital revolution took hold you now need to access 700 pretty specialized books (half of which you might be able to find online if you look hard) and 300 different websites, one by one. In short, access has definitely increased dramatically, but I think there’s still a lot of room to improve in terms of leveraging technology to reduce the amount of labor expended in accessing this type of information (I’m talking essentially about eliminating busy work; obviously the hard thinking bits will always be done by scholars). Or, to put it another way, will the growth of linked data technologies make it feasible to build an equivalent of WorldCat for medieval manuscript collections (or, for that matter, other types of archival/special collections)? Can others point me in the direction of projects that have done or are attempting to do this sort of thing for other fields of study? What would need to be done to make this happen?
Joshua Westgard
UCLA / Silver Spring, MD
Topics might include: which discussion tools you like and use most (and why), what features you find most valuable and what features are still needed, what variables seem most important in engaging readers, tips and tricks you’ve found useful, etc.
A possible point of departure: A team at UVA’s Curry School has been looking at what a next-generation online discussion tool might look like and can share its preliminary design for comments, criticisms, and suggestions.
Submitted by:
Dan Doernberg, NowComment.com
Bill Ferster, UVA Curry School
My first THATCamp, right in my backyard. I’ve hopped around every digital branch of the UVA tree (think the bewigged cardinal): Scholars’ Lab (including the GIS specialists), NINES, SHANTI, and IATH, where lately we’re working on the BESS schema (Biographical Elements and Structure Schema), a database, and some prototypes for visualization, all taking the bibliography of 1200+ books in the Collective Biographies of Women project further into studies of biography. Lots of DH work is biographical (a lot of projects have a person’s name in the title), and personal data and life narratives are all over the Internet, but even in literary digital studies there is relatively little work on genres of nonfiction. I just got back from a conference on Life Writing at the Huntington Library. My talk was called “Social Networking in Old and New Media,” the “old” being books, the “new” being digital, both social media and digital humanities. The talk was like a sandwich with “all about my DH project” pressed thin in the middle, and thick slices of observation and speculation about the social construction of persons online.

What are the elements of a unique “person,” identity, or life narrative? How do the forms (in all senses) of life narrative vary with repetition across different media? Name, date and place of birth and death, portraits, signature or password, resume-style events (think LinkedIn)–and, as on Facebook, relationships and consumer choices/opinions–these elements seem to give us a handle on the unique individual linked to others. But as anyone who has worked in a library, written a biography or a history, or developed a database involving any social records knows, every component in this list of identifiers can be shared by others, change over time, or be falsified or lost. There’s lots to pursue in the ways that computers affect life narrative, writing or encoding or studying it. I’m interested in all the angles people might bring to this, but the Huntington crowd was very much about scholarship in paper archives and writing full-length literary biography. I think print and digital media present similar issues about reconciling the big and little picture, what’s shared, what’s interesting, and what kind of elements or controlled values our schema allows.
As part of an NEH ODH grant, I am developing a video segmentation and annotation plugin for Omeka that will enable academic and cultural institutions and individuals to incorporate annotated video into online collections and exhibitions. Using either the client- or Web-based version of the Annotator’s Workbench, scholars and cultural professionals will be able to segment and annotate video and upload it to an Omeka-based Web site using the plugin created by this project. The annotated video plugin for Omeka will greatly enhance the pedagogical and research potential of video for online collections and exhibitions by providing humanities scholars and cultural institutions with a tool for incorporating video segments that contain integrated descriptive data linked specifically to the video content.
I see an opportunity to extend the capabilities of Omeka’s robust yet flexible development environment by building the annotated video plugin. Currently, users can incorporate a video file into an Omeka-based Web site and play it back. Including metadata is more difficult, and the existing plugins are generally designed for one file with a single, associated set of metadata. None of the current Omeka plugins can deal with a video file that has been virtually segmented and for which corresponding annotation metadata is associated with each segment.
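By way of illustration only (the field names here are invented for the example, not the plugin’s actual schema), the kind of structure the plugin has to handle looks roughly like this: one video file, several virtually defined segments, and a separate set of annotation metadata attached to each segment.

```python
# An illustrative sketch, not the plugin's real data model: one video item with
# several virtual segments, each carrying its own annotation metadata.
# All field names and values are invented for the example.
video_item = {
    "file": "fieldwork_interview_1987.mp4",
    "item_metadata": {"title": "Fieldwork interview, 1987", "rights": "CC BY-NC"},
    "segments": [
        {
            "id": "seg-001",
            "start": "00:00:12",  # boundaries are virtual; the file is never cut
            "end": "00:03:45",
            "annotation": {
                "title": "Introductions",
                "description": "Interviewer and narrator introduce themselves.",
                "subjects": ["oral history", "introductions"],
            },
        },
        {
            "id": "seg-002",
            "start": "00:03:45",
            "end": "00:09:30",
            "annotation": {
                "title": "Describing the village market",
                "subjects": ["foodways", "markets"],
            },
        },
    ],
}

# Each segment's annotation can be indexed and displayed on its own, while
# playback still points at time offsets within the single source file.
for seg in video_item["segments"]:
    print(seg["id"], seg["start"], "-", seg["end"], ":", seg["annotation"]["title"])
```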
I plan on showing the plugin in action and would like to discuss how digital video, especially segmented and annotated video, can be used in research and pedagogy.
Hi all!
My name is Kathleen Thompson and I’m a PhD student in Russian language and literature here at UVA. I’m writing about 21st-century Russian-American authors who were born in Russia and emigrated to the U.S. (usually with their parents) in childhood, so my work focuses a lot on transcending borders and movement and fluidity of medium.
I took Intro to the Digital Liberal Arts with Rafael Alvarado here two years ago and loved it; I have only a very basic grounding in DH, but I’m fascinated with the very idea of it, and I really want to be able to apply it to my work somehow. For that class, we each had to build our own WordPress site on a particular topic – mine was a digital repository for one of the authors I’m studying – and that gave me an idea: why not nudge my dissertation towards the digital? Why not start a conversation that’s immediately accessible to more than just my small committee and anyone wandering around our library stacks checking spine titles for something interesting?
Slavic studies is sort of a dinosaur in that it’s slow to embrace change, and most of the people in it who are doing online work (blogs, mostly) are politics and history scholars. Literature has a very small presence on the web; the UVA library does have a new and fantastic online collection of contemporary Russian literature, but I want to add to that. Pursuant to that, I want to explore the idea of the digital dissertation itself: what does it entail? What format is acceptable? How can we best make older work digitally consumable? Is a digital dissertation even viable/cromulent/workable in academia today? If you’re on a dissertation committee, would you be willing to work with one, or would you strike it outright? How do we vet them?
I’ll probably add more questions as I think of them, but this is already over 300 words for a start!
I am a THATCamp newbie! I am very excited to be attending and look forward to meeting all of you.
Right now, my pride and joy is KPK: Kpop Kollective, my digital cultural studies project on Korean popular culture. I, along with a rag-tag band of colleagues and students at my institution and across the country, engage in collaborative research and writing, studying and documenting the international fan’s experience of Korean popular culture, which occurs almost exclusively on the Internet. I am running three IRB-approved studies through the site, and I am interested in talking with others about how to collaborate successfully in a digital environment, in terms of writing and of teaching research methods and writing online. I am also interested in innovative ways of presenting large amounts of information that engage the user. (We want to create an interactive cultural history of post-1997 Korean popular culture.)
I’ve also been charged by my colleagues at Elon University to find out more about how we can plant that digital humanities flag at my institution in ways that are recognized by administration as legitimate scholarly activity.
I’m an associate professor in an English department at Elon University, but I have an interdisciplinary degree (American studies), and my work tends to be transnational. I do comparative cultural studies (African American, Asian, Asian American), focusing on visual culture, popular culture, and literature. I have a book under contract on Afro-Asian cultural interaction in a global age, and I am working on a second combining qualitative research and cultural analysis on Korean and Chinese historical television dramas. I teach courses in American studies, American literature, Asian film and literature, and speculative fiction.
Can’t wait to meet all of you!
Hotels! Fresh, hot hotels!
Just a quick note to let everybody know that three different hotels with conference rate options are now listed on the Location & Logistics page. Please note that reservations at all three must be made by Tuesday, March 20, in order for the conference rates to apply.
I’m wondering whether Patricia Battin’s framework for the role of an academic library, set out in 1984, has been fully accomplished. I think we are close, but not fully there yet. Here’s a list of the functions and facilities that she laid out in her article “The Electronic Library – A Vision for the Future” (EDUCOM Bulletin, Summer 1984):
Our Electronic Scholar of the ’90s will find the following opportunities at the workstation:
In short, the capacity to rummage around in the bibliographic wealth of recorded knowledge, organized in meaningful fashion with logically controlled search:
NOTE: Used with permission of the author.
The fundamental tenet of the THATCamp experience is the user-generated-ness (to coin a term) of the event itself. In other words, it’s up to you to propose the sessions, and this site is set up to help that happen. Phil Edwards has kicked us off already down below.
How does it happen?
Now that you’ve registered for THATCamp Virginia, we’ve made you a user account on this site. You should have received your login information by email. Before THATCampVA, you should log in to the site, click on Posts –> Add New, then write and publish your session proposal. Your session proposal will appear on the front page of this site, and we’ll all be able to read and comment on it beforehand. (If you haven’t worked with WordPress before, see codex.wordpress.org/Writing_Posts for help.) The morning of the event, we’ll vote on those proposals (and probably come up with several new ones), and then all together we’ll work out how best to put those sessions into a schedule.
Here’s some guidance for you when considering a session idea to post.
Everyone who goes to a THATCamp should propose a session. Do not prepare a paper or presentation. Plan instead to have a conversation, to get some work done, or to have fun. Also, remember to try to keep the posts brief–300 words or less should be enough to give your colleagues a sense of what you’re interested in talking about, without tiring out their eyeballs.
An unconference, in Tom Scheinfeldt’s words, is fun, productive, and collegial, and at THATCamp, therefore, “[W]e’re not here to listen and be listened to. We’re here to work, to participate actively.[…] We’re here to get stuff done.” Listen further:
Everyone should feel equally free to participate and everyone should let everyone else feel equally free to participate. You are not students and professors, management and staff here at THATCamp. At most conferences, the game we play is one in which I, the speaker, try desperately to prove to you how smart I am, and you, the audience member, tries desperately in the question and answer period to show how stupid I am by comparison. Not here. At THATCamp we’re here to be supportive of one another as we all struggle with the challenges and opportunities of incorporating technology in our work, departments, disciplines, and humanist missions.
If you propose a session, you should be prepared to run it. If you propose a hacking session, you should have the germ of a project to work on; if you propose a workshop, you should be prepared to teach it; if you propose a discussion of the Digital Public Library of America, you should be prepared to summarize what that is, begin the discussion, keep it going, and end it. But don’t worry — with the possible exception of workshops you’ve offered to teach, THATCamp sessions don’t really need to be prepared for; in fact, we infinitely prefer that you don’t prepare.
At most, you should come with one or two questions, problems, or goals, and you should be prepared to spend the session working on and working out those one or two points informally with a group of people who (believe me) are not there to judge your performance. Even last-minute workshops can be terrifically useful for others if you know the tool or skill you’re teaching inside and out. As long as you take responsibility for running the session, that’s usually all that’s needed. Read about the Open Space Technology approach to organizing meetings for a longer discussion of why we don’t adopt or encourage more structured forms of facilitation.
We’ll do our best to provide space for additional, on-the-fly conversation. Sometimes, for instance, a discussion is going so well at the one-hour-fifteen-minute mark that you hate to end it; if there’s a slot available, you should be able to propose “Training Robotic Ferrets: Part Two” as a session as soon as “Training Robotic Ferrets” ends.
And yes, we know this went over 300 words.
Info on this post shamelessly cribbed from THATCamp Texas and THATCamp.org. Because who can improve on perfection?
I’m Phil Edwards, and I’m very excited to be a part of the regional THATCampVA this year. In my day-job, I work with individual faculty members, graduate students, and departments as they think about their teaching, courses, curricula, and student learning. I earned my B.S. in Chemistry with a Minor in Mathematics (2001) from the University at Buffalo–SUNY, my M.S. in Information with a specialization in Library and Information Services (2003) from the University of Michigan, and I was a Ph.D. candidate [A.B.D.] in Information Science at the University of Washington from 2003-2010. I was a member of the faculty at the School of Information and Library Science at the University of North Carolina at Chapel Hill from 2008-2011 until I joined the Center for Teaching Excellence at Virginia Commonwealth University in July 2011. (Go Rams!)
In terms of session ideas…
if anyone would be interested in engaging in a conversation about some of the recent developments in (mostly-)online education (e.g., Mozilla’s Open Badges project, HASTAC’s Badges for Lifelong Learning Competition, MITx, Udacity, etc.), I’d be willing to come to the table to share in that discussion. I’m currently enrolled in the prototype MITx course, 6.002x: Circuits & Electronics, and I’ll be documenting my experiences as a student along the way. (Well, MITx 6.002x officially starts tomorrow.) Anyone interested?
Let’s admit it: we love THATCamps because they make us kids again. They’re like perfect sandboxes and brand-new Crayons and the first day of school combined. We get to make new friends, invite them to play with the building blocks we share — and enjoy some willful trespassing in unfamiliar fields and methodologies, all the while thumbing our noses, for a day or two, at authority: conventional conference and presentation formats, disciplinary boundaries, and those class divisions in the academy that we all know to be bogus, man. Totally bogus. (Are you going to drink your chocolate milk?)
Inspired by past conversations about our own child-like wonder at unconferences, the shared goal of the digital humanities community to instill a maker’s ethos in the next generation (young or less young), and our perennial need for babysitting in order to attend events like this — we are declaring THATCampVA 2012 to be a kid-friendly THATCamp!
A couple of kids in the 8- to 13-year-old age-range have already signed up to attend along with their parents — and we are both extending the deadline and opening up some extra slots to accommodate new registrants. Kids will be welcome to accompany parents or guardians at our Friday workshops (where they might especially enjoy some DIY aerial photography), as well as to attend all day on Saturday. Depending on junior THATCamper turnout, we will either let the kids self-organize some sessions of their own, or you can bring them along to the grown-up conversations you think they’ll find interesting.
So if you were reluctant to sign up because it meant leaving the little guys at home, or if you’re excited at the chance to spend some time geeking out together on technology in the humanities — please REGISTER BY MONDAY MORNING, March the 5th.
In just a little over a week, the application period will close for THATCamp Virginia 2012! Be sure to register to attend by March 1st.
And as promised, here’s a description of the second workshop offered at THATCampVA on Friday, April 20. Indicate your interest when you register for THATCampVA. From instructors Chris Gist and Kelly Johnston:
Need aerial images for a scholarly publication or research project and can’t find any that fit your needs? How about making your own? Grassroots mapping is an approach that lets people survey and map what is important to them. People have surveyed oil spills, public demonstrations, small archaeological sites, and more, at a scale that fits their needs, by dangling cameras from balloons and kites. They then use software to mosaic their aerial photographs into larger scenes that can be easily shared via Google Maps, Google Earth, or other digital mapping tools.
Come learn techniques to fly your own camera, make your own mosaics, and go fly a kite (or a balloon, in this case)!
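If you’re curious what the sharing step can look like in practice, here is a minimal sketch in Python (not the workshop’s own tooling): it wraps a finished mosaic image in a KML GroundOverlay file that Google Earth can open. The image filename and bounding-box coordinates are hypothetical placeholders.

```python
# A rough sketch (not the workshop's tooling): wrap a mosaicked aerial image
# in a KML GroundOverlay so it can be opened directly in Google Earth.
# The image filename and coordinates below are made-up placeholders.

KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image}</href></Icon>
    <LatLonBox>
      <north>{north}</north>
      <south>{south}</south>
      <east>{east}</east>
      <west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""

def write_overlay_kml(path, name, image, north, south, east, west):
    """Write a KML file that drapes `image` over the given lat/lon bounding box."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(KML_TEMPLATE.format(name=name, image=image,
                                    north=north, south=south,
                                    east=east, west=west))

# Hypothetical example: a balloon-photo mosaic with an approximate extent.
write_overlay_kml("mosaic_overlay.kml", "Balloon mosaic", "mosaic.jpg",
                  north=38.037, south=38.034, east=-78.502, west=-78.506)
```

Opening mosaic_overlay.kml in Google Earth (with mosaic.jpg alongside it) drapes the image over that bounding box; the tools used in grassroots mapping generally handle the georeferencing for you, so you may never need to write this by hand.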
UPDATE: check out our post on the test flights!
THATCampVA will open on Friday, April 20, with your choice of two workshops. The first of them, on a soon-to-be-launched tool for the spatial humanities, comes from instructor David McClure (co-teaching with Eric Rochester):
Neatline is a geo-temporal mapping application built on top of the Omeka framework that makes it possible to plot any collection of things – objects, letters, buildings, photographs, events, people, imaginative topologies – on maps and timelines. Built by the Scholars’ Lab in collaboration with the Roy Rosenzweig Center for History and New Media and supported by grants from the National Endowment for the Humanities and the Library of Congress, Neatline provides a native environment in which to represent arguments, narratives, and stories that are fundamentally rooted in space and place.
The 2-hour workshop will start with a basic overview of the software – what it is, where it came from, the types of use-cases it’s designed to accommodate – and then move into the nitty-gritty of creating exhibits, configuring custom layouts, and plotting records on the map and timeline. The workshop will also touch on some more specialized techniques that make it possible to represent hierarchical relationships among records, create custom styles for map vectors and timeline spans, and edit the ordering of the content in an exhibit to create narratives and linear progressions.
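If it helps to see the shape of the data rather than the interface, here is a purely illustrative sketch in Python (emphatically not Neatline’s actual schema or API) of the kind of information a geo-temporal record carries: a spatial footprint, a timeline span, optional styling, and a parent link for hierarchy. Every key and value below is hypothetical.

```python
# Purely illustrative -- not Neatline's real schema or API. The idea: each
# record pairs something to draw on the map (here, Well-Known Text) with a
# span to place on the timeline, plus optional styling and a parent link
# for hierarchical relationships. All keys and values are hypothetical.

records = [
    {
        "id": 1,
        "title": "THATCampVA 2012",
        "geometry_wkt": "POINT(-78.505 38.036)",  # Charlottesville, roughly
        "start_date": "2012-04-20",
        "end_date": "2012-04-21",
        "parent_id": None,
        "style": {"vector_color": "#cc3333", "vector_opacity": 0.6},
    },
    {
        "id": 2,
        "title": "Neatline workshop",
        "geometry_wkt": "POINT(-78.505 38.036)",
        "start_date": "2012-04-20",
        "end_date": "2012-04-20",
        "parent_id": 1,  # nested under the record above
        "style": {"vector_color": "#3333cc", "vector_opacity": 0.6},
    },
]

# Walk the records and report which ones are nested under which.
for r in records:
    parent = next((p["title"] for p in records if p["id"] == r["parent_id"]), None)
    print(r["title"], "->", r["geometry_wkt"], "| parent:", parent)
```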
If you have a laptop, you’ll be able to follow along in real-time using a public webservice soon to be launched at neatline.org.
Stay tuned for a bit more illumination on the DIY Aerial Photography workshop next week.
We’re excited to say that registration is now open for approximately 75 participants at THATCampVA! Slots will be filled on a first-come, first-served basis, so register early.
CLOSING DATE: March 5 (extended from the original March 1, per this update)
What’s this now?
You know! A regional THATCamp.
When?
THATCampVA will be held on Friday and Saturday, April 20-21, 2012. Friday will open at 3:00 with your choice of two-hour workshops, one focusing on DIY Aerial Photography and the other on the newly launched Neatline. Then on Saturday, the THATCampVA unconference itself, with sessions generated by the participants, will run from 9:00 a.m. to 5:00 p.m. Opportunities for social time with friends old and new will be available on Friday night, April 20, at nearby establishments.
Where?
Charlottesville, Virginia (at UVA Library’s Scholars’ Lab)
Who?
Organizers include digital humanities folks from UVA, Mary Washington, and other central Virginia institutions — but this is your unconference!
Anybody with energy and an interest in the humanities and/or technology should attend: graduate students, scholars, librarians, archivists, museum professionals, developers and programmers, administrators, managers, and funders; people from the non-profit sector, the for-profit sector, and interested amateurs. We say any- and everybody, and especially those who would find this interesting but who may never have been to a THATCamp or anything like it before.
Questions in the meantime? Email us!
Look here for more news soon — and follow us at @THATCampVA.