Below is a collection of the blog posts composed while engaged in the Digital History Minor Field readings with Dr. Stephen Robertson. By no means are they a full representation of the scope and depth of our discussions, nor do they fully represent the scale of our reading. Instead they should be understood as my own reactions to these themes and concepts and represent my opinions and knowledge as I work through the field.
Our final set of readings focused on the subject of crowds. It was less about networks as we’d imagined them – in terms of network analysis – and more about how networks of people can work together to transform action in the digital environment.
This week the readings emphasized how networked crowds (like we’ve discussed elsewhere) handle scale differently. Though the phrase “many hands make light work” seems to hold true here, the difference is, as Clay Shirky writes, “In the words of physicist Philip Anderson, ‘More is different.’ When you aggregate a lot of something, it behaves in new ways, and our new communications tools are aggregating our individual ability to create and share, at unprecedented levels of more.” (Cognitive Surplus, 25). Shirky, Yochai Benkler, Cass Sunstein, Max Evans, Marieke Guy and Emma Tonkin agree that an important and transformative feature of the Web 2.0 world is that scale changes everything.
But, as Sunstein emphasizes, we also need to increase our concern and critical assessment of the process itself – to be suspicious of the negative impact of the voices of the many drowning out the voices of the few. In terms of my own interests, I was drawn to the way in which the web makes engagement and participation from minority voices possible – and for me this has to do with the way in which digital media has transformed the ways in which (and spaces in which) members of the deaf community interact, share ideas and engage with hearing individuals. Shirky was particularly meaningful in his (hopeful) assessment of how the internet can grow groups and organizations. In Here Comes Everybody, he writes, “We now have communications tools – and increasingly, social patterns that make use of those tools – that are a better fit for our native desires and talents for group effort.” (48) While we’ve focused much of our discussions in this minor field reading on our own dissertation work, in a much larger way this has been the draw for me in engaging in the digital humanities – the productive way in which DH has made it easier to effect change, achieve access, and connect with peers, particularly in the community in which I am engaged.
Stepping back from my own interests, the readings were useful in conceptualizing and imagining the way in which networks and crowds enable us to get at data differently and more efficiently than previously possible. I can’t read about the Flickr Commons project and folksonomies, or Max Evans’ piece about archival digitization, without thinking immediately of the Papers of the War Department and the Scripto tool used at CHNM – or the New York Public Library’s Building Inspector project that is teaching machines to read insurance maps, or many other user-driven attempts to harness the “power of the crowd”. While these approaches won’t aid me in my dissertation work, they are useful for me to consider for future projects and will continue to play a big role in finding and obtaining access to the information and resources I need for my research.
Having worked on two topic modeling projects this year (a final project for Clio I and a topic modeling of the THATCamp proceedings undertaken by the Digital History Fellows at CHNM), I walked into this grouping of readings with relative confidence.
A confidence that was quickly deflated when confronted with how little I truly understood when I completed those projects. This week, Ted Underwood, Lauren F. Klein, David M. Blei, Andrew Goldstone, Elijah Meeks, Lisa Rhody and Ben Schmidt exposed me to the possibilities and pitfalls of using LDA topic modeling. Given what I learned, I’ve a real inclination to revisit those projects at another time.
There were a few areas in particular that drew my attention and raised interesting questions for me regarding the analysis and interpretation of the results that topic modeling produces.
A fundamental concern relates to the fact that topic modeling analyzes texts by counting and grouping tokens into models. This word-based analysis deserves a good deal of consideration. Rhody and the others emphasize the value of LDA topic modeling as “revealing patterns and relationships that might otherwise have remained hidden.” A great benefit indeed for historians interested in considering new approaches. However, a considerable barrier in the application of digital tools is, and will continue to be, a lack of understanding regarding the results.
How effective is this process if the conclusions you draw “will be limited to those who understand how topic modeling works” (Schmidt)? How do non-digital scholars make sense of topic modeling? Furthermore, as researchers, what other pitfalls are hidden in the data, overlooked because we don’t know what we don’t know?
This week Amanda shared with us a link to Goldstone and Underwood’s PMLA research data (available here), and two thoughts struck me as I looked through it: one, this looks beautiful, and two, what does any of this mean? For me this highlights the difficulty of communicating the work of digital tools like topic modeling to non-digital historians. I think it is an impressive feat to distantly read a large corpus of documents, but I’m not convinced of the efficacy in the face of concerns raised across the readings about whether or not we even understand the “topics” we produce.
Topic modeling is not without the interpretation and close-reading associated with traditional historical research, but machine reading still causes some discomfort for me. The privileging of the text and the stripping of context make me nervous and I echo the concerns we read about word meanings and changing terminologies.
In her explanation of the process of topic modeling, Rhody uses the analogy of a farmers’ market to describe the computational processes that occur within the program. The machine simply reads the contents of the baskets and identifies patterns. The process seems simple when you describe how “a pear is put in the basket with other pears”, but I found myself wondering about instances when the fruit looks like a pear but is decidedly different; a pear-apple, or a green apple, or a watermelon.
Any researcher knows that information can be categorized in different ways with different meanings. Information about a single deaf church (if you share my research interests) involves a relationship to a constellation of things; the founding reverend, members of the clergy, members of the congregation, the religious beliefs, local churches, deaf organizations, the wider city deaf community and its members. I am able to glean a great deal about these things through their relationship with one another. One would hope that a machine reading of historical documents would produce topics that could highlight these relationships, or at least reproduce them in a meaningful way.
The problem I face, however, is that the term “deaf” is not a static phrase. It is not one that is only used to refer to deaf people, nor is it the only phrase that has historically referred to deaf people.
Seeking this information involves wading through countless instances where the phrase is used metaphorically (“deaf to their pleas”). It also involves using terms like “deaf-and-dumb”, “deaf-mute”, “speechless” and “mute” with an awareness of the time period and context. In the graph above I’ve used the Chronicle (a tool that examines language use in New York Times reporting from 1850) to demonstrate how these terms have changed in popularity over time and to demonstrate how the term “deaf” appears much more frequently.
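The kind of term-frequency comparison a tool like Chronicle produces can be sketched in a few lines. The dated snippets below are invented stand-ins for a corpus of newspaper articles; the sketch also shows why the raw counts cannot separate literal from metaphorical uses of “deaf”.

```python
# A hedged sketch of term-frequency counting over time, in the spirit
# of Chronicle: tally how often each historical term for deafness
# appears per year in a (hypothetical) corpus of dated articles.
from collections import Counter

# Toy data standing in for dated newspaper text.
articles = [
    (1860, "the deaf-mute institution opened its doors"),
    (1860, "he was deaf to their pleas"),  # metaphorical use
    (1900, "the deaf community gathered downtown"),
    (1900, "a deaf-and-dumb asylum report"),
]

terms = ["deaf-mute", "deaf-and-dumb", "deaf"]

def count_terms(year_texts, terms):
    """Count each term per year; longer terms are matched first so
    'deaf-mute' is not also tallied under 'deaf'."""
    tallies = {}
    for year, text in year_texts:
        counts = tallies.setdefault(year, Counter())
        remaining = text
        for term in sorted(terms, key=len, reverse=True):
            counts[term] += remaining.count(term)
            remaining = remaining.replace(term, " ")
    return tallies

result = count_terms(articles, terms)
# The 1860 count for "deaf" includes the metaphorical use: the raw
# tally cannot distinguish meanings, only surface forms.
print(result)
```

A count like this captures shifting terminology but, as with topic models, it flattens context: every occurrence of a token counts the same, whatever it meant.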
These types of problems led Underwood to focus on topics rather than words – in this sense, words are given context. Schmidt remained critical of topics as well and demonstrated how clustered terms may be combined in a single topic – operating in separate directions.
Still, despite the skepticism I sometimes feel, there is something meaningful in the findings that topic modeling and text mining projects produce. And I can’t overlook the way in which topic modeling enables us to interrogate a corpus with informed questions.
In our meeting, Stephen emphasized that a potential strength of the digital is that we do have to make transparent what we did to arrive at our conclusions. Our processes and practices are made much more obvious and are under greater scrutiny because they involve new and complex techniques. The need to “open the black box” arises not just to explain these processes, but also makes it possible to begin to discuss methodology in the context of making an argument. Traditional historians don’t always describe their work in this manner, despite the fact that (as we read two weeks ago) the majority of historians have changed their research processes to include computational techniques like “search”.
As historians we are meant to be questioning, and aware of, our own assumptions. Topic modeling is another way of highlighting what those assumptions may be.
This grouping of readings aligned more strongly with my concerns and interests for my dissertation project. Mapping elements will likely feature prominently in my work and it was really helpful to see some real worked examples of mapping devices and tools.
I picked up Monmonier’s How to Lie with Maps first. I actually found his writing to be very useful in that it framed the subsequent readings. For me, his work was a useful reminder that all maps are constructed. We often attribute a great deal of legitimacy to maps; we take them to be accurate representations of physical space without engaging in the same level of critical reading we apply to other texts (a point he makes repeatedly). Looking ahead to my own project aims, I anticipate the need to defend the maps I produce and address the choices I made in including or excluding content. Monmonier is a good reminder that all cartographers contend with these types of decisions. All maps are constructed with intent and reflect the biases and perspectives of their maker.
Though maps make sense to historians as historical objects, we don’t see ourselves as doing the same work as cartographers. However, I appreciated Hitchcock’s thinking as he emphasized that our fields are not mutually exclusive. We share a number of concerns and questions and can benefit from an overlapping of theory and techniques. He wrote, “The habits of mind and analytical tools of geographers need to inform our understanding of the past; while the mental ticks of the historian, and the authority of history as a literary genre, are necessary tools for communicating all kinds of memory to a wider audience.”
So where do we see this overlap occurring? If we read only Bodenhamer et al., and Gregory and Geddes, we might think that GIS (or HGIS) represented the overlapping of fields and concerns. A GIS perspective, however, is rooted in the notion of “place”, while spatial historians seem to be more interested in the notion of “space”. Bodenhamer, Corrigan and Harris suggest that the “Representation of the past… is a kind of mapping where the past is a landscape and history is the way we fashion it… mapping is not cartographic but conceptual.” (xi) This notion is harder to work into a georeferenced map because it involves thinking about maps in a different manner.
Lock, in the same text, gets at the point a bit more fully as he envisions “landscape as a metaphor for social and cultural complexity of being human – both in the way it can be used to represent the past but also, and perhaps more forcibly, the present where any representation is located.” (105) In this sense historical mapping projects seek to map not only objects like buildings and roads, but ephemeral relationships and networks – all of which are linked to space and time. Schwartz and Thevenin (in Toward Spatial Humanities) echo this sentiment (and critique of a strictly GIS approach to historical presentation): “Spatial history ought to do more than examine questions about geographic distributions over time. To go further, spatial history should concern the study of spatial relationships and of spatial interconnectivity over time, that is the degree to which change in one part of an interrelated system alters other parts in turn.” (104) Coupling these ideas with Monmonier and Hitchcock, I started to envision mapping projects that are more metaphorical in their representation of space.
Hitchcock included an interesting critique of maps that resonated with me. Describing Mackay, he wrote, “some people stand in the same place longer than many buildings; and have a greater right to appear on a map, than many landmarks.” What things and people are missing from the maps we make and read? What is taken to be transient but is actually permanent in the lived experience of that space? These questions resonated with me because a great deal of my historical research involves lives and stories that exist in the gaps.
This touches on something I brought up in class – the idea of maps that serve as metaphors. While mapping is about making a visual representation of geographical space, mapping is also used to reference the allocation of social space (I referenced The Ugly Laws by Susan Schweik in this case). What form would a map take if it involved the mapping of deaf residential school newspapers? One that examined communities? The missionary efforts of the Church Mission to Deaf Mutes? Imagined in a GIS environment, these would operate similarly, but imagining them in a conceptual manner becomes much more interesting to me.
Stephen suggested in our discussion that thick mapping/deep mapping is, perhaps, a middle ground between abstracted and geographical maps. I agree with his thinking here – dynamic maps are less about linking objects to place and more about the relationships between objects and places. This entails thinking about relationships across space, in terms of temporal as well as physical distance. It also enables us to access the scalability of a digital environment. It redefines, for me, Monmonier’s argument about cartographers making choices about what to include and what to exclude in traditional mapping. Digital mapping enables us to operate at both macro and micro scales, embedding content and context in meaningful ways.
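One way to picture what “embedding content and context” might mean in practice is a map feature that carries temporal extent and relationships alongside its coordinates. The sketch below builds a GeoJSON-style feature in Python; the church, coordinates, dates, and field names are all hypothetical and purely illustrative of the deep-mapping idea, not a real data model.

```python
# A hedged sketch of a "deep map" entry: a GeoJSON-style feature that
# embeds temporal and relational context, not just a bare point.
import json

feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-73.99, 40.73]},
    "properties": {
        "name": "A deaf church (illustrative)",
        # Temporal extent: a map viewer could filter or animate by era.
        "active": {"from": 1852, "to": 1897},
        # Relationships, not just location: links to people and
        # institutions that a purely cartographic layer would omit.
        "related": [
            {"type": "founder", "name": "founding reverend (placeholder)"},
            {"type": "community", "name": "city deaf community (placeholder)"},
        ],
    },
}

print(json.dumps(feature, indent=2))
```

Structuring features this way lets the same dataset be read at the macro scale (all points over a century) or the micro scale (one institution’s web of relationships), which is the scalability the deep-mapping argument turns on.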
The question that bounces around my head, however, is: can maps ever convey the amount of information necessary to make arguments? If maps can be used to answer questions, can mapping projects present scholarly arguments? Some of the obvious arguments against this idea are that reading maps requires extensive contextual knowledge to understand or appreciate the argument (then again, so do texts), or that maps omit content or context to privilege other content or contexts (so do texts). Perhaps one could argue that maps don’t make a single argument, but rather make several arguments simultaneously – in which case the concern is that it is a nonlinear (or multilinear) argument (and I’d address those concerns by directing them to our second week of reading). A reasonable critique may be that maps can’t always demonstrate change over time – but thick/deep mapping projects challenge this thinking.
I don’t know that I am convinced that ONLY maps can do this, but maps combined with other forms of scholarship, as presented in a number of the projects we looked at this week, are doing so capably.
Maybe the argument we should be making is more aligned with what Scheinfeldt wrote in the Gold text: sometimes tools are built to answer questions and sometimes tools produce questions. “maybe we need more time to articulate our digital apparatus, to produce new phenomena that we can neither anticipate nor explain immediately… We need time to experiment and even… to play.” [Scheinfeldt]
The projects we examined were more meaningful to me in this light: test cases and experiments in what data and tools can tell us about the past. Torget, for instance, provides more of a critique of the digitization of historical newspapers than a discussion of the content of the texts. Presner and the Hypercities project are interesting in that they work to “publish… geo-temporal arguments”, but it is difficult to see how this work can be understood to produce a single, cohesive argument or how it operates as something more than a mapped repository. This is not meant to disparage their work (I was really inspired by Meeks and the ORBIS project in particular), but rather to point out how these are useful exercises in expanding our ideas of what maps can do, how data can be used and how scholarship can be created.
Is digital history fundamentally different from history as we know it? Dougherty suggests “By definition, digital history utilizes different tools, differently, than most historians are used to. It has its own vocabulary and requires different skills sets (emphasizing, for example, curation as opposed to detective work)” (Dougherty, Jack, ed. “Writing History in the Digital Age.” Writing History in the Digital Age, May 22, 2011.) So what are these tools, vocabularies, and skills? And how have historians begun to embrace them?
The study “Supporting the Changing Research Practices of Historians,” which included interviews with historians, suggests that underlying research methods remain the same, despite new tools and technologies. Searching was the one new practice that respondents were comfortable incorporating into their research; tools that aided in analysis or information organization were not embraced. It would seem that there is a gap between the historian and the digital. Though historians (myself included, of course) embrace searching as a change to our day-to-day research practices, Putnam complicates its uncritical usage.
Putnam’s article offers an interesting critique of our use of text searches. She highlights the way in which searching releases text from its anchors, and encourages scholars to consider the effects of searching more broadly, taking into account the macro and the micro, geographic scope and the scale of observation. Together, Rutner, Schonfeld and Putnam make me think more critically about research processes. Term-fishing and side-glancing have been added to our methodology, but perhaps we need to be more considerate about what those processes mean. Are there digital footprints akin to the footnotes we use in our texts? Do we track our searches and maintain the links we followed? What doesn’t appear? What exists in the shadows, as Putnam writes? If we only search digitized projects, how do we privilege time periods? Languages?
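The “digital footprint” idea could be as simple as a running log of the searches behind a project. The sketch below is a minimal, hypothetical version: the database name, query, and fields are assumptions, not a real citation standard, but a log like this would let a researcher later reconstruct, and cite, the search trail.

```python
# A hedged sketch of a "digital footnote": a minimal CSV log that
# records each database search so the research trail can be cited.
import csv
import datetime
import io

def log_search(writer, database, query, hits):
    """Append one search event with a timestamp."""
    writer.writerow([
        datetime.datetime.now().isoformat(timespec="seconds"),
        database,
        query,
        hits,
    ])

# Write to an in-memory buffer here; a real log would use a file.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["timestamp", "database", "query", "results"])
log_search(writer, "Chronicling America", '"deaf-mute" AND church', 42)

print(buffer.getvalue())
```

Even a bare log like this answers two of the questions above: it tracks what was searched and where, and it makes visible which collections (and therefore which periods and languages) the research actually touched.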
In what ways does text-searching reinforce the linguistic turn? Jones’s work makes a strong argument that the field has placed a great deal of emphasis on words: “from the utterance there is, supposedly, everything!” (Jones 536). In emphasizing words we decontextualize word from deed and person from society. “History must find ways to relate words to deeds to overcome this renewed bout of tunnel vision…history still needs to find ways of aggregating, not just particularizing, its subjects.” (522) Thinking about Jones’s arguments in the context of Putnam, we need to be considerate of these issues as we tackle large-scale text mining projects: what isn’t in the word? What deeds, actions and groups are missing? How can words be more directly linked to deeds using digital tools? (My initial reaction would be to think of digital mapping projects that have begun to do just that, but I’ll reserve comments on those projects until next week.)
Outside of searching- what can historians use digital tools to do? As we’ve read in the weeks prior, the digital humanities offers several interesting and important avenues for collecting, organizing, analyzing, and interpreting the historical record. The digital turn offers new tools coupled with new questions. One of the most meaningful changes as we address the digital must be the subject of scale. As Price writes, “A theoretical possibility of digital scholarship — the indefinite expansibility — has become a lived reality in our case.” (Price 17)
Perhaps the shift in scale offered by the digital enables us to take a longer view of history, while still being responsive to the critiques and concerns raised in the field since the 1970s. Armitage and Guldi explore the “Return of the Longue Durée” in their article and make a convincing case for taking a “macroscope” lens to the evidence and evaluating larger time periods and long-term trends.
An additional shift in historical research is the changing role of visualization. Staley and Moss examine the way in which the digital turn enables historians to harness visualization in meaningful ways. Staley defines visualization as “any graphic that organizes meaningful information in multidimensional spatial form.” (xi) This is, of course, juxtaposed with narrative, or prose, “a one-dimensional medium” (Staley xi).
The thrust of an argument for the visualization of history seems to be that creating visualizations enables the reader to access multiple levels of information simultaneously. I was most excited to read Moss and Staley, and while they offered a compelling discussion of visualization in historical study, the same question still floats around my mind: can history be done with visual tools, or are we restricted to prose? Can visualization make an argument?
Staley places this question at the center of his text, and while he examines exciting and interesting visual presentations of history, I’m not sure he’s answered it. Moss, on the other hand, traces the rise of the visual and argues (like Hayles) that today’s learners are conditioned to appreciate the visual. (“In harmony with mass culture, the visualized emphasis of society becomes the currency by which to express thought.” Moss 3) It is important to note that while visuals like photographs and maps may be seen as communicating information about a historical time period, and also as evidence from within that time period, these are not the types of visualization that Moss and Staley describe.
Rather, we should be thinking about those visualizations that are generated to describe evidence. “Data-rich” visualizations, as Staley describes them, are complicated and work on multiple levels, while also remaining clear and concise.
Given that we have so few concrete examples of what this type of project looks like, it is hard for me to judge its effectiveness. We’ll see how the next grouping of readings, on mapping, will shape my thinking.
This collection of readings grappled with making sense of the changes brought on by emerging digital technologies, tools and approaches. The works included in this grouping, though clearly written for different audiences, with different questions, and highlighting different concerns, provide a look at how ‘the digital turn’ may be shaping the way in which we think, read and access information.
I’m consistently assessing these readings in the context of my dissertation topic – thinking not only about the arguments they pose in relation to one another, but also how these arguments will shape the direction of my own project. Looking across the texts there are several main subjects and themes that are useful to consider. I paid particular attention to hypertext, scale, linearity (and nonlinearity), authorship, and participation.
These areas, consistently linked to digital work, are also useful in forming/building/answering my own questions.
The subject of participation is particularly meaningful for my work. One of the draws of digital work is the presentation of historical work that is both accessible and appropriate for members of the American deaf community. I anticipate that the use of video and visualizations (primarily for the purpose of communicating content in American Sign Language) may be disruptive to the dissertation in a conventional sense. It is in this sense that I read Katherine Hayles’s argument in How We Think: “The Age of Print is passing, and the assumptions, presuppositions, and practices associated with it are becoming visible as media-specific practices rather than the largely invisible status quo.” (pg 2) Print traditions that draw a distinctive line between the oral traditions used within communities (like the American deaf community) and written text can take new digital forms.
Further, as Hayles indicates, the assumptions about how information can and should be shared are being revealed. In addition to defending a dissertation that may or may not be linear, uses a database and draws on a large dataset, I anticipate having to discuss the rationale behind using digital media to convey historical information to a bilingual audience. The assumption that academic research should take a written form is not a new one. However, digital technologies offer a new challenge and new solutions. (The creation of a diglossic, blended presentation that utilizes both ASL and written English, for instance.)
There is some indication that choosing digital tools and presentations is participatory in another sense. Gee states “Digital media – themselves tools for meaning making, like writing – do not lend themselves strongly to a purely mental view in the way that reading and writing do… there is something more apparently social and institutional about digital media.” (pg 8). Gee’s work emphasizes the nature of learning in the context of digital media. Here, he argues that while writing also serves to “make meaning”, there’s something different about how digital media produces learning.
In Hyper/Text/Theory, Ess gets at a similar topic. He argues for the democratizing nature of hypertext believing that it “would facilitate discourse among a diversity of grass-roots communities that might agree, by way of the same form of discourse, upon different norms, and thereby preserve individual and cultural differences.” (251) Though the success of hypertext in creating a diversity of discourse is critiqued in more recent scholarship, I’d like to consider both Gee’s and Ess’ arguments. In what ways is digital work more or less “social”? Does that make it any less scholarly? Looking at contemporary practices, Web 2.0 is definitely built on the idea of participation. But I think Gee and Ess are speaking more broadly to the subject of linear/nonlinear narratives.
The audience participates in navigating the information provided. Though authorship is reserved for the creator of the site, users are given freedom to experiment and choose the way in which they move through the data. As we discussed in our meeting (and as is argued by Liu) the idea of a linear tradition is critiqued. Regardless, the form of my dissertation will likely include a participatory element but the shape of this requires a better idea of the scope of my project. I’d like users to access and/or manipulate data, I’d like to make arguments that support data, but I’d also like visitors to draw conclusions as well.
Thinking further about the audience, even more questions jump to mind: What is a deaf digital discourse? What form would it take, and how might that differ from conventional digital discourse? Given my research, there has been very little work done to answer these questions.
Deaf community members have embraced digital technologies, particularly those of video conferencing, recording and presentation. (Reinforcing Gee’s point that “digital tools have allowed ‘everyday people’ to produce and not just consume media” producing a “participatory culture” (pg 12)) But I’ve no real sense of how these technologies may or may not be included in my digital work.
In general, I’ve a vague sense of what the final project will look like – readings are often raising more questions than they are answering – but that’s the point.
For the remainder of the summer, I’ll be using this space to document and discuss my reactions to the readings we’ll be doing in our Digital History minor field reading at GMU. Given that the readings are meant to prepare us for our work in DH, I’ll be focusing many of my comments on wrestling with and formulating my own arguments and sensibilities regarding work in the field and looking ahead to my research and dissertation work. This should not be taken as a complete or comprehensive encounter with these works but rather my own process of coming to terms with DH as a field and situating myself within it. Generally, the goal is to create a post prior to each meeting as a means of organizing my thoughts and preparing for a fruitful discussion.
That minor caveat aside, the works we examined this week were clustered around the subject of Digital Humanities, Past, Present and Future. The title seems a bit ambitious, but overall the readings were rather timely given the recent posts by Stephen Robertson and Scott Paul McGinnis on DH Now. It would seem that this week we are naming the elephant in the room: What does it mean to practice Digital Humanities and how does Digital History fit within (or outside of) Digital Humanities?
After reading these works I find myself with largely the same questions in mind.
This week we’ve focused on several broad texts: Understanding Digital Humanities edited by Berry, Digital_Humanities by Burdick, Drucker, Lunenfeld, Presner and Schnapp, and Debates in the Digital Humanities edited by Matthew Gold, as well as smaller, more focused works by Fred Gibbs and Trevor Owens (“Building Better Digital Humanities Tools”), Stephen Ramsay (Reading Machines), John Unsworth (“Scholarly Primitives”), G. Hall (“Toward a Post-Digital Humanities”), and Andrew Prescott (“Consumers, Creators or Commentators?”).
Putting these texts in conversation really highlighted the subjects that receive significant emphasis in DH discourse. Generally each of the works offered a definition of Digital Humanities that placed it in the context of a larger history of humanities computing and struggled to articulate the role of DH in generating scholarship. Many demonstrated an anxiety about digital publishing. They demonstrated concern with getting ‘credit’ for digital work and contended with the notion of ‘publish or perish’ that is enacted in the academic workplace.
I found each of these subjects interesting, but given my current concerns (the structure and direction of my dissertation) I find myself still struggling to articulate or conceptualize my place within the field (and leaning closer to Prescott’s critique- that we focus our energy on internal debates when we should be informing theory). The central question, the one that will underlie my thinking on the readings in this course, is the same question I’ve been asked, repeatedly, by former advisors and historians: why digital history? With the option of going down a traditional history path, why have I thrown myself into the digital?
Climbing into the “big tent” of DH for my answer, I find myself attracted to work by McPherson, Losh, Edwards, and Williams (in Gold et al.), which highlights the multiple communities within which my work will operate. The transformative, iterative and accessible nature of DH work provides for new and innovative activities regarding the production and presentation of scholarship – methods which could transform the way in which deaf history is envisioned. In many ways, I’m inspired by Parry’s assertion that DH represents an understanding of new modes of scholarship – a field that represents change not only in tools but in the nature of scholarship itself. Burdick et al. make a similar point: “…Digital Humanities activity seeks to revitalize liberal arts traditions in the electronically inflected language of the 21st century: a language in which… text is increasingly wedded to still and moving images as well as to sound…” (122).
Deaf Studies scholars have posited a similar shift in literary theory and analysis regarding American Sign Language literature (Bauman et al.). The emergence of video technologies that record and preserve signed performances has resulted in a significant shift in the discussion and analysis of ASL literature and poetry, and has furthered exploration of new and innovative means of performing and documenting these works. To what degree will Digital History effect change on the collection, distribution, analysis, and interpretation of deaf history?
As Robertson’s article suggested, answers regarding the issues significant to Digital History are more often found outside of the texts we’ve read this week. Few works dealt with the subject of Digital History directly. Approaching these texts, I hoped for a discussion of Digital History on par with the approach to literary text analysis offered by Ramsay. But no text suggested a major shift in the theory and process of historical scholarship. In fact, Gibbs and Owens demonstrated that historians using digital tools have yet to fully embrace new tools and methodologies – rather, they are more likely to utilize digital tools on traditional projects in traditional ways. Scheinfeldt suggests that time is needed, as is the understanding that sometimes tools are built to answer questions and sometimes tools produce their own questions.
As a newcomer to the field, I have yet to articulate my point of view. This week’s reading indicates that the field, particularly digital history, has yet to do the same. But discussion at work and in class leads me to believe that the places where Digital History is realized are centers like CHNM. Looking forward to more discussion next week.