Digital Critical Editions of Homer

Research student Chiara Salvagni from KCL’s Department of Digital Humanities kick-started the Digital Classicist 2012 seminar series with a presentation on Digital Critical Editions of Homer. You don’t often come across people whose academic paths are almost identical to yours, so you can imagine my excitement when I discovered Chiara’s talk on the Digital Classicist summer programme!

While my interests lie more in the computational, user, front-end aspects of digital editions, Chiara has a more theoretical, editorial and classical approach. I’ve never been really good at conceptual frameworks so I’m lucky to have someone like Chiara to learn from.

Ok, so in yesterday's seminar Chiara discussed her work on the creation of an open-source digital edition of book 1 of the Odyssey, with particular emphasis on the scholia. This is a cyclopean (check out my Odyssean terminology ;-)) project not only because of its scale and intricacy but, most importantly, because the oral register of Homer poses some difficult theoretical and encoding questions. After spending a year reviewing state-of-the-art printed and digital editions of Homer (including the Homer Multitext and The Chicago Homer), Chiara has now begun XML-ing the text in accordance with TEI standards.

An analytical discussion of encoding choices and methodology was followed by the thorny issue of digitally reproducing the critical apparatus. The textual density of critical apparatuses in printed volumes of Homer, as of other major authors, is visually tiring and, frankly, off-putting. Chiara's research tries to address this issue by assessing and proposing the digital edition as an advantageous means of unpacking textual compactness into a more readable format. Some people argued that the tool would only benefit those with large computer screens, that the digital edition should offer something the print cannot, etc. – all fair points but, hey, we can't make everybody happy. Homer is huge, and Chiara's tool would enable users to filter the text so as to avoid squinting over large amounts of tedious apparatus. Think also about the teaching and learning possibilities a digital edition would offer… Finally, in line with her initial discussion of open-source editions, Chiara intends to make her raw data available through a clean, simple online interface, similar to that of the Vincent van Gogh: The Letters project.
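To give a concrete flavour of what this kind of markup can look like (a minimal sketch of my own, not Chiara's actual encoding – the witness sigla here are invented for illustration), the TEI critical apparatus module records variant readings with app, lem and rdg elements. The opening line of the Odyssey, with the ancient variant πολύκροτον reported for πολύτροπον, might be encoded along these lines:

```xml
<!-- Hypothetical sketch using the TEI P5 critical apparatus module. -->
<!-- Witness sigla (#A, #B) are invented for illustration only. -->
<l n="1.1">ἄνδρα μοι ἔννεπε, μοῦσα,
  <app>
    <lem wit="#A">πολύτροπον</lem>
    <rdg wit="#B">πολύκροτον</rdg>
  </app>, ὃς μάλα πολλὰ</l>
```

A reading interface can then collapse or expand each apparatus entry on demand, which is precisely the kind of unpacking of the printed apparatus's density that a digital edition makes possible.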

Like Chiara, I'm spending the first year of my research reviewing the existing literature, using a Google Spreadsheet to record extant digital editions and their features, strengths and shortcomings. My data collection is already revealing some interesting patterns, so hopefully it won't be long before I move on to the next stage.

As ever, the whole series will be recorded and the podcasts + slides uploaded to the Digital Classicist website. If you can’t attend, you can always follow or join the conversation via the Twitter #digiclass hashtag.

Hope to see you next time!

Manuscript Digitisation: how applying publishing and content packaging theory can move us forward

CeRch's second seminar on manuscript digitisation was an insightful and beautifully presented paper by Dr Leah Tether from Anglia Ruskin University. With a background in publishing and medieval French literature, Dr Tether introduced some of the issues concerning the production of a digital edition of a manuscript from a publisher's point of view.

Leah discussed Gérard Genette's theory of paratextual spaces (whereby mixing paratextual spaces hinders the examination and interpretation of a manuscript's contents) by way of a critical analysis of the Christine de Pizan project, particularly its display of manuscript annotations and decorations. The term paratextual refers to those layers of information attached to a text within the volume itself (peritexts: titles, covers, prefaces, chapter headings, editorial commentary, etc.) and those circulating outside it (epitexts: interviews, letters, diaries, conversations, etc.).

Cartoon teasing academic books

A preliminary remark stressed how important it is that digital editions be useful and usable (which reminds me, an excellent read on the topic is Melissa Terras' paper Should we just send a copy? Digitisation, Use and Usefulness (2010), freely available from the UCL repository). Tether believes that these two key aspects become less of a problem when scholars and publishers work together. While many scholar-led projects have produced wonderful results, the scholar's eye alone is not always sufficient when it comes to layout and interaction; nor is the publisher's when it comes to content. The 'complementary' relationship of publisher and academic almost mirrors the two facets of a digital edition, whereby the integration of the original object with its virtual representation offers the most complete viewing experience. Appealing aesthetics are pointless if the product doesn't work; vice versa, a stable, functional product won't attract readers if it doesn't look good!

Genette points to different kinds of paratextual accessories which attach themselves to manuscript culture rather than print culture. Scribes use headings, miniatures and punctuation to visually organise the script on the page; similarly, readers have power over the text in that their annotations may be included in the next copy or edition of a manuscript. In truth, marginalia and medieval glosses can be considered a form of hypertext, because both sit in the 'margin' of the text.

A digital manuscript edition isn't merely a way of preserving and showing off the main content of a folio, or a good facsimile. It has to include every aspect of the manuscript's history, including its paratexts and all their intricacies. Still, a digital edition's attempt to clearly display, and guarantee reliable access to, all the nuances of a document is often what makes it difficult to use – at times more difficult than the document itself (where manipulation is permitted, of course!). Nevertheless, digital editions give readers a taste of scribal life.

Screenshot of Christine de Pizan project

Combination of peritext and epitext

The Christine de Pizan project not only perfectly underpins the issue of paratext display but, more importantly, epitomises Genette's theory of paratextual spaces. Christine de Pizan is an AHRC & BL funded project that aims to provide the reader with a comprehensive view of the editorial, decorative and production aspects of MS BL Harley 4431. In particular, it attempts to coherently display the paratext by allowing users to create their own reading space through the optional juxtaposition and manipulation of images, transcription and annotations.

This is all very well in theory, but in practice it entails an awfully complicated quest for the desired bit of text. Equally painful is the need to constantly readjust the image to the text, as the former doesn't synchronise with its textual counterpart. Another drawback is the annotation pop-up windows, which the user can only view comfortably on a large screen (and whose editorial rationale is nowhere to be found): what makes one peritextual element more valuable than another? These pop-up boxes, in fact, only partially fulfil their initial intention, as they merely acknowledge the information instead of examining it. In some cases, peritext invades epitext (images are included in the transcription – figure on the left), and again it is not clear why this was done. Finally, the zoom only provides two predefined views of the document.

However, it is important to stress that Christine de Pizan is only one of the many projects that raise these issues and, however cumbersome our experience may be, the digital edition has significantly contributed to our perception and understanding of the manuscript.

A more recent, excellent example of paratextual rendering is SharedCanvas, a data model which enables scholars to annotate a shared document using information from different repositories. In a nutshell, SharedCanvas works with layers of information in the form of pop-ups which the reader can freely strip away and manipulate. As these pop-up windows are appended to the folio, paratextual space is preserved, thus allowing the reader to experience the text as it was originally intended.


The Morgan Library M.804 demo shows some of the SharedCanvas features

A digital manuscript can itself be seen as a paratext of the original document: its frame and format have an impact on the reader's reception, experience and understanding of the text. If you think about it, online and medieval reading are very similar, in that our disorderly bouncing between hypertexts almost emulates the medieval practice of flicking through folia.

To conclude, Dr Tether emphasised how valuable a publisher's contribution to scholarly digital editions can be: publishers, in fact, are also known as 'content-packagers' for their vast experience with engaging reading spaces and with different platforms and formats, which they pack up in wonderful, seamless ways.

Click here for the event website. The Twitter hashtag for the CeRch seminar series is #cerchseminars.


Seth Denbo: DH background and Q&A session

Today’s event clash was indicative of just how many DH seminars are going on at the moment: super, but hard to manage! So, while some of us were over at the Engineering Department at the Google talk, others, including myself, stayed in Foster Court where Seth Denbo kindly agreed to talk about his work and answer any DH-related questions.


Mid 18th century embroidered pocket

Seth has a background in history and did his PhD on the history of incest and the family in 18th-century England. His DH career began with the Pockets of History project, a digital history project whose aim was to examine and photograph several hundred 17th- to 20th-century pockets (fashion items women used to wear under their clothes) to better understand their historical and social context. Thanks to high-quality digital photography, researchers made discoveries which would not have been possible otherwise.

It was this project that further stimulated Seth's interest in the Digital Humanities. He moved on to work at Reading University on a five-year AHRC & JISC funded programme and later at King's College London. He also became involved in DARIAH (a European-funded infrastructure development project) before moving back to the US, where he currently works as project coordinator at MITH.

Seth’s ongoing projects at MITH are:

  • Project Bamboo: “The North American equivalent of DARIAH”, as Seth describes it. The aim of Project Bamboo is to build an infrastructure enabling scholars to utilise and explore large-scale corpora of digital texts by providing analytical tools. While the only medium considered thus far is text, the plan is to extend to non-textual media (video, images, etc.). Project Bamboo is also collaborating with the HathiTrust, the largest digital library after Google Books but more geared towards researchers and scholars. HathiTrust and Bamboo are working together to allow scholars to easily access text, capture it (even with Zotero, which Seth is very fond of!), run it through a set of tools and share derivative research.
  • The Black Gotham Digital Archive. The project works around a recent publication by Carla Peterson, whose wish was for her readers to experience black New York in new, interactive ways.

Other MITH projects include:

  • The Shelley-Godwin Archive project, which features digital reproductions of works of Mary Wollstonecraft, William Godwin, Percy Bysshe Shelley and Mary Wollstonecraft Shelley.
  • BitCurator, a project on the preservation and curation of born-digital materials.

After sharing his career stories and ideas with us, Seth answered* questions from a few Digital Humanities MA students, including:

Q: Whom is Project Bamboo going to be made available to?

A: To as many people as possible. Bamboo (a Mellon Foundation funded project) is currently looking at sustainability models, including enterprise-level adoption by campus IT departments, while working closely with humanities scholars to directly address their research needs.

Q: What kind of text-analysis tools do you build at Bamboo?

A: Hmm, still working on that! One of the tools we built as part of Bamboo is Woodchipper, a visualisation tool which uses computational linguistics to model topics across texts. We are also working with the University of Wisconsin on WindTunnel (another Mellon-funded project), which is similar to Woodchipper but more sophisticated. Another tool, which has nothing to do with Bamboo but relates to some of the things we are trying to achieve in it, is Voyant Tools. Voyant Tools will display the analytics of an e-book like, say, Moll Flanders from Project Gutenberg and allow the user to manipulate the data.
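To give a flavour of the kind of analytics Voyant surfaces (word counts, vocabulary density, most frequent words), here is a toy Python sketch of my own using only the standard library – emphatically not Voyant's or Woodchipper's actual code:

```python
# Toy illustration of basic text analytics of the sort Voyant Tools
# displays (word frequencies, vocabulary density). My own sketch,
# not code from Voyant or Woodchipper.
import re
from collections import Counter

def text_analytics(text, top_n=5):
    """Return total tokens, vocabulary density and the top_n most frequent words."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    density = len(counts) / len(words)  # unique words / total words
    return {
        "tokens": len(words),
        "vocabulary_density": round(density, 2),
        "top_words": counts.most_common(top_n),
    }

# A stand-in snippet for demonstration purposes.
sample = "The sea, the sea and the ship sailed the wine-dark sea"
print(text_analytics(sample, top_n=3))
```

In practice you would feed in the full text of an e-book, e.g. Moll Flanders downloaded from Project Gutenberg, rather than a one-line sample.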

Q: Many texts we work on at DH are out-of-copyright material. Do DH people currently work with publishers and do you see a collaboration with publishing companies in the future?

A: Good question. One of the reasons why we work on early material is to avoid complex copyright issues (though this does not mean that derivative works aren’t covered by copyright). Publishers often protect access to scholarly information so it can be quite difficult to collaborate. It is possible but not easy.

To conclude, Seth was asked to share his own perspective on the Digital Humanities:

Digital Humanities has many layers. It shouldn’t consider itself a separate discipline but a tool embedded within the Humanities used to enrich and look at our cultural heritage from a different angle.

You can follow Seth on Twitter at @seth_denbo.

*I tried to word Seth’s answers as best as I could. If there are any mistakes or misunderstandings, please let me know or blame it on my slow-typing hands!