Manuscript-level records in Omeka

I had some time tonight to figure out how to get the manuscript-level records into Omeka.

I’d already worked out the (very simple) XPath needed to pull out the few bits of information I wanted from the TEI msDesc: siglum, title, URL to the Walters data, BookReader URL, and URL to the first thumbnail image in the ms (which I’m just using as an illustration, a placeholder for the full ms). This evening I did the final checks to make sure the output was right, then ran the XSLT against all of the msDesc files and created a .csv file for each one (there’s an Omeka plug-in that will accept CSV input, in order to bulk ingest metadata and files). I used the “insert file contents” function in TextWrangler (bless that program) to pull all of the individual CSV rows into a single document, then ingested that into Omeka. There were a few bugs, of course, but generally it was smooth.

I’ve made a few of the records live, just those that already have illumination records tagged in Omeka too. What this means is that you can now go to the Omeka site, go to “Browse Items by Tag” (http://www.dotporterdigital.org/omeka/items/tags), and click on one of the larger tags (each ms and illumination is tagged with the ms siglum; the more illustrations there are, the larger the tag will appear in the browse list). At the moment the first entry in the list will be the record for the manuscript, followed by the records for the illuminations… although I don’t know if that’s just because the manuscript records are newer.

I would like to include each illumination in the ms records (HasPart) and the ms in the illumination records too (IsPartOf), but I am not certain that’s something I’ll be able to do programmatically. Anyway, I think that is the next thing on my list. That and tagging all of the other illumination records with the siglum (so they will be browseable with the manuscript).

Walters in Omeka

This evening I spent some time thinking about the best way to organize the Digital Walters data into Omeka.

I’ve already experimented with bulk ingesting all of the illuminations (pulling all the decoDesc tags from the TEI manuscript descriptions, and creating a record in Omeka for each one). You can see these in the Omeka instance (although it’s not very pretty). I realized that, as fun as that experiment was, in order for it to be useful I need to take a step back and reevaluate how best to move forward.

I created a record for one of the manuscripts: http://www.dotporterdigital.org/omeka/items/show/2618. It’s basic, including the Title (and the siglum under Alternative Title), links to the manuscript’s home on the Digital Walters site and to its BookReader version on this site (both under Description), and, under Has Part, a link to the Omeka record for one of the illuminations that appears in that manuscript.

I want to do a few things to start out:

1) Create one record for each manuscript. I will do this using Omeka’s CSV plug-in… I’ve figured out how to pull all of the information I need from each of the TEI MS Description files; now I need to figure out how to pull all of it into one file and make that file a CSV file. I think I can use XInclude to do that (there’s a rough sketch of what I mean just after this list), but I need to experiment more than I had time to tonight.

2) I would like to have a way to automatically attach the illumination records that are already in Omeka to the new manuscript records. The link that’s in the test record is one I added by hand, but Omeka has a collection hierarchy and I need to play with that to see if there might be something in there that can be used for this purpose. What I fear is that the hierarchy only works at the level of whole collections – that is, I can say that all of the illuminations sit under all of the manuscripts, but not that some subset of the illuminations sits under some particular manuscript. I will need to find out more!
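
To spell out the XInclude idea from item 1: I have in mind a small wrapper document that pulls every manuscript description into a single tree, which the CSV-generating XSLT could then process in one pass. This is only a sketch; the wrapper element name and the file paths are placeholders, not the real Digital Walters layout:

    <manuscripts xmlns:xi="http://www.w3.org/2001/XInclude">
      <!-- one xi:include per TEI MS Description file -->
      <xi:include href="msdesc/W4.xml"/>
      <xi:include href="msdesc/W7.xml"/>
      <!-- ... and so on for the rest of the collection ... -->
    </manuscripts>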

It’s good to be back in the site. I still have an article to finish (and another already started) but I would like to make some progress on the Omeka catalog in the next month or so.

What I’m up to

I know there was a while, when we were in the throes of the Digital Walters BookReader project, when I was updating this blog every night! Then I had to slow down to go to the Digital Humanities 2012 conference and the ESU Culture and Technologies Summer School, and I had to finish an article (submitted! yay!), and I haven’t really gotten back to this. I did post a long-promised update this evening describing the process behind the BookReaders. And I’m planning to continue working on the Digital Walters Omeka – which, if it works, will be a full catalog of the works and illustrations in all of the Digital Walters manuscripts (including links both to the DW site and to the BookReaders). Hopefully Doug Emery will be willing and able to help with that as he did with the BookReaders project.

I’m currently on research leave, working on an article (maybe two, if I can swing it) on medievalists’ use of digital resources (the topic of my paper at Kalamazoo and my poster at DH2012, and related to my lecture at the ESU School and the article I submitted last month). There’s just too much interesting stuff to say about that; it’s a bit overwhelming. And then of course there is the Medieval Electronic Scholarly Alliance (MESA), which I am co-directing with Tim Stinson at NCSU… just gearing up… and my day job as well (which I’ll be back to on September 4) to keep me busy. But I will keep working here, because… well, because it’s fun.

How to set up your own Bookreader

Now that the Digital Walters BookReaders are all updated and online, I wanted to make a post documenting how you too can create BookReaders along the same lines.

The original BookReader source is available from the Internet Archive. That was the starting point. Doug Emery and I worked together to modify that code to pull most of the information needed by the BookReader out of a TEI Manuscript Description – although the same process could potentially be followed to read from some other XML file containing the appropriate information. I did have to write an XSLT to create the BookReader files themselves (one for each manuscript). Of course it would also be possible to create one BookReader file for all manuscripts to share! But I wanted the BookReaders to be something you could grab and use separately, rather than depending on server-side scripting.

You will need:

  • The BookReader package (a .zip file) from the Internet Archive
  • The modified BookReader.js file (available here)
  • A TEI Manuscript Description file for the manuscript you want to display

The .zip file from Internet Archive contains a License, a readme text file, and three folders: BookReader, BookReaderDemo, and BookReaderIA. The modified BookReader.js file available here replaces the BookReaderJSSimple.js file contained in the BookReaderDemo folder. The rest of the files in the folder should be unchanged. The BookReader folder is required for the system to work. We don’t use the BookReaderIA folder.
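
In other words, the unzipped package looks roughly like this (only the files mentioned above are shown):

    License
    readme (text file)
    BookReader/                  required by the system; leave as-is
    BookReaderDemo/
        index.html               open this in a browser to launch the reader
        BookReaderJSSimple.js    replace this with the modified BookReader.js
        (other demo files, unchanged)
    BookReaderIA/                not used for this project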

To run the sample modified BookReader.js file:

  • Replace the BookReaderJSSimple.js file in the BookReaderDemo folder with the .js file from the link above
  • Place the TEI document in the BookReaderDemo folder
  • Open the index.html document in the BookReaderDemo folder

It really is that easy. If you want to take the files I have on this site and host them yourself, please do! The relevant URLs are all formatted as above.
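
For anyone curious about the shape of the modified .js file itself: I won’t reproduce the whole thing here, but conceptually it needs to load the TEI document (a jQuery ajax call will do the trick) and then define the BookReader callbacks against the parsed XML before starting the viewer. A rough sketch of that shape only, not the actual file – the TEI file name here is a placeholder:

    // Sketch: load the TEI msDesc sitting in the BookReaderDemo folder,
    // then build the BookReader from it. The file name is a placeholder.
    $.ajax({
        url: 'ManuscriptDescription.xml',
        dataType: 'xml',
        success: function(file) {
            var br = new BookReader();
            // the callbacks described below (leaf count, page URIs, widths and
            // heights, page progression) are all defined against `file` here
            br.init();   // hand control over to the BookReader interface
        }
    });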

Now, you may want to use these files as a basis to build BookReaders for your own collection. If you have TEI Manuscript Description files you should be able to do it. The file will need to have (or you will need to be able to generate somehow):

  • Title of the book or manuscript
    • TEI/teiHeader/fileDesc/titleStmt/title[@type='common']
  • a URL to the official webpage of the manuscript (if there is one)
    • We generated this by supplying the base URL (http://www.thedigitalwalters.org/Data/WaltersManuscripts/) and then filling in the rest of the URL from the TEI, using .concat(siglum, '/data/', idno, '/'), where siglum and idno were pulled from different parts of the file
  • The number of leaves / pages
    • We generated this by counting <surface> tags – but because some of the images were duplicates (one with flap closed, one with flap open) and the set also included fore-edge, tail, spine, and head images, we had to do a bit of work to keep those from being counted.
    • var surfaces = $(file).find("surface[n!='Fore-edge'][n!='Tail'][n!='Spine'][n!='Head']")
          .not("[n*='flap closed']");
      var leafCount = $(surfaces).size();
  • An indication of whether the manuscript / book is to be read left to right or right to left (we generated this by finding the language code in the TEI and checking it against a list of right-to-left language codes)
    • var rtlLangs = ['ara', 'heb', 'jpr', 'jrb', 'per', 'tuk', 'syc', 'syr', 'sam', 'arc', 'ota'];
      // get the lang from the TEI
      var lang = $(file).find('textLang').attr('mainLang');
      // set pageProgression to right-to-left if lang is in rtlLangs
      if (jQuery.inArray(lang, rtlLangs) > -1) {
          br.pageProgression = 'rl';
      }
  • URLs of the location of the page / leaf files
    • These were generated using the file names provided in the TEI document (@url on the third <graphic> tag, which pointed to the image resolution we wanted for the page turning)
    • // BookReader's getPageURI callback returns the URI for each page image
      br.getPageURI = function(index, reduce, rotate) {
          var path = $(file).find('surface').eq(index).find('graphic').eq(2).attr('url');
          var graphicurl = url + path;
          return graphicurl;
      }
  • The height and width of page/leaf files
    • I tried many different ways to get these. In the first version of the Digital Walters BookReaders I hard-coded the height and width into the .js file (this is what is done in the demo version available from Internet Archive). Unfortunately the image files in Digital Walters are different sizes – although always 1800px on the long edge, the short edge will vary page by page, and the long edge is not always the vertical side. Eventually, the Digital Walters team very kindly generated new TEI files for me to use, with the height and width hard-coded. Ideally there would be some way to automatically generate height and width from the files themselves but if there is some way to do that, I don’t know it!
    • br.getPageWidth = function(index) {
          var widthpx = $(file).find('surface').eq(index).find('graphic').eq(2).attr('width');
          if (widthpx) {
              var width = parseInt(widthpx.replace("px", ""));
              return width;
          } else {
              return 1200;
          }
      }
    • And again for height
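
Since only the width function is written out above, here is a sketch of the height counterpart. The same assumptions apply (the value lives in a px-suffixed height attribute on the third <graphic> element), and the fallback number here is arbitrary:

    br.getPageHeight = function(index) {
        var heightpx = $(file).find('surface').eq(index).find('graphic').eq(2).attr('height');
        if (heightpx) {
            return parseInt(heightpx.replace("px", ""));
        } else {
            return 1800;   // arbitrary fallback; use whatever suits your images
        }
    }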

One last thing: because I wanted to generate many files at the same time, one per manuscript, I set up an XSLT that I could use to create those files based on information from the TEI documents. That XSLT is available here: http://dotporterdigital.org/walters/TEImsdesc2js.xsl. Aside from the body of the .js there are just a few transformations, and they are (I think) sufficiently documented.

I hope this is useful. I certainly learned a lot working on this project. Thanks to Doug Emery for all his technical help, to Will Noel for his moral support and interest in the project (and for putting me in touch with Doug!). And finally, thanks to the Trustees of the Walters Art Museum for making all of this great data available under Open Access licenses so people like me can do fun and cool things with it!

Walters Bookreaders updated!

Just a quick post to say that, thanks to Doug Emery, the Digital Walters BookReaders have all been updated to remove the bug that was causing all of the 1-up images to appear at thumbnail size. Doug noticed that, in our code, image sizes were being parsed as text and not as numbers, so the BookReader code couldn’t figure out how to process them. It was a simple change that I was able to make globally, and everything has been updated as of Wednesday night.
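
Concretely, the width and height callbacks had been handing the raw attribute strings from the TEI straight to the BookReader; the fix was to strip the unit and parse the value into a number first. A simplified sketch of the change (the full functions are in the how-to post above):

    // before (simplified): the attribute value, a string like "1800px", went straight through
    br.getPageWidth = function(index) {
        return $(file).find('surface').eq(index).find('graphic').eq(2).attr('width');
    }

    // after: strip the unit and return an actual number
    br.getPageWidth = function(index) {
        var widthpx = $(file).find('surface').eq(index).find('graphic').eq(2).attr('width');
        return parseInt(widthpx.replace("px", ""));
    }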

In the process of doing the global replace, however, I discovered that Oxygen (which I use for all of my XML and JavaScript editing) doesn’t recognize .js files when doing a replace across files (at least, it was not recognizing my .js files). So I downloaded a tool called TextWrangler (http://www.barebones.com/products/TextWrangler/), and it got the job done in no time. I’d actually heard of TextWrangler before: about 18 months ago I did a consulting gig with some folks down at Southern Louisiana University, and they were using TextWrangler for all of their find-and-replace-in-XML needs. I’m happy to report that it does work very well.

Over the weekend I’m planning to write up a post documenting how the Walters Bookreaders work, along with the code, so others can try setting up their own page-turning versions of open access page images.

I’m Back!

Happy August 1!

I had a really wonderful time at Digital Humanities 2012 (http://www.dh2012.uni-hamburg.de/), where I presented a poster on medievalists’ use of digital resources. Most of the presentations were recorded, and have been made available through the conference program (http://www.dh2012.uni-hamburg.de/conference/programme/).

There were several lectures I attended (and a couple I didn’t) that I’d recommend. On the first day, Leif Isaksen and Elton Barker presented “Pelagios: An Information Superhighway for the Ancient World” (http://lecture2go.uni-hamburg.de/konferenzen/-/k/13918) – proving once again that the Classics are at the forefront of digital work. It’s a bit unfair, really; if there were a single gazetteer for medieval place names, we could do this too! I really believe that it’s the gathering of (high-quality, referenceable) data that’s the hard part. Once you do that work, applying technologies to the data to do interesting things is a bit of icing on the cake.

Speaking of Classicists (and because I’m a big fan of papyri.info and the folks behind it), I really enjoyed Marie-Claire Beaulieu’s “Digital Humanities in the Classroom: Introducing a New Editing Platform for Source Documents in Classics”, in which Dr. Beaulieu presented an extended version of the Son of Suda Online platform (which runs papyri.info), modified to enable the editor to include the image as part of the edition (something that was talked about in the original SoSOL proposal but was never made part of that project). I’m in contact with Dr. Beaulieu and her group and will be adding this new tool to this site as soon as the final code is available! That will be fun. Unfortunately the recording of her talk is not available.

If you watch only one lecture from the conference, make it “Patchworks and Field-Boundaries: Visualizing the History of English” by Marc Alexander (http://lecture2go.uni-hamburg.de/konferenzen/-/k/13931). Dr. Alexander won the best paper award at the conference, and it was well deserved. His team mapped out the English language by topic, and created several different maps comparing the language as it was at various points in history (Chaucer’s time, Shakespeare’s time, the mid-19th century, and today). Very, very cool and interesting stuff. And it doesn’t hurt that Dr. Alexander is such a dynamic and engaging speaker.

Time for me to go to work. I will have another post later with some other suggestions for lectures to see (including some I missed myself, but intend to go back and watch now), and a bit about the lecture I presented in Leipzig. Then, it will be back to page turning and Omeka experiments!

Going on hiatus

I will be presenting at a couple of conferences in Germany later this month, and I need to get my stuff together. So I will be dedicating the time that I had been giving to this project to those presentations instead. I will be back at the end of July.

Walters BookReaders are updated! And more content in Omeka

The subject line says it all.

I’ve uploaded new versions of all the BookReaders, and also updated the index to include all the manuscripts in the collection. (Follow the “Walters BookReader” link in the top menu to find it.) The BookReaders are greatly improved; however, there is one pretty major bug: all of the 1-up images display very, very small. Doug is working on methods to fix this, but for now I would just recommend that you use only the 2-up or thumbnail views.

Also this morning, I uploaded a couple of thousand decoration images into Omeka. Some of them are now tagged (with the siglum of the ms in which they are found) and public. (Follow the “Walters Omeka” link in the top menu.) This is taking much longer than anticipated, as Omeka seems unable to accept a CSV file with more than about 90-100 rows. Considering the thousands of images I’m dealing with, loading only 100 or so at a time means a lot of effort and time. I’m going to check out the forums to see if there is a way around this, but for now… enjoy!

CSV, almost ready for import

I spent this evening finalizing the XSLT to convert the msDesc into CSV for import into Omeka. To the fields I mapped yesterday I’m adding the manuscript siglum (so we can easily find which decorations are from which manuscript), as well as the folio number on which the decorations appear (this was Doug’s suggestion; I’m hoping this will make it possible to create some kind of automatic link between the Omeka records and the corresponding folios within the context of the BookReader).

I’m generating the CSV files now. I had really hoped to be able to process everything into one huge CSV file, but I wasn’t able to get it to work (and really, that would be one huge file), so instead I’m generating one CSV file for each manuscript. There is some post-processing to be done: to keep things from getting too messy, I actually put XML tags around each row of data, and those will need to be stripped out. I may see about combining some of the CSV files, so I won’t have to strip quite as many separate files, but 200 together may be too many. We’ll see. I have a holiday tomorrow, so hopefully I will have some time to work on this, between cooking and wrangling the toddler.
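
To give a sense of what the intermediate output looks like: each wrapped row comes out something like this (the wrapper tag name and the values here are placeholders, not real Walters data; the real columns carry the Dublin Core fields plus the siglum and the folio number):

    <row>W.xxx,"Decorated initial D","fol. 24r",http://...(URL to the decoration image)</row>

Strip out the <row> and </row> wrappers and the file is ready for the CSV import plug-in.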

Digital Walters in Omeka!

The second part of my project, after getting the manuscripts all loaded into the Internet Archive BookReader, is to build a more extensive catalog for the manuscripts in Omeka. Eventually I’m going to experiment with some scholarly tools as well (I’m particularly interested in Scripto, which enables crowdsourced transcription, and Neatline, released just today, which supports temporal and geographic research), but for now I’m most interested in getting descriptive metadata out of the manuscript descriptions and into Omeka, where it can be searched and explored.

Tonight I generated a CSV file from one of the Walters manuscripts (using XSLT), and then used the Omeka CSV plugin to import that data. I wasn’t really careful about mapping the fields (I’m using extended Dublin Core in Omeka), so I’ll probably go back and take another look to make sure I’m mapping to the best DC fields. For now, I’m most interested in making sure the workflow is effective. So far, it’s great.

I’m using another plugin in Omeka that allows for hierarchical collections, so I’ve created one main collection, Digital Walters, and currently one subcollection, for manuscripts that are described according to their decoration rather than their textual divisions. I will create a second subcollection for those described according to textual divisions. I expect there are some (probably several) that have both extensive decoNote sections and msContents sections… I’ll deal with that when I get to it (ah, one benefit of experimenting: I don’t have to have all the answers before I start!).

For now, however, enjoy!

http://www.dotporterdigital.org/omeka/collections/show/2