Methods
We use a wide range of innovative technologies and sustainable digital methods, and offer specialist services, facilities and training to advance research and teaching in the humanities.
The services provided by the Digital Humanities Lab are primarily aimed at research projects within the College of Humanities, but we will consider funded projects from all parts of the University and its partners, as well as services outside this remit, if resources allow.
Capturing high resolution images of fragile manuscripts and documents allows them to be studied without risk of damage to the originals.
Humanities research often involves the study of historical documents and manuscripts. In many cases these are very fragile or in poor condition, and any handling risks degrading them further. Capturing high resolution images allows them to be studied safely, without risk of damage to the originals. Good quality images are also the starting point for transcription of documents, which in turn enables them to be encoded using TEI.
We have a selection of camera equipment available for digitisation work, including a 60 megapixel medium format camera. To enable us to work with a variety of documents, we have a copy stand suitable for flat documents, and a conservation cradle, designed to support manuscripts with fragile bindings.
3D scanning allows us to carry out measurement and analysis that is only possible digitally, and to preserve fragile original objects.
3D scanning allows us to create digital versions of physical objects. These can be made available for people to view and study when the originals are too fragile or valuable to be handled, or are simply not available. Measurements and analysis can be carried out that would only be possible digitally.
The 3D scanning equipment we have available is all portable and can be battery powered, so we are able to scan objects in situ – an important consideration when working with museums and archives that may not allow objects to be removed. We are able to scan objects both indoors and outside.
With basic 3D printing equipment we are able to produce replicas of objects that have been scanned.
The Text Encoding Initiative (TEI) enables us to make our data interoperable with other projects.
Text encoding and analysis is a key area of our research, encompassing a wide range of time periods and subjects. We publish our texts in a way that not only provides optimal searching and browsing functionality, but also ensures that our texts conform to international standards for the encoding and exchange of data.
For this purpose we use the Text Encoding Initiative (TEI) Guidelines as the basis for our XML markup, which enables us to add semantic information and metadata to our texts in a structured way and to query the data to pursue our research questions. As TEI is used widely across a large variety of disciplines, this also allows us to make our data interoperable with other projects.
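To give a flavour of what this markup and querying looks like in practice, here is a minimal, hypothetical sketch (not drawn from any of our projects) that encodes a personal name and a date in a short TEI fragment and then queries it using Python and lxml:

```python
from lxml import etree

# A minimal, hypothetical TEI fragment: one sentence with a tagged name and date.
tei_fragment = b"""
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <p>A letter sent by <persName ref="#person-001">Jane Smith</persName>
         on <date when="1852-03-14">14 March 1852</date>.</p>
    </body>
  </text>
</TEI>
"""

root = etree.fromstring(tei_fragment)
ns = {"tei": "http://www.tei-c.org/ns/1.0"}

# Query the markup: list every tagged personal name and its identifier.
for name in root.xpath("//tei:persName", namespaces=ns):
    print(name.get("ref"), name.text)
```

The same approach scales up to whole corpora: once texts are consistently encoded, queries like this can be run across every document in a project.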
As part of our commitment to this initiative we are also contributing to the development of standards for the encoding of epigraphic and papyrological materials through Charlotte Tupman's ongoing work with EpiDoc (Epigraphic Documents in TEI XML).
We use a free, open source platform for managing and displaying digital collections, designed for scholars, museums, libraries, archives, and enthusiasts.
Omeka is a free, open source platform for managing and displaying digital collections.
It has an active development community that contributes a diverse range of themes and plugins, enabling the customisation and extension of individual Omeka installations. Where required functionality isn’t available, theme templates or plugin files can be modified to meet the needs of a particular project, or new plugins can be developed and released back to the community.
Hosting Omeka on University servers allows us to maintain ownership and control over the data we produce. Researchers (including external partners) can manage collections and add items and other content to their project’s Omeka installation themselves. Items are catalogued using standard Dublin Core metadata, which can be exported in a variety of formats, making our research data openly available for reuse.
Omeka also supports OAI-PMH (the Open Archives Initiative Protocol for Metadata Harvesting), allowing metadata to be harvested automatically by other services.
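As a rough sketch of what automated harvesting looks like, the example below uses Python to request records in Dublin Core format from an OAI-PMH endpoint; the base URL is a placeholder, and the exact endpoint path will depend on how a given Omeka installation and its OAI-PMH plugin are configured:

```python
import requests
from lxml import etree

# Placeholder endpoint of an Omeka installation with OAI-PMH enabled.
BASE_URL = "https://example.org/omeka/oai-pmh-repository/request"

# Standard OAI-PMH request: list records as simple Dublin Core (oai_dc).
response = requests.get(BASE_URL, params={
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",
})
response.raise_for_status()

root = etree.fromstring(response.content)
ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# Print the Dublin Core title of each harvested record.
for title in root.xpath("//oai:record//dc:title", namespaces=ns):
    print(title.text)
```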
We use a particular set of frameworks for many of our project websites as this allows us to rapidly deploy database systems.
We use a particular set of frameworks for many of our research projects’ websites, as this allows us to rapidly deploy database systems for our academics. This includes both the public-facing front end of the database, which often provides advanced searching and filtering functionality, and an admin interface, protected behind a login screen, that allows selected users to input and edit data.
The frameworks are flexible enough to allow different types of data to be presented with varying levels of functionality and interactivity. By basing everything on a pre-built design, we can get database sites up and running as quickly as possible, with minimal bespoke web development required.
We use a bespoke framework, built on Zend Framework, designed to support multiple database “modules”. Our system allows us to have multiple pages of differing design and functionality, e.g. public and admin interfaces, which access the same data sets but remain separate. Zend Framework also allows more complex sites to be put together. We are also looking into other frameworks, including less complex ones for smaller sites that need less setup time.
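As a language-neutral illustration of the pattern described above – a public, read-only interface and a login-protected admin interface working over the same data – the minimal sketch below uses Python and Flask as a stand-in rather than our actual Zend-based framework; all routes, names and data here are hypothetical:

```python
from flask import Flask, jsonify, request, session, abort

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder secret used for session handling

# A stand-in for the shared data set that both interfaces access.
RECORDS = {1: {"title": "Example record", "keywords": ["sample"]}}

@app.route("/records")                        # public, read-only front end
def list_records():
    q = request.args.get("q", "").lower()     # simple search/filter parameter
    return jsonify([r for r in RECORDS.values() if q in r["title"].lower()])

@app.route("/admin/records/<int:record_id>", methods=["POST"])
def edit_record(record_id):                   # admin side, behind a login check
    if not session.get("logged_in"):
        abort(401)                            # reject users who have not signed in
    RECORDS[record_id] = request.get_json()
    return jsonify(RECORDS[record_id])
```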
We have several sites which use our existing framework with public and admin access. They vary in design and functionality even though they are all very similar “behind the scenes”.
We use a free and open-source content management system that is fully customisable and mobile-friendly.
Blogs are an excellent way to make your research accessible to a wider audience, and to involve collaborators in discussions.
WordPress is a free, open source content management system for blogs and websites. It has an active development community that contributes a diverse range of themes and plugins, enabling the customisation and extension of individual WordPress installations. Where required functionality isn’t available, theme templates or plugin files can be modified to meet the needs of a particular project, or new plugins can be developed and released back to the community.
Researchers (including external partners) can maintain their own blogs and websites, which are primarily used to engage the public in their ongoing research by promoting events and facilitating discussion.
Social media can also be an effective way of getting your message across and engaging others in your research. Twitter in particular has an active academic community.
Hashtags such as #twitterstorians and #askarchivists have emerged, promoting dialogue, creating strong links and offering a space to go to for assistance. Your Twitter feed can also be integrated with your blog or website to encourage your readers to get involved in the dialogue, or simply to keep them up-to-date with the latest project news.
Visualisations can show us patterns and trends in data that may not otherwise be obvious.
Humanities research data can take many forms, and is often rich and complex, filled with uncertainties and difficulties in its encoding and structure. Making sense of this data, both to answer research questions and to engage a wider audience, is a key aim in all our digital resources.
For any project using empirical data, it is important to consider the full data lifecycle. Planning database structures and encoding standards, streamlining data collection, and managing large data repositories are key to ensuring that the data used is reliable and consistent.
Interpreting this data can present significant challenges, but can also lead to unexpected insights and open up new research questions. Where possible, we make data available to the public to encourage further analysis, and we often produce interactive visualisations and gather feedback in order to understand our data from different perspectives.
The ultimate aim of any data-centred project is to further our understanding of a given research field, and visualising data for academic or public viewing is often a key part of this process. Visualisations may provide graphical summaries of data, or may build virtual reconstructions or surrogates for further study by those unable to access the original artefacts.
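As a simple, made-up illustration of a graphical summary, the sketch below uses Python and matplotlib to chart how many documents in a hypothetical dataset fall into each decade; the figures are invented for the example:

```python
import matplotlib.pyplot as plt

# Hypothetical summary data: number of catalogued documents per decade.
decades = ["1840s", "1850s", "1860s", "1870s", "1880s"]
counts = [12, 31, 58, 44, 27]

plt.bar(decades, counts)
plt.xlabel("Decade")
plt.ylabel("Number of documents")
plt.title("Documents in the collection by decade")
plt.tight_layout()
plt.savefig("documents_by_decade.png")  # write the chart out as an image file
```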
Having sustainable digital archive practices means that we keep our data both usable and useful in the future.
The web is a transient medium, with the potential for information to be irrecoverably lost. A study of academic references found that over 70% of URLs cited in academic journals are broken or no longer link to the information originally cited.
Technologies change quickly, and formats that are in use now may not be in 10 or 20 years’ time. Data archived on floppy disks, for example, would take a significant investment of time and effort to retrieve today.
Digital archiving aims for the long-term preservation of digital data, while sustainability aims to ensure that it remains usable.
The most effective way to ensure future sustainability of any resource is to consider this need in the planning stages, before it is created. A plan for preservation and sustainability is now often a requirement of applications to funding bodies, to ensure that any resources created will be findable and usable beyond the end of the funding period. Best practice involves using preservation formats rather than proprietary ones, depositing the data in a curated repository, and keeping project documentation.
Cataloguing and metadata of the deposited data are also important considerations – if your data is preserved and accessible but lacks descriptive information about the dataset, it will not reach as wide an audience as it could.
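As a small, hypothetical example of what descriptive metadata for a deposited dataset might look like, the sketch below builds a simple Dublin Core record using Python and lxml; all of the field values are placeholders:

```python
from lxml import etree

DC = "http://purl.org/dc/elements/1.1/"

# Placeholder descriptive metadata for a deposited dataset.
fields = {
    "title": "Example letters dataset",
    "creator": "Example Project Team",
    "date": "2015",
    "format": "text/xml",
    "rights": "Creative Commons Attribution 4.0",
}

record = etree.Element("record", nsmap={"dc": DC})
for name, value in fields.items():
    element = etree.SubElement(record, etree.QName(DC, name))
    element.text = value

print(etree.tostring(record, pretty_print=True).decode())
```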
We are engaging with the potential of Linked Open Data to enhance the research impact of our texts.
We are engaging with the potential of Linked Data and Linked Open Data (linked data published under an open licence) to enhance the research impact of our texts, by providing uniform resource identifiers (URIs) for people, places, events and other entities within our data. These are unique identifiers that enable us to disambiguate and provide a canonical reference point for entities. They allow other projects to link their materials to our data (and vice versa, where those projects are also using LOD) and help to make our research more discoverable by scholars working in related areas, through a combination of intentional research pathways and serendipitous discovery.
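As a small sketch of what this looks like in practice, the example below uses Python and rdflib to state that one of our (hypothetical) entity URIs refers to the same place as an entity identified by an external gazetteer; all URIs shown are illustrative placeholders rather than identifiers from our projects:

```python
from rdflib import Graph, Namespace, URIRef, Literal
from rdflib.namespace import OWL, RDFS

# Hypothetical project namespace for our own entity URIs.
PROJ = Namespace("https://example.org/project/entity/")

g = Graph()
place = PROJ["place-001"]

g.add((place, RDFS.label, Literal("Exeter")))
# Link our local identifier to an external canonical URI (placeholder ID).
g.add((place, OWL.sameAs, URIRef("https://sws.geonames.org/2649808/")))

print(g.serialize(format="turtle"))
```

Once such links exist, projects elsewhere that use the same external identifiers can discover and connect to our materials, and we can do the same with theirs.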
Using Linked Open Data can also help to increase the visibility of the wider research that the College of Humanities undertakes, and will enhance the possibilities for new interdisciplinary collaborations with partners both within and outside academia.