Metadata and Added Value

Metadata

For this project, Dublin Core will be the metadata standard. Although a standard like TEI might encode text documents with greater accuracy and complexity, several features of this project make Dublin Core sufficient, and in some respects better suited. The first is the nature of the source materials. Because the final interface for engagement will be the screen of a smartphone or tablet, there are very real limits on the kinds of materials that can effectively and efficiently add to the immersive experience provided to a user on site (i.e., standing in proximity to a geotagged location).

Not only will documents have to be relatively simple and modest in size, but it would be unrealistic to expect mobile users to engage with the depth and rigor of an academic historian. Another important reason is the project's technical requirements. Once again, considering the typical user and context of use for a mobile application, the most important capacities, aside from processing accurate metadata, are the ability to maintain semantic relationships between documents and to call up documents quickly and easily in relation to other documents and to the user's context. A further reason is ease of use at all stages: beyond ease of data entry for project staff, Dublin Core-enabled Omeka, with or without an enhanced interface, can be easily grasped by users and will help keep contributions of text or materials as straightforward as possible.

With these considerations in mind, all fifteen of the Dublin Core Metadata Elements will be used in this project. They are:

1. Title
2. Creator
3. Subject
4. Description
5. Publisher
6. Contributor
7. Date
8. Type
9. Format
10. Identifier
11. Source
12. Language
13. Relation
14. Coverage
15. Rights
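As a sketch of what a complete record might look like, the following Python dictionary fills in all fifteen elements for a hypothetical photograph. The values (title, coordinates, identifier scheme) are illustrative placeholders, not actual project data.

```python
# The fifteen Dublin Core Metadata Elements listed above.
DC_ELEMENTS = [
    "Title", "Creator", "Subject", "Description", "Publisher",
    "Contributor", "Date", "Type", "Format", "Identifier",
    "Source", "Language", "Relation", "Coverage", "Rights",
]

# A hypothetical record for an imagined photograph; every value is a placeholder.
sample_record = {
    "Title": "Storefront on Pender Street",
    "Creator": "Unknown photographer",
    "Subject": "Commerce; Street scenes",
    "Description": "Black-and-white photograph of a grocery storefront.",
    "Publisher": "Digital Chinatown project",
    "Contributor": "Community donor",
    "Date": "1923",
    "Type": "Image",
    "Format": "image/jpeg",
    "Identifier": "dc-item-0001",
    "Source": "Family photo album (scanned)",
    "Language": "en",
    "Relation": "dc-item-0002",
    "Coverage": "49.2806,-123.1044",  # geospatial coverage as "lat,lon"
    "Rights": "CC BY-SA 3.0",
}

# Since all fifteen elements will be used, a record is complete
# only when no element is missing:
missing = [e for e in DC_ELEMENTS if e not in sample_record]
```

A simple completeness check like `missing` could be run at data-entry time to ensure project staff populate every element.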

Added Value

Additional features will be included with the goal of providing opportunities for interaction and collaboration. Simplicity in design and utility will also be necessary given the technological context: smartphones and mobile computers (i.e., tablets).

Search Engine - Basic - The basic engine will remain as aesthetically simple and compact as possible, because this will not be a platform for academic research per se. Simplicity in design, particularly in the mobile app, will help create a more inviting 'user environment.' Moreover, materials will be linked to one another and to sites by geospatial data from the start.
A single field will accept text input, with an adjoining drop-down menu for choosing the metadata field to search.

Search Engine - Advanced - The advanced engine will have three text boxes, each paired with an identical drop-down menu for field choice. Between each text box and menu pair, a small drop-down menu will supply Boolean operators (i.e., 'AND,' 'OR' and 'NOT').
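A minimal sketch of how the three field/term pairs and their Boolean connectors might be evaluated against an item's metadata. The matching rule (case-insensitive substring) and the treatment of 'NOT' as "and not" are assumptions for illustration, not a specification of the final engine.

```python
def matches(item, field, term):
    """Case-insensitive substring match of term against one metadata field."""
    return term.lower() in item.get(field, "").lower()

def advanced_search(item, clauses):
    """Evaluate clauses of the form:
    [(field, term), op, (field, term), op, (field, term)]
    where each op is 'AND', 'OR', or 'NOT' ('NOT' meaning "and not"),
    applied left to right as entered in the form.
    """
    result = matches(item, *clauses[0])
    for op, clause in zip(clauses[1::2], clauses[2::2]):
        hit = matches(item, *clause)
        if op == "AND":
            result = result and hit
        elif op == "OR":
            result = result or hit
        elif op == "NOT":
            result = result and not hit
    return result

# Example: find items about commerce dated 1923, excluding a given creator.
item = {"Subject": "Commerce", "Date": "1923", "Creator": "Unknown photographer"}
found = advanced_search(
    item,
    [("Subject", "commerce"), "AND", ("Date", "1923"), "NOT", ("Creator", "Smith")],
)
```

Evaluating strictly left to right keeps the interface predictable for non-specialist users, at the cost of ignoring Boolean operator precedence.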

Geospatial Linking - Based on the GIS metadata assigned to each material, documents and photographic images will be automatically integrated into both the web interface and the mobile app interface. In map mode, thumbnail images of the materials will be 'pinned' to geotagged locations, and selecting a thumbnail will open a larger image. In AR mode, oversized versions of the images will appear suspended or overlaid on the 'real' image shown in real time on the smartphone/tablet screen.
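One way to decide which pins to show is to compare the user's position against each item's coordinates. The sketch below uses a great-circle (haversine) distance; the 100 m radius and the "lat,lon" format of the Coverage element are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_items(items, user_lat, user_lon, radius_m=100.0):
    """Return the items whose Coverage ("lat,lon") falls within radius_m
    of the user's position -- i.e. the pins to display on screen."""
    pinned = []
    for item in items:
        lat, lon = (float(x) for x in item["Coverage"].split(","))
        if haversine_m(user_lat, user_lon, lat, lon) <= radius_m:
            pinned.append(item)
    return pinned
```

In practice the radius would likely be tuned to the density of geotagged sites, so that standing on one street corner does not flood the screen with pins from neighbouring blocks.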

Timeline - At the bottom of the map or camera display, a timeline will be visible with two sliding indicators that bound the time period shown. By moving the indicators, the user will be able to control the volume of output on the map or screen. This will help convey an understanding of the passage of time and provide a way to structure engagement.
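The two slider positions reduce to a simple year range. The sketch below filters items accordingly; parsing the Date element as a leading four-digit year, and hiding undated items while a filter is active, are both assumptions for illustration.

```python
def filter_by_period(items, start_year, end_year):
    """Return only items whose Date falls within [start_year, end_year],
    the range set by the two timeline sliders."""
    visible = []
    for item in items:
        try:
            year = int(item["Date"][:4])  # assume Date begins with a year
        except (KeyError, ValueError):
            continue  # undated items are hidden while the filter is active
        if start_year <= year <= end_year:
            visible.append(item)
    return visible

# Example: sliders set to 1900-1930 hide a 1950 item and an undated one.
items = [{"Date": "1905"}, {"Date": "1923-06-01"},
         {"Date": "1950"}, {"Title": "undated scan"}]
visible = filter_by_period(items, 1900, 1930)
```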

Tagging - Users will be encouraged to add descriptive tags. Whenever material is selected (on website or app), an "Add Tags" feature will appear.

Commentary - Whenever material is selected (on website or app), an "Add Commentary" feature will be revealed alongside metadata.

Audio/Video Commentary - Whenever the 'pin' marking a geospatial location is selected, "Upload Audio" and "Upload Video" features will be revealed, which will use the smartphone's built-in audio and video recorders. For the sake of simplicity, brief instructions will indicate that the user can close the Digital Chinatown app to record an audio or video clip about the current location. (After reopening the app, either "Upload Audio" or "Upload Video" can be selected, and the user will be directed to the device's standard audio or video storage area.)

Other Resources - General historical information and other reference material will be provided, including overarching historical narratives as well as shorter narratives about key developments or figures. In AR mode, selecting an image will present a 'more information' button below the descriptive information. Selecting this button will open a pop-up box containing information relevant to the image. When the text is lengthy, the user will be able to scroll through it using a scroll bar along the right edge of the pop-up box.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License