Access 2018 Conference Day 2 Afternoon Sessions #AccessYMH

Integrating Digital Humanities Into the Web of Scholarship with SHARE: An Exploration of Requirements
Joanne Paterson

osf.io/pkvtu
Today going to talk about SHARE: ways to use it, integrating DH scholarship, emerging themes, and initial thoughts. What is SHARE? A schema-agnostic approach to aggregating diverse metadata; a community open source initiative. Scholars are doing various things - how can we bring all that together so we can see their body of work and things that are related? ARL initiative started in 2013. Aggregates metadata; looks at the research cycle and the various outputs of research. To aggregate metadata, they put out a call asking someone to help build this, answered by the Center for Open Science. (OSF looks at the research workflow, allows you to collaborate with others and share easily.) OSF is free and open project support; can work privately or publicly.

SHARE - harvests datasets from wherever they're open; metadata about scholarly research from Scholars Portal, figshare, arXiv, PLOS, DSpace@MIT, etc. 171 sources right now, over 62 million metadata records. A way to populate your own IR; can look at the research workflow and assist, link repositories, integrate with faculty profiling systems like VIVO.

Many pieces you can interact with - an API with several endpoints, like creative works (can harvest and see what the outputs of a person or institution are). The SHARE API is JSON API-compliant. There is a discovery layer built on Elasticsearch, with tags and Boolean operators for less technical folks, plus advanced search with filters, e.g. output of an institution. Recently added an Atom feed in SHARE Notify. Links researcher, funder, and institutional affiliation. People think of SHARE alongside the Open Science Framework, but how do we bring DH into this disparate web of scholarship? Investigating how scholars link to all the components of their work, and how librarians can steward and preserve projects. Mixed methods approach - survey and focus groups. Looked at DH tools and practices, especially focused on thinking about TaDiRAH. What kinds of assets have you created through your DH project? Diverse - digital audio, data, text, digital images, metadata, online exhibits, more.
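
Since the discovery layer is Elasticsearch-backed, a search against SHARE amounts to posting an ES-style query to the creative works search endpoint. A minimal sketch in Python - the endpoint URL and field names here are illustrative assumptions, not guaranteed to match the current SHARE API:

```python
import json

# Hypothetical SHARE creative works search endpoint (assumption for illustration).
SHARE_SEARCH_URL = "https://share.osf.io/api/v2/search/creativeworks/_search"

def build_share_query(terms, source=None, size=10):
    """Build an Elasticsearch-style query dict for SHARE's creative works index."""
    must = [{"query_string": {"query": terms}}]
    if source:
        # Restrict results to a single harvested source (e.g. "arXiv")
        must.append({"term": {"sources": source}})
    return {"query": {"bool": {"must": must}}, "size": size}

# Query body you would POST to SHARE_SEARCH_URL with a JSON content type
query = build_share_query("digital humanities", source="arXiv", size=5)
print(json.dumps(query))
```

The query-string syntax is what gives less technical users their Boolean operators; the same payload shape supports the advanced-search filters mentioned above.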

Discussions at the workshop: difficult to peer review DH projects because folks don't know what they look like, or aren't experts in the area; lack of recognition or reward for the final project; too many projects, not enough programs; no final product. On the 2nd day of the workshop, came up with user personas and dashboard ideas. Scholar's Story - what can we do to help tell individual scholars' stories via some metric of interest? People don't know when folks are using their materials. Held focus groups at various places; emerging themes include: creating metadata is time consuming and people don't want to do it, Dublin Core is popular, there are always rights and copyright issues, projects may be ephemeral/orphaned.
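
To make the Dublin Core point concrete, here is a minimal sketch of a DC description for a DH asset as key/value pairs - the titles and values are invented placeholders, not from any real project:

```python
# A minimal simple-Dublin-Core record for a hypothetical DH web project.
dc_record = {
    "dc:title": "Mapping Early Modern London",   # invented example title
    "dc:creator": "Example Scholar",
    "dc:type": "InteractiveResource",
    "dc:format": "text/html",
    "dc:rights": "CC BY 4.0",
    "dc:date": "2018",
}

# Even a sparse record covers the fields most needed for discovery and reuse:
# who made it, what it is, and under what rights it can be used.
missing = [e for e in ("dc:title", "dc:creator", "dc:rights") if e not in dc_record]
print(missing)  # → []
```

The time cost the focus groups complained about comes from filling in many such fields by hand for every asset a project produces.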

Some initial conclusions: the article remains prime as a research output - easy for others to talk about metrics and impact, etc. And yet think of all the products someone creates when they are involved in a DH project. DH peer review is difficult - came up time and time again; how do you talk about impact if you don't have someone who can peer review? Folks need to get credit for all parts of intellectual work. Should creating a database be as important as traditional publication? Libraries can curate and tell stories of impact. Research data management techniques can be used for these projects.


Navigating through the OER Desert with OASIS
Bill Jones and Ben Rawlins

Collaborative vision between OER Services at SUNY, library admin, and library ITS - a way to make discovery of OER easier for faculty. A single solution to point faculty to, to help increase discovery of OER in one place. (Sounds like MERLOT out of the CSU?) Started April 2018, launched Sept 2018 with 52 resources and 155,000 records. Discovery: textbooks, courses, etc. Wanted to curate for quality control - didn't want all of the OER out there, just the high quality ones with an open license.

OASIS was built on PHP, MySQL, Bootstrap 4, HTML, CSS, JavaScript, and Python. Hosted at SUNY Geneseo; content is pulled from each site using Python; uses Google Analytics. Met weekly to prioritize development and discuss progress. Used GitHub to keep track of milestones. Put it through accessibility testing. Shared with SUNY OER institutional leads prior to launch for feedback. Search interface was built from scratch. 212 institutions link to OASIS. Future plans: detailed item view, save and share collections, expand the collection, iOS mobile app.
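
Pulling records from many OER sources into one search interface means normalizing each source's field names onto a shared schema. A sketch of what that step might look like - the source names and field mappings below are invented for illustration, not OASIS's actual code:

```python
# Hypothetical per-source field mappings onto a shared record schema.
FIELD_MAPS = {
    "open_textbooks": {"book_title": "title", "site_url": "url", "licence": "license"},
    "oer_courses": {"name": "title", "link": "url", "license": "license"},
}

def normalize(source, raw):
    """Map one raw harvested record onto the shared {title, url, license, source} schema."""
    mapping = FIELD_MAPS[source]
    record = {target: raw.get(field, "") for field, target in mapping.items()}
    record["source"] = source  # keep provenance so results can be filtered by source
    return record

rec = normalize("open_textbooks",
                {"book_title": "Intro to Statistics",
                 "site_url": "https://example.org/stats",
                 "licence": "CC BY"})
print(rec["title"], rec["license"])
```

Keeping the mappings as data rather than code is what makes it cheap to add a new source when the collection expands.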

oasis.geneseo.edu


(Why Aren't We) Solving Common Library Problems with Common Systems?
May Yan and MJ Suhonos

ERM = problem solving and troubleshooting. Much is technology related; even more is not tech related and is more about questions like "we've lost access" - investigating why there's no access to something you used to have access to. Looking at systems where metadata should be the same but isn't; looking at licenses. They have to look at acquisitions records - license agreements, title lists, KBART files, emails - all distributed. Attempts over the years to gather these to make investigations easier; they use these records on a daily basis, and finding them to answer questions is always challenging. After figuring out the analysis project, time to implement a solution. The records retention schedule is meant mostly for paper records; ER records are almost always digital, and they want them handled digitally with electronic master copies - don't want to go to a paper system. Needed to solve this problem within the library.

Defined functional requirements: must integrate with the current ERM workflow, easy to use, restricted to users within the system. Metadata, search, accession/preservation/destruction based on file type. AtoM - a common library system - met the requirements to handle files and allow metadata arrangement; required some customization. Documentation became a manual. Faceted search, full text indexing, etc. worked well; allowed them to focus on relationships, which was important in terms of who was involved in a particular license, etc. Because AtoM is focused on archival arrangement, it required a change of taxonomies. Reshaped a known product to fit the problem domain.

How to move from AtoM to the next thing? During 2016-17, defined technical requirements for collections systems (document tech needs for managing library collections to provide info to inform strategic decision making) - 5 recommendations, including: SaaS (software as a service) model; choose best of breed and swap out tech as often as needed; when there is no significant value-add from library vendors, look outside library tech.

Strategic goals to drive tech and tech decisions forward. Surveyed current systems and held them against the goals to check for common problem areas that could potentially be addressed across multiple systems. More problems than expected appeared - identity management, single sign-on, indexing and querying, long term storage, etc., all reappearing in multiple systems. There's a mature ecosystem of services and tools built for WordPress that can do all of these things. What we need to do is look at the functionality requirements and ask how we can use WordPress to build applications. SEO, HTML best practices, commenting on items, etc. all come for free with WordPress. Let's take the easy stuff for free and then apply our knowledge to the harder things we would have to customize anyway.
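
Part of what makes WordPress attractive as a common platform is that it ships a REST API (`/wp-json/wp/v2/`) out of the box, so indexing and querying come for free. A small sketch of building a search request against a hypothetical library site's posts endpoint (the base URL is invented; the endpoint path and `search`/`per_page` parameters are from the WordPress core REST API):

```python
from urllib.parse import urlencode

def wp_search_url(base, terms, per_page=10):
    """Build a WordPress REST API search URL for the core posts endpoint."""
    params = urlencode({"search": terms, "per_page": per_page})
    return f"{base}/wp-json/wp/v2/posts?{params}"

# A GET to this URL returns matching posts as JSON, no custom backend needed.
url = wp_search_url("https://library.example.edu", "open access", per_page=5)
print(url)
```

The same pattern extends to custom post types, which is how library records could ride on WordPress's existing query machinery.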


Developing a Digital Initiatives Center at a University Research Library
Shannon Lucky and Craig Harkema

4 aspects of digital initiatives - metadata, digital asset management system, curation, and what they would do differently. History at USaskatchewan: since the early 1990s the library has been involved in a wide variety of digitization, metadata, etc. Funding was used to develop staff expertise. Then no sustainable funding, so couldn't offer something consistent. 2016 - an external review asked why we weren't doing more with data visualization, DH, and digital work in general, so now in a major transitional state. DI used to be in archives and special collections, focused on digitization and presentation of archives. Now more associated with library IT and moving to a campus service model focus as its own entity. A culture shift from the top is required - need senior leadership support, need sustained resources. Need to respond to the needs of the research community to demonstrate the value of DI on campuses.

Along with use of office space for grad students and coffee, the most requested DI services have had to do with metadata - setting up a database of some kind, using a digital asset management system, TEI, tagging, etc. Went to DHSI and attended a TEI class.

They don't have a metadata librarian expert on staff, especially not one dedicated to digital initiatives. Not that they don't have expertise in the area - learn as we go, collaborative effort. Expected to know this stuff at the library; people think of the library as the place to go for metadata expertise.

DAMS (digital asset management systems) - long history of web exhibits and creating databases, esp. in archives: AtoM, CONTENTdm, now Islandora. Adding a preprint is different than adding an oral history database with transcription and text analysis, etc.

Historically focused on digitizing relevant archives or working with donations, but collaborations with researchers is the future for digital initiatives on campus. Having conversations about what the work will look like, what it will take, and helping folks scope their projects will be very important. More about focusing and working with a single researcher or team, which changes the focus and definition of impact. More than digitization and making web projects. Who will do this work? Library? Grad students? Do people need space? Who has rights and copyright? Does material need to be format shifted or is it ready to work on? Goal is flexible stock templates using the DAMS that can serve most, and then add on in terms of immediate need to be impactful. Ideally would love to focus on interaction and web design, but the need for digital preservation, migration, and basic hosting is the immediate need.

Digital initiatives space - having a space for people to come to is key; the nicer the space is, the more successful you'll be. Never been properly resourced to have equipment used by faculty and students, so moving in that direction. Partly symbolic - showing the space and service is new - and partly functional, like having office hours for drop-in. The point was always to have equipment open, but the logistics couldn't be handled before.

Continuum from digital archives to DH projects; can more easily support traditional library aspects of digital scholarship via collection building. The tricky bit is negotiating where service support starts and ends. UVic's services menu helps immensely to define in-kind contributions and what it is you will provide researchers.

DS needs to be flexible while still giving an identity and some sense of what you do. Need to facilitate researchers who work with communities off campus. Need to be patient and convince leadership over the course of 15 years that this is important. Working hard on collaboration and getting people to form good relationships and partnerships across campus.

Open Badges for Demonstrating Open Access Compliance: A Pilot Project
Christie Hurrell

Experiment with open badges to see if they would act as an incentive for authors depositing articles to comply with open access. The policy requires grantees to make work available within 1 year through an open access journal or institutional open repository. Open badges are a relatively robust program, esp. among students, with metadata attached to them - who issued it, date issued, criteria for how the badge was received - so they can be displayed on LinkedIn or WordPress. Maybe badges would be an incentive to encourage grant recipients to make work available in the institutional repository. Uploading postprints is found to be a pain by authors; badges are also a no-cost way to achieve the objective of the policy, which is nice when grant funding is scarce. Looked at whether open badges had been successful in incentivizing. Predominant in psych journals, with more journals using them; a biology journal offered a badge to incentivize uploading open data and, compared with peer journals, saw an increase in deposit of open data of 20%. Data is encouraging. Small pilot study methodology: 1st was a survey of folks who had received tri-agency funding to ask whether badges would be a useful incentive; 2nd was user testing with a subset of group 1 to see if adding a badge was feasible. The short web survey started with a description of open badges and how they could be used, then asked how familiar respondents were with the tri-agency policy, whether they had used a repository before, and how much time they were willing to spend. A list of 225 funded faculty received the survey; received 48 responses (22%). The majority indicated moderate awareness; 2/3 had never used a repository. Those who knew more about the tri-agency policy were more likely to have used a repository in the past. Split among perceptions of badges. People did not want to spend much time uploading an article to the institutional repository.
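
An open badge is, at bottom, a JSON assertion tying a recipient to a badge class that records the issuer, issue date, and criteria - the metadata that makes it displayable on LinkedIn or a WordPress site. A simplified sketch loosely following the Open Badges vocabulary; the badge name, criteria text, and issuer are invented for illustration:

```python
import json

def make_assertion(recipient_email, issued_on):
    """Build a simplified Open Badges-style assertion for an OA compliance badge."""
    return {
        "@context": "https://w3id.org/openbadges/v2",
        "type": "Assertion",
        "recipient": {"type": "email", "identity": recipient_email},
        "badge": {
            "type": "BadgeClass",
            "name": "Open Access Compliance",  # hypothetical badge name
            "criteria": "Deposited an eligible postprint in the institutional repository",
            "issuer": {"type": "Issuer", "name": "Example University Library"},
        },
        "issuedOn": issued_on,
    }

assertion = make_assertion("researcher@example.edu", "2018-10-11")
print(json.dumps(assertion, indent=2))
```

Because the criteria travel inside the badge itself, anyone viewing it can verify what the recipient actually did to earn it.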

For user testing, mocked up the IR (easier than a test instance of DSpace). Given a scenario: a researcher with tri-agency funding has a postprint and is told they can upload that version of the article (help faculty usually don't get). Given a series of tasks - log in, upload, describe the item - and asked questions when finished. User experience testing results: a range in how long it took people, from under 5 minutes (filled out only required fields) to 17 minutes (read the licensing agreement, etc.); averaged 11 minutes. Describing the submission is the most time consuming part because there are so many metadata fields to fill in manually - most tasks took less than 2 minutes except the description. Heavy sighs, eye rolls, not being able to figure out the interface. What seems simple - "select a collection" - means choosing your faculty at the very beginning from a very long dropdown list. When you upload, you don't get an "upload complete" prompt. No clarity between labels on fields, descriptions of fields, and what we were asking people to type in; long dropdown lists for faculties and departments. At the end, asked what stood out for them and whether they had noticed the field asking about compliance - people were so overwhelmed with the different metadata fields that it didn't even register.

An extra field in the IR deposit process is probably not going to be an incentive to get folks to deposit items more frequently into the repository. When testing a component of a user interface, you are really testing the whole user interface.

Closing Remarks

Dave Binkley Memorial Lecture: Settler Libraries and Responsibilities in a Time of Occupation
Monique Woroniak

