Access 2017 Conference Day 2 Notes Sessions 1-3 #accessYXE

Session 1: The UX of Online Help - Ruby Warren

Around 2015-16, during a web redesign, the team went back to basics: usability testing had already been done, but the issues were more fundamental. They interviewed different user groups about the library's website (undergrads, grad students, faculty, regional folks): what they go to it for, what they do there, and when. Users only turn to it when they have a problem that needs fixing and they cannot wait any longer (midnight, weekends, weekend midnights), so an asynchronous help option was needed. The result, internally called the Help Hub, is a series of 55 videos and text tutorials arranged by user group (U of Manitoba Libraries), built in LibGuides, with user-group-appropriate language.

After 8 months, do they use it? Everybody hates new things, but yes, they did: a spike in September, with usage following the pattern of the academic year. They're going to it, but does it work? 9 usability tests so far: ensuring users can get to the help area, that navigating to tutorials is intuitive, and that the language makes sense. 35 interviews planned (12 currently complete): participants asked to compare the content of videos (different styles), video vs. text, plus general questions about online learning (approach, resources, etc.).

Results: terminology issues, and no clear link to talk to a person ("Make an appointment" was under "About Us"; no one could find it). Some videos didn't have intuitive titles (new-item vs. known-item). The help area didn't have "help" in the title (it was called "Library Services" and then divided by user group). Video length is an issue: users won't watch a 9-minute video even when they know they need it. Top-of-page navigation remains invisible. Old content ruins literally everything: people WILL find it even when it's no longer linked from the website. The user-group divisions, though, do seem intuitive; users liked them and knew the content was relevant to them.

Interviews: preference ranking was video > text with images > interactive > plain text.
Interactive content is viewed as best for learning, but users will only use it if it's mandatory. (!) They seek out video first. Most expressed a need for video, text, and images all to be available, because different situations call for different formats: they prefer video, but if they're figuring something out in public with no headphones, they need another option. It's contextually dependent: they want video the first time, but later will want text or images.

In video, users preferred plain clickthroughs to cutesy cartoon design: unobstructed views of screencaps practically demonstrating what they need to do. More boring, but a slower speaker, clarity, and a clear screen were preferred, along with consistent audio (same voice, same kind of audio). Content should be chunked into small tasks (under 3 minutes); chunking is important because users balk at anything longer. But they also get annoyed when a video doesn't teach them everything they need to know; the exception is when you can daisy-chain videos. Text: make it Buzzfeed, i.e. easily skimmable.

A panel is being added to each tutorial video to facilitate daisy-chaining.

Q&A: there are users who search for library videos *directly* in YouTube.

Session 2: Can Link: A Linked Data Project for Canadian Theses / Cana Lien: Un Projet de Données Liées pour les Thèses Canadiennes - Rob Warren

The Canadian Linked Data Initiative (CLDI) is a loose collaborative of academic libraries in Canada, formed in 2015. The goal of CLDI is to work together to plan, facilitate, and seek funding for linked data projects and to grow skills. By involving many participants, it bridges units within libraries, works across different types of libraries, grows expertise, and bridges Canadian, US, and European linked data initiatives. CLDI consists of several working groups and a steering group.

Can Link / Cana Lien highlights the intellectual contributions of grad students in Canada through a focus on theses, demonstrating the power of linked data to surface unexpected connections spanning space and time, linking individuals and topics of study.

They asked for as many records as possible: authority data (URIs), MARC, XML, CSV, for an initial bootstrap of 5,000 records, with authority files and accessible digital objects added when possible. A Linked Open Data approach; the setup uses as many formats as possible. The 90% use case is a citation or getting the thesis to read. Fully dereferenceable URIs.

It is ontology-backed: computers and people are alike in that neither is psychic. If you are going to share data, you must tell both computers and people how things work and how you see the world. (The ontology slide is amazing.) FRBR, LOC, DOI, ORCID... (get this slide)

Infrastructure (also get this slide): daily data dumps, Twitter pushes, and GitHub ticketing integration so librarians can decide the correct action with no command-line work to resolve issues. All of the code is available on the web: Perl, Django, Python...

Core functionality: a web tool for transforming MARC data. You can upload a set of records and it will transform them according to the ontology; issues are addressed and corrected on GitHub, and the results are pushed back into the triplestore on the server, lowering the technology-expertise barrier. A very basic UI is set up, and you can run SPARQL queries.
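To make the MARC-to-triplestore step concrete, here is a minimal sketch of the kind of transformation such a tool performs. The MARC field tags follow the standard format (245 = title, 100 = main entry/author), but the base URI, the Dublin Core predicate mapping, and the function name are illustrative assumptions, not details of the Can Link ontology.

```python
# Hypothetical sketch of a MARC-to-RDF transformation step.
# The predicate mapping and base URI are assumptions for illustration.

DC = "http://purl.org/dc/terms/"

def marc_to_ntriples(record_id, fields):
    """Map a few MARC fields of a thesis record to N-Triples lines."""
    subject = f"<http://example.org/thesis/{record_id}>"
    predicate_for_tag = {
        "245": f"<{DC}title>",    # MARC 245: title statement
        "100": f"<{DC}creator>",  # MARC 100: main entry, personal name
        "264": f"<{DC}issued>",   # MARC 264: production/publication info
    }
    triples = []
    for tag, value in fields:
        pred = predicate_for_tag.get(tag)
        if pred:  # skip tags the mapping doesn't cover
            triples.append(f'{subject} {pred} "{value}" .')
    return triples

triples = marc_to_ntriples("5001", [("245", "A Study of Linked Theses"),
                                    ("100", "Doe, Jane")])
```

A real pipeline would of course parse actual MARC (e.g. with a library like pymarc) and validate against the ontology before pushing to the triplestore; the point here is only the shape of the field-to-predicate mapping.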

Dr. Shiv Nagaragan: "A thesis is a write many, read once document." There is long-term value in theses created in Canada: an exponential increase in theses and dissertations over the past 20 years, and a lot of value locked up in them in terms of spend per student, not to mention intellectual value.

Notes on data quality from MARC records: there is a lot of creativity in the MARC records, which is a concern because no one has looked at them since creation, and we see bit rot even at the data level. Qualifiers like "electronic thesis" added to the title, or a supervisor note made in the author field, make these records unretrievable and increase the cost of retrieving them; no one has the computational time to clean up this data, so STOP IT. Records differ in level of richness and detail, and MARC records contain different data than repository records.
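A simple automated check can at least flag the kinds of title pollution described above before records enter a shared dataset. This is a sketch under assumptions: the patterns below are guesses at the problems mentioned in the talk, not the project's actual validation rules.

```python
import re

def flag_title_issues(title):
    """Return data-quality flags for a MARC title string.

    Illustrative checks for the problems described in the talk:
    format qualifiers pasted into the title, and leftover ISBD
    punctuation that harms retrieval and matching.
    """
    flags = []
    if re.search(r"electronic (thesis|resource)", title, re.IGNORECASE):
        flags.append("format qualifier embedded in title")
    if title.strip().endswith(("/", ":")):
        flags.append("trailing ISBD punctuation")
    return flags
```

Checks like these are cheap to run over a full export and let humans triage the flagged records, rather than eyeballing thousands of titles.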

Next steps: enhance the UI, find a long-term home for the project, and develop better data-quality checking, including "Zamboni" processes to rebuild records with missing info like page counts.

Every library should have its own SPARQL server to maintain its theses. If you want to clone the project and stand it up at your own university, contact Rob; he will help.
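For a sense of what querying such a server looks like, here is the kind of SPARQL query a library might run against its own endpoint. The `bibo:Thesis` class and Dublin Core predicates are assumptions for illustration, not confirmed details of the Can Link data model, and the endpoint URL in the comment is a placeholder.

```python
# An illustrative SPARQL query; vocabulary choices are assumptions.
query = """
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX bibo:    <http://purl.org/ontology/bibo/>

SELECT ?thesis ?title ?author WHERE {
  ?thesis a bibo:Thesis ;
          dcterms:title ?title ;
          dcterms:creator ?author .
}
LIMIT 10
"""

# Against a local endpoint this could be sent over HTTP, e.g. (untested sketch):
# import urllib.request, urllib.parse
# url = "http://localhost:3030/theses/sparql?" + urllib.parse.urlencode(
#     {"query": query, "format": "json"})
# results = urllib.request.urlopen(url).read()
```

Because the project's URIs are fully dereferenceable, each `?thesis` result is itself a URL you can fetch for the full record.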

This is the perfect point to add ORCID iDs, but would it require permission from individuals? UBC looked for ORCID iDs and included them when found, but ran into challenges with the amount of effort required. Repository records are often richer in metadata, so they are often used instead of MARC records.


Session 3: User Experience from a Technical Services Point of View (UX for Technical Services) - Shelley Gullickson and Emma Cross

The study looked at the UX of students doing academic research online, then examined that UX from a technical services perspective.

Why? TS and UX don't generally go hand in hand, and TS finds it difficult to prioritize staff work. What work would have the biggest impact on users? What makes a difference to our students? The study was student-led and exploratory: where students search for information, how they search, and how they deal with search results in terms of what they look at. Students searched for something they needed for a real research project (nothing assigned) and were asked to do what they would normally do, even though it's weird to be watched; they were also explicitly told they shouldn't feel they had to use library resources if they normally wouldn't. They thought aloud while searching and were prompted with questions when quiet. Sessions ran to a logical end, 15-45 minutes, most 20-30. Emma took notes and observed, but video capture was also used, which was important because students worked fast. Key themes were coded:

1. Overwhelming use of a single search box: Summon or Google Scholar, not so much the catalog. Not much difference between undergrads and grad students (except that only undergrads used the catalog). Don't extrapolate; this is a qualitative study looking for patterns.

2. Popularity of the Get It button and the link resolver in general. Students were very vocal about liking this, and the link resolver presented no problems for them, although research from other university libraries shows students do stumble over it.
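Behind a Get It button, the link resolver typically receives an OpenURL: a query string carrying the citation in key-encoded-value form per the Z39.88 standard. A minimal sketch of building one, assuming a hypothetical resolver address (the `rft.*` keys themselves are standard Z39.88 journal-article keys):

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, **citation):
    """Build an OpenURL 1.0 (Z39.88 KEV) link for a journal article.

    resolver_base is a placeholder resolver address; citation kwargs
    become rft.* keys (atitle, jtitle, issn, ...).
    """
    params = {
        "ctx_ver": "Z39.88-2004",                       # context object version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal article format
    }
    params.update({f"rft.{k}": v for k, v in citation.items()})
    return resolver_base + "?" + urlencode(params)

url = build_openurl("https://resolver.example.edu/openurl",
                    atitle="Students and Discovery Layers",
                    jtitle="Journal of Academic Librarianship",
                    issn="0099-1333")
```

When resolution fails, it is usually because the source sent sparse or malformed `rft.*` metadata, which is exactly the kind of consistent vendor problem worth calling out.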

3. Metadata: looking at vs. searching for. Students examined titles, publication dates, and abstracts. With records that lack an abstract or snippet (e.g. monograph records), they get confused and skip them. Students look at these fields but don't search them: it's all keywords, all the time.

4. Fast and easy access: students generally don't go past the first page, or the first 10 results in Summon (undergrads often didn't get through 10). Many students were open about being busy: they cannot waste time and have to get through things fast. Items on reserve, in storage, or checked out, and documents that took too long to download, all tended to be skipped; even when students did pursue them, they weren't happy about it. Usually they would instead find things they felt were just as good but easier to access.

No real big surprises. But how can Technical Services staff react to these results? One response: teach students to search the catalog, give them a booklet on searching the catalog. The head and supervisor instead reacted proactively. For Summon and the link resolver: call out vendors when there's a consistent problem; we can be pushy because we know this is what students rely on. When loading ebook records: there are no keywords, but searchable summaries will surface in keyword searches, so take more time to see where Summon pulls its information from and how that information can be improved. A big change from giving students a booklet.

It's a great fit to combine UX and Technical Services: TS makes the decisions about how our stuff is found online.

Q & A: Were students asked if they had had library instruction? No. A sad-panda story about bad library instruction (searching keywords as subjects). We need to take care of the knowledge base, because it degrades as a tool; do we put more resources into it? Do we need more research on the effectiveness of link resolvers?
