Gathering Feedback on Library Furniture

When the UX team moved out of the technology department to be housed centrally in administration, I knew we’d expand our scope to include projects related to physical space. I didn’t know that for two weeks at the end of the fall semester, I’d be immersed in a furniture study to quickly gather feedback from students on dozens of furniture options. It was a new thing for the UX team. It was interesting. And I thought I’d share what we did.

two people talking in a collaborative furniture setup
Preliminary user interviews about furniture in spring 2018 (that’s me on the left)

Context

We’re undergoing a massive, multi-million-dollar renovation to our Main and Science-Engineering Libraries. Part of the renovation includes new furniture for our ground floors. In early November 2018, we received dozens of chairs, a few tables, and a couple pieces of lounge furniture to pilot for a few weeks. They were placed throughout the Main Library ground floor, with tags identifying them.

We were given a tight timeline to gather as much feedback as we could from students to guide decisions. The UX team (3 of us) worked with our new assessment librarian, Lara Miller, and staff from access services, John Miller-Wells and Michael Principe.

Methods

Michael and John created a Qualtrics survey that people could take online or fill out in person. Their student workers, who are dedicated to data collection, gathered observational data (primarily counts of furniture usage) and transcribed the print survey results.

Screenshot of survey questions: letter of option, rate this option, and tell us more
Survey we provided in person and online

Lara and the UX team conducted more in-depth qualitative observations and did informal interviews with people using the furniture.

We placed stock photos the companies gave us (covering just a selection of the furniture) on boards in the space and asked people to mark their favorites with sticky notes. Quickly realizing this told us little about why pieces were marked, we also asked participants to describe the furniture in a few words.

Stock photos with instructions to tell us what you think by putting sticky notes next to your favorites and describing in 1-2 words
Stock photos with instructions for passersby to vote on their favorites and describe them

As we got close to the end of the pilot, we realized we didn’t have as much feedback as we wanted on particular choices, such as the laptop tables. We also wanted to compare some specific stools and specific chairs side by side.

To get this feedback, we posted photos of the pilot furniture on a large whiteboard and then placed the actual furniture nearby. We asked participants to mark their favorites with green sticky dots and their least favorites with red sticky dots. (We’ve since realized this would be an issue for colorblind users, so in the future we might use something like stars and sad faces instead.)

Whiteboard with votes surrounded by pilot furniture and students testing the furniture out
Pilot furniture placed around a whiteboard where students could provide feedback

Limitations

We were short on time, it was near the end of the semester, and we had a lot of furniture to get feedback on. The stock photos also didn’t match the pilot furniture exactly.

Stock photos of lounge furniture with sticky notes and descriptions like "flexible" and "outlets!"
Feedback on popular stock photos – we didn’t actually have this furniture as part of the pilot

It’s hard to get authentic feedback on this type of thing. Most of the data we collected was attitudinal rather than behavioral, and if we really want to make the best decisions for our students, we should know what they do, not just what they think. The best way to discover how students actually use the furniture and what they prefer might be an ethnographic study, but we didn’t have the time or resources for that.

A significant issue with most of our methods is that students could vote for a chair for aesthetic reasons (color, shape) without having actually used it in any real capacity. So a chair could score highly because it’s attractive but not particularly functional, especially for long periods of study.

The final decision-making process was also unclear, as it’s a negotiation between the library project team, the architects, and the vendors. We can provide the data we collected, but then it’s essentially out of our hands.

Findings

We ended up with 283 completed surveys, 606 sticky notes on the stock photos, and 573 sticky dots on the whiteboards (we removed the stickies as we went so the boards wouldn’t get overwhelmed). We also had 13 days’ worth of usage data and a handful of notes from qualitative observations.

While we had a couple hundred survey responses, each survey referred to just a single piece of furniture, so it was hard to draw conclusions (just 0-5 pieces of feedback per piece). We found the comparative data much more useful, and in retrospect would have done more of it from the get-go.
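
For anyone doing a similar dot-vote comparison, here’s a minimal sketch of how the green/red tallies can be turned into a ranked list. The options and counts below are invented for illustration, not our actual results:

```python
# Hypothetical dot-vote tallies per furniture option as (green, red) counts.
# These numbers are invented for illustration, not our actual results.
votes = {
    "Stool A": (42, 5),
    "Stool B": (18, 30),
    "Chair C": (55, 12),
    "Chair D": (9, 41),
}

# Rank options by net score (favorites minus least-favorites).
for option, (green, red) in sorted(
    votes.items(), key=lambda kv: kv[1][0] - kv[1][1], reverse=True
):
    net = green - red
    share = green / (green + red)
    print(f"{option}: net {net:+d} ({share:.0%} favorable of {green + red} dots)")
```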

We put together all the data and in December were able to present a selection of furniture we recommended and didn’t recommend for purchase.

In the chairs category, most of the recommendations were adjustable, on wheels, with arms, and with fabric seats. For stools, the ability to adjust up and down for people of different heights was especially important. The chairs we didn’t recommend tended to have hard plastic seats or metal arms, be non-adjustable, or be less comfortable for longer-term use.

whiteboard with images of stools and chairs and voting with sticky dots
Stools and chairs comparison – results of preference voting

The winning laptop tables had larger surfaces (to fit a laptop and a mouse/notebook), felt sturdier, and the legs could fit under a variety of chairs or tables.

whiteboard with images of laptop tables and voting dots on each image
Laptop table comparison – results of preference voting

Overall, we didn’t find anything groundbreaking in the data. But we do now have some solid recommendations to share with the powers that be. And we did learn a lot just through the process, which was in many ways an experiment for us:

  • how to gather data on people’s attitudes about furniture
  • how to act quickly and iterate on our process
  • that it’s possible to gather a lot of data in a short, focused amount of time
  • that a mixed methods approach works best for this type of thing (as it does for most things!)

Usability Testing Course Lectures

I’ve taught Do-it-Yourself Usability Testing for Library Juice Academy for the past four years. I’m stepping back from teaching due to other commitments, so I thought it would be a good time to share my lectures publicly. These were last updated about a year ago.

Hope these prove useful even outside the context of the course. Much of the course content is also reflected in my usability testing guide from 2014. Feel free to use, adapt, and share these videos!

Week One: Writing Tasks and Scenarios

Usability Testing – Week One – Writing Tasks and Scenarios from Rebecca Blakiston on Vimeo.

Week Two: Creating a Usability Testing Plan

Usability Testing – Week Two – Creating a Usability Testing Plan from Rebecca Blakiston on Vimeo.

Week Three: Conducting a Usability Test

Usability Testing – Week Three – Conducting the Usability Test from Rebecca Blakiston on Vimeo.

Week Four: Analyzing Results

Usability Testing – Week Four – Analyzing Results from Rebecca Blakiston on Vimeo.

Testing and Customizing the Primo Interface

The project

At the University of Arizona Libraries, we’re replacing Millennium and Summon with Alma and Primo later this month as our library services platform and primary discovery tool. Needless to say, it’s a critical piece of the library’s experience. It’s the main way people find library materials (both digital and print), access full text, request holds, and manage their accounts. So checking and improving its usability is key.

The team

The focus of our (small but mighty) UX team the past couple months has been Primo. It’s critical, so we were all in.

We hired a grad student intern from the School of Information, Louis Migliazza, who focused his summer internship on Primo usability. That was awesome. Student worker Alex Franz and content and usability specialist Cameron Wiles took turns pairing with Louis for the testing.

We also met weekly for an hour with Erik Radio, our metadata librarian and product owner for Primo. He helped us come up with solutions and worked on the backend to make improvements, contacting Ex Libris when needed.

And we met weekly with the broader Primo implementation and discovery teams, which included stakeholders from throughout the library and were led by our fearless project manager, Joey Longo. At these meetings, we regularly shared our findings and gathered feedback on our plans to address them.

The methods

Louis and Erik’s leadership over the past 6 weeks made it possible for us to conduct a ton of usability testing and make significant customizations to the interface. Building on preliminary research and testing we’d started earlier in the spring, we ultimately tested 22 tasks with 91 participants.

We used Tiny Café (our pop-up food/drink station in our Main Library lobby) to recruit passersby for testing, who were by and large undergraduate and graduate students. A handful of library staff stopped by, too. We did this on Tuesdays and Wednesdays in two-hour blocks (4 hours a week total). Ultimately, we held 27 hours of Tiny Café, intercepting 84 passersby.

Two women sitting at a laptop at a table with snacks
Tiny Café

We’d usually test 2 or 3 tasks per participant, and the tasks changed nearly every week as we learned what we needed to, made adjustments to the interface, tested again, and/or moved on to the next set of tasks.

We also recruited faculty with help from our liaison librarians. The majority of those sessions were lengthier, moderated, remote tests using Zoom.

All told, participants included 36 undergrads, 31 grad students, 7 faculty, 7 library staff, and 10 community users.
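
As a side note, for anyone wondering how per-task success rates like the ones in the findings below can be tallied, here’s a minimal sketch. The session records here are invented for illustration, not our actual data:

```python
from collections import defaultdict

# Hypothetical session log as (task, succeeded) pairs, invented for illustration.
sessions = [
    ("Find a specific item by title", True),
    ("Find a specific item by title", True),
    ("Use facets to narrow results", True),
    ("Use facets to narrow results", False),
    ("Request a hold", True),
    ("Request a hold", False),
]

tally = defaultdict(lambda: [0, 0])  # task -> [successes, attempts]
for task, succeeded in sessions:
    tally[task][0] += succeeded  # True counts as 1
    tally[task][1] += 1

for task, (successes, attempts) in tally.items():
    print(f"{task}: {successes}/{attempts} ({successes / attempts:.0%} success)")
```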

Findings

Searching and filtering

Participants were 100% successful at:

  • Finding a specific item by title
  • Searching for journals by topic or title
  • Renewing an item

One grad student said, “Looks pretty intuitive, pretty easy to navigate.” And one undergraduate said, “I think it’s easy to find what you’re looking for.”

navigation bar and search box with descriptions
Original Primo basic search interface, where participants would start their search

They were mostly successful at using filters, specifically:

  • Using facets to narrow results (84% success)
  • Finding an item at a specific library (83% success)

Some of those who weren’t successful would only use the search box to narrow their results, avoiding filters entirely. Or they would skim through their results to try to find an item like the one we were describing. (In our scenarios, we intentionally didn’t ask them to “use the filters” to avoid leading them in that direction. Rather, we asked them to “narrow your results to only books from the last ten years” and “find a book that’s in the Health Sciences Library.”) That said, we did make a number of changes to our filters along the way (described later on), hopefully making them a bit more useful.

Primo search results page
Original Primo search results page, without customizations

Signing in

We observed some issues with signing in, too. Only 83% of people successfully signed in to see the due dates of checked-out items. To address this, we changed the language in the top-right utility navigation from “Guest” to “Sign in/account.” We also removed the word “English” along with the language options, which caused confusion (and disappointment, since Spanish wasn’t an option and we’re a Hispanic-serving institution).

History icon, chat link, sign in/my account link
Customized utility navigation

Saving items

Interestingly, only 2 of 8 students successfully saved an item to their account, while 4 faculty and PhD students were successful on the first attempt. This might tell us that undergrads don’t tend to think of or use this feature; it doesn’t fit their mental model when it comes to saving articles.

When asked, they said they would use other methods outside the tool, such as bookmarking the URL or emailing themselves. We didn’t make any changes to the interface based on this, but found it interesting.

citation, email, and pin icons
Only faculty and PhD students were successful in using the pin icon to save their results

Requesting items

The most challenging task was requesting a hold; only 40% of participants were successful. This is because you have to be signed in to Primo to see the “Request” link: if you’re not signed in, the option doesn’t appear. We ran a subsequent test where we were already signed in, and 100% of participants were successful.

By default, Primo has a yellow bar that says “Sign-in for more options,” but people didn’t notice it most of the time, especially since it’s in the “View it” section of the record while people tend to be looking near the “Find it” section.

Primo item record
“View It” and “Find It” sections of Primo item records, without customizations

We found it problematic that the interlibrary loan option appears when not signed in, but the hold option does not. In contrast to requesting a hold, 90% of participants were successful in requesting an interlibrary loan using the link “Borrow this item from another library.” By requiring users to sign in first, the interface essentially hides functionality, causing a significant usability issue.

Ideally, we think, the “Request” link should always appear and, upon click, prompt the user to sign in. (This is how our current catalog works.) But since this wasn’t possible, the only thing we could really do was customize the message. So we changed it to say: “Want to place a hold? Sign in.”

Want to place a hold? Sign in.
Custom text to indicate people need to sign in to place holds

User impressions of usability

A few weeks into our testing, we decided to add a System Usability Scale (SUS) survey after each session to gather overall impressions in a more systematic way. Of the 32 people who filled it out:

  • 89% thought various functions were well integrated
  • 84% would use Primo frequently
  • 83% thought most people would learn to use Primo very quickly
  • 78% thought Primo was easy to use
  • 70% were very confident using Primo

graph representing SUS results

Needless to say, we were pretty happy with these numbers.
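
We reported per-question agreement above rather than the composite score, but for anyone who wants the standard 0-100 SUS score, the published scoring formula is straightforward. A minimal sketch, with one participant’s invented responses:

```python
def sus_score(responses):
    """Compute the standard 0-100 SUS score from ten 1-5 responses.

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response). The sum is scaled by 2.5.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    )
    return total * 2.5

# One participant's hypothetical responses (1 = strongly disagree, 5 = strongly agree).
print(sus_score([5, 2, 4, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```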

Other customizations

Changing terminology

We found that some of the default terminology wasn’t ideal. For example, one grad student said, “Loans? I think about money…I just don’t like the word ‘loans.’”

Here are a few terms we updated:

  • Item in place > In library
  • Full text not available > Available to request
  • Fetch item > Find a specific item
  • Expand my results > Include results beyond UA Libraries
  • Loans > Checked-out items

Updating the homepage search box

To mimic existing Summon and Catalog options while focusing on the most common search behavior, we:

  • Created drop-down options for title, author, and call number
  • Put an Advanced search link in the bottom right
  • Added buttons to take users to the most common alternate starting points: See all databases and Find a journal

search box with drop-down menu and buttons linking to journal and databases
Simplified homepage search box

Removing less helpful or redundant elements

To make things as intuitive as possible, we:

  • Removed the vendor product name
  • Removed the sign-in banner for users on campus
  • Removed redundant title attributes from the tile menu
  • Removed Personalize option (redundant with subject filters)
  • Reduced and re-ordered the Send to options

Options for citation, email, permalink, print, refworks, endnote
Customized “Send to” options and ordering

Making search results and filters more intuitive

To reduce cognitive load, we:

  • Moved filters from the right to the left
  • Re-ordered filters so that the most-used options are at the top
  • Changed “Availability” to “Show only” as a filter category
  • Customized the icons to be more consistent and recognizable
  • Removed filters that were confusing, less useful (e.g. “Call number range”), or redundant (e.g. “Location” in favor of “Library”)
  • Reduced number of items displaying by default underneath each filter category

set of Primo filters
A preview of our filter customizations

Search results with custom icons
Customized icons for book, article, and multiple versions

Putting content at point of need

To help users along, we:

  • Added a Locate button to print records that takes people to the call number guide
  • Put search tips below the Advanced search option as well as on the Library search landing page
  • Added suggested librarians for when people search particular disciplines
  • Added a link to WorldCat above the search box

links to Library search, Find a database, Find a specific item, Browse, Search Worldcat
Customized primary (tile) menu

Next steps

This is a huge change for our users, but we’re feeling pretty good about where we’re at. We did a lot of testing and a lot of tweaking, and participants were overall successful at completing the primary tasks we’d identified.

We go live on July 20(ish), and are sure to discover more UX concerns once we have people using the system in their daily lives.

We’ll continue to gather feedback and make adjustments as needed.

Perhaps one undergraduate said it best: “I think I liked the old interface better because I’m comfortable with it…I’m sure once I get used to [Primo], it will be ok.”

Usability Testing: a Practical Guide for Librarians

And in other news… I published a book! It came out at the end of September, and I’m hoping it will be useful for anyone who’s interested in dabbling in usability testing for the first time or leveling up their skills. Whether you’re on a shoestring budget with little staffing or you have a larger web team that’s committed to improving the user experience, this should be a worthwhile read.

Check it out. I’d love to hear your feedback. Now available on Amazon.

Book cover for Usability Testing: a Practical Guide for Librarians

Do-it-Yourself Usability Testing: An Introduction

Yesterday, I presented a webinar sponsored by the Arizona State Library, Archives, and Public Records. They organize professional development for library workers across the state. This was a great opportunity to share an overview of how to conduct usability testing easily and on a budget.

We had a few technical issues at the start, and some of my slides came out funky or incomplete, but other than that I think it went well.

Webinar recording (1 hour)


Rebuild of the Center for Creative Photography Website

Yesterday I presented “Extreme Website Makeover: Center for Creative Photography Edition” with my colleagues Ginger Bidwell & Josh Williams at the annual Arizona Library Association (AZLA) conference in Phoenix.

I started off by discussing who was involved, how we communicated with stakeholders, what user research we conducted (survey, personas, remote card sorting), our competitive analysis, and how we developed a purpose, voice & tone for the new website. Ginger discussed all things Drupal, including how we built structured content and why it’s so important, and Josh discussed the visual design decisions and how & why we went with a responsive design.

The audience seemed very interested in the process, and for many of them, working in public libraries across the state, this was the first time they had heard of techniques like personas, card sorting, structured content, and responsive design.


Gathering user input the easy way

I’m the product manager for our library website, and over the past few months I’ve learned that it’s actually pretty easy to gather user input. It’s data that’s extremely important and should guide your website decisions, but so many of us neglect to collect it in any frequent, systematic way, often due to fears of time and budget constraints.

Well, it doesn’t have to be that way, especially if you’re fortunate enough to have a physical location & therefore your primary audience all around you. Here are two methods for gathering quick-and-dirty user input:
5 Minute Intercept Usability Testing

Spend 20 minutes coming up with the key tasks you want to test and scenarios for testing them (I recommend Steve Krug’s Rocket Surgery Made Easy for a quick read on this). Grab a laptop and some candy bars, and preferably a colleague to take notes, then go out into the world to solicit volunteers. If you’re on a university campus, it’s super easy to find students willing to trade 5 minutes of their time for a candy bar (king sized, of course). The student union after lunch won’t ever fail. I’ve been able to conduct 8 tests in the course of an hour or two, and learned so much in the process.
10 Minute Card Sorting

This method is often used to guide an entire website’s navigation, but it can also be used to test sections of your website. It’s a great method for testing your own assumptions about how your audience thinks about your content; you can use it to come up with an organizational structure that makes sense and labels that are more meaningful. Don’t get bogged down coming up with perfect descriptions of content or the ideal labels you want to test. Treat it as an iterative process.

This week we’ve been testing all of our “help” content. We began the first round with open card sorting using 28 cards; more were added when we realized not every type of content was captured, and others were taken away when we realized they were confusing. We then added labels and now do a blended version of “open” and “closed” sorting, where we show participants the labels after they’ve established an organizational structure to see which labels make sense given that structure.

Similar to usability testing, you can find users willing to trade a few minutes of their time for a candy bar. We’ve actually found that students enjoy the activity, too. They like the library and like knowing they’re contributing to improving our website.
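
If you want to go beyond eyeballing your card sort results, one common analysis is a co-occurrence count: tally how often each pair of cards lands in the same group across participants. A minimal sketch; the card labels and sorts below are invented for illustration, not our actual 28 cards:

```python
from collections import Counter
from itertools import combinations

# Each participant's sort: a list of groups, each a set of card labels.
# These sorts are invented for illustration.
sorts = [
    [{"renew a book", "pay a fine"}, {"book a room", "find a study space"}],
    [{"renew a book", "book a room"}, {"pay a fine", "find a study space"}],
    [{"renew a book", "pay a fine", "find a study space"}, {"book a room"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Cards that are frequently grouped together likely belong under one label.
for (a, b), n in pair_counts.most_common():
    print(f'"{a}" + "{b}": grouped together by {n} of {len(sorts)} participants')
```
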
I’d like to hear if others are conducting similar sorts of user testing on a dime. Intercept usability and card sorting are the two I’ve had success with. We have also managed to recruit faculty members to conduct some more formal testing later this month (we offered them lunch). I hope to continue to conduct testing on a regular, systematic basis. In an ideal world, all of our serious website decisions should be based on user input.