Quick and Easy Reference Evaluation: Gathering Users’ and Providers’ Perspectives

Judith M. Nixon, Editor
Jonathan Miller, Guest Columnist

Imagine a reference survey instrument that is very simple to administer, requires only a pencil to fill out, and gathers data specifically on whether users get the help they need and are satisfied with reference service, and whether in the process they learn how to find and evaluate information. If you are interested, read on. This third article in the new Management column is written by Jonathan Miller about the reference survey he and his colleagues developed at the University of Pittsburgh. I first met and heard Miller at the 2006 American Library Association (ALA) Annual Conference when he presented his research at the Reference Research Forum. I found it a fascinating and practical approach to obtaining user input: a survey that combines some of the strengths of the Wisconsin Ohio Reference Evaluation Project (WOREP) instrument while overcoming some of the shortcomings of that standard workhorse. I was especially interested because this new survey was designed to build upon the data libraries gather through LibQUAL+. LibQUAL+ does not specifically measure reference quality; this survey provides a way of gathering a useful evaluation of reference service.

A future column will be about a mid-life librarian who changed careers, moving from special librarianship to academic librarianship, including a discussion of the ups and downs of such a change and its energizing effect. This is of timely interest to administrators and managers because within the next few years a large number of librarians will retire, opening up many higher-level positions. Mid-career librarians will be in a prime position to apply for these jobs, if they are willing to make some career changes.

For future columns I am looking for articles on new approaches to managing reference service, such as outreach services in non-library locations, the use of expert systems, or using instant messaging (IM) as a way of communicating with high school or college students. If you have experience with any of these, or other practical ideas for providing reference service, please e-mail me.—Editor

This research grew from my concern, as a public services librarian-manager, to find a quick and easy way to evaluate reference service. Most, perhaps all, libraries measure how much reference assistance we provide. Usually we simply count the number of transactions, sometimes classifying them by complexity (directional, ready reference, and so on) or by medium (in-person, phone, and online). Sometimes we even collect information on how long it takes to complete the transaction. As a profession we have developed a variety of definitions of what constitutes a reference transaction to help with the collection of data.1 RUSA has developed guidelines on reference behaviors that I have found very useful during training and development of reference providers (my preferred, if awkward, term for all those employees—librarians, staff, interns, and student employees—who may provide reference service).2

It is more difficult to measure the quality of the reference service we provide, particularly if we want to avoid overburdening our reference providers and our users. The LibQUAL+ survey of library service quality asks about “employees who have the knowledge to answer user questions,” “employees who deal with users in a caring fashion,” and “employees who understand the needs of their users.”3 All of these statements could relate to reference providers, but LibQUAL+ does not single out individual library services, such as reference. Instead, quite rightly, it asks for users’ perceptions of library service quality as provided by the whole library. These perceptions could be formed on the basis of interactions with any library personnel.
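Where such counts are kept electronically, the scheme described above amounts to a small structured log. The following Python sketch is purely illustrative (the category names and fields are assumptions of this example, not part of any standard definition of a reference transaction); it shows one way a desk might tally transactions by complexity and medium:

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative categories only; actual schemes vary by library.
@dataclass
class Transaction:
    complexity: str  # e.g., "directional", "ready_reference", "research"
    medium: str      # e.g., "in_person", "phone", "online"
    minutes: float   # duration, if the desk records it

def summarize(log):
    """Tally transactions by (complexity, medium) and sum total time."""
    counts = Counter((t.complexity, t.medium) for t in log)
    total_minutes = sum(t.minutes for t in log)
    return counts, total_minutes

# Example log for one desk shift.
log = [
    Transaction("directional", "in_person", 1.0),
    Transaction("ready_reference", "phone", 4.5),
    Transaction("research", "online", 12.0),
]
counts, total = summarize(log)
print(counts)  # Counter({('directional', 'in_person'): 1, ...})
print(total)   # 17.5
```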

Reference Evaluation Literature

The literature on reference evaluation is huge, and this column is not the place to review it. But two works in particular helped me understand some of the underlying issues of reference evaluation and amply repay review by anyone interested in evaluating reference service. Saxton and Richardson’s 2002 book is an excellent evaluation study.4 They also critically evaluate earlier reference research. Jo Bell Whitlatch’s 2000 book provides a very thorough practical guide to reference evaluation.5

The research most relevant to the work presented here combines the users’ and the providers’ assessments of the reference transaction. The idea of gathering data from both the user and the provider has been used by a number of researchers, including Whitlatch in 1990.6 The most prominent research in this area is the Wisconsin Ohio Reference Evaluation Project (WOREP). The Reference Transaction Assessment Instrument (RTAI) that grew out of WOREP was developed by Charles Bunge, Marjorie Murfin, and Gary M. Gugelchuk.7 WOREP and the RTAI are currently available at http://worep.library.kent.edu. The great benefits of the RTAI are that it is standardized; it is statistically valid and reliable; the results are machine readable (thus saving time and resources on data entry); and the results can be compared to those of other reference service units. But the instrument itself is daunting, especially from the provider’s perspective. As one colleague put it upon seeing a copy, “it looks like the SAT.” It takes several minutes to complete (minutes that the provider might not have and that the user may be unwilling to invest), and it asks for a level of detail about the interaction that may be greater than the user in particular is able to provide. The challenge we faced was to combine the power of evaluations from both sides of the reference desk and yet not overwhelm providers or users of the service.

Developing a Method

Rather than developing an evaluation method at a single reference desk, we brought together a variety of academic library reference operations in Pittsburgh: public and private, large and small, serving users who work in many subject areas and at many levels. Thus the questionnaire we developed has been tested in a variety of settings, all academic, but representing different libraries with varying models of reference service and different user populations. The specific libraries involved were the Hillman Library Information Desk and the Engineering Library at the University of Pittsburgh; the Hunt Library, Engineering and Science Library, and Arts Library of Carnegie Mellon University; and the Chatham College Library. We formed a working group of representatives from each library to develop and test the instrument and analyze the results.

The first step was to agree on the outcomes of reference service. Since our aim was to develop a quick and easy way of measuring the quality of our service, our outcomes were necessarily simple and general; they were certainly not detailed or specific enough to meet the requirements of formal social scientific research. They were as follows (a simple tabulation sketch appears after the list):

  1. The user gets the information they need.
  2. The user learns something about how to find information.
  3. The user learns something about how to evaluate information.
  4. The user is satisfied with the interaction.
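Once completed questionnaires come back, tabulating them against these four outcomes is straightforward. The Python sketch below is a minimal illustration under assumed conditions: one yes/no question per outcome, with keys invented for this example (they are not the wording of the actual instrument):

```python
# Minimal tabulation sketch. The outcome keys and yes/no response
# format are assumptions of this example, not the questionnaire itself.
OUTCOMES = ("got_information", "learned_to_find",
            "learned_to_evaluate", "satisfied")

def outcome_rates(responses):
    """Percentage of 'yes' answers per outcome, skipping unanswered items."""
    rates = {}
    for outcome in OUTCOMES:
        answers = [r[outcome] for r in responses if outcome in r]
        rates[outcome] = (100.0 * sum(answers) / len(answers)) if answers else None
    return rates

# Example: three completed user questionnaires (True = yes).
surveys = [
    {"got_information": True, "learned_to_find": True,
     "learned_to_evaluate": False, "satisfied": True},
    {"got_information": True, "learned_to_find": False, "satisfied": True},
    {"got_information": False, "learned_to_find": False,
     "learned_to_evaluate": False, "satisfied": False},
]
print(outcome_rates(surveys))
# e.g., {'got_information': 66.7, 'learned_to_find': 33.3, ...}
```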
