Saturday, April 28, 2012

The Need for Speed

It's the time in the semester when everyone is starting to look a little frazzled. We're seeing a lot of last-ditch efforts in the library. Several students I worked with in the classroom earlier this year are back, having seemingly forgotten everything I covered. They've exhausted the open web and now need the library sources that their professors are for some reason requiring. Any library sources will do, as long as they are available RIGHT NOW. The clock is ticking, and the librarians are either Everything or Nothing.

At no other time is our array of services put through such a stark pass/fail test -- suddenly the flaws in the printer system are magnified, the online resources ever more vital. There is no time for a leisurely stroll through the stacks, or a prolonged search in the databases, or talk of a process with a feedback loop; bring up any of those things and we fall into the Nothing category.

For those of us who spend a lot of time meditating on how best to do research, this is a painful period. Likewise for those of us who obsess over providing useful, careful service and try to teach information skills at every opportunity instead of just giving away answers. The stress of deadlines shows itself in short tempers.

But in a few short weeks it will be graduation, and there will be smiles all around again. Hang in there, everyone!

Friday, April 20, 2012

Comparing Reference Resources

I think one of the most frustrating things about reference librarianship is hearing about research projects, looking at information, and getting a cursory introduction to the topics, but never being required to fully complete the research. My part in the process typically leaves me feeling half-finished. I regularly look back at research projects I worked on as a student and wish I could do them over again. How weird is that?

So I can't tell if this exacerbates the problem or relieves it, but I've taken to coming up with reference questions independently and then following up by testing them in our online reference databases. I still get only a cursory understanding of the topics, but I get to relieve my own curiosity somewhat, in addition to putting our resources through their paces. 

I've been trying to do this systematically (picture a spreadsheet) since January: I record basic reference questions translated into keywords, run them through our various reference databases as well as Google, record the top five results, and assign a (subjective) quality grade on a scale of 1 to 5. In this way I'm learning quite a lot about the strengths and weaknesses of our databases, and of Google.
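(For the curious, the spreadsheet boils down to something like the following Python sketch. The field names, the pipe-separated results, and the CSV file name are just my own conventions for illustration, not anything standard.)

```python
import csv
import os
from dataclasses import dataclass, asdict

@dataclass
class SearchTrial:
    """One row of the comparison spreadsheet (field names are my own invention)."""
    question: str      # the reference question in plain language
    keywords: str      # the keyword translation actually searched
    resource: str      # which reference database (or Google) was searched
    top_results: str   # titles of the top five hits, pipe-separated
    grade: int         # subjective quality grade, 1 (useless) to 5 (excellent)

def log_trial(trial: SearchTrial, path: str = "reference_tests.csv") -> None:
    """Append one trial to a running CSV log, writing a header the first time."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(trial)))
        if write_header:
            writer.writeheader()
        writer.writerow(asdict(trial))

# Example: logging one invented trial of a question against one resource.
log_trial(SearchTrial(
    question="Why was the 1918 influenza pandemic so deadly?",
    keywords="1918 influenza pandemic mortality causes",
    resource="Britannica Academic",
    top_results="Influenza pandemic of 1918-19 | Influenza | Pandemic",
    grade=4,
))
```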

For example, I am consistently looking for neutral, objective information, and I wonder how much Google is customizing my results these days (watch this TED Talk to become paranoid about it). Most of what comes up when I use Google is Wikipedia entries. As much as librarians like to debate the merits of Wikipedia, in the searches I've been doing, Google would have been close to useless without it.

So the exercise has quickly stopped being a comparison of our reference databases with Google and become a comparison of our reference databases with Wikipedia. I've run through that comparison to my satisfaction before, and concluded that as long as I have access to an academic library and am up to date on the scholarly resources there, I prefer the reliability and quality of the information in online reference databases to what is found in Wikipedia -- even when the Wikipedia article is longer and has more pictures. The only time Wikipedia 'wins' is when I truly can find nothing anywhere else, and frankly I'm always disappointed when something in Wikipedia is all I have to go on. It might as well be something I overheard on a bus.

Another thing I'm starting to understand is the relative strengths of our various subscription reference databases. Some of this is malleable: we can select and swap the underlying titles as needed. In other cases, though, the underlying content simply is not there, no matter which titles we choose to include. Many reference databases claim to cover all subject areas, but the only one I've found that comes close is Encyclopedia Britannica (Academic Edition), which may come as a surprise to anyone who had written it off as an anachronism.

Now that I've started, I'll probably continue this for the foreseeable future. It has benefited me in unanticipated ways: I now know the strengths of our resources and how their interfaces work. In fact, I should have started sooner!

Thursday, April 12, 2012

How to Measure the Success of a Collection

Accountability is a bit of a buzzword in higher education. Everywhere I turn, it seems to be in the air. As my job starts to include some responsibility for the library collection, the question of how to assess it has come to mind. So far I haven't found a single standard that would apply, because so much depends on local factors and needs.

But one obvious variable is usage. A collection that sees heavy use is a sign of needs being met. Books and other materials that circulate, online resources that get a lot of hits -- these numbers are all easy to generate, and we generate them regularly. Since 100% use of a collection is unrealistic, what proportion of the collection would have to be used for it to be considered "successful"? Ten percent? On the individual item level, if one particular thing is accessed or circulated 27 times during its first year while it's hot, twice the year after that, and then not at all for the next 30 years, was it a successful purchase? If a comic book circulates and a philosophy tract does not, should we purchase more comic books and fewer philosophy tracts?
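To make the arithmetic concrete, here is a toy sketch of those two usage figures; every number is invented except the 27/2/0 circulation pattern from the question above:

```python
# Invented figures for illustration only.
collection_size = 120_000        # total items held
items_used_this_year = 14_500    # items with at least one circulation or hit

usage_rate = items_used_this_year / collection_size
print(f"Share of collection used this year: {usage_rate:.1%}")  # ~12.1%

# Lifetime record of the hypothetical hot-then-cold item:
# 27 uses in year one, 2 in year two, then nothing for 30 years.
circulations_by_year = [27, 2] + [0] * 30
total_uses = sum(circulations_by_year)
active_years = sum(1 for c in circulations_by_year if c > 0)
print(f"{total_uses} lifetime uses, concentrated in {active_years} of "
      f"{len(circulations_by_year)} years")
```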

Another, almost contradictory, variable is comprehensiveness. If the library has a vast, seemingly endless collection -- resources upon resources -- that intuitively seems like a good thing: any topic someone could think to research would be covered. But inevitably, the bigger the collection, the smaller the percentage of it that can humanly be used in a given amount of time. So while comprehensiveness might at first seem like an obvious indication of success, if much of the collection goes untouched over the course of a year, dragging down the percentage accessed, it may not be a useful measure in the end.
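A couple of invented numbers show the dilution effect: hold the amount a campus can actually use in a year fixed and let the collection grow.

```python
# Invented numbers: annual usage capacity held constant while the collection grows.
items_used_per_year = 5_000
for collection_size in (50_000, 100_000, 500_000):
    rate = items_used_per_year / collection_size
    print(f"{collection_size:>7,} items -> {rate:.1%} of collection used")
# 50,000 -> 10.0%; 100,000 -> 5.0%; 500,000 -> 1.0%
```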

Other variables include quality and cost. We could create some metric, similar to the impact factor for journals, to assess the entire collection. We could use RCL, standard reviews, and other resources to gauge how good the collection is from independent perspectives. But those measures wouldn't necessarily address the needs of our local patron population. As for expense, I haven't figured out how to incorporate cost as a factor of success when the prices of library materials keep rising while the library budget stays flat.
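Purely as a sketch of what such a metric might look like -- the weights, the inputs, and the $50-per-use cutoff below are all invented -- a composite score at least forces us to decide up front what to privilege:

```python
def collection_score(usage_rate: float, quality: float, cost_per_use: float,
                     w_usage: float = 0.4, w_quality: float = 0.4,
                     w_cost: float = 0.2) -> float:
    """Toy composite score on a 0-1 scale.

    usage_rate:   fraction of the collection used in a year (0-1)
    quality:      independent rating (e.g. from RCL or reviews), rescaled to 0-1
    cost_per_use: dollars per recorded use; cheaper is better
    """
    # Turn cost into a 0-1 "affordability" term: $0/use -> 1.0, $50+/use -> 0.0.
    affordability = max(0.0, 1.0 - min(cost_per_use, 50.0) / 50.0)
    return w_usage * usage_rate + w_quality * quality + w_cost * affordability

# Invented example: modest usage, strong reviews, $12 per use -> 0.52.
print(round(collection_score(usage_rate=0.12, quality=0.8, cost_per_use=12.0), 3))
```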

An experiment I would enjoy is this: give me some money for a certain subject area. I'll build a collection to the best of my ability, so that quality and comprehensiveness are taken care of. Then I'll track usage, adjust accordingly, and record every decision. Doesn't this sound remarkably like what collections librarians already do?

And perhaps the elephant in the room is the potential for an electronic collection that shifts and re-forms every year based on the previous year's feedback and usage. True, a certain amount of continuity would be lost, but so would a lot of irrelevance. Usage would be high, with potentially no loss of comprehensiveness or quality. I don't think anyone is there yet, but could this be where academic libraries are heading?
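As a thought experiment only, the yearly reshaping could be as mechanical as the toy function below; every title, threshold, and rule in it is invented:

```python
def reshape_collection(holdings: dict[str, int],
                       candidates: list[str],
                       target_size: int) -> list[str]:
    """Toy yearly rebalance: holdings maps title -> last year's use count.

    Keep used titles (highest-use first) up to the target size, drop the
    untouched ones, and fill any freed slots from a candidate list.
    """
    kept = sorted((t for t, uses in holdings.items() if uses > 0),
                  key=lambda t: holdings[t], reverse=True)[:target_size]
    slots_left = target_size - len(kept)
    return kept + candidates[:max(0, slots_left)]

# Invented example: the unused title is dropped, one new candidate added.
holdings = {"Title A": 27, "Title B": 2, "Title C": 0, "Title D": 9}
print(reshape_collection(holdings, ["New Title E", "New Title F"], target_size=4))
# -> ['Title A', 'Title D', 'Title B', 'New Title E']
```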

Tuesday, April 3, 2012

Straightforward Work

Recently I've caught myself thinking along the lines of "Wow, it must be nice to have a job where the responsibilities are obvious." I think it was after I found myself saying No to some ideas that I might have said Yes to in the past, and wondering whether it really made a difference either way. In a field like engineering, for example, it's more obvious what is good and what is bad: if it works, it's good; if it doesn't, it's bad. What constitutes success or failure with certain ventures in libraryland is less clear. (I'm partway through writing another post about how to measure the success of a collection, which expands on this.)

Maybe it's a feature of work in the social sciences, where objectives can overlap in contradictory ways. While librarianship as an occupation includes aspects of many other jobs -- the medical and IT fields come to mind first -- there is a uniqueness to what we do that defines us. Or maybe I just need to get better at defining success, and planning how to measure it, before I start projects.

Simultaneously, I've been listening to Nickel and Dimed in the car. And I'm wondering, is work low-paid when it can be reduced to an extremely simple process? Is the fact that librarianship cannot be reduced to a simple process evidence that it deserves its categorization as a profession? Is this why work at a service desk lacks prestige -- because it's perceived to be straightforward, even if it's technically not? Is the element of uncertainty, the need for good judgment and imagination, and the lack of repetitive, simple tasks partly what distinguishes professional work?

So now I'm back to being glad that my responsibilities are not always straightforward -- it is evidence that librarianship is a profession.