Wondering How Useful Your Journal Subscriptions Are?


If you buy any journals, you know how shockingly expensive they are. So check out this article excerpt about a service called Unsub. It might help your library save a lot of money on useless subscriptions!

You can read the whole article here.

Unsub is the game-changing data analysis service that is helping librarians forecast, explore, and optimize their alternatives to the Big Deal. Unsub (known as Unpaywall Journals until just this week) supports librarians in making independent assessments of the value of their journal subscriptions relative to price paid, rather than relying upon publisher-provided data alone. Librarians breaking away from the Big Deal often credit Unsub as a critical component of their strategy. I am grateful to Heather Piwowar and Jason Priem, co-founders of Our Research, a small nonprofit organization with an innocuous-sounding name that is the provider of Unsub, for taking time to answer some questions for the benefit of the readers of The Scholarly Kitchen.

What is Unsub? 

Unsub is a tool that helps librarians analyze and optimize their serials subscriptions; it’s like cost-per-use (CPU) analysis on steroids. Using Unsub, librarians can get better value for their shrinking subscription dollar — often by replacing expensive, leaky Big Deals with smaller, more custom collections of a-la-carte titles.
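To make the "cost-per-use" idea concrete, here is a minimal sketch of plain CPU (subscription price divided by annual uses). All figures and journal names are invented for the example; Unsub's actual model layers citation, authorship, open access, and ILL data on top of this:

```python
# Bare-bones cost-per-use (CPU): subscription price / annual uses.
# Journal names and figures here are hypothetical.
journals = {
    "Journal A": {"price": 4200.00, "uses": 1400},
    "Journal B": {"price": 3800.00, "uses": 95},
}

for name, data in journals.items():
    cpu = data["price"] / data["uses"]
    print(f"{name}: ${cpu:.2f} per use")
# Journal A: $3.00 per use
# Journal B: $40.00 per use
```

A title like "Journal B" above, at $40 per use, may well cost more per article than interlibrary loan would, which is exactly the kind of signal this analysis surfaces.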

How’s it work? There are three stages:

1. Gather the data: Libraries (or consortia) upload their COUNTER reports; we take it from there. 

For each journal, we collect:

  • citation and authorship rates from researchers at the library’s institution,
  • costs of different modes of access (e.g., a-la-carte subscription, interlibrary loan (ILL), or document delivery fulfillment), and
  • rates of open access and backfile fulfillment.

This last category is where a lot of the value of the analysis comes from; we find that up to half of content requests can be fulfilled via open access, for free.

2. Analyze the data. We process all of this data into a customized forecasting model that predicts a given library’s costs and fulfillment rates for the next five years, for each journal. Libraries can customize all the model’s assumptions, reflecting different levels of risk tolerance and creating worst-case and best-case scenarios.
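As a rough sketch of what such a forecast involves, the toy model below projects one journal's price and effective cost-per-paid-use over five years under two adjustable assumptions: an annual price increase and a yearly rise in the share of requests fulfillable via open access. Every number here is invented; it only illustrates the shape of the exercise, not Unsub's actual model:

```python
# Toy 5-year forecast for a single journal. All figures are
# hypothetical assumptions, adjustable like Unsub's model inputs.
price, uses = 5_000.0, 1_200
price_growth = 0.05   # assumed 5% annual price inflation
oa_share = 0.40       # assumed share of uses fulfillable via OA today
oa_growth = 0.03      # assumed yearly growth in that OA share

for year in range(1, 6):
    price *= 1 + price_growth
    oa_share = min(1.0, oa_share + oa_growth)
    paid_uses = uses * (1 - oa_share)      # uses not covered by free OA
    cpu = price / paid_uses                # effective cost per paid use
    print(f"year {year}: price ${price:,.0f}, effective CPU ${cpu:.2f}")
```

Tightening or loosening the growth assumptions is how a library would generate the best-case and worst-case scenarios described above.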

3. Act on the data. In most cases, the models demonstrate that the Big Deal delivers great coverage, but poor value. By relying on open access, and strategically subscribing to high-value titles, libraries can often deliver around 80% of the fulfillment at 20% of the cost. Armed with this data, librarians can a) negotiate with publishers more successfully and b) support decisions to cancel, should they choose to.
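The "80% of the fulfillment at 20% of the cost" pattern can be sketched arithmetically. The numbers below are made up purely to illustrate the tradeoff: open access covers roughly half of requests (as noted above), and a handful of a-la-carte subscriptions cover a large share of the rest:

```python
# Hypothetical cancel-and-select scenario vs. a Big Deal.
# Every figure below is invented for illustration.
big_deal_cost = 100_000.0   # assumed Big Deal price; fulfills 100%
total_requests = 10_000

oa_fulfilled = 5_000        # ~half of requests met via open access, free
subscriptions = [           # a-la-carte titles kept: (price, requests met)
    (9_000, 2_000),
    (8_000, 1_000),
]

sub_cost = sum(price for price, _ in subscriptions)
sub_fulfilled = sum(requests for _, requests in subscriptions)

fulfillment = (oa_fulfilled + sub_fulfilled) / total_requests
cost_share = sub_cost / big_deal_cost
print(f"fulfillment: {fulfillment:.0%} at {cost_share:.0%} of the Big Deal cost")
# fulfillment: 80% at 17% of the Big Deal cost
```

The remaining 20% of requests would go unfilled or be routed through ILL/document delivery, which is the risk-tolerance knob each library tunes for itself.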

You can read the rest of this article right here!