The Big Map on Campus

I recently began parking in an area of campus that is new to me, and that has me thinking about maps. Am I taking the most efficient route from my car to the library? Am I taking the most efficient route to my favorite parking lot? How do I get to *those* employee spots? Really, what time of day are those *ever* empty??

Glen Horton’s recent presentation on integrating web 2.0 tools into a library website also has me thinking. We’ve gone about trying new things at MPOW (my place of work) somewhat haphazardly, and we’ve mostly been successful. I think it might be time, though, to move beyond experimenting, take a more systematic look at the tools that are out there, and make sure we’re using them in the most efficient way.

These two thoughts have come together in my head to form this week’s mashup:  the EKU Campus Map using Google Maps.  It’s publicly editable.  Give it a go:


[Embedded Google Map: View Larger Map]

(The one thing I can’t figure out, or that Google Maps doesn’t support, is the ability to rearrange the order of the placemarks as they appear in the list on the left. Maybe I’m just being too librariany again.)

NB: it’s way incomplete, but many hands make light work, yes?

Thoughts?

An Assessment of Next Generation Catalog Enhancements, Part II: The Scorecard

[See Part I: The Model]

When I started to think about how Next Generation Catalog Enhancements (NGCs) fit into this model, I quickly became overwhelmed, because, as with non-library websites, each product or enhancement exhibits a varying degree of each element. I had hoped that each product would fall easily and neatly into the petal shapes, most likely at the intersection of content and interactivity, but it was not that simple. Instead, I thought about what, to me, were the most important facets to each element, and devised a point system based on these:

Content

  • Content lives natively in system = 3 points
  • Integrated search of articles* = 2 points
  • User-generated content = 2 points [1 if additional cost]
  • Integrated OpenURL* = 1 point [0 if additional cost]
  • Links to content only = 0 points

* These obviously favor library products, but I thought it important to consider this functionality in this context.

Interactivity

  • No dead ends (faceted navigation, tag clouds) = 2 points
  • Google-like effective search = 2 points
  • Effectiveness of results (relevance, ranked properly) = 2 points
  • Personalization, persistence of user preferences within site = 2 points

Community

  • Contacts list = 2 points [3 if granular like flickr]
  • Communication among users (comments, messaging) = 2 points
  • Ability to add to others’ content (tags, wiki pages) = 2 points
  • Integrated licensing options, preferably Creative Commons = 1 point

Interoperability

  • Open API = 3 points
  • Open source or open development = 2 points
  • Uses open standards = 2 points [1 if proprietary technology is used where open technology is available]
  • Badges, feeds, or widgets available for use on other sites = 1 point
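To make the arithmetic in the scorecards below easy to follow, here is a minimal sketch (not the author’s actual tool) that encodes the rubric as data and tallies a product’s score. It simplifies by ignoring the bracketed adjustments for additional cost; the criterion labels are shortened paraphrases of the bullets above.

```python
# Rubric as data: each criterion maps to the points awarded when met.
# (Bracketed cost adjustments from the post are omitted for simplicity.)
RUBRIC = {
    "Content": {
        "content lives natively in system": 3,
        "integrated search of articles": 2,
        "user-generated content": 2,
        "integrated OpenURL": 1,
    },
    "Interactivity": {
        "no dead ends": 2,
        "google-like effective search": 2,
        "effectiveness of results": 2,
        "personalization": 2,
    },
    "Community": {
        "contacts list": 2,
        "communication among users": 2,
        "add to others' content": 2,
        "integrated licensing options": 1,
    },
    "Interoperability": {
        "open API": 3,
        "open source or open development": 2,
        "uses open standards": 2,
        "badges, feeds, or widgets": 1,
    },
}

def score(site_features):
    """Sum the points for every rubric criterion the site satisfies."""
    return sum(
        points
        for criteria in RUBRIC.values()
        for criterion, points in criteria.items()
        if criterion in site_features
    )

# Maximum possible score under this simplified rubric:
max_score = sum(p for cat in RUBRIC.values() for p in cat.values())
print(max_score)  # 31
```

Under this simplification a perfect site would score 31 points, which is why the best-of-web sites below cluster in the high twenties.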

First, let’s look at the sites I name as the best of the web: flickr, Amazon, Wikipedia, and Pandora:

Amazon.com = 26 points

  • Content: 3 + 0 + 2 + 0 = 5 points
  • Interactivity: 2 + 2 + 2 + 2 = 8 points
  • Community: 2 + 2 + 2 + 0 = 6 points
  • Interoperability: 3 + 1 + 2 + 1 = 7 points

flickr = 28 points

  • Content: 3 + 0 + 2 + 0 = 5 points
  • Interactivity: 2 + 2 + 2 + 2 = 8 points
  • Community: 3 + 2 + 2 + 1 = 8 points
  • Interoperability: 3 + 1 + 2 + 1 = 7 points

Wikipedia = 27 points

  • Content: 3 + 0 + 2 + 0 = 5 points
  • Interactivity: 2 + 2 + 2 + 2 = 8 points
  • Community: 2 + 2 + 2 + 1 = 7 points
  • Interoperability: 2 + 1 + 2 + 2 = 7 points

Pandora.com = 20 points

  • Content: 3 + 0 + 2 + 0 = 5 points
  • Interactivity: 2 + 2 + 2 + 2 = 8 points
  • Community: 2 + 2 + 2 + 0 = 6 points
  • Interoperability: 0 + 1 + 0 + 0 = 1 point

Next, let’s look at a random selection of NGC products on this scale: Encore, WorldCat.org, LibraryFind, and Scriblio.

WorldCat.org (OCLC) = 17 points

  • Content: 0 + 2 + 2 + 1 = 5 points
  • Interactivity: 2 + 2 + 1 + 2 = 7 points
  • Community: 0 + 0 + 2 + 0 = 2 points
  • Interoperability: 1 + 0 + 1 + 1 = 3 points

LibraryFind = 14 points

  • Content: 0 + 2 + 0 + 1 = 3 points
  • Interactivity: 2 + 2 + 2 + 1 = 7 points
  • Community: 0 + 0 + 0 + 0 = 0 points
  • Interoperability: 0 + 2 + 2 + 0 = 4 points

Scriblio = 14 points

  • Content: 0 + 0 + 1 + 0 = 1 point
  • Interactivity: 2 + 2 + 2 + 0 = 6 points
  • Community: 0 + 0 + 1 + 0 = 1 point
  • Interoperability: 3 + 2 + 1 + 0 = 6 points

Encore (Innovative Interfaces) = 10 points

  • Content: 0 + 1 + 1 + 1 – 1 = 2 points
  • Interactivity: 2 + 2 + 2 + 0 = 6 points
  • Community: 0 + 0 + 1 + 0 = 1 point
  • Interoperability: 0 + 0 + 1 + 0 = 1 point

In doing this comparison, I think it important to look back as well as look forward. WebVoyage, the OPAC available for the ILS used at MPOW, scores a measly 2 points (not to mention negative points for the worst product name ever).

Conclusion

The catalog for so long has been an inventory of our assets. The command line public interface and its modern successor, the web-accessible OPAC, were not designed to aid patron discovery. Next Generation Catalog Enhancements are still largely “lipstick on the pig,” meant to address the problem of patron discovery but falling very much short of being “good” web services and search engines for our users, as I have defined them here.

As a library user commented on this blog in April, “The library is not the catalog; the catalog is not the library.” Librarians have long been down this rabbit-hole of thinking that the catalog is the library. Meanwhile, the outside web world has outpaced us so effectively that popular media questions our very existence. Instead of trying on shinier (and ever-more-costly) lipstick, we should look at what the “best” of the web offers our users and become the library version of that.

One of my Wow!PAC partners in crime, John Blyberg, followed on with an excellent presentation titled “The System Redressed: Containers :: Content.” We in libraries have drifted very far from the willingness to tear down and rebuild that is necessary to create discovery systems our patrons find useful (a sentiment I somewhat poignantly think is true of library organizations and workflows as well), and this unwillingness manifests itself in our relationships with our vendors. Hence the current state of NGCs. John asserts, and I wholeheartedly agree, that we must think of our workflow in terms of the content (our bibliographic metadata) versus its container (the system patrons use to learn what we have).

An Assessment of Next Generation Catalog Enhancements, Part I: The Model

At this year’s Computers in Libraries conference, I had the pleasure and privilege of presenting at a session with Roy Tennant, Kate Sheehan, and John Blyberg, with Karen Schneider serving as our emcee. The title of our session was “From WoePAC to wow!PAC,” a phrase for which I (lamentably) claim credit.

My piece of the double session was titled, “Are we there yet? An assessment of next generation catalog enhancements” [Slides PDF]. In the presentation, I allude to an arbitrary grading system by which I scored some of today’s extant enhancement options against my self-selected “best of today’s web,” namely Amazon, flickr, Wikipedia, and Pandora.

Several people asked me about the scoring system I alluded to, and I will post that, but first, it’s necessary to understand the four criteria that I measure: Content, Interactivity, Community, and Interoperability. Content refers to the published content traditionally collected by libraries: books, journals, and the more contemporary (academic?) consumable unit, the book chapter or journal article. This image illustrates that, as time has passed and content has become electronic, content has become more complex to create and provide digitally. Computing power and capability have also increased and improved exponentially, which may or may not be causal to the increase in complexity.

Interactivity symbolizes activity between a single user and a site, including a site’s searchability: the more effective and user-friendly the search, the more interactive a site can be. A site with a high degree of interactivity engages users more effectively and for a longer period of time. A stale, text-only site would be low in interactivity; a site with an effective search and with continually changing links (navigation by facets, or that makes suggestions on where to go from here, for example) is higher in interactivity.

The third component is Community, which comes into play when users can see each other’s activity on a site. Note that community does not include personalization, which counts in the interactivity category. Power lies within community: consider a single blog post, a plain block of text constituting someone’s thoughts and opinions. Contrast this with a post that has comments, links to related posts and blogs, and tags that retrieve it among similar work, and we suddenly have a conversation. Another example is tagging: tagging by a user community is much more powerful when that community is large and varied.

The fourth property is Interoperability, or the degree to which sites work with one another, or the degree to which a site allows its content to be harvested and used on other sites or in other contexts. The barest form of interoperability began with the networking protocols–allowing interoperability of more than one computer–that made the internet possible. From library standards like MARC, Z39.50 and OpenURL that allow us to create data that can live in more than one system–think sharing bibliographic data via MARC records–to APIs that allow us to pull and remix data from many systems simultaneously, interoperability allows us to create something new. The first interoperable technologies allowed for connection; today’s interoperability allows for connection but also combination.
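To make the interoperability idea concrete, here is a minimal sketch of how an OpenURL (NISO Z39.88) packs citation metadata into plain key-value pairs that any link resolver can read. The resolver hostname and the citation values are invented for illustration; the key names (`rft.atitle`, `rft.jtitle`, `rft.issn`) come from the standard’s journal format.

```python
from urllib.parse import parse_qs, urlparse

# An example OpenURL: the metadata travels as ordinary query parameters,
# so it can live in any system that speaks HTTP. The resolver host and
# citation are made up for this sketch.
openurl = (
    "http://resolver.example.edu/openurl?url_ver=Z39.88-2004"
    "&rft_val_fmt=info:ofi/fmt:kev:mtx:journal"
    "&rft.atitle=Are+We+There+Yet"
    "&rft.jtitle=Library+Journal"
    "&rft.issn=0363-0277"
    "&rft.volume=132&rft.spage=28"
)

params = parse_qs(urlparse(openurl).query)
# Keep just the citation fields (keys prefixed "rft.").
citation = {
    key.split(".", 1)[1]: values[0]
    for key, values in params.items()
    if key.startswith("rft.")
}
print(citation["atitle"])  # Are We There Yet
print(citation["issn"])    # 0363-0277
```

Because the format is a shared standard rather than a proprietary one, the same URL works against any compliant resolver: connection first, combination second.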

Combining the four elements

The best sites on the web combine all four of these elements. Further, the more elements that a site has, the “better” it is–there are generally more features, more content, and one can use the content in different ways and contexts. As illustrated below, Project Muse is a combination of Content (journal articles) and Interactivity (search, browse). RSS feeds are a combination of the posts, comments or news stories that they contain and the interoperability that allows us to use the feeds in many different ways. Google Maps lies at the intersection of Content, Interactivity and Interoperability: it is possible to use the maps data to create various mashups like the Super Tuesday Google Maps/TwitterVision mashup or the DC Metro map mashup.

The Sweet Spot is where all four attributes intersect. For me, the sites living in the sweet spot are flickr, Amazon, Wikipedia, and Pandora. Each has rich content, is highly interactive, enjoys a large community of users who interact with each other, and allows its content to be used in other ways.

Up next: Part II: The Scorecard.

Learning 2.0: Mashups

The final two lessons in our Learning 2.0 program are on mashups. For the first “thing,” we’re to experiment with some mashup sites and blog about our experiences. There are two ways that I use Google Maps mashups: in real estate hunting and in travel planning. I’ve already written a post on “Real Estate 2.0,” but here are a few ways you can use Google Maps mashups to plan for travelling:

  • Figure out how to get from the airport to your hotel using local public transportation.
  • Create a customized map outlining all your destinations and save to “My Maps.”
  • Use a Flickr/Google Maps mashup to get photo ideas for a beautiful destination.
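As a small sketch of the second bullet: “My Maps” can import KML, the standard XML format for map data, so you can generate a destination map programmatically. The place names and coordinates below are invented examples, not from this post.

```python
# Build a tiny KML document of travel destinations that Google My Maps
# can import. Note KML coordinates are longitude,latitude order.
destinations = [
    ("Hotel", -0.1276, 51.5072),            # (name, longitude, latitude)
    ("British Library", -0.1270, 51.5300),  # example coordinates
]

placemarks = "\n".join(
    f"  <Placemark><name>{name}</name>"
    f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>"
    for name, lon, lat in destinations
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    f"<Document>\n{placemarks}\n</Document>\n</kml>"
)
print(kml)
```

Save the output as a `.kml` file and import it from the My Maps interface to get all your destinations pinned at once.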

Our final lesson is on mashup editors. Since the deadline to finish the Learning 2.0 program is less than 40 minutes away, I am not going to have time to experiment with creating my own mashup. :( But here are links to my favorite mashups created by individuals (rather than companies–I think):

Twittervision
Flickr DNA
My Facebook profile!

No mashup list in a library Learning 2.0 program would be complete without a library mashup. Go-Go-Google Gadget is John Blyberg’s winning entry to the Talis Mashing up the Library competition: it enables Ann Arbor District Library (his former employer) patrons to display the hottest or newest books in the library or to display their current holds or checkouts. Other libraries have followed suit, of course. I would be interested to know if libraries using Google Gadgets have any use stats or have done any usability tests.

Learning 2.0: processing words online

This lesson will be easy for me! In this week’s lesson, we’re to get a Google Docs or Zoho account. I’ve used Google Docs quite a bit before, for collaborative editing of essays, conference presentation proposals, to-do lists, our Christmas card list, meeting minutes and agenda. It’s an excellent way to work across the miles or across the room.

Learning 2.0: Gettin’ Wiki with it

I’m trying to get caught up with my library’s Learning 2.0 program before the grand finish on December 17. One of the weekly lessons I still need to finish is on wikis, and coincidentally, I was in the Library Society of the World meebo room earlier! Worlds collide! OK, maybe I need to clarify: one of my favorite library-related wikis is the Library Society wiki. It’s a place to find library-related silliness and the occasional curse word. OK, more than occasional, so put on your tough pants and wade in!

A wiki idea that I would love to steal, er, replicate is the presentation wiki. Someday, when I have a free few minutes…

As much as I love wikis and can see potential for their use in libraries for collaborative work–my dean created a pbwiki site for us to make collaborative changes to our mission/vision statements and our strategic plan!–I found that I have been using Google Documents a lot more.

Learning 2.0: tagging fun

What a delight to read the Learning 2.0 lessons on tagging from our Technical Services Coordinator, Margaret! Here goes; my sample search for lesson 12, exercise 1 is (what else?): harry potter.

1. Google finds “about” 125,000,000 hits (only 119,000,000 if I search as a phrase).
2. our library catalog, eQuest, finds 39 hits in a keywords search (harry AND potter).
3. when I search Harry Potter as a subject, I get, sadly, zero. Aren’t there books *about* Harry Potter that we own? Aren’t the Harry Potter books *about* Harry Potter? Trying (and failing, I think) to think like an undergraduate, here.

What parallels do I see between the catalog and tagging on the web? I’m not sure it’s fair to draw parallels between searching Google and eQuest for harry potter and the tagging found there, because I have no way of knowing where Google is finding those words in those pages. I think a fitter comparison might be between flickr and eQuest. When I search flickr for Harry Potter, I get 85,559 images–including one of my own! That’s still vastly more than eQuest. It’s still sort of apples and oranges to compare flickr images–of course I’ll find more hits among the bajillion flickr images than our million-odd records. Anyway, looking at the results that I get on flickr, I see harry and/or potter in tags, titles, descriptions and notes.

Exercise 2 has us reviewing the tags that we use on our blogs, flickr images and in our librarything catalog. I’m all over the tagging map. I’m most consistent in flickr, where I definitely want to go back and find things: my most common tags are “ak,” “b” and “daughter,” which makes perfect sense, as they are my most-frequently-photographed subjects. I use tags in librarything to give books ratings, rather than using the rating system; I’m not surprised that my biggest tags there are threestars, to_read, wishlist (where I used to keep my to_read stuff!) and scifi.
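A “biggest tags” list like flickr’s falls straight out of counting tag frequency across items. Here is a minimal sketch; the per-photo tag lists are invented to mirror the tags mentioned above.

```python
from collections import Counter

# Invented per-photo tag lists, echoing the "ak", "b", "daughter" tags
# discussed above.
photo_tags = [
    ["ak", "daughter", "park"],
    ["b", "daughter"],
    ["ak", "b", "daughter", "birthday"],
    ["ak", "daughter"],
]

# Flatten the lists and tally each tag's frequency.
counts = Counter(tag for tags in photo_tags for tag in tags)
for tag, n in counts.most_common(3):
    print(tag, n)
# daughter 4
# ak 3
# b 2
```

The most-used tags are exactly the most-photographed subjects, which is why consistent tagging pays off when you want to find things again later.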

One last tidbit on tagging: it’s common for flickr photographers to inject a little humor by using funny tags.