Archive for the ‘teaching’ Category

I was both flattered and excited to be interviewed on the T is for Training podcast.  I first met the host, Maurice, when he and I both attended a session at a conference and I cyberstalked him by sending him a Twitter message saying that I was sitting right behind him.

We looked back on 2009, looked forward to 2010, and looked *way* forward to the future of technology and how it’s going to impact libraries.  There was, of course, lots of talk about training, instruction, classes, and the perils of the audience’s eye-roll.

Having never listened to audio interviews of myself before, I discovered that I apparently really enjoy the word “really”.  I hope you enjoy the podcast.  Really.

As a Reference Librarian, I’ve been thinking about cataloging a lot lately.  My biggest fear was confirmed while having lunch with a friend, who is wrapping up her MLIS degree with my alma mater, Florida State University.

She joined one of our library’s catalogers and me for lunch to discuss her internship at my library, where she will be learning cataloging under his direction.  While we were in the middle of discussing the challenges of cramming the whole scope of cataloging into five months, I brought up RDA.

She had never heard of it.  I asked about her understanding of FRBR.  “What’s that?”

I knew for a fact that she had taken an introductory class on the organization of information, as well as a class on indexing and abstracting.  Somewhere in those courses, I expected her to have learned about these emerging standards.

Imagine the look of horror that spread across her face when we explained what they were.  “But what if I had gone into a job interview and someone had asked about RDA or FRBR?”  Exactly.

MLIS programs should be at the leading edge of exploring emerging trends in our field.  They should be preparing their students for the rapid change that we experience in libraries, and equipping them to evaluate and make tough decisions regarding formats, standards, and techniques of description.

I’m not picking on FSU alone here.  In my time at VSU, I’ve served on and/or chaired several search committees.  The number one reason that candidates aren’t selected is that they lack experience, or reveal their ignorance in an interview.  It is my opinion that since librarianship is a practical science, it should be practiced by its students, at least in the form of a mandatory internship.

And no, I’m not talking about folksonomies and tagging here.  Although they are fun and very useful, they are no replacement for standards-based high-quality metadata.  I would never want my library’s catalog to look like my personal photo collection–with spotty tagging and organization at best!  Reference librarians, library staff, other catalogers and users all make use of high-quality cataloging metadata for locating the specific items that they need.  All it takes is a single mistake in a cataloging record to ensure that an item is lost to its user forever.  Catalogers:  take it from a Reference Librarian–what you do is important.

So, my plea is this:

If you teach in an MLIS program, stay in touch with librarians to know what your students should be learning to be prepared for the real world.  Look at the entry-level job ads that are being posted, and ask if the average graduate of your program will leave with the skills necessary to do that job.  Look at the advanced-level job ads that are being posted, and ask if your students are being instilled with the intellectual curiosity and passion that will lead them in that direction.  Make internships required for all your students, so they can at least get a taste of what librarianship is really like.

If you are a cataloger, constantly strive to improve what you do, and stay in touch with the cataloging community.  Think about the long-term effects of your description choices–after we’re long gone, our bib records will remain, either informing or misleading the next generation.  And please pass along your skills and passion to the next generation by offering mentorships and internships.

If you do it for no one else, then do it for our users.  After all, they are the ones who truly suffer if tomorrow’s catalogers are unskilled, and that perfect resource can’t be found.

Library Instruction Assessment is one of the most annoying parts of doing/coordinating library instruction.  Tallying paper surveys and grading quizzes is a pain in the butt.  So here at MPOW, we were looking for a way to assess our instruction skills without wasting a bunch of time.  We now have in place a pre- and post-test that can be tallied in about three minutes.  Here’s how you can do it at your library:

  1. Figure out what you want to measure. We polled the reference librarians to see what they thought were the most important skills for students to walk out of the session with.  Those are the things we test.  Thanks to Emily for doing this!
  2. Create a form for the pre- and post-test. Ours is here.
    1. We ask them for their ID numbers so that we can match pre- and post-test scores.  Since the librarians do not have access to the students’ ID numbers–and since the professors never see our assessment results– there is no fear of retribution for “bad scores.”  After all, bad scores just mean we’re doing a bad job of imparting the information!
    2. The form (invisibly) timestamps each submission, so you can tell the pre-tests from the post-tests by the time that each entry was submitted.
  3. Have the results dump into a web-based text file. We use the ProcessForm script by MindPalette, which is free.  A big thank-you to Andy in Web Services, and Sherrida and Keith in Automation for making this happen.
  4. Create a spreadsheet for analyzing the data. Ours is here.
    1. The spreadsheet’s calculations are based on the number of students taking the pre- and post-tests.  So if you had 17 students take the pre-test and 19 students take the post-test, you can plug those numbers in to see the percentage correct for the number of students taking each test.  This way, if you have stragglers, you’ll still get appropriate percentages.
  5. Plug the data from the web-based text file into the spreadsheet, and voila!
Library Instruction Assessment tool output

My class of 19 "Intro to Communications" students.
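The tallying in steps 4 and 5 can be sketched in a few lines of code.  This is a minimal, hypothetical example (not our actual spreadsheet): it assumes each line of the results file holds a timestamp, an ID, and one answer per question, and that pre-tests can be split from post-tests by a cutoff time during the session.  The answer key and data here are made up for illustration.

```python
from datetime import datetime

# Hypothetical rows from the tab-delimited results file:
# (timestamp, anonymous ID, answers to q1, q2, q3)
ROWS = [
    ("2010-01-15 10:02:11", "A17", "a", "b", "c"),
    ("2010-01-15 10:03:40", "B22", "a", "c", "c"),
    ("2010-01-15 10:55:02", "A17", "a", "b", "b"),
    ("2010-01-15 10:56:30", "B22", "a", "b", "b"),
]
ANSWER_KEY = ("a", "b", "b")                       # assumed correct answers
SESSION_MIDPOINT = datetime(2010, 1, 15, 10, 30)   # splits pre from post

def percent_correct(rows):
    """Percent correct per question, using this group's own headcount
    as the denominator -- so stragglers still yield sane percentages."""
    n = len(rows)
    return [100.0 * sum(r[2 + i] == key for r in rows) / n
            for i, key in enumerate(ANSWER_KEY)]

pre = [r for r in ROWS
       if datetime.strptime(r[0], "%Y-%m-%d %H:%M:%S") < SESSION_MIDPOINT]
post = [r for r in ROWS
        if datetime.strptime(r[0], "%Y-%m-%d %H:%M:%S") >= SESSION_MIDPOINT]

print("pre: ", percent_correct(pre))    # [100.0, 50.0, 0.0]
print("post:", percent_correct(post))   # [100.0, 100.0, 100.0]
```

Because each group is divided by its own headcount, a pre-test of 17 students and a post-test of 19 still produce comparable percentages, which is exactly the point of step 4.1.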

Lessons Learned:

  • We’re definitely getting more assessment done in less time, and with less fuss.  So that part is a complete success.
  • Students will not remember your name (or even their professor’s name, or what class it is).  Write it on the board.
  • We’ll be adding more options for unique (non-identifiable) numbers, since many students do not know their VSU ID numbers.
  • The results get skewed if you have more people take the post-test than the pre-test.  Students who take the pre-test will naturally do better on the post-test because they will know what questions will be asked.  Since stragglers will do more poorly in the post-test, you can actually see a decrease in correct answers.  So if you have stragglers, they need not take the post-test (esp. considering they did not get the same instruction experience as those who are on-time and took the pre-test).
  • Be aware of the language that you use in the instruction session, and how that will affect the results.  For example, the truncation question asks if truncating a term will search for words with the same 1) beginning, 2) middle or 3) end.  If you say in the session that truncation will search for words with the same root, you will get poorer results.

Please feel free to use this method, or tell me your own way of measuring your library instruction sessions!

The folks over at CommonCraft have done it again!  Check out Social Media in Plain English; what a great way to explain it!


Social Media in Plain English from leelefever on Vimeo.

Getting actual quantitative assessment of library instruction is something that most librarians hate to do–it often eats up our too-precious time with the students.  And yet, I find myself dissatisfied with the “how’d I do?” opinion polls that we’ve used in the past.

So as part of our annual goals here at MPOW, we’ve created an online form for students to fill out as a pre- and post-test.   The results write to a tab-delimited text file using ProcessForm 3.0.

By including a hidden date and timestamp, we’re able to separate classes as they are added to the text file, and then import them into a spreadsheet for analysis.  Couple this with the students’ institutional ID number, and we can compare pre- and post-test scores while keeping the students’ anonymity intact.
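The ID-based matching might look something like the following sketch.  The scores and IDs are invented for illustration; the key point is that only students who appear in both the pre- and post-test records get compared, so stragglers who skip one test don’t distort the before/after picture.

```python
# Hypothetical per-student scores keyed by anonymous ID.
pre_scores = {"A17": 4, "B22": 3, "C05": 5}   # C05 left before the post-test
post_scores = {"A17": 7, "B22": 6}

# Compare only students who took BOTH tests.
matched = pre_scores.keys() & post_scores.keys()
gains = {sid: post_scores[sid] - pre_scores[sid] for sid in matched}

for sid in sorted(gains):
    print(sid, "improved by", gains[sid])
```

Since the IDs never leave the library, the comparison stays anonymous: we learn how much the group improved without ever learning who scored what.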

With a little help (read: enforcement) from friendly professors, this test could be self-administered before and after the library instruction session to prevent eating into precious library instruction time.  Additionally, the test could be performed pre- and post-library instruction, and then again at the end of the semester.  Let’s see how much they really retain!

I welcome comments, criticism and suggestions! A big “thank you” to Andy and Sherrida for making this happen, and feel free to steal the code from the assessment form.