Assessment in One-Shot Instruction Sessions: Preliminary Findings
Introduction
One-shot library instruction sessions leave scant time to present information literacy concepts, let alone to assess student comprehension before and after the lesson. This semester I piloted an approach to incorporating assessment into one-shot sessions for gen ed writing classes. The lesson plan for these classes focused on an introduction to the library's resources and basic research skills.
My goals for incorporating assessment into these classes were to:
- determine the effectiveness of the library instruction session.
- make improvements to the lesson plan.
To gather pre-test data, I created a self-paced online tutorial in LibWizard (Springshare) and a seven-question quiz on the topics covered in the tutorial. The tutorial aligned with my basic lesson plan, incorporated brief videos I had created on these topics the previous semester, and included some hands-on practice.
In addition to providing pre-test assessment data, my goals for implementing a pre-session tutorial and quiz were to:
- distribute self-paced resources for students to re-use as needed.
- increase time in class for more advanced topics.
Methods
Once the tutorial and quiz were ready, I needed to distribute them to students. Both could be linked to directly from Springshare, but I wanted to host them on a platform the students were already familiar with: the learning management system (LMS). At West Chester University, the LMS is Desire2Learn (D2L). Hosting the pre-session module in D2L would also give professors more control, allowing them to see completion rates and to use the quiz as extra credit if they wished.
I worked with on-campus D2L specialists to learn how to create and work with the kind of module I wanted. After the specialists set up a library course shell, I created a module with the embedded tutorial and a D2L version of the quiz.
At this point I contacted one of the writing professors, and she was happy to work with me to pilot this assessment in her class. D2L made it easy for me to add the professor into the library course so that she could copy the pre-session module into her writing course. She instructed the students to go through the tutorial and take the quiz prior to the day of their library instruction session. She also added me to her course so that I could view the results, but another method would be for the professor to export and send the results (this is what the professor for my second pilot class did).
On the day of the session, I delivered my lesson plan. As a class we took the quiz again to obtain post-test data, this time using Socrative. One student remarked that it was interesting to see the distribution of participant responses for each question.
Results
The pre-session D2L module and post-session Socrative quiz were administered to two classes. Once I had the pre- and post-test data, I calculated response accuracy as a percentage for each response option. Some quiz questions allowed only one correct answer; for multi-select questions, I calculated accuracy separately for each response option.
Question 2: Which statements about topic mapping are true? (check all that apply)
☐ There is only one correct way to design a topic map.
☒ Topic mapping narrows down your focus for your assignment.
☒ Topic mapping develops keywords to use in the search bar.
☐ Topic mapping tells you what citation style to use.
| Response type | Conditions |
|---|---|
| Correct response | option is checked when it should be checked; option is unchecked when it should be unchecked |
| Incorrect response | option is checked when it should be unchecked; option is unchecked when it should be checked |
The ideal correct response accuracy is 100%; the ideal incorrect response accuracy is 0%. I calculated an improvement rate for each response type: the increase in correct response accuracy between pre- and post-test, and the decrease in incorrect response accuracy between pre- and post-test. Adding those two rates together gives total improvement rates of 21.37% for class 1 and 12.89% for class 2. Between the two classes, the improvement in student comprehension averaged 17.13%.
Response Accuracy to Information Literacy Quiz, Pre- and Post-Test (%)

| Class 1 | Pre-test | Post-test | Improvement |
|---|---|---|---|
| Correct response accuracy | 73.41 | 86.31 | 12.90 |
| Incorrect response accuracy | 18.38 | 9.91 | 8.47 |
| Total improvement | | | 21.37 |

| Class 2 | Pre-test | Post-test | Improvement |
|---|---|---|---|
| Correct response accuracy | 79.05 | 84.45 | 5.40 |
| Incorrect response accuracy | 17.44 | 9.95 | 7.49 |
| Total improvement | | | 12.89 |
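The improvement calculation above can be sketched in a few lines of Python. The figures are the pilot numbers from the table; the function itself is illustrative, not part of D2L or Socrative.

```python
# Sketch of the improvement calculation described above.
# Inputs are accuracy percentages tallied per class, pre and post.

def total_improvement(correct_pre, correct_post, incorrect_pre, incorrect_post):
    """Total improvement = gain in correct-response accuracy plus
    drop in incorrect-response accuracy, in percentage points."""
    correct_gain = correct_post - correct_pre        # e.g. 86.31 - 73.41
    incorrect_drop = incorrect_pre - incorrect_post  # e.g. 18.38 - 9.91
    return correct_gain + incorrect_drop

class_1 = total_improvement(73.41, 86.31, 18.38, 9.91)  # ~ 21.37
class_2 = total_improvement(79.05, 84.45, 17.44, 9.95)  # ~ 12.89
average = (class_1 + class_2) / 2                       # ~ 17.13
```

Note that this metric weights the correct-response gain and the incorrect-response drop equally; other weightings are possible.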
I also administered the Socrative post-test to three additional gen ed writing classes without the pre-session module.
Response Accuracy to Information Literacy Quiz, Post-Test (%)

| | Class 3 | Class 4 | Class 5 | Classes 3-5 average | Classes 1 & 2 average |
|---|---|---|---|---|---|
| Correct response accuracy | 87.32 | 87.02 | 83.85 | 86.06 | 85.38 |
| Incorrect response accuracy | 4.90 | 8.98 | 7.86 | 7.25 | 9.93 |
Discussion
The 17.13% average improvement from pre- to post-test in classes 1 and 2 is modest but promising. However, comparing classes 1 and 2 against classes 3-5, which did not receive the pre-session tutorial, suggests the tutorial may actually have hurt comprehension: classes 1 and 2 averaged a lower percentage of correct responses and a higher percentage of incorrect responses than classes 3-5. Several factors could explain this finding:
- Small sample size. The spread in results is wide (e.g. total improvement ranged from 12.89% to 21.37% across classes 1 and 2), and natural variation in class composition plays a greater role when the sample is this small.
- Discrepancy in lesson delivery. Did my in-class delivery change or leave out details in classes 1 and 2 under the assumption that the tutorial was effective?
- Ineffective tutorial. Was the tutorial confusing rather than helpful?
Clearly, more data are needed to determine whether administering this particular pre-session module is less effective than administering a post-test alone.
Calculating the percentages was a time-consuming, manual process. The data exported from D2L and Socrative did not match in format. In addition, Socrative counts a student who logged into the quiz but did not answer a question as an incorrect response. For example, if 20 students were logged in but only 18 responded to a question, Socrative would record two incorrect responses even if all 18 respondents answered correctly. I manually recalculated percentages based on the actual number of students responding, not the total number logged in but potentially dormant. Future iterations of this assessment should include a process to automate or streamline data collection and evaluation.
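As a sketch of the correction I applied by hand, the snippet below computes accuracy over actual respondents only. The data format is hypothetical (my own, not Socrative's export schema), with `None` standing in for a logged-in student who never answered.

```python
# Hypothetical sketch of the manual correction described above:
# accuracy is computed over students who actually answered, treating
# logged-in-but-silent students (None) as non-respondents rather than
# as incorrect. The response format is illustrative, not Socrative's.

def accuracy_among_respondents(responses, correct_answer):
    """Percent correct among students who answered; None marks a
    student who was logged in but never responded."""
    answered = [r for r in responses if r is not None]
    if not answered:
        return 0.0
    correct = sum(1 for r in answered if r == correct_answer)
    return 100 * correct / len(answered)

# 20 students logged in, 18 answered correctly, 2 stayed silent:
responses = ["B"] * 18 + [None, None]
print(accuracy_among_respondents(responses, "B"))  # 100.0, not 90.0
```

A script along these lines, fed by the D2L and Socrative exports once their formats are reconciled, could replace most of the manual recalculation.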
Conclusion
Having completed these pilot classes, I made some progress on my initial goals for assessment.
- Determine the effectiveness of the library instruction session. The assessment needs to be administered to a larger sample of classes to obtain more reliable averages and to determine whether the pre-session module is more helpful than harmful.
- Make improvements to the lesson plan. Analyzing the quiz results, both pre- and post-test, revealed topics the students found troublesome (particularly Boolean operators). I used this insight to adjust my lesson delivery, devoting more time to Boolean operators.
- Distribute self-paced resources for students to re-use as needed. Feedback from students is needed to determine the effectiveness of the tutorial. Analytics indicate students are not returning to the materials after completing the pre-session module.
- Increase time in class for more advanced topics. This goal was more idealistic than I realized. I still spent considerable class time demonstrating the tutorial topics, both for clarification and for students who had not completed the pre-session module. Future iterations of this lesson plan should include more flipped classroom activities to increase student engagement and comprehension.
With further data collection and a streamlined process, this type of pre- and post-test assessment could become a viable and effective way to inform iterative lesson planning for one-shot library instruction sessions.