A Framework of Guidance for Building Good Digital Collections
Initiatives Principle 4: A good initiative has an evaluation plan.
Whether the initiative is short-term or long-term, project managers should use an evaluation plan to identify and refine project goals, assess progress toward project goals, determine the quality of project results, measure the impact of the project, show accountability, and demonstrate the value of the project to funding agencies.
Evaluation can focus on the process, the outcome, or both. Process evaluation assesses a project’s operations—such as staffing and management, workflow, and procedures—and focuses on input measures. While output measures such as the number of items digitized can be useful, recent emphasis has been on outcome assessment, which is concerned with how people, collections, organizations, and systems have been affected by the project. The evaluation plan should emphasize the importance of an ongoing two-way dialogue with key stakeholder communities. Outcomes should be closely related to project goals and objectives and should be measurable.
Output measures for a digital collection building initiative may focus on the digital collection’s size, quality, and usage. Other dimensions of the project, such as the functionality and usability of the collection’s website and users’ experience with the collection and the service, are also output measures. The impact of a digital collection is the best indicator of a project’s value, but it is also the most challenging to measure, because it often involves factors that are hard to quantify and demands considerable input from users. Surveys, interviews, and transaction logs are well suited to measuring inputs and outputs, while focus groups, interviews, and case studies are better suited to outcome and impact assessment. It is often necessary to combine several research methods to obtain quality data on a project’s outcomes and impact.
Project managers should begin with clear evaluation objectives and have a plan for analyzing, reporting, and acting on evaluation results. Results can be used to improve an ongoing project or to initiate follow-up efforts. A good evaluation plan will also provide solid data for sustaining a project over time.
Information on developing and implementing evaluation plans:
- Institute of Museum and Library Services, Outcomes Based Evaluation website http://www.imls.gov/applicants/obe.shtm. The IMLS encourages outcomes-based evaluation for their funded projects; this site has a webliography and points to supporting resources.
- Carter McNamara, Basic Guide to Outcomes-Based Evaluation for Nonprofit Organizations with Very Limited Resources (1997-2007) http://www.mapnp.org/library/evaluatn/outcomes.htm.
- Building Better Websites: Evaluative Techniques for Library and Museum Websites website http://www.lib.utexas.edu/dlp/imls/index.html. Developed by the University of Texas with an IMLS grant.
- Usability.gov website http://usability.gov/. Includes information on how to plan for usability testing, conduct usability tests, and analyze test results.
- Thomas C. Reeves, Xornam Apedoe, and Young Woo, Evaluating Digital Libraries: A User-Friendly Guide (2003) http://eduimpact.comm.nsdl.org/evalworkshop/UserGuideOct20.doc. A very useful project evaluation guide.
Case studies and examples:
- Joanne Evans, Andrew O’Dwyer, and Stephan Schneider, Usability Evaluation in the Context of Digital Video Archives (2002) http://www.sztaki.hu/conferences/deval/presentations/schneider.ppt.
- Formative Evaluation of 5/99: The EDNER Project (2002) http://www.cerlim.ac.uk/edner/dissem/brophy-nott-2002.ppt. Provides a framework for designing evaluation projects, with helpful illustrations.
- Michael Mabe, DL Classification & Evaluation: A Publisher's View of the Work of the DELOS Evaluation Forum (2002) http://www.sztaki.hu/conferences/deval/presentations/mabe.ppt. Digital collection managers may appreciate a publisher’s perspective on the evaluation of digital libraries and resources.
Last updated: 04/17/2008