The goal of Project Outcome is to help public libraries understand and share the true impact of essential library services. As librarians, we see every day that library services put people on the path to literacy, technological know-how, or a better job – what we are often missing is the data to support it. Project Outcome provides simple tools and a supportive online community of library leaders to help libraries collect insights and data about all the ways we are meeting the needs of our communities. This understanding can improve the way libraries do business – from allocating existing resources more efficiently to advocating for new resources more effectively. Project Outcome is a free service provided by the Public Library Association with support from the Bill & Melinda Gates Foundation.
Project Outcome provides simple surveys that public library staff can use to help them measure the outcomes of their library services. Any library that chooses to participate in Project Outcome will have access to patron surveys developed by a team of library leaders across the country, plus additional resources to help library staff administer the surveys and use the results for advocacy and strategic decision-making. In return, participants will report their data using a simple online Survey Portal and be able to see their results in a visually interactive Data Dashboard. PLA is dedicated to Project Outcome and will continue to manage and support the project's work beyond the terms of the initial grant, adding it to the long list of other successful PLA services, such as Every Child Ready to Read and Turning the Page.
Public libraries commonly measure services in terms of “how much we do,” focusing on the volume of outputs like the number of activities or services offered. Project Outcome seeks to capture the benefits (or outcomes) of library services in terms of “what good we do,” like changes in patrons’ knowledge, skills, attitudes, behavior, or status. For example, while output measurement may tell us how many library patrons attended a resume-writing workshop, outcome measurement can tell us whether those patrons feel more confident, informed, and equipped to seek a new job.
Measuring and understanding outcomes is important for a number of reasons. At the most basic level, data about the impact of your library services provides insights you can use to 1) demonstrate the value of those services, 2) make plans to improve them, and 3) decide how to allocate limited resources. The same data can inform long-term planning about the services your library will offer in the future. Evidence of impact can also play a critical role in strengthening library advocacy, providing library leaders with a compelling case for increased library funding and supportive policies.
Project Outcome’s approach is to provide the simplest possible format, both to engage libraries in the process and to ensure that feedback is captured consistently. Pilot testing showed that the shorter and more concise the survey, the more patrons responded. However, survey implementation is up to the library. As long as the questions remain consistent, interviews, paper surveys, online links, or group discussions may be used to gather your outcome data. In addition to the Immediate Surveys, the Task Force developed Follow-Up Surveys in spring 2016 to give libraries additional ways to gather data on patron impact.
Project Outcome is designed to complement existing library assessment programs like Edge, Impact Survey, and individual library needs assessments.
The Project Outcome Surveys cover seven essential library service areas: 1) Civic/Community Engagement, 2) Digital Learning, 3) Early Childhood Literacy, 4) Economic Development, 5) Education and Lifelong Learning, 6) Job Skills, and 7) Summer Reading. There are two types of surveys to choose from: Immediate and Follow-Up. The Immediate Surveys use Likert-scale responses to measure immediate benefits gained from a program or service. The Follow-Up Surveys use Yes/No/Please explain responses to measure whether or not patrons continued to benefit or change their behavior as a result of a program or service. All Project Outcome Surveys are designed to help libraries capture their impact on the community.
No. You can pick and choose which service areas are priorities for your library, or start with one and add more later. If your library has a strategic plan or you have identified objectives you would like to achieve, the service area(s) you choose to survey should align with those plans. For example, you may choose to administer a survey on your library’s new job skills program to see how
it performs compared to more established services, or focus on the early childhood literacy programs that you have offered for years to see if they are still meeting community needs. For more information, visit pages 21-23 of your Workbook: Choosing the Right Survey.
Once the survey period is complete and responses are collected, the designated library staff will enter all of the responses into the Survey Portal under the particular program and survey period and hit “close survey.” Once the survey is closed, a summary report is generated and made available on your My Impact page of the Survey Portal. Visually interactive survey data results will also be available in your Data Dashboard in filtered and aggregated formats. Individual library data submitted will not be made public; rather, Project Outcome will aggregate and analyze the data to demonstrate the total impact of services and programs nationally.
The surveys are currently available in English and Spanish. PLA and the Task Force are working on assessing the field's demand for other language translations.
Project Outcome offers library groups two different options for survey management and coordination: Group Accounts and Merged Systems. If you want to register multiple libraries as a group, you must first contact us and let us know which libraries are to be grouped together. To learn more about grouping your libraries or to request a group account, contact email@example.com.
All formatted surveys (digital and paper format) live behind the Survey Portal. This ensures that public libraries understand the planning and scheduling steps needed before surveying, since data must be entered back into the system.
Project Outcome is designed to collect insights and data about the ways libraries are meeting the needs of our communities without rigorous data collection requirements. Regardless of the number of surveys completed, your goal should always be to achieve high response rates (based on total attendance). The more data you collect, the better; however, you can report any of your findings as
long as you are transparent about the number of responses received and attendance totals. Incentives were not offered in the pilot testing stages of the Project Outcome Surveys and libraries still received a high response rate. The response rate typically drops when conducting Follow-Up Surveys, which libraries should be transparent about. If incentives are given to patrons, make sure your library includes that when reporting the data results.
Yes, survey responses can be identified by branch location. If using a web survey, when a patron clicks on your library's survey link, they will be taken to your library's branch selection page. When a staff member enters a paper survey, the first thing they do will be to select the library branch. You can see a preview of the branch selection page under the Survey Portal "Organizational Profile" at the top of the "Library Outlets" page. You will also see survey responses organized by branch within your .CSV Library Dataset Report.
The survey questions have been designed and tested specifically to be reliable measures of perceived impact. The goal is to ensure that Project Outcome can collect consistent baseline data from all participating libraries. Future surveys may be more advanced and include other question options. If you would like to add more questions to the surveys to collect other specific feedback, please feel free to do so but any additional responses will not be collected or aggregated in the Project Outcome system. We also recommend any additions be limited to 1-2 questions since the surveys were designed to be quick and easy for patrons to complete.
It is up to the library to determine survey periods. Depending on how you want to report your data later, you may want to try a few techniques until you find what works best for you and your staff. A few examples of how you may want to make that decision: 1) If you want to use the reports for a monthly board meeting to give updates on Storytimes generally, you may want to measure several Storytime programs each month. 2) If you are trying to measure Toddler & Two Storytime for a grant application, you may want to measure just that program a few times during the appropriate period leading up to the application due date.
Libraries should emphasize the importance of measuring outcomes to patrons. If patrons have answered surveys in the past about technology use or collections, explain how Project Outcome surveys measure the actual benefits (outcomes) of services and programs, rather than numbers, and how their feedback will help improve your impact on the community. In addition, your outcome measurement plan should never survey the same person twice within a single survey period. Make sure your Data Collection Team knows when and whom they are surveying so as not to fatigue patrons. (Example: deciding that the last class of a series has the highest attendance and surveying only on that day.)
Libraries must manually enter all survey responses individually. The Survey Portal assigns a unique ID to each survey response so that libraries can see each outcome response tied to the corresponding open-ended comments. This retains response individuality and confirms that data responses are entered and aggregated correctly.
First and foremost, the best way to ensure that surveys are completed accurately is to give clear instructions to patrons about how to fill out the surveys. Regardless, with any user-reported responses you may not always have 100% complete or perfect answers. If you and your team suspect a patron misunderstood the Likert scale or wrote their scores incorrectly, you can either enter the data as it was answered or discard the survey entirely and exclude it from the total response rate. Do not try to edit or interpret the answers. If the library has entered the data incorrectly, contact firstname.lastname@example.org for assistance.
The Immediate Survey will serve as a nice baseline of data if you are following up with patrons. However, Immediate Surveys are not required before doing Follow-Up. We recommend following up with patrons 4-8 weeks after a program or service is completed, but this may vary depending on the program or service you are measuring.
Project Outcome is available for free for all U.S. and Canadian public and state libraries! If you do not fit in this category, you may still register for free to learn more about outcome measurement and access our online resources. The survey and data analysis tools will not be available for non-public library users. Your library does not need a strategic plan to participate. However, understanding your library’s strategic goals will help you select which service areas to assess and decide how best to apply your outcomes data, such as targeting advocacy efforts or improving library services. Project Outcome has developed resources and worksheets to help you align your survey measurement with your library's goals and community needs.
We recognize that library staff have a long list of responsibilities, so Project Outcome is designed to align with the work you are already doing and fit easily into your schedule. You can schedule surveys for any duration of time based on your library’s needs, and collecting your own data means you can see and start to use the findings immediately.
We know that every public library is different, and we are committed to making Project Outcome accessible to all interested library staff, whether you run a large urban library system or a one-person library. Libraries can choose their participation based on time and staff capacity. The surveys and tools were designed to be easy enough for libraries of all sizes to participate and benefit from the outcomes collected.
Project Outcome allows multiple users to register under a single library account; all a user needs is a unique email address to create their account. However, we recommend that libraries make sure to delegate and manage who is accessing the Project Outcome Survey Portal to avoid data inaccuracy.
The Summer Reading Surveys were split based on library feedback received from the 2015 Summer Reading Survey, which told the Task Force that Summer Reading programs vary in outcome goals and audience participation. As a result, the questions were redesigned and pilot-tested in early 2016. The new surveys target three different audiences: 1) Teens/Children participating in the program, 2) Caregivers completing the survey on behalf of their participating child, and 3) Adults participating in the program. This redesign gives libraries the option to collect various outcomes and datasets from their Summer Reading programs.
Libraries can use all three surveys but must be able to closely monitor their responses. We recommend choosing one survey if the library is unable to target or differentiate the person completing the survey. The Summer Reading Caregivers Survey should be the default survey distributed. Keep in mind, this does not capture adult program participants. Adult program participation should be collected separately and the Summer Reading Adult Survey should be distributed.
If program participation is segmented by age groups, then you can use a combination of the three surveys. Teens and children participating can complete the Summer Reading Teen/Children paper survey when they pick up prizes, attend final events, etc. If your library does not survey teens or children, then parents or caregivers can complete on behalf of their child participant while they
pick up their prizes, etc. Adult participants can fill out the Summer Reading Adult paper survey or be sent the digital survey link upon completion of the program.
The three Summer Reading Surveys should be considered different datasets. The outcomes, while similarly aligned, are not asking the same questions across caregivers and teens/children completing the survey. The different surveys offer different learning opportunities and perspectives for the library and should be kept separate from one another. For example, an adult reporting that they read more often should not be aggregated with a child reporting that they read more often; these represent different outcomes in practice.
If libraries want to measure checkpoints throughout their Summer Reading programs, then they should either schedule a separate survey for each “checkpoint” or schedule a single survey with multiple program sessions representing each “checkpoint.” For example, a library schedules one Summer Reading Teen/Child Survey to run June 10 – September 1, with 3 program sessions scheduled under that survey – “June 2016,” “July 2016,” and “August 2016.” The responses would then be distinguishable by the
“checkpoint” time of collection.
We recommend not using the Summer Reading Surveys to capture outcomes of summertime educational programs. For these events or classes, the Education/Lifelong Learning or Early Childhood Literacy Surveys should be used. The Summer Reading surveys are designed to capture Summer Reading program outcomes as a whole (not one-time events within the program).
Libraries should track total attendance outside of the Project Outcome system. However, when entering survey response data, libraries will be prompted to enter attendance. Libraries are welcome to interpret their attendance rates based on how they want their survey response rate to appear in the Survey Portal reports, but attendance should be considered actual participation, not how many people originally signed up for the program. If a library measures outcomes at several checkpoints throughout the summer, then the attendance should be different at each checkpoint, since participation varies throughout the summer.
The higher the attendance number entered, the lower the response rate will appear on your Survey Portal reports. The ideal attendance number to enter is the number of people who actually received the survey. However, we know libraries might not be able to closely monitor this number; in that case, enter actual participation as the attendance count.
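To make the arithmetic behind this concrete, here is a minimal illustrative sketch (not a Project Outcome feature; the Survey Portal calculates this for you). It simply shows how the attendance count you enter affects the response rate that appears in your reports:

```python
def response_rate(responses_collected: int, attendance: int) -> float:
    """Return the survey response rate as a percentage of attendance."""
    if attendance <= 0:
        raise ValueError("attendance must be a positive count")
    return 100.0 * responses_collected / attendance

# The same number of completed surveys yields a lower response rate
# when a larger attendance count is entered:
print(response_rate(40, 50))   # 40 responses, 50 attendees -> 80.0
print(response_rate(40, 100))  # 40 responses, 100 attendees -> 40.0
```

This is why entering the number of people who actually received the survey, rather than total sign-ups, gives the most accurate picture of how well your data collection performed.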