Implementing Your Program With Fidelity

Administer for Success

How can you ensure that all your intentional program planning has the intended impact? Program leaders must also plan to implement their activities to the highest quality standards and with fidelity. Use this Click & Go to discover which strategies you will need to employ to achieve the outcomes you predict for your activities.

Objectives

To enable participants to:

  • Understand the key components of a quality program.

  • Develop their own quality and fidelity measures.

  • Utilize goals to guide implementation and measure outcomes.

Zip Link (41.8 MB): Select Zip Link to download the resources in this Click & Go!

 
 

Mini Lesson: Administer For Success

This mini-lesson will help participants understand how to intentionally implement their program activities by measuring basic components of fidelity and utilizing tools to capture and demonstrate student learning.

Creating and Using SMART Goals

During this 10-minute podcast, you will discover how to write SMART goals for your program and activities. Goals that are Specific, Measurable, Achievable, Relevant and Time-bound will help you implement your program intentionally and with fidelity. [Download Transcript and PowerPoint]

Recording Program Outcomes

This 10-minute podcast reviews the main objectives of 21st CCLC program evaluations, presents tips for working with your program's evaluation team, and gives recommendations for making the most of program evaluation data to inform planning. By using a few of the data collection points recommended in this podcast, you will be better able to reflect on your program, document the effectiveness of programming and measure the outcomes you intend. [Download Transcript]

Here are several tools to help leaders implement program strategies. Note: Each of these resources is customizable to fit the needs of your program.


Measurement Tools for Evaluating Out-of-School Time Programs: Table 2.7 Program Quality/Program Environment

Description: Provided by the Harvard Family Research Project, this table describes different instruments you can use to evaluate the quality and overall environment of your program. LINK

Classroom Time Analysis Tool (CTAT)

Time and Learning provides several tools that can help your staff develop and use checklists and rubrics to measure fidelity of implementation. LINK

Buck Institute for Education — Rubrics

Here you will find search results for many different rubrics provided by the Buck Institute for Education. Each result explains what the rubric is, why it can be useful, and how you can use it. LINK

Edutopia — Student Portfolios

Here you will find search results for many different student portfolio resources provided by Edutopia, including articles, blogs, videos and discussions. See what people have to say about student portfolios, and use the ideas yourself! LINK

Program directors who use data to inform program planning often make better decisions about what to change and how to make those changes systemic and sustainable. Programs committed to improvement must analyze existing data, and collect and analyze additional data, to understand program status in these areas:

  • How well the current program is meeting the proposed program goals;
  • How the program and community are changing;
  • Root causes of problems or issues;
  • The current and future needs of the program, students, staff, parents and the community;
  • The impact of efforts, processes and progress; and
  • Areas in need of improvement.

Of all the factors that affect student learning, program processes are the only ones that program directors, site coordinators and staff control directly. We do not control where students come from, their home life or why they think as they do. But we do have control over how we interact with students and families; how programs are implemented; and how we respond to changes, challenges and successes.

If an activity is primarily teacher directed, you may see students doing these things:

  • Paying attention (tracking the teacher or teaching prop with their eyes);
  • Taking notes;
  • Listening as opposed to chatting;
  • Asking questions;
  • Responding to questions; and/or
  • Reacting with laughs, head shaking and audible words.

If an activity is primarily student directed, you may see students doing these things:

  • Working independently or in small groups;
  • Performing/presenting;
  • Inquiring;
  • Exploring;
  • Explaining;
  • Experimenting; and/or
  • Moving and talking to each other.

Two strategies can help make observations consistent and meaningful.

a. Be sure that everyone who will observe and rate activities has been trained to use your observation tool and protocols. It is important that each observer understands your program implementation plan and knows what components they should see in the classroom. It is also important that you discuss and agree on what merits a top rating and what merits a low rating. Require that raters document specific examples to justify a high or low rating.

b. Rather than trying to score activity components while in the classroom, the observer should simply write down everything they hear and see (whether good or bad). They should then use their notes to determine the extent to which components were implemented with fidelity to the plan, and provide supporting evidence from those notes.

For example, the observer notes that every student is actively engaged in the activity, but the teacher is not moving around the room to check in with students.

The observer might determine that students are developing a sense of competence, as they appear to feel confident about working independently; this is a positive indicator. Yet, delivery might score lower because the teacher is not taking the opportunity to use guiding questions to move students to even higher levels of thinking, which is part of your fidelity plan.

Adhering to the design as the creator planned it is critical to achieving the results demonstrated during the research process. If you change the intended implementation plan, you can no longer estimate what the effects on students will be. Curriculum designers have “practiced” with components such as dosage and delivery, and they provide recommendations based on that experience. For these reasons, it is important to design your implementation plan for a research-based approach around what the designers of that approach recommend.

There is no one specific tool that fits all programs. There are some Y4Y tools posted under “Tools” on the Click and Go 3 page that can be customized for your program needs. Also, several External Resource links on the Click and Go 3 page provide additional guidance around fidelity. No predesigned tool will capture the unique nature of your program, so whether you are designing your own tool or using a predeveloped one, be sure to add measurement criteria for the specific components in your program.

Each tool, observation checklist, student portfolio and rubric provides a different perspective on the various aspects of your program. Think about what you want to learn, whom you want to focus on (students or staff), and how much time you have for scheduling, using the tool and following up. Different measures provide different information; therefore, multiple measures produce a more comprehensive overview of your program. Below you will find some pros and cons of each tool.

Observation Checklists (snapshot measure):

Pro: Provides the best opportunity to “see” what is happening in a program and to watch, listen to and get a feel for the interactions between facilitator(s) and students.

Con: Time-consuming; may need to be conducted over multiple sessions to get the most comprehensive picture of what is happening.

Student Portfolio (measure over time):

Pro: Provides comprehensive documentation of an individual student’s progress over time.

Con: Can be inconsistent, and it takes time for the facilitator to plan and organize. Often reflects only finished products and represents a student’s “best” work.

Rubric (inform process):

Pro: Clearly defines the benchmarks or skills that you want to measure with an explanation of what is expected in order to reach those benchmarks.

Con: Rubrics take time to develop, and everyone needs to agree to use them with fidelity for them to be effective.