There are many ways to evaluate. To select an appropriate evaluation strategy, keep in mind stakeholder perspectives and match the approach to the stage of program development.
Strategy: Keep the Perspective of Stakeholders in Mind
Whatever approach you use, keep in mind the perspectives and expectations of the stakeholders who will be using the results, whether these are funders, program managers, direct service workers, participants, or researchers. Depending on the audience, the primary purpose could be knowledge development (research), ongoing improvement for the program and its individual participants, or demonstrating positive results to funders. Narrow down the options by consulting key stakeholders, and make sure to incorporate the perspectives of program participants.
Strategy: Match the Approach to the Stage of Program Development
The choice of approach to evaluating program performance should fit the stage of your community's program. In the beginning stages, the focus is generally on implementation and fidelity evaluation: determining whether and to what degree the critical ingredients of the program are actually put in place, whether the community's implementation strategy is working as planned, and whether barriers to implementation need to be addressed.
When the Housing First program is more solidly in place, the focus can shift to examining outcomes. Because Housing First is an evidence-based program with well-established expected outcomes, an outcome evaluation is a natural fit at this stage.
In Housing First, an outcome evaluation would look at housing stability, service usage, quality of life, and community integration, using recognized quantitative measures, as well as qualitative information.
There may also be a need to understand, qualitatively, more about how the program achieves (or struggles with) certain outcomes, and which components are critical and which need to be adapted. This may be especially important when Housing First is introduced into a new context (e.g., for youth), or when novel elements (such as supported employment) are added. This is known as theory of change evaluation, because it seeks to explain why the program's processes lead, or fail to lead, to the expected outcomes.
Another evaluation challenge is that case managers and clinicians may see evaluation as getting in the way of their work. For example, they may see the measures as burdensome to administer, and not relevant to helping meet the needs of participants.
Strategy: Make Measures Clinically Relevant and Feasible to Collect
In order to get buy-in from practitioners, make sure that:
- the measures chosen are clinically relevant; and
- the team members receive regular feedback about how their participants are doing with respect to important outcomes (housing stability, quality of life, community integration, and other recovery-oriented outcomes).
This will allow the team to understand what is working well, pinpoint common problems, and identify specific individuals whose needs aren't being met. It will help clinicians adjust their practice, and help the team as a whole consider new strategies for addressing challenging systemic issues. From a feasibility standpoint, it may be possible to dovetail evaluation data collection with regular clinical progress reporting, so that practitioners do not face an additional task.
Most often, when we talk about evaluation, we refer to the level of the program. Assessing whether a program is working well can, however, deflect attention from the bigger picture: how well the program is meeting the needs of the wider community, and how well developed its partnerships are with other agencies in the wider mental health and housing service system.
Strategy: Consider the System Level
In addition to the performance of an individual program, the system level should also be considered. System-level evaluation looks at issues such as how accessible the program is, whether programs target the right participants, and how well Housing First programs are coordinated with agencies providing referrals or complementary resources. System evaluation can also examine the adequacy of resources and accountability structures. Some of these system-level measures are being developed by the Homelessness Partnering Strategy and will help CEs monitor quality at the system level.
One common problem is that programs can become overwhelmed by the sheer amount of data being collected. In their attempt to be rigorous, programs may end up developing a "laundry list" of scales and measures with no clear purpose. As mentioned, this can feel burdensome to practitioners and undermine buy-in. Data overload can also consume significant administrative time and cost, and make it harder to decide which data are most relevant.
Strategy: Use the Logic Model of the Program to Guide Data Collection
While it is important to find rigorous measures, it is also important to develop a manageable, relevant list of measures. The program's logic model is the guide that helps direct attention to measuring the outcomes that are valued by stakeholders. It also helps select measures that are achievable, in light of the program's "theory of action" and its stage of implementation. Because the logic model also specifies the critical ingredients of the program, it provides a guide for focusing an implementation and fidelity evaluation.
Evaluation can be done internally or by an external evaluator. The choice depends, in part, on the purpose of the evaluation. When making the case to funders that the program is successful, it makes sense to hire an external evaluator, someone at "arm's length" from the program. The Homelessness Partnering Strategy (HPS) asks communities to undergo a self-assessment at baseline and periodically thereafter. It is also advisable to develop the internal capacity to measure program fidelity and to do an implementation evaluation, in both the beginning and later stages of implementation; over time, the community can gradually build this capacity. Housing First programs funded by HPS will be asked to develop a performance management database that tracks outcomes at the program and system levels.