You’ve implemented a risk/needs assessment. Your team is using the data to inform decision-making. Now what?
To maintain the health of your agency’s evidence-based practice and find out how effective your efforts are at reducing recidivism, your assessments need assessments of their own. As soon as you have enough data, your agency should launch a validation study, and a few years after that, an outcomes study.
Let’s take a closer look at both.
Validation Study
Part of what makes validated risk assessments so accurate is that they’re constantly being re-validated with different populations in different communities in different parts of the country. While it’s important for you to implement an assessment that has already been validated elsewhere, it’s even more important for you to validate the instrument on your population in your community.
Validation studies answer the simple question: Is the assessment predicting risk accurately? Validation is critical because you have to know you can trust what the tool is telling you.
Once you have two to three years’ worth of data, an independent peer-review team can pull an appropriate sample and complete a validation study to determine whether the assessment is accurately predicting risk within your specific population. The study will also tell you what adjustments, if any, can be made to increase accuracy. As populations change, those adjustments may be crucial to maximizing predictive accuracy.
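To make the idea of “predicting risk accurately” concrete, one common statistic researchers report in validation studies is the AUC: the probability that a randomly chosen person who re-offended scored higher on the tool than a randomly chosen person who did not. The sketch below is purely illustrative, with invented scores and outcomes, and is not a substitute for an actual peer-reviewed study:

```python
# Hypothetical sketch: does a higher risk score actually correspond to a
# higher observed rate of re-offense? AUC is the share of recidivist vs.
# non-recidivist pairs where the recidivist scored higher (ties count half).
# The scores and outcomes below are invented for illustration.

def auc(scores, outcomes):
    """Concordance (AUC) between risk scores and binary outcomes (1 = re-offended)."""
    pos = [s for s, o in zip(scores, outcomes) if o == 1]
    neg = [s for s, o in zip(scores, outcomes) if o == 0]
    if not pos or not neg:
        raise ValueError("need both recidivists and non-recidivists in the sample")
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))

# Illustrative sample: each person's risk score and whether they re-offended
scores   = [8, 7, 6, 6, 5, 4, 3, 2, 2, 1]
outcomes = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
print(round(auc(scores, outcomes), 2))  # 0.85
```

An AUC of 0.5 means the tool predicts no better than chance; values meaningfully above that suggest the scores are tracking real risk in your population. A real validation study uses far larger samples and more sophisticated methods, but this is the core question being asked of the data.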
Outcomes Study
Validation studies focus specifically on the predictive capability of an instrument, whereas outcomes studies add the element of programming: Was the programming effective in prompting behavior change? As the name suggests, outcomes studies look at the outcomes of your team’s programming decisions and whether those decisions were effective.
Essentially, an outcomes study helps you determine your effectiveness at reducing recidivism.
Outcomes studies take a much broader look at the data than validation studies, and the sample size needs to be even larger. The way your jurisdiction defines recidivism also matters: Agencies gauging recidivism as re-offense within 2 years versus re-offense within 5 years will need to time their studies differently. Most agencies wait a few more years after completing a validation study before launching an outcomes study.
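The point about how your jurisdiction defines recidivism can be seen with a small sketch: the same case records produce different recidivism rates under a two-year versus a five-year window, which is why the window drives study timing. The dates and cases below are invented for illustration only:

```python
# Hypothetical sketch: the recidivism "window" changes the measured rate.
# All dates and cases are invented for illustration.
from datetime import date
from typing import Optional

def recidivated(release: date, reoffense: Optional[date], window_years: int) -> bool:
    """True if a re-offense occurred within `window_years` of release."""
    if reoffense is None:
        return False
    # Simple year arithmetic; a real study would handle leap days, etc.
    cutoff = date(release.year + window_years, release.month, release.day)
    return reoffense <= cutoff

# (release date, re-offense date or None if no recorded re-offense)
cases = [
    (date(2018, 3, 1), date(2019, 6, 15)),  # re-offended within 2 years
    (date(2018, 5, 1), date(2021, 8, 1)),   # re-offended after 2, within 5
    (date(2018, 7, 1), None),               # no recorded re-offense
]

for years in (2, 5):
    rate = sum(recidivated(r, o, years) for r, o in cases) / len(cases)
    print(f"{years}-year recidivism rate: {rate:.0%}")
# 2-year recidivism rate: 33%
# 5-year recidivism rate: 67%
```

With a five-year definition, an agency has to wait longer after each cohort’s release before its outcomes can even be measured, which is one reason outcomes studies lag validation studies by several years.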
In order to evaluate your programming, you first have to know that the tool informing programming decisions is accurately predicting risk. If you haven’t validated your tool first, then you can’t isolate programming as a variable.
Don’t wait to start assessing your tools! Whether you’re ready to start validation testing now or you’re trying to plan for it in the next few years, we can help. Our business was built by leading criminal justice researchers, and we can help you locally validate your assessment tool and evaluate the effectiveness of your programming with an outcomes study.
Reach out to us today to get started.