Predicting Change: The Best-Fit Model

| September 1, 2013

In the new version of the MyOutcomes® database, the aggregate effect size for our entire client base is .08. This provides a strong indicator of the statistical power of the PCOMS outcome measure algorithms.

Later versions of MyOutcomes® do a much better job of modeling reality, or, put another way, of modeling the true clinical population. When trying to identify and measure any latent variable, two critical factors come into play. The first is sample size: the larger the sample, the more of the population's variability is captured, and therefore the greater the likelihood of extracting the variable. The second is the statistical model: what one wants is the best-fit model.

In MyOutcomes v11, both of these critical factors have improved, increasing the power of MyOutcomes® to predict change. The database now contains well over half a million measurements, and the statistical model fits this large data set better.

The old algorithms were a first pass based on 65,000 administrations. The new data set represents seven times that amount of data, drawn from a broader range of countries and clinical settings. On that basis alone, the new algorithms reflect the population more accurately. With 20/20 hindsight, the first pass didn't predict enough change, so it inflated outcomes; targets were reached far more easily than they should have been. For example, the old algorithms predicted that an intake score of 20 required a change of only 4.5 points to reach target. Compare that to the new algorithms, which predict that 8 points of change are needed for an intake of 20.

This doesn't mean that the old algorithms were bad. They just weren't as good at reflecting the true population as the new ones. You should still view your old saved data with the reference point of the old algorithms, but your new data should be viewed in the context of the new information provided by the much larger and more varied data set.
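To make the before-and-after comparison concrete, here is a minimal sketch, in Python, of how a target score follows from an intake score plus the predicted change. This is not the actual MyOutcomes® algorithm; the `predicted_change` values are simply the two illustrative figures quoted above for an intake score of 20.

```python
# Illustrative only: the real PCOMS algorithms model a full growth
# trajectory conditional on intake score. These two values are just
# the examples quoted in the post for an intake score of 20.
OLD_PREDICTED_CHANGE = {20: 4.5}   # old algorithms (65,000 administrations)
NEW_PREDICTED_CHANGE = {20: 8.0}   # new algorithms (500,000+ administrations)

def target_score(intake: float, predicted_change: dict) -> float:
    """Target ORS score = intake score + predicted change for that intake."""
    return intake + predicted_change[intake]

print(target_score(20, OLD_PREDICTED_CHANGE))  # 24.5 -- an easier bar to clear
print(target_score(20, NEW_PREDICTED_CHANGE))  # 28.0 -- a more realistic bar
```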

The new algorithms have passed extensive cross-validation analyses. Drawing on the development team's many years of clinical experience using the measures, as well as their familiarity with ORS data sets, they removed extreme scores, watched for suspicious response patterns, and otherwise ensured data integrity by starting with a clean data set. This was necessary because, as many of you who have implemented PCOMS know, there is a learning curve to proper use of the measures. Errors occur, and those errors end up in the data set. Because such errors can affect the algorithms' predictions, they had to be eliminated.
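As a rough illustration of this kind of screening, here is a minimal sketch that assumes a simple tabular export with hypothetical columns `client_id`, `session`, and `ors_total`. The specific rules (valid ORS totals of 0 to 40, dropping duplicates, flagging flat response patterns) are stand-ins for the team's actual criteria, which the post does not spell out.

```python
import pandas as pd

def clean_ors_data(df: pd.DataFrame) -> pd.DataFrame:
    """Screen raw ORS administrations before any modeling.

    Assumes columns: client_id, session, ors_total. The rules below are
    stand-ins for the development team's actual criteria.
    """
    df = df.copy()
    # 1. Remove impossible scores: the ORS total ranges from 0 to 40.
    df = df[df["ors_total"].between(0, 40)]
    # 2. Remove duplicate administrations (e.g., double data entry).
    df = df.drop_duplicates(subset=["client_id", "session"])
    # 3. Flag "patterned" responding: a client whose score never varies
    #    across many sessions may not be using the measure properly.
    variability = df.groupby("client_id")["ors_total"].transform("std")
    sessions = df.groupby("client_id")["session"].transform("count")
    df = df[~((sessions >= 5) & (variability == 0))]
    return df
```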

Once the data set was cleaned, and before any model development and testing, development team leader Barry Duncan and University of Kentucky professor and statistician Michael Toland discussed growth trajectory expectations from a clinical lens and what theory would predict. Based on prior research, the development team's a priori assumption was that the curve would describe a non-linear growth function for outcomes. All analyses tested whether a cubic model, conditional on intake score, would fit the data better than a quadratic model, also conditional on intake score. The cubic model was demonstrated to fit the data best.
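The actual analyses were growth models conditional on intake score, but the core quadratic-versus-cubic comparison can be sketched with a simple polynomial fit on made-up session means, using AIC to prefer the better-fitting model. Everything here, the toy data and the Gaussian-likelihood AIC, is an assumption for illustration only.

```python
import numpy as np

# Toy session-by-session mean ORS scores for one intake stratum
# (made-up numbers; the real analysis used growth models conditional
# on intake score, not a simple polynomial fit).
sessions = np.arange(1, 11)
scores = np.array([20.0, 23.1, 25.3, 26.6, 27.4,
                   27.9, 28.2, 28.4, 28.5, 28.5])

def aic_for_poly(degree: int) -> float:
    """AIC for an ordinary least-squares polynomial fit of given degree."""
    coeffs = np.polyfit(sessions, scores, degree)
    residuals = scores - np.polyval(coeffs, sessions)
    n, k = len(scores), degree + 1           # k = number of fitted parameters
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * k       # Gaussian-likelihood AIC

print(f"AIC quadratic: {aic_for_poly(2):.1f}")
print(f"AIC cubic:     {aic_for_poly(3):.1f}")
# Lower AIC wins; a cubic can capture early rapid gains that flatten later.
```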

Once the statistical model was developed, more in-depth testing continued. First, the development team double-checked the trajectories by computing descriptive statistics for each intake score, as well as the means across all sessions in the database. The team plotted those scores and created graphs to evaluate how they compared to the expected treatment response predicted by the algorithms. The algorithms passed this extensive testing process.
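A descriptive check of this kind might look like the following sketch, which groups observed scores by intake score and session and compares the means against a stand-in `predicted` callable. The column names and the `predicted` function are hypothetical; the real expected trajectories come from the MyOutcomes® algorithms.

```python
import pandas as pd

def trajectory_check(df: pd.DataFrame, predicted) -> pd.DataFrame:
    """Compare observed mean ORS scores against a predicted trajectory.

    Assumes columns: intake_score, session, ors_total. `predicted` is a
    stand-in callable (intake_score, session) -> expected ORS score.
    """
    observed = (
        df.groupby(["intake_score", "session"])["ors_total"]
          .mean()
          .reset_index(name="observed_mean")
    )
    observed["expected"] = [
        predicted(row.intake_score, row.session)
        for row in observed.itertuples()
    ]
    observed["gap"] = observed["observed_mean"] - observed["expected"]
    return observed
```

Plotting the `gap` column against zero for each intake score makes any systematic over- or under-prediction easy to spot.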

Next, they examined the data sets from the published randomized clinical trials (RCTs) of PCOMS (Anker, Duncan, & Sparks, 2009; Reese, Norsworthy, & Rowland, 2009; Reese, Toland, Slone, & Norsworthy, 2010) and measured how much change occurred in the feedback conditions to confirm that the algorithms were not predicting too much change. As predicted, the new algorithms were right on the mark. The average amount of change across the feedback conditions in all three RCTs was 10.1 points. Keep in mind that this figure includes all clients, those who changed and those who didn't. So algorithms predicting far less change not only fail to match the feedback RCTs, but also inflate outcomes and ultimately our sense of how effective we really are, which is exactly what PCOMS is designed to prevent.

The increased sensitivity of the PCOMS algorithms in detecting change, and their improved predictions of what change to expect, should translate into providers feeling even more confident that, with the help of MyOutcomes®, they are providing their clients with the best quality service.


Category: Feedback Informed Treatment
