Initiatives FAQ

These are the most frequently asked questions about Initiative Analysis:

What is Initiative Analysis?

Initiative Analysis is a tool for analyzing the effectiveness of student success initiatives at institutions.

What are “derived variables”?

Derived variables are new variables computed from data that already exists in your institution’s data set. An example of a derived variable is discrete days of any LMS activity relative to the section average: we collect LMS login data and then calculate the average for the section, which helps surface whether a student is performing above, at, or below the average of the other students in the same course section.
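As a rough illustration of this kind of calculation (the column names and data below are hypothetical, not the product’s actual schema or pipeline), a sketch in Python might look like:

    # A minimal sketch of computing a derived variable; the column names
    # and data are hypothetical, not the product's actual schema.
    import pandas as pd

    # One row per student per section, with a count of discrete LMS-active days.
    lms = pd.DataFrame({
        "section_id": ["A", "A", "A", "B", "B"],
        "student_id": [1, 2, 3, 4, 5],
        "active_days": [12, 7, 5, 20, 10],
    })

    # Derived variable: each student's active days relative to the section average.
    lms["section_avg"] = lms.groupby("section_id")["active_days"].transform("mean")
    lms["days_vs_section_avg"] = lms["active_days"] - lms["section_avg"]
    print(lms)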

How does Initiative Analysis measure initiative efficacy?

Initiative Analysis measures efficacy using prediction-based propensity score matching (PPSM), a statistically rigorous method that relies on an extensive set of predictive variables from the institution’s data.

What are the benefits of using Initiative Analysis?

Initiative Analysis reduces selection bias and measures efficacy accurately, allowing institutions to target initiatives to the students most likely to benefit and to inform the design of new initiatives. It also makes fuller use of the data you already collect, turning it into insights that inform action.

What kind of data can be uploaded to Initiative Analysis?

Initiative Analysis can analyze student data for initiatives that have been in place for at least one previous term and as far back as four years, provided the data exists. The data is submitted via CSV upload: you supply the engine with all participants (marked “1” in the second column) and all students who were eligible but did not participate (marked “0”); a sample file appears after the list below. Several confounding variables can add unwanted noise and alter your results. Consider a few examples:

  • Your participant and comparison groups may have systematic differences that affect persistence
  • Another initiative might overlap with the group in question
  • Your participant and comparison groups might be separated by too much time
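As a rough illustration only (the header names here are hypothetical; the documented requirement is simply a participation flag of “1” or “0” in the second column), an upload might look like:

    student_id,participated
    1000231,1
    1000232,1
    1000233,0
    1000234,0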

How does Initiative Analysis determine initiative impact?

Initiative Analysis matches eligible students who participated in an initiative with students who were eligible but did not participate, based on their persistence likelihoods and their propensity to participate in the initiative. The impact of the initiative is estimated by measuring the difference in persistence rates between the two groups, referred to as lift. Lift in persistence is calculated by comparing the predicted persistence rates of your participant group with those of the matched students in the comparison group. The summarized formula is as follows:

Lift = Participant rate – Comparison rate
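For example, using hypothetical numbers: if 62% of matched participants persisted and 57% of matched comparison students persisted, the lift would be 62% – 57% = 5 percentage points.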

What kind of insights can I gather?

Initiative Analysis provides a breakdown of initiative effectiveness by term and by student group, including the lift in persistence rates for participants and details for each analyzed initiative. It also allows users to review matching details and reference the results of all analyzed initiatives submitted by any user at the institution. Groups can reflect aspects such as academic level, completed terms, ethnicity or demographic group, full-time or part-time status, gender, prediction percentile, whether students are enrolled all online or all in person, whether they are majoring in a STEM discipline, and more.

It’s important to note that the matching process does not match students based on the groups displayed on the page. The number of analyzed participants shown for each student group indicates how many students were successfully matched for analysis with any comparison student, not necessarily with another student within that particular group. Performing such matching within each group at the student level would result in a lower match rate and a longer analysis time.

How does Initiative Analysis create persistence and propensity models?

Details on the process behind the model can be found here: https://training.civitaslearning.com/hub/initiatives-ppsm/ 

Initiative Analysis identifies the variables most predictive of student persistence at the institution and creates a persistence model. These variables are tuned to each specific institution. For each initiative, a propensity model is built by assessing a student’s similarity to initiative participants.

The central idea of PPSM (prediction-based propensity score matching) is to identify the top covariates that together maximize predictive accuracy, and then use them to build both the predictive and the propensity-score models. Conceptually, matching students on propensity scores built from the top covariates for success ensures that pilot and control students have equal probabilities of participating in the treatment. In short, PPSM takes PSM (propensity score matching) one step further by adding a second dimension, the conditional probability of student success given the covariates, and by bringing mathematical rigor to the selection of covariates used in matching.

Conceptually, this two-dimensional matching ensures that the matched pair of students has equal probabilities of success and intervention participation. The only difference is that one student is exposed to treatment while the other is not. 
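As a minimal conceptual sketch of two-dimensional matching (using synthetic data and illustrative model choices, not Civitas Learning’s actual implementation):

    # A conceptual sketch of PPSM-style two-dimensional matching on
    # synthetic data; NOT Civitas Learning's implementation.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic covariates, participation flags, and persistence outcomes.
    n = 2000
    X = rng.normal(size=(n, 5))           # "top covariates" (illustrative)
    treated = rng.integers(0, 2, size=n)  # 1 = participant, 0 = eligible non-participant
    persisted = (X[:, 0] + 0.5 * treated + rng.normal(size=n) > 0).astype(int)

    # Dimension 1: propensity to participate, P(participation | covariates).
    propensity = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
    # Dimension 2: likelihood of success, P(persistence | covariates).
    persistence = LogisticRegression().fit(X, persisted).predict_proba(X)[:, 1]

    # Greedy nearest-neighbor matching on the two scores (with replacement,
    # for simplicity), so matched pairs have similar probabilities of both
    # success and participation.
    scores = np.column_stack([propensity, persistence])
    t_idx = np.flatnonzero(treated == 1)
    c_idx = np.flatnonzero(treated == 0)
    matches = [(i, c_idx[np.argmin(np.linalg.norm(scores[c_idx] - scores[i], axis=1))])
               for i in t_idx]

    # Lift = participant persistence rate minus matched-comparison rate.
    p_rate = persisted[[i for i, _ in matches]].mean()
    c_rate = persisted[[j for _, j in matches]].mean()
    print(f"Estimated lift: {p_rate - c_rate:+.3f}")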

How far back can I run an initiative analysis?

Four years. Because Initiative Analysis relies on prediction-based propensity score matching (PPSM), the further back in time we go, the less accurate the model becomes. Programs and student populations usually change considerably over time, so more recent analyses are more relevant and actionable.

Why aren’t my results statistically significant?

Likely one of three issues. An initiative may not have statistically significant impact results if:

  • the number of matched participants and comparison students was low (fewer than 1,000 students)
  • there was a lot of variability across bootstrapped samples (see the sketch after this list)
  • the persistence lift was very small
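To make the second point concrete, here is a minimal sketch of a bootstrap check on lift, using hypothetical matched-pair outcomes (not Civitas Learning’s actual significance test):

    # A minimal bootstrap sketch on hypothetical matched outcomes;
    # NOT Civitas Learning's actual significance test.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical persistence outcomes (1 = persisted) for 800 matched pairs.
    participant = rng.binomial(1, 0.62, size=800)
    comparison = rng.binomial(1, 0.57, size=800)

    # Resample pairs with replacement and recompute the lift each time.
    lifts = []
    for _ in range(5000):
        idx = rng.integers(0, len(participant), size=len(participant))
        lifts.append(participant[idx].mean() - comparison[idx].mean())

    lo, hi = np.percentile(lifts, [2.5, 97.5])
    lift = participant.mean() - comparison.mean()
    print(f"Lift: {lift:+.3f}, 95% CI: [{lo:+.3f}, {hi:+.3f}]")
    # A confidence interval that includes zero (high variability or a very
    # small lift) means the result is not statistically significant.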

Can I analyze a program with a small number (N)?

Maybe. An initiative with a small number of students included in the analysis may still show statistically significant results if the persistence lift was very pronounced. Results that are not statistically significant could still be meaningful. 

If possible, collect more data before redoing the analysis or rethink your initiative design.

What is Census Date and how does that impact my analysis?

Census Date reflects the day the add/drop period ends for the term. This date is important because it signals when course enrollments are stable. If your institution has provided a census date, the student prediction trend will appear after the first Civitas data workflow runs following this date. If your institution has not provided this date, the Census Date field will read “Not Available” and the prediction trend will switch after the 14th day of the term by default.
