Getting Started with Initiatives
What is Initiative Analysis?
Initiative Analysis allows you to quickly analyze the effectiveness of student success programs at your institution. Measure your initiatives’ efficacy using prediction-based propensity score matching (PPSM), a statistically rigorous method for evaluating initiative impact that draws on Civitas Learning’s rich data set for your institution.
Smart comparison. Variables that are predictive of success at your institution are used to identify a comparable group of students who did not participate in an initiative but look similar to those who did, by matching on similar likelihoods to persist and to participate in a given initiative. This process reduces selection bias and accurately measures efficacy without the burdens of randomized controlled trials.
What then? After you run impact analysis, you can see a breakdown of initiative effectiveness by term and across student groups. Use these insights to target initiatives toward students who are most likely to benefit and to inform the design of new initiatives. Export these results to report across the institution or to perform additional analysis.
Key Features
Use Initiative Analysis to:
- Perform quick, statistically rigorous analysis of initiatives that have been in place at your institution for at least one previous term. Initiative Analysis can analyze initiatives offered up to four years ago, provided the relevant student data is included in what your institution has provided.
- Submit and validate student data to make sure that analysis is relying on the appropriate data set.
- Study each analyzed initiative, including the lift in persistence rate for participants.
- Drill down on the results, such as the impact of an initiative on the persistence rate during a specific term or for a specific student group.
- Review matching details to verify the similarity of the matched participants and comparison group students used in the analysis.
- Reference the results of all analyzed initiatives submitted by all users at your institution.
How it Works
Using many terms of historical student data specific to your institution, Civitas identifies the variables that are most predictive of student persistence at your institution and uses them to create a persistence model. This model delivers an individual persistence prediction for each currently enrolled student, rating their likelihood of enrolling in the next term at your institution and staying enrolled past your census date (or add/drop period) or graduating.
Modeling — When you submit an initiative to Initiative Analysis, these predictive variables are used to build a propensity model specific to that initiative. This propensity model assesses a student’s similarity to those who participate in the initiative. It is powered by the extensive set of variables in the Civitas data set that have historically been predictive of persistence at your institution. This ensures that when students are matched for comparison, the variables used are the factors most likely to influence whether these students persist.
Scoring — For a given initiative (such as your Freshman Writing Center), each student who meets the eligibility criteria to participate (such as being a first-year, full-time student) has a persistence prediction as well as a propensity score. The propensity score measures the likelihood of a student’s participation in the initiative. For impact analysis, propensity scores are determined by measuring the similarity of eligible students who did not participate in the initiative to students who did participate.
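As an illustration of this scoring step, here is a minimal sketch (not the Civitas implementation) that fits two logistic regression models: a persistence model trained on completed historical terms and a propensity model trained on the eligible population for the analyzed term. The DataFrame layout, column names (feature_cols, persisted, is_participant), and the choice of logistic regression are all assumptions for the example.

```python
# Illustrative sketch only: column names and model choice are assumptions,
# not the production Civitas models.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def score_students(history: pd.DataFrame,
                   eligible: pd.DataFrame,
                   feature_cols: list[str]) -> pd.DataFrame:
    # Persistence model: likelihood of persisting to the next term,
    # learned from completed historical terms with known outcomes.
    persistence_model = LogisticRegression(max_iter=1000)
    persistence_model.fit(history[feature_cols], history["persisted"])

    # Propensity model: likelihood of participating in this initiative,
    # learned from the eligible population for the analyzed term.
    propensity_model = LogisticRegression(max_iter=1000)
    propensity_model.fit(eligible[feature_cols], eligible["is_participant"])

    scored = eligible.copy()
    scored["persistence_prediction"] = persistence_model.predict_proba(
        eligible[feature_cols])[:, 1]
    scored["propensity_score"] = propensity_model.predict_proba(
        eligible[feature_cols])[:, 1]
    return scored
```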
Matching — Next, students are matched on these two dimensions: persistence prediction and propensity score. Each matched pair has one student who participated in the initiative and one student who was eligible but did not participate. The two-dimensional matching ensures that the matched pair of students is similar in both their persistence likelihoods and their propensity to participate in the initiative.
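The sketch below shows one way such two-dimensional matching could be done: it greedily pairs each participant with the nearest unmatched eligible non-participant within a caliper, using the two scores from the previous sketch. The matching algorithm actually used by Initiative Analysis may differ, and the caliper value is an arbitrary placeholder.

```python
# Illustrative greedy 1:1 matching, not the production algorithm.
# `scored` is the DataFrame returned by score_students() above and is
# assumed to carry a student_id column.
import numpy as np
import pandas as pd

def match_pairs(scored: pd.DataFrame, caliper: float = 0.05) -> pd.DataFrame:
    participants = scored[scored["is_participant"] == 1]
    comparisons = scored[scored["is_participant"] == 0].copy()

    pairs = []
    for _, p in participants.iterrows():
        if comparisons.empty:
            break
        # Distance in the (persistence prediction, propensity score) plane.
        dist = np.sqrt(
            (comparisons["persistence_prediction"] - p["persistence_prediction"]) ** 2
            + (comparisons["propensity_score"] - p["propensity_score"]) ** 2
        )
        best = dist.idxmin()
        if dist.loc[best] <= caliper:
            pairs.append((p["student_id"], comparisons.loc[best, "student_id"]))
            comparisons = comparisons.drop(index=best)  # match without replacement
    return pd.DataFrame(pairs, columns=["participant_id", "comparison_id"])
```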
Measuring — After matching, the impact of the initiative can be estimated by measuring the overall difference between the persistence rate of the matched students who participated in the initiative and the persistence rate of the matched students who were eligible but did not participate in the initiative.
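Continuing the sketch above, the estimate itself is simply the difference in persistence rates between the two matched groups, reported as lift. Here, outcomes is assumed to be a Series of 1/0 persistence outcomes indexed by student_id.

```python
# Illustrative lift calculation over the matched pairs produced above.
import pandas as pd

def persistence_lift(pairs: pd.DataFrame, outcomes: pd.Series) -> float:
    participant_rate = outcomes.loc[pairs["participant_id"]].mean()
    comparison_rate = outcomes.loc[pairs["comparison_id"]].mean()
    # e.g. 0.03 means matched participants persisted at a rate
    # 3 percentage points higher than matched comparison students
    return participant_rate - comparison_rate
```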
Accessing Initiative Analysis
Login — When deployment is complete, you will receive a link and login credentials to access Initiative Analysis. If your institution uses SSO (single sign-on), Initiative Analysis can be configured to work with those credentials.
Help — Once in the application, select the Help (question mark) button in the lower right of any page to access the documentation. Refer to the Product Updates to learn about recent changes and releases.
These are the essential tasks that you’ll do in Initiative Analysis:
- Submit a New Initiative
- Add details about an initiative that has been in place at your institution for at least one past term
- Upload a student list with details about students who were eligible to participate in the initiative
- Validate the student data you submitted before students are matched for persistence outcome comparison, and review the expected outcomes to gauge whether the analysis is likely to produce significant results
- View Finished and Pending Initiatives
- See a brief overview of each previously analyzed initiative, including the number of analyzed participants and the lift in the persistence rate for participating students
- View initiatives that other users at your institution have submitted for analysis
- Review an Initiative’s Initiative Analysis
- Review overall impact details, such as the lift in persistence as a percentage
- Analyze drill-down results by term and student group, allowing you to see over which terms and for which students the initiative was most effective
- Check details about student matching to ensure that students with similar persistence probabilities and propensity scores were included in analysis
Designing Effective Initiatives
Follow these guidelines to ensure that you’re designing an effective initiative for impact analysis. Some of these recommendations may not be feasible for all initiatives.
- Have the goal of improving persistence.
- Note that the persistence outcomes used by Initiative Analysis follow the same definition of persistence that Administrative Analytics uses for your institution.
- For example, if your institution is on a Fall → Fall persistence model, Initiative Analysis will be measuring the impact of an initiative offered in one Fall term on persistence into the next Fall term.
- Analyze completed terms, for which persistence outcomes are available.
- If you are interested in analyzing the impact of an initiative being offered during the current term, you will have to wait to submit the initiative until the next term. Persistence outcomes for students enrolled during the current term will be known after the next term’s add/drop period is over.
- If your institution is on a Fall → Fall persistence model, you will have to wait to submit the initiative until the next Fall term when persistence outcomes are measured.
- Use terms within the last 4 years.
- Choose initiatives that were offered throughout the entire term.
- Since the PPSM method used by Initiative Analysis matches students as of the census date of a term and then measures persistence outcomes after the term has ended, the initiative being analyzed should run throughout that period for the most accurate analysis.
- List clear eligibility criteria for participation. In order to match participants with appropriate comparison students, it is important to understand exactly who was eligible for this initiative.
- For example, was the initiative:
- Accessible to any student?
- Mandatory for all first-year students?
- Targeted towards students in dev-ed?
- Targeted towards students with a GPA above 3.0?
- Consider cases that might be less clear:
- Orientation is open to all first-year students, but transfer students and Business students have different orientation programs and do not participate in first-year orientation. For analysis in Initiative Analysis, transfer and Business students should not be considered eligible.
- Writing Center services are open to any student on campus (that is, all majors, graduate or undergraduate, full-time or part-time, etc.) provided the student is taking an English or Rhetoric course. This means that any type of student can be included as an eligible comparison student if he or she was enrolled in an English or Rhetoric course during the specified term.
- All students with a GPA above 3.0 were targeted with an ongoing outreach campaign during the Fall 2016 semester. Rather than comparing against other students who were enrolled this term with GPAs below 3.0, the eligible comparison group should include students from previous terms who had above a 3.0 GPA to ensure students available for matching are as similar as possible to initiative participants.
- List clear selection criteria for participants.
- For example, was the initiative:
- Voluntary (such as drop-in tutoring)?
- Randomly selected/assigned (such as half of ENG 101 sections were randomly selected to use the new course design)?
- Selected/assigned through specific, non-random criteria (such as all advisors of first-year, FTIC students will try the new nudge campaign)?
- Consider how participation is defined in these examples:
- A free online tutoring resource is offered to 5,000 randomly selected undergraduate students. If you are interested in analyzing the impact of making this tutoring resource available, participants would include all 5,000 students and comparison students would include all remaining undergraduate students. However, if analyzing the impact of using the tutoring resource, participants would be a subset of the 5,000 students who actually logged in and comparison students would include the remaining students who were given access and never logged in.
- The Writing Center offers three review sessions for any writing assignment. Before analyzing the impact of Writing Center tutoring on persistence, determine whether participation is defined as attending a single review session or completing the series of three sessions.
- Prepare the required data for impact analysis as a .tsv or .csv file.
- Label the first column student_id, the second column term, and the third column is_participant.
- The student_id and term values should match those you typically see in your SIS source system. When you’re uploading your file, you will see example data specific to your institution to help you provide the expected values.
- The is_participant column should contain a Boolean value: 1 for students who participated in the initiative and 0 for students who were eligible to participate but did not. Only students who meet the defined eligibility criteria should be included in the file. If the initiative was available to any student at the institution, then only the participating students need to be defined in the file because Initiative Analysis will pull in the full student body for comparison. A hypothetical example of this file, with a quick format check, appears after this list.
- Have 1,000+ participants. Ensure that the number of eligible comparison students is at least as large as the number of participants.
- Initiative Analysis requires at least 100 participants for analysis. However, if fewer than 1,000 students are used for analysis, there is a lower likelihood of achieving statistically significant results. Drill-down results (that is, Initiative Analysis by Student Group) will also be affected, as the sample size for a specific student group could be very small.
- Document the context. Explain the motivation and implementation of the initiative, as well as any suspected confounding factors.
- Include these details in the optional Additional Description section when uploading a new initiative.
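For illustration, a hypothetical upload file in the format described above might look like the snippet below, followed by an optional pandas sanity check you could run before submitting. The file name, ID values, and term codes are placeholders; use the formats shown in the uploader for your institution.

```python
# initiative_students.csv (hypothetical values):
#   student_id,term,is_participant
#   100234,201710,1
#   100981,201710,1
#   101447,201710,0
import pandas as pd

df = pd.read_csv("initiative_students.csv")
assert list(df.columns) == ["student_id", "term", "is_participant"]
assert df["is_participant"].isin([0, 1]).all()
assert df["student_id"].notna().all() and df["term"].notna().all()
print(f"{(df['is_participant'] == 1).sum()} participants, "
      f"{(df['is_participant'] == 0).sum()} eligible comparison students")
```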
Design Questions
Here are more questions and discussion to help with your design:
- Is persistence the outcome you are measuring? Currently, Initiatives only provides analysis for one outcome: persistence from one term to another. This is “the probability of a student enrolling in a specified future term and staying enrolled past the add/drop date.” For example, if your institution is on a Fall → Fall persistence model, Initiatives will measure the impact of an initiative offered in one Fall term on persistence into the next Fall term.
- Do you have enough students for analysis? Initiative Analysis requires at least 100 participants and 100 comparison students for analysis. If fewer than 1,000 students are used for analysis, there is a lower likelihood of measuring statistically significant results. Drill-down results (that is, Initiative Analysis by Student Group) will also be affected, as the sample size for a specific student group could be very small. The number of eligible comparison students should be at least as large as the number of participants. For example, let’s say your institution offered a college success course last term that was available to all students and you wanted to know if participation in that course influenced whether students persisted to the following term. Ideally, you would have 1,000 or more students who took the class; they would be considered the participant group in the analysis. You would also need at least 1,000 students who could have taken the course but did not, to use as the eligible comparison group. If there were fewer than 1,000 students who took the course, you could still perform the analysis, but there is a lower likelihood of measuring statistically significant results. If fewer than 100 students took the course, you could not use Initiatives to do the analysis.
- Are you going to analyze a past initiative? If you are analyzing an initiative that occurred in the past, it must have occurred within the last 4 years. Initiatives only has access to your student data from the previous 4 years. All initiatives analyzed using Initiatives must be analyzed after persistence outcomes are available for the students who participated in the initiative and the eligible comparison students. If you are interested in analyzing the impact of an initiative being offered during the current term, you must wait to submit the initiative until the next term when persistence outcomes for the students who participated in the initiative are known. For example, if your institution is on a Fall to Fall persistence model, you must wait to submit the initiative until the next Fall term when persistence outcomes are measured. If you are planning an initiative for the future that you would like to analyze with Initiatives, consider all the questions in this document while planning the initiative to ensure the initiative is a good candidate for analysis using Initiatives.
- Is this a program-level initiative? For best results, Initiative Analysis should be used to analyze programmatic initiatives that occurred across the span of a term. Because students are matched based on characteristics observed on the census date (or day 14 if the census date is unknown) of the term, it is more difficult to isolate specific confounding variables that may influence results for an initiative that occurred at a single point during the term. For example, if one email was sent to students on day 60 of the term, the matching is based on data from several weeks earlier. The students’ persistence predictions and/or propensity scores may have shifted since then, so the pairs may no longer be good matches.
- Who was eligible to participate in this initiative? Do you know exactly who was eligible to participate and who was not? Do you know who participated and who did not from that eligible population? If not, then you must answer those questions before you move on. For example: Were all students eligible to participate? If so, are there any exceptions? Were participants randomly selected/assigned? Or, were participants selected/assigned based on specific, non-random criteria? Consider how participation is defined in these examples:
- A free online tutoring resource is offered to 5,000 randomly selected undergraduate students. If you are interested in analyzing the impact of making this tutoring resource available, participants would include all 5,000 students and comparison students would include all remaining undergraduate students. However, if analyzing the impact of using the tutoring resource, participants would be the subset of the 5,000 students who actually logged in, and comparison students would include the remaining students who were given access and never logged in.
- The Writing Center offers three review sessions for any writing assignment. Before analyzing the impact of Writing Center tutoring on persistence, determine whether participation is defined as attending a single review session or completing the series of three sessions.
- Were there enough participants and eligible comparison students in the same term for matching? In general, more confounding factors are expected when initiative participants and comparison students are matched across different terms. Initiatives considers student data that may not account for other factors that could influence student success in different time periods, such as changes in global economic factors, effects from different interventions, or changes at the institution. If initiative participation criteria are based upon factors that are highly correlated with or indicative of persistence or propensity to participate (e.g. persistence prediction score, GPA, academic standing), then it is unlikely that there will be enough comparison students to match against in the same term for comparison. This type of initiative would not be ideal for analysis in Initiatives because some or all of the eligible comparison students would need to come from another term.
- Is your data in the correct format? Data uploaded to Initiatives must be in a .tsv (tab-separated values) or .csv (comma-separated values) file format. This file should contain:
- The student ID numbers for participant and eligible comparison students
- The term in which they were eligible to participate
- Whether they participated or were eligible but did not participate
- This information should be in the same format as it appears in your student information system. When uploading the file, you will be shown the exact formatting (length of student ID and format of term ID). If you do not have all three pieces of information for the students in these groups, you must gather it before the initiative can be uploaded to Initiatives for analysis.
- What are the potential confounding factors? Confounding factors are other circumstances that could affect the initiative participants or comparison group and make it difficult to determine what exactly influenced outcomes. For example, if an institution wanted to pilot a new software bundle but only one advisor chose to use it, then any difference in outcomes between students who were assigned that advisor and those who weren’t could be attributed to the software, to the advisor, or to both. In other words, since the advisor’s characteristics can’t be accounted for when identifying the comparison group through PPSM, impact analysis results must include caveats about the potential confounding factors. Consider the following common confounding factors during initiative design prior to impact analysis:
- Either the participating group or comparison group contains a single study unit: e.g. one of the groups is representative of a single advisor/faculty member/course/department/etc. In this case, it would be difficult to attribute the difference in outcomes to the initiative or the advisor/faculty member/course.
- The participating group and the comparison group are systematically different in a way that may be directly related to persistence outcomes, e.g. the participants have a high GPA and the comparison group has a low GPA. Since Initiatives measures persistence outcomes, if the participation criteria are chosen based upon something that may correlate with persistence, it will be challenging to find enough comparison students to match with the participants.
- Another initiative is offered in conjunction with the same group at the same time, e.g. first-time full-time students are required to attend a Student Success Course and special advising sessions during their first term. This can be a problem because it will be difficult to isolate the effects of one initiative.
- The participating group and the comparison group are from different time periods or terms and persistence outcomes were measured at different points in time. This can make analysis difficult because there may have been different factors that affected students based upon when they participated in the initiative. Even using PPSM with Initiatives, these confounding factors cannot simply be eliminated or ignored from analysis. If confounding factors are a possibility, any impact analysis should include appropriate caveats and results should be interpreted accordingly.
Troubleshooting Your Design
Even with these recommendations, there may be confounding factors, or other circumstances that could affect the initiative participants or comparison group and make it difficult to determine what exactly had an effect on outcomes.
For example, if you wanted to pilot Advising but made it available to only a single advisor, you couldn’t determine whether any difference in outcomes between students who were assigned that advisor and those who weren’t was due to the application, to that advisor, or to both. Because the advisor’s characteristics can’t be accounted for when identifying the comparison group through PPSM, impact analysis results must include caveats about the potential confounding factors.
Common problems
- Either the participating or comparison group contains a single study unit (such as one of the groups is representative of a single advisor/faculty member/course/department/etc.). In this case, it would be difficult to attribute the difference in outcomes to the initiative or the advisor/faculty member/course.
- The participating and comparison groups are systematically different in a way that may be directly related to persistence outcomes (such as one of the groups is taught by the most experienced and qualified faculty, or the intervention is initiated by faculty voluntarily). Since Initiative Analysis measures persistence outcomes, if the participation criteria are chosen based upon something that may correlate with persistence, it will be challenging to isolate the effectiveness of the initiative.
- Another initiative is offered in conjunction with the same group at the same time (such as first-time full-time students are required to attend a Student Success Course and special advising sessions during their first term). This can be a problem because it will be difficult to isolate the effects of one initiative.
- The participating and comparison groups are from different time periods or terms and persistence outcomes were measured at different points in time. This can make analysis difficult because there may have been different factors that affected students based upon when they participated in the initiative.
Even using PPSM with Initiative Analysis, these confounding factors cannot simply be eliminated or ignored from analysis. If confounding factors are a possibility, any impact analysis should include appropriate caveats and results should be interpreted accordingly.