
Course Insights FAQ

These are the most frequently asked questions about Course Insights:

Usage

When can the completion prediction be used?

The completion prediction is available immediately for every student, including first-year students, without restriction. At an aggregate level, the model applies at any time, across a student's entire time at the institution. The prediction improves slightly after one year, but, overall, it tracks well when compared to actual results.

Can we hide a course?

Yes, individual courses can be hidden. Contact your Civitas team to have a particular course hidden for you.

Are retaken courses counted twice?

No. Course Insights only include each student once. For students who have retaken a course, their most recent attempt is included in the calculation. This ensures that each student who completes the course is only counted once for the purpose of calculating insights.

How are letter grades mapped?

Course Insights display letter grades, including A, B, C, D, F, and W.

These are whole letter grades only: there is no differentiation in mapping for +/- grades. For example, a B+ is mapped to a B, and all insights are determined using these whole letter grades.

Why? Distributing the historical student population among additional grades would reduce the statistical significance of the course insights.
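The mapping described above can be sketched as a simple normalization step. This is a hypothetical illustration of the rule, not the actual Civitas implementation:

```python
def map_letter_grade(raw_grade: str) -> str:
    """Normalize a raw grade to a whole letter grade (A, B, C, D, F, W)."""
    grade = raw_grade.strip().upper().rstrip("+-")  # drop any +/- modifier
    if grade in {"A", "B", "C", "D", "F", "W"}:
        return grade
    raise ValueError(f"Unmapped grade: {raw_grade!r}")

print(map_letter_grade("B+"))  # -> B
print(map_letter_grade("C-"))  # -> C
```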

How are pass/fail grades counted?

Students who take a course pass/fail and pass are excluded from the data set for that course.

Students who take a course pass/fail and fail are included in the data set alongside students who received an F in the course.

Can one course have several signals?

Yes, a single course can have multiple signals. When it does, only the highest-priority signal's icon displays on the chart, according to this hierarchy:

  1. Yellow Flag
  2. Challenge
  3. Late Hurdle
  4. Qualifier

Why are the filters different here?

Because of the historical nature of the signal detection, Course Insights work with filters that won’t change too frequently over the course of a student’s academic career (major changes notwithstanding).

Can I get signals about individual instructors?

Not currently. The application is designed to understand the relationship between student grades and student outcomes (persistence and graduation) for a particular course, not a section or instructor.

What if our course numbering changes?

If there is a crosswalk between old and new course numbers, and the new courses are generally structured the same as the older courses (no major changes to curriculum, expectations, etc.), this can be accounted for in the deployment process.

If there is no crosswalk, or a course changes significantly, it will take 6-10 years of data on the new course to show persistence and graduation data.

What is the ‘average course’?

The average course is determined by identifying the top 25 courses (those with the largest enrollment) that are typically taken in the same year as the selected course for the filtered group of students.

To see a list of all 25 courses included in the calculation of the average course Key Insights, click the Avg Course column header on the Course Details page.

Why do the rankings differ? (bubble chart versus list at right)

All courses on the bubble chart are top influencers of lift for either completion or persistence (depending on which you’re looking at). They have an outsized impact on completion or persistence rates, even if they don’t fit one of the four course signals.

The list on the right ranks completion/persistence lift for all courses, no matter which year they are generally taken. The higher a course is on the list, the higher it is on the y-axis of the chart.

Calculations

How is graduation lift calculated?

For each letter grade, we look back at the historical graduation rates of students who received that grade. The lift is the average change in graduation rate between adjacent letter grades.

In other words, we look at the boosts to graduation if students were to move from an F to a D, D to a C, C to a B, and B to an A and average those deltas to calculate the graduation lift.
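The averaging described above can be sketched in a few lines. The grade-by-grade graduation rates here are illustrative numbers, not your institution's actual data:

```python
# Historical graduation rates by grade (illustrative numbers only).
grad_rate = {"F": 0.10, "D": 0.25, "C": 0.45, "B": 0.60, "A": 0.70}

# Deltas for each one-letter step: F->D, D->C, C->B, B->A.
order = ["F", "D", "C", "B", "A"]
deltas = [grad_rate[hi] - grad_rate[lo] for lo, hi in zip(order, order[1:])]

# The lift is the average of those deltas.
graduation_lift = sum(deltas) / len(deltas)
print(f"{graduation_lift:.3f}")  # -> 0.150
```

The same averaging applies to persistence lift, substituting persistence rates for graduation rates.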

How is persistence lift calculated?

For each letter grade, we look at the historical persistence rates of students who received that grade. The lift is the average change in persistence between adjacent letter grades. In other words, we look at the gains in persistence if students were to move from an F to a D, D to a C, C to a B, and B to an A, and average those deltas to calculate the persistence lift.

The sum of the differences between each grade is then divided by the actual number of students who persisted in that course:

(students who would have persisted with a higher letter grade) ÷ (actual number of students who persisted)

How is “Year Typically Taken” determined?

The year typically taken is determined by the year in the academic career during which most students take the course, also known as the mode year.

Important:

  • “Year” in Courses counts years at the institution, not academic standing. When a course is in “Year 3,” it means “taken in the third year at the institution,” not “taken as a Junior.”
  • “Year” does not mean the year in which a majority of students take the course, but the year in which it is taken most often.
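The mode-year rule can be illustrated with a quick sketch (the enrollment counts are made up; note that the mode year need not be a majority):

```python
from collections import Counter

# Year-at-institution in which each enrollment in the course occurred
# (illustrative data: year 2 is the mode, but only 40 of 100 students).
enrollment_years = [1] * 35 + [2] * 40 + [3] * 25

# "Year Typically Taken" is the most common (mode) year.
year_typically_taken = Counter(enrollment_years).most_common(1)[0][0]
print(year_typically_taken)  # -> 2
```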

How many terms of data are used?

We use up to 10 years of historical data provided by the institution, excluding years outside of active completion windows and excluding terms for which outcomes aren’t yet known. Non-degree-seeking students are excluded.

  • Graduation — Only uses data for students who have had a full 6 years to graduate. This means that, if given 10 years of historical data (e.g., 2006 – 2016), only the first 4 years of data have students meeting the 6-year threshold.
  • Persistence — Only uses the most recent 4 years of data provided by your institution.

For insights, are terms based on academic year?

Yes. For example, if it’s May 2023 with a 4-term year, then the following terms are used for both graduate and undergraduate insights:

  • Graduation Insights (based on the last 10 years in which students had 6 years to finish)
    • 2013 – Winter, Spring, Summer, Fall
    • 2014 – Winter, Spring, Summer, Fall
    • 2015 – Winter, Spring, Summer, Fall
    • 2016 – Winter, Spring, Summer, Fall
  • Persistence Insights (based on the most recent 4 years of course outcomes)
    • 2019 – Winter, Spring, Summer, Fall
    • 2020 – Winter, Spring, Summer, Fall
    • 2021 – Winter, Spring, Summer, Fall
    • 2022 – Winter, Spring, Summer, Fall
    • 2023 – Winter (outcomes are not yet known for the other terms)

The earliest term included in the full term set is Winter 2013. This will be the earliest term until December 31, 2023. On January 1, 2024, the earliest term included will be Winter 2014.

How many students must take a course for it to show up?

There is no set minimum. If at least one student took a course, then that data is in the student enrollment data set.

How is Degree Program Alignment scored?

The Degree Program Alignment Z-score (Cumulative) quantifies a student’s program alignment by measuring how similar the courses they’ve taken are to those taken by successful graduates of their chosen degree program, compared to the average of all students in the degree program.

  • A high z-score is good: it means the student has taken more courses than average that align with those taken by degree program graduates.
  • A low z-score is bad: it means the student is taking fewer courses than average that align with those taken by degree program graduates.

A z-score is a statistical measure of a value’s relationship to the mean of a group of values. A z-score of 0 means the value equals the mean. A z-score can be positive or negative, indicating whether the value is above or below the mean and by how many standard deviations.
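For illustration, here is the standard z-score formula applied to a group of alignment scores. The scores are made up, and this is the general statistical calculation, not the product's internal code:

```python
import statistics

# Alignment scores for all students in a degree program (illustrative).
scores = [12, 15, 9, 18, 21, 15, 12, 18]
mean = statistics.mean(scores)
stdev = statistics.pstdev(scores)  # population standard deviation

def z_score(x: float) -> float:
    """Standard deviations that x lies above (+) or below (-) the mean."""
    return (x - mean) / stdev

print(round(z_score(21), 2))  # above the mean -> positive
print(round(z_score(9), 2))   # below the mean -> negative
```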

What factors drive the completion prediction?

In addition to key factors such as the student’s GPA, completed credits, and degree program alignment score, these are common variables driving the completion prediction model:

  • Is transient — Whether the student is taking courses at your institution with the intention of transferring those courses to another school where they are seeking a degree. Transient students are non-matriculating.
  • Is full-time — Whether the student is considered full-time in the current term. Full-time and part-time designations are institution-specific.
  • Sections attempted — Number of sections being attempted in the current term.
  • Sections attempted next term — Number of sections registered for the next term, weighted by the fraction of other students who have registered for sections in that term.
  • Failure Ratio (cumulative) — The student’s ratio of failed sections to attempted ones, across their entire academic career.

Important: Which variables are most impactful varies by institution.

Are Course Insights based on cohorts?

No, they are based on all students in the last 10 years who fall within the group as you have currently filtered.

What is the Score in the Courses export?

In the CSV export of Courses data, the Score for a course is a calculation of persistence impact based on the historical persistence of students who received grades in that course.

The Score calculates the impact on persistence or graduation if a student’s grade in the course had been one letter higher, say from B to A. That impact is derived from actual persistence data at your institution. For each letter grade (A, B, C, D, F, W, …), the Score works with these counts:

  • Total number of students who received that grade
  • Number of students receiving that grade who persisted
  • Number of students receiving that grade who did not

You can then focus on improving grades in certain courses, knowing that doing so improves students’ overall persistence/graduation rate.
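A hedged sketch of how a persistence-impact score could be derived from the per-grade counts above, following the lift logic described earlier (the exact Civitas formula isn't published here, and the counts are illustrative):

```python
# Per-grade counts (illustrative): (total students, students who persisted).
counts = {
    "F": (200, 60),
    "D": (150, 75),
    "C": (400, 280),
    "B": (500, 400),
    "A": (600, 540),
}

# Persistence rate for each grade.
rate = {g: persisted / total for g, (total, persisted) in counts.items()}

# Average one-letter-grade gain in persistence: F->D, D->C, C->B, B->A.
order = ["F", "D", "C", "B", "A"]
score = sum(rate[hi] - rate[lo] for lo, hi in zip(order, order[1:])) / (len(order) - 1)
print(f"{score:.3f}")  # -> 0.150
```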

What is the “boost potential”?

When you drill into a course’s details, you see “What is this course’s potential to boost graduation rates?” This score projects the lift in persistence and graduation rates across the entire filtered student group if every student in that group improved their course grade by one letter. That lift is shown next to the lift for the average (typical) course at your institution, for relevant comparison.

The “boost potential” score projects how much higher graduation rates would have been if the average student’s final grade had been one letter higher. It does not estimate how long it would take for the graduation rate to increase. The purpose is to give you a quick ranking of courses based on impact potential, to help you identify high-impact courses for intervention.
