
Analytics FAQ

Following are answers to frequently asked questions about Administrative Analytics.

Why does our logo look so small?

The new collapsible left-side navigation changed the dimensions available for displaying your logo. The logo area is a maximum of 150px wide by 60px high, and larger images are scaled to fit. If you want to change how your logo appears, contact Support to update your file.

Changes in Predictions

Why did last week’s persistence prediction of 90% jump to 95% today?

It’s typical to see more shifts in persistence predictions during the open registration period. This is because “next-term enrollment” is an important variable in modeling persistence.

We had no students with a “very low” persistence prediction, and now we have 60!

Critical time thresholds trigger updates to the predictions; for example, when the term starts, unregistered students change to a status of Inactive. Note that Powerful Predictors are not the same set of data as model variables: they are often a subset.

Degree Program Alignment

What is “Degree Program Alignment”?

Degree Program Alignment compares a student’s academic career to those of students who successfully graduated in the same program of study. The score quantifies how well the student is progressing compared to your institution’s historical data.

How it works

  • For each course in a degree program (including accepted transfer credits), we calculate the percentage of graduated students who received course credit (earned_course_percent). This captures: how many graduates took that particular course?
  • For each student and graduate of a degree program, we calculate the average earned_course_percent across all courses they earned credit in. This is the degree program alignment score.
  • The degree program alignment Z-score compares a student’s alignment score against the average alignment score of graduates of the same degree program (see the sketch below).
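To make the three steps concrete, here is a minimal sketch in Python. The sample course data, the function name, and the use of a population standard deviation are assumptions for illustration, not the production implementation.

```python
from statistics import mean, pstdev

# Courses in which each graduate of the program earned credit (sample data).
graduate_courses = {
    "grad_1": {"ENG101", "MATH110", "BIO150"},
    "grad_2": {"ENG101", "MATH110", "CHEM120"},
    "grad_3": {"ENG101", "BIO150"},
}

# Step 1: earned_course_percent -- the share of graduates who earned credit
# in each course that appears in the degree program.
all_courses = set().union(*graduate_courses.values())
earned_course_percent = {
    course: sum(course in taken for taken in graduate_courses.values())
            / len(graduate_courses)
    for course in all_courses
}

def alignment_score(courses_earned):
    """Step 2: average earned_course_percent over the courses a person has
    earned credit in; None for new students with no earned credits."""
    scores = [earned_course_percent.get(c, 0.0) for c in courses_earned]
    return mean(scores) if scores else None

# Step 3: z-score a student's alignment score against the graduates' scores.
grad_scores = [alignment_score(c) for c in graduate_courses.values()]
mu, sigma = mean(grad_scores), pstdev(grad_scores)

student_courses = {"ENG101", "MATH110"}  # hypothetical current student
student_score = alignment_score(student_courses)
print(f"alignment={student_score:.3f}, z-score={(student_score - mu) / sigma:+.2f}")
```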

What to expect

  • Because earned credits only change once a term completes, expect alignment scores to hold steady during a term.
  • Students with many earned course credits do not see strong movement in their alignment score, because the score is an average across all the courses they have taken.
  • Students with few earned credits (only a few completed courses) could have higher alignment scores than students who are about to graduate.
  • New students (without course credits) will have no degree program alignment score (null).

What is the “Z-score”?

The Degree Program Alignment Z-score (Cumulative) quantifies a student’s program alignment by measuring how similar the courses they’ve taken are to those taken by successful graduates of their chosen degree program, compared to the average of all students in the degree program.

  • A high z-score is good: it means the student has taken more courses than average that align with those taken by degree program graduates.
  • A low z-score is bad: it means the student has taken fewer courses than average that align with those taken by degree program graduates.

A “z-score” is a statistical measurement of a score’s relationship to the mean in a group of scores. A z-score of 0 means the score is the same as the mean. A z-score can be positive or negative, indicating whether the score is above or below the mean and by how many standard deviations.
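As a minimal sketch (the sample scores are assumptions), the z-score is simply the score’s distance from the group mean, measured in standard deviations:

```python
from statistics import mean, pstdev

def z_score(value, group):
    """How far `value` sits from the group mean, in standard deviations.
    Positive means above the mean; negative means below it."""
    return (value - mean(group)) / pstdev(group)

peer_scores = [0.70, 0.75, 0.80, 0.85, 0.90]  # hypothetical alignment scores
print(round(z_score(0.90, peer_scores), 2))   # 1.41 -> well above the group average
print(round(z_score(0.80, peer_scores), 2))   # 0.0  -> exactly at the mean
```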

Algorithm and Modeling

How many prior years of data does the algorithm use? Will there be generational drift?

In the persistence model, we use the most recent 2 years of historical data (past terms with outcomes), which is enough data for modeling while still accurately representing current students. Every time we retrain a model, we use the most recent 2 years of historical data, so the model keeps up with your ever-changing student population.

How can a student be predicted to have low persistence yet high completion?

This unintuitive pairing might point to an event or aberration in the student’s situation: their overall long-term indicators match well for academic success, but something triggered the model to predict a short-term problem. These flags help advisors know when to reach out, and where they might uncover a family, financial, or health crisis needing support.

How can a student be predicted to have high persistence yet low completion?

Long-term indicators, such as cumulative GPA and credits earned, have a significant impact on completion predictions. These indicators are often what separate the High/High group from the High/Low group.

Some behaviors, such as re-enrolling after an absence, attending part-time, and attempting too few credits may have little impact on immediate continuation (persistence) but bode poorly for completing a credential on time.

By heavily weighting LMS engagement, can the algorithm deal fairly with non-LMS courses?

Most of the LMS model variables focus on relative comparisons, such as looking at student LMS activity only relative to peers in the same section or course. That helps the model work around uneven use of the LMS across specific faculty, courses, and time periods.
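One way to picture these relative comparisons is to center each student’s activity on their own section’s mean before averaging, so sections that barely use the LMS do not penalize their students. The sample data and function name below are assumptions for illustration:

```python
from statistics import mean

# Weekly LMS logins per student, grouped by section (sample data).
sections = {
    "CHEM101-01": {"ana": 12, "ben": 9, "cai": 3},   # LMS-heavy section
    "HIST200-02": {"ana": 2, "ben": 1, "cai": 0},    # LMS-light section
}

def relative_activity(student):
    """Average of the student's activity relative to each section's mean,
    so uneven LMS use across courses largely cancels out."""
    diffs = [
        logins[student] - mean(logins.values())
        for logins in sections.values()
        if student in logins
    ]
    return mean(diffs)

for s in ("ana", "ben", "cai"):
    print(s, round(relative_activity(s), 2))  # ana 2.5, ben 0.5, cai -3.0
```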

Data Display & LMS Calculations

When does LMS data refresh? Is it updated in real time?

The data does not update in real time. LMS data refreshes when the application is built, as long as the data is successfully extracted from the LMS. Typically, an LMS extract is performed every night around midnight, and then the application is rebuilt (around 2 AM). Details regarding Data Freshness and Most Recent LMS Activity Date display under the “Data Info” tab.

How fresh is all of the data?

Generally, data is refreshed and backed up daily across all products, as part of the backend workflow that supports your institution.

For Scheduling, which is more time-critical, course and section data updates hourly, seat counts update with each change, and student data updates each time they sign in. Detailed logging about these transactions is kept for 30 days.

What data points are tracked or counted as LMS Activity?

In general, any activity related to attendance, assignments, exams/quizzes, discussions, and/or messaging in the LMS is tracked as LMS activity.

How is Average Days of any LMS Activity (Per Week) calculated?

This is a feature used in the Engagement (LMS) category for Powerful Predictors, and is defined as the average number of days per week a student performs any LMS activity across sections.
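A rough sketch of that definition (the sample dates and helper name are assumptions, not the product’s actual pipeline):

```python
from datetime import date

# Timestamps of any LMS activity for one student, across all sections (sample data).
activity_dates = [
    date(2024, 9, 2), date(2024, 9, 2),   # two events on the same day count once
    date(2024, 9, 4), date(2024, 9, 10),
    date(2024, 9, 12), date(2024, 9, 13),
]

def avg_active_days_per_week(dates):
    """Average number of distinct days per ISO week with any LMS activity."""
    days_per_week = {}
    for d in set(dates):                  # distinct active days only
        week = d.isocalendar()[:2]        # (year, ISO week number)
        days_per_week[week] = days_per_week.get(week, 0) + 1
    return sum(days_per_week.values()) / len(days_per_week)

print(avg_active_days_per_week(activity_dates))  # 2.5 (2 days one week, 3 the next)
```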

On a line graph, what’s the x-axis for predictors such as LMS relative grades or course withdrawals?

Graphs are a product of a process called “normalization.” Suppose you want to know how far below average a student is performing in their classes. The trouble is that the student is enrolled in multiple classes, and each class and each section has a different average grade (e.g., chemistry is 56%, math is 72%, biology is 91%). So the platform “normalizes” each section’s grades so that the average grade for every section is 0. The system can then line those averages up and determine how far a student or group of students is from 0 in each of their classes (e.g., -5%, +2%, -15%). Averaging those distances yields the “average distance from section average,” which can then be compared with other students, all against a “normalized” mean of 0 as shown in the graph.
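The process described above can be condensed to a few lines. This sketch reuses the chemistry/math/biology example; the student’s individual grades are assumed values:

```python
from statistics import mean

# Section averages from the example above, plus one student's grades.
section_avg = {"chemistry": 0.56, "math": 0.72, "biology": 0.91}
student_grade = {"chemistry": 0.51, "math": 0.74, "biology": 0.76}

# Normalize: express each grade as its distance from the section average,
# so every section is centered on 0 and the classes become comparable.
distances = {
    course: round(student_grade[course] - section_avg[course], 2)
    for course in section_avg
}
print(distances)  # {'chemistry': -0.05, 'math': 0.02, 'biology': -0.15}
print(round(mean(distances.values()), 3))  # -0.06: average distance from section average
```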

When reviewing predictors associated with a relative average, the 0 on many graphs represents the section average of every single course offered by the institution. The distribution along the red and blue lines then shows how frequently students are earning grades, withdrawing from a course, logging in to the LMS, etc., as percentage points above or below their class averages. For example, if you see a number such as -0.05, the group in that range is 5% below the average.

With a data set such as course withdrawals, you may see decimal values such as .75 or 2.5. This stems from the fact that the platform does not use a histogram to display data that would normally have only discrete values (1, 2, 3, 4, 5, etc.). Instead, it uses a graphing function to build a “normalized curve,” a “continuous” visualization that includes non-discrete values (.75 and 2.5). In general, histograms are more accurate, but the “normalized curve” provides a more consistent visual experience, so a value like .75 should be read as “around 1” and a value like 2.5 should be read as “around 2 or 3.”
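To see why inherently discrete counts appear as a smooth curve with values like .75 or 2.5, you can compare a plain tally of withdrawal counts with a kernel density estimate of the same data. This sketch uses scipy’s gaussian_kde; the sample counts are assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Course withdrawals per student: inherently discrete values (sample data).
withdrawals = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 5])

# A histogram-style tally keeps the discrete values exact...
values, counts = np.unique(withdrawals, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))  # {0: 3, 1: 4, 2: 2, 3: 2, 5: 1}

# ...while a kernel density estimate smooths them into a continuous curve,
# which is why the graph shows non-zero density at points like 0.75 or 2.5.
kde = gaussian_kde(withdrawals)
for x in (0.75, 2.5):
    print(x, round(float(kde(x)[0]), 3))
```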
