As part of the UNC Master’s in Digital Communications program, I completed a thesis project that applied the skills I developed during my studies. I was originally scheduled to complete the degree in Spring 2020, but the COVID-19 campus shutdown forced me to withdraw my proposal from the IRB, revise it, and resubmit for Fall 2020.

Working in higher education presents many challenges, and those challenges are compounded when the organization is a high-level state entity. Many campuses can leverage athletics, appeal to alumni or student pride, or tout the benefits of research to connect with audiences. Once a campus connects with its audience and drives traffic to a site, it can gauge the success of the campaign or page by how many visitors convert to completed applications, donations, and so on. For many websites, that visitor-to-completed-transaction conversion rate is the key performance indicator (KPI).
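To make that concrete, here is a minimal sketch of how a conversion-rate KPI is typically calculated. The visitor and application counts below are invented for illustration only; they are not figures from my study or from any UNC campus.

```python
# Hypothetical example of a conversion-rate KPI calculation.
# The counts below are invented for illustration only.
visitors = 12_000              # unique visitors driven to the page in a given month
completed_applications = 300   # transactions (e.g., applications) completed by those visitors

conversion_rate = completed_applications / visitors
print(f"Conversion rate: {conversion_rate:.1%}")  # Conversion rate: 2.5%
```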

My previous organization, however, had to rely on other methods to engage audiences and drive traffic to the website. We had no students, no athletic teams, and no applications to accept. Visitors didn’t have a specific transaction to complete, so we had no clear conversion KPI. Without one, how would we know whether the site was successful? Could metrics be used to determine the success of the website? That’s what I wanted to find out.

Metrics and the Success of a Higher Education Non-transactional Website

Abstract

The purpose of this study was to discover relationships between metrics and usability standards to assess the success of the University of North Carolina System Office website. Though no clear relationship was found, the study’s results will be used to improve the UNC System Office website and other non-transactional websites that convey policy-based information to the general public. Remote usability testing with nine users on desktop computers revealed that the site succeeds in terms of aesthetic design and top-level navigation but suffers from critical errors, poor organization, and an overuse of industry-specific terminology. The testing results demonstrate the importance of presenting a wide breadth of information to the general public in a way that is visually appealing and topical for some readers while direct and deeply specialized for others. Above all, the study serves as a case study in conducting usability testing.


Want to hear more?

In the coming weeks, I’ll be breaking down the insights I gathered from my study and sharing them here on the website. I’ll be covering topics such as:

  • Using Facebook ads to recruit participants
  • Performing remote moderated usability testing
  • Vocabulary inflation in the UI/UX field
  • Testing techniques and what is appropriate when
  • Testing session scripts and easily misinterpreted answers
  • And much more…