Data Viewer Frequently Asked Questions (FAQ)

Table of Contents:

Where do I start?
How can I benchmark my institution relative to others?
Can I explore data (or see benchmarking) on individual questions within the assessment?
When will priorities visualization be available?
Where does the information about institutions come from?
What is a “Contributor” (vs. a “User”)?
What is “Coverage”?

See also the Capabilities Model Frequently Asked Questions (FAQ)


Where do I start?

Click on the "Community Demographics" tile to see graphical displays of information about the institutions in the dataset. There are several different types of visualizations to explore.

Click on the "Capabilities Data and Benchmarking" tile to see graphical displays of Capabilities Model Assessment Data in the community dataset.

Click on any of the Filters on the left of an individual visualization page to see options for narrowing the set of data you want to view. Once you have set your Filters and View Options, click on "Update View" to see the graph.

Back to top 

How can I benchmark my institution relative to others?

See the step-by-step example of benchmarking your institution in the Quick Start Guide.

Back to top 

Can I explore data (or see benchmarking) on individual questions within the assessment?

This is a feature we will add soon. Note, however, that this level of data exploration will only be available to contributors (i.e., those institutions that complete an assessment and submit it to the Community Dataset).

Back to top 

When will priorities visualization be available?

We are currently conducting focus groups to better understand what functionality people want for this. If you are willing to participate in a short discussion to help shape this feature, please email capsmodel-help@carcc.org.

Back to top 

Where does the information about institutions come from?

Much of the institutional metadata comes from IPEDS, the Integrated Postsecondary Education Data System. This is a system of interrelated surveys conducted annually by the U.S. Department of Education’s National Center for Education Statistics (NCES). See https://nces.ed.gov/ipeds/.

The Institutional Classification values are based upon the widely used Carnegie Classification with adjustments and extensions to support our community of users (e.g., to include additional types of institutions and those that are not U.S. Higher Education institutions). For more on the Carnegie Classification of Institutions of Higher Education, see https://carnegieclassifications.acenet.edu/

Metadata on Institutional Missions, Organizational Models, and Reporting Structures is self-reported by institutions when they create an institutional profile to complete an assessment.

Back to top 

What is a “Contributor” (vs. a “User”)?

Contributors are institutions that have completed an assessment using the CaRCC Capabilities Model and agreed to contribute their data to the community dataset.
Users are all institutions that have created a profile on the RCD Nexus portal and/or requested and downloaded an earlier version of the CaRCC Capabilities Model.

You can explore the demographics of the community of users (and contributors) using the Community Demographics tools of our Data Viewer.

A total of 62 institutions have completed 90 assessments (many have repeated their assessment in successive years). See also the list of contributors.

Back to top 

What is “Coverage”?

The Capabilities Model presents roughly 150 capabilities (in the form of questions) structured around the five Facings. Within each Facing, questions are grouped into topics. Each question is considered through three lenses: Availability across the institution, Service Operating Level, and Community engagement and collaboration (these are described in the Understanding Key Aspects of the Model section of the Introduction and Guide to Use).

For each question, users choose an answer for each of the three lenses, and these answers are combined to produce a numerical coverage value for that capability. Coverage values for the questions are then averaged to produce a coverage value for the associated topic, and again for each Facing overall.
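
To make the roll-up concrete, here is a minimal Python sketch of the calculation. The mapping of lens answers to scores between 0 and 1, and the equal weighting of the three lenses, are illustrative assumptions; this FAQ only states that the lens answers are combined per question and the results averaged per topic and per Facing.

    # A minimal sketch of how coverage values roll up. Assumes each lens
    # answer maps to a score in [0, 1] and that the three lens scores are
    # averaged with equal weight per question -- an assumption for
    # illustration, not the documented Data Viewer formula.
    from statistics import mean

    def question_coverage(availability, operating_level, community_engagement):
        """Combine the three lens scores for one question into a coverage value."""
        return mean([availability, operating_level, community_engagement])

    def rollup_coverage(coverage_values):
        """Average question-level coverage to get a topic (or Facing) value."""
        return mean(coverage_values)

    # Example: a topic with three questions.
    topic_questions = [
        question_coverage(1.0, 0.5, 0.0),  # broadly available, partial service, no engagement
        question_coverage(0.5, 0.5, 0.5),
        question_coverage(0.0, 0.0, 0.0),  # capability not present
    ]
    print(rollup_coverage(topic_questions))  # topic coverage, here 0.333...

The same averaging step would then repeat across a Facing's topics to yield the Facing-level coverage shown in the Data Viewer.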

Back to top