CaRCC Capabilities Model Introduction
Updated for V2.5, April 2025
Table of Contents:
- What is a Capabilities Model, and why do we need one?
- Acknowledgements
- Understanding Key Aspects of the Model:
  - The Five Facings
  - Availability across institution
  - Service Operating Level
  - Community engagement and collaboration
  - Question not applicable
  - Local Priority
  - Domain support levels
This document provides an introduction and background on the CaRCC Capabilities Model, and explains the concepts that underlie the implementation as a web application. For a quick guide to using the tools, see also the Capabilities Model Quick Start Guide.
What is a Capabilities Model, and why do we need one?
Research Computing and Data[1] (computing, data, and related infrastructure and services, and abbreviated as "RCD") is changing at an accelerating rate, while the range of academic fields and disciplines depending on this infrastructure is expanding and becoming increasingly diverse. The CaRCC Capabilities Model was developed to identify the variety of relevant approaches taken to support the sector, and the key factors for providing this support, with a focus on the front lines of RCD infrastructure. The Model is designed for use as an input to strategic decision making and organizational planning, and is intended to be inclusive across types of institutions (large and small, public and private, etc.), as well as different organizational models (e.g., centralized or decentralized).
The Model is implemented as an assessment tool and an associated Data Viewer that allows users to explore the community dataset of contributed capabilities model assessments, and to benchmark their institutional assessment results relative to the community.
What are common uses for the Model?
- To identify and understand areas of strength and weakness in an institution's support, e.g., when conducting strategic planning and prioritization exercises.
- To benchmark your institution's support against peers, e.g., when making an argument for increased funding to remain competitive on faculty recruitment and retention.
- To compare local institutional approaches for RCD support to a common community model (i.e., a shared vocabulary), to facilitate communications and collaboration.
What is the format for the Model?
The assessment tool is presented as a questionnaire that allows institutions to conduct a self-assessment on a series of factors across a range of perspectives. This web-based application allows collaboration among individuals or teams involved in RCD support so they can work together to assess current capabilities at their institution. The next section lays out key aspects of the Model. See also the Quick Start Guide for guidance on using the assessment at your institution.
As of the 2.5 release, the assessment tool provides several variants to address different institutional goals:
- The Full assessment includes roughly 150 capabilities covering a broad range of topics. This variant is useful for established programs wishing to do a complete baseline assessment of their RCD program.
- An Essentials assessment includes a subset of capabilities questions identified for new and emerging RCD programs that want to focus on the essential capabilities. Users can still choose to view the full set of capabilities questions and can even add selected non-essential capabilities to include in their assessment.
- A Chart-Your-Own-Journey assessment allows for a customized assessment, in which users select only those topics and questions that they want to focus on. This option is intended for RCD programs conducting a follow-up assessment that want to focus on just those areas likely to have changed since the last assessment, or those that want to conduct a partial assessment of capabilities. With this variant, users have two options to start with:
  - Copy answers from an earlier assessment, and then choose the topics and capabilities to be updated in the current assessment. This is useful when an institution identified areas for improvement as a result of an earlier assessment and wants to re-evaluate (just) those related capabilities after having implemented a strategic improvement plan.
  - Start from a blank assessment, and choose only those topics and capabilities that will be covered in the current assessment. This is a less common use of the Model, but allows maximum flexibility in choosing what to assess.
Some institutions consider this assessment work to be a measure of "maturity", and are very much interested in ranking/benchmarking against their peers. Others prefer to focus on the capabilities that are broadly understood to constitute RCD support as an inspiration for next steps. In any case, it will not generally be realistic for an institution to achieve 100% across the full range of support areas, nor would it necessarily make strategic sense to do so.
Who should use the CaRCC Capabilities Model?
This Model is designed to be useful to a diverse mix of stakeholders, including campus research computing and data practitioners, along with the principal investigators and research team members (faculty, staff) with whom they work, as well as key partners (e.g., central IT), and campus leadership.
We have limited staff resources; how much work is this going to be?
Smaller and emerging RCD programs can choose to complete an Essentials assessment, which reduces the effort considerably. There are also some optional components that can be skipped (e.g., Domain Coverage and prioritization) to minimize the effort involved. Regardless of which variant is chosen, some institutions choose to do a more streamlined assessment their first time and include just a few key participants (e.g., an RCD support lead or equivalent plus someone from the library). The resulting assessment data may not be fully accurate or complete, but this can be a quick way to learn about the CaRCC Capabilities Model and explore how the resulting data, benchmarking reports, and other outputs can inform strategic planning at your institution. See also How long does it take to complete the RCD Capabilities Model? in our FAQ document.
Acknowledgements

The CaRCC Capabilities Model was developed through a collaboration of the Campus Research Computing Consortium (CaRCC), Internet2, and EDUCAUSE. This work has been supported in part by the National Science Foundation via OAC-1620695 and OAC-2100003.
The assessment tool, the data viewer, this guide, and the other associated resources were developed with the generous contributions of time and expertise from the 2018 workshop participants, and the working group members: Alex Feltus, Ana Hunsinger, Cathy Chaplin, Claire Mizumoto, Dana Brunson, Deborah Dent, Doug Jennewein, Gail Krovitz, Galen Collier, Jackie Milhans, James Deaton, Jen Leasure, Jill Gemmill, Jim Bottum, Joe Breen, Joel Gershenfeld, John Hicks, John Moore, Karen Wetzel, Mike Erickson, Patrick Schmitz, Preston Smith, Timothy Middelkoop, and Tom Cheatham. In addition, individuals at a number of Universities provided valuable feedback on earlier versions, for which the working group is very grateful.
Copyright ©2023-2025 Internet2 and CaRCC. This work is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0), which grants the general public rights to share and redistribute the material for any purpose, even commercially, provided you give appropriate credit with a link to the license; if you remix, transform, or build upon the materials in any way, you may not distribute the modified material.
Understanding Key Aspects of the Model
There are six key concepts that underlie the Model, around which the tool is organized. These are:
- The Five Facings
- Availability across institution
- Service Operating Level
- Community engagement and collaboration
- Question not applicable
- Local Priority
The Five Facings: The Model is organized into sections that reflect different roles that staff fill in supporting RCD, and are named to reflect who or what each role is "facing" (i.e., focused on) [2]. Within each facing, the model poses questions about aspects of RCD support for the associated role; the questions are grouped into Topics.
Larger organizations may have a team associated with each facing role, while smaller organizations may have just a few people who cover these different roles. In filling out the assessment tool, you will likely want to involve people who work in the various roles; they can each fill out their respective section of the assessment.
| Facing Area | Description | Example roles |
|---|---|---|
| Researcher-Facing Topics | Includes RCD staffing, outreach, and advanced support, as well as support in the management of the research lifecycle. | Research IT User Support, Research Facilitators, etc. |
| Data-Facing Topics | Includes data creation; data discovery and collection; data analysis and visualization; research data curation, storage, backup, and transfer; and research data policy compliance. | Research Data Management specialists, Data Librarians, Data Scientists, etc. |
| Software-Facing Topics | Includes software package management, research software development, research software optimization or troubleshooting, workflow engineering, containers and cloud computing, securing access to software, and software associated with physical specimens. | Research Software Engineers, Applications Specialists, Research Computing support, etc. |
| Systems-Facing Topics | Includes infrastructure systems, systems operations, and systems security and compliance. | HPC Systems Engineers, Storage Engineers, CI Network Engineers, etc. |
| Strategy and Policy-Facing Topics | Includes institutional alignment, culture for research support, funding, and partnerships and engagement with external communities. | Research IT leadership: Director, Assistant/Associate Director, etc. |
Table 1 - Description and examples for the Five Facings
Questions for each Capability
The CaRCC Capabilities Model presents roughly 150 capabilities (in the form of questions) structured around the five Facings. Within each Facing, questions are grouped into topics. Each question is considered through several lenses (described below): Availability across institution, Service Operating Level, and Community engagement and collaboration.
- The Essentials version of an assessment includes a subset of just under 40 capabilities questions that have been identified for new and emerging RCD programs that want to focus on the essential capabilities; however, the organization and form of the questions are the same as for a full assessment. Users can choose to hide all non-essential questions and topics for a streamlined experience, or they can choose to have the non-essential questions appear as well (e.g., to be aware of the broader set of capabilities), and can optionally include any of these non-essential questions in their assessment.
- Chart-Your-Own-Journey assessments can include only a few questions, or as many of the full set as an institution wishes to include. When these custom assessments are based upon (i.e., copy answers from) an earlier assessment, users can choose to see all the questions or just those that are to be answered in the current assessment.
For each question, users choose answers for the three lenses, and these are combined to produce a numerical coverage value for that capability. Coverage values for the questions are averaged to produce a coverage value for the associated topic, and for each Facing overall.
Note: The computed coverage will indicate “(WIP)” (work in progress) if some but not all of the lenses have been completed for a given question. Similarly, the topic (average) computed coverage will indicate “(WIP)” if some but not all of the questions are answered in the associated topic. The average coverage values for each Facing appear once all the questions for that Facing have been answered. For Essentials and Chart-Your-Own-Journey assessments, only the essential and/or included questions must be answered. For Essentials and for Chart-Your-Own-Journey assessments that start from a blank assessment, only the essential and/or included answers contribute to the topic and Facing averages. For Chart-Your-Own-Journey assessments that copy answers from a previous assessment, both the included and copied answers contribute to the topic and Facing averages.
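To make the averaging and "(WIP)" behavior concrete, the following is a minimal sketch (in Python) of how a topic's coverage could be computed. The data representation, the names, and the `topic_coverage` function are illustrative assumptions, not the tool's actual implementation.

```python
from statistics import mean

def topic_coverage(answers):
    """Average the answered questions in a topic.

    `answers` maps question ids to a numeric coverage value, or None when a
    question has not been answered yet. Questions marked "Question not
    applicable" (or not included in the assessment) should be omitted from
    `answers` entirely, so they never affect the average.
    Returns (value, is_wip): the mean of the answered values (or None if
    nothing is answered), plus a flag that is True when some but not all
    included questions are answered -- the "(WIP)" case.
    """
    answered = [v for v in answers.values() if v is not None]
    if not answered:
        return None, False
    is_wip = len(answered) < len(answers)
    return mean(answered), is_wip

# Two of three included questions answered: the average is shown as "(WIP)".
value, wip = topic_coverage({"q1": 0.8, "q2": 0.5, "q3": None})
print(f"{value:.2f}{' (WIP)' if wip else ''}")  # -> 0.65 (WIP)
```

The same averaging step applies one level up: topic coverage values roll up into a Facing average once every included question in that Facing is answered.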
Availability across institution: The Model assessment tool asks organizations to rate the level of availability across their institution, for each capability associated with RCD support. Note that it should not matter how or where support is implemented (a lab, a central campus facility, a national facility, or the cloud). The point is whether researchers have access, and are supported for effective use. Broadly speaking, availability is a rating of the level and especially the breadth of support across the institution.
The scope of an "institution" is primarily intended to be a given University campus (e.g., Clemson University, UC Berkeley, Oklahoma State University - Stillwater). However, the tool can also be used with a narrower scope, such as a given school or college (e.g., a College of Engineering, a Medical School, etc.). The rating levels should be interpreted given the scope for which you are using the assessment tool.
| Availability Level | Description |
|---|---|
| No availability or support[3] | None of this technology, service, or activity is in place, and no work is under way or resources committed for this. |
| Tracking potential use | Staff are assigned but are restricted to monitoring and understanding this technology or service (i.e., there is active work in understanding, generally more than just reading articles or occasional discussions). |
| Planning, piloting, and initial deployment | This technology or service is not yet available (in full production mode) to users, but meaningful planning for availability is under way. A plan for availability is either in development or in place. Staff are investing significant time and resources planning to deploy this technology or service. This includes evaluating options (e.g., specific technologies, vendors, team members, etc.) with an expectation of production availability within a defined timeframe. Evaluation involves at least multiple person-weeks of staff time developing options and a proposal for required funding, and can include piloting, or an initial deployment of the technology or service. |
| Availability/supported for parts of the institution | Full, production-quality technical capability or service is in place, with access available to selected users or to specific areas or groups (e.g., just for a given college or certain departments), but not institution-wide. |
| Availability/supported institution-wide | Full, production-quality technical capability or service is in place, with equitable access institution-wide. |
Table 2 - Levels of Deployment at Institution
The notion of Availability across Institution is easiest to understand for things like researchers' access to consulting, or to compute resources. Where the assessment capabilities focus on aspects like systems management or security practices, Availability/supported institution-wide should be understood to mean that all researchers in the institution can benefit from this, and/or all teams supporting the function within the institution follow or support the practice; if only some have access, or only some teams follow a practice, Availability/supported for parts of the institution is the appropriate response. For capabilities concerning understanding or alignment, this can be understood to mean the existence of the understanding, or the level of alignment.
Service Operating Level: As organizations and services mature, support transitions from ad hoc projects to repeatable and defined services and eventually to managed activities that work to optimize the service operations and functionality. A common model for describing and structuring this activity is known as IT Service Management (ITSM), and is characterized in this context by adopting a process approach towards management, focusing on researcher needs and IT services for researchers rather than IT systems, and stressing continual improvement[4]. The CaRCC Capabilities Model includes this dimension to let organizations assess the robustness, resilience, and sustainability of the support for a given RCD capability.
| Service Operating Level | Description for capabilities (services) | Description for capabilities (practices) |
|---|---|---|
| No existing service/support or awareness[6] | No capability or service support is in place, and no work is under way and/or no resources are committed to support the capability or service. | No support is in place, and/or there is no awareness of this practice; no work is under way and/or no resources are committed to provide or develop this support/awareness. |
| Very limited support and/or at risk | The capability or service only supports basic operations, with no meaningful development or improvement. The capability or service is at real risk of failure, generally due to insufficient resources to anticipate and eliminate operational, security, or other failures. | There is very little support or awareness of this practice, and no meaningful development or improvement. The support/awareness is at risk, generally due to insufficient resources, but perhaps also due to lack of commitment or buy-in. |
| Minimal resources & commitment | The capability or service only supports operations and absolutely necessary enhancements to keep the service operating. There is no significant development or enhancement. While the capability or service is not at risk of failure, any reductions in required funding could reduce the level to Very limited support and/or at risk. | There is support or awareness of this practice at a minimum viable level. There is no significant investment in developing or growing the practice. While support for the practice is not at risk of failure, any reductions in required funding or leadership commitment could reduce the level to Very limited support and/or at risk. |
| Basic sustained service/support & awareness | The capability or service has good support for operations and some ongoing development or enhancement. Staff are able to respond to special customer demands for extensions or enhancements to the capability or service. | There is good support and awareness of this practice, and there is ongoing development or enhancement. Associated staff are able to respond to some special requests, or extensions of the practice. |
| Strong support, awareness, & commitment | The capability or service has a high level of operations and support (24x7 support), maintains a high level of customer response, and/or has sufficient resources for regular, significant new service development. | There is a high level of support and awareness of this practice, with a high level of response to requests, and associated staff have sufficient resources and commitment/buy-in for regular, significant extensions of the practice. |
Table 3 - Service Operating Level Descriptions
While not all capabilities are easily understood as "services", the concept may be mapped to "activity" or "practice", to translate the Service Operating Level to each activity or factor.
Note that many organizations providing RCD support lack the resources to operate at the Strong support, awareness, & commitment level for many of their capabilities. It is not uncommon to support many capabilities at the Minimal resources & commitment, or Basic sustained service/support & awareness levels.
Community engagement and collaboration: The Model assessment tool also asks organizations to rate the level of Community engagement and collaboration for each of the RCD capabilities. This recognizes that research is often collaborative across institutions (and even whole communities), and so the RCD teams supporting researchers should also be engaged outside their own organizations. Engagement and collaboration can range from participation in structured community forums to grant-funded regional projects, and should involve engagement with other institutions and/or communities to develop and share leading practices, get advice and training, etc. Examples of communities include the CaRCC People Network, Campus Champions, the Coalition for Academic Scientific Computation (CASC), Research Data Access & Preservation (RDAP), the United States Research Software Engineer Association (US-RSE), and various open source projects related to RCD, as well as regional network collaborations like The Great Plains Network and the Ecosystem for Research Networking[5].
| Engagement and collaboration Level | Description |
|---|---|
| No engagement with community collaboration[6] | No collaboration is in place, and no work is under way or resources committed to form or engage such a collaboration. (Your team is inward focused.) |
| Exploring a community collaboration | There is interest in collaborating with others, and initial discussions may be underway. This involves active engagement and/or planning, exploratory discussions with potential partners, and represents more than simply reading articles, documents, or occasional meetings. For example, one or more RCD staff are learning about or only rarely participating in a community organization. |
| Engaging a community collaboration | An initial or early collaboration and engagement is underway to explore the issues, or to build relationships. A plan for collaboration is either in development or in place. Staff are investing significant time and resources planning for this collaboration (or engaging with a new community), partners have been identified, and are actively engaged. For example, one or more RCD staff are members and/or participants in a community organization. |
| Supporting a community collaboration | One or more collaborations and/or engagements are ongoing, have support and momentum, and are likely to continue. Partnerships are well-established, have clearly defined goals and outcomes, and have an established model for sustainability. (RCD teams are well connected to established communities and/or involved in formal collaborations.) For example, one or more RCD staff have a contributing role in a community organization, such as membership in a working group, interest group, steering committee, etc. |
| Leading a community collaboration | Your institution is leading/coordinating collaboration and engagement among multiple institutions, and is a recognized leader. For example, one or more RCD staff have a leadership role in a community organization. (You're going above and beyond!) |
Table 4 - Levels of Collaboration across multiple Institutions
Community engagement and collaboration may be more easily understood by some roles/teams than others. For certain activities, the technology or service itself may be part of a collaboration (directly sharing resources among collaborating partners). For others, the staff supporting the service may be part of a community of practice/expertise that develops or shares resources like documentation, training materials, etc. Even for activities that are very locally or inwardly focused (e.g., aspects of data center operation), there can be collaboration on everything from the development and management of the data center, to defining the standards and best practices for staff who perform key functions. Community engagement and collaboration is included as a factor in the model to recognize the importance of working with peers, ensuring awareness of, and even active engagement in, the development of community best practices. Few institutions have the capacity to be collaboration leaders across many capabilities.
Impact on the Computed Coverage: This question has less weight in the computed coverage than the Availability and Service operating level questions. If you select No engagement with community collaboration or Exploring a community collaboration, the computed coverage for the associated capability will be slightly decreased. If you select Engaging a community collaboration, this will have a neutral impact on the computed coverage. If you select Supporting or Leading a community collaboration, the computed coverage for the associated capability will be slightly boosted in recognition of the significance of that effort.
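To illustrate the lighter weighting, here is a hypothetical sketch in which the two heavier lenses are averaged and the engagement answer applies a small additive adjustment. The specific adjustment values, the equal weighting of the first two lenses, and the 0-to-1 normalization are all assumptions for illustration; the Model's actual formula is internal to the tool and may differ.

```python
# Hypothetical per-level adjustments: negative = slight decrease, zero =
# neutral, positive = slight boost. The real weights are internal to the tool.
ENGAGEMENT_ADJUSTMENT = {
    "No engagement with community collaboration": -0.05,
    "Exploring a community collaboration": -0.05,
    "Engaging a community collaboration": 0.0,
    "Supporting a community collaboration": +0.05,
    "Leading a community collaboration": +0.05,
}

def capability_coverage(availability, service_level, engagement):
    """Combine the three lens answers into one coverage value.

    `availability` and `service_level` are assumed to be normalized to the
    0..1 range; `engagement` is one of the Table 4 level names. The two
    heavier lenses are averaged, then nudged by the engagement adjustment,
    and the result is clamped back into the 0..1 range.
    """
    base = (availability + service_level) / 2
    return min(max(base + ENGAGEMENT_ADJUSTMENT[engagement], 0.0), 1.0)

# A capability with mid-to-high availability and service level, whose team
# supports a community collaboration, gets a small boost:
print(capability_coverage(0.75, 0.5, "Supporting a community collaboration"))  # 0.675
```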
Question not applicable: For each capability, you have the option to mark it as Question not applicable. This is intended for use only when support for a given capability makes no sense at your institution. The capability is effectively removed from the model for your assessment and will not contribute to the computed coverage values for the associated topic and Facing. When this value is chosen, the other values for the answer form will be disabled and grayed out to indicate this, although you can add Work notes to annotate your decision to mark this Question not applicable.
Note: This value should only be used in exceptional cases, and not simply when there is little or no support because it has not been deemed important or strategic (e.g., due to resource constraints).
An example of where this may make sense is the Domain support topic for each Facing: if your institution does not have a Medical School, you can mark that as Question not applicable.
Local Priority: As you complete the assessment (or in a review session with your advisory groups) your team can optionally mark each capability as a priority for your organization. This can be used, e.g., to mark items you want to address in your strategic planning. The range of priority values is from 1 to 99 (where 1 is your top priority). Leave this empty or set it to "0" if this is not a priority.
Note that this question and any answers you fill in are for your own use; any Local Priority values have no impact on the calculated coverage. However, you can view the top 10 priorities you marked on the summary assessment page; the full list is available in a view linked from that page.
A good practice is to mark as a priority only those capabilities that you can reasonably expect to devote meaningful resources to improving or expanding in the near term (e.g., in the next few years).
If everything is a priority, nothing is a priority!
Domain support levels: Each Facing includes the special topic Domain support. These topics are optional and do not impact the computed coverage values for the Model. The questions in this topic allow you to assess the level of support across a range of different domains at your institution. Note that these are high-level categories, that Universities and colleges vary in where a given field or discipline might be located, and that there are countless interdisciplinary units. The aim here is a high level of aggregation that roughly corresponds to domains with relatively similar functional needs. While each Facing includes this topic, for some Facings at some institutions there may be less distinction or variation in support across domains.
The summary assessment page averages the domain support levels for each domain across all five Facings. The average domain support will indicate “(WIP)” (work in progress) if the corresponding questions are answered in some but not all of the Facings.
Note that you can mark a domain as Not relevant or applicable, if that domain is not at all represented at your institution. This value should only be used if there are no researchers in that domain at your institution; it should not be used simply because the support is weak or there is a perception that researchers in that domain have little need for RCD resources[7].
Note also that users at institutions that have worked with earlier versions of the CaRCC Capabilities Model assessment tools may notice that support for assessing domains changed significantly in v2.0. One aspect of the new support model is that support for a given domain can be assigned a priority.
Requesting an additional domain: If you would like to include an assessment for a domain that is not represented in this list, please send your suggested addition to capsmodel-help@carcc.org for our review.
Footnotes
1 "Research computing and data" (abbreviated as RCD) includes technology, services, and people supporting the needs of researchers and research, and is intended as a broad, inclusive term covering computing, data, networking, security, and software. The National Science Foundation (NSF) uses the term “cyberinfrastructure,” and others use "Research IT."
2 For more about the roles associated with these areas, see the initial draft of a "Research Computing and Data Professionals Job Elements and Career Guide", available at: https://carcc.org/wp-content/uploads/2019/01/CI-Professionalization-Job-Families-and-Career-Guide.pdf
3 When this value is chosen, the model will default to the corresponding values for the other questions in the form.
4 See also https://en.wikipedia.org/wiki/IT_service_management
5 (ERN, formerly the Eastern Regional Network)
6 The assessment tool will default to this value for a given aspect/factor, when No availability or support is chosen as the level for Availability across institution.
7 Researchers in the Arts and Humanities and in the Social Sciences may have more modest requirements, but RCD resources are having a significant impact on scholarship in those fields. It has been noted that Social Sciences researchers use as much computing today as Physical Sciences researchers used just a decade ago.