Introduction
Higher education in the modern era is an exercise in survival of the fittest. Colleges and universities use third-party agencies such as the Accreditation Board for Engineering and Technology (ABET), the Association of Technology, Management, and Applied Engineering (ATMAE), the Higher Learning Commission (HLC), and the Southern Association of Colleges and Schools (SACS) as mechanisms to assess the validity of an institution’s program(s). A key mechanism in this accreditation process for many agencies is a physical onsite visit to the institution and/or program by an accreditation visiting team. The intent of the visit is to allow for observation and characterization of the institution or program under review, which informs a final accreditation decision. Unfortunately, the coronavirus disease 2019 (COVID-19) pandemic compounded the significance and challenges of institutional and programmatic accreditation by hamstringing the primary instrument used by accreditors to assess institutions and programs: the onsite visit.
Purpose
During the spring of 2020, ATMAE was required to transition ad hoc to conducting accreditation reviews virtually, without full knowledge of the impact that virtual site visits would have on the effectiveness and quality of accreditation reviews against its 2019 Accreditation Standards (Board of Accreditation, 2019). Although the events of spring 2020 were unprecedented, it is reasonable to expect similar modes of evaluation to continue into the near future. Therefore, a critical need existed to evaluate the efficacy of this mode of accreditation review. With this evaluation, ATMAE and other accrediting agencies, higher education institutions, and governing bodies (e.g., the Council for Higher Education Accreditation [CHEA]) will be able to make data-informed decisions regarding virtual site visits or proactively prepare for similar global scenarios. To begin to understand the impacts of virtual site visits on the accreditation review process, this study posed two research questions:
1. What were the perceived effectiveness and quality of virtual accreditation site visits?
2. What lessons were learned that provide preliminary best practices for future virtual accreditation site visits?
Background
The COVID-19 pandemic of 2020 and 2021 put significant pressure on higher education institutions in several ways (Priddy & Pelletier, 2020). One such pressure point was accreditation site visits. Accreditation bodies such as ATMAE include criteria that evaluate “curriculum,” “facilities,” and “institutional support,” suggesting the importance of the onsite visit. To date, limited research has been published evaluating virtual approaches to accreditation visits. Kinzie (2020) suggested that disruptions resulting from COVID-19 are an opportunity for improvement; however, much of the initial discussion has focused on learning outcomes and on equity challenges in student resiliency and effective learning. Although policies and logistical guidance for accreditation visits during the COVID-19 pandemic have been published, most address medical disciplines, including nursing and medical transport. Cobourne and Shellenbarger (2021) discussed adaptations in preparation and planning for a virtual accreditation visit in nursing but did not address the quality of the visits related to accreditation. Frazer (2021), speaking on behalf of the Commission on Accreditation of Medical Transport Systems (CAMTS), advised programs to do the best they could to meet standards related to quality and safety and to serve the patient. Potts et al. (2021), speaking for the Accreditation Council for Graduate Medical Education (ACGME) and its Review Committee for Surgery, characterized the virtual accreditation visit as “forced” on them by the COVID-19 pandemic. They described the tools used in accreditation, disruptions in the accreditation system, and the impact on resulting accreditation decisions. However, surgical, nursing, and medical transport education differ in educational context from the conventional classroom and laboratory learning typical of STEM disciplines.
Eaton (2020), President Emeritus of CHEA, noted that as higher education institutions change and innovate, so must accreditation. Eaton asserted that the most significant shift in accreditation has been how institutions and programs are reviewed and examined. The move from in-person visits to virtual visits through Zoom™ or other video conferencing platforms changed engagement. What is less clear is the impact this change has had. The pandemic has been a force for change in many parts of society, and, unexpectedly, higher education proved to be no exception. Virtual accreditation site visits potentially address many social and parity concerns and broaden the scope of reviews to include alternative providers and alternative credentials. However, limited research has examined how STEM-focused accreditation bodies can leverage this review modality.
Methodology
Research design
This study used a mixed-methods approach to understand the impacts of and lessons learned from ATMAE virtual site visits completed during the spring of 2020. Specifically, qualitative and quantitative survey data were collected from a representative sample of three groups: 1) ATMAE Board of Accreditation (BoA) members, 2) ATMAE visiting team members, and 3) institutional contact persons. Study participants in each of the three survey groups were given 4 weeks to complete the electronic survey, with weekly reminders to participate, as recommended by the Tailored Design Method (Dillman et al., 2014). The BoA group was responsible for reviewing all visiting team reports and institutional self-study reports and making final accreditation decisions (Board of Accreditation, 2022). The visiting team group was responsible for virtually visiting and reviewing each institution’s program(s) and facility(ies) and meeting with administrators, faculty, staff, and students to collect data to inform a team accreditation report submitted to the BoA for final review (Board of Accreditation, 2022). The institutional contact group comprised the point persons at each institution applying for accreditation, who were responsible for developing a program self-study report, coordinating logistics for the accreditation review, and attending accreditation hearings with the BoA (Board of Accreditation, 2022).
Research setting
Traditionally, ATMAE visits take place over 2 days on the campus of the institution submitting for accreditation. The accreditation team interviews faculty, staff, administrators, students, and the advisory board to verify the program’s submitted self-study report; tours laboratories and classrooms; and inspects the equipment used to deliver the program to ensure that proper facilities are used and maintained. The team addresses any shortcomings or concerns and provides feedback to the faculty, staff, and administrators at the end of the second day.
Due to the COVID-19 pandemic, ATMAE moved its spring 2020 site visits to a virtual modality. These virtual site visits largely followed the same structure as the in-person visits, minus in-person observations and conversations (e.g., laboratory tours and informal discussions with students). The institutions under review used various video conferencing platforms and provided the visiting teams with meeting links to complete the required interviews.
Measures
The study characterized effectiveness, quality, advantage/disadvantage, and leverage measures using the survey questions and data types indicated in Table 1. This survey instrument presented specific definitions for effectiveness and quality to survey participants as follows: “Effectiveness was the extent to which planned activities (e.g., accreditation self-study reports and accompanying site visits) are realized and planned results (e.g., program accreditation decisions) are achieved”; “Quality was the inherent perception of the user (e.g., customer’s perspective), which in this case was the accreditation team and/or institution being reviewed.” Measures of advantage/disadvantage were defined during the analysis based on positive (advantage) or negative (disadvantage) themes in the data set. In contrast, measures of leverage were defined based on “Yes/Maybe/No” responses coupled with themes found in the data set.
| Measure | Characterization | Survey Question | Data Type | Research Question |
| --- | --- | --- | --- | --- |
| Effectiveness | Effectiveness of virtual site visit modality at upholding accreditation rigor | [2 and 3]^a | Qualitative | 1. What were the perceived effectiveness and quality of virtual accreditation site visits? |
| | | [2]^b | Quantitative | |
| Quality | Impact and rating of accreditation quality given virtual site visit modality | [4 and 5]^a | Qualitative | |
| | | [6]^b | Quantitative | |
| Advantage/Disadvantage | Perceived advantages/disadvantages of virtual site visit modality | [6 and 7]^a | Qualitative | 2. What lessons were learned that provide preliminary best practices for future virtual accreditation site visits? |
| | | [10 and 11]^b | Qualitative | |
| Leverage | Perceived opportunities to leverage virtual site visit modality in future | [8]^a | Qualitative | |
| | | [12]^b | Qualitative | |
^a BoA survey.
^b Team and institution survey.
BoA, Board of Accreditation.
Data analysis
Quantitative methods were used to provide a descriptive analysis of the demographic characteristics of the survey sample. Specifically, counts, percentages, and medians were calculated to characterize each survey group based on size, response rate, age, and experience level. Age was specifically evaluated due to the nature of the research setting, in which younger individuals were assumed to adapt to the virtual modality more readily than older individuals. Similarly, the experience level of the BoA and team members was another important characteristic that framed the results of the study.
To answer the study’s research questions, a mix of data analysis methods was used to evaluate the qualitative and quantitative data collected on each of the research measures (see Table 1). Specifically, non-parametric exact binomial tests (α = 0.05) were used to analyze differences in proportions of responses to effectiveness and quality measures collected using 5-point Likert scale survey questions, as well as differences in proportions of categorical responses (e.g., “Yes,” “Maybe,” “No”) to qualitative open-ended questions. These tests were not used to answer study hypotheses but to statistically support whether significant perspectives were evident in the data set. Next, a qualitative content analysis was conducted that assigned a sentiment code to each open-ended response. Responses that included affirmative phrasing received positive-sentiment codes (e.g., “We can deal with issue such as virus travel restrictions” was given a “positive” code), whereas responses that included non-affirmative phrasing received negative-sentiment codes. Furthermore, the authors conducted a thematic analysis of the qualitative results to form a rich description of collective meaning for each of the study’s measures, similar to Haughery and Raman (2016). Themes were developed and defined using a grounded theory approach that formed a common language of meaning for each measure across all respondents (Gough et al., 2012). These themes were then defined and reported to support a synthesis of the research results (Borrego et al., 2014). From this mix of analysis methods, answers to the study’s research questions were triangulated.
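For readers interested in the mechanics of the proportion tests described above, the following minimal Python sketch illustrates an exact binomial test using scipy.stats.binomtest. The counts and the one-third null proportion are hypothetical placeholders chosen for a three-option (yes/unknown/no) item; the study’s exact null proportions and alternatives are not fully reported, so this is an illustration rather than a replication.

```python
from scipy.stats import binomtest

# Hypothetical sentiment counts (placeholders, not the study's raw data):
# 7 of 11 respondents answered "yes" on a three-option item
# (yes/unknown/no), so the null proportion is taken here as 1/3.
n_yes, n_total = 7, 11

# Exact (non-parametric) binomial test of H0: P(yes) = 1/3
# against the one-sided alternative P(yes) > 1/3.
result = binomtest(k=n_yes, n=n_total, p=1/3, alternative="greater")
print(f"p-value = {result.pvalue:.3f}")  # reject H0 at alpha = 0.05 if p < 0.05
```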
Findings
Descriptive
This study collected self-reported survey responses on perceptions of effectiveness, quality, advantage/disadvantage, and leverage from the spring 2020 ATMAE accreditation virtual site visits. As indicated by Table 2, the overall response rate of the study was 41% (n = 35 responses from a survey population of N = 86). Not surprisingly, the BoA group had the highest response rate (65%), possibly because its members were the most invested in accreditation efforts, having already volunteered their time to serve on the board. The visiting team group also had a response rate above 50%. The institutional contact group had the lowest response rate (25%), although the timing of data collection may have helped: the survey was distributed before final decisions had been made by the BoA, when institutional contacts may have been more willing to comply with accreditation-related requests (i.e., responding to the survey).
Group | Population (N = 86) | Sample (n = 35) | Response Rate |
---|---|---|---|
BoA Members | 17 | 11 | 65% |
Visiting Team Members | 21 | 12 | 57% |
Institutional Contacts | 48 | 12 | 25% |
Total | 86 | 35 | 41% |
BoA, Board of Accreditation.
Table 3 shows respondents’ ages (years) and the types of institution employing them. Respondent age across the entire data set was consistent with a normal distribution, based on a Shapiro-Wilk test of normality (p = 0.8304), with the mean falling in the 50–59 year bracket. Not surprisingly, most respondents (n = 24) were employed at 4-year institutions offering Bachelor of Science and/or Master of Science degree programs. It is important to note that n = 4 respondents did not provide age data and n = 7 did not provide employment data.
| Characteristic | BoA (n = 11) | Institution (n = 12) | Team (n = 12) | Total |
| --- | --- | --- | --- | --- |
| Age (Yr) | | | | |
| 30–39 | 0 | 0 | 2 | 2 |
| 40–49 | 1 | 3 | 2 | 6 |
| 50–59 | 3 | 4 | 1 | 8 |
| 60–69 | 3 | 4 | 3 | 10 |
| 70–79 | 2 | 0 | 2 | 4 |
| Prefer not to answer | 1 | 0 | 0 | 1 |
| Total Count (Age) | 10 | 11 | 10 | 31 |
| Institution Type | | | | |
| 2-Year (AS/AAS) | 0 | 2 | 1 | 3 |
| 4-Year (BS/MS) | 7 | 8 | 9 | 24 |
| Other | 0 | 1 | 0 | 1 |
| Total Count (Institution Type) | 7 | 11 | 10 | 28 |
AAS, Associate of Applied Science; AS, Associate of Science; BoA, Board of Accreditation; BS, Bachelor of Science; MS, Master of Science.
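A minimal sketch of the Shapiro-Wilk normality check reported above follows. Because ages were collected in decade bins, the sketch substitutes bin midpoints for raw ages; this midpoint coding is an assumption about the authors’ unreported coding, so it will not necessarily reproduce the reported p = 0.8304.

```python
from scipy.stats import shapiro

# Hypothetical reconstruction of the age data: expand the binned counts
# (Table 3 totals, excluding the one "prefer not to answer" response)
# into per-respondent values using decade-bin midpoints. Midpoint coding
# is an assumption; the authors' exact coding is unreported.
bin_counts = {34.5: 2, 44.5: 6, 54.5: 8, 64.5: 10, 74.5: 4}

ages = [midpoint for midpoint, count in bin_counts.items()
        for _ in range(count)]

stat, p = shapiro(ages)  # H0: the sample is drawn from a normal distribution
print(f"W = {stat:.4f}, p = {p:.4f}")  # p > 0.05 -> fail to reject normality
```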
Respondents’ experience levels related to ATMAE accreditation were collected and are illustrated in Table 4. These data allowed for a fuller interpretation of the results. BoA member experience was measured in years served on the Board, accreditation team member experience as the number of appointments as a team chair or team member, and institutional contact experience as the number of accreditation self-studies prepared. Among respondents, the median length of BoA service was 2.0 years, and the median number of visiting team appointments was 1.5 (3.0 as a team member; 0.0 as a team chair), illustrating a relatively modest level of accreditation experience for these two groups. For institutional contacts, the median number of self-studies prepared was 3.0, indicating more experience for this survey group.
| Board of Accreditation | | Team | | | | Institutional Contact | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Service (Yr) | Count | Appointments | Member | Chair | Count | Self-Studies Prepared | Count |
| 0–4 | 6 | 0–4 | 7 | 10 | 17 | 1 | 4 |
| 5–9 | 2 | 5–9 | 3 | 1 | 4 | 2 | 6 |
| 10–14 | 0 | 10–14 | 1 | 0 | 1 | 3 | 6 |
| 15+ | 2 | 15+ | 1 | 1 | 2 | 4 | 8 |
| Median | 2.0^a | Median | 3.0 | 0.0 | 1.5 | Median | 3.0 |
^a Data collected as categorical; therefore, 15+ responses were coded as 15.
Effectiveness and quality
The first research question asked what the perceived effectiveness and quality of virtual accreditation site visits were. To help answer this question, the survey instrument asked the BoA members whether virtual visits were effective at upholding the rigor of accreditation reviews. Categorizing BoA members’ sentiments, n = 3 (27%) felt that virtual site visits were not effective (“no”), n = 1 (9%) was undecided (“unknown”), and n = 7 (64%) felt positive (“yes”), as indicated in Table 5. Furthermore, a thematic analysis of the rationales provided for each sentiment revealed a strong theme of efficacy in BoA members’ open-ended responses, as illustrated in Table 6. This theme of efficacy was defined as “the intended accreditation results were met, but there was a lack of in-person interaction that posed challenges to the review process.” While the theme of efficacy was drawn from the entire pool of responses, regardless of sentiment, a statistically significant majority of responses (p = 0.049) was positive toward the effectiveness of virtual site visits at upholding the rigor of accreditation review. These results illustrate the confidence that BoA members appeared to have in the virtual modality, albeit tempered by the void of physical interaction that was reported to detract from the review process.
Sentiment | Count | % |
---|---|---|
No | 3 | 27 |
Unknown | 1 | 9 |
Yes^a | 7 | 64 |
Total | 11 | 100 |
^a H0: Proportions of Yes = others (p = 0.049).
BoA, Board of Accreditation; H0, null hypothesis.
| Theme | Definition | Code | Per Code | Yes | No | Unknown | Theme Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Efficacy | Intended results met, but lack of in-person interaction had its challenges | #effective | 5 | 5 | 0 | 0 | 9 |
| | | #physical | 4 | 1 | 3 | 0 | |
Theme Total indicates the total number of responses used to build theme, and Frequency indicates specific counts of sentiments observed for each code.
BoA, Board of Accreditation.
To further develop the assessment of effectiveness, team members and institutional contacts were asked to rate the virtual modality against each accreditation standard. All standards except two were rated as “effective” to “very effective” (all p < 0.050; n = 24) on a 5-point Likert scale, as indicated in Table 7. These ratings did not differ between team member and institutional contact responses (per-standard comparisons: all p > 0.050). Not surprisingly, Standard 11 (Facilities, Equipment & Technical Support) was not rated “effective” to “very effective” significantly more often than “very ineffective” to “neutral” (p = 0.054). The second standard to receive a higher proportion of “very ineffective” to “neutral” ratings was Standard 17 (Advisory Committee Approval of Overall Program) (p = 0.114). This result is more surprising, because meeting with an institution’s advisory committee was still technically possible for visiting teams, although it was logistically more challenging given the newness of the virtual video conference format during the spring of 2020.
Standard | Description | p-Value^a |
---|---|---|
1 | Preparation of Self-Study | <0.001 |
2 | Program Definition | <0.001 |
3 | Program Title & Mission | <0.001 |
4 | Program Goals | <0.001 |
5 | Program Learning Outcomes | <0.001 |
6 | Program Structure & Course Sequence | <0.001 |
7 | Student Admission & Retention | <0.001 |
8 | Transfer Course Work | <0.001 |
9 | Student Enrollment | <0.001 |
10 | Administrative Support & Technical Support | 0.008 |
11 | Facilities, Equipment, & Technical Support | 0.054 |
12 | Program/Option Operation | 0.008 |
13 | Graduate Satisfaction With Program/Option | 0.008 |
14 | Employment of Graduates | 0.008 |
15 | Job Advancement of Graduates | 0.002 |
16 | Employer Satisfaction With Job Performance | 0.022 |
17 | Advisory Committee Approval of Overall Program | 0.114 |
18 | Outcome Measures Used to Improve Program | 0.008 |
19 | Program Responsibility to Provide Info to Public | <0.001 |
^a H0: Proportion of effective & very effective responses ≤ all other responses.
H0, null hypothesis.
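As a hedged illustration of the per-standard analysis behind Table 7, the sketch below collapses a hypothetical 5-point Likert item into the two proportions compared above and applies a one-sided exact binomial test; the ratings are invented for illustration and are not the study’s data.

```python
from scipy.stats import binomtest

# Hypothetical ratings for one standard on the 5-point scale
# (1 = very ineffective ... 5 = very effective); illustrative only.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 4,
           3, 4, 5, 4, 4, 5, 3, 4, 5, 4, 4, 5]

# Collapse into the two proportions compared in Table 7:
# "effective"/"very effective" (4-5) vs. "very ineffective"-"neutral" (1-3).
k = sum(1 for r in ratings if r >= 4)

# H0: proportion of effective & very effective responses <= all others.
result = binomtest(k=k, n=len(ratings), p=0.5, alternative="greater")
print(f"{k}/{len(ratings)} positive ratings, p = {result.pvalue:.3f}")
```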
Based on the data collected from BoA members, team members, and institutional contacts, it was apparent that spring 2020 virtual site visits were perceived to be effective at upholding the rigor of accreditation review. To assess the impact on quality, BoA members’ survey data were evaluated first and are presented in Table 8. It was found that n = 3 (27%) felt the impact on quality was negative, n = 6 (55%) felt it was neutral, and n = 2 (18%) felt it was positive. Statistically, there was no difference in the proportion of negative versus not-negative sentiment (p = 0.273) toward an impact on quality. These results show no dominant sentiment and, more importantly, no distinctly negative sentiment.
Sentiment | Count | % |
---|---|---|
Negative | 3 | 27 |
Neutral^a | 6 | 55 |
Positive | 2 | 18 |
Total | 11 | 100 |
^a H0: Proportions of Negative = Not Negative (p = 0.273).
BoA, Board of Accreditation; H0, null hypothesis.
To further unpack BoA members’ rationales for their quality sentiments, the study used thematic analysis, which identified a theme of outcomes across all negative, neutral, and positive responses. As illustrated in Table 9, this theme was defined as “some standards were challenging, some were easy, and some were the same to assess using a virtual modality.” Even though BoA members did not agree in their sentiment regarding the impact of the virtual modality on quality, there was a strong focus on outcomes among responses, pointing to a status quo impact on the quality of the review process. This inference is supported by the statistically nonsignificant difference in proportions among sentiment categories (p = 0.273), as shown in Table 8.
| Theme | Definition | Code | Per Code | Positive | Negative | Neutral | Theme Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Outcome | Some standards were challenging, some easy, and some the same to assess using the virtual modality | #standards | 4 | 0 | 2 | 2 | 9 |
| | | #methods | 2 | 1 | 0 | 1 | |
| | | #status_quo | 2 | 0 | 0 | 2 | |
| | | #limitations | 1 | 0 | 0 | 1 | |
Theme Total indicates the total number of responses used to build theme, and Frequency indicates specific counts of sentiments observed for each code.
BoA, Board of Accreditation.
To offer a fuller perspective, the survey asked team members and institutional contacts to rate the perceived impact on quality for each standard. As Table 10 indicates, no statistical difference was found between the proportion of “high quality” to “very high quality” and “neutral” to “very low quality” ratings for all standards except three. The three standards that did exhibit statistically different ratings were Standard 1 (Preparation of Self-Study; p = 0.022), Standard 2 (Program Definition; p = 0.022), and Standard 4 (Program Goals; p = 0.022). These differences are not surprising, as these standards inherently lend themselves to a virtual review modality. Moreover, no statistical differences were found between team member and institutional contact ratings per standard (all p > 0.050; n = 24).
Standard | Description | p-Value^a |
---|---|---|
1 | Preparation of Self-Study | 0.022 |
2 | Program Definition | 0.022 |
3 | Program Title & Mission | 0.055 |
4 | Program Goals | 0.022 |
5 | Program Learning Outcomes | 0.121 |
6 | Program Structure & Course Sequence | 0.055 |
7 | Student Admission & Retention | 0.228 |
8 | Transfer Course Work | 0.121 |
9 | Student Enrollment | 0.228 |
10 | Administrative Support & Technical Support | 0.710 |
11 | Facilities, Equipment & Technical Support | 0.928 |
12 | Program/Option Operation | 0.546 |
13 | Graduate Satisfaction With Program/Option | 0.842 |
14 | Employment of Graduates | 0.376 |
15 | Job Advancement of Graduates | 0.546 |
16 | Employer Satisfaction With Job Performance | 0.546 |
17 | Advisory Committee Approval of Overall Program | 0.546 |
18 | Outcome Measures Used to Improve Program | 0.376 |
19 | Program Responsibility to Provide Info to Public | 0.228 |
^a H0: Proportion of high-quality & very high-quality responses ≤ all other responses.
H0, null hypothesis.
Lessons learned and preliminary best practices
To help answer the second research question, qualitative data were evaluated to identify lessons learned and preliminary best practices for future virtual accreditation site visits. The first phase of this analysis evaluated perceived advantages of the virtual modality. A strong theme of expenditure was observed in BoA member, team member, and institutional contact responses. As illustrated in Table 11, this theme of expenditure was defined as “the financial costs were less, travel logistics were less, technical logistics were more, and convenience/time commitment was reduced.” All codes comprising this theme, except #technical_logistics, spoke to a sense of reduced resource expenditure.
| Theme | Definition | Code | Per Code | Theme Total |
| --- | --- | --- | --- | --- |
| Expenditure | Financial cost less, travel logistics less, technical logistics more, convenience/time commitment reduced | #cost | 13 | 29 |
| | | #convenience | 8 | |
| | | #time_commitment | 5 | |
| | | #logistics | 2 | |
| | | #technical_logistics | 1 | |
Theme Total indicates the total number of responses used to build theme, and Frequency indicates specific counts of sentiments observed for each code.
Two themes emerged when disadvantages of virtual site visits reported by BoA members, team members, and institutional contacts were analyzed. As illustrated in Table 12, the first and strongest theme was interaction, defined as “limited first-hand observation, limited in-person interaction, or limited ability to connect with students, faculty, and advisory board members.” The second theme observed in the data set was protocol and was defined as “some standards were harder to evaluate; the process was more difficult from a technical perspective; the method of virtual visits can be refined.”
| Theme | Definition | Code | Per Code | Theme Total |
| --- | --- | --- | --- | --- |
| Interaction | Limited first-hand observation, limited in-person interaction, or limited ability to connect with students, faculty, and advisory board members | #physical | 14 | 22 |
| | | #engagement | 5 | |
| | | #networking | 3 | |
| Protocol | Some standards are harder to evaluate, process is more difficult technically, and methods can be refined | #standards | 5 | 10 |
| | | #technical_logistics | 3 | |
| | | #methods | 2 | |
Theme Total indicates the total number of responses used to build theme, and Frequency indicates specific counts of sentiments observed for each code.
To understand best practices for future use of virtual site visits in accreditation reviews, all three survey groups were asked whether a virtual modality should be leveraged in the future. Categorizing the sentiments of the n = 31 responses, n = 12 (39%) felt “yes,” this modality should be leveraged in the future, n = 9 (29%) felt “maybe,” and n = 10 (32%) felt “no,” as illustrated in Table 13. A statistical test of proportions found no difference between “yes” and not-“yes” sentiments (p = 0.577). In non-statistical terms, this indicated no consensus among respondents about whether to leverage virtual site visits in the future. However, statistically, there are grounds to argue that there was no overwhelming negative (i.e., “no”) sentiment in the data against leveraging this modality in the future. Therefore, to further understand whether virtual site visits should become a best-practice review modality, the study evaluated the open-ended rationales underpinning respondents’ sentiments.
Sentiment | Count | % |
---|---|---|
Yes^a | 12 | 39 |
Maybe | 9 | 29 |
No | 10 | 32 |
Total | 31 | 100 |
^a H0: Proportions of Yes = Not Yes (p = 0.577).
H0, null hypothesis.
In analyzing participants’ open-ended responses explaining why they felt virtual site visits should or should not be leveraged in the future, three distinct themes emerged. As indicated in Table 14, the strongest theme across all sentiment categories was impact, with n = 14 total instances of related codes. From the data, impact was defined as “how reviews are conducted affects outcomes: virtual visits can be effective and accomplish the goals of a review, but they can also be ineffective.” Not surprisingly, this definition echoes the non-definitive sentiment observed in Table 13. The second theme observed was expenditure, defined as “virtual site visits require less time and cost but increase technical difficulties.” The last theme was interaction, defined as “less first-hand observation and in-person interaction was a negative; however, the virtual modality has the potential to increase capacity and engagement opportunities.” When the data were split by the sentiment categories in Table 13 and the per-sentiment code counts in Table 14 were evaluated, #cost was the most commonly occurring code (n = 3) in the “yes” category, #effectiveness was most common (n = 3) in the “no” category, and #methods occurred most often (n = 3) in the “maybe” category. These results begin to explain the “why” behind the mixed sentiment toward leveraging virtual site visits for future accreditation reviews.
| Theme | Definition | Code | Per Code | Yes | No | Maybe | Theme Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Impact | The way that reviews are conducted impacts outcomes: virtual visits can be effective and accomplish goals but can also not be effective | #methods | 7 | 2 | 2 | 3 | 14 |
| | | #effectiveness | 5 | 1 | 3 | 1 | |
| | | #accomplish | 2 | 2 | 0 | 0 | |
| Expenditure | Less time and cost but at increased technical difficulty | #cost | 4 | 3 | 0 | 1 | 9 |
| | | #time_commitment | 3 | 1 | 2 | 0 | |
| | | #technical_logistics | 2 | 1 | 0 | 1 | |
| Interaction | Less first-hand observation and in-person interaction (negative), but virtual modality has potential to increase capacity and engagement opportunities | #physical | 4 | 1 | 2 | 1 | 8 |
| | | #engagement | 2 | 1 | 0 | 1 | |
| | | #capacity | 2 | 0 | 0 | 2 | |
Theme Total indicates the total number of responses used to build theme, and Frequency indicates specific counts of sentiments observed for each code.
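As a minimal sketch of how the per-sentiment code counts reported in Tables 13 and 14 could be tallied programmatically, the following Python fragment cross-tabulates hypothetical (sentiment, code) records; the rows are illustrative only and do not reproduce the study’s coded data.

```python
from collections import Counter, defaultdict

# Hypothetical coded responses: (sentiment, code) pairs as they might be
# exported from a qualitative coding tool. These rows are illustrative only.
coded_responses = [
    ("yes", "#cost"), ("yes", "#cost"), ("yes", "#cost"),
    ("no", "#effectiveness"), ("no", "#effectiveness"),
    ("maybe", "#methods"),
]

# Cross-tabulate code frequency by sentiment (cf. the Table 14 layout).
crosstab = defaultdict(Counter)
for sentiment, code in coded_responses:
    crosstab[code][sentiment] += 1

for code, counts in crosstab.items():
    total = sum(counts.values())  # "Per Code" frequency
    print(f"{code}: per code = {total}, yes = {counts['yes']}, "
          f"no = {counts['no']}, maybe = {counts['maybe']}")
```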
Discussion
Based on the data collected from BoA members, team members, and institutional contacts, it was apparent that spring 2020 virtual site visits were perceived to be effective at upholding the rigor of accreditation review. By triangulating results from BoA, team member, and institutional contact surveys, there appeared to be no marked impact on quality. While BoA results indicated limitations of the virtual modality, there was an overwhelming sense of status quo outcomes. Moreover, team and institutional contact results indicated that, for 16 of the 19 standards, there was no change in quality of the review process during the virtual site visits. Taken with effectiveness results, respondents indicated that the virtual site visits of spring 2020 were effective at upholding rigor while also not negatively impacting the quality of the accreditation review process.
Taking survey respondents’ rationales for advantages, disadvantages, and leveraging as a whole (i.e., the data in Tables 11–14), a theme of cautious embrace emerged. Whereas a sense of reduced resource expenditure was perceived to be an advantage of the virtual modality, limited interaction and protocol difficulties were reported as disadvantages. Although these themes of expenditure and interaction were echoed in reported rationales for leveraging, there was no clear sentiment regarding whether the virtual modality should be used in the future. However, this study did reveal what advantages and disadvantages were observed from the virtual site visits of spring 2020. Although these answers offer lessons learned, it remains unclear how these advantages and disadvantages should be leveraged for future accreditation reviews or what best practices should be offered for future virtual site visits. Therefore, further research is needed to understand definitively how to leverage virtual site visits for future accreditation reviews.
Implications
The study revealed several results that hint at how future virtual accreditation visits may be improved. Among the inferences noted is that virtual accreditation worked better than expected, despite a strong preference among about one-third of respondents for face-to-face visits. Virtual accreditation was overwhelmingly perceived as effective in assessing program rigor across all standards except two. Regarding the effectiveness of virtual site visits, Standard 11 (Facilities, Equipment & Technical Support) and Standard 17 (Advisory Committee Approval of Overall Program) were the only standards whose effectiveness ratings were not clearly positive. Non-negative perceptions for all other standards indicated that they are candidates for future virtual assessment. The quality of the accreditation review process was not perceived to be affected by the virtual visit, with the assumption being that the ATMAE standards addressed by self-studies provide a better indicator of program excellence than personal contact by the visiting team.
Some level of a virtual accreditation process could replace traditional accreditation visits. This was supported by the responses regarding the quality impact of the virtual visits. However, a strong preference remains for validating Standard 11 and Standard 17 through some form of site visitation. The type and availability of facilities and equipment and the level of technical support are crucial components for programs that promote project-based learning and hands-on experiences. Furthermore, the support of industry and the advice of working alumni continue to be hallmarks of an ATMAE program’s relevancy. An advisory board’s level of commitment and support is not easily determined via phone call or teleconference.
Virtual accreditation was efficient in eliminating non-value-added activities. The monetary cost and physical toll of travel encourage the increased use of virtual methods. It is becoming increasingly difficult to justify travel and face-to-face communication, particularly amid ever-changing requirements for masks and social distancing. Virtual accreditation, if organized well, has the potential to reduce institutional site preparation, participant stress, and overall time spent.
Socializing and face-to-face communication continue to be highly valued as means of assessing and validating certain aspects of programs. As noted previously, a site visit seems to be perceived as highly valuable for properly assessing institutional program resources and support. The conventional wisdom that trust is built on eye contact and a handshake still holds for many participants. The sudden need for virtual accreditation methods was stressful for individuals who value face-to-face communication. However, the more the virtual experience can be enhanced to replicate the personal touch, the more likely electronic technology will be accepted for accreditation.
Limitations
Several factors limit the validity and generalizability of this study. First, at the time of this publication, very few sources of literature existed related to factors influencing the effectiveness, quality, or lessons learned from in-person or virtual accreditation site visits. Even so, the authors made every effort to review all available literature to inform this study. To that end, the literature review started with a Web of Science search to locate published academic articles. A second index, Dissertations and Theses Global, was used to locate dissertations and theses on the topic, followed by a search of Google Scholar to locate accreditation reports from non-academic sources. In all searches, keyword search terms included “accreditation,” “accreditation visit,” “virtual accreditation,” “site visit,” “assessment,” and “engineering accreditation.” Second, the sample size was too small to conclude with certainty that virtual accreditation can completely replace the traditional site visit, although aspects of the virtual accreditation process are already in place and will continue to operate in this fashion. Third, the measures of effectiveness and quality are highly subjective, even with definitions provided. Respondent bias was likely present; individual perspectives about the extent to which planned activities were realized and results were achieved varied greatly, and each accreditation team and institution was unique. Even though the study considered results from a limited period and a small survey sample, every effort was made to rigorously and judiciously evaluate the data to begin to understand the impact of virtual site visits on accreditation review effectiveness, quality, and best practices.
Future evaluation
Opportunities for future evaluation exist and should be longitudinal in nature to capture the rhythms of accreditation cycles for institutions not previously accredited or for those seeking re-accreditation. Surveying the interpersonal nuances and cooperation between accreditation visiting teams and institutions not previously studied will add depth to the analysis. It will also increase the sample size and the generalizability of the findings over a wider range of institutions and visiting teams.
Future research could include additional questions that explore themes beyond accreditation effectiveness and quality. An ongoing concern is the consistency of visiting teams’ interpretation of accreditation standards and the ways in which institutions demonstrate compliance with those standards. Virtual methods of accreditation could offer opportunities for greater process oversight by directly involving accreditation board members or their proxies.
Conclusion
This study collected data from the spring of 2020 to assess the effectiveness and quality of virtual accreditation site visits and develop lessons learned and preliminary best practices for future virtual accreditation reviews. Results indicated that it is possible to accredit an institution without the need for an onsite visit. Furthermore, the findings provided an initial roadmap for further refinement and improvement of the ATMAE accreditation process. The primary takeaway from this research is that virtual site visits are a viable means of achieving accreditation, provided the self-study contains the information needed to assess program quality and rigor.
Acknowledgements
The authors thank the ATMAE staff, Board of Accreditation, visiting team members, institutional contacts, and journal reviewers for generously supporting this study through their time, input, and perceptions.
References
Board of Accreditation. (2019). 2019 Accreditation Handbook. Association of Technology, Management, and Applied Engineering.
Board of Accreditation. (2022). Accreditation Policies and Procedures Document. Association of Technology, Management, and Applied Engineering.
Borrego, M., Foster, M. J., & Froyd, J. E. (2014). Systematic literature reviews in engineering education and other developing interdisciplinary fields. Journal of Engineering Education, 103(1), 45–76. doi: https://doi.org/10.1002/jee.20038
Cobourne, K., & Shellenbarger, T. (2021). Virtual site visits: A new approach to nursing accreditation. Teaching and Learning in Nursing, 16(2), 162–165. doi: https://doi.org/10.1016/j.teln.2020.11.001
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method. John Wiley & Sons.
Eaton, J. S. (2020). Change and innovation in quality assurance: Accreditation and the opportunity of COVID-19. Change: The Magazine of Higher Learning, 53(1), 50–54. doi: https://doi.org/10.1080/00091383.2021.1850130
Frazer, E. (2021). Coronavirus disease 2019 update. Air Medical Journal, 40(2), 91. doi: https://doi.org/10.1016/j.amj.2020.12.010
Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews. SAGE.
Haughery, J. R., & Raman, D. R. (2016). Influences of mechatronics on student engagement in fundamental engineering courses: A systematic review. International Journal of Engineering Education, 32(5), 2134–2150.
Kinzie, J. (2020). How to reorient assessment and accreditation in the time of COVID-19 disruption. Assessment Update, 32(4), 4–5. doi: https://doi.org/10.1002/au.30219
Potts, J. R., 3rd, Lipsett, P. A., & Matthews, J. B. (2021). The challenges of program accreditation decisions in 2021 for the ACGME Review Committee for Surgery. Journal of Surgical Education, 78(2), 394–399. doi: https://doi.org/10.1016/j.jsurg.2020.08.023
Priddy, L., & Pelletier, S. G. (2020). Trends in accreditation: How will accreditors once again become relevant for higher education? Planning for Higher Education, 49(1), 26–34.