In the Nick of Time: A Pan-Canadian Examination of Extended Testing Time Accommodation in Post-secondary Schools

Laura Sokal, University of Winnipeg

lj [dot] sokal [at] uwinnipeg.ca

Alina Wilson, University of Winnipeg

Abstract

Extended testing time accommodation (ETTA) is the most common accommodation assigned to post-secondary students with disabilities. We examined data on the processes of providing and monitoring the use of ETTA at 48 Canadian post-secondary institutions that, together, provided accommodations to over 43,000 students with disabilities across every province in Canada. Findings indicated that students with learning disabilities were the most likely to be allocated ETTA. The most common duration of ETTA by far was 150% of the standard testing time provided to other students, and it was typically assigned in over 70% of cases, despite there being no valid empirical evidence to support this practice. In almost half of the institutions following this practice, this duration of ETTA was typically awarded upon intake based on guidelines, policies, or the belief that research exists to support this procedure, and in over 40% of these institutions there were no procedures in place for monitoring and modifying ETTA allowances once assigned. There was evidence of some exemplary practices in terms of the decision-making processes that went into determining and monitoring individual students’ ETTA durations. However, concerns were raised in some cases by the rationales for providing specific durations of ETTA, and by the lack of monitoring that together comprised ‘blanket’ accommodations.

Keywords

Acknowledgements

We would like to acknowledge with gratitude the student accessibility service professionals who generously participated in this research and who are our partners in our quest to provide high-quality, evidence-based, and fair services to all students.


Across Canada, students with disabilities are being provided with accommodations on course-based tests in an attempt to ensure their fair treatment in post-secondary settings. The provision of accommodations is required under law in both Canada (Canadian Human Rights Act, 1985) and the United States (Americans with Disabilities Act, 1990). Therefore, it is alarming to discover that the accommodations being provided may or may not address these students’ individual learning needs and may not, in fact, be facilitating fairness at all. Of particular interest is extended testing time accommodation (ETTA), which is the most common accommodation provided to students with disabilities (Lovett, 2010; Sireci, Scarpati, & Li, 2005; Stretch & Osborne, 2005). The use of ETTA as a ‘blanket accommodation’ has been questioned in terms of its appropriateness (Brinckerhoff, Shaw, & McGuire, 1992; Lovett, 2011) in providing valid and reliable test results (Lovett, 2010), and in turn its fairness to students with disabilities and those without (Sokal, 2016; Sokal & Vermette, in press). The purpose of the current study is to determine the processes by which ETTA durations are assigned and monitored in Canadian post-secondary schools.

The Rationale for ETTA

The rationale for testing accommodations is persuasive. Accommodations are defined as “a change in assessment materials or procedures that address[es] aspects of students’ disabilities that may interfere with the valid assessment of their knowledge and skills” (Thurlow & Bolt, 2001, as cited in Stretch & Osborne, 2005, p. 1). They are intended to ensure that the assessment procedures used are valid in actually measuring students’ abilities related to the tested materials and skills, rather than their disability. In this way, the rigor of the assessment remains constant, as the resulting grades should be meaningfully comparable to the grades of students without disabilities who complete the same assessment under standard conditions (Lovett & Lewandowski, 2015). To clarify, imagine that a test has 30 questions and that standard testing conditions allow students 60 minutes in which to complete the test. A student with typical abilities is expected, when working to maximum potential, to have the necessary time to read and respond to all 30 questions. A student with a disability related to reading fluency may have the time to read and respond to only 20 of the questions, resulting in a grade out of a possible 20 questions, rather than 30 questions. An appropriate allotment of ETTA would allow the student with the reading disability the necessary time to access and respond to all 30 questions, resulting in a grade that is more reasonably and fairly comparable to the grade of the student without disabilities. Furthermore, the test designer could be more confident that the resulting grades under ETTA conditions are truly measuring how well all students have mastered the tested content, rather than measuring reading speed.
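The arithmetic in the example above can be sketched with a short, purely illustrative calculation. The function name and the linear pacing model (a constant rate of reading and responding) are our own simplifying assumptions, not drawn from the accommodation literature:

```python
# Illustrative sketch of the worked example above: a 30-question test,
# 60 minutes of standard time, and a student whose reading fluency
# allows them to reach only 20 of the questions in that time.
# Assumption: the student works at a constant pace (linear model).

def minutes_needed(total_questions, questions_reached, standard_minutes):
    """Time required to reach every question, assuming a constant pace."""
    pace = standard_minutes / questions_reached   # minutes per question
    return pace * total_questions

needed = minutes_needed(total_questions=30, questions_reached=20, standard_minutes=60)
print(needed)        # 90.0 minutes
print(needed / 60)   # 1.5, i.e., 150% of standard time under this model
```

Under this admittedly simplified model, the example happens to yield 150% of standard time; a student who reached 24 of 30 questions would instead need only 125%, which underscores how sensitive an ‘appropriate’ duration is to the individual student.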

Theoretical Basis for ETTA

Researchers have proposed that in situations like the one we have described, the ETTA provided to the student with the disability serves to “level the playing field” (Sireci et al., 2005, p. 457). This interpretation is supported by the differential boost hypothesis (Fuchs & Fuchs, 2001; Sireci et al., 2005). This hypothesis proposed that although all students probably perform better on tests when provided with more time, students with disabilities provided with ETTA demonstrate significantly greater gains than students without disabilities provided with ETTA, suggesting that ETTA effectively addresses disability needs.

The notion of leveling the playing field through a differential boost can be justified from a variety of perspectives related to conceptions of disability. From the perspective of the medical model, ETTA could be perceived as a ‘compensatory’ measure to ‘supplement’ for static ‘deficits’ demonstrated by students with disabilities (Gilson & DePoy, 2002). Alternatively, from the perspective of a constructionist model of disability (Creswell, 2007), ETTA could be perceived as a re-calibration of opportunity to adjust for the environment’s inability or unwillingness to adapt to differences within the human condition (Hahn, 1994). From this perspective, able-bodied people hold un-earned dominance over people with disabilities (Kattari, 2015), and society has constructed a social hierarchy to maintain that dominance (Sidanius, Pratto, van Laar, & Levin, 2004; Walls, 2005). ETTA is therefore seen as a means to challenge a constructed perception of disability evidenced by “impairment with the social, attitudinal, architectural, medical, economic, and political environment” (Zola, 1989, p. 401), where the dominant culture maintains its superordinate position.

Limitations of the ETTA Research Base on Course-based Decision-making

Despite the intuitive attraction of ETTA as a way to meet moral and legal requirements to provide fair testing, the research base that supports ETTA as a fair accommodation suffers from many limitations. These limitations are magnified when one attempts to use the existing research base to determine the appropriate duration of ETTA in course-based assessments of post-secondary students with disabilities.

High-stakes tests. First, much of the research conducted about ETTA has focused on high-stakes tests such as American College Testing (ACT) and Scholastic Assessment Test (SAT) or other standardized tests (Elliott & Marquart, 2004). While this focus is useful for those who are interested in ensuring the fairness of exams as a means for students with and without disabilities to earn acceptance into university, it is unclear whether the findings of this research can be generalized when making decisions about the allocation and duration of ETTA for course-based assessments at the post-secondary level. Research has suggested that the type of test and the intended use of the testing results should be weighed when ETTA is being considered (Brinckerhoff et al., 1992; Lovett & Lewandowski, 2015). This observation supports caution in extrapolating the findings of high-stakes test research about ETTA durations to use in course-based decision-making.

Generalizability of samples. A second limitation relates to the samples studied. Many studies have been conducted on middle school and high school students (for examples, see Brown, Reichel, & Quinlan, 2011; Elliott & Marquart, 2004; Lewandowski, Lovett, Parolin, Gordon, & Codding, 2007), and it is unclear whether the findings of these studies can be generalized to college populations. Thompson, Blount, and Thurlow (2002) published a review of 46 empirical studies related to test accommodations, and only three of the studies were conducted with post-secondary samples. Caution should be taken when extrapolating findings from grade school to college settings (Lovett & Leja, 2015).

Generalizability of experimental conditions. A third limitation relates to the design of studies about ETTA. It is not uncommon for researchers to use a standardized test in an experimental setting and to ask student participants to complete the test as if they were in a high-stakes setting (for examples, Lewandowski, Cohen, & Lovett, 2013; Miller, Lewandowski & Antshel, 2015). It is highly unlikely that student participants in these experiments feel the same sort of pressure in this setting as in a high-stakes test or even a class-based test, a limitation acknowledged by some researchers (Lovett & Leja, 2015). This observation calls into question whether ETTA research based on experimental testing designs can be generalized to actual testing situations—either high stakes or course-based.

What are we trying to achieve with ETTA studies? A fourth limitation has resulted from the limited focus of the research conducted. One of the main foci of research in this area has been on supporting or refuting the differential boost hypothesis (Sireci et al., 2005). While this is a worthwhile research focus, the findings of this research provide little guidance to those making decisions about the conditions under which they should allocate ETTA and, when warranted, which durations are most appropriate. Furthermore, many studies focusing on the differential boost hypothesis also suffer from serious methodological issues. In most cases, the studies focus on the performance differences between students with disabilities who are provided with ETTA and students without disabilities who are provided with ETTA. The error in these designs is that they assume that greater access to reading and responding to all test items will result in higher scores. This is a faulty assumption. Given that the rationale behind ETTA is equal access to the test questions, and not equal scores, measuring the results rather than the number of questions attempted demonstrates attention to the theory rather than its application (Coulter, 2009). Rare exceptions to this design issue are studies conducted by Lovett and Lewandowski and their colleagues (for examples, see Lewandowski, Cohen, & Lovett, 2013; Lewandowski, Lovett, & Rogers, 2008; Miller et al., 2015). These scholars designed a series of studies in which students who had disabilities and students who did not each completed testing under standard time. All students were then instructed to change the colour of their writing instruments and continue writing the test under ETTA. A similar design was used by Runyan (1991), who asked students to circle the question on which they were working at the standard test time and then allowed students to continue working under ETTA conditions. In this way, equal access to reading and responding to the test questions was measured, rather than simply comparing the number of correct responses alone. These more applicable types of designs focus more on equal access than on supporting or refuting the differential boost hypothesis and therefore provide more relevant information to those making ETTA decisions for course-based examinations.

Homogeneity of people with disabilities. A fifth limitation of previous research on ETTA relates to an assumption of homogeneity within the category of students with disabilities. In some studies of ETTA comparing students with disabilities with those students without disabilities, students with disabilities have been treated as a category (for example, see Elliott & Marquart, 2004). This conceptualization fails to recognize the diversity within that category—a diversity that has implications on the appropriateness of ETTA for individual students. Moreover, research that examined the effects of ETTA on students with specific disabilities has generated diverse results.

Students with Attention Deficit Hyperactivity Disorder (ADHD). Miller et al. (2015) found that under both standard conditions and ETTA conditions of 50% and 100% additional time, college students with ADHD and students without ADHD who wrote a standardized test under experimental conditions accessed and correctly answered comparable numbers of test questions. However, when the number of questions accessed and the performance of students without ADHD who wrote under standard conditions were compared to the number of questions accessed and the performance of students with ADHD under ETTA of both 150% and 200% of the standard testing duration, the students with ADHD had a distinct and significant advantage over students without disabilities in both cases. Other research by Lovett and Leja (2015) found that post-secondary students with ADHD symptoms actually performed more poorly when provided with extended time than under standard conditions. The authors explained this unexpected finding by making reference to work by Pariseau, Fabiano, Massetti, Hart, and Pelham (2010), who showed that students with ADHD simply slowed down their efforts when provided with extra time. These studies suggest that ETTA may not be an effective accommodation for post-secondary students with ADHD, as it failed to support the differential boost hypothesis and also failed to level the playing field.

Students with Learning Disabilities (LDs). Lewandowski, Lovett, and Rogers (2008) cautioned that students with ADHD are highly distractible and therefore the findings regarding ETTA and students with ADHD may not extend to students with other categories of disability. A study by Lewandowski, Cohen, and Lovett (2013) found that college students with LDs who wrote a standardized test under experimental conditions performed more poorly when compared to nondisabled students under typical testing durations, but when both groups were provided with ETTA, the students without LDs made significantly greater gains than the students with LDs. Moreover, when ETTA of double the standard time was provided to the students with LDs only, the students with LDs actually accessed 26% more test questions than those students without LDs did under standard testing duration, suggesting that ETTA of double time provided an unfair advantage to students with LDs. In another related study of high school students with LDs (Lewandowski, Lovett, & Rogers, 2008), the authors also found no evidence of differential boost, in that the students without LDs outperformed the students with LDs in both standard and ETTA conditions. However, this study also showed that students with LDs accessed similar numbers of test items under ETTA conditions when compared to students without LDs under standard testing durations. In effect, the playing field was levelled in terms of access, yet that access did not result in a differential boost in results favoring the students with disabilities. In contrast, an older study conducted by Runyan (1991) compared university students with and without LDs completing standardized tests in experimental conditions under both standard and ETTA conditions. She found that the students with LDs performed less well under standard time and equally well under ETTA when compared to students without LDs writing under standard time. 
Furthermore, the students without LDs performed no better with ETTA than without ETTA, supporting in part the differential boost hypothesis.

Students with Anxiety Disorders. It is important to note that test anxiety, per se, is not a condition recognized by the American Psychiatric Association. That is, while it could be argued that test anxiety is a Specific Phobia (Lovett & Lewandowski, 2015) or Social Phobia (Zuriff, 1997) under the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5; APA, 2013), test anxiety in and of itself is not legal grounds for accommodation. It can be further argued that most students feel some level of stress or anxiety in testing situations, and that the intent of ETTA is not to create an unfair advantage by addressing the stress of some students and not others. In the case of diagnosed anxiety disorders, however, ETTA has been suggested as a means to level the playing field for students whose anxiety levels become so extreme as to create an unfair disadvantage to them in the testing condition. Research (Sokal & Desjardins, 2016; Sokal & Vermette, in press; Zuriff, 1997) has shown that the awareness of extra time alone is so influential on some students with anxiety disorders that they do not use any of the ETTA provided. However, a search of the literature revealed no studies that have investigated the effects of ETTA on the number of items attempted and the test performance of students with anxiety disorders, supporting the contention of Holzer, Madaus, Bray, and Kehle (2009) that “the link between test anxiety and how it is affected by extended time has not been comprehensively explored” (p. 44).

Thus, studies that lump all students with disabilities into one category and then study their performance under ETTA conditions conflate the needs of some students with the differing needs of others. Indeed, these studies collectively suggest that assigning ETTA as an accommodation for students with disabilities both within and across categories of disabilities is problematic, and therefore setting a duration standard across disabilities becomes even more so. Furthermore, students within the same category of disability may respond differently to a specific type of accommodation (Lindstrom, 2010; Medina, 2000). “Certain testing accommodations may benefit some students with learning disabilities, [however] no single accommodation has been shown to benefit all students with learning disabilities” (Brinckerhoff & Banerjee, 2007, p. 247). Caution should therefore be taken in setting standards for providing ETTA as well as for setting ETTA durations within categories of disabilities.

The student voice. The sixth and final limitation is the paucity in the research literature of the voices of students with disabilities themselves (Beauchamp-Pryor, 2012; Vickerman & Blundell, 2010). Castrodale (2015) is an exception to this observation; his thesis included copious examples of students’ comments about their perceptions of ETTA as a superficial and inadequate accommodation. One student in his study, Mary, summed up the lack of individuation in accommodations as follows:

For everybody, more or less it’s literally a drop-down menu. Do you need extra time? Do you need whatever? It’s phrased this particular way for everybody, which to me I just think is ridiculous. It’s supposed to be an academic accommodation based on the individual’s needs... accommodation has been neutralized, watered down, and standardized for the entire university.... (p. 291)

Likewise, a study by Sokal and Desjardins (2016) also showed how individuation of accommodation has been replaced by standardization. One student participant, commenting on the individual needs of students, said, “It’s funny because in university people think everyone has to be treated equally, but that’s not right. They have to be treated with fairness, not equality.”

Given all the limitations within the research findings, it becomes clear that ETTA is a complex and student-specific accommodation (Lovett & Lewandowski, 2015). On what basis then are decisions made about assigning ETTA for course-based testing in post-secondary settings, and how does the research inform best practice?

What Duration of ETTA is Recommended?

Currently, there is a scarcity of research that specifies a recommended duration of ETTA. Only three papers have provided such a recommendation.

Some of the first scholars to formally examine ETTA duration in order to make a recommendation were Ofiesh and Hughes (2002), who conducted a literature review of seven studies that examined the duration of ETTA used by students with LDs in post-secondary environments. They recommended that ETTA of 150% of standard testing time was a suitable benchmark. However, design issues with the supporting research led to questions about the validity of the resulting recommendation. Specifically, this recommendation was heavily based upon standardized tests, which were the form of assessment used in six out of the seven studies that Ofiesh and Hughes examined. It is unclear whether this generalization of findings from standardized tests to course-based tests is warranted. Only one of the studies examined by Ofiesh and Hughes considered the effects of ETTA on course-based tests, an unpublished dissertation with a small sample size and possible sampling bias. That study had inconclusive findings, as it showed that while all students performed significantly better when allowed more time, the difference in performance between standard time and ETTA was more significant for students without LDs. It is therefore unreasonable that Ofiesh and Hughes, using one study of classroom-based tests and six studies of standardized tests, made a general recommendation of 150% of standard time across testing conditions as a benchmark for accessibility service advisors in post-secondary schools. The deficiencies in the evidence base supporting this recommendation have been pointed out by Lovett and Lewandowski (2015) and by Sokal and Vermette (in press).

Another study was conducted by Cahalan-Laitusis, King, Cline, and Bridgeman (2006), who used a standardized test to determine the duration of time used by college students with LDs and/or ADHD. These researchers found that many of these students, who were allowed additional time of 150% of the standard testing duration, did not use most of the ETTA that was given to them. They subsequently concluded that ETTA of 125% of the standard testing time was a more appropriate starting place.

Likewise, Lewandowski, Cohen, and Lovett (2013) examined how much time college students with LDs required in order to access the same number of test questions as their peers without disabilities. Their research supported the recommendation of Cahalan-Laitusis et al., in that 125% of the standard testing time allowed equal access. Moreover, as previously mentioned, ETTA of 200% gave an unfair advantage to students with LDs.

Rationale for the Current Study

The research base on durations of ETTA was summed up by Miller et al. (2015), who stated, “There is no research in support of time and one half and double time as more appropriate and valid amount of time than, for example 25% or 75% extra time” (p. 768). Despite the paucity of research about appropriate durations of ETTA in course-based examinations of post-secondary students with disabilities, recent research conducted on ETTA allocations has suggested that Ofiesh and Hughes’ 150% recommendation is being used as a standard in at least one Canadian province (Sokal & Vermette, in press). By examining the ETTA provided on over 8,000 exams across two universities, these researchers found that the majority of students were provided with 150% of the standard testing time, with the second most common allocation being 200%. However, they further found that over 35% of students used none of their extended testing time and that, on average, students used only 117% of the standard testing time provided to other students. Together these findings suggest that ETTA is being allocated in quantities that may not be sensitive to the assessment needs of individual students, although it is possible that the findings of the Sokal and Vermette study are anomalous and that other Canadian post-secondary schools follow more evidence-based practices. Furthermore, given that those who assign initial ETTA allowances have little guidance from the research regarding appropriate duration, it is also possible that initial ETTA allowances are monitored and changed in response to individual student data. The current research therefore sought to fill a void in the research base by conducting research with two purposes.

Purpose of Research

  1. To determine whether Student Accessibility Services (SAS) professionals in post-secondary institutions across Canada are using 150% of standard time as a benchmark in allotting ETTA; and
  2. To determine the processes for assigning and monitoring ETTA across post-secondary institutions in Canada.

Methods

Design

A survey design was used in the current project. The project design, consent procedures, and survey were submitted to the primary investigator’s Research Ethics Board, where it was determined that the survey required only publicly available information and therefore did not require ethics approval. The survey included 12 questions and was posted on a Fluidsurvey page. The link to the survey was sent to all members of the Canadian Association of College and University Student Services Community of Practice: Accessibility & Inclusion through their listserv. The initial request resulted in responses from 23 members. A reminder email was sent out one week later and resulted in an additional 28 responses, for a total of 51 in all. The data were transferred from Fluidsurvey to an SPSS spreadsheet and then analyzed. Initial cleaning and screening revealed duplication of results from three universities, so the second response was discarded in each case, resulting in usable data from 48 post-secondary institutions.

Participants

The 48 survey respondents reported on ETTA practices that affected 43,284 students with disabilities who accessed accommodations from their Canadian post-secondary institutions (see Table 1). Respondents represented all provinces of Canada. The total number of students attending the institutions represented in the current study could be as low as 511,060 or as high as 929,040, given that institutional student populations were requested and reported as ranges (see Table 2). The sample was mainly comprised of colleges and universities, but one polytechnic school and one professional school also participated (see Table 3).

Results

Who Receives ETTA

The respondents were asked to indicate which category of disability was the most common in students who received ETTA at their institutions. Twenty-nine (60.4%) respondents chose Learning Disabilities (LD), which indicated that LD was the most common category of disability among students receiving ETTA in our sample of institutions, a finding supported by prior research in the United States (Raue & Lewis, 2011). “Other” was the second most common category and was indicated by 12 respondents (25.0%). The respondents who chose this category indicated that the use of ETTA was evenly spread across students in several categories of disabilities (LDs, ADHD, and Anxiety Disorders) or that individual students who accessed ETTA belonged to multiple categories including those previously listed. Anxiety Disorders was the third most common category of disability receiving ETTA, as indicated by 6 (12.5%) respondents. One respondent (2.1%) did not indicate the most common disability category at his/her post-secondary school.

Frequency and Duration of ETTA

Approximately ninety-four percent of the respondents (n = 45) indicated that, outside of a quiet testing location, ETTA was the most common type of accommodation provided to students with disabilities at their institutions. This finding is supported by previous research (Lovett & Lewandowski, 2015). One respondent reported that providing a reader was the most common accommodation given at that institution, one respondent disclosed that supplying a note-taker was the most common accommodation provided, and one respondent indicated the category of “other” but did not specify.

Most of the respondents (54.2%, n = 26) reported that 1.5 times the standard testing time allowed to other students was the most typical ETTA allotment provided. Some of the respondents (18.8%, n = 9) reported that either 1.5 or double the standard testing time allowed to other students were the most typical ETTA allotments provided. Four respondents (8.3%) reported that there was no “typical” duration of ETTA provided at their institutions. Five institutions (10.4%) indicated “other” procedures that also placed them in the category of no “typical” duration of ETTA provided to students at the intake meetings. These included: (1) “Extra time is assessed by individual students against the identified barriers. We have provided students with double time. We've also done 15 extra minutes per hour. There is no standard application of extra time”; (2) “Students’ extended time is based on their functional limitations and can fall under 1.25, 1.33, 1.5, 1.75 or 2.0 times extended time”; (3) “Percentages vary from 115% to 200%, usually most students received 133% or 150% [of the standard time provided to other students]”; (4) “ETTA is determined as a function of the barrier(s) that they face in testing situations. Individuals with the same diagnosis on paper may experience different barriers or severity of barriers and so may require different accommodated timings”; (5) “ETTA varies from 15% to 100% with 50% being the most common.” Finally, four respondents (8.3%) indicated that a “typical” duration of ETTA was used at their institutions, but that it was of a duration other than 150% or 200% of the standard time provided to other students. These durations included 150% of standard time up to a maximum of one extra hour; 125% or 150% of the duration recommended in the student’s psychological assessment; and a range of 115%-200% of the standard time “depending on the individual’s limitations.”
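For concreteness, the multipliers respondents reported can be converted into actual time allotments with a small sketch. The 60-minute default, the function name, and the capped variant modeled below are illustrative assumptions on our part:

```python
# Convert reported ETTA multipliers (e.g., 1.25, 1.33, 1.5, 1.75, 2.0)
# into total allowed minutes for a hypothetical test. Also models the
# capped variant one respondent described: 150% of standard time,
# up to a maximum of one extra hour.

def allowed_minutes(multiplier, standard=60, max_extra_minutes=None):
    extra = standard * (multiplier - 1)       # extra time implied by the multiplier
    if max_extra_minutes is not None:
        extra = min(extra, max_extra_minutes) # apply an institutional cap, if any
    return standard + extra

for m in (1.25, 1.33, 1.5, 1.75, 2.0):
    print(f"{m:.2f}x of a 60-minute test -> {allowed_minutes(m):.0f} minutes")

# A 3-hour exam at 150%, capped at one extra hour: 240 rather than 270 minutes
print(allowed_minutes(1.5, standard=180, max_extra_minutes=60))
```

The cap illustrates how two institutions reporting the same nominal 150% policy can nonetheless grant quite different durations on longer exams.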

Processes and Rationale for Allocating Initial ETTA Durations

The processes for determining the initial durations of ETTA provided to students varied across institutions. Respondents at 17 institutions (35.4%) indicated that an individual advisor made the initial determination of the awarding of ETTA as well as its duration. At 15 institutions (31.3%), the duration of ETTA was based on general institutional guidelines. At four institutions (8.3%), it was based on recommendations in research findings. At one institution (2.1%), it was based on institutional policy. At 11 institutions (22.9%), “other” processes were used. In examining the descriptions of “other” processes provided by the participants, in most cases (8 of 11) they involved an individual advisor or a group that included the advisor making a decision based on documentation. Example statements include: (1) “Based on the recommendations that come with the student from educational or other assessments”; (2) “based on recommendations from professionals”; (3) “medical documentation & advisor.” In three cases, the guidelines or policies followed were based on guidelines from outside the specific institution. For example, ETTA was allotted “in accordance with practice and guidelines in the province/field.” Collectively then, at almost half of the institutions (n = 23, 47.9%) studied, decisions about the allotment and duration of ETTA were based on internal guidelines, internal policies, external guidelines, or research findings.

We followed up on these findings by limiting the sample to only the sub-group who typically awarded 150% or 150%-200% of standard time, in order to determine the underlying processes for this allocation of ETTA. This subsample comprised 35 institutions, or 72.9% of the larger sample, and included decisions made by individual advisors in 13 cases, decisions based on institutional guidelines in 12 cases, decisions based on institutional policy in one case, decisions based on recommendations in the research findings in three cases, and decisions based on “other” processes in six cases. In these six cases, “other” included one case of decisions based on guidelines outside the institution and five cases of advisors making decisions based on documentation, either alone or with others. Collectively, 17 (48.6%) of the 35 institutions who awarded ETTA durations of 150% or 200% as a matter of course said they did so based on internal guidelines, internal policies, external guidelines, or research findings.

Processes of Monitoring and Modifying ETTA

The processes for monitoring and modifying the ETTA provided after the initial intake meetings varied widely between institutions. The respondents indicated that six institutions (12.5%) did not monitor ETTA at all after the initial allotment at intake, and 14 institutions (29.2%) did so only upon student request. Collectively then, 20 institutions (41.7%) had no formal procedures for reviewing and perhaps modifying ETTA durations once allocated at initial intake. Seven institutions (14.6%) indicated that they reviewed ETTA durations once per year, and 11 institutions (22.9%) reviewed them more than once per year, usually after end-of-term testing. Ten institutions (20.8%) indicated that they used “other” processes for reviewing ETTA. Examination of the respondents’ explanations of this term indicated great variation. Some institutions tailored monitoring to students’ characteristics. An example is “Depends on the student - 1 time per year, if returning and it’s a student with a stable condition. 2 times a year if it’s a new student or if there are fluctuations in the student’s symptoms.” Another example is “As required. As we are a small institution, we monitor testing times on an ongoing basis.” Other institutions provided vague descriptors such as, “When the original decision seems to be inadequate,” or “when it becomes apparent that the amount of time provided isn't working for the student. We then consult with the student and adjust the amount of extra time given. We usually increase to double time.” These latter monitoring procedures imply that some sort of review of ETTA is completed, but we cannot conclude that this is done on a consistent or scheduled basis.

It should be noted that almost half (n = 15, 42.9%) of the 35 institutions who routinely allotted ETTA of either 150% or 200% of standard time at intake did not subsequently monitor ETTA or did so only upon student request.

Summary of the Findings

Overall, we found that ETTA allotment and procedures vary greatly across the 48 Canadian post-secondary institutions in our study. We found that ETTA is by far the most common accommodation provided, and that durations of either 150% or 200% of standard testing times were provided in more than 70% of cases. Furthermore, almost half of the 35 institutions who awarded ETTA durations of 150% or 200% as a matter of course at intake did so based on internal guidelines, internal policies, external guidelines, or non-existent research findings. Collectively, 20 institutions (41.7%) of the total 48 studied had no formal procedures for reviewing and perhaps modifying ETTA durations.

Discussion

Our research findings suggest some interesting trends. First, ETTA of 150% of the standard testing time seems to be a common practice across Canadian post-secondary institutions. More than 54% of respondents confirmed that they typically award this duration of ETTA and, when we also include those who indicated they typically award 150%-200% of standard testing time, we captured 73% of the respondents. Given that LDs were the most common disability at over 60% of the institutions studied, and given that what little research exists on ETTA suggests that 125% is the most appropriate ETTA duration for students with LDs, the common ETTA practices in the majority of Canadian institutions sampled are unsupported by evidence. Second, of the respondents who awarded ETTA durations of 150% or 200% as a matter of course, 49% said they did so based on internal guidelines, internal policies, external guidelines, or research findings. This is again a very troubling finding, given that Miller et al. (2015) found that there is no empirical basis to support this practice. It is therefore disconcerting that some of the respondents awarded ETTA based on their belief in this non-existent research base, but perhaps even more so that these beliefs have crystallized into guidelines and even policies that are followed by many respondents.

To ensure we had not missed any important research recommending the practice of awarding time and a half as a standard practice, we contacted Larry Lewandowski, a prolific researcher who recently published a book on the topic alongside Ben Lovett. Lewandowski confirmed, “There is essentially no research that indicates 50% extended time is the magic accommodation” (Larry Lewandowski, personal communication, May 24, 2016). We then followed up with the past president of the Canadian Psychological Association, two school psychologists each employed in different Canadian provinces, as well as the Clinical Director of the regional assessment and resource centre at Canada’s premier university. In each case, we were informed that they were not aware of any guidelines or any research by which ETTA of 150% was empirically supported.

A third, related concern was the general lack of monitoring at over 40% of institutions overall, and at 42.9% of institutions that routinely granted ETTA of 150%-200% of standard time during intake meetings. Coupled with a lack of individualization of ETTA duration, this lack of monitoring suggests that we are currently unable to ascertain whether accommodations are actually fulfilling their intended purpose for individual students.

It may seem logical at this point to place blame on SAS professionals who are making these decisions in the absence of evidence, but that would be hasty. Unless researchers design and conduct quality studies about the best practices for awarding ETTA to students with a variety of learning and assessment needs, even the most dedicated and knowledgeable SAS professionals are left without the necessary tools to make fair and appropriate decisions. Furthermore, they are constrained by lack of resources (Sokal, 2016) and the constant threat of litigation that together place them in a very precarious position.

So, in the general absence of research related to appropriate ETTA durations for different needs, perhaps 150% of the standard time is not such a bad place for SAS professionals to start. We should note that Lovett and Lewandowski’s (2015) work suggested that in the cases of students with LDs, 125% would be more appropriate, and for students with ADHD, other accommodations are more effective than ETTA. In any case, assigning a specific duration of ETTA should not be a one-time event. Just as students’ needs and abilities can change over time, so should their accommodations be monitored to ensure they are having the desired effects. In this way, monitoring of effectiveness may be able to compensate for the lack of research on appropriate ETTA allotments by disability category until researchers can provide the information necessary to support more defensible initial ETTA assignments. Furthermore, students’ requests alone should not be the basis of ETTA review, as they were in almost 30% of institutions studied. Lovett and Leja (2013, 2015) found that student perceptions of their need for extra time were not predictive of benefits from extra time, suggesting that other data should also be considered when making decisions about changing ETTA durations.

Despite these troubling findings regarding the misguided rationale for allocating ETTA as well as the frequent lack of monitoring once it is assigned, we were heartened by the allocation and monitoring practices that strive to go beyond blanket accommodations based on a one-size-fits-all approach. “Blanket accommodations do little to build upon a student’s strengths or compensate for specific weakness, or to ultimately equip the student to meet subsequent challenges after graduation in the world of work” (Brinckerhoff et al., 1992, p. 418). We therefore wish to highlight the institutions that used multiple sources of data to support allocating ETTA and determining its duration, and that monitored ETTA on an ongoing, regular basis based on individual students’ data on use and performance under ETTA conditions. These institutions, both large and small, provide a template for other institutions using less individualized approaches. It is also encouraging news that other respondents indicated that they have noticed trends in their students’ data and are open to changing practices to ensure that they are meeting the needs of their students. One respondent commented, “We find many students leave the classroom for ETTA that they actually don't use. I'd be interested to know if this trend is consistent at other institutions and how it can be addressed.” Likewise, another stated, “Feedback from faculty and the exam personnel indicate there are students who do not use all of their extra time. We will be considering a sign in/out or electronic check-in/check-out process to monitor use of extra time.” Together, these comments suggest a willingness to use student data to adjust ETTA as student needs and performance require.

Limitations

All research studies have limitations, and ours is no exception. We would like to point out four limitations. First, our study reported only on common Canadian post-secondary practices, not on recommended practices. We wish to be clear that common practice is not an indication of best practice, and our findings should not be viewed as validation of current practices. Second, our study requested information about ETTA use with students within all disability categories, and may have obscured differences in ETTA use between them. Had we asked the respondents about their practices related to specific disabilities, we may have uncovered differential use of ETTA by disability category, as recommended by Lovett and Lewandowski (2015). Third, Ofiesh and Hughes (2002) suggested the appropriate duration of ETTA may differ from institution to institution based on entrance requirements that affect the composition of the student body. Our sample included those serving a broad range of students at a broad range of institutions, and our survey-based design may therefore have masked some of these differences. Finally, we are cognizant that although the funding to SAS offices is increasing, it is insufficient to keep pace with the growing accessibility needs of the student body (AUCCCD, 2014; Sokal, 2016). We recognize that recommending greater time per student in terms of allocating and monitoring ETTA will require resources, both human and financial. The power to correct this limitation lies with those who make university budgetary decisions and set institutional priorities.

Future Directions

Given that ETTA is the most common post-secondary accommodation, we are called upon to ensure it is meeting its potential in addressing the assessment needs of various students with special needs. Sadly, Stretch and Osborne made this same recommendation in 2005, over ten years ago:

It is clear that practitioners must develop guidelines for when ETTA is an appropriate testing accommodation and perhaps also assess appropriate magnitude of the extended time that is appropriate for each individual, as it is not clear that all students with disabilities need the same amount of extended time to have valid test scores (p. 4).

We therefore call on researchers to conduct research on appropriate guidelines for allotting and monitoring course-based testing accommodations in post-secondary assessment. Specifically, we request that high-quality research be conducted in real (non-experimental) settings, using real (non-standardized) course-based tests, and conducted with post-secondary (not high school or middle school) students. Further, we suggest that researchers investigate which learning needs, not necessarily which diagnoses, respond best to ETTA in terms of providing fair access. Lovett and Leja (2015) concurred that research about ETTA using data from non-experimental, course-based university testing could address many of the limitations of the previous research, and have advocated for including students’ perceptions about their experiences with accommodations as only part of a larger data set routinely reviewed when making ongoing decisions about accommodations (Lovett & Leja, 2013).

SAS professionals can likewise collect and respond to specific student data within their own populations. For example, if a student is succeeding in the testing situation and not using all or any of the allocated ETTA, SAS professionals can make a plan in discussion with the student to gradually decrease his/her ETTA. If the student is using all the time and still not succeeding, then ETTA may be insufficient or the student may need complementary supports to effectively address his/her needs. Record keeping by invigilators, co-operation from professors in terms of academic outcomes of testing using ETTA, and collaboration with information technologists would all aid in making student-specific decisions grounded in individual data. Such systems could track trends in the ETTA provided, the ETTA used, the items attempted at various time points of ETTA, and the academic outcomes under those conditions. In this way we can work toward the ideal situation outlined in the literature: Lovett (2010) suggested standardized procedures for decision-making in order to maximize validity and fairness. By ensuring that data collected on specific students’ performance under specific ETTA durations are used, we can maximize the likelihood that we are meeting our legal and moral obligations to our students.
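The kind of record keeping described above can be sketched very simply. The following is an illustrative sketch only, under our own assumptions: the record fields, function names, and decision rules are hypothetical examples of the monitoring logic the article describes, not a system used by any institution in the study:

```python
# Hypothetical sketch of an ETTA monitoring record. The dataclass fields
# and the review logic (pass/fail plus time actually used) are our own
# illustration of the decision rules described in the text.

from dataclasses import dataclass

@dataclass
class TestRecord:
    student_id: str
    standard_minutes: int    # time allowed to other students
    etta_multiplier: float   # e.g., 1.5 for 150% of standard time
    minutes_used: int        # actual time the student took
    passed: bool             # academic outcome reported by the professor

def review_flag(record: TestRecord) -> str:
    """Suggest a review action from one test record (illustrative logic)."""
    allotted = record.standard_minutes * record.etta_multiplier
    extra_available = allotted - record.standard_minutes
    extra_used = max(0, record.minutes_used - record.standard_minutes)
    if record.passed and extra_used == 0:
        # Succeeding without touching the extra time.
        return "consider gradually decreasing ETTA"
    if not record.passed and extra_used >= extra_available:
        # Exhausting the extra time and still not succeeding.
        return "ETTA may be insufficient; consider complementary supports"
    return "no change indicated"
```

Consistent with the individualized, repeated decision-making the article advocates, a flag like this should prompt a conversation with the student only after a trend emerges across several tests, never after a single record.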

While using ETTA to the maximum of its usefulness is desirable, some researchers have suggested that a re-working of the way we approach testing and testing durations will garner even better outcomes. Lewandowski, Lovett, and Rogers (2008) suggested removing time barriers for testing of both students with disabilities and those without. In this way, post-secondary institutions can move toward a universal design for learning (UDL) that makes testing accommodations the rare exceptions rather than the norm. UDL has its roots in Ronald Mace’s architectural designs that sought to make buildings physically accessible to all (Messinger-Willman & Marino, 2010). UDL in the classroom strives to make learning accessible to all (Rose, Meyer, & Hitchcock, 2005). Rather than added-on, separate procedures for students with disabilities, classrooms and learning are designed to accommodate all the variations of student needs. Rose et al. contended that this goal can be achieved by adhering to the three main principles of UDL: First, curricula should be presented in multiple ways to support students to recognize the content; second, the students should be supported to use multiple and flexible modes to express the processes of learning; and finally, the products and representation of learning should likewise be both flexible and multiple. Through differentiation in the ways students experience and practice learning, as well as the ways they demonstrate their competence with its content, UDL points the way to greater access for all students. Lovett and Lewandowski (2015) observed that while universal design for learning has received some attention, universal design in assessment—the third principle of UDL—is largely ignored. However, an unprompted comment from one respondent at one institution in the current study suggested that there has been some progress on this front:

Analyzes of our data indicates that the majority of students do not use the full time and a half allotted. As a result, we have been working with programs to evaluate providing extended time for all students, not just students with disabilities. In select programs we have been able to promote a UDL approach to ETTA whereby the service/accommodation can be accessed by both disabled and nondisabled students. We are supporting programs to provide the ETTA in advance so that students (disabled or non-disabled) can self select whether they will use the accommodation.

In the meantime, while ETTA continues to be the accommodation of choice, we are united with other researchers in the field who call for individualization and monitoring of accommodations. Lovett and Lewandowski (2015) “underscore the importance of individualized assessment when making accommodation decisions” (p. 84), and stress, “Accommodation decisions should always be an individualized process and one that is repeated each time that there is a new test on which to consider accommodations for the student” (p. 49).

References

Table 1 Description of Participants
Province of Post-secondary Number of Students Registered with AS Number of Institutions
Alberta 4,640 7
British Columbia 8,280 10
Manitoba 3,230 4
New Brunswick 1,540 6
Newfoundland/Labrador 1,750 3
Nova Scotia 1,425 4
Ontario 17,289 10
Prince Edward Island 430 1
Quebec 3,200 2
Saskatchewan 1,500 1
TOTALS 43,284 48
Table 2 Size of Institutions
Size by Student Population Number of Institutions %
Small (under 7,000 students) 17 35.4
Medium (7,001- 15,000 students) 10 20.8
Large (15,001- 30,000 students) 14 29.2
Very large (over 30,000 students) 7 14.6
TOTAL NUMBER OF INSTITUTIONS 48 100
TOTAL RANGE OF STUDENTS 511,060 to 929,040 (Note 1)

Note 1: This range is calculated conservatively, with the minimum number of students at a “small” institution set at 3,000 students and the maximum number of students at a “very large” institution set at 40,000 students when calculating the total number of students attending each institution.

Table 3 Type of Institutions
Type of Post-secondary Institution Number
University 31
College 15
Poly-technical school (degree-granting) 1
Professional school 1
TOTAL 48


All articles in the journal are assigned a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. See: https://creativecommons.org/licenses/by-nc-nd/4.0/