Article Type:

Report

Subject:

Academic achievement (Research)

Educational programs (Research)
Authors:

Bowles, Tyler J.

McCoy, Adam C.

Bates, Scott
Pub Date:

09/01/2008

Publication:

Name: College Student Journal Publisher: Project Innovation (Alabama)
Audience: Academic Format: Magazine/Journal Subject: Education Copyright: COPYRIGHT 2008 Project Innovation (Alabama)
ISSN: 0146-3934

Issue:

Date: Sept, 2008
Source Volume: 42 Source Issue: 3

Topic:

Event Code: 310 Science & research

Geographic:

Geographic Scope: United States Geographic Code: 1USA United States

Accession Number:

182975279

Full Text:

Supplemental Instruction (SI) is a national program designed to aid college student learning. Many researchers have noted that analysis of the impact of the SI program on student achievement is complicated by inherent self-selection bias. We apply a statistical technique that controls for the self-selection problem and test the effect of SI attendance in freshman-level courses on graduation success. Our analysis suggests that SI attendance in freshman-level courses has a statistically significant influence on graduation success. Indeed, SI attendance, everything else held constant, increases the probability of timely graduation by approximately 11%.

**********

Supplemental Instruction (SI) is a widely implemented academic-support program designed to provide optional, informal, peer-mentored learning support to students in large, survey, or general education courses (International Center for Supplemental Instruction, 2006). The program was designed to combat course-level attrition and improve performance in traditionally difficult courses and, more generally, to increase retention and graduation rates. Specifically, the dual goals of the SI program are to improve performance and reduce attrition (Blanc & Martin, 1994).

This paper addresses the issue of whether SI attendance affects graduation rates and is organized as follows: The next section discusses the SI program and the literature concerning its effectiveness. The data and methods applied in the present research are then presented, followed by a section that provides the empirical results. The paper concludes with a discussion of these results.

Supplemental Instruction Program

The SI program was founded at the University of Missouri-Kansas City in the early 1970s by Deanna Martin, PhD (Widmar, 1994), and in 1981 was designated by the U.S. Department of Education as an Exemplary Educational Program (Martin & Arendale, 1994). The International Center for Supplemental Instruction at the University of Missouri-Kansas City defines the program as "an academic assistance program that utilizes peer-assisted study sessions. SI sessions are regularly scheduled, informal review sessions in which students compare notes, discuss readings, develop organizational tools, and predict test items. Students learn how to integrate course content and study skills while working together" (http://www.umkc.edu/cad/si).

Through the mid-2000s, the SI program has been implemented in more than 50 universities nationally--and staff from "hundreds" of universities nationally and internationally have been trained in the program (International Center for Supplemental Instruction, 2006).

There are four important role-players in the standard SI program: an SI administrator, specific course instructors, SI leaders, and the students themselves. SI leaders attend course lectures, take notes, read all assigned materials, and conduct three to five out-of-class SI sessions a week. SI is a so-called peer cooperative learning program (Arendale, 2005), as the SI leaders are generally more advanced students who have a history of success in college generally and in the targeted course specifically. That is, the SI leader is the "model student," a facilitator who helps students to integrate course content and learning/study strategies. SI sessions include (but are not limited to): reviewing material covered in lectures or in the course text, hands-on exercises that are unlikely to be used in large lecture classes, discussion-based learning and question-and-answer periods that are difficult to accomplish in large lecture halls, and study skills training (e.g., note-taking, textbook use, and exam-taking strategies).

The efficacy of the SI program has been studied since its inception; two recently updated annotated reviews of this literature are available (Arendale, 2005; International Center for Supplemental Instruction, 2006). Given the goals of the program, the outcome variables of interest generally include student learning and retention. Congos and Schoeps (1993) examined differences in course performance between students who attended SI sessions and those who did not, controlling for preexisting differences in SAT scores, high school rank, and predicted GPA prior to matriculation. They concluded that while there were no inherent preexisting differences between attenders and nonattenders, there were differences in course performance based on SI attendance. They noted, however, that self-selection bias remains an inherent problem in the evaluation of the program.

In a large-scale study of the effectiveness of the SI program, Kochenour et al. (1997) examined the efficacy of SI on student success, as assessed via course performance, using a sample of over 11,000 participants in eight separate courses in both the physical and social sciences. They found that students who attended SI did not differ significantly, in terms of predicted GPA, from those who did not; that is, the two groups appeared to be equally prepared. Critically, however, SI attenders performed better in those courses.

The goals of the SI program are not all short term in nature. Simpson, Hynd, Nist, and Burrell (1997) reviewed the state and status of a variety of college-level academic assistance programs (including SI) and noted that it is important in this field to investigate long-term effects. That is, it is important to move beyond course performance (e.g., grades) to other, potentially longer-term outcomes. This is particularly true in programs such as SI, which include a mission of knowledge and skills transfer. The long-term impacts of SI have been examined. For instance, Gattis (2000) studied the knowledge gains due to student involvement in supplemental instruction during undergraduate-level chemistry courses. In this study, students who attended SI sessions during a fall semester course were retested in the following spring semester; those students who had participated in SI sessions during the fall scored higher on the exam given in the spring. This is an indication of longer-term positive outcomes based on SI participation. The examination of longer-term effects is particularly important given that one of the key aspects of the SI program is that it provides a successful student to model pro-educational behavior (e.g., distributed learning, deeper processing of information). In short, the role of an SI leader is to provide a good model for micro- and macro-behaviors related to successful long-term educational outcomes.

Many researchers have noted that self-selection bias is a potential threat to any deep understanding of the impact of the SI program (McCarthy & Smuts, 1997; Schwartz, 1992; Simpson, Hynd, Nist, & Burrell, 1997; Visor, Johnson, & Cole, 1992); this is a fundamentally important issue. Given that the program is designed to be voluntary, self-selection is built in (see Burmeister, 1996, for a discussion of this issue by the program's founder), and self-selection presents a significant statistical problem for testing program effectiveness.

This paper addresses the issues of long-term impacts and self-selection bias. Specifically, a statistical technique that accounts for self-selection is applied to test the effect of student SI attendance in freshman-level courses on student graduation success.

Data and Methods

During the fall semester of 2001 and the spring semester of 2002, 3,905 students at a large western land-grant university (Utah State University) enrolled in courses that offered Supplemental Instruction. All of these courses are freshman-level courses. SI attendance, course grades, ACT scores, high school GPAs, and demographic information were compiled for these students. In the spring of 2005, the Registrar's Office at Utah State University provided data on whether students in this earlier data set had graduated by Spring 2005 or had filed an application to graduate after the Summer 2005 or Fall 2005 semesters. Table 1 provides the joint frequency distribution for SI attendance and graduation success. Table 2 provides additional descriptive statistics that characterize the data. The data in Table 1 suggest that students who attend SI are more likely to graduate on a timely basis. However, this may not be due to the effect of SI attendance but rather to a third variable that is correlated with both graduation and SI attendance--the self-selection problem.
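
A joint frequency distribution like the one in Table 1 can be produced from student-level records with a simple cross-tabulation. The sketch below uses randomly generated attendance and graduation flags as hypothetical stand-ins for the registrar data (only the cohort size of 3,905 comes from the text); it shows the mechanics, not the paper's actual counts:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 3905  # cohort size reported in the text

# Hypothetical student records; the real data came from the Registrar's Office
df = pd.DataFrame({
    "attended_si": rng.integers(0, 2, n).astype(bool),
    "graduated": rng.integers(0, 2, n).astype(bool),
})

# Joint frequency distribution in the spirit of Table 1, with row/column totals
table = pd.crosstab(df["attended_si"], df["graduated"], margins=True)
print(table)
```

The `margins=True` option adds the row and column totals, which is the form such tables are usually reported in.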

Because students "self-select" whether to attend SI, a single-equation regression model will yield a biased estimate of the effect of SI on graduation success (Bowles and Jones, 2003). For example, if inherently motivated students choose to attend SI and are also more likely to graduate on time, the observed positive correlation between SI attendance and graduation success will reflect the effect of this third factor (i.e., inherent motivation) on both SI attendance and graduation success. A potential solution would be to include explanatory variables in the regression equation that proxy for inherent motivation (e.g., high school GPA), but this is insufficient, as there likely are unobserved (i.e., unmeasurable) student characteristics that affect both SI attendance and graduation success.
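
The direction and size of this bias can be illustrated with a small simulation (all numbers below are hypothetical, not taken from the paper's data). An unobserved "ability" variable raises graduation chances but lowers SI attendance, so a naive comparison of graduation rates misstates, and here even reverses, a true positive SI effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved "ability": raises graduation chances, lowers SI attendance
ability = rng.normal(size=n)

# Self-selection: weaker students are more likely to attend SI
attend = (-0.8 * ability + rng.normal(size=n) > 0)

# True causal effect of SI attendance on the latent graduation index
TRUE_EFFECT = 0.5
graduate = (TRUE_EFFECT * attend + ability + rng.normal(size=n) > 0)

# Naive comparison: graduation rate of attenders minus non-attenders
naive = graduate[attend].mean() - graduate[~attend].mean()

# Comparison within a narrow ability band, i.e., holding the confounder
# (almost) fixed -- impossible with real data, where ability is unobserved
band = np.abs(ability) < 0.1
adj = graduate[band & attend].mean() - graduate[band & ~attend].mean()

print(f"naive difference:         {naive:+.3f}")
print(f"ability-held-fixed diff:  {adj:+.3f}")
```

With these hypothetical parameters the naive difference is negative even though the true effect is positive, which is exactly why a technique that models the selection process is needed.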

The problem of determining the effect of a "treatment" (e.g., SI attendance) on an outcome (e.g., graduation success) where the participants select whether to be treated is a common type of problem in the social sciences. Hence, a statistical model, the treatment effects model, has been developed as the appropriate technique in the presence of the self-selection problem. Indeed, as opposed to single equation regression models, the treatment effects model has become the standard approach for testing program effectiveness in the social sciences (see Greene, 2000, Section 20.4.4; Weiler and Pierro, 1988; Greene, 1998; Hilmer, 2001; Bowles and Jones, 2004).

The treatment effects model includes an equation, the selection equation, that explains the student's choice to attend SI. The second equation explains graduation success and includes as an explanatory variable a measure of SI attendance.

In the current context, the following model is proposed to explain SI attendance, graduation success, and the effect of SI attendance on graduation:

SI attendance_i = a_1 + a_2 HSGPA_i + m_i

Graduation_i = b_1 + b_2 SI attendance_i + b_3 ACT score_i + b_4 SEX_i + e_i

where SI attendance_i = 1 if student i attended SI three or more times, 0 otherwise; HSGPA_i = the high school grade point average of student i; Graduation_i = 1 if student i had graduated by Spring 2005 or had filed an application to graduate by Fall 2005, 0 otherwise; ACT score_i = the score of student i on the ACT exam; and SEX_i = 1 if male, 0 if female.

SI attendance is specified as a function of HSGPA because HSGPA is postulated to reflect the work ethic and attitude of student i concerning their education. ACT score is included as an explanatory variable in the graduation equation because it is deemed a measure of academic ability and, therefore, a reasonable predictor of graduation success. The sex dummy variable is included in the graduation equation because of the distinctive demographic profile of students at Utah State University.
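
The two-equation structure above can be sketched as a control-function estimator on simulated data. Everything in the sketch is hypothetical (the coefficients, the HSGPA distribution, a continuous rather than binary outcome), and it uses the simpler two-step variant rather than the simultaneous maximum-likelihood estimation the paper applies; the point is only to show how a first-stage selection equation corrects the second-stage estimate:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 50_000
TRUE_EFFECT = 0.5  # true coefficient on SI attendance in the outcome equation

# --- Simulated data (hypothetical; the paper uses registrar records) ---
hsgpa = rng.normal(3.0, 0.5, n)
u = rng.normal(size=n)                                   # selection error
e = -0.6 * u + np.sqrt(1 - 0.36) * rng.normal(size=n)    # outcome error, corr(u, e) = -0.6
attend = (6.0 - 2.0 * hsgpa + u > 0).astype(float)       # self-selected SI attendance
y = 1.0 + TRUE_EFFECT * attend + e                       # graduation outcome (continuous here)

# --- Step 1: probit of attendance on HSGPA (the selection equation) ---
X1 = np.column_stack([np.ones(n), hsgpa])

def probit_nll(beta):
    z = X1 @ beta
    return -np.sum(attend * norm.logcdf(z) + (1 - attend) * norm.logcdf(-z))

z_hat = X1 @ minimize(probit_nll, np.zeros(2), method="BFGS").x

# --- Step 2: outcome equation with a selection-correction term ---
# E[error | attendance] is proportional to the inverse Mills ratio,
# signed by treatment status
h = np.where(attend == 1,
             norm.pdf(z_hat) / norm.cdf(z_hat),
             -norm.pdf(z_hat) / norm.cdf(-z_hat))

X_naive = np.column_stack([np.ones(n), attend])
X_corr = np.column_stack([np.ones(n), attend, h])
b_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0]
b_corr = np.linalg.lstsq(X_corr, y, rcond=None)[0]

print(f"true effect:          {TRUE_EFFECT}")
print(f"single-equation OLS:  {b_naive[1]:+.3f}")
print(f"selection-corrected:  {b_corr[1]:+.3f}")
```

Because the errors of the two equations are negatively correlated here (mirroring the paper's story that less able students attend SI more), the single-equation estimate is badly biased downward while the corrected estimate recovers the true effect.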

Results

Table 3 presents two sets of parameter estimates for our two-equation model. All parameter estimates are statistically significant and have the expected sign. For the first set of estimates, it is assumed that self-selection is not a problem, and, therefore, the equation explaining graduation success is estimated as a single equation model. This technique results in an estimate of the coefficient on SI attendance of 0.1224, with a t-statistic of 2.58--indicating that SI attendance has a positive and statistically significant impact on timely graduation.

Controlling for the selectivity bias in this analysis requires that the parameters of the two-equation model be estimated simultaneously. The parameter estimates from this approach are reported on the right-hand side of Table 3. In this instance, the estimate of the coefficient on SI attendance is also positive and statistically significant but much larger, at 1.4610. This implies that a single-equation approach, which by definition ignores the self-selection problem, underestimates the effect of SI attendance on graduation.

Discussion

A plausible story consistent with the above empirical result is that inherently less able students are more likely to attend SI. Because this unmeasurable ability is negatively correlated with SI attendance (i.e., lower student ability is associated with higher SI attendance) but positively correlated with graduation success, a single-equation approach, which by definition ignores this problem, underestimates the effect of SI attendance on graduation success.

Our results are consistent with other studies that compare a system of equations approach to single equation models in testing the effect of some program or treatment where self-selection is a problem. For example, in testing the effect of initial status (i.e., full- vs. part-time student) on educational persistence, Weiler and Pierro (1988) found that the sign of the coefficient on initial status changed when moving from a single equation model to a system of equations model similar to the model used here. Indeed, in discussing the self-selection problem, Greene (2000) noted that the failure to adequately model the problem has "called into question the interpretation of a number of received studies" (p. 934).

Although the coefficient on SI attendance in the graduation equation of 1.4610 (see Table 3) indicates a positive and statistically significant (t-stat = 12.27) relationship between these two variables, it does not represent the marginal effect of a change in SI attendance on graduation success. This is a consequence of the statistical technique used to estimate these parameters. The marginal effect has instead been calculated as 0.1075 (t-stat = 1.816), indicating that SI attendance in freshman-level courses, holding all other factors constant, increases the probability of graduation within approximately four years by 0.1075, or 10.75%.
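
To see why a coefficient of 1.4610 can correspond to a probability change of only about 0.11, note that in probit-style models the effect of a dummy regressor on the outcome probability is the difference between two normal CDF evaluations, which depends on where the rest of the index puts the student on the curve. A brief sketch using hypothetical baseline index values (not estimates from the paper):

```python
import numpy as np
from scipy.stats import norm

b_si = 1.4610  # estimated coefficient on SI attendance (Table 3)

# Hypothetical baseline indices x'b for a range of students;
# illustrative values only, not the paper's estimates
base_index = np.linspace(-3.0, 0.5, 7)

# For a dummy variable, the probability effect is the difference of
# two normal CDF values, so it varies with the baseline index
me = norm.cdf(base_index + b_si) - norm.cdf(base_index)
for z, m in zip(base_index, me):
    print(f"baseline index {z:+.2f}: graduation-probability effect {m:.3f}")
```

For students far out in the tail of the index the same coefficient moves the graduation probability only slightly, which is why the averaged marginal effect reported in the paper is much smaller than the raw coefficient.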

References

Arendale, D.R. (2005). Postsecondary peer cooperative learning programs: Annotated bibliography. Retrieved October 24, 2006. Available: http://www.tc.umn.edu/~arend011/Peerbib03.pdf.

Blanc, R.A., & Martin, D.C. (1994). Supplemental instruction: Increasing student performance and persistence in difficult academic courses. Academic Medicine: Journal of the Association of American Medical Colleges, 69(6), 452-454.

Bowles, T.J., & Jones, J. (2003). An analysis of the effectiveness of supplemental instruction: The problem of selection bias and limited dependent variables. Journal of College Student Retention: Research, Theory, and Practice, 5(2), 235-243.

Bowles, T.J., & Jones, J. (2004). The effect of supplemental instruction on retention: A bivariate probit model. Journal of College Student Retention: Research, Theory, and Practice, 5(4), 431-437.

Burmeister, S. (1996). Supplemental instruction: An interview with Deanna Martin. Journal of Developmental Education, 20(1), 22-26.

Congos, D.H., & Schoeps, N. (1993). Does supplemental instruction really work and what is it anyway? Studies in Higher Education, 18(2), 165-178.

Congos, D.H., & Schoeps, N. (1999). Methods to determine the impact of SI programs on colleges and universities. Journal of College Student Retention, 1(1), 59-82.

Greene, W.H. (1998). Gender economics courses in liberal arts colleges: Further results. The Journal of Economic Education, 29(4), 291-300.

Greene, W.H. (2000). Econometric analysis. Upper Saddle River, NJ: Prentice Hall.

Hilmer, M.J. (2001). A comparison of alternative specifications of the college attendance equation with an extension to two-stage selectivity-correction models. Economics of Education Review, 20(3), 263-278.

Hodges, R., & White, W.G., Jr. (2001). Encouraging high-risk student participation in tutoring and supplemental instruction. Journal of Developmental Education, 24(3), 2-10, 43.

International Center for Supplemental Instruction. (2003). Supplemental instruction national data summary, 1998-2003. Retrieved October 24, 2006. Available: http://www.umkc.edu/cad/si/sidocs/National%20Supplemental%20Instruction%20Report%2098-03.pdf.

Kochenour, E.O., Jolley, D.S., Kaup, J.G., Patrick, D.L., Roach, K.D., & Wenzler, L.A. (1997). Supplemental instruction: An effective component of student affairs programming. Journal of College Student Development, 38(6), 577-586.

Martin, D.C., & Arendale, D. (1994). Review of research concerning the effectiveness of SI from the University of Missouri-Kansas City and other institutions across the United States. Annual conference of the Freshman Year Experience, ERIC Document Reproduction Service No. ED 370 502, Columbia, SC.

McCarthy, A., & Smuts, B. (1997). Assessing the effectiveness of supplemental instruction: A critique and a case study. Studies in Higher Education, 22(2), 221-232.

Schwartz, M.D. (1992). Study session and higher grades: Questioning the causal link. College Student Journal, 26, 292-299.

Simpson, M.L., Hynd, C.R., Nist, S.L., & Burrell, K.I. (1997). College academic assistance programs and practices. Educational Psychology Review, 9(1), 39-87.

Visor, J.N., Johnson, J.J., & Cole, L.N. (1992). The relationship of supplemental instruction to affect. Journal of Developmental Education, 16(2), 12-18.

Weiler, W.C., & Pierro, D.J. (1988). Selection bias and the analysis of persistence of part-time undergraduate students. Research in Higher Education, 29(3), 261-272.

Widmar, G.E. (1994). Supplemental instruction: From small beginnings to a national program. In D.C. Martin & D.R. Arendale (Eds.), Supplemental instruction: Increasing achievement and retention (pp. 3-10). San Francisco, CA: Jossey-Bass.

TYLER J. BOWLES, ADAM C. MCCOY, AND SCOTT BATES

Utah State University

**********

Supplemental Instruction (SI) is a widely-implemented academic-support program designed to provide optional, informal, peer-mentored leaning support to students in large, survey, or general education courses (International Center for Supplemental Instruction, 2006). The program was designed to combat course-level attrition and improve performance in traditionally difficult courses, and more generally to increase retention and graduation rates. Specifically, the dual goals of the SI program are to improve performance and reduce attrition (Blanc & Martin, 1994).

This paper addresses the issue of whether SI attendance affects graduation rates and is organized as follows: The following section discusses the SI program and the literature concerning its effectiveness. The data and methods applied in the instant research are then presented followed by a section that provides the empirical results. The paper concludes with a discussion of these results.

Supplemental Instruction Program

The SI program was founded at the University of Missouri-Kansas City in the early 1970s by Deanna Martin, PhD (Widmar, 1994) and in 1981 was designated by the U.S. Department of Education as an Exemplary Educational Program (Martin and Arendale, 1994). The International Center for Supplemental Instruction at the University of Missouri-Kansas City defines the program as "an academic assistance program that utilizes peer-assisted study sessions. SI sessions are regularly scheduled, informal review sessions in which students compare notes, discuss readings, develop organizational tools, and predict test items. Students learn how to integrate course content and study skills while working together" (http://www.umkc. edu/ cad/si).

Through the mid-2000s, the SI program has been implemented in more than 50 universities nationally--and staff from "hundreds" of universities nationally and internationally have been trained in the program (International Center for Supplemental Instruction, 2006).

There are four important role-players in the standard SI program: an SI administrator, specific course instructors, SI leaders, and the students themselves. SI leaders attend course lectures, take notes, read all assigned materials, and conduct three to five out-of-class SI sessions a week. SI is a so-called peer cooperative learning program (Arendale, 2005) as the SI leaders are generally more advanced students who have a history of success in college generally and in the targeted course specifically. That is, the SI leader is the "model student," a facilitator who helps students to integrate course content and learning/study strategies. SI sessions include (but are not limited to): reviewing material covered in lectures or in the course-text, hands-on exercises that are unlikely to be utilized in large lecture-classes, discussion based learning that is more difficult to accomplish in large lecture halls, question-and-answer periods that are difficult to accomplish in large lecture halls, and study skills training (e.g., note-taking, textbook use, and exam-taking strategies).

The efficacy of the SI program has been studied since its inception; two recently updated annotated reviews of this literature are available (Arendale, 2005 and International Center for Supplemental Instruction, 2006). Given the goals of the program, the outcome variables of interest generally include student learning and retention. Congos and Schoeps (1993) examined differences in course performance for students who attended SI sessions and those who did not. In this case, they controlled for preexisting differences in SATscores, high school rank, and predicted GPA prior to matriculation. They concluded that while there were no inherent preexisting differences among attenders and nonattenders, there were differences in course performance based on SI attendance. They noted, however, that the self-selection bias remains an inherent problem in the evaluation of the program.

In a large-scale study of the effectiveness of the SI program, Kochenour et al. (1997) examined the efficacy of SI on student success--as assessed via course performance using a sample of over 11,000 participants in eight separate courses in both physical and social sciences. They found that those students who attended SI did not differ significantly, in terms of predicted GPA, from those who did not; that is, those who attended SI and those who did not appeared to be equally prepared. Critically, however, they did perform better in the courses that they attended.

Many researchers have noted that self-selection bias is a potential threat to any deep understanding of the impact of the SI program (McCarthy&Smuts, 1997; Schwartz, 1992; Simpson, Hynd, Nist, & Burrell, 1997; Visor, Johnson, & Cole, 1992); this is a fundamentally important question. Given that the program is designed to be voluntary, self-selection is built-in (see Burmeister, 1996, for a discussion of this issue by the program's founder), and the issue of self-selection presents a significant statistical problem relative to testing program effectiveness.

This paper addresses the issues of long-term impacts and self-selection bias. Specifically, a sufficiently sophisticated statistical technique is applied to test the effect of student SI attendance in freshmen level courses on student graduation success.

Data and Methods

During the fall semester of 2001 and spring semester of 2002, 3,905 students at a large western land-grant university (i.e., Utah State University) enrolled in courses that offered Supplemental Instruction. These courses are universally freshman level courses. SI attendance, course grades, ACT scores, high school GPAs, and demographic information were compiled for these students. In the spring of 2005, the Registrar's Office at Utah State University provided data on whether students in this earlier data set had graduated by Spring 2005 or had filed an application to graduate after the Summer 2005 or the Fall 2005 semesters. Table 1 provides the joint frequency distribution for SI attendance and graduation success. Table 2 provides some additional descriptive statistics that characterize the data. The data in Table 1 suggest that students who attend SI are more likely to graduate on a timely basis. However, this may not be due to the effect of SI attendance but rather to a third variable that is correlated with both graduation and SI attendance--the self-selection problem.

As students "self-select" whether to attend SI, a single equation regression model will result in a biased estimate of the effect of SI on graduation success (Bowles and Jones, 2003). For example, if inherently motivated students choose to attend SI and are more likely to timely graduate, the observed positive correlation between SI attendance and graduation success will-reflect the effect of this third factor (i.e., inherent motivation) on both SI attendance and graduation success. A potential solution would be to include explanatory variables in the regression equation that serve as a proxy for inherent motivation (e.g., high school GPA), but this is insufficient as there likely are unobserved (i.e., unmeasurable) student characteristics that affect both SI attendance and graduation Success.

The problem of determining the effect of a "treatment" (e.g., SI attendance) on an outcome (e.g., graduation success) where the participants select whether to be treated is a common type of problem in the social sciences. Hence, a statistical model, the treatment effects model, has been developed as the appropriate technique in the presence of the self-selection problem. Indeed, as opposed to single equation regression models, the treatment effects model has become the standard approach for testing program effectiveness in the social sciences (see Greene, 2000, Section 20.4.4; Weiler and Pierro, 1988; Greene, 1998; Hilmer, 2001; Bowles and Jones, 2004).

The treatment effects model includes an equation, the selection equation, that explains the student's choice to attend SI. The second equation explains graduation success and includes as an explanatory variable a measure of SI attendance.

In the current context, the following model is proposed to explain SI attendance, graduation success, and the effect of SI attendance on graduation:

Siattendancei = [a.sub.1] + [a.sub.2] [HSGPA.sub.i] + [m.sub.i]

Graduationi = [b.sub.1] + [b.sub.2] SI [attendance.sub.i] + [b.sub.3] ACT [score.sub.i] + [b.sub.4] [SEX.sub.i] + [e.sub.i]

where SI attendancei = 1 if student i attended SI three or more times, 0 otherwise; HSGPAi = the high school grade point average of student i; Graduationi = 1 if student i had graduated by Spring 2005 or had filed an application to graduate by Fall 2005, 0 = otherwise; ACTscorei = the score on the ACT exam of student i; and SEXi, 1 = male, 0 = female.

The reason for specifying SI attendance as a function of HSGPA is that it is postulated that HSGPA reflects the work ethic and attitude of student i concerning their education. ACT score is included as an explanatory variable in the graduation equation as it is deemed a measure of academic ability and, therefore, a reasonable predictor of graduation success. Additionally, the reason for the sex dummy variable in the graduation equation is due to the rather unique demographic profile of students at Utah State University.

Results

Table 3 presents two sets of parameter estimates for our two-equation model. All parameter estimates are statistically significant and have the expected sign. For the first set of estimates, it is assumed that self-selection is not a problem, and, therefore, the equation explaining graduation success is estimated as a single equation model. This technique results in an estimate of the coefficient on SI attendance of 0.1224, with a t-statistic of 2.58--indicating that SI attendance has a positive and statistically significant impact on timely graduation.

Controlling for the selectivity bias in this analysis requires that the parameters of the two-equation model be estimated simultaneously. The parameter estimates from this approach are reported on the right-hand side of Table 3. In this instance, the estimate of the coefficient on SI attendance is also positive and statistically significant but much larger at 1.4610. This implies that a single equation approach to estimating the effects of SI attendance on student graduation achievement, which by definition ignores the self-selection problem, leads to an underestimate of the effect of SI attendance on student graduation achievement.

Discussion

A plausible story consistent with the above empirical result is that inherently less able students are more likely to attend SI. As this unmeasurable ability is negatively correlated with SI attendance (i.e., the lower is student ability, the higher is SI attendance) but positively-correlated with graduation success, a single equation approach, which by definition ignores this problem, underestimates the effect of SI attendance on graduation success.

Our results are consistent with other studies that compare a system of equations approach to single equation models in testing the effect of some program or treatment where self-selection is a problem. For example, in testing the effect of initial status (i.e., full- vs. part-time student) on educational persistence, Weiler and Pierro (1988) found that the sign of the coefficient on initial status changed when moving from a single equation model to a system of equations model similar to the model used here. Indeed, in discussing the self-selection problem, Greene (2000) noted that the failure to adequately model the problem has "called into question the interpretation of a number of received studies" (p. 934).

Although the coefficient or SI attendance in the graduation equation of 1.4610 (see Table 3) indicates a positive and statistically significant (t-stat = 12.27) relationship between these two variables, it does not represent the marginal effect of a change in SI attendance on graduation success. This is a consequence of the statistical technique used to estimate these parameters. However, this marginal effect has been calculated of 0.1075 (t-stat = 1.816). The estimate of 0.1075 indicates that SI attendance in freshman-level courses, holding all other factors constant, increases the probability of graduation within approximately four years by 0.1075 or 10.75%.

References

Arendale, D.R. (2005). Postsecondary peer cooperative learning programs: Annotated bibliography. Retrieved October 24, 2006. Available: http://www.tc.umn.edu/~arend011/Peerbib03.pdf.

Blanc, R.A., & Martin, D.C. (1994). Supplemental instruction: Increasing student performance and persistence in difficult academic courses. Academic Medicine: Journal of the Association of American Medical Colleges, 69(6), 452-454.

Bowles, T.J., & Jones, J. (2003). An analysis of the effectiveness of supplemental instruction: The problem of selection bias and limited dependent variables. Journal of College Student Retention: Research, Theory, and Practice, 5(2), 235-243.

Bowles, T.J., & Jones, J. (2004). The effect of supplemental instruction on retention: A bivariate probit model. Journal of College Student Retention: Research, Theory, and Practice, 5(4), 431-437.

Burmeister, S. (1996). Supplemental instruction: An interview with Deanna Martin. Journal of Developmental Education, 20(1), 22-26.

Congos, D.H., & Schoeps, N. (1993). Does supplemental instruction really work and what is it anyway? Studies in Higher Education, 18(2), 165-178.

Congos, D.H., & Schoeps, N. (1999). Methods to determine the impact of SI programs on colleges and universities. Journal of College Student Retention, 1(1), 59-82.

Greene, W.H. (1998). Gender economics courses in liberal arts colleges: Further results. The Journal of Economic Education, 29(4), 291-300.

Greene, W.H. (2000). Econometric analysis. Upper Saddle River, NJ: Prentice Hall.

Hilmer, M.J. (2001). A comparison of alternative specifications of the college attendance equation with an extension to two-stage selectivity-correction models. Economics of Education Review, 20(3), 263-278.

Hodges, R., & White, W.G., Jr. (2001). Encouraging high-risk student participation in tutoring and supplemental instruction. Journal of Developmental Education, 24(3), 2-10, 43.

International Center for Supplemental Instruction. (2003). Supplemental instruction national data summary, 1998-2003. Retrieved October 24, 2006. Available: http://www.umkc.edu/cad/si/sidocs/National%20Supplemental%20Instruction%20Report%2098-03.pdf.

Kochenour, E.O., Jolley, D.S., Kaup, J.G., Patrick, D.L., Roach, K.D., & Wenzler, L.A. (1997). Supplemental instruction: An effective component of student affairs programming. Journal of College Student Development, 38(6), 577-586.

Martin, D.C., & Arendale, D. (1994). Review of research concerning the effectiveness of SI from the University of Missouri-Kansas City and other institutions across the United States. Paper presented at the Annual Conference of the Freshman Year Experience, Columbia, SC. (ERIC Document Reproduction Service No. ED 370 502)

McCarthy, A., & Smuts, B. (1997). Assessing the effectiveness of supplemental instruction: A critique and a case study. Studies in Higher Education, 22(2), 221-232.

Schwartz, M.D. (1992). Study session and higher grades: Questioning the causal link. College Student Journal, 26, 292-299.

Simpson, M.L., Hynd, C.R., Nist, S.L., & Burrell, K.I. (1997). College academic assistance programs and practices. Educational Psychology Review, 9(1), 39-87.

Visor, J.N., Johnson, J.J., & Cole, L.N. (1992). The relationship of supplemental instruction to affect. Journal of Developmental Education, 16(2), 12-18.

Weiler, W.C., & Pierro, D.J. (1988). Selection bias and the analysis of persistence of part-time undergraduate students. Research in Higher Education, 29(3), 261-272.

Widmar, G.E. (1994). Supplemental instruction: From small beginnings to a national program. In D.C. Martin & D.R. Arendale (Eds.), Supplemental instruction: Increasing achievement and retention (pp. 3-10). San Francisco, CA: Jossey-Bass.

TYLER J. BOWLES, ADAM C. MCCOY, AND SCOTT BATES

Utah State University

Table 1
Joint Frequency Table: SI Attendance and Timely Graduation

                     Timely graduation
SI attendance        Yes        No         Number
Yes                  34.3%      65.7%      1,084
No                   30.5%      69.5%      2,821

Table 2
Sample Descriptive Statistics

                     Mean HS GPA    Mean ACT Score
Attended SI          3.46           21.71
Did not attend SI    3.35           22.26
Graduated            3.50           22.84
Did not graduate     3.32           21.78

Table 3
Estimates of SI Attendance and Graduation Equations (Total Sample)

                   Single equation    Treatment effects model
Variable           Graduation         Graduation       SI attendance
SI attendance      0.1224 *           1.4610 *         --
                   (0.0474)           (0.1191)
ACT score          0.0590 *           0.0394 *         --
                   (0.0533)           (0.0053)
SEX                -0.1556 *          -0.0964 *        --
                   (0.0429)           (0.0355)
HSGPA              --                 --               0.3476 *
                                                       (0.0441)

Notes: Asymptotic standard errors are reported in parentheses.
* Indicates statistical significance at the 10% level.
