A Longitudinal Study Comparing Undergraduate Student Performance in Traditional Courses to the Performance in Online Course Delivery

Gary Ury
Department of Computer Science and Information Systems
Northwest Missouri State University
Maryville, MO 64468

ABSTRACT

As online instruction has become more prevalent at the college and university level, researchers have attempted to measure the success of these programs through a variety of methodologies, instruments, and sample sizes. A need for continued exploration and study to assure quality instruction has existed as technologies and pedagogies change. The purpose of this study was to compare course performance over time between online and traditional classroom students enrolled in a required management information systems course included in the business school's common professional component and in an elementary programming course taught by the Computer Science Department. In both courses the online delivery method was found to be effective, but online students' performance, as measured by final course grades, showed a significantly lower mean than that of students enrolled in traditional sections of the course.

Keywords: online student performance, online instruction, online teaching, grade comparisons

1. INTRODUCTION

Distance education has been available for several decades in the form of correspondence courses, instructional television, and computer-aided instruction. These courses were usually designed for specific markets that involved continuing education, certification, or recertification. While many were robust and contained quality training materials, this type of instruction was generally not considered to be of high enough quality to provide advanced degrees. When colleges and universities were involved with this type of instruction, it was generally through an outreach or extension center separated from the academic endeavors encompassed as part of a degree program (Uhlig 2002).

From the mid-1990s to the present, the growth and availability of the personal computer, coupled with the Internet, have caused great upheaval in the world of higher education. Delivery methodologies used for centuries in universities and colleges to educate degree-seeking students have come under fire as being ineffective and boring. Administrators, instructional technologists, students, and progressive teachers have been pushing to implement more technology into current curricula and to develop stand-alone online courses and programs. Teachers are being forced into new arenas of knowledge that have little to do with their subject matter and more to do with the delivery of materials in increasingly complicated and voluminous formats (Boser 2003; Frey, Faul, & Yankelov 2003; Garson 1998).

Few proponents of the online craze have stopped long enough to truly examine the impact of all these technological advancements on the student. Studies have fallen primarily into two general categories: pretest-posttest models and opinion surveys. Most of these studies have relatively small sample sizes, are performed over short time periods, and measure a single teacher's experience with the two delivery methods of traditional and online instruction (Bearden, Robinson, & Deis 2002; Cooper 2001; Holman 2000; Miller, Cohen, & Beffa-Negrini 2001; Smith, Smith, & Boone 2000; Thirunarayanan & Perez-Prado 2002). This study examines undergraduate student performance, as measured by final semester grade percentages, over a four-year period with the involvement of multiple instructors.
2. BACKGROUND

Previous studies that attempted to measure the quality of online versus traditional (classroom) instruction exist in fields ranging from dental hygiene to computer science. Methodologies vary between administering a pretest to a limited number of students followed by a posttest at some later date, and administering opinion surveys completed by students, teachers, or both. Despite its limitations, the pretest/posttest model is still a popular measure in quantitative studies performed in the area of education. The survey has been another popular form of instrumentation as researchers attempt to measure customer satisfaction and perception. A survey of recent studies using the above methodologies has found no significant difference between traditional and online delivery (Miller et al. 2001).

Holman (2000) performed a study of 42 students who completed an online library instruction module and 27 students who received classroom instruction as part of an English course. Holman's findings indicate no significant difference between the two groups on the pretest or the posttest. Thirunarayanan and Perez-Prado (2002) studied 29 online students and 31 classroom students in English as a second language courses. They found that the online students scored significantly lower on the pretest than the traditional section, but no significant difference on the posttest. Smith et al. (2000) completed a study of 58 students in an integrated software course and found no significant difference in academic outcomes. Outcomes were measured in the areas of lectures, guided instruction, and collaborative discussion. Miller et al. (2001) examined results from 35 online students and 434 traditional students over one semester in a nutrition course and found no significant differences using final course grades as the measure. Bearden et al. (2002) studied 54 dental hygiene students during the fall semesters of 1998 and 1999. They found no significant differences between online and traditional students on posttest results or on performance on the National Board Dental Hygiene Exams.

Significant differences have been found when GPA was examined. Bearden et al. (2002) stated that "trend analysis indicated that students with lower GPA who enrolled in online courses performed lower than on-campus students." Miller et al. (2001) also found that "older students taking the online course had significantly higher final grades than both their younger online and all large-class lecture counterparts."

Cooper (2001) surveyed 37 online students and 94 traditional students in a computer literacy course to determine their perceptions and used course grades to measure student performance. Cooper found no significant differences in grade distributions, but determined that newly instituted enrollment requirements for online classes (a minimum GPA of 2.5) may have contributed to a lower drop rate in the online course than was experienced in previous semesters. Cooper further reported that students did not view online courses as replacements for traditional classroom instruction. Garson (1998) conducted a study of an online and a traditional section of a history survey course during the 1997 summer session and found that 50% of the online students would have preferred a traditional format. Survey results reported by several researchers indicate that online courses offer flexibility with no loss of performance.
Cooper (2001) reported that, given the proper subject coupled with the right student and a capable teacher, online instruction can provide effective educational results. The 2003 Sloan Survey of Online Learning "polled academic leaders … [and] asked [them] to compare the online learning outcomes with those of face-to-face instruction; a majority said they are equal. Two out of every three also responded that online learning is critical to their long-term strategy" (Roach 2003, p. 1). Bednar (2002) reasoned that the technology that has made online instruction possible promotes better communication between student and teacher. Bednar pointed out that teachers and students have learned to perform in asynchronous manners that can make communication and interaction a 24-hours-a-day, 7-days-a-week process.

Although significant differences in performance were not generally reported in current studies and focus was placed on the benefits of asynchronous learning, the online environment is less than perfect as it currently stands. Students have criticized professors who compensate for a lack of lectures and classroom interaction with increased reading, exercises, and assignments (Bednar 2002). Students have complained about the lack of interaction with instructors and other students, hardware and software problems, and connectivity issues when enrolled in online courses (Beard & Harper 2002). Students have reported that it is much easier to fall behind in online courses and that a student must be self-motivated, possess a strong sense of personal responsibility, and have tremendous commitment to succeed in the online environment (Uhlig 2002). Teachers have cited increased development time requirements, the necessity to learn new technical skill sets, and increased effort in student assessment as major detractors of online course development (Boser 2003).

3. PURPOSE

The purpose of this study was to compare course performance and achievement over time between online and traditional (classroom) students enrolled in courses with varied subject matter and student demographics. The purpose was accomplished by determining whether there was a significant difference in course performance, as measured by final grade percentage, between students receiving online instruction and those receiving traditional classroom instruction.

4. METHODOLOGY

Two distinctly different, high-demand courses at a mid-sized university were used for this study. The first group contained students with primary interests in the field of business and/or information systems; this group contained no freshman students. The second group consisted of students who were interested in computer science as a major or minor; this group contained primarily freshman students. The selections were made intentionally to include a diversity of subject matter and student demographics.

The first group was enrolled in a Management Information Systems (MIS) course between the years of 1999 and 2004. This course customarily comprises two or three traditional sections and one online section each trimester. Three different teachers have been involved in the instruction of this class since 1999. The curriculum was team-developed to assure common presentation of subject matter and common assessment instruments. All three teachers have been involved with the traditional delivery model, while two of the teachers have been involved in developing and delivering the online course.
The number of students involved included 112 online students and 576 traditional students. These classes have traditionally been made up of late-sophomore and junior undergraduate business and information systems majors. The MIS course is a requirement of the business school's common professional component.

The second group studied was enrolled in an elementary programming course, using Visual Basic 6.0, between the years of 1999 and 2003. The purpose of this course was to introduce freshman students to the field of computer programming at a simple and basic level. Four different teachers have been involved in the instruction of this class since 1999. The curriculum was team-developed to assure common presentation of subject matter and common assessment instruments. All four teachers have been involved with the traditional delivery model, while three of the teachers have been involved in developing and delivering the online course. The number of students involved included 74 online students and 245 traditional students.

The MIS and beginning Visual Basic programming course curricula were developed and approved collaboratively by all teachers of each class and remained common among all sections regardless of delivery model. The MIS course contained introductory material that included the types, uses, and development of management information systems, in addition to application projects that included advanced uses of Microsoft Office Professional. The Visual Basic programming course was a basic programming course at or below the level of most Computer Science I courses.

In both courses the eCollege course management system was used to deliver instruction over the Internet to the online sections. The course website was also available to students in the traditional sections. A large portion of the online section typically included traditional (on-campus) students who elected the online format because of schedule conflicts or who were forced into the online section as the traditional sections filled. In most cases, students who were not true distance learners were required to come to campus for proctored exams. True distance learners were required, at a minimum, to participate in a proctored final exam, and it was the students' responsibility to recruit a local proctor acceptable to the MIS faculty.

It was further determined through interviews with admissions personnel and other administrators that no significant differences had been found between the University's online and traditional demographics. The online population was not more heavily weighted by non-traditional students, higher or lower ACT/SAT scores, gender, or age. In other words, this university's online and traditional populations were statistically equal in demographic measures and abilities.

The performance measure for this study was the final course grade expressed as a percentage. In addition to exams, students in both groups and in all delivery methods were required to produce exercises and projects, which were graded and included in the final grade determination. Final course grades were used because they represent a variety of performance measures and because they are the official performance recorded on student transcripts. Final course grades were examined for mean differences between online and traditional students within each study group.

5. FINDINGS

The final grade percentage for each student was collected and coded by course (group), by delivery method (online or traditional), and by instructor.
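To illustrate the structure of these analyses, the sketch below shows how coded final-grade records of this form could be analyzed with an independent t test (comparing delivery methods) and a one-way ANOVA (comparing instructors) in Python using scipy.stats. The records and results in this sketch are entirely hypothetical; the study's actual analyses were performed in SPSS, as described next.

# Illustrative only: hypothetical coded records (delivery, instructor, final grade %),
# analyzed with the same two procedures used in the study -- an independent t test
# comparing delivery methods and a one-way ANOVA comparing instructors.
from scipy import stats

# (delivery, instructor, final_grade_pct) -- made-up values for one course group
records = [
    ("traditional", "A", 0.88), ("traditional", "B", 0.84), ("traditional", "C", 0.86),
    ("traditional", "A", 0.81), ("traditional", "B", 0.90), ("traditional", "C", 0.83),
    ("online",      "A", 0.79), ("online",      "B", 0.83), ("online",      "A", 0.76),
    ("online",      "B", 0.85), ("online",      "A", 0.80), ("online",      "B", 0.74),
]

traditional = [g for d, _, g in records if d == "traditional"]
online      = [g for d, _, g in records if d == "online"]

# Independent t test: does the mean final grade differ by delivery method?
t_stat, p_value = stats.ttest_ind(traditional, online)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# One-way ANOVA: does the mean final grade differ by instructor?
by_instructor = [[g for _, i, g in records if i == name] for name in ("A", "B", "C")]
f_stat, p_value = stats.f_oneway(*by_instructor)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")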
Statistical analysis was performed using the SPSS for Windows version 11.5 software package. Student performance, as measured by final course grade percentage, was analyzed using the independent t-test procedure. Final course grade percentages were also compared between instructors using the one-way analysis of variance (ANOVA) procedure.

Table 1 illustrates the results of an independent t test comparing the mean MIS final grades of the traditional and online students. A significant difference was found between the means of the two groups (t(686) = 3.397, p < .01). The mean grade of students involved in traditional course delivery (m = .846, sd = .091) was significantly higher than the mean grade of students involved with online instruction (m = .811, sd = .130).

Table 2 illustrates the results of the analysis of variance (ANOVA) performed among the instructors of group 1 (the upper-class MIS course) to determine whether significant course grade differences existed. There was no significant difference among the three instructors (F(2, 685) = 2.647, p > .05). The mean final course grades ranged from 83.4% to 85.7%. The lack of significance among the instructor means indicated that students had an equal ability to perform regardless of course instructor.

Table 3 shows the results of an independent t test comparing the mean final grades of the traditional and online students in the elementary programming course. A significant difference was found between the means of the two groups (t(317) = 2.894, p < .01). The mean grade of students involved in traditional course delivery (m = .841, sd = .137) was significantly higher than the mean grade of students involved with online instruction (m = .781, sd = .206).

Table 4 shows the results of the analysis of variance (ANOVA) performed among the instructors of group 2 (the freshman programming course) to determine whether significant course grade differences existed. There was no significant difference among the four instructors (F(3, 315) = 2.349, p > .05). The mean final course grades ranged from 80.2% to 87.5%. The lack of significance among the instructor means indicated that students had an equal ability to perform regardless of course instructor.

6. CONCLUSIONS

The purpose of this study was to compare course performance and achievement over time between online and traditional (classroom) students enrolled in courses with varied subject matter and student demographics. This purpose led to the research question posed by this study: Was there a difference in course performance, as measured by final grade percentage, between students receiving online instruction and those receiving traditional classroom instruction at this university?

Analysis of the data illustrated a significant difference between the online student grades (81%) and the traditional student grades (85%) in the MIS course. The same held true for the beginning programming course, with online student grades averaging 78% while the traditional students' average was 84%. That being said, it is important to note that the online instruction model appeared to be an acceptable method of instruction, with respectable average student grades in the high 70s to low 80s. The fact remains, however, that students in the traditional sections outperformed the online students when the final course grade was used as the measure.
The data also led the researcher to a secondary investigation into the process of collaborative course development and delivery of high-demand courses. The data analysis indicated no significant differences among the three instructors of the MIS course or the four instructors of the basic programming course. The data suggest that a collaborative approach in high-demand, multiple-teacher courses can provide successful and relatively consistent results in student achievement regardless of delivery model. In this study, neither the gender, age, nor experience of the teacher had a significant effect on the students' ability to perform in either an upper-class course or one designed primarily for freshmen. The delivery method, however, did have a significant effect on student performance, with students involved with traditional instruction achieving approximately one-half letter grade higher than those participating in online instruction.

7. LIMITATIONS

Some situations may have been simplified or ignored because they reside outside the scope of the defined purpose or were uncontrollable. This study included a large sample collected over a four- to five-year period, but it examined only two courses at one mid-sized, moderately selective university in the Midwest. This study also limited the measure of performance to the single criterion of final course grade. No attempt was made to collect demographic information to determine differences in age, gender, or GPA, although the administrative personnel of the university had found no overall differences between online and traditional students. Some students who would not normally choose online instruction may have felt forced into online sections because traditional sections were full by the time they were allowed to enroll.

8. IMPLICATIONS

This study suggests three major implications for practice. First, online instruction can be effective, but it may exhibit a significant difference in performance when compared to similar traditional classroom courses. Second, quality, well-developed curricula designed for traditional courses can be effectively integrated into the online instruction model. Third, collaboratively developed curricula can benefit teachers and students in high-demand courses by providing a consistent model of instruction.

The two groups used in this study were radically different in subject matter, student composition, and instructors, yet in both there were statistically significant differences in overall course performance between online and traditional classroom students. The fact that those differences in mean scores were fairly consistent (MIS online 81%, MIS traditional 85%, programming online 78%, and programming traditional 84%) lends strength to the argument that online students are missing something that the classroom students are not. In spite of developers' attempts to provide additional learning experiences to replace the classroom, something more is needed to help the online student achieve at the same level as the traditional classroom student.

Additional literature review provided important guidelines for teachers who develop online courses. Ragan (2003) suggested that online course developers consider where students get the necessary information, the course timeline, the learning style employed (asynchronous, synchronous, or mixed), the number of lessons/assignments, and the technical proficiency level of the students.
Ragan further instructs the online developer to consider breaking development activities into five major categories:

* User interface
* Media elements
* Software
* Permissions
* Accessibility (p. 3)

The quality of online course design is increased by detailing the activities to be accomplished under each of these major categories and by remaining student-centered (Ragan 2003).

Most institutions of higher education are at least investigating, if not embracing, online course delivery. The developers of these courses and programs will be current university and college teachers. It is their responsibility to continue to experiment with and improve online instruction so that it facilitates student learning at the same level as that of traditional classroom students. There is a continuing need for more studies of this nature to track the performance of the large variety of students exposed to ever-expanding online course offerings. Continuing diligence is required to aggressively monitor student performance in this new and emerging educational delivery model.

9. REFERENCES

Beard, L. A. & Harper, C. (2002, Summer). Student perception of online versus on campus instruction. Education, 122(4), 658-663. Retrieved November 21, 2003, from the Expanded Academic database.

Bearden, E. B., Robinson, K., & Deis, M. H. (2002, Summer). A statistical analysis of dental hygiene students' grades in online and on-campus courses and performance on the National Board Dental Hygiene Exams. Journal of Dental Hygiene, 76(3), 213-217. Retrieved November 20, 2003, from the Expanded Academic database.

Bednar, J. (2002, April). Targeting distance learning. Business West, 18(12), 73-76. Retrieved November 20, 2003, from the MasterFILE Elite database.

Boser, U. (2003, October 20). Working on what works best. U.S. News & World Report, 135(13), 58-61. Retrieved November 20, 2003, from the MasterFILE Elite database.

Cooper, L. W. (2001, March). A comparison of online and traditional computer applications classes. T.H.E. Journal, 28(8), 52-55. Retrieved November 20, 2003, from the MasterFILE Elite database.

Frey, A., Faul, A., & Yankelov, P. (2003, Fall). Student perceptions of web-assisted teaching strategies. Journal of Social Work Education, 39(3), 443-457. Retrieved November 21, 2003, from the Expanded Academic database.

Garson, G. D. (1998, September). Evaluating implementation of web-based teaching in political science. PS: Political Science & Politics, 31(3), 585-590. Retrieved November 21, 2003, from the Expanded Academic database.

Holman, L. (2000, Fall). A comparison of computer assisted instruction and classroom bibliographic instruction. Reference & User Services Quarterly, 40(1), 53. Retrieved November 21, 2003, from the Expanded Academic database.

Miller, B., Cohen, N. L., & Beffa-Negrini, P. (2001, Winter). Factors for success in online and face-to-face instruction. Academic Exchange Quarterly, 5(4), 4-10. Retrieved November 21, 2003, from the Expanded Academic database.

Ragan, L. C. (2003, February 1). Defining quality on your own terms. Distance Education Report, 7(3), 3, 6. Retrieved November 21, 2003, from the Education Full Text database.

Roach, R. (2003, September 25). Survey says online learning equal to classroom instruction. Black Issues in Higher Education, 20(16), 44. Retrieved November 20, 2003, from the Academic Search Elite database.

Smith, S. B., Smith, S. J., & Boone, R. (2000, Spring). Increasing access to teacher preparation: The effectiveness of traditional instructional methods in an online learning environment. Journal of Special Education Technology, 15(2), 37-46. Retrieved November 21, 2003, from the Education Full Text database.
Thirunarayanan, M. O. & Perez-Prado, A. (2002, Winter). Comparing web-based and classroom-based learning: A quantitative study. Journal of Research on Technology in Education, 34(2), 131-137. Retrieved November 21, 2003, from the Education Full Text database.

Uhlig, G. E. (2002, Summer). The present and future of distance learning. Education, 122(4), 670-673. Retrieved November 21, 2003, from the Expanded Academic database.

10. TABLES

Table 1
T-Test Results Between Online Students and Traditional Students of a Management Information Systems Course

Group         N     Mean     F       t       DF    Sig.
Traditional   576   84.6%
Online        112   81.1%    5.737   3.397   686   0.001*

*Note: p < 0.01

Table 2
Analysis of Variance for Final MIS Course Grades by Instructor

                 SS      DF    MS      F       p
Between Groups   0.052   2     0.026   2.647   0.072
Within Groups    6.668   685   0.010
Total            6.719   687

Table 3
T-Test Results Between Online Students and Traditional Students of a Beginning Programming Course

Group         N     Mean     F        t       DF    Sig.
Traditional   245   84.1%
Online        74    78.1%    13.743   2.894   317   0.004*

*Note: p < 0.01

Table 4
Analysis of Variance for Final Course Grades in a Programming Course by Instructor

                 SS      DF    MS      F       p
Between Groups   0.173   3     0.058   2.349   0.072
Within Groups    7.710   315   0.024
Total            7.882   318
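As an illustrative cross-check (not part of the original SPSS analysis), the pooled-variance t statistics reported in Tables 1 and 3 can be approximately recomputed from the group means, standard deviations, and sample sizes given in the Findings section. The short Python sketch below does this; because the published means and standard deviations are rounded, the recomputed values differ slightly from the reported statistics.

# Rough cross-check of the reported t statistics from the rounded summary values
# given in the Findings section (means, standard deviations, sample sizes).
from math import sqrt

def pooled_t(m1, s1, n1, m2, s2, n2):
    # Equal-variance (pooled) independent t statistic and its degrees of freedom.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
    se = sqrt(sp2 * (1 / n1 + 1 / n2))                           # standard error of the mean difference
    return (m1 - m2) / se, n1 + n2 - 2

# MIS course (Table 1): traditional (m=.846, sd=.091, n=576) vs. online (m=.811, sd=.130, n=112)
print(pooled_t(0.846, 0.091, 576, 0.811, 0.130, 112))  # approx. (3.4, 686); reported t(686) = 3.397
# Programming course (Table 3): traditional (m=.841, sd=.137, n=245) vs. online (m=.781, sd=.206, n=74)
print(pooled_t(0.841, 0.137, 245, 0.781, 0.206, 74))   # approx. (2.9, 317); reported t(317) = 2.894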