Use of Online Assessment Tools to Enhance Student Performance in Large Classes

Donald L. Amoroso
Chair, Computer Information Systems
College of Business Administration
Appalachian State University
Boone, North Carolina, United States 28608
amoroso@appstate.edu
+1.828.262.2411

Abstract

This research addresses the use of online assessment tools for large classes. Research has reported an increasing use of technologies for enhancing learning within large classes. Hybrid classes are those that use traditional lectures and examinations in conjunction with online teaching, learning modules, and online assessment tools. This paper focuses on the online assessment tool and addresses the question, "Will students perform better using online assessment technologies?" Two hybrid classes, each with an enrollment of 500 students, utilized the online assessment tool by McGraw-Hill called SimNet. Students were assessed four times on Microsoft Office applications and completed a pre-test and post-test online assessment, while also completing three traditional, in-class examinations. Correlation analysis and linear regression were used to ascertain relationships as well as impacts on overall grade. It was found that, when adjusted for sample size, online assessments and traditional examinations both contribute to final grades for students in large mega-sections. Attendance was found neither to have a strong correlation with final grade nor to be significant in predicting final grade. Students who performed better on online assessments also performed better on traditional examinations. Finally, the gap-closing measure of performance was not a strong predictor of final grade, indicating that, although pre-test and post-test comparisons are the most common measures of learning, other measures may need to be used to predict final grades when using online assessment tools.

Keywords: assessment, online, student performance, large class size

1. Introduction

This paper describes a study where online assessments were introduced in large classes. There has been considerable debate as to whether making changes in the pedagogy of large classes will be reflected in enhanced learning and higher student productivity. Will students perform better using online assessment technologies? The literature regarding the infusion of technology in large classes portrays significant learning for courses where technology applications and concepts were taught, as measured by pre-test and post-test comparisons, traditional examinations, and final course grades. Large 500-seat classes in a southern California university were used to investigate the impact of online assessments on student performance. McGraw-Hill's SimNet online assessment and training tool was utilized, and pre-test and post-test assessments were administered in two classes. Several hypotheses were investigated. First, it was hypothesized that students who attended classes would perform better on online assessments. Second, students who performed better on online assessments would perform better on traditional in-class examinations. Third, students who performed better on online assessments would also perform better on the gap-closing metric from the pre-test/post-test. The remainder of this paper describes the analysis and findings from the study.

2. Literature Review

Online technologies are changing the way students learn in and out of the classroom.
Online communication through technology has the potential to change the way in which people learn (Lea and Spears, 1991). Online learners report attitudes of greater control and responsibility toward their learning (Schrum, 1998). The traditional lecture has been criticized as lacking sufficient interaction between the professor and students while not allowing the student the ability to interact with technology to learn a variety of subjects (Christopher, 2002). The use of online technology can promote creative thinking for students. Waite and Bromfield (2002) found that, with the use of a computerized peer tutoring system, mathematics majors can enhance their cognitive levels by trying to teach each other in an active environment. Leidner and Jarvenpaa (1995) suggest that technology can be used to change course pedagogy, but that it can also enhance teaching effectiveness with fewer pedagogical changes. According to Ricketts and Wilks (2002), computer-based assessment can benefit students because it can improve their performance in their assessments. Students liked the speed of marking and feedback, and this acceptability was related to the format of the test, i.e., simulated examinations with individual questions and no scrolling (Ricketts and Wilks, 2002). Noyes, Garland, and Robbins (2004) conducted a study on paper-based and computer-based assessments in which they compared the test performances of undergraduate students who took each test type. Given identical multiple-choice questions, students who took the computer-based assessment achieved better results than those taking the paper-based one (15% higher). Students with high scores were found to benefit most from the computer-based assessment. According to Cameron (2003), the effectiveness of using simulations to teach and assess has been studied, showing that simulations can effectively enhance the hybrid classroom. Scores on individual assessments were significantly higher for students who participated in online simulations (p=0.000), as were team project scores (p=0.001), mid-term scores (p=0.017), and final exam scores (p=0.008). For some educators, online assessment provides an avenue for designing authentic, relevant tasks in order to assess student learning outcomes (Northcote, 2002). Online students are able to take advantage of doing the online assessments from a variety of locations. Also, online assessment techniques can provide an efficient means by which to collate and distribute student grades. Conversely, the use of online assessment systems can present challenges associated with plagiarism, equity, and the cost of specific software licenses (Northcote, 2002). In the Kozma (2003) study, the reported outcomes suggest that online tutorials alone may not have as great an impact on student learning as technology-based projects and technology used to manage information. Comeaux and Neer (1995) compared two instructional formats in a basic hybrid course on oral and test performance. The results indicated that the interpersonal-first format and the public speaking-first format had different impacts on students with high apprehension levels. The study shows that the interpersonal-first format resulted in lower state anxiety levels than the public speaking-first format of instruction, supporting the hybrid class format. Lage et al.
(2000) examined the efficacy of the hybrid course by studying an introductory economics course and found a positive student reaction to the introduction of innovative technologies. They examined the relative efficacy of the online assessments compared to traditional classroom assignments and found a significant difference in relative performance when using online assessments. A study by Schulman and Sims (1999) looked at pre-test and post-test scores of students enrolled in online and in-class versions of the same course taught by the same instructor over a variety of disciplines. In this case, the students' participation was voluntary, and they chose how they were to take the classes. In comparing the pre-test scores, the online students' scores were higher on average than the in-class students' scores (t=2.82, df=97, p=0.0059). In comparing the post-test scores, no difference was found between the online and in-class students' scores (t=0.06, df=97, p=0.9507). Results of this research suggested that even though the online students may have been better prepared for the course material than the students who selected in-class courses, this preparedness did not necessarily lead to greater learning. The final exam results of both sets of students did provide support for the effectiveness of the online courses. Riffell and Sibley (2003) found that students rated online homework as providing the highest value in their education, with 76% of students rating it as excellent. E-mail contact with instructors was also very important, with 50% of students rating it as excellent (5 on a five-point scale). In the hybrid course, 66% of the students felt that the format increased their interaction with the professor, as compared to traditional lecture formats. Caywood and Duckett (2003) looked at the performance of on-campus and online students during one specific course considered instrumental for the development of theory and methods in training teachers. While courses with online assessment components have elsewhere been shown to produce better performance scores, their results show no significant differences between quantitative measures of online versus on-campus learning and suggest that there is no actual difference regarding learning.

3. Research Hypotheses

The results of these studies show moderate, if not very strong, improvements in the academic performance of students registered in large classes where technology is infused to create a hybrid learning approach. This study is devoted to examining the impacts of just the online assessment component of the hybrid classroom to ascertain changes in student performance. The following hypotheses are related to the evaluation of online assessments in large, hybrid classes.

Hypothesis 1: Attendance

One of the problems stated by faculty who teach mega classes is that "students do not attend the large mega class sessions" (Christopher, 2002). It is hypothesized that students who do not attend class as regularly as other students will not perform as well on online assessments. Comeaux and Neer (1995) found that students working toward grades in specific academic units were more likely to succeed in online work when spending physical time in class than when assessed in a hybrid course.

H1a: Students who attend class will perform better on online assessments.
H1b: Students who attend class will achieve higher final grades.
Hypothesis 2: Better traditional test performance

Students who participated in online assessments were found to perform better on traditional in-class examinations (McCray, 2000). The main effect (treatment via online teaching mechanisms) was found to be significant in predicting both examination performance (on all three in-class exams) and final grade point average (p<=.05).

H2: Students who perform better on online assessments will also perform better on in-class traditional examinations.

Hypothesis 3: Differentiated online pre-test, post-test performance

Some research has reported that students enrolled in "experimental"-oriented sections of large-section courses did better than those in traditional-oriented sections (Emerson and Taylor, 2004). Cameron (2003) found that online simulation programs were more effective in enhancing post-test performance over previously measured pre-test performance.

H3: Students who receive higher scores on online assessments will receive higher scores on the post-test assessment and on the gap-closing measure of performance.

4. Methodology

In this research, utilizing a large classroom setting to teach principles of information systems, students were given a variety of online assessment exercises to ascertain their performance levels. The online simulation program employed was McGraw-Hill's SimNet, which delivers simulated content for Microsoft Office applications, the Internet, and information systems concepts. Two large classes taught at a university in the southwest United States, both almost 500 students in size, were used to ascertain the effectiveness and efficiency of using SimNet as an online assessment tool. For purposes of pre- and post-testing, the online tool also included two overview assessments at the beginning and the end of the course. Three types of measures were used:

1. Pre-test and post-test comparison using both (1) the absolute difference between pre- and post-course online assessment exams and (2) a gap-closing measure defined as the difference between post- and pre-course scores expressed as a percentage of the maximum possible point improvement available based on the student's pre-course assessment score (Emerson and Taylor, 2004); a short computational sketch of this measure follows the administration guidelines below.

2. Online assessment versioning, where categorical questions are used to compare means from groups of students taking similar questions drawn from a pool of homogeneous assessment areas (Comeaux and Neer, 1995).

3. Traditional in-class examinations (three in each class), administered in both classes in order to assess the impact of student knowledge in achieving learning outcomes (Finn, et al., 2003).

Administration of online assessments

Through this online tool, students completed four assessments that involved problem solving with Excel, the Internet, Access, and PowerPoint. The administration of online assessments to such a large student body required pre-established guidelines. First, each student was responsible for understanding and properly using the SimNet software. Second, each online assessment had 25 simulated questions and students had a 30-minute time limit in which to finish the assessment. Third, each assessment had a specific date on which it had to be taken and was open from 6:00 am to 11:59 pm on that date. Fourth, each of the four non-pre/post-test assessments (Excel, Internet, Access, and PowerPoint) counted for 10% of the final grade. Fifth, students had only one opportunity to take each assessment. If a student was logged out by mistake, he or she could resume the assessment on the same computer.
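To make the gap-closing measure concrete, the following is a minimal computational sketch. It assumes each assessment is scored on a 0-100 scale; the function name and example values are illustrative only (the example uses the mean pre- and post-test scores reported later in Table 4) and are not part of the SimNet tool.

    def gap_closing(pre_score, post_score, max_score=100.0):
        """Gap-closing measure (Emerson and Taylor, 2004): improvement from
        pre-test to post-test expressed as a percentage of the maximum
        possible improvement given the student's pre-test score."""
        room_to_improve = max_score - pre_score
        if room_to_improve == 0:
            return 0.0  # student was already at the ceiling; no gap to close
        return 100.0 * (post_score - pre_score) / room_to_improve

    # Example: a student moving from 73.9 to 87.1 closes about half the gap.
    print(round(gap_closing(73.9, 87.1), 1))  # -> 50.6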
In addition, three in-class examinations were given during the course and each counted for 20% of the final grade. The first exam covered computer concepts and the Windows, Office, Web, and Word applications. The second exam covered computer concepts and the Excel and Internet applications. The third exam covered concepts and the Access and PowerPoint applications. The first two exams had a duration of one hour, and the final exam had a duration of two hours. None of the examinations was cumulative; each contained material specific to the content presented in class and tested in the online assessments.

5. Analysis

Based upon the data, the analysis focuses on determining the effectiveness of using online assessments in large hybrid classes. The analysis discusses results of the traditional in-class examinations, online assessments, the pre-test/post-test, relationships between online assessment and traditional examination scores, and predictors of final grade.

Traditional in-class examinations

Table 1 compares the three in-class examinations in terms of mean score, median, and standard deviation. In general, all of the examinations were in the C or C+ grading range, which indicates proficiency. Students taking this class had to show a C or better to be admitted to the College of Business and take upper-division classes. The performance of the students on the first two exams was better than on the third. The third examination focused on Access and PowerPoint applications. Students had had less exposure to Access applications prior to the start of the classes and overall felt much less confident after taking both the online assessment and the examination related to Access.

Table 1 – Traditional, In-class Examination Scores

                    Exam 1    Exam 2    Exam 3
N                   918       904       904
Missing             16        30        30
Mean                79.21     79.85     72.39
Median              80.00     81.00     73.00
Std. Deviation      11.11     7.39      9.23
Variance            123.44    54.57     85.15
Minimum             0.00      0.00      0.00
Maximum             100.00    100.00    99.00
Percentile 25%      73.00     76.00     68.00
Percentile 50%      80.00     81.00     73.00
Percentile 75%      86.00     85.00     78.00

Online assessments

Table 2 summarizes the four primary online assessment scores, excluding the pre-test and post-test. The scores from the online assessments clearly fall into two categories: a high-end range (mean >= 85) for the Internet and PowerPoint assessments and a low-end range (mean <= 72) for the Excel and Access assessments. Even after classroom lecture via traditional methods and online learning modules, students performed significantly differently on these assessments. Table 3 shows the one-sample t-tests, adjusted for sample size, for the two clusters of assessments. There is a statistically significant difference between the means of the two clusters when the sample is adjusted for size. This may indicate a certain predisposition toward the more quantitative and/or technical learning across large classes regardless of the type of technology used.

Table 2 – Online Assessments (Excel, Internet, Access and PowerPoint)

                    Excel         Internet      Access        PowerPoint
                    (Assess #2)   (Assess #3)   (Assess #4)   (Assess #5)
N                   890           879           789           817
Missing             44            55            145           117
Mean                71.25         86.35         69.96         85.29
Median              72.00         88.00         70.00         90.00
Std. Deviation      18.75         10.91         17.67         13.82
Variance            351.52        119.09        312.37        191.02
Minimum             12.00         36.00         17.00         13.00
Maximum             100.00        100.00        100.00        100.00
Percentile 25%      60.00         80.00         60.00         80.00
Percentile 50%      72.00         88.00         70.00         90.00
Percentile 75%      84.00         96.00         83.00         97.00

Table 3 – Comparing Low- and High-end Range Performance on Online Assessments

Online Assessment    t          df    Sig. (2-tailed)   Mean Difference   95% CI (Lower)   95% CI (Upper)
Low-end cluster      113.374    889   0.000             70.252            70.002           72.490
High-end cluster     234.591    878   0.000             86.348            85.630           87.070
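The one-sample t-tests summarized in Table 3 can be reproduced with standard statistical routines. The sketch below is illustrative only: the raw per-student scores are not reproduced here, so simulated stand-ins with the means, standard deviations, and sample sizes from Table 2 are used, and scipy is assumed as the statistics package.

    import numpy as np
    from scipy import stats

    # Simulated stand-ins for per-student scores; parameters are from Table 2.
    rng = np.random.default_rng(0)
    excel = np.clip(rng.normal(71.25, 18.75, 890), 0, 100)     # low-end cluster example
    internet = np.clip(rng.normal(86.35, 10.91, 879), 0, 100)  # high-end cluster example

    # One-sample t-tests of each cluster mean, analogous to Table 3.
    t_low, p_low = stats.ttest_1samp(excel, popmean=0.0)
    t_high, p_high = stats.ttest_1samp(internet, popmean=0.0)
    print(f"low-end cluster:  t = {t_low:.1f}, p = {p_low:.3g}")
    print(f"high-end cluster: t = {t_high:.1f}, p = {p_high:.3g}")

    # A direct comparison of the two clusters could instead use an
    # independent-samples (Welch) t-test:
    print(stats.ttest_ind(internet, excel, equal_var=False))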
Pre- and post-test results

Table 4 shows the results of the pre-test and post-test scores. Students were tested on the same body of material in both the pre-test and post-test assessment questions. Students took both the pre-test and post-test assessments in controlled computer labs at the university. The mean scores indicate a significant increase from pre-test to post-test of more than 13 points. According to the t-test results shown in Table 5, performance on the post-test was statistically better than performance on the pre-test, adjusted for sample size.

Table 4 – Pre-test and Post-test Scores

                    Pre-test      Post-test
                    (Assess #1)   (Assess #6)
N                   818           822
Missing             116           112
Mean                73.91         87.14
Median              74.00         90.00
Std. Deviation      10.62         13.63
Variance            112.87        185.86
Minimum             4.00          0.00
Maximum             100.00        100.00
Percentile 25%      70.00         80.00
Percentile 50%      74.00         90.00
Percentile 75%      82.00         97.00

Table 5 – One Sample t-Test (Pre-test and Post-test Online Assessments)

Online Assessment       t          df    Sig. (2-tailed)   Mean Difference   95% CI (Lower)   95% CI (Upper)
Pre-test (Assess #1)    225.984    933   0.000             73.912            73.180           74.640
Post-test (Assess #6)   183.254    933   0.000             87.137            86.200           88.070

Correlations

Table 6 illustrates the relationships between performance on online assessments, traditional examinations, attendance, and grade in the class. There are strong relationships between all of the key variables in this study (p<0.001). Online assessments were considered in two ways for this analysis: (1) the core four assessments, excluding the pre-test and post-test, and (2) all six assessments taken in aggregate. The rationale for taking all of the assessments in aggregate lies in the pedagogical justification that simply taking online assessments helps to strengthen traditional exam scores, regardless of their content. To a moderate extent, it was found that taking online assessments enabled students to perform more effectively on traditional, in-class examinations (r=0.595). Although statistically significant, attendance showed only a modest relationship with both online assessment performance (r=0.314) and traditional examination performance (r=0.399). It is not surprising that traditional examinations played a more significant role in predicting final grade (r=0.874) than did the core online assessment group (r=0.580), because examinations were worth 60% of the total grade whereas online assessments were worth 40%. When corrected for weighting, performance on online assessments was found to equal traditional examinations in correlation (r=0.883).

Table 6 – Correlation Matrix Related to the Aggregate Scores

                                ATT(b)    ASS 1-6(c)   ASS 2-5   EXAM 1-3
GRADE(a)   Correlation          .366**    .631**       .580**    .874**
           Sig. (2-tailed)      0.000     0.000        0.000     0.000
           N                    916       916          916       916
ATT        Correlation                    .399**       .314**    .399**
           Sig. (2-tailed)                0.000        0.000     0.000
           N                              927          923       924
ASS 1-6    Correlation                                 .898**    .595**
           Sig. (2-tailed)                             0.000     0.000
           N                                           925       924
ASS 2-5    Correlation                                           .371**
           Sig. (2-tailed)                                       0.000
           N                                                     920

(a) Dependent variable: Grade in class. (b) Attendance. (c) Online assignments: Assignment 1 (Pre-test), Assignment 2 (Excel), Assignment 3 (Internet), Assignment 4 (Access), Assignment 5 (PowerPoint), and Assignment 6 (Post-test).
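A correlation matrix such as Table 6 can be generated directly from the course gradebook. The following is a minimal sketch under assumed names: the file gradebook.csv and the columns ATT, GRADE, assess_1 through assess_6, and exam_1 through exam_3 are illustrative, not SimNet exports, and forming the composites as simple averages is likewise an assumption.

    import pandas as pd

    # Hypothetical gradebook layout: one row per student.
    df = pd.read_csv("gradebook.csv")
    df["ASS_1_6"] = df[[f"assess_{i}" for i in range(1, 7)]].mean(axis=1)
    df["ASS_2_5"] = df[[f"assess_{i}" for i in range(2, 6)]].mean(axis=1)
    df["EXAM_1_3"] = df[["exam_1", "exam_2", "exam_3"]].mean(axis=1)

    # Pairwise Pearson correlations; missing values are dropped pairwise,
    # which is why the Ns in Table 6 vary from cell to cell.
    print(df[["ATT", "ASS_1_6", "ASS_2_5", "EXAM_1_3", "GRADE"]].corr(method="pearson"))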
Prediction of final grade

Linear regression analysis was used to ascertain the impact of the three predictor variables (attendance, online assessments, and traditional examinations) on students' final grades. Table 7 shows the results of the linear regression model. The overall model shows a strong goodness of fit (F=1869.356, p<0.001). The overall prediction of the model in explaining final grade was R2=0.86, which is to be expected given that the major components of performance are represented in the final grade. It should be noted that although online assessments and traditional examinations entered the regression model for prediction of final grade, attendance did not (p=0.810). An impact of class attendance on improving a student's final grade was therefore not demonstrated in this study.

Table 7 – Regression Analysis Results

Source        Sum of Squares   df    Mean Square   F-value    Sig.
Regression    64726.22         3     21575.407     1869.356   .000(a)
Residual      10514.423        911   11.542
Total         75240.642        914

(a) Predictors: (Constant), Exams 1 to 3, Assignments 2 to 5, and Attendance (ATT)
(b) Dependent Variable: Final Grade

R         R2       Adjusted R2   Std. Error of the Estimate
.927      0.86     0.86          3.397

               Unstandardized Coefficients     Standardized Coefficients
               B          Std. Error           Beta            t-value    Sig.
Constant       9.988      0.934                                10.698     0.000
Attendance     -0.002     0.006                -0.003          -0.241     0.810
Ass. 2 to 5    0.194      0.008                0.330           24.785     0.000
Exam 1 to 3    0.723      0.013                0.767           56.011     0.000
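For completeness, the regression reported in Table 7 can be reproduced with any ordinary least squares routine. The sketch below uses statsmodels and the same hypothetical gradebook columns as in the correlation sketch above; it is an illustration of the analysis, not the original analysis script.

    import pandas as pd
    import statsmodels.api as sm

    # Same hypothetical gradebook layout as in the correlation sketch.
    df = pd.read_csv("gradebook.csv")
    df["ASS_2_5"] = df[[f"assess_{i}" for i in range(2, 6)]].mean(axis=1)
    df["EXAM_1_3"] = df[["exam_1", "exam_2", "exam_3"]].mean(axis=1)

    # OLS with attendance, online assessments, and exams predicting final grade,
    # mirroring the model summarized in Table 7.
    X = sm.add_constant(df[["ATT", "ASS_2_5", "EXAM_1_3"]])
    model = sm.OLS(df["GRADE"], X, missing="drop").fit()
    print(model.summary())  # R-squared, F-statistic, and coefficients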
6. Conclusions

Discussion

The purpose of this study was to examine the impact of online assessment technologies on the performance of students in large hybrid classes. Research on using online assessment tools to create hybrid classes, which combine technology for learning and assessment with traditional lecture and examination approaches, was examined, and three hypotheses were developed and tested in this paper.

The first hypothesis, that students who attend class will perform better on online assessments, was only moderately supported in the correlation analysis (r=0.314) and was not supported in the regression analysis where final grade was the dependent variable (p=0.810). Therefore hypothesis 1 was not fully supported.

The main focus of the second hypothesis is the link between online assessment performance and performance on traditional in-class examinations. There is a strong, statistically significant relationship between performance on the online assessments and traditional examinations (r=0.595). The relationship with examination performance was stronger when all six online assessments were considered than when only the four assessments that counted toward the students' final grade were considered (r=0.371). It should be noted that the pre-test assessment (#1) and the post-test assessment (#6) did not count as part of the final grade. This effect could be due to the confidence level that students felt after completing assessment #1 (pre-test) going into assessment #2 (Excel). It was found that students who performed better on online assessments also performed better on traditional examinations. It could be argued that online assessments better prepared students for traditional in-class examinations. Therefore hypothesis 2 was strongly supported.

In the final hypothesis, the intent was to relate the scores received on aggregate online assessments to the gap-closing measure of pre- and post-test performance. While there was a significant improvement (t-test, p<0.001) from the pre-test score (mean = 73.9) to the post-test score (mean = 87.1), there was not a strong relationship (albeit statistically significant after adjusting for sample size) between the gap measure and the final grade (r=0.187). Interestingly, the gap measure was found to have the greatest impact on the third traditional examination (r=0.302), and the third examination had the strongest correlation (over the other exams) with the final grade (r=0.683). Therefore hypothesis 3 was weakly supported.

Limitations

External collaboration on online assessments has proven to be problematic, as with traditional paper assignments (Kozma, 2003). The extent of collaborative efforts increased from 24% with small class sizes to 42% in larger class sizes. This research did not control for external collaborations, while recognizing their effect on the research data. This study did not explicitly tie specific assessments to the content of specific examinations, such as the Access online assessment to the exam covering Access. There was no attempt to differentiate between the two classes.

Future research

This research showed strong findings that students' performance on online assessments strongly affects both their performance on traditional examinations and their final grade in large hybrid classes. The data collected in this study might be better analyzed by looking at both the measurement model and the predictive model simultaneously using second-generation multivariate techniques. Future research might also include correlating the use of online assessments with student satisfaction surveys and self-reported course evaluation scores. It could be argued that the use of online assessment and simulation tools for teaching is just in its infancy and that further study as to its effective assimilation could enhance not only student learning but also student confidence and satisfaction.

7. References

Benson, A. D. (2001). "Assessing Participant Learning in Online Environments," New Directions.

Berke, W. J. and T. L. Wiseman (2003). "The e-learning answer," Nursing Management, Oct 2003, Research Library Core, pp. 2-6.

Brooks, B., F. Rose, E. Attree and A. Elliot-Square (2002). "An evaluation of the efficacy of training people with learning disabilities in a virtual environment," Vol. 24, No. 11-12, pp. 622-626.

Cameron, B. (2003). "The Effectiveness of Simulation in a Hybrid and Online Networking Course," TechTrends, Vol. 47, No. 5, pp. 18-21.

Chatel, R. (2003). "The power and potential of electronic literacy assessment: eportfolios and more," The NERA Journal, Vol. 39, No. 1, pp. 51-58.

Christopher, D. (2002). "Interactive Large Lecture Classes and the Dynamics of Teacher/Student Interaction," Journal of Instructional Delivery Systems, Vol. 17, No. 1, pp. 13-18.

Caywood, K. and J. Duckett (2003). "Online vs. On-Campus Learning in Teacher Education," Teacher Education and Special Education, Vol. 26, No. 2, pp. 98-105.

Clarke, T. and A. Hermes (2001). "Corporate developments and strategic alliances in e-learning," Education and Training, Vol. 43, No. 4/5, pp. 256-268.

Comeaux, P. and M. Neer (1995).
"A Comparison of Two Instructional Formats in a Basic Hybrid Course on Oral and Test Performance," The Southern Communication Journal, Spring 1995, Vol. 60, No. 3, pp. 257-265.

Cooper, L. W. (2000). "Online and Traditional Computer Applications Classes," T.H.E. Journal, Vol. 28, No. 8, pp. 52-58.

Daniels, W. J. and S. Salisbury (2002). "Using the Internet to Achieve Your Workplace Training Objectives," Applied Occupational and Environmental Hygiene, Vol. 17, No. 12, pp. 814-817.

Emerson, T. and B. Taylor (2004). "Comparing Student Achievement Across Experimental and Lecture-Oriented Sections of a Principles of Microeconomics Course," Southern Economic Journal, Jan 2004, Vol. 70, No. 3, pp. 672-693.

Finn, J., G. Pannozzo, and C. Achilles (2003). "The 'Why's' of Class Size: Student Behavior in Small Classes," Review of Educational Research, Fall 2003, Vol. 73, No. 3, pp. 321-368.

Huang, A. H. (1997). "Online Training: A New Form of Computer-Based Training," Journal of Education for Business, Sep/Oct 1997, Vol. 73, No. 1, pp. 35-37.

Kozma, R. (2003). "Technology and Classroom Practice: An International Study," Journal of Research on Technology in Education, Vol. 31, No. 1, pp. 1-13.

Lage, M., G. Platt, and M. Treglia (2000). "Inverting the classroom: A gateway to creating an inclusive learning environment," Journal of Economic Education, Vol. 31, pp. 30-44.

Leidner, D. and S. Jarvenpaa (1995). "The information age confronts education: Case studies on electronic classrooms," Information Systems Research, Vol. 12, No. 4, pp. 265-291.

McCray, G. (2000). "The hybrid course: Merging on-line instruction and the traditional classroom," Information Technology and Management, Vol. 1, No. 4, pp. 307-327.

McLoughlin, C., et al. (2002). "A learner-centred approach to developing team skills through web-based learning and assessment," British Journal of Educational Technology, Vol. 33, No. 5 (November 2002), pp. 571-582.

Meyen, E. L., et al. (2002). "Assessing and monitoring student progress in an E-learning personnel preparation environment," Teacher Education and Special Education, Vol. 25, No. 2, pp. 187-198.

Meyer, K. A. (2002). "Quality in Distance Learning," ASHE-ERIC Higher Education Reports, Vol. 29, No. 4, pp. 1-21.

Northcote, M. (2002). "Online assessment: friend, foe or fix?" British Journal of Educational Technology, Vol. 33, No. 5, pp. 623-625.

Noyes, J., K. Garland, and L. Robbins (2004). "Paper-based versus computer-based assessment: is workload another test mode effect?" British Journal of Educational Technology, Vol. 35, No. 1, pp. 111-113.

O'Donoghue, J., et al. (2001). "Virtual education in universities: a technological imperative," British Journal of Educational Technology, Vol. 32, No. 5, pp. 511-523.

Oblinger, D. G. (1999). "Global Education: Thinking Creatively," Higher Education in Europe, Vol. 24, No. 2, pp. 251-259.

Rainbow, S. and E. Sadler-Smith (2003). "Attitudes to computer-assisted learning amongst business and management students," British Journal of Educational Technology, Vol. 34, No. 5, pp. 615-624.

Ricketts, C. and S. Wilks (2002). "Improving Student Performance Through Computer-based Assessment: insights from recent research," Assessment & Evaluation in Higher Education, Vol. 27, No. 5, pp. 476-479.

Riffell, S. and D. Sibley (2003). "Online Student Perceptions of a Hybrid Learning Format," Journal of College Science Teaching, Vol. 32, No. 6, pp. 394-399.

Robles, M., et al. (2002). "Online assessment techniques," Delta Pi Epsilon Journal, Vol. 44, No. 1, pp. 39-49.

Sabry, K., et al.
(2003). "Web-based learning interaction and learning styles," British Journal of Educational Technology, Vol. 34, No. 4, pp. 443-454.

Sanchis, G. R. (2001). "Using Web forms for online assessment," Mathematics and Computer Education, Vol. 35, No. 2, pp. 105-113.

Schaverien, L. (2003). "Teacher education in the generative virtual classroom: developing learning theories through a web-delivered, technology-and-science education context," International Journal of Science Education, Vol. 25, No. 12, pp. 1451-1462.

Schrum, L. (1998). "On-Line Education: A Study of Emerging Pedagogy," New Directions for Adult & Continuing Education, No. 78, pp. 53-62.

Schulman, A. H. and R. Sims (1999). "Learning in an online format versus an in-class format: an experimental study," T.H.E. Journal, Vol. 26, No. 11, pp. 54-56.

Tearle, P. and P. Dillion (2001). "The development and evaluation of a multimedia resource to support ICT training: Design issues, training processes and user experiences," Innovations in Education and Training, Staff and Educational Development Association, Feb 2001.

Tuckman, B. W. (2002). "Evaluating ADAPT: a hybrid instructional model combining Web-based and classroom components," Computers & Education, Vol. 39, No. 3, pp. 261-269.

Voci, E. and K. Young (2001). "Blended learning working in a leadership development program," Industrial and Commercial Training, Guilsborough.

Vodanovich, S. and C. Piotrowski (2001). "Internet-Based Instruction: A National Survey of Psychology Faculty," Journal of Instructional Psychology, Vol. 28, No. 4, pp. 253-258.

Wheeler, S., S. Waite, and C. Bromfield (2002). "Promoting creative thinking through the use of ICT," Journal of Computer Assisted Learning, Vol. 18, No. 3, pp. 367-379.