Information Systems National Assessment Update: The Results of a Beta Test of a New Information Systems Exit Exam Based on the IS 2002 Model Curriculum

John H. Reynolds1
CS & IS Department, Grand Valley State University
Allendale, Michigan 49401, USA

Herbert E. Longenecker, Jr.2
Jeffrey P. Landry3
J. Harold Pardue4
School of CIS, University of South Alabama
Mobile, Alabama 36688, USA

Brooks Applegate5
Educational Studies Department, Western Michigan University
Kalamazoo, Michigan 49008, USA

Abstract

There is a growing need for assessment of Information Systems (IS) curricula. A beta test of an IS exit assessment exam was conducted to evaluate the feasibility of using such an exam to make subgroup comparisons. A comparison of subgroup descriptive statistics on the overall exam and its eight skill areas suggests that meaningful data can be derived for curriculum assessment. The data suggest that the most reliable foundation for future comparison and assessment of IS student achievement and IS curriculum effectiveness would be a classification structure based on a school's mapping of its IS courses to the IS model curriculum, rather than classification by year in curriculum or by discipline area. In addition, it was determined that verification of student classification and other demographic data is an absolute requirement to ensure the validity of the measurements.

Keywords: IS education, IS model curriculum, IS exit exam, assessment

1. INTRODUCTION

The discussion of outcome assessment has moved from the annual college and university assessment report to the daily newspaper as tuition costs have risen and parents ask for assurances that they are receiving value for their education dollar ("Help Parents," 2003). In Information Systems (IS), a nationally standardized exam created for outcome assessment and mapped to the IS 2002 Model Curriculum (Gorgone, Davis, Valacich, Topi, Feinstein, and Longenecker 2002) has not been available. This paper reports the results of a beta test of the exam developed in a joint project between the Institute for the Certification of Computing Professionals (ICCP) and members of the IS Model Curriculum Task Force who authored the most recent version of the IS model curriculum. The exit assessment exam was created to assist institutions with IS programs in efforts to evaluate and improve their IS curricula (Landry, Reynolds, and Longenecker 2003) and to update and improve the quality of certification of IS professionals (McKell, Reynolds, Longenecker, and Landry 2003).

2. RATIONALE

The IS Exit Assessment exam is part of an overall effort to assess the knowledge and practical readiness of IS students and professionals and to evaluate, improve, and accredit undergraduate information systems degree programs. The purpose of the exam is to assess individual student performance in eight skill areas defined in the model curriculum. These eight skill areas, shown in Table 1, are based on research incorporating the curriculum presentation areas in the IS model curriculum (Gorgone, et al. 2002) and IS entry-level job ad criteria, and they represent what IS students need to know upon graduation (Landry, Longenecker, Haigood, and Feinstein 2000). Assessing student performance in these skill areas provides a more detailed assessment of student readiness than currently exists.

Table 1 - IS Model Curriculum Summary Presentation Areas and Related 8 Skill Areas

I. Information Technology Skills
   A. Software Development
   B. Web Development
   C. Database
   D. Systems Integration
II. Organizational and Professional Skills
   A. Individual and Team Interpersonal Skill
   B. Business Fundamentals
III. Strategic Organizational Systems Dev. with IS
   A. Organizational Systems Development
   B. Project Management
By aggregating results, exam performance for various subgroups of IS students can be compared and contrasted. When demographic, educational, and experience data are collected from test candidates, many classifications are possible. Students at one school can be compared to those at another, or to all exam takers, skill area by skill area. Students in one degree program or specialization at a school can be compared to students in another. This paper illustrates the process used to select the sample and analyze the data, and provides an example of the kind of profile analysis that is possible by comparing all undergraduate IS students against IS graduate students, and business-school IS students against non-business-school IS students.

3. SAMPLE

The first step in creating a standardized exam, after the initial terminal objectives and questions are written by subject matter experts, is to choose a representative sample, if possible, for a beta test from which test and item statistics can be derived. The sample of schools chosen to participate in this beta test consists of 17 schools that indicated they supported and used the IS model curriculum, of which nine IS programs are in schools or colleges of business. The senior class in these programs ranged from 10 to nearly 200 students, with an average class size of about 60.

For the student sample, in addition to graduating seniors, schools were encouraged to have graduate students, faculty, sophomores, and juniors also take the exam, to provide data so that, in the future, a student could be assessed at various stages of his or her academic career. A total of 593 candidates' exams were usable, of which 472 were undergraduates, 100 were graduate students, and 6 were faculty members. (The remaining 15 candidates did not complete the demographics questionnaire, so they could not be classified into one of the groups and were therefore not included in the analysis. The six faculty members chose to take the exam for professional development and assessment and were included in the graduate student group for comparison purposes.)

4. METHODOLOGY

The tests were administered using browser-independent, web-based testing software. Students were tested in groups at their respective schools, with volunteer faculty proctoring each exam session. Students registered for the exam and provided basic demographic data as well as academic and work experience data. The software presented each student with the questions in a predetermined random sequence and stored each individual student's answer to each question in a central database. Tests were scored by comparing responses with the exam key, and students were shown an individual score profile after they ended the test.

The descriptive statistics were generated using Microsoft® Excel 2002. Excel was also used to calculate the statistical significance of the group average comparisons using a two-tailed independent t-test, based on a conservative alpha of .01 in order to reduce Type I error. Test and sub-test scores, KR20, and the main test histogram were obtained from TESTFACT (Wilson, Wood, Schilling, and Gibbons 2003).
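The group comparisons and the reliability estimate rest on two standard computations: a two-tailed independent-samples t-test at alpha = .01 and the KR20 coefficient. The following sketch reproduces both quantities in Python rather than in the Excel and TESTFACT tools actually used; the function names and simulated data are ours, and the equal-variance form of the t-test (SciPy's default) is an assumption, since the Excel variant used is not specified above.

    # Illustrative sketch only; not the study's actual analysis scripts.
    import numpy as np
    from scipy import stats

    ALPHA = 0.01  # conservative two-tailed alpha used for group comparisons

    def kr20(item_matrix: np.ndarray) -> float:
        """KR-20 internal consistency for a candidates-by-items matrix of 0/1 item scores."""
        k = item_matrix.shape[1]                         # number of items (100 on this exam)
        p = item_matrix.mean(axis=0)                     # proportion correct per item
        q = 1.0 - p
        total_var = item_matrix.sum(axis=1).var(ddof=1)  # variance of candidates' total scores
        return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

    def compare_groups(scores_a: np.ndarray, scores_b: np.ndarray) -> tuple[float, float, bool]:
        """Two-tailed independent-samples t-test; returns (t, p, significant at ALPHA)."""
        t, p = stats.ttest_ind(scores_a, scores_b)       # equal variances assumed (SciPy default)
        return t, p, p < ALPHA

    # Hypothetical usage with simulated responses (593 candidates, 100 items):
    rng = np.random.default_rng(0)
    responses = (rng.random((593, 100)) < 0.6).astype(int)
    print(f"KR-20 = {kr20(responses):.3f}")
    grads = responses[:106].sum(axis=1)        # e.g., graduate/faculty total scores
    undergrads = responses[106:].sum(axis=1)   # e.g., undergraduate total scores
    print(compare_groups(grads, undergrads))

Applied to real item-response data, the same two functions would yield the reliability and significance results reported in the next section.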
5. RESULTS

The distribution of examinees' total test scores follows a generally normal-looking pattern that is slightly negatively skewed with a slight negative kurtosis, as shown in Chart 1. The KR20 estimate of internal consistency for the entire group of examinees is 0.841 and varies from 0.766 to 0.878 for the four sub-groups noted in Table 2. According to Ebel and Frisbie (1991), a KR20 between 0.85 and 0.95 is desired for group-administered standardized achievement exams that are used to measure individual performance. "However, if the decision is about the scores of a group of individuals, like a class, the generally accepted minimum standard is 0.65" (Ebel and Frisbie 1991, p. 86). Based on these guidelines, this test shows very good internal consistency, both as a whole and across sub-groups.

Descriptive statistics on the overall test for each subgroup are presented in Table 2. The group of undergraduate students who attend IS programs in schools or colleges of business appears to be the most homogeneous, with the lowest standard deviation and range of scores. The average score (also the percent correct, since the test contained 100 items) on the overall test and on the individual subtests (skill areas) for the four groups is presented in Table 3.

Initially, overall test and subtest performance of the graduates/faculty was compared to that of all of the undergraduate students. (The two individual sub-groups were not analyzed separately due to the small size of the faculty sub-group.) The presumption was that the graduates/faculty group would provide a "benchmark" against which the undergraduate students' preparation could be measured. These results can be seen in the table, where the graduates/faculty scored significantly higher overall, on the entire first section of the test (Information Technology), and specifically on the first two skill areas (Software Development and Web Development).

Surprisingly, the graduates/faculty did not significantly outperform the undergraduates in IS programs that are in schools or colleges of business on the entire exam or on any of the subtests except Web Development. (This comparison of the overall test averages between these two groups is not specifically labeled in Table 3. The t-score is 0.99.) It is possible that Type II error is inflated by the assumption that there is no covariance between student scores within each individual institution, but this is in keeping with the very conservative nature of this report. Another possible reason is that not all graduate students may have had an IS undergraduate degree.

At the most recent workshop, held in Mobile, Alabama, June 6-7, where preliminary results were disseminated to study participants, some faculty expressed a concern that the undergraduate students should be separated into two groups: those in programs housed in a school or college of business, and those that are not. The presumption was that, because students in business schools have significantly fewer hours in which to be taught the content of the model curriculum, they would score lower on a model curriculum exit exam and should not be compared to the higher average of students who are in IS programs not in schools or colleges of business.
Table 3 shows that, when compared overall and at the subtest level, undergraduates in schools or colleges of business did not score below students in IS programs in non-business schools in any area; in fact, they scored significantly higher on the test overall and on two of the three sections of the test (Organizational and Professional Skills and Strategic Organizational Systems Development with IS), including five of the eight skill areas (Database, Individual and Team Interpersonal Skill, Business Fundamentals, Organizational Systems Development, and Project Management).

A possible explanation for this result is that the exit assessment is biased toward solving problems in a business context due to the IS 2002 Model Curriculum stipulation that IS graduates "should have a basic understanding of the main functional areas of an organization" (Gorgone, et al. 2002, p. 12). Each question was designed to test Use-level knowledge (Gorgone, et al. 2002, p. 39) by creating scenarios that students need to understand before they can answer the question. Understanding of business vignettes may be stronger among students enrolled in IS programs that are in schools or colleges of business.

6. INTER-GROUP COMPARISONS

The inter-group comparisons provide useful benchmarks against which institutions with IS degree programs can compare themselves. By choosing appropriate referent groups, an IS program can assess its students' performance, skill area by skill area, against various groups. For example, an IS program in a business school may be interested in comparing its students against other IS programs in business schools, or against IS programs in general. A referent benchmark is more meaningful than simply comparing a raw score against the perfect score of 100, given that the exam is designed to be norm-referenced. After the beta test was completed in June 2003, each of the participating schools was provided with a profile that compared its students' performance against a single national profile, showing how its students' average scores, both overall and on the subtest (skill area) scores, compared with the averages of the entire group of participating schools.
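To make the shape of such a profile report concrete, the sketch below shows one way a skill-area comparison against a referent group could be computed. It is only an illustration under assumed data structures (candidate records holding per-skill-area percent-correct subscores); it is not the reporting software used in the study, and the function and field names are hypothetical.

    # Hypothetical sketch of a skill-area profile comparison; Table 1 labels are used as keys.
    from statistics import mean

    SKILL_AREAS = [
        "Software Development", "Web Development", "Database", "Systems Integration",
        "Individual and Team Interpersonal Skill", "Business Fundamentals",
        "Organizational Systems Development", "Project Management",
    ]

    def profile(candidates):
        """Mean percent correct per skill area for a list of candidate records,
        where each record holds a 'subscores' dict keyed by skill area."""
        return {area: mean(c["subscores"][area] for c in candidates) for area in SKILL_AREAS}

    def compare_to_referent(school_candidates, referent_candidates):
        """Print a school's skill-area profile next to a chosen referent group
        (e.g., all participants, or only IS programs housed in business schools)."""
        school, referent = profile(school_candidates), profile(referent_candidates)
        for area in SKILL_AREAS:
            diff = school[area] - referent[area]
            print(f"{area:42s} {school[area]:6.1f} {referent[area]:6.1f} {diff:+6.1f}")

The same computation supports any referent group for which candidate classifications are reliable, which is precisely where the limitations discussed next become important.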
7. OBSERVATIONS AND LIMITATIONS

While useful, comparing sub-groups of IS students taking the national exam has its limits. One limitation is the makeup of the sample: there is a self-selection sampling bias due to the small number of schools involved in the study. In addition, after analyzing the data, it was discovered that the graduate students in the sample did not represent a homogeneous group. Data were gathered on work experience but were deemed invalid, as some faculty reported that students did not limit their reporting to work experience in the field. Anecdotal observations by faculty of their individual students' scores suggest that full-time work experience in the field is a factor affecting success on the test, and that requirement should be made clear when students are asked about work experience.

Another, more critical, limitation of the IS performance profiling analysis is the difficulty of classifying programs and students into comparison subgroups. For example, IS programs that are housed in schools or colleges of business may not be homogeneous. This is a broad category in which programs that are administratively housed in a business school may not be constrained in their course offerings in the same way that other programs are. (For example, a program may administratively be in a school or college of business, but may not be required to meet the requirements of a national business accrediting body and, therefore, may have more hours to devote to courses in the IS major.) This could result in combining programs with very rigid curricula with those with very flexible curricula. As a result, any stratification by major or by year in school was confounded.

8. RECOMMENDATIONS

A sampling limitation such as the self-selection sampling bias can be remedied by making the IS exit exam widely available to schools that wish to participate and by ensuring that schools require students to take the exam. Secondly, schools should verify students' academic progress and demographic data. Graduate students need to be stratified by work experience in the field and by their respective undergraduate majors, including, possibly, the country where the experience and/or the undergraduate degree was obtained.

More importantly, what cuts across all comparison subgroups is the breadth and depth of coverage of IS model curriculum content. In order for meaningful comparisons to be made, schools should map their individual IS courses against the IS model curriculum's learning units (Daigle, Longenecker, Landry, and Pardue 2003). This would allow for comparison of various groups based on their progress through the model curriculum, regardless of academic year, number of credit hours, or discipline area classification.

9. ACKNOWLEDGEMENTS

The authors wish to thank Jumaira Jaleel and Farhad Hussein for their excellent effort in implementing the browser-based testing software.

10. REFERENCES

Daigle, R.J., H.E. Longenecker, Jr., J.P. Landry, and J.H. Pardue. (2003). "Using the IS 2002 Model Curriculum for Mapping an IS Curriculum." Proceedings of ISECON 2003, November 6-9.

Ebel, R.L. and D.A. Frisbie. (1991). Essentials of Educational Measurement (5th ed.). New Jersey: Prentice Hall.

Gorgone, J.T., G.B. Davis, J.S. Valacich, H. Topi, D.L. Feinstein, and H.E. Longenecker, Jr. (2002). IS 2002 Model Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems. Atlanta: Association for Information Systems.

Help Parents Weigh Value of Pricey Higher Education. (2003, June 5). USA Today, p. 23A.

Landry, J.P., H.E. Longenecker, Jr., B. Haigood, and D.L. Feinstein. (2000). "Comparing Entry-Level Skill Depths Across Information Systems Job Types: Perceptions of IS Faculty." Proceedings of the 2000 Americas Conference on Information Systems, August 10-13.

Landry, J.P., J.H. Reynolds, and H.E. Longenecker, Jr. (2003). "Assessing Readiness of IS Majors to Enter the Job Market: An IS Competency Exam Based on the Model Curriculum." Proceedings of the 2003 Americas Conference on Information Systems, August 4-6.

McKell, L.J., J.H. Reynolds, H.E. Longenecker, Jr., and J.P. Landry. (2003). "Aligning ICCP Certification with the IS2002 Model Curriculum: A New International Standard." Proceedings of the European Applied Business Research Conference, June 9-13.

Wilson, D., R. Wood, S. Schilling, and R. Gibbons. (2003). TESTFACT (Version 4.0) [Computer software]. Lincolnwood, IL: Scientific Software International.

1 john.reynolds@csis.gvsu.edu
2 hlongenecker@usouthal.edu
3 jlandry@usouthal.edu
4 hpardue@jaguar1.usouthal.edu
5 brooks.applegate@wmich.edu