Using the IS Model Curriculum and CCER Exit Assessment Tools for Course-level Assessment

Jeffrey Landry
jlandry@usouthal.edu

Harold Pardue
hpardue@jaguar1.usouthal.edu

Herbert Longenecker
hlongenecker@usouthal.edu

University of South Alabama
Mobile, Alabama 36688 USA

John Reynolds
john.reynolds@csis.gvsu.edu
Grand Valley State University
Grand Rapids, Michigan 49401 USA

Lynn McKell
mckell@byu.edu
Brigham Young University
Provo, Utah 84602 USA

Bruce White
Bruce.white@quinnipiac.edu
Quinnipiac University
Hamden, Connecticut 06518-1940 USA

Abstract

The paper describes a process whereby IS faculty work collaboratively in a community of practice (Pardue et al. 2005) to assess and improve student educational performance at the course level. Working with colleagues, utilizing the resources of the Center for Computing Education Research (www.iseducation.org) and the Institute for the Certification of Computing Professionals (ICCP), and drawing on the IS 2002 Model Curriculum, faculty can develop and administer online assessment exams. Each exam is designed for a course they teach and is given to students for placement, pre-test, and post-test purposes.

Keywords: IS Model Curriculum, curriculum development, assessment

1. INTRODUCTION

In 2003, the IS 2002 Exit Assessment exam was launched with the purpose of assessing the readiness of IS majors to enter the job market and of improving IS courses and curricula. The stated intent of the IS exit assessment effort was “to assess the knowledge and practical readiness of IS students and professionals and to evaluate, improve, and accredit undergraduate information systems degree programs” (Reynolds et al. 2003). These efforts led to the creation of an exit assessment exam that “not only reflects what IS students need to know, but does so at a level that reflects culminating skills—those at a maturity level reached at graduation time and required for the entry level job market” (Landry et al. 2003).

Current uses of the assessment tools include the improvement of courses and curricula and professional certification. For example, the exit assessment tools have been used to assess the impact of a proposed curriculum change (Landry et al. 2004) by comparing the learning units covered by a course being considered as a replacement for another course. By aligning the IS Model Curriculum with standards set by the ICCP (McKell et al. 2004), the new Information Systems Analyst (ISA) certification provides students who earn a passing score on the exit exam with credentials from two respected bodies at once (McKell et al. 2005). As future uses of the assessment mechanisms emerge and are refined, a more mature, continuous improvement process becomes possible (White et al. 2003).

This paper describes another use of the assessment tools provided by the CCER. While much attention has been given to assessment at the curriculum level, this paper focuses on course-level assessment. From the standpoint of a teacher of an IS course, it examines an emerging approach that uses exit assessment tools to evaluate and improve courses. The following table and explanation describe such a process.

2. COURSE-LEVEL IS ASSESSMENT PROCESS

For simplicity, we have described the process in a step-by-step fashion, as summarized in Table 1.

Table 1 - Course-level IS Assessment Process

Step  Activity
1     Map local course
2     Write and review test items for local course
3     Design a local course exam
4     Administer the exam
5     Evaluate the results of the exam

Step 1 – Map Local Course

The first logical step for an IS faculty member is to map an IS course into the national standard, the IS 2002 Model Curriculum. The faculty member should map a local course that he or she is interested in assessing. Mapping is “the process of identifying and describing how courses that make up an undergraduate IS degree program support the educational goals and objectives embodied in the learning units of the model curriculum” (Daigle et al. 2004). Mapping is the first logical step because one must identify the educational objectives before one can design an educational program or tests of that program's effectiveness. The CCER provides Web-based software utilities for mapping one's course into the IS 2002 Model Curriculum. Mapping makes assessment results, such as those of the IS Exit Exam, more meaningful to teachers because it enables test results to be broken down to the course level.
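To make the notion of a course map concrete, the sketch below shows one minimal way such a mapping might be represented in software. It is an illustration only, not the CCER mapping utilities; the course title, objective wording, and learning unit identifiers are hypothetical.

# Minimal sketch of a local course map. The course title, objectives,
# and learning unit identifiers below are hypothetical placeholders;
# the real CCER utilities are Web-based and far more detailed.
course_map = {
    "Undergraduate Database Course": {
        # local objective -> IS 2002 learning units claimed to cover it
        "Design a normalized relational schema": ["LU-A"],
        "Write SQL queries against a schema": ["LU-B"],
        "Explain transaction and concurrency concepts": ["LU-C"],
    }
}

def learning_units_for(course: str) -> set:
    """Collect the distinct learning units a local course claims to cover."""
    units = set()
    for mapped_units in course_map.get(course, {}).values():
        units.update(mapped_units)
    return units

print(sorted(learning_units_for("Undergraduate Database Course")))

A map in this spirit is what allows exam results, discussed in Step 5, to be rolled up by local objective as well as by national learning unit.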
Step 2 – Write and Review Test Items for Local Course

The next logical step is for faculty colleagues to participate in efforts to create high-quality test items for assessing the students in their courses. Currently, IS faculty whose institutions enter into a collaborative agreement with the ICCP are invited to participate in item writing and review. Test writers and reviewers contribute items and reviews on an ad hoc basis, guided by a test item review board. Item writers and reviewers are expected to follow rigorous guidelines for writing multiple-choice test items, using principles of educational test theory, test item statistics, and critical expert review. Items written are shared by all participants and become the property of the CCER. Logically, faculty would try to write as many items as possible in the learning units mapped to their local course objectives.

Step 3 – Design a Local Course Exam

The faculty member would design an exam to test students taking, or seeking credit for, the chosen local course. Currently, the faculty member would have to request an exam. The faculty member could suggest a length for the exam, in terms of time limit and number of items. The map created in Step 1 would provide the outline of coverage for the exam. In a collaborative process, the faculty could review the exam and remove and replace items as required.

Step 4 – Administer the Exam

Procedures are well defined for administering the exam. Using online utilities provided by the CCER, faculty would define exam sessions, including the time, place, and proctor. Students would register and sign up for the exam but would need to be verified by a school official. The actual exam would be given in a secure lab at the local institution, with a proctor who would enter a password and monitor the room.

Step 5 – Evaluate the Results of the Exam

Various results can be evaluated in different ways at different times. Students are given an immediate score report at the completion of the exam. Once a full test period is completed, faculty have access to several predefined report formats. Reports are provided on both test item performance and student results. Results are compared to those of students at other institutions taking the same exam. If faculty collaborate and agree on a standard exam for similar courses, more useful exam results can be generated and used.

The actual use of the results depends on the purpose of the exam. Three possibilities include placement, pre-test, and post-test. A placement test would be used to determine whether a student qualifies for placement into a course based on knowledge of the prerequisite material. A pre-test is given to students at the beginning of a course, providing a baseline of what the student already knows before the course begins. If learning takes place, a post-test given at the end of the course should be passed, and students should show improvement over their pre-test scores.
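To illustrate how post-test results might be compared with pre-test results at the course level, the sketch below computes mean scores and gains per learning unit from simple in-memory records. The record layout and learning unit identifiers are assumptions for illustration; the actual CCER reports are predefined formats delivered through its online utilities.

# Sketch of a pre-test/post-test comparison broken down by learning unit.
# The record layout and learning unit identifiers are hypothetical;
# the CCER supplies its own predefined report formats.
from collections import defaultdict
from statistics import mean

# Each record: (student_id, learning_unit, phase, percent_correct)
scores = [
    ("s1", "LU-A", "pre", 40.0), ("s1", "LU-A", "post", 80.0),
    ("s2", "LU-A", "pre", 55.0), ("s2", "LU-A", "post", 70.0),
    ("s1", "LU-B", "pre", 30.0), ("s1", "LU-B", "post", 45.0),
    ("s2", "LU-B", "pre", 35.0), ("s2", "LU-B", "post", 50.0),
]

def gain_by_learning_unit(records):
    """Return mean pre-test score, post-test score, and gain per learning unit."""
    by_unit = defaultdict(lambda: {"pre": [], "post": []})
    for _student, unit, phase, pct in records:
        by_unit[unit][phase].append(pct)
    return {
        unit: {
            "pre": mean(phases["pre"]),
            "post": mean(phases["post"]),
            "gain": mean(phases["post"]) - mean(phases["pre"]),
        }
        for unit, phases in by_unit.items()
    }

for unit, row in gain_by_learning_unit(scores).items():
    print(f"{unit}: pre={row['pre']:.1f} post={row['post']:.1f} gain={row['gain']:+.1f}")

A persistently low post-test mean for one learning unit, alongside reasonable gains elsewhere, is the kind of signal that points a teacher to a coverage problem in the course.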
3. AN EXAMPLE – DATABASE PLACEMENT EXAM

This semester, two of the co-authors are successfully using CCER resources to develop and administer a graduate placement test in data management. The exam was given to incoming students in the MSIS program, many of whom were international students whose IS backgrounds were difficult to assess.

Example Step 1 – Map Local Course

The design of the exam was to be based on a local course, the undergraduate database course, which represented the prerequisite knowledge expected of the graduate students being placed. The co-authors had previously mapped the local database course objectives to the related learning units in IS 2002, in a process described in Daigle et al. (2004). CCER mapping utilities and summary reports are available, for nominal fees, to faculty at schools that participate with the CCER.

Example Step 2 – Write and Review Test Items for Local Course

The co-authors had to write some additional items to cover the learning units and local objectives needed for the placement test. They used software utilities designed for the test item writing and review board of the CCER. The utilities enable item authoring, mapping, collaborative review, and revision. Two other colleagues, both co-authors, were called in to serve as reviewers. They used the software and telephone conversations to complete item revision.

Example Step 3 – Design a Local Course Exam

After deciding to use the learning units mapped to the introductory database course, and writing items to cover them, the authors proceeded to design the exam. They searched the CCER test bank and chose test items mapped to the selected learning units. Faculty who volunteer to serve on the test item writing and review board have access to a Web-based search for items by learning unit. Other faculty could work collaboratively with the co-authors, using their mapping reports as a guide to the item search. The result was a 25-item, multiple-choice exam that students were given 20 minutes to complete, following a 45-seconds-per-question rule of thumb (25 items at 45 seconds each is just under 19 minutes, rounded up to 20).

Example Step 4 – Administer the Exam

The instructors administered the exam under secure, controlled conditions. The students were physically present in the room; they had to create an account and provide identification; their identities and enrollment at the school had to be verified in software by an approved faculty member; they logged in with a password-protected account; and they were allowed to sign up for the exam session unless they had already taken it in the past 30 days or were scheduled for another session. The proctors maintained control over the room, observed student behavior, typed in a 3-character password to start the exam, and followed other documented CCER procedures to ensure a secure exam that protected the test items and guarded against cheating. Although the content of the exam was identical for all students, the order of the items was randomized to minimize systematic cheating. The proctors stood by in case of problems. One such problem occurs when a student loses his or her place in the browser, in which case the student can be logged in again and pick up without loss of responses or time. There are also remedies for local computer settings that cause problems with display formatting and readability.
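As an aside on the randomized item order mentioned above, the sketch below shows one simple way a delivery system might present the same items to every student in a different order. It is purely illustrative; the item identifiers and the seeding scheme are assumptions, not the CCER implementation.

# Illustrative per-student item-order randomization: every student sees
# the same 25 items, but in an order derived from their own session,
# which makes copying answers by position much harder. The item
# identifiers and seeding scheme are hypothetical.
import random

EXAM_ITEMS = [f"item-{n:02d}" for n in range(1, 26)]  # 25 shared items

def item_order_for(student_id: str, exam_id: str) -> list:
    """Return a reproducible, student-specific ordering of the exam items."""
    rng = random.Random(f"{exam_id}:{student_id}")  # deterministic per student
    order = list(EXAM_ITEMS)  # copy so the master list is left unchanged
    rng.shuffle(order)
    return order

print(item_order_for("student-1", "db-placement")[:5])
print(item_order_for("student-2", "db-placement")[:5])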
Example Step 5 – Evaluate the Results of the Exam

The faculty analyzed the results of the test with software tools and made informed placement decisions. They used 50 percent as a baseline score for “passing” the test, because it is the approximate national average for students taking similar CCER exams. A score of 30 percent or below was interpreted as not significantly better than guessing, so those students were placed in the introductory database course. Students scoring between roughly 30 and 45 percent were placed in an intermediate-level database course, and students scoring above 45 percent were placed in the graduate database course.

Students will now have to be evaluated after completing this semester's coursework to determine whether the process was successful. The process will be successful if students are satisfied that they learned and mastered new material in the course into which they were placed. It will be unsuccessful if students are overwhelmed by material too advanced for their backgrounds, or if they are bored by redundant content. Initial indications are that students were placed appropriately. A follow-up survey could ascertain their perceptions of success or failure.
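The placement rule just described can be summarized as a simple threshold function over the exam score, sketched below. The cutoffs follow the percentages reported in this example; the function and the course labels are illustrative, since the actual decisions were made by faculty reviewing the CCER score reports.

# Sketch of the placement rule used in this example, expressed as a
# threshold function over the exam score (percent correct). The course
# labels are illustrative; faculty made the actual placement decisions
# after reviewing the CCER score reports.

def place_student(score_percent: float) -> str:
    """Map a placement exam score to a recommended database course."""
    if score_percent <= 30:    # not significantly better than guessing
        return "introductory database course"
    elif score_percent <= 45:  # partial command of the prerequisite material
        return "intermediate-level database course"
    else:                      # at or near the roughly 50 percent national average
        return "graduate database course"

for score in (22, 38, 52):
    print(score, "->", place_student(score))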
To further evaluate the quality of the exam and of the undergraduate database course, pre- and post-tests can be performed. Students could be given the exam at the beginning of the course and again at the end. If the test measures what it is supposed to measure, the scores of masters (students at the end of the course) will be higher than those of non-masters (students at the beginning). Low post-test mean scores in selected areas would signal a potential problem with coverage of those specific areas. The mapping of local objectives to learning units helps identify the specific areas addressed by test items. A post-test will be given to all of the undergraduate database students to perform this evaluation.

4. CONCLUSION

The co-authors are engaged in an ongoing effort to make the process work. Already, exams for several specific courses have been developed and used, including IS project and change management, introductory and advanced data management, and IS in organizations. The data management exam was used to place students into a graduate database course. The project management exam was used as a final exam, and the IS in organizations exam will be used this coming fall as a placement exam for a graduate course.

Initial indications are that the process is viable and useful. Students are motivated to take an exam that, although their teacher took part in its development, provides an objective, external assessment by a standards body. Because the results of specific questions are tied to both local and national objectives, teachers can easily assess the strengths and weaknesses of students and of the course at a very detailed level. Another benefit for students is that they receive immediate feedback through the online score reporting system. The process of exam development and administration, although automated, is flexible enough to support exams of varying lengths and coverage, given at various times and used for a variety of purposes.

5. REFERENCES

Daigle, Roy J., Herbert E. Longenecker, Jr., Jeffrey P. Landry, and J. Harold Pardue (2004) “Using the IS 2002 Model Curriculum for Mapping an IS Curriculum,” Information Systems Education Journal, January 27, 2004, Volume 2, Issue Number 1, URL: http://isedj.org/2/1/.

Landry, Jeffrey P., J. Harold Pardue, Roy J. Daigle, and Herbert E. Longenecker, Jr. (2004) “Using IS2002 to Assess the Impact of a Proposed Curriculum Change,” Americas Conference on Information Systems (AMCIS 2004), August 2004, New York, NY.

Landry, Jeffrey P., John H. Reynolds, and Herbert E. Longenecker, Jr. (2003) “Assessing Readiness of IS Majors to Enter the Job Market: An IS Competency Exam Based on the Model Curriculum,” Americas Conference on Information Systems (AMCIS 2003), August 2003, Tampa, FL.

McKell, Lynn J., John H. Reynolds, Herbert E. Longenecker, Jr., Jeffrey P. Landry, and J. Harold Pardue (2005) “Information Systems Analyst (ISA): A Professional Certification Based on the IS 2002 Model Curriculum,” Review of Business Information Systems, Summer 2005, Vol. 9, No. 3.

McKell, Lynn J., John H. Reynolds, Herbert E. Longenecker, Jr., and Jeffrey P. Landry (2003) “Aligning ICCP Certification with the IS2002 Model Curriculum: A New International Standard,” International Business & Economics Research Journal, September 2003, Vol. 2, No. 9, pp. 87-91.

Pardue, J. Harold, Jeffrey P. Landry, and Herbert E. Longenecker, Jr. (forthcoming) “Computing Program Curriculum Assessment: The Emergence of a Community of Practice,” Special Issue of Journal of Informatics Education Research.

Reynolds, John H., Herbert E. Longenecker, Jr., Jeffrey P. Landry, J. Harold Pardue, and Brooks Applegate (2004) “Information Systems National Assessment Update: The Results of a Beta Test of a New Information Systems Exit Exam Based on the IS 2002 Model Curriculum,” Information Systems Education Journal, May 1, 2004, Volume 2, Issue Number 24, URL: http://isedj.org/2/24/.

White, Bruce, Herbert E. Longenecker, Jr., Paul M. Leidig, and David M. Yarbrough (2003) “Applicability of CMMI to the IS Curriculum: A Panel Discussion,” Proceedings of the Information Systems Education Conference (ISECON), November 2003, San Diego, CA.