Is it Possible to Assess Information Systems Skills using a Multiple-Choice Exam?

Sharon Paranto
parantos@northern.edu

Leigh Shillington
shillinl@northern.edu

Northern State University
Aberdeen, SD 57401

Abstract

The percentage of students with knowledge and skills in the areas of computers and technology has gradually increased as the number and level of courses taught in the high schools have expanded. As a result, many business schools are electing to change the core Information Systems requirement in the business curriculum from an introductory computer course to an advanced applications course. This change better prepares business students with the skills they need for success in a global economy. However, when making this change, a decision must be made about how to place students into the appropriate course, based on their level of knowledge and expertise. This paper addresses the placement issue and highlights how a well-designed multiple-choice test can be used as a placement tool when logistics and other factors prevent or limit the use of technology in assessing student skills.

Keywords: placement testing, assessment, business curriculum, multiple-choice exams, computer skills

1. INTRODUCTION

Just as the percentage of high school graduates with knowledge and skills in the areas of computers and technology has increased over time, the skills college graduates need for success in a global economy have increased as well, especially in business and industry. As a result of the combination of 1) the improved skills of students entering college and 2) the increased level of expertise required of college graduates, many business schools have elected to change the core Information Systems requirement in the business curriculum from an introductory computer course to an advanced applications course.

However, a number of students entering college do not have the skills necessary to move directly into an advanced computer course, so some type of assessment tool is necessary in order to place students into the applicable course. Obviously, the best way to assess technology skills is in a hands-on environment. Unfortunately, both logistics and cost constraints can make hands-on placement testing difficult. This paper highlights how a well-designed multiple-choice test can be used as an inexpensive and easily implemented placement tool when the use of technology in assessing student skills is not a viable option for placement testing.

2. LITERATURE REVIEW

Learning technological skills was once reserved for higher education institutions. In today's society, elementary-age students are being exposed to technology in the classroom and at home, and these skills are honed over years of casual learning in elementary and secondary schools. As students progress toward higher education, emphasis is placed on real-life applications, and by the time they reach university, most students will be proficient in basic computer applications. Post-graduation, the global workforce demands graduates with at least basic technology skills, and students with superior technology skills often receive the highest-paying jobs, especially in the tech sector (Roth, 2005).

Technology skills have altered the landscape of classroom learning (Schuh, 2004). Yet the method for assessing these skills is undergoing continual change. As technology advances, assessors must maintain a similar level of understanding and awareness of technological standards.
The debate over assessment, especially in the field of technology, focuses on the method of assessment. Although educators may differ in their opinions on assessment, they generally agree that the skills students gain from using technology are necessary. Schuh (2004) states that the ability to use computers in school and in the workplace is essential for full participation in society. Students will use computers at work, home, and school, and they will be constantly challenged to develop their skills as the demand for new technology increases. Technology literacy requires a large investment that many educational institutions cannot afford (Schuh, 2004). Additionally, it is not enough to simply provide technological resources; educators must ensure that the proper skills are being taught. It is difficult to categorize learning skills in technology, and therefore, before any sort of assessment begins, educators must define what it means to be "technologically literate." There seems to be some agreement among educators that technological literacy should center on teaching students how to understand and utilize technology to process information (Schuh, 2004).

Debate continues over the validity of using multiple-choice questions to assess technology skills. Proponents indicate that students benefit from the process of problem solving through multiple choice. Cook-Sather (1997) states that by exploring a range of answers, students develop critical thinking skills: presented with several possible answers, students must reason through each option in order to choose the correct one. Sharif (2005) makes an excellent point with the assertion that multiple-choice testing is the best form of assessment because it is closest to real life. He asserts that with multiple-choice questions, no credit is given for partial answers, and therefore multiple choice is the most reliable tool for assessing critical thinking skills.

Opponents of multiple-choice tests claim that these tests are too basic and do not indicate whether students have actually developed their process-oriented skills. The argument can be made that multiple-choice tests disregard the process of formulating an answer and simply indicate basic knowledge (Roth, 2005). Similarly, Sharif (2005) argues that multiple-choice tests do not differentiate between students who make minor errors in the process yet answer correctly and students who simply guess the correct answer.

It is important that assessment be matched with the curriculum of technological literacy. However, research in the field of student technology literacy skills is very limited (Schuh, 2004). Arguments both for and against multiple-choice testing of technological skills indicate that while a test may reveal which critical skills a student has, the process by which the student arrives at a solution may go unnoticed. Cook-Sather (1997) concludes that, whatever the method of assessment, the focus must remain on student learning and the development of critical thinking skills, and it is this focus that drives educators to understand new technologies. In the end, it is the decision of the educator, based on the desired outcomes of the test and the resources that are available, whether to use the multiple-choice format or another method of assessment.
3. SELECTION OF QUESTIONS

Due to logistics and financial issues, our university made the decision to develop an in-house multiple-choice exam to assess student knowledge and skills and place students into the appropriate technology-related courses. To start the placement process, the Information Systems Department selected a large number of multiple-choice questions from many sources. The questions selected covered word processing, spreadsheets, databases and database management, presentation software, the Internet and World Wide Web, and computer concepts. All selected questions were included in a draft version of the placement exam. The department then went through the questions and eliminated those deemed to be of poor quality or duplicates of other questions. The revised version of the exam contained one hundred questions.

This version was then given to all sections of the Introduction to Computers course, the Advanced Computer Applications course, and the Management Information Systems course in order to "test the test." As would be expected, the students taking the advanced course did considerably better than the other students. Students taking the Management Information Systems course had higher scores, on average, than students in the Intro course, but lower scores, on average, than students in the Advanced Applications course. This group of students had completed the Intro course, but few had taken the advanced course. Therefore, the outcome was as expected.

Many of the questions were designed to assess whether students understood how to utilize the software effectively, rather than whether they could simply regurgitate a definition. For instance, to assess a student's knowledge and understanding of word processing, the following question might be utilized:

If Mary presses Ctrl+End to move the insertion point to the end of her document, then clicks the Underline button on the Formatting toolbar, _____
a. the entire document will be underlined.
b. the next word she types will be underlined, then text will revert to normal.
c. whatever text she types from that point forward will be underlined, until she turns off the Underline command.
d. None of the above

Similarly, questions were included to assess whether students had an understanding of the Internet, such as:

Information about tobacco use that has been found on a personal homepage on the Internet ______
a. is checked for accuracy by an impartial third party.
b. may include statistics that were made up by the author.
c. must coincide with information released by the American government.
d. Only A and C

It was felt that students with a good understanding of the Internet would be able to answer this question. Most students who had completed the Intro course answered it correctly, especially since most had created their own website as part of a class project. However, the majority of the incoming students who had not yet taken a college-level computer course believed that any information found on the Internet could be cited as a valid source, regardless of the website used.

Some of the spreadsheet questions related to simple formulas, while others involved slightly more complicated formulas, such as the IF statement. In addition, questions were used to determine whether students understood both absolute and relative cell addressing when copying formulas.
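Before turning to the sample questions, the short sketch below may help readers who are less familiar with these two spreadsheet ideas. It is purely illustrative and is not part of the placement exam or of WebCT: the cell references and offsets are hypothetical, and the Python code simply models how relative references shift while absolute ($-anchored) references stay fixed when a formula is copied to another cell.

    import re

    COLS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"   # single-letter columns only, for simplicity

    def shift_reference(ref, row_offset, col_offset):
        """Shift a single reference such as B4, $B4, B$4, or $B$4.
        A '$' anchors that part of the reference, so it does not change."""
        col_abs, col, row_abs, row = re.fullmatch(r"(\$?)([A-Z]+)(\$?)(\d+)", ref).groups()
        if not col_abs:                                  # relative column: shift it
            col = COLS[COLS.index(col) + col_offset]
        if not row_abs:                                  # relative row: shift it
            row = str(int(row) + row_offset)
        return col_abs + col + row_abs + row

    def copy_formula(formula, row_offset, col_offset):
        """Rewrite every cell reference in a formula for the destination cell."""
        return re.sub(r"\$?[A-Z]+\$?\d+",
                      lambda m: shift_reference(m.group(0), row_offset, col_offset),
                      formula)

    # Copying =B4*C4 down one row shifts the relative references:
    print(copy_formula("=B4*C4", row_offset=1, col_offset=0))    # =B5*C5
    # The $-anchored reference inside the IF formula stays fixed:
    print(copy_formula("=IF(G3<=$E$3, G3*1.2, G3*1.4)", 1, 0))   # =IF(G4<=$E$3, G4*1.2, G4*1.4)

This distinction between references that shift and references that stay anchored is exactly what the relative/absolute addressing questions below are built around.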
Sample questions follow:

The formula _____ works behind the scenes to tell the computer to subtract the number in cell B5 from the number in cell B4. [Note: Sample budget utilized]
a. =B5-B4
b. =B4-B5
c. =B4(B5)
d. B4-B5

The IF function =IF(G3 <= $E$3, G3*1.2, G3*1.4) assigns __________ to the cell if the logical test is true.
a. G3*1.4
b. G3*1.2
c. $E$3
d. 0 (zero)

If the formula =B4 * C4 is in cell D4, the formula assigned to cell D5 when the formula is copied is __________.
a. =B4 * C4
b. =A4 * B4
c. =B5 * C5
d. =A5 * B5

When a formula containing the cell reference ___________ is copied to another cell, the row reference and column reference remain the same.
a. $B$16
b. B$16
c. $B16
d. B16

Similar questions relating to databases and presentation software were incorporated into the exam as well. In order to maintain the integrity of the exam, actual exam questions have not been included in this report, but the authors are willing to share information with individual universities upon request.

4. EVALUATION OF QUESTIONS

The exam was given using WebCT, and the statistics provided by WebCT were used in evaluating the value of each question (WebCT, 2005). The goal was to include in the final version of the exam only those questions that would differentiate between students who understood the concepts and students who did not. A sampling of the statistics is provided in Table 1, below.

Table 1 - Analysis of Questions - Sampling

Question   % Correct      % Correct    % Correct    Discrimination   Score    Score
           Whole Group    Upper 25%    Lower 25%                     Mean     SD
Q01            76             94           54            0.40        76.4%    42.6
Q02            55             75           32            0.40        55.4%    49.9
Q03            89             89           81            0.13        89.2%    31.2
Q04            53             91           13            0.50        53.4%    50.1
Q05            98            100           94            0.47        98.6%    11.6
Q06            87            100           70            0.34        87.8%    32.8
Q07            18             21           16            0.05        18.9%    39.3
Q08            77             91           48            0.39        77.0%    42.2
Q09            88             83           89            0.05        88.5%    32.0
Q10            61             91           24            0.49        61.5%    48.8
Q11            67             94           21            0.56        67.6%    47.0
Q12            27             32           29            0.00        27.7%    44.9
Q13            48             83           24            0.43        48.6%    50.2

All of the statistics relating to a particular question had to be analyzed in evaluating that question. For instance, Question 1 was judged to be a good question: 76% of all students taking the exam answered it correctly, 94% of the students who scored in the upper 25% of the group answered it correctly, and only 54% of the students who scored in the lower 25% of the group answered it correctly. It has a discrimination value of 0.40, which indicates a high level of discrimination between the students who understood the concepts and those who did not, and it has a relatively high standard deviation.

On the other hand, Question 3 shows a low discrimination value and little difference among the percentages of 1) all students, 2) the upper 25%, and 3) the lower 25% answering the question correctly. This question was removed from the exam. Like Question 1, Question 4 has an extremely high discrimination value and shows extreme differences between the overall group percentage and the upper 25% and lower 25% results (53%, 91%, and 13%, respectively), so it was deemed to be a good question. Conversely, Question 5 shows a high discrimination value but a very low standard deviation, and as the scores indicate, almost everyone answered it correctly. This question was removed from the final version of the exam.
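Although WebCT generated the statistics in Table 1 automatically, departments without such a tool could compute comparable figures directly from student responses. The Python sketch below is an illustration only: the response matrix is made up, the discrimination value is computed as a simple upper-minus-lower index, which may differ from the formula WebCT actually uses (for example, an item-total correlation), and the per-item mean and standard deviation are computed over item scores coded 0 or 100, which appears consistent with the "Score Mean" and "SD" columns above.

    import statistics

    # Hypothetical 0/1 response matrix: responses[s][q] is 1 if student s
    # answered question q correctly. These numbers are made up for
    # illustration and are not the actual placement-exam data.
    responses = [
        [1, 1, 1, 0],
        [1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 1],
        [0, 1, 1, 0],
        [1, 0, 1, 1],
    ]

    num_students = len(responses)
    num_questions = len(responses[0])

    # Rank students by total score and take the top and bottom quarters.
    totals = [sum(row) for row in responses]
    order = sorted(range(num_students), key=lambda s: totals[s])
    quarter = max(1, num_students // 4)
    lower_group = order[:quarter]        # lowest-scoring 25% of students
    upper_group = order[-quarter:]       # highest-scoring 25% of students

    def pct_correct(students, q):
        """Percentage of the given students who answered question q correctly."""
        return 100 * sum(responses[s][q] for s in students) / len(students)

    for q in range(num_questions):
        # Per-item scores coded 0 or 100: the mean is the percent correct,
        # and the standard deviation resembles the "SD" column in Table 1.
        scores = [100 * responses[s][q] for s in range(num_students)]
        whole = sum(scores) / num_students
        upper = pct_correct(upper_group, q)
        lower = pct_correct(lower_group, q)
        # Upper-lower index: one common definition of item discrimination.
        # WebCT's reported value may be computed differently.
        discrimination = (upper - lower) / 100
        sd = statistics.pstdev(scores)
        print(f"Q{q + 1:02d}: whole {whole:5.1f}%  upper {upper:5.1f}%  "
              f"lower {lower:5.1f}%  disc {discrimination:4.2f}  SD {sd:4.1f}")

Whatever formula is used for discrimination, the goal is the same as in the analysis above: identify items with a low discrimination value and little spread so that they can be reviewed or removed.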
There were also some questions that produced very interesting results. For instance, on Question 9, the upper 25% actually scored the lowest and the lower 25% scored the highest. In evaluating the results, the department determined that the better students were "reading into the question" information that was not there; in other words, they were making it more complicated than it really was. This question was also removed. Similarly, Question 12 had a discrimination value of zero and, as the scores indicate, was of little value.

After each question was evaluated and non-discriminating questions were removed, the final version of the exam contained 60 questions. At the end of the spring semester, this exam was again given to all students in the classes referenced above, and the questions were re-evaluated. Once it was determined that the remaining questions were of value, the exam was given to all incoming students majoring in business, so that each student could be placed into the appropriate class.

5. SCHEDULING OF CLASSES

The business school at our university is currently in a transition stage. The advanced course did not become the required course until this fall, and thus the change affected only incoming students. Although the placement exam was given to all incoming business students as they pre-registered for fall classes, a number of business students who were already enrolled at the university had not yet taken either the introductory or the advanced computer course. Many of these students were pre-registered for the Intro course for this fall. When they arrived for the fall semester, they were tested to determine whether they had the knowledge and skills to move into the advanced course.

In order to avoid scheduling problems for the students, sections of the introductory and the advanced class were scheduled for the same time slot, and several seats were reserved in the advanced sections for existing students who placed into the advanced class. Thus, on the first day of class this fall, students enrolled in the introductory course were given the placement exam, and those who placed into the advanced class, based on a predetermined cutoff score, were given the option to transfer to the advanced section offered during the same time slot.

6. CONCLUSION

As mentioned earlier, many business schools have elected to change the core Information Systems requirement in the business curriculum from an introductory computer course to an advanced applications course. Consequently, some type of assessment tool is needed to place students into the appropriate course, based on their level of knowledge and expertise. The department at our university would have preferred to give a "hands-on" placement exam, but for a number of reasons, including logistics and cost constraints, this option is not feasible at this point in time. The multiple-choice exam is an inexpensive and easily implemented alternative and appears to meet the needs of the Information Systems Department, although the hope is that it will eventually be replaced with an electronic test that "simulates" actual software packages. The SAM package, an electronic assessment tool, is currently used in assessing student progress within each course and would be a viable alternative for placement testing if the logistics and cost factors related to placement testing can be worked out. SAM provides a "hands-on" testing environment, does the scoring, and provides immediate results, as well as statistics, so that the "tasks" can be continually evaluated and revised (SAM, 2005).
However, until such time as the use of a testing package is deemed to be feasible for placement, the multiple-choice exam seems to provide the assessment data that we need for placing students into the appropriate classes, based on each student's knowledge and skills. The results will continue to be monitored, and the placement test will be re-administered in the classes at the end of the fall semester to assess whether student scores improved as a result of having completed the course.

7. REFERENCES

Cook-Sather, Allison (1997) "Making Connections: The Brain and the Creation of Optimal Learning Opportunities for Students." (serendip.brynmawr.edu/sci_edu/cook-sather.html).

Roth, Mark (2005) "An Advocate Assesses Use of Standards in Education." P.G. Publishing Co., Pennsylvania.

SAM (2005) Course Technology. (samcentral.course.com).

Schuh, Alexander V. (2004) "Equity and Technology Literacy in the Mid-Atlantic Region." (www.temple.edu/martec/assistance/pd/schuh_paper_web.pdf).

Sharif, Shan (2005) "Physics Tweaks Testing Methods at Georgia Tech." University Wire, Atlanta.

WebCT (2005) WebCT: Learning Without Limits. WebCT, Inc. (www.webct.com).