Establishing an Assessment Process for a Computing Program

Cheryl Aasheim, Department of Information Technology, Georgia Southern University, Statesboro, Georgia 30460, USA, caasheim@georgiasouthern.edu
Art Gowan, Department of Information Technology, Georgia Southern University, Statesboro, Georgia 30460, USA, artgowan@georgiasouthern.edu
Han Reichgelt, College of Information Technology, Georgia Southern University, Statesboro, Georgia 30460, USA, han@georgiasouthern.edu

ABSTRACT

This paper describes the assessment process designed and implemented for an undergraduate program in information technology (IT), with specific emphasis on course-level assessment, and discusses how the course-level assessment data can be used in the ABET CAC accreditation process. Several examples of course-level assessment data collected at Georgia Southern University are provided. The authors also attempt to illustrate that, while the actual design of the assessment process was time-consuming, the additional load on faculty to gather the relevant data is not too arduous.

Key Words: accreditation, assessment, ABET

1. INTRODUCTION

Accreditation is the primary quality assurance mechanism that the higher education community has adopted. The last few years have seen a significant shift in the way in which accreditation criteria are formulated, reflecting a re-conceptualization of quality as “fitness for purpose” rather than “adherence to standards” (see Garvin, 1984, for a discussion of different concepts of quality). Prior to the late nineties, in order to be accredited, a program or institution had to meet a number of criteria that specified in very detailed terms such factors as faculty qualifications, the number and size of classrooms and laboratories, and specific required courses or topics covered in the curriculum. The underlying concept of quality was “adherence to standards”. Following a trend set by international higher education accreditation and quality assurance agencies, US accreditation agencies moved to an outcomes-based approach, reflecting a concept of quality as “fitness for purpose”. Although there are differences in detail between accreditation agencies, most accreditation criteria are similar. They require programs or institutions to:

1. Specify clearly the skills, including cognitive skills (i.e., knowledge), that they expect students to achieve by graduation (sometimes referred to as “program outcomes” or “learning outcomes”), making sure that the outcomes are relevant to the various stakeholders of the program;
2. Set up an assessment process to determine to what extent the program or institution is successful in enabling students to achieve the outcomes;
3. Establish a process to use the data collected from the assessment process to make program improvements.

The other accreditation criteria follow more or less naturally from this. For example, a typical faculty-related criterion will no longer specify how many faculty must have specific degrees. Rather, in an outcomes-based approach, the faculty criterion will simply state that there must be sufficient faculty with the skills and authority to design and deliver a program of study that allows students to acquire the specified outcomes. A process is implemented to collect and analyze data on student success. If the analysis finds that there are too few faculty, or that some faculty lack the skills needed to deliver the curriculum, the result might be a request for additional faculty or for faculty training.
This utilization of information in an attempt to improve a program is often called “closing the loop” (Maxim, 2004).

There are a number of reasons that the accreditation agencies have shifted to an outcomes-based approach. Accreditation agencies in the US were under pressure from the federal government to adopt the outcomes-based approach (Banta, 2001). Also, many accreditation bodies wished to allow educational institutions and programs greater opportunities to be innovative and more responsive to their stakeholders, and at the same time to apply the quality improvement approaches that had proven so successful in other industries. Finally, the shift coincided with research in education indicating that an outcomes-based approach to program and course design is likely to lead to increased learning (e.g., Diamond, 1998; Knowles et al., 1998; Sork and Caffarella, 1989). It turned out to be more helpful to the adult learning process if students were told explicitly what skills and knowledge they could expect to acquire in a particular course or program of study.

Although there is some evidence that the shift to outcomes-based criteria has led to program improvements, at least for programs in engineering (Lattuca, Terenzini and Volkwein, 2006), some in the academic community remain unconvinced of the wisdom of the shift to outcomes-based accreditation criteria. Some have argued that relinquishing the checklist of specified criteria would lead to a drop in standards. Others have argued that a greater emphasis on assessment would lead to increased demands on faculty time at a time when faculty were already facing increasing pressures to improve teaching, increase research output, and increase professional service activities (cf. Hogan et al., 2002).

The aim of this paper is to describe the assessment process designed and implemented for an undergraduate program in information technology (IT), with specific emphasis on course-level assessment. The authors will also attempt to illustrate that, while the actual design of the assessment process was time-consuming, the additional load on faculty to gather the relevant data is not too arduous.

2. BACKGROUND

An important aspect of any drive towards excellence is the implementation of a rigorous assessment and continuous improvement process. Given the rapidly changing nature of the field, a department that does not continuously reassess its operations is in danger of quickly becoming obsolete. This paper describes the assessment and continuous quality improvement process the department has adopted for the BS program in Information Technology at Georgia Southern University.

The assessment and quality improvement process was primarily developed to further the mission of the Department of IT. The Department’s mission is to be recognized nationally and internationally as one of the best departments of Information Technology. It strives to achieve this goal through the provision of world-class undergraduate and graduate education in Information Technology, research, and service to both the community of IT educators and the wider community. However, the implementation of good assessment and quality improvement processes is also crucial for the BS in IT to maintain its accreditation with ABET CAC. The central elements of ABET’s accreditation criteria concern program educational objectives, program outcomes and assessment.
They state that:

* The program has documented measurable educational objectives based on the needs of the program’s constituencies.
* The program has documented measurable outcomes based on the needs of the program’s constituencies. (After a recent revision, the criteria now also list a number of minimal program outcomes.)
* The program uses a documented process incorporating relevant data to regularly assess its program educational objectives and program outcomes, and to evaluate the extent to which they are being met. The results of the evaluations are used to effect continuous improvement of the program through a documented plan.

The program received accreditation from ABET CAC in the fall of 2005. It was one of the first three IT programs accredited by ABET CAC under the general criteria for programs in computing.

3. PROGRAM EDUCATIONAL OBJECTIVES AND PROGRAM OUTCOMES

The Department of IT at Georgia Southern University has adopted ABET’s terminology and distinguishes between program educational objectives and program outcomes. Program educational objectives are defined as statements that describe the career and professional accomplishments that the program is preparing graduates to achieve, while program outcomes are statements that describe what students are expected to know and to be able to do by the time of graduation.

It is crucial that a program can provide evidence that the program educational objectives and program outcomes are relevant. Program educational objectives should describe accomplishments that are indeed regarded as accomplishments by the profession, and program outcomes should describe knowledge and skills that are in demand by organizations that graduates are likely to join, or that will in some other way allow graduates to be professionally successful immediately after graduation.

The Department has adopted the following program educational objectives for its BS in IT. A few years after graduation, graduates will demonstrate:

* The ability to take on positions as IT managers and/or the ability to embark on a research career in the field;
* Evidence of a pursuit of life-long learning;
* The ability to work effectively to make a positive contribution to society;
* Evidence of the ability to collaborate in teams;
* Allegiance to Georgia Southern University in general and the College of Information Technology in particular.

In order to allow graduates to achieve these accomplishments, the Department has adopted the following program outcomes for its BS in IT. Upon graduation, students with a BS in IT will be able to:

1. Demonstrate expertise in the core information technologies;
2. Demonstrate sufficient understanding of an application domain to be able to develop IT applications suitable for that application domain;
3. Identify and define the requirements that must be satisfied to address the problems or opportunities faced by an organization or individual;
4. Design effective and usable IT-based solutions and integrate them into the user environment;
5. Demonstrate an understanding of best practices and standards and their application to the user environment;
6. Identify and evaluate current and emerging technologies and assess their applicability to address individual and organizational needs;
7. Create and implement effective project plans for IT-based systems;
8. Work effectively in project teams to develop and/or implement IT-based solutions;
9. Communicate effectively and efficiently with clients, users and peers, both orally and in writing;
10. Demonstrate independent critical thinking and problem solving skills;
11. Demonstrate an understanding of the impact of technology on individuals, organizations and society, including ethical, legal and policy issues;
12. Demonstrate an understanding of the need for continued learning throughout their career.

4. ASSESSMENT PROCESS

ABET’s accreditation criteria imply that a full assessment process must address the following questions:

1. How relevant are the program educational objectives to the program’s constituencies?
2. To what extent has the program prepared graduates to achieve the program educational objectives?
3. How relevant are the program outcomes to the program’s constituencies?
4. To what extent do graduates achieve the program outcomes?

Moreover, since graduates are expected to achieve the program educational objectives and outcomes at least partly through the curriculum they are exposed to, the assessment process must be designed to answer a number of additional questions, including:

5. Has the curriculum been designed in such a way that it allows students to achieve the program educational objectives and program outcomes?
6. To what extent are students successful in each of the courses that make up the curriculum?

The answer to question 6 is of particular importance when it comes to program improvement. After all, changes to the curriculum are the most straightforward measures that a program can use to effect program improvements. Once designed, a curriculum either allows students to achieve the program outcomes or it does not; question 5 therefore does not have to be revisited as often as the other questions in our list. However, it must be revisited whenever the Department is considering a change to its curriculum, or in response to a problem identified through one of the assessment instruments used to answer the other questions in our list (1-6 above).

In order to answer question 5, the Department has developed and maintains a courses-to-program-outcomes matrix, indicating which courses contribute to which program outcomes (a minimal sketch of such a matrix is given after the list of assessment instruments below). The construction of this matrix was made considerably easier because the Department had agreed on a set of course learning outcomes for each course, which are recorded in the course syllabus. The course outcomes are used to give instructors guidance about which material to cover in a course and to what depth. When assigned a course, instructors implicitly agree to attempt to instill the course outcomes in students enrolled in the course.

In order to systematically gather data relevant to the other questions above, the Department has devised a number of assessment instruments:

* Graduate questionnaire: administered to graduates one year after graduation and every two years thereafter. This tool is used to gather data relevant to questions 1 through 4.
* Employer questionnaire: administered to employers shortly after administration of the graduate questionnaire. Employer questionnaires are administered only if the Department has received permission to do so from the graduate and are used to gather data relevant to questions 1 through 4.
* Student questionnaire: administered to graduating students. This tool is used to gather data relevant to questions 3 and 4.
* Course assessment instruments: used to collect data relevant to question 6.
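To make the courses-to-program-outcomes matrix concrete, the sketch below shows one minimal way such a matrix could be represented and checked for coverage. The course numbers are taken from the examples later in this paper, but the outcome assignments and the coverage check are hypothetical illustrations, not the Department's actual matrix or tooling.

```python
# A minimal, illustrative sketch of a courses-to-program-outcomes matrix and a
# coverage check for question 5. The outcome numbers assigned to each course
# below are hypothetical and do not reflect the Department's actual matrix.

PROGRAM_OUTCOMES = set(range(1, 13))  # the twelve program outcomes listed in Section 3

course_to_outcomes = {
    "IT 1130": {1, 11, 12},   # hypothetical assignments for illustration only
    "IT 3234": {5, 6, 8, 9},
    "IT 4235": {1, 4, 6, 8},
}

def uncovered_outcomes(matrix, outcomes=PROGRAM_OUTCOMES):
    """Return the program outcomes not addressed by any course in the matrix."""
    covered = set().union(*matrix.values())
    return sorted(outcomes - covered)

if __name__ == "__main__":
    missing = uncovered_outcomes(course_to_outcomes)
    if missing:
        print("Program outcomes with no supporting course:", missing)
    else:
        print("Every program outcome is covered by at least one course.")
```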
While responsibility for administering the first three instruments lies primarily with the department chair, responsibility for administering the course assessment instruments lies with the course instructor. Course instructors are asked to complete the forms and send the completed forms to the Department Chair, together with the course syllabus and whatever assessments they used (mid-term exams, final exams, projects, etc.). Filing of the relevant material is done by the departmental secretary. Data analysis is done by the Department Assessment Committee. The focus of this paper is on the course assessment instrument.

5. EXAMPLES OF COURSE LEVEL ASSESSMENT INSTRUMENTS

In order for a faculty member to determine the extent to which students are successful in a course (question 6 above), they must:

1. Map the content/assessments in their course to the outcomes set forth in the course syllabus;
2. Collect data on those assessments (exams, quizzes, projects, assignments, etc.);
3. Analyze the data to evaluate the degree to which students are successful in achieving the outcomes;
4. Perform a self-assessment of the results and propose actions to be taken if required.

The results of items 1-4 above are presented at the end of the semester for each course in a course level assessment document. In the following sections we expand on each of the items above and provide examples of the contents of the course level assessment document for three courses taught in the IT department at Georgia Southern University:

* IT 4235 - Problems in Web Applications, a senior-level course
* IT 3234 - Software Acquisition, Integration & Implementation, a junior-level course
* IT 1130 - Introduction to IT, a freshman introductory course

6. MAPPING COURSE ASSESSMENTS TO OUTCOMES AND COLLECTING DATA

In order for a faculty member to determine the extent to which the curriculum was delivered as designed, they must map the course outcomes to the assessments they give in class and collect data, in the form of grades, that indicates student performance for each outcome. Often, the mapping of outcomes to assessments is the most time-consuming task in preparing the course level assessment document. The instructor is responsible for creating the map, which essentially involves two decisions.

The first decision concerns the level of granularity. Outcomes can be mapped to assessments at the level of an entire quiz, project, or assignment, or at the finer level of individual quiz questions or pieces of a project or assignment. Assessment items might also be mapped to multiple outcomes.

The second decision concerns the number of assessment items for which data is collected. For example, if a faculty member uses several multiple-choice questions on an exam that correspond to multiple course outcomes, it can be quite tedious to track which questions map to which outcomes and to keep separate grades for each outcome for each student. Data does not necessarily have to be collected and analyzed on all assessment items. For example, Table 1 in Appendix A reflects a mapping of the learning outcomes to some labs, projects and two exams. The values indicate the percentage of students scoring 70% or higher on each assessment. While the mapping is not at a very low level of granularity, it does require the faculty member to consider the course-level learning outcomes, note student performance, and ultimately perform some analysis to address potential issues, as covered later.
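Once grades are recorded per assessment, the per-assessment achievement figures reported in Table 1 can be computed mechanically. The sketch below illustrates one way this calculation could be done; the outcome labels, assessment names and scores are invented for illustration and do not reproduce the Department's actual data or tooling.

```python
# A minimal sketch of the achievement calculation behind Table 1: for each
# assessment mapped to an outcome, the percentage of students scoring 70% or
# higher. All outcome labels, assessment names and scores here are invented.

outcome_to_assessments = {
    "Discuss the challenges of building a web application": ["Lab 1", "Lab 2", "Midterm Exam"],
    "Identify and evaluate new technologies": ["Lab 7", "Exam 2"],
}

# Raw percentage scores, one entry per student, keyed by assessment.
scores = {
    "Lab 1": [95, 88, 72, 64, 90],
    "Lab 2": [70, 55, 81, 77, 68],
    "Lab 7": [92, 85, 74, 90, 66],
    "Midterm Exam": [84, 73, 69, 91, 75],
    "Exam 2": [58, 66, 71, 80, 62],
}

def achievement_rate(grades, threshold=70):
    """Percentage of students scoring at or above the threshold."""
    return round(100 * sum(g >= threshold for g in grades) / len(grades))

for outcome, assessments in outcome_to_assessments.items():
    print(outcome)
    for assessment in assessments:
        rate = achievement_rate(scores[assessment])
        print(f"  {assessment}: {rate}% of students at 70% or above")
```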
Achievement is measured by the percentage of students achieving at least 70% on the lab, assignment, or exam. Richer data might be obtained by collecting data at a more granular level, but this comes at a cost in faculty time. In many cases, it is easier for a faculty member to plan and structure their course in such a way that the assessments map easily to the outcomes.

Tables 2 & 3 in Appendix A provide examples of how two courses in the IT department mapped assessments to course outcomes. In addition, Tables 1 & 2 in Appendix A provide information about the percentage of students that achieved each of the course outcomes based on their grades on the assessments for that outcome. The “% Achievement” column will be discussed in more detail in the next section. The strategies employed in each course were different.

In IT 3234 (Table 2, Appendix A) the assessments consisted of midterm and final exams with essay questions, an individual case study, several group projects and in-class participation. As the exams were essay based, there were very few questions, making it quite simple for the faculty member teaching the course to record student grades for each question. The faculty member could then use these individual question grades, along with the grades on the projects, case study and participation, in the data analysis and self-assessment discussed in the following section.

In IT 1130 (Table 3, Appendix A) the assessments consisted of four assignments, a midterm exam (consisting of multiple-choice, true-false and short answer questions and an Excel exercise) and a final exam (consisting of multiple-choice, true-false and short answer questions and an SQL query writing exercise). Before the course assessment document was first authored, the exams did not map easily to the course outcomes, so the faculty member teaching the course realigned the content of the course to better map to the outcomes. In the event that an exam corresponded to multiple outcomes, which is the case with the final exam as it is cumulative, the faculty member placed the questions in sections aligned with individual outcomes and then recorded grades for students for each section. The faculty member only needs to know which questions correspond to which outcome. For example, questions 1-50 might correspond to outcome 1a and questions 51-100 might correspond to outcome 1e. The faculty member then records two separate grades, one for outcome 1a and one for outcome 1e.

7. ANALYZING THE DATA AND PERFORMING SELF-ASSESSMENT

To determine the extent to which students are successful in the course, the grades collected for the assessments that correspond to each of the outcomes need to be analyzed. The faculty member must also decide whether the student performance for each outcome is acceptable and, if not, what to do to improve performance. For each course taught in the IT department, the instructor computes the percentage of students that receive a passing grade (70% or above) for each assessment. We call this the “achievement rate”. As long as this rate is sufficiently high, i.e., above 70%, the instructor knows that students are successfully achieving the outcomes set forth in the course. In any case where this number is not sufficiently high, the instructor adds a section to the course assessment document explaining why the number was deficient and/or what the instructor might do to remedy the situation. This is how the instructor performs self-assessment.
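To illustrate the section-based grading and the achievement-rate check just described, the sketch below splits a hypothetical cumulative final into two question ranges (questions 1-50 for outcome 1a, questions 51-100 for outcome 1e), computes each student's grade per section, and flags any outcome whose achievement rate falls below 70%. The answer data and question ranges are invented for illustration.

```python
# A minimal sketch of section-based grading and the achievement-rate check:
# questions 1-50 map to outcome 1a and questions 51-100 to outcome 1e, and an
# outcome is flagged for self-assessment when fewer than 70% of students score
# 70% or higher on its section. The student answer data below is invented.

SECTIONS = {"1a": range(0, 50), "1e": range(50, 100)}  # question indices per outcome

# Each student's answers on a 100-question final, scored 1 (correct) or 0.
student_answers = [
    [1] * 80 + [0] * 20,
    [1] * 60 + [0] * 40,
    [0] * 30 + [1] * 70,
    [1] * 45 + [0] * 10 + [1] * 45,
]

def section_grade(answers, questions):
    """Percent correct on the questions mapped to one outcome."""
    return 100 * sum(answers[q] for q in questions) / len(questions)

for outcome, questions in SECTIONS.items():
    grades = [section_grade(answers, questions) for answers in student_answers]
    rate = 100 * sum(g >= 70 for g in grades) / len(grades)
    status = "acceptable" if rate >= 70 else "flag for self-assessment"
    print(f"Outcome {outcome}: achievement rate {rate:.0f}% ({status})")
```

In practice the same summary could be produced directly from the spreadsheet an instructor already uses to record grades; the point is only that the two numbers the course assessment document needs (a grade per outcome per student, and the percentage of students at or above 70%) fall out of data the instructor is collecting anyway.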
For example, for outcome 2b in the IT 1130 course (Table 3, Appendix A), the success rate for writing SQL queries on the final exam was low. The instructor might provide either of the following explanations:

* I had a small number of students and this is the first semester for which this number was low. Due to the low sample size, I won’t make an immediate change. I will pay close attention to this in the next semester and, if the trend continues, I will make a change to the delivery of the course.
* I will devote an extra period to covering SQL and I will add a quiz on the material so that the students have a chance to practice prior to the final exam.

Monitoring the achievement rate allows an instructor to review his/her performance at the end of the semester, identify problem areas and continually improve the course. Once this process is in place, it is easy for an instructor to set his/her own goals regarding an acceptable rate of achievement for students.

8. CONCLUSION

One of the problems that departments often face when implementing a detailed assessment process is reluctance on the part of the faculty to take on more work. While some of the data that has to be gathered as part of a complete assessment process can be gathered by administrators, it is hard to see how detailed course level assessments can be conducted without direct faculty involvement. It is therefore beneficial to design a course level assessment process, aimed at measuring student achievement in the courses that make up the curriculum, that involves minimal additional work on the part of the course instructor.

This paper shows how one can use the assessments already required to measure individual student performance to also measure the effectiveness of the course overall. It simply involves asking instructors to map some course assessment instruments to explicitly formulated course outcomes and to use summary data on those assessments to determine how well students as a group have performed in the course. Since a good student assessment strategy requires one to set assessments based on the course outcomes anyway, the additional burden on faculty is minimal.

The course assessment process described here uses summary data: we currently collect data on the performance of the students as a whole, rather than on the performance of individual students. It would obviously be preferable if we could determine, for each individual student, to what extent he or she has achieved each course outcome. Indeed, some accreditation agencies require this type of data collection. While this data is available in the spreadsheets and other tools faculty use to track student grades in a course, reporting it is far more tedious than reporting the summary data that we currently have in place. However, many of the difficulties involved in reporting individual student achievement can be alleviated through the implementation of a well-designed course level assessment information system. The Department has implemented an assessment information system that allows it to automatically gather data from the various assessment instruments described in this paper (Gowan et al., 2006), and it is currently working on expanding this system to allow for the easy collection of data on individual student performance in each course. We hope to be able to report on this in the near future.

9. REFERENCES
Banta, T. (2001) “Assessing competence in higher education.” In: Palomba, C. & Banta, T. (eds) Assessing Student Competence in Accredited Disciplines. Sterling, VA: Stylus.
Diamond, R. (1998) Designing and Assessing Courses and Curricula (5th Edition). San Francisco: Jossey-Bass Publishers.
Garvin, D. (1984) “What does ‘product quality’ really mean?” Sloan Management Review, 24, pp. 25-45.
Gowan, A., B. MacDonald & H. Reichgelt (2006) “A configurable assessment information system.” Proceedings of SIGITE-06, Minneapolis, MN, October.
Hogan, T., P. Harrison & K. Schulze (2002) “Developing and maintaining an effective assessment program.” ACM SIGCSE Bulletin, 34(4), pp. 52-56.
Knowles, M., E. Holton & R. Swanson (1998) The Adult Learner: The Definitive Classic in Adult Education and Human Resource Development (5th Edition). Houston: Gulf Publishing Co.
Lattuca, L., P. Terenzini & J. Volkwein (2006) Engineering Change: A Study of the Impact of EC 2000. Baltimore, MD: ABET Inc.
Maxim, B. (2004) “Closing the loop: Assessment and accreditation.” Journal of Computing Sciences in Colleges, 20, pp. 7-18.
Sork, T. & R. Caffarella (1989) “Planning programs for adults.” In: Merriam, S.B. & Cunningham, P.M. (eds) Handbook of Adult and Continuing Education. San Francisco: Jossey-Bass.

APPENDIX A

Table 1: Mapping of assessments to outcomes for IT 4235

Course Outcome | Coverage | Assessment | % Achievement
Discuss the problems and challenges of designing, implementing, delivering, and supporting a major web-based application | Lecture, Labs, Individual Project | Lab 1, Lab 2, Lab 10, Midterm Exam, Exam 2, Individual Project | 100, 71, 86, 86, 43, 93
Demonstrate an understanding of the factors that affect international use of information technology, including technology support, data ownership and transfer, and personnel concerns | Lecture, Labs, Class participation | |
Recognize the challenges (technical, managerial and organizational) of change management | Lecture | Exam 2 | 43
Demonstrate an understanding of how to identify and evaluate new technologies and approaches | Lecture, Labs | Lab 7, Lab 9, Midterm Exam, Exam 2, Group Project | 93, 43, 86, 43, 93

Table 2: Mapping of assessments to outcomes for IT 3234

Course Outcome | Coverage | Assessment | % Achievement
Critically compare the advantages and drawbacks of purchasing software with the advantages and drawbacks of developing solutions from scratch | Lectures | Mid-term q. 1, Final q. 2 | 82, 80
Demonstrate an understanding of the problems and challenges of acquiring, integrating and implementing a software package | Lectures, Case studies, Group project 1 | Mid-term q. 2, Mid-term q. 3, Final q. 2, Project 1, Class participation, In-depth case | 78, 57, 80, 100, 100
Identify alternative packages that would satisfy an organization's needs | Lectures, Group project 2 | Project 2 | 100
Recommend one of the packages identified and justify their recommendation | Lectures | Final q. 1, Final q. 3 | 86, 60
Identify, analyze and resolve integration and implementation issues | Lectures, Case studies, Group project 3 | Final q. 4, Final q. 5, Class participation, Project 3, In-depth case | 80, 48, 94, 100
Recognize the challenges (technical, managerial and organizational) of managing the change that an organization faces when it implements a new IT application | Lectures, Case studies, Group project 3 | Class participation, Project 3, In-depth case | 94, 100
Recommend approaches to overcome these challenges | Lectures, Case studies, Group project 3 | Class participation, Project 3, In-depth case | 94, 100

Table 3: Mapping of assessments to outcomes for IT 1130

Course Outcome | Coverage | Assessment | % Achievement
1. Demonstrate a basic understanding of the field of IT, including the ability to: | Lecture | |
Define the term "Information Technology"; Lecture Midterm (MCQ) Final (MCQ) 95 100 1b. Explain the specializations within the BS IT degree; Lecture None N/A 1c. Recognize the disciplines that have contributed to the emergence of IT, namely Computer Science, Information Systems, and Computer Engineering; Lecture Guest speakers from IS & CS Assignment 5 100 1d. Understand the difference between CS, IS, and IT. Lecture Guest speakers from IS & CS Assignment 3 100 1e. Identify areas in which IT has significantly impacted individuals, organizations and/or societies, including ethical, legal and policy issues. Lecture Assignment 4 Final (MCQ) 100 80 2. Demonstrate an understanding of basic information technology software applications including the ability to: Lab Assignment 1 Midterm (Excel Activity) 90 90 2a. Using a given specification, create a simple database; Lecture/Lab Assignment 2 70 2b. Use SQL for simple queries using the Access database management system. Lecture/Lab Assignment 2 Final (SQL Queries) 95 60