Assessment Loop for the MIS Program at Central Connecticut State University: A Practice of Learning, Reflection and Sharing

O. Petkova (1)   A.T. Jarmoszko (2)
Department of MIS, Central Connecticut State University
New Britain, CT 06050, USA

Abstract

This paper describes one department's efforts to institute a program assessment process. Building on the results of a previously conducted pilot assessment, the department is now in its second assessment loop. Some theoretical foundations of outcomes assessment are presented and their suitability for the Management Information Systems discipline is discussed.

Keywords: Program Assessment, Outcomes Assessment, MIS, Information Systems

1. INTRODUCTION

In the last 15 years, educational assessment has become a major trend in US higher education. This trend is driven by accrediting organizations, professional academic associations, and government legislative bodies and, most importantly, by the desire of departments and universities to improve student learning and achievement. Even though the literature on theories and methodologies related to educational assessment is extensive, many faculty members who initiate assessment face numerous challenges and difficulties. Within the field of Information Systems the situation is even more complicated because of the relative scarcity of IS literature on program assessment. The focus, rather, has been on the definition of goals and objectives (Gorgone et al., 2002), not on how to measure achievement and how to build feedback mechanisms into the process of curriculum design. We believe this shortcoming must be addressed, especially since IS programs have been growing in size and their success needs to be monitored (Pick and Kim, 2000).

This exploratory study describes program assessment initiatives undertaken during the last three years at the Department of Management Information Systems, School of Business, Central Connecticut State University. The paper is a continuation of a previously published paper describing a pilot program assessment in the same department (Jarmoszko et al., 2003). What we present is still very much research in progress. Our intent is to summarize some of the existing program assessment methodologies, to explore the suitability of these methodologies for the discipline of Management Information Systems and, ultimately, to share our Department's experiences in program assessment with others contemplating similar actions within their own institutions.

2. FROM INDIVIDUAL STUDENT ASSESSMENT TO INSTITUTIONAL ASSESSMENT

Although the literature offers many definitions of assessment, most authors define it as an important activity for improving the learning and development of students. For example, Erwin (1991) sees assessment as "... the systematic basis for making inferences about the learning and development of students. More specifically, assessment is the process of defining, selecting, designing, collecting, analyzing, interpreting, and using information to increase students' learning and development." Some definitions explicitly state that assessment should be used not only in the limited context of classroom teaching and learning, but also in the wider context of institutional improvement, as in the definition by Astin (1993):
"I shall consider assessment to include the gathering of information concerning the functioning of students, staff, and institutions of higher education. The information may or may not be in numerical form, but the basic motive for gathering it is to improve the functioning of the institution and its people. I use functioning to refer to the broad social purposes of a college or university: to facilitate student learning and development, to advance the frontiers of knowledge, and to contribute to the community and the society."

One of the most comprehensive definitions of assessment, used as a basis for many assessment plans, is the one proposed by the American Association for Higher Education: "Assessment is an ongoing process aimed at understanding and improving student learning. It involves making our expectations explicit and public; setting appropriate criteria and high standards for learning quality; systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards; and using the resulting information to document, explain, and improve performance. When it is embedded effectively within larger institutional systems, assessment can help us focus our collective attention, examine our assumptions, and create a shared academic culture dedicated to assuring and improving the quality of higher education" (Angelo, 1995).

Looking at the issue from the macro perspective, one could conclude that, over the last two decades, the continuous effort to improve higher education has led to a better understanding of the assessment process. This effort has broadened assessment from the narrow scope of assessing individual students and courses to the more complicated process of assessing departments or programs, and finally to the very complex and difficult task of assessing whole institutions. In this paper we address assessment at the departmental level. Such assessment should be used to evaluate curricula and programs, to plan improvements and, when necessary, to evaluate the effect of change (University of Montana, 2004). Assessment helps departments identify positive and negative trends and often points to the specific changes that might be needed.

As the above definitions indicate, assessment is an ongoing, continuous process aimed at improving performance. Assessment (often called outcomes assessment) can be formative, summative, or both. The purpose of formative assessment is to evaluate a program's effectiveness and suggest steps for improvement. In summative assessment, the value or worth of a new curriculum may be judged by comparing it with the curriculum it is intended to replace; in this case, data are gathered for the purposes of accountability, advancement, and decisions about continuation of the program. This paper describes a formative assessment conducted as part of a University assessment exercise.

3. ASSESSMENT STEPS

There are literally hundreds of valuable resources on the web providing assistance and guidelines to departments undergoing the complex and challenging process of outcomes assessment. Although the multitude of source materials proves that the topic is important, it also makes it difficult to synthesize and to draw conclusions. We confine our discussion of assessment approaches to three models, the ones that helped us conduct our MIS-specific program assessment.

Rogers and Sando (1996) suggest seven steps in the development of an assessment plan:
1. Identify broad goals and specific objective(s) for each goal.
2. Develop performance criteria for each objective.
3. Determine the practice(s) to be used to achieve the goals.
4. Select an assessment method for each objective.
5. Conduct assessments.
6. Determine feedback channels.
7. Evaluate whether the performance criteria were met and the objectives achieved.

One possible problem with the above sequence is that the feedback channels are determined only after the assessment has been conducted. This difficulty is addressed by the model proposed by the University of Wisconsin-Madison (1998), which claims that adhering to the following five-step process makes the complexities of developing effective and efficient assessment plans, especially when this is done for the first time, less arduous and time consuming:
1. Define educational/programmatic goals and objectives for the major or program.
2. Identify and describe instruments or methods for assessing student achievement at important stages in the program.
3. Determine how the results will be disseminated and used for program improvement.
4. Develop a timetable for accomplishing the previous three steps.
5. Implement assessment plans and revise as needed.

Perrin et al. (2002) introduce another very important improvement: feedback to the students (step 5 below). However, their approach is limiting in that it prescribes course-embedded assessment as the method for assessing learning outcomes.
1. Determine program goals and objectives.
2. Determine which courses address each of the goals.
3. Determine the time points at which you would like to collect data on student learning.
4. Determine the type of data you would like to collect and how it will be evaluated.
5. Determine how students will receive feedback.
6. Determine where you will collect the data.
7. Determine who will review the information and how it will be used for program changes.
(Steps 3, 4, 5, and 6 might need to be done simultaneously or in a different order.)

Further analysis of the researched assessment approaches shows that they differ primarily in the number and the sequencing of assessment steps. The assessment model we finally adhered to has four steps:
1. Setting goals and asking questions;
2. Gathering evidence;
3. Interpretation;
4. Using the results.
This is the model proposed by the Systems Office of the Connecticut State University System (Figure 1). We found it intellectually appealing because of its simplicity and flexibility. For example, the first step, setting goals and asking questions, combines many of the steps listed by the other approaches.

Figure 1. The Assessment Loop at Connecticut State University.

In choosing our model, we were fully aware that the assessment process is not a rigid, prescriptive sequence of steps but rather a holistic, environment-specific, continuous process. Much depends on the discipline itself, the managerial practices of the department, and the culture of the larger institution.

4. ASSESSMENT LOOP AT CCSU

In developing the MIS program assessment plan, the Department of MIS at Central Connecticut State University adhered to the nine principles of good practice for assessing student learning outlined by Astin et al. (2003). Another major principle we followed during the two cycles of our ongoing assessment process was department-wide participation. The importance of this factor is confirmed by the experience of Concordia College (2004).

Cycle One: The Pilot

In the Spring 2002 semester, the Department of Management Information Systems decided to conduct a pilot exercise in assessing its undergraduate program.
These actions were mandated by the State of Connecticut legislature. In Fall 2001, the Department participated in a campus-wide series of meetings meant to reaffirm its mission and goals as well as to consider viable program assessment options. After much deliberation, it was decided to conduct a course-embedded assessment pilot via a fourth-year MIS course, Structured Systems Analysis and Design (MIS 461). We were fully aware that in conducting a program assessment the focus should be on the major as a whole rather than on individual courses or on the minor, as specified by some of the assessment guides available in the literature (Concordia College, 2004). However, we opted for a course-embedded assessment because of the scarcity of other options within the limited timeframe. We deliberated possible methods of assessment within the SA&D setting and decided to employ a combination of simulation and performance appraisal through a set of standardized business cases used in semester-long group projects. For more information on the pilot, especially on the reformulation of goals, objectives, and performance criteria and on data collection, see Jarmoszko et al. (2003). Analysis of the pilot results prompted some important curriculum changes. For example, we created a new course in Systems Implementation and Project Management and reorganized course prerequisites throughout the entire MIS curriculum.

Cycle Two: Process Evolution

By the Fall of 2003, the Department had accumulated enough experience and theoretical knowledge to continue the program assessment in a more holistic and theoretically motivated way. Although the pilot assessment had produced useful results that helped us draw conclusions about the effectiveness of the program and reform the curriculum, it was obvious that we had to continue the process in a more formal, detailed, and structured way. The literature recommends that assessment of student learning is most effective when it is multidimensional and integrated (Astin et al., 2003). Following a series of department-wide discussions, we concluded the same. The consensus was that course-embedded assessment is not an appropriate method for assessing the whole MIS program and that we should examine other approaches, especially methods that provide for a comprehensive, holistic evaluation of different program aspects.

The first of the "Nine Principles of Good Practice for Assessing Student Learning" recommended by the American Association for Higher Education is: "The assessment of student learning begins with educational values ... Where questions about educational mission and values are skipped over, assessment threatens to be an exercise in measuring what's easy, rather than a process of improving what we really care about" (Astin et al., 2003). Complying with this principle, the mission of the department was discussed, scrutinized, and reformulated during a series of meetings. From this point on, it was necessary to continue with the formulation of the program goals.

Formulating Program Goals

The critical aspect of any assessment effort is the identification of learning goals for the program as a whole. A small number of critical learning goals is one of the characteristics of a good assessment plan, according to the assessment guidelines of Concordia College (2004).
In formulating the four learning goals listed below and the corresponding objectives, the Department adhered to the requirements of the IS 2002 curriculum (Gorgone et al., 2002), aligned with the Departmental mission:
1. Understanding the leadership role of MIS in achieving competitive advantage through informed business decision-making.
2. Analyzing and synthesizing business information needs to facilitate evaluation of strategic alternatives.
3. Effectively communicating strategic alternatives to facilitate decision-making.
4. Applying the MIS knowledge and skills learned to facilitate the acquisition, development, deployment, and management of information systems.
After the main program goals were formulated, the logical next step in the assessment process was to turn to the learning objectives and assessment measures.

Bloom's Taxonomy and Outcomes Assessment

According to the assessment materials of the University of Montana (2004), outcomes assessment of student learning, of the effectiveness of a departmental curriculum, and of teaching effectiveness can be accomplished using Bloom's taxonomy of educational objectives. Bloom (1956) proposes three dimensions that should be covered in the teaching-learning process: cognitive learning, behavior/skills, and attitudes/values. In the following discussion we provide examples of assessment measures in the three dimensions of Bloom's taxonomy that are suitable for the Management Information Systems discipline.

According to Bloom, measures of cognitive learning incorporate knowledge, comprehension, application, analysis, synthesis, and evaluation, and are either course-specific or focused on a major discipline. Knowledge questions ask what, where, when, and who; knowledge of facts, definitions, and terms is typical of memory items. A typical example of an MIS question in this category is: "List the steps in project initiation." Questions that test knowledge typically require rote memorization rather than actual learning and are unsuitable as outcome measures of student learning. Comprehension is the lowest level of learning with understanding; it involves the students' ability to translate information into their own words. "Describe the steps in project initiation" is an example of such an MIS question. At the application level of learning, students are asked to apply their knowledge to different situations and in different contexts; they are expected to abstract the information learned and apply it to a situation in the discipline. An example of an application-level question in MIS is: "How can data flow diagrams be used as analysis tools?" Analysis questions ask students to analyze, compare, and contrast relationships between things, as in the question: "Compare data flow diagrams to Oracle's process model diagrams." Synthesis questions ask students to pull together parts and elements to form a whole. A typical question from the MIS field is: "What is the role of data flow diagrams, decision tables, state transition diagrams, and entity-relationship diagrams in building a complete model of a system?" The highest level of student learning is evaluation.
Students are asked to make judgments about the value of the material presented, as in the question: "How might the project team recommending an enterprise resource planning design strategy justify its recommendation as compared to other types of design strategies?" We believe that cognitive learning is best measured at the application, analysis, synthesis, and evaluation levels. Some of the most common assessment methods for this purpose are course tests, writing assignments, and summative knowledge projects during the senior year.

The second dimension of Bloom's taxonomy is behavior/skills. Assessment measures of behavior/skills outcomes measure not what students know, but what they can do. The skills and behaviors needed for effective practice in the MIS profession (programming, systems analysis and design, networking, and general decision-making skills; the ability to work in teams, manage time, present, and defend an argument) must be assessed here. Team projects have proved to be the best activities for assessing these skills and behaviors.

The third dimension of the taxonomy relates to attitude and value outcomes. The assessments here must gauge personal and social values, namely responsibility, commitment, engagement, the ability to compromise, and so on. Two useful tools for assessing attitude and value outcomes in MIS are peer assessment and lessons-learned reports written as part of the team project activities.

The above discussion is based on the dimensions of the assessment framework published by the University of Montana (2004). These considerations were helpful in the formulation of the learning objectives and measures for the MIS program assessment. Additional discussion of the assessment methods and their suitability for MIS program assessment can be found in Jarmoszko et al. (2003). After the outcomes were articulated, it was important to use curriculum and syllabus analysis to map out the specific courses and learning activities that support the program outcomes.

Curriculum and Syllabus Analysis

Curriculum analysis is one of the popular indirect indicators of learning. It provides a means to chart which courses cover which goals/learning outcomes. During a special department meeting in December 2003, the department members identified four courses which together cover most comprehensively all of the educational goals listed above: MIS 400 (Business Decision Analysis/Knowledge Base), MIS 410 (Networks & Telecom), MIS 450 (Enterprise, Strategies and Transformation) and MIS 462 (Project Management and Systems Implementation). Through the subsequent syllabus analysis, conducted by the faculty teams teaching these courses, the most important artifacts and activities that can be used to measure the learning outcomes were identified. These include, but are not limited to, projects, presentations, simulations, and case studies. The Department agreed that a portfolio assessment approach is the assessment method we should be moving toward, with the artifacts from the courses listed above included in the portfolio. The long-term goal of the MIS program assessment is the creation of student portfolios that represent students' work and accomplishments. It was obvious that the portfolio approach must be supplemented with behavioral observations, simulations, and performance appraisal in order to create an effective assessment program.
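
To make the idea of a curriculum map concrete, the short Python sketch below records which program goals each of the four selected courses might cover and reports the coverage per goal. The course-to-goal assignments shown are illustrative placeholders only, not the Department's actual mapping; the goal descriptions are shortened paraphrases of the four goals listed earlier.

```python
# Illustrative curriculum map. The course-to-goal assignments are
# placeholders for illustration, not the Department's actual mapping.
PROGRAM_GOALS = {
    1: "Leadership role of MIS in achieving competitive advantage",
    2: "Analyzing and synthesizing business information needs",
    3: "Effectively communicating strategic alternatives",
    4: "Acquisition, development, deployment and management of IS",
}

CURRICULUM_MAP = {
    "MIS 400": {1, 2, 3},  # Business Decision Analysis / Knowledge Base
    "MIS 410": {2, 4},     # Networks & Telecom
    "MIS 450": {1, 2, 3},  # Enterprise, Strategies and Transformation
    "MIS 462": {3, 4},     # Project Management and Systems Implementation
}


def goal_coverage(curriculum_map):
    """Return, for each program goal, the list of courses that cover it."""
    coverage = {goal: [] for goal in PROGRAM_GOALS}
    for course, goals in curriculum_map.items():
        for goal in goals:
            coverage[goal].append(course)
    return coverage


if __name__ == "__main__":
    for goal, courses in goal_coverage(CURRICULUM_MAP).items():
        status = ", ".join(sorted(courses)) if courses else "NOT COVERED"
        print(f"Goal {goal} ({PROGRAM_GOALS[goal]}): {status}")
```

A chart of this kind makes gaps visible at a glance: any goal left without a covering course signals a curriculum or syllabus issue to revisit.
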
Data Collection

As a department, we decided that the input data for the assessment process would be the artifacts created by students in the four selected courses. These artifacts include, but are not limited to:
* several reports on individual projects in knowledge management;
* a report on an interactive simulation exercise in strategic decision making;
* a group project report on decision support systems;
* a group project report on network design;
* a group project report and a completed and implemented information system (the final capstone experience).
We agreed that faculty would evaluate these artifacts and assign performance indicators, which would then be used to monitor the degree to which our program learning goals are being met. The artifacts in the four courses were evaluated by faculty committees consisting of at least two faculty members. Each committee was responsible for determining its own assessment methods. The objective was to study the submitted artifacts and to assign a performance indicator for each of the program goals on a three-level scale: fails to meet the goal, meets the goal, or exceeds expectations for the goal.
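
To illustrate the mechanics of this step, the hypothetical Python sketch below tallies committee-assigned indicators per program goal on the three-level scale just described. The course names, artifact names, ratings, and data layout are all assumed placeholders, not the Department's actual records or instruments.

```python
from collections import Counter

# Three-level scale used for the performance indicators.
SCALE = ("fails to meet", "meets", "exceeds expectations")

# Each record: (course, artifact, program goal 1-4, committee rating).
# All values below are made-up examples for illustration only.
ratings = [
    ("MIS 400", "knowledge management project", 1, "meets"),
    ("MIS 400", "knowledge management project", 2, "exceeds expectations"),
    ("MIS 410", "network design group report", 4, "meets"),
    ("MIS 450", "strategic simulation report", 3, "fails to meet"),
    ("MIS 462", "capstone information system", 4, "exceeds expectations"),
]


def summarize_by_goal(records):
    """Count how often each program goal received each rating."""
    summary = {}
    for _course, _artifact, goal, rating in records:
        if rating not in SCALE:
            raise ValueError(f"unknown rating: {rating}")
        summary.setdefault(goal, Counter())[rating] += 1
    return summary


if __name__ == "__main__":
    for goal, counts in sorted(summarize_by_goal(ratings).items()):
        total = sum(counts.values())
        met_or_better = counts["meets"] + counts["exceeds expectations"]
        print(f"Goal {goal}: {met_or_better}/{total} artifact ratings met or exceeded the goal")
```

A per-goal summary of this kind is what feeds the interpretation step of the loop: goals with a high share of "fails to meet" ratings become candidates for curriculum changes.
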
Assessment Results/Interpretation

On the basis of the collected information, some major conclusions about the MIS program emerged.

Strengths of the program:
1. The program prepares students well for flexible and changing organizational and technological environments. Given the challenge, students did more than was expected of them.
2. Program projects create an opportunity for students to express their creativity and problem-solving ability. Students appeared not only to enjoy working on projects but also to internalize and apply what they learned to similar decision-making exercises. Similarly, concepts learned in class were used effectively when making strategic decisions.
3. The program prepares students well for putting together material learned in class and gathered from vendors and other sources. Overall, project work was documented very well.
4. By and large, students in the program showed very good presentation skills that brought out the main lessons learned in class.

Weaknesses:
1. Students did not have good skills for handling group conflicts.
2. Facilities were lacking for students to do a better job in the program. For example, the lab layout and lack of space made it difficult for students to conduct a trade show to showcase their work. Similarly, the physical and technical configurations of the classroom computers need to be enhanced to allow students to work on group projects in class in addition to their work outside the classroom. For example, students were not able to work on the interactive simulation within the classroom because of the lack of sound: groups needed to hear the interactive simulation conversations simultaneously, and individual headsets do not solve the problem.
3. Practical skills in some areas, such as databases and programming, were not sufficient. A revision of some of the lower-level courses is necessary.

The results of the MIS program assessment were also sent to the office of the Vice-Provost for accountability purposes. However, the more important use of these results is that they help the Department to better understand the effectiveness of its teaching methods, syllabi, and curriculum. A consequence of such understanding is a set of improvements that lead to better learning outcomes.

5. CONCLUSIONS

The contribution of the paper is its synthesis of the existing theory on assessment in general and its operationalization for the assessment of a Management Information Systems program. The assessment process at the Department of MIS at Central Connecticut State University has been a rich learning experience for the faculty. The cyclic nature of the process permits us to reflect on our past assessment exercises and to plan improvements to the process, which in turn lead to improvements in student learning. As the assessment process has evolved from a single course-embedded assessment through elements of a portfolio toward a completed e-portfolio assessment, we have shared our teaching practices and, hopefully, have also become better teachers.

6. ACKNOWLEDGEMENTS

The authors would like to acknowledge the intellectual input of our colleagues: George Claffey, Marianne D'Onofrio, Michael Gendron, JooEng Lee-Partridge and Leslie Leong.

7. REFERENCES

Angelo, T.A., 1995, AAHE Bulletin, November 1995, p. 7.
Angelo, T.A., 1999, "Doing Assessment as if Learning Matters Most", Accessed at http://aahebulletin.com/public/archive/angelomay99.asp on August 15, 2004.
Astin, A.W., 1993, "Assessment for Excellence", Oryx Press.
Astin, A.W., T.W. Banta, et al., 2003, "Nine Principles of Good Practice for Assessing Student Learning", Accessed at www.aahe.org/assessment/principl.htm on August 17, 2004.
Bloom, B.S., 1956, "Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I, Cognitive Domain", New York; Toronto: Longmans, Green.
Concordia College, 2004, "Guidelines for Departmental Assessment Plan", Accessed at http://www.cord.edu/dept/assessment/guidelines.htm on August 10, 2004.
Erwin, T.D., 1991, "Assessing Student Learning and Development", Jossey-Bass.
Gorgone, J.T., G.B. Davis, J.S. Valacich, H. Topi, D.L. Feinstein and H.E. Longenecker, 2002, "IS 2002: Model Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems", Accessed at http://www.is2002.org on February 20, 2003.
Jarmoszko, A., O. Petkova and M. Gendron, 2003, "Toward Assessment of Information Systems Programs: Evaluating Learning Outcomes in Systems Analysis and Design Courses", Accessed at http://ecommerce.lebow.drexel.edu/eli/2003Proceedings/docs/166Jarmo.pdf on August 10, 2004.
Perrin, N., T. Dillon, M. Kinnik and D. Miller-Jones, 2002, "Program Assessment: Where to Start?", Accessed at http://www.clas.pdx.edu/assessment/program_assessment.html on August 10, 2004.
Pick, J.B., and J. Kim, 2000, "Program Assessment in an Undergraduate Information Systems Program: Prospects for Curricular and Programmatic Enhancement", Proceedings of the 15th Annual Conference of the IAIM, Brisbane, Australia, 2000.
Rogers, G., and J. Sando, 1996, "Stepping Ahead: An Assessment Plan Development Guide", Accessed at http://www.rose-hulman.edu/irpa/old/steppingahead.html on May 17, 2004.
University of Montana, 2004, "What is Assessment", Accessed at http://www.umt.edu/provost/assessment/what_is.htm on August 10, 2004.
University of Wisconsin, 1998, "Outcomes Assessment", Accessed at http://www.wisc.edu/provost/assess/manual/manual1.html on August 10, 2004.

(1) PetkovaO@CCSU.edu
(2) JarmoszkoA@CCSU.edu