Development of Assessment Portfolios for IS Majors

Nelly Todorova, Annette Mills
Department of AFIS, University of Canterbury
Christchurch, New Zealand

Abstract

While assessment is acknowledged as a critical enabler of student learning, the literature shows a lack of alignment between learning objectives and the types of assessment used in practice. This paper draws on findings from the education and IS literature to define and evaluate the role of assessment in promoting higher-level learning objectives for Information Systems majors. The paper recommends a four-stage approach to the evaluation and development of assessment portfolios for IS education. The discussion closes with recommendations for future research.

Keywords: assessment, IS education, students' approaches to learning, knowledge levels, IS curriculum

1. INTRODUCTION

The primary goal of an IS undergraduate education for the IS major is to produce graduates who can function in an entry-level IS position and have a strong basis for continued career growth (Richards & Pelley, 1994; Lee et al., 1995). IS graduates must therefore have the technical skills, knowledge and understanding appropriate to their specialisation, as well as an organisational view of IS. They must also be life-long learners, able to question, to think critically and independently, and to learn. To help students develop these abilities, IS educators need to examine both "what" and "how" they teach (Chalmers & Fuller, 1996).

The education literature recognizes assessment as the single most important factor affecting learning outcomes (depth of knowledge and skills) and students' approaches to learning (Biggs, 2003). This is particularly relevant to the IS discipline, which targets higher-level learning outcomes such as detailed understanding and application. However, there is a lack of research, both in the general education area and in IS education, investigating the effect of assessment types on student learning outcomes. The aim of this paper is therefore to identify the effect of individual types of assessment on learning outcomes and to propose a comprehensive framework for the development of assessment strategies.

The remainder of this paper reviews the IS curriculum objectives and the role of assessment in achieving these objectives. The paper provides a review of the general education and IS literature and an analysis of the effect of assessment types on student learning. Finally, we propose a framework and tools for the evaluation and development of assessment portfolios.

2. IS TEACHING AND LEARNING

Determining the "what" in IS education, that is, the knowledge and skills needed by the IS graduate, has been a key focus of IS curriculum studies over the past three decades. Such studies endeavour to identify a course of study aligned to the needs of and changes in the IS environment. This includes specifying program and course content, teaching methods and resources, as well as the expected exit characteristics of IS graduates. IS academics since 1972 (Ashenhurst, 1972; Davis et al., 1997; Gorgone et al., 2003) have consistently concluded that IS graduates require entry-level knowledge and abilities that include technical and business expertise. While much attention has been paid to developing an IS curriculum that is responsive to organisational needs, and which covers essential topics while defining an appropriate balance between technical expertise and business knowledge (e.g. Davis et al., 1997; Gorgone et al.,
2003; Richards & Pelley, 1994), IS education is still criticised for failing to produce graduates who have the required skill-set (Lee et al., 1995; Nunamaker & Konsynski, 1982; Tang et al., 2000/2001; Yen et al., 2003). This failure is attributed primarily to differences between IS curricula offerings and business needs. While technical and business knowledge subjects are more readily addressed by changes in curricula content, fostering highly desirable interpersonal skills and personal traits such as critical thinking and creativity is more difficult. Here it is suggested that teaching/learning strategies such as team-work, problem-solving, and internships are possible ways to foster such skills. Hence an equally important aspect of IS skill development lies in the "how" of IS teaching (Chalmers & Fuller, 1996). How IS education is delivered is influenced not only by the curriculum objectives but also by the teaching/learning activities and assessment tasks (Biggs, 2003).

Since Information Systems is a practical discipline (Work, 1997), IS education must include a significant practical and applied element. For example, Richards and Pelley (1994) identify team/group projects and hands-on/real-world experience as key components of an IS education, as well as the experiences gained through systems analysis and design, programming, and other technical aspects of computing. IS education research has therefore paid much attention to suggesting and evaluating various teaching and learning strategies and their impact on learning outcomes. These include the use of site visits (Cragg, 1998), interactive, hyper-linked web-based case studies (Liebowitz & Yaverbaum, 1998), role plays and simulations (Freeman, 2003; Nulden & Scheepers, 2002), case studies (Mukherjee, 2000), in-class problem-solving exercises (Mukherjee, 2004), cooperative-learning groups (Wehrs, 2002) and flexible learning strategies using web-based technology (Bryant et al., 2003). Recent research has also examined learner satisfaction and the teaching effectiveness of e-learning systems (Wang, 2003). In contrast, there is little evidence of such attention being paid to understanding the role of assessment in achieving IS education goals. This paper will therefore review the goals of IS education, particularly as these relate to desired knowledge outcomes for IS majors, and discuss how these outcomes may be assessed in the teaching/learning context.

3. KNOWLEDGE LEVELS FOR IS EDUCATION

A key outcome of the joint IS curriculum efforts of the ACM, AIS and AITP societies is the IS'97 and IS 2002 Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems (Davis et al., 1997; Gorgone et al., 2003). Both curricula identify an IS Body of Knowledge as well as a Depth of Knowledge Metric that links key topics with desired levels of competency (or depth of knowledge). These curricula suggest the IS Body of Knowledge consists of three major subject areas: Information Technology (e.g. operating systems, databases, telecommunications), Organisational and Management Concepts (e.g. general organisational theory, decision theory, interpersonal skills, change management) and Theory and Development of Systems (e.g. applications planning, systems development, risk management, project management). The IS graduate is therefore expected to have analytical and critical thinking skills (e.g. organisational problem-solving, creativity), business fundamentals (e.g.
business models, evaluation of business performance), interpersonal, communication and team skills, as well as technology-related skills and information systems/technology enablement skills (e.g. systems analysis and design, business process design).

The IS Depth of Knowledge Metric (Table 1), which is based on Bloom's taxonomy, has five levels, namely awareness (recognition), literacy, concept/use, detailed understanding and application, and advanced. For undergraduate IS education, the IS curriculum models consider only Levels 1-4, with Level 5 reserved for graduate programs. The IS curricula also identify three target levels (i.e. courses for all students, courses for IS minors and courses for IS majors), with each level delivering increased competency. The curriculum models therefore recognise that while it is sufficient for all (other) students to achieve Level 1 (awareness) knowledge in topic areas such as IS planning and software development, a higher level of competency leading to effective use (Level 3: Usage) is desired for the IS major.

Table 1. IS Knowledge Levels and Associated Learning Activities
(Source: IS 2002 Model Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems)

Level 1 - Awareness: Introductory recall and recognition
Bloom's level: 1. Knowledge, Recognition
Associated learning activities: Class presentations, discussion groups, watching videos, structured laboratories. Involves only recognition, but with little ability to differentiate. Does not involve use.

Level 2 - Literacy: Knowledge of framework and contents, differential knowledge
Bloom's level: 1. Differentiation in context
Associated learning activities: Continued lecture and participative discussion, reading, teamwork and projects, structured labs. Requires recognition knowledge as a prerequisite. Requires practice. Does not involve use.

Level 3 - Concept/Use: Comprehension and ability to use knowledge when asked
Bloom's level: 2. Comprehension / Translation / Extrapolation / Use of Knowledge
Associated learning activities: Requires continued lab and project participation; presentations involving giving explanations and demonstrations, and accepting criticism; may require developing skills in directed labs.

Level 4 - Detailed Understanding and Application: Selection of the right thing and using it without hints
Bloom's level: 3. Application Knowledge
Associated learning activities: Semi-structured, team-oriented labs where students generate their own solutions, make their own decisions, commit to them and complete assignments, and present and explain solutions.

Level 5 - Advanced: Identification, use and evaluation of new knowledge
Bloom's levels: 4. Analysis, 5. Synthesis, 6. Evaluation
Associated learning activities: An advanced level of knowledge for those very capable of applying existing knowledge, in which de novo solutions are found and utilized in solving and evaluating the proposed new knowledge.

Finally, the curriculum models also recommend learning activities for each knowledge level. For example, the awareness level, which embodies knowledge objectives expressed as "Define…" and "List the characteristics of …", may be achieved using teaching/learning activities such as class presentations and discussion groups. On the other hand, achieving the higher-level knowledge objectives of concept/use and detailed understanding and application is associated with activities such as team-oriented lab and project work that require explanation and problem-solving; these activities are also likely to encourage higher-order approaches to learning, even for students who naturally use surface approaches (Biggs, 2003).
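To make the structure of the Depth of Knowledge Metric easier to work with when planning courses, Table 1 can be treated as a simple lookup from depth level to recommended learning activities. The sketch below is purely illustrative (Python is used only for convenience, and the names and data structure are a hypothetical simplification rather than part of the IS'97 or IS 2002 documents):

```python
# Illustrative sketch only (not part of the IS'97/IS 2002 documents): the Depth
# of Knowledge Metric in Table 1 treated as a lookup structure, so that a
# target depth level can be mapped to its recommended learning activities.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class KnowledgeLevel:
    depth: int                    # 1-5, per the IS curriculum metric
    label: str                    # e.g. "Awareness", "Concept/Use"
    blooms: str                   # corresponding Bloom's category (or categories)
    activities: Tuple[str, ...]   # associated learning activities (from Table 1)

DEPTH_OF_KNOWLEDGE = {
    1: KnowledgeLevel(1, "Awareness", "Knowledge / Recognition",
                      ("class presentations", "discussion groups",
                       "watching videos", "structured laboratories")),
    2: KnowledgeLevel(2, "Literacy", "Differentiation in context",
                      ("lecture and participative discussion", "reading",
                       "teamwork and projects", "structured labs")),
    3: KnowledgeLevel(3, "Concept/Use",
                      "Comprehension / Translation / Use of Knowledge",
                      ("continued lab and project participation",
                       "presentations with explanations and demonstrations")),
    4: KnowledgeLevel(4, "Detailed Understanding and Application", "Application",
                      ("semi-structured team-oriented labs",
                       "student-generated solutions, presented and explained")),
    5: KnowledgeLevel(5, "Advanced", "Analysis / Synthesis / Evaluation",
                      ("de novo problem solving",
                       "evaluation of proposed new knowledge")),
}

def activities_for(depth: int) -> Tuple[str, ...]:
    """Return the learning activities associated with a target depth level."""
    return DEPTH_OF_KNOWLEDGE[depth].activities

# Example: most topic areas for IS majors target Levels 3-4, whereas Level 1
# (awareness) may suffice for non-majors.
print(activities_for(4))
```

Such a representation simply restates the metric; its value lies in making the target levels for different student groups (majors, minors, all other students) explicit when planning learning activities.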
A review of each of the three major subject areas and the corresponding depth of knowledge indicators (Davis et al., 1997) shows that, with the exception of a few IT management topics (e.g. management of the IS function, information resource management), it is desirable that IS majors achieve usage and application levels of competence across most topic areas (e.g. computer and IS literacy, IS planning, software development, project management, networks, team and interpersonal skills). Hence, for IS education to be congruous with its higher-order goals, curriculum objectives, teaching/learning strategies and assessment must align with these goals. This suggests that when it comes to the lesser-researched area of IS assessment in particular, IS educators must be able to select and administer appropriate assessments. The choice between tests/exams (e.g. multi-choice, short-answer or long-answer format) and assignments (e.g. presentations, practicum, individual or group research projects, case study analysis, critical incidents or portfolio assessment), and their particular format, will depend on the purpose of the assessment (e.g. formative or summative) and the desired outcome/knowledge level.

4. THE ROLE OF ASSESSMENT IN PROMOTING LEARNING

Good assessment of students' knowledge and skills is central to the process of learning. Brown (1999) argues that designing a "fit for purpose" assessment strategy is the single most useful thing teachers can do to positively influence teaching and learning. Biggs (2001) describes the effect of assessment on student learning as "backwash": students concentrate first on the assessment, then learn accordingly, and finally achieve the outcomes that teachers are trying to impart. If the assessment activities match the teaching objectives, the backwash is positive. However, if the assessment does not fit the objectives, the backwash will encourage students to use surface approaches to learning. There is therefore a direct link between the task which students are expected to perform and the strategies students adopt when organizing their studies (Miller et al., 1998). For example, assessment tasks that test independent facts (e.g. some types of short-answer questions and multi-choice questions), or that encourage students to think that factual recall is adequate, tend to encourage memorization-related activities. It is therefore essential that assessment is developed to match the expected outcomes. Research indicates that the format and quality of assessment have a direct effect on learning outcomes (Miller et al., 1998) (Fig. 1).

Fig. 1. The role of assessment in the context of learning

Assessment can be used to encourage students to adopt particular approaches to their learning. An approach to learning refers to the way in which a student organizes a learning activity in response to an assigned task. Extensive existing research confirms a positive relationship between the adopted approach and the learning outcome (Rowe, 2002). Three main categories of learning approaches are recognized in the education literature: deep, surface and strategic (Rowe, 2002) (Table 2). Of the three, the deep approach is viewed by educators as most desirable since it encourages understanding and a higher level of learning outcomes, while the surface and strategic approaches are considered undesirable. It is important to note that the approach to study is not a characteristic inherent to individual students.
The same student may apply a deep or surface approach depending on the learning environment and assessment strategies, which directly impact the choice of a learning approach (Lundberg, 2004). Entwistle (2001) argues that deep learning can be promoted by using tasks to develop and demonstrate understanding, assessment techniques that assess understanding (such as open-ended questions), and qualitative grading in relation to levels of understanding.

Table 2. Learning Approaches

Deep: Goal to understand; enthusiastic interaction with content; relating new ideas to previous knowledge; relating evidence to conclusions; examining the logic of the argument.

Surface: Goal to complete task requirements; treating the task as an external burden; unreflectiveness about purposes or strategies; focus on discrete elements without integration; failure to distinguish principles from examples; memorizing information for assessments.

Strategic: Goal to obtain the highest possible grades; targeting work to the perceived preferences of the teacher; awareness of marking schemes; systematic use of previous papers in revision; organizing time and effort to greatest effect.

While this impact of assessment on approaches to learning is widely recognized, there is a scarcity of research linking individual assessment activities to learning outcomes (Lundberg, 2004). Many statements related to the suitability of assessment types are not supported by empirical research, and some findings are contradictory (Brown, 1999; Lundberg, 2004). In the context of IS education, researchers have been attentive to teaching/learning mechanisms and their impact on student learning. However, IS researchers, as well as IS curricula developers, have largely ignored (or not reported on) the assessment dimension of these teaching/learning tools or other mechanisms. The following section discusses and synthesizes previous studies in both education and Information Systems (IS).

5. LEARNING OUTCOMES, APPROACHES TO LEARNING, AND CORRESPONDING ASSESSMENT ACTIVITIES

There is general agreement in the literature that assessment techniques that encourage students to think independently promote a deep approach to learning, while assessment requiring reproduction of information encourages surface approaches (Entwistle, 2001; Biggs, 2003). Multiple choice (MC) questions require low-level cognitive processes and encourage students to employ surface approaches to learning (Scouller, 1998). In addition, Scouller found that deep approaches were negatively related to MC test performance. While it is possible to write MC questions that test understanding, the majority of tests require only factual knowledge. Kuechler and Simkin (2003) examined how well multi-choice tests and constructed-response tests assess student performance in computer programming classes. This study included a mix of differentially constructed questions that ranged in quality and composition, some of which may have the potential to assess understanding at different levels. However, the results were not linked to desired knowledge levels. One of the studies cited in Entwistle's (2001) paper on deep learning concluded that most academics do not have the expertise to develop MC questions that test higher levels of knowledge. Test banks provided by educational vendors also tend either to be targeted at assessing software package skills and IS literacy skills (e.g. McDonald, 2004) or to be more appropriate for assessing lower-order knowledge levels.
However, given the large size of IS undergraduate courses, the MC technique is widely used, in contradiction to the teaching/learning objectives set by the curricula.

The few studies on the effect of traditional exams on student learning show that conventional exams often do not support deep learning. Rowe (2002) studied the approaches to learning of first-year engineering students. The study found no positive association between the deep approach and the final grade, and no negative association between the memorization approach and the final grade. These results demonstrate that students adopting the deep approach to learning were not rewarded by the assessment. Lundberg (2004) argues that most written final exams favour surface learning strategies. The author supports this opinion by noting that it is difficult to construct written exam questions that are easy to assess, can be answered by students in a limited time, measure more than detailed knowledge, promote learning during the exam itself, and prevent students from postponing their efforts until the end of the study period.

Imrie (1995) argues that one way to discourage memorization and surface learning in final exams is to allow students to take material into examinations. The author found that open-book exams discouraged rote learning and allowed students to show their understanding and to be creative. Brown (1999) concurs that open-book exams reduce the reliance on rote learning and can test what students can do with the information. However, Biggs (2003) associates open-book exams with the same level of learning as traditional exams, except that open-book exams require less memorization. This contradiction may be explained if we consider the nature of the questions included in an exam paper rather than the type of exam. Most open-book exams exclude pure regurgitation and include analysis and application questions. There is no reason why such questions cannot be included in closed-book exams as well. In addition to open-book exams, Brown (1999) suggests case studies where the case material is provided before or during the exam. Such case-based exams enable synthesis, analysis and evaluation. Another alternative is take-away papers, which allow students to research a topic and produce work that is not time-constrained.

Biggs (2003) cites a study comparing how physiotherapy students prepared for a short essay examination and an assignment. The exam elicited memorization-related activities and the assignment application-related activities. The teacher concluded that the assignment better supported the desired course objectives but lacked the breadth of the exam; therefore, both forms were adopted. The trade-off between breadth and depth is also demonstrated by a change in the assessment structure of a civil engineering course (Lundberg, 2004). It was found that the use of extensive assignments and an oral group exam improved student learning. However, all teachers expressed concern about a decrease in the breadth of students' knowledge. These results indicate the need for a balanced mix of assessments to achieve a satisfactory level of depth and breadth of student learning. The argument for a balanced portfolio approach is further supported by a qualitative study showing that it is not always easy to distinguish between memorizing and understanding (Entwistle & Entwistle, 2003).
The analysis showed that a deep learning approach can involve some rote memorization, while a surface approach at university level will include some understanding. The findings of the above literature review with respect to the relationships between types of assessment and levels of knowledge are summarised in Table 3. The table also includes examples of the wording of assessment tasks which may elicit different levels of learning (Imrie, 1995).

In conclusion, the preceding discussion supports the argument for greater diversity in assessment. Each assessment type offers advantages and disadvantages in terms of its support of student learning. The findings also demonstrate that it is not necessarily the type of assessment but its content that promotes deep learning. For example, more scenario-based and problem-solving questions can be included in traditional exam formats. While research indicates that certain types of assessment promote higher levels of knowledge, a perfunctory categorization of assessment types and corresponding levels of knowledge is too simplistic. The next section discusses the need for an overall assessment strategy at a discipline level and proposes a tool for the planning and evaluation of assessment portfolios.

6. STRATEGIC PLANNING AND DEVELOPMENT OF ASSESSMENT PORTFOLIOS

To ensure the fit between educational objectives and outcomes, current research advocates the need for strategic evaluation and development of assessment practices (Miller et al., 1998; Gibbs, 1999). The previous discussion concluded that there is a need for diversity in the assessment methods applied in IS courses. Mutch (2002) argues that without some strategic direction, such a trend towards diversity will reinforce fragmentation of the student experience. Issues of progression between programme levels (e.g. introductory vs advanced courses), and consistency across levels, also become more important when students encounter unfamiliar forms of assessment.

Table 3. IS Knowledge Levels and Associated Assessment Activities

Level 1 - Awareness: Introductory recall and recognition
Bloom's level: Knowledge (Recognition)
Associated assessment tasks and wording: Short answer and structured questions, MC questions. Wording: name, define, list, select, state, identify, describe, reproduce, tabulate.

Level 2 - Literacy: Knowledge of framework and contents, differential knowledge
Bloom's level: Knowledge (Differentiation in context)
Associated assessment tasks and wording: Short answer and structured questions, MC questions. Wording: (as above).

Level 3 - Concept/Use: Comprehension and ability to use knowledge when asked
Bloom's level: Comprehension / Translation / Extrapolation / Use of Knowledge
Associated assessment tasks and wording: Wording: explain, summarise, interpret, give examples, compare (simple), contrast (simple), infer, rewrite, defend, illustrate.

Level 4 - Detailed Understanding and Application: Selection of the right thing and using it without hints
Bloom's level: Application (applies concepts, rules and principles to a new situation)
Associated assessment tasks and wording: Applications to new contexts (scenario-based questions), problem-solving questions, project assignments. Wording: apply, modify, predict, demonstrate, find, solve, discover.

Level 5 - Advanced: Identification, use and evaluation of new knowledge
Bloom's level: Analysis (recognizes unstated assumptions, argues logically, distinguishes between facts and inferences)
Associated assessment tasks and wording: Supply questions, open-book exams, case studies. Wording: analyse, distinguish, relate, discriminate, separate, deduce, classify.
Bloom's level: Synthesis (writes a well-organized theme or creative story; combines information from different sources to solve problems; devises a new taxonomy)
Associated assessment tasks and wording: Case studies, take-away papers. Wording: devise, design, plan, reorganize, rearrange, create, combine, generate, solve, invent, compose.
Bloom's level: Evaluation (judges whether conclusions are supported by data; uses criteria to judge the value of a work)
Associated assessment tasks and wording: Case studies, take-away papers, project assignments. Wording: compare (complex), contrast (complex), justify, appraise, criticize, determine, draw conclusions.
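As one hypothetical way to operationalise the mapping in Table 3 when auditing a course's assessment mix, a portfolio can be checked against the knowledge levels targeted for IS majors and any uncovered levels flagged. The sketch below is illustrative only; the assessment-to-level mapping and all names are simplified assumptions rather than prescriptions from the curriculum models.

```python
# Hypothetical sketch: check an assessment portfolio against target knowledge
# levels, using a simplified Table 3-style mapping from assessment type to the
# deepest knowledge level it typically supports. All values are assumptions
# made for illustration, not prescriptions from the IS curriculum models.
ASSESSMENT_LEVELS = {
    "multiple choice test": 2,          # awareness / literacy
    "short answer test": 2,
    "scenario-based exam question": 4,  # application to new contexts
    "problem-solving exercise": 4,
    "project assignment": 4,
    "open-book case exam": 5,
    "take-away paper": 5,
}

def uncovered_levels(portfolio, target_levels):
    """Return target knowledge levels not supported by any assessment in the portfolio."""
    covered = set()
    for assessment in portfolio:
        deepest = ASSESSMENT_LEVELS.get(assessment, 1)
        # Simplifying assumption: a task at a given depth also exercises shallower levels.
        covered.update(range(1, deepest + 1))
    return sorted(set(target_levels) - covered)

# Example: a course assessed only by multiple choice and short answer tests,
# audited against the usage/application levels expected of IS majors.
portfolio = ["multiple choice test", "short answer test"]
print(uncovered_levels(portfolio, target_levels={3, 4}))  # -> [3, 4]
```

In practice, of course, the level a task supports depends on the wording and content of its questions as much as on the assessment type, as discussed above.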
Mutch (2002) presents four levels of assessment strategies: institution, faculty, programme and module. In the context of Mutch's definitions, this study considers strategies and procedures at the programme and module levels. While definitions and justifications of strategies are useful, it is the objective of this paper to present some practical guidelines for IS assessment planning and evaluation. Based on the preceding analysis of the literature, the following briefly outlines a four-stage process for the development of IS assessment portfolios.

Stage 1. Evaluation of the current situation. This stage evaluates the role of assessment in student learning, its fit with learning objectives, student performance, the quality of assessment descriptions, evaluation and remediation. The outcome of the evaluation should identify strengths, weaknesses, opportunities and threats associated with the current programmes, and priorities for improvement.

Stage 2. Strategic planning for each level. The purpose of this stage is to plan a balanced portfolio of assessment types to support the desired learning outcomes identified in the previous stage. Current assessment offerings can be evaluated in terms of the knowledge level and learning approach that they support (Fig. 2). This process will identify specific directions for improving the balance of assessment types.

Fig. 2. Framework for planning and evaluation of assessment

Stage 3. Planning for individual courses/modules. Based on directions from the previous stages, the purpose of this stage is to plan individual course assessment portfolios. Table 3 can be used to support the selection of appropriate assessment tasks and their wording.

Stage 4. Development of assessments and changes to current assessment portfolios to fit the learning strategy. Table 3 can also be used to assist the detailed development of particular assessment tasks.

7. CONCLUSIONS AND FUTURE DIRECTIONS

Current research acknowledges assessment as a major factor influencing teaching and learning. Changes to assessment tasks can significantly alter student learning behaviour and the consequent learning outcomes in terms of depth of knowledge. However, there is a scarcity of research into the effect of individual types of assessment on learning outcomes. The current literature indicates that assessment to support learning has been neglected (Biggs, 2003; Lundberg, 2004; Mutch, 2002). Model IS curricula recognize the importance of learning outcomes at all knowledge levels. While IS researchers have concentrated extensively on teaching strategies and their impact on student learning, they have largely ignored assessment as one of the key components of learning environments. This paper builds on established cognitive frameworks in education and IS education to include assessment guidelines. It defines the role of assessment by its relationship to learning outcomes (knowledge levels) and student approaches to learning.
Based on an extensive analysis of the literature, it argues the need for a balanced portfolio approach to assessment and discusses and justifies the impact of individual types of assessment on learning outcomes and knowledge levels. This provides a framework for planning assessment at the practice level. The education literature recognizes the need for strategic direction for assessment at a level higher than the module in order to ensure consistency and progression. This paper presents an assessment development process and a portfolio framework for the planning and evaluation of assessment. It therefore provides tools to evaluate current assessment offerings and the degree to which they support desired learning outcomes for IS graduates. The tools also support strategic planning for assessment improvement.

A review of the current body of education research has identified a number of opportunities for future work. First, there is a need to investigate and test the links between assessment and learning outcomes in the IS context. There is also an urgent need to explore and validate the effect of different forms of assessment on students' approaches to study within the context of IS. Studies also indicate that learning outcomes are affected not only by student learning approaches and assessment but also by institutional policies (Entwistle, 2001; Rowe, 2002). Workload and lack of resources have been noted to inhibit the development of advanced assessments, but there is no empirical research which identifies and supports such inhibiting factors (Mutch, 2002; Lundberg, 2004). There is therefore a need to investigate the effect of workload, institutional policies, and reward schemes on assessment development. In addition, research suggests that individual disciplines impose specific demands on student learning (Rowe, 2002). Deep or surface orientations may be positive or negative depending on the subject area. For example, deep orientation was found to be unrelated to academic progression in mathematics (Heywood, 2000). Therefore, IS academics need to investigate the specific norms and practices associated with our discipline.

8. REFERENCES

Ashenhurst, R. L. (1972). Curriculum Recommendations for Graduate Professional Programs in Information Systems. Communications of the ACM, 15(5), 363.

Biggs, J. (2001). Assessing for Quality in Learning. In L. Suskie (Ed.), Assessment to Promote Deep Learning: Insight from AAHE's 2000 and 1999 Assessment Conferences. American Association for Higher Education, pp. 65-68.

Biggs, J. (2003). Teaching for Quality Learning at University: What the Student Does. Buckingham: Society for Research into Higher Education (SRHE) and Open University Press.

Brown, S. (1999). Institutional Strategies for Assessment. In S. Brown and A. Glasner (Eds.), Assessment Matters in Higher Education: Choosing and Using Diverse Approaches. Buckingham: Open University Press.

Brown, S., and Knight, P. (1994). Assessing Learners in Higher Education. London: Kogan Page.

Bryant, K., Campbell, J., and Kerr, D. (2003). Impact of Web based flexible learning on academic performance in information systems. Journal of Information Systems Education, 14(1), 41-50.

Chalmers, D., and Fuller, R. (1996). Teaching for Learning at University: Theory and Practice. London: Kogan Page.

Cragg, P. B. (1998). Site Visits as a Teaching Method in Information Systems Courses.
Proceedings of the International Academy for Information Management (IAIM) Annual Conference, Helsinki, Finland.

Davis, G. B., Gorgone, J. T., et al. (1997). IS '97 Model Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems. Data Base.

Entwistle, N. (2001). Promoting Deep Learning through Teaching and Assessment. In L. Suskie (Ed.), Assessment to Promote Deep Learning: Insight from AAHE's 2000 and 1999 Assessment Conferences. American Association for Higher Education, pp. 9-20.

Entwistle, N., and Entwistle, D. (2003). Preparing for Examinations: The Interplay of Memorising and Understanding, and the Development of Knowledge Objects. Higher Education Research and Development, 22, 19-42.

Freeman, L. A. (2003). Simulation and role playing with LEGO(R) blocks. Journal of Information Systems Education, 14(2), 137-144.

Gibbs, G. (1999). Using Assessment Strategically to Change the Way Students Learn. In S. Brown and A. Glasner (Eds.), Assessment Matters in Higher Education. Buckingham: Open University Press, pp. 41-53.

Gorgone, J. T., Davis, G. B., et al. (2003). IS 2002: Model Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems. The DATA BASE for Advances in Information Systems, 34(1), 1-52.

Gupta, J. N. D., and Watcher, R. M. (1998). A Capstone Course in the Information Systems Curriculum. International Journal of Information Management, 18(6), 427-441.

Heywood, J. (2000). Assessment in Higher Education. London: Jessica Kingsley Publishers.

Imrie, B. (1995). Assessment for Learning: Quality and Taxonomies. Assessment and Evaluation in Higher Education, 20, 175-189.

Kuechler, W. L., and Simkin, M. G. (2003). How Well Do Multiple Choice Tests Evaluate Student Understanding in Computer Programming Classes? Journal of Information Systems Education, 14(4), 389-399.

Lee, D. M. S., Trauth, E. M., et al. (1995). Critical skills and knowledge requirements of IS professionals: A joint academic/industry investigation. MIS Quarterly, 19(3), 313-340.

Liebowitz, J., and Yaverbaum, G. J. (1998). Making learning fun: The use of web-based cases. The Journal of Computer Information Systems, 39(1), 14-29.

Lundberg, A. (2004). Student and Teacher Experiences of Assessing Different Levels of Understanding. Assessment and Evaluation in Higher Education, 29, 324-333.

McDonald, D. S. (2004). Computer Literacy Skills for Computer Information Systems Majors: A Case Study. Journal of Information Systems Education, 15(1), 9-33.

Miller, A., Bradford, I., and Cox, K. (1998). Student Assessment in Higher Education: A Handbook for Assessing Performance. London: Kogan Page.

Mukherjee, A. (2000). Effective use of in-class mini case analysis for discovery learning in an undergraduate MIS course. The Journal of Computer Information Systems, 40(3), 15-23.

Mukherjee, A. (2004). Promoting Higher Order Thinking in MIS/CIS Students Using Class Exercises. Journal of Information Systems Education, 15(2), 171-179.

Mutch, A. (2002). Thinking Strategically about Assessment. Assessment and Evaluation in Higher Education, 27, 163-174.

Nulden, U., and Scheepers, H. (2002). Increasing student interaction in learning activities: Using a simulation to learn about project failure and escalation. Journal of Information Systems Education, 12(4), 223-232.

Nunamaker, J. F., and Konsynski, B. (1982).
MIS education in the U.S.: Private sector experiences and public sector needs. Computers, Environment and Urban Systems, 7(1-2), 129-139.

Richards, M., and Pelley, L. (1994). The ten most valuable components of an information systems education. Information & Management, 27(1), 59-68.

Rowe, J. W. K. (2002). First Year Engineering Students' Approaches to Study. International Journal of Electrical Engineering Education, 39, 201-209.

Scouller, K. (1998). The Influence of Assessment Method on Students' Learning Approaches: Multiple Choice Question Examination Versus Assignment Essay. Higher Education, 34, 105-122.

Tang, H. L., Lee, S., and Koh, S. (2000/2001). Educational gaps as perceived by IS educators: A survey of knowledge and skill requirements. The Journal of Computer Information Systems, 76-84.

Wang, Y. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information & Management, 41(1), 75-86.

Wehrs, W. (2002). An assessment of the effectiveness of cooperative learning in introductory information systems. Journal of Information Systems Education, 13(1), 37-49.

Work, B. (1997). Some Reflections on Information Systems Curricula. In J. Mingers and F. A. Stowell (Eds.), Information Systems: An Emerging Discipline? London: McGraw-Hill, 329-359.

Yen, D. C., Chen, H.-G., et al. (2003). Differences in perception of IS knowledge and skills between academia and industry: Findings from Taiwan. International Journal of Information Management, 23(6), 507-522.