Course Evaluation in Sweden – When, How, What and Why

Stefan Cronholm
stefan.cronholm@liu.se
Dept. of Management and Engineering
Linköping University
Linköping, 583 81 Sweden

Abstract

This study is about course evaluation in Swedish higher education. Performing course evaluation is regulated in Swedish law. Despite this, only half of the courses are evaluated. The aim of this study is to understand why satisfactory course evaluations are not performed. Problems are identified from a student perspective, and the paper provides proposals for reducing them. In order to tackle the problems, an evaluation process model consisting of five phases is proposed. A main message is that there is a need for increased governance from the university's management levels in order to revise the incentives for performing satisfactory course evaluation.

Keywords: course evaluation, motivation, evaluation process model

INTRODUCTION

The problem we are facing is that course evaluation is not performed, or is performed in an unsatisfactory way. The following problems have been identified:

* The number of students evaluating courses is too small (approx. 35%).
* The number of courses evaluated is too small (approx. 50%).
* The teachers are not encouraging students to perform course evaluation.
* The students seem to be unmotivated.

In Sweden, performing course evaluation is compulsory; it is not optional. It is prescribed in the "Regulation of Higher Education" (in Swedish: Högskoleförordningen). The regulation reads: "The university/college shall offer students possibilities to express their experiences and comments through a course evaluation that is organized by the university/college" (our translation) (SFS, 1993). The university's action plan for quality work includes the following text: "course evaluation shall work as a quality instrument and contribute to the quality of the education" (Faculty of Arts & Science, 2007). This statement makes it clear that course evaluation is part of the university strategy.

It is remarkable that, although course evaluation is regulated in law and is part of the university's strategy, only half of the courses are evaluated. The obvious question is why? At first sight, course evaluation seems to be perceived as something redundant, and one possible reason is that there are few or no incentives for performing the evaluation. The aim of this paper is to answer the question of why satisfactory course evaluation is not performed and to suggest proposals that could support the evaluation process. An evaluation process model is proposed.

The purpose of performing course evaluation is obvious: the results should serve as a basis for a possible redesign of the courses. A quote from a student reads: "Why should I evaluate the course? The results are not used anyway". This quote can be true, false or something in between. If evaluation results are used to improve courses, could the low interest in course evaluation be related to the fact that evaluation results and redesign proposals are invisible to students? In other words, is feedback from the evaluation missing? Another related problem is who is responsible for organizing possibilities to participate in course evaluations. According to the "Regulation of Higher Education", the Head of School is responsible. In practice, this responsibility is delegated to the director of studies.
This makes the question of "why" even more interesting, since there is no doubt that course evaluation is compulsory and no doubt where the responsibility lies. This paper takes a student perspective, since it is the students who perform the evaluations and it is the students who primarily benefit from the results. Taking a student perspective means that the students' opinions are identified and analyzed. Course evaluation is viewed as an important instrument for students to maintain their influence on how their education is designed. The character of the knowledge sought can be seen as both explanative and normative. The aim of the question "Why is course evaluation not performed?" is to identify explanative knowledge, while the aim of the proposals is of a normative character. The evaluation process of the subject information systems has been studied at one Swedish university. Possible generalisations are discussed in the concluding section.

This introductory section is followed by section 2, the research approach. Section 3 describes the current evaluation process and section 4 contains a discussion of relevant theories. After that, the findings are presented in section 5 and, finally, conclusions are drawn in section 6.

2. RESEARCH APPROACH

The research approach can be characterized as abductive (Peirce, 1931-35; Alvesson & Sköldberg, 1999) and qualitative (Glaser & Strauss, 1967; Strauss & Corbin, 1998; Strauss & Corbin, 2007). Abductive means that the research process has been both inductive and deductive: inductive in the sense that problems concerning course evaluation have been induced from empirical data, and deductive in the sense that existing theory has been used for comparison with the induced data and as inspiration for generating proposals for problem reduction. There has been a continuous shift between empirical data and existing relevant theories (see figure 1).

Figure 1. The research process

In order to answer the research question, data have been gathered through interviews with students, studies of reports concerning course evaluation (Stake, 1993) and general theories about evaluation (Scriven, 1967; Scriven, 1972; Rutman, 1980; Remenyi & Sherwood-Smith, 1999; Love, 1991; Walsham, 1993). Altogether, six interviews were conducted with students representing different study levels and genders. The interviews were semi-structured (Patton, 1990); that is, their character was more like a conversation than formally structured interviews. A predefined order of questions was not followed; rather, there was an openness and sensitivity to the students' opinions. Based on the results of the interviews, several problems have been identified. The problems have been related to each other in terms of cause and effect by using problem diagrams (Goldkuhl & Röstlinger, 2003). Finally, an evaluation model is proposed in order to reduce the problems identified. The proposals are based on the interviews with the students and on theories about evaluation. The deficiencies in current course evaluation approaches can be seen as anomalies that need attention. In this way, the knowledge contribution can be viewed as cumulative; that is, the evaluation process model developed is built on existing knowledge/theory, where the 'good' parts are preserved and new proposals are added.

We have identified four groups that could benefit from the results of this paper.
The first group is, of course, the students themselves. The students want to take courses that are of high quality. The second group is the program managers. A program manager is responsible for a study program and thereby has an interest in the quality of courses. The third group is the teachers. The outcome of course evaluations normally provides excellent feedback for improving and developing courses. The fourth group is the university level. The university is legally responsible for how course evaluations are performed. A working course evaluation process, with an acceptable number of course evaluators, ensures that the university fulfils the requirements of the regulation.

3. BRIEF DESCRIPTION OF THE CURRENT EVALUATION PROCESS

The evaluation process consists of two phases: one phase where a questionnaire is constructed by the teacher, and one phase where the questions are answered by the students. The questionnaire has to include 10 compulsory questions provided by the principal. These questions are standard questions and are used for comparisons between courses. In addition to the principal's questions, the teacher adds an arbitrary number of course-specific questions. A questionnaire normally consists of 40-50 questions. Examples of questions are: is the aim of the course fulfilled, did you get informative course information, did you benefit from the course literature, are you happy with the teachers' pedagogy, did the examination correspond to the content of the course, and how much effort did you put in? Answers can be expressed both qualitatively and quantitatively.

When the questionnaire is constructed, the students receive an e-mail stating that the questionnaire is electronically available, and after a week a reminder is sent out. The process of construction and answering is completely computer supported. Using a computer-supported evaluation process is a directive from the principal. Of course, paper-based evaluation can be used, but only as a complement. Still, in some courses paper-based evaluation is used as the only medium. In that case, the teacher provides the students with a paper-based questionnaire at the end of the course, normally at the end of the last lecture. There are also cases where course evaluation is not performed at all.
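To make the two-phase process above concrete, the sketch below models it as a simple data structure. This is an illustration only, not a description of the system actually used at the studied university; all names are hypothetical, while the ten compulsory questions, the teacher-added questions and the one-week reminder follow the description above.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

@dataclass
class Question:
    text: str
    source: str         # "principal" (compulsory standard) or "course" (teacher-added)
    quantitative: bool  # predefined scale vs. free-text answer

@dataclass
class Questionnaire:
    course: str
    standard: List[Question] = field(default_factory=list)  # the 10 compulsory questions
    specific: List[Question] = field(default_factory=list)  # arbitrary number, added by the teacher

    def open_for_answers(self, today: date) -> dict:
        """Phase two: students are notified by e-mail and reminded after a week."""
        assert len(self.standard) == 10, "the principal's 10 standard questions are compulsory"
        return {
            "course": self.course,
            "notify_on": today,
            "remind_on": today + timedelta(weeks=1),
            "question_count": len(self.standard) + len(self.specific),  # typically 40-50
        }
```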
4. RELATED THEORIES

Course evaluation is one type of evaluation; therefore, it should be possible to benefit from general theories about evaluation. Evaluation is defined as the "collection and use of information to make decisions about educational programs" (Cronbach, 1963). According to Cronbach et al. (1980), evaluation should be recognized not only as a service to central decision makers, but should help everyone in a pluralistic society understand what programs accomplish and why they fall short of their objectives. These claims, formulated as general statements about evaluation, are valid for course evaluation as well.

According to several theories about evaluation, motivation seems to be a key concept. That is, people who are doing evaluation have to be motivated; they need incentives for participating in evaluations. One of the best incentives is a visible correlation between the evaluators' opinions and the actions taken. According to Heron & Reason (2001), people who are affected by changes should always have the opportunity to participate in evaluation or change work. Dwyer (2008) claims that motivating people is a myth and, furthermore, that people cannot be motivated by others. The claim is that motivation comes from within; leaders can only provide conditions for motivation, such as setting up an environment that stimulates it. The question "What's in it for me?" is critical for individuals to be motivated (ibid.). This question is clearly valid concerning the students' motivation for performing course evaluation.

Maslow's (1943) conceptualization of a hierarchy of human needs is often used as a basis for understanding motives. The hierarchy consists of the levels physiological needs, safety needs, belonging needs, esteem needs and the need for self-actualisation. Physiological needs are the very basic needs such as air, water, food, sleep, shelter, etc. Safety needs have to do with personal safety and security, including job security. Belongingness is the desire to belong to groups: clubs, work groups, religious groups, family, gangs, etc. There are two types of esteem needs: first, self-esteem, which results from competence or mastery of a task; second, the attention and recognition that come from others. The need for self-actualisation is "the desire to become everything that one is capable of becoming". The aim of course evaluation is to provide input for improving the study environment and the content of courses. This can be seen as a need that exists on the higher levels: esteem and self-actualisation.

Herzberg et al. (1959) discuss motivation in terms of hygiene factors. They claim that if all the hygiene factors are taken care of, you have created an environment that motivates people. In other words, the hygiene factors stop people from being unmotivated. The hygiene factors included in a job environment encompass the company, its policies and its administration, the kind of supervision which people receive while on the job, working conditions, interpersonal relations, salary, status and security. According to Herzberg et al. (1959), these factors do not lead to higher levels of motivation, but without them there will be dissatisfaction. The hygiene factors mentioned could easily be transferred to the context of course evaluation.

A well-known "prescription" for increasing motivation is to provide feedback. Feedback means informing people about the consequences of their actions. The aim of providing feedback is to inform people of how they are doing in relation to a specific goal (Stephen, 2002). Moreover, it is important to provide feedback that is timely, effective and appropriate. Feedback should be delivered in a way that does not make people defensive. Rather, feedback should engage people to perform actions leading to specific goals. When is it timely? When is it appropriate to give feedback? Stephen (2002) claims that feedback can be given at any time you want to improve a performance.

According to Deci (1992), true motivation is based on a genuine interest. Deci claims that there are three levels that affect people's behaviour: 1) embedded regulation, where the only reason for performing a task is that you have to do it, for example when a teacher demands that a student performs a task; at this level there is no free will involved, and the student is governed by the teacher's reactions. 2) Identified regulation, where you are no longer thinking of other people's demands; the demands are adopted as your own, and the tasks are thereby performed of free will and not seen as coercion. 3) Integrated regulation, where the demands are incorporated into your existing value system.
If external demands are in line with existing values, the motivation for performing a task will be reinforced.

All the theories discussed above indicate that it is hard to develop an environment that improves people's motivation. The reason is that there is no "one size fits all" solution, since every course evaluator (student) is an individual. In order to increase the students' motivation, it seems important to identify existing problems and to identify incentives for increasing the motivation.

5. FINDINGS – AIMS AND PROBLEMS

Aims identified – Why course evaluation?

Three different aims have been identified. The aims are related to three significant roles involved in ordering, giving and taking courses: program managers (faculty), study directors/teachers (department) and students.

Aim 1 – Provide an instrument for presenting experiences of courses taken. The first aim is related to the students, the course participants. The aim is to provide an instrument for students to present their opinions of and experiences with courses taken. This opportunity is a right of co-determination and is regulated in Swedish law. The instrument is used for presenting positive or negative opinions. Moreover, several students prefer to view the relation between the university and themselves as a business relation: the university is the supplier of knowledge and the students are the customers. A course is therefore viewed as a product (a package of knowledge). By using the instrument of course evaluation, students can present opinions that improve the quality of the product.

Aim 2 – Provide feedback to teachers and study directors (course suppliers). The second aim is related to the course suppliers. The aim is to provide feedback to teachers and study directors that can be used for implementing course improvements. The question "how did the students perceive the course?" is important for teachers in order to propose improvements.

Aim 3 – Provide feedback to the university administration. The third aim is to provide feedback to program managers and the faculty level. At the studied university, program managers order and pay for courses delivered by the study directors' departments. The program manager is interested in the results of course evaluations in order to see whether the program got what was paid for.

Problems identified – How, When and What

Altogether, nine problems have been identified. The problems are of different character, but they are all related to the conditions of the evaluation process, to the evaluation process itself or to the results of the evaluation process.

Problem 1 – The regulation is not detailed enough. According to the students, courses are in several cases evaluated in an unacceptable way or not evaluated at all. The students think that it is good that course evaluation is regulated in law, but they would like the law to be more precise and detailed about how course evaluation ought to be performed. They perceive the law text as too general, leaving too many possible ways of conducting course evaluation.

Problem 2 – Insufficient feedback. Most of the students think that the feedback is insufficient. They are not satisfied with information that is too aggregated. The students also think that the results of the evaluation are hard to find.

Problem 3 – Low engagement by students. The results of the interviews divide the students into three categories.
First, there are engaged students who present opinions in order to improve courses. Second, there are students who fill in the evaluation form as a matter of routine and in an unreflective way. Third, there are students who are not interested at all and do not participate in course evaluation. Several students seem to be aware that course evaluation is something that should be done, but far from all are aware of why. The students blame laziness for not filling in the questionnaire.

Problem 4 – Performing course evaluation is too time-consuming. The students think that it takes a long time to answer the questionnaire. There are too many questions, and several of the questions seem unimportant. The reason why there is a large number of questions is that the answers are used on different levels of the university hierarchy. There are questions asked by the university level, questions asked by the faculty level and questions asked on the course level. All together, over 50 questions can be asked.

Problem 5 – Teachers are not receptive to criticism. Several students question whether course evaluation will have an impact on courses at all. These students perceived some of the teachers as not receptive to criticism. Although negative comments were brought forward, there were no visible reactions from the teachers. This behaviour made the students upset and reinforced their understanding that course evaluation is meaningless.

Problem 6 – The students will not benefit from the results. The students' understanding is that the result of the evaluation will not influence their own education. They have already finished the course, so why should they care? The argument that they should participate in course evaluation since their predecessors did so for their sake is not powerful enough.

Problem 7 – The questionnaires are not distributed adjacent to the course end. Sometimes questionnaires are distributed weeks or even months after a course is finished. There is a risk that the students forget their experiences, or that their engagement is low, when a long time has passed between the course end and the evaluation.

Problem 8 – Measures corresponding to the identified problems are not developed. The identified problems are not always transferred into measures to take. Problems have been identified, but it happens that no improvements are implemented. It seems that the teachers are in charge of deciding whether a problem should be taken care of or not. The students do not participate in this process, and thereby this part of the course evaluation process is out of their control.

Problem 9 – The outcome of course evaluations is not easily accessible. The students claim that previous course evaluations are not easy to access. They could be stored on the teacher's local computer, in the teacher's bookshelf or in a central IT system. If students are not informed of the results of the course evaluations, their motivation for doing course evaluation will decrease.

Problem relations

In order to be able to suggest proposals for the problems identified, a more thorough problem analysis is needed. According to Goldkuhl & Röstlinger (2003), problem analysis is an organizational problem-solving process: a process of defining problems and proposing solutions. This will not automatically lead to problem resolution (ibid.). Besides identifying problems and proposing solutions, change measures must be developed and implemented.
Moreover, it may be the case that not all problems can be eliminated. There are situations that one simply has to put up with; hopefully, there is a way to reduce the problems when they cannot be eliminated. One way to analyze problems more thoroughly is to use Change Analysis (Goldkuhl & Röstlinger, 2003). Change Analysis consists of a method component called problem diagrams, which aims to analyze problems in terms of cause and effect. This way of analyzing problems is also supported by Strauss & Corbin (1998, 2007).

In the problem diagram below, an analysis of problem causes and problem effects is documented. In order to support the reading of the diagram, some comments are provided. The problem "P1 Regulation is not detailed enough" is seen as a problem that causes several problem effects. The problem "P8 Problems are not transferred into measures to take" is viewed both as a problem cause and a problem effect: P8 is part of causing "P2 Insufficient feedback" and is also an effect, since it is caused by "P5 Teachers are not receptive". The problem "P3 Low engagement of students" is seen as the ultimate problem effect.

Figure 2. Relations between identified problems
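As an illustration of how such cause-effect relations can be represented and traversed, the sketch below encodes only the edges spelled out in the comments above; the full diagram in Figure 2 contains more relations. The code is a hypothetical sketch, not part of the Change Analysis method itself.

```python
# Cause -> effects; only the relations explicitly mentioned in the text
# are encoded here, the full diagram (Figure 2) contains more edges.
CAUSES = {
    "P1 Regulation is not detailed enough": [],   # causes several effects (not named in the text)
    "P5 Teachers are not receptive": ["P8 Problems are not transferred into measures to take"],
    "P8 Problems are not transferred into measures to take": ["P2 Insufficient feedback"],
}

def downstream_effects(problem: str) -> set:
    """Transitively collect all documented effects of a problem."""
    seen, stack = set(), list(CAUSES.get(problem, []))
    while stack:
        effect = stack.pop()
        if effect not in seen:
            seen.add(effect)
            stack.extend(CAUSES.get(effect, []))
    return seen

# Example: P5 indirectly contributes to insufficient feedback (P2).
print(downstream_effects("P5 Teachers are not receptive"))
```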
Problem reduction

The proposals recommended in order to reduce the problems are based on the student interviews and on existing theories of evaluation and motivation (see section 4). In this way, the proposals are grounded both in empirical findings and in existing theory.

Problem 1 – The regulation is not detailed enough. Proposal: Inform the university administration that the students wish for more detailed regulations. Encourage the university administration to find out whether other (contradictory) opinions exist among other stakeholders (teachers, study directors). The students think that there is too much freedom and flexibility for the universities in planning and conducting course evaluation. In other words, there is a need for increased guidance and more detailed recommendations for how course evaluation ought to be performed. The Regulation of Higher Education should be seen as a framework (SFS, 1993); nothing hinders the universities from adding detailed recommendations as long as they fit into the framework. The students support that course evaluation is regulated in law, but they would like the law to be more precise about how course evaluation ought to be performed. There could be other opinions among other stakeholders (teachers, study directors, administrative staff) who prefer regulations to be more openly formulated. As a complement to this study, there is a need to identify whether other preferences exist in order to gain a richer basis for decision.

Problem 2 – Insufficient feedback. Proposal: Make feedback compulsory and compensate teachers. There is no (economic) incentive for teachers to provide feedback to students. Several teachers do as little as possible, since they are fully occupied with regular teaching. In order to get rid of this problem, teachers must be compensated both for offering course evaluation and for providing feedback to students. Providing feedback should not be an option; it should be compulsory. According to Nielsen (1993), feedback should be provided to people within reasonable time.

Problem 3 – Low engagement of students. Proposal: Inform the students of why course evaluation is important. First, all students should be informed about why course evaluation is important. There should be no doubt about why this opportunity could improve the education. According to Moxnes (1984), students may well be aware of the importance of course evaluation and still choose not to perform evaluations. Moxnes claims that every person has a "need of comfort". Engaging in course evaluation could mean that students have to leave their social and secure environment; presenting opinions in a course evaluation could upset the balance of the "need of comfort". Second, the engagement of the students will increase when they experience concrete results from performing course evaluation. The current situation can be compared to a vicious circle: no visible changes to courses will probably lead to a low degree of engagement, and a low degree of engagement will not create "good" proposals for course improvements. The idea is to move from a vicious circle to a virtuous one. The implementation of course improvements must be made visible to the students; improvements should be marketed, and students should be informed about the improvements made. If the students experience that teachers and study directors are listening to their opinions, it will increase their engagement and their motivation. This is in line with the claims of Heron & Reason (2001), who argue that a close dialogue and a broad participation are the key to increased engagement. Of course, there will never be 100% engaged students, but all students should at least be aware of when, where, how and why course evaluation is performed.

Problem 4 – Performing course evaluation is too time-consuming. Proposal: Reduce the number of questions asked. The students think that it takes a long time to answer the questionnaire. According to the students, an objective should be that the minutes spent on course evaluation equal the number of credits of the course; that is, a student taking a course of 10 credits should spend a maximum of 10 minutes on the course evaluation. The analysis of why so many questions are asked for each course shows that there are three different questioners: the university, the faculty and the teacher. The data gathering on the university and faculty levels seems to be too ambitious. Their interest is in data of a more general nature, not at the detailed course level. Therefore, it is not necessary that the questions asked by the university and by the faculty are included in every course evaluation; it should be sufficient to gather these data once or twice every semester. We are not saying that the questions asked by the university and the faculty level are unimportant; rather, our claim is that the frequency of data gathering is excessive. Following this proposal means that the number of questions can be substantially reduced.

Problem 5 – Teachers are not receptive to criticism. Proposal: Make course improvements transparent for students, and follow up on the implementation of proposals for course improvements. Several students question whether the results of course evaluation will improve the courses at all. They believe that some of the teachers are not receptive to criticism. A closer look revealed that most of the courses are in fact improved based on course evaluations. The problem seems to be that the improvements are not made visible to the students; this problem therefore overlaps the problem of feedback (problem 2). For a minor part of the courses, proposals have not been implemented. In these cases, there must be a follow-up by the study directors.

Problem 6 – The evaluating student will not benefit from the results.
Proposal: Clarify that the evaluating student can benefit from the results of the evaluation. A common misunderstanding is that the result of a course evaluation only affects the next version of the course. A course evaluation normally contains questions about the content, the teacher's pedagogical skill and the way the course is organized. That means that comments from students can be course-specific or of a more general nature. Specific comments about the course will hopefully have an impact on the next version of the course, but the general comments will also have an impact on other courses where the same teacher is involved. A teacher who is receptive to criticism will remove bad ideas and replace them with something better in all courses he or she is involved in. Often, students will meet the same teacher again in another course at the same or a higher level. Course evaluation is traditionally performed after the course is finished. Another possibility, which lets students benefit from the ongoing course, is to offer an opportunity to evaluate while the course is running by using muddy cards. An evaluation opportunity offered in the middle of the course will increase the motivation of the students (Kessler & Nadjm-Tehrani, 2002).

Problem 7 – The questionnaires are not distributed adjacent to the course end. Proposal: Make both teachers and students aware of the consequences of late distribution and of why the time limit is important to follow. This problem is of an administrative character and should be easy to get rid of. There should be a clear and agreed time limit for when course evaluations are to be distributed.

Problem 8 – Measures corresponding to the identified problems are not developed. Proposal: Invite student representatives to discuss improvements, and use written agreements. Students can be invited to discuss which measures to take in response to the problems identified. Opening up this part of the decision process in a democratic way is one way to increase the students' motivation. We are not saying that the decisions should be made democratically; the decisions should always be made by the university, since the university has to take responsibility for the consequences. We are proposing that the students can provide input to the decision process through creative ideas about how courses could be improved. They possess a unique knowledge as course participants; not drawing on their knowledge is a lost opportunity. Discussing different opportunities for improvement will also increase the students' understanding of why some proposals for improvement are less feasible and others more feasible. Advocates of close collaboration argue for a broad and genuine participation aiming at agreement (Heron & Reason, 2001; Carlshamre, 1994). We also propose that the outcome of the discussion be documented. A written document can be seen as a memory of the decisions made. This document should then be handed over to the students of the next year; its aim is to serve as a basis for follow-up activities.

Problem 9 – The outcome of course evaluations is not easy to access. Proposal: Make outcomes of course evaluations available from the web site. The students claim that previous course evaluations are not easy to access. They are also self-critical and admit that they could put more effort into searching for course evaluations. Nowadays, almost every course has a web site where information about the course can be found.
A simple proposal to reduce this problem is to make the outcome of the course evaluation available from this web site. Together with the presentation of the results, newly implemented improvements could be highlighted. Simple actions like this will make the improvements visible to the students, provide feedback on course evaluation and contribute to more engaged students.

The evaluation model

The problems identified can be categorized as belonging to the management level or to the operational level. The management level category consists of problems that need to be taken care of on a higher hierarchical level and are outside the scope of a traditional course evaluation process. The operational category consists of problems that can be taken care of within the course evaluation process. The two categories are not mutually exclusive, since they overlap; that is, some problems exist in both categories, but different aspects of the problems are in focus.

The management-level problems are: "P1 The regulation is not detailed enough"; "P2 Insufficient feedback" (the management level needs to make sure that teachers provide students with feedback and that an incentive exists); "P6 The evaluating student will not benefit from the results" (the management level needs to inform and market that students will benefit from participating in course evaluations); "P8 The identified problems are not transferred into measures to take" (the management level needs to follow up that good proposals for improvements are taken care of); and "P9 The outcome of course evaluations is not easily accessible" (the management level needs to inform teachers of how, when and why results should be presented).

The operational level is divided into five phases: follow-up meeting, presenting improvements, mid-course evaluation (Kessler & Nadjm-Tehrani, 2002), quantitative evaluation (Gummesson, 1988; Patton, 1990) and qualitative evaluation (Kvale, 1989; Bryman, 2001; Strauss & Corbin, 2007).

The first phase is a follow-up meeting. The follow-up meeting should be organized approximately one month before the commencement of the course. The term follow-up is used since the aim is to follow up, or check, whether the agreed and documented improvements from the previous version of the course have been implemented or not. A written protocol from the qualitative evaluation (see phase five) has been handed over to this year's students. The student representatives ask for a meeting with the responsible teacher. At the meeting, the teacher and the student representatives walk through the protocol. This phase works more or less as a checkpoint, where the teacher can verify that agreed improvements are implemented or present arguments for why they are not. Presenting arguments for why an improvement proposal has not been implemented is also feedback that is important to bring forward. This phase primarily addresses the problems "P2 Insufficient feedback", "P3 Low engagement of students", "P5 Teachers are not receptive to criticism", "P8 The identified problems are not transferred into measures to take" and "P9 The outcome of course evaluations is not easily accessible". This phase addresses all three aims (see section 5.1).

Phase two, presenting improvements, aims at giving the students clear feedback on what course improvements have been implemented since the last time the course ran.
This information should be presented at the first lecture, in front of all the students. In that way, the students will be aware that their experiences from participating in courses affect the course content or the pedagogy used. This awareness will improve their motivation and attitudes towards course evaluation. Phase two addresses all three aims (see section 5.1) and primarily addresses the problems "P2 Insufficient feedback", "P3 Low engagement of students", "P5 Teachers are not receptive to criticism", "P8 The identified problems are not transferred into measures to take" and "P9 The outcome of course evaluations is not easily accessible".

The aim of the third phase, mid-course evaluation, is to offer the students a possibility to influence the running course. Being able to present opinions about the running course will motivate the students to perform evaluation, since the result will have an effect on the remaining part of the running course and not only on the next version of the course. This phase primarily addresses the problems "P2 Insufficient feedback", "P3 Low engagement of students", "P5 Teachers are not receptive to criticism", "P6 The evaluating student will not benefit from the results" and "P8 The identified problems are not transferred into measures to take". This phase addresses all three aims (see section 5.1).

The fourth phase, quantitative evaluation, addresses the first aim of providing an instrument for all students to present their opinions and experiences. A questionnaire consisting of structured and predefined questions is sent to all the students. Answers can be given according to predefined options or as free text. The advantage of starting with a quantitative survey is that data can be gathered on a broad array of issues (Sears, 1997). The answers are compiled according to general statistical methods. This phase primarily addresses the problems "P2 Insufficient feedback", "P3 Low engagement of students", "P4 Performing course evaluation is too time-consuming" and "P7 The questionnaires are not distributed adjacent to the course end".

Figure 3. The course evaluation model

The fifth phase consists of a qualitative evaluation approach. This phase addresses all three aims (see section 5.1). The idea is to use the results from phase four (the quantitative evaluation) and to gather more data around the issues of most concern; the focus is on the major strengths and problems. Student representatives, teachers and study directors meet face-to-face. The aim of this meeting is to identify and suggest proposals for improvements. A protocol is used to document what has been agreed upon. Of course, it may be the case that teachers and students cannot reach an agreement. In that case, the teacher should provide clear arguments for why a proposed improvement is rejected; otherwise, there is a risk that the students perceive the teacher as not receptive to criticism. This phase primarily addresses the problems "P2 Insufficient feedback", "P5 Teachers are not receptive to criticism" and "P3 Low engagement of students".
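The text above only states that the phase-four answers are compiled "according to general statistical methods". As a minimal, hypothetical sketch of what such a compilation could look like, the function below aggregates predefined-scale answers per question and reports the response rate alongside the mean, since low participation (approx. 35% in this study) matters as much as the scores themselves. All names and the 1-5 scale are assumptions for illustration.

```python
from statistics import mean
from typing import Dict, List

def compile_results(answers: Dict[str, List[int]], enrolled: int) -> Dict[str, dict]:
    """Aggregate predefined-scale (e.g. 1-5) answers per question.

    `answers` maps each question to the scores received; `enrolled` is the
    number of students asked, so that a response rate can be reported.
    """
    return {
        question: {
            "mean": round(mean(scores), 2),
            "answers": len(scores),
            "response_rate": round(len(scores) / enrolled, 2),
        }
        for question, scores in answers.items()
        if scores  # skip questions nobody answered
    }

# Hypothetical example: 7 of 20 enrolled students answered one question.
print(compile_results({"Was the aim of the course fulfilled?": [4, 5, 3, 4, 4, 2, 5]}, enrolled=20))
```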
6. CONCLUSIONS

This study has revealed a number of problems concerning course evaluation. Some problems might be well known, while others might be unknown, which could depend on different contexts or cultures. The knowledge contributions of this paper are:

* a documented and structured analysis of problems and problem relations that could appear in the context of course evaluation (see sections 5.2-5.3)
* proposals for reducing the problems (see section 5.4)
* an evaluation process model (see section 5.5)

The aim of course evaluation is to provide quality. Today, quality is the most important competitive factor among universities. Therefore, course evaluation should be used as one of the most important instruments for improving education. Course evaluation should not be perceived as a burden or something that you are forced to do. Perceiving course evaluation as a burden can be compared to Deci's three levels of behaviour (1992) (see section 4). Deci calls one level "embedded regulation"; that is, the only reason for performing the task is that you have to do it, and there is no free will involved. According to Deci, motivation will increase if the behaviour is in line with what he calls "integrated regulation", where the demands are incorporated into your existing value system. Perceiving course evaluation as "integrated regulation" means a change of attitude towards course evaluation: seeing it as an excellent quality instrument and not as a burden.

The main message of this study is that the questions why, what, when and how must be considered when planning for course evaluation. To plan means to organize a process that is more structured and formal than the current evaluation process. Too much governance, in the form of prescribed structures and detailed processes, could be perceived as offensive, especially within academia, where teachers are used to a high degree of freedom of action. On the other hand, too much freedom could lead to a situation where course evaluations are perceived as optional. The findings of this study show that there is a need for increased governance and for the establishment of incentives for performing satisfactory course evaluation.

Two established concepts discussed in general evaluation theory are goal-based evaluation and goal-free evaluation. Goal-based evaluation is defined as measuring the extent to which a program or intervention has attained clear and specific objectives (Patton, 1990), and goal-free evaluation is defined as gathering data on a broad array of actual effects and evaluating the importance of these effects in meeting demonstrated needs (Patton, 1990; Scriven, 1972). The proposed course evaluation can be characterized as both goal-based and goal-free. For example, phase four, "quantitative course evaluation", is primarily a goal-based activity, since the aim is to measure whether predefined goals are fulfilled or not, to what extent and in what ways. Phase five, "qualitative course evaluation", is both goal-based and goal-free: besides the outcome of phase four, other issues not covered by the phase-four questionnaire are also discussed.

Other established concepts in general evaluation theory are formative and summative evaluation. The aim of formative evaluation is to provide systematic feedback to the designers and implementers during an ongoing development process (Walsham, 1993; Scriven, 1967). Summative evaluation is concerned with identifying and assessing the worth of programme outcomes, in the light of initially specified success criteria, after the implementation of the change programme is completed (Scriven, 1967).
The proposed evaluation embraces both formative and summative evaluation. Phase three, "mid-course evaluation", is primarily formative, while phase four, "quantitative course evaluation", and phase five, "qualitative course evaluation", are primarily summative.

This study can be criticized for being problem-oriented and not considering existing strengths. In parallel with being problem-oriented, there is also a need to identify strengths that need to be preserved. The risk of being only problem-oriented is that you may "throw out the baby with the bathwater". Looking closer into strengths is therefore proposed as a direction for future research. This study can also be criticized for proposing a too ambitious evaluation process; encouraging students to participate in different evaluation phases over and over again could lead to "evaluation fatigue". According to The Swedish National Agency for Higher Education (2003), however, the advantages of presenting a model that creates conditions for increasing motivation should exceed the disadvantages. According to White (1959), competence is an important factor for increasing motivation. This claim is also applicable in the context of course evaluation: the results from course evaluation should lead to improvements of the course content, and updated course content will of course create good conditions for improving the competence of the students.

Another limitation of this study is that the results are based only on the student perspective. Other, complementary perspectives, such as those of the teachers, the study directors and the university level, would probably bring forward other problems that might be in conflict with the students' interests. In this study, the students' perspective is chosen, the reason being that it is the students who will primarily benefit from course evaluations.

The scope of this study is limited to Swedish conditions for performing course evaluation within the subject of information systems. A reasonable question to ask is: are the findings valid for other conditions as well? There are at least two directions of generalization. The first direction is: are the findings valid for the subject of information systems in other countries? The second direction is: are the findings valid for other subjects? In order to answer both these questions, more research is needed. Therefore, collecting additional data from other subjects and other countries is proposed as future research. However, several of the problems identified and the solutions proposed are not formulated as being specifically Swedish or as having a specific information systems character.

REFERENCES

Alvesson M, Sköldberg K (1999) Reflexive Methodology: New Vistas for Qualitative Research, Sage, London
Bryman A (2001) Social Research Methods, Oxford University Press
Carlshamre P (1994) A Collaborative Approach to Usability Engineering, Licentiate thesis, Department of Computer and Information Science, Linköping University, Sweden
Cronbach L (1963) Course Improvements Through Evaluation, The Teachers College Record, http://www.tcrecord.org/Content.asp?ContentId=2843. Site accessed: May 22, 2008
Cronbach L J, Ambron S R, Dornbusch S M, Hess R D, Hornik R S, Phillips D C, Walker D F, Weiner S S (1980) Toward Reform of Program Evaluation, Jossey-Bass Publishers, San Francisco, CA
Dwyer K (2008) Managing Change: Motivating People, Ezine Articles. Site accessed: June 11, 2008
Faculty of Arts & Science (2007) In Swedish: Handlingsplan för kvalitetsfrågor 2007-2009, http://www.liu.se/ffk/organisation/ledning/handlingsplan_kval.pdf. Site accessed: May 22, 2008
Glaser B, Strauss A (1967) The Discovery of Grounded Theory, Aldine, New York
Glynn W J, Barnes J G (1995) Understanding Service Management, John Wiley, Chichester
Goldkuhl G, Röstlinger A (1993) Joint Elicitation of Problems: An Important Aspect of Change Analysis, in Avison D et al (eds) Human, Organizational and Social Dimensions of Information Systems Development, North-Holland
Goldkuhl G, Röstlinger A (2003) The Significance of Workpractice Diagnosis: Socio-Pragmatic Ontology and Epistemology of Change Analysis, in Proc of the International Workshop on Action in Language, Organisations and Information Systems (ALOIS-2003), Linköping University
Gummesson E (1988) Qualitative Methods in Management Research, Chartwell-Bratt, Bromley, UK
Heron J, Reason P (2001) The Practice of Co-operative Inquiry: Research 'with' rather than 'on' People, in Reason P, Bradbury H (eds) Handbook of Action Research, Sage Publications, London
Herzberg F, Mausner B, Snyderman B B (1959) The Motivation to Work, Wiley, New York
Kessler C, Nadjm-Tehrani S (2002) Mid Term Course Evaluations with Muddy Cards, ACM SIGCSE Bulletin, Vol 34, Issue 3, pp 233-243, http://portal.acm.org/citation.cfm?doid=637610.544501. Site accessed: June 24, 2008
Kvale S (1989) Issues of Validity in Qualitative Research, Studentlitteratur, Lund
Love A J (1991) Internal Evaluation, Sage Publications
Maslow A H (1943) A Theory of Human Motivation, Psychological Review, 50, 370-396
Moxnes P (1984) In Swedish: Att lära och utvecklas i arbetsmiljön, första utgåvan, sjätte tryckningen, Centraltryckeriet AB, Borås, ISBN 91-27-01187-9
Nielsen J (1993) Usability Engineering, Academic Press, Boston
Patton M Q (1990) Qualitative Evaluation and Research Methods, Sage, Newbury Park
Peirce C S (1931-35) Collected Papers, Harvard U.P., Cambridge
Remenyi D, Sherwood-Smith M (1999) Maximise Information Systems Value by Continuous Participative Evaluation, Logistics Information Management, Vol 12, No 1/2, pp 14-31
Rutman L (1980) Planning Useful Evaluations, Vol 2, Sage Library of Social Research
Scriven M (1967) The Methodology of Evaluation, Rand McNally, Chicago
Scriven M (1972) Pros and Cons About Goal-Free Evaluation, Evaluation Comment 3:1-7, in Thomas L G (ed) Philosophical Redirection of Educational Research: The Seventy-First Yearbook of the National Society for the Study of Education, University of Chicago Press
Sears A (1997) Heuristic Walkthroughs: Finding Problems Without the Noise, International Journal of Human-Computer Interaction, 9(3), pp 213-234
SFS (1993) In Swedish: Högskoleförordningen, http://www.notisum.se/. Site accessed: May 14, 2008
Stake R (1990) Understanding Educational Evaluation (Norris, ed), Kogan Page
Stephen X (2002) Providing Timely Feedback: Important Role of Manager, Los Angeles Business Journal, Feb 25, 2002
Strauss A L, Corbin J (1998) Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, SAGE Publications, Thousand Oaks, CA
Strauss A L, Corbin J (2007) Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, SAGE Publications, Thousand Oaks, CA
The Swedish National Agency for Higher Education (2003) In Swedish: Förnyad granskning och bedömning av kvalitetsarbetet vid Linköpings universitet, Högskoleverkets rapportserie 2003:9 R, http://www.hsv.se/download/18.539a949110f3d5914ec800088323/0309R.pdf. Site accessed: June 26, 2008
Walsham G (1993) Interpreting Information Systems in Organisations, Wiley & Sons
White R W (1959) Motivation Reconsidered: The Concept of Competence, Psychological Review, 66, 297-333