Effectiveness of Online Discussion Groups

Karen Filipski
Karen_Filipski@hotmail.com

Michael W. Bigrigg
Bigrigg@iup.edu

Computer Science Department
Indiana University of Pennsylvania
Indiana, PA 15705, USA

Abstract

To build an online community, which is an essential learning component for an online class, web discussion groups are used to engage the student population in appropriate academic learning exchanges. The concern at hand is how effective large discussion threads are when many active students participate. We assert that there are a finite number of possible answers to a direct question; therefore, online discussion threads should be capped in participant numbers, due to the redundancy of answers and the lack of student engagement that appear when message counts grow too high.

Keywords: Online discussion, expert learners, online communities, online education, thread effectiveness, computer literacy, distance learning

1. INTRODUCTION

Building an online community within the confines of an online learning course is imperative to fostering student learning. To build such a community, web discussion groups are used to engage the student population in appropriate academic learning exchanges. The concern at hand is how effective large threads are when many active students participate.

Similar concerns surface within the confines of a traditional classroom as well. An instructor's goal is to facilitate the development of student learning through engaged pupil discussion. Ideally, an effective traditional classroom discussion involves student answers building upon previous contributions, including the instructor's and other pupils'. This is accomplished through real-time instructor facilitation and continuing inquiry driven by student ownership of, and accountability for, learning. The end result of this dynamic discussion is a cycle of ideas from which the entire class can benefit, in which ideas build upon each other to construct high levels of topical understanding.

However, within the confines of our online learning environment, students responded only to the original thread topic, rather than reshaping the topic as discussion continued. Our undergraduate computer literacy pupils attempted only to answer the question originally posed within the thread and did not attempt to build responses off of other students' contributions. The mode of class thinking was very linear; due to the limiting nature of the online learning environment, including asynchronous responses, the teacher was not able to lead a dynamic, real-time discussion in which he could restructure the original question to build off of student discussion and pre-existing knowledge and thereby create that cycle of ideas. This is the fundamental issue: it is not the question being asked, it is the "one shot" nature of the online experience, in which an instructor can only ask one question at a time and cannot reshape it dynamically in real time. This problem would arise regardless of course level, whether in a computer literacy course or a senior-level course. The problem lies in the mode of delivery and its constraints, not in the level of thinking exhibited by the students.

This concern regarding the effectiveness of large discussion threads raises the question: should a cap be placed on the size of an online discussion thread to ensure the quality of active learning and appropriate student engagement?
We assert that there are a finite number of possible answers to a direct question; therefore, online discussion threads should be capped in participant numbers, due to the redundancy of answers and the lack of student engagement that appear when message counts grow too high. Posts at the very end of a thread appear to be thrown together at the last minute, or to be half-hearted attempts to solve the problem at hand, since the problem has either already been solved or the student feels overwhelmed by the number of posts already present.

We begin with a discussion of the pedagogical background of the question, including the necessity of online discussion groups to foster student learning in a web forum and the special knowledge our expert learners bring to those discussions. This is followed by an introduction to the case study and a review of the data gathered from that experience. We then dive into the data analysis of our experiment and suggest results and best practices that may be pursued in future work.

2. PEDAGOGICAL BACKGROUND

Expert Learners

All students have ownership of special knowledge within different domains. This special knowledge, or idea web, is a skeleton of information that the pupil has developed during his or her lifetime. The data might have been acquired through formal educational methods or life experiences. Therefore, since every pupil has preconceived notions and information about different domains before entering your classroom, you must treat those pupils as "expert learners." This data may or may not be accurate. (Bransford, Brown and Cocking)

Since every student is an expert in some way, an instructor must understand how the pupils' understandings will affect curriculum digestion. Therefore, a baseline study should be conducted to gauge what level of understanding students may have about a topic.

The way in which an expert student stores and accesses accurate information can make her application of that knowledge to new instructional situations highly efficient. The expert student does not leaf through her entire card catalog of previously collected and compartmentalized data to find what she is looking for; her categorization system is context-sensitive, so she knows what situational understandings are appropriate for the problem at hand or the educational task to conquer. Moreover, because an academic expert student's knowledge base is conditional, such students are flexible in both learning new information and understanding application scenarios. (Miller 81-87) Fluency is not based on speed of performance; an expert student's response is efficient and confident, not hasty and risky. A person can only focus on a finite amount of information and sub-processes at a specific moment. (Miller 81-87) By focusing on applicable patterns within data and relating those patterns to predetermined schemas, an expert student can better relate, and respond to, subject-specific scenarios.

Transfer is pivotal to successful learning. Byrnes defines transfer as the capability of a student to apply previously developed learning and understanding to new situations and ideas. (Byrnes) Successful expert students can accurately recognize a new learning situation, search their memory banks for relatable data, and apply that information successfully. This transfer grows the expert student's data bank. The expert student's learning accomplishment with transfer is contingent on multiple factors.
First, the pupil must have mastered the basics of her understanding and not based her knowledge on context-specific scenarios. A remarkable expert student recognizes that her knowledge is conceptual and can be applied to multiple circumstances; moreover, the act of transfer is viewed as necessary for all future learning. (Bransford, Brown and Cocking)

Online Community Development

Distance education, especially online learning, has a special set of needs that a traditional course does not have. Specifically, to successfully implement an online course that fosters the development of topical understandings, the teacher must make appropriate use of online discussions. Two key ideas should be kept in mind when implementing discussion board threads: using the running dialogue as a method of student inquiry, and encouraging the use of expansive questioning. (Palloff and Pratt)

The reason for using online discussion boards is to facilitate student development of ideas through a running dialogue. This running dialogue should aim to build upon previous work to further inquire into the nature of the topic at hand. For instance, if the topic of a particular discussion board thread is computer performance and speed, students may investigate the concept of memory to figure out the need for memory, and then dive deeper into the topic, exploring such facets as cache and access time.

Moreover, the instructor should encourage the tactic of expansive questioning through usage and description. Expansive questioning involves the constant reworking of the topic at hand to respond appropriately to expert learners' idea webs and to the content of the discussion thus far. (Palloff and Pratt) Redirection and refocusing are encouraged through the use of expansive questioning.

To effectively implement these dynamic discussion boards, the class must work to build a unique identity, or a conscious community. A conscious community functions according to its own values, mores, and desires. (Palloff and Pratt) An online class is a gathering of electronic personalities, that is, the online students with their corresponding behaviors. (Palloff and Pratt) By promoting feedback and having students share in the role of discussion facilitator, the members of the online community grow dependent on each other for idea construction. Facilitation can be directly shared through the requirement of participation and the authoring of replies.

Most importantly, for online discussion threads to be successful, "infoglut" must be avoided. According to Palloff and Pratt, infoglut is the overwhelming of students with massive amounts of information. Online community development is a lot of work, and threads can grow out of control, swelling to hundreds of messages or more. The increase in workload, the loss of the traditional classroom's visual and verbal cues, online navigation difficulty, and posting anxiety due to timing can make a student feel at a loss. (Palloff and Pratt) There are design and feedback methods that can be used within the structure of the course to minimize infoglut. While we suggest limiting the number of participants in a thread, an alternative is to use mechanisms that minimize infoglut by focusing on the question itself, so that the instructor must become better at asking the right question. However, we suggest controlling the size and duration of threads to deal with this situation.
3. CASE STUDY DESCRIPTION

This case study is based on an introductory undergraduate computer literacy course offered at Indiana University of Pennsylvania, a rural state university. The goal of the course is to provide students with a fundamental understanding of computer usage and composition. By the end of the course, students should have developed an understanding of how computers work and of what users can accomplish with computers.

The class was administered through WebCT. Fifty-five pupils were required to complete activities, projects, examinations, papers, and online discussion threads. The students' participation grade, worth 15% of the course grade, consisted of participating in threads throughout the semester, with one week to post primary and reply contributions to each thread. Several of the threads required the students to post primary contributions with a partner.

Duplication of ideas within threads tended to happen toward the latter part of each thread, and duplication tended to be witnessed toward the beginning of the semester. This contradicts the idea that duplication happens due to boredom with the course.

4. DATA

Background

For the purpose of this experiment, we focused on four discussion threads in which the students were required to participate over the course of the online class. Each thread had a specific topic established, to which each student (or student pair) had to respond appropriately. Additionally, pupils had to author at least one response to a peer's ideas (with the exception of the baseline thread). Each post was "graded" according to a three-part scale, resulting in a "CAP" score:

1. Completeness: Does the post provide all information required within the wording of the thread topic? Was the question posed answered adequately? Does the student's response include examples to clarify her point?

2. Accuracy: Is the pupil correct in her ideas and facts?

3. Precision: Is the student's posting exact? Does she commit to an answer, or does she ramble on and on? Note that a precise answer may still be wrong; precision rewards sticking with one's ideas in a clear-headed, direct manner.

Each of the criteria listed above was evaluated on a 4-point scale, ranging from inadequate (1) through excellent (4). After each post was evaluated on the three criteria, the posting received a final CAP score, which is the mean of the three criterion scores earned.
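The arithmetic behind a CAP score is simple enough to state exactly. The sketch below is our own illustration, not part of the original grading workflow (the function name cap_score is hypothetical); it shows where fractional scores such as 3.67 and 3.33 in the tables come from:

```python
def cap_score(completeness: float, accuracy: float, precision: float) -> float:
    """Mean of the three criterion marks, each nominally on a
    4-point scale from inadequate (1) to excellent (4)."""
    return round((completeness + accuracy + precision) / 3, 2)

# Example: a complete (4), accurate (4), but imprecise (3) post
# earns (4 + 4 + 3) / 3 = 3.67, matching scores seen in Tables 1-6.
print(cap_score(4, 4, 3))  # 3.67
```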
Baseline Thread

A baseline thread was used to determine the capabilities of the students in responding to online discussion questions. We wanted the pupils to demonstrate how well they could answer a query posed to them, and how effective the medium could be. The baseline thread asked students to introduce themselves to the class, state their major, and explain how computers are used in their field. For this thread, pupils were not required to respond to any other student's thoughts. This thread treated all students as expert learners, since each pupil was encouraged to pull from his or her own personal idea web about computer usage within a particular field. The thread was subjective, supported by facts.

The baseline was also used as an evaluation mechanism; we first needed to determine how well the students could contribute to a discussion. In other words, we were checking whether there was a bias in the thinking of the expert learners we reviewed. This thread served as our pre-evaluation. The supposition is that everyone came into the class as a "good" student, capable of drawing from his or her idea web to participate adequately in the conscious community's discussion board.

In this thread, there were 56 personal introductions provided and 54 replies posted. Of these original 56 introductions, the CAP scores were broken out as shown in Table 1 and Figure 1. The majority of the students earned exceptional CAP scores. This indicated to us that our pupils were able to adequately build an online community by responding with insightful, thought-provoking posts, filled with pre-conceived knowledge, to aid in the development of topical understandings. The high participation rate was key, in that it aided in the establishment of the online community and the definition of the electronic personalities present.

I/O Thread

The second thread that we analyzed was assigned early in the semester. The topic of the thread was to identify a peripheral device, describe whether it served the role of an input and/or an output device, and explain why it should be classified that way. For this thread, pairs of pupils were required to respond to other students' thoughts by replying with an identification of a device similar to that contained in the original message. Students were not allowed to duplicate peripheral devices, although pupils could choose a similar type of device if the make and model were different from what was already posted.

In this thread, there were 23 original, or primary, posts provided; each post was provided by a pair of students. Of these, the CAP scores were broken out as shown in Table 2. The primary student posts were also gauged according to the repetitive ideas indicated. The occurrence of repetitive ideas increased as the age of the thread increased, as displayed in Table 3.

Software Thread

The third thread that was analyzed was completed halfway through the semester. The teams of pupils were asked to research a particular piece of software, briefly describe its functionality, define what type of software category it fit into (based on the textbook's definitions), and explain why the category was selected. Categories included such types as personal productivity, graphics and multimedia, and personal financial software. Each student was required to respond to another student team's posting with a piece of similar software that would be classified within the same category.

In this thread, there were 24 original, or primary, posts provided; each post was provided by a pair of students. Of these, the CAP scores were broken out as shown in Table 4. The primary student posts were also gauged according to the repetitive ideas indicated. As shown in Table 5, the occurrence of repetitive ideas increased as the age of the thread increased.

Personal Computer Building Thread

The last thread that we reviewed was administered near the end of the semester. Individual students were required to go to www.dell.com and build an ideal personal computer. It was also compulsory for pupils to explain why their choices were selected. Furthermore, pupils were required to agree or disagree with at least one other posting and explain their rationale within an individual reply. In this thread, there were 38 original, or primary, posts provided. Of these, the CAP scores were broken out as shown in Table 6.

5. DATA ANALYSIS

We analyzed the data based on the age of the thread. The analysis focused on three factors: CAP score, repetitiveness of ideas, and incidents of mistakes and corrections.
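Since each factor is tracked per posting day (Tables 7 through 12), the bookkeeping behind the analysis can be made concrete. The sketch below is our own reconstruction under assumed data shapes; the post records and their values are hypothetical placeholders, not the study's data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical post records: (posting date, CAP score, duplicates an earlier idea?)
posts = [
    ("1-Feb-07", 4.00, False),
    ("2-Feb-07", 3.00, True),
    ("2-Feb-07", 3.67, False),
    ("4-Feb-07", 1.00, True),
]

# Group posts by posting day, then report the per-day mean CAP score
# and the count of duplicate primary posts.
by_day = defaultdict(list)
for day, cap, dup in posts:
    by_day[day].append((cap, dup))

for day, entries in by_day.items():
    caps = [cap for cap, _ in entries]
    dups = sum(1 for _, dup in entries if dup)
    print(f"{day}: mean CAP {mean(caps):.2f}, "
          f"{dups} of {len(entries)} primary posts duplicate earlier ideas")
```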
Baseline Thread

As evidenced by the CAP score spread in Table 7, the baseline thread indicated that most of the students were capable of successfully contributing to an online discussion board with appropriate data. During this discussion, most of the registered students participated; it served as a bonding tool for conscious community development and electronic personality acceptance. The baseline thread showed that every participating pupil was an expert learner of some sort. Every student had a positive contribution to make to the discussion, garnered from his or her pre-existing idea web.

However, the duplication of ideas regarding how computers were used in specific fields grew as the age of the thread increased, as shown in Figure 2. Some of the duplicated ideas included the use of email for communication; the application of educational software to make lesson plans, take attendance, and store grades; the use of the Internet to research a variety of topics; and the storage of appointments. There was also duplication of fields of study, including the legal, healthcare, education, and management fields. The CAP scores of the students were relatively high. Most pupils showed an ability to answer the question at hand accurately, fully, and succinctly.

I/O Thread

The analysis of this thread indicates a few key points regarding the problem we are investigating. It appears that infoglut had taken hold. The number of students participating in this thread, as compared with the baseline thread, dropped, as evidenced in Table 8. Moreover, as shown in Table 9, we question whether some of the students read the previous postings by either the professor or other pupils, because students repeated mistakes whose original instances had already been corrected. Some people did not follow directions, as detailed in Figure 3. A collection of students repeated the identification of peripheral devices in posts, even though the directions expressly stated that such duplication was not permitted.

Many of the posts did not contain original ideas; many duplicated the efforts of earlier posts. Most of the posts analyzed mice, keyboards, monitors, printers/scanners, and external hard drives. There was a great deal of redundancy on the message board. Toward the end of the thread's lifetime, most posts regurgitated the same information as earlier posts in a more succinct manner, rambled on and on, or phoned in half-baked, incorrect ideas. The incident rate of low CAP scores increased over time.

Software Thread

The analysis of this thread confirms the findings of the first experimental thread. In examining Table 10, it becomes apparent that as the age of the thread increased, the incidence of repeating already-posted ideas and mistakes increased, and the occurrence of lower CAP scores increased. Many of the later responses did not fully answer the question; the software descriptions were incomplete, or the reason a piece of software fell into a particular category was not fully explained (see Table 11). Since there was a finite number of ideas for this thread posting, many of the lagging student teams either did not read previous posts or seemed to count on earlier posts to explain their ideas (almost building off of previous student teams' work). This resulted in a great deal of duplication of ideas and software titles, which directly conflicted with the directions; student teams had been instructed to choose a unique software package.
Multiple student teams that posted later in the thread chose Microsoft Excel, tax, educational, and graphics software that had already been discussed in previous posts, as shown in Figure 4. Some of the later-posting students muddied the categories, changing the definitions of those categories to suit their needs. One student, who posted later in the duration of the thread, blatantly plagiarized the software description from the manufacturer's website. This may indicate infoglut setting in.

Personal Computer Building Thread

The analysis of this last experimental thread confirmed our overall findings. As shown in Table 12 and Figure 5, the vast majority of the poor CAP scores occurred toward the end of the thread's lifespan, and most of the mistakes and incomplete responses were posted toward the end of the thread's existence. As the discussion went on, more and more participants did not fully answer the question posed to them. Unfortunately, another student plagiarized, directly copying the computer description from the Dell website. Two students chose to upgrade RAM because they incorrectly thought that RAM was hard drive space. This seems to illustrate that these students had a very limited understanding of the topic they were supposed to be discussing. It may also indicate that these students did not read the posts that other pupils made, as those other students correctly identified the properties of RAM.

There was a great deal of redundancy within this thread; almost every single student chose a notebook computer. Additionally, as the thread continued, almost every single pupil chose the same computer model, only slightly tweaking component settings. Since almost every student chose the exact same computer, most replies were fluff; many were short and added nothing substantial to the conversation, other than patting each other on the back for choosing similar models. Moreover, almost everyone's reasons for choosing the same computer with similar settings were the same!

6. POTENTIAL SIDE EFFECTS

Two possible side effects were examined: consistency in the order of posting, and the relationship to class involvement. While our baseline thread showed that students were equally able to contribute, the side-effect analysis asked a question about individual student ability: are there some students who are simply better at participating in the discussion?

The first side-effect analysis, as shown in Figure 6, determined whether students posted in the same order each time. If they had posted in the same order, this would suggest that the students who posted early were simply better at contributing to a topic discussion, since we had previously determined that earlier posts were better. In fact, students posted in an arbitrary order each time; there is no relationship between posting orders. The key point to take away from the chart is that the order of submission varies widely for each student.

The second side-effect analysis, as shown in Figure 7, examined the amount of class participation each student engaged in, as an indication that the student would be better at producing a high-quality post. The total number of posts read was normalized for inclusion in the chart. There is no relationship between the number of posts read and the order of posting; therefore, reading the posts does not make a person better able to create a post.
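Both side-effect checks reduce to standard calculations. The sketch below is our own reconstruction rather than the authors' analysis code: a Spearman rank correlation between a student cohort's submission positions in two threads (near zero when the order is arbitrary, as Figure 6 suggests), plus the kind of min-max normalization we assume was applied to the posts-read counts in Figure 7. All values shown are hypothetical:

```python
def spearman(xs, ys):
    """Spearman rank correlation for two equal-length lists of
    distinct values (sufficient for submission-order data)."""
    n = len(xs)

    def ranks(values):
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def normalize(counts):
    """Min-max scale posts-read counts into [0, 1] so they can
    share a chart axis with posting order."""
    lo, hi = min(counts), max(counts)
    return [(c - lo) / (hi - lo) for c in counts]

# Hypothetical submission positions for five students in two threads:
order_io = [1, 2, 3, 4, 5]
order_sw = [4, 1, 5, 2, 3]
print(spearman(order_io, order_sw))       # -0.1: essentially no relationship
print(normalize([120, 40, 300, 75, 10]))  # posts-read counts scaled to [0, 1]
```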
7. CONCLUSION

Some submissions were simply wrong no matter where they fell in the lifespan of the thread. These were students who were not able to contribute due to a limited understanding of the subject matter. The order of contribution can hurt a student's ability to contribute, but an early post does not add to a student's understanding of the topic. Students exhausted their limited understanding of the topic before everyone was able to participate, due to the linear nature of the discussion.

In an in-person classroom discussion, the instructor can adapt and modify the discussion topic in response to the contributions given in class. Experience has shown that students do not respond to the interjections of the instructor as an online discussion progresses; all topic information has to be provided when the discussion begins.

When a class thread discussion has too many participants, some of the pupils may feel pressured to contribute something new to a finite discussion topic. The students may also feel pressured to read an extraordinary amount of repetitive data. The duplication of ideas is due to the high number of students participating in the thread. Our proposition is to create sub-threads that touch upon a specific application of an idea, in which only a subgroup of the class's students participate. After a discussion thread is closed, all subgroup ideas can be published to a neutral class location for digestion by the student body. Additionally, it is suggested that responses be highly structured, so that duplicated ideas are avoided and frivolous "I agree" posts are discarded.

8. REFERENCES

Bransford, J.D., A.L. Brown, and R.R. Cocking, eds. How People Learn: Brain, Mind, Experience, and School. Washington, D.C.: National Academy Press, 2000.

Byrnes, J.P. Cognitive Development and Learning in Instructional Contexts. Boston, MA: Allyn and Bacon, 1996.

Glaser, R. "Expert Knowledge and Processes of Thinking." Enhancing Thinking Skills in the Sciences and Mathematics. Hillsdale, NJ: Lawrence Erlbaum Associates, 1992.

Miller, G.A. "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information." Psychological Review 63 (1956): 81-97.

Palloff, R.M., and K. Pratt. Building Learning Communities in Cyberspace. San Francisco: Jossey-Bass, 1999.

Simon, H.A. "Problem Solving and Education." Problem Solving and Education: Issues in Teaching and Research. Hillsdale, NJ: Lawrence Erlbaum, 1980: 81-96.

APPENDIX A

Table 1: Personal Introductions - CAP Scores

  # of Students   CAP Score   Ratio
  32              4.00        57.1%
  10              3.67        17.9%
  6               3.33        10.7%
  1               3.17        1.8%
  4               3.00        7.1%
  1               2.67        1.8%
  2               2.50        3.6%

Table 2: I/O Discussion - CAP Scores

  # of Student Pairs   CAP Score   Ratio
  9                    4.00        39.13%
  3                    3.67        13.04%
  1                    3.33        4.35%
  0                    3.17        0.00%
  8                    3.00        34.78%
  1                    2.00        4.35%
  1                    1.00        4.35%

Table 3: I/O Discussion - Duplication

  # of Students   Duplicate Ideas Already Posted?   Ratio
  12              No                                52.17%
  11              Yes                               47.83%

Table 4: Software Analysis - CAP Scores

  # of Student Pairs   CAP Score   Ratio
  5                    4.00        20.83%
  3                    3.67        12.50%
  4                    3.33        16.67%
  0                    3.17        0.00%
  2                    3.00        8.33%
  1                    2.17        4.17%
  2                    2.00        8.33%
  1                    1.33        4.17%
  6                    1.00        25.00%
Table 5: Software Analysis - Duplication

  # of Students   Duplicate Ideas Already Posted?   Ratio
  17              No                                70.83%
  7               Yes                               29.17%

Table 6: Custom Computer - CAP Scores

  # of Students   CAP Score   Ratio
  15              4.00        39.47%
  5               3.67        13.16%
  2               3.50        5.26%
  2               3.33        5.26%
  4               3.00        10.53%
  1               2.83        2.63%
  2               2.67        5.26%
  2               2.33        5.26%
  4               1.33        10.53%
  1               0.83        2.63%

Table 7: Baseline - CAP Analysis (primary posts per day, by CAP score)

  Date        CAP 4   3.67   3.33   3.17   3.00   2.67   2.50
  23-Jan-07   8       2      0      1      0      0      0
  24-Jan-07   7       1      1      0      0      0      0
  25-Jan-07   3       3      1      0      0      0      0
  26-Jan-07   2       0      2      0      0      0      0
  27-Jan-07   2       0      0      0      0      0      0
  28-Jan-07   10      4      1      0      3      1      2
  29-Jan-07   0       0      0      0      1      0      0
  Totals:     32      10     5      1      4      1      2

Table 8: I/O - CAP Analysis (primary posts per day, by CAP score)

  Date       CAP 4   3.67   3.33   3.17   3.00   2.67   2.50   2.00   1.00
  1-Feb-07   1       2      2      0      1      0      0      0      0
  2-Feb-07   1       1      0      0      5      0      0      0      0
  3-Feb-07   4       0      0      0      1      0      0      0      0
  4-Feb-07   4       0      0      0      1      0      0      1      1
  Totals:    10      3      2      0      8      0      0      1      1

Table 9: I/O - Duplication Analysis

  Date       Duplicate Primary Posts - Yes   Duplicate Primary Posts - No
  1-Feb-07   0                               5
  2-Feb-07   5                               2
  3-Feb-07   4                               1
  4-Feb-07   2                               4
  Totals:    11                              12

Table 10: Software - CAP Analysis (primary posts per day, by CAP score)

  Date        CAP 4   3.67   3.33   3.00   2.83   2.16   2.00   1.33   1.00
  6-Feb-07    1       0      0      0      0      0      0      0      0
  7-Feb-07    1       0      0      0      0      0      0      0      0
  8-Feb-07    0       0      1      1      0      0      0      0      0
  9-Feb-07    1       2      1      0      0      0      0      0      2
  10-Feb-07   2       1      0      1      0      0      0      0      1
  11-Feb-07   1       0      2      1      1      1      2      1      2
  Totals:     6       3      4      3      1      1      2      1      5

Table 11: Software - Duplication Analysis

  Date        Duplicate Primary Posts - Yes   Duplicate Primary Posts - No
  6-Feb-07    0                               1
  7-Feb-07    0                               1
  8-Feb-07    0                               1
  9-Feb-07    2                               4
  10-Feb-07   1                               4
  11-Feb-07   3                               6

Table 12: Custom Computer - CAP Analysis (primary posts per day, by CAP score)

  Date        CAP 4   3.67   3.50   3.33   3.00   2.83   2.67   2.33   1.33   0.83
  27-Feb-07   2       0      0      0      0      0      0      1      0      0
  28-Feb-07   5       0      1      0      2      0      0      0      0      0
  1-Mar-07    1       3      0      0      0      0      0      0      0      0
  2-Mar-07    3       0      0      1      0      0      0      0      0      0
  3-Mar-07    4       1      0      0      0      0      0      0      0      0
  4-Mar-07    0       1      1      1      2      0      2      2      3      1
  Totals:     15      5      2      2      4      0      2      3      3      1

Table 13: Summary CAP Scores (mean CAP score per day of thread life)

  Day     Baseline      I/O           Software      PC Build      Total Mean
  Day 1   3.864545455   3.5           4             3.443333333   3.701969697
  Day 2   3.888888889   3.238571429   4             3.6875        3.703740079
  Day 3   3.762857143   3.8           3.165         3.7525        3.620089286
  Day 4   3.665         3.142857143   2.778333333   3.8325        3.354672619
  Day 5   4             0             3.134         3.934         2.767
  Day 6   3.556190476   0             2.361818182   2.588461538   2.126617549
  Day 7   3             0             0             0             0.75
  Total                                                           2.860584176

Table 14: Summary Totals

  Day     Mean CAP Score   Mean of Median CAP Scores   Total Primary Posts
  Day 1   3.701969697      3.875                       20
  Day 2   3.703740079      3.75                        25
  Day 3   3.620089286      3.6675                      17
  Day 4   3.354672619      3.4575                      20
  Day 5   2.767            3                           12
  Day 6   2.126617549      1.86                        44
  Day 7   0.75             0.75                        1
  Total   2.860584176      2.908571429                 139

APPENDIX B

Figure 1: Personal Introductions - CAP Scores
Figure 2: Baseline Thread - Duplication Analysis
Figure 3: I/O - Detailed Analysis
Figure 4: Software - Detailed Analysis
Figure 5: Custom Computer - Detailed Analysis
Figure 6: Submission Analysis
Figure 7: Engagement Analysis