Leveraging Academic Resources in the ABET Accreditation Process: A Case from California University of Pennsylvania

Gina Boff
boff@cup.edu

Gary DeLorenzo
delorenzo@cup.edu

Lisa Kovalchick
kovalchick@cup.edu

Paul Sible
sible@cup.edu

Mathematics and Computer Science Department
California University of Pennsylvania
California, PA 15419, USA

Abstract

The process of acquiring ABET accreditation for an academic program is a complex one. Educational objectives and program outcomes must be identified; then the teaching of the skills necessary to meet these educational objectives and program outcomes must be built into the coursework of the program. Finally, the degree to which students actually attain these educational objectives and program outcomes must be measured against a benchmark for each, so that decisions may be made and changes toward educational improvement implemented. In an attempt to automate portions of the accreditation process, an assessment system has been developed that streamlines the outcomes assessment process for ABET Computing Accreditation Commission accreditation. This paper briefly discusses the ABET accreditation process, other assessment systems that are currently available and the reasoning behind developing a new assessment system. Also discussed are the analysis, design and development of the assessment system, including the use of student involvement in the process. Finally, the system limitations and areas for future development are explored.

Keywords: assessment tool, accreditation tool, accreditation database, ABET accreditation

1. INTRODUCTION

It is not uncommon to hear Information Technology (IT) managers of major corporations joke about providing state-of-the-art information systems for their business counterparts while running their own business with paper and crayons. There is a great deal of truth in that jest. Most IT/Information Systems (IS) departments in major corporations are designed as cost centers, meaning that they provide IT solutions and charge different areas of the organization for those services (Hoffman, 1999). As such, the resources allocated for the design and implementation of systems to be used by the IT department itself are often non-existent. The paradigm carries over to academe virtually unchanged, and its effects are felt when trying to manage the accreditation effort of a computing program. The Computer Information Systems (CIS) faculty at California University of Pennsylvania (CUP), a state-system, liberal arts institution located in southwestern Pennsylvania, was faced with two opposing goals:

* To improve productivity, as evidenced by increasing class sizes and faculty workloads
* To improve the quality of the education being offered in their CIS program, the proof of which was to be the accreditation of the Bachelor of Science degree in CIS

In an effort to work toward both goals, the CIS faculty devised a way to utilize the resources at their disposal to assist in the automation of portions of the accreditation process. Their approach of involving student project work as a springboard for system development allowed them to enhance the student learning experience and receive assistance in building a system to track their accreditation efforts and progress, all while maintaining their current level of productivity in other areas.
After offering some background on the ABET accreditation criteria and process, this paper discusses the system need and objectives, briefly covers the use of student class work in the initial development process, details the overall system functionality and closes with areas for future growth and improvement.

2. BACKGROUND

Aimed at bringing a standard of rigor and caliber of learning to the academic experience, accreditation provides assurance that graduates meet certain minimum standards, qualifying them for professional practice and post-graduate education, and ensures that some uniformity in education is maintained (Challa, 2005). The fundamental process undertaken by institutions striving for ABET accreditation is shown in Figure 1, developed by Dr. Gloria Rogers, ABET assessment expert and now ABET executive for accreditation.

Figure 1: Rogers' (2004) Flow of Assessment for Continuous Improvement

CUP's CIS accreditation effort followed this same flow. First, educational objectives were defined that describe the professional skills and attributes that graduates of the program are expected to possess after graduation. These were then broken down into program outcomes, used to describe the skills and attributes that students should possess upon graduation from the program. The teaching of these skills was built into the coursework of the CIS program, and the degree to which students actually attain these skills is measured against a benchmark for each, so that decisions may be made and changes toward educational improvement implemented.

Program outcomes were, initially, institutionally defined. Today, however, the Computing Accreditation Commission (CAC) of ABET, Inc. is in the process of instituting a set list of program outcomes in its computing criteria. Approved in 2006 and piloted in the 2007-2008 accreditation cycle, these new outcomes will be mandated by the 2009-2010 accreditation cycle (ABET, 2007). What was once viewed by institutions of higher education as a seal of approval is becoming more of a mandate, as "programs are under increasing pressure from […] institutional review and legislative oversight to demonstrate both responsiveness to, and validity of, curricula in meeting the needs of their target professions" (Duff, 2004). Thus, in response, CAC of ABET revised its accreditation criteria for the computing disciplines and now has a stronger outcomes-based focus (Lidtke, Leone and Reichgelt, 2004). "ABET-CAC wants to accredit more programs and encourage innovation; thus, the new standards for computing disciplines contain statements of intent with greater focus on outcomes, assessment and continuous improvement" (Booth, 2006).
The continuous improvement component is paramount to the process and is detailed in a Continuous Improvement Plan (CIP) that each educational program seeking accreditation must devise. The CIP describes how the program intends to continually strive for the full achievement of both the educational objectives and program outcomes. Ultimately, proof that the improvement process is being carried out to the extent detailed in the CIP is the final indicator of whether the program should receive ABET accreditation.

Still in the process of collecting data in preparation for their first ABET-CAC accreditation visit, the faculty teaching in the CIS program at CUP quickly realized the rippling effect of a change such as the one being rolled out in ABET's new program outcomes criteria. Similar to many other universities, CUP's CIP was, and is, structured in layers (Konsky et al., 2006), where:

* Many course objectives may be related to many program outcomes
* Multiple measures with associated rubrics and corresponding benchmarks (performance indicators) are in place for each program outcome
* Many program outcomes may be related to many educational objectives
* Multiple measures with associated rubrics and corresponding benchmarks are in place for each educational objective

Structuring the CIP in layers provides the framework that makes assessment possible. Through a comprehensive curriculum design, the attainment of professional skills and attributes is tracked through performance indicators, also commonly called assessment methods. The achievement of the benchmarks set for these indicators implies that the general program outcome has been attained (Konsky et al., 2006). In addition, achieving a collection of the program outcomes implies the attainment of one or more educational objectives. This is verified in the post-graduate measuring process for educational objectives and is the point in the CIP where the loop is truly closed, a paramount element in ABET's requirements for ensuring program integrity and identifying potential areas for improvement (Poger, Schiaffino and Ricardo, 2005).

To say that "assessment is difficult and time consuming" is obvious (Booth, 2006). The initial process of designing a comprehensive curriculum, developing outcomes and objectives suitable to the discipline, mapping the curriculum's course objectives to program outcomes and performance indicators, and then mapping program outcomes to educational objectives and their measures required a good deal of research into best practices, documentation, cross-checking and many layers of approvals. The inherent complexity of managing such a system became evident as the CIS faculty were faced with replacing their 'similar, but not the same' program outcomes with the newly ABET-supplied outcomes. It is not just that "paper-based systems are complex and do not provide immediate feedback" (He and Brandt, 2007); research shows that there is a danger that the burdens of taking on an accreditation process will generate little in the way of meaningful results (Blandford and Hwang, 2003). These dangers became the catalyst for the CIS faculty to put the best practices of their discipline to work. It was decided that a formalized system needed to be developed to streamline the CIP surrounding outcomes assessment for ABET-CAC accreditation. The objectives for the system were as follows.

1. A centralized repository for all accreditation-related information
   a. Program Outcomes and Educational Objectives
      i. Measures for performance indicators
         1. Methods/Tools
         2. Benchmarks
         3. Rubrics
2. Relational structure
   a. Given the task of changing all institutionally-defined program outcomes to the ABET-supplied outcomes, this was paramount
      i. Enforcing referential integrity flagged the rippling effects of changes and alerted faculty to related information that also needed attention
3. A centralized repository for all course-related information
   a. Course names, numbers and descriptions
   b. Prerequisites
   c. Course objectives
4. Automation, in as much as is possible
   a. Rubrics
      i. Interfaces for faculty to input rubric scores
      ii. Automated tallies
      iii. Flags when scores fall outside an acceptable standard of error
   b. Standard reports
      i. To assist in the mapping of outcomes to objectives to assessment methods
      ii. For the assessment of the measures against the benchmarks
      iii. Others as determined
5. A friendly user interface
6. Accessibility from a shared network area that is remotely accessible
7. Pre-designed expandability to include planning and reviewing of outcomes/objectives as well as curriculum needs outside of accreditation, such as course rotation planning and student advisement

3. OTHER ASSESSMENT SYSTEMS

So as not to re-invent the wheel, research was conducted to determine whether a suitable assessment system existed that could be acquired for use at CUP. First, in terms of commercial systems, there are a multitude of test development utilities and survey development tools available through companies such as Assessment Systems Corporation (Assessment Systems, 2007), as well as electronic portfolio solutions and electronic report cards that may be purchased from companies such as Rediker Software (Rediker Software, 2007); however, nothing in line with a tool for assessment tracking and/or automation was found. Literature searches uncovered tools developed by other institutions; some of these include, but are not limited to, the following.

Clemson University in Clemson, South Carolina (Owen, Scales and Leonard, 1999) developed a system to assist in the tracking and mapping of course objectives to program outcomes and the measures used for each in their engineering programs. Their system also tracks the educational objectives to their respective measures and reports on the results; however, the relationship between program outcomes and educational objectives was not apparent, and reporting was limited to the educational objectives' actual results without comparison to benchmarks.

York College of Pennsylvania in York, Pennsylvania (Walcerz, 1999) developed a system that they call EnableOA. Based on the Principles of Good Practice for Assessing Student Learning set by the American Association for Higher Education, this system tracks program outcomes (referred to as educational outcomes by York) for the university's General Education and Mechanical Engineering programs; it did not address the ABET outcomes assessment CIP.

Iona College in New Rochelle, New York (Poger, Schiaffino and Ricardo, 2005) underwent a three-year development cycle with undergraduate and master's students to develop a system through their computer science capstone courses. The main focus of this system is the collection of student evaluations with relation to each assessment tool used in each course. It then reports whether students, in general, felt that they met the objectives related to each course. This clearly did not meet the CUP CIS faculty's needs.
Armstrong Atlantic State University in Savannah, Georgia (He and Brandt, 2007) developed a system called WEAS (Web-based Educational Assessment System) that is used in high school science courses to match teaching assessments to learning assessments. This was, again, outside the scope of the CUP CIS faculty's system objectives.

Curtin University of Technology in Perth, Western Australia (Konsky et al., 2006) is in the process of developing a system that is designed around a layered assessment process similar to ours, except that their mapping occurs from a learning unit (task) outcome to a course learning outcome and then to a graduate attribute (program outcome). They also assign the percentage of contribution that each has to the next, and this is the benchmark used in their assessment. This is in line with the Engineers Australia criteria, the accreditation being pursued. There is no consideration for post-graduate attributes (educational objectives).

Clayton State University in Morrow, Georgia (Booth, 2006) offers a template for a database developed by one of their IT faculty. As it turns out, the bare bones of the CUP CIS faculty's database design is very similar to theirs; however, CUP's evolved to be somewhat more complex due to the need to address the many-to-many (M:N) relationships that exist and to include the educational objectives, the actual scores and benchmarks for both the outcomes and objectives, and storage for faculty feedback.

While each of these systems offered some creative insights into ways in which the CIS faculty could realize the system objectives, none of them offered enough similarity to be adopted as a starting point, with the exception of Booth's database template. However, given the resource limitations faced, an alternative approach was adopted that proved just as beneficial. This is discussed in the following section.

4. ANALYSIS, DESIGN AND DEVELOPMENT

As previously stated, this initiative began with a CIS program faced with recent ABET changes to the CAC computing criteria, in an environment where productivity improvements left an extreme limitation on a variety of resources, not the least of which was faculty time and availability. As such, the faculty designed the effort to be completed as a typical corporate project, with some faculty as project leads, others as users and students in the classroom as the development team members. In this way, student project work could be used as a springboard for the system development while enhancing the student learning experience. In hindsight, this served the students well, as they were much more receptive and willing to put forth the extra effort toward truly superior work when they knew that their efforts were for a 'real' system to be used on a regular basis.

First, the entire CIP was presented to a student team in a Systems Analysis and Design course. This forced faculty to think through gray areas that had existed in the process and resulted in a fully-documented analysis of the existing manual process. Next, students were presented with the needs to be addressed by an automated system and what the system objectives should be. This allowed a first pass at a possible design solution, which was updated by faculty for the next step: a database design for the automated process.
Again, with some faculty as project leads and others as users, a student team in a Relational Database Design course was presented with a narrative model of a system design that included the faculty/user needs and the objectives of the system. This resulted in a well-constructed entity-relationship diagram (ERD) that served as a solid starting point for creating the tables for the system.

At this point, some platform decisions were made. With limited tools available at CUP and the security and remote-access constraints placed on faculty by central computing, options were few. There was a possibility of using Oracle 9i with Oracle Forms as the front end; however, due to university constraints with remote access, that path could not be pursued. As such, it was decided to use Microsoft Access as the system platform. This met all of the objectives in that it was relational, it could be accessed remotely and friendly user interfaces could be developed.

The next step could have been to hand the design off to a student team in an introductory database management system (DBMS) course to create the tables and populate them with data. Creating the tables and the forms to display and update these tables was not so much the issue as was the population of these tables. Thousands of records existed in disparate university sources. So, this part of the project was handed off to a Graduate Assistant, who created the tables and forms and meticulously entered the data, as directed by faculty. Ideally, the final step – the step that could be taken by other institutions to fully exploit this resource model – could have been handed off to a student team in a capstone experience, to build an integrated Web-based front end and the advanced querying and reporting capabilities that were desired. Instead, the CIS faculty pulled together to complete the system.

5. RESULTS

In the following sections, the overall system schema is introduced by way of an ERD and some narrative. Next, the initial user interface is discussed, along with the different paths that a user may take upon entering the system and some of the screens and functionality behind those paths. Finally, the real value of the system is displayed through the reporting capabilities that are currently in the system as well as those planned for future development.

6. SYSTEM OVERVIEW

The system is affectionately called "CISaccred" (pronounced "sigh-sacred"), which stands for CIS Accreditation. Inherent in any CIP is some sort of backbone process flow that is supported by numerous underlying data sources; CISaccred is no different. As shown in Figure A-1 in the Appendix, CUP's process begins with an educational need that was determined by its constituencies. This is what brought CUP to develop a major in CIS and to create a program containing the coursework necessary to achieve the educational goals that CUP's constituencies determined most critical in meeting their employer/employee needs, while ensuring that this same coursework aligned with ABET's program outcomes. Achievement in learning is equated to meeting a standard benchmark of performance on various classroom learning points and behaviors. Once the CIS program was instituted, assessment methods were incorporated into courses as indicators to show whether student learning was being achieved with respect to each program outcome.
Rubrics are used to evaluate each assessment method; the scores are tallied each year, and reports are drawn that compare the actual results to the benchmarks for each assessment method as it relates to each program outcome. The achievement of program outcomes is expected, to some degree, to ensure the successful attainment of the educational objectives, such that the graduates prove successful in the workplace or in graduate studies. Surveys that have been designed in a rubric format are administered to graduates. As with the program outcomes, there are predetermined benchmarks for each educational objective that equate to a level of achievement. The survey scores are tallied, and reports are drawn to compare the benchmarks to the actual scores related to each educational objective. In both cases, these final reports are reviewed by the CIS faculty to complete a preliminary analysis, all of which is taken back to the constituents for a final analysis and possible development of an action plan to institute change in the program or assessment process.

7. SUPPORTING DATA AND TABLES: THE ENTITY RELATIONSHIPS

As previously stated, inherent in any CIP is some sort of backbone process flow that is supported by numerous underlying data sources. The system was designed around the system needs depicted in the process flow, the business rules that govern them and traditional normalization techniques, in as much as made sense without degrading the functionality of the system. The data sources, or tables, may be classified into three categories as follows.

1. Master Data

Master data is relatively static data that is fundamental to the entire database. Master data tables include:

* Course
* Faculty
* Benchmark
* Major
* Constituency
* Assessment Methods
* Program Outcomes
* Program Objectives

2. Transactional Data

Transactional data is dynamic in nature, is created from an event, which in this case is the offering of a course, and is constrained by a time frame. Transactional data tables include:

* Course Offerings
* EthicsRubric
* SoftwareEngineeringPaperRubric
* TechnologyPaperRubric
* UserManualRubric
* SeniorProjectPresentationRubric
* SeniorProjectRubric

The purpose of the rubric tables is twofold. First, they assist the CIS faculty member currently teaching a course in which assessment methods have been embedded by providing access to the most up-to-date rubrics to be used in the course and a means to fill out those rubrics for students. Second, they eliminate the duplicate work of collecting completed paper rubrics and entering the scores into a spreadsheet to be tallied for evaluation.

3. Infrastructural Support Data

Due to the number of M:N relationships that exist in the process, bridge tables, also called composite entities or linking tables (Rob and Coronel, 2007), were created to resolve these relationships. These bridge tables hold all of the key combinations that occur between the two tables they relate. Referential integrity is enforced in as many relationships as possible so that when a new record is entered into a main table, the user is alerted that an entry must be made in a bridge table as well. Infrastructural support data tables include:

* Major to Courses
* P Outcomes to Assessment Methods
* P Outcomes to P Objectives
* P Objectives to Assessment Methods

Most of the supporting data for the CIP are not independently-functioning stores of data. As depicted in the CISaccred ERD (see Figure A-2 in the Appendix), much of the data is interconnected through dependencies that all stem from the relationships among the educational objectives, program outcomes and assessment methods.
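To make the bridge-table idea concrete, the following is a minimal sketch in Python with SQLite; the table and column names are simplified and hypothetical, not the actual CISaccred (Microsoft Access) schema. It shows how one M:N relationship, program outcomes to assessment methods, can be resolved through a linking table whose foreign keys enforce referential integrity.

import sqlite3

# A minimal sketch, not the actual CISaccred schema: two master tables and a
# bridge (linking) table that resolves their many-to-many relationship.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

conn.executescript("""
CREATE TABLE ProgramOutcome (
    outcome_id  INTEGER PRIMARY KEY,
    description TEXT NOT NULL,
    active      INTEGER NOT NULL DEFAULT 1   -- inactivate rather than delete
);
CREATE TABLE AssessmentMethod (
    method_id   INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    benchmark   REAL NOT NULL,               -- threshold for achievement
    active      INTEGER NOT NULL DEFAULT 1
);
-- Bridge table: each row is one (outcome, method) key combination.
CREATE TABLE OutcomeToMethod (
    outcome_id  INTEGER NOT NULL REFERENCES ProgramOutcome(outcome_id),
    method_id   INTEGER NOT NULL REFERENCES AssessmentMethod(method_id),
    PRIMARY KEY (outcome_id, method_id)
);
""")

conn.execute("INSERT INTO ProgramOutcome VALUES (1, 'Example outcome', 1)")
conn.execute("INSERT INTO AssessmentMethod VALUES (10, 'Ethics Rubric', 3.0, 1)")
conn.execute("INSERT INTO OutcomeToMethod VALUES (1, 10)")

# Referential integrity flags the ripple effect of a change: linking an
# outcome that does not exist in the master table is rejected.
try:
    conn.execute("INSERT INTO OutcomeToMethod VALUES (99, 10)")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)

CISaccred itself delivers the same kind of alert through relationships enforced within Microsoft Access and its maintenance forms rather than through code.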
8. INTERFACE

CISaccred opens with a switchboard that may lead the user into the two main functional areas of the system. Figure 2 shows the opening screen of CISaccred. The following section further explains the main functional areas of the system.

Figure 2: CISaccred Switchboard

9. FUNCTIONALITY

Functionality, as categorized by the initial user interface of CISaccred, is determined by user intent; that is, when a CIS faculty member enters the system, what is his or her goal or function? Each functional area is explained below.

Maintenance

To date, there are no automatic data feeds into CISaccred. All data had previously been maintained, sporadically, in a variety of forms by various parties. To have all related data in a single repository where it may be consistently maintained was one of the system goals. All data in CISaccred was manually entered and, for now, must be manually maintained. This includes the Master Data, Transactional Data and Infrastructural Support Data. An interface for the maintenance of each type of data has been created. Once a user enters the system and selects "Maintenance", the selection shown in Figure 3 is given. From here, records may be inserted and updated in the Master, Transactional and Infrastructural Support tables. Delete capability is provided under special circumstances; however, in most cases, an "active/inactive" field has been added to each table so that a record that is no longer used may be inactivated rather than deleted in order to maintain an audit trail and retain the capability to run historical reports. Figure 4 through Figure 6 depict typical maintenance screens for each type of data.

Figure 3: CISaccred Maintenance Options

Figure 4: CISaccred Course Offerings (Master Data) Maintenance Screen

Figure 5: CISaccred Ethics Rubric (Transactional Data) Maintenance Screen

It is worth noting that a CISaccred user must have enough knowledge of the CIS program, the accreditation process and the CISaccred system to perform all of the steps necessary in a maintenance activity. For example, a new course may be created to replace an existing one. The user must have enough knowledge of the CIS program and the system to know that this task requires them to:

1. Enter the new course information into the Course table
2. Inactivate the course that is being replaced in the Course table

This is not an automated process.

Figure 6: CISaccred Major to Courses (Infrastructural Support Data) Maintenance Screen
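As a sketch of how the two-step "replace a course" task above might eventually be guided or scripted, consider the following Python/SQLite fragment. It uses the same simplified, hypothetical schema style as the earlier sketch (a pared-down Course table and made-up course identifiers) and illustrates a possible enhancement; the current Access-based system performs these steps manually through its forms.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE Course (
    course_id  TEXT PRIMARY KEY,              -- hypothetical course number
    title      TEXT NOT NULL,
    active     INTEGER NOT NULL DEFAULT 1     -- 1 = active, 0 = inactivated
)""")
conn.execute("INSERT INTO Course VALUES ('CIS 101', 'Example Course I', 1)")

def replace_course(conn, old_id, new_id, new_title):
    """Enter the replacement course and inactivate the old one in a single
    guided step, keeping the old record for the audit trail and history."""
    with conn:  # both statements commit together, or neither does
        conn.execute("INSERT INTO Course (course_id, title) VALUES (?, ?)",
                     (new_id, new_title))
        conn.execute("UPDATE Course SET active = 0 WHERE course_id = ?",
                     (old_id,))

replace_course(conn, 'CIS 101', 'CIS 102', 'Example Course II')
for row in conn.execute("SELECT course_id, title, active FROM Course ORDER BY course_id"):
    print(row)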
Reports

Two types of reports may be generated through CISaccred: descriptive reports and analytical reports. A brief description of each, along with an example, follows.

Descriptive reports: These are used to help describe the system itself and may help one understand how different parts of the system relate to one another. For example, if a faculty member is teaching a course in which assessment is to be carried out, he or she may generate the Courses to Assessment Methods report in order to identify the assessment activities that must be conducted during the course. The faculty member may then use this report to help plan an outline of semester activities that may be given to the students. Figure 7 depicts an example of this report.

Figure 7: CISaccred Courses to Assessment Methods Report

The descriptive reports that are currently available include: Program Objectives to Program Outcomes, Courses to Assessment Methods, Program Objectives to Assessment Methods and Program Outcomes to Assessment Methods.

Analytical reports: This component of CISaccred is where the true beauty and power of the system lie. Rubric scores, and the criteria on them, are viewed and evaluated in a number of different ways with respect to each program outcome; the details of these views are institutionally chosen and are not specific to all CIS programs. What is key is that for each different use of rubric data, benchmarks have been specified as thresholds for achievement or failure, and these benchmarks are stored in a maintainable table. The scores collected through rubrics are tallied in a number of different ways, in accordance with the specific view from which one wishes to assess the outcome, and the results appear on reports alongside their associated benchmarks. In this way, no extra effort is required to gather and manipulate data at evaluation time, other than to run and print these reports. The power and ease that this lends to decision making concerning plans for continuous improvement cannot be stressed enough. The Senior Project Rubric Averages To Benchmarks Report depicted in Figure A-3 of the Appendix is an example of one such report. It shows the course semester for which data is displayed, the benchmarks that have been specified as thresholds for achievement or failure and the actual student averages for each item contained in the Senior Project Rubric for that particular semester. Once such a report is generated, the CIS faculty may evaluate whether or not their benchmarks have been met and may take any actions necessary based on the data.
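To illustrate the kind of tally that underlies such a report, the following minimal Python sketch averages rubric item scores for one semester and flags each item against its stored benchmark. The rubric items, scores and benchmark values are hypothetical placeholders, not actual CISaccred data.

from statistics import mean

# Hypothetical rubric scores for one course semester, keyed by rubric item;
# in CISaccred these would be drawn from a rubric table such as SeniorProjectRubric.
scores = {
    "Problem definition":    [3.5, 4.0, 3.0, 3.5],
    "Technical design":      [2.5, 3.0, 2.0, 2.5],
    "Written communication": [3.0, 3.5, 4.0, 3.0],
}
# Benchmarks stored in a maintainable table; each is a threshold for achievement.
benchmarks = {
    "Problem definition":    3.0,
    "Technical design":      3.0,
    "Written communication": 3.0,
}

print(f"{'Rubric item':<24}{'Benchmark':>10}{'Average':>10}  Result")
for item, item_scores in scores.items():
    avg = mean(item_scores)
    result = "met" if avg >= benchmarks[item] else "NOT met"
    print(f"{item:<24}{benchmarks[item]:>10.2f}{avg:>10.2f}  {result}")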
10. CONCLUSION

The CISaccred accreditation tracking tool is proving to be useful to the CIS faculty at CUP. However, as with all systems, limitations and areas for improvement have been identified and are discussed here. To date, there are no automatic data feeds into CISaccred. With the help of a Graduate Assistant, all initial data has been manually entered. Currently, if changes need to be made to any of the existing data, the tables in need of changes must be accessed and manually updated. For example, the Course Offerings table contains one record for each section of every course that is offered during a particular semester. Course offerings initially come to CUP's Mathematics and Computer Science Department from the Dean's office in the form of a distributed Microsoft Excel spreadsheet. Today, a designated person must manually key in the information contained in the spreadsheet. A significant improvement would be a means to automatically insert the spreadsheet data into the Course Offerings table, along the lines of the sketch below.
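A minimal Python sketch of such an import follows, assuming the Dean's office spreadsheet were first exported to a CSV file; the file name and column headings used here are hypothetical and would need to be matched to the actual spreadsheet layout (and to the Access Course Offerings table) before use.

import csv
import sqlite3

# Sketch only: a stand-in for the Course Offerings table, one record per
# section of every course offered in a semester.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE CourseOffering (
    semester   TEXT NOT NULL,
    course_id  TEXT NOT NULL,
    section    TEXT NOT NULL,
    instructor TEXT,
    PRIMARY KEY (semester, course_id, section)
)""")

# Hypothetical CSV export of the Dean's office spreadsheet.
with open("course_offerings.csv", newline="") as f:
    rows = [(r["Semester"], r["Course"], r["Section"], r["Instructor"])
            for r in csv.DictReader(f)]

with conn:  # load all offerings in a single transaction
    conn.executemany(
        "INSERT OR IGNORE INTO CourseOffering VALUES (?, ?, ?, ?)", rows)

print(f"Loaded {len(rows)} course offering records.")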
Another limitation of the current system involves the amount of knowledge that the user of CISaccred must possess in order to perform a maintenance activity. Currently, a user wanting to perform maintenance must have an understanding of the CIS program, the accreditation process and the CISaccred system (as an example, refer to the steps required to add a new course to the system, located in the Maintenance subsection of the Functionality section of this paper). While the process cannot be totally automated, a significant improvement would be to prompt the user through the necessary, consecutive steps.

Along with addressing the system limitations, there are several planned areas for future enhancement and growth. One such area is to store all information associated with the CIS program in CISaccred. This would alleviate the faculty burdens of maintaining multiple sources of data for curriculum development and course rotations by semester, and of having to access multiple data sources simultaneously just to advise a student. For example, every semester the CIS faculty members must advise all of the students who are majoring in CIS. Currently, program information is contained in a Microsoft Excel spreadsheet along with course rotations, current course offerings are located in CISaccred, course prerequisites are accessed through CUP's intranet and, finally, student records are contained in the registration system. When advising a student, a faculty member must bring up all of these various sources of information simultaneously in order to offer the student proper advisement. With the exception of the registration system, the goal is to have all of these items available through CISaccred.

In summation, CISaccred is proving to be a successful tool in assisting with the CIS accreditation process at CUP. Moreover, expanding CISaccred beyond the scope of accreditation into a centralized repository and tool to assist faculty with curriculum development and advisement offers far-reaching added value for both faculty goals and the overall university productivity goals.

11. REFERENCES

ABET, Inc. (2007) Criteria for Accrediting Computing Programs. Retrieved April 21, 2007. Available: http://www.abet.org.

Assessment Systems (2007) Assessment related software. Retrieved May 15, 2007. Available: http://www.assess.com.

Blandford, D.K. and D.J. Hwang (2003) "Five Easy but Effective Assessment Methods." Proceedings of SIGCSE'03, February 19-23, pp. 41-44.

Booth, L. (2006) "A Database to Promote Continuous Program Improvement." Proceedings of CITC'07, October 19-21, pp. 83-88.

Challa, C.D. (2005) "The Accreditation Process for IS Programs in Business Schools." Journal of Information Systems Education, 16(2), 207-216.

Duff, J.M. (2004) "Outcomes Assessment across Multiple Accreditation Agencies." Journal of Industrial Technology, 20(4), 2-7.

He, L. and P. Brandt (2007) "WEAS: A Web-based Educational Assessment System." Proceedings of ACMSE'07, March 23-24, pp. 126-131.

Hoffman, T. (1999) "Profit Centers vs. Cost Centers." Computerworld, 33(31), 47.

Konsky, B., A. Loh, M. Robey, S. Gribble, J. Ivins and D. Cooper (2006) "The Benefit of Information Technology in Managing Outcomes Focused Curriculum Development Across Related Degree Programs." Proceedings of ACE'06, January 16-19, pp. 235-242.

Lidtke, D.K., J. Leone and H. Reichgelt (2004) "Computing Accreditation Commission Moves to General and Program Specific Criteria." Proceedings of the 34th ASEE/IEEE Frontiers in Education Conference, October 20-23.

Owen, C., K. Scales and M. Leonard (1999) "Preparing for Program Accreditation Review under ABET Engineering Criteria 2000: Creating a Database of Outcomes and Outcome Indicators for a Variety of Engineering Programs." Journal of Engineering Education, 88(3), 255-259.

Poger, S., R. Schiaffino and C. Ricardo (2005) "A Software Development Project: A Student-Written Assessment System." Journal of Computing Sciences, 20(5), 229-238.

Rediker Software, Inc. (2007) Assessment related software. Retrieved May 15, 2007. Available: http://www.rediker.com.

Rob, P. and C. Coronel (2007) Database Systems: Design, Implementation, and Management. Boston: Thomson Course Technology.

Rogers, G. (2004) Portfolios: The Tool that Rocks. Retrieved July 17, 2007. Available: http://www.abet.org/Linked%20Documents-UPDATE/Assessment/Portfolios%20Rock_handouts.pdf.

Walcerz, D.B. (1999) "EnableOA: A Software-Driven Outcomes Assessment Process Consistent with the Principles of Good Practice for Assessing Student Learning." Proceedings of the ASEE Mid-Atlantic Conference, April 17, pp. 30-39.

APPENDIX

Figure A-1: CISaccred Process Flow

Figure A-2: CISaccred ERD

Figure A-3: CISaccred Senior Project Rubric Averages To Benchmarks Report