CAEP Contact

Education Unit Head:
Dr. Janet Smith

CAEP Coordinator:
Dr. Jean Dockers

Assessment Coordinator:
Steve Brown

Contact Person:
Amanda Hill

Phone: (620) 235-4489
Fax: (620) 235-4421


    110 Hughes Hall
    Pittsburg State University
    1701 South Broadway
    Pittsburg, KS 66762-7564

Standard 2: Assessment System and Unit Evaluation

The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs.

2.1 How does the unit use its assessment system to improve the performance of candidates and the unit and its programs?

The College of Education (COE) Assessment System was developed to align with the Unit's Conceptual Framework (CF) and with state and professional standards for the specific purpose of data collection, analysis, and evaluation that leads to program improvement and candidate success in teaching. The COE Assessment System is structured so that the professional community regularly examines the validity and utility of the data produced through assessments and makes modifications to keep abreast of changes in assessment technology and in professional standards.

2a. Assessment System
The assessment system for Initial programs includes information to make decisions based on multiple assessments at four checkpoints (2.1a): Admission to Teacher Education, Admission to the Professional Semester, Completion of the Professional Semester and the teaching program, and Application for Licensure. A final source of program assessment data is feedback from first- and third-year teachers and their building principals. Data from all checkpoints are collected by the Office of Teacher Education, updated as requirements are satisfied, and formally analyzed for Admission to Teacher Education, Admission to the Professional Semester, Completion of the Professional Semester, and Licensure to Teach. If a candidate struggles at any checkpoint, the candidate, advisor, and department chair are notified so a plan of action can be developed to remediate the candidate or to assist in selecting another field of study.

The Advanced program assessment system is also organized around four checkpoints (2.1b): Admission to the Program, Approval of Candidacy, Completion of the Program, and Application for Licensure. Graduate programs survey their program completers and their employers to gather feedback for program assessment purposes. Unlike Initial programs, Advanced programs manage and direct the collection of assessment data for their own departments and programs. Requirements for admission to the Graduate School and to Candidacy are shared among the programs, but the completion requirements (Checkpoint #3) vary from program to program. Data from all checkpoints are used by program faculty members to make curricular and program modifications.

The COE continually reviews all procedures and assessments in order to eliminate bias in assessments and to establish fairness, accuracy, and consistency in procedures and operations. Confidentiality of candidate information and performance results is a primary concern in collecting, assessing, and summarizing reports for Unit and program review, and in providing feedback to the candidate. Standard procedures ensure that fairness, accuracy, consistency, and the elimination of bias are always practiced. (2.3a) Possibly the most effective safeguard against bias across COE programs is the close relationship between candidates, instructors, and advisors; these positive relationships result in a free flow of feedback.

2b. Data Collection, Analysis, and Evaluation
Through a collaborative evaluation system extending into local PK-12 schools, the Unit maintains and updates data derived from its assessment system. These data are used to evaluate candidates' qualifications for admission to the programs, as well as their performance during the program and following graduation. Decisions about candidate performance are based on multiple assessments initiated at multiple points before program completion. These assessments are evaluated to ensure that, from the initial recommendation for admission to teacher education through the employment of candidates in their respective fields, successful teaching and learning for all students remains the primary focus. The multiple sources of data show a strong relationship between performance assessments and candidate success.

The Unit has a system to effectively maintain records of formal candidate complaints. The formal process for candidate complaints follows the PSU grievance policy. When a complaint is filed, the candidate is advised of the procedure for resolving it: first, a meeting with the faculty member or individual against whom the complaint is directed; if the complaint is not resolved, the candidate next meets with the department chair. If still unresolved, the grievance moves to the College of Education Dean and then to the Provost/Vice President for Academic Affairs. For complaints directly related to admission or retention issues, the candidate first files a petition with the Committee for Admission to and Retention in Teacher Education (CARTE). If the CARTE decision is not satisfactory to the candidate, the next step is to see the Dean. (2.6) Files and complaint information are available in the Office of Teacher Education for review. (2.7)

All data are regularly and systematically compiled, aggregated, summarized, analyzed, and reported to all parties involved in COE professional education programs and to PK-12 schools that employ our graduates. It is common practice to disaggregate data according to the type of program delivery. For example, since the Master of Arts in Teaching (MAT) began with one program on campus and one at the KC Metro Center, assessment results from each program have been disaggregated, analyzed, and reported to ensure that programs in all locations are of the same quality and that candidates perform equally well in all programs. This process includes SSLS programs in Leadership and Special Education, which are located both on campus and off campus. MAT program candidate test results are also compared to the performance of teacher candidates in the traditional teacher education program. Findings from the comparison of MAT and traditional program candidates were instrumental in securing campus-wide support for the MAT Restricted Licensure program. Initially, Arts and Sciences programs questioned the implementation of the MAT program; once they saw that MAT candidates performed as well as or better than our traditional candidates on the Praxis II Content Tests, they were supportive of the program. (2.5b)
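The aggregation and disaggregation described above can be illustrated with a small sketch. The records, program names, and scores below are hypothetical placeholders, not actual PSU assessment data; the sketch only shows the general idea of computing an overall mean and then breaking it out by delivery site or program.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical candidate records: (program, location, content-test score).
# These values are illustrative only.
records = [
    ("MAT", "Campus", 178), ("MAT", "KC Metro", 181),
    ("MAT", "Campus", 172), ("MAT", "KC Metro", 175),
    ("BSE", "Campus", 176), ("BSE", "Campus", 170),
]

def aggregate(rows):
    """Overall mean score across all candidates."""
    return mean(score for _, _, score in rows)

def disaggregate(rows, key_index):
    """Mean score grouped by one field (0 = program, 1 = location)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key_index]].append(row[2])
    return {group: mean(scores) for group, scores in groups.items()}

overall = aggregate(records)              # unit-wide mean
by_location = disaggregate(records, 1)    # on-campus vs. KC Metro
by_program = disaggregate(records, 0)     # MAT vs. traditional BSE
```

Comparing `by_location` or `by_program` group means against the overall mean is the kind of check the narrative describes: confirming that candidates at every site, and in every delivery model, perform comparably.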

2c. Use of Data for Program Improvement
The Unit's assessment system is comprehensive in that it tracks data on program quality, unit operations, and candidate performance at each stage of the programs and into the first three years of teaching for both initial and advanced program completers. When assessing candidate performance, particular attention is paid to the values, commitments, and professional attitudes that influence candidate behavior. The CF Knowledge Base for both initial and advanced programs includes indicators related to diversity and dispositions. Ratings from evaluations completed by cooperating teachers, practicum/internship mentors, university and academic supervisors, and from first and third year program completers and their principals provide systematically collected data from all candidate field experiences and into the first three years of professional practice.

Since 2003, an evaluation system developed for our Initial program, dataStream, has been used for collecting and storing knowledge base data. Each university supervisor had the program loaded onto his or her laptop computer so that student teacher observations were completed electronically and feedback to the candidate was immediate. For example, at each observation the supervisor identified knowledge base indicators for commendation and targeted other indicators for improvement. This information was then shared with the candidate and cooperating teacher immediately after the teaching session. In addition, supervisors completed the final evaluation document for each student teacher through the dataStream program. Data for each of the knowledge base indicators were downloaded into an Excel spreadsheet, and the results were analyzed according to how well the candidates performed on the six sections of the knowledge base, on diversity indicators, on disposition indicators, and on an overall rating. This was a valuable tool for program improvement and for identifying the strengths and challenges of each candidate in becoming a Competent, Committed, Caring Professional. It is a prime example of how faculty share assessment information with candidates to help them grow and be successful in the classroom.

At the advanced level, candidates receive feedback on their progress throughout their program through academic assignments and Praxis II exams. Candidates are assessed on formal examinations, presentations, case studies, candidate-created materials, and projects that involve working with individual students to demonstrate skills in assessment, classroom management, clinical management, and teaching. They complete course evaluations each semester, and during the exit interview they are asked about the strengths and weaknesses of each program.

For several years, the Unit has used multiple evaluations to assess candidate performance; changes have been made based on the analysis of data (2.8), and candidates and faculty have used the analysis to make changes that improve candidate success. However, a more user-friendly process for summarizing and sharing results is needed. (2.4a, CF.1c)


2.2.1 Describe work undertaken to move to the Target Level

Since our last KSDE/NCATE visit in 2004, several initiatives have been addressed. We believe that the changes listed below will strengthen all programs and ensure that candidates become Competent, Committed, Caring Professionals.

Development and Implementation of the GUS Electronic Tracking System (GETS)
One of the most timesaving and exciting initiatives we have started, and will soon have fully implemented, is the new GETS program. Since 2003 we had used a program developed with FileMaker Pro (dataStream) to store all initial program candidate information, placement sites, names of cooperating teachers and school contact people, form letters, and the initial program knowledge base used by university supervisors for observations and evaluations throughout the professional semester. To analyze data from the knowledge base, we had to download the files from each supervisor's computer to a flash drive and then transfer them to an office computer to be sorted in a spreadsheet. The process was tedious, and data were easily lost or stored inaccurately. Initially this system served a purpose, but it soon became obsolete, and we began brainstorming how to secure a system that would provide accurate data and save secretarial time in inputting candidate information and storing various assessment data. The President of the university requested that ITS assign a programmer to work with the Director of Teacher Education and the Office of Teacher Education Administrative Specialist to develop the system we needed. The original dataStream gave us a clear picture of what we wanted and how we wanted it to work.

The GETS system was first implemented for the Fall 2009 semester. Not all sections were complete at that time, but we were able to identify problems and correct them immediately. More functionality was added during the fall semester, so that for the Spring 2010 semester, university supervisors have used the system for observations and evaluations, cooperating teachers now complete their evaluations using the system, and demographic information is available, in tables, for all BSE and BME majors and advanced program candidates. Sections of the program that are set up in a report format include the following:

Admission Summary Report (Checkpoint #1)
Checkpoint 2 Summary
Undergraduate Completion Summary
Admittance Checkpoints
Professional Semester Checkpoints
Education Student Checkpoints
Incoming Freshman Average ACT

Examples of the GETS used for evaluating the Initial CF Knowledge Base can be viewed in Exhibits 3.6a, 3.6b, 3.6c, and 3.6d. While the GETS is currently used only for Initial programs, a programmer from ITS is working to set up the same type of reporting system along with the Advanced CF Knowledge Base evaluation system. Data needed for the Advanced electronic tracking system are readily available because the Graduate Office has started building a system for graduate programs campus wide. The real benefit of these systems is that they are tailored to the COE Conceptual Framework and knowledge bases, and reports have been developed that meet our program and candidate needs.

Assessment System Handbook (CF.1c)

Attempts had been made to put the proposed Assessment System into action starting in 2005. Although we continued to collect, disaggregate, aggregate, analyze, and report the results to the professional community and candidates, much of the work had to be done manually and did not allow for timely dissemination of assessment results.

During the past two years, the Assessment System has regained life, and the technology necessary for collecting and analyzing data and reporting the results is close to full implementation. The transition from dataStream to GETS for maintaining Initial program information and generating reports has enabled the unit to provide assessment results to candidates more quickly. Using GETS, data can be aggregated and disaggregated much more easily because they do not have to be manipulated manually. In addition, GETS increases accuracy because it eliminates the step of manually entering data and running various Excel functions.

The Assessment System Handbook (CF.1c) includes the information the professional community and candidates need in order to understand the system. The Handbook includes the revised CF Knowledge Base for initial programs and the recently adopted CF Knowledge Base for advanced programs. Unit assessments are identified for initial and advanced programs at all transition points. This section includes the assessments at each transition point, the data to be collected, the individuals responsible for collecting data, and the criteria for meeting requirements at each transition point. The Handbook also includes revised procedures and a timeline for collecting, aggregating, disaggregating, analyzing, and reporting data. Revisions were the direct result of collaboration among program faculty, candidates, program completers, and other professionals. Also included is information for ensuring that assessments of candidate performance, program quality, and unit operations are consistent, fair, accurate, and free of bias.

In recent years, the collection, aggregation and disaggregation, analysis, and sharing of assessment data have varied across the unit due to differences in programs, transitions of personnel, technical issues, and other discrepancies within the system. While the basic tenets of the system have been carried out, variations in timelines and methods of data collection have prevented the assessment system from being fully implemented as intended. Implementation of improved information technologies, increased technical support from the University, acceptance of the system by faculty, and revision of the procedures and timelines will allow full implementation. 2009-2010 will be the first year that all initial and advanced programs fully implement the assessment system, including the collection of data for all key unit and program assessments; the aggregation, disaggregation, and analysis of data; and the completion of annual program and unit reports.

Advanced Program Knowledge Base Reflects the Conceptual Framework
In 2008-2009 the Graduate Knowledge Base Committee met to review and revise the CF Knowledge Base. The Advanced programs had been using the former initial knowledge base as well as assessments for each individual program, which made unit-wide analysis of data problematic. PSU administrators and faculty, PK-12 teachers and administrators, and PSU candidates reviewed research, program data, current practices, and district needs. They developed a new Advanced Knowledge Base to reflect the CF vision of preparing Competent, Committed, Caring Professionals. The knowledge base comprises seven categories: Professionalism, Communication, Leadership, Instruction and Assessment, Diversity, Technology, and Research. Beginning Spring 2009, each program used the 38 indicators within these categories to assess the achievement of its candidates and the effectiveness of its program. Results from the common knowledge base will allow the unit to assess the effectiveness of its programs and use the data to plan for continuous improvement.

Reviewed and Revised the Initial Program Knowledge Base
The Undergraduate Knowledge Base Committee, composed of representatives from PSU, PK-12 schools, and candidates, reviewed the knowledge base program data and found the system to be an effective measure of candidate achievement and growth. While retaining the same system, the committee also studied educational research, current practices, and the needs of area schools to evaluate specific indicators. After reviewing the original 68 indicators of the Initial Knowledge Base, members revised some indicators, reducing the number to 60. Indicators that assess diversity, dispositions, and technology were identified, and the original categories (Professional Characteristics, Relationships with Students, Instructional Planning, Instruction, Classroom Management, and Evaluation) were retained.

The Initial Knowledge Base is a major strength of the assessment system. Indicators track the development of candidates as Competent, Committed, Caring Professionals from Explorations in Education, the beginning education course, through other field experiences, and throughout the professional semester. Data are collected from outside sources such as cooperating teachers as well as from university supervisors and academic supervisors. Such data provide powerful information for evaluating the candidate, the classes, and the program. Data from all assessments are entered directly into GETS for immediate access by the candidate.


2.2.2 Discuss plans for continuing to improve

Implement and Analyze Assessment System
An Assessment System Evaluation Survey will provide data on the effectiveness of the assessment system. The survey will be initiated by the COE Assessment Committee each spring and sent to all unit faculty. The COE Assessment Committee will tabulate the data and provide a summary report, identifying target areas for improvement of the Assessment System each year.

LiveText Implementation
Starting with the Fall 2010 semester, LiveText will be populated with candidates enrolled in Explorations in Education, the introductory teacher education course at the Initial level. The implementation of LiveText will enable unit faculty to administer assessments; collect, aggregate, and disaggregate data; and store data for ongoing evaluation. Although a one- to two-year transition is expected, LiveText will enhance procedures for the collection, analysis, and storage of assessment data. In addition, department chairs and faculty can review their rubrics and data tables at any time to consider program curriculum changes or for the annual review of assessments. The advanced programs will begin implementation as new cohorts start.

Graduate Programs Advisory Committee
Another plan for continued improvement is the creation of a Graduate Programs Advisory Committee (GPAC) to be included as a part of the advanced programs assessment system. The GPAC would assess data from key unit assessments for advanced programs prior to the annual assessment review by the COE Assessment Committee. The GPAC may also serve a role similar to that of the CARTE such as considering issues in regard to admission or retention in advanced programs.

Training for Using GETS and for Inter-Rater Reliability
Maintaining a quality program requires continuous training of university and academic supervisors on using all strands of the GETS, as well as more in-depth training to ensure a high level of inter-rater reliability when scoring rubrics for the Teacher Work Sample and various program assessments. We provide GETS training as part of our semester preparation for the student teaching experience, but we do need to bring in an expert on inter-rater reliability to train Unit faculty in scoring rubrics. This goal has been discussed and will be implemented in the spring and summer semesters of 2011.

Share Reports with Professional Community
Sharing annual summary reports with faculty, candidates, and other professionals will be instrumental in fully implementing the assessment system. Open discussions and honest evaluations of all programs are key to our success. The major means by which we will continue to strengthen both initial and advanced programs is to ensure that individuals in leadership roles emphasize the importance of the entire assessment process to keeping programs strong and preparing candidates to become Competent, Committed, Caring Professional Educators.


Standard 2 Exhibits

Exhibit Number Exhibit Name Format of Exhibit
2.1a Unit Assessment System - Initial Programs
2.1b Unit Assessment System - Advanced Programs Word
2.2 Basic Skills Requirement Mean Ratings Word
2.2 (Updated) Basic Skills Requirement Mean Ratings (Updated) Word
2.3a Procedures for Ensuring Fairness, Accuracy, Consistency and Freedom of Bias Word
2.3b Teacher Work Sample Curriculum Map (Elementary)
2.3c Teacher Work Sample Curriculum Map (Secondary/PK-12) Word
2.4a Procedures for Data Collection and Analysis Word
2.4b PLT Mean Scores for Initial Programs Word
2.4b (Updated) PLT Mean Scores for Initial Programs (Updated) Word
2.4c PLT Pass Rates for Initial Programs Word
2.4c (Updated) PLT Pass Rates for Initial Programs (Updated) Word
2.4d PLT/Content Candidate Summary Excel
2.4d (Updated) PLT/Content Candidate Summary (Updated) Excel
2.4e Program Completers Key Assessments Three Year Summary
2.4e (Updated) Program Completers Key Assessments Three Year Summary (Updated) Excel
2.4f Content Exams Means and Pass Rates for Initial Programs Word
2.4f (Updated) Content Exams Means and Pass Rates for Initial Programs (Updated) Word
2.4g Advanced Program Content Mean Scores
2.4g (Updated) Advanced Program Content Mean Scores (Updated)
2.5a Comparison BSE vs. MAT Knowledge Base Ratings
2.5a (Updated) Comparison BSE vs. MAT Knowledge Base Ratings Excel
2.5b Comparison BSE vs. MAT PLT-Content Mean Scores
2.5b (Updated) Comparison BSE vs. MAT PLT-Content Mean Scores Word
2.5c Comparison Advanced Programs On-Campus vs. Off-Campus
2.5c (Updated) Comparison Advanced Programs On-Campus vs. Off-Campus (Updated)
2.6 Student Complaint Process
2.7 Student Complaint Files
2.8 Examples of Changes Resulting from Analysis of Assessment Data