CAEP Contact

Education Unit Head:
Dr. Howard Smith

CAEP Coordinator:
Dr. Jean Dockers

Assessment Coordinator:
Steve Brown

Contact Person:
Amanda Hill

Phone: (620) 235-4489
Fax: (620) 235-4421

Address

    110 Hughes Hall
    Pittsburg State University
    1701 South Broadway
    Pittsburg, KS 66762-7564

STANDARD 2: Assessment System and Unit Evaluation

The unit has an assessment system that collects and analyzes data on applicant qualifications, candidate and graduate performance, and unit operations to evaluate and improve the performance of candidates, the unit, and its programs.

2.1 How does the unit use its assessment system to improve candidate performance, program quality and unit operations?

The College of Education (COE) Assessment System (see assessment system description document) was developed to align with the Unit’s Conceptual Framework (CF) and with state and professional standards for the specific purpose of data collection, analysis, and evaluation that leads to program improvement and candidate success. The Unit assessment system monitors candidates at strategic points along the path to initial licensure, endorsement, or an advanced degree. Both initial and advanced program candidates transition through four checkpoints as they move from program entry to program completion.

Initial Programs

Assessment of initial programs is based on multiple assessments at four different checkpoints (2.1a): admission to teacher education, admission to the professional semester, completion of the professional semester and the teaching program, and application for licensure. A final source of data for program assessment is follow-up surveys of first- and third-year completers and their employers. Data from all checkpoints are collected by the OTE, updated as requirements are satisfied, and analyzed to check that each candidate is making adequate progress at each checkpoint. If a candidate struggles at any checkpoint, the candidate, advisor, and department chair are notified so that a plan of action can be developed. Data from all checkpoints are used by the unit, coordinating committees, advisory councils, and program faculty members to make curricular and program modifications (see committee meeting minutes).
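
The checkpoint monitoring described above can be pictured as a simple flagging routine over per-candidate requirement records. The sketch below is purely illustrative, assuming a flat record of requirements per candidate; the field names, requirements, and notification step are hypothetical, not the OTE's actual system.

```python
# Minimal sketch of checkpoint monitoring as described above.
# Field names, requirements, and thresholds are illustrative only,
# not the OTE's actual data schema.

from dataclasses import dataclass, field

@dataclass
class CandidateRecord:
    name: str
    checkpoint: int                                        # 1-4, admission through licensure
    requirements_met: dict = field(default_factory=dict)   # requirement -> satisfied?

def flag_struggling(candidates):
    """Return candidates with unmet requirements at their current checkpoint,
    so the candidate, advisor, and department chair can be notified."""
    return [c for c in candidates if not all(c.requirements_met.values())]

records = [
    CandidateRecord("A. Candidate", 2, {"GPA >= 2.75": True, "Praxis Core": False}),
    CandidateRecord("B. Candidate", 2, {"GPA >= 2.75": True, "Praxis Core": True}),
]
for c in flag_struggling(records):
    print(f"Notify advisor and chair: {c.name} at checkpoint {c.checkpoint}")
```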

Throughout the program, candidates are rated by clinical faculty in the field on their success during field experiences, using the indicators from the knowledge base. Evaluation instruments for early field experiences have been revised and aligned with the knowledge base since the last NCATE visit. During the first experience (Explorations in Education Field Experience Evaluation), candidates are rated by the mentor teacher on the first two categories of the Knowledge Base: Professional Characteristics and Relationships with Students. This rating corresponds with checkpoint one and is another data point for admission. As candidates move through the program, other categories are assessed (Clinical Experience Field Experience Evaluation and Overview Field Experience Evaluation) in an effort to measure candidates’ professional growth. The same indicators are then used to assess candidates during the professional semester.

For the professional semester, assessment is completed using an online version of the Knowledge Base. At each observation, the supervisor identifies knowledge base indicators on which the candidate is succeeding as well as indicators targeted for improvement. This information is shared with the candidate and cooperating teacher directly after the teaching session, allowing the candidate and mentor teacher to immediately target problem areas for improvement. At the Unit level, the data received from the knowledge base ratings are used for continuous improvement efforts. For example, in spring 2012, coordinating committees and program areas reviewed knowledge base ratings (2.4.a) and Teacher Work Sample ratings (2.4.h and 2.4.i) and discovered that our candidates struggled with differentiating instruction (see committee meeting minutes). In an effort to improve our candidates’ performance in this area, eight faculty members are attending professional development on this topic in September 2012. In November and December 2012, this faculty group will provide training for the remaining faculty. This training will serve as a springboard for translating the strategies into curriculum for candidates in methods courses and for modeling of those strategies by faculty. Program areas will map the current curriculum to find where new strategies can be added, and a curriculum audit will be completed to ensure the intentional addition of strategies to the program.
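
A unit-level review like the spring 2012 one described above amounts to aggregating indicator ratings and looking for the weak spots. The following is a minimal sketch of that kind of analysis, assuming the ratings are exported as a flat table; the column names and the 1-4 scale are illustrative, not the unit's actual export format.

```python
# Hypothetical sketch: surfacing the lowest-rated Knowledge Base indicators
# from supervisor observation data. Column names and the 1-4 scale are
# assumptions; the unit's actual rating export may differ.

import pandas as pd

ratings = pd.DataFrame({
    "indicator": ["Differentiates instruction", "Manages classroom",
                  "Differentiates instruction", "Manages classroom"],
    "rating": [2, 4, 1, 3],   # e.g., 1 = target for improvement, 4 = exemplary
})

# Mean rating per indicator, lowest first: low means flag candidates'
# weak areas for unit-level efforts such as the differentiation training.
by_indicator = ratings.groupby("indicator")["rating"].mean().sort_values()
print(by_indicator)
```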

Advanced Programs

Assessment of advanced programs is based on multiple assessments at four different checkpoints (2.1.b): admission, approval of candidacy, completion of the program, and application for licensure. Follow-up surveys are administered to first- and third-year program completers and their employers. Requirements for admission to the Graduate School and the program (checkpoint #1) and for approval of candidacy (checkpoint #2) are aligned with common data points, but with slight differences by program. Completion requirements (checkpoint #3) vary from program to program. Data from all checkpoints are used by the unit, coordinating committees, advisory councils, and program faculty members to make curricular and program modifications. Together, these multiple data sources show a strong relationship between performance assessments and candidate success.

At the advanced level, candidates receive feedback on their progress throughout their program via academic assignments and Praxis II exams. Candidates are assessed through formal examinations, presentations, case studies, candidate-created materials, and projects that involve demonstrating skills in assessment, classroom management, clinical management, and teaching. Candidates complete course evaluations each semester, and during the exit interview they are asked about the strengths and weaknesses of their program. Follow-up survey data and exit interview data are also reviewed by the APCC and program areas for continuous improvement.

Other Aspects of the Unit Assessment System

Consistency of programs: It is common practice to disaggregate data according to the type of program delivery. For the Master of Arts in Teaching (MAT), which has one program on campus and one at the KC Metro Center, assessment results from each site have been disaggregated, analyzed, and reported in order to ensure that programs in all locations are of the same quality and that candidates perform equally well in all programs. This process also covers the TCHL programs in Leadership and Special Education, which are offered both on campus and off campus. MAT candidate test results are also compared to the performance of teacher candidates in the traditional teacher education program. Findings from the comparison of MAT and traditional program candidates (2.5.a, 2.5.b, 2.5.c) were instrumental in securing campus-wide support for the MAT program.
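
In practice, this kind of disaggregation is a group-by over assessment scores keyed on program and delivery site. The sketch below illustrates the idea with hypothetical data; the table layout, column names, and scores are assumptions, not the unit's actual records.

```python
# Hypothetical sketch of disaggregating assessment results by delivery site
# (on campus vs. KC Metro Center vs. traditional program). The data layout
# and scores are assumed; the unit's actual exports may be structured differently.

import pandas as pd

scores = pd.DataFrame({
    "program": ["MAT", "MAT", "MAT", "MAT", "Traditional", "Traditional"],
    "site": ["Campus", "Campus", "KC Metro", "KC Metro", "Campus", "Campus"],
    "praxis_score": [172, 168, 170, 171, 169, 173],
})

# Disaggregate: summary statistics per program/site combination, supporting
# the comparison of MAT candidates (by location) to traditional candidates.
summary = scores.groupby(["program", "site"])["praxis_score"].agg(["count", "mean", "std"])
print(summary)
```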

Student complaints: The Unit maintains records of formal candidate complaints. The formal process for candidate complaints follows the PSU grievance policy. When a complaint is filed, the candidate is advised of the procedure to follow in resolving it. Due process procedures begin with an initial meeting with the faculty member or individual involved in the complaint; if the complaint is not resolved, the candidate next meets with the department chair. If still unresolved, the grievance moves to the College of Education Dean and then to the Provost/Vice President for Academic Affairs. For complaints directly related to admission or retention issues, the candidate first files a petition with the Committee for Admission to and Retention in Teacher Education (CARTE). If the CARTE decision is not satisfactory to the candidate, the next step is to see the Dean (2.6). Files and complaint information are available in the OTE for review.

2.2.1 Describe work undertaken to move to the Target Level

Unit Assessment System

Since the last visit, the Unit has centralized the collection of assessment data in the OTE. The unit has developed a clear vision of the assessment system, including timelines for data collection and review. The COE data flow chart has helped clarify where data originate and to whom they are reported for both initial and advanced programs. This flow chart was created through a collaborative effort of all coordinating committees, the Assessment Committee, and the OTE (see committee meeting minutes). Members of these committees include faculty and staff from the COE, the College of Arts and Sciences, and the College of Technology.

Centralization of Data Collection

In the past, the collection, aggregation and disaggregation, analysis, and sharing of assessment data varied across the unit due to differences in programs, transitions of personnel, technical issues, and other discrepancies within the system. While the basic tenets of the system were carried out, variation in timelines and methods of data collection prevented the full implementation of the assessment system as it was intended.

Since the last visit, data collection has been centralized. All initial and advanced program data now flow through the OTE. Once data are received, they are aggregated and disaggregated, analyzed, and shared through LiveText (pass code C8D97170; click the “Exhibit Center” tab). The data are then accessible to the unit as a whole and to individual programs and faculty. Each program is now required to submit an Annual Assessment Data Summary, which is reviewed annually by the leadership team, the Assessment Committee, and the coordinating committees. Trends in the data are reviewed, and recommendations are made to the unit, departments, and programs.

Unit Assessment System Handbook

The Assessment System Handbook (CF.1c) has been revised to reflect changes in the assessment system. The Handbook includes the information the professional community and candidates need in order to understand the system, along with the revised CF Knowledge Base for initial programs and the CF Knowledge Base for advanced programs. Unit assessments are identified for all checkpoints in both initial and advanced programs. The Handbook also includes revised procedures and a timeline for collecting, aggregating, disaggregating, analyzing, and reporting data. These revisions were the direct result of collaboration among program faculty, candidates, program completers, and other professionals.

Advanced Programs

For advanced programs, a new database program has been developed through our university system to show checkpoint data across the unit (this website requires a secure log-in; it will be available for viewing during the visit). We have added unit-wide common data points both at admission and in the follow-up surveys. For admission, we have implemented a common advanced programs admission recommendation form for candidates to use when applying to any program. This form allows us to gather unit-wide data on all candidates who apply to any advanced program. All follow-up surveys for advanced programs have been revised since the last NCATE visit and are now distributed and collected through the OTE. These surveys contain a set of common questions and a set of program-specific questions, allowing us to examine data both unit-wide and at the program level.

Since the last visit, an online evaluation system for advanced programs has been implemented. This system is used by clinical faculty to evaluate all candidates during field experiences (the system requires a secure log-in and will be available to view during the visit). In an effort to ensure fairness, consistency, and accuracy of this assessment, candidates are exposed to the instrument at the beginning of any field experience course and throughout their program. We have scheduled training (fall 2012) for advanced programs faculty to establish inter-rater reliability in the scoring of this instrument. At the conclusion of this training, an informational PowerPoint will be developed for P-12 school personnel to clarify scoring of the instrument. The PowerPoint, along with directions for accessing the online system, will be sent to all clinical faculty who are required to use the instrument to score candidates.

Initial Programs

Since the last visit, initial programs have reviewed and piloted an updated version of the Teacher Work Sample (Teacher Work Sample Template, Teacher Work Sample Rubric) to align with the newly revised Kansas Performance Teaching Portfolio (KPTP) required by KSDE. In fall 2012, the new TWS will be reviewed by all coordinating committees, internal advisory councils, and external advisory groups, and any recommended changes will be implemented. Inter-rater reliability training for clinical faculty will occur in fall 2012.

2.2.2 Plans for Continuous Improvement     

Continuous Improvement Goals

Unit goals for continued improvement in the area of assessment include efforts to foster a culture of data within the COE. Unit data are reviewed and analyzed monthly by several committees, including the Unit Assessment Committee, the Leadership Team, the Secondary Education Coordinating Committee (SECC), the APCC, and the EECC. Recommendations for program changes are made based on data trends (see committee meeting minutes). A second goal is continuous evaluation of the effectiveness of the Assessment System.

Goals for initial programs include evaluating the course sequence for elementary education (K-6 and ECU), with a focus on revising current courses to reflect input from advisory councils and data analysis. A current goal for all initial programs is to build stronger collaborative partnerships with P-12 schools in order to identify and increase opportunities for meaningful field experiences for candidates.

Continuous improvement goals for advanced programs include continuing to align programs across the unit in terms of admission requirements, follow-up data, and course offerings.

Ongoing Training for Inter-Rater Reliability

Maintaining a high level of inter-rater reliability on common assessments requires continuous training of university and academic supervisors. As major assignments such as the Teacher Work Sample continue to be refined (the new TWS was piloted in spring 2012, and assessor training for inter-rater reliability will occur in fall 2012), training of clinical faculty will need to be an ongoing endeavor.
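
Cohen's kappa is one common statistic for quantifying the inter-rater agreement these trainings aim to produce. The report does not specify which measure the unit uses, so the sketch below is illustrative only, with made-up rubric scores from two hypothetical raters; a weighted kappa is often preferred for ordinal rubric scales because it gives partial credit for near-misses.

```python
# Illustrative inter-rater reliability check with Cohen's kappa.
# Assumes two raters scoring the same TWS submissions on a 1-4 rubric;
# the scores are hypothetical and the unit's actual measure is unspecified.

from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 3, 4, 1, 3, 2]   # hypothetical rubric scores
rater_b = [3, 4, 2, 2, 4, 1, 3, 3]

# Quadratic weighting credits near-misses on the ordinal rubric scale.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```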