Assessment of Institutional Effectiveness and Student Learning at Ursinus College
Institutional effectiveness is a commonly used “umbrella” term referring to how well an institution delivers the programs and services designed to achieve its educational mission and strategic goals.
Assessment of institutional effectiveness includes the assessment of student learning, as well as the non-instructional aspects of the college that directly or indirectly contribute to student success.
All Ursinus academic departments, academic support and student services departments (also referred to as co-curricular departments) and administrative areas endeavor to engage in useful assessment practices that help their areas improve student learning and/or their programs, services, or functions.
Student Learning Assessment: “the systematic collection of information about student learning, using the time, knowledge, expertise, and resources available, in order to inform decisions that affect student learning” (Walvoord, 2010).
Institutional Effectiveness Assessment: “any effort to gather, analyze, and interpret evidence which describes institutional, divisional, or department effectiveness. Institutional effectiveness includes not only assessing student learning outcomes, but assessing other important outcomes, such as cost effectiveness, clientele satisfaction, meeting clientele needs, complying with professional standards, and comparisons with other institutions” (Upcraft & Schuh, 1996).
Types of Assessment:
Learning Outcomes Assessment: measuring the impact that our curriculum, programs, services, and facilities have on students’ learning, development, and success.
Tracking: monitoring who uses our programs, services, and facilities (e.g., raw numbers, frequency, age, class standing, gender, race, residence).
Needs Assessment: identifying needs of our students (e.g. student perceived, research supported).
Satisfaction Assessment: measuring the level of student satisfaction with our programs, services, and facilities.
Student Cultures and Campus Environments Assessment: assessing the collective perception of campus and student experience (e.g. campus climate, academic environment, residential quality of life).
Comparable Institution Assessment (Benchmarking): identifying how the quality of our programs, services, and facilities compares with peer institutions’ best practices.
National Standards Assessment: using nationally accepted standards to assess our programs and services (e.g., national assessment inventories such as EBI, CAS standards self-assessment, departmental review by a consulting group).
Cost Effectiveness Assessment: determining whether the programs, services and facilities we offer to students are worth the cost.
Adapted from Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco, CA: Jossey-Bass.
Assessment yields numerous benefits that improve, inform, and support the student experience from both a student learning perspective and an institutional effectiveness perspective. Assessment can help review the effectiveness and alignment of curricula; improve curricular and co-curricular program outcomes; inform planning and decision-making; clarify the impact of program changes; highlight program successes; provide evidence when requesting resources; and inform students of the intended learning outcomes.
Curricular and co-curricular assessment of student learning is one of the best ways for the college and its academic and academic support departments to determine the current state of student learning and the impact of their programs and services. Assessment of student learning, both direct and indirect, is a commonplace process at colleges and universities across the country. Direct assessment of student learning is an essential component of determining the impact of our core curriculum, academic department majors and programs, and our co-curricular student services and programs. In addition to student learning assessment, assessment of administrative and operational functions is essential to understanding the effectiveness of the college. Through the combined assessment of student learning and institutional effectiveness, we work toward achieving the Strategic Plan goals and fulfilling the mission of the college. Assessment also plays an integral role in our Middle States accreditation, which is a prerequisite for access to federal and state funds for research, programs, and facilities, as well as government sources of financial support for students.
Course-embedded assessment involves the assessment of actual work produced by students in our courses. Assessors may select course papers or elements of final exams and use these student artifacts to gauge the achievement of specific course or department learning objectives. It is important to remember that the purpose is to assess the learning outcomes of the course or department, not to grade individual students. Example: as part of a course, each senior completes a research paper that is graded for content and style but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a departmental or college-wide information literacy outcome).
Direct and indirect methods of evaluating learning relate to whether or not the method provides evidence in the form of student products or performances.
Direct methods of evaluating student learning demonstrate that actual learning has occurred relating to a specific content or skill. Direct assessment provides clear, tangible evidence that students have or have not learned. Students can directly demonstrate their achievement in a variety of ways.
Student writing samples/presentations scored by rubrics
Embedded course assignments assessed for achievement of both course and program goals
Exhibits and/or performances
Direct assessment of non-academic functions/areas includes methods that measure the effectiveness of services, programs, initiatives, etc. in areas with outcomes not related to student learning.
Quantitative reports on accuracy and/or timeliness
Number of individuals served
Participation numbers and percentages
Number of complaints received relative to total population served
Indirect methods reveal characteristics associated with learning, but they only imply that learning has occurred. Indirect evidence provides signs that students are likely learning, but the proof that they are learning is not as clear or convincing. Indirect methods of assessment are techniques that ask students to reflect on what they have learned and experienced and provide proxy information about student learning.
Senior exit interviews
Student engagement and satisfaction surveys
Graduate school admission and employment rates
Alumni outcomes surveys
Student Perception of Teaching Questionnaires
Indirect assessment methods for non-academic areas may collect findings about attitudes, perceptions, feelings, values, etc. about services provided or experiences encountered.
Survey responses – satisfaction, impact ratings
Social media posts
Other examples of Direct and Indirect Assessment Methods
Direct Evidence: demonstrates knowledge and skills
Project from a capstone course
Portfolio of student work
Standardized tests
Licensure exams
Essay questions with blind scoring
Reflection papers
Qualitative papers
Oral presentations
Lab experiments
Quizzes/tests
Summaries and analyses of electronic class discussion threads
Employer ratings of the skills of recent graduates
Pre- and post-tests
Course-embedded assessment
External evaluation of performance during an internship or student teaching
Research projects and presentations
Class discussion participation
Artistic performances
Indirect evidence: provides proxy information about student learning
Alumni surveys
Employer surveys
Student surveys
Exit interviews
Focus groups
Graduate follow-up studies
External or peer reviews
Benchmarking data
Utilization numbers
Graduation and retention rates
Job placement data
Graduate school placement rates
Number of student publications and research presentations
Student participation rates in faculty research, publications, and conference presentations
GPA distributions
Course evaluations
GRE and GMAT scores for students going on to graduate school
Formative assessment is conducted during a course or program experience and is intended to provide real-time feedback on student learning, allowing for immediate changes to classroom activities and assignments. Formative assessment is used internally, primarily by those responsible for teaching a course or developing a program. In contrast, summative assessment occurs at the end of a course or program. The purpose of this type of assessment is to determine whether or not overall goals have been achieved.
Goals for student learning are expressed in summative terms when describing what students are able to do or what skills students have when they complete a course or a program or when they graduate from the institution. Information from summative assessment can be used to make changes to a course or program before it is run again.
Evaluation typically focuses on the work of an individual and is generally used for grading and reporting feedback to that individual. Assessment focuses on teaching and learning or administrative processes and the use of aggregate outcomes for continual improvement.
The main difference is that grades focus on individual students, while assessment focuses on entire cohorts of students and how effectively everyone, not an individual faculty member, is helping them learn. While grades certainly are important, they are usually not sufficient for answering questions about whether specific learning goals have been achieved.
Some reasons that grades are not appropriate for overall assessment:
Course grades reflect what students have achieved in a single course.
Grades usually are a composite of a student’s achievement of course outcomes and do not differentiate achievement by learning objective.
Grades reflect the evaluation practices, policies, and criteria of individual instructors.
Faculty teaching the same course may teach different material or emphasize different course objectives.
It is reasonable to find that some outcomes have not been met. All assessments reveal areas of strength and weakness. Knowing those strengths and weaknesses helps us improve the development of courses, programs, and services to benefit our students. Assessment is an iterative process, meaning that it is ongoing and cyclical in pursuit of a desired outcome. What is learned from the assessment process, and how changes are implemented for student learning and program improvement, are the most important uses of assessment results.
Assessment is not about the evaluation of a single student, faculty, or staff member. It is conducted to determine what we as a whole can do to improve the learning of students and the effectiveness of our programs and services. It should not and will not be used as an evaluation of an individual person. The purpose is to identify the strengths and weaknesses of the college as an aggregate, in order to inform institutional, curricular, and pedagogical changes. When there is evidence of inadequate student learning or of an ineffective service or process, all involved should collectively take appropriate action to address the issues and make improvements.
The Middle States Commission on Higher Education (MSCHE) is a voluntary, non-governmental membership association dedicated to quality assurance and improvement through accreditation via peer evaluation. Middle States examines each institution as a whole, rather than specific programs within institutions. The accreditation process is an opportunity to demonstrate an institution’s accountability and improvement, both internally and externally.
In June 2016, MSCHE accepted our Monitoring Report. Our next evaluation visit is scheduled for 2018-19. More information about the self-study process is posted on the Accreditation website.
The Monitoring Report was requested after the review of our Periodic Review Report in November 2014. The report focused on “documenting the further implementation of an organized and sustained process for the assessment of institutional effectiveness and the achievement of institutional and program level student learning outcomes in all academic programs with evidence that assessment results are used to inform decision making and to improve teaching and learning (Standards 7 and 14).”
Goldman, G., & Zakel, L. (2009). Assessment Update, 21(3), May–June 2009. San Francisco, CA: Wiley Periodicals, Inc.
Middle States Commission on Higher Education. (2007). Student learning assessment: Options and resources (2nd ed.). Philadelphia, PA: Middle States Commission on Higher Education.
Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). Bolton, MA: Anker Publishing Company, Inc.
Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco, CA: Jossey-Bass.
Walvoord, B. E. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco, CA: Jossey-Bass.