[Embedded presentation: an introduction to logic models and how to create your own.]
The American Association for Higher Education and Accreditation (AAHEA) established the 9 Principles of Good Practice for Assessing Student Learning. The principles are in no particular order. Please read over the 9 Principles to learn more.
The following assessment terms have been provided by the NILOA Assessment Glossary.
Term | Definition |
--- | --- |
Assessment | A participatory, iterative process that provides the data institutions need on their students’ learning; engages the college and others in analyzing and using that information to confirm and improve teaching and learning; produces evidence that students are learning the outcomes the institution intended; guides colleges in making educational and institutional improvements; evaluates whether changes made improve/impact student learning; and documents the learning and institutional efforts. The Higher Learning Commission (HLC) - https://www.hlcommission.org/ |
Benchmark | A criterion-referenced objective performance data point that can be used for the purposes of internal or external comparison. A program can use its own data as a baseline benchmark against which to compare future performance. It can also use data from another program as a benchmark. https://case.edu/assessment/about/assessment-glossary |
Capstone Courses and Projects | Whether they’re called “senior capstones” or some other name, these culminating experiences require students nearing the end of college to create a project that integrates and applies what they’ve learned. The project might be a research paper, a performance, a portfolio, or an exhibit of artwork. Capstones can be offered in departmental programs and in general education as well. |
Criterion Referenced | A test in which the results can be used to determine a student's progress toward mastery of a content area. Performance is compared to an expected level of mastery in a content area rather than to other students' scores. Such tests usually include questions based on what the student was taught and are designed to measure the student's mastery of designated objectives of an instructional program. The "criterion" is the standard of performance established as the passing score for the test. Scores have meaning in terms of what the student knows or can do, rather than how the test-taker compares to a reference or norm group. |
Direct Assessment of Learning | In direct assessment, measures of learning are based on student performance or demonstrate the learning itself. Scoring performance on tests, term papers, or the execution of lab skills would all be examples of direct assessment of learning. Direct assessment of learning can occur within a course (e.g., performance on a series of tests) or could occur across courses or years (comparing writing scores from sophomore to senior year). https://www.cmu.edu/teaching/assessment/basics/glossary.html |
Embedded Assessment | A means of gathering information about student learning that is built into and a natural part of the teaching-learning process. Often uses, for assessment purposes, classroom assignments that are evaluated to assign students a grade. Can assess individual student performance or aggregate the information to provide information about the course or program; can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy). https://www.aacu.org/publications-research/periodicals/beyond-confusion-assessmentglossary |
Evaluation | Both qualitative and quantitative descriptions of progress towards and attainment of project goals. Using collected information (assessments) to make informed decisions about continued instruction, programs, and activities. Leads to statements of the value, worth, or merit of a program. |
Formative Assessment | Formative assessment is often done at the beginning of or during a program, thus providing the opportunity for immediate evidence of student learning in a particular course or at a particular point in a program. Classroom assessment is one of the most common formative assessment techniques. The purpose of this technique is to improve the quality of student learning by providing feedback throughout the developmental progression of learning. This can also lead to curricular modifications when specific courses have not met the student learning outcomes. Classroom assessment can also provide important program information when multiple sections of a course are taught, because it enables programs to examine whether the learning goals and objectives are met in all sections of the course. It can also improve instructional quality by engaging the faculty in the design and practice of the course goals and objectives and the course’s impact on the program. |
Indirect Assessment of Learning | Indirect assessments use perceptions, reflections or secondary evidence to make inferences about student learning. For example, surveys of employers, students’ self-assessments, and admissions to graduate schools are all indirect evidence of learning. https://www.cmu.edu/teaching/assessment/basics/glossary.html |
Norm Referenced Tests | A test in which a student’s or a group’s performance is compared to that of a norm group. The student’s or group’s scores will not necessarily fall evenly on either side of the median established by the original test takers. The results are relative to the performance of an external group and are designed to be compared with the norm group, providing a performance standard (a short percentile-rank sketch follows this table). Often used to measure and compare students, schools, districts, and states on the basis of norm-established scales of achievement. |
Performance-Based Assessment | Performance-based assessment is a test of the ability to apply knowledge in a real-life setting. Assessment of the performance is done using a rubric or analytic scoring guide to aid in objectivity. |
Portfolio | A systematic and organized collection of a student's work that exhibits to others the direct evidence of a student's efforts, achievements, and progress over a period of time. The collection should involve the student in selection of its contents, and should include information about the performance criteria, the rubric or criteria for judging merit, and evidence of student self-reflection or evaluation. It should include representative work, providing documentation of the learner's performance and a basis for evaluation of the student's progress. Portfolios may include a variety of demonstrations of learning and have been gathered in the form of a physical collection of materials, videos, CD-ROMs, reflective journals, etc. |
Reliability | How consistently a measure of the same phenomenon leads to the same result after multiple administrations or across multiple scorers/raters (see the inter-rater agreement sketch after this table). https://case.edu/assessment/about/assessment-glossary |
Rubric | A rubric is an evaluative tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both (a rubric-as-data-structure sketch follows this table). https://www.cmu.edu/teaching/assessment/basics/glossary.html |
Self-Assessment | A process in which a student engages in a systematic review of a performance, usually for the purpose of improving future performance. May involve comparison with a standard or established criteria. May involve critiquing one's own work or may be a simple description of the performance. Reflection, self-evaluation, and metacognition are related terms. |
Standardized Test | An objective test that is given and scored in a uniform manner. Standardized tests are carefully constructed, and items are selected after trials for appropriateness and difficulty. Tests are issued with a manual giving complete guidelines for administration and scoring. The guidelines attempt to eliminate extraneous interference that might influence test results. Scores are often norm-referenced. |
Student Learning Outcome (SLO) Statement | A specific description of what a student will be able to do at the end of the period during which that ability is presumed to have been acquired, and the focus of outcome assessment. (Note: some professional organizations may refer to these with different terms, such as objectives, indicators, abilities, or competencies). https://case.edu/assessment/about/assessment-glossary |
Summative Assessment | Summative assessment is comprehensive in nature, provides accountability, and is used to check the level of learning at the end of the program. For example, if upon completion of a program students will have the knowledge to pass an accreditation test, taking the test would be summative in nature since it is based on the cumulative learning experience. Program goals and objectives often reflect the cumulative nature of the learning that takes place in a program. Thus, the program would conduct summative assessment at the end of the program to ensure students have met the program goals and objectives. Attention should be given to using various methods and measures in order to have a comprehensive plan. Ultimately, the foundation for an assessment plan is to collect summative assessment data, and this type of data can stand alone. Formative assessment data, however, can contribute to a comprehensive assessment plan by enabling faculty to identify particular points in a program to assess learning (i.e., entry into a program, before or after an internship experience, impact of specific courses, etc.) and monitor the progress being made towards achieving learning outcomes. |
Validity | The extent to which a test measures the desired performance and appropriate inferences can be drawn from the results; the assessment accurately reflects the learning it was designed to measure. |
Value Added | The increase in learning that occurs during a course, program, or undergraduate education. Can either focus on the individual student (how much better a student can write, for example, at the end than at the beginning) or on a cohort of students (whether senior papers demonstrate more sophisticated writing skills, in the aggregate, than freshman papers). To measure value added, a baseline measurement is needed for comparison. The baseline measure can be from the same sample of students (longitudinal design) or from a different sample (cross-sectional); see the value-added sketch after this table. https://www.cmu.edu/teaching/assessment/basics/glossary.html |
VALUE Rubrics | Rubrics developed by teams of faculty experts representing colleges and universities across the United States through a process that examined many existing campus rubrics and related documents for each learning outcome and incorporated additional feedback from faculty. The rubrics articulate fundamental criteria for each learning outcome, with performance descriptors demonstrating progressively more sophisticated levels of attainment. The rubrics are intended for institutional-level use in evaluating and discussing student learning, not for grading. The core expectations articulated in all 15 of the VALUE rubrics can and should be translated into the language of individual campuses, disciplines, and even courses. The utility of the VALUE rubrics is to position learning at all undergraduate levels within a basic framework of expectations such that evidence of learning can be shared nationally through a common dialog and understanding of student success. https://www.usna.edu/Academics/AcademicDean/Assessment/All_Rubrics.pdf |
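
Because norm-referenced scores have meaning only relative to a norm group, a percentile rank makes the idea concrete. The following is a minimal illustrative sketch, not part of the NILOA glossary: the norm-group scores are invented, and the half-tie convention is just one common way of handling tied scores.

```python
from bisect import bisect_left, bisect_right

def percentile_rank(norm_scores, score):
    """Percentile rank of `score` relative to a sorted norm group.

    Ties are counted as half below / half above, one common convention.
    """
    below = bisect_left(norm_scores, score)          # norm scores strictly below
    ties = bisect_right(norm_scores, score) - below  # norm scores equal to `score`
    return 100 * (below + 0.5 * ties) / len(norm_scores)

# Hypothetical norm group of 20 test scores.
norm_group = sorted([48, 52, 55, 58, 60, 61, 63, 65, 66, 68,
                     70, 71, 73, 75, 76, 78, 80, 83, 87, 92])
print(percentile_rank(norm_group, 75))  # 67.5: the score exceeds about two-thirds of the norm group
```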
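Reliability across multiple scorers/raters is often summarized with an agreement statistic. As an illustrative sketch, assuming two raters scored the same ten essays on a 1 to 4 rubric (all scores invented), the code below computes Cohen's kappa, a standard chance-corrected measure of inter-rater agreement; the glossary itself does not prescribe any particular statistic.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement: chance that two independent raters with these
    # marginal score distributions would agree on a random item.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical rubric scores (1-4) given by two raters to ten essays.
rater_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
rater_b = [3, 2, 3, 3, 1, 2, 3, 4, 2, 2]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.71: substantial agreement
```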
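One way to see how a rubric "divides the assigned work into component parts" at "varying levels of mastery" is to treat it as a data structure. The sketch below is purely illustrative: the components, level descriptors, and equal-weight scoring rule are assumptions for the example, not a standard rubric format.

```python
# A rubric as a simple data structure: each component of the assigned
# work maps to a descriptor for each level of mastery (here 1-4).
RUBRIC = {
    "thesis": {1: "absent", 2: "unclear", 3: "clear", 4: "clear and compelling"},
    "evidence": {1: "none", 2: "thin", 3: "adequate", 4: "rich and well-chosen"},
    "organization": {1: "disordered", 2: "uneven", 3: "logical", 4: "seamless"},
}

def score_submission(level_by_component):
    """Total rubric score: the sum of the level assigned to each component."""
    assert set(level_by_component) == set(RUBRIC), "score every component"
    return sum(level_by_component.values())

# Hypothetical scoring of one paper.
levels = {"thesis": 3, "evidence": 4, "organization": 3}
print(score_submission(levels), "out of", 4 * len(RUBRIC))  # 10 out of 12
```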
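The value-added entry distinguishes longitudinal designs (the same students measured twice) from cross-sectional designs (different cohorts measured once). A small illustrative computation, with all scores invented, shows how the two baselines differ.

```python
from statistics import mean

# Hypothetical writing-rubric scores (0-100) for the same cohort,
# measured at entry (baseline) and again at the end of the program.
baseline = {"s01": 62, "s02": 55, "s03": 71, "s04": 48, "s05": 66}
end_of_program = {"s01": 74, "s02": 63, "s03": 78, "s04": 61, "s05": 70}

# Longitudinal design: same students, so value added is the mean
# per-student gain from baseline to end of program.
gains = [end_of_program[s] - baseline[s] for s in baseline]
print(f"mean value added (longitudinal): {mean(gains):.1f} points")

# Cross-sectional design: compare a current senior cohort against a
# separate freshman cohort measured with the same instrument.
freshmen = [58, 61, 54, 67, 60]
seniors = [72, 69, 75, 66, 71]
print(f"cohort gap (cross-sectional): {mean(seniors) - mean(freshmen):.1f} points")
```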