Increasing Student Success

What is Increasing Student Success?

This is an essential guide for educators, administrators, policymakers, and the media. Glossaries are dynamic expressions of current language usage. Education has changed dramatically in recent years, and so must the language used to describe and define it. We believe this glossary is useful for the wider field of educators promoting student success. It provides precise language and definitions for communicating with peers and for influencing administrators, legislators, and the media more effectively.

Assessment

These glossary terms are primarily related to student and program assessment. Some related terms are located under the Program Management category. More comprehensive glossaries of terms can be found in the Greenwood Dictionary of Education (Collins & O’Brien, 2011) and the Handbook of Practical Program Evaluation (Newcomer, Hatry, & Wholey, 2015).

affective domain
1. Definition: “A part of Bloom’s Taxonomy of Educational Objectives for student attitudes, values, and emotional growth. The affective domain includes five basic categories: receiving, responding, valuing, organization, and characterization by a value” (Dembo, 1994, p. G-1).
2. Compare with COGNITIVE DOMAIN and METACOGNITIVE DOMAIN.

alternate assessment
1. Definition: “Examination of student progress through direct observation of student performance and judgment of learning products through a collection of authentic sources such as behavior, student presentations, and work” (Collins & O’Brien, 2011, p. 18).
2. Compare with ASSESSMENT, DIFFERENTIATED PLACEMENT, DIRECTED SELF-PLACEMENT, PLACEMENT, and PLACEMENT TESTING.

assessment
1. Definitions: (a) “Process of applying systematic formal and informal measures and techniques to ascertain students’ current competencies and abilities; (b) Process of determining students’ strengths and weaknesses in cognitive and affective areas for the purpose of generalized placement; (c) Act of assessing, or taking a measurement by counting, rating, or estimating the amount of skill, ability, or knowledge of some element of an individual or a program; (d) ASSESSMENT should be as objective as possible (value-free), as opposed to EVALUATION, which suggests that value has been added. Assessment does not assume, in advance, what is good, worthwhile, or desirable. In analogy to science, assessment is observation. Although objectivity is always relative, it is important to separate the measurement from the interpretation of its meaning” (Collins & O’Brien, 2011, p. 36); and (e) “While ‘ASSESSMENT’ means ‘measurement,’ the term is increasingly used in the higher education context to refer to a systematic cycle of collecting and reviewing information about student learning. The complete cycle involves: clearly stating expected goals for student learning, offering learning experiences, measuring the extent to which students have achieved expected goals, and using the evidence collected to improve teaching and learning” (Office of the Provost, n.d., para. 1).
2. Examples: College entrance examination scores, scores on pretests for all students enrolled in a course, and graduation rates for students in a particular academic degree program.
3. Compare with ALTERNATE ASSESSMENT, CAUSATION and CORRELATION, DIFFERENTIATED PLACEMENT, DIRECTED SELF-PLACEMENT, EVALUATION, PLACEMENT TESTING, PROGRAM GOAL, PROGRAM OBJECTIVE, RESEARCH, and SYSTEMATIC SELF-STUDY.

backwash
1. Definition: Positive or negative effect that assessment of a specific skill has on how that skill is taught and learned.
2. Examples: (a) Instructors organize their class learning activities directly to prepare for high-stakes tests that can impact funding for the school; and (b) Supplemental learning topics are ignored to permit more time for the instructor to teach to the test.

baseline
1. Definitions: (a) Natural occurrence of behavior before intervention; and (b) Data collected to establish a point of comparison between previous behavior and that which occurs after an intervention is introduced.
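As a brief illustration of the second definition, the hypothetical sketch below computes a baseline mean from pre-intervention scores and compares it with post-intervention scores; all data values are invented for the example.

```python
# Hypothetical illustration only: establishing a BASELINE before an
# intervention and comparing it with post-intervention data.
from statistics import mean

# Quiz scores collected before a tutoring intervention (invented baseline data)
baseline_scores = [62, 65, 60, 64]
# Quiz scores collected after the intervention was introduced (invented data)
post_intervention_scores = [71, 74, 70, 75]

baseline_mean = mean(baseline_scores)          # point of comparison
post_mean = mean(post_intervention_scores)

print(f"Baseline mean: {baseline_mean:.1f}")
print(f"Post-intervention mean: {post_mean:.1f}")
print(f"Change from baseline: {post_mean - baseline_mean:+.1f} points")

# The change describes an observed difference; on its own it does not
# establish that the intervention caused the improvement.
```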

behavioral change
1. Definition: Difference in performance that is observable and documentable.
2. Examples: Course dropout rate, final course grade, and persistence toward graduation following an intervention activity.
3. Compare with ACADEMIC MENTORING, COURSE-BASED LEARNING ASSISTANCE, and TUTORING.

causation and correlation
1. Definitions: (a) CAUSATION occurs when a change in one variable directly produces an increase or decrease in another variable. Causation is difficult to establish with human subjects, since other variables may have an influence; it is easier to establish in carefully monitored scientific studies that are replicated numerous times; and (b) CORRELATION suggests a high likelihood that two variables are associated. Studies may report the strength of this association along with the probability that some other variable might explain the results.
2. Examples: (a) Carefully designed studies replicated many times established CAUSATION between cigarette smoking and various medical conditions, including lung cancer; and (b) Attendance at a student-led study group is CORRELATED with higher course grades (see the illustrative sketch after this entry).
3. Compare with ASSESSMENT, EVALUATION, FORMATIVE EVALUATION, and RESEARCH.
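To make the distinction concrete, the sketch below computes a Pearson correlation between study-group attendance and course grades. This is a minimal, hypothetical illustration: the data values and variable names are invented for the example, and a strong correlation computed this way would not by itself establish causation.

```python
# Hypothetical illustration only: CORRELATION between study-group attendance
# and course grades. All data values are invented for this example.
from statistics import correlation  # available in Python 3.10+

# Hours each student spent in a student-led study group (invented data)
study_group_hours = [0, 2, 3, 5, 6, 8, 9, 12]
# Final course grades for the same students, 0-100 scale (invented data)
course_grades = [68, 70, 74, 78, 80, 83, 85, 90]

r = correlation(study_group_hours, course_grades)  # Pearson's r
print(f"Pearson correlation: r = {r:.2f}")

# A high r shows that the two variables move together (CORRELATION); it does
# not show that attendance caused the higher grades (CAUSATION), because other
# variables (prior preparation, motivation) may explain the association.
```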

cognitive domain
1. Definition: “A part of Bloom’s Taxonomy of Educational Objectives. Bloom divides the objectives in the cognitive domain into six categories: knowledge, comprehension, application, analysis, synthesis, and evaluation” (Dembo, 1994, p. G-2).
2. Compare with AFFECTIVE DOMAIN, ASSESSMENT, and DIAGNOSIS.

cohort
1. Definitions: (a) Specific subpopulation or a subset of the entire student body studied over a period through the examination of their attitudes, behaviors, or scores on assessment instruments; and (b) Group of students who are a subset of the entire student body.
2. Examples: Entering first-year COHORT of students at a college or university; a subpopulation of students such as student-athletes, members of fraternities and sororities, or students over the age of 25.

college and career readiness
1. Definitions: (a) Level of preparation at which a student possesses the content knowledge, strategies, skills, and techniques necessary to be successful in any of a range of postsecondary settings (Collins, 2007; Conley, 2012); and (b) COLLEGE READINESS and CAREER READINESS are relative terms because they are dependent upon a particular institution, a specific degree program within that institution, and a particular instructor teaching a course within that degree program.
2. Compare with COLLEGE-LEVEL, DEVELOPMENTAL, and DEVELOPMENTAL-LEVEL COURSE.

criterion
1. Definitions: (a) Measurable objective that describes the characteristics of acceptable performance; and (b) Specific standard by which performance is evaluated.
2. Compare with PROGRAM GOAL, PROGRAM OBJECTIVE, MISSION STATEMENT, and VISION STATEMENT.

developmental profile
1. Definition: Description of an individual’s academic or cognitive competencies as measured by, for example, high school grades, standardized college entrance exams, interviews, and surveys.

diagnosis
1. Definitions: (a) Process of determining students’ specific strengths and weaknesses to create a prescription for treatment (Harris & Hodges, 1981); (b) Planning of instruction based on an evaluation of students’ needs; and (c) The classification of people into established categories (Harris & Hodges, 1981).
2. Compare with ASSESSMENT.

direct measures
1. Definition: “Processes used to directly evaluate student work. They provide tangible, self-explanatory, and compelling evidence of student learning.
2. Examples: Exam questions, portfolios, performances, projects, reflective essays, computer programs, and observations” (Office of the Provost, n.d., para. 3).
3. Compare with INDIRECT MEASURES.

directed self-placement
1. Definition: Process by which students make an informed choice about the level of their first college mathematics or writing course. Students complete a self-assessment of their reading, mathematics, and writing skills based on their high school experiences, complete a reflection survey, and review the requirements for different college-level mathematics and writing courses. The survey is not a placement test, and students make the course selection decision on their own.
2. Compare with ALTERNATE ASSESSMENT, ASSESSMENT, DIFFERENTIATED PLACEMENT, PLACEMENT, and PLACEMENT TESTING.

evaluation
1. Definitions: (a) “Process of establishing the utility or value of a particular activity or program; (b) Decision-making process of interpreting test/assessment results, deciding what is good, good enough, or effective, for instance. Thus, in EVALUATION, an important component is subjective and philosophical; (c) Making data-based judgments and decisions about student academic skills, student progress, and/or program effectiveness; and (d) Measuring an activity or program with the desired outcome” (Collins & O’Brien, 2011, p. 171).
2. Compare with ALTERNATE ASSESSMENT, ASSESSMENT, CAUSATION and CORRELATION, PLACEMENT TESTING, PROGRAM EVALUATION, and RESEARCH.

evaluation standards
1. Definition: Criteria established to measure the efficacy of an activity or program, such as its worth, effectiveness, or efficiency in producing desired outcomes.

formative evaluation
1. Definitions: (a) Rather than waiting until the end of an educational activity, EVALUATION occurs while the event is underway. This provides an opportunity to make immediate changes to the educational activity to increase its positive effect for students and to accomplish the desired goals; and (b) “A process to determine the extent to which students are progressing through a certain learning or development goal; used to provide continuous or frequent feedback to help shape, modify, or improve the program or service while it is happening” (Council for the Advancement of Standards, 2020, para. 29).
2. Compare with ASSESSMENT, CAUSATION and CORRELATION, EVALUATION, and SUMMATIVE EVALUATION.

human subjects research
1. Definition: Investigations (other than normal evaluation of student learning by instructional staff) involving people as participants. Such investigations may require prior approval by the institution and compliance with federal, state, and institutional rules for research studies.

indirect measures
1. Definition: “Processes that provide evidence that students are probably attaining learning goals. These require inference between the student’s action and the direct evaluation of that action. Examples include: course grades, student ratings, satisfaction surveys, placement rates, retention and graduation rates, and honors and awards earned by students and alumni” (Office of the Provost, n.d., para. 7).
2. Compare with DIRECT MEASURES.

measurement
1. Definitions: (a) Process of determining the extent to which some characteristic is associated with an object; and (b) A unit of performance generally used to assess the efficacy of a particular intervention or treatment.
2. Examples: Outcome scores of students and graduation rates of students within a given time period.
3. Compare with ASSESSMENT.

placement
1. Definition: “Assignment of students to an appropriate course or educational program in accordance with their aims, capabilities, readiness, educational background, and aspirations. PLACEMENT may be based on previous experiences, scores on admissions or entrance tests, or tests specifically designed for placement purposes” (Arendale et al., 2007, p. 25).
2. Compare with ALTERNATE ASSESSMENT, ASSESSMENT, DIFFERENTIATED PLACEMENT, DIRECTED SELF-PLACEMENT, and PLACEMENT TESTING.

placement testing
1. Definition: ASSESSMENT given by an institution or by organizations such as the College Board to determine the academic or skill levels of students, especially new students, in order to place them in appropriate courses and programs.
2. Compare with ALTERNATE ASSESSMENT, ASSESSMENT, DIFFERENTIATED PLACEMENT, DIRECTED SELF-PLACEMENT, and PLACEMENT.

power test
1. Definition: Test with no time limit used to determine an individual's strength in a particular skill area.

program assessment
1. Definition: “The systematic and ongoing method of gathering, analyzing, and using information from various sources about a program and measuring PROGRAM OUTCOMES in order to improve student learning” (Selim et al., 2008, p. 3). PROGRAM ASSESSMENT is diagnostic, process-oriented, and provides feedback. It is a process used to provide a program with feedback on its performance with the intent of helping improve the program and, in particular, improve student learning (Selim et al., 2008, p. 3). Effective program assessment plans should address (a) what a program is trying to accomplish, (b) how well it does it, (c) how the program contributes to student development and growth, and (d) how student learning can be improved (Selim et al., 2008).
2. Compare with ASSESSMENT.

program evaluation
1. Definition: “Systematic method for collecting, analyzing, and using the information to answer questions about projects, policies, and programs and for program improvement” (Collins & O’Brien, 2011, p. 374).
2. Compare with ASSESSMENT, EVALUATION, and RESEARCH.

program outcomes
1. Definition: Statements, typically part of a program’s ASSESSMENT plan, that address specific actions and achievements a program has reached. PROGRAM OUTCOMES are often used to measure program-level goals and operational outcomes (Council for the Advancement of Standards, 2015).
2. Examples: (a) PROGRAM OUTCOMES often describe programmatic elements, such as the quality or quantity of program usage (e.g., growth in student enrollment or student use of a program); and (b) PROGRAM OUTCOMES can indicate fiscal sustainability or facilities and infrastructure improvements. However, it is important to distinguish between PROGRAM OUTCOMES and STUDENT LEARNING OUTCOMES; PROGRAM OUTCOMES do not describe student learning (North Carolina Agricultural and Technical State University, n.d.).

readiness profile
1. Definition: “Readiness for learning, whether in absolute terms (as in relation to cut scores), relative terms (as in relation to other students), and self-referential terms (as in relation to student goals, interests, and aspirations)” (Conley, 2012, p. 18).

research
1. Definitions: (a) Investigation of an original nature conducted to gain understanding and knowledge. This inquiry may be undertaken to obtain new information or confirm previously conducted investigative studies; and (b) Asking questions and using quantitative or qualitative means to achieve the goal of obtaining new knowledge.
2. Compare with ALTERNATE ASSESSMENT, ASSESSMENT, EVALUATION, and PLACEMENT TESTING.

student development
1. Definitions: (a) Learning outcomes that occur through interaction with an environment that enhances academic, intellectual, interpersonal, psychosocial, moral, and faith/spiritual (for some institutions) development; and (b) “Individual growth that is an intended outcome of engaging with functional area programs and services. Student learning and development refers to the changes that result when students are exposed to new experiences, concepts, information, and ideas; the knowledge, understanding, and personal growth are generated, in this context, from interactions with higher education learning environments” (Council for the Advancement of Standards, 2020, para. 56).
2. Compare with STUDENT DEVELOPMENT OUTCOMES and STUDENT LEARNING OUTCOMES.

student development outcomes (SDOs)
1. Definition: Statements that specify what students can do when they have completed or participated in a course or program. SDOs specify characteristics (such as responsibility, resilience, self-awareness, and cultural humility) that must be observable, measurable, and demonstrable.
2. Compare with STUDENT DEVELOPMENT, STUDENT LEARNING GOALS, and STUDENT LEARNING OUTCOMES.

student learning goals
1. Definition: Programs often include STUDENT LEARNING GOALS, which are generally long-range statements written from an educator’s perspective that give the general content and direction of a learning experience for students. These goals focus on the general aims of the curriculum. Good instruction begins with distinct STUDENT LEARNING GOALS from which to select appropriate instructional activities and outcome assessments that help determine students’ mastery of learning (DePaul, n.d.).
2. Compare with STUDENT DEVELOPMENT, STUDENT DEVELOPMENT OUTCOMES, STUDENT LEARNING OBJECTIVES, and STUDENT LEARNING OUTCOMES.

student learning objectives
1. Definition: STUDENT LEARNING OBJECTIVES (not to be confused with STUDENT LEARNING OUTCOMES) are more specific than the goals and are used to describe what an educator intends to teach in a learning experience. STUDENT LEARNING GOALS and STUDENT LEARNING OBJECTIVES are often used to structure the content of an educational activity (UCLA Health, 2016).
2. Compare with STUDENT DEVELOPMENT, STUDENT DEVELOPMENT OUTCOMES, STUDENT LEARNING GOALS, and STUDENT LEARNING OUTCOMES.

student learning outcomes (SLOs)
1. Definition: A learning outcome [often referred to as a STUDENT LEARNING OUTCOME (SLO)] is also part of a program’s ASSESSMENT plan. SLOs “describe significant and measurable change occurring in students as a direct result of their interaction with an educational institution and its program and services” (Council for the Advancement of Standards, 2015, para. 51). More specifically, SLOs describe an action that is observable and measurable and that demonstrates attainable skills, abilities, and/or competencies (Oxnard College, n.d.). Please note that STUDENT LEARNING OUTCOME statements are sometimes referred to as STUDENT LEARNING OBJECTIVES within the literature (Selim et al., 2008). For purposes of this glossary, the term STUDENT LEARNING OBJECTIVES describes what an educator intends to teach in a learning experience, and STUDENT LEARNING OUTCOMES describe what students can know or do after a learning experience. The achieved results are then often presented in an ASSESSMENT report.
2. Compare with STUDENT DEVELOPMENT, STUDENT DEVELOPMENT OUTCOMES, and STUDENT LEARNING GOALS.

student success
1. Definition: “Aggregate of many aspects of the student experience, including academic success, connection to the campus, developing interpersonal and intrapersonal skills, and preparing for entrance into the global society and workforce. Institutions may define student success for their population, considering student goals, and evidence of learning and development. Those attempting to measure student success often point to rates of year-to-year retention and percent of students who persist to the completion of their goals” (Council for the Advancement of Standards, 2019, para. 56).

student success goals
1. Definition: Measurements of grades, completion, retention, persistence, graduation, indicated benchmarks, and competencies after learners participate in learning experiences such as a single course/activity or a set of courses/activities. However, they are inexact measures of actual learning, as students enter a course/activity with varying levels of already-achieved knowledge or competence.
2. Compare with STUDENT DEVELOPMENT GOALS, STUDENT SUCCESS, and STUDENT SUCCESS OUTCOMES.

student success outcomes
1. Definition: Measurements of grades, completion, retention, persistence, graduation, and/or competence used to demonstrate program value. Both newer and established programs track these measurements to determine the effectiveness of courses and services. STUDENT SUCCESS measures may correlate with learning if the student reaches a set competence or benchmark after participating in learning experiences such as a single course or a series of courses, activities, or services. However, they are inexact measures of actual learning or development, as students enter with varying levels of already-achieved knowledge or competence (see the illustrative sketch after this entry).
2. Example: Tutoring services programs intend that students who participate in tutoring activities will improve their grades (a STUDENT SUCCESS measure) and their learning (a STUDENT LEARNING OUTCOME). Accrediting agencies are interested in student learning and development, and well-established tutoring services programs are uniquely positioned to assess those outcomes.
3. Compare with STUDENT DEVELOPMENT OUTCOMES, STUDENT SUCCESS, and STUDENT SUCCESS GOALS.
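As a concrete illustration of how such measures are tracked, the sketch below computes a year-two retention rate and a course completion rate for a small hypothetical entering COHORT. The record layout and values are assumptions invented for the example, not a standard reporting format.

```python
# Hypothetical illustration only: computing two STUDENT SUCCESS measures
# (year-two retention rate and course completion rate) for an entering cohort.
# The record layout and all values are invented for this example.
cohort = [
    {"id": 1, "returned_year_2": True,  "completed_course": True},
    {"id": 2, "returned_year_2": True,  "completed_course": False},
    {"id": 3, "returned_year_2": False, "completed_course": True},
    {"id": 4, "returned_year_2": True,  "completed_course": True},
    {"id": 5, "returned_year_2": False, "completed_course": False},
]

retention_rate = sum(s["returned_year_2"] for s in cohort) / len(cohort)
completion_rate = sum(s["completed_course"] for s in cohort) / len(cohort)

print(f"Year-two retention rate: {retention_rate:.0%}")   # 60%
print(f"Course completion rate:  {completion_rate:.0%}")  # 60%

# These rates describe STUDENT SUCCESS, not learning itself; pairing them with
# direct measures of learning gives a fuller picture of program effectiveness.
```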

summative evaluation
1. Definitions: (a) “Evaluation activities used to decide if a particular activity or function should be continued, enhanced, curtailed, or eliminated; and (b) Sometimes refers to evaluation activities that occur after the event under investigation has concluded to generate information for future decisions” (Council for the Advancement of Standards, 2020, para. 57).
2. Compare with ASSESSMENT, EVALUATION, FORMATIVE EVALUATION, and SYSTEMATIC SELF-STUDY.

systematic self-study
1. Definition: Process of judging the value and worth of an educational program based on its stated mission. The MISSION STATEMENT is a concise, well-articulated public declaration of the general values and principles that guide the program. The statement should describe the program, its purpose and function, its rationale, and its stakeholders (e.g., what it is, what it does, why it does it, and for whom). Often, programs also provide a public statement of their vision used to describe what they hope to achieve (their loftiest aspirations) in tandem with their mission. The terms MISSION STATEMENT and VISION STATEMENT are often conflated and used interchangeably. Yet, for purposes of this self-study, a MISSION STATEMENT declares a program's intended present-oriented, overarching purpose; a VISION STATEMENT expresses a future-oriented, hoped-for reality.
2. Compare with ASSESSMENT, FORMATIVE EVALUATION, MISSION STATEMENT, SUMMATIVE EVALUATION, and VISION STATEMENT.