How Can Student Learning Be Assessed?
The main focus of this chapter is to provide a glossary of some of the key terms used to describe various assessment tools and strategies. The author begins by defining direct and indirect evidence (see definitions below). She makes it clear that a proper assessment of student learning should not consist of indirect evidence alone (course grades, test grades, assignment grades, etc.); any assessment effort should be diverse, drawing on both direct and indirect evidence.
The author then defines summative assessment (the kind obtained at the end of a course) and formative assessment (the kind undertaken while student learning is taking place). The main point here is that summative assessment is important for determining whether students are graduating with the competencies you want them to have, and for satisfying external audiences such as accreditors, employers, and policymakers, but formative assessment is necessary to give students prompt feedback on their strengths and weaknesses throughout the semester. In short, if we are to assess our students properly, assessment of student learning needs to be an ongoing process throughout the semester.
The author concludes the chapter by defining various types of assessments: traditional, performance, authentic, embedded, add-on, local, published, quantitative, qualitative, objective, and subjective. I found two points particularly relevant: (1) When it comes to implementing various types of assessment, student ‘buy-in’ is just as important as (if not more important than) faculty ‘buy-in’. It is critical that we convince our students that it is important, for both themselves and the college, to participate in the assessment process, particularly with assessment ‘add-ons’ that may not directly influence their grades. (2) The assessment process is not a ‘one-size-fits-all’ process. Each program needs to determine which forms of assessment will work best for its students and its program objectives.
Given that one of our group objectives is to develop a common vocabulary, I felt it was pertinent (necessary!) to provide everyone with a list of the terms defined in this chapter. (Please pardon the blatant plagiarism.)
- Direct Evidence: tangible, visible, self-explanatory, and compelling evidence of exactly what students have and have not learned; the kind of evidence a skeptic would accept. Examples: scores and pass rates on appropriate licensure or certification exams; capstone experiences such as research projects, presentations, and theses; portfolios; various other examples given on p. 21.
- Indirect Evidence: proxy signs that students are probably learning. Examples: course grades; assignment grades; retention and graduation rates; various other examples given on p. 21.
- Summative Assessment: assessment obtained at the end of a course or program.
- Formative Assessment: assessment undertaken while student learning is taking place rather than at the end of a course or program.
- Traditional Assessments: tests designed to collect assessment information, such as multiple-choice tests, essay tests, and oral examinations. Students typically complete traditional assessments in controlled, timed examination settings.
- Performance Assessments (Alternative Assessments): ask students to demonstrate their skills rather than relate what they’ve learned through traditional tests. Writing assignments, projects, laboratory and studio assignments, and performances are examples.
- Authentic Assessments: performance assessments that ask students to do real-life tasks, such as analyze case studies, conduct realistic laboratory experiments, or complete internships.
- Embedded Assessments: program, general education, or institutional assessments that are embedded into course work.
- Add-on Assessments: assessments that go beyond course requirements, such as assembling a portfolio throughout a program, taking a published test, or participating in a survey or focus group.
- Local Assessments: those created by faculty and staff at a college.
- Published Assessments: those published by an organization external to the college and used by a number of colleges.
- Quantitative Assessments: assessments that use structured, predetermined response options that can be summarized into meaningful numbers and analyzed statistically. Examples: test scores, rubric scores, survey ratings, and performance indicators.
- Qualitative Assessments: assessments that use flexible, naturalistic methods and are usually analyzed by looking for recurring patterns and themes. Examples: reflective writings, online class discussion threads, and notes from interviews, focus groups, and observations.
- Objective Assessments: assessments that need no professional judgment to score correctly.
- Subjective Assessments: assessments that yield many possible answers of varying quality and require professional judgment to score.
Patrick's post, facilitated by Pat, while we both learn the ins and outs of blogging!