Wednesday, June 30, 2010

Meeting Summary June 30th

Today we broke up into our workteams and got busy creating outcomes for our competencies and descriptions for our rubrics. Towards the end of the meeting we began to "Show and Share". We found out that Sharon Carson rules when it comes to writing outcomes that are measurable!!! She will work with us next session to get our outcomes in shape.
We wondered if the most pragmatic way to assess Teamwork would be to have the faculty teaching the course assess it -- we are not really sure about this one and will need to talk to faculty who will be assessing this competency and also check for models for assessing it on-line.
We wondered if all the outcomes in a competency's generic rubric will need to be assessed by all the courses that "own" that competency...

June 29th Meeting Summary

This meeting was spent reviewing a proposed overall plan for accomplishing gen ed assessment in Fall 10 and Spring 11, with a focus on Fall 10. A sampling plan was also reviewed.
Here are the decisions that we made:
- we will use generic objectives and rubrics created by the assessment team this summer to assess Communication Skills, Teamwork, and Social Responsibility. All aspects of the rubric will be created by the assessment team this summer including the outcomes and the descriptions.
- faculty from selected sections of courses will be asked to dedicate an existing assignment to serve the dual purpose of a course grade as well as assessing a competency or competencies (embedded assessment)
- the student artifacts for each competency will be assessed by an inter-disciplinary team of faculty (for instance, if there are 400 artifacts that need to be assessed in written communication, then 20 teams of 2 faculty each from a variety of disciplines would each be assigned 20 artifacts to assess -- see the sketch after this list).
- we are going to ask IR for information about the courses taken in Fall 09 by students who had 48 or more hours and had completed Engl 1302 and one of the Government Core courses, to get a feel for which of the 43 courses on our list students close to graduation were most likely taking last Fall. We are hoping to reduce the number of courses based on this information.
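To make the artifact-scoring arithmetic above concrete, here is a minimal Python sketch (purely illustrative -- the artifact and team names are made up, and this is not part of any adopted plan) that shuffles a pool of artifacts and deals them out evenly to scoring teams:

```python
# Illustrative sketch only: artifact and team names are invented, and this is not
# an adopted procedure. It simply deals a pool of artifacts out evenly to teams.
import random

def assign_artifacts(artifact_ids, teams):
    """Shuffle the artifacts and deal them out to the teams as evenly as possible."""
    ids = list(artifact_ids)
    random.shuffle(ids)
    assignments = {team: [] for team in teams}
    for i, artifact in enumerate(ids):
        assignments[teams[i % len(teams)]].append(artifact)
    return assignments

# 400 written-communication artifacts, 20 interdisciplinary two-person teams
artifacts = [f"artifact_{n:03d}" for n in range(1, 401)]
teams = [f"team_{n:02d}" for n in range(1, 21)]
workload = assign_artifacts(artifacts, teams)
print({team: len(batch) for team, batch in workload.items()})  # 20 artifacts per team
```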

All the workteams are going to have their objectives and rubrics with descriptions completed by the last Summer I session for "show and share" (July 7th). All the workteams are going to share where they are on the outcomes for the rubrics at the June 30th session for feedback. The Summer I group will also brainstorm some elements of Guidelines for the faculty to give to the Summer II group for completion.

Tuesday, June 29, 2010

Suskie Chapter 9 - Rubrics

This chapter gives details about Rubrics and includes a great list of reasons for using rubrics on page 139! The following types of rubrics are discussed with examples along with the pros and cons of each type:
• Checklist Rubrics
• Rating Scale Rubrics
• Descriptive Rubrics
• Holistic Scoring Guides
• Structured Observation Guides
Linda Suskie advises that looking for models is a great way to create effective rubrics (we have been doing this!).
Here are the steps for creating a rubric:
1. List the learning goals
2. Create the rating scales (at least 3 levels) – use names and not just numbers
3. Fill in the boxes if you are creating a descriptive rubric.

We are going to use a descriptive rubric with 3 levels for our gen ed assessment Fall 10.

Monday, June 28, 2010

Suskie, Chapter 7: Organizing an Assessment Process

Basically, this chapter builds upon the information presented in the previous chapters. Assessment should be coordinated by a committee under faculty leadership. The committee should develop an assessment plan beginning with the following: a clear understanding of why we are assessing; a clear understanding of the audiences for the assessment findings; a supportive climate for assessment; and adequate resources to support assessment efforts (p.99).
Take Stock of Curricula & Learning Opportunities
Suggested options for reviewing our curriculum to ensure alignment with learning outcomes:
Curriculum mapping—map the curriculum's learning goals against its courses; this helps identify when important learning outcomes are addressed in just one or two courses instead of throughout the curriculum (p.99; a rough sketch of this idea appears below).
Transcript analysis—review a sample of graduates' transcripts to learn which courses they chose and when they took them; this will reveal which gen ed courses students usually take, when they take them, and whether a gen ed goal is achieved through courses in their majors (p.101).
Syllabus analysis—this helps determine whether students have enough assignments and class work to achieve intended course learning goals; this should be an ongoing review since syllabi may change often; best results occur when syllabi are required (p.101).
This whole process may reveal that the curriculum does not align well with our learning goals. Students may graduate without taking courses that include our overall learning goals (p.102).
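Since the curriculum mapping option above is really just a goals-by-courses matrix, here is a tiny illustrative sketch (the course numbers and goal assignments are hypothetical examples, not our actual map) showing how such a map can flag outcomes that show up in only one or two courses:

```python
# Hypothetical example only: these courses and goal assignments are invented, not
# PAC's actual curriculum map. The point is the structure: goals x courses, then
# flag any goal that is addressed in only one or two courses.
curriculum_map = {
    "ENGL 1301": {"Communication Skills"},
    "ENGL 1302": {"Communication Skills"},
    "SPCH 1311": {"Communication Skills", "Teamwork"},
    "GOVT 2305": {"Social Responsibility"},
    "BIOL 1406": {"Critical Thinking"},
}

goals = ["Communication Skills", "Teamwork", "Social Responsibility", "Critical Thinking"]

for goal in goals:
    courses = [course for course, mapped in curriculum_map.items() if goal in mapped]
    flag = "  <-- addressed in only one or two courses" if len(courses) <= 2 else ""
    print(f"{goal}: {', '.join(courses) or 'none'}{flag}")
```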
Take Stock of Available Assessment Information
Time is better spent if we use any assessment information that is already available to us: student evaluations, capstone project results, retention & graduation rates, student transcripts, etc. (p.103).
Set parameters
Define key terms—develop locally accepted definitions of terms including assessment, goal, standard, and rubric (p.104).
Statement of principles of good assessment practice—should describe characteristics of good assessment and the college’s commitment to fostering successful assessment practices (p.105).
Provide guidelines on what everyone is to do—gives faculty and staff clear expectations and guidance on precisely what they are to do (p.105).
Identify who beyond students might provide assessment information
Other possible sources of information regarding student learning: students’ peers, alumni, former students who leave before graduating, field experience supervisors, employers, and faculty (p.105-8).
Work out the logistics
Table 7.2 (p.109) lists some logistical questions that may need to be answered for each key learning goal in a program or curriculum. Some examples: In which courses will faculty use this assessment strategy?; From which sections will you collect examples of student work?; How do you expect to use the results?; etc.
The logistics plan needs to be flexible, which means that a single timeline for everyone across the college may not work. Faculty, staff, and students need to be involved in deciding the details (p.110).
Special challenges
Four situations may require additional considerations: general education core curricula, interdisciplinary and independently designed programs, adjunct faculty, and a curriculum that's about to change. Few general education curricula have fully developed assessment programs in place. A collection of interdisciplinary/independent courses from which students can choose often lacks curricular coherence and faculty ownership, making student learning assessment a challenge. Finding time to plan and implement an assessment plan is difficult when a fairly large number of part-time adjuncts teach general education courses. And finally, it may not make sense to begin an assessment plan for a curriculum that may drastically change or become obsolete soon (p.110-13).
Submitted by Tina Mesa.

Suskie Chap 11 pgs 167 – 179 & Chap 16 pgs 260 – 261

In Chapter 11 Linda Suskie explains the process of developing a test blueprint-- “an outline of the test that lists the learning goals that students are to demonstrate on the test.”
She also gives these arguments for developing a test blueprint:
•It helps ensure that the test focuses on the learning goals you think are most important (instead of just writing questions over the content without a plan for emphasis)
•It helps ensure that the test is not just asking for basic knowledge but also asks higher order thinking skills questions.
•It makes test creation faster
•It helps document student attainment of learning outcomes.
This chapter provides a recipe for creating a test blueprint and also provides an example of a test blueprint on page 169.
Recipe
Step 1: List the major areas that the test will cover
Step 2: Allocate fractions of the test to each of those areas to reflect the relative importance of that area (assign points or number of questions)
Step 3: Within each area, list the learning goals that you want to assess – use action verbs
Step 4: Spread the points or test questions within the area among the learning goals within that area in proportion to their importance.
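To see how steps 2 and 4 work out arithmetically, here is a small illustrative sketch (all the areas, goals, and weights are made up) that splits a test's points across areas and then across the learning goals within each area in proportion to their importance:

```python
# Illustrative sketch of steps 2 and 4 above: areas, goals, and weights are invented.
# Points are split in proportion to importance weights; rounding can shift totals by
# a point, so a real blueprint would be adjusted by hand.
def allocate(total_points, weights):
    """Split total_points among the named items in proportion to their weights."""
    weight_sum = sum(weights.values())
    return {name: round(total_points * w / weight_sum) for name, w in weights.items()}

total = 100
area_weights = {"Area A": 3, "Area B": 2, "Area C": 1}   # step 2: relative importance
points_per_area = allocate(total, area_weights)

goal_weights = {                                          # step 3: goals within each area
    "Area A": {"define key terms": 1, "apply concepts to new cases": 2},
    "Area B": {"interpret a data table": 1, "evaluate an argument": 1},
    "Area C": {"recall basic facts": 1},
}
blueprint = {area: allocate(points_per_area[area], goal_weights[area])
             for area in area_weights}                    # step 4: spread the points

print(points_per_area)  # e.g. {'Area A': 50, 'Area B': 33, 'Area C': 17}
print(blueprint)
```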

The rest of this chapter is devoted to giving nuts and bolts advice on:
•Writing Multiple Choice questions
•Avoiding Trick questions
•Writing Interpretive Exercises (an example of an interpretive exercise is one where you ask a student to read a passage and then you ask the student questions based on the reading)
•Writing Matching Items
•Writing Completion or Fill in the Blank items
These pages would be helpful for all faculty and may be a nice focus for a professional development session.

In Chapter 16 (Summarizing and Analyzing Assessment Results), pages 260 & 261, Linda Suskie advises that if a test blueprint has been written, it can be used to group the items with a common learning goal and aggregate the results for that particular learning goal. Some program areas at Palo Alto used this technique to assess program learning outcomes in Spring 10.
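Here is a small illustrative sketch of that Chapter 16 idea (the items, goals, and scores below are invented) showing how a blueprint's item-to-goal mapping lets us roll test results up by learning goal:

```python
# Invented example: the item-to-goal map, items, and scores are hypothetical.
# The idea is simply to use the blueprint's mapping to aggregate item scores by goal.
from collections import defaultdict

item_to_goal = {1: "interpret graphs", 2: "interpret graphs",
                3: "solve equations", 4: "solve equations", 5: "solve equations"}

# Each row is one student's item results (1 = correct, 0 = incorrect).
results = [
    {1: 1, 2: 0, 3: 1, 4: 1, 5: 0},
    {1: 1, 2: 1, 3: 0, 4: 1, 5: 1},
    {1: 0, 2: 0, 3: 1, 4: 1, 5: 1},
]

totals = defaultdict(lambda: [0, 0])  # goal -> [points earned, points possible]
for student in results:
    for item, score in student.items():
        goal = item_to_goal[item]
        totals[goal][0] += score
        totals[goal][1] += 1

for goal, (earned, possible) in totals.items():
    print(f"{goal}: {earned}/{possible} = {earned / possible:.0%}")
```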

Wednesday, June 23, 2010

Meeting Summary 6/23/10

We finished all the case study reports today! We also had a discussion about the questions that we want to have answers to as a result of our assessment. We kept track of some concerns/issues that came out of the discussion on the gen ed page in our wiki. One issue is which students in which courses will be the target of the assessment so that we can show "value-added" in Communication Skills, Teamwork, and Social Responsibility from the educational experience students receive at PAC. Next we looked at a model for doing the assessment that balances reliability (same outcomes) and validity (program faculty create the descriptions for the rubric categories for each outcome, and faculty choose the assignment that will be the target for the assessment).
Finally, we explored the resources on the North Carolina State Assessment site in our competency groups to get ideas for our competency outcomes that will work across disciplines.

Coker College Gen. Ed. Assessment

I. How they began


  • Tried an existing, external test first (Academic Profile). Rejected it because it did not provide answers to the questions they really wanted to ask.

  • They created gen ed goals and defined them broadly (for general acceptance by an educated audience).

  • Wanted to assess students after graduation, but fell back on embedded, pre-grad. assessments.

II. Deciding how to do the assessment

  • Tried to create a master rubric for each core skill (for example, critical thinking or effective writing), but found it difficult to reduce to a single rubric. They decided to let each instructor decide what to assess in their class and create a rubric for it. The rubric was included on their course syllabus.

III. Doing the assessment

  • Each faculty member reported their assessment, for every student, online. There were four possible ratings: remedial, fresh/soph level, jr/senior level, or graduate level. This generated thousands of assessments each year. They found indications of inter-rater reliability (instructors rating the same student for the same core skill would make similar judgements of that student).
  • ****Back to our earlier discussion of inter-rater reliability vs validity. The assessment team at Coker tried to balance the standardization which is required for reliability with the complexity which is needed for validity (see p. 51). They felt they were leaning toward validity by letting each faculty member choose their assessment, but that there were costs associated with this: what meaning did their results have for others?
  • They found reassurance in that students with higher assessments also had higher GPAs. Does this seem reasonable? Also they found a correlation with higher writing assessment scores and records of library usage.

IV. Other methods attempted: they tried an online portfolio for writing assessment but ended it; they found it too hard to create a rubric and went to more standardized writing assignments.

V. Outcomes

  • Students at all levels (i.e. freshman, sophomore...senior) were assessed. They found students closer to graduation generally had higher scores, but the assessment generated no true longitudinal data. Would that have value?
  • Different majors had different average assessment scores.
  • Evening students assessed at different levels than daytime students.

VI. Using the results

  • Created a new writing program, funded a writing center, created an online portfolio system for students (revisiting the portfolio concept, later re-introduced into their assessment plans).
  • Added new assessment methods: exit surveys, alumni surveys, portfolio review, National Survey of Student Engagement and the Faculty Survey of Student Engagement.

VII. Their tips for success

  • Make it easy to use (they still like the ease of the faculty reporting system).
  • Four levels was an inadequate scale. They are considering going to a 1-10 scale or using a sliding scale.
  • They would like to close "the communication loop" with students without giving them their individual ratings. There was mention of some use of their new portfolio system, but I could not determine what in fact they were doing (see bottom of p 56).
  • Their internal assessment method makes it difficult to compare themselves with their peers, but standardized assessments were oversimplified for the purposes of improving student learning. Still unresolved...

Tuesday, June 22, 2010

Today's Meeting Summary

Today we worked in our competency groups and started collecting ideas on our workteam's wiki page for student learning outcomes for our competency.
Larry Rodriguez provided us with a summary of the Raymond Walters CC Case Study and Yolanda Reyna provided us with a summary of the Miami Dade CC Case Study.
Larry provided a handout from the Northern Virginia CC --"Performance Assessment, Authentic Assessment, and Primary Trait Analysis" -- I will bring the copies of this handout to tomorrow's meeting.
Yolanda shared with us a PowerPoint on Miami Dade's approach. She also shared a video where Miami Dade faculty talked about Outcomes Assessment and how it applied to their courses or student services area. We are going to put both of these in our "Resources not in the Binder" folder in our Wiki!

Blackburn College Case Study

1. What are the details of the Gen Ed Assessment Process for your case study college?
• Faculty clusters create a common rubric.
• Individual faculty have the flexibility to develop the assignment that will be assessed
• The rubric scores are recorded on a form and sent to IR

2. What practices, if any, might be good for us to adopt at PAC?
I like the idea of interdisciplinary faculty clusters owning a competency – this may be something we can implement when we slow down and do just one or two competencies a year. I really like the idea of the assignment that will be the focus of the assessment being developed by departments or possibly individual faculty -- this will help with faculty buy-in.

TCU and Alverno

TCU

1. What are the details of the Gen Ed Assessment Process for your case study college?

In order to develop an appropriate and meaningful assessment for each of the six categories within the curriculum, a faculty learning community (FLC) was funded to create and maintain an appropriate assessment strategy for each of the categories and to share the results of the assessment process with faculty who teach in each category.

A rubric was developed to look across courses and items to determine how well students had met the outcome. Each FLC developed specific measures for their core curriculum outcome; therefore, findings are different for each area. Members of each FLC used these findings to reevaluate and articulate more effectively how their courses met the outcomes and to reexamine how their different courses have a common theme that makes the courses appropriate for this core curriculum. The findings for one area of the core could have an indirect impact on other areas. As a result of the core curriculum assessment, students learn how a specific course is actually a part of the larger core curriculum.

2. What practices, if any, might be good for us to adopt at PAC?

The focus of the FLCs is more on what students are learning than on what the faculty are teaching. A learner-centeredness rubric was developed and applied to course syllabi. Professional development is a key element in pushing forward the learner-centered agenda.

Alverno

1. What are the details of the Gen Ed Assessment Process for your case study college?

There are two assessments of gen ed: the first is the assessment of individual students with regard to the outcomes for gen ed; the second is the assessment of how well gen ed is working. The data from the in-class assessments and the external assessments were used collaboratively to develop and refine the gen ed curriculum. It is concentrated in the first two years, where students develop and demonstrate the eight abilities (communication, analysis, problem solving, valuing in decision-making, social interaction, developing a global perspective, effective citizenship, and aesthetic engagement) through the intermediate level required of all graduates. All gen ed courses as well as the introductory courses in the majors assure that students demonstrate the eight abilities.

2. What practices, if any, might be good for us to adopt at PAC?

Faculty across disciplines work together to identify the learning outcomes and to develop meaningful in-class as well as external assessments of those outcomes. The in-class assessments support student learning and improve teaching. The gen ed learning outcomes are evaluated in an ongoing way, and the results feed back into the design of the program.

Monday, June 21, 2010

Folks,
Here's the ENGL0301 grading rubric. All students enrolled in 0301 take the 'Exit Exam'. For most classes this is done in two 1-hr. classes (same as the regular class times). The first class they pre-write in a little blue exam book. The second class they type, format, & proof. The exams are then graded by a group of English 0301 instructors who gather in a classroom. Each exam is scored according to the rubric below by 2 graders (neither of whom is to see the other's grade). If there is a wild discrepancy in the grading, a third instructor will grade the exam and the higher 2 scores will be counted.
Exams, blue books, and scores are then returned to the class instructors who enter the grades. Those who fail the exit exam receive an "IP" no matter their grade for the course up to that time.
Those whose exam scores are 'borderline' are reviewed taking into account their work throughout the course and may end up passing.
Here's the rubric:
English 0301 Exit Exam Rubric & Scoring Guidelines

4 A well-formed essay that effectively communicates a whole message to a specified audience.
• a unified and developed topic maintained throughout the essay
• a clearly stated thesis
• controlled development of ideas; clear, specific, and relevant supporting details
• effective sentence structure, free of errors
• precise and careful word choice
• mastery of mechanical conventions, such as spelling and punctuation

3 An adequately formed essay that attempts to communicate a message to a specified audience.
• clear focus and purpose
• partially developed supporting details
• ambiguous, incomplete, or partially ineffective organization
• minor errors in sentence structure, usage, and word choice
• some mechanical errors, such as spelling and punctuation

2 A partially developed essay in which the characteristics of effective written communication are only partially formed.
• unclear statement of purpose
• lack of focus on the main idea
• largely incomplete or unclear development and/or organization
• poorly structured sentences with noticeable and distracting errors
• imprecise word choice and usage
• lack of control of mechanical conventions, such as spelling and punctuation

1 An inadequately formed essay that fails to communicate a complete message.
• inappropriate language and style for the given purpose or audience
• no clear statement of a main idea
• inconsistent or ineffectual supporting detail
• ineffective organization
• ineffective sentence structure
• imprecise word choice and usage
• lack of control of mechanical conventions, such as spelling and punctuation


IMPORTANT: Instructors do NOT score their own students’ exams. Instead, two other English 0301 instructors will each read the exam and assign a score based on the rubric above. Half point increments are permitted. The scores are combined, and the student must receive a combined score of 5 or higher in order to pass the exit exam. If a student receives a 4 or 4.5, the student’s instructor may jury him/her out of the course based upon the student’s portfolio. This is at the instructor’s discretion.

***Students are required to submit a portfolio to the instructor before taking the exam.
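For clarity, here is a rough Python sketch of the scoring rules described above. The post does not say exactly what counts as a "wild discrepancy," so the 2-point cutoff below is an assumption; the half-point scores, the combined score of 5 to pass, the 4–4.5 portfolio-jury zone, and counting the higher two of three scores all follow the description in the post.

```python
# Rough sketch of the ENGL 0301 exit-exam scoring rules described in this post.
# The "wild discrepancy" cutoff of 2.0 points is an assumption; everything else
# follows the description above.
def exit_exam_result(score_a, score_b, score_c=None, discrepancy_cutoff=2.0):
    scores = [score_a, score_b]
    if score_c is not None and abs(score_a - score_b) >= discrepancy_cutoff:
        scores = sorted([score_a, score_b, score_c])[-2:]  # count the higher two scores
    combined = sum(scores)
    if combined >= 5:
        return "pass"
    if combined in (4, 4.5):
        return "borderline -- portfolio jury at instructor's discretion"
    return "IP"

print(exit_exam_result(3, 2.5))      # pass (combined 5.5)
print(exit_exam_result(2, 2.5))      # borderline (combined 4.5)
print(exit_exam_result(1, 3.5, 3))   # third reader: 3.5 + 3 = 6.5 -> pass
print(exit_exam_result(1.5, 2))      # IP (combined 3.5)
```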

Wednesday, June 16, 2010

June 16th Meeting Summary

We had a lively meeting focused on aspects of gen ed assessment at Tompkins CC and CC of Baltimore Co. today.

We summarized the Gen Ed assessment process for each of the case study colleges and we discussed possible practices for PAC.

We will keep track of our thoughts about the case studies as we review them on the "Case Studies Ideas and Reactions" page in our Wiki http://pacassesscop.pbworks.com

Tuesday, June 15, 2010

"What is Assessment?" Suskie Chap 1

Here are some ideas that I thought were helpful in this Chapter:

Page 4 Table 1.1 – The Assessment Cycle
- Establish clear, measurable expected student learning outcomes
- Make sure students have sufficient opportunities to achieve the outcomes
- Systematically gather, analyze, and interpret evidence to determine how well student learning matches our expectations
- Use the resulting information to understand and improve student learning

Suskie also makes these points about the contemporary way to think about assessment:
Assessment promotes aligned goals – a coming together collaboratively on what we want students to learn (PAC faculty did this with Program Assessment Spring 10)
Assessment is used to make decisions about changes in curricula and teaching methods as well as to evaluate and assign grades – especially if course assessments align with the program goals, gen ed goals, institutional goals. When I studied the Program Review documents for the last 6 years at PAC, there was a lot of evidence that Program Review motivated changes in curricula and teaching methods.
Assessment is used to distinguish our college – and show how successful we are in meeting students’ and societal needs. PAC is very interested in becoming known as a place where students are successful!

As Suskie talks about assessment at the course level, she suggests that it may be helpful if there were some common test questions, or assignments with common criteria to evaluate common goals across courses. I believe that some program areas are doing this at PAC already and we may want to capitalize on this for our gen ed assessment.

Many PAC program areas assessed using embedded course assignments and capstone experiences in Spring 10.

Suskie discusses three approaches to gen ed assessment:
1. Let faculty identify their own embedded assessments – I call this "Embedded All the Way"
2. Use collegewide assessment (like a portfolio, published test, capstone requirement)- I call this "College-Wide All the Way"
3. Faculty teaching courses in a group of related disciplines or subjects identify a common assessment strategy -- I call this "Faculty Cluster"

Suskie discusses the difference between assessment and grading:
- Grades focus on individual students while assessment focuses on cohorts of students and how effectively everyone (not an individual faculty member) is helping the students learn.
- Grades do not usually provide meaningful information on exactly what students have and have not learned.
- Grades might include student behaviors that may not be related to course learning goals.
- Grading standards might be vague or inconsistent.
- Grades don't reflect all learning experiences.

June 15th Session Summary

We had some awesome discussions in our session today!
We looked at the assessment work that had been done at PAC Spring 10 to identify some attributes that we want to build from --
In the Core Reaffirmation Report we justified that our students had an opportunity to learn the core outcomes using the matrices that cross-walked our course outcomes to the core outcomes (we want to keep this in mind for the future and make sure that we have a mechanism in place to do this for the new gen ed assessment effort that we will launch -- an easy way that I have seen is to include the core outcome(s) in the syllabi of core courses). Another thing that we did in the Core Reaffirmation Report was to use multiple measures -- CCSSE data, Program Review data, Unit Planning data, and High Risk Course data.
In the Written Communication Rubric and the Speech Rubric developed and used for Program Assessment this semester, we already have an idea of the student learning outcomes and criteria that we want to consider for our gen ed assessment this Fall.

A second thing that we talked about was the three approaches to gen ed assessment, and we pretty much ruled out the "College-wide All the Way" approach. We talked about the "Embedded Assessment" approach and the "Faculty Cluster" approach -- we seemed to be leaning toward using Faculty Clusters with Embedded Assessment.

We touched upon resources for looking at crafting Student Learning Outcomes but ran out of time to get much done on these. Tomorrow we will have the work groups start to brainstorm a student learning outcome for their competency.

Chap. 2: How Can Student Learning Be Assessed?

The main focus of this chapter is to provide a glossary of some of the key terms used to describe various assessment tools and strategies. The author begins by defining direct and indirect evidence (see definitions below). She makes it clear that a proper assessment of student learning should not consist of indirect evidence alone (course grades, test grades, assignment grades, etc…); any assessment effort should be diverse, using both direct and indirect evidence.
The author then defines summative assessment (the kind obtained at the end of a course) and formative assessments (those undertaken while student learning is taking place). The main point here is that, while summative assessment is important to determine whether or not students are graduating with the competencies you want them to have and is important in regards to satisfying external audiences such as accreditors, employers, and policymakers, it is necessary to apply formative assessment in order to provide students with prompt feedback on their strengths and weaknesses throughout the semester. Overall, in order to properly assess our students, assessment of student learning needs to be an ongoing process throughout the semester.
The author concludes the chapter by defining various types of assessments: traditional, performance, authentic, embedded, add-on, local, published, quantitative, qualitative, objective and subjective. I found two points particularly relevant: (1) When it comes to implementing various types of assessment, student ‘buy-in’ is just as important (if not more important) as faculty ‘buy-in’. It is critical that we convince our students that it is important (for both themselves and the college) to participate in the assessment process; particularly with assessment ‘add-ons’ that may not directly influence their grades. (2) The assessment process is not a ‘one-size-fits-all’ process. Each program needs to determine which form of assessment will work best for their students and their program objectives.

Given the fact that one of our group objectives is to develop a common vocabulary, I felt that it was pertinent (necessary!) to provide everyone with a list of terms defined in this chapter. (Please pardon the blatant use of plagiarism.)
- Direct Evidence: evidence of student learning that is tangible, visible, self-explanatory, and compelling evidence of exactly what students have and have not learned. The kind of evidence that a skeptic would accept. Examples: scores and pass rates on appropriate licensure or certification exams; capstone experiences such as research projects, presentations, theses, etc…; presentations; portfolios; various other examples given on pg 21.
- Indirect Evidence: proxy signs that students are probably learning. Examples: course grades; assignment grades; retention and graduation rates; various other examples given on pg 21.

- Summative Assessment: assessment obtained at the end of a course or program
- Formative Assessment: assessment undertaken while student learning is taking place rather than at the end of a course or program.

- Traditional Assessments: tests that are designed to collect assessment information: multiple-choice tests, essay tests, and oral examinations. Students typically complete traditional assessments in controlled timed examination settings.
- Performance Assessments (Alternative Assessments): ask students to demonstrate their skills rather than relate what they’ve learned through traditional tests. Writing assignments, projects, laboratory and studio assignments, and performances are examples.
- Authentic Assessments: performance assessments that ask students to do real-life tasks, such as analyze case studies, conduct realistic laboratory experiments, or complete internships.
- Embedded Assessments: program, general education, or institutional assessments that are embedded into course work.
- Add-on Assessments: assessments that go beyond course requirements such as assembling a portfolio throughout a program or taking a published test or participating in a survey or focus group.
- Local Assessments: those created by faculty and staff at a college
- Published Assessments: those published by an organization external to the college and used by a number of colleges.
- Quantitative Assessments: assessments that use structured, predetermined response options that can be summarized into meaningful numbers and analyzed statistically. Examples: test scores, rubric scores, survey ratings, and performance indicators.
- Qualitative Assessments: assessments that use flexible, naturalistic methods and are usually analyzed by looking for recurring patterns and themes. Examples: reflective writings, online class discussion threads, and notes from interviews, focus groups, and observations.
- Objective Assessments: assessment that needs no professional judgments to score correctly.
- Subjective Assessments: assessments that yield many possible answers of varying quality and require professional judgment to score.

Patrick's Post facilitated by Pat while we both learn blog ins and outs!

Monday, June 14, 2010

Suskie Ch. 3: What is GOOD assessment????

The bottom line of this chapter is that if you really want to do good assessment you will do it often and in a number of different ways. It must be ongoing and take into account the learning styles, mental competencies, and atmospherics of students. No assessment is completely accurate. Faculty sharing their assessments/ideas is a good thing....

Planning, Tips for Implementing, and Embedded Assessment

Palomba "Planning for Assessment" Reading

This article advised that we need to think about who will be responsible for the assessment activities in each program area. This piece echoes the need to have a statement about the purposes of assessment (we started a working draft in our wiki last session!). This document suggests that we make a planning matrix showing the types of assessment activities that will be used and the schedule for the assessment activities -- an idea that we will want to revisit once we figure out what we are doing! The article suggests that we set up a matrix with a separate row for each expected competency and columns that show the data needed for assessment, the group that will be assessed, the assessment method, the individual responsible for conducting the assessment, and the timeline -- we had a similar scheme for program assessment this Spring, only it was a document set up vertically instead of horizontally. “One of the best initial steps that faculty can take when selecting assessment techniques is to discuss thoughtfully the characteristics of methods that matter to them. Issues of technical quality, convenience, timeliness, and cost will likely dominate that discussion. The value that an assessment activity has for students also is important." -- we will have discussions like this periodically from now on as we design and look at assessment results and determine next steps.
A final take-away for me from this article was a listing of assessment practices that contribute to learning:
setting high expectations, creating synthesizing experiences, promoting active learning and ongoing practice of learned skills, encouraging collaborative learning, and providing assessment with prompt feedback.

"Tips for Implementing the Process" Reading

These tips all seem worth thinking about:
- Establish faculty buy-in before beginning the assessment process
- Start small
- Involve the faculty who teach the courses including adjuncts
- Reward faculty leaders and/or faculty who are involved in assessment
- Provide workshops to assist faculty in developing assessment plans – especially faculty-to-faculty workshops
- Provide support with data collection and data analysis


"Embedded Assessment" Reading

This article gives concrete examples of what we might suggest to groups of faculty for assessment of gen ed.
This article also affirmed the approach used by many program areas this Spring 10:
- Fieldwork activities, service-learning activities
- Exams and parts of exams
- Homework assignments
- Oral presentations
- Group projects and presentations
- In-class writing assignments and Learning Journals
- Etc.

The article also gives insight on the strengths and weaknesses of embedded assessment – it advises that if we use embedded assessment, then we need to agree on a grading scheme.

The article advises that some assignments used for embedded assessment may need some minor tweaking and that in general we should embed the assessment in more than one course taught by more than one instructor so that information from the assessment can be generalized. The article also suggests that we pilot test embedded assessment assignments and scoring guides to ensure that the learning outcome being assessed is appropriately targeted.

The concluding examples of colleges' use of embedded assessment didn't seem too helpful to me.

Thursday, June 10, 2010

Peter Ewell's Assessment, Accountability, and Improvements: Revisiting the Tension

This paper was incredibly helpful!
I think the table on page 8 of this article is a great way to see the difference between the "Accountability Paradigm" and the "Improvement Paradigm".
This paper does an excellent job of explaining the sources of the tension between the two paradigms.
The best part of this paper for me began on page 14 -- "Managing the Tension". Here are some ideas Ewell gives us for doing this:
1. "Setting learning objectives rests with the institutions themselves. See assessment as part of our accountability to ourselves – like researches embrace peer review to maintain scholarly integrity."
2. "Show action on the results of assessment." Ewell points out that our competitive position is slipping internationally. He suggests that we need to be ready to report actual learning outcomes in comparative or benchmarked forms as well as being transparent about internal efforts for continuous improvement. He admonishes us to close the loop on assessment results. He gives some tricks of the trade to use assessment for improvement – think about this as a part of the design in the beginning (page 16 --I think those Suskie questions in Chapter 4 might helps us here!) We need to craft specific questions that we want answers to!!! What do we expect the data to reveal? What might be the action that we take as consequences of this or that result? He suggests that we also consider disaggregation of the results for specific populations or outcomes dimensions (this might be too much to hope for on our first attempt but as we get better, we will want to think about this one). He advises us to create concrete opportunities for faculty to look at the results together and discuss what the data mean and consider the action implications (We are going to do this 8/17/10). He tells us that the learning objectives must be inescapable – on syllabi, in catalogues, and visible in the criteria that faculty use to assign grades. This one definitely needs some attention...
3. "Emphasize assessment at transition points in the students’ college careers. Exit tests out of DS. Assessment at the conclusion of a Program (degree)" -- we are doing this -- yea!!!
4. "Embed assessment in the regular curriculum. Assignments need to be carefully designed to elicit responses appropriate for consistent scoring, scoring rubrics need to be developed that yield reliable ratings across graders, and a mechanism needs to be in place to assemble and store the artifacts themselves." We are going to work on this one!

Lots of wonderful ideas for balancing the tension!

June 10th Meeting Summary

We had an awesome meeting even if my Bob the Builder rock song version was drowned out by my Pandora folk guitar music!
I think everyone got into the Wiki and almost everyone got into the Blog by the time we left.
We had a great discussion about the tension between assessing for accountability and assessing for improvement, and everyone is going to add their ideas about 'locating the value of assessment for us' on our Welcome Wiki page!

I created a Wiki page with a draft timeline to give everyone an idea of how I see the summer going. I also uploaded the latest draft of the Assessment Training file that gives the dates and the locations of our meetings.

The inaugural reading leaders were fantastic! Thanks Beth, Sabrina, and Abel!

We are off to a great start!

Tuesday, June 8, 2010

Chap 4: Why Are You Assessing Student Learning?

Well, that is the question we have all been asking ourselves now isn't it??? Go ahead, nod emphatically....You can be honest!

In this chapter, Linda Suskie offers us two reasons to assess student learning: One, we are assessing student learning so that we can improve our teaching, our programs, our planning... and two, we are assessing student learning so that we are accountable to state and government agencies and to our students, and because it helps us to validate our programs and their effectiveness. The chapter goes on to give many and various examples of these two reasons, but I think you get the picture.

I can see us thinking about this in terms of improving student retention, improving quality of teaching, improving our professional development offerings by assessing student learning. For example, we improve student retention by employing more self-directed learning in the classroom -- actively involving the students in their own learning -- assessing that learning formatively and immediately adjusting our instruction as necessary (reviewing material again, offering the material to be learned using another method, etc) to ensure that students have learned the material. Our students get immediate feedback on their work, take a larger part in/responsibility for their own learning and therefore are more engaged in their courses. Thus, they are retained in our courses, we have all improved our quality of teaching and have learned something ourselves through persevering through the "assessment learning curve."

"Assessing Student Learning in General Ed," Paradise Valley Examples

Paradise Valley Community College is in the Maricopa County (in Arizona) Community College District. This reading is an excerpt from a larger article, and it begins with a brief mention of the restructuring the college went through to become learning centered. Like us, their high enrollment is in gen. ed. classes, and so they want to emphasize assessment of student learning in that context; but they also want to assess the student learning done outside of the classroom, and student support areas have also been engaged in forming/assessing learning outcomes.



There follows an overview of the General Education program. For the Maricopa County CCD gen ed program there are seven "essential knowledge and skills" that are taught across the gen ed curriculum:


  • communication

  • arts and humanities

  • numeracy

  • scientific inquiry

  • information literacy

  • problem-solving and critical thinking

  • cultural diversity

The article also lists twelve skills and qualities students should have as a result of the gen ed curriculum, and they read like specific learning outcomes (see p 128).


From the Maricopa County gen ed, Paradise Valley CC has distilled their own gen ed goals. They have embraced critical thinking overall, and they list six specific learning outcomes for this (convenient to review when we turn to assessing critical thinking). They divided their gen ed into four areas, all of which will contribute to the common goal of these critical thinking skills. The four areas are:



  • communication

  • information literacy

  • problem solving

  • technology (new to any list)

It is not completely clear to me how it all flows together; each of these areas has its own learning outcomes, which I found on their web site and printed out for us all (you haven't enough to read...thank me later).


There is a paragraph next which mentions the collaborative role of student support professionals; assessment of out-of-class learning is still a work in progress.


The last piece of the article is perhaps most pertinent to us. It describes briefly their assessment of the gen ed program, and discusses first why they do it. There is nothing surprising in their list of reasons (see p. 131).


The structure of their assessment process is the following:

I The Academic Assessment Team (AAT): they developed a multiyear assessment plan and completed a course mapping matrix, linking gen ed courses with gen ed outcomes (as we did for the core curriculum)

II General Assessment Teams: There is one for each of the four gen ed areas (see list above), and their duties are to research best practices, create learning outcome rubrics and scoring guides, approve the list of courses for initial implementation of assessment, conduct interdisciplinary discussions w/colleagues, and assist w/college-wide training.

These teams work on an annual cycle:

Fall--Assessment planning (cross disciplinary rubrics for each area, three level scale established (meets, does not meet, exceeds standard), courses identified, faculty trained in use of the rubrics)

Spring--Assessment

Summer--data analysis

Where PVCC is now: they've completed the first round of assessments using 89 class sections, and faculty are developing strategies to improve learning based on these results.

Relevance to our plans: The overall structure is a decent snapshot of a whole-college plan for assessing routinely. They, too, are using a subset of their classes, and clearly cross-disciplinary discussions will be needed. But it is a bare-bones picture where one suspects the devil is in the details. I am interested in their plans to assess learning outcomes out of the classroom.

Sunday, June 6, 2010

Reflections on Dr. Millis' Workshop

I just gassed up at Walmart -- and I noticed that there was an offer from Walmart to enter me into a drawing to receive a $100 gift card if I would provide feedback to them on-line using the information on the back of the receipt to access their survey. It occurred to me that 'assessment' is pretty standard practice in the business world these days...
Dr. Millis' workshop to kick off our assessment focus this summer provided me with plenty to think about -- here are some ideas she shared that I believe relate to our assessment work:
  • She shared the Assessment Process -- formulate outcomes, develop or select assessment measures, create experiences to promote learning of the outcomes, assess and use results to improve learning
  • She tied this cycle to Wiggins and McTighe's Backward Design -- identify desired results, determine acceptable evidence, plan learning experiences and instruction
  • She shared Fink's Key Components of Curricular Design (a Backward Design approach) and Fink's Taxonomy of Significant Learning. One of the elements in particular in the Taxonomy resonates with me: "Learning how to learn" -- this is the ultimate goal and involves students learning how to assess their own learning.
  • She shared Ron Carriveau's three-tiered model for writing student learning outcomes -- Communication Skills, Teamwork, and Social Responsibility are our general goals. We need to develop general outcomes and specific outcomes (these may be aspects of our rubric or test item blueprints that we may choose to develop as a part of our plan).
  • She advised us to share our student learning outcomes in our syllabus (I ordered her Syllabus book today!) to create a learning-centered approach -- we should consider this advice once we develop our outcomes for our three goals.
  • She explained the difference between summative and formative assessment and she emphasized the importance of formative assessment to motivate learning - both learning for the student and learning for the teacher. This ties back to Fink's "learning how to learn". It also connects to the Three Key Learning Principles from "How People Learn: Brain, Mind, Experience and School". The idea that "people construct new knowledge and understandings based on what they already know and believe" and the "metacognitive approach to instructions to help students learn to take control of their own learning by defining learning goals and monitoring progress in achieving them" particularly connect with our work on assessment.
  • She shared with us information on Rubrics

What I really enjoyed about Dr. Millis' workshop was the skillful way she connected Backward Design, Fink's Course Design principles, models, and tools, Ron Carriveau's information on writing student learning outcomes, the learning principles from "How People Learn", Angelo and Cross' CATS, and information on creating rubrics -- all using collaborative learning techniques!

So how does all this connect to our work this summer? As a community college math professor, I routinely assessed my students. I firmly believe that "if you want students to learn it, you have to assess it". We assess students to help them improve and deepen their learning -- with the hope that students will eventually self-assess to monitor and improve learning. All this assessment practice should also apply to us -- if assessment is a powerful way to help students learn how to learn, then the same learning principle should apply to all of us as we attempt to improve and learn as educators. The assessment movement is alive and well at the Walmart gas pumps, Target, Kohl's, etc. because these businesses desire to use feedback to improve. That is the power of assessment. We will also harness the power of assessment to learn and improve too as we work to help our students become better learners.