Wednesday, June 23, 2010

Coker College Gen. Ed. Assessment

I. How they began


  • Tried an existing, external test first (Academic Profile). Rejected it because it did not provide answers to the questions they really wanted to ask.

  • They created gen ed goals and defined them broadly (for general acceptance by an educated audience).

  • Wanted to assess students after graduation, but fell back on embedded, pre-graduation assessments.

II. Deciding how to do the assessment

  • Tried to create a master rubric for each core skill (for example, critical thinking or effective writing), but found it difficult to reduce each skill to a single rubric. They decided to let each instructor decide what to assess in their class and create a rubric for it. The rubric was included on the course syllabus.

III. Doing the assessment

  • Each faculty member reported their assessment, for every student, online. There were four possible ratings: remedial, freshman/sophomore level, junior/senior level, or graduate level. This generated thousands of assessments each year. They found indications of inter-rater reliability (instructors rating the same student on the same core skill made similar judgments of that student).
  • Back to our earlier discussion of inter-rater reliability vs. validity. The assessment team at Coker tried to balance the standardization required for reliability with the complexity needed for validity (see p. 51). They felt they were leaning toward validity by letting each faculty member choose their own assessment, but there were costs associated with this: what meaning did their results have for others?
  • They found reassurance in that students with higher assessments also had higher GPAs. Does this seem reasonable? They also found a correlation between higher writing-assessment scores and records of library usage. (A rough sketch of how such agreement and correlation figures might be computed follows this list.)
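
As a side note on the checks above: here is a minimal sketch, in Python, of how two-rater agreement and a score-GPA correlation could be computed. The ratings, GPA values, and names like ratings_a, ratings_b, levels, and gpas are hypothetical illustrations, not Coker's actual data or reporting system; the sketch just shows percent agreement, Cohen's kappa on the four-level scale, and a Pearson correlation.

    from collections import Counter

    # Hypothetical ratings from two instructors for the same ten students, on the
    # four-level scale (1 = remedial, 2 = fresh/soph, 3 = jr/sr, 4 = graduate).
    ratings_a = [2, 3, 3, 1, 4, 2, 3, 2, 4, 3]
    ratings_b = [2, 3, 2, 1, 4, 2, 3, 3, 4, 3]

    def percent_agreement(a, b):
        """Share of students to whom both raters gave the same level."""
        return sum(x == y for x, y in zip(a, b)) / len(a)

    def cohens_kappa(a, b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(a)
        observed = percent_agreement(a, b)
        counts_a, counts_b = Counter(a), Counter(b)
        # Agreement expected if each rater assigned levels independently.
        expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(a) | set(b))
        return (observed - expected) / (1 - expected)

    def pearson_r(xs, ys):
        """Pearson correlation between two equal-length numeric sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical assessment levels and GPAs for the same ten students.
    levels = [2, 3, 3, 1, 4, 2, 3, 2, 4, 3]
    gpas   = [2.6, 3.4, 3.1, 2.2, 3.8, 2.9, 3.3, 2.7, 3.9, 3.2]

    print("percent agreement:", percent_agreement(ratings_a, ratings_b))
    print("Cohen's kappa:    ", round(cohens_kappa(ratings_a, ratings_b), 3))
    print("level-GPA r:      ", round(pearson_r(levels, gpas), 3))

Kappa corrects raw agreement for the agreement two raters would reach by chance, which matters when most students cluster in one or two rating levels.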

IV. Other methods attempted: they tried an online portfolio for writing assessment, but ended it. They found it too hard to create a rubric and moved to more standardized writing assignments.

V. Outcomes

  • Students at all levels (i.e., freshman, sophomore ... senior) were assessed. They found students closer to graduation generally had higher scores, but because the same students were not tracked over time, the assessment generated no true longitudinal data. Would that have value?
  • Different majors had different average assessment scores.
  • Evening students were assessed at different levels from daytime students.

VI. Using the results

  • Created a new writing program, funded a writing center, and created an online portfolio system for students (revisiting the portfolio concept, which was later reintroduced into their assessment plans).
  • Added new assessment methods: exit surveys, alumni surveys, portfolio review, the National Survey of Student Engagement, and the Faculty Survey of Student Engagement.

VII. Their tips for success

  • Make it easy to use (they still like the ease of the faculty reporting system).
  • The four-level scale proved inadequate. They are considering moving to a 1-10 scale or using a sliding scale.
  • They would like to close "the communication loop" with students without giving them their individual ratings. There was mention of some use of their new portfolio system, but I could not determine what in fact they were doing (see bottom of p. 56).
  • Their internal assessment method makes it difficult to compare themselves with their peers, but they found standardized assessments too simplified to be useful for improving student learning. Still unresolved...
