05_22_2008 EvalSys Team Meeting (Adobe Connect)
Sakai Course Evaluation Meeting
05/21/2008
Attendees:
Sean, U-M
Dick, U-M
Aaron, Cambridge
Hattie, Cambridge
Ellen, Maryland
Rick, Maryland
Seth, Columbia
Objective: Update the Sakai community on recent pilot results and discuss proposed future work to ensure technical alignment
Agenda:
0. Update on what's in 1.2 (5 minutes, Aaron)
   - Multiple choice & multiple answer
   - Extra blocking support
   - Reporting "fixed up" (html & Excel, CSV, PDF export)
   - Ad hoc groups rough around the edges
   - Ability to reopen evaluations; close early
   - copying items & scales
   - commenting on multi choice; multi-scale
   - hierarchy improvements
   - email templating
   - mark items to include in a set (tagging; building block for full hierarchy)
      - not yet in UI
   - Hattie to work on internal docs; not necessarily for end-user use
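
A minimal data-model sketch of the item-tagging idea above, assuming hypothetical Item and itemsInSet names (not the actual EvalSys model): each item carries a set of tag strings, and a "set" is simply every item marked with a given tag.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative only: Item and itemsInSet are hypothetical stand-ins,
    // not the real EvalSys data model.
    public class ItemTagging {

        static class Item {
            final String text;
            final Set<String> tags = new HashSet<>();
            Item(String text) { this.text = text; }
        }

        // A "set" is every item marked with the given tag.
        static List<Item> itemsInSet(List<Item> allItems, String tag) {
            List<Item> result = new ArrayList<>();
            for (Item item : allItems) {
                if (item.tags.contains(tag)) {
                    result.add(item);
                }
            }
            return result;
        }
    }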
1. Update on U-M's Fall planning: requirements review and design implications (30 minutes, Sean & Dick)
- Project overview
   - Fall commitment (begins in early Sept. with several eval windows, including end of term; runs Sept.-Dec.)
     - 6-10x increase over the pilots
       - (30k possible evaluations, 8,300 submitted; 7k possible participants, 3k submitted)
      - Dates:
         - Dev through end of June; QC (functional testing) to follow
         - Capacity testing starting mid-July
   - Group definition requirement deferred for now
   - Technical changes:
      - Email handling (Dick)
         - One email per student (see the sketch after this list)
   - Performance monitoring (Dick)
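
A rough sketch of the one-email-per-student change, using hypothetical EvalAssignment and sendEmail stand-ins (the real code would go through Sakai's email service): pending notifications are grouped by student so each student gets a single digest instead of one message per evaluation.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative only: EvalAssignment and sendEmail are hypothetical
    // stand-ins for the real EvalSys/Sakai services.
    public class StudentEmailBatcher {

        static class EvalAssignment {
            final String studentEmail;
            final String evalTitle;
            EvalAssignment(String studentEmail, String evalTitle) {
                this.studentEmail = studentEmail;
                this.evalTitle = evalTitle;
            }
        }

        // Group pending assignments by student and send one digest each.
        public void notifyStudents(List<EvalAssignment> pending) {
            Map<String, List<String>> byStudent = new HashMap<>();
            for (EvalAssignment a : pending) {
                byStudent.computeIfAbsent(a.studentEmail, k -> new ArrayList<>())
                         .add(a.evalTitle);
            }
            for (Map.Entry<String, List<String>> e : byStudent.entrySet()) {
                sendEmail(e.getKey(),
                          "You have " + e.getValue().size() + " open evaluation(s)",
                          String.join("\n", e.getValue()));
            }
        }

        private void sendEmail(String to, String subject, String body) {
            // Placeholder; a real implementation would use Sakai's EmailService.
            System.out.printf("To: %s%nSubject: %s%n%s%n%n", to, subject, body);
        }
    }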
2. Updates from each institution on plans and any resource changes coming in the future (10 minutes)
   - Maryland: considering adding a dept-level hierarchy for Fall
   - Maryland: looking to hire a Sakai programmer (end-of-summer target?)
   - Columbia: "keeping an eye on things" for the moment
   - Cambridge: pilot OK, but lost some data
      - Go live with 1.2 in October
      - Desire to polish the templates and UI over the summer
3. Update on the Maryland pilot (Ellen, 10 minutes)
   - The assessment office entered the evaluations and manually created the hierarchy
   - Colleges added their own questions; instructor items were duplicated for each instructor
   - 15 university-wide questions
   - 5,294 courses; 123k possible evals
   - 76,000 completed (62%, just below the Fall rate of 63%)
   - A second eval run was needed due to missed courses
   - 12,430 evals on the busiest day; 2,647 in the busiest hour
   - Reminders every 4 days; daily over the last 3 days (see the sketch after this list)
   - Lower response rate for schools with instructor-added questions
      - Survey fatigue?
      - Worst case was a cross-listed course (3 instructors + institutional questions)
         - 69 questions; evaluations get long fast
   - Ran for 2 weeks
   - Summer sessions are 1 week each, using the same version of the software
   - Initial email handled with an external mail app; reminders sent via Sakai
      - Modified the email reminder for the last message
   - A power outage took down the eval system; reminders were resent
   - Very few help desk tickets
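
A small sketch of the reminder cadence described above (every 4 days, then daily over the last 3 days), assuming it can be computed from the window's open and close dates; the ReminderSchedule class and its date logic are illustrative, not the actual EvalSys scheduler.

    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: computes reminder dates for an evaluation window
    // using the cadence from the Maryland pilot.
    public class ReminderSchedule {

        public static List<LocalDate> reminderDates(LocalDate open, LocalDate close) {
            List<LocalDate> dates = new ArrayList<>();
            LocalDate dailyStart = close.minusDays(2); // last 3 days, incl. close
            // Every 4 days until the daily window begins.
            for (LocalDate d = open.plusDays(4); d.isBefore(dailyStart); d = d.plusDays(4)) {
                dates.add(d);
            }
            // Daily reminders over the final 3 days.
            for (LocalDate d = dailyStart; !d.isAfter(close); d = d.plusDays(1)) {
                dates.add(d);
            }
            return dates;
        }

        public static void main(String[] args) {
            // A two-week window, as in the pilot.
            System.out.println(reminderDates(LocalDate.of(2008, 5, 5),
                                             LocalDate.of(2008, 5, 18)));
        }
    }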
4. Next Steps (Sean, 5 minutes)
- Do we need another meeting?
- Maryland technical performance issue
   - What's been done about the performance issues Maryland observed?
   - In 1.1; currently only on the Maryland branch; need to resolve the technical approach
- Conference presentation details (if known)
- Best source of info on ad hoc evaluation support? Is there an available instance?
- Tool roadmap (concerns over the outdated Confluence space; are new schools using the tool?)
5. Action Items:
Action Item: Aaron to share an AspectJ sample with Dick for profiling (see the sketch below)
Action Item: Sean to send out meeting notes and schedule next meeting
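
For context on the AspectJ action item, a minimal annotation-style timing aspect of the kind that might be shared; the pointcut package org.example.evalsys is a placeholder, and this is a generic profiling sketch, not Aaron's actual sample.

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    // Generic profiling sketch (annotation-style AspectJ). The pointcut
    // targets a placeholder package; weave with ajc or load-time weaving.
    @Aspect
    public class ProfilingAspect {

        @Around("execution(* org.example.evalsys..*Service.*(..))")
        public Object timeServiceCall(ProceedingJoinPoint pjp) throws Throwable {
            long start = System.nanoTime();
            try {
                return pjp.proceed(); // run the intercepted method
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println(pjp.getSignature().toShortString()
                        + " took " + elapsedMs + " ms");
            }
        }
    }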