MISI Calls Summary - Jan 22 & 23

NEXT STEPS:

In order to have our "core" set of survey questions ready by the end of February, we have some work to do. Below are three things we are asking everyone to do.

1. Review the requirements for human subjects (IRB) approval / exemption at your institution. You may want to start by calling a member of your IRB to see whether this type of work qualifies for an exemption.
If you would like assistance in filling out an IRB application, Steve and Stephanie are happy to offer advice based on our IRB experience at Michigan.

2. Vote on the Michigan survey questions.
Go to this Confluence page: http://confluence.sakaiproject.org/confluence/x/FwDeAg
Log in and click "Edit"
Click on the "Rich Text" tab
For each survey item, please indicate your vote: +1 for inclusion in the "core" set of survey questions, -1 for non-inclusion, or 0 if you are unsure or have no opinion. Also include your first name and institution in parentheses after your vote. For example:  +1 (Steve, UMich)
Click "Save" when done voting

   OPTIONAL:
    If you would like to suggest additional survey items not addressed in the Michigan survey (e.g., migration items),
    please add them on this Confluence page: http://confluence.sakaiproject.org/confluence/x/1oHhAg

3. Provide some additional information about your institution and your implementation of Sakai. Based on the suggestions made during the conference call, we have developed a list of several items that will provide more context about your institution and use of Sakai. In order to make the entry of this information easier, Steve has put together a short online survey for you to fill out.
    Please fill out the survey at this URL: http://lessons.ummu.umich.edu/2k/misi/info

When everyone has submitted their information, Steve will update this Confluence page: http://confluence.sakaiproject.org/confluence/x/LoLhAg

- - - - - - - - - - - - - - - - - - - - - - - - -

Conference Call Summary

Participants: Steve & Stephanie (Michigan), Gail (Rutgers), Mary & Jim (Mt. Holyoke), Salwa (Texas State), Barb (Bradley), Yitna (Virginia), Lisa (UC Berkeley), Angelica (Limerick), Robin (Wyoming), Anna (Windsor), Kelli (Stanford), Jim (Marist), Stephen (Georgia Tech), Dana (Missouri)

TOPIC 1: Motivations for participating.

    There were several reasons why institutions chose to participate in this initiative, including: gaining information about teaching & learning; providing data to administrators about the need for more Sakai resources; comparing one's own campus experience with that of other Sakai institutions; taking an active part in the Sakai community; obtaining user information in preparation for Sakai 3.0; and obtaining baseline user information before full Sakai adoption.

TOPIC 2: Human subjects approval / Institutional Review Board (IRB)

    Stephanie mentioned that institutions interested in sharing their results for cross-institutional comparison, and possibly publishing them in a scholarly journal, may need permission from their local human subjects review board. At our institution, Michigan, we qualified for an exemption because our research is conducted in an established educational setting and examines the effectiveness of, or comparison among, instructional tools (e.g., Sakai). Someone requested that we provide our IRB exemption letter as an example (see attached); please note that this application letter was written in 2002 for Sakai's predecessor, CHEF, but it still applies to Sakai. Also, our IRB now requires a much more thorough online application process.

TOPIC 3: Survey Implementation

    There was some initial confusion on the conference calls about who would implement the final online surveys. As part of this initiative, each institution will administer its own online survey, ideally using the "core" survey questions that we agree on, as well as any locally oriented questions you would like to add for your own use. To limit variability, we hope that participating institutions will administer their surveys in March, April, or May, depending on their campus calendars.

TOPIC 4: Michigan Sampling & Distribution

    Steve described the process we go through at Michigan for our surveys. First, in early March, we obtain from the Registrar a list of all faculty (including graduate student instructors) and enrolled students, which includes names, campus ID, gender, student level, and faculty rank (tenured, lecturer, etc.). We eliminate any students who are also listed as faculty. Since our students outnumber our faculty 4-to-1, we generate a random sample of 25% of the students, stratified into five groups (Letters Science & Arts, Medicine, Engineering, Business, and Other).
    Once our sample is set, we send email invitations to the instructors and students via a listserv. We also send reminder emails each week, first removing from the listserv those who have already responded to the survey. The survey is open for about four weeks, spanning the month of April. We also offer an incentive: a random drawing for four $100 Amazon.com gift certificates (two for instructors, two for students).
    Since each institution has very different populations and needs for their surveys, we will let you determine how best to define your sample and whether to provide incentives for your survey. If you would like some advice or assistance in limiting or sampling your instructor or student population, please contact Steve or Stephanie.
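For institutions scripting their own sample, the stratified procedure Steve described (drop student/faculty overlaps, then randomly sample a fixed fraction within each group) can be sketched as follows. This is a minimal illustration, not Michigan's actual code; the function name, the toy roster, and the group labels are hypothetical.

```python
import random
from collections import defaultdict

def stratified_sample(students, faculty_ids, fraction=0.25, seed=None):
    """Sample a fraction of students within each stratum, after removing
    anyone who also appears on the faculty list.

    students    -- iterable of (student_id, group) pairs
    faculty_ids -- set of ids to exclude (students also listed as faculty)
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for sid, group in students:
        if sid not in faculty_ids:          # eliminate student/faculty overlaps
            by_group[group].append(sid)
    sample = []
    for ids in by_group.values():
        k = round(len(ids) * fraction)      # proportional allocation per stratum
        sample.extend(rng.sample(ids, k))
    return sample

# Hypothetical toy roster: 8 LSA students and 4 Engineering students,
# one of whom ("E0") is also an instructor and must be excluded.
students = [(f"L{i}", "LSA") for i in range(8)] + \
           [(f"E{i}", "Eng") for i in range(4)]
picked = stratified_sample(students, faculty_ids={"E0"}, seed=1)
print(len(picked))  # 2 from LSA (25% of 8) + 1 from Eng (25% of 3, rounded) = 3
```

Sampling within each stratum (rather than from the pooled list) keeps smaller schools from being under-represented by chance, which is the point of stratifying in the first place.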

TOPIC 5: Moving Forward

    Now that MISI is underway with several participating institutions, how should we determine which questions from the Michigan survey belong in the "core" set? And how can we add new questions? We decided to use Confluence: we will start by voting on the existing Michigan questions on a +1 / 0 / -1 scale (more detail above).
    There was also some discussion about providing more information about our institutions and our implementations of Sakai so that we can more easily compare our results. While Confluence was suggested as the place to gather this information, we have arrived at a different solution (see above).