Notes from 11-8-10 Portfolio Reports Meeting
Participants:
- Robin Hill, University of Wyoming
- David McPherson, Virginia Tech
- Nancy O'Laughlin, U of Delaware
- Jacques Raynauld, MATI Montreal
- Debbie Runshe, IUPUI
- Janice Smith, Three Canoes
- Lynn Ward, Indiana University
- Sean Keesler, Three Canoes
What functionality do we need for portfolio reporting?
Sean: What we want is a tool to extract, transform, and load data from one database to another. We don't want a tool in Sakai to do this magic. If there is a lot of overlap in the questions we all want to ask of the data, providing an interface to that makes a lot of sense in Sakai. Where practices differ, each institution will need to come up with its own method.
Lynn: We need to define the kinds of reports we need and figure out if there is some kind of commonality. We should not be defining how to provide the functionality, but focus on what we need. Some institutions will need quick and dirty reporting.
Robin: It is easy to find the quick and dirty data. It is just a matter of going through all the XML fields and counting.
Lynn: It's not as simple as that. The same form can be applied to multiple matrix cells, for evaluating multiple outcomes, each in a different cell. Counting alone will result in data that is not useful, because it depends upon the context of each cell.
Robin: Some institutions may want this raw data without the context of cells. (But, yes, it's dirty! And crude.)
Lynn: Many institutions will need to drill down or filter further.
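To make the quick-and-dirty counting concrete, here is a minimal Java sketch that tallies rating values from an exported evaluation XML file, grouped by matrix cell so the counts keep the context Lynn mentions. The element and attribute names (evaluation, cellId, rating) are placeholders, not an actual export schema.

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/**
 * Quick-and-dirty counting of rating values from an exported evaluation XML
 * file, tallied per matrix cell so the raw counts keep their context.
 * The element and attribute names used here are hypothetical placeholders.
 */
public class QuickAndDirtyCounts {

    public static void main(String[] args) throws Exception {
        DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new File(args[0]));

        // cellId -> (rating value -> count)
        Map<String, Map<String, Integer>> counts = new HashMap<>();

        NodeList evaluations = doc.getElementsByTagName("evaluation");
        for (int i = 0; i < evaluations.getLength(); i++) {
            Element eval = (Element) evaluations.item(i);
            String cellId = eval.getAttribute("cellId");
            NodeList ratings = eval.getElementsByTagName("rating");
            for (int j = 0; j < ratings.getLength(); j++) {
                String value = ratings.item(j).getTextContent().trim();
                counts.computeIfAbsent(cellId, k -> new HashMap<>())
                      .merge(value, 1, Integer::sum);
            }
        }

        // Dump the raw tallies; interpreting them still requires knowing
        // which outcome each cell represents.
        counts.forEach((cell, tally) ->
                System.out.println(cell + " -> " + tally));
    }
}
```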
Nancy: Status reports can be built into the matrix itself. It is also important for us to look at reporting work done by other vendors.
Janice: As Chris Mauer indicates, we can separate status reports from data reports by building status reports into the tools themselves.
Sean: If we look at the links Lynn has posted, we can see the needs the market has already identified.
Links to Vendor Examples of Reports
Chalk and Wire:
http://www.chalkandwire.com/help/contexthelp/zone_manage.html
http://www.chalkandwire.com/help/contexthelp/page_results.html
http://www.chalkandwire.com/help/contexthelp/results_questions.html
http://www.chalkandwire.com/help/contexthelp/results_statsoverall.html
http://www.chalkandwire.com/help/contexthelp/results_statsmeanl.html
http://www.chalkandwire.com/help/contexthelp/results_performance.html
http://www.chalkandwire.com/help/contexthelp/page_statusreports.html
http://www.chalkandwire.com/help/cwr/CWReporter.html (This is a desktop application for running reports on C&W data.)
Taskstream:
http://www.taskstream.com/pub/images/screenshots/OutcomesAssessment-TrackActivity.gif
http://www.taskstream.com/pub/images/screenshots/AMS-Report.gif
http://www.taskstream.com/pub/images/screenshots/OutcomesAssessment-Report.gif
http://www.taskstream.com/pub/images/screenshots/PerformanceAssessment-Report.gif
College LiveText:
https://www.livetext.com/college/assessments.html
Status of reporting at Indiana University
Links to IU Reports
http://confluence.sakaiproject.org/display/OSP/Indiana+University+Reporting+Info
https://oncourse.iu.edu/access/content/user/leward/savedReports/ratings%20report.html
Lynn: To describe the needs of our community, look at the Taskstream link to the Performance Assessment Report and the link to IU Sample Reports on the child page of the Portfolio Reports page. To deal with the performance problems that come with larger amounts of data, it might be possible to run a batch job so that these reports are run once per night.
Lynn: The IU report may be one representation of a report that could be useful. While the Reports tool was moved back to Contrib because of real problems, if you import a report definition into the Reports tool and the forms on which the report relies conform to a particular standard, you can generate the report. To make this work, we had to agree on standards for representing rating data. (There is an IU document listing the standards.) In any cell, you can have up to 20 rating fields. The ratings have to be sequentially numbered as field names, but the display names can be whatever you want. For each evaluation form, the rating scale has to be the same for all fields. From one cell to the next, you can use evaluation forms with different scales. The IU standards for constructing the evaluation form allow you to use these reports. You can click to see the matrices in the site you want to report against, and you can navigate up and down in the reports to see individual data or aggregated data. There is a return button to get back to the first page of the report. These reports are a proof of concept. We started out by identifying the questions we wanted to offer, and the first round of original reports was then modified. We also created a matrix summary and a detailed interaction report, with additional refinements beyond that. There is also a comparison spreadsheet for different vendor reports posted; it is based on sample accounts with vendors and a Google search. You can see the requirements on the Indiana Reports page linked to the Portfolio Reports page.
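As an illustration of the naming convention Lynn describes (up to 20 rating fields, with sequentially numbered field names), here is a small hypothetical check in Java. The "rating" prefix is an assumption for this sketch; the actual rules are in the IU standards document.

```java
import java.util.List;

/**
 * Sketch of a convention check for evaluation-form field names along the
 * lines described above: at most 20 rating fields, named sequentially.
 * The "rating" prefix is an assumption, not the actual IU standard.
 */
public class RatingFieldConventionCheck {

    static final int MAX_RATING_FIELDS = 20;

    /** Returns true if fieldNames looks like rating1, rating2, ... ratingN. */
    static boolean followsConvention(List<String> fieldNames) {
        if (fieldNames.isEmpty() || fieldNames.size() > MAX_RATING_FIELDS) {
            return false;
        }
        for (int i = 0; i < fieldNames.size(); i++) {
            if (!fieldNames.get(i).equals("rating" + (i + 1))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(followsConvention(List.of("rating1", "rating2", "rating3"))); // true
        System.out.println(followsConvention(List.of("clarity", "depth")));              // false
    }
}
```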
Jacques: We also want to envision what we want in Sakai OAE. These reporting tools are related to the actual set-up in Sakai 2.
Janice: Yes, there are two goals for this group, planning for Sakai OAE and deciding what can be done about reporting in 2.x in the meantime.
Robin: Eventually, we will want to do personas and MiniSpecs for Sakai OAE.
Sean: Are there IU thresholds for reports?
Lynn: Departments set their own targets.
Sean: Is the IU setting for report needs really unique? Or can it be generalized to other institutions?
Lynn: An example of our common needs is the Taskstream Performance Assessment Report, which is similar for all vendors and may be needed at all institutions.
Sean: What do we share in common? Vendors found a commonality among their customers.
Lynn: There is a threshold of continuous improvement based on the assessment of programs and departments. Many institutions want to determine whether, what, and how well students are learning, whether it is sufficient, and how to improve learning performance. To do that, you have to have ways of measuring learning.
Sean: There is also measuring student progress, a view that needs to be available to students and advisors.
Lynn: I have been talking about reporting at the program level. For example, here is a learning outcome where the majority of students are doing well. It's not so much looking at students, but at the program. It is having the ability to sit down with an external reviewer to take a look at what each evaluation level looks like in terms of student learning artifacts. Accreditors are interested in having a process in place that allows them to understand learning outcomes, ask questions of the data, and get useful information. How is this group of students doing in comparison to another group? A particular group of freshmen came in with these scores. Let's look at them two years later. There are all kinds of questions that can be asked.
Debbie: I have had access to the IU reports for assessment. But when faculty and students begin to use the reports, other questions emerge.
Janice: RINET reports demonstrated how students and instructors can use report data to initiate their own changes.
Nancy: There are all sorts of different reporting needs. The question is what can be done in Sakai.
Janice: How available could the IU reports be?
Lynn: Modification of the reports tool is the key. This work could be done by Chris Mauer or by another java developer.
Dave: VTech would be interested in doing that.
Lynn: Users of the IU reports would also have to conform to the IU standards on report data. There are, however, other reports, for status and for showing attachments to a cell, that do not require standardization. Take a look at the report called Attachment Summary under the IU Sample Reports page for an example. The attachment links won't work right now, but the report lists all the students who have submitted attachments and links to them. It is a quick and dirty report.
Nancy: Which attachments, the ones added to a form or to a matrix cell?
Lynn: The attachments to the matrix cell. But once you have these reports, it would not be too difficult to use XML to write other reports.
Robin: I am learning how to write XML from doing QA testing.
Lynn: Brian put this report together in less than an hour. He took a report on evaluation data and transformed it into a report on cell attachments.
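If the report output is produced by transforming exported XML, as these notes suggest, then reworking one report into another presumably comes down to applying a different XSL stylesheet to the same data. A minimal sketch of those mechanics in Java, with placeholder file names (not the actual IU report definitions):

```java
import java.io.File;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/**
 * Minimal sketch of applying an XSL stylesheet to exported report XML,
 * the kind of mechanism by which an evaluation report could be reworked
 * into a cell-attachment report. File names here are placeholders.
 */
public class TransformReport {

    public static void main(String[] args) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("attachment-summary.xsl")));
        transformer.transform(
                new StreamSource(new File("matrix-export.xml")),
                new StreamResult(new File("attachment-summary.html")));
    }
}
```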
Dave: I could talk with Chris Mauer about Virginia Tech assisting with java programming before the next meeting.
Future Sakai reporting
Dave: Thinking about reporting has to be part of the process at the same time as the discussion of how to store the data. We have been talking about this locally. The whole data structure of Sakai needs to be rethought for portfolio data. We know what we did before, what worked, and what didn't work.
Sean: One of the things I hear about Sakai OAE is that we will expose the data so that you can whip up a widget to display it on the client side. This is useful for status reports, but not for data reports. There is an order of magnitude more thinking involved in manipulating data with JavaScript on the client side of the browser. We need to ask what sort of data reporting needs to be integrated into plans for Sakai OAE.
Robin: We can use the quick and dirty method as a way to get started and thereby defer this reporting to database commands and an interface.
Lynn and Dave: Sakai OAE will be using JackRabbit, which is not a conventional relational database.
Sean: Single institutions might have back-end technologies that are SQL based. They can figure out a way, but others may be in trouble.
Robin: Even if the database is not relational, there could be a way to offer information to users as a first step. We can use this method to convince decision makers that there is good data to be had.
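To illustrate why a non-relational store is not necessarily a dead end for reporting, here is a hedged sketch of pulling evaluation data out of a JCR repository (such as Jackrabbit) with a JCR-SQL2 query instead of relational SQL. The node type (port:evaluation) and property names are hypothetical; whatever content model Sakai OAE settles on would define the real ones.

```java
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

/**
 * Sketch of querying a JCR repository with JCR-SQL2 rather than SQL over
 * relational tables. Node type and property names are hypothetical.
 */
public class JcrReportQuery {

    static void printRatings(Repository repository) throws Exception {
        Session session = repository.login(
                new SimpleCredentials("reporter", "secret".toCharArray()));
        try {
            QueryManager qm = session.getWorkspace().getQueryManager();
            Query query = qm.createQuery(
                    "SELECT * FROM [port:evaluation] WHERE [rating1] IS NOT NULL",
                    Query.JCR_SQL2);
            QueryResult result = query.execute();
            NodeIterator nodes = result.getNodes();
            while (nodes.hasNext()) {
                Node node = nodes.nextNode();
                System.out.println(node.getPath() + " rating1="
                        + node.getProperty("rating1").getString());
            }
        } finally {
            session.logout();
        }
    }
}
```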
Lynn: It is on my list of things to do to have a general talk with Clay about how best to present portfolio needs to the management team. But development has slowed down considerably, so we could ask whether we need to be in a rush. A lot of the work in Q2 is refinements to Q1. The roadmap is ahead of the process. Big problems have arisen: a handful of users can bring the server down, and low-level performance issues are being dealt with. Design and implementation of UI development has been proceeding at a much slower pace than intended. It is not urgent, and people don't have time to listen right now, but there is no harm in getting our ducks in a row. We can figure out what we want to do and how to report on it.
Jacques: Reporting is not a UX problem, but a back-end problem. For this group, we should not bypass the UX, but we should also talk to the back-end people. Linking with people who know how the database is set up would be good.
Lynn: Clay Fenlayson has been extremely busy; even for the design team, he has not been super available. Clay is more of a functional person than a developer, and he understands more about portfolios than anyone else on the team (except Lynn). I will find out how best to work with the Sakai OAE team and how all subgroups in portfolio visioning can supply work to use as a foundation for portfolio functionality in Sakai OAE. There are two people to interact with: Clay and Alan Marks. We need to work with the steering committee to become clearer about the timeline for the roadmap.
Sean: We need to consolidate and come up with an original document that identifies the things we think are important in reporting and provides a good set of reports that we would be excited about. We need to reinterpret what is out there and put it into one document.
Robin: Who is in a position to develop sketches of reports?
Nancy: This is something we can do but that needs further discussion. I will be gone next week, but it is on my to-do list.
Plans for the next meeting
Janice: How about December 6 for our next meeting? We can come up with sketches of reports by then. I will set up a space on Confluence for us to use in posting our own ideas for reports and reading each other's work. I will also set up a Doodle to find a time that works for the group to meet next, most likely during the first week in December. (Robin will start putting in times that work, since she leaves for England at the beginning of January.)