Notes from Scorecard BOF in Paris

Attendees: Alan Berg, Seth Theriault, Peter Knoop, Gilamesh Noteboos, Jean-Francois Leveque, Aaron Zeckoski, Linda Place, Chris Kretler, Mark Norton, Oliver Heyer, Clay Fenlason, Megan May
Absent: David Haines
Purpose: Formulate a working group (WG) to define a Scorecard (ratings) and a structured description. This documentation will serve two purposes:
1) Advise the Release Management team on what tools to bundle in a release
2) Advise institutions in making local decisions on bundles

Currently we have

  • contrib - in release/turned on

  • provisional - in release/turned off

  • core - in release
    This does not adequately speak to the quality of the tools, and that is why we have convened.

Criteria should be SMART: Specific, Measurable, Attainable, Relevant, Timely.

Stephen envisions 5 summary ranges (red, orange, yellow, green, gold) for the different dimensions of these scorecards. We envision having 3-7 dimensions. Take the details within each, roll them up, and get a top-level score.
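As a rough illustration of the roll-up idea, the sketch below maps per-dimension scores onto the five summary bands and averages them into a top-level band. The dimension names, the 0-100 scale, the thresholds, and the use of a plain average are all illustrative assumptions, not anything decided at the BOF.

```python
# Hypothetical sketch of the roll-up: each dimension gets a 0-100
# score, which maps to one of the five summary bands. Thresholds and
# the averaging scheme are assumptions for illustration only.

BANDS = [  # (minimum score, band name), checked from best to worst
    (90, "gold"),
    (75, "green"),
    (50, "yellow"),
    (25, "orange"),
    (0, "red"),
]

def band(score):
    """Map a 0-100 score to a summary band."""
    for minimum, name in BANDS:
        if score >= minimum:
            return name
    return "red"

def rollup(dimension_scores):
    """Roll per-dimension scores up into one top-level band."""
    average = sum(dimension_scores.values()) / len(dimension_scores)
    return band(average)

# Example with made-up scores for the four dimensions below:
scores = {
    "user experience": 80,
    "technical": 60,
    "descriptive": 55,
    "community support": 90,
}
print(band(scores["technical"]), rollup(scores))  # yellow yellow
```

A real scheme might weight dimensions differently or cap the top-level band at the worst dimension; this just shows the "roll up to a top-level score" shape.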

DIMENSIONS DEFINED
------------------------------------------------------------------------------------------
1) User Experience (usability, accessibility, internationalization)

  • Consistency / best practices - this must be measured moving forward, not against the status quo

  • Usability

  • Accessibility

  • Ease of use

  • User testing (i.e., placed in front of real users)

2) Technical (code review, unit tests, adheres to code standards)

  • Browser support

  • code review

  • Unit testing

  • functional regression testing

  • Integration testing

  • Performance testing

  • Internationalization

  • Licensing

  • Outstanding JIRA bugs

  • Packaging (code structure)

  • static code review

  • Validation/spec conformance

  • DB support

  • DB best practices

  • security review

  • technical

  • Event tracking

  • May be possible to further group these (e.g., under testing)

3) Descriptive - documented, does it have help

  • help (bundled)

  • Test plan

  • Javadocs

  • Technical/architectural doc (if required)

  • Wiki/website

  • Deployment doc

  • End user external docs.

  • Tracking tasks in an issue tracking system (i.e., JIRA)

  • Events documented

  • Licensing Documented

  • sakai.properties

4) Community support

  • Team size

  • Team diversity (institution, dev/ux/qa)

  • Responsiveness (average time to respond to JIRAs)

  • Production experience - length, scale, diversity

  • Communications and openness.

We talked a lot about internationalization because it's an issue that spans a number of dimensions. For instance, there are technical hurdles.

  • Lots of talk about who owns the process. It was suggested that it be led by the QA WG. Mark pointed out that there used to be a group that dealt with processes. Others mentioned that they wouldn't be the owners of the process once defined. Some thought the community should own it, others the project teams. Someone needs to lead the process, but the decision was deferred.

Mark suggested that we run through this exercise on a few pilot projects. Here are the candidates:

  • Sousa

  • Resources (TBD)

  • Site Stats

  • Polls

  • Blogwow
    --> This led to a discussion on handling services (content, search). Decisions on how to handle these are TBD

Megan elected Stephen to lead this effort past the conference.

Action Items
------------------------------

  • Crosscheck the list with existing documentation

  • Create a Collab list (Megan will send it in)

  • Post notes and send a message to the community