Scorecard Evaluation

The following evaluation methods are proposed for the scorecard items selected in the Scorecard BOF at the Ninth Sakai Conference in Paris, France. Please feel free to edit in place, add comments in other colors, or append comments below.

Note that point scales for individual items vary considerably. This is not a problem as long as we develop a weighting scheme for rolling item scores up into the four evaluation dimensions.
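As an illustration of how such a roll-up might work, here is a minimal sketch in Python. The item scores, their maxima, and the per-dimension weights are hypothetical placeholders rather than values agreed by the group; each item is normalized against its own point scale before the dimensions are combined.

```python
# Minimal sketch of rolling raw item scores up into the four dimensions.
# The items, their maxima, and the per-dimension weights are illustrative
# placeholders, not values agreed by the group.

RAW_SCORES = {
    # item: (points awarded, points possible)
    "1.1 Consistency / Best practices": (7, 10),
    "1.4 Internationalization": (12, 20),
    "2.3 Unit testing": (6, 10),
    "4.1 Team size": (9, 20),
}

DIMENSION_WEIGHTS = {  # hypothetical weighting scheme
    "1": 0.30,  # User Experience
    "2": 0.40,  # Technical
    "3": 0.15,  # Descriptive
    "4": 0.15,  # Community Support
}

def roll_up(raw_scores, weights):
    """Normalize each item to [0, 1], average within its dimension,
    then combine the dimensions with the given weights."""
    by_dimension = {}
    for item, (earned, possible) in raw_scores.items():
        dimension = item.split(".")[0]
        by_dimension.setdefault(dimension, []).append(earned / possible)
    dimension_scores = {d: sum(s) / len(s) for d, s in by_dimension.items()}
    used_weight = sum(weights[d] for d in dimension_scores)
    overall = sum(weights[d] * score for d, score in dimension_scores.items())
    return dimension_scores, overall / used_weight

if __name__ == "__main__":
    per_dimension, overall = roll_up(RAW_SCORES, DIMENSION_WEIGHTS)
    print(per_dimension)      # per-dimension scores on a 0-1 scale
    print(round(overall, 3))  # weighted overall score on a 0-1 scale
```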

See also Scorecard Item Definitions

| No. | Scorecard Item | Points | Evaluation | Comments |
|-----|----------------|--------|------------|----------|
| 1.0 | User Experience | | | |
| 1.1 | Consistency / Best practices | 10 | UI checklist is needed. | |
| 1.2 | Usability | 10 | Very subjective; how should this be evaluated? | MJN: Perhaps a rating system? |
| 1.3 | Accessibility | 10 | UI checklist is needed. Screen reader trial: 0/1. Accessibility should be a requirement, given university regulations. | Will the existing protocols and templates suffice? http://confluence.sakaiproject.org/confluence/x/3ok MMM: I see no problem using the process we have in place. |
| 1.4 | Internationalization | 20 | Number of languages supported (max of 20). | |
| 1.5 | Ease of use | 0 | Same as usability, with the same evaluation problems. | MJN: Merge into usability. |
| 1.6 | User testing | 10 | Results scaled to 10 points. | |
| 2.0 | Technical | | | |
| 2.1 | Browser support | 6 | IE6 - 1, IE7 - 1, Firefox 2 - 1, Firefox 3 - 1, Safari - 1, Opera - 1, Others - 1 | MJN: What is our policy? |
| 2.2 | Code review | 10 | Results graded on 10 points. | MJN: This is a formal code review. |
| 2.3 | Unit testing | 10 | Percent coverage scaled to 10 points. | |
| 2.4 | Functional regression testing | 10 | Percent coverage scaled to 10 points. | MJN: Requires a test plan. |
| 2.5 | Integration testing | 10 | Test harness - 5. Results - 5. | MJN: What is tested here? |
| 2.6 | Performance testing | 10 | Test harness - 5. Results - 5. | |
| 2.7 | Internationalization | 10 | Checklist. Strings externalized into a bundle: 0/1. | |
| 2.8 | Licensing | 10 | Percent of files labeled, scaled to 10 points. | |
| 2.9 | Outstanding JIRA bugs | 10 | None - 10, Some - 5, Many - 0. | MJN: This is a bit vague. |
| 2.10 | Packaging (code structure) | 10 | Checklist is needed. | |
| 2.11 | Static code review | 10 | Code review harness - 5. Results - 5. | MJN: This is an automated review. |
| 2.12 | Validation/spec conformance | 10 | Checklist is needed. | MJN: Wasn't this supposed to replace spec validation? |
| 2.13 | DB support | 10 | Hypersonic - 3, MySQL - 3, Oracle - 3, Others - 1 | MJN: This could be expressed better. |
| 2.14 | DB best practices | 10 | Checklist is needed. | |
| 2.15 | Security review | 10 | Checklist is needed. | |
| 2.16 | Technical | 0 | Not sure what this means. Checklist is needed. | MJN: Drop this in favor of code reviews. |
| 2.17 | Event tracking | 2 | All - 2, Some - 1, None - 0. | |
| 3.0 | Descriptive | | | |
| 3.1 | Bundled Help | 10 | Coverage scaled to 10 points. | |
| 3.2 | Test plan | 10 | Coverage scaled to 10 points. | |
| 3.3 | Javadocs | 10 | Coverage - 5 points. Quality - 5 points. | |
| 3.4 | Design Documentation | 3 | Good - 3, Fair - 2, Some - 1, None - 0. | |
| 3.5 | Wiki/website | 0 | What would be on these pages? | |
| 3.6 | Deployment doc | 3 | Good - 3, Fair - 2, Some - 1, None - 0. | |
| 3.7 | End-user external docs | 3 | Good - 3, Fair - 2, Some - 1, None - 0. | |
| 3.8 | Issue Tracking (JIRA) | 3 | Good - 3, Fair - 2, Some - 1, None - 0. | |
| 3.9 | Events documented | 3 | Good - 3, Fair - 2, Some - 1, None - 0. | |
| 3.10 | Licensing Documented | 0 | How is this different from the licensing item above? | |
| 3.11 | Configuration (sakai.properties) | 3 | Good - 3, Fair - 2, Some - 1, None - 0. | |
| 4.0 | Community Support | | | |
| 4.1 | Team size | 20 | Number of regular participants (max of 20). | MJN: Big projects will get a boost because of this. |
| 4.2 | Team diversity (institution, dev/ux/qa) | 4 | Dev - 1, UX - 1, QA - 1, Others - 1 | |
| 4.3 | Responsiveness (average time to respond to JIRAs) | 10 | Response time scaled to 10 points (a scripted approach is sketched after the table). | MJN: Possible, but a LOT of math. |
| 4.4 | Production experience (length, scale, diversity) | 3 | Proposed - 0, New - 1, Mature - 2, Ancient - 3 | |
| 4.5 | Communications and openness | 10 | Response scaled to 10 points. | MJN: How to measure? |
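For item 4.3, the "a LOT of math" concern could be reduced with a small script. The sketch below assumes issue creation and first-response dates have been exported from JIRA; the export format, the sample dates, and the 14-day / 90-day scale endpoints are all assumptions for illustration, not agreed values.

```python
# Minimal sketch for item 4.3 (responsiveness), assuming issue data has been
# exported from JIRA as (created, first project response) date pairs.
# The sample data and the 14-day / 90-day endpoints are placeholders.

from datetime import datetime

ISSUES = [  # hypothetical export: (created, first project response)
    ("2008-03-01", "2008-03-04"),
    ("2008-04-10", "2008-05-30"),
    ("2008-05-02", "2008-05-02"),
]

BEST_DAYS, WORST_DAYS = 14, 90  # placeholder scale endpoints

def average_response_days(issues):
    """Average number of days between issue creation and first response."""
    deltas = [
        (datetime.strptime(responded, "%Y-%m-%d")
         - datetime.strptime(opened, "%Y-%m-%d")).days
        for opened, responded in issues
    ]
    return sum(deltas) / len(deltas)

def scale_to_points(avg_days, best=BEST_DAYS, worst=WORST_DAYS, max_points=10):
    """Linear scale: at or under `best` days earns max_points, at or over `worst` earns 0."""
    if avg_days <= best:
        return max_points
    if avg_days >= worst:
        return 0
    return round(max_points * (worst - avg_days) / (worst - best), 1)

if __name__ == "__main__":
    avg = average_response_days(ISSUES)
    print(avg, scale_to_points(avg))  # average days and the 0-10 score
```

The same linear-scaling helper could also serve the "percent coverage scaled to 10 points" items (2.3, 2.4, 3.1, 3.2) by treating 0% and 100% coverage as the scale endpoints.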