Scorecard Item Definitions

The following scorecard item definitions are proposed. Each entry below gives the item number and name, the proposed evaluation criterion, and any open review comments.

1.0 User Experience

1.1 Consistency / Best practices
Evaluation: Follows the recommendations for Sakai UI conventions and practices as determined by a checklist.
Comment: Which checklist is this?

1.2 Usability
Evaluation: Rates the usability of the application as judged by stakeholders (instructors, students, admins, etc.), or by the results of a formal usability review team. User experience is evaluated against accepted Sakai and industry practices.
Comment: Couldn't we use both if both are available?

1.3 Accessibility
Evaluation: Follows the recommended practices and support for accessibility as defined by a checklist.
Comment: Which checklist is this?

1.4 Internationalization
Evaluation: Support for other languages and cultures. Initially, this is measured by the number of languages supported by the application. Max of 20.
Comments:
- What is the point of a max of 20?
- Shouldn't this item be renamed Localization? The most basic measurement could be a rate: (supported languages) / (languages supported anywhere within Sakai). We could also use the rate of support for each language from the localization dashboard http://qa1-nl.sakaiproject.org/international/. The best measurement I can think of is a combination of the two: (sum of the rates of support of the supported languages) / (languages supported anywhere within Sakai). 24 different locales are referenced in the documentation (readme_i18n.txt). A sketch of this combined rate appears below.
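
As a minimal sketch of the combined rate proposed in the comment above, assuming hypothetical per-locale support rates taken from the localization dashboard (the locale names and figures are illustrative only):

    import java.util.Map;

    public class LocalizationScore {

        // Combined rate: (sum of per-locale support rates for locales this tool ships)
        // divided by (number of locales supported anywhere within Sakai).
        public static double combinedRate(Map<String, Double> toolLocaleSupport,
                                          int localesSupportedInSakai) {
            double sum = 0.0;
            for (double rate : toolLocaleSupport.values()) {
                sum += rate; // each rate is between 0.0 (no support) and 1.0 (complete)
            }
            return sum / localesSupportedInSakai;
        }

        public static void main(String[] args) {
            // Hypothetical figures: the tool ships 3 locales out of 24 known to Sakai.
            Map<String, Double> support = Map.of("fr_FR", 1.0, "es_ES", 0.8, "nl_NL", 0.5);
            System.out.println(combinedRate(support, 24)); // (1.0 + 0.8 + 0.5) / 24 = about 0.096
        }
    }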

1.5 Ease of use
Evaluation: Same as usability, with the same problems.
Comment: I don't understand what is meant here...

1.6 User testing
Evaluation: Results of hands-on user testing by a formal user testing team (methodology to be defined).

2.0 Technical

2.1 Browser support
Evaluation: Support for the web browsers supported by Sakai. Results of browser compatibility tests.
Comments:
- How do we decide what "browsers supported by Sakai" means? Is the latter meant to define the former?
- "Browsers supported by Sakai" could be browsers with declared support from most/all core tools, or browsers that have good compatibility test results from most/all core tools.

2.2 Code review
Evaluation: Results of a formal code review team. Code is evaluated against accepted Sakai and industry practices.

2.3 Unit testing
Evaluation: Unit tests are generally written against application service interfaces and implementations. Measured as percent coverage (see the example below).
Comment: Unit test coverage?
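
As a minimal sketch of a unit test written against a service interface, using JUnit; the GreetingService interface and its implementation are hypothetical stand-ins, not part of any Sakai API:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Hypothetical service interface; a real Sakai service would normally be injected via Spring.
    interface GreetingService {
        String greet(String name);
    }

    // Simple implementation under test.
    class DefaultGreetingService implements GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public class DefaultGreetingServiceTest {

        @Test
        public void greetAppendsName() {
            GreetingService service = new DefaultGreetingService();
            assertEquals("Hello, Sakai", service.greet("Sakai"));
        }
    }

A coverage tool (Cobertura and EMMA are common examples for Java) can then report the percentage of code exercised by such tests.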

2.4 Functional regression testing
Evaluation: Functional regression tests check the functionality of an application to ensure that it continues to function across changes. Results of functionality tests. Requires a written functionality description and test plan.

2.5 Integration testing

2.6 Performance testing
Evaluation: Performance testing ensures that the application will perform well in small, medium, large, and very large environments. Evaluation has two parts: existence of tests, and results.
Comment: What is the definition of a small, medium, etc. environment?

2.7 Internationalization
Evaluation: Follows recommended practices for internationalization and localization as defined by a checklist (initially string externalization; see the example below).
Comment: We have http://confluence.sakaiproject.org/confluence/display/I18N/How+to+write+Internationalized+Tools+in+Sakai which goes further.
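
As a minimal sketch of string externalization, using the standard java.util.ResourceBundle; the bundle name and key are illustrative, and Sakai tools may use their own resource-loading helpers instead:

    import java.text.MessageFormat;
    import java.util.Locale;
    import java.util.ResourceBundle;

    public class ExternalizedStrings {
        public static void main(String[] args) {
            // Instead of hard-coding "Hello, {0}", the string lives in msgs.properties,
            // msgs_fr_FR.properties, and so on, one file per supported locale on the classpath.
            ResourceBundle msgs = ResourceBundle.getBundle("msgs", Locale.getDefault());
            String pattern = msgs.getString("greeting"); // e.g. "Hello, {0}"
            System.out.println(MessageFormat.format(pattern, "Sakai"));
        }
    }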

2.8 Licensing
Evaluation: License information is required in all code files and in certain other places. Measured as the percentage of files labeled, scaled to 10 points (an illustration follows).
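
For illustration, a "labeled" file is one that starts with the project's license header, and the percentage scales directly to points (for example, 85% of files labeled would score 8.5). The header text below is a placeholder, not the actual Sakai license wording:

    /*
     * Copyright (c) <year> <copyright holder>
     *
     * Licensed under the <project license>. (Placeholder -- substitute the
     * project's actual license header here.)
     */
    public class LicensedExample {
        // A file without a header like the one above does not count as "labeled".
    }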

 

2.9 Outstanding JIRA bugs
Evaluation: Attempts to rate how many open bugs are present.
Comments:
- Number of unresolved bugs? Number of unresolved tasks? Weighted by issue priority? Weighted by lines of code?
- Number of bugs open in the latest major release group (latest major A.B.0 and minors A.B.x) and not fixed in the latest release? Which Resolution values do we include/exclude? One possible priority weighting is sketched below.
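
As a minimal sketch of the priority-weighted option raised above; the weights and priority names are illustrative assumptions, not an agreed formula:

    import java.util.Map;

    public class OpenBugScore {

        // Illustrative weights per JIRA priority; higher-priority bugs count more.
        private static final Map<String, Integer> WEIGHTS =
                Map.of("Blocker", 5, "Critical", 4, "Major", 3, "Minor", 2, "Trivial", 1);

        // openBugsByPriority maps a priority name to its count of unresolved bugs.
        public static int weightedOpenBugs(Map<String, Integer> openBugsByPriority) {
            int total = 0;
            for (Map.Entry<String, Integer> e : openBugsByPriority.entrySet()) {
                total += WEIGHTS.getOrDefault(e.getKey(), 1) * e.getValue();
            }
            return total;
        }

        public static void main(String[] args) {
            // Hypothetical counts: 1 blocker, 4 major, and 10 minor open bugs.
            System.out.println(weightedOpenBugs(Map.of("Blocker", 1, "Major", 4, "Minor", 10)));
            // prints 5*1 + 3*4 + 2*10 = 37
        }
    }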

2.10 Packaging (code structure)
Evaluation: Follows recommended practices for Sakai application packaging and naming conventions as defined by a checklist.
Comment: Which checklist is this?

2.11 Static code review
Evaluation: Static (or automatic) code review checks for certain very common bugs, such as swallowed exceptions. Has two parts: inclusion in the test scripts, and results. An example of a swallowed exception appears below.
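
As a brief illustration of the "swallowed exception" pattern that static analysis tools typically flag; the class and method names are made up for the example:

    import java.io.FileInputStream;
    import java.io.IOException;

    public class SwallowedExceptionExample {

        // Bad: the exception is silently discarded, so callers never learn the read failed.
        static int readFirstByteBad(String path) {
            try {
                return new FileInputStream(path).read();
            } catch (IOException e) {
                // swallowed -- static analysis flags empty or no-op catch blocks
            }
            return -1;
        }

        // Better: let the exception propagate (or log it) so the failure is visible.
        static int readFirstByte(String path) throws IOException {
            FileInputStream in = new FileInputStream(path);
            try {
                return in.read();
            } finally {
                in.close();
            }
        }
    }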

 

2.12 Validation/spec conformance
Evaluation: Follows recommended practices and standards for Sakai applications as defined by a checklist.
Comment: Which checklist is this?

2.13 DB support
Evaluation: This is the number of databases supported.
Comment: Declared or tested?

2.14 DB best practices
Evaluation: Follows recommended practices for database support as defined by a checklist.
Comment: Which checklist is this?

2.15 Security review
Evaluation: Follows recommended Sakai security practices as defined by a checklist.
Comment: Which checklist is this?

2.16 Technical
Comment (MJN): This is covered by code reviews.

2.17 Event tracking
Evaluation: Support for event tracking.
Comments:
- What does "support" mean in this context?
- Providing enough and useful events for its activity? A sketch of posting an event is given below.
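
As a minimal sketch of posting a trackable event, assuming the org.sakaiproject.event.api.EventTrackingService API; the event name, reference string, and tool class are illustrative assumptions:

    import org.sakaiproject.event.api.Event;
    import org.sakaiproject.event.api.EventTrackingService;

    public class WidgetEventExample {

        // In a real tool this service would normally be injected via Spring.
        private EventTrackingService eventTrackingService;

        public void setEventTrackingService(EventTrackingService eventTrackingService) {
            this.eventTrackingService = eventTrackingService;
        }

        // Record that a (hypothetical) widget was updated so the activity can be audited.
        public void recordWidgetUpdate(String widgetReference) {
            Event event = eventTrackingService.newEvent(
                    "widget.update",   // illustrative event name
                    widgetReference,   // e.g. "/widget/site-id/widget-id"
                    true);             // true = this event modifies state
            eventTrackingService.post(event);
        }
    }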

3.0 Descriptive

3.1 Bundled Help
Evaluation: Support for contextual help defined as percent coverage against functional description.

3.2 Test plan
Evaluation: (Megan to define)

3.3 Javadocs
Evaluation: Internal code documentation (Javadoc), measured as the percentage of defined methods that carry Javadoc comments. A brief example follows.
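
A short illustration of what counts as a documented method; the class and methods are hypothetical:

    public class ScoreUtil {

        /**
         * Scales a 0-100 percentage onto the 0-10 point scale used by the scorecard.
         *
         * @param percent a value between 0 and 100
         * @return the equivalent score out of 10 points
         */
        public static double toTenPointScale(double percent) {
            return percent / 10.0;
        }

        // This method has no Javadoc comment, so it would not count toward the percentage.
        public static double clamp(double value) {
            return Math.max(0.0, Math.min(10.0, value));
        }
    }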

 

3.4 Design Documentation
Evaluation: Existence and quality of technical documentation including software architecture and design.

3.5 Wiki/website
Evaluation: Confluence documentation (TBD)

3.6 Deployment doc
Evaluation: Existence and quality of instructions for installation and deployment. This could include release conversion notes, too.

3.7 End user external docs
Evaluation: Existence and quality of user documentation - how to use the application from the stakeholder's viewpoint.

3.8 Issue Tracking (Jira)
Evaluation: Uses Sakai's Jira to track issues including bugs, features, changes, etc.

3.9 Events documented
Evaluation: Documentation of event tracking - event definitions, etc.

3.10 Licensing Documented
Evaluation: (See 2.8 Licensing above.)

3.11 Configuration
Evaluation: Existence and quality of documentation on configuration properties, including system (sakai.properties), tool (tool configuration), and Spring (components.xml).

4.0 Community Support

4.1 Team size
Evaluation: Size of the project team, measured as the number of regular participants and contributors. Max of 20.
Comments:
- What is the max of 20 about?
- Wouldn't 20 people participating in or contributing to numerous projects give misleading information? How do we measure the "regular" part?

4.2 Team diversity
Evaluation: Participation by people with skills other than just programming, including UX designers, QA support, stakeholder advisors, etc.
Comment: Do we add a new item, "Team institutional diversity", for participation by people from distinct institutions, or do we combine both?

4.3 Responsiveness
Evaluation: Responsiveness of the project team to reported problems.

4.4 Production experience
Evaluation: Experience of the development team in terms of longevity, etc.

4.5 Communications and openness
Evaluation: Response to questions, openness, etc.