2.3.0 PRR

2.3.0 Post Release Review - The Good, the Bad, the Ugly

Discussion Points

  • Including accessibility in the QA process (please see comments from Mike below)
  • How to drum up OSP testing resources?
    • It may be helpful if the sign-up sheet showed a breakdown of OSP into smaller chunks, for example OSP Portfolios, OSP Matrices, OSP Wizards, and Glossary. The setup for each of these areas is fairly extensive (except Glossary), and I think breaking it up this way would generate more interest in testing OSP. Since the testing needs to be done across site types (i.e. Portfolio, Course, and Project), this may make the whole idea of testing OSP in smaller portions more 'palatable'.

Possible sub-projects

  • Facilitate further work on the automation of functional testing using open source options such as JMeter and Selenium (see the sketch after this list).
  • Build up a set of data that mimics production-quality data and encourage testing on local implementations for valuable feedback.
  • Integrate web services testing
  • Including code coverage within Sakai QA process
  • Continue refining the strategic plan for future release cycles
  • Redefine the priority scale used in categorizing JIRA tickets
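
As a starting point for the JMeter/Selenium item above, here is a minimal sketch of a scripted functional test using the Selenium RC Java client. It is illustrative only: the base URL, the login locators (eid, pw, submit), and the "My Workspace" check are assumptions about a local test instance and would need to be adjusted to match the server under test.

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class PortalLoginTest {
        public static void main(String[] args) {
            // Assumes a Selenium RC server running on localhost:4444 and a
            // Sakai instance at the (hypothetical) base URL below.
            Selenium selenium = new DefaultSelenium(
                    "localhost", 4444, "*firefox", "http://localhost:8080/");
            selenium.start();
            try {
                selenium.open("/portal");
                // These locators match the usual Sakai login form field names,
                // but verify them against the instance under test.
                selenium.type("eid", "admin");
                selenium.type("pw", "admin");
                selenium.click("submit");
                selenium.waitForPageToLoad("30000");
                if (!selenium.isTextPresent("My Workspace")) {
                    throw new AssertionError("Login did not reach My Workspace");
                }
            } finally {
                selenium.stop();
            }
        }
    }

A test like this could be run from JUnit or a nightly script against a QA server, and the same approach extends to walking a tool through its main workflows.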

Notes From Meeting

  • Including accessibility in the QA process - No one has looked at this.
  • Recruit OSP testing resources - shared Dawn's proposal and this seems acceptable. Any other ideas? I (Megan) said that I thought a lot of people were intimidated - it's not an intuitive system. However, Dawn created test scripts that people can walk through. I (Megan) have used them a lot; can others look at them and comment? Clay said he'd give it a go.
  • Initial thoughts
    • GARIN: Really found the meetings to be beneficial. He's newer to the process, but found the meetings where we review JIRA candidates to merge into the release especially beneficial.
    • (question): Toward the end, it was hard to keep everyone in the loop as to what was going on and there was a lot of confusion.
    • LANCE: Perhaps in the final days, have a daily meeting set up so the different parties can join in. Sometimes the meeting may not be necessary, but at least it will be there.
    • MEGAN: Excellent idea and something we'll definitely do for 2.4
  • Redefine the priority scale used in categorizing JIRA tickets - Megan believes we need the priorities to better represent all stakeholders. Clearer definitions will help.
  • Discussed freezing release earlier.
    • Lance: this adds additional load on the release manager. Also, the developers aren't familiar with the Frankenstein that we'll be creating; for instance, a bug that exists in the branch may not exist in trunk. To complicate things further, hopefully the individual teams will become more responsible for actually merging fixes in. Why is it we want to do this?
    • Megan: There is too much change occurring during the release. Most of the bug fixing is happening to the release - with so much change it is hard to ensure everything is thoroughly tested. Perhaps we can encourage more bug fixing prior to the freeze date. Also, most releases will be a lot larger. Perhaps the QA cycle should be longer? I've been talking with Chuck about a period 2 weeks prior to the code freeze where developers are very involved in testing. This would also be a good time for QA WG members to get in there.
    • (question): May want to increase the
    • CLAY: Another thing that could help - more testing by QA resources prior to code freeze. For instance, the new Resources functionality had fewer problems due to the extensive testing that occurred prior to the code freeze. Perhaps the QA network could be used for this in between releases?
    • Lance: We'd need to create a build script so this wouldn't be labor intensive for the admins.
    • Megan: Believes this is a good approach. When the code has undergone testing prior, there tend to be fewer surprises in the final hour. For example, Resources or the GB.
    • ACTION: Ensure defining the testing schedule & reallocation of QA servers gets on the agenda for Atlanta meeting
  • FYI - there will be a meeting in Atlanta to discuss 2.4. This meeting is taking place in lieu of Integration Week to enable us to better plan for the release and ensure a more stable build once the code freeze occurs. This group is encouraged to attend.
  • Creating a cleaner release
    • CLAY: For instance, ensuring that test and sample code is not in the release, and that unfinished code is not in the release. Not having to include OSP if you aren't deploying it. The code base is getting large; Seth mentioned this in a message to dev. This would mean that we'd need to revamp the release pages.
    • Test suites/anyone run
    • Ability for hot deploy
    • Discussed a Sakai Web Project

Proposed Discussion Topics for Atlanta Meeting

  • After determining what is tentatively planned for 2.4, see if the schedule allows for adequate testing time.
  • Discussion of reallocation of QA servers for reiterative testing
  • Sakai Web Project

Brainstorm on Including code coverage within Sakai QA process

Preamble

Code coverage via the open-source project Emma allows data to be acquired in real time and coverage reports to be generated within seconds, without stopping the Sakai server. This lets you see which code is being exercised with X-ray clarity and offers the opportunity for more efficient testing and debugging during the QA cycle.

To educate QA and developers in general about the potential of code coverage, I have written a simple-to-use Sakai tool that makes reports easy to create and view.

Once sold on the idea of code coverage within the QA cycle, the natural question is how to integrate coverage into the normal QA process.
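
To make the potential concrete, here is a deliberately small, hypothetical example (not Sakai code) of the kind of gap a coverage report exposes: the JUnit suite below exercises only the "A" branch, so an Emma report generated while it runs would flag the remaining branches as never executed.

    // GradeMapper.java - hypothetical class used only to illustrate coverage gaps
    public class GradeMapper {
        public static String toLetter(int score) {
            if (score >= 90) {
                return "A";
            } else if (score >= 80) {
                return "B";
            }
            return "F";
        }
    }

    // GradeMapperTest.java - exercises only one of the three branches
    import junit.framework.TestCase;

    public class GradeMapperTest extends TestCase {
        public void testHighScore() {
            assertEquals("A", GradeMapper.toLetter(95));
        }
        // No test reaches the "B" or "F" branches; a coverage report run
        // against this suite would show those lines as unexecuted.
    }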

Possibilities

The following is a list of possibilities, not necessarily complete:

Physical

(a1) A simple recipe may be written on Confluence explaining to interested parties how to enable code coverage from the command line or via the Sakai tool.
Note: A few extra utility scripts need to be written to ease the effort.
Alan can do this alone.
(a2) Code coverage activation and deployment is somehow hooked into an extra Maven build option in trunk to remove the complexity of (a1).
Alan would need help on this.
(a3) The Sakai tool may be hosted on a QA server which is rebuilt every night against trunk.
Alan can build this alone.
(a4) The Sakai tool is expanded to a fully functional superset of options to include other parts of the code base and extra functionality.
Alan needs a good set of functional requirements from QA to build on.

Potential Workflows

(b1) Developers build the source code with an extra option and can later watch how their code behaves live from the command line.
(b2) Code is covered per QA tag on specific QA servers.
If a functional test fails, a before-and-after report is generated for the given tool and attached to the JIRA bug (see the sketch after this list).
(b3) Sanity check: once per QA cycle the functional tests are applied and all coverage reports are generated and checked for good code exercising.

http://bugs.sakaiproject.org/confluence/display/QA/Walk+through
http://bugs.sakaiproject.org/confluence/display/QA/Report+details

(b4) Learning tool for new developers.
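
For workflow (b2), the following is a hedged sketch of the "before and after" snapshot step: a small helper that copies the runtime coverage data file under a label so the two copies can later be turned into reports and attached to the JIRA bug. The file name coverage.ec is Emma's default runtime output; the path and the snapshots directory are assumptions that depend on how the QA server is started.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Hypothetical helper: snapshot the coverage data before and after a
    // functional test run so the two copies can be reported on and attached
    // to the JIRA bug if the test fails.
    public class CoverageSnapshot {

        // Emma's default runtime data file; adjust to the server's setting.
        private static final File COVERAGE_DATA = new File("coverage.ec");

        public static File snapshot(String label) throws IOException {
            File copy = new File("snapshots", label + "-coverage.ec");
            copy.getParentFile().mkdirs();
            FileInputStream in = new FileInputStream(COVERAGE_DATA);
            FileOutputStream out = new FileOutputStream(copy);
            try {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            } finally {
                in.close();
                out.close();
            }
            return copy;
        }

        public static void main(String[] args) throws IOException {
            snapshot("before");
            // ... drive the functional test here ...
            snapshot("after");
        }
    }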

Brainstorm on JIRA priority