Quality Assurance Proposal
Purpose
Testing activities need to be integrated into the entire development cycle to ensure that quality is built into the software. Quality must be on everyone's mind and part of our daily activities rather than something addressed at the end as an afterthought. Empirical data needs to be gathered that demonstrates the current level of quality in the code. We are not going to have a bug-free application, but at a minimum we need to be able to speak to the level of quality we are providing.
- Determine a well-defined and common vocabulary related to quality
- How do we get from here to there?
Unit tests
What: Unit testing is a procedure used to validate that individual units of source code work properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, or procedure, while in object-oriented programming the smallest unit is a method, which may belong to a base/super class, abstract class, or derived/child class. Ideally, each test case is independent of the others; mock objects and test harnesses can be used to test a module in isolation. Unit testing is typically done by developers, not by end users.
Who: Developers, ideally the person who wrote the class, API, etc.
How: JUnit is the recommended technology. This should be an automated process.
Why: Makes sure that the smallest unit works in isolation. This type of testing:
1) Facilitates change. It reduces the problems you and others will encounter and makes the code tolerant of change: feature improvements can be made without causing regressions.
2) Simplifies integration testing.
3) Serves as a living piece of documentation on how the code works.
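To illustrate the shape of such a test, here is a minimal sketch. The class under test (`GradeCalculator`) is hypothetical, and plain assertions are used to keep the sketch self-contained; in practice these checks would be JUnit `@Test` methods using `assertEquals`.

```java
// Hypothetical class under test: a small, isolated unit.
class GradeCalculator {
    // Returns the average of the given scores, or 0.0 for an empty array.
    static double average(double[] scores) {
        if (scores.length == 0) return 0.0;
        double sum = 0.0;
        for (double s : scores) sum += s;
        return sum / scores.length;
    }
}

public class GradeCalculatorTest {
    public static void main(String[] args) {
        // Normal case: the unit is exercised in complete isolation.
        if (GradeCalculator.average(new double[] {80.0, 90.0, 100.0}) != 90.0)
            throw new AssertionError("average of 80, 90, 100 should be 90");
        // Edge case: an empty input must not divide by zero.
        if (GradeCalculator.average(new double[] {}) != 0.0)
            throw new AssertionError("average of no scores should be 0");
        System.out.println("all tests passed");
    }
}
```

Note that each case is independent of the others and exercises only the unit itself, with no external dependencies, which is what makes these tests cheap to run on every change.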
Integration testing
What: The phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. Integration testing takes as input modules that have been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as output an integrated system ready for system testing. Integration testing needs to occur at each level, going up and down the stack.
Who: Developers
How: It may be hard to determine whom you depend on or who depends on you, so it is recommended that you run the whole suite. The Test Harness, Sakai Mock Objects, and now Test Runner are possibilities. There is a BOF to determine the best path forward.
Why: Makes sure your dependencies function as they are supposed to and that your changes don't break existing integrations.
Integration testing needs to be part of the criteria for having a high quality and reliable framework and thus should be performed prior to merging code into trunk.
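The contrast with unit testing can be sketched as follows. All of the names here (`UserRepository`, `NotificationService`, etc.) are hypothetical modules invented for illustration, not part of the Sakai Test Harness or Mock Objects APIs: two modules that were each unit tested on their own are now wired together and tested as a group.

```java
import java.util.HashMap;
import java.util.Map;

// Two hypothetical modules, each already unit tested in isolation.
interface UserRepository {
    String findEmail(String userId);
}

class InMemoryUserRepository implements UserRepository {
    private final Map<String, String> emails = new HashMap<>();
    void add(String userId, String email) { emails.put(userId, email); }
    public String findEmail(String userId) { return emails.get(userId); }
}

class NotificationService {
    private final UserRepository repo;
    NotificationService(UserRepository repo) { this.repo = repo; }
    // Builds an address line by delegating to the repository module.
    String addressLine(String userId) {
        String email = repo.findEmail(userId);
        return email == null ? "unknown user" : "To: " + email;
    }
}

// The integration test exercises the modules combined as a group,
// rather than each one in isolation behind a mock.
public class NotificationIntegrationTest {
    public static void main(String[] args) {
        InMemoryUserRepository repo = new InMemoryUserRepository();
        repo.add("u1", "student@example.edu");
        NotificationService service = new NotificationService(repo);
        if (!service.addressLine("u1").equals("To: student@example.edu"))
            throw new AssertionError("integrated lookup failed");
        if (!service.addressLine("u2").equals("unknown user"))
            throw new AssertionError("missing user not handled");
        System.out.println("integration checks passed");
    }
}
```

A failure here points at the seam between the modules (a contract one side violates) rather than at either unit's internal logic, which is exactly what this phase is meant to catch before merging to trunk.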
System Testing
What: Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. This is black-box testing, so no knowledge of how the code or system operates is necessary. The focus is to take an almost destructive attitude and test not only the design but also the behavior and even the believed expectations of the customer.
Who: Use cases are provided by the UX designer. Testing is conducted by the QA WG.
How: Mostly a manual process. Because automation at this level tends to be very brittle, pieces of functionality should be automated only once the code is mature. The minimum documentation is a set of use cases; wireframes are encouraged.
Why: Detects any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is part of having a high quality release.
There are a few subsets of testing within system testing:
- Usability testing
- Accessibility testing - criteria need to be defined by an accessibility expert
- User acceptance tests - should occur at the institutional level and be fed back into the process
Performance/Load Testing
What: The process of creating demand on a system or device and measuring its response.
Who:
How: Needs to be done generically on the Sakai platform with scripts that can be reused locally. Further scripts can then be contributed back to the community.
Two tiers:
- Stand-alone tool test
- Testing usage profiles (these will vary by institution)
Why: Gives us the ability to say how well a tool and/or system scales.
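The core idea, creating concurrent demand and measuring the response, can be sketched in a few lines. This is an illustrative toy, not a replacement for a proper tool such as JMeter or LoadRunner: `simulatedRequest` is a hypothetical stand-in for a real request against a Sakai tool, and the user/request counts are arbitrary.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LoadSketch {
    // Hypothetical unit of work standing in for one user request.
    static void simulatedRequest() {
        long x = 0;
        for (int i = 0; i < 10_000; i++) x += i;  // busy work
        if (x < 0) throw new IllegalStateException();
    }

    // Drives the load with a pool of simulated users and returns the
    // number of requests completed.
    static long run(int users, int requestsPerUser) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        AtomicLong completed = new AtomicLong();
        long start = System.nanoTime();
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    simulatedRequest();
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        double seconds = (System.nanoTime() - start) / 1e9;
        // Throughput is the headline number: requests completed per second.
        System.out.printf("%d requests in %.2fs (%.0f req/s)%n",
                completed.get(), seconds, completed.get() / seconds);
        return completed.get();
    }

    public static void main(String[] args) throws Exception {
        run(8, 500);  // 8 concurrent simulated users, 500 requests each
    }
}
```

A real script would additionally record per-request latency percentiles and drive the usage profiles described above, which is why a defined data set and realistic usage patterns are prerequisites.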
How to get from here to there
Let's draw a line in the sand. The framework/kernel extraction is expected to include unit, integration, and performance tests to ensure a high-quality and reliable framework. Likewise, any new tool or functionality will include all four types of tests.
The bigger question is how do we address code already in existence?
There are two options here:
1) Institutions pledge resources to work the missing aspects of testing into critical pieces of the software.
2) Pressure is applied top-down through continued system testing and by advancing performance testing so that there is a generic model. To elaborate:
- To begin with, bug fixes need to be accompanied by a unit test before being checked in. This will enable the community to slowly start building a suite of tests.
- Use cases for critical tools (functionality) will be defined by QA, Support, and UX. This will assist in defining and automating a data set to enable generic performance testing.
- Performance testing at a generic build level is developed and incorporated into the foundation testing in time for use in the 2.6/3.0 release. This will reveal not only functional problems but also load/performance problems. For performance testing to be successful, a generic data set needs to be defined and loaded onto a test system.
- Define the data set
- Gather usage patterns from universities
From 2.6/3.0 going forward, all components are evaluated on their testing efforts in these four areas: unit, integration, system, and performance.
Misc Notes.
Why are there no unit and integration tests?
- The component manager is difficult to work with. Performance and load testing will expose problems in the code.
- Either universities commit QA resources, or they agree that any bug fix is checked in with a unit test.
Performance/Load Testing
- Need to determine the appropriate load.
- Need to work on an appropriate test environment. Linda Place has been working with Alan Berg on this.
- It's not clear that folks know how to approach this. A Performance boot camp was suggested for the next conference.
- Further, there is no agreed-upon tool to use. Lance stated that he has not heard good things about JMeter from colleagues at Indiana. He mentioned that Kuali is in talks with Quest to get licenses. UMich uses LoadRunner, but Jason stated that he has found those tests tend to be brittle.
Topics for further discussion
- What is a Blocker, Critical, Major, Minor, or Trivial JIRA bug? (Not discussed)
- Define Beta, Release Candidate, and General Release criteria. (Not discussed)