Grading criteria for the 4th iteration
Customer Feedback. You will again be graded by your customer. We will interact with the customer directly and ask them a set of questions; their answers will count for approximately 40% of the grade in this iteration.
Application functionality. Again, this is the most important deliverable and is awarded the most points. The application covers everything defined by the functional and non-functional requirements. The provided functionality is tested and works. The user can access the delivered functionality and test its behaviour with minimal additional assistance. The functionality provided by the system reflects what is indicated in the release notes. Note that even though I am not grading the requirements directly, I will still verify the correctness of the application by comparing it against the requirements, so I can only recommend keeping the requirements up to date.
Automated tests. There are automated tests present. The tests verify the application's core functionality (system testing, not only unit testing) and must cover all core use cases. The tests are created in a reproducible way, most likely in the form of a script, and can be run without human intervention.
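As an illustration of what a reproducible, non-interactive system test can look like, here is a minimal Python sketch (assuming a pytest + requests setup and a hypothetical /login endpoint; adapt the tool and endpoints to your own stack):

```python
# test_system_login.py -- minimal system-test sketch (run with: pytest test_system_login.py)
# Assumes the application is already deployed and reachable at APP_URL;
# the /login endpoint and credentials below are hypothetical placeholders.
import os

import requests

BASE_URL = os.environ.get("APP_URL", "http://localhost:8080")


def test_login_with_valid_credentials():
    # Exercise a core use case end-to-end through the HTTP interface.
    response = requests.post(
        f"{BASE_URL}/login",
        data={"username": "testuser", "password": "testpass"},
        timeout=10,
    )
    assert response.status_code == 200


def test_login_with_invalid_credentials_is_rejected():
    response = requests.post(
        f"{BASE_URL}/login",
        data={"username": "testuser", "password": "wrong"},
        timeout=10,
    )
    assert response.status_code in (401, 403)
```

Because the whole suite runs from a single command (here `pytest`), it can be triggered by a script or by your CI environment without anyone clicking through the application by hand.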
Continuous Integration. There is a CI environment present, and there is a trace that the CI processes have been running throughout the iteration. The environment monitors the VCS in use, checks out the new code, builds the application, runs the automated tests, and deploys the application to a server (if applicable) or makes the executable file available somewhere the customer and the course organiser can access it. In case of build errors, or in case the application does not pass some of the automated tests, these errors are reported to the team. As an additional requirement for this iteration, your CI environment must be able to build, deploy and test the application even on a machine where the application has never been deployed before; in other words, the CI process handles all of your application's dependencies.
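One possible shape for the build-and-test step that a CI job invokes after checkout is a small driver script, sketched below in Python. Every command in it is a placeholder (requirements.txt, pytest, deploy.sh are assumptions): substitute your own dependency manager, build tool, test runner and deploy step.

```python
#!/usr/bin/env python3
# ci_build.py -- sketch of the step a CI job runs after checking out the code.
# All commands are placeholders for your own stack.
import subprocess
import sys


def run(step, *cmd):
    print(f"== {step}: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # A non-zero exit code makes the CI job fail, so the team gets notified.
        sys.exit(f"{step} failed with exit code {result.returncode}")


if __name__ == "__main__":
    # Resolving all dependencies here is what lets the pipeline work on a clean machine.
    run("install dependencies", "pip", "install", "-r", "requirements.txt")
    run("run automated tests", "pytest", "tests/")
    run("deploy", "sh", "deploy.sh")  # or publish the executable artifact instead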
Internal acceptance testing. It sometimes happens that the application passes all the automated tests, but there are still obvious bugs that can easily be detected by manual verification. Accordingly, you should conduct an additional manual verification (internal acceptance testing) prior to the release. Write down in your Wiki which use cases were covered by the internal acceptance tests. Here you can see a document including a table with specific examples of test cases followed by some notes (plus a related use case specification). The included sample table and notes serve only as a guide; you may use your own layout and your own way of performing and recording your internal acceptance tests.
Non-functional requirements verification. You demonstrate how you verified at least two different types of non-functional requirements. For most applications we recommend selecting requirements of the performance and usability types, as follows.
- Teams who verify performance requirements should stress-test the application under load similar to its real-life conditions. For a web application, for example, generate test data in the database that simulates a real-life situation and simulate the expected number of end users using the application concurrently. Simulating end users can be done, for example, with the help of JMeter (a minimal sketch of the idea follows after this list).
- Teams who verify usability requirements should monitor end users: each team should monitor at least five end users using their application without external support. The team should record the end users' actions and verify whether the application satisfies the usability requirements.
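JMeter is the suggested tool for simulating concurrent users; purely as an illustration of the idea, here is a minimal Python sketch that fires requests from many simulated users at once and reports response times. The URL, user count and request count are placeholders for your own real-life estimates.

```python
# load_test.py -- minimal sketch of simulating concurrent end users
# (JMeter is the recommended tool; this only illustrates the concept).
import os
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = os.environ.get("APP_URL", "http://localhost:8080")
CONCURRENT_USERS = 50      # expected number of simultaneous users (placeholder)
REQUESTS_PER_USER = 20


def simulate_user(user_id):
    # Each simulated user repeatedly hits the application and records response times.
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(f"{BASE_URL}/", timeout=30)
        timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(simulate_user, range(CONCURRENT_USERS)))
    all_timings = [t for user in results for t in user]
    print(f"requests: {len(all_timings)}, "
          f"avg: {sum(all_timings) / len(all_timings):.3f}s, "
          f"max: {max(all_timings):.3f}s")
```

Whatever tool you use, compare the measured response times against the thresholds stated in your performance requirements and record the outcome in your Wiki.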
Release notes. Release notes for the iteration are present. Reading the release notes gives input for testing the application: they make clear what has been added to or modified in the application since the last release and which bugs have been fixed. If the delivered functionality contains known bugs, those issues are highlighted in the release notes.
Response to the peer-review. If your project was peer-reviewed, you should analyze the peer-review you received and post a response to the review in your Wiki. The deadline for posting this response is the same as the 4th iteration deadline.
Collaboration infrastructure is in use regularly; the requirements are the same as in the previous iteration.