Grading criteria for the 3rd iteration
Customer Feedback. In this iteration, you will also be graded by your customer. The course coordinator will interact directly with the customer and ask him/her a set of questions. The customer's answers will count for approximately 30% of the grade in this iteration.
Application. Again, this is the most important deliverable and is awarded the most points. The functionality in the application covers all core use cases. The provided functionality is tested and works. The user can access the delivered functionality and test its behaviour with minimal additional assistance. The provided functionality is either bug-free, or known bugs (which must not block testing of the functionality) are registered in the issue tracker. The functionality provided by the system reflects what is indicated in the release notes.
Automated tests. There are automated tests present. The tests verify the application's core functionality (system testing, not only unit testing). The tests must cover at least two core use cases. The tests are created in a reproducible way, most likely in the form of a script. The tests can be run without human intervention.
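The criteria above (script form, no human intervention, covering multiple core use cases) can be sketched as a small test-runner script. This is only an illustrative sketch, not a required structure: the two `python -c` commands are placeholders standing in for invocations of your actual application, and the case descriptions are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of a reproducible system-test script.

The commands below are placeholders; a real suite would invoke the
application itself (CLI, HTTP endpoint, etc.) for each core use case.
"""
import subprocess
import sys

# Each case: (description, command, substring expected in stdout).
# The `python -c` commands stand in for real application invocations.
CASES = [
    ("use case 1: create item",
     [sys.executable, "-c", "print('item created')"], "item created"),
    ("use case 2: list items",
     [sys.executable, "-c", "print('1 item(s)')"], "item(s)"),
]

def run_cases(cases):
    """Run every case; return the names of the cases that failed."""
    failures = []
    for name, cmd, expected in cases:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0 or expected not in result.stdout:
            failures.append(name)
    return failures

if __name__ == "__main__":
    failed = run_cases(CASES)
    for name in failed:
        print(f"FAIL: {name}")
    print(f"{len(CASES) - len(failed)}/{len(CASES)} passed")
    # Non-zero exit status lets a CI job detect failure automatically.
    sys.exit(1 if failed else 0)
```

Because the script reports results on stdout and via its exit status, it satisfies the "runs without human intervention" criterion and can be called directly from a CI job.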
Continuous Integration (CI). There is a CI environment present. There is a trace that the CI processes have been running throughout the iteration. The environment monitors the VCS in use, checks out the new code, builds the application, runs the automated tests, and deploys the application on a server (if applicable) or makes the executable file available somewhere the customer and course organiser can access it. In case of build errors, or if the application fails some of the automated tests, these errors are reported to the team. As part of the CI process, there should be an automated build process in place, which the mentors can use to build your application from the source code with minimal effort (even outside your CI environment).
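The "automated build process with minimal effort" requirement usually comes down to a single entry-point command. A hedged sketch of such a driver is below; the two `python -c` commands are placeholders that you would replace with your project's real build and test commands, and the step names are assumptions.

```python
#!/usr/bin/env python3
"""Sketch of a one-command build-and-test driver.

A CI job (or a mentor, outside the CI environment) runs this single
script; the individual commands are placeholders for your stack's
real build/test invocations.
"""
import subprocess
import sys

# Hypothetical pipeline steps in execution order; adapt to your stack.
STEPS = [
    ("build", [sys.executable, "-c", "print('build ok')"]),
    ("test",  [sys.executable, "-c", "print('tests ok')"]),
]

def run_pipeline(steps):
    """Run each step in order; return the name of the first failing
    step, or None if all steps succeed."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return name  # identifies the failure so the team can be notified
    return None

if __name__ == "__main__":
    failed_step = run_pipeline(STEPS)
    if failed_step:
        print(f"pipeline failed at step: {failed_step}")
        sys.exit(1)
    print("pipeline passed")
```

Keeping the whole pipeline behind one script means the CI server and a mentor on a fresh checkout run exactly the same steps, which is the point of the criterion.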
Requirements. All requirements are identified; those completed during the course are in their final form. Detailed requirements are in individual wiki pages/documents and are accessible from the overall requirements list. Requirements dealing with UI interactions are complemented with a UI prototype.
Scope. There is a clear picture of what will be ready after the course finishes. The tasks for the last iteration are planned and assigned in the issue tracker.
Release notes. Release notes for the iteration are present. Reading the release notes gives input for testing the application - from the release notes it is clear what has been added to or modified in the application since the last release and which bugs have been fixed. If the delivered functionality contains known bugs, those issues are highlighted in the release notes.
Collaboration infrastructure is in use on a regular basis (not just in a burst of activity a few days before the deadlines). There are traces of the work done in the collaboration and development infrastructure. Namely:
- The version control system is used regularly and by all team members. Commits are linked to tasks registered in the issue tracker. The released version is tagged.
- Requirements in the wiki or other documents are maintained and kept up-to-date on a regular basis. Minutes of meetings with the customer are recorded. Changes to the requirements are recorded, if such changes have occurred.
- The issue tracking system is used regularly and kept up-to-date. Completed issues are marked as such as they are completed, not in a batch towards the end of the iteration. New issues are added as they arise and assigned to project team members. Issues are linked to a detailed requirement (whenever this link makes sense). The set of issues is complete relative to the current snapshot of the requirements. The issues/tasks are at an adequate level of granularity. All issues are assigned to a team member. All issues have a deadline. The team members have written comments in the issues where applicable.