Automated Testing
We want Voluntarily to be a high-performance, secure, stable platform that is a pleasure to use. We need the code supporting the platform to be correct in two ways:
It does the right thing.
It does it in the right way.
Automated tests ensure that these goals are met not just once but continually, following every change to the system. They help avoid regressions - where changes for a new feature break existing features - and give developers confidence that new work is built on a stable platform.
Doing the right thing
This means that each feature meets the functional requirements as expressed in the original stories. These reflect the goals of the person using the system, e.g. as a volunteer I can register my interest in attending an event.
Functional Requirements can be tested in two ways:
Functional Tests
These tests run in a test environment on a developer's computer or in the continuous integration (CI) system. They typically create a full representation of the application, including a mock database and server API layer. They then exercise the functionality of a page in some detail, which may include filling in forms, moving through a workflow and validating the responses to the data entered.
These tests are in the codebase at /__tests__ and include home.spec.js, OpDetailPage.spec.js and so on. An example test would be to load the page, click the edit button, fill in some form fields and save the result, verifying that the entered data is saved.
These tests are written in JavaScript and use the AVA test runner and the Enzyme React testing utilities.
Current coverage (Feb 2020) is 89%, although this represents code coverage rather than coverage of functionality.
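For illustration, a simplified sketch of the kind of spec that lives in /__tests__. The import path, props and element ids below are assumptions made for the example, not the contents of the real OpDetailPage.spec.js:

```javascript
// Illustrative sketch only - component path, props and selectors are assumptions
import test from 'ava'
import React from 'react'
import { mount } from 'enzyme'
import OpDetailPage from '../pages/op/OpDetailPage' // hypothetical import for this sketch

test('editing an opportunity updates the displayed title', t => {
  // mount the page with a fake opportunity record instead of a real database
  const op = { _id: '1', name: 'Test Opportunity', description: 'Help out at the beach' }
  const wrapper = mount(<OpDetailPage op={op} />)

  // enter edit mode, change the name field and save
  wrapper.find('button#editOpBtn').simulate('click')
  wrapper.find('input#name').simulate('change', { target: { value: 'Updated Opportunity' } })
  wrapper.find('button#saveOpBtn').simulate('click')

  // verify the entered data now appears on the page
  t.true(wrapper.text().includes('Updated Opportunity'))
})
```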
This group also includes API tests that create a prepared database and exercise the server-side application through web API calls.
Coverage of API calls is 94%.
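An API test of this kind could look like the sketch below. The server export, fixture helper and route are assumptions used to show the shape of such a test, not the actual implementation:

```javascript
// Illustrative sketch only - server export, fixture helper and route are assumptions
import test from 'ava'
import request from 'supertest'
import { server } from '../../server'        // hypothetical app/server export
import { seedDatabase } from './fixtures'    // hypothetical helper that loads a prepared database

test.before('seed a prepared database', async t => {
  await seedDatabase()
})

test('GET /api/opportunities returns the seeded opportunities', async t => {
  // exercise the server-side application through a web API call
  const res = await request(server)
    .get('/api/opportunities')
    .set('Accept', 'application/json')
    .expect(200)

  t.true(Array.isArray(res.body))
  t.true(res.body.length > 0)
})
```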
End to End or Web Acceptance Tests
These tests run in a browser (or virtual browser) and emulate a person using the platform. They follow the journey of each type of person using the system and ensure that they can achieve the intended goals. They operate against the full platform with a prepared database, can validate flows across pages and the persistence of data, and can capture screenshots of pages for visual validation. They also check that the platform performs correctly on different browsers and versions.
These tests are in the /systemtest folder of the codebase.
Currently, we have only a single test in this group as a proof of concept. It loads the landing page and performs a search.
Coverage - ~1%
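A sketch of the kind of end to end test we mean is shown below. It assumes Puppeteer as the browser driver and a locally running instance; the URL and selectors are assumptions, and the real proof-of-concept test may use a different tool:

```javascript
// Illustrative sketch only - driver choice, URL and selectors are assumptions
const puppeteer = require('puppeteer')

const run = async () => {
  const browser = await puppeteer.launch({ headless: true })
  const page = await browser.newPage()

  // load the landing page of a locally running instance
  await page.goto('http://localhost:3000/', { waitUntil: 'networkidle0' })

  // perform a search, waiting for the results page to load
  await page.type('input[type="search"]', 'beach cleanup')
  await Promise.all([
    page.waitForNavigation({ waitUntil: 'networkidle0' }),
    page.keyboard.press('Enter')
  ])

  // capture a screenshot for visual validation
  await page.screenshot({ path: 'search-results.png' })

  await browser.close()
}

run()
```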
Doing it in the right way.
This means we verify that the code as written performs the intended task: that it is free of defects, errors, crashes, exceptions, unwanted side effects and so on; enforces security and privacy; has the correct appearance on the screen; and performs appropriately.
The Voluntarily platform is written in JavaScript and comprises code that runs both in client browsers (front end) and on servers (back end). On the server side, APIs connect requests for pages and data to the underlying MongoDB database and codebase. On the client side, we use JavaScript to deliver a rich user interface that includes information displays and forms based on the pages and data sent from the server.
JavaScript is an interpreted language, which means that code is not pre-compiled or exercised until it is run in the browser or server environment. Errors and defects therefore cannot be found by static inspection or compilation of the code; instead we use Unit and Functional tests to exercise all of the code in, ideally, all possible scenarios.
Unit Tests
We use the same AVA test runner and test environment for Unit tests as for Functional tests. These tests tend to be simple and fast, as they only have to create local data, run an individual function and check the results.
These tests are in the codebase, in a __tests__ folder adjacent to the code module (unit) that they test.
For example, components/Op/__tests__/OpCard.spec.js loads and verifies the Opportunity Card component.
Unit test coverage of components is 89%.
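A unit test of a component might look like the sketch below. The export name and props for OpCard are assumptions for the example rather than the real spec:

```javascript
// Illustrative sketch only - the OpCard export and its props are assumptions
import test from 'ava'
import React from 'react'
import { shallow } from 'enzyme'
import OpCard from '../OpCard' // hypothetical import path for this sketch

test('OpCard renders the opportunity title', t => {
  // local data only - no database or server is involved in a unit test
  const op = { _id: '1', name: 'Beach Cleanup', subtitle: 'Help tidy the foreshore' }
  const wrapper = shallow(<OpCard op={op} />)
  t.true(wrapper.text().includes('Beach Cleanup'))
})
```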
Performance Tests
Performance tests require an environment similar to that used by End to End tests. They exercise the code with high volumes of requests or data to identify the points at which, and the ways in which, the system starts to fail under load.
Currently, we have no performance tests.
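As a starting point, a simple load test could look like the sketch below. It assumes the autocannon load-testing package and a locally running instance; the URL and thresholds are assumptions, and nothing like this exists in the codebase yet:

```javascript
// Illustrative sketch only - tool choice, URL and settings are assumptions
const autocannon = require('autocannon')

// drive a high volume of requests at a locally running instance
autocannon({
  url: 'http://localhost:3000/',
  connections: 100, // concurrent connections
  duration: 30      // seconds
}, (err, result) => {
  if (err) throw err
  // throughput and latency figures show where the system starts to degrade under load
  console.log('requests/sec:', result.requests.average)
  console.log('p99 latency (ms):', result.latency.p99)
})
```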
Manual Testing
Although our goal is a high level of automated testing, there will always be a need for some manual testing. Manual testing is slow and expensive in comparison, so we want to make sure this time is not wasted or repeated.
Developer testing - Developers test, as they work on new features, that their work behaves as expected. This tends to be focused on the new feature and can often miss side effects where other parts of the system are impacted. It can also be rather inconsistent due to time constraints. Developers are also natural optimists and don't always recognise how their new feature might be misused.
QA and Acceptance testing - When stakeholders request new features in the system, they need assurance that the feature has been implemented and fulfils the requirement. While functional tests may validate the implementation, someone still has to check that the work has been done and that it works in real life. It's possible for a feature to be implemented as designed but disliked once evaluated in real use. QA testers aim for consistency in their processes by writing test specifications and manuals prior to working through the system.
Visual Design Validation - Robots cannot check that the final screens look good and match expectations. Hence a visual check must be performed that verifies the look and workflow of the product across different browsers, including mobile devices. This testing can be supported by End to End test procedures that output screen captures for comparison.
User Testing - Ultimately, the system must be evaluated with the people who will use it. Their feedback must be collected consistently, and the causes of problems or dislikes understood.