4. Testing

4.1. Introduction

Testing is a critical part of the development process at Savas Labs:

  • All projects should have automated tests
  • New features added to a codebase should almost always come with tests to verify their functionality
  • Bug fixes should almost always contain new tests or fixes to existing tests
  • We can manage technical debt through good practices in automated testing

Why do we write tests?

  • Define the feature you are building or the bug you are fixing
  • Improve our code by checking against different assumptions
  • Catch regressions
  • Help future-you and your co-workers with refactoring
  • A green :heavy_check_mark: on your GitHub pull request feels good

4.2. Types of tests

4.2.1. Linting

The simplest test is PHP linting. Running php -l {some_file} parses the file and checks for syntax errors. Note that php -l checks one file at a time, so to lint a whole codebase you need to run it against each PHP file in turn.
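Since php -l operates on a single file, a Makefile target in the style used elsewhere in this handbook could drive it with find. This is a sketch: the target name and module path are illustrative, and recipe lines must be tab-indented.

```makefile
lint: ##@test Check all custom PHP files for syntax errors.
	find drupal/sites/all/modules/custom -name '*.php' -print0 \
		| xargs -0 -n1 php -l
```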

All projects should include a lint check as part of the automated testing process.

4.2.2. Coding standards

All the custom code we produce must adhere to coding standards. For Drupal projects, this involves running the phpcs command with the --standard=Drupal option.

Example usage in a Makefile:

phpcs_config = --ignore=*.css,*.min.js,*features.*.inc,*.svg,*.jpg,*.json,*.woff*,*.ttf,*.md

phpcs: ##@test Run code standards check.
	docker run --rm -v $$(pwd):/work skilldlabs/docker-phpcs-drupal phpcs --standard=Drupal \
	tests drupal/sites/all/modules/custom $(phpcs_config)

All projects should include a coding standards check as part of the automated testing process.

4.2.3. Behat

All our Drupal projects should incorporate Behat tests, using the DrupalExtension plugin to provide access to the Drupal API from within the Behat testing environment.

We have examples of client projects to draw upon for example configuration of Behat.

Use Behat as a communication and discovery tool.

Involve the client in defining the scenarios and features. This page contains a good overview of how you might do this. You should work with the client to define what the features/scenarios under development are before you write any code.

Write the feature/scenario definition before you build the functionality.
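As a sketch, a feature defined with the client before any implementation might look like the following; the feature, steps, and strings here are invented for illustration.

```gherkin
Feature: Newsletter signup
  In order to receive project updates
  As a site visitor
  I want to subscribe with my email address

  Scenario: Successful signup
    Given I am on the homepage
    When I fill in "Email" with "visitor@example.com"
    And I submit the form
    Then I should see "Thanks for subscribing"
```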

Following from the above, do not write a Behat feature/scenario after you’ve coded the implementation. Work with the client to agree on how the feature should work. After you have defined the feature and how it should work in different scenarios, write the code that matches the specification so that the Behat test passes.

Prefer human-readable step definitions to reliance on selectors.

Consider the following Behat step:

And I click on the element with XPath '//*[@id="edit-submit"]'

While we can read this and infer that edit-submit must be referring to the submit button on the page, it’s much better to have something like this:

And I submit the form

Then, in your FeatureContext.php file, you could write the code which clicks on the element with the XPath.
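As a sketch, the step definition could look like the following. In a real project the class would extend the Drupal Extension's RawDrupalContext; the getSession() Mink helper used below is assumed to come from that parent class.

```php
<?php

/**
 * Sketch of a custom step definition for FeatureContext.php. In a real
 * project this class would extend
 * Drupal\DrupalExtension\Context\RawDrupalContext, which provides the
 * getSession() helper assumed here.
 */
class FeatureContext {

  /**
   * @When I submit the form
   */
  public function iSubmitTheForm() {
    // The selector detail stays here, out of the feature file.
    $button = $this->getSession()->getPage()->find('xpath', '//*[@id="edit-submit"]');
    if ($button === NULL) {
      throw new \Exception('Could not find the submit button on the page.');
    }
    $button->click();
  }

}
```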

The idea is to have a terse description of what the feature should do to provide for a shared understanding between the client and the developer. In this case, the client doesn’t need to know about XPath or particular selectors; they will want to know what happens when someone submits the form.

Avoid adding more Behat tests than necessary.

It’s tempting to test everything with Behat, but Behat tests are more difficult to maintain and slower to run than unit tests. Consider whether a unit test is more appropriate before adding a new Behat test.

4.2.4. Unit testing

We should almost always include unit tests on custom development for Drupal 8 projects.

Unit testing vanilla Drupal 7 code is not possible. But by making use of the XAutoload module, one can write object-oriented, unit-testable code.

We use PHPUnit for unit tests.

Consider using phpspec to design the code needed for a feature.

The phpspec project assists developers in designing the specification for code. Code written using phpspec is going to be easier to unit test.

Use unit testing in conjunction with Behat tests.

Using Behat to capture every permutation of a feature is difficult and costly to do. It’s more efficient to use unit tests to handle testing the different inputs/outputs, and use Behat for a broader overview of the feature.
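For example, a small pure function (the function below is invented for illustration) can be covered for many input/output permutations cheaply in a unit test, leaving a single Behat scenario to verify the user-facing flow.

```php
<?php

/**
 * Hypothetical helper: formats a byline from an author name and an optional
 * organization. Pure functions like this are cheap to unit test exhaustively.
 */
function format_byline(string $author, ?string $organization): string {
  if ($organization === NULL || $organization === '') {
    return sprintf('By %s', $author);
  }
  return sprintf('By %s (%s)', $author, $organization);
}

// In a real project these permutations would be PHPUnit test methods.
assert(format_byline('Ada', NULL) === 'By Ada');
assert(format_byline('Ada', '') === 'By Ada');
assert(format_byline('Ada', 'Savas Labs') === 'By Ada (Savas Labs)');
```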

4.2.5. Manual testing

We should avoid manual testing as the primary means for verifying a feature’s functionality or the correctness of a bug fix, because the process is error-prone, time-consuming, and often tedious.

That aside, it’s good practice in bug reports and pull requests to provide a summary of steps to test a feature or reproduce a bug. Manual verification can help developers and code reviewers confirm that no new, uncaught regressions have occurred. This is also a good opportunity for the code reviewer to step back and assess the feature or fix in the broader context of the project.

4.2.6. Not testing

Circumstances exist in which including tests in a feature or a bug fix is not possible. In these cases, the developer and project manager should confer with the CTO and/or the Principal Director.

4.3. Travis CI

We tie the above together using an automated testing tool called Travis CI.

Because we have standardized our development environments on a combination of Docker, AWS S3 for seed databases, and a Makefile for setup, configuring Travis CI is pretty straightforward, and does not differ in substance from local setup instructions.
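A minimal Travis CI configuration along these lines illustrates the idea; this is a sketch, and the make targets are hypothetical stand-ins for whatever the project's Makefile actually defines.

```yaml
# Illustrative .travis.yml; the make targets mirror the project's Makefile,
# the same targets a developer would run locally.
language: generic
services:
  - docker
before_script:
  - make setup   # e.g. start containers and pull the seed database from S3
script:
  - make lint
  - make phpcs
  - make behat
```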

All projects with tests should have those tests run on a per branch and/or per pull request basis.

4.4. Integration with project management

When creating issues in Redmine for completion of a feature/scenario, the best practice is to create a subtask for the test or tests required for that issue. That way, we can add more detail to the requirements for the test, as well as an estimate. Categorize the subtask as “Test” so that the issue queue can be filtered by test-related issues, and so the total estimated/spent time on tests is visible.