Writing Unit Tests¶
Brownie utilizes the pytest framework for unit testing. Pytest is a mature, feature-rich test framework. It lets you write small tests with minimal code, scales well for large projects, and is highly extendable.
To run your tests:
$ brownie test
This documentation provides a quick overview of basic pytest usage, with an emphasis on features that are relevant to Brownie. Many components of pytest are only explained partially - or not at all. If you wish to learn more about pytest you should review the official pytest documentation.
Getting Started¶
Test File Structure¶
Pytest performs a test discovery process to locate functions that should be included in your project’s test suite.
- Tests must be stored within the tests/ directory of your project, or a subdirectory thereof.
- Filenames must match test_*.py or *_test.py.
Within the test files, the following methods will be run as tests:
- Functions outside of a class prefixed with test.
- Class methods prefixed with test, where the class is prefixed with Test and does not include an __init__ method (both patterns are shown in the sketch below).
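For example, both of the following would be collected by pytest. This is a minimal sketch; the file name, function names, and assertions are purely illustrative.

# tests/test_example.py

def test_standalone_function():
    # collected: a function outside of a class, prefixed with "test"
    assert 1 + 1 == 2

class TestGroupedCases:
    # collected: the class is prefixed with "Test" and defines no __init__
    def test_method_case(self):
        assert "brownie".upper() == "BROWNIE"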
Writing your First Test¶
The following example is a very simple test using Brownie and pytest, verifying that an account balance has correctly changed after performing a transaction.
from brownie import accounts

def test_account_balance():
    balance = accounts[0].balance()
    accounts[0].transfer(accounts[1], "10 ether", gas_price=0)

    assert balance - "10 ether" == accounts[0].balance()
Fixtures¶
A fixture is a function that is applied to one or more test functions, and is called prior to the execution of each test. Fixtures are used to set up the initial conditions required for a test.
Fixtures are declared using the @pytest.fixture decorator. To pass a fixture to a test, include the fixture name as an input argument for the test:
import pytest

from brownie import Token, accounts

@pytest.fixture
def token():
    return accounts[0].deploy(Token, "Test Token", "TST", 18, 1000)

def test_transfer(token):
    token.transfer(accounts[1], 100, {'from': accounts[0]})

    assert token.balanceOf(accounts[0]) == 900
In this example the token fixture is called prior to running test_transfer. The fixture returns a deployed Contract instance which is then used in the test.
Fixtures can also be included as dependencies of other fixtures:
import pytest

from brownie import Token, accounts

@pytest.fixture
def token():
    return accounts[0].deploy(Token, "Test Token", "TST", 18, 1000)

@pytest.fixture
def distribute_tokens(token):
    for i in range(1, 10):
        token.transfer(accounts[i], 100, {'from': accounts[0]})
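A test that requests distribute_tokens triggers both fixtures, so the transfers have already been made by the time the test body runs. A minimal sketch; the test name and assertion are illustrative:

def test_distribution(distribute_tokens, token, accounts):
    # the distribute_tokens fixture has already sent 100 tokens to each of accounts[1] through accounts[9]
    assert token.balanceOf(accounts[1]) == 100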
Brownie Pytest Fixtures¶
Brownie provides fixtures that simplify interacting with and testing your project. Most core Brownie functionality can be accessed via a fixture rather than an import statement. For example, here is the earlier test rewritten to use Brownie fixtures rather than imports:
import pytest

@pytest.fixture
def token(Token, accounts):
    return accounts[0].deploy(Token, "Test Token", "TST", 18, 1000)

def test_transfer(token, accounts):
    token.transfer(accounts[1], 100, {'from': accounts[0]})
    assert token.balanceOf(accounts[0]) == 900
See the Pytest Fixtures Reference for information about all available fixtures.
Fixture Scope¶
The default behaviour for a fixture is to execute each time it is required for a test. By adding the scope parameter to the decorator, you can alter how frequently the fixture executes. Possible values for scope are: function, class, module, or session.
Expanding upon our example:
import pytest

@pytest.fixture(scope="module")
def token(Token, accounts):
    return accounts[0].deploy(Token, "Test Token", "TST", 18, 1000)

def test_approval(token, accounts):
    token.approve(accounts[1], 500, {'from': accounts[0]})
    assert token.allowance(accounts[0], accounts[1]) == 500

def test_transfer(token, accounts):
    token.transfer(accounts[1], 100, {'from': accounts[0]})
    assert token.balanceOf(accounts[0]) == 900
By applying a module scope to the token fixture, the contract is only deployed once and the same Contract instance is used for both test_approval and test_transfer.
Fixtures of higher scopes (such as session or module) are always instantiated before lower-scoped fixtures (such as function). The relative order of fixtures of the same scope follows the declared order in the test function and honours dependencies between fixtures. The only exception to this rule is isolation fixtures, which are explained below.
Isolation Fixtures¶
In many cases you will want to isolate your tests from one another by resetting the local environment. Without isolation, it is possible that the outcome of a test will depend on actions performed in a previous test.
Brownie provides two fixtures that are used to handle isolation:
- module_isolation is a module-scoped fixture. It resets the local chain before and after completion of the module, ensuring a clean environment for this module and that the results of it will not affect subsequent modules.
- fn_isolation is function-scoped. It additionally takes a snapshot of the chain before running each test, and reverts to it when the test completes. This allows you to define a common state for each test, reducing repetitive transactions.
Isolation fixtures are always the first fixture within their scope to execute. You can be certain that any action performed within a function-scoped fixture will happen after the isolation snapshot.
To apply an isolation fixture to all tests in a module, require it in another fixture and include the autouse parameter:
import pytest

@pytest.fixture(scope="module", autouse=True)
def shared_setup(module_isolation):
    pass
You can also place this fixture in a conftest.py file to apply it across many modules.
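For example, a conftest.py along these lines would give every test it covers a snapshot-based clean state via fn_isolation. A minimal sketch; the wrapping fixture name is illustrative:

# tests/conftest.py
import pytest

@pytest.fixture(autouse=True)
def isolation(fn_isolation):
    # autouse means every test in this directory gets the pre-test snapshot and post-test revert
    pass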
Handling Reverted Transactions¶
When running tests, transactions that revert raise a VirtualMachineError exception. To write assertions around this you can use brownie.reverts as a context manager. It functions very similarly to pytest.raises.
import brownie

def test_transfer_reverts(accounts, Token):
    token = accounts[0].deploy(Token, "Test Token", "TST", 18, 1e23)
    with brownie.reverts():
        token.transfer(accounts[1], 1e24, {'from': accounts[0]})
You may optionally include a string as an argument. If given, the error string returned by the transaction must match it in order for the test to pass.
import brownie

def test_transfer_reverts(accounts, Token):
    token = accounts[0].deploy(Token, "Test Token", "TST", 18, 1e23)
    with brownie.reverts("Insufficient Balance"):
        token.transfer(accounts[1], 1e24, {'from': accounts[0]})
Developer Revert Comments¶
Each revert string adds a minimum of 20,000 gas to your contract deployment cost, and increases the cost for a function to execute. Including a revert string for every require and revert statement is often impractical and sometimes simply not possible due to the block gas limit.
For this reason, Brownie allows you to include revert strings as source code comments that are not included in the bytecode but are still accessible via TransactionReceipt.revert_msg. This way you can write tests that target a specific require or revert statement without increasing gas costs.
Revert string comments must begin with // dev: in Solidity, or # dev: in Vyper. Priority is always given to compiled revert strings. Some examples:
function revertExamples(uint a) external {
    require(a != 2, "is two");
    require(a != 3); // dev: is three
    require(a != 4, "cannot be four"); // dev: is four
    require(a != 5); // is five
}
- Line 2 will use the given revert string "is two".
- Line 3 will substitute in the string supplied in the comment: "dev: is three".
- Line 4 will use the given string "cannot be four" and ignore the substitution string.
- Line 5 will have no revert string. The comment did not begin with "dev:" and so is ignored.
If the above function is executed in the console:
>>> tx = test.revertExamples(3)
Transaction sent: 0xd31c1c8db46a5bf2d3be822778c767e1b12e0257152fcc14dcf7e4a942793cb4
test.revertExamples confirmed (dev: is three) - block: 2 gas used: 31337 (6.66%)
<Transaction object '0xd31c1c8db46a5bf2d3be822778c767e1b12e0257152fcc14dcf7e4a942793cb4'>
>>> tx.revert_msg
'dev: is three'
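Because the dev comment becomes the transaction's revert_msg, you can target it with brownie.reverts just like a compiled revert string. A minimal sketch, assuming a fixture named test that deploys the contract shown above:

import brownie

def test_reverts_on_dev_comment(accounts, test):
    # `test` is assumed to be a fixture returning the deployed contract
    with brownie.reverts("dev: is three"):
        test.revertExamples(3, {'from': accounts[0]})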
Parametrizing Tests¶
The @pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a typical example of a parametrized test function, checking that a certain input results in an expected output:
import pytest

@pytest.mark.parametrize('amount', [0, 100, 500])
def test_transferFrom_reverts(token, accounts, amount):
    token.approve(accounts[1], amount, {'from': accounts[0]})
    assert token.allowance(accounts[0], accounts[1]) == amount
In the example the @parametrize decorator defines three different values for amount. test_transferFrom_reverts is executed three times, once with each value.
You can achieve a similar effect with the @given decorator to automatically generate parametrized tests from a defined range:
from brownie.test import given, strategy

@given(amount=strategy('uint', max_value=1000))
def test_transferFrom_reverts(token, accounts, amount):
    token.approve(accounts[1], amount, {'from': accounts[0]})
    assert token.allowance(accounts[0], accounts[1]) == amount
This technique is known as property-based testing. To learn more, read Property-Based Testing.
Testing against Other Projects¶
The pm fixture provides access to packages that have been installed with the Brownie package manager. Using this fixture, you can write test cases that verify interactions between your project and another project.
pm is a function that accepts a project ID as an argument and returns a Project object. This way you can deploy contracts from the package and deliver them as fixtures to be used in your tests:
1 2 3 4 | @pytest.fixture(scope="module")
def compound(pm, accounts):
ctoken = pm('defi.snakecharmers.eth/compound@1.1.0').CToken
yield ctoken.deploy({'from': accounts[0]})
|
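The fixture can then be requested like any other test fixture. The assertion below is only an illustrative sanity check that the deployment succeeded:

def test_compound_deploys(compound):
    # the fixture yields a deployed Contract instance from the installed package
    assert compound.address is not None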
Be sure to add required testing packages to your project dependency list.
Running Tests¶
To run the complete test suite:
$ brownie test
Or to run a specific test:
$ brownie test tests/test_transfer.py
Test results are saved at build/tests.json. This file holds the results of each test, coverage analysis data, and hashes that are used to determine if any related files have changed since the tests last ran. If you abort test execution early via a KeyboardInterrupt, results are only saved for modules that fully completed.
Only Running Updated Tests¶
After the test suite has been run once, you can use the --update flag to only repeat tests where changes have occurred:
$ brownie test --update
A module must use the module_isolation or fn_isolation fixture in every test function in order to be skipped in this way.
The pytest console output will represent skipped tests with an s, but it will be colored green or red to indicate if the test passed when it last ran.
If coverage analysis is also active, tests that previously completed but were not analyzed will be re-run. The final coverage report will include results for skipped modules.
Brownie compares hashes of the following items to check if a test should be re-run:
- The bytecode for every contract deployed during execution of the test
- The AST of the test module
- The AST of all conftest.py modules that are accessible to the test module
Interactive Debugging¶
The --interactive flag allows you to debug your project while running your tests:
$ brownie test --interactive
When using interactive mode, Brownie immediately prints the traceback for each failed test and then opens a console. You can interact with the deployed contracts and examine the transaction history to help determine what went wrong.
- Deployed ProjectContract objects are available within their associated ContractContainer.
- TransactionReceipt objects are in the TxHistory container, available as history.
- Use chain.undo and chain.redo to move backward and forward through recent transactions.
Once you are finished, type quit() to continue with the next test.
See Inspecting and Debugging Transactions for more information on Brownie’s debugging functionality.
Evaluating Gas Usage¶
To generate a gas profile report, add the --gas flag:
$ brownie test --gas
When the tests complete, a report will display:
Gas Profile:
Token <Contract>
├─ constructor - avg: 1099591 low: 1099591 high: 1099591
├─ transfer - avg: 43017 low: 43017 high: 43017
└─ approve - avg: 21437 low: 21437 high: 21437
Storage <Contract>
├─ constructor - avg: 211445 low: 211445 high: 211445
└─ set - avg: 21658 low: 21658 high: 21658
Evaluating Coverage¶
To check your unit test coverage, add the --coverage flag:
$ brownie test --coverage
When the tests complete, a report will display:
contract: Token - 80.8%
Token.allowance - 100.0%
Token.approve - 100.0%
Token.balanceOf - 100.0%
Token.transfer - 100.0%
Token.transferFrom - 100.0%
SafeMath.add - 75.0%
SafeMath.sub - 75.0%
Token.<fallback> - 0.0%
Coverage report saved at reports/coverage.json
Brownie outputs a % score for each contract method that you can use to quickly gauge your overall coverage level. A detailed coverage report is also saved in the project's reports folder, which can be viewed via the Brownie GUI. See Viewing Reports for more information.
You can exclude specific contracts or source files from this report by modifying your project’s configuration file.
Using xdist for Distributed Testing¶
Brownie is compatible with the pytest-xdist plugin, allowing you to parallelize test execution. In large test suites this can greatly reduce the total runtime.
You may wish to read an overview of how xdist works if you are unfamiliar with the plugin.
To run your tests in parallel, include the -n flag:
$ brownie test -n auto
Tests are distributed to workers on a per-module basis. An isolation fixture must be applied to every test being executed, or xdist will fail after collection. This is because without proper isolation it is impossible to ensure consistent behaviour between test runs.