Validation
Reproducibility
Better code structure
Better ability to refactor
Better code quality
Reduces frustration
Better documentation
Software validation is hard (really really hard)
Software involves a huge number of complex processes interacting with one another
Proving that software is correct is effectively impossible for anything non-trivial
A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools. - Douglas Adams
It is pretty easy to write down basic feature requirements for software (see HW4); it is really hard to write down all of the requirements without any ambiguity.
There is an impedance mismatch between spoken/written language and code - testing is a chance to explicitly show requirements instead of just describing (telling) them.
Your code will always have bugs; we should try to minimize them as much as possible, but being careful isn't enough.
It is necessary to plan for what to do if and when we find a bug (the most important thing is that once we have made a mistake, we make sure we will not make it again).
Idealized workflow:
Find a bug
Write a test that detects the bug
Fix the bug
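This workflow can be sketched with testthat; the `average` function and its bug are hypothetical, invented only to illustrate the find / test / fix cycle:

```r
library(testthat)

# Hypothetical bug report: average() returned NA when the input
# contained missing values.

# Step 3 - the fixed function (previously it was just mean(x)):
average = function(x) {
  mean(x, na.rm = TRUE)
}

# Step 2 - the regression test written when the bug was found; it will
# fail loudly if the bug is ever reintroduced.
test_that("average ignores missing values", {
  expect_equal(average(c(1, 2, 3, NA)), 2)
})
```

Keeping the test around after the fix is the point: the suite now remembers the mistake for us.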
testthat uses a hierarchical structure for the organization of tests:
Individual tests are constructed using an expectation, which runs code and compares it to an expected result.
Expectations for the same or very similar functions are grouped together within a test, along with any scaffolding code (e.g. initialization and configuration).
Finally, collections of tests are grouped into a context.
In testthat, expectations are constructed using the expect_that function, which is given an object and some condition to test using that object. Conditions are specified using built-in condition functions.
```r
expect_that(obj, cond())
```
is_true : truth
is_false : falsehood
is_a : inheritance
equals : equality with numerical tolerance
equals_reference : equality relative to a reference
is_equivalent_to : equality ignoring attributes
is_identical_to : exact identity
matches : string matching
prints_text : output matching
throws_error : error matching
gives_warning : warning matching
shows_message : message matching
takes_less_than : performance
```r
expect_that(1, equals(1))
expect_that(1, equals(2))
## Error: 1 not equal to 2
## Mean relative difference: 0.5
```
```r
m = matrix(1:4, 2, 2)
expect_that(m, is_a("matrix"))
expect_that(m, is_equivalent_to(1:4))
## Error: m not equal to expected
## target is numeric, current is matrix
```
```r
expect_that(m, is_identical_to(1:4))
## Error: m is not identical to 1:4. Differences:
## Attributes: < target is NULL, current is list >
## target is numeric, current is matrix
```
```r
v1 = 1:4
v2 = c(a=1L, b=2L, c=3L, d=4L)
expect_that(v1, is_equivalent_to(v2))
expect_that(v1, is_identical_to(v2))
## Error: v1 is not identical to v2. Differences:
## names for target but not for current
```
```r
expect_that(1, equals(1.01, tolerance=0.01))
expect_that(1, equals(1.01, tolerance=0.1))
```
expect_true
expect_false
expect_is
expect_equal
expect_equal_to_reference
expect_equivalent
expect_identical
expect_match
expect_output
expect_error
expect_warning
expect_message
expect_more_than
expect_less_than
expect_named
expect_null
```r
expect_equal(1, 1)
expect_equal(1, 2)
## Error: 1 not equal to 2
## Mean relative difference: 0.5
```
```r
m = matrix(1:4, 2, 2)
expect_is(m, "matrix")
expect_equivalent(m, 1:4)
## Error: m not equal to expected
## target is numeric, current is matrix
```
```r
expect_identical(m, 1:4)
## Error: m is not identical to 1:4. Differences:
## Attributes: < target is NULL, current is list >
## target is numeric, current is matrix
```
```r
v1 = 1:4
v2 = c(a=1L, b=2L, c=3L, d=4L)
expect_equivalent(v1, v2)
expect_identical(v1, v2)
## Error: v1 is not identical to v2. Differences:
## names for target but not for current
```
```r
expect_equal(1.01, 1, tolerance=0.01)
## Error: 1.01 not equal to 1
## Mean relative difference: 0.01
```
```r
expect_equal(1.01, 1, tolerance=0.1)
```
```r
totitle = function(x) {
  s = strsplit(x, " ")[[1]]
  paste(toupper(substring(s, 1, 1)), substring(s, 2),
        sep = "", collapse = " ")
}

test_that("string tests", {
  str1 = "Hello World!"
  str2 = totitle(str1)
  str3 = totitle(tolower(str2))
  str4 = totitle(toupper(str3))

  expect_equal(str1, str2)
  expect_equal(str1, str3)
  expect_equal(str1, str4)
})
## Error: Test failed: 'string tests'
## Not expected: str1 not equal to str4
## 1 string mismatches:
## x[1]: "HELLO WORLD!"
## y[1]: "Hello World!"
```
```r
totitle = function(x) {
  s = strsplit(x, " ")[[1]]
  paste(toupper(substring(s, 1, 1)), tolower(substring(s, 2)),
        sep = "", collapse = " ")
}

test_that("string tests", {
  str1 = "Hello World!"
  str2 = totitle(str1)
  str3 = totitle(tolower(str2))
  str4 = totitle(toupper(str3))

  expect_equal(str1, str2)
  expect_equal(str1, str3)
  expect_equal(str1, str4)
})
```
The general approach is to group tests into logical subgroups:
a context is a description / label that is applied to all subsequent tests, and which may be shown by the reporter when running the tests
in general the preference is for one context per test file (R script) - a single test file can be run using test_file (like source, but it does not pollute the global environment)
all test files are contained in a single directory - these test files can be run (in alphabetical order) using test_dir
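A minimal sketch of this layout, with made-up file and directory names (here a temporary directory stands in for the project's tests/ folder):

```r
library(testthat)

# One context per test file, all test files in a single directory.
test_path = file.path(tempdir(), "tests")
dir.create(test_path, showWarnings = FALSE)

# Write an illustrative test file, tests/test_arithmetic.R
writeLines(c(
  'context("Arithmetic")',
  '',
  'test_that("addition works", {',
  '  expect_equal(1 + 1, 2)',
  '})'
), file.path(test_path, "test_arithmetic.R"))

# Run a single file (like source(), but without polluting the
# global environment)
test_file(file.path(test_path, "test_arithmetic.R"))

# Run every test file in the directory, in alphabetical order
test_dir(test_path)
```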
https://github.com/Sta523-Fa14/hw_examples/tree/master/hw4/tests
Travis CI is a hosted, distributed continuous integration service.
What does that mean?
Travis CI integrates with Github repositories (travis-ci.org for public repos, travis-ci.com for private)
Once enabled, a .travis.yml file is added to the repository that specifies how to test the code
Travis uses lightweight virtual machines to run the code; .travis.yml configures that machine
For R testing, this means we need to install R, any additional dependencies, and then run the testing code.
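An r-travis configuration looked roughly like the sketch below. This is an illustration based on r-travis's documented setup, not the contents of the repository linked later; the package dependencies and test command are assumptions:

```yaml
# Build on Travis's C workers and bootstrap R via r-travis
language: c
before_install:
  - curl -OL https://raw.github.com/craigcitro/r-travis/master/scripts/travis-tool.sh
  - chmod 755 ./travis-tool.sh
  - ./travis-tool.sh bootstrap
install:
  - ./travis-tool.sh install_r testthat   # additional R dependencies (assumed)
script:
  - Rscript -e 'library(testthat); test_dir("tests")'
```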
R is not a first-class citizen on Travis, but r-travis handles most of the heavy lifting
Once Travis is enabled and knows how to test our code, any time you push to GitHub, Travis will take the revised code, test it, and report back whether or not the build passes all tests.
https://github.com/Sta523-Fa14/hw_examples/blob/master/.travis.yml
Above materials are derived in part from the following sources: