APPROACH
CODE QUALITY, UNIT TESTS, COMPONENT INTEGRATION TESTS, BDD AND TDD
TESTING COSTS
• For unit tests, a rough estimate puts the increase in development time between 10
and 30%
• The reduction in issues is usually between 40 and 90%
• Summing up, this yields a reduction of nearly 50% in overall development costs
TOPICS
• Unit tests
• Code Quality
• Safe Refactoring patterns
• Components integration tests
• Code coverage
• Why most unit testing is waste (and its refutation)
• References
UNIT TESTING
• Every bug that involves the source code must, first, be reproduced inside a failing unit test
• Every bug must be shipped with
• Actions to reproduce the bug (even from an End To End perspective)
• Current (wrong) outcomes
• Expected (correct) outcomes
• Every source code bug should produce a corresponding unit test named
[PRODUCT_ID]_[BUG_ID]
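A minimal sketch of the [PRODUCT_ID]_[BUG_ID] naming convention. The product id "SHOP", bug id "1042" and the pricing logic are all hypothetical; the point is that the test records the reproduction steps, the wrong outcome and the expected outcome:

```java
// Hypothetical bug SHOP_1042: a flat discount was applied after tax
// instead of before it. The test name follows [PRODUCT_ID]_[BUG_ID].
public class SHOP_1042_Test {

    // Fixed production logic: the flat discount is applied before tax.
    static double total(double net, double taxRate, double flatDiscount) {
        return (net - flatDiscount) * (1 + taxRate);
    }

    public static void main(String[] args) {
        // Actions to reproduce: net 100, 10% tax, flat discount 10.
        // Current (wrong) outcome before the fix: 100.0 (discount after tax).
        // Expected (correct) outcome: 99.0.
        double actual = total(100.0, 0.10, 10.0);
        if (Math.abs(actual - 99.0) > 1e-9) {
            throw new AssertionError("SHOP_1042 regressed: " + actual);
        }
        System.out.println("SHOP_1042 passed");
    }
}
```

Written this way, the failing test is the bug report: once it passes, the bug is fixed and it stays in the suite as a regression guard.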
UNIT TEST: NEW FEATURES
• The analyst and/or test team must ship sample data. This is all the more important when dealing
with bare algorithms
• For every feature a directory named [PRODUCT_ID]_[FEATURE_ID] will be created, which will
contain all the unit tests for the given feature
• Inside the tests directory a test must exist for each non-trivial class implementation, in the
form [ClassName]Test
• Non-trivial implementations consist of (but are not limited to)
• Requirements (even trivial ones)
• Every artifact not mentioned in the requirements that contains logic
UNIT TEST: CORRECTNESS
• You can't write any production code until you have first written a failing unit test.
• You can't write more of a unit test than is sufficient to fail, and not compiling is failing.
• You can't write more production code than is sufficient to pass the currently failing unit
test.
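The three rules above describe one tight loop: write a test that fails (even by not compiling), then write only the code needed to pass it. A condensed sketch of the end state, with a hypothetical requirement ("stutter repeats each character twice"):

```java
// TDD loop sketch. Step 1: the test below was written first and failed
// (stutter() did not exist -- not compiling counts as failing).
// Step 2: only enough production code was written to make it pass.
public class StutterTddDemo {

    // Production code, grown only as far as the test demanded.
    static String stutter(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            out.append(c).append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The failing test written first, now passing.
        if (!stutter("ab").equals("aabb")) throw new AssertionError();
        System.out.println("test passed: " + stutter("ab"));
    }
}
```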
UNIT TEST: MICRO-ARCHITECTURE
• Every requirement written in the Requirements document must have a corresponding test, even if the
implementation seems trivial. These are the “business rules” and are of the utmost
importance
• Every non-trivial algorithm that is functional to the implementation but not explicitly specified
in the requirements must be covered by unit tests
• Getters and setters must not contain any business logic
• Unit tests must run on a computer even when disconnected from the network and
without any external server (DB, HTTP, etc.) …or they become integration tests
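Staying network-free usually means injecting an in-memory fake in place of the real dependency. A sketch with illustrative names (RateSource, Converter are not from the original text):

```java
// The dependency the production code needs (normally an HTTP client).
interface RateSource { double rate(String currency); }

// In-memory fake with a canned value: no network, no external server.
class FakeRateSource implements RateSource {
    public double rate(String currency) { return 2.0; }
}

// Production class receives the dependency via its constructor,
// so the unit test can swap in the fake.
class Converter {
    private final RateSource src;
    Converter(RateSource src) { this.src = src; }
    double toEur(String currency, double amount) { return amount * src.rate(currency); }
}

public class OfflineUnitTestDemo {
    public static void main(String[] args) {
        Converter c = new Converter(new FakeRateSource());
        double v = c.toEur("USD", 10.0);
        if (v != 20.0) throw new AssertionError();
        System.out.println("offline test passed: " + v);
    }
}
```

The same test against a real RateSource (DB, HTTP) would be an integration test, not a unit test.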
UNIT TEST: BOY SCOUT RULE
• Unit test implementation is facilitated by programming best practices; it becomes almost
“natural” for well-written code
• Unit tests should become drivers that push developers towards better coding and a
better product
• Code quality principles, in turn, become facilitators for unit test implementation
CODE QUALITY
• Extract Method: Turns a code fragment into a method whose name describes its purpose.
Helps improve the callers' readability, and may help reduce duplication.
• Inline Method: Replaces calls to a method by copies of its body. Brings code into one
place in preparation for other refactorings.
• Rename Method: Can help client code become more readable. Often used after other
refactorings if the method's responsibilities are now better understood.
• Introduce Explaining Variable: Save the result of (part of) a complicated expression in a
temporary variable. Helps to improve an algorithm's readability.
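A small before/after sketch of Extract Method territory, here using Introduce Explaining Variable on a hypothetical shipping rule (all names are illustrative):

```java
public class RefactorDemo {
    // Before: one opaque boolean expression.
    static boolean canShipBefore(int weightKg, double priceEur, boolean express) {
        return (weightKg <= 30 && priceEur > 0) && (!express || weightKg <= 10);
    }

    // After: explaining variables name the intent of each sub-condition.
    static boolean canShipAfter(int weightKg, double priceEur, boolean express) {
        boolean withinStandardLimits = weightKg <= 30 && priceEur > 0;
        boolean expressWeightOk = !express || weightKg <= 10;
        return withinStandardLimits && expressWeightOk;
    }

    public static void main(String[] args) {
        // A refactoring must preserve behavior; spot-check one input.
        if (canShipBefore(12, 5.0, true) != canShipAfter(12, 5.0, true))
            throw new AssertionError();
        System.out.println("behavior preserved");
    }
}
```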
COMMON REFACTORING PATTERNS-II
• Inline Temp: Replace uses of a temporary variable by its value. Often used to bring code
together in preparation for other refactorings such as Extract Method.
• Add Parameter: Passes extra information into a method. Often used during larger code
restructuring.
• Remove Parameter: Removes a parameter that is no longer used by a method. Helps to keep
the method's interface clean and readable for client code.
• Extract Class: A class has too many responsibilities, so you split out part of it. The new class's
name contributes to the code's domain language; testability and reuse may also be improved.
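Extract Class in miniature, with hypothetical names: address fields split out of an invoice class, so the new class can be tested and reused on its own:

```java
// Extracted class: previously street/city were raw fields of Invoice.
class Address {
    final String street, city;
    Address(String street, String city) { this.street = street; this.city = city; }
    String label() { return street + ", " + city; }
}

class Invoice {
    final Address billing; // the extracted class, independently testable
    final double amount;
    Invoice(Address billing, double amount) { this.billing = billing; this.amount = amount; }
}

public class ExtractClassDemo {
    public static void main(String[] args) {
        Invoice inv = new Invoice(new Address("Main St 1", "Rome"), 99.0);
        System.out.println(inv.billing.label());
    }
}
```

"Address" now belongs to the code's domain language, as the slide describes.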
COMMON REFACTORING PATTERNS-III
• Introduce Parameter Object: When several methods take the same bunch of parameters,
creates a new object that represents the bunch. This is another way of finding new classes
and thus new domain concepts.
• Replace Magic Number with Symbolic Constant: Gives a name to a literal number.
Improves code readability, and can help to identify subtle dependencies between
algorithms.
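The two patterns above, sketched together (assumes Java 16+ for records; DateRange and MAX_RESULTS are illustrative names):

```java
public class ParameterObjectDemo {
    // Replace Magic Number with Symbolic Constant: was a bare "50".
    static final int MAX_RESULTS = 50;

    // Introduce Parameter Object: the (start, end) pair that several
    // methods kept taking separately becomes one named concept.
    record DateRange(String start, String end) { }

    static String query(DateRange range) {
        return "rows in [" + range.start() + ", " + range.end()
                + "], limit " + MAX_RESULTS;
    }

    public static void main(String[] args) {
        System.out.println(query(new DateRange("2024-01-01", "2024-01-31")));
    }
}
```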
COMMON REFACTORING PATTERNS-EMERGENCY
[Side-by-side code comparison: original vs. refactored]
COMPONENT INTEGRATION TESTS
• Always a kind of unit test, but with a larger scope, usually to verify the coordinators
• They usually require a more complex initialization, which can be composed of several steps
• These must be descriptive tests; a proposal is the Gherkin syntax
• And they can fall into BDD testing practices
• Developers running the Gherkin scripts generate the stubs for the various phases
• Every Gherkin line is associated with a parametrized function
• The functions must be filled in by the developers, e.g. when a line says that a row is
inserted, the DAO mock should be initialized
• The functions and their parameters are then matched via regular expressions, and
tables of possible values are added
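How a Gherkin line maps to a parametrized step function, reduced to its essence. This is a simplified simulation of what a framework such as Cucumber does; the step text, regex and DAO wording are illustrative:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GherkinStepDemo {
    // Step pattern: parameters are captured by a regular expression.
    static final Pattern GIVEN_ROW =
        Pattern.compile("a user named \"([^\"]+)\" with (\\d+) orders");

    // The stub the developer fills in, e.g. seeding a DAO mock.
    static String givenUserRow(String name, int orders) {
        return "mock DAO seeded: " + name + "/" + orders;
    }

    public static void main(String[] args) {
        String line = "a user named \"Ada\" with 3 orders";
        Matcher m = GIVEN_ROW.matcher(line);
        if (!m.matches()) throw new AssertionError("step not matched");
        System.out.println(givenUserRow(m.group(1), Integer.parseInt(m.group(2))));
    }
}
```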
CIT: BDD, TECHNICALITIES
• The Spring context will be filled directly through the GIVEN functions, in a programmatic
way, to keep the scope limited (see https://stackoverflow.com/a/14716542 )
• The various functions can be composed to create more complex ones, e.g. several GIVENs
can be coordinated to produce a bigger startup
• When testers need a new verification they can simply pick from the already prepared
functions and add a text (.feature) file with the new ordered GIVEN-WHEN-THEN
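What such a .feature file might look like once the prepared step functions exist (scenario content is invented for illustration):

```gherkin
Feature: order checkout (illustrative)
  Scenario: discounted order
    Given a user named "Ada" with 3 orders
    And a catalog containing product "Book" priced 10.00
    When the user orders 2 "Book"
    Then the invoice total is 20.00
```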
WHY MOST UNIT TESTING IS WASTE
• https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
• Keep regression tests around for up to a year — but most of those will be system-level tests rather than unit tests.
• Keep unit tests that test key algorithms for which there is a broad, formal, independent oracle of correctness, and for which there is
ascribable business value.
• Except for the preceding case, if X has business value and you can test X with either a system test or a unit test, use a system test: context
is everything.
• Design a test with more care than you design the code.
• Testing can’t replace good development: a high test failure rate suggests you should shorten development intervals, perhaps radically, and
make sure your architecture and design regimens have teeth
• If you find that individual functions being tested are trivial, double-check the way you incentivize developers’ performance. Rewarding
coverage or other meaningless metrics can lead to rapid architecture decay.
WHY MOST UNIT TESTING IS WASTE-RESPONSE
• Delete tests that haven’t failed in a year. James argues that unit tests that haven’t
failed in a year provide no information, and can be thrown out. But if a unit test fails, it
fails as you are developing the code. It is similar to a compilation failure. You fix it
immediately. You never check in code where the unit tests are failing. So the tests fail, but
the failures are transient.
• Complete testing is not possible. In both the original and follow-up article, James
talks about how it is impossible to completely test the code. The state-space as defined
by {Program Counter, System State} is enormous. This is true, but it applies equally to
integration testing, and is thus not an argument against unit testing.
WHY MOST UNIT TESTING IS WASTE-RESPONSE
• We don’t know what parts are used, and how. In the example of the map in the follow-up article,
James points out that maybe the map will never hold more than five items. That may be true, but when
we do integration testing we are still only testing. Maybe we will encounter more than five items in
production. In any case, it is prudent to make it work for larger values anyway. It is a trade-off I am willing
to make: the cost is low, and a lot of the logic is the same, whether the maximum usage size is low or
high.
• What is correct behavior? James argues that the only tests that have business value are those directly
derived from business requirements. Since unit tests are only testing building blocks, not the complete
function, they cannot be trusted. They are based on programmers’ fantasies about how the function should
work. But programmers break down requirements into smaller components all the time – this is how you
program. Sometimes there are misunderstandings, but that is the exception, not the rule, in my opinion.
WHY MOST UNIT TESTING IS WASTE-RESPONSE
• Refactoring breaks tests. Sometimes when you refactor code, you break tests. But my
experience is that this is not a big problem. For example, a method signature changes, so
you have to go through and add an extra parameter in all tests where it is called. This can
often be done very quickly, and it doesn’t happen very often. This sounds like a big
problem in theory, but in practice it isn’t.
• Asserts. James recommends turning unit tests into asserts. Asserts can be useful, but
they are not a substitute for unit tests. If an assert fails, there is still a failure in the
production system. If something can be covered by an assert but can also be unit tested, it is better to
find the problem when testing, not in production.
CODE COVERAGE
• Coverage must not be calculated on getters, setters and possibly constructors. None of these kinds of
methods may contain any true logic
• The proposed framework is JaCoCo, easy to integrate with Maven, Jenkins and Atlassian
Bamboo
• The coverage can be pushed up to 60%, BUT
• It should be adapted to the context
• It may even be less, but it must cover the functions carrying business value
• THE COVERAGE MUST NOT BE A FIXED METRIC, or some developers will try any kind of
trick to reach the coverage (been there, done that!!)
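A minimal Maven wiring sketch for JaCoCo; the version number is an assumption, so pin it to the current release. Deliberately no `check` goal with a hard threshold, in line with the point above that coverage must not be a fixed metric:

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.12</version> <!-- assumed version, verify before use -->
  <executions>
    <!-- Attaches the JaCoCo agent to the surefire JVM. -->
    <execution>
      <goals><goal>prepare-agent</goal></goals>
    </execution>
    <!-- Generates the HTML/XML coverage report during verify. -->
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals><goal>report</goal></goals>
    </execution>
  </executions>
</plugin>
```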
GOALS BY ROLE (OTHER THAN STANDARD ONES!)
• Analysts
• Propose sample data for each UseCase/Story/Algorithm
• Tester
• Define context, expected and wrong outcomes for every bug found
• Write down the Gherkin scripts to ease regression and integration tests
• Developers
• Create unit tests for bugs and features
• Implement the Gherkin scripts for the testers
• Progressively refactor the old code (unit testing it!)
• Verify the coverage extension and meaningfulness
• Architects/Technical Leaders/Coaches
• Support all teams to reach the various goals
REFERENCES
• Enrico Da Ros
• Linkedin: https://www.linkedin.com/in/enricodaros/
• Github: https://github.com/kendarorg
• WebSite: http://www.kendar.org