Intro
I have recently been reading up on code coverage analysis. There I discovered something called MC/DC, which stands for Modified Condition/Decision Coverage. Here's an example showing some of the different parts of code that need to be tested for coverage (from [1]):
if (Entree == Lobster) {                        // BRANCH
    Dessert = Pie;                              // STATEMENT
}
else if (Entree == Steak && Salad == Green) {   // MC/DC
    Order->Dessert = Cake;
}
In short, to test a complex logical expression one can't go through all possibilities, so MC/DC has been invented so that a minimal subset of tests may give maximum testability. It's like we learned in school when differentiating: put a finger over the factor treated as constant, differentiate the rest, and then take each factor in turn. Here it's about finding tests that would let a change in one input be visible in the output, in a forest of other inputs.
In [2] they explain that in some real-life aviation code they found a logical expression with 36 inputs. At 100 tests per second, testing all 2^36 combinations would take 21.79 years to complete. So MC/DC was invented to help in situations like this – and for much simpler logical expressions as well. «However, achieving MC/DC requires more thoughtful selection of the test cases.» and «Clearly, multiple condition coverage is impractical for systems such as these. MC/DC attempts to provide a practical alternative.»
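Just to check that arithmetic myself (my own sketch, not from [2]), assuming 365-day years:

#include <stdio.h>

// Checking [2]'s arithmetic: 2^36 input combinations at 100 tests/s
int main (void) {
    double tests   = (double)(1ULL << 36);               // 68719476736
    double seconds = tests / 100.0;
    double years   = seconds / (365.0 * 24.0 * 3600.0);
    printf ("%.0f tests take %.2f years\n", tests, years);  // 21.79
    return 0;
}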
[2] also shows that the and, or, xor and not gates all have so-called minimal test sets, where one doesn't need to test every possible input combination (according to Chilenski and Miller, referenced there). So, for a three-input and gate only four tests are needed, not 2^3 = 8: all three inputs high (test 1), and then each input low in turn (tests 2, 3, 4) – see the sketch below. This is sufficient to find any error in the and gate. So it shouldn't be necessary to use 21 years. Tools would know how to set up the minimal requirements. But I guess that tools might still find that run-time would be prohibitive and use other algorithms to try to do their best. Re-quoting the above: «MC/DC attempts to provide a practical alternative»
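Here is a minimal sketch (mine, not from [2]) of that four-test set for a three-input and gate. Tests 2–4 each differ from test 1 in exactly one input, and each flips the output – which is the MC/DC idea of showing that every input independently affects the result:

#include <stdio.h>
#include <stdbool.h>

static bool and3 (bool a, bool b, bool c) {
    return a && b && c;
}

int main (void) {
    const bool tests[4][3] = {
        {true,  true,  true },   // test 1: all high      -> true
        {false, true,  true },   // test 2: a low in turn -> false
        {true,  false, true },   // test 3: b low in turn -> false
        {true,  true,  false}    // test 4: c low in turn -> false
    };
    for (int i = 0; i < 4; i++) {
        printf ("test %d: %d%d%d -> %d\n", i + 1,
                tests[i][0], tests[i][1], tests[i][2],
                and3 (tests[i][0], tests[i][1], tests[i][2]));
    }
    return 0;
}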
As an aside, by the way: if you find && (logical-and, or just and) strange and confuse it with & (bitwise-and, or just bitand), even if you are a crooked C coder (like I admit I also have become after bygone years of honeymooning with Modula-2 and occam 2), then try:
#include <iso646.h> // In the C standard library since 1995!
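A minimal sketch of mine showing what the header gives you (it simply defines macros like and, or, not and bitand for the operators):

#include <iso646.h>    // and, or, not, bitand, bitor, ...
#include <stdbool.h>
#include <stdio.h>

int main (void) {
    bool steak = true;
    bool green = true;
    if (steak and green) {               // same as (steak && green)
        printf ("Dessert = Cake\n");
    }
    unsigned flags = 0x0F bitand 0x03;   // same as (0x0F & 0x03)
    printf ("flags = %u\n", flags);
    return 0;
}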
But syntax?
I quickly delved into more details. The code below is from [3]. This first example shows two logical expressions setting up d1 and d2. The truth tables are also shown, as well as a «thoughtful»(?) suggested test set:
Code example 1
int myFunc (bool c1, bool c2, bool c3) {
    bool d1 = c1 or c2;
    bool d2 = d1 and c3;
    if (d2) return 1;
    else return -1;
}

c1 c2   d1 = c1 or c2
-- --   -------------
F  F    F
F  T    T
T  F    T
T  T    T

d1 c3   d2 = d1 and c3
-- --   --------------
F  F    F
F  T    F
T  F    F
T  T    T

One test could be {TFF, FTF, FFT, TTT}   // (c1 c2 c3)

c1 c2 c3   d1 c3   d2
--------   -----   --
T  F  F    T  F    F
F  T  F    T  F    F
F  F  T    F  T    F
T  T  T    T  T    T
Then the logically equivalent code where d has been inlined, with a single, combined truth table, and the same test:
Code example 2
int myFunc (bool c1, bool c2, bool c3) {
    bool d2 = (c1 or c2) and c3;   // d1 doesn't exist, it's inlined
    if (d2) return 1;
    else return -1;
}

c1 c2 c3   d2 = (c1 or c2) and c3
-- -- --   ----------------------
F  F  F    F
F  F  T    F
F  T  F    F
F  T  T    T
T  F  F    F
T  F  T    T
T  T  F    F
T  T  T    T

Previous test suite does not satisfy MC/DC criteria anymore.

Previous test: {TFF, FTF, FFT, TTT}   // (c1 c2 c3)

c1 c2 c3   d2
--------   --
T  F  F    F
F  T  F    F
F  F  T    F
T  T  T    T
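As a side check, here is a minimal sketch of mine (not from [3]) that runs the suggested test suite through both versions:

#include <stdio.h>
#include <stdbool.h>
#include <iso646.h>

static int myFunc_v1 (bool c1, bool c2, bool c3) {
    bool d1 = c1 or c2;             // code example 1
    bool d2 = d1 and c3;
    return d2 ? 1 : -1;
}

static int myFunc_v2 (bool c1, bool c2, bool c3) {
    bool d2 = (c1 or c2) and c3;    // code example 2, d1 inlined
    return d2 ? 1 : -1;
}

int main (void) {
    const bool suite[4][3] = {      // {TFF, FTF, FFT, TTT}
        {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 1}
    };
    for (int i = 0; i < 4; i++) {
        printf ("%d%d%d: v1 = %2d, v2 = %2d\n",
                suite[i][0], suite[i][1], suite[i][2],
                myFunc_v1 (suite[i][0], suite[i][1], suite[i][2]),
                myFunc_v2 (suite[i][0], suite[i][1], suite[i][2]));
    }
    return 0;
}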
What surprised me was that they say that «Previous test suite does not satisfy MC/DC criteria anymore» when the test results look exactly the same! I found an explanation in [2]:
(Quote) The next issue involves the scope of “within a decision”. For example, consider the following code statements:
A := B or C;    (statement 1)
E := A and D;   (statement 2)
These two statements are logically equivalent to:
E := (B or C) and D;   (statement 3)
Statements 1, 2, and 3 all contain decisions, even though none of the statements are branch points such as an if statement. That is, a decision is not synonymous with a branch point. MC/DC applies to all decisions—not just those within a branch point. (End Quote)
In other words, MC/DC actually takes us into the code, line by line. And line by line I can quickly see that these aren't equal. [3] shows that a weak point of MC/DC actually is that it is syntax dependent:
«Paper states MC/DC criteria can be “cheated” and heavily depends on code structure»
«MC/DC is indeed highly sensitive to structure of implementation»
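To convince myself, here is a minimal sketch (mine, not from the references) that checks the «unique cause» form of MC/DC: for each condition the suite must contain two tests that differ in that condition only and flip the decision. For the inlined decision of code example 2, no pair in {TFF, FTF, FFT, TTT} differs in just one condition, so all three conditions come out as NO:

#include <stdio.h>
#include <stdbool.h>

static bool decision (bool c1, bool c2, bool c3) {
    return (c1 || c2) && c3;        // code example 2, d1 inlined
}

int main (void) {
    const bool suite[4][3] = {      // {TFF, FTF, FFT, TTT}
        {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 1}
    };
    for (int c = 0; c < 3; c++) {   // for each condition c1..c3
        bool covered = false;
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) {
                // tests i and j must differ in condition c only ...
                bool only_c_differs =
                    (suite[i][c] != suite[j][c]) &&
                    (suite[i][(c + 1) % 3] == suite[j][(c + 1) % 3]) &&
                    (suite[i][(c + 2) % 3] == suite[j][(c + 2) % 3]);
                // ... and must flip the decision outcome
                bool flips =
                    decision (suite[i][0], suite[i][1], suite[i][2]) !=
                    decision (suite[j][0], suite[j][1], suite[j][2]);
                if (only_c_differs && flips) covered = true;
            }
        }
        printf ("c%d independently affects the outcome: %s\n",
                c + 1, covered ? "yes" : "NO");
    }
    return 0;
}

Swap in the per-statement decisions of code example 1 (d1 = c1 or c2, and d2 = d1 and c3, checked separately) and the same suite passes – which is exactly the syntax dependence the quotes above point at.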
That's all. I hope this has inspired you to read the references; I have copy/pasted enough now. But I learned from this. I don't know what to do with this information, as my aquarium project at home probably won't be scrutinized by code coverage analysis. But I always think it's a good idea to learn a little more than I think I need to know.
And I know that MC/DC is neither a rock band like AC/DC nor anything to do with Direct Current. It isn't even close to any such thing. But again, it «attempts to provide a practical alternative».
Aside about formal models and IEC 61508 etc.
However, formal modeling and verification, like CSPm and tools like FDR, would visit all states. (Not entirely true: it does algebraic reduction to visit as few states as possible, which enables FDR to check larger systems. See [7] for a good summary of several techniques.) The tool also does semantics-preserving compressions (in that sense like MC/DC), but short-cutting at the cost of completeness is not an option, whatever the total run-time [4]. Then it's up to us to code like the model, or to have the code generated. If generated, I assume that all generated code is needed, and 100% is visited any way you see it. With a model of some system, at some level, all is formally verified. Even if some kind of coverage may still be required, I am not certain how that requirement would be handled.

It's like in the IEC 61508 Safety Critical standard, where 100% structural test coverage is required for 1) entry points, 2) statements, 3) branches and 4) conditions (MC/DC). 61508 has «Highly Recommended» (HR) for the first in the list for all SIL levels, but the last (MC/DC) is only HR for SIL 4. 61508 also talks about formal methods, which is what this aside was supposed to be about. With formal methods used, I imagine that the coverage matters will have to be seen through some mirror. I am not certain what an assessor would say if I showed him that all the code was an implementation of a formal model that had been verified ok, and that I hadn't done any code coverage analysis, only black box testing of the system. Would this be okayed by an IEC 61508 assessor? How would he interpret «Where 100 % coverage cannot be achieved (e.g. statement coverage of defensive code), an appropriate explanation should be given.»? This is from IEC 61508 Part 3 and Part 7. In Part 7 chapter C.5.8 Structure-based testing the term «code coverage» is used – but the whole Part 3 → Part 7 reference is «informative», not «normative». And then, MC/DC is only mentioned in Part 3, not Part 7. Hmm.. (I hope I understand this paragraph the next time I read it)
A firebrand
The fact that I include these quotes (from [5]) says so much that I don't need to comment further:
Few developers admit that they do only random or partial testing and many will tell you that they do complete testing for some assumed vision of complete. Such visions include notions such as: «Every line of code has been reached,» which, from the perspective of theory of computation, is pure nonsense in terms of knowing whether the code does what it should.
..
Another client of mine also had too many unit tests. I pointed out to them that this would decrease their velocity, because every change to a function should require a coordinated change to the test. They informed me that they had written their tests in such a way that they didn’t have to change the tests when the functionality changed. That of course means that the tests weren’t testing the functionality, so whatever they were testing was of little value.
..
When developers write code they insert about three system-affecting bugs per thousand lines of code. If we randomly seed my client’s code base — which includes the tests — with such bugs, we find that the tests will hold the code to an incorrect result more often than a genuine bug will cause the code to fail!
More uses of the «coverage» term
Another type of «coverage» isn't (an attempt at) test coverage analysis as discussed above, but the kind used by a tool that takes a program through all possible paths of the code from A to B. (Of course, a formal methods tool will do this through the formal language's code, so this may be seen as a third kind of coverage analysis?) I'm thinking of XMOS' XTA Timing Analyser. (Standard disclaimer: I receive no money, gifts, no ads, no nothing.) From [6] I quote:
Our XMOS Timing Analyser (XTA) analyzes your application binary and reports the worst and best case timing paths through your code. It provides 100% coverage of your code without the need for a test bench, and eliminates the need to over specify processing power or timing. The XTA reports any slack in the timing so you can add more functionality, or reduce operating frequency to save power.
XTA is a unique tool that uses the determinism of the xCORE architecture to guarantee the performance and behavior of your code. The XTA API lets you add the tool into your build systems, and verify application timing requirements every time you build your application.
Update Sep 2021: The XTA is no longer supported in the XTC Tools. See C plus lib_xcore black-boxes xC
References
Wiki-refs: MC/DC
[1] VectorCAST Interactive Tutorials, see http://manualzz.com/doc/6646122/vectorcast-interactive-tutorials
[2] A Practical Tutorial on Modified Condition/Decision Coverage (NASA/TM-2001-210876) by Hayhurst, Veerhusen, Collins and Rierson (2001), see https://shemesh.larc.nasa.gov/fm/papers/Hayhurst-2001-tm210876-MCDC.pdf
[3] The Effect of Program and Model Structure on MC/DC Test Adequacy Coverage, in ICSE ’08: Proceedings of the 30th International Conference on Software Engineering, by Rajan and Whalen, presented by Arno Fiva, see http://se.inf.ethz.ch/old/teaching/2009-S/0276/slides/fiva.pdf
[4] FDR: The CSP refinement checker, now with a new lightning-fast multicore refinement-checking engine, University of Oxford, see https://www.cs.ox.ac.uk/projects/fdr/
[5] Why Most Unit Testing is Waste by James O. Coplien, see http://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
    – Also read this interview with James Coplien, by Javier Garzás, at http://www.javiergarzas.com/en/2013/07/24/interview-to-jim-coplien-1/
    – Also read TDD is dead. Long live testing by David Heinemeier Hansson, see http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-testing.htm. It was that note that led me «down the rabbit hole» to James Coplien.
[6] XTA: Timing Analyzer: guarantee timing closure, deterministic architecture, integrate with build systems, see http://www.xmos.com/products/tools#xscope – dead URL Sept 2021, see the update above. Instead search here: https://www.xmos.ai/?s=xta
[7] Formal Methods for System Development (a Master’s thesis) by Inge Fredriksen, see http://www.teigfam.net/oyvind/work/studentprosjekt.html