The verification of software has always been one of the fundamental responsibilities of our profession. As computing resources grew, so did the complexity of our software systems, and the sophistication of the languages used to construct them grew as well. Unfortunately, it took a long while for our development processes to catch up, particularly those related to system verification. Over the past few decades, we've seen dramatic changes in our approach to building quality software. Agile methodologies have brought with them the need for rapid iteration and comprehensive automated verification of behavior. Programming languages created (or evolved) in this environment have incorporated these requirements with differing levels of success.
At the beginning of a software development project, the choice of language is often one of the very first decisions made, and the strengths and weaknesses of the options should be considered carefully. In my experience, concerns about verifiability quite often weigh heavily. In my opinion, the lack of a standardized method for program verification is a significant barrier to the adoption of C++ in many projects, second only to a relatively paltry standard library and the indecipherable nature of template error messages (both of which the committee has made significant headway on).
In order for agile processes to work well, the agile team must be able to iterate quickly without creating undesired regressions in the behavior of the product. This means that it is important to catch errors early and fix them quickly. In order to achieve that goal, automated verification of the product is vital.
In the book Succeeding with Agile, Mike Cohn introduced the concept of the test pyramid. The test pyramid is meant to convey the "ideal" distribution of tests across the different layers of a system, with a vast majority of them appearing at the "unit" level and very few appearing at the "user interface" level. It is a useful representation that I will reference in Section III, so I've included it here.
The following is a non-exhaustive list of characteristics that a verification framework could include.
There must be a way to signify that a particular portion of code represents a test that verifies the functionality of some component of the program. Attributes (or annotations) are often used for this purpose in languages that support those concepts. In some languages, there are reserved keywords for this purpose, while in many others, there is a registry of tests that each test must be added to in order to run.
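To make the registry approach concrete, here is a minimal sketch of a hypothetical self-registering test facility (the `TEST` macro, `Registrar`, and `registry` names are illustrative, not an existing or proposed API); static initialization of the registrar adds each test before `main` runs:

```cpp
#include <functional>
#include <string>
#include <vector>

// Illustrative registry of tests, populated at static-initialization time.
struct TestCase { std::string name; std::function<void()> body; };

inline std::vector<TestCase>& registry() {
    static std::vector<TestCase> tests;
    return tests;
}

// Constructing a Registrar adds one test to the registry.
struct Registrar {
    Registrar(std::string name, std::function<void()> body) {
        registry().push_back({std::move(name), std::move(body)});
    }
};

// Hypothetical TEST macro: declares the test function, registers it,
// then opens its definition.
#define TEST(name) \
    void name(); \
    static Registrar name##_registrar{#name, name}; \
    void name()

TEST(addition_works) {
    // assertions on the code under test would go here
}

// Runs every registered test; returns how many were executed.
int run_all() {
    for (auto& t : registry()) t.body();
    return static_cast<int>(registry().size());
}
```

A language-level attribute such as a (purely hypothetical) `[[test]]` could eliminate the macro and the manual registration entirely.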
The fundamental action in a test is to assert that the state of the code under test is correct. Typically, there will be a sophisticated set of assertion capabilities, including those that wrap the normal comparison operations. There is often the ability to provide a specific message to be recorded in case an assertion fails.
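A minimal sketch of such an assertion helper, assuming nothing beyond the standard library (the `expect_eq` name and the failure-counting scheme are illustrative, not an existing framework's API):

```cpp
#include <iostream>
#include <string>

// Count of assertion failures seen so far (illustrative bookkeeping).
inline int g_failures = 0;

// Wraps operator== and records an optional message on failure.
template <typename T, typename U>
void expect_eq(const T& actual, const U& expected,
               const std::string& msg = "") {
    if (!(actual == expected)) {
        ++g_failures;
        std::cerr << "expected " << expected << ", got " << actual;
        if (!msg.empty()) std::cerr << " (" << msg << ")";
        std::cerr << '\n';
    }
}
```

Real frameworks typically capture the file, line, and stringized expression as well, which is where language support (or at least better reflection) would help.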
Tests that require the program to be in a certain state before (or after) an assertion is made are usually grouped with tests with similar requirements. The ability to reuse that code in multiple tests is sometimes provided in the form of fixtures or setup and teardown methods.
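In C++ the natural expression of setup and teardown is RAII; a fixture's constructor establishes the required state and its destructor tears it down. A small illustrative sketch (the fixture name and contents are invented for the example):

```cpp
#include <vector>

// Illustrative fixture: the constructor is the shared "setup",
// the destructor is the shared "teardown".
struct VectorFixture {
    std::vector<int> data;
    VectorFixture() : data{1, 2, 3} {}  // setup: known starting state
    ~VectorFixture() = default;         // teardown: release resources
};

// Each test constructs its own fixture, so tests stay independent.
int sum_fixture_data() {
    VectorFixture f;
    int total = 0;
    for (int v : f.data) total += v;
    return total;
}
```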
Providing tests to verify the behavior of some code can sometimes lead to dependencies on external systems (e.g. database or networks). This is undesirable since it increases the cycle time for a team and could also require the configuration of additional resources (sometimes manually). In order to avoid these costs, some languages or libraries will provide the ability to replace a real resource with a mock resource. This allows tests to be shifted from being a Service test to a Unit test, according to the test pyramid.
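One common way to enable this substitution today is to depend on an abstract interface, so a test can hand the code under test a canned stand-in instead of the real resource. A hedged sketch (the `Database` interface and its members are hypothetical):

```cpp
#include <string>

// The production code depends only on this abstraction.
struct Database {
    virtual ~Database() = default;
    virtual std::string lookup(int id) = 0;
};

// Mock used in tests: returns a canned response, touches no network/disk,
// and records how often it was called.
struct MockDatabase : Database {
    int calls = 0;
    std::string lookup(int) override {
        ++calls;
        return "stub-record";
    }
};

// Code under test: works identically with the real or mock database.
std::string describe(Database& db, int id) {
    return "record: " + db.lookup(id);
}
```

First-class mocking support could remove the need to design the interface around testability in the first place.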
Many times, systems are designed to terminate upon some condition so as to prevent corruption of data or because of constraints imposed by the language. The ability to verify that these design choices are implemented correctly is quite useful.
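Frameworks often implement such "death tests" by running the code under test in a child process and inspecting how that process terminated. A POSIX-only sketch of the technique (the `dies` helper is illustrative; a portable standard facility is exactly what is missing today):

```cpp
#include <cstdlib>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Runs f() in a forked child; returns true if the child terminated
// abnormally (by signal or with a non-zero exit status).
template <typename F>
bool dies(F f) {
    pid_t pid = fork();
    if (pid == 0) {   // child process: run the code under test
        f();
        _exit(0);     // reached only if f() returns normally
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return !(WIFEXITED(status) && WEXITSTATUS(status) == 0);
}
```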
Tests are only useful if their results are accessible and structured in a way such that test failures are easy to locate and understand. XML is a popular output format (sometimes with a well-defined schema), while others present the results as a part of an IDE or as compiler output.
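As an illustration of the XML approach, the following sketch emits results in the loosely standardized JUnit-style shape that many CI tools consume (the `Result` struct and the exact attributes shown are simplified for the example):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Simplified per-test result record.
struct Result { std::string name; bool passed; std::string message; };

// Emits a JUnit-like <testsuite> document (no XML escaping, for brevity).
std::string to_xml(const std::vector<Result>& results) {
    std::ostringstream out;
    out << "<testsuite tests=\"" << results.size() << "\">";
    for (const auto& r : results) {
        out << "<testcase name=\"" << r.name << "\">";
        if (!r.passed)
            out << "<failure message=\"" << r.message << "\"/>";
        out << "</testcase>";
    }
    out << "</testsuite>";
    return out.str();
}
```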
The value of verification is highly dependent on the amount of code that is actually executed during the verification process. The ability to know what portions of code are currently covered by automated verification is an extremely useful tool in guiding development teams to apply their available test-writing time in the areas that need it the most.
I believe that there is a hidden layer to the test pyramid that was introduced in Section II: the compiler. Conceptually, the process of verification consists of taking a representation of the problem domain and comparing it against a set of well-known domain constraints. The compiler is the very first test suite that any code ever sees (outside of the human brain).
If you've done something that violates a constraint specified by the language or library (a very broad domain indeed), then compilation fails. A language that utilizes static typing has a clear advantage in this layer, even more so for languages that allow for the definition of further domain constraints (e.g. static_assert). This is a way of looking at the problem that suits the strengths of C++ and is ripe with opportunity.
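The compiler layer in action: static_assert lets us express domain constraints that are verified before the program ever runs (the `average` function below is an invented example):

```cpp
#include <cstdint>
#include <type_traits>

// A compile-time domain constraint: misuse fails the "compiler test suite"
// rather than surfacing at run time.
template <typename T>
T average(T a, T b) {
    static_assert(std::is_arithmetic<T>::value,
                  "average requires an arithmetic type");
    return (a + b) / 2;
}

// A build-environment constraint, also checked at compile time.
static_assert(sizeof(std::int32_t) == 4, "platform sanity check");
```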
Following is a list of ideas to serve as a starting point for further discussion. They are ordered roughly in their perceived difficulty/impact on the language and tools.
std::test_exception (bikeshed) type to serve as a basis for...
My request and recommendation going forward would be that these topics be grouped under a new study group. I presume that some of the simpler ideas could have proposals written for them fairly quickly, but many of the other ideas are complex enough to require a dedicated group to thoroughly investigate them.
I'd like to thank my employer, Perceptive Software, for their continued support of my work to improve the C++ language. I also greatly appreciate the reviews provided by my associates: Chris Sammis, Matthew Dissinger, and Andrew Regier.