We highlight the essential interplay between continuous testing and effective debugging.
Designing for Testability
Before leaving the topic of development, let's discuss the issue of software that's hard to test. As I have stressed, effective debugging involves effective unit testing. But often, programmers not used to "infesting" their code with unit tests will run into situations where they believe there's just no way they can effectively test the behavior of their system.
When such cases occur, try to take a step back from the question, "How can I possibly test this kind of code?" Instead, ask, "How could I write this code in such a way that I can test it?" This shift in thinking will often result in the addition of many pieces of functionality that serve no purpose other than to facilitate the testing process. As long as the new functionality is necessary for testing, that's perfectly justified. I call this strategy test-oriented programming.
The time and effort spent writing tests and testing code pay off in dramatically reduced maintenance costs. However, unless you're careful, the effort involved in testing code can amount to several times the effort of writing the code in the first place! I've seen programmers make concerted efforts to fully cover their code with unit tests, and many of them end up disheartened at how much time it takes.
Fortunately, it doesn't have to be this way. With the application of a few basic principles that we'll discuss, it is possible to write code that is easy, and even fun, to test. Like any other set of coding principles, these principles are not meant to be unquestionable or unalterable dogma. There are times when it is necessary to break the rules. For this reason, it is important to understand the motivations behind each principle and to be able to determine when these motivations don't apply (or when they are overridden by more important concerns).
The following sections discuss these topics:
- Keeping code in the model, not the view.
- Using static type checking to find errors.
- Using mediators to encapsulate functionality across fault lines.
- Writing methods with small signatures and default arguments.
- Using accessors that do not modify memory state.
- Specifying out-of-program components with interfaces.
- Writing tests first.
Keeping code in the model, not the view
When writing GUI code, move as much code as possible out of the view and into the model. The various GUI actions then become simple method invocations on the model. Why would you want to do this? Because it is much easier to test functionality through direct method invocations than indirectly with GUI testing tools. An additional benefit is that it becomes easier to modify the functionality of the program without affecting the view.
Of course, there can be bugs in the view, too. Ideally, the tests for a program will check both the model and the view. The Model-View-Controller architecture, where the view and model are completely decoupled and a controller sets up the connections between the two at runtime, is a particularly effective way to allow for testing of both model and view. The respective tests on these components will be quite different in each case, so by separating them, the tests for each can be greatly simplified.
The DrJava IDE example (introduced in the section "Building Cost-Effective Specifications with Stories") provides a good example of how much functionality can be put into a model. The only contact that the view has with the model is through a specially designed interface we call the GlobalModel. This interface includes methods for every functional modification a user can make while using DrJava. In essence, it provides a handle that our tests can use to interact with DrJava in any way that a user can, except that the tests don't have to interact through the view.
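The idea can be reduced to a toy sketch. The names below (DocumentModel, SimpleDocumentModel) are illustrative inventions, not DrJava's actual API; the point is that a test can exercise every user-visible operation through plain method calls, with no GUI robot or event simulation in sight:

```java
// Illustrative sketch: a model exposed through an interface, so tests
// drive it directly instead of going through a view.
interface DocumentModel {
    void insertText(String text);
    String getText();
}

class SimpleDocumentModel implements DocumentModel {
    private final StringBuilder buffer = new StringBuilder();
    public void insertText(String text) { buffer.append(text); }
    public String getText() { return buffer.toString(); }
}

public class ModelTest {
    public static void main(String[] args) {
        // The test interacts with the model exactly as a view would,
        // but through ordinary method invocations.
        DocumentModel model = new SimpleDocumentModel();
        model.insertText("hello, ");
        model.insertText("world");
        if (!model.getText().equals("hello, world"))
            throw new AssertionError("unexpected model state");
        System.out.println("model test passed");
    }
}
```

A GUI view would hold the same DocumentModel reference and translate user events into the same calls, so the logic under test is identical in both settings.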
Use static type checking to find errors
Types are your friends. Use the type system as much as possible to automatically check for errors. Doing so will save you from having to write a lot of extra tests just to check the invariants that type checking gives you for free.
Types can automatically catch a bug in your program before it is ever run. Without static type checking, a type error may linger in your program as a saboteur until just the right execution path happens to uncover it.
But the issue of how to use types to maximum advantage can be tricky. Often, a collection of data structures can be used together at one level of abstraction, or incorporated into a single, higher-level abstraction with a new associated data type. If we incorporate them into a higher-level abstraction, static type checking can ensure that they are used only in the manner allowed by that abstraction. Such checks can prevent errors, but they can also limit expressiveness. It can be difficult or impossible to use one of the data structures below the new level of the abstraction, even when we want to. In Java, we can do so only by inserting casts into the code.
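A minimal sketch of the trade-off, with invented names: wrapping two raw doubles in distinct types lets the compiler reject a mix-up that would otherwise require a test (or a lurking bug) to expose.

```java
// Illustrative: two temperature scales as distinct static types, so the
// compiler, not a test, catches a confusion between them.
final class Celsius {
    final double value;
    Celsius(double value) { this.value = value; }
}

final class Fahrenheit {
    final double value;
    Fahrenheit(double value) { this.value = value; }
}

public class Thermostat {
    static Fahrenheit toFahrenheit(Celsius c) {
        return new Fahrenheit(c.value * 9.0 / 5.0 + 32.0);
    }

    public static void main(String[] args) {
        Fahrenheit f = toFahrenheit(new Celsius(100.0));
        // toFahrenheit(f);   // rejected at compile time: a Fahrenheit
        //                    // is not a Celsius
        System.out.println(f.value);   // prints 212.0
    }
}
```

The cost is exactly the loss of expressiveness described above: to treat a Celsius as a plain double again, you must reach below the abstraction through its field (or, with a shared supertype, a cast).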
In fact, the history of programming languages itself can be viewed as a gradual increase in the levels of abstraction at which one can program. Assembly language abstracted numerical opcodes into named mnemonics and numerical addresses into symbols. This was followed by abstractions such as records and functions, which were then followed by abstractions such as objects, classes, threads, and exceptions. At each higher level of abstraction, programming becomes simpler and more robust, but the expressiveness of the language is decreased.
In object-oriented languages (as well as other modern languages), the individual programmer is given a great deal of flexibility in devising abstractions. The level of abstraction at which to design a program then becomes a decision based on trade-offs, such as the added robustness provided by an abstraction level and the expressiveness (and sometimes performance) lost by not working at a lower level of abstraction.
In general, the added robustness and simplicity of higher levels of abstraction are seldom outweighed by other considerations.
Use mediators to encapsulate functionality across fault lines
By fault lines, I mean the interfaces between separate components that have little interaction compared with the internal interaction of their respective subcomponents. A classic example of such a fault line would be the interface between the view of a GUI and the corresponding model (as in the GlobalModel interface example described previously). Other examples include the interfaces between various phases of processing in a compiler or the interface between the kernel and user interface of an operating system.
The Mediator pattern is one of the design patterns discussed by Gamma, Helm, Johnson, and Vlissides (1994). A mediator is defined as "an object that encapsulates how a set of objects interact." It allows the client of that set of objects to more easily interact with all of them; the client only interacts directly with a single object.
Find the fault lines of your program, then use mediators with forwarding functions to quickly access aggregate components. To be sure, it may be easier on some occasions to test each component along a fault line in isolation. But if there are many objects exposed by each component, or if some of the objects you would like to test in a component are accessible only by following several nested references, testing can become quite tedious. Instead of testing in isolation, it often helps to have a single mediator object on which you call the various methods you want to test. This object can then forward these method calls to the appropriate places.
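The compiler fault line mentioned above can be sketched as follows. The names (PipelineMediator, Parser, Emitter) are illustrative: the mediator owns the subcomponents and exposes forwarding functions, so a test touches one object instead of chasing nested references.

```java
// Illustrative: a mediator across the fault line between two
// processing phases, with forwarding functions for tests to call.
class Parser {
    int countTokens(String source) {
        String s = source.trim();
        return s.isEmpty() ? 0 : s.split("\\s+").length;
    }
}

class Emitter {
    String emit(int tokenCount) { return "ops:" + tokenCount; }
}

public class PipelineMediator {
    private final Parser parser = new Parser();
    private final Emitter emitter = new Emitter();

    // Forwarding functions: one object gives tests access to everything.
    public int countTokens(String source) { return parser.countTokens(source); }
    public String compile(String source) { return emitter.emit(parser.countTokens(source)); }

    public static void main(String[] args) {
        PipelineMediator m = new PipelineMediator();
        if (m.countTokens("a b c") != 3) throw new AssertionError();
        if (!m.compile("a b c").equals("ops:3")) throw new AssertionError();
        System.out.println("mediator test passed");
    }
}
```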
Along the same lines (no pun intended), it is useful to design interfaces to program components in tandem with the tests over them. This will focus your efforts on keeping these interfaces as simple as possible.
Write methods with small signatures and default arguments
Using small method signatures and overloading methods to supply default arguments will make it much more pleasant to invoke these methods in your tests. Otherwise, you'll have to construct every extra argument when testing the methods. If the expressions needed to construct those arguments are large, this can quickly lead to code bloat. Even worse, it can tempt you into writing fewer tests than you otherwise would.
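Java has no default-argument syntax, so the idiom is a chain of overloads; the example below (with invented names) shows how the overloads keep test call sites short while the full signature remains available:

```java
// Illustrative: overloads delegate to the full signature, filling in
// defaults so tests needn't construct every argument.
public class Report {
    // Full signature: everything explicit.
    static String format(String title, int width, boolean uppercase) {
        String t = uppercase ? title.toUpperCase() : title;
        if (t.length() > width) t = t.substring(0, width);
        return t;
    }

    // Defaults: width 80, no uppercasing.
    static String format(String title, int width) { return format(title, width, false); }
    static String format(String title) { return format(title, 80, false); }

    public static void main(String[] args) {
        System.out.println(format("hello"));          // prints hello
        System.out.println(format("hello", 3));       // prints hel
        System.out.println(format("hello", 3, true)); // prints HEL
    }
}
```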
Use accessors that do not modify memory state
Use accessors that do not modify the state of memory to check the state of objects in your tests. By accessors, I mean methods that retrieve some view of the state of an object (most commonly, the value of a field).
Remember the analogy of tests as scientific experiments: they both attempt to verify that particular hypotheses hold. But this is much harder to do if the very act of inspection alters the state of the world. Unlike the state of a particle in quantum mechanics, the state of a computer process can be checked without modifying that state. Use this to your advantage.
One example of an accessor that modifies state is the next method in java.util.Iterator. It is natural to call this method to retrieve the next element while iterating over a Collection, but if it's called more than once in a single iteration, an element may be lost. It would have been better to separate next() into two methods: one to retrieve the current element, and one to move forward by one element.
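A sketch of that separation, under invented names: the accessor is pure and can be called any number of times, while mutation happens only through an explicit, separate method.

```java
// Illustrative: a cursor whose accessor does not advance, unlike
// java.util.Iterator.next(), which retrieves and moves in one step.
import java.util.List;

class Cursor<T> {
    private final List<T> items;
    private int index = 0;
    Cursor(List<T> items) { this.items = items; }

    boolean hasCurrent() { return index < items.size(); }

    // Pure accessor: calling it twice returns the same element.
    T current() { return items.get(index); }

    // Mutation is explicit and separate.
    void advance() { index++; }
}

public class CursorDemo {
    public static void main(String[] args) {
        Cursor<String> c = new Cursor<>(List.of("a", "b"));
        // Inspecting twice loses nothing.
        if (!c.current().equals(c.current())) throw new AssertionError();
        c.advance();
        System.out.println(c.current()); // prints b
    }
}
```

In a test, current() can be asserted against repeatedly between steps; with Iterator, each inspection would consume an element.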
Specify out-of-program components with interfaces
Using interfaces to specify outside program components allows for easy simulation of those components in the test cases.
This principle can save a tremendous amount of time, especially if the implementation of the outside component isn't complete. All too often, the most essential components aren't available on time. If you can't test your own code without these components in place, you are headed for disaster. Your customers won't care that you only had two hours to integrate a component that was two weeks late. All they know is that the integrated product is late and that it's broken.
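A minimal sketch of the technique, with invented names: the external component is specified only by an interface, so a deterministic fake can stand in for it long before (or whether) the real implementation exists.

```java
// Illustrative: an out-of-program component behind an interface,
// simulated by a test double with no network dependency.
interface PriceFeed {
    double quote(String symbol);   // in production: a remote call
}

class FixedPriceFeed implements PriceFeed {
    // Deterministic fake, usable before the real feed is delivered.
    public double quote(String symbol) { return 42.0; }
}

public class Portfolio {
    private final PriceFeed feed;
    Portfolio(PriceFeed feed) { this.feed = feed; }

    double value(String symbol, int shares) {
        return feed.quote(symbol) * shares;
    }

    public static void main(String[] args) {
        Portfolio p = new Portfolio(new FixedPriceFeed());
        if (p.value("ACME", 10) != 420.0) throw new AssertionError();
        System.out.println("simulated feed test passed");
    }
}
```

When the real component finally arrives, it implements the same interface and slots in without touching the tested code.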
Write the tests first
This is standard practice in extreme programming, but it is always tempting to ignore it. Nevertheless, every time I succumb to this temptation, I regret it. Given that you're trying to produce correct code, the time you appear to save by postponing the writing of tests is nothing but an illusion.
This doesn't mean that you should write the entirety of the tests in one shot, followed by the entirety of the implementation. It is better to write a few tests, implement the code that passes them, write a few more, implement those, and so on, integrating at each iteration. In this way, the design evolves: oversights are caught during implementation and corrected in the next set of tests.
It is also less daunting to write the tests in this way. And by continually integrating, you'll reduce the chance of conflicts with other modifications to the code base.
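One such iteration can be sketched without any test framework (the example and its names are illustrative): the assertions in main were written first, pinning down the desired behavior, and the method body was then written just to satisfy them.

```java
// Illustrative test-first iteration: the checks in main predate the
// implementation of slugify and drove its design.
public class Slug {
    // Written second, just enough to pass the tests below.
    static String slugify(String title) {
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }

    public static void main(String[] args) {
        // Written first: the specification, as executable checks.
        if (!slugify("Hello World").equals("hello-world")) throw new AssertionError();
        if (!slugify("  Bug Patterns  ").equals("bug-patterns")) throw new AssertionError();
        System.out.println("iteration 1 tests pass");
    }
}
```

The next iteration would add a few more checks (say, for punctuation or empty input) and then extend slugify only as far as those checks demand.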
Related Online Articles:
- Bug Patterns in Java - The Run-On Initialization
- Bug Patterns in Java - Platform-Dependent Patterns
- Bug Patterns in Java - Debugging & the Development Process
- Bug Patterns in Java - The Double Descent
- Bug Patterns in Java - Other Obstacles to Factoring Out Code
- Design Patterns for Debugging - Maximizing Static Type Checking
- Bug Patterns in Java - Null Pointers Everywhere
- Bug Patterns in Java - The Split Cleaner
- Bug Patterns in Java - Agile Methods in a Chaotic Environment
- Bug Patterns in Java - The Broken Dispatch