Bug Patterns in Java: Agile Methods in a Chaotic Environment

We discuss the context in which modern software is developed, and identify some of the shortcomings of the old approaches to development and debugging.

Examining Trends in Software Design, Implementation, and Maintenance

Over the past few years, new trends have emerged that are drastically affecting the way software is designed, implemented, and maintained:

     
  • Increase in demand for safe, secure systems. 
  • Recognition of the limitations of traditional software engineering. 
  • Availability of open-source software projects. 
  • Demand for languages with platform-independent semantics.

Let's look at each of these issues in detail.

Increase in demand for safe, secure systems

The demand for safe, secure systems has grown tremendously. The terms "safe" and "secure" mean different things depending on who you ask. By "safe," I mean systems that don't break even when a user makes a mistake. By "secure," I mean systems that don't break even when they are deliberately attacked. For example, consider a web service for airline reservations. Suppose a user accidentally tries to get reservations for an impossible date (say, February 30). If the system is safe, it won't crash; instead, it will display an error message and allow the user to try again (or, better yet, the UI itself will prevent the user from ever entering this erroneous date). Now suppose that an attacker tries to snoop and play back someone else's credit card information when booking his ticket. If the system is secure, it will have measures in place to prevent him from doing so.

To be sure, these two notions aren't completely distinct. A sufficiently inept attacker will be thwarted simply by a safe system, and a sufficiently dopey user might trip over a sequence of actions that is prevented only by security measures.

Once, in my early days on Unix, I accidentally put into my startup script for shell windows a command that started up yet another shell window. As soon as I opened one shell window, more and more windows were created, consuming more and more resources from the network and bringing down several servers. This continued until a sys admin, who believed his network was under attack, shut down the account responsible (i.e., mine). It took a while before he let me log in again.
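Returning to the February 30 example, the "safe" behavior described above amounts to validating input and failing gracefully. Below is a minimal sketch using the standard java.time API; the class and method names are illustrative, not taken from any real reservation system.

    import java.time.DateTimeException;
    import java.time.LocalDate;
    import java.util.Optional;

    public class ReservationDates {

        // Attempts to construct a reservation date. An impossible
        // combination such as February 30 is reported to the user
        // instead of crashing the system.
        static Optional<LocalDate> tryParse(int year, int month, int day) {
            try {
                return Optional.of(LocalDate.of(year, month, day));
            } catch (DateTimeException e) {
                System.err.println("Invalid date: " + e.getMessage()
                        + " -- please try again.");
                return Optional.empty();
            }
        }

        public static void main(String[] args) {
            System.out.println(tryParse(2002, 2, 30).isPresent()); // false: rejected safely
            System.out.println(tryParse(2002, 2, 28).isPresent()); // true
        }
    }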

With the growth in Internet commerce, the increasingly ubiquitous use of networked computing services, and the progress in embedded systems (many of which are intended for users with little or no computer experience), it is easy to see why demand for safety and security is on the rise.

Recognition of the limitations of traditional software engineering

There is growing recognition that the traditional methods of software engineering, with their heavy up-front design and strict division of work into separate roles (architect, coder, tester, and so on), are often of limited use. Inevitably, designs change as a development team learns more about the advantages and disadvantages of various approaches. Such knowledge can be gained only through the process of actually building a system; a priori modeling is no substitute. The result is that many aspects of up-front designs are thrown away, and the effort spent in constructing them is wasted.

Additionally, the separation of programmers into various roles tends to impede knowledge transfer. Even if the coders discover that a design is flawed, it may still be quite difficult to convince the architects, and the testers may not be familiar enough with the code to test it thoroughly. The result of this division is poorly designed, poorly tested software that nobody's happy with. And, by the way, it doesn't work.

Even under ideal conditions, in which deadlines are both reasonable and flexible, proficient programmers are available in ample supply, and software requirements are perfectly specified (and never change), traditional software engineering doesn't work very well. But real-world software is never developed under ideal conditions. It's developed in the presence of all sorts of complications, such as:

     
  • Business realities. Business realities often result in systems that are rushed to market and are neither well documented nor well tested. When a product is already two weeks late and everything has been done except test it, you can bet that testing will be given short shrift.

  • Lack of design experience. The developers of many new systems often have no experience designing similar systems. This may not be a result of poor management, but rather an inevitable consequence of the massive shortage of experienced software developers.

  • Difficult communication with users. The customers of software typically are not able to define their requirements well, often because they're not yet sure what their requirements are. In most cases, this isn't because they are "bad" customers; usually, it is because many software projects are the first systems of their kind to be put in place, and the customers have no real-world experience to help them determine which features would be of most use.

The real world of software development is one of chaos. As a result, the traditional rigid processes of software engineering are beginning to give way to more agile, adaptive methods such as extreme programming (XP). Agile methods hold promise in helping to meet the demand for increased reliability. In particular, an increased emphasis on unit testing can dramatically improve the reliability of a software system.
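To make the unit-testing point concrete, here is a deliberately tiny sketch in the JUnit 4 style; the FareCalculator class is a hypothetical example, not code from any particular project.

    import org.junit.Assert;
    import org.junit.Test;

    // A hypothetical class under test, kept trivial for illustration.
    class FareCalculator {
        int childFare(int adultFare) {
            return adultFare / 2; // children pay half the adult fare
        }
    }

    // Each test pins down one piece of intended behavior. A suite of
    // such tests, rerun after every change, catches regressions early;
    // this is the discipline that agile methods such as XP emphasize.
    public class FareCalculatorTest {

        @Test
        public void childFareIsHalfOfAdultFare() {
            Assert.assertEquals(50, new FareCalculator().childFare(100));
        }

        @Test
        public void fareIsNeverNegative() {
            Assert.assertTrue(new FareCalculator().childFare(0) >= 0);
        }
    }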

Availability of open-source software projects

Another trend affecting reliability is the growing body of freely available open-source software projects. The open-source concept, and the code produced by the open-source community, are challenging the business models of traditional software companies by providing competing products at no cost and (thanks to the openness of the source code) often of superior quality. The most striking example is the Linux operating system, an alternative to existing commercial operating systems that is more stable and robust than many of its competitors.

Demand for languages with platform-independent semantics

There has been a surge in demand for languages with platform-independent semantics, along with virtual machines to execute them. The most famous language in this category is Java. Compiled Java binaries enjoy an unprecedented level of portability; this portability can drastically decrease the cost of software development, since developers don't need to maintain separate source code or compile separate binaries for each target platform.

Learning in a Fast-Paced World

The trends toward agile methods and open-source projects can help to meet some of the increased demand for reliable software. But the quality and safety of the software produced inevitably depend on the skill and experience of the programmers involved. And the available supply of experienced programmers is smaller than the demand for them.

Tip 

Teaching developers to recognize bug patterns is a way to leverage the experience of many programmers to improve the effectiveness of each.

To address this shortage, we need ways to quickly convey to new programmers more than just the theoretical knowledge traditionally taught in computer science classes. We need to convey the kind of practical skill in developing robust systems that is normally gained through many years of experience.

Some of this experiential knowledge consists of the various design patterns that have proven themselves in a variety of contexts and are now taught regularly in computer science curricula, alongside basic algorithms and data structures. But quite a bit of the working knowledge gained by experienced developers isn't captured by such design patterns.

Part of this working knowledge consists of the ability to efficiently diagnose and fix bugs exhibited by a software system; in other words, effective debugging.

Effective debugging is far from a trivial skill. In fact, tracking and eliminating bugs constitutes a significant portion of the development time in a software project. If this task can be made more efficient, the resulting software will be more reliable and it will be developed more quickly.

While educating new developers, both in industry and academia, I've noticed some general tendencies in the ways they learn to debug software. Beginning programmers often work against themselves when considering the potential causes of a bug. They tend to:

  1. Blame the underlying system (as opposed to their own code) far too quickly;
  2. Convince themselves that the bug they are witnessing can't possibly occur (an idea that could never be anything but a delusion); and
  3. Thrash about, randomly changing code until the symptoms of the bug go away.

All of these tendencies diminish with experience. If novice programmers are made aware of them, they can learn to avoid them before they start real-world programming. But there is more to becoming a good debugger (which is a component of becoming an agile, effective developer) than just overcoming bad tendencies.

Dissecting Bug Patterns: Why It's Useful

Just as good programming skills involve the knowledge of many design patterns (which can be combined and applied in various contexts), good debugging skills include knowledge of the common causes of bugs and how to fix them. In my column for IBM developerWorks, I first referred to these common causes as bug patterns.

Bug patterns are recurring relationships between signaled errors and underlying bugs in a program. Knowledge of these patterns and their symptoms helps the programmer to quickly identify new occurrences of the bug, as well as to prevent them from occurring in the first place.

Bug patterns are related to anti-patterns, which are patterns of common software designs that have been proven to fail time and again. Such negative examples of design are an essential complement to traditional, positive design patterns. But while anti-patterns are patterns of design, bug patterns are patterns of erroneous program behavior correlated with programming mistakes. The concern is not with design at all, but with the coding and debugging process.
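To make this concrete, consider one recurring relationship that nearly every Java programmer eventually learns to recognize: a NullPointerException signaled at a use site whose underlying cause is a method elsewhere that silently returns null. The sketch below is a minimal illustration; the names are made up.

    import java.util.HashMap;
    import java.util.Map;

    public class NullFlagDemo {

        static final Map<String, String> BOOKINGS = new HashMap<>();

        // The underlying bug: null is returned as a flag for "no booking,"
        // and callers are never forced to check for it.
        static String lookupBooking(String customer) {
            return BOOKINGS.get(customer);
        }

        public static void main(String[] args) {
            String booking = lookupBooking("alice");
            // ... possibly far from the lookup ...
            // The signaled error: a NullPointerException is thrown here,
            // though the real mistake is the unchecked null above.
            System.out.println(booking.toUpperCase());
        }
    }

A programmer who recognizes this pattern doesn't begin the search at the line where the exception was thrown; the symptom points straight back to whichever method produced the unchecked null.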

The problem is that it can take many years for programmers to learn to recognize these patterns through experience alone. If we identify such patterns and teach them explicitly, we can leverage the experiences of many programmers to improve the effectiveness of each.

This concept is not unique to programming. Medical doctors rely on similar kinds of recurring relationships when diagnosing disease. They learn to do so by working closely with senior doctors during their internships. Their very education focuses on learning to make such diagnoses.

Once a beginning programmer can recognize these patterns, he is able to diagnose the cause of a bug and correct it more quickly. Furthermore, by explicitly identifying and communicating these correlations, developers can mutually benefit from each other's experience in debugging, thereby acquiring proficiency far more quickly than they would have otherwise.
