If you look at your three definitions, the distinctions between them are subtle; they essentially mean the same thing.
What it all amounts to is providing what the class needs (its dependencies) through parameters in its constructor. That's all. There are numerous Dependency Injection frameworks out there that seek to formalize this process, but they all amount to the same thing.
Dependency Injection always seeks to provide only those dependencies that are needed, when they are needed.
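To make that concrete, here is a minimal sketch of constructor injection (WelcomeService, MessageSender, and SmtpSender are names made up for this illustration): the class declares its dependency as a constructor parameter, and the caller, or a DI framework, supplies it.

```java
interface MessageSender {
    void send(String recipient, String body);
}

class SmtpSender implements MessageSender {
    @Override
    public void send(String recipient, String body) {
        // A real implementation would talk to an SMTP server here.
        System.out.println("SMTP -> " + recipient + ": " + body);
    }
}

class WelcomeService {
    private final MessageSender sender; // the dependency, injected once

    // The constructor is the injection point: WelcomeService never builds
    // its own MessageSender, so tests can pass in a fake one.
    WelcomeService(MessageSender sender) {
        this.sender = sender;
    }

    void welcome(String user) {
        sender.send(user, "Welcome aboard!");
    }
}

class Demo {
    public static void main(String[] args) {
        // Wiring done by hand; a DI framework automates exactly this step.
        new WelcomeService(new SmtpSender()).welcome("alice@example.com");
    }
}
```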
The term
The term antibugging or anti-bugging is not widely used:
around 2,000 Google occurrences (some of them related to devices for spyware removal!) compared to 33 million for debugging!
It was first used by Ed Yourdon, the software engineering pioneer, in his book
"Techniques of Program Structure and Design" published in 1975.
It's strange that it didn't gain the popularity of other ideas promoted by Yourdon, such as structured analysis and structured design or Yourdon/DeMarco dataflow modeling.
But despite its rare use, the term deserves to be more widely known.
Debugging
The goal of debugging is to catch and correct errors, especially at an early stage, and to provide tools that support bug hunting should bugs happen later in production. Debugging is primarily performed in relation to coding. For example:
- assertion checks of pre- and post-conditions at the beginning or end of functions (if they fail, execution aborts; see the sketch after this list),
- logging, in order to be able to analyse the cause of bugs if they happen,
- extensive testing to find potential bugs,
- overnight, coffee-driven activity until exhaustion or until the bug is exterminated.
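To illustrate the first two bullets, here is a small Java sketch (the Account class and its contract are invented for the example) combining assertion checks of pre- and post-conditions with logging; note that assertions are disabled by default and need `java -ea` to run:

```java
import java.util.logging.Logger;

class Account {
    private static final Logger LOG = Logger.getLogger(Account.class.getName());
    private long balanceCents;

    Account(long balanceCents) {
        this.balanceCents = balanceCents;
    }

    void withdraw(long amountCents) {
        // Pre-condition checks at the beginning: abort if the contract is violated.
        assert amountCents > 0 : "amount must be positive";
        assert amountCents <= balanceCents : "insufficient funds";

        long before = balanceCents;
        balanceCents -= amountCents;

        // Post-condition check at the end of the function.
        assert balanceCents == before - amountCents : "balance update corrupted";

        // Logging, so the cause of a bug can be analysed after the fact.
        LOG.fine(() -> "withdraw " + amountCents + ": " + before + " -> " + balanceCents);
    }
}
```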
Except for some general-purpose code to support debugging (especially logging), debugging wouldn't directly influence the software structure, IMO.
Antibugging
The goal of antibugging is to prevent bugs from happening. This activity is performed throughout the whole
development process. For example:
- design that prevents the conditions under which bugs arise,
- defensive coding that prevents error propagation and ensures that special cases are properly handled,
- automatic error correction or recovery strategies (e.g. relaunching a service that aborted but is needed).
This kind of prevention should be undertaken from the start of development and should be rooted in the design and the software structure (e.g. API design, exception management). It could hence influence the software architecture. It also includes traditional defensive programming, which offers an alternate path of execution to gracefully handle error conditions; a brief sketch follows.
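As a rough sketch of the second and third points (PriceCatalog and ServiceSupervisor are hypothetical names invented for this answer), defensive input validation plus a simple relaunch strategy might look like this:

```java
import java.util.Optional;

class PriceCatalog {
    // Defensive coding: reject bad input at the boundary instead of letting it
    // propagate, and make the "not found" special case explicit in the return
    // type instead of returning null.
    Optional<Long> priceCents(String productId) {
        if (productId == null || productId.isBlank()) {
            throw new IllegalArgumentException("productId must be non-empty");
        }
        // ... lookup elided; Optional.empty() is the explicit special case ...
        return Optional.empty();
    }
}

class ServiceSupervisor {
    // Recovery strategy: relaunch a needed service that aborted, up to a cap.
    void runWithRestart(Runnable service, int maxRestarts) {
        for (int attempt = 0; attempt <= maxRestarts; attempt++) {
            try {
                service.run();
                return; // finished normally
            } catch (RuntimeException e) {
                // alternate execution path: log and relaunch instead of crashing
                System.err.println("service aborted (attempt " + attempt + "): " + e);
            }
        }
        throw new IllegalStateException("service still failing after " + maxRestarts + " restarts");
    }
}
```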
So debugging and antibugging have clear boundaries.
But where does it end?
IMO it is much more difficult to distinguish antibugging from sound software design practices. For instance, is encapsulation antibugging, just because it prevents bugs by reducing unexpected side effects? Is a State design pattern antibugging, because it prevents behavior that is not in line with an object's state? ... and so on.
In practice I'd therefore keep the focus on preventing concrete and probable bugs (error situations that could actually happen), instead of viewing antibugging so broadly that it covers bugs in code that will never be written, and keep sound design principles as something distinct from antibugging.
Best Answer
Both describe the consistency of an application's behavior, but "robustness" describes an application's response to its input, while "fault-tolerance" describes an application's response to its environment.
An app is robust when it can work consistently with inconsistent data. For example: a maps application is robust when it can parse addresses in various formats with various misspellings and return a useful location. A music player is robust when it can continue decoding an MP3 after encountering a malformed frame. An image editor is robust when it can modify an image with embedded EXIF metadata it might not recognize -- especially if it can make changes to the image without wrecking the EXIF data.
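A toy sketch of that kind of robustness (a made-up CSV reader standing in for the MP3 decoder; Java 16+ for the record): malformed lines are counted and skipped so the rest of the data still yields a useful result.

```java
import java.util.ArrayList;
import java.util.List;

class RobustCsvReader {
    record Row(String name, int quantity) {}

    // Robustness toward *data*: tolerate inconsistent input and keep going,
    // much like a player skipping a malformed MP3 frame.
    static List<Row> parse(List<String> lines) {
        List<Row> rows = new ArrayList<>();
        int skipped = 0;
        for (String line : lines) {
            String[] parts = line.split(",");
            try {
                rows.add(new Row(parts[0].trim(), Integer.parseInt(parts[1].trim())));
            } catch (ArrayIndexOutOfBoundsException | NumberFormatException e) {
                skipped++; // bad record: skip it instead of aborting the parse
            }
        }
        if (skipped > 0) {
            System.err.println("skipped " + skipped + " malformed line(s)");
        }
        return rows;
    }
}
```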
An app is fault-tolerant when it can work consistently in an inconsistent environment. A database application is fault-tolerant when it can access an alternate shard when the primary is unavailable. A web application is fault-tolerant when it can continue handling requests from cache even when an API host is unreachable. A storage subsystem is fault-tolerant when it can return results calculated from parity when a disk member is offline.
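And a toy sketch of fault tolerance in the web-application sense (QuoteService is a made-up name; the Function parameter stands in for the real remote call): when the primary source throws, the service falls back to the last cached answer.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class QuoteService {
    private final Function<String, String> api; // the real remote call
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    QuoteService(Function<String, String> api) {
        this.api = api;
    }

    // Fault tolerance toward the *environment*: keep serving useful results
    // when a dependency is unreachable.
    String quote(String symbol) {
        try {
            String fresh = api.apply(symbol);
            cache.put(symbol, fresh); // remember the last good answer
            return fresh;
        } catch (RuntimeException apiDown) {
            String stale = cache.get(symbol);
            if (stale != null) {
                return stale; // primary unavailable: serve stale-but-useful data
            }
            throw apiDown; // nothing cached; surface the failure
        }
    }
}
```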
In both cases, the application is expected to remain stable, behave uniformly, preserve data integrity, and deliver useful results even when an error is encountered. But when evaluating robustness, you'll find criteria involving data, whereas when evaluating fault-tolerance, you'll find criteria involving uptime.
One doesn't necessarily lead to the other. A mobile voice-recognition app can be very robust, providing an uncanny ability to recognize speech consistently in a variety of regional accents with huge amounts of background noise. But if it's useless without a fast cellular data connection, it's not very fault-tolerant. Similarly, a web publishing application can be immensely fault-tolerant, with multiple redundancies at every level, capable of losing whole data centers without failing, but if it drops a user table and crashes the first time someone registers with an apostrophe in their last name, it's not robust at all.
If you're looking for scholarly literature to help describe the distinction, you might look in specific domains that make use of software, rather than broadly software in general. Distributed applications research might be fertile ground for fault-tolerance criteria, and Google has published some of their research that might be relevant. Data modeling research likely addresses questions of robustness, as scientists are particularly interested in the properties of robustness that yield reproducible results. You can probably find papers describing statistical applications that might be helpful, as in climate modeling, RF propagation modeling, or genome sequencing. You'll also find engineers discussing "robust design" in things like control systems.
The Google File System whitepaper describes their approach to fault-tolerance problems, which generally involves the assumption that component failures are routine and that the application must adapt to them.
This project for a class at Rutgers supports a "component-failure"-oriented definition of "fault tolerance".
There are loads of papers on "robust modeling XYZ", depending on the field you investigate. Most will describe their criteria for "robust" in the abstract, and you'll find it all has to do with how the model deals with input.
This brief from a NASA climate scientist describes robustness as a criterion for evaluating climate models.
This paper from an MIT researcher examines wireless protocol applications, a domain in which fault-tolerance and robustness overlap, but the authors use "robust" to describe applications, protocols, and algorithms, while they use "fault-tolerance" in reference to topology and components.