As Mikey mentioned, writing bugless code is not the goal. If that is what you are aiming for, then I have some very bad news for you.
The key point is that you are vastly underestimating the complexity of software.
First things first--you're ignoring the bigger picture of how your program runs. It does not run in isolation on a perfect system. Even the most basic "Hello World" program runs on an operating system, and therefore even the simplest of programs is susceptible to bugs that may exist in that operating system.
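To make that concrete, here is a minimal Java sketch (the language choice is mine, purely for illustration) of how much machinery even "Hello World" leans on. `System.out` swallows I/O errors by design, so a failed write to stdout (say, the other end of a pipe closing early) leaves nothing behind but an error flag:

```java
public class HelloWorld {
    public static void main(String[] args) {
        // One line of code, but it travels through the JVM,
        // the standard library, and the kernel before reaching a screen.
        System.out.println("Hello, World!");

        // PrintStream never throws on I/O failure; it only sets a flag.
        // If anything in the stack below us hiccuped, this is the sole trace.
        if (System.out.checkError()) {
            System.err.println("writing to stdout failed somewhere below us");
        }
    }
}
```

Even here, correctness depends on layers you did not write and cannot fully verify.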
The existence of libraries makes this more complex. While operating systems tend to be fairly stable, libraries are a mixed bag when it comes to stability. Some are wonderful. Others ... not so much. If you want your code to be 100% bug-free, then you also need to ensure that every library you run against is completely bug-free, and many times that simply isn't possible, as you may not even have the source code.
Then there are threads to think about. Most large-scale programs use threads all over the place. We try to be careful and write threads in such a way that race conditions and deadlocks do not occur, but it simply is not possible to test every possible combination of code. To test threading effectively, you would need to examine every possible ordering of instructions going through the CPU. I have not done the math on this one, but I suspect that enumerating all of the possible games of chess would be easier.
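To see how fast the interleavings explode, here is a minimal Java sketch (mine, not from the original answer) of the classic lost-update race: `counter++` is a read, an add, and a write, and whenever two threads interleave between those steps, an increment vanishes.

```java
public class RaceDemo {
    // A plain shared counter: not volatile, not atomic, not synchronized.
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                counter++; // read-modify-write, not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Expected 2,000,000; in practice the result varies from run to run.
        System.out.println("counter = " + counter);
    }
}
```

The program is tiny, yet the number of distinct interleavings of those two loops is astronomically large, and any realistic test run samples only a handful of them.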
Things go from hard to impossible when we look at the machine itself. CPUs are not perfect. RAM is not perfect. Hard drives are not perfect. None of the components within a machine are designed to be perfect--they're designed to be "good enough". Even a perfect program will eventually fail due to a hiccup by the machine. There's nothing you can do to stop it.
Bottom line: Can you write "Bug free software"?
NO
Anyone who tells you otherwise is clueless.
Just try to write software that is easy to understand and maintain. Once you've done that, you can call it a day.
EDIT: Some commenters raised an excellent point that I had completely overlooked: the compiler.
Unless you are writing in assembly, it is entirely possible that the compiler will mess up your code (even if you prove that your code is "perfect").
A list of bugs in GCC, one of the more commonly used compilers: http://gcc.gnu.org/bugzilla/buglist.cgi?product=gcc&component=c%2B%2B&resolution=---
The formal answer is that you misunderstood agile: agile does not dictate requirements, stakeholders do. The core of agile is not to carve your requirements in stone but to let them emerge as you go, in close contact with your client, benefiting from progressive insight.
But that's all theory. What you have witnessed is indeed a common trait of many software production lines that adopted an agile way of working.
The trouble is that listening to the customer and swiftly responding to the customer's needs often ends up, before long, in not doing any thinking about the product or any design at all. What used to be a proactive process fed by vision and expertise can, and often will, deteriorate into a passive, entirely reactive process fed by the customer's wishes. This leads to building just the bare necessities that "will do the job".
The automobile would never have been invented if manufacturers at the time had been "agile", because all the customers were asking for was a faster horse.
This does not make agile bad though. It is a bit like communism. A great idea that hardly ever works out well because people are just people, doing people things. And the method/ideology/religion lulls them into the idea that they are doing well as long as they are going through the motions and/or following the rules.
[edit]
Slebetman:
It is ironic then that agile evolved out of the automotive industry (namely Toyota).
Remember the golden rule of automation? "First organize, then automate". If you automate a broken process, the best that could happen is that you accelerate everything that goes wrong. The people at Toyota were not idiots.
The typical reason for adopting any new methodology is that things are not going well. Management acknowledges it, but they may not understand the core problems. So they hire this guru who gives a rousing speech about Agile and Scrum. And everyone loves it. For their own reasons.
The developers may think: "Hey, this might work. We would be more involved with business issues, and we could provide input for filling the backlog. This could be an opportunity to make sales and customer service understand what we do and why it is necessary, and we would have them out of our hair while we transparently burn down what we agreed on." No more "stop what you are doing, this needs to be done now" from some dude you don't want to put off popping up at your desk.
Sales, customer service or the owner, on the other hand, may see it as a way to gain (back) control over this black box of a department that is presumably doing stuff that is necessary. They do not see what is happening in there, but they are pretty sure the core of the problem is buried somewhere in there. So they introduce Scrum, install a product owner of their choice, and all of a sudden they have all the control, all the strings in their hands. Now what? ... Err ...
The real problem is often that the shop was not organized well in the first place, and this has not changed. People have been assigned responsibilities they cannot handle, or perhaps they can but Mr. Boss is constantly interfering and ruining what they did, or (most often in my experience) crucial responsibilities have not been recognized or assigned to anyone at all.
Sometimes, over time, an informal organization emerges between the formal lines, partly compensating for the lack of a formal structure. Some people just end up doing what they are good at, whether they have a business card to prove it or not. The blunt introduction of Agile/Scrum can ruin that instantly, because people are now expected to play by the rules. They feel that what they used to do is no longer appreciated; they get little yellow papers with little stories on them instead. The message is: "whatever you were doing, no one cared". Needless to say, this will not be particularly motivating for those individuals. At best, they will start waiting for orders and stop taking any initiative.
So things get worse and the conclusion will be that Agile sucks.
Agile does not suck. It is great for maintenance projects and can even be good for new development if applied carefully. But if the wrong people adopt it, do not understand it, or adopt it for the wrong reasons, it can be most destructive.
Fixing bugs before writing new code is actually one of the twelve points of the Joel Test. Joel also explains why this is a must-have.
You have a choice:
Either you implement a highly requested feature now and delay fixing a bug, which will inevitably increase the cost of fixing that bug later,
Or you fix the bug right now, accepting that customers will be disappointed that you're so slow at delivering the feature they need so much.
If the bug is not very important while the feature is, management will be inclined to ask for the feature first and the bug fix later. Business-wise, this is a perfectly valid choice, as long as management clearly understands the consequences, i.e. that it will be more difficult to fix the bug later than it is now.
Sticking to "no new features until all bugs are fixed" may not be the best business choice either. You already mentioned its limitations, so there is no need to explain them.
That being said, letting very important features be implemented before minor bugs are fixed carries a risk: where do you put the limit? Is a feature requested by 1,000 customers more important than a bug encountered by 100 customers? How do you evaluate whether a given feature should be implemented before a given bug is fixed?
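For illustration only, here is one crude way to make that evaluation explicit. Everything in this Java sketch is hypothetical (the scoring formula, the cost-growth model, and the numbers are mine, not Joel's): value an item by the users it affects, and penalize a deferred bug by how much its fix cost grows while it waits.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical scoring model: value per affected user, divided by the
// (growing) cost of doing the work. All weights are made up.
record WorkItem(String name, int usersAffected, double painPerUser,
                double costToFixNow, double costGrowthPerMonth) {

    double score(int monthsDeferred) {
        double value = usersAffected * painPerUser;
        double cost = costToFixNow + costGrowthPerMonth * monthsDeferred;
        return value / cost;
    }
}

public class Backlog {
    public static void main(String[] args) {
        List<WorkItem> backlog = List.of(
            // 1,000 customers want the feature; its cost stays flat.
            new WorkItem("fancy feature", 1000, 1.0, 20.0, 0.0),
            // 100 customers hit the bug; fixing it gets pricier every month.
            new WorkItem("old bug", 100, 1.0, 5.0, 2.0)
        );

        int monthsDeferred = 6;
        backlog.stream()
               .sorted(Comparator.comparingDouble(
                           (WorkItem w) -> w.score(monthsDeferred))
                       .reversed())
               .forEach(w -> System.out.printf("%-14s score=%.2f%n",
                           w.name(), w.score(monthsDeferred)));
    }
}
```

The exact model matters far less than having one: once the trade-off is written down, "feature first" becomes a decision management can be held to rather than a default.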
Without strict rules, and if management doesn't understand the development process very well, you may find yourself in a few years with a backlog full of bugs that were never considered important enough to fix before just another fancy feature.