It looks to me like you're experiencing cognitive dissonance, trying to believe two contradictory ideas and accept both as valid. The way to resolve it is to understand that one (or possibly both) must be incorrect, and find out which it is. In this case, the problem is that those edicts are based on a false premise, which Uncle Bob repeats several times a few lines further down:
However, think about what would happen if you walked in a room full of
people working this way. Pick any random person at any random time. A
minute ago, all their code worked.
Let me repeat that: A minute ago all their code worked! And it doesn't
matter who you pick, and it doesn't matter when you pick. A minute ago all their code worked!
That's the shining promise of TDD: test everything, make it so all your tests pass, and all your code will work.
Problem is, that's a blatant falsehood.
Test everything, make it so all your tests pass, and all your tests will pass, nothing more, nothing less. That doesn't mean anything particularly useful; it only means that none of the error conditions that you thought to test for exist in the codebase. (But if you thought to test for them, then you were paying enough attention to that possibility to write the code carefully enough to get it right in the first place, so that's less helpful than it might be.)
It doesn't mean that any error you didn't think of is not present in the codebase. It also doesn't mean that your tests--which are also code written by you--are bug-free. (Take that concept to its logical conclusion and you end up caught in infinite recursion. It's tests all the way down.)
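To make that concrete, here's a minimal sketch (the `clamp` function and its bug are invented for illustration): a test suite can be entirely green while a branch nobody thought to test is simply wrong.

```python
def clamp(value, low, high):
    """Clamp value into [low, high] -- contains a copy-paste bug."""
    if value < low:
        return low
    if value > high:
        return low   # bug: should be `high`, but no test exercises this branch
    return value

# Every test the author thought to write passes...
assert clamp(5, 0, 10) == 5    # in range
assert clamp(-3, 0, 10) == 0   # below range
# ...so the suite is green, yet clamp(99, 0, 10) returns 0, not 10.
```

All the tests pass, and all they prove is that the cases the author anticipated behave as the author anticipated.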
To give an example, there's an open-source scripting library that I use whose author boasts of over 90% unit test coverage and 100% coverage in all core functionality. But the issue tracker is almost up to 300 bugs now and they keep coming. I think I found five from the first few days of using it in real-world tasks. (To his credit, the author got them fixed very quickly, and it's a good-quality library overall. But that doesn't change the fact that his "100%" unit tests didn't find these issues, which showed up almost immediately under actual usage.)
The other major problem is that as you go on,
every hour you are producing several tests. Every day dozens of tests.
Every month hundreds of tests. Over the course of a year you will
write thousands of tests.
...and then your requirements change. You have to implement a new feature, or change an existing one, and then 10% of your unit tests break, and you need to manually go over all of them to discern which ones are broken because you made a mistake, and which are broken because the tests themselves are no longer testing for correct behavior. And 10% of thousands of tests is a lot of unnecessary extra work. (Especially if you're doing it 1 test at a time, as the three edicts demand!)
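A tiny illustration of how that happens (the `order_total` function and the shipping rule are hypothetical): tests encode requirements, so when a requirement changes, every test that encoded it fails, and each failure must be triaged by hand.

```python
# Original requirement: shipping is a flat 5.00 on every order.
def order_total(subtotal):
    return subtotal + 5.00

# Dozens of tests end up encoding that requirement, directly or incidentally:
assert order_total(20.00) == 25.00
assert order_total(0.00) == 5.00

# Now the requirement changes to "free shipping over 50.00". Updating
# order_total breaks every assertion above, and for each failure you must
# decide: is the test stale (old requirement) or is the code wrong (real bug)?
```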
When you think about it, this makes unit testing a lot like global variables, or several other bad design "patterns": it may seem helpful and save you some time and effort, but you don't notice the costs until your project grows big enough for their cumulative effect to be disastrous, and by then it's too late.
It is now two decades since it was pointed out that program testing
may convincingly demonstrate the presence of bugs, but can never
demonstrate their absence. After quoting this well-publicized remark
devoutly, the software engineer returns to the order of the day and
continues to refine his testing strategies, just like the alchemist of
yore, who continued to refine his chrysocosmic purifications.
-- Edsger W. Dijkstra. (Written in 1988, so it's now closer to
4.5 decades.)
It says you can't write production code unless it's to get a failing unit test to pass, not that you can't write a test that passes from the get-go. The intent of the rule is to say "If you need to edit production code, make sure that you write or change a test for it first."
Sometimes we write tests to prove a theory. The test passes and that disproves our theory. We don't then remove the test. However, we might (knowing that we have the backing of source control) break production code, to make sure that we understand why it passed when we didn't expect it to.
If it turns out to be a valid and correct test, and it isn't duplicating an existing test, leave it there.
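A sketch of that workflow (the `normalize` function is a made-up example): when a test passes unexpectedly, deliberately breaking the production code confirms whether the test actually exercises it.

```python
def normalize(s):
    return s.strip().lower()

def test_normalize():
    assert normalize("  Hello ") == "hello"

test_normalize()  # passes, but we expected it to fail -- why?

# With source control as a safety net, temporarily sabotage normalize
# (e.g. make it return s unchanged) and rerun. If test_normalize STILL
# passes, it never depended on normalize's logic and the test is broken;
# if it now fails, the test is genuine, so revert the sabotage and keep it.
```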
No, because it is possible to write a test that inadvertently passes when it should actually fail.
That's why you must make it fail first: demonstrating the transition from a failing state to a passing state proves that the test exercises the functionality you actually care about, rather than being a bogus test that passes and convinces you your code works when in fact it doesn't.