There are quite a few, but the advantages far outweigh the disadvantages.
There's a steep learning curve.
Many developers seem to expect that they can be efficient with test-first programming right from day one. Unfortunately it takes a lot of time to gain enough experience to program at the same speed as before. You can't get around it.
To be more specific, it's very easy to get wrong. You can very easily (with very good intentions) end up writing a whole bunch of tests which are either difficult to maintain or test the wrong stuff. It's difficult to give examples here - these kinds of issues simply take experience to solve. You need a good feel for separating concerns and designing for testability. My best advice here would be to do pair-programming with someone who knows TDD really well.
You do more coding up front.
Test-first means you can't skip tests (which is good), but it also means you'll end up writing more code up front. This means more time. Again, you can't get around it. You get rewarded with code that's easier to maintain and extend and generally has fewer bugs, but it takes time.
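To make the "more code up front" point concrete, here's a minimal sketch of what test-first looks like in practice. The `fizzbuzz` function and its tests are purely illustrative; in a real test-first workflow you'd write the test class first, watch it fail, and only then write the implementation (it's included here so the snippet runs):

```python
import unittest

# In test-first order, this implementation comes AFTER the tests below
# have been written and have failed at least once.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class TestFizzBuzz(unittest.TestCase):
    # These tests pin down the expected behavior before coding starts.
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")

if __name__ == "__main__":
    unittest.main()
```

Notice that the test code is roughly as long as the production code - that's the extra up-front cost, paid back later in easier maintenance.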
Can be a tough sell to managers.
Software managers are generally only concerned with timelines. If you switch to test-first programming and you're suddenly taking 2 weeks to complete a feature instead of one, they're not gonna like it. This is definitely a battle worth fighting and many managers are enlightened enough to get it, but it can be a tough sell.
Can be a tough sell to fellow developers.
Since there's a steep learning curve not all developers like test-first programming. In fact, I would guess that most developers don't like it at first. You can do things like pair-programming to help them get up to speed, but it can be a tough sell.
In the end, the advantages outweigh the disadvantages, but it doesn't help if you just ignore the disadvantages. Knowing what you're dealing with right from the start helps you to negotiate some, if not all, of the disadvantages.
Firstly, you should not use profiling to measure performance. Profiling is intended to identify the parts of your program that take the most time. When measuring performance, we only care about the speed of the whole program taken together, and profiling in that case only skews results. Instead, performance should be tested with benchmarks: the program processes some task, and we just measure how long it takes.
Secondly, you don't pass or fail performance tests. What we want to do is identify when something has made the program run slower. By running a benchmark against different revisions, you can easily obtain performance data for each revision. From there it would be fairly simple to automatically raise a warning on significant performance degradations.
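A rough sketch of how such a regression warning might work. The function names, the example workload, and the 10% threshold are all assumptions for illustration, not something prescribed above:

```python
import time

def benchmark(func, repeats=5):
    """Run func several times and return the best wall-clock time.
    Taking the minimum reduces noise from other processes."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        best = min(best, time.perf_counter() - start)
    return best

def is_regression(old_time, new_time, threshold=0.10):
    """True if the new revision is more than `threshold` (10%) slower
    than the recorded time for the previous revision."""
    return (new_time - old_time) / old_time > threshold

# Example workload standing in for "the task the program processes".
def workload():
    sum(i * i for i in range(100_000))

elapsed = benchmark(workload)
# 0.05 s is a made-up stored baseline from the previous revision;
# in practice you'd persist one measurement per revision.
if is_regression(0.05, elapsed):
    print("WARNING: significant performance degradation")
```

Storing one benchmark result per revision is exactly what lets you chart performance over time, as the PyPy project does.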
http://speed.pypy.org has done something like this. It shows PyPy's performance as charts, and you can look at each benchmark and how it has changed over various revisions. You can actually see some revisions which markedly decreased performance but were later fixed.
You pretty much nailed the most important ones. I have a few minor additions, plus the disadvantage of tests actually succeeding - when you don't really want them to (see below).
Development time: With test-driven development this is already factored in for unit tests, but you still need integration and system tests, which may need automation code as well. Code written once is usually tested at several later stages.
Skill level: of course, the tools have to be supported. But it's not only your own team. In larger projects you may have a separate testing team that writes tests for checking the interfaces between your team's product and others'. So many more people have to have more knowledge.
Tooling needs: you're spot on there. Not much to add to this.
Failed tests: This is the real bugger (for me anyways). There's a bunch of different reasons, each of which can be seen as a disadvantage. And the biggest disadvantage is the time required to decide which of these reasons actually applies to your failed test.
Non-failed tests: These are a disadvantage too and can be quite bad. It happens mostly when you change things, and it comes close to what Adam answered. If you change something in your product's code, but the test doesn't account for it at all, then it gives you this "false sense of security".
An important aspect of non-failed tests is that a change of requirements can cause earlier behavior to become invalid. If you have decent traceability, the requirement change can be matched to your test code, and you know you can no longer trust that test. Of course, maintaining this traceability is yet another disadvantage. And if you don't, you end up with a test that does not fail but actually verifies that your product works wrongly. Somewhere down the road this will hit you, usually when/where you least expect it.
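A small sketch of that failure mode. The discount rule and figures are hypothetical; the point is that the test still passes against outdated behavior:

```python
import unittest

# Original requirement: orders over 100 get a 10% discount.
# NEW requirement: the discount was raised to 15% -- but neither the
# code nor the test was updated, so the test below still passes and
# silently verifies the OLD, now-wrong behavior.
def discounted_price(total):
    if total > 100:
        return total * 0.90   # still implements the old 10% rule
    return total

class TestDiscount(unittest.TestCase):
    def test_large_order_discount(self):
        # Passes, giving a false sense of security: under the new
        # requirement the expected value should be 170.0, not 180.0.
        self.assertEqual(discounted_price(200.0), 180.0)

    def test_small_order_no_discount(self):
        self.assertEqual(discounted_price(50.0), 50.0)

if __name__ == "__main__":
    unittest.main()
```

With requirement-to-test traceability, the requirement change would have flagged `test_large_order_discount` as untrustworthy; without it, this green test sits there until the wrong price hits production.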
Additional deployment costs: You do not just run unit-tests as a developer on your own machine. With automated tests, you want to execute them on commits from others at some central place to find out when someone broke your work. This is nice, but also needs to be set up and maintained.