Some will say otherwise, but I would suggest that you separate TDD and unit testing. TDD is quite a mental shift, and unit testing initially appears to take time. If you consider them one item, there's a risk that you won't see enough benefit immediately and will be tempted to drop TDD, and unit testing with it.
The first thing is to write some unit tests. They don't have to be perfect at first. Just teach yourself how to test small units of code and how to use mocking to isolate components.
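As a minimal sketch of what that looks like (NUnit plus Moq here, though any mocking library will do; the IPriceFeed/OrderValuer names are invented for the example):

    using Moq;
    using NUnit.Framework;

    // Invented names for the example: a tiny unit with one dependency.
    public interface IPriceFeed
    {
        decimal GetPrice(string symbol);
    }

    public class OrderValuer
    {
        private readonly IPriceFeed _feed;
        public OrderValuer(IPriceFeed feed) { _feed = feed; }
        public decimal Value(string symbol, int quantity) => _feed.GetPrice(symbol) * quantity;
    }

    [TestFixture]
    public class OrderValuerTests
    {
        [Test]
        public void Value_Multiplies_Price_By_Quantity()
        {
            // The mock stands in for the real price feed, isolating OrderValuer.
            var feed = new Mock<IPriceFeed>();
            feed.Setup(f => f.GetPrice("ABC")).Returns(10.5m);

            var valuer = new OrderValuer(feed.Object);

            Assert.AreEqual(105m, valuer.Value("ABC", 10));
        }
    }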
This is the biggest time-taker but has by far the biggest payoff. Once you notice that you no longer have to page through 14 web pages to get to the one you want to test, you'll know what I'm talking about.
For me, the big Eureka moment was a Windows app where I was trying to test a regex that required me to fill in two forms before I could even get to it. I installed NUnit and wrote a test around that method, and it immediately saved me hours of manual testing. Then I added more tests to cover the edge cases. And so on.
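That first test looked roughly like this; the validator and its regex here are invented stand-ins for my actual method, but the shape is the same:

    using System.Text.RegularExpressions;
    using NUnit.Framework;

    // Hypothetical stand-in for the method buried behind the two forms.
    public static class ReferenceValidator
    {
        private static readonly Regex Pattern = new Regex(@"^INV-\d{6}$");

        public static bool IsValid(string input) => Pattern.IsMatch(input);
    }

    [TestFixture]
    public class ReferenceValidatorTests
    {
        [Test]
        public void Accepts_A_Well_Formed_Reference()
        {
            Assert.IsTrue(ReferenceValidator.IsValid("INV-123456"));
        }

        [Test]
        public void Rejects_A_Missing_Prefix()
        {
            Assert.IsFalse(ReferenceValidator.IsValid("123456"));
        }
    }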
Then learn to write unit tests well. Learn the balance between brittle catch-all tests, which are quick to write, and many, many small individual tests. This part is fairly easy. The lesson is that ideally each test tests only one thing, but you quickly learn how long that takes, so you start bending the rule a bit, until you write a test that breaks on every code change; then you move back towards the right balance, which is closer to one test per thing than to the all-in-one test.
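As a sketch of the trade-off, with an invented DiscountCalculator:

    using NUnit.Framework;

    // Invented rule: 10% discount on orders of 100 or more.
    public static class DiscountCalculator
    {
        public static decimal For(decimal total) => total >= 100m ? total * 0.10m : 0m;
    }

    [TestFixture]
    public class DiscountCalculatorTests
    {
        // Focused: one behaviour per test, so a failure points straight at the cause.
        [Test]
        public void No_Discount_Below_Threshold() =>
            Assert.AreEqual(0m, DiscountCalculator.For(99m));

        [Test]
        public void Ten_Percent_At_Threshold() =>
            Assert.AreEqual(10m, DiscountCalculator.For(100m));

        // Catch-all: quicker to write, but it breaks on any change to any rule,
        // and the failure no longer tells you which behaviour regressed.
        [Test]
        public void Everything_About_Discounts()
        {
            Assert.AreEqual(0m, DiscountCalculator.For(99m));
            Assert.AreEqual(10m, DiscountCalculator.For(100m));
            Assert.AreEqual(25m, DiscountCalculator.For(250m));
        }
    }

The catch-all version saves typing today, but the first time it fails you're back to debugging just to find out which rule broke.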
TDD is, as I said, a major mental shift in the way you work. However, it won't add much time to your development process once you're writing tests anyway. And you will, I promise, see your coding style improve before your very eyes. Or rather, if you don't, then drop it; it's not for you.
One last thing to bear in mind is that TDD is not limited to unit tests. Acceptance-test-driven design is every bit as much a part of TDD. That's another good reason not to mix the two up in your mind.
I would abstract away the hardware dependencies at the earliest possible step and build the system on software emulation/test harnesses, which enables all sorts of test frameworks. Often my development PC was used to test 95% or more of the complete system. The cost of the extra overhead (another layer of abstraction) was easily won back by the cleaner code that abstraction produced.
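Here's a minimal sketch of such a seam, with invented names; the real implementation would talk to the device driver, while the fake lets everything above the interface run, and be tested, on a PC:

    using System.Collections.Generic;
    using NUnit.Framework;

    // The seam: everything above this interface can run on a development PC.
    public interface ITemperatureSensor
    {
        double ReadCelsius();
    }

    // In production this would be backed by the device driver or registers;
    // in tests, a scripted fake replaces the hardware entirely.
    public class FakeTemperatureSensor : ITemperatureSensor
    {
        private readonly Queue<double> _readings;
        public FakeTemperatureSensor(params double[] readings)
        {
            _readings = new Queue<double>(readings);
        }
        public double ReadCelsius() => _readings.Dequeue();
    }

    public class OverheatMonitor
    {
        private readonly ITemperatureSensor _sensor;
        public OverheatMonitor(ITemperatureSensor sensor) { _sensor = sensor; }
        public bool IsOverheating() => _sensor.ReadCelsius() > 85.0;
    }

    [TestFixture]
    public class OverheatMonitorTests
    {
        [Test]
        public void Flags_A_Reading_Above_The_Threshold()
        {
            var monitor = new OverheatMonitor(new FakeTemperatureSensor(90.0));
            Assert.IsTrue(monitor.IsOverheating());
        }
    }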
Testing the truly bare-metal parts of an embedded system is usually done by a separate application (a unit test of sorts?) that hammers the firmware well beyond anything the applications can hope to achieve. Automation can be done, at a cost, but it is not typical.
Unless, that is, you have the budget to build a unit-test hardware harness, including a full ICE (in-circuit emulator). That is absolutely fine, as the functional tests are generally small.
Apart from Luc's suggestion to test one piece of functionality per test (method), I would suggest not using Unity to set up the unit under test. Strictly speaking, that is no longer a unit test, because it exercises both your repository and the configuration of Unity.
This might not be a problem now, but it can easily become one in the future. Consider, for example, the situation where someone extends your language repository with some online-lookup functionality. The respective request interface would probably be retrieved through DI. As a consequence, your tests may suddenly start failing at random because of occasional connection timeouts.
In my experience, it pays to be strict about dependencies in unit tests, i.e., every input is replaced manually by a corresponding mock. That is the only way to make sure a test failure actually relates to a problem in the unit itself (or in the expectations your tests express).
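A sketch of what that strictness looks like, assuming invented shapes for the repository and its lookup dependency (and Moq as the mocking library); every collaborator is handed in explicitly rather than resolved from the Unity container:

    using Moq;
    using NUnit.Framework;

    // Assumed shapes, invented for the sketch.
    public interface ILanguageLookup
    {
        string Find(string code);
    }

    public class LanguageRepository
    {
        private readonly ILanguageLookup _lookup;
        public LanguageRepository(ILanguageLookup lookup) { _lookup = lookup; }
        public string GetName(string code) => _lookup.Find(code) ?? "unknown";
    }

    [TestFixture]
    public class LanguageRepositoryTests
    {
        [Test]
        public void Falls_Back_To_Unknown_When_The_Lookup_Misses()
        {
            // Every dependency is a hand-supplied mock; no container is involved,
            // so a failure here can only mean the repository itself is wrong.
            var lookup = new Mock<ILanguageLookup>();
            lookup.Setup(l => l.Find("xx")).Returns((string)null);

            var repo = new LanguageRepository(lookup.Object);

            Assert.AreEqual("unknown", repo.GetName("xx"));
        }
    }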