It depends a bit on how much abstraction you need. Everything is a compromise; for example, EF and NHibernate introduce great flexibility for representing the data in interesting and exotic models - but as a result they do add overhead. Noticeable overhead.
If you don't need to be able to switch between database providers, and different per-client table layouts, and if your data is primarily read, and if you don't need to be able to use the same model in EF, SSRS, ADO.NET Data Services, etc - then if you want absolute performance as your key measure you could do far worse than look at dapper. In our tests based on both LINQ-to-SQL and EF, we find that EF is significantly slower in terms of raw read performance, presumably due to the abstraction layers (between the storage model, the conceptual model, etc) and materialization.
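If you want to sanity-check that kind of result yourself, a crude Stopwatch harness is enough to show the gap on raw reads. This is purely a sketch - OrdersContext is a hypothetical EF context, and the Order type and connection are the same ones used in the query further below:

var sw = System.Diagnostics.Stopwatch.StartNew();
using (var db = new OrdersContext()) // hypothetical EF context
{
    var viaEf = db.Orders.Where(o => o.CustomerId == customerId).ToList();
}
sw.Stop();
Console.WriteLine("EF: {0}ms", sw.ElapsedMilliseconds);

sw.Restart();
var viaDapper = connection.Query<Order>(
    "select * from Orders where CustomerId = @customerId",
    new { customerId }).ToList();
sw.Stop();
Console.WriteLine("dapper: {0}ms", sw.ElapsedMilliseconds);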
Here at SO, we are obsessive-compulsive about raw performance, and we're happy to take the development hit of losing some abstraction in order to gain speed. As such, our primary tool for querying the database is dapper. This even allows us to use our pre-existing LINQ-to-SQL model, but simply: it is heaps faster. In performance tests, it is essentially exactly the same performance as writing all the ADO.NET code (parameters, data-readers etc) manually, but without the risk of getting a column name wrong. It is, however, SQL based (although it is happy to use SPROCs if that is your chosen poison). The advantage of this is that there is no additional processing involved, but it is a system for people who like SQL. Which I consider: not a bad thing!
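For contrast, this is roughly what the hand-written ADO.NET version of the dapper query shown below looks like - a sketch only, assuming an Order type with Id and CustomerId properties:

var orders = new List<Order>();
using (var cmd = connection.CreateCommand())
{
    cmd.CommandText = "select Id, CustomerId from Orders where CustomerId = @customerId";
    var p = cmd.CreateParameter();
    p.ParameterName = "@customerId";
    p.Value = customerId;
    cmd.Parameters.Add(p);
    using (var reader = cmd.ExecuteReader())
    {
        // one GetOrdinal/getter pair per column - the "goo" dapper removes
        int idOrdinal = reader.GetOrdinal("Id");
        int custOrdinal = reader.GetOrdinal("CustomerId");
        while (reader.Read())
        {
            orders.Add(new Order {
                Id = reader.GetInt32(idOrdinal),
                CustomerId = reader.GetInt32(custOrdinal)
            });
        }
    }
}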
A typical query, for example, might be:
int customerId = ...
var orders = connection.Query<Order>(
    "select * from Orders where CustomerId = @customerId",
    new { customerId }).ToList();
which is convenient, injection-safe, etc - but without tons of data-reader goo. Note that while it can handle both horizontal and vertical partitions to load complex structures, it will not support lazy-loading (but: we're big fans of very explicit loading - fewer surprises).
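For example, dapper's multi-mapping can stitch a joined row into more than one object in a single pass - a sketch, assuming Orders and Customers tables that each lead with an Id column, and an Order type with a Customer property:

var sql = @"select o.*, c.* from Orders o
            join Customers c on c.Id = o.CustomerId
            where o.CustomerId = @customerId";
var orders = connection.Query<Order, Customer, Order>(sql,
    (order, customer) => { order.Customer = customer; return order; },
    new { customerId },
    splitOn: "Id").ToList();

The splitOn value tells dapper where the Customer columns begin in the combined row, so the whole graph is loaded in one explicit query rather than lazily.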
Note in this answer I am not saying that EF isn't suitable for high-volume work; simply: I know that dapper is up to it.
Best Answer
What you mentioned about Entity Framework happens only if your model is up to date (in database-first design). If not, you won't get a compile-time error; those mapping problems are simply deferred to run time.
Consider this scenario: you create a database with one table called Users, which has three columns: Username, Password, and IsActive. You create a project and add an Entity Framework .edmx file to it, updating it with the schema of your database. Great till here. Now what happens if someone changes the data type of the IsActive column from Boolean to int, and you don't know about it? You simply build your project (which builds successfully), run it, and then you get errors.
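To make that concrete, here is a minimal sketch (the class and context names are just illustrative): the generated entity still declares the property as a bool, so everything compiles, and the mismatch only surfaces at run time when EF touches that column:

public class User
{
    public string Username { get; set; }
    public string Password { get; set; }
    public bool IsActive { get; set; } // the database column is now int
}

// Compiles fine; fails at run time when EF tries to map the
// int column into the bool property while materializing results.
using (var db = new MyContext()) // hypothetical DbContext
{
    var active = db.Users.Where(u => u.IsActive).ToList(); // run-time error here
}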
In the model-first and code-first approaches, the scenario is even worse, as there is no direct, auto-generated mapping between your domain model and the database schema.
Thus, from my experience so far, I can't say that Entity Framework definitely helps you catch mapping problems at compile time. The only case where it helps is when you let it generate the model from the database itself and keep that model up to date.