The main reason for choosing a NoSQL database in recent years has been availability. For companies like Amazon, Google, and Facebook, an hour or so of downtime isn't acceptable. To achieve high availability you need to eliminate single points of failure, which means using a distributed system with multiple computers, so that if one computer crashes the service is still available.
Traditional relational databases aren't very good in a distributed multi-master setup. That's why NoSQL has been so popular lately. So if you need high availability you may choose a NoSQL database like Riak, Cassandra, HBase, S3, or BigTable.
There is a good blog post about Amazon's Dynamo that serves as an introduction to distributed NoSQL databases.
Now, the NoSQL term is very broad, so there are many NoSQL databases that aren't distributed; they solve other problems. E.g. Neo4j, a graph database, is good at a type of query that traditional RDBMSs aren't optimized for. Or, as in your case, a document database, where you don't have to change the schema if you want to add some fields to some documents. In other words, a document database is a good fit when most posts (documents) have different fields, so a relational table with predefined columns isn't usable.
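To illustrate, here is a minimal sketch (plain Python, with made-up field names) of the kind of variable-shape records a document database handles naturally, where a fixed-column table would need lots of NULLable columns or schema changes:

```python
# Each "document" is just a dict; different posts carry different fields.
posts = [
    {"_id": 1, "title": "2014 sedan", "price": 9500, "vin": "1HGCM82633A004352"},
    {"_id": 2, "title": "Cargo bike", "price": 700, "gears": 8},          # no VIN, no mileage
    {"_id": 3, "title": "Camper van", "price": 21000, "sleeps": 4, "mileage": 120000},
]

# Querying still works: filter on whatever fields a document happens to have.
cheap = [p for p in posts if p.get("price", 0) < 10000]
with_vin = [p for p in posts if "vin" in p]

print([p["_id"] for p in cheap])     # [1, 2]
print([p["_id"] for p in with_vin])  # [1]
```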
However, most NoSQL databases are not as flexible as traditional RDBMSs, so it's a good idea to use a traditional RDBMS until it can't solve your problems anymore.
The decision between a relational model and a de-normalized model is typically one of scale and of the type of database operations you anticipate will occur most often.
A relational database is typically easier to query and is more efficient for transaction-heavy applications, while a denormalized schema is more appropriate if you plan on storing a large warehouse of data that you intend to run analytics or reports on.
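As a rough illustration (a sketch using SQLite, with hypothetical table and column names based on your vehicle-listing example), the same data can be laid out normalized for transactional use or flattened for reporting:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Normalized: vehicles and listings are separate tables, linked by a foreign key.
# Good for transactional updates (change a vehicle once, every listing sees it).
con.executescript("""
CREATE TABLE vehicle (
    id    INTEGER PRIMARY KEY,
    make  TEXT NOT NULL,
    model TEXT NOT NULL
);
CREATE TABLE listing (
    id         INTEGER PRIMARY KEY,
    vehicle_id INTEGER NOT NULL REFERENCES vehicle(id),
    price      INTEGER NOT NULL
);

-- Denormalized: one wide table, make/model repeated on every row.
-- Convenient for reports and analytics, but updates must touch every copy.
CREATE TABLE listing_flat (
    id    INTEGER PRIMARY KEY,
    make  TEXT NOT NULL,
    model TEXT NOT NULL,
    price INTEGER NOT NULL
);
""")
con.close()
```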
If time is your bigger concern and you don't believe this site will have much traffic over the long term, then by all means choose schema 1, but I recommend documenting the reasoning behind your eventual decision in case someone else ends up maintaining your work in the future and struggles with a feature that is at odds with your schema decision.
Myself, I would take the time to make it as relational as possible, but I am a perfectionist.
ProTip: Consider adding the VIN as a natural key to the vehicle table. It will help you identify individual vehicles, and it maps to an easily identifiable attribute of the real-world vehicle.
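For example (again a sketch with hypothetical column names), the VIN can sit alongside the surrogate key and be enforced as unique:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE vehicle (
        id    INTEGER PRIMARY KEY,     -- surrogate key used for joins
        vin   TEXT NOT NULL UNIQUE,    -- natural key: real-world identifier
        make  TEXT NOT NULL,
        model TEXT NOT NULL
    )
""")
con.execute("INSERT INTO vehicle (vin, make, model) VALUES (?, ?, ?)",
            ("1HGCM82633A004352", "Honda", "Accord"))
try:
    # The UNIQUE constraint rejects a second vehicle with the same VIN.
    con.execute("INSERT INTO vehicle (vin, make, model) VALUES (?, ?, ?)",
                ("1HGCM82633A004352", "Honda", "Accord"))
except sqlite3.IntegrityError as e:
    print("duplicate VIN rejected:", e)
con.close()
```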
Best Answer
Berkeley DBs are not that distant from SQL databases. For instance, ISAM systems (which are similar to Berkeley DB) have been used to build relational databases (for instance, MySQL's MyISAM storage backend, warts and all).
Basically, a Berkeley DB can store table rows. You can use Berkeley DB indexing to implement relational indexes on the stored rows; limit and offset are easily implemented on top of that. Serialization is not a great concern (it is insignificant compared to I/O).
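A minimal sketch of that idea, using Python's standard-library dbm module as a stand-in for Berkeley DB (the key layout and serialization scheme here are my own assumptions, not how any particular engine stores data):

```python
import dbm
import json

# Primary "table": key = primary key, value = serialized row.
# Secondary index: email -> primary key, so lookups by email avoid a full scan.
with dbm.open("users_table", "c") as table, dbm.open("users_by_email", "c") as idx:
    row = {"id": 42, "email": "alice@example.com", "name": "Alice"}

    # Store the row under its primary key.
    table[str(row["id"])] = json.dumps(row)

    # Maintain the secondary index alongside the row.
    idx[row["email"]] = str(row["id"])

    # Query by the indexed column: two key lookups instead of scanning every row.
    pk = idx["alice@example.com"]
    found = json.loads(table[pk])
    print(found["name"])  # Alice
```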
The big difficulty is implementing joins and a decent query planner. The query planner is a crucial part of a relational database: it analyzes a query and decides how it should be executed, which tables need to be queried, in which order, and using which indexes. Then the query needs to be executed in an efficient fashion (a naive approach will probably choke on the first join of two large tables; the naive approach is to materialize the full cross product, which can have an intractable number of rows). It doesn't sound very difficult, but there are lots of combinations, and finding the best one is hard; the difference between the best combination and a merely "decent" one can be enormous.
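To make the join point concrete, here is a toy sketch (plain Python, made-up data) contrasting a naive nested-loop join with an index-assisted one; a real query planner chooses between strategies like these based on table sizes and available indexes:

```python
# Toy "tables": lists of row dicts.
users = [{"id": i, "name": f"user{i}"} for i in range(1000)]
orders = [{"id": i, "user_id": i % 1000, "total": i * 10} for i in range(5000)]

# Naive nested-loop join: compares every pair of rows, O(len(users) * len(orders)).
def nested_loop_join(users, orders):
    return [(u, o) for u in users for o in orders if u["id"] == o["user_id"]]

# Index-assisted join: build a hash index on the join key, O(len(users) + len(orders)).
def hash_join(users, orders):
    by_id = {u["id"]: u for u in users}           # the "index"
    return [(by_id[o["user_id"]], o) for o in orders if o["user_id"] in by_id]

# Both produce the same result; only the amount of work differs.
assert len(nested_loop_join(users, orders)) == len(hash_join(users, orders))
```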