I am going to orient this answer as if the question was "what are the advantages of chef-solo" because that's the best way I know to cover the differences between the approaches.
My summary recommendation is in line with others: use a chef-server if you need to manage a dynamic, virtualized environment where you will be adding and removing nodes often. A chef server is also a good CMDB, if you need one. Use chef-solo if you have a less dynamic environment where the nodes won't change too often but the roles and recipes will. Size and complexity of your environment is more or less irrelevant. Both approaches scale very well.
If you deploy chef-solo, use a cronjob with rsync, 'git pull', or some other idempotent file transfer mechanism to maintain a full copy of the chef repository on each node. The cronjob should be easily configurable to (a) not run at all and (b) run without syncing the local repository. Add a nodes/ directory to your chef repository with a json file for each node. Your cronjob can be as sophisticated as you wish in identifying the right node file (though I would recommend simply $(hostname -s).json). You may also want to create an opscode account and configure a client with hosted chef, if for no other reason than to be able to use knife to download community cookbooks and create skeletons.
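As a sketch of what that cronjob might invoke — the repository path, the flag-file names, and the solo.rb location below are all assumptions for illustration, not conventions:

```ruby
#!/usr/bin/env ruby
# Hypothetical cron-driven chef-solo wrapper. REPO, the flag-file names,
# and the solo.rb location are invented for this sketch.
require "socket"

REPO         = "/var/chef-repo"              # local clone of the master repo
DISABLE_FILE = File.join(REPO, ".disabled")  # (a) touch to skip runs entirely
NOSYNC_FILE  = File.join(REPO, ".nosync")    # (b) touch to run without syncing

# Pick this host's node file, e.g. nodes/web01.json for web01.example.com.
def node_file(repo, hostname = Socket.gethostname)
  File.join(repo, "nodes", "#{hostname.split('.').first}.json")
end

def converge(repo = REPO)
  return if File.exist?(DISABLE_FILE)
  system("git -C #{repo} pull --ff-only") unless File.exist?(NOSYNC_FILE)
  system("chef-solo -c #{repo}/solo.rb -j #{node_file(repo)}")
end
```

A crontab entry would then simply call `converge` on whatever schedule suits you; the two flag files give you the "don't run" and "run without syncing" switches described above.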
There are several advantages to this approach, besides the obvious "not having to administer a server". Your source control will be the final arbiter of all configuration changes, the repository will include all nodes and runlists, and each server being fully independent facilitates some convenient testing scenarios.
Chef-server introduces a hole: anyone can run "knife upload" to update a cookbook, and you must patch this hole yourself (for example with a post-commit hook) or risk site changes being silently overwritten by someone who uploads an obsolete recipe from the outdated local repository on his laptop. This is less likely to happen with chef-solo, since all changes are synced to servers directly from the master repository. The issue comes down to discipline and the number of collaborators: if you're a solo developer or a very small team, uploading cookbooks via the API is not very risky; in a larger team it can be, if you don't put good controls in place.
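One possible shape for such a control, assuming uploads happen only from a hook on the central repository (the path and the exact knife subcommands below are illustrative):

```ruby
# Hypothetical guard: the central repository's post-commit hook is the
# only place "knife upload" ever runs, so laptops can't clobber the site.
def upload_command(repo = "/var/chef-repo")
  # Always upload from a fresh checkout of the master branch, never a
  # potentially stale local clone.
  "cd #{repo} && git pull --ff-only && knife upload cookbooks roles"
end

# In the hook script on the central repository:
#   system(upload_command) or abort("knife upload failed")
```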
Additionally, with chef-solo you can store all your nodes' roles, custom attributes and runlists as node.json files in your main chef repository. With chef-server, roles and runlists are modified on the fly using the API. With chef-solo, you can track this information in revision control. This is where the conflict between static and dynamic environments can be seen most clearly. If your list of nodes (no matter how long it might be) doesn't change often, having this data in revision control is very useful. On the other hand, if you're frequently spawning new nodes and destroying old ones (never to see their hostname or fqdn again), keeping it all in revision control is just an unnecessary hassle, and having an API to make changes is very convenient. Chef-server also has a whole set of features geared toward managing dynamic cloud environments, like the node-name option on "knife bootstrap", which lets you replace fqdn as the default way to identify a node. But in a static environment those features are of limited value, especially compared to having the roles and runlists in revision control with everything else.
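A per-node file under nodes/ might look like the sketch below; the role, recipe, and attribute names are invented for illustration:

```ruby
# Build and write a hypothetical nodes/web01.json; chef-solo would then
# consume it with: chef-solo -j nodes/web01.json
require "json"
require "tmpdir"

node = {
  "run_list" => ["role[base]", "recipe[nginx]"],  # roles and recipes to apply
  "nginx"    => { "worker_processes" => 4 }       # custom attributes
}

Dir.mktmpdir do |repo|
  Dir.mkdir(File.join(repo, "nodes"))
  path = File.join(repo, "nodes", "web01.json")
  File.write(path, JSON.pretty_generate(node))
  puts JSON.parse(File.read(path))["run_list"].first  # → role[base]
end
```

Because these files live in the repository, every change to a node's runlist or attributes shows up in revision history like any other change.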
Finally, recipe test environments can be set up on the fly for almost no extra work. You can disable the cronjobs running on a server and make changes directly to its local repository. You can test the changes by running chef-solo, and you will see exactly how the server will configure itself in production. Once everything is tested, you can check in the changes and re-enable the local cronjobs. When writing recipes, though, you won't be able to use the Search API, meaning that if you want to write dynamic recipes (e.g. load balancers) you will have to hack around this limitation, gathering the data from the json files in your nodes/ directory, which is likely to be less convenient and will lack some of the data available in the full CMDB. Once again, more dynamic environments favor the database-driven approach, while less dynamic environments will be fine with json files on local disk. In a chef-server environment, where a chef run must make API calls to a central database, you are dependent on managing all testing environments within that database.
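A minimal stand-in for the Search API under those constraints might scan the nodes/ directory directly; the role name and directory layout here are assumptions:

```ruby
# Poor man's node search: return the names of every node whose run_list
# includes role[<role>], by reading the same JSON files chef-solo uses.
require "json"

def nodes_with_role(nodes_dir, role)
  Dir.glob(File.join(nodes_dir, "*.json"))
     .select { |p| JSON.parse(File.read(p)).fetch("run_list", []).include?("role[#{role}]") }
     .map    { |p| File.basename(p, ".json") }
end

# A loadbalancer recipe could then build its backend list from, e.g.:
#   nodes_with_role("/var/chef-repo/nodes", "webserver")
```

This gives you the node names, but not the rich, live attribute data a chef-server CMDB would return — which is exactly the trade-off described above.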
The last can also be used in emergencies. If you are troubleshooting a critical issue on production servers and solve it with a configuration change, you can make the change immediately on the server's repository then push it upstream to the master.
Those are the primary advantages of chef-solo. There are some others, like not having to administer a server or pay for hosted chef, but those are relatively minor concerns.
To sum up: If you are dynamic and highly virtualized, chef-server provides a number of great features (covered elsewhere) and most of the chef-solo advantages will be less noticeable. However, there are some definite, often-unmentioned advantages to chef-solo, especially in more traditional environments. Note that being deployed on the cloud doesn't necessarily mean you have a dynamic environment. If you can't, for example, add more nodes to your system without releasing a new version of your software, you probably aren't dynamic. Finally, from a high-level perspective a CMDB can be useful for any number of things only tangentially related to system administration and configuration, such as accounting and information-sharing between teams. Using chef-server might be worth it for that feature alone.
Let's use their respective web pages to find out what all these projects are about. I'll change the order in which you listed them, though:
Chef: Chef is an automation platform that transforms infrastructure into code.
This is configuration management software. Most such tools use the same paradigm: they allow you to define the state you want a machine to be in, with regard to configuration files, software installed, users, groups and many other resource types. Most of them also provide functionality to push changes onto specific machines, a process usually called orchestration.
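The shared paradigm can be illustrated with a toy "resource": you declare the state you want, and applying it is idempotent. Real Chef resources look like `file "/etc/motd" do content "..." end`; the plain-Ruby sketch below only mimics the idea:

```ruby
# Toy desired-state resource: converge a file to the given content,
# doing nothing if it is already correct (idempotence).
require "tmpdir"

def ensure_file(path, content)
  return :unchanged if File.exist?(path) && File.read(path) == content
  File.write(path, content)  # converge to the declared state
  :changed
end

Dir.mktmpdir do |dir|
  motd = File.join(dir, "motd")
  puts ensure_file(motd, "managed by chef\n")  # → changed
  puts ensure_file(motd, "managed by chef\n")  # → unchanged
end
```

Running it twice changes nothing the second time — which is what lets these tools run repeatedly (e.g. from cron) without side effects.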
Vagrant: Create and configure lightweight, reproducible, and portable development environments.
It provides a reproducible way to generate fully virtualized machines using either Oracle's VirtualBox or VMWare technology as providers. Vagrant can coordinate with a configuration management software to continue the process of installation where the operating system's installer finishes. This is known as provisioning.
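As a sketch, a minimal Vagrantfile that boots a VirtualBox machine and hands provisioning off to chef-solo — the box name and paths are assumptions:

```ruby
# Hypothetical Vagrantfile: a VirtualBox box plus chef_solo provisioning.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"           # base image to reproduce
  config.vm.provision "chef_solo" do |chef|  # hand off after OS install
    chef.cookbooks_path = "cookbooks"
    chef.add_recipe "nginx"
  end
end
```

This is the handoff described above: Vagrant builds the machine, then a configuration management tool takes over.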
Docker: An open source project to pack, ship and run any application as a lightweight container
The functionality of this software somewhat overlaps with that of Vagrant, in that it provides the means to define operating system installations, but it differs greatly in the technology used for this purpose. Docker uses Linux containers, which are not virtual machines per se, but isolated processes running in isolated filesystems. Docker can also use a configuration management system to provision the containers.
OpenStack: Open source software for building private and public clouds.
While it is true that OpenStack can be deployed on a single machine, such a deployment is purely a proof of concept, and probably not very functional due to resource constraints.
The primary target for OpenStack installations is bare-metal, multi-node environments, where the different components can run on dedicated hardware to achieve better results.
A key functionality of OpenStack is its support for many virtualization technologies, from full virtualization (VirtualBox, VMWare, KVM/Qemu) to containers (LXC) and even User Mode Linux (UML).
I've tried to present these products as components of a specific architecture. From my point of view, it makes sense to first define your needs with regard to the environment (Chef, Puppet, Ansible, ...), then deploy it in a controlled fashion (Vagrant, Docker, ...), and finally scale it to global size if need be (OpenStack).
How much of all this functionality you need should be defined in the scope of your project.
Also note that I've oversimplified nearly all of these technical explanations. Please use the referenced links for detailed information.