I don't really see this as a problem: cookies aren't that big, and the server doesn't have to send them back in the response. So while, yes, one or more cookies set by example.com will "balloon" requests to static.example.com, static.example.com doesn't echo them back, so the server's responses are completely unaffected.
Further, if you have content that you know to be static (images, CSS, JS, etc.), you should be setting proper cache-control headers, especially an Expires header set to some appropriate time in the future (e.g. +1 day, +1 week, +50 years, whatever makes sense for how often your static content changes). With this done, those cookies will only be sent once, and future requests for those files will be served from the browser's cache.
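To make that concrete, here's a minimal sketch of building such headers in Python's standard library (the 7-day lifetime is purely illustrative, not a recommendation):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

# Far-future caching headers for a static file. 7 days is illustrative;
# pick whatever matches how often your static content changes.
expires_at = datetime.now(timezone.utc) + timedelta(days=7)
headers = {
    "Expires": format_datetime(expires_at, usegmt=True),  # RFC 1123 date
    "Cache-Control": "public, max-age=604800",  # 604800 s = 7 days
}
print(headers["Expires"])
```

With headers like these in place, the browser won't even re-request the file (cookies and all) until the expiry passes.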
If you still feel this is too much (I can't really see how it could be, honestly), you can mitigate it further by using paths. For example, if your web app only needs cookies under example.com/app, set the cookie path to /app, and then requests to static.example.com/images/some-awesome-image.jpg won't send that cookie because the path doesn't match.
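For illustration, here's what a path-scoped cookie looks like using Python's standard http.cookies module (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Hypothetical session cookie scoped to the /app path.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["domain"] = "example.com"
cookie["session_id"]["path"] = "/app"

# The browser will only attach this cookie to requests whose path
# starts with /app; requests for /images/... stay cookie-free.
header = cookie.output()
print(header)
```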
You can further mitigate it by reducing the number of cookies you use -- for most purposes, one (and only one) cookie storing a single session ID only adds a few bytes (less than 1KB in every implementation I've seen/used), and gives the server ample information to look up everything it needs to know about that client server-side.
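As a quick sanity check on the size claim, the entire Cookie request header for a single session ID (a made-up 32-character hex string here) is only a few dozen bytes:

```python
# Entire per-request cookie overhead with one session-ID cookie
# (the ID is a hypothetical 32-character hex string).
cookie_header = "Cookie: sid=9f8b2c4e1a7d4d3e8b6f0a2c5d7e9f1b\r\n"
overhead = len(cookie_header.encode("ascii"))
print(overhead)  # well under 1 KB
```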
Honestly, if you're worried about short-URL domains making your site non-future-proof because of cookies, then I think you have an architectural problem with too many/too big cookies, and you should re-evaluate your strategy from that point of view.
Sorry to say, but I would grab a profiler, if one exists for PHP/MySQL, and optimize. However I cut the numbers, this site is something that should happily run on an Atom processor. 180,000 visitors per day is not THAT much for a well-programmed site. For the disk wait: get a proper RAID controller or ZFS and put in one or two SSDs as a cache, plus proper hard discs, fast ones and many of them. Databases are not something you run with good performance on a normal low-end server. Just to give you an idea: I have an 800 GB database server and I am using 10 discs, 8 Velociraptors in a RAID 10 plus 2 SSDs in a mirror for the logs. Disc waits will happen on any database with a badly designed disk subsystem.
So, again, if I were you I would:
Start optimizing my PHP code and put in some accelerators. I remember handling 400,000 visitors on a dating site years ago on a dual Pentium, in an hour, during a TV show, with ASP, not even compiled.
Start laying out a better IO subsystem.
Note: the latter may require new hardware. Anyhow, SuperMicro rules here: they have server cases with up to 72 drive bays in 4 rack units of height, or 24 discs in 2 rack units, all on a SAS backplane. I use one of those (20 discs in it now) and it really rocks.
If you're using nginx, then you're talking just a few KB of overhead per active connection. If you're using something like Apache, you'll have one thread per connection, which means hundreds of KB or even megabytes per connection.
However, nginx does not support asynchronous disk IO on Linux (because async disk IO on Linux is basically horribly broken by design). So you will have to run many nginx worker processes, as every disk read could potentially block a whole worker process. If you're using FreeBSD, this isn't a problem, and nginx will work wonders with asynchronous disk and network IO. But you might want to stick with Apache if you're using Linux for this project.
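A rough sketch of what that looks like in nginx.conf on Linux; the worker count is purely illustrative, and you'd tune it to how many concurrent blocking disk reads you expect:

```nginx
# More workers than cores, so one worker blocked on a disk read
# doesn't stall all connection handling (values are illustrative).
worker_processes  16;

events {
    worker_connections  1024;
}
```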
But really, the most important thing is the disk cache rather than the web server you choose. You want lots of free RAM so that the OS will cache those files and make reads really fast. If the "hot set" is more than, say, 8 GB, consider getting less RAM and an inexpensive SSD instead, as the cost/benefit ratio will likely be better.
Finally, consider using a CDN to offload this and getting a really cheap server. Serving static files is what CDNs do, and they do it very fast and very cheaply. SimpleCDN has the lowest prices, but MaxCDN, Rackspace, Amazon, etc. are all big players at the low end of the CDN space.