No, because it's more effective to scale horizontally than vertically.
In other words, it's better to design an architecture where you can spread requests across a number of less powerful machines than to build one super-powerful machine running code that has been optimized at a low level.
The push toward horizontal scaling comes from the nature of the web: websites are very I/O-heavy, not compute-heavy. There are exceptions, such as YouTube, which encodes massive amounts of video, but even that can be handled by setting up dedicated compute nodes and an effective task-management system.
For instance, you could implement a reverse-proxy server that does nothing but distribute incoming requests to other servers that fetch the content. It doesn't matter where the data comes from as long as there is a common origin for requests (and even that can be extended using advanced DNS techniques).
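As a rough sketch, here's what that looks like in nginx (mentioned further down); the pool members and hostname are placeholders, not a recommendation for any specific setup:

```nginx
# Hypothetical pool of two less-powerful backend machines.
upstream app_pool {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # nginx round-robins requests across the pool by default;
        # clients only ever see the one common origin.
        proxy_pass http://app_pool;
        proxy_set_header Host $host;
    }
}
```

Adding capacity then becomes a matter of adding another `server` line to the pool rather than upgrading the box.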
At the very minimum, it's common to split the 'data layer' (e.g. the database) from the 'web layer' (the HTTP server). Even StackOverflow has a clear separation between the two, implementing C# on the 'web layer' and a separate REST API (also C#?) to access the database.
Even when you need to tap into the raw computing power of C, many 'glue languages' like Python have built-in support for calling C/C++ code.
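For example, Python's standard `ctypes` module can call into an existing C library with no extension module at all. A minimal sketch, assuming a Unix-like system where the C math library can be located:

```python
import ctypes
import ctypes.util

# Locate and load the system C math library (libm on Linux/macOS).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double).
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

result = libm.sqrt(9.0)  # calls the compiled C function directly
```

For anything heavier, the usual routes are a compiled extension module or tools like Cython, but the principle is the same: the glue language orchestrates, C does the number-crunching.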
A lot of ground-breaking work is being done in the field of scalability, and fortunately the people breaking new ground like to share the specifics of their approach.
Basically, it comes down to:
Architecture trumps raw computing power when it comes to the web
HighScalability.com is my favorite site to read about scalable architecture development, but there are plenty more to be found on the web.
Aside:
One of the greatest bottlenecks in handling web requests is the architecture of the HTTP server. Traditional servers like Apache fork a sub-process for every request. It works, but every sub-process added to the stack needs its own pool of memory and inevitably increases the overall task-switching overhead on the server.
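Python's standard library lets you see that per-request-process model in miniature. A sketch (not production code; `ForkingTCPServer` is POSIX-only):

```python
import socketserver

class EchoHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # In the forking server, this runs in a freshly forked child:
        # its own memory, its own slot in the OS scheduler.
        data = self.rfile.readline()
        self.wfile.write(data)

if __name__ == "__main__":
    # ForkingTCPServer mirrors Apache's classic one-process-per-request
    # model; swapping in ThreadingTCPServer gives the threaded variant.
    with socketserver.ForkingTCPServer(("127.0.0.1", 8000), EchoHandler) as srv:
        srv.serve_forever()
```

Each concurrent connection costs a whole process here, which is exactly the overhead the paragraph above describes.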
A DDoS attack takes advantage of this weakness by overloading the HTTP server to the point where it can no longer handle the load and crashes.
Multi-threading was introduced as a stopgap measure, but 'good' multi-threaded software is hard to write, and multi-threading bugs are notoriously hard to pinpoint.
Another 'school of thought' that has become popular recently is event-driven web servers like Node.js and nginx. They rely on a simpler single-threaded model and a programming style that avoids blocking on function returns by using a fire-and-forget (callback) model. It's more difficult to program in such a manner, but as long as heavy compute tasks are handed off to servers specialized for them, the numbers (i.e. hits/sec) they can handle, even on commodity hardware, are impressive.
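A minimal sketch of that single-threaded, event-driven style, using Python's asyncio (an echo server standing in for real request handling):

```python
import asyncio

async def handle(reader, writer):
    # One event loop, one thread: while this coroutine awaits I/O,
    # the loop services other connections instead of blocking.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8001)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

No fork, no thread per connection; a single loop multiplexes thousands of idle sockets cheaply, which is why this model suits I/O-heavy workloads so well.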
Best Answer
Best case: A single ID that relates to all the other information you need, which in turn is stored in a database.
There are times when it makes sense to put some other information in there, but they are rare. You always need to ask yourself why, at least five times.
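The single-ID approach above can be sketched with just the standard library; the cookie carries an opaque ID plus a signature, and everything else lives server-side, keyed by that ID (the secret key below is a placeholder):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"replace-with-a-real-secret"  # placeholder, never hardcode

def new_session_cookie():
    # Random, meaningless ID; all real data stays in the database.
    session_id = secrets.token_urlsafe(32)
    sig = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{sig}"

def verify_session_cookie(cookie):
    # Returns the session ID if the signature checks out, else None.
    session_id, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(sig, expected) else None
```

The signature means a tampered cookie is rejected outright, but note it does nothing against theft of the cookie itself, which is where the SSL and XSS points below come in.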
SSL will protect your users from session hijacking, but even then, never store unencrypted sensitive information in a cookie. It is, essentially, stored in plain text on the user's hard drive.
Finally, and most importantly, protect your user against XSS and CSRF attacks.
XSS protection is generally as simple as being careful where you include JavaScript from, because JavaScript on another server could be changed without your knowledge, and that JavaScript has access to cookie data. So if you're using Evil Corp's content-delivery network to serve your jQuery script, they can suddenly add code to send them your users' cookies. You wouldn't know; your users wouldn't know.
Either download scripts and serve them from your own server or use very well-trusted CDNs such as Google or Yahoo.
CSRF protection is usually done by putting a random value in a hidden field in a form. The same value is kept in the session so that when the form is submitted, you can verify it came from the same session that was served the form.
Most web frameworks now have very simple techniques for including that token.
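Under the hood, the check amounts to something like this sketch (the dict stands in for whatever server-side session store your framework provides):

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Store a random token in the server-side session and embed
    # the same value in a hidden form field.
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def hidden_field(token):
    return f'<input type="hidden" name="csrf_token" value="{token}">'

def validate_csrf(session, submitted):
    # Constant-time comparison against the value stored in the session.
    stored = session.get("csrf_token", "")
    return bool(stored) and hmac.compare_digest(stored, submitted)
```

A forged cross-site request can't read the token out of your page, so its submission fails validation.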