Yes, it is possible; in fact it is fairly standard functionality. Unfortunately, you don't provide specifics about your server environment.
You can distinguish between sites on many parts of the request, but the classic approach is of course to distinguish by the Fully Qualified Domain Name of each site, e.g. sitename1.somedomain.com, sitename2.somedomain.com, sitename3.com, et cetera.
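As a sketch of what that looks like with Apache name-based virtual hosts (the DocumentRoot paths here are just placeholders):

    # Name-based virtual hosting (Apache 2.x): one IP, many hostnames.
    # The DocumentRoot paths below are placeholders.
    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName sitename1.somedomain.com
        DocumentRoot /var/www/sitename1
    </VirtualHost>

    <VirtualHost *:80>
        ServerName sitename2.somedomain.com
        DocumentRoot /var/www/sitename2
    </VirtualHost>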
With mod_perl and Squid it sounds like you're on Unix. Apache's mod_rewrite is perhaps the most common / oldest way of achieving this on Unix. Note that mastering mod_rewrite well enough to set up a secure proxy takes some work. (If you set up an insecure proxy, it will also proxy requests on the 'outside' leg, and spammers can misuse this to do form submissions etc. from your IP address.)
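For illustration, a minimal mod_rewrite proxy rule, assuming a backend app listening on 127.0.0.1:8080 (mod_proxy must be loaded for the [P] flag to work). Keying the rule to the Host header is part of what keeps the proxy from being open:

    # Only proxy requests that arrived for this specific hostname;
    # an unconditional proxy rule here is what creates an open proxy.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^sitename1\.somedomain\.com$ [NC]
    RewriteRule ^/(.*)$ http://127.0.0.1:8080/$1 [P,L]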
Another common approach on Unix is to have a 'light-weight' HTTP server/proxy on the public IP (i.e. the one receiving requests from users), and then have the 'light-weight' server fan requests out to 'heavier' Apache/PHP/Perl/Python instances. The benefits are more efficient handling of many open connections and reduced RAM use on the server. nginx (EngineX) is one popular server for this, and it also has a rewriting module.
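A minimal sketch of such an nginx front end, assuming backend instances on local ports 8080 and 8081:

    # Fan requests out by hostname to heavier backends on localhost.
    server {
        listen 80;
        server_name sitename1.somedomain.com;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name sitename2.somedomain.com;
        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
        }
    }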
Regarding handling HTTPS/SSL on the 'first' web server: yes, this is a good solution. You would just set up SSL as normal, and have the web server proxy requests for that hostname to the backend servers. Edit: It's common to have the frontend 'HTTPS accelerator' add an HTTP header (X-Forwarded-Proto) to the backend request, so that backend applications can know that the original request came via an encrypted connection.
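Sticking with nginx as the example, an SSL-terminating front end might look like this (the certificate paths and backend port are placeholders):

    # Terminate SSL here; the backend sees plain HTTP plus a header
    # telling it the original request was encrypted.
    server {
        listen 443 ssl;
        server_name sitename1.somedomain.com;
        ssl_certificate     /etc/ssl/sitename1.crt;
        ssl_certificate_key /etc/ssl/sitename1.key;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }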
What you're looking at here is a sub-case of HTTP load balancing; i.e. you're installing an HTTP proxy on a single machine and using it to fan requests out to other HTTP servers on the same machine. There are many good software load balancers for Unix systems.
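HAProxy is one such load balancer; a sketch of a host-based fan-out (hostnames and ports are placeholders):

    # Route by Host header to per-site backends on the same machine.
    frontend http-in
        mode http
        bind *:80
        acl is_site1 hdr(host) -i sitename1.somedomain.com
        use_backend site1 if is_site1
        default_backend site2

    backend site1
        mode http
        server app1 127.0.0.1:8080

    backend site2
        mode http
        server app2 127.0.0.1:8081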
If you're on Windows, then Microsoft's new Application Request Routing is pretty good. Version 2 is in beta test right now, and much improved over version 1 - find it over at the iis.net site. Edit: It should be said that newer versions of IIS are quite fast and scalable. Most IIS users don't bother to set up an HTTP proxy in front of a single IIS server; they just create virtual hosts with the proper application pool on the IIS server itself.
Does the file download correctly with wget, curl, or your browser's 'Save as'?
Do other large files work?
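For instance, fetching the file with both tools and comparing checksums would show whether the problem follows the client (the URL here is a placeholder):

    # Download the same file two ways, then compare checksums.
    wget -O /tmp/file.wget http://example.com/path/to/largefile
    curl -o /tmp/file.curl http://example.com/path/to/largefile
    md5sum /tmp/file.wget /tmp/file.curl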
If there's a support contract, I would consider asking the vendor to debug this with you.
By default, Squid will cache all static content. The directive

    hierarchy_stoplist cgi-bin ?

is on by default, and is designed to prevent dynamic pages from being cached.
You can create an ACL like the following:
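For instance, the stock squid.conf ships with a QUERY ACL along these lines, matching URLs that look dynamic:

    # Matches cgi-bin paths and URLs containing a query string
    acl QUERY urlpath_regex cgi-bin \?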
Then apply it like this:
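For example, with the stock cache rule that pairs with the QUERY ACL:

    # Never cache anything the QUERY ACL matches
    cache deny QUERY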
Not tested...