First, you need a URL Set or Domain Name Set describing the target.
Then you need an Access Rule:
- Allow
- From: Internal (or whatever network the users are on)
- Protocols: HTTP & HTTPS (or just HTTPS)
- To: Target URL Set
- Users: Whomever you want. If the rule works when set to All Users but fails with specific users, it's an authentication problem.
Caveats:
- Authentication must be working in order to filter by user.
- The target URL Set entry for HTTPS needs to be https://domain.com
- Possibly also http://domain.com (yes, even for SSL; I don't have a server to test against, but I remember some funkiness around that)
- HTTPS targets in URL Sets cannot include a path, because the browser only shares the hostname (via CONNECT) with the proxy. Domain Name Sets would work for this too.
- When testing, remember the Two Minute Rule: ISA can take up to two minutes for a change to take effect, so wait at least that long before deciding something hasn't worked.
You'll need to post more details about how you're going about it, and the problems you're having - from your description, it could be anything.
You can do this only in very specific and specialised circumstances.
The problem is that in an HTTPS connection, the URLs visited, and exactly what went on in them, are all protected by SSL. In order to look inside, you've got to break through that protection.
The only way to do that (short of becoming wildly famous by breaking a crypto algorithm or two) is to perform a man-in-the-middle attack on your users. You have to set up the proxy server with an SSL certificate for every domain in the world -- either a global wildcard certificate (one that has AltNames of *, *.*, *.*.*, and so on), or something that can generate SSL certificates on the fly (with your own local CA). You won't get a certificate like this from an established, "globally trusted" CA (well, maybe Comodo...), so you'll have to do everything self-signed, and then configure all devices that will use this proxy to trust that local CA. This trust issue is why you can only do this in "very specific and specialised circumstances".
Once you've got that set up, you can decrypt all of the HTTPS traffic as it comes through, log it, and then re-encrypt it for the onward connection to the end site (which still uses its own SSL certificate).
Before you jump up, click your heels and shout "yippee!", there are some things to note. The dangers and caveats in this process are considerable.
You need a proxy server (or farm) that's got enough grunt to terminate SSL for every connection that anyone who uses the proxy makes. For a small office, that's not too hard, but if you're working at any sizeable company, you have a significant scaling problem ahead of you.
The security risks aren't trivial, either. This proxy farm of yours is now an incredibly high value target for anyone who wants to get their hands on some juicy credentials. Take note of the saying "don't put all your eggs in one basket" -- or more usefully in this case, "put all your eggs in one basket, just make sure it's a really strong basket". Don't think that people won't find out about it; it's not hard to identify these sorts of things if you know what you're looking for, and since you won't be able to deploy this in an organisation without people finding out (one way or another), anyone internal who might harbour naughty thoughts is likely to be very, very tempted. It might not be worth breaking into a dozen desktops to harvest credentials, but it's far more valuable -- and less work and risk -- to pop a central HTTPS proxy and watch everything go by.
As far as exactly how to do this in Squid or any other available proxy solution, rather than typing everything out by hand, I'll refer you to the Squid documentation for SSL bump and Dynamic SSL certificate generation which, together, will give you access to the encrypted traffic without generating undue browser security warnings. Once you've got that, logging the connections is trivial.
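For reference, the SSL bump and dynamic certificate pieces come together in squid.conf roughly like this. This is a minimal sketch for a recent Squid (4+); the helper name, paths, and cache sizes vary by version and distribution, so treat every value below as an assumption to check against the Squid documentation.

```
# squid.conf fragment (illustrative): intercept and re-sign HTTPS.
# proxy-ca.pem is the local CA cert+key the clients have been told to trust.
http_port 3128 ssl-bump \
  cert=/etc/squid/proxy-ca.pem \
  generate-host-certificates=on \
  dynamic_cert_mem_cache_size=4MB

# Helper that mints per-host certificates on the fly (named ssl_crtd in
# older Squid 3.x releases).
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

# Peek at the TLS ClientHello to learn the hostname, then bump (decrypt).
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

With that in place, the decrypted requests show up in access.log like any plain HTTP traffic.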
Yes, HTTPS will put a damper on network caching.
Specifically, caching HTTPS requires a man-in-the-middle-style attack: replacing the site's SSL certificate with one belonging to the cache server, generated on the fly and signed by a local authority.
In a corporate environment you can make all PCs trust your cache server's certificates, but other machines will give certificate errors -- which they should, since a malicious cache could easily modify the pages.
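Making PCs trust the cache amounts to installing its CA certificate in each machine's trust store. A rough sketch follows; the certificate here is generated only so the copy commands have something to refer to, and the file names, CA subject, and paths are all illustrative assumptions (the privileged distribution commands are shown as comments since they vary by platform).

```shell
# Generate the cache server's CA certificate (illustrative subject/names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout cache-ca.key \
  -out cache-ca.crt -days 365 -subj "/CN=Corporate Cache CA"

# Debian/Ubuntu clients (run as root):
#   cp cache-ca.crt /usr/local/share/ca-certificates/cache-ca.crt
#   update-ca-certificates
#
# Windows clients (elevated prompt, or pushed via Group Policy):
#   certutil -addstore -f Root cache-ca.crt
```

In practice you'd push this out centrally (Group Policy, MDM, configuration management) rather than touching each machine by hand.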
I suspect that sites that use large amounts of bandwidth, like video streaming, will keep sending content over regular HTTP specifically so it can be cached. But for many sites, better security outweighs the increase in bandwidth.