A SQL Server cluster is not a scalability solution; it is strictly a high-availability solution. In a SQL Server cluster only one node is active and handles the load; all the other nodes are passive stand-bys, waiting in case the active node has a hardware failure. As such, from a SQL Server cluster you obtain, at best, the performance of a single SQL Server instance. You'll sometimes find references to a so-called 'Active-Active' deployment, but that is not a single SQL Server cluster. The so-called 'Active-Active' setup is two separate clusters that use each other's nodes in reversed roles (one cluster is active on node A and passive on node B, the other is active on B and passive on A).
If your application is already partitioned and each customer has a separate database, then you'd be much better off continuing to scale out. The problems of administration and maintenance can be addressed: once you automate the administration of 4 servers (i.e. now), you can automate the administration of 1000 servers. The same goes for the problems of versioning, deployment, and account provisioning.
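To illustrate the point about automation scaling with you, here is a minimal sketch that generates per-server, per-database maintenance commands from inventory data; the same loop covers 4 servers or 1000. The server names, database names, and the backup destination share are all hypothetical placeholders, not anything from your environment.

```python
# Sketch: drive routine maintenance from an inventory, so adding a
# server or database is a data change, not a script change.
# All names below (SQLNODE01, backupshare, ...) are made-up examples.

def maintenance_commands(servers, databases_by_server):
    """Yield (server, T-SQL command) pairs for a nightly backup job."""
    for server in servers:
        for db in databases_by_server.get(server, []):
            yield server, f"BACKUP DATABASE [{db}] TO DISK = N'\\\\backupshare\\{server}\\{db}.bak'"

servers = ["SQLNODE01", "SQLNODE02"]
dbs = {"SQLNODE01": ["CustomerA", "CustomerB"], "SQLNODE02": ["CustomerC"]}
for server, cmd in maintenance_commands(servers, dbs):
    print(server, cmd)
```

In practice you'd hand each generated command to a scheduler or an execution library of your choice; the point is that the administrative logic is written once, regardless of fleet size.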
While right now you may be attracted by the perceived ease of administration, troubleshooting, and tuning of a scale-up solution, you are going to run into new problems that are more difficult to solve, culminating in a hard limit on how high you can scale up.
Updated after you edited the OP with an explanation of the concerns related to scaling out.
Indeed, scale-up and clustering offer a high-availability solution. There are basically two HA solutions for SQL Server: clustering and database mirroring. Since you plan for thousands of individual databases, that pretty much rules out mirroring. What you have to keep in mind, though, is that SQL Server clusters are supported only on a restricted list of hardware combinations:
> Microsoft server clusters are only supported on cluster solutions that are listed in the Windows Server Catalog under Cluster Solutions. To view the Windows Server Catalog, visit the following Microsoft Web site: http://www.windowsservercatalog.com/
That doesn't mean you cannot run a cluster on any hardware pair/group that has a shared SCSI bus. It means you won't be able to request support from Microsoft CSS if you have an issue on hardware that is not approved.
SQL Server clustering offers protection against hardware failures, except media failure (since the media is shared between nodes). What it doesn't offer protection from is human administration errors, which unfortunately are the primary cause of downtime. With one thousand customer databases on a single cluster, you'll have a very large omelette cooking in one basket...
As for your question about how many databases a cluster can run: ultimately a cluster has only one active instance at any time, so the limits of a single instance apply to a cluster as well. Having one thousand databases attached is not a big deal; the real question is the load, i.e. how many users will be running queries simultaneously. To get a ballpark figure, consider that the TPC-C benchmarks published for SQL Server report some 1.2 million transactions per minute on a 64-way Superdome, which works out to roughly 20,000 transactions per second. You anticipate about 5,000 connected clients, which gives you a budget of about 4 queries per client per second, and even to reach that you need pretty beefy hardware and an exceptionally well-tuned application.
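The back-of-the-envelope arithmetic behind those figures is just two divisions, starting from the benchmark number quoted above:

```python
# Converting the published per-minute benchmark figure into a
# per-client budget. Only simple division; the inputs are the
# numbers discussed in the text.
tpm = 1_200_000             # benchmark result: transactions per minute
tps = tpm / 60              # throughput ceiling per second
clients = 5_000             # anticipated connected clients
per_client = tps / clients  # queries available per client per second
print(tps, per_client)      # 20000.0 4.0
```

Remember this is an upper bound from a purpose-tuned benchmark; a real multi-tenant workload will land well below it.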
Proper network security dictates that DMZ servers shouldn't have any access into the 'Trusted' network. The Trusted network can get to the DMZ, but not the other way around. For DB-backed web servers like yours this can be a problem, which is why database servers end up in DMZs. Just because it is in a DMZ doesn't mean it HAS to have public access; your external firewall can still prevent all access to it. However, the DB server itself doesn't have access to the inside network.
For MSSQL servers, you probably need a second DMZ due to the need to talk to AD DCs as part of normal functioning (unless you're using SQL accounts rather than domain-integrated authentication, at which point this is moot). That second DMZ would be home to Windows servers that need public access of some kind, even if it is proxied through a web server first. Network security people get squinty when they consider domain-joined machines that are exposed to public access also having access to DCs, which can be a hard sell. However, Microsoft doesn't leave much choice in this matter.
I have set up platforms for large-scale enterprises, and normal practice is to ensure your databases are on a different VLAN from your web servers, with a firewall sitting between them that routes traffic to the database server port only, as well as a firewall in front of your web servers. Typically your front firewall will forward port 80 (HTTP) and port 443 (HTTPS) to your web servers. The firewall sitting between the web server and the database server will forward traffic from the web servers to the port used by your database (typically port 1433 if using Microsoft SQL Server).
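The rule set just described can be captured as a small default-deny policy table; the sketch below models it abstractly, with zone names that are illustrative rather than any particular firewall's syntax.

```python
# Default-deny model of the two-firewall layout described above:
# the internet reaches the web tier on 80/443, and only the web
# tier reaches the database tier on 1433. Zone names are made up.

RULES = [
    # (source zone, destination zone, destination port)
    ("internet", "web",      80),    # HTTP to web servers
    ("internet", "web",      443),   # HTTPS to web servers
    ("web",      "database", 1433),  # SQL Server, web tier only
]

def is_allowed(src, dst, port):
    """Traffic passes only if an explicit allow rule matches."""
    return (src, dst, port) in RULES

print(is_allowed("web", "database", 1433))       # True
print(is_allowed("internet", "database", 1433))  # False
```

Note that there is deliberately no rule from "database" to anywhere: the DB tier accepts connections but initiates none toward the other zones.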
For increased security:
If your database is the MI6 or CIA database then your network admins are probably right, but to me it sounds like they are overreacting.
If the database does contain data that absolutely cannot be exposed to a public network, but the data your web site needs is not that sensitive, could you look at replicating the tables your web site requires to a database that sits in your hosting environment?
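As a rough sketch of that idea, the snippet below copies just one table from an "internal" database into a separate "hosted" copy. SQLite (via Python's standard library) stands in for SQL Server replication here, and the table name is a made-up example; in production you'd use SQL Server's own replication features rather than hand-rolled copying.

```python
# Sketch: expose only the tables the web site needs by copying them
# into a separate database. SQLite is used purely for illustration;
# "products" is a hypothetical table.
import sqlite3

def copy_table(src, dst, table):
    """Recreate `table` in dst and copy all of its rows from src."""
    schema = src.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name=?", (table,)
    ).fetchone()[0]
    dst.execute(schema)
    rows = src.execute(f"SELECT * FROM {table}").fetchall()
    if rows:
        placeholders = ",".join("?" * len(rows[0]))
        dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    dst.commit()

internal = sqlite3.connect(":memory:")   # stands in for the sensitive DB
hosted = sqlite3.connect(":memory:")     # stands in for the hosted copy
internal.execute("CREATE TABLE products (id INTEGER, name TEXT)")
internal.executemany("INSERT INTO products VALUES (?, ?)", [(1, "a"), (2, "b")])
copy_table(internal, hosted, "products")
print(hosted.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # 2
```

The sensitive tables never leave the internal network; the hosted database holds only the subset the public-facing site actually queries.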
I'd ask them the question: