The basic rule of thumb is to separate files onto different volumes to avoid contention; however, the amount of performance gain you get varies wildly by I/O subsystem and workload. For instance, multiple files on a single physical spindle is going to suck performance-wise, but the same arrangement with the volume on a SAN LUN backed by several hundred drives in RAID 10 arrays may be just fine. Disk queue length counters are your friend as the simplest way to tell whether you've got an I/O bottleneck.
You're looking at the I/O patterns on the databases - read-only, read-mostly, read-write, write-mostly, write-only - and basing your layout on that. You also need to choose the right RAID level and make sure your disk partition offsets, RAID stripe size, and NTFS allocation unit size are correctly set. Some people like to separate nonclustered indexes into a separate filegroup, but performance gains here vary just as I've explained above.
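The alignment rule above comes down to simple divisibility: the partition offset should be a multiple of the stripe size, and the stripe size a multiple of the allocation unit. A rough sketch of the check (the function name and values are illustrative; a 1024KB offset and 64KB units are commonly recommended, while 32256 bytes was the old Windows default offset):

```python
# Check that a partition offset aligns with the RAID stripe size, and that
# the stripe size is a whole multiple of the NTFS allocation unit.
def is_aligned(offset_bytes: int, stripe_bytes: int, alloc_unit_bytes: int) -> bool:
    return (offset_bytes % stripe_bytes == 0
            and stripe_bytes % alloc_unit_bytes == 0)

# Commonly recommended layout: 1024KB offset, 64KB stripe, 64KB allocation unit.
print(is_aligned(1_048_576, 65_536, 65_536))   # True

# Legacy Windows default offset of 63 sectors (32256 bytes) is misaligned.
print(is_aligned(32_256, 65_536, 65_536))      # False
```

A misaligned offset means some I/Os straddle two stripe units, roughly doubling the physical I/O for those operations, which is why this check matters.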
As well as performance, you should consider manageability and recoverability. Having a single, monolithic data file for a 100GB database means that your unit of restore is that file. Having it split into four 25GB filegroups means you can use partial database availability and piecemeal restore to restore only a single filegroup in the event it gets damaged. By partitioning tables and indexes across multiple filegroups you can also limit which parts of the database are affected by maintenance operations (e.g. index fragmentation removal).
Tempdb is a whole special case, and I'll point you at a blog post of mine that explains all about why and how to split up tempdb - there are lots of misconceptions out there.
Without giving you a 'sweeping generalization' recommendation here, I'll point you at a bunch of whitepapers and blog posts for you to read:
Hope this helps you!
I would imagine that if you checked, most of those connections will be on port 80 and in a TIME_WAIT state.
TIME_WAIT is the state a TCP connection sits in after it has been closed: the socket is held for a short period so that stray packets from the old connection can't interfere with a new one on the same address/port pair, and in some implementations a socket in that state can be re-used rather than setting up a brand-new TCP connection between the servers, which helps reduce load.
So basically the TCP stack is keeping recently closed sockets around for a while - it isn't hundreds of live connections to the same site.
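One way to confirm this is to tally connection states from `netstat -an` output. A small sketch, assuming Windows-style netstat lines with the state in the last column (the sample output below is made up; real output varies by OS):

```python
from collections import Counter

def count_states(netstat_output: str) -> Counter:
    """Tally TCP connection states from `netstat -an`-style output."""
    states = Counter()
    for line in netstat_output.splitlines():
        parts = line.split()
        # TCP lines have at least protocol, local addr, remote addr, state.
        if len(parts) >= 4 and parts[0].upper().startswith("TCP"):
            states[parts[-1]] += 1
    return states

sample = """\
TCP  10.0.0.5:50312  93.184.216.34:80  TIME_WAIT
TCP  10.0.0.5:50313  93.184.216.34:80  TIME_WAIT
TCP  10.0.0.5:50314  10.0.0.9:1433     ESTABLISHED
"""
print(count_states(sample))  # Counter({'TIME_WAIT': 2, 'ESTABLISHED': 1})
```

If the TIME_WAIT count dwarfs ESTABLISHED, you're mostly seeing the normal afterglow of closed connections rather than a leak.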
A .NET app should clean itself up when the garbage collector runs on the app server, which should happen automatically every few minutes.
Can you query the SQL Server and see whether these connections are still open on its side?
If you run netstat on the app server can you see all the socket connections open? (Each spid being used on the SQL Server will have a socket connection on the app server.)
If you do see all the ports in use on the app server, then the app server is definitely not closing the connections, as SQL doesn't close connections unless requested to. The .NET code may be expecting this to happen automatically, and it isn't. You probably don't have the issue in your test environment because the usage is much lower, and you probably do releases to the test environment more often, which causes all the ports to be closed when IIS is restarted.
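To illustrate the fix on the app side: connections should be released deterministically rather than left for the garbage collector (the moral equivalent of C#'s `using` block). A sketch in Python, with the standard-library sqlite3 module standing in for a real SQL Server driver:

```python
import sqlite3
from contextlib import closing

# Relying on the garbage collector leaves the connection (and its socket)
# open for an unpredictable time. Wrapping it in a closing() context
# manager guarantees it is released the moment the block exits.
with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    rows = conn.execute("SELECT x FROM t").fetchall()

print(rows)  # [(1,)]
# After the with-block the connection is closed; any further use of
# `conn` raises an error instead of silently holding a socket open.
```

The same pattern in the .NET code (disposing connections as soon as each request is done) would hand the socket back to the connection pool instead of waiting on the GC.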
This definitely sounds like an app code issue to me.