Disk Subsystem:
Here's an article from Microsoft re: partition alignment in SQL Server 2008: http://msdn.microsoft.com/en-us/library/dd758814.aspx
The theory explained in the article is why I'm giving you the link, not because I think you'll be running SQL Server. A file server's workload is less apt to be as touchy about partition alignment as SQL Server's, but every little bit helps.
NTFS:
You can disable last access time stamping in NTFS with:
fsutil behavior set disablelastaccess 1
You can disable short filename creation (if you have no apps that need it) with:
fsutil behavior set disable8dot3 1
Think about the best NTFS cluster size for the kinds of files you're going to be putting on the box. In general, you want to have as large a cluster size as you can get away with, balancing that against wasted space for sub-cluster-sized files. You also want to try and match your cluster size to your RAID stripe size (and, as was said above, have your stripes aligned to your clusters).
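To make the tradeoff concrete, here's a quick, hedged sketch that tallies how much "slack" (space wasted padding each file up to a whole cluster) different cluster sizes would cost for a given mix of file sizes. The file sizes below are invented for illustration; substitute a sample from your own data.

```python
# Hypothetical illustration of cluster-size slack: for a given file-size
# distribution, how much space is wasted rounding each file up to a
# cluster boundary? (File sizes here are made up for the example.)

def slack_bytes(file_sizes, cluster_size):
    """Total bytes wasted rounding each file up to a whole cluster."""
    wasted = 0
    for size in file_sizes:
        clusters = -(-size // cluster_size)  # ceiling division
        wasted += clusters * cluster_size - size
    return wasted

files = [500, 3_000, 10_000, 70_000, 1_500_000]  # bytes, invented sample

for cluster in (4096, 16384, 65536):
    print(f"{cluster // 1024:>3} KB clusters: "
          f"{slack_bytes(files, cluster):>9} bytes of slack")
```

Lots of small files makes big clusters expensive; mostly large files makes the slack a rounding error, which is why there's no one right answer.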
There's a theory that most reads are sequential, so the stripe size (typically the minimum read the RAID controller will do) should be a multiple of the cluster size. Whether that holds depends on the specific workload of the server, and you'd need to measure it to know for sure. I'd keep them the same.
If you're going to have a large number of small files, you may want to start with a larger reserve for the NTFS master file table (MFT) to prevent future MFT fragmentation. In addition to covering the fsutil command above, this document describes the "MFT zone" setting: http://technet.microsoft.com/en-us/library/cc785435(WS.10).aspx Basically, you want to reserve as much disk space for the MFT as you think you'll need, based on the predicted number of files on the volume, to try to prevent MFT fragmentation.
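For a rough sense of how much MFT space a file count translates to: NTFS uses one 1 KB file record per file by default (heavily fragmented or attribute-rich files can consume more). This back-of-the-envelope sketch, with an invented safety factor, is only a ballpark, not a sizing formula:

```python
# Rough, back-of-the-envelope MFT sizing. NTFS uses one 1 KB file record
# per file by default; the 1.25 safety factor is an invented cushion for
# files that need extra records.

MFT_RECORD_BYTES = 1024  # default NTFS file record size

def estimated_mft_mb(expected_files, safety_factor=1.25):
    """Estimate MFT size in MB, padded by a safety factor."""
    return expected_files * MFT_RECORD_BYTES * safety_factor / (1024 * 1024)

for count in (100_000, 1_000_000, 10_000_000):
    print(f"{count:>10,} files -> ~{estimated_mft_mb(count):,.0f} MB of MFT")
```

So a volume expected to hold ten million small files wants on the order of 12 GB of MFT zone reserved up front.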
A general guide from Microsoft on NTFS performance optimization is available here: http://technet.microsoft.com/en-us/library/cc767961.aspx It's an old document, but it gives some decent background nonetheless. Don't necessarily try any of the "tech stuff" it says to do, but get concepts out of it.
Layout:
You'll have religious arguments with people re: separating the OS and data. For this particular application, I'd probably pile everything into one partition. Someone will come along and tell you that I'm wrong. You can decide yourself. I see no logical reason to "make work" down the road when the OS partition fills up. Since they're not separate RAID volumes, there's no performance benefit to separating the OS and data into partitions. (It would be a different story if they were different spindles...)
Shadow Copies:
Shadow copy snapshots can be stored in the same volume, or on another volume. I don't have a lot of background on the performance concerns associated with shadow copies, so I'm going to stop there before I say something dumb.
Legacy Answer; Updates from the future below
If you already have some Linux/Unix machines in your environment and are comfortable with that format, I'd recommend using syslog. There are a number of products that will forward your logs to a syslog server for you.
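As a quick illustration of how little it takes to ship to syslog from script-land, Python's standard library speaks the protocol out of the box. The host/port below are assumptions for the sketch; a real deployment would point at your actual collector (usually UDP 514):

```python
# Minimal syslog-forwarding sketch using only the Python standard
# library. "localhost:5514" is an assumption for demo purposes -- point
# it at your real syslog collector.
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("localhost", 5514))
logger = logging.getLogger("fileserver")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("backup job finished")  # fire-and-forget UDP datagram
```

Since it's UDP by default, the sender never blocks or errors if the collector is down, which is exactly the fire-and-forget behavior most agents use.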
If you're just looking for log collection for legal/compliance reasons, anything will do, really.
Splunk is a fairly popular log tool (it can ingest syslog data, among many other sources) that can do a lot of reporting for you. If you want analytics built in, it's a good place to start evaluating. It has a limited free version, and you can pay to break out of those limitations.
You can also use Nagios to assist you with your Log Management, especially with some of the plugins and sidecar applications, but I'll warn that it's not trivial to set up.
UPDATE:
If you're not afraid of scripting, there are a lot of examples of Logging Scripts at the Microsoft Script Center Repository. (Fulfilling the down-n-dirty requirement...)
UPDATE 2015:
If you're not using Splunk, you should use ELK (Elasticsearch, Logstash, & Kibana) as your logging mechanism. Like syslog it's F/OSS, but it gives you so much more feature-wise. As far as shipping logs goes, you should use NXLog. It handles Windows Event Logs and ships them as objects (viewable as JSON, which is how they're stored in Elasticsearch). While each log is slightly larger over the wire, you don't need to write long, painful, brittle regexes to parse out the fields (like you do to make use of syslog, or syslog-formatted logs sent to ELK).
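To give a flavor of that setup, here's a hedged sketch of an NXLog config that reads the Windows Event Log and ships each record as JSON over TCP to a Logstash listener. The host and port are assumptions; check the NXLog documentation for the exact module names and options your version ships with.

```
# Hedged NXLog sketch: Windows Event Log -> JSON -> Logstash over TCP.
# Host/port are invented for illustration.
<Extension json>
    Module  xm_json
</Extension>

<Input eventlog>
    Module  im_msvistalog
</Input>

<Output logstash>
    Module  om_tcp
    Host    logstash.example.com
    Port    5140
    Exec    to_json();
</Output>

<Route eventlog_to_logstash>
    Path    eventlog => logstash
</Route>
```

The `to_json()` call is what preserves the event as a structured object end to end, which is the whole point versus flattening everything into syslog lines.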
Best Answer
Where are you logging to? An SQL Database? CSV File?
If it's going to an SQL Database, I used to just create a single job and let it log all day long, forever. Then you query the SQL server to get the data for the range that you're after. You can do this in Excel, or whatever other report builder you have.
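To illustrate the "query for the range you're after" idea, here's a hedged sketch using SQLite as a stand-in for the real SQL Server table (the table and column names are invented; a Perfmon data collector writing to SQL Server uses its own schema):

```python
# Stand-in sketch: the real setup logs counters to SQL Server; SQLite is
# used here so the example runs anywhere. Table/column names invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE perf_samples (
    sampled_at TEXT, counter TEXT, value REAL)""")
db.executemany(
    "INSERT INTO perf_samples VALUES (?, ?, ?)",
    [
        ("2015-03-01 09:00", r"\Processor(_Total)\% Processor Time", 12.5),
        ("2015-03-01 09:05", r"\Processor(_Total)\% Processor Time", 74.0),
        ("2015-03-02 09:00", r"\Processor(_Total)\% Processor Time", 8.1),
    ],
)

# Pull just the day you're interested in, e.g. to feed an Excel report.
rows = db.execute(
    """SELECT sampled_at, value FROM perf_samples
       WHERE counter = ? AND sampled_at BETWEEN ? AND ?
       ORDER BY sampled_at""",
    (r"\Processor(_Total)\% Processor Time",
     "2015-03-01 00:00", "2015-03-01 23:59"),
).fetchall()

for when, value in rows:
    print(when, value)
```

The job logs continuously and you never prune at collection time; filtering happens at query time, which is what keeps the collection side simple.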
I say "used to" because a few years ago I did away with the decentralised performance logging and installed a central Zabbix installation that keeps and tracks all this information forever, providing trends and granular reporting.
In terms of load, I never saw any noticeable performance penalty from running the performance logging. Given the number of times each server gets polled by Zabbix each minute to collect all the datapoints, it's really a trivial operation.