I can't speak for Windows instances, but I will presume that their base characteristics are fairly similar to Linux instances.
Your estimate for bandwidth usage is 100 simultaneous video downloads (I am not sure if you mean downloading the file or streaming the video - I will assume the latter). If we take a stream rate of 512kbps, you need about 51.2Mbit/s, or roughly 6.4MB/s.
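For reference, here is that arithmetic spelled out (the 100-stream / 512kbps figures are the assumptions from above; adjust them to your own numbers):

```python
# Sanity check of the bandwidth estimate: assumed 100 concurrent
# viewers at 512 kbps each (change these to match your workload).
streams = 100
kbps_per_stream = 512

total_kbps = streams * kbps_per_stream   # 51,200 kbps
total_mbps = total_kbps / 1000           # 51.2 Mbps
total_mb_per_s = total_mbps / 8          # 6.4 MB/s (8 bits per byte)

print(f"{total_mbps} Mbps = {total_mb_per_s} MB/s")
```

Re-running this with your real per-stream bitrate will tell you which I/O tier (below) you actually need.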
EC2 instances differ in their I/O performance (which includes bandwidth). There are 3 levels of I/O performance: low, moderate, and high. Keep in mind, though, that disk I/O (i.e. from EBS volumes) is also bandwidth dependent. You can only really reason about bandwidth within the EC2 network (as it will be completely variable over the Internet).
Some typical numbers to quantify 'low', 'moderate', and 'high' (different sources quote different figures for the theoretical values, so they may not be completely accurate):
High:
- Theoretical: 1Gbps = 125MB/s
- Realistic (source): 750Mbps = ~95MB/s

Moderate:
- Theoretical: 250Mbps = ~31MB/s
- Realistic (source, p57): 80Mbps = 10MB/s

Low:
- Theoretical: 100Mbps = 12.5MB/s
- Realistic (from my own tests): 10-15Mbps = 1-2MB/s
(There is actually a 'very high' level as well - 10Gbps theoretical - but it applies only to cluster compute instances.)
A further point worth mentioning is the degree of variation. On smaller instances there is more variability in performance, as the physical components are shared between more virtual machines; regardless, you can expect around +/-20% variation in your performance (sources: 1, 2, 3). In your case (per the assumptions/calculations at the top), you may need a peak bandwidth of around 13MB/s (double the ~6.4MB/s, since disk I/O is also network limited). If you are transferring lower-bandwidth content, you should be able to use an instance with 'moderate' I/O performance (see the instance types page); if your calculations result in a higher bandwidth requirement, you will need an instance with 'high' I/O performance. Simply streaming the data should not be CPU or memory bound, but sustaining 100 simultaneous connections will probably require at least a medium-sized instance - and if bandwidth is a concern, based on the above, a large instance would be a safer bet.
I would recommend benchmarking the servers you launch to see if they meet your (calculated) needs. Launch two instances (of the same type) and run iperf on each, using the instances' private IP addresses - you will need to open port 5001 in your security group if you run iperf with its default settings. Additionally, most tests outside of the EC2 network show results of between 80-130Mbps (on large instances) - although such numbers are not necessarily meaningful.
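A minimal iperf run looks like the following - note that `10.0.1.5` is a placeholder for the server instance's actual private IP, and this assumes classic iperf (v2) with its default port of 5001:

```shell
# On instance A (the server). Requires port 5001/TCP to be
# open in the security group for traffic from instance B.
iperf -s

# On instance B (the client): 30-second test against A's
# private IP, reporting every 5 seconds.
iperf -c 10.0.1.5 -t 30 -i 5
```

Run it a few times at different times of day, since (as noted above) performance on shared hardware can vary by +/-20%.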
A CDN would be better suited to your needs, if your setup permits it. S3 appears to have a bandwidth limit of around 50MB/s (at least from a single instance) as per this article, which is higher than what you should require - but S3 does not support streaming. CloudFront would be better suited to your task (as it is designed as a CDN): it supports 1000Mbps = 125MB/s by default (source), with higher bandwidth available on request, and it can stream content as well.
It would depend, to me, on whether there are any servers or other shared resources on-site in each remote office.
If there aren't servers or shared resources in each remote office that could be used independently of a VPN failure then there isn't much point in putting a DHCP server in the remote office. If the VPN has failed then not getting DHCP leases is probably the least of your problems.
If there is some capability for each office to function independently because of on-site servers or other shared resources then I'd strongly consider putting a DHCP server in each remote office. That gives the remote office some ability to function in the face of VPN failure.
Having a single DHCP server implies a single point of failure, but also a single point of administration. You'll have to weigh the pros and cons of that yourself. For ISC DHCPd or Windows DHCP servers I don't particularly care whether I'm administering one or several. Your feelings may vary.
As long as your VPN hardware (or some other device on each remote office network) supports DHCP relaying you'll have no trouble using a single central DHCP server from a technical feasibility perspective. The address assigned to the network interface of the device receiving / relaying the DHCP requests gets placed into the relayed DHCP request and will allow the DHCP server to serve the request out of the proper scope / subnet.
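For ISC DHCPd, the central-server side of that setup is just one subnet declaration per site; the giaddr field set by the relay agent selects the scope. A minimal sketch (the 10.0.0.0/24 and 10.0.1.0/24 networks and addresses here are made-up examples - substitute your own):

```
# dhcpd.conf on the central server. The relay agent's interface
# address (giaddr in the relayed request) tells dhcpd which
# subnet declaration to answer from.

# Head-office subnet (where the DHCP server itself lives)
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    option routers 10.0.0.1;
}

# Remote-office subnet, reached only via DHCP relay over the VPN
subnet 10.0.1.0 netmask 255.255.255.0 {
    range 10.0.1.100 10.0.1.200;
    option routers 10.0.1.1;
}
```

The remote-office device doing the relaying just needs its relay/ip-helper feature pointed at the central server's address.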
In theory, yes, you could do this. As has been discussed on ServerFault before, that doesn't mean you should.
The first issue you are bound to run into is latency. The last time I checked there wasn't an Amazon data center anywhere near Brazil, so you are looking at delayed logon requests and responses as you pray the VPN tunnel stays up, and then dealing with the overall latency of the connection. Typically any sort of remote AD setup results in long logon delays and overall user unhappiness, as there is a noticeable period of time between hitting Enter and the desktop popping up.
If the problem revolves around a support company who provides the server, either find a new vendor or run the server yourself. As a full-blown solution, you could spin up the master AD box with Amazon for resiliency reasons and then have the university run a read-only domain controller (RODC) on their network to serve all of their requests. This secures the data on the server and largely eliminates the VPN load back to Amazon.
Overall though I would recommend running the DC boxes locally whenever possible for the best performance. Users will thank you for it.