You don't start out choosing a particular frequency. That eventually falls out of other requirements. A bare frequency spec across a broad range of processors is pretty meaningless.
The real spec is some minimum performance or latency requirement the processor has to meet. In general, for any one microcontroller, processor performance is proportional to clock speed. A large portion of the current draw is also proportional to clock speed, so that is one reason not to make it wildly higher than needed in power-sensitive applications. For high-end general-computing processors, performance is not necessarily proportional to clock speed because there are issues like cache hit percentage, memory latency, etc. Small microcontrollers intended for self-contained embedded applications don't usually have these kinds of advanced architectures, and performance is pretty much linear with clock speed.
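For what it's worth, the reason the current tracks clock speed is that the dynamic switching power of CMOS logic is roughly P ≈ α·C·V²·f, where α is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency, so at a fixed supply voltage the switching portion of the current scales about linearly with clock speed.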
However, clock speed is a poor indicator of performance across different microcontroller architectures. Some microcontrollers, like low-end PICs for example, require 4 clock cycles per instruction cycle, some 2, and some just 1. Then there are differences in what each architecture can accomplish in an instruction cycle. Comparing clock frequency between anything other than related processors in the same family is largely meaningless.
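As a rough illustration: a classic PIC16 clocked at 20 MHz executes about 5 million instruction cycles per second (4 clocks per instruction cycle), while an AVR at 16 MHz executes roughly 16 million (most instructions take a single clock). The part with the lower clock frequency gets noticeably more done per second, and that's before considering what each instruction actually accomplishes.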
Another issue is that some micros have fancy internal clock chains including PLLs and dividers. The purpose is so that they can run at a variety of speeds from easy-to-use, easy-to-find crystals. 8-16 MHz is a nice frequency range for a crystal. You can certainly use crystals well outside that range, but 8 MHz is about the limit where really small packages become available, and having the external clock be otherwise as slow as reasonable is a good thing. That then brings up the question of what the "clock speed" really is. Is it the external clock frequency you actually feed into the chip, or what the chip derives from that internally before using it? Each of these is relevant in different ways.
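To make the internal-versus-external distinction concrete, here is a throwaway C calculation with made-up but typical numbers (the multiplier and divider values are hypothetical; every clock module is different):

    #include <stdio.h>

    int main(void)
    {
        double f_xtal   = 8.0e6;   /* external crystal: what you actually see on the board */
        double pll_mult = 9.0;     /* PLL multiplies the reference up (hypothetical value)  */
        double post_div = 1.0;     /* divider between the PLL and the CPU core              */

        double f_core = f_xtal * pll_mult / post_div;

        printf("external clock %.0f MHz, core clock %.0f MHz\n",
               f_xtal / 1e6, f_core / 1e6);
        return 0;
    }

Both numbers are legitimately "the clock speed" depending on what you are asking about: crystal selection and board-level EMI care about the 8 MHz, instruction throughput cares about the 72 MHz.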
In short, focusing on microcontroller "clock speed", whatever that really means, is like obsessing about piston displacement and turbo boost pressure when all you really want to know is horsepower and fuel economy. You have little reason to care how they got there, only what the result is.
It depends on the kind of JTAG interface that you have. In my experience, what I've noticed (this happens on the MSP430 and Atmel ARM7TDMI) is that when you have watches on variables or breakpoints, or really any kind of control via the debugger, the core is halted periodically to run the boundary scan and all that. This will mess quite extensively with timing. If you have a free timer available, I'd suggest using its interrupt to toggle a pin every few microseconds and see whether this is happening and to what degree; a sketch of that idea is below. Minimizing the number of breakpoints and watches may help, but I can't be sure of that. In fact, I have a feeling it'll be target- and IDE-dependent as well.
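If it helps, here is a minimal sketch of that timer/pin test for an MSP430G2xx-class part, using CCS/IAR-style ISR syntax. Register and vector names vary between MSP430 families, so treat the specific names and the period value as assumptions to check against your device header:

    #include <msp430.h>

    int main(void)
    {
        WDTCTL = WDTPW | WDTHOLD;           // stop the watchdog

        P1DIR |= BIT0;                      // P1.0 as output -- the pin you watch on a scope

        TA0CCR0  = 1000 - 1;                // timer period in SMCLK ticks (pick a few microseconds)
        TA0CCTL0 = CCIE;                    // interrupt on CCR0 compare
        TA0CTL   = TASSEL_2 | MC_1 | TACLR; // SMCLK source, up mode, clear the counter

        __bis_SR_register(GIE);             // enable interrupts

        for (;;)
        {
            // nothing to do here; the ISR below generates the square wave
        }
    }

    #pragma vector = TIMER0_A0_VECTOR
    __interrupt void timer_a0_isr(void)
    {
        P1OUT ^= BIT0;                      // toggle the pin every timer period
    }

With nothing else going on, the pin should show a clean, regular square wave on the scope. Attach the debugger, add your watches and breakpoints, and look for stretched or missing edges; that is the JTAG traffic halting the core.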
Timing issues such as this (RAM access) I'd suggest you investigate with an oscilloscope instead. JTAG is better suited to slower events, algorithms, and places where the code can safely be halted.
In a word, yes.
As a rule of thumb, a debugger will slow down the target chip. The more expensive real-time debugger/ICE tools reduce this, but you will still get a measurable slowdown.
The slowdown is typically because the debugger sticks extra code into your program for breakpoints, RAM monitoring, etc.
How much slowdown is rather a "how long is a piece of string" question. Your best bet is to measure it.
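One way to measure it, as a rough sketch: bracket the code you care about with a GPIO set/clear and compare the pulse width on a scope with and without the debugger attached. Here write_pin() and busy_work() are placeholders for your own GPIO write and the code under test:

    static volatile unsigned char fake_port;   /* stand-in for a real output port register */

    static void write_pin(int level)           /* replace with a real register write       */
    {
        fake_port = (unsigned char)(level != 0);
    }

    static void busy_work(void)                /* stand-in for the code being timed        */
    {
        volatile unsigned long i;
        for (i = 0; i < 10000; i++)
        {
        }
    }

    int main(void)
    {
        for (;;)
        {
            write_pin(1);   /* rising edge marks the start of the section  */
            busy_work();
            write_pin(0);   /* pulse width on the scope = elapsed time     */
        }
    }

Run it once standalone and once under the debugger with your usual breakpoints and watches set; the difference in pulse width is your answer for that particular tool chain.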
Word to the wise
Always test your code comprehensively without a debugger. It is entirely possible to write embedded code that only works when the debugger is plugged in (the debugger's slowdown inadvertently fixes timing issues).