Mostly, microcontrollers have replaced discrete ICs. I find that even if I could design a circuit with a 555, it's likely that the same circuit will need to be tweaked in a few weeks to do something else, and a micro preserves that flexibility.
But there are a few exceptions.
Discrete logic is still faster than most microcontrollers. The propagation delay and switching times for discrete logic are in the 1-10 ns range. To match that with a microcontroller, you have to be able to implement whatever logic you need in 1 instruction, and have a clock in the 100 MHz to 1 GHz range. You can do that, but maybe not on a breadboard in your garage.
A good example of this is the HCTL2020 quadrature decoder. It takes in two series of pulses and tells you which way your motor is spinning. It's implemented as a nonprogrammable chip for the sake of speed.
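To get a feel for what that chip does in hardware, here's a hypothetical software equivalent in C: a quadrature decoder driven from a 16-entry transition table, indexed by the old and new 2-bit A/B states. (The table layout and function name are my own illustration, not anything from the HCTL2020 datasheet.) A micro would have to run this on every edge of either channel, which at high encoder speeds quickly outruns a modest part.

```c
#include <stdint.h>

/* Transition table for quadrature decoding.  Index is
 * (old_state << 2) | new_state, where a state is (A << 1) | B.
 * Valid single-step transitions count +1 or -1; repeated states
 * and invalid double transitions count 0. */
static const int8_t quad_table[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0
};

/* Update a position count given the previous and current A/B states. */
int32_t quad_update(int32_t count, uint8_t old_state, uint8_t new_state)
{
    return count + quad_table[((old_state & 3u) << 2) | (new_state & 3u)];
}
```

Even this table lookup is several instructions per edge; the dedicated chip does the equivalent in hardware at every clock, which is the speed argument in a nutshell.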
Another interesting area where both digital logic and microcontrollers fail is in signal filtering. If you have an analog signal that you want to filter digitally, you have to sample it at some rate. No matter how fast you sample it, noise in the signal at frequencies above half your sampling frequency (the Nyquist frequency) will get aliased down to lower frequencies, where it may interfere with your signal. You can solve this problem with a low-pass filter, made of a cap and a resistor, before your sampling occurs. After the sampling, you're screwed. (Of course, it's frequently the case that the noise won't overlap your signal in frequency, and then a digital filter will work great.)
First, some microcontrollers DO have D/A converters. However, these are far less common than A/D converters.
Aside from the technical issues, the main reason is market demand. Think about it. What kind of application would require a real D/A? It is quite rare to want a micro to produce a reasonably high-speed analog signal unless the point is signal processing. The main market for that, however, is audio, and that needs a lot more resolution than you can build with the same process used to make the digital microcontroller. So audio will use external A/Ds and D/As anyway. DSPs intended for such applications have built-in communication hardware, like I2S, to talk to such external devices.
Otherwise, for ordinary control applications, the strategy is to convert to digital as early in the signal chain as possible and then keep things digital. This argues for A/Ds, but D/As are useless since you don't want to go back to analog.
Things that microcontrollers typically control are controlled with PWM (Pulse-Width Modulation). Switching power supplies and class D audio inherently work on pulses. Motor control, solenoid control, etc., are all done with pulses for efficiency. You want the pass element to be either fully on or fully off because an ideal switch can't dissipate any power. In large systems, or where input power is scarce or expensive (like battery operation), the efficiency of switching systems is important. In a lot of middling cases the total power used isn't the issue, but getting rid of wasted power as heat is. A switching circuit that dissipates 1 W instead of 10 W may cost a little more in electronic parts than the 10 W linear circuit, but is a lot cheaper overall because you don't need a heat sink with its associated size and weight, possibly forced-air cooling, etc. Switching techniques are also usually tolerant of a wider input voltage range.
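The arithmetic behind that efficiency argument is simple enough to sketch in C (the 12 V in, 5 V out, 1.5 A load, and 92% efficiency are just assumed example values):

```c
/* Pass-element dissipation in a linear regulator: the transistor drops
 * the entire input-output difference at the full load current. */
double linear_loss_w(double vin, double vout, double iload)
{
    return (vin - vout) * iload;
}

/* Loss in a switching converter of a given efficiency: the switch is
 * either fully on or fully off, so only the non-ideal fraction of the
 * output power is wasted as heat. */
double switching_loss_w(double vout, double iload, double efficiency)
{
    return vout * iload * (1.0 / efficiency - 1.0);
}
```

For 12 V in and 5 V out at 1.5 A, the linear pass element burns (12 − 5) × 1.5 = 10.5 W, while a 92%-efficient switcher loses only about 0.65 W. That's the difference between needing a heat sink and not.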
Note that PWM outputs, which are very common in microcontrollers, can be used to make analog signals in the unusual cases where you need them. Low-pass filtering a PWM output is the easiest and nicest way to make an analog signal from a micro as long as you have sufficient resolution*speed product. Filtered PWM outputs are nicely monotonic and highly linear, and the resolution versus speed tradeoff can be useful.
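A minimal sketch of the numbers involved, in C (component values and function names are illustrative, not tied to any particular part):

```c
#define PI 3.14159265358979323846

/* DC level recovered from a filtered PWM output: Vcc scaled by the
 * duty count, with 'bits' of resolution. */
double pwm_dc_volts(double vcc, unsigned duty, unsigned bits)
{
    return vcc * (double)duty / (double)(1u << bits);
}

/* PWM repetition rate: one full counter period per output cycle.  This
 * is where the resolution*speed tradeoff comes from -- each extra bit
 * of resolution halves the PWM frequency for a given timer clock. */
double pwm_freq_hz(double timer_clk_hz, unsigned bits)
{
    return timer_clk_hz / (double)(1u << bits);
}

/* -3 dB corner of the single-pole RC filter that smooths the output. */
double rc_cutoff_hz(double r_ohms, double c_farads)
{
    return 1.0 / (2.0 * PI * r_ohms * c_farads);
}
```

With a 16 MHz timer and 10-bit resolution, the PWM runs at about 15.6 kHz, so an RC corner a decade or two below that (say 1 kΩ and 1 µF, about 159 Hz) leaves a clean DC level: a duty count of 512 out of 1024 on a 5 V part sits at 2.5 V.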
Did you have anything specific in mind that you wished a micro had a D/A converter for? Chances are it can be solved with low-pass filtered PWM, or would need an external D/A for higher resolution*speed anyway. The gap between filtered PWM and external converters is pretty narrow, and the range of applications that actually need such a signal is also narrow.
Those peripherals are necessary for most real-world applications of microcontrollers. Not all of them are needed in every design, but leaving out any subset would shrink the market for the microcontroller. For example, the Scenix microcontroller family, which was very fast but had very limited hard peripherals, was a resounding market failure. That's really bad news for those of us charged with specifying microcontrollers: a complete redesign just to keep your products going (okay, maybe good news if you're brought in to replace the person who specified the oddball micro and paid to clean up the mess they left, but that's not great fun either). Much of the area on the chip is taken up by the memory, the bonding pads/drivers, and the CPU, so those little hardware peripherals are pretty minor.
If you need more processing power, leave the world of 8-bit micros behind and move to one of the 32-bit ARM cores, which are generally used in microcontroller-like situations but have more of the chip area devoted to the processor and often to the memory. Or a DSP or FPGA can offer orders of magnitude more processing power, suitable for video processing, high-end audio, high-end instrumentation, data acquisition, etc. As it is, the processing power of modern 8/16-bit micros is not all that bad, and often we 'waste' it by using a high-level language to gain other advantages (faster development and prototyping, use of commercially available libraries such as protocol stacks) rather than tediously hand-crafting bespoke code in assembly.