First, when you talk about the "speed" of a signal in optical fiber, that's ambiguous. You should be clear about whether you're interested in the latency (the time it takes a signal to travel from one end of the fiber to the other) or the bit rate. In this case, it seems most likely you're interested in the latency, or propagation delay.
"In my opinion, if the speed of the wave depends on the refractive index, which is the same in the same fibre, then both wavelengths will travel at the same speed. Is that true?"
No. This is not true. The index of refraction of a material varies (at least slightly) depending on the wavelength of the light being considered.
In addition, in a dielectric waveguide like optical fiber, as the wavelength changes a different proportion of the signal power travels in the core and in the cladding, leading to (at least small) changes in the effective index of the fiber.
In fact, dispersion can be either negative or positive (also called anomalous and normal dispersion), depending on the wavelength and the design of the fiber. In some cases the dispersion properties of the fiber can also be engineered to optimize it for different applications.
But all of that is irrelevant to answering the question, because the total effect is summarized in the dispersion parameter.
When you specify the dispersion as you did, D = -100 ps/(nm·km), you're saying we already know the net effect of all those variations: the propagation delay through 1 km of fiber changes by -100 ps for every nanometer of increase in the wavelength of the signal light.
So you don't need to worry about the physical mechanism. You just need to apply the definition of the dispersion parameter to decide whether a longer or shorter wavelength travels faster through this fiber.
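To make that concrete, here is a minimal sketch in C of applying the definition. The numbers come straight from the question; the variable names are mine, not from any standard API:

```c
#include <stdio.h>

int main(void)
{
    /* Dispersion parameter from the question: D = -100 ps/(nm*km).
       D is the change in propagation delay, per km of fiber,
       per nm of wavelength increase. */
    double D = -100.0;        /* ps/(nm*km) */
    double length_km = 1.0;   /* fiber length */
    double dlambda_nm = 1.0;  /* wavelength increase */

    double delta_delay_ps = D * dlambda_nm * length_km;

    /* Prints -100.0: making the wavelength 1 nm longer makes the signal
       arrive 100 ps earlier over 1 km, so in this fiber the longer
       wavelength travels faster. */
    printf("delay change: %.1f ps\n", delta_delay_ps);
    return 0;
}
```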
Fibre-optic cables are fairly lossy. As such, a signal can only travel so far down a piece of "glass" before it must be received by a transceiver or relay device that decodes the optical signal, regenerates it, and transmits it out again. This is a span.
The "cable length" itself could go from north America to Europe. A span may be only a kilometer.
ADDITION: This does not only apply to fibre optics, though. Any high-frequency communication system, other than point-to-point microwave, needs the same "pass-the-bucket" handling.
Rather than worrying about a research paper that's pushing things to the limit, first start by understanding the stuff sitting in front of you.
How does a SATA 3 hard drive in a home computer put 6 Gbit/s down a serial link? The main processor isn't clocked at 6 GHz, and the one in the hard drive certainly isn't, so by your logic it shouldn't be possible.
The answer is that the processors aren't sitting there putting one bit out at a time. There is dedicated hardware called a SERDES (serializer/deserializer) that converts a lower-speed parallel data stream into a high-speed serial one and then back again at the other end. If that works in blocks of 32 bits, the parallel rate is under 200 MHz (6 Gbit/s ÷ 32 ≈ 187.5 MHz).

That data is then handled by a DMA system that automatically moves it between the SERDES and memory without the processor getting involved. All the processor has to do is tell the DMA controller where the data is, how much to send, and where to put any reply. After that the processor can go off and do something else; the DMA controller will interrupt once it has finished the job.
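As an illustration only, here is a toy model of the serializer side in C. A real SERDES is dedicated logic with line coding (e.g. 8b/10b), not software, and the function name here is made up:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy serializer: accepts one 32-bit word per slow "parallel clock" and
   shifts it out one bit per fast "serial clock". The serial side runs
   32x faster than the parallel side, which is the whole point. */
static void serialize_word(uint32_t word)
{
    for (int i = 31; i >= 0; i--)        /* MSB first, one bit per fast clock */
        putchar(((word >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    /* The CPU-side view: hand over a whole word at the slow rate; the
       fast per-bit work happens "below" this interface. */
    serialize_word(0xA5A5A5A5u);
    return 0;
}
```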
And if the CPU is spending most of its time idle, it could use that time to start a second DMA and SERDES running on a second transfer. In fact, one CPU could run quite a few of those transfers in parallel, giving you quite a healthy data rate.
OK, this is electrical rather than optical, and it's 50,000 times slower than the system you asked about, but the same basic concepts apply. The processor only ever deals with the data in large chunks; dedicated hardware deals with it in smaller pieces, and only some very specialized hardware deals with it one bit at a time. You then put a lot of those links in parallel.
One late addition to this, hinted at in the other answers but not explicitly explained anywhere, is the difference between bit rate and baud rate. Bit rate is the rate at which data bits are transmitted; baud rate is the rate at which symbols are transmitted. On a lot of systems the symbols transmitted are binary bits, so the two numbers are effectively the same, which is why there can be a lot of confusion between the two.
However, some systems use a multi-bit encoding. If instead of sending 0 V or 3 V down the wire each clock period you send 0 V, 1 V, 2 V or 3 V, then your symbol rate is the same, one symbol per clock, but each symbol has 4 possible states and so can hold 2 bits of data. This means your bit rate has doubled without increasing the clock rate.
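A minimal sketch of that idea in C; the voltage table is just the illustrative 0-3 V levels from above, not any real line standard:

```c
#include <stdint.h>
#include <stdio.h>

/* Map each pair of bits to one of four illustrative voltage levels:
   00 -> 0 V, 01 -> 1 V, 10 -> 2 V, 11 -> 3 V. One symbol carries 2 bits,
   so the bit rate is twice the symbol (baud) rate. */
static const double level_volts[4] = {0.0, 1.0, 2.0, 3.0};

int main(void)
{
    uint8_t data = 0xB4;                  /* 1011 0100 in binary */
    for (int i = 6; i >= 0; i -= 2) {     /* 2 bits per symbol, MSB first */
        int symbol = (data >> i) & 0x3;
        printf("symbol %d -> %.0f V\n", symbol, level_volts[symbol]);
    }
    return 0;
}
```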
No real-world systems that I'm aware of use such a simple voltage-level style of multi-bit symbol (the maths behind real-world systems can get very nasty), but the basic principle remains the same: if you have more than two possible states, then you can get more bits per clock. Ethernet and ADSL are the two most common electrical systems that use this type of encoding, as does just about any modern radio system. As @alex.forencich said in his excellent answer, the system you asked about used a 32-QAM (quadrature amplitude modulation) signal format: 32 different possible symbols, meaning 5 bits per symbol transmitted.
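The general relationship is bits per symbol = log2(number of symbol states). A quick check in C, using the figures from this answer (compile with -lm):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* bits per symbol = log2(number of distinct symbol states) */
    int states[] = {2, 4, 32};   /* binary, the 4-level example, 32-QAM */
    for (int i = 0; i < 3; i++)
        printf("%2d states -> %.0f bits/symbol\n",
               states[i], log2((double)states[i]));
    /* And bit rate = baud rate * bits per symbol. */
    return 0;
}
```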