The synchronization block is used to capture a signal that is not synchronous with the system clock (clkio in this example). It is required for any synchronous logic, and as far as you're concerned it is transparent, apart from delaying the signal seen by the edge detector by 2 clock cycles. It prevents an "illegal" level (not clearly 1 or 0) from entering the core, where it could cause havoc. If you're really curious I can explain metastability and synchronizer chains, but for this specific question I think that's overkill.
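If it helps to see it in code, here is a minimal bit-level sketch of a 2-flip-flop synchronizer chain. The function name and reset values are my own assumptions, and real metastability resolution is not modelled; the point is just the 2-cycle delay.

```python
# Hypothetical model of a 2-stage synchronizer: two flip-flops in series,
# both clocked by the system clock. The async input appears at the output
# 2 clock cycles later.
def sync_2ff(samples):
    ff1 = ff2 = 0               # both stages assumed to reset to 0
    out = []
    for s in samples:
        out.append(ff2)         # value visible to the core this cycle
        ff2, ff1 = ff1, s       # both flip-flops update on the clock edge
    return out

print(sync_2ff([1, 1, 1, 0, 0]))  # input pattern shows up 2 cycles late
```

Running it on a step input shows the output simply tracking the input with a 2-cycle lag, which is why you can treat the block as transparent.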
The edge detector block is simpler. The trapezoidal shape is a MUX, as Michael Karas mentioned. It allows the block to find either rising edges or falling edges. The flip flop samples the output of the MUX every clock cycle, essentially "remembering" the last value (1 or 0). The final AND gate compares the last value remembered by the flip flop with the inverted current value, and will ONLY be high for 1 clock cycle when the signal changes state.
Look at how the output of the flip flop delays the input of the flip flop by 1 clock cycle:
IN: 0000111100001111000011110000...
OUT: 0000011110000111100001111000...
Now take a look at the signal that is the bottom input of the AND gate and the output of the FF above:
INV. FF INPUT: 1111000011110000111100001111...
FF OUTPUT: 0000011110000111100001111000...
Take a look at the logic of an AND gate:
A | B | Y
---+---+---
0 | 0 | 0
1 | 0 | 0
0 | 1 | 0
1 | 1 | 1
One of those inputs is the output of the FF, the other is the output of the inverter... What do you see?
INV. FF INPUT: 1111000011110000111100001111...
FF OUTPUT: 0000011110000111100001111000...
--------------------------------------------------
AND GATE OUT: 0000000010000000100000001000...
You get one short pulse on every falling edge of the input to the edge detector block (assuming the MUX is routing the non-inverted signal). If the MUX selects the inverted signal, you will instead get a short pulse on every rising edge of the input.
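The whole walkthrough above can be condensed into a few lines of Python (a hypothetical bit-level model; the function name and the flip flop's reset value are my own assumptions):

```python
# Falling-edge detector: a flip-flop delays the input by one clock cycle,
# and the AND gate combines the inverted current input with the delayed
# value, pulsing high for exactly one cycle on each 1 -> 0 transition.
def falling_edge_detect(samples):
    prev = 0                              # flip-flop state, assumed reset to 0
    pulses = []
    for cur in samples:
        pulses.append((1 - cur) & prev)   # NOT(current) AND previous
        prev = cur                        # flip-flop samples the input
    return pulses

bits = [int(c) for c in "0000111100001111"]
print("".join(str(b) for b in falling_edge_detect(bits)))
# prints 0000000010000000 -- one pulse per falling edge, as in the traces above
```

Swapping `(1 - cur) & prev` for `cur & (1 - prev)` gives the rising-edge variant, which is what the MUX selects between.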
The variation in delay you're seeing is immaterial. I2C is a synchronous protocol, so the only thing that matters is whether or not the SDA line is still low on the next rising edge of SCL. As you can see in your first diagram, they all pull SDA low before that happens, so they are behaving correctly.
If this is causing problems on your I2C master, then it is implemented incorrectly.
In fact, the rising edges you've circled are caused when the master stops driving the SDA low itself, and have nothing to do with the slave devices' activity. They haven't yet driven the line at all.
(The timing variation you see is probably due to the asynchronous nature of your logic analyzer's sampling.)
Non-monotonic edges on the I2C SDA line are rarely a problem. During the main part of any transfer, the data is clocked by the SCL line, and this occurs only when the SDA line is stable.
The only time a falling edge on SDA is significant is when it is used to signal the I2C "start" condition — falling edge on SDA while SCL is high.
There is one situation in which this could present a problem. Some devices require a "repeated start" condition — a "start" that is not preceded by a "stop" — in order to properly implement certain read operations.
A glitch during such a repeated start could be interpreted by the device as a "stop" followed by a "start", which would leave it in the wrong state.
The glitch you show is really tiny, and as Wouter says, many I2C devices incorporate Schmitt triggers (hysteresis) in order to mitigate glitches like this.
Any sort of low-pass filtering will also help. A low-value series resistor (on the order of a few tens of ohms) located near the master device, in conjunction with the bus's distributed capacitance, will form such a filter. Experiment to find the best value for your application.
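As a back-of-the-envelope check (the component values here are assumptions, not measurements), the series resistor and bus capacitance form a single-pole RC low-pass with corner frequency f_c = 1 / (2*pi*R*C):

```python
import math

# Assumed example values: 33 ohm series resistor, ~200 pF bus capacitance.
R = 33        # ohms
C = 200e-12   # farads
f_c = 1 / (2 * math.pi * R * C)
print(f"corner frequency = {f_c / 1e6:.1f} MHz")
```

With these assumed values the corner lands in the tens of MHz, far above standard I2C clock rates, so the filter can suppress narrow glitches without noticeably slowing the intended SDA edges.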