Electronic – CAN bus – Priority/collision

can

I'm having a hard time understanding how a CAN bus can actually work. It looks to me like a scenario where everyone is talking at the same time.

The CAN bus is simply two wires that everyone is connected to, CAN-H and CAN-L, right? Say we have four nodes on it, with IDs 1, 2, 3, and 4, just to keep it simple. The priority of the messages is arranged from highest ID to lowest, 4, 3, 2, 1, right? CAN-H is always the opposite of CAN-L: when CAN-H is high, CAN-L is low, and vice versa, right?

But how does the CAN bus avoid collisions? Let's say node 3 is done talking; how do they decide who is next? If nodes 1 and 4 decide at the same time to transmit data to the bus, wouldn't the transmitted data be corrupted, since two nodes are transmitting at the same time?

If node 1 is transmitting binary 00100110 and node 3 is transmitting 11011001 at the same time, wouldn't that result in both CAN-H and CAN-L being high at the same time?

Can someone explain to me how CAN avoids situations like this?

Best Answer

Funny that with so many correct answers, I still feel like something is amiss or not clear enough. Even the most complete answer, by @Nick, does not correct some wrong assumptions in the question. So, I'll try to make it simpler.

CAN-H is always opposite of CAN-L, when CAN-H is high, CAN-L is low, and opposite, right?

Wrong. The CAN physical layer is unusual among differential buses because it uses wired-AND signalling. While most of them indeed drive the data lines in two opposite directions, CAN drivers work as open-drain (CAN-L) and open-source (CAN-H). So CAN-L can be either low or high-Z, and CAN-H can be either high or high-Z (well... technically, the transceivers include weak biasing resistors pulling the common mode to mid-supply). This prevents electrical collisions, since nodes either actively pull the lines in the same direction or let them go and allow the termination resistors to equalize the voltage between the lines.
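To see how "any dominant driver wins" falls out of this, here is a minimal sketch, purely illustrative and not any real transceiver API, that models the bus level as the wired-AND of all drivers:

    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative model only: tx_dominant[i] is what node i is currently
     * driving (true = dominant/0, false = recessive/1). The resulting bus
     * level is dominant if ANY node drives dominant - wired-AND behaviour. */
    static bool bus_is_dominant(const bool tx_dominant[], size_t n_nodes)
    {
        for (size_t i = 0; i < n_nodes; i++) {
            if (tx_dominant[i])
                return true;  /* one dominant driver wins over any number of recessive ones */
        }
        return false;         /* nobody pulls: the terminators keep the bus recessive */
    }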

The downside of this, of course, is that the slew rate of the dominant-to-recessive transition cannot be increased beyond a certain point, effectively limiting the bus speed.

Say we have four nodes on it, with IDs 1, 2, ...

CAN nodes do not have an ID. Node IDs are usually introduced by higher-level CAN-based protocols, such as CANopen. But keep reading...

The priority of the messages is arranged from highest ID to lowest, 4, 3, 2, 1, right?

Wrong. The priority of a message is defined by its arbitration field, which includes the message ID (either 11 or 29 bits) and the RTR bit. I believe CAN FD includes even more bits in arbitration, but I am a little bit fuzzy on that newer standard.

Bit arbitration is done by the CAN controllers by monitoring the bus while they are sending. If a node detects a dominant level while it is sending a recessive level itself, it will immediately quit the arbitration process and become a receiver.

Since the dominant bit is logically 0, it follows that the message with the numerically lowest arbitration field will win the arbitration, i.e. a message with ID=1 has priority over a message with ID=4.
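To make this concrete, here is a small simulation sketch, assuming standard 11-bit identifiers and ignoring all hardware details: each contending node shifts out its ID MSB first, and any node that sends recessive while the bus reads back dominant drops out, leaving the node with the numerically lowest ID still transmitting.

    #include <stdint.h>
    #include <stdio.h>

    /* Simulates bitwise arbitration of standard 11-bit identifiers for up to
     * 8 nodes. Returns the index of the winning node. Purely illustrative. */
    static int arbitrate(const uint16_t id[], int n_nodes)
    {
        int active[8];                        /* which nodes are still arbitrating */
        for (int i = 0; i < n_nodes; i++)
            active[i] = 1;

        for (int bit = 10; bit >= 0; bit--) { /* MSB of the identifier first */
            /* Bus is dominant (0) if any active node sends a 0 in this position */
            int bus_dominant = 0;
            for (int i = 0; i < n_nodes; i++)
                if (active[i] && !((id[i] >> bit) & 1))
                    bus_dominant = 1;

            /* A node sending recessive (1) while the bus is dominant loses */
            for (int i = 0; i < n_nodes; i++)
                if (active[i] && ((id[i] >> bit) & 1) && bus_dominant)
                    active[i] = 0;
        }

        for (int i = 0; i < n_nodes; i++)
            if (active[i])
                return i;
        return -1;
    }

    int main(void)
    {
        uint16_t id[] = { 0x004, 0x001 };     /* two messages contending for the bus */
        printf("winner: ID 0x%03X\n", (unsigned)id[arbitrate(id, 2)]); /* prints 0x001 */
        return 0;
    }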

Now, back to node IDs. Some CAN-based protocols include either the sender or the target node ID as part of their message ID format. This effectively creates a node hierarchy, so in that case node IDs do affect the priority. But again, this is not done at the CAN data link layer.
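For example, CANopen builds its 11-bit identifier (the COB-ID) from a 4-bit function code and a 7-bit node ID, so between two messages with the same function code the one from the lower node ID wins arbitration. A rough sketch, with illustrative values:

    #include <stdint.h>

    /* CANopen-style 11-bit COB-ID: upper 4 bits = function code,
     * lower 7 bits = node ID. */
    static uint16_t cob_id(uint8_t function_code, uint8_t node_id)
    {
        return (uint16_t)(((function_code & 0x0Fu) << 7) | (node_id & 0x7Fu));
    }

    /* cob_id(0x3, 1) -> 0x181  (TPDO1 from node 1)
     * cob_id(0x3, 4) -> 0x184  (TPDO1 from node 4) - loses arbitration to 0x181 */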

how does the CAN bus avoid collisions?

The answer is: it does not. Or, more precisely, not completely. We have already established that electrical collisions are avoided by the wired-AND signalling.

Logical collisions can be avoided completely by making the source node ID a part of the arbitration field and enforcing node ID uniqueness. However, this is rarely done in practice.

More often, message IDs are carefully mapped by their priority in a particular application and then distributed between nodes with different functions, so that each node can only send messages within its own unique range. This approach further reduces the chances of collision.
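A hypothetical ID map along these lines, just to illustrate the idea (the names, nodes, and ranges are made up, not from any standard):

    /* Hypothetical allocation for a small system: lower ID = higher priority,
     * and each node owns a disjoint ID range so its messages never clash. */
    enum msg_id {
        MSG_EMERGENCY_STOP = 0x010,   /* node 1: safety, highest priority     */
        MSG_MOTOR_STATUS   = 0x100,   /* node 2: owns 0x100-0x10F             */
        MSG_SENSOR_DATA    = 0x200,   /* node 3: owns 0x200-0x20F             */
        MSG_DIAG_LOG       = 0x700    /* node 4: diagnostics, lowest priority */
    };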

If nodes 1 and 4 decide at the same time to transmit data to the bus, wouldn't the transmitted data be corrupted, since two nodes are transmitting at the same time?

This scenario does not necessarily corrupt the data. If the messages have different arbitration fields, the node losing arbitration will stop transmitting, allowing the other node to send a complete, correct message.

This, of course, does not eliminate collisions entirely. If two nodes try to send messages with identical arbitration fields, they will both remain active after the arbitration and might collide in the data fields. If this happens, the CRC will detect the frame error and the message will be discarded by the receivers, prompting re-transmission by the senders.

In short, ACK confirmation bits, Error frames and CRC validation are used to ensure data integrity and deal with the consequences of logical collisions.
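For reference, classic CAN protects each frame with a 15-bit CRC (generator polynomial 0x4599), computed over the destuffed bit stream from the start-of-frame bit through the end of the data field. A bit-serial sketch of that calculation (real controllers do this in hardware on the fly):

    #include <stdint.h>
    #include <stddef.h>

    /* Classic CAN CRC-15, polynomial x^15+x^14+x^10+x^8+x^7+x^4+x^3+1 (0x4599).
     * 'bits' holds the destuffed bit stream from SOF through the end of the
     * data field, one bit per array element. Sketch only. */
    static uint16_t can_crc15(const uint8_t bits[], size_t n_bits)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < n_bits; i++) {
            uint16_t in = (uint16_t)((bits[i] & 1u) ^ ((crc >> 14) & 1u));
            crc = (uint16_t)(crc << 1);
            if (in)
                crc ^= 0x4599u;
            crc &= 0x7FFFu;
        }
        return crc;  /* sent in the CRC field; receivers recompute and compare */
    }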

Let's say node 3 is done talking; how do they decide who is next?

There is nothing in the CAN data link layer to help with this. Any node can start sending as soon as it detects an idle bus.

There is, however, one important detail in the CAN specification: the transmitting node is required to send 8 recessive bits after the 3-bit intermission at the end of the last frame, 11 bits total. Nodes with pending messages wait for the 7 recessive EOF bits and the 3-bit intermission, 10 bits total, before they can attempt to transmit. This guarantees that the same node cannot send more than one remote or data frame in a row if there are other nodes waiting for the bus to become idle.

However, there are several methods to improve this behavior in higher-layer protocols. For example, a variable delay between the bus going idle and the start of transmission can be introduced, making sure all nodes have an equal opportunity to start talking.

More complicated mechanisms include round-robin scenarios or centralized bus management nodes that orchestrate communication.