Electronic – Ring Topology Ethernet Backplane – 3 Port 1GbE Switch

ethernet, network, system

I'm doing a thought experiment/napkin design for a low-cost cluster processor system: some low-cost, high-MIPS CPUs chained together on a backplane. My first thought for the backplane is Ethernet, because it's ubiquitous, fairly easy to implement and run over short/medium distances, and has good throughput. I'm aware of other backplane technologies, some perhaps better suited, but this is the approach I'm currently exploring.

Each processor node would be a small, credit-card-sized PCB that slots vertically into a long backplane PCB. The backplane would provide power and so on to the nodes, and would connect them together with a 1GbE link. This would be achieved with a T-bar style 3-port 1GbE switch IC.

          A        B        C
          ^        ^        ^
1GbE IN > X >----> X >----> X > ...

A 1GbE link would feed from one node into the next, and each node's processor would connect into the main 1GbE link through its switch at 100MbE.
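As a rough sanity check on the chained-switch idea, here are my own back-of-the-envelope numbers for the forwarding delay a frame picks up crossing the whole backplane, assuming store-and-forward switches and maximum-size frames (neither figure comes from any particular switch datasheet):

    # Rough store-and-forward latency estimate for the daisy-chained switches.
    # Assumptions (mine, not from a datasheet): each switch forwards only after
    # receiving the full frame, max standard frame of 1518 bytes, 1 Gb/s links.

    FRAME_BYTES = 1518          # max untagged Ethernet frame
    LINK_BPS = 1_000_000_000    # 1GbE
    NODES = 32

    per_hop_us = FRAME_BYTES * 8 / LINK_BPS * 1e6   # ~12.1 us serialisation per hop
    worst_case_us = per_hop_us * NODES               # frame traverses every node

    print(f"per-hop serialisation: {per_hop_us:.1f} us")
    print(f"worst case across {NODES} hops: {worst_case_us:.1f} us")

So even end-to-end across 32 hops the chain should add well under a millisecond of forwarding delay, ignoring queuing.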

The question is: where can one find a 3-port 1GbE switch IC? I've looked around at some of the major networking IC companies and they don't seem to exist. I want a very low-cost single chip, not something capable of 5-16 ports. For 100MbE there are plenty of chips, but I think it will be too slow, especially when running in a ring as I intend.

Is there a reason these parts don't exist, i.e. is it a fundamentally flawed concept, or is it too niche an application?

What is the likelihood of being able to get an FPGA to do the job, and if so how expensive an FPGA is it likely to need?

I could route a dedicated port for each node on the backplane and put a master switch at the end, but this gets messy when there are a lot of nodes on a single backplane. Because the nodes are going to be small, I was hoping to fit 16 to 32 of them on a long, narrow backplane PCB. 32 nodes would then require 32 * 4 diff pairs and more layers in the backplane stackup. The proposed Ethernet ring could probably be done on a 2-sided board, perhaps 4 layers depending on how the impedance control is done.
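To compare the routing burden of the two options, here is my own quick tally, assuming 4 differential pairs per 1000BASE-T-style port:

    # Compare backplane routing burden: one dedicated 1GbE run per node back to
    # a master switch (star) versus the proposed node-to-node chain.
    # Assumes 4 differential pairs per 1000BASE-T-style port (my assumption).

    PAIRS_PER_PORT = 4

    for nodes in (16, 32):
        star_total = nodes * PAIRS_PER_PORT    # all runs converge near the master switch
        chain_per_segment = PAIRS_PER_PORT     # only one link between adjacent slots
        print(f"{nodes} nodes: star routes up to {star_total} pairs past the last slot, "
              f"chain routes {chain_per_segment} pairs between any two slots")

The star's pair count piles up toward the master-switch end of the board, whereas the chain never has more than one link's worth of pairs in any region.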

And to head off the obvious responses: I am fully aware that running the Ethernet ports in a ring will reduce throughput. I am not intending to supply a dedicated 1GbE link, or even a 100MbE link, to each node. This design is a compromise between speed, complexity and cost, and the overriding factor at the moment is cost. I am happy enough with 32 nodes sharing a 1GbE link. The only aspect I am opening up to question is the backplane data link between nodes (and the outside world).
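To put my own number on that compromise (ignoring protocol overhead and any locality between neighbouring nodes):

    # Average share of the backbone per node if all nodes contend equally.
    LINK_MBPS = 1000
    for nodes in (16, 32):
        print(f"{nodes} nodes sharing 1GbE: ~{LINK_MBPS / nodes:.0f} Mb/s each on average")

Roughly 31 Mb/s per node at 32 nodes is acceptable for what I have in mind.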

EDIT: Perhaps an ARM core with 2x 1GbE ports would be an elegant solution, if in software I could emulate a 3-port switch. I'm not entirely sure of the feasibility or performance requirements of this.
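To get a feel for what that would involve, here is a minimal user-space sketch, entirely my own illustration, assuming a Linux target and two interfaces hypothetically named eth0 and eth1; it blindly copies every frame from one port to the other, with the host's own network stack acting as the third "port":

    #!/usr/bin/env python3
    # Naive two-port frame forwarder: copies every Ethernet frame received on
    # one interface out of the other. Requires root and Linux AF_PACKET sockets.
    import select
    import socket

    ETH_P_ALL = 0x0003          # capture every protocol
    IFACES = ("eth0", "eth1")   # hypothetical interface names

    def open_raw(ifname):
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
        s.bind((ifname, 0))
        return s

    socks = [open_raw(i) for i in IFACES]

    while True:
        readable, _, _ = select.select(socks, [], [])
        for s in readable:
            frame = s.recv(2048)
            # forward out of the *other* interface, unmodified
            out = socks[1] if s is socks[0] else socks[0]
            out.send(frame)

On a real Linux target the simpler route would probably be to bridge the two interfaces in the kernel (e.g. ip link add br0 type bridge, then attach both ports), but the sketch shows how little logic a blind 3-port forwarder needs; whether the CPU can keep this up at anything near line rate is the real question.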

Best Answer

To create a loop topology you will need the appropriate PHY port and chip to support it, assuming you can afford the interface at your volumes.

E.g. http://www.broadcom.com/products/Physical-Layer/Gigabit-Ethernet-PHYs/BCM5421xE-Family

Consider a stub-bus topology on a differential 100 Ohm impedance backplane instead: FlexBus only does 10 MHz and CAN bus only 1 MHz, but if you want low cost, this is it.

Considering the complexity of 1 GHz signalling, a loop topology will have reliability issues if any repeater fails, and any T-bus is limited by reflections from the stub lengths to roughly 50 MHz per metre of bus. This will not be easy unless you have successfully done 100 MHz backplanes before.
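To put a rough number on how short a tap stub has to be before reflections stop mattering, here is a rule-of-thumb estimate of my own, assuming about 6 ns/m propagation delay in FR4 and keeping the stub's round-trip delay under roughly a sixth of the signal rise time:

    # Rough estimate of the longest "invisible" stub for a given edge rate,
    # assuming ~6 ns/m propagation delay in FR4 and the rule of thumb that the
    # stub round-trip delay should stay under ~1/6 of the signal rise time.

    PROP_DELAY_NS_PER_M = 6.0

    def max_stub_mm(rise_time_ns):
        one_way_ns = (rise_time_ns / 6.0) / 2.0   # round trip < rise_time / 6
        return one_way_ns / PROP_DELAY_NS_PER_M * 1000

    for rt in (10.0, 1.0, 0.3):                   # slow logic, fast logic, GbE-class edges
        print(f"rise time {rt:4.1f} ns -> stub under ~{max_stub_mm(rt):.0f} mm")

At gigabit-class edge rates the allowable stub is only a few millimetres, which is why the T-bus approach runs out of steam well below what the proposed ring needs.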

For 1 GHz rates, the PHY supplies the clock and the peripheral must synchronize to it.
