Electronic – Rather complicated sensor network

communication, microcontroller, sensor, wireless, xbee

I was working on a project recently and it was the first one which was involved enough to make the sensor networking complicated. In the end, I think the communication was the bottleneck in terms of overall performance and I'm wondering how more experienced people would have solved this problem. This is a long read, but I think it's pretty interesting so please stick with it. The problem was to design an autonomous blimp capable of navigating an obstacle course and dropping ping pong balls into brown box targets. Here goes:

Sensors

  • 4D Systems uCAM-TTL camera module – UART interface
  • HMC6352 Digital Compass – I2C interface
  • Maxbotix Sonar ez4 – 1 pin analog interface

Actuators

  • 2x L293D motor drivers (connected to simple hobby motors) – These drove 6 motors bidirectionally and required PWM inputs to vary the speed. Three of the motors (the ones controlling up/down movement) always did the same thing, so they needed only 2 PWM outputs between them. The other 3 motors, which controlled lateral movement, each needed individual control (for omni-directional movement), so that was another 6 PWM outputs required from our controllers.
  • Servo motor – PWM interface
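
To make the PWM budget concrete, here is a minimal sketch of how one bidirectional channel maps onto the two inputs of an L293D half-bridge pair (the helper name and the 0–255 duty scale are illustrative assumptions, not our actual code):

```c
/* Hypothetical helper: map a signed speed (-255..255) onto the two
 * L293D inputs for one motor. Driving one input with PWM while holding
 * the other low sets both speed and direction; swapping which input
 * carries the PWM reverses the motor. This is why each independently
 * controlled motor costs two outputs, but one of them must be PWM. */
typedef struct {
    unsigned char in1;  /* PWM duty on input 1 (0..255) */
    unsigned char in2;  /* PWM duty on input 2 (0..255) */
} l293d_pwm_t;

l293d_pwm_t motor_speed_to_pwm(int speed)
{
    l293d_pwm_t out = {0, 0};
    if (speed > 0)
        out.in1 = (unsigned char)(speed > 255 ? 255 : speed);    /* forward */
    else if (speed < 0)
        out.in2 = (unsigned char)(-speed > 255 ? 255 : -speed);  /* reverse */
    /* speed == 0 leaves both inputs low: the motor coasts */
    return out;
}
```

With the three up/down motors sharing one such channel pair and the three lateral motors each getting their own, the 2 + 6 PWM count above falls out directly.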

Controllers

For reasons that will become clear later, we ended up using 2x ATmega328Ps. We used an Arduino Uno to program them (we didn't have access to an ISP), but we fabbed a custom PCB so we didn't have to fly Arduino boards, which would just have added unnecessary weight to the blimp. As for why we chose the ATmega328P: I was very familiar with the Arduino environment, and I think that made the code development much quicker and easier.

Communication & Processing

  • 2x Xbee Basic
  • 2x ATmega328P
  • Desktop computer running C++ w/ openCV

So as you can tell from the camera module, most of our project relied on computer vision. The blimp could only carry so much weight, and we didn't feel comfortable implementing computer vision on a microcontroller. So what we ended up doing was using XBees to relay the image data back to a desktop computer. On the server side we received the image data and used openCV to process the image and figure things out from it. The server side also needed to know the height information (from the sonar) and the compass heading.

The first wrinkle was that we were not able to have the camera controlled by a microcontroller, for a couple of reasons. The main issue was that the micro's internal memory couldn't store an entire frame. There might have been ways around this through clever coding, but for the purposes of this question let's pretend it was impossible. So to solve this problem, we had the server side send camera commands through the XBee transceiver, and the XBee receiver on board the blimp had its output wired directly to the camera's input.

The next wrinkle was that a single ATmega328P does not have enough PWM outputs to control all the motors, because the I2C interface uses one of the PWM pins (damn them…). That is why we decided to use a second one. The code actually lent itself perfectly to parallel processing anyway, because the height control was completely independent of the lateral movement control (so 2 micros were probably better than one attached to a PWM controller). U1 was responsible for 2 PWM outputs (up/down) and reading the sonar. U2 was responsible for reading the compass, driving 6 PWM outputs (the lateral motors), and also reading the sonar. U2 was additionally responsible for receiving commands from the server through the XBee.

That led to our first communication problem. The XBee DOUT line was connected to both the microcontroller and the camera. Of course we designed a protocol so that the micro would ignore camera commands and the camera would ignore micro commands, so that part was fine. However, the camera, when ignoring our micro commands, would send back NAK data on its output line. Since the command was meant for the micro, we needed some way to turn off the camera's output to the XBee. To solve this, we had the micro control 2 FETs: one between the camera and the XBee (that's the first FET), and one between U2 and the XBee (that's the second FET). When the camera was sending info back to the server, the first FET was 'on' and the second FET was 'off'. Unfortunately there appeared to be some crosstalk with this method, and sometimes when the server was trying to receive height data, for example, it would read a NAK from the XBee.

So to give you an idea of how this worked here are a few examples:

  1. Server requests a picture – PIC_REQUEST goes through XBee and arrives at U2 and camera. U2 ignores it and camera sends back image data.
  2. Server just finished processing a picture and is sending motor data to tell the blimp to turn right – MOTOR_ANGLE(70) goes through the XBee and arrives at U2 and the camera. U2 recognizes it as a micro command and thus turns off the camera's FET (but perhaps the camera has already responded with a NAK?? who knows…). U2 then responds to the command by changing the motor PWM outputs. It then turns the camera's FET back on (this was the default state, since image data was most important).
  3. Server realizes we've come to a point in the obstacle course where our default hover height now needs to be 90 inches instead of 50 inches. SET_HEIGHT goes through the XBee and the same thing happens as in example 2. U2 recognizes the SET_HEIGHT command and triggers an interrupt on U1. U1 comes out of its height control loop and waits to receive serial data from U2. That's right, more serial data. At this point U2's FET is on (and the camera's FET is off), so the server receives the height that U2 is also sending to U1; that served as verification. U1 then resets its internal height2HoverAt variable. U2 now turns off its FET and turns the camera's FET back on.
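
To make those examples concrete, the dispatch logic on U2 looked roughly like this (the opcode values and the FET flags here are illustrative stand-ins, not our actual code, and the elided actions are only summarized in comments):

```c
#include <stdint.h>

/* Hypothetical opcode values -- our real 6-byte command encoding
 * is not shown in this post. */
enum { PIC_REQUEST = 0x01, MOTOR_ANGLE = 0x02, SET_HEIGHT = 0x03 };

/* Flags standing in for the two FET gate pins. */
static int camera_fet_on = 1;  /* default: camera owns the XBee line */
static int u2_fet_on     = 0;

/* Dispatch one complete 6-byte command: cmd[0] = opcode, cmd[1..5] = args.
 * Returns 1 if it was a micro command, 0 if it was meant for the camera. */
int dispatch_command(const uint8_t cmd[6])
{
    switch (cmd[0]) {
    case PIC_REQUEST:
        /* Camera command: U2 ignores it; the camera replies with image data. */
        return 0;
    case MOTOR_ANGLE:
        camera_fet_on = 0;     /* silence the camera's NAK on the shared line */
        /* ... update the six lateral-motor PWM outputs from cmd[1] ... */
        camera_fet_on = 1;     /* restore the default: image data first */
        return 1;
    case SET_HEIGHT:
        camera_fet_on = 0;
        u2_fet_on = 1;         /* let the server see the echoed height */
        /* ... interrupt U1 and forward the new height over serial ... */
        u2_fet_on = 0;
        camera_fet_on = 1;
        return 1;
    default:
        return 0;              /* unknown traffic: assume it's for the camera */
    }
}
```

The race in example 2 lives in the gap before `camera_fet_on = 0` takes effect: the camera may already have clocked out its NAK by then.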

I definitely left out a good amount of information, but I think that's enough to understand some of the complications. In the end, our problems all came down to synchronizing everything. Sometimes there would be leftover data in buffers, but only 3 bytes (all our commands were 6-byte sequences). Sometimes we would lose the connection with our camera and have to resync it.
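
In hindsight, one standard way to make that resync cheap would have been to start every command with a sync marker and realign the stream on it; a rough sketch (the marker value and the frame layout are made up, not our real command format):

```c
#include <stdint.h>
#include <stddef.h>

#define SYNC_BYTE 0xA5  /* hypothetical frame marker */
#define FRAME_LEN 6     /* the post's commands were 6-byte sequences */

/* Scan a receive buffer for the start of the next complete, aligned
 * frame. Returns the offset of the sync byte beginning a full frame,
 * or -1 if none is present. Leftover bytes ahead of the marker are
 * simply skipped instead of being misparsed as a command. */
int find_frame(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + FRAME_LEN <= len; i++)
        if (buf[i] == SYNC_BYTE)
            return (int)i;
    return -1;
}
```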

So my question is: What techniques would you guys suggest to have made the communication between all those components more reliable/robust/simpler/better?

For example, I know one would've been to add a delay circuit between the on-board XBee output and the camera, so that the micro had a chance to turn off the camera's talk line before the camera responded to micro commands with NAKs. Any other ideas like that?

Thanks and I'm sure this will require many edits so stay tuned.


Edit1: Splicing the camera's UART data through one of the micros did not seem possible to us. The camera offered two data modes: raw bitmap or JPEG. With a raw bitmap, the camera just streams data at you as fast as it can. The ATmega328P only has 128 bytes for a serial buffer (technically this is configurable, but I'm not sure how), and we didn't think we could pull data out of the buffer and push it through to the XBee fast enough. That left the JPEG mode, where the camera sends each package and waits for the controller to ACK it (a little handshaking protocol). The fastest this could go was 115200 baud. Now, for some reason, the fastest we could reliably transmit large amounts of data over the XBee was 57600 baud (even after we did the node/network pairing to enable the auto-resend capability). Adding the extra stop in our network (camera to micro to XBee, as opposed to just camera to XBee) simply slowed down the image transfer too much. We needed a certain refresh rate on images in order for our motor control algorithm to work.
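
To see why the extra hop hurt, a crude stop-and-wait cost model helps (the packet size and ACK latency are assumptions for illustration, not measurements from our setup):

```c
/* Rough stop-and-wait cost model: each JPEG packet must be ACKed
 * before the next is sent, so per-packet latency adds up on top of
 * the raw serial time. All inputs are illustrative. */
double transfer_ms(long image_bytes, long pkt_bytes, long baud, double ack_ms)
{
    long packets = (image_bytes + pkt_bytes - 1) / pkt_bytes;  /* ceil */
    double bits  = image_bytes * 10.0;   /* 8 data bits + start + stop */
    double wire  = bits / baud * 1000.0; /* raw serial time in ms */
    return wire + packets * ack_ms;      /* plus one ACK wait per packet */
}
```

At 57600 baud, 10 kB of image data already costs roughly 1.8 s of raw serial time, so any extra per-packet ACK latency introduced by routing through the micro eats directly into the image refresh rate.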

Best Answer

I understand that you wanted a development environment you were familiar with so you could hit the ground running, but I think the hardware/software trade-off boxed you in: by sticking with Arduino, you didn't pick a part that had all the hardware peripherals you needed, and you didn't write everything in interrupt-driven C.

I agree with @Matt Jenkins' suggestion and would like to expand on it.

I would've chosen a uC with 2 UARTs: one connected to the XBee and one connected to the camera. The uC accepts a command from the server to initiate a camera read, and a routine can be written to transfer data from the camera's UART to the XBee's UART on a byte-by-byte basis, so no buffer (or at most a very small one) is needed. I would've tried to eliminate the other uC altogether by picking a part that also covered all your PWM needs (8 PWM channels?). And if you wanted to stick with 2 uCs, each taking care of its own axis, then perhaps a different communications interface between them would've been better, since all your UARTs would be taken.

Someone else also suggested moving to an embedded linux platform to run everything (including openCV) and I think that would've been something to explore as well. I've been there before though, a 4 month school project and you just need to get it done ASAP, can't be stalled from paralysis by analysis - I hope it turned out OK for you though!


EDIT #1 In reply to comments @JGord:

I did a project that implemented UART forwarding with an ATmega164p, which has 2 UARTs. Here is an image from a logic analyzer capture (Saleae USB logic analyzer) of that project showing the UART forwarding: [logic analyzer capture]

The top line is the source data (in this case it would be your camera) and the bottom line is the UART channel being forwarded to (XBee in your case). The routine written to do this handled the UART receive interrupt. Now, would you believe that while this UART forwarding is going on you could happily configure your PWM channels and handle your I2C routines as well? Let me explain how.

Each UART peripheral (for my AVR anyways) is made up of a couple shift registers, a data register, and a control/status register. This hardware will do things on its own (assuming that you've already initialized the baud rate and such) without any of your intervention if either:

  1. A byte comes in or
  2. A byte is placed in its data register and flagged for output

Of importance here are the shift register and the data register. Let's suppose a byte is coming in on UART0 and we want to forward that traffic to the output of UART1. When a new byte has been shifted into the input shift register of UART0, it gets transferred to the UART0 data register and a UART0 receive interrupt fires. If you've written an ISR for it, you can take the byte in the UART0 data register, move it over to the UART1 data register, and then set the control register for UART1 to start transferring. That tells the UART1 peripheral to take whatever you just put into its data register, put it into its output shift register, and start shifting it out. From here, you can return from your ISR and go back to whatever task your uC was doing before it was interrupted. Now UART0, with its shift and data registers cleared, can start shifting in new data if it hasn't already done so during the ISR, and UART1 is shifting out the byte you just put into it - all of that happens on its own, without your intervention, while your uC is off doing some other task. The entire ISR takes microseconds to execute, since we're only moving 1 byte around some memory, and that leaves plenty of time to go off and do other things until the next byte on UART0 comes in (which takes hundreds of microseconds).
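
The ISR body really is just a couple of register moves. Here is the shape of it, with the AVR's memory-mapped data registers modeled as plain variables so the logic stands on its own (on real hardware the two variables would be the UDR0 and UDR1 registers, and the 'pending' flag stands in for the hardware starting its transmit shift):

```c
#include <stdint.h>

/* Stand-ins for the memory-mapped UART data registers. Modeling them
 * as variables lets the forwarding logic be shown and exercised
 * without touching real hardware. */
static volatile uint8_t udr0;       /* UART0 receive data register */
static volatile uint8_t udr1;      /* UART1 transmit data register */
static volatile int udr1_tx_pending; /* models UART1 beginning to shift out */

/* Body of the UART0 receive-complete ISR: move the freshly received
 * byte into UART1's data register and flag it for transmission. The
 * peripherals do the actual shifting; the CPU is free again in
 * microseconds. */
void uart0_rx_isr(void)
{
    udr1 = udr0;          /* one byte moved between data registers */
    udr1_tx_pending = 1;  /* UART1 takes over from here */
}
```

On a real part this function would be registered as the UART0 receive-complete interrupt vector; everything else (baud rate, enabling the receiver and transmitter) is one-time initialization.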

This is the beauty of having hardware peripherals - you just write into some memory mapped registers and it will take care of the rest from there and will signal for your attention through interrupts like the one I just explained above. This process will happen every time a new byte comes in on UART0.

Notice how there is only a delay of 1 byte in the logic capture as we're only ever "buffering" 1 byte if you want to think of it that way. I'm not sure how you've come up with your O(2N) estimation - I'm going to assume that you've housed the Arduino serial library functions in a blocking loop waiting for data. If we factor in the overhead of having to process a "read camera" command on the uC, the interrupt driven method is more like O(N+c) where c encompasses the single byte delay and the "read camera" instruction. This would be extremely small given that you're sending a large amount of data (image data right?).

All of this detail about the UART peripheral (and every other peripheral on the uC) is explained thoroughly in the datasheet, and it's all accessible from C. I don't know if the Arduino environment gives you register-level access like that - and that's the thing - if it doesn't, you're limited by its implementation. You are in control of everything if you've written it in C (even more so if done in assembly), and you can really push the microcontroller to its real potential.