Electronic – How to Efficiently Decode Non-Standard Serial Signal

design, embedded, microcontroller, serial, usb

I'm an undergraduate member of a research team working on a project that involves an RF-transmitting ASIC and a wireless receiver that should ultimately send data to a PC.

The receiver outputs a fast, continuous, asynchronous, non-standard serial signal (i.e., not SPI, I2C, UART, etc.), so my job is to write microcontroller software to interface the receiver to the computer. My current approach is to use edge-triggered interrupts to place the data in a circular buffer and to do the whole bit-by-bit decoding process in the main loop. The microcontroller must simultaneously output this data to the computer over USB (virtual COM port).

Here is a problem I'm having, and one I'm anticipating:

  1. I cannot process the buffered data fast enough, even with my quite powerful 72 MHz ARM Cortex-M3 processor. The bit rate is 400 kbps (2.5 µs/bit), which leaves only 180 cycles per bit, including both the decoding and the ISR itself (which has ~30 cycles of overhead, ouch!). The MCU also has to handle several other tasks, which it polls for in the main loop.

  2. The USB virtual COM port driver is also interrupt-based. This makes me almost certain that the driver will eventually keep the processor interrupted long enough to miss the 2.5 µs (180-cycle) window in which a bit may be transmitted. I am unsure how interrupt conflicts/races like this are normally resolved.

So the question is simply: what might one do to resolve these issues, or is this not the right approach at all? I'm willing to consider less software-centric approaches as well, for example a dedicated USB chip with some kind of hardware state machine for the decoding, but this is unfamiliar territory.

Best Answer

Another answer: Stop using interrupts.

People reach for interrupts too easily. Personally, I rarely use them, because they can waste a lot of time, as you are discovering.

It's often possible to write a main loop which polls everything so rapidly that its latency is within spec, and very little time is wasted.

for (;;)
{
    if (serial_bit_ready)
    {
        // shift serial bit into a byte
    }

    if (serial_byte_ready)
    {
        // decode serial data
    }

    if (enough_serial_bytes_available)
    {
        // more decoding
    }        

    if (usb_queue_not_empty)
    {
        // handle USB data
    }        
}

Some things in the loop may need to happen far more often than others, perhaps the incoming bits, for example. In that case, test for them more than once per pass, so that more of the processor's time is dedicated to that task.

for (;;)
{
    if (serial_bit_ready)
    {
        // shift serial bit into a byte
    }

    if (serial_byte_ready)
    {
        // decode serial data
    }

    if (serial_bit_ready)
    {
        // shift serial bit into a byte
    }

    if (enough_serial_bytes_available)
    {
        // more decoding
    }        

    if (serial_bit_ready)
    {
        // shift serial bit into a byte
    }

    if (usb_queue_not_empty)
    {
        // handle USB data
    }        
}

There might be some events for which the latency of this approach is too high, for example one that must be very accurately timed. In that case, put that one event on an interrupt, and keep everything else in the loop.