Many people strongly prefer fixed-length commands.
If you need variable-length commands, you need to think about:
- Is it possible for accidental noise (or malicious pranksters) to generate something that will overflow my receive buffer?
- If a little bit of noise flips the high bit of the "length" field of a short packet (making it appear to be a long packet), does my system properly recover?
- bad: the microcontroller stops everything, waiting for the rest of what it thinks is a "long" packet. Several days later, after many frantic attempts to send it messages saying "deploy the airbags", "lower the control rods", "Open the pod bay doors, HAL", etc., it eventually gets all the bytes it expected, sees that the checksum fails, throws away this "long packet", and starts working normally again.
- good: after a minute or some other reasonable time, the microcontroller times out, erases everything in the command buffer (possibly discarding a few "deploy the airbags" messages after the length-corrupted message), and starts working normally again.
- good: The microcontroller immediately recognizes there is some error in the header (which includes the start byte, the length field, and the header's checksum), and immediately throws away that header and begins searching for the start byte again. The microcontroller immediately starts working normally again, recognizing the first valid command (the header's checksum is good, and also the data's checksum is good) after the corrupted command.
- Does the microcontroller immediately send one ACK after it sees that the (final) checksum of the packet is good, and then execute the command, and finally send back a response to that command? Or is it better to send only the ACK or only the response? Which one makes it easier to debug?
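The last "good" option above can be sketched as a small receive state machine. The frame format here is a hypothetical one invented for illustration (0x7E start byte, one length byte, a header checksum, and an XOR checksum over the payload); the original text does not specify a format. The point is that a bad header is rejected immediately, so a corrupted length byte cannot stall the receiver:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical frame layout (an assumption, not from the text above):
   [0x7E start][len][hdr_cksum = 0x7E ^ len][payload...][pay_cksum = XOR of payload] */

#define START_BYTE  0x7E
#define MAX_PAYLOAD 32

typedef enum { WAIT_START, WAIT_LEN, WAIT_HDR_CK, WAIT_PAYLOAD, WAIT_PAY_CK } rx_state_t;

typedef struct {
    rx_state_t state;
    uint8_t len, got, buf[MAX_PAYLOAD];
} rx_t;

/* Feed one received byte; returns 1 when a complete, valid command is in rx->buf. */
int rx_byte(rx_t *rx, uint8_t b)
{
    switch (rx->state) {
    case WAIT_START:
        if (b == START_BYTE) rx->state = WAIT_LEN;
        break;
    case WAIT_LEN:
        if (b > MAX_PAYLOAD) { rx->state = WAIT_START; break; } /* absurd length: resync now */
        rx->len = b;
        rx->state = WAIT_HDR_CK;
        break;
    case WAIT_HDR_CK:
        if (b == (uint8_t)(START_BYTE ^ rx->len)) {   /* header checksum good */
            rx->got = 0;
            rx->state = rx->len ? WAIT_PAYLOAD : WAIT_PAY_CK;
        } else {
            rx->state = WAIT_START;  /* bad header: throw it away, hunt for start byte */
        }
        break;
    case WAIT_PAYLOAD:
        rx->buf[rx->got++] = b;
        if (rx->got == rx->len) rx->state = WAIT_PAY_CK;
        break;
    case WAIT_PAY_CK: {
        uint8_t ck = 0;
        for (uint8_t i = 0; i < rx->len; i++) ck ^= rx->buf[i];
        rx->state = WAIT_START;       /* either way, go back to hunting */
        if (ck == b) return 1;        /* valid command received */
        break;                        /* bad data checksum: discard the packet */
    }
    }
    return 0;
}
```

A timeout (the first "good" option) would be one extra field: a counter reset on every byte, and an interrupt that forces `state` back to `WAIT_START` when it expires.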
General protocol design tips
There are many more or less simple protocols listed in the "Serial Programming" Wikibook.
If you're lucky, perhaps one of them is already perfect for your application.
Or at least close enough that it only requires a little tweaking to fit.
Pretty much everyone who successfully develops a new protocol goes through these phases:
- 1 I may not know much, but reading other people's protocols gives me a headache. It looks like a bunch of unnecessary complexity. Let's dump all that stuff and build a nice, simple protocol.
- 2 It's not working. Maybe if I add [feature 1] then it will work.
- 3 Better, but still not working very well. Perhaps add [feature 2]?
- 98 Finally it's working. I'd better write down who does what in my simple protocol, so that if I need to switch to a different microcontroller or want to write a better program on the PC, I'll remember how it goes.
- 99 Funny how my simple protocol takes so many pages to describe.
It sounds like your question is, how do you go from a UART device to something that can be plugged into a microphone jack.
The question "What protocol does a headphone jack use?" should help you understand what is being done on the existing device you are talking about.
There is still a missing link for you, though: turning the UART data into the audio signal itself. The easiest way to do this is to buy a microcontroller that you place between the UART device and the Android device. All the microcontroller will do is read in any UART data and then convert it to whatever form you want the headphone jack to receive. You may also need to implement some handshaking between the microcontroller and the phone, so the phone can confirm that the device it expects is actually plugged in.
There is also the issue of getting power to the device. A headphone jack is in no way designed to be used as a power source. The easiest method would be to just slap a battery on the device. If you wanted to, you could potentially do some clever tricks, such as playing audio at full volume for some period of time and having your device charge up a capacitor, then running your device off the power stored in the capacitor. This sounds rather tricky to me, though, and I would personally just go with a battery.
Your first two links are, simply, wrong. A UART is a piece of hardware that can implement a number of different protocols used to frame asynchronous data streams. The U stands for "Universal", and while that is effectively correct, there is no reason a protocol could not be devised that confounds the present population of UARTs - other than the fact that it's not worth the effort.
The different protocols use different numbers of bits for the start and stop conditions, the presence or absence of a parity bit (and its polarity), and different frame data lengths. Typically you can specify 5, 6, 7, or 8 data bits per frame. If someone were to insist that his/her data must be formatted into 4-bit frames, no existing UART chip would be able to handle it.
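To make those framing knobs concrete, here is a hypothetical software model of frame assembly. The function and parameter names are made up for illustration; note that it rejects 4-bit frames, just as real UART chips do:

```c
#include <stdint.h>

typedef enum { PARITY_NONE, PARITY_EVEN, PARITY_ODD } parity_t;

/* Fills bits[] with: start bit (0), data bits LSB-first, an optional parity
   bit, then stop bit(s) (1). Returns the frame length in bits, or -1 if the
   requested format is outside what a typical UART supports. */
int build_frame(uint8_t data, int data_bits, parity_t parity, int stop_bits, int bits[])
{
    if (data_bits < 5 || data_bits > 8 || stop_bits < 1 || stop_bits > 2)
        return -1;   /* e.g. a 4-bit frame: no UART chip will do this */

    int n = 0, ones = 0;
    bits[n++] = 0;                                  /* start bit */
    for (int i = 0; i < data_bits; i++) {
        int b = (data >> i) & 1;
        ones += b;
        bits[n++] = b;
    }
    if (parity == PARITY_EVEN)
        bits[n++] = ones & 1;                       /* make the 1s count even */
    else if (parity == PARITY_ODD)
        bits[n++] = !(ones & 1);                    /* make the 1s count odd */
    for (int i = 0; i < stop_bits; i++)
        bits[n++] = 1;                              /* stop bit(s) */
    return n;
}
```

The receiving end must be configured with exactly the same data length, parity, and stop-bit settings, or every frame will be misread.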
In part, this is a matter of definition. Merriam-Webster, for instance, defines protocol (in this context) as
Note that the hardware implementation is not part of the definition.