The feasibility of transferring live video over Bluetooth from an Arduino Mega is low but not zero, being constrained by the following:
- Bluetooth practical throughput limitations:
- Bluetooth 1.2 = ~700 kbit/s
- Bluetooth 2.0+EDR = ~2.1 Mbit/s
- Bluetooth 3.0+HS and 4.0: these use a separate wireless path (e.g. 802.11, i.e. WiFi) for the high-speed data, so they are not considered here.
- Low-resolution (VGA, 256-color) live video needs at least 200 kbit/s of sustained bandwidth; HD needs 2 Mbit/s or more. That is why there aren't many live-video-streaming Bluetooth gizmos for smartphones yet.
- Work-around: Use a WiFi shield instead of Bluetooth for communication.
- Arduino Mega limitations:
- Capturing, processing, and compressing live video in real time, even at VGA resolution (640 x 480 pixels), is going to be quite a challenge for the ATmega2560 microcontroller, if it can be done at all.
- Memory (RAM, Flash, whatever) will be another challenge: a single frame at VGA 256-color resolution requires over 300 kB for the frame buffer alone, and twice that at higher color depth. For MJPEG or other encoding/compression, a minimum of 2 x the frame buffer size would be needed for processing. This necessitates adding an external memory solution to the Arduino Mega.
- Work-around: Perhaps an external shield with video capture and compression, with an on-board DSP and frame-buffer RAM, could be used, if you can find one.
- In which case, the Arduino Mega isn't really needed any more.
- Are there Bluetooth modules, XBee / ZigBee modules, or shields, which can sustain the maximum throughput rates noted above? If there are, that would be interesting to know.
- Android Phone constraints:
- Does the current Android OS release support video endpoints over Bluetooth yet? If not, low-level code will be required on the Android side just to retrieve the video stream data.
- The processing requirements for displaying such an incoming raw Bluetooth video stream would require hefty batteries, or would permit only a very short operating duration unless the phone is docked to a charger.
- Work-around: Use WiFi, stream from Arduino using a standard streaming video protocol, use a standard Android video player with streaming support to play the stream.
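To put the bandwidth constraint in concrete numbers, here is a small back-of-the-envelope helper in C (a sketch of mine, not part of any library; the link rates are the practical figures quoted above):

```c
/* Back-of-the-envelope check: can a raw (uncompressed) video stream
 * fit through a given wireless link?  Practical link rates from above:
 * Bluetooth 1.2 ~700 kbit/s, Bluetooth 2.0+EDR ~2.1 Mbit/s. */

/* Raw video bitrate in bits per second. */
unsigned long video_bps(unsigned w, unsigned h,
                        unsigned bytes_per_px, unsigned fps)
{
    return (unsigned long)w * h * bytes_per_px * 8UL * fps;
}

/* Returns 1 if the stream fits in the link without compression. */
int fits(unsigned long link_bps, unsigned long stream_bps)
{
    return stream_bps <= link_bps;
}
```

Even 8-bit (256-color) VGA at a modest 15fps works out to 640 * 480 * 1 * 8 * 15 ≈ 36.9 Mbit/s of raw data, more than an order of magnitude beyond Bluetooth 2.0+EDR, which is why compression or a WiFi link is unavoidable.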
As is evident from the points above, the requirement is feasible as long as the constraints are accepted: very low resolution, low color depth, low-frame-rate video, OR all video processing offloaded to a DSP daughterboard more powerful than the Arduino itself, with its own on-board wireless connectivity.
That last is the work-around the question asks for.
Whether this is a practical approach at all is open to debate.
Assuming you can sustain the maximum data rate of the nRF24L01 (2 megabits per second), that means you can move, in a perfect world with no overhead, 250 kilobytes per second.
So given this, and your desired minimum of 24fps, you can calculate just how many bytes you have for each image: 250K / 24 ≈ 10.4K
per frame
Now you haven't said what resolution you want, but the maximum resolution of the OV7670 is 640x480, and it uses 16 bits per pixel (it is a little more complicated than that; I invite the curious to read the Data Sheet).
As our calculators all know 640 * 480 * 2 = 614,400
Bytes, about 59 times that 10.4K per frame. In fact it would take in the ballpark of 2.5 seconds per frame (5 seconds if running at 1 Mbps).
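This budget arithmetic can be double-checked in code. A small C sketch (function names are mine; zero protocol overhead assumed):

```c
/* Per-frame byte budget for a given link rate and frame rate,
 * assuming zero protocol overhead. */
unsigned long bytes_per_sec(unsigned long link_bps)
{
    return link_bps / 8;
}

unsigned long frame_budget(unsigned long link_bps, unsigned fps)
{
    return bytes_per_sec(link_bps) / fps;
}

/* Raw frame size for the OV7670's 2-byte (16-bit) pixels. */
unsigned long frame_bytes(unsigned w, unsigned h)
{
    return (unsigned long)w * h * 2;
}
```

At 2 Mbit/s and 24fps, `frame_budget(2000000, 24)` gives 10416 bytes per frame, while `frame_bytes(640, 480)` gives 614400, so a raw VGA frame is roughly 59 times over budget.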
So to answer your first two questions: The nRF24L01 isn't up to the task of transmitting live 640x480 video.
So this leaves us with your third question: how do you reduce the size of the camera's data?
There are three (not mutually exclusive) ways of doing this:
- Compress the images
- Compress the data
- Send smaller images
Let us break each of these down:
Compress the images
You could, for example, send the images as a M-JPEG stream. This would certainly make decoding the images on the phone side much easier, and would reduce the size of the images sent quite a bit.
But there is one problem: you need to be able to buffer the image in memory in order to do JPEG (and thus M-JPEG) compression. Your STM32F103RET6 has 64K of RAM (IIRC), so there is no way a full 614K frame is going to fit. And I am not aware of a lossy compression scheme you could use that doesn't need the whole image at once.
Compress the data
Now there are a number of options you could use here: Huffman coding, run-length encoding, LZW, etc. Unfortunately, none of these produces a predictable amount of compression; it depends on the images you send.
But I think it is safe to say you aren't going to reliably hit the per-frame budget needed for 24fps.
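As an illustration of why the ratio is unpredictable, here is a minimal run-length encoder (a sketch of mine, not a recommendation of RLE specifically):

```c
/* Minimal run-length encoder: emits (count, value) byte pairs and
 * returns the encoded length.  out must be able to hold 2*n bytes
 * in the worst case. */
unsigned rle_encode(const unsigned char *in, unsigned n, unsigned char *out)
{
    unsigned i = 0, o = 0;
    while (i < n) {
        unsigned char v = in[i];
        unsigned run = 1;
        while (i + run < n && in[i + run] == v && run < 255)
            run++;
        out[o++] = (unsigned char)run;
        out[o++] = v;
        i += run;
    }
    return o;
}

/* A flat 64-byte row (all one value) encodes to just 2 bytes... */
unsigned rle_demo_flat(void)
{
    unsigned char in[64], out[128];
    for (unsigned i = 0; i < 64; i++) in[i] = 7;
    return rle_encode(in, 64, out);
}

/* ...but a noisy alternating row grows to 128 bytes, worse than raw. */
unsigned rle_demo_noisy(void)
{
    unsigned char in[64], out[128];
    for (unsigned i = 0; i < 64; i++) in[i] = (unsigned char)(i & 1);
    return rle_encode(in, 64, out);
}
```

The same encoder compresses one input 32:1 and expands the other 1:2, which is exactly why you cannot bank on a guaranteed ratio.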
Send smaller images
The OV7670 is rather flexible when it comes to resolutions. So let us take a look at some other resolutions you could use:
- QVGA (320x240):
320 * 240 * 2 = 150K
per frame. At this rate you could send just over 1fps
- QQVGA (160x120):
160 * 120 * 2 = 37K
per frame. This is the first image size you could store entirely in RAM
- QQQVGA (80x60):
80 * 60 * 2 = 9.38K
per frame. With compression you should be able to do 24fps video
- QQQQVGA (40x30):
40 * 30 * 2 = 2.34K
per frame. At this postage-stamp size you could stream at 30fps even at 1mbps! I believe this is the lowest resolution supported by the camera.
QQVGA may be possible if you do very lossy JPEG compression, and then do some compression on the data stream as well. You are going to have to experiment to be sure.
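The resolution trade-off above can be summarized with one helper (a sketch of mine, again assuming 16-bit pixels and no link overhead):

```c
/* Achievable raw (uncompressed) frame rate, truncated to a whole
 * number, for a w x h stream of 2-byte pixels over a link of
 * link_bps bits per second, ignoring protocol overhead. */
unsigned raw_fps(unsigned w, unsigned h, unsigned long link_bps)
{
    unsigned long frame = (unsigned long)w * h * 2;  /* bytes per frame */
    return (unsigned)((link_bps / 8) / frame);
}
```

For example, `raw_fps(320, 240, 2000000)` is 1, `raw_fps(80, 60, 2000000)` is 26, and even at 1 Mbps `raw_fps(40, 30, 1000000)` is 52, comfortably above 30fps.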
A Postscript
You are going to be hard pressed to find a faster wireless technology than the nRF24L01 (and its ilk) without going to WiFi (for example the Adafruit CC3000 module). With that, a microcontroller with a LOT of RAM, and compression, you should be able to stream 24-30fps.
Alternately, there are camera driver chips that do the JPEG compression for you (the VC0706, for example). With one of those attached to a camera, and using the VC0706's SPI link, even a trivial microcontroller should be able to transmit the data.
I have yet to meet a camera module with a VC0706 that exposes the SPI pins; they are all serial (UART). One may exist, but I haven't found it yet. So if you go down this route, you may have to roll your own board...
Best Answer
I often wonder why people don't read the datasheet. You will find that the AL422 has 384 KB of RAM, and 384 * 1024 < 640 * 480 * 2, so a full 16-bit frame does not fit. You need to get the module to output raw Bayer data, so that 384 * 1024 > 640 * 480, and then use a demosaicing algorithm to recover a full-color image. You have plenty of RAM and a fast enough processor to do that.
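The demosaicing step can be as simple as nearest-neighbour reconstruction. A minimal sketch of mine, assuming an RGGB Bayer layout (the OV7670's actual pattern depends on its register settings):

```c
/* Nearest-neighbour demosaic of an RGGB Bayer image.  Each 2x2
 * Bayer cell,
 *   R G
 *   G B
 * is reconstructed into one colour that fills all four output pixels.
 * in:  w*h bytes (one 8-bit sample per pixel)
 * out: w*h*3 bytes (RGB triplets); w and h must be even. */
void demosaic_rggb(const unsigned char *in, unsigned w, unsigned h,
                   unsigned char *out)
{
    for (unsigned y = 0; y < h; y += 2) {
        for (unsigned x = 0; x < w; x += 2) {
            unsigned char r = in[y * w + x];
            unsigned char g = in[y * w + x + 1];        /* top-right green */
            unsigned char b = in[(y + 1) * w + x + 1];
            /* fill the whole 2x2 cell with the reconstructed colour */
            for (unsigned dy = 0; dy < 2; dy++)
                for (unsigned dx = 0; dx < 2; dx++) {
                    unsigned char *p = out + ((y + dy) * w + (x + dx)) * 3;
                    p[0] = r; p[1] = g; p[2] = b;
                }
        }
    }
}
```

This trades image quality for memory and speed: it never needs more than two rows of Bayer data in flight, which is what makes it viable on a small microcontroller. Proper bilinear or edge-aware demosaicing gives better results if you have the cycles.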