Maximizing IR range and peak current

Tags: digital modulation, infrared, modulation, photodiode, phototransistor

I ordered a bunch of infrared detector diodes (aka IR diodes) and infrared transmitters. Luckily, the detectors are blue (and don't respond to normal room light).

I am planning to order a bunch of infrared detector transistors (aka IR NPNs) because I hear they respond to transmitters farther away. I understand that IR diodes respond faster than IR NPNs, but from reading the IR NPN datasheet, I can put up with several milliseconds of response time in my application.

I have heard of transmitters and detectors modulating their data on a 38 kHz carrier. Since my IR components connect to a microcontroller, I should be able to create a custom modulation scheme entirely in software with no problem.

For a standard LED, the datasheet lists a "Max DC forward current" and a "Peak DC forward current". The former (normally 20 to 50 mA?) is lower than the latter (150 mA?). I tend to limit the current through all my LEDs to the Max DC forward current to avoid burning them out.

I'm curious…

  1. Do IR circuit designers modulate data at a fixed carrier frequency (such as 38 kHz) in order to drive the IR transmitter (diode) at the "Peak DC forward current" instead of the "Max DC forward current", as an attempt to maximize range without blowing the part up?

  2. Why was 38 kHz chosen as the standard frequency for modulating IR? Why not 1 MHz, or even a few hundred Hz?

  3. If the answer to question 1 is yes, then could I get away with sending a one-time short burst of raw data (the byte switches every millisecond) through an IR transmitter (diode) using the peak current instead of the max current?

I just feel that if I leave any type of LED or photodiode on for too long at peak current, it will blow up. I may be wrong. I'm tempted to run the phototransistor and photodiode at peak current to maximize IR range, but I don't know if I'm on the right track.

Best Answer

Hang on... 'cause we 'bout tuh answerin' completely!

Why was 38 kHz chosen as the standard frequency for modulating IR? Why not 1 MHz, or even a few hundred Hz?

The short answer is: some history and some technology (also, 38 kHz is not the standard frequency, but one of many channels the APA allows in that same band).

First, Question #2

Consumer applications for wireless remote control technology first appeared in 1955, when Zenith, Inc. founder-president Eugene F. McDonald Jr. yearned for a wireless remote control that would mute the sound of commercials. So convinced was he of the imminent demise of commercial television that he viewed the development of the “Flash-matic,” designed by his engineer Eugene Polley, as a temporary fix until subscription television arrived.

The “Flash-matic” was the first wireless remote control widely sold to the public. This television controller consisted of a focused electric flashlight that the user aimed at one of four photosensors positioned in the corners of the television set to control volume muting (one corner), power (one corner), and channel changes (the last two corners).

However, its unmodulated visible light approach worked poorly during the day when incident sunlight would randomly turn on the television or cause other unwanted behaviors from the set.

Accordingly, Zenith released an improved controller in 1956 based on a design by the now widely revered “father of remote control,” Dr. Robert Adler (Dr. Adler went on to hold over 180 patents worldwide, including critical breakthroughs in vacuum-tube technology).

Dr. Adler’s remote, sold under the trade name “Space Command”, was based on ultrasonics (sound frequencies above the range of human hearing) and did not require batteries in the handheld unit.

[photo: the Zenith Space Command remote]

A set of lightweight aluminum bars, akin to tuning forks, were individually struck when the user pressed the button positioned above each bar.

Because the button had to strike the bar without damping it in order to maximize its audio output, a snap-action switch was used, giving the remote its affectionate name, “the clicker”.

The bar would ring when struck, producing a fundamental pitch in the near-ultrasonic audio spectrum (20 kHz to 40 kHz).

We now interject!

Notice this frequency band?! The ultrasonic bell design will be "upgraded" to IR emitters later in the story. However, modulation in the ultrasonic audio band was retained so that the baseband processing section of the television receiver didn't have to be redesigned. Clever, right? That's why the APA (more on this later) specifies IR modulation frequencies between 20 kHz and 50 kHz.

...and we resume... ;-)

In Adler's television, an audio transducer fed a set of six vacuum tubes forming a bank of bandpass filters, which decoded the incoming signal and discerned which button the user had pressed (i.e. which bar they had rung in the remote).

Despite its technical achievement, Adler’s ultrasonic remote control saw slow consumer adoption in the late 1950s, as the additional vacuum tubes raised the market price of the television set by 30%.

Although invented in 1947 by Dr. William Shockley, John Bardeen, and Walter Brattain, the transistor did not begin appearing in consumer products until the early 1960s. The advent of the transistor (a solid-state semiconductor device) brought about dramatic reductions in the cost to manufacture remote control electronics. Adler’s ultrasonic design was reborn as a battery-powered electronic version and gained widespread use. More than 9 million ultrasonic remote controls were sold by 1981.

This large-scale adoption created market pressures for new products with ever greater capabilities and convenience. Engineers, in turn, began demanding greater range, longer battery life, and greater control over a larger number of features (i.e. more buttons on the remote) from their remote control interfaces, all the while reducing the cost to manufacture the assembly.

Advances in semiconductor transistors occurring in parallel with semiconductor-based optoelectronics brought about the modern infrared (IR) remote, the first commercial product appearing in 1978.

The electronics companies Plessey and Philips, both of which had divisions specializing in semiconductors, were the earliest manufacturers of chips that contained the entire IR transmitter and receiver. Unfortunately, they failed to predict the multiplicity and popularity of the medium and assumed that only one receiver would be in range of a remote at any given time.

Their protocols and modulation schemes made no attempt to distinguish one receiver from another.

In July of 1987, the Appliance Product Association (APA) standardized the protocol used by most commercial IR remote controls. The standard was subsequently adopted in Japan by the AEHA, a government consumer-product regulatory authority, and Philips began a product registration service to further ensure correct operation across multiple vendors and devices.

However, collaborative projects always branch, and by 2000 more than 99 percent of all TV sets and 100 percent of all VCRs and DVD players sold in the United States were equipped with IR remote controls based on one of five major variants of the APA protocol.

All variants specified a fixed carrier frequency, typically somewhere between 33 and 40 kHz or 50 to 60 kHz. The most commonly used protocol is the NEC protocol, which specifies a carrier frequency of 38 kHz.

  • The NEC protocol is used by the vast majority of Japanese-manufactured consumer electronics.
  • The Philips RC-5 and RC-6 protocols both specify a carrier frequency of 36 kHz. However, the early RC-5 encoding chips divided the master frequency of the 4-bit microcontroller by 12. This required a ceramic resonator of 432 kHz to achieve a 36 kHz carrier, which was not widely available. Many companies therefore used a 455 kHz ceramic resonator, which is commonplace due to that frequency being used in the intermediate frequency stages of AM broadcasting radios, resulting in a carrier frequency of 37.92 kHz (essentially 38 kHz). Even documentation for Philips' own controller chips recommended an easier-to-obtain 429 kHz ceramic resonator, yielding a carrier frequency of 35.75 kHz.
  • Modern IR transmitters typically use 8-bit microcontrollers with a 4 MHz master clock frequency, allowing a nearly arbitrary selection of the carrier frequency.
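
As a quick check on the divider arithmetic in the bullets above, here is a small sketch in plain C (desktop-compilable, nothing vendor-specific) that reproduces the carrier frequencies that fall out of dividing each ceramic-resonator frequency by 12.

```c
#include <stdio.h>

int main(void)
{
    /* RC-5-era encoder chips derived the IR carrier by dividing the
       ceramic resonator frequency by 12 (per the figures above). */
    const double divider = 12.0;
    const double resonators_khz[] = { 432.0, 455.0, 429.0 };

    for (int i = 0; i < 3; i++) {
        printf("%5.0f kHz resonator / %.0f  ->  %.2f kHz carrier\n",
               resonators_khz[i], divider, resonators_khz[i] / divider);
    }
    /* Prints 36.00, 37.92, and 35.75 kHz respectively. */
    return 0;
}
```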

So you see...

Mechanical ultrasonic audio became electrical ultrasonic audio became infrared light modulated at ultrasonic rates. All the while, the processing electronics could evolve without disturbing the other parties and could remain compatible with their immediate legacy counterparts.

Additionally, operating at MHz-scale modulation frequencies would severely curtail range and increase cost, as it would require shorter accumulation times in the receiver, making the receiver more vulnerable to noise (lower signal per pulse, ergo lower SNR).
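
To put a rough number on that "lower signal per pulse" point, here is a toy comparison in C. The 10 mW average optical power is an arbitrary illustrative figure, not from any datasheet; the point is only the ratio between the two carriers.

```c
#include <stdio.h>

int main(void)
{
    /* Toy model: same average optical power at either carrier frequency,
       so the energy available to integrate per carrier cycle scales as 1/f. */
    const double avg_power_w   = 10e-3;          /* assumed 10 mW average */
    const double carriers_hz[] = { 38e3, 1e6 };

    for (int i = 0; i < 2; i++) {
        double cycle_s   = 1.0 / carriers_hz[i];
        double energy_nj = avg_power_w * cycle_s * 1e9;
        printf("%8.0f Hz carrier: %6.1f nJ per carrier cycle\n",
               carriers_hz[i], energy_nj);
    }
    /* ~263 nJ per cycle at 38 kHz vs. ~10 nJ at 1 MHz: each pulse at 1 MHz
       carries roughly 26x less energy, so per-pulse SNR suffers unless the
       transmitter makes up for it with more power (cost, battery life). */
    return 0;
}
```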

This has led to a plethora of commodity-priced IR emitters, detectors, demodulators, and encoders that are manufactured in staggering quantities.

Back to Question #1...

Do IR circuit designers modulate data at a fixed carrier frequency (such as 38 kHz) in order to drive the IR transmitter (diode) at the "Peak DC forward current" instead of the "Max DC forward current", as an attempt to maximize range without blowing the part up?

Nope. They modulate data for three reasons:

  1. Eliminate false positives. Unlike the first remote controls (flashlights), modern IR communication uses modulation to make the transmitted signal look as unlike naturally occurring light as possible. This makes it extremely unlikely that sunlight or other phenomena will be mistaken by the receiver for data sent by the operator.
  2. Expand the code space. Using modulation means that many different commands can be transmitted on the same channel and the receiver can distinguish them. The data is now represented by a combination of carrier, sub-carrier, timing, and sequence. There are many valid unique combinations, which allows volume up and volume down to be sent with the same hardware in the same environment -- unlike the early remotes, where your command was just signal present or absent. (A minimal sketch of this carrier-plus-timing idea follows this list.)
  3. Deconflict devices. Using modulation enables different devices and different manufacturers to identify to whom they intend to transmit. This prevents the early-remote-control experience where turning on your TV might turn off your VCR.
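
Here is that sketch: a minimal bit-banged transmitter in C showing how a data bit becomes a 38 kHz carrier burst plus a timed gap. The functions ir_pin_write() and delay_us() are hypothetical stand-ins for whatever GPIO and delay primitives your microcontroller toolchain provides, and the 562 µs mark/space unit is borrowed loosely from NEC-style pulse-distance timing rather than any vendor's reference code.

```c
#include <stdint.h>

/* Hypothetical hardware-abstraction hooks -- replace with your MCU's own
   GPIO and delay primitives. */
extern void ir_pin_write(int level);   /* drive the IR LED transistor on/off */
extern void delay_us(uint32_t us);     /* busy-wait for N microseconds       */

/* Emit the 38 kHz carrier for 'duration_us' microseconds (one "mark").
   One carrier cycle is ~26 us: ~13 us on, ~13 us off. */
static void mark(uint32_t duration_us)
{
    for (uint32_t t = 0; t < duration_us; t += 26) {
        ir_pin_write(1);
        delay_us(13);
        ir_pin_write(0);
        delay_us(13);
    }
}

/* A "space" is simply the LED held off. */
static void space(uint32_t duration_us)
{
    ir_pin_write(0);
    delay_us(duration_us);
}

/* Send one byte, LSB first, using NEC-style pulse-distance coding:
   a fixed 562 us mark, then a short space for '0' or a long space for '1'. */
void ir_send_byte(uint8_t value)
{
    for (int bit = 0; bit < 8; bit++) {
        mark(562);
        space((value & (1u << bit)) ? 1687 : 562);
    }
}
```

A receiver with a bandpass filter at 38 kHz only responds to the carrier inside the marks, which is what makes sunlight and lamp light invisible to it (reason 1), while the mark/space timing carries the actual bits (reason 2).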

As to transmit power, many IR transmitters are, in fact, driven at their pulsed limits (which can be 10x-100x higher than their continuous limits). This allows the use of smaller, cheaper diodes.
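
If you want to sanity-check a pulsed drive scheme against the ratings, the arithmetic is just the peak current times the fraction of time the LED is actually on. A hedged sketch, with made-up example numbers rather than any specific part's limits:

```c
#include <stdio.h>

int main(void)
{
    /* Example figures only -- substitute the limits from your emitter's datasheet. */
    const double peak_current_ma   = 200.0; /* drive level during a carrier "on" half-cycle  */
    const double carrier_duty      = 0.5;   /* 38 kHz carrier: LED on for half of each cycle */
    const double burst_duty        = 0.25;  /* fraction of the frame spent sending marks     */
    const double max_dc_current_ma = 50.0;  /* continuous ("Max DC forward current") rating  */

    double avg_current_ma = peak_current_ma * carrier_duty * burst_duty;

    /* Note: datasheets also cap the width of any single pulse at the peak
       rating -- check that separately against your longest mark. */
    printf("average current: %.1f mA vs. %.1f mA continuous rating -> %s\n",
           avg_current_ma, max_dc_current_ma,
           avg_current_ma <= max_dc_current_ma ? "OK" : "too hot");
    return 0;
}
```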

On the receive side, the amount of transmit power it would take to "burn out" an IR receiver (diode, photo-transistor, or otherwise) is extreme and doesn't play a role in practical earth-based system design. That output power level would be dangerous to humans long before it would be dangerous to the silicon.

Finally, Question #3

If the answer to question 1 is yes, then could I get away with sending a one-time short burst of raw data (the byte switches every millisecond) through an IR transmitter (diode) using the peak current instead of the max current?

Yes, if you are careful. This is typically how it is done. That said, you don't have to design and build this system yourself: there are plenty of commodity parts that do all the physical-layer protocol handling (modulation/demodulation) for you.

Behold the Sharp GP1 family:

[photo: a Sharp GP1-series IR receiver module]

It's got the complete demodulating receiver in there! :)
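
With a module like this, the firmware side shrinks to watching one digital pin. A hedged sketch in C, assuming the common behavior of such receivers (output idles high and pulls low while a carrier burst is present); ir_rx_pin_read() and micros() are hypothetical stand-ins for your own GPIO and timer functions:

```c
#include <stdint.h>

/* Hypothetical hooks for your MCU -- the receiver module itself just gives
   you one digital output (typically idle-high, low while a burst is seen). */
extern int      ir_rx_pin_read(void);   /* 1 = idle, 0 = carrier detected */
extern uint32_t micros(void);           /* free-running microsecond timer */

/* Measure how long the demodulated output stays low (one received "mark").
   Returns the mark length in microseconds, or 0 on timeout. */
uint32_t ir_measure_mark(uint32_t timeout_us)
{
    uint32_t start = micros();

    /* Wait for the output to go low (start of a burst). */
    while (ir_rx_pin_read() != 0) {
        if (micros() - start > timeout_us)
            return 0;
    }

    uint32_t mark_start = micros();

    /* Wait for it to go high again (end of the burst). */
    while (ir_rx_pin_read() == 0) {
        if (micros() - mark_start > timeout_us)
            return 0;
    }

    return micros() - mark_start;
}
```

From there, classifying the measured mark and space lengths against your protocol's timing table recovers the bits.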
