First, your link does not work, so I have no idea what power supply you are using. Simply recreating your link on the web site produces no result. I'm assuming you were looking at their 30V/3A power supply, the PS300U3. This supply has no PWM setting, and if you applied 30 volts to your LED for more than 10 usec, yes, you killed it. As for applying 15 volts, I suspect that you had the current limit set to 40 mA. At that point your LED was dissipating 0.6 watts, and if you did that for long you would have killed that LED, too.
Looking at the current curve, a quick approximation for the voltage rise is to note that from 15 mA to 50 mA the nominal curve rises 0.1 volts. 1.15 / 0.1 is 11.5 volts, so a rough estimate suggests 12 volts at 1.2 amps. Note that this is a peak power of 14.4 watts, and with a 1% duty cycle the average power is 144 mW, which is reasonable, since 1.6 volts times 0.05 amps is 80 mW - the two are within a factor of 2.
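As a sanity check, here is that arithmetic written out as a small Python sketch (the 12 V / 1.2 A point is just the rough estimate above, not a datasheet value):

```python
# Pulsed vs. continuous power, using the rough numbers estimated above.
v_pulse, i_pulse = 12.0, 1.2   # volts, amps: the rough estimate at 1.2 A
duty = 0.01                    # 1% duty cycle
v_cont, i_cont = 1.6, 0.05     # volts, amps: continuous operation

p_peak = v_pulse * i_pulse           # 14.4 W during the pulse
p_avg_pulsed = p_peak * duty         # 144 mW averaged over the whole cycle
p_cont = v_cont * i_cont             # 80 mW continuous

print(f"Peak: {p_peak:.1f} W, pulsed average: {p_avg_pulsed * 1e3:.0f} mW, "
      f"continuous: {p_cont * 1e3:.0f} mW")   # averages agree within a factor of 2
```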
(1) Are LEDs able to take massive amounts of voltage when pulsed at
such short times, as long as the peak current stays below the limit?
Yes, indeed. Of course, you MUST keep the duration less than 10 usec, and the PWM frequency less than 1 kHz. Also, long term reliability may be bad. The data sheet just says keep the current below 50 mA, and if you want to do something else (like high-current pulses) you are free to do so. Just don't go crying to the manufacturer if the LED doesn't last long.
(2) As these voltage and current figures do not match the datasheet of
the LED, perhaps the lab source doesn't display the correct current
flow. How can we accurately measure this?
This is pretty straightforward. You make a setup like
(schematic created using CircuitLab: supply, series resistor R1, the LED with its terminals probed at V2 and V3, a current-sense point read on the scope as V1, and a FET switch driven by the gate pulse)
and monitor the voltages with an oscilloscope. A multimeter will not work.
You vary R1 while monitoring the scope V1 (1 volt equals 1 amp), and when you get a current you like, you can read the voltage across the LED (V2 minus V3). And whatever you do, don't use a pot for R1 - a 1 amp current will very likely burn the wiper. Turn power off, replace R1 with a different value, then turn power on again. Start with 50 ohms. Use 10 volts on the FET gate, and don't play with it. Make sure that the gate drive never stays high for more than 10 usec.
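If it helps to keep the bookkeeping straight, here is a rough Python sketch of the same procedure. The 1 ohm sense value (implied by the "1 volt equals 1 amp" reading), the 15 V supply setting, and the example scope readings are illustrative assumptions, not values taken from the schematic:

```python
# Back-of-envelope helper for the pulsed-current measurement described above.
# Assumptions for illustration: 1 ohm current sense, 15 V supply, made-up readings.

R_SENSE = 1.0      # ohms, assumed sense resistor: 1 V on the scope = 1 A
V_SUPPLY = 15.0    # volts, assumed supply setting

def led_current(v1):
    """LED current from the scope reading across the sense resistor."""
    return v1 / R_SENSE

def led_voltage(v2, v3):
    """LED forward voltage from the two probes straddling the LED."""
    return v2 - v3

def next_r1(v_led, i_target):
    """Rough R1 to try next for a desired pulse current (FET drop ignored)."""
    return (V_SUPPLY - v_led - i_target * R_SENSE) / i_target

# Example: during the pulse the scope shows V1 = 0.5 V, V2 = 2.7 V, V3 = 0.7 V.
i = led_current(0.5)          # 0.5 A through the LED
vf = led_voltage(2.7, 0.7)    # 2.0 V across the LED
print(f"I = {i:.2f} A, Vf = {vf:.2f} V; "
      f"for 1.2 A try R1 around {next_r1(vf, 1.2):.0f} ohms")
```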
(3) What's the longevity of the LED when you operate it at the limit?
Does it have enough time in between pulses to dissipate the heat that
is generated when running at 1%?
Absolutely no way to tell other than by doing it. Probably not great.
(4) Is it possible to get a peak current of 18A @ 1% duty cycle out of
a 3A source without blowing it up?
With a good, current-limited supply? No. It won't blow up, mind you. It just won't provide more than 3 amps. With a cheap, voltage-only supply and a narrow pulse width? Sure, especially if you put a big capacitor on the output. Of course, this requires that you are not trying to provide the pulses by commanding the power supply.
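For a feel of the numbers involved, here is a rough sketch; the 10,000 µF output capacitance is an assumed value for illustration only:

```python
# Rough feasibility check: can an output capacitor deliver 18 A pulses that
# a 3 A supply then tops back up between pulses? The capacitance is assumed.
C = 10_000e-6        # farads, assumed bulk/output capacitor
I_PEAK = 18.0        # amps, desired pulse current
T_PULSE = 10e-6      # seconds, pulse width
DUTY = 0.01          # 1% duty cycle

charge_per_pulse = I_PEAK * T_PULSE      # coulombs pulled from the capacitor
droop = charge_per_pulse / C             # voltage sag during one pulse
i_average = I_PEAK * DUTY                # what the supply must make up on average

print(f"Charge per pulse: {charge_per_pulse * 1e6:.0f} uC")        # 180 uC
print(f"Droop per pulse:  {droop * 1e3:.0f} mV")                   # ~18 mV
print(f"Average current:  {i_average * 1e3:.0f} mA (well under 3 A)")
```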
With all of this said, you are going about this the wrong way. You need to stop and think about what you are doing. At the very best, your average current per LED will be 1.2 amps x 1% (your duty cycle), or 12 mA. And I can guarantee that the efficiency of the LED will drop at higher current levels, so you will get even less than this in terms of brightness. An LED is not a light bulb, where the light output scales roughly with the electrical power in. You will get more brightness by driving each LED to a maximum of 40 mA. Not 50 mA. 50 is the manufacturer's absolute maximum, and driving any component to its rated maximum is a good way to get reduced reliability.
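The average-current arithmetic from that paragraph, spelled out:

```python
# Average LED current: narrow high-current pulses vs. steady drive.
i_pulse = 1.2          # amps, peak during the pulse
duty = 0.01            # 1% duty cycle
i_continuous = 0.040   # amps, continuous drive below the 50 mA absolute maximum

avg_pulsed = i_pulse * duty
print(f"Pulsed average:   {avg_pulsed * 1e3:.0f} mA")       # 12 mA
print(f"Continuous drive: {i_continuous * 1e3:.0f} mA")     # 40 mA
print(f"Continuous delivers {i_continuous / avg_pulsed:.1f}x the average current,")
print("before any high-current efficiency loss is even counted.")
```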
EDIT -
1) Power Supply - The problem with the link is that Velleman apparently does not sell that model in the US, so it is necessary to select a European country in order to see it. However, this doesn't matter; it's just a switching supply.
You have misunderstood the current limiting circuitry, though. You might do well to contact Velleman and ask for their specification on response time to a current limit event. It is probably in the range of 50 to 100 usec. Not only that, but the high ripple voltage (200 mV) suggests that they don't do anything special on their output. It is just an inductor/capacitor combination. This means that when you pulsed your LED, the output capacitor discharged immediately into your LED, and the supply provided a pretty good slug of current as well, while the current limit function never really engaged.
You need to follow mkeith's advice, and use a current limiting resistor in series with the LED.
2) Pulse Width - Your description of what you need is still unclear. As best I can understand it, you have an autonomous camera that takes pictures at 3 fps, and you are trying to provide IR illumination. At this point, you do not know exactly when each picture is taken or what the camera's shutter speed is.
If this is true, PWMing the LEDs is simply not appropriate. Yes, by running the LEDs continuously you will waste power by illuminating the target area when the camera is not using the illumination. However, since you don't know when that is, there is no sense worrying about it. Just run the LEDs at 40 mA and be done with it. Consider the situation where the camera takes 3 frames per second with a shutter speed of 1/100 second. If the LEDs are simply run continuously, each exposure will use only 0.01/0.33, or about 3%, of the available light. If the LED is being PWM'd at 1 kHz, a single exposure will only use 10 pulses' worth of light out of the 333 which occur during 1/3 of a second. Efficiency is 10/333, or about 3%.
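The same bookkeeping as a short sketch, using the hypothetical 3 fps / 1/100 s numbers from above:

```python
# Fraction of emitted light that lands inside an exposure for an
# unsynchronized camera at 3 fps with a 1/100 s shutter.
frame_period = 1 / 3       # seconds between frames
shutter = 1 / 100          # seconds the shutter is open

# Continuous illumination: light is only useful while the shutter is open.
print(f"Continuous: {shutter / frame_period:.1%} of the light is used")

# 1 kHz PWM, still unsynchronized: about 10 pulses land in the exposure
# out of about 333 pulses emitted per frame period.
pwm_freq = 1_000
pulses_in_exposure = shutter * pwm_freq
pulses_per_frame = frame_period * pwm_freq
print(f"1 kHz PWM:  {pulses_in_exposure / pulses_per_frame:.1%} of the pulses are used")
```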
On the other hand, let's say you can either provide the shutter drive, or look at the camera data to determine when the camera has finished acquiring an image. This still does not tell you what the shutter speed is, so you cannot tell how short a pulse you need.
Note that the pulse condition (10 usec @ 1% duty cycle) says that as long as the shutter speed is greater than 1 msec, continuous illumination is the way to go. Like I said earlier, 1% of 1.2 amps is 12 mA, and 40 mA average for continuous is more than 3 times better, regardless of efficiency drops. The only exception to this is if you need shorter exposure times. If the camera shutter speed is less than about 300 usec, then pulsing the LED can be considered. And it's also possible to consider using very short LED pulses as a strobe light to freeze high-speed motion.
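Here is where the roughly 300 usec crossover comes from, assuming the pulse could be synchronized to land inside the exposure:

```python
# Charge (and so, roughly, light) delivered into a single exposure:
# one synchronized 1.2 A x 10 us pulse vs. plain 40 mA continuous drive.
i_pulse, t_pulse = 1.2, 10e-6   # amps, seconds
i_cont = 0.040                  # amps

pulse_charge = i_pulse * t_pulse           # 12 uC per synchronized pulse
breakeven = pulse_charge / i_cont          # exposure at which continuous catches up
print(f"Break-even exposure: {breakeven * 1e6:.0f} us")   # 300 us

# Exposures shorter than this favor the synchronized pulse; anything longer
# favors just leaving the LED on at 40 mA.
```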
3) Efficiency - Efficiency is measured as optical output versus current, and all LEDs show a peak efficiency at (typically) a few mA. An article on the subject: http://www.electronicsweekly.com/news/components/led-lighting/provred-why-led-efficiency-drops-at-high-current-2013-08/. And here http://www.tech-led.com/data/L940-66-60-550.pdf is the spec sheet on a high-current illuminator. Note that the efficiency (mW/mA) is 0.875 at 700 mA and 0.800 at 5 A.
4) Voltage Drop - While your specific LED does not have a high-current spec for Vf, http://www.adafruit.com/datasheets/IR333_A_datasheet.pdf is probably a pretty good guide. The material (GaAlAs) is the same.
Best Answer
Hang on... 'cause we 'bout tuh answerin' completely!
The short answer is: some history and some technology (also, 38kHz is not the standard frequency, but one channel of many allowed by APA in that same band).
First, Question #2
Consumer applications for wireless remote control technology first appeared in 1955, when Zenith, Inc. founder-president Eugene F. McDonald Jr. yearned for a wireless remote control that would mute the sound of commercials. So convinced was he of the imminent demise of commercial television that he viewed the development of the “Flash-matic,” designed by his engineer Eugene Polley, as a temporary fix until subscription television arrived.
The “Flash-matic” was the first wireless remote control widely sold to the public. This television controller consisted of a focused flashlight that the user aimed at one of four photo-sensors positioned in the four corners of the television set: one corner controlled volume muting, one controlled power, and the last two changed the channel.
However, its unmodulated visible light approach worked poorly during the day when incident sunlight would randomly turn on the television or cause other unwanted behaviors from the set.
Accordingly, Zenith released an improved controller in 1956 based on a design by the now widely revered “father of remote control,” Dr. Robert Adler (Dr. Adler went on to hold over 180 patents worldwide, including critical breakthroughs in vacuum-tube technology).
Dr. Adler’s remote, sold under the trade name “Space Command”, was based on ultrasonics (sound frequencies above the range of human hearing) and did not require batteries in the handheld unit.
A set of lightweight aluminum bars, akin to tuning forks, was struck individually when the user pressed the button positioned above each bar.
Because the button had to strike the bar without damping it in order to maximize its audio output, a snap-action switch was used, giving the remote its affectionate nickname, “the clicker”.
The bar would ring when struck producing a fundamental pitch in the near-ultrasonic audio spectrum (20kHz-40kHz).
In Adler's television, an audio transducer fanned out to a set of six vacuum tubes forming a bank of bandpass filters to decode the incoming signal and discern which button the user had pressed (i.e. which bell they had rung in the remote).
Despite its technical achievement, Adler’s ultrasonic remote control saw slow consumer adoption in the late 1950s, as the additional vacuum tubes raised the market price of the television set by 30%.
Although invented in 1947 by Dr. William Shockley, John Bardeen, and Walter Brattain, the transistor did not begin appearing in consumer products until the early 1960s. The advent of the transistor (a solid-state semiconductor device) brought about dramatic reductions in the cost to manufacture remote control electronics. Adler’s ultrasonic design was reborn as a battery-powered electronic version and gained widespread use. More than 9 million ultrasonic remote controls had been sold by 1981.
This large-scale adoption created market pressures for new products with ever greater capabilities and convenience. Engineers, in turn, began demanding greater range, longer battery life, and greater control over a larger number of features (i.e. more buttons on the remote) from their remote control interfaces, all the while reducing the cost to manufacture the assembly.
Advances in semiconductor transistors occurring in parallel with semiconductor-based optoelectronics brought about the modern infrared (IR) remote, the first commercial product appearing in 1978.
The electronics companies Plessey and Philips, both of which had divisions specializing in semiconductors, were the earliest manufacturers of chips that contained the entire IR transmitter and receiver. Unfortunately, they failed to predict the multiplicity and popularity of the medium and assumed that only one receiver would be in range of a remote at any given time.
Their protocols and modulation schemes made no attempt to distinguish one receiver from another.
In July of 1987, the Appliance Product Association (APA) standardized the protocol used by most commercial IR remote controls. The standard was subsequently adopted in Japan by the AEHA, a government consumer product regulatory authority, and Philips began a product registration service to further ensure correct operation across multiple vendors and devices.
However, collaborative projects always branch, and by 2000 more than 99 percent of all TV sets and 100 percent of all VCRs and DVD players sold in the United States were equipped with IR remote control based on one of five major variants of the APA protocol.
All variants specified a fixed carrier frequency, typically somewhere between 33 and 40 kHz or 50 to 60 kHz. The most commonly used protocol is the NEC protocol, which specifies a carrier frequency of 38 kHz.
This has led to a plethora of commodity-priced IR emitters, detectors, demodulators, and encoders that are manufactured in staggering quantities.
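To make the "38 kHz carrier plus protocol" idea concrete, here is a small Python sketch that builds the mark/space timing list for one NEC-style frame. The timing constants are the commonly published NEC figures and the code is illustrative only, not a normative implementation:

```python
# Sketch of an NEC-style IR frame: the 38 kHz carrier is switched on ("mark")
# and off ("space") for these durations. Commonly published NEC timings;
# illustrative only, not a normative reference.
UNIT = 562.5e-6           # seconds, basic burst unit
LEADER = (9e-3, 4.5e-3)   # 9 ms leader mark, 4.5 ms space

def nec_frame(address: int, command: int):
    """Return (mark_s, space_s) pairs for one frame: leader, then address,
    ~address, command, ~command (each byte sent LSB first), then a final burst."""
    pairs = [LEADER]
    for byte in (address, address ^ 0xFF, command, command ^ 0xFF):
        for bit in range(8):
            if (byte >> bit) & 1:
                pairs.append((UNIT, 3 * UNIT))   # logical 1: short mark, long space
            else:
                pairs.append((UNIT, UNIT))       # logical 0: short mark, short space
    pairs.append((UNIT, 0.0))                    # trailing burst closes the frame
    return pairs

frame = nec_frame(0x00, 0x45)
print(f"{len(frame)} bursts, frame length about "
      f"{sum(m + s for m, s in frame) * 1e3:.1f} ms")
```

A demodulating receiver (like the Sharp parts mentioned further down) strips the 38 kHz carrier and hands essentially this mark/space pattern to whatever does the decoding.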
Back to Question #1...
Nope. They modulate data for three reasons:
As to transmit power, many IR transmitters are, in fact, designed to their pulsed limits (which can be 10x-100x higher than their continuous limits). This allows the use of smaller/cheaper diodes.
On the receive side, the amount of transmit power it would take to "burn out" an IR receiver (diode, photo-transistor, or otherwise) is extreme and doesn't play a role in practical earth-based system design. That output power level would be dangerous to humans long before it would be dangerous to the silicon.
Finally, Question #3
Yes. If you are careful. This is typically how it is done. That said, you don't have to design/build this system yourself. There are plenty of commodity parts that do all the physical layer protocol handling (modulation/demodulation) for you.
Behold the Sharp GP1 family:
It's got the complete demodulating receiver in there! :)