How can I determine the optimum carrier frequency for a given bandwidth and BER [for an OOK optical communication system]?
There is no optimum carrier frequency.
It would be nice if you could avoid frequencies already in use -- for optical communication,
most of the time that's only the constant light from the sun,
100 Hz / 120 Hz light coming from ancient flickery fluorescent lights, and
30,000 Hz - 40,000 Hz light coming from modern fluorescent lights.
What is the best demodulator circuit for this?
I'm not sure you even want a demodulator.
Perhaps you can more-or-less directly transmit Manchester-coded data,
then recover with a high-pass filter (to remove sunlight) and a comparator.
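As an illustration of that approach, here is a minimal Python sketch of Manchester encoding and threshold recovery (the photodiode front end, high-pass filter, and comparator are of course not modeled; the function names and the 802.3-style polarity convention are my own choices):

```python
def manchester_encode(bits):
    """One common (IEEE 802.3-style) convention: 0 -> high-then-low, 1 -> low-then-high."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(halfbits):
    """Recover each bit by comparing the two halves of its bit period."""
    return [1 if halfbits[i] < halfbits[i + 1] else 0
            for i in range(0, len(halfbits), 2)]

data = [1, 0, 1, 1, 0]
print(manchester_decode(manchester_encode(data)) == data)  # True
```

Because each bit is decided by comparing its two halves against each other, slow drift in the DC level (sunlight) mostly cancels out, which is why the simple high-pass-plus-comparator front end can work.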
There are many demodulator circuits that would work fine for this application.
There are several low-cost demodulator ICs available at the big suppliers, some even in easy-to-prototype DIP packages, that work similarly to the part described in
ON Semiconductor Application Note AN531, "MC1496 Balanced Modulator"
(an actual implementation of a multiplier),
such as the SA612 (also called the NE612, SA602, etc.), the SA614, the MC1496, the TDA9881, MAX2685, BGA2022, TFF1017, SA58640, SA58641, MAX2682, UPC2757, etc.
You'll have to pick a carrier frequency somewhere in the range your demodulator can handle.
The SA612 seems to work well up to roughly 50 MHz.
What filter should I use?
The rule of thumb is to try to capture most of the energy of each pulse.
Filters should pass (preferably with a flat passband) the stuff you're trying to deliberately transmit,
and should block as much of the interfering stuff as possible.
There's a transition band between the passband and the stopband.
For optical communication, sometimes this transition band is really wide, so you can get away with using simple and low-cost RC filters.
(The lower parts of the radio frequency communication spectrum have so many people trying to communicate at the same time that the transition band is forced to be narrow, which forces them to use more complicated filters.)
With randomly positioned on/off pulses of width L seconds,
the frequency distribution looks like L·sinc(f·L),
where sinc(x) = sin(πx) / (πx).
(I.e., the Fourier transform of a rectangle function is a sinc function.)
The first zero (end of the main lobe) of that sinc frequency distribution is at 1/L Hertz.
Most of the energy of the sinc frequency distribution is in the main lobe -- between 0 Hz and 1/L Hertz.
If you choose to modulate that baseband signal by some carrier frequency -- perhaps something like the way the RC-5 protocol converts each pulse to a burst of square waves -- the main lobe has a bandwidth (between zeros) of 2/L Hz.
At 1 Mb/s, L = 0.5 microsecond for Manchester-encoded data (half the full bit-time).
So the bandpass filter in your receiver needs to handle at least 4 MHz of bandwidth between the IR detector and the demodulator.
After the demodulator the lowpass filter should pass (up to the first zero) at least up to 2 MHz.
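The bandwidth arithmetic above can be sketched in a few lines (a toy calculation, using the figures from the text):

```python
# Bandwidth figures for 1 Mb/s Manchester-encoded OOK, as in the text.
bit_rate = 1_000_000           # bits per second (1 Mb/s)
L = 0.5 / bit_rate             # pulse width: half the bit time = 0.5 us
baseband_bw = 1 / L            # main lobe of the sinc: 0 Hz up to 1/L (2 MHz)
modulated_bw = 2 / L           # main lobe after modulation onto a carrier (4 MHz)
print(baseband_bw, modulated_bw)
```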
To pass a minimum f of 2 MHz with a simple RC low-pass filter,
ω = 2πf ≈ 12.6 Mrad/s,
and the RC time constant T = 1/ω = RC ≈ 80 ns.
So, for example, if I arbitrarily choose C = 1 nF, the maximum R is 80 Ohms.
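As a quick check of that arithmetic (values from the text; the 1 nF choice is arbitrary, as noted):

```python
import math

f_c = 2e6                     # desired corner frequency, Hz
omega = 2 * math.pi * f_c     # ~12.6 Mrad/s
T = 1 / omega                 # RC time constant, ~79.6 ns (the "80 ns" above)
C = 1e-9                      # arbitrarily chosen 1 nF
R = T / C                     # maximum R, ~80 ohms
print(round(T * 1e9, 1), round(R, 1))  # 79.6 79.6
```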
The lowpass parts of your bandpass filter, being at a much higher frequency,
require an even smaller time constant, which implies
a smaller R or C or both.
Further comments
The Shannon–Hartley theorem and the Fourier transform revolutionized telecommunications.
Most EE programs run students through several semesters of classes to clear out common misunderstandings and to fully understand the ramifications of these simple-seeming ideas.
You might consider skimming through the relevant sections of a digital signal processing book.
Have I mentioned the RC-5 protocol?
You could do worse than to take that (approximately) 562 bit/second protocol and speed up each part by a factor of roughly 1000 or 2000, to get approximately 500,000 bit/s or 1 Mbit/s.
Each bit of data is Manchester coded into two half-bits,
with 32 pulses in one half-bit and the LED turned off for the other half-bit.
(That seems to require a 64 MHz clock to get a full 1 Mbit/second,
and lots of hardware can't go that fast --
perhaps fewer pulses per half-bit, more like Ronja,
or fewer data bits per second,
would make it easier to use standard off-the-shelf hardware).
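The clock-rate arithmetic behind that parenthetical can be checked with a few lines (assuming, as above, 32 pulses per half-bit and a 1 Mbit/s target):

```python
# Pulse-rate needed to keep RC-5 style bursts at 1 Mbit/s.
bit_rate = 1_000_000                         # target data rate, bits/s
half_bit = 0.5 / bit_rate                    # 0.5 us per half-bit
pulses_per_half_bit = 32                     # as in RC-5
pulse_rate = pulses_per_half_bit / half_bit  # 64e6 pulses/s -> a 64 MHz clock
print(pulse_rate)
```

Halving the pulses per half-bit halves the required clock, which is why fewer pulses per half-bit (as in Ronja) or a lower bit rate makes off-the-shelf hardware easier to use.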
You might also consider looking at the RONJA project, especially the schematics. They use some unusually clever tricks to get their data rate up to 10 Mbit/s using low-cost parts.
The incoming 10BASE-T Ethernet signal has already Manchester-coded each data bit into two half-bits,
and Ronja transmits one or two pulses per data bit.
(In effect, Ronja turns the LED on for 1 clock pulse per data bit, and the receiver watches to see if Ronja transmits another pulse or not to decide whether the data bit is a 1 or a zero).
Sounds like a fun and educational project. Good luck!
The TEL 3-0523 has an efficiency of 74% at 100 mA into the full output of 30 V (±15 volts). To put it another way, it's likely that this 26% power loss will also be present at extremely light loads too.
So 26% of 3 watts is 0.78 watts, and from a 5 volt supply the current is therefore going to be about 160 mA. 160 mA will drop about 8 mV across the resistance of the input inductor, so this side of things looks fine even if two Tracos are sharing the inductor.
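Those numbers can be sanity-checked with a few lines of Python (the ~50 mΩ inductor resistance is my assumption, inferred from the 8 mV figure; it is not from a datasheet):

```python
# Back-of-envelope numbers from the paragraph above.
p_out = 3.0              # W, full load
loss = 0.26 * p_out      # ~0.78 W dissipated, treating the 26% loss as a fraction of P_out
i_in = loss / 5.0        # ~0.156 A drawn from the 5 V rail to cover the loss
r_ind = 0.05             # ohms -- ASSUMED inductor DCR, back-calculated from the 8 mV figure
v_drop = i_in * r_ind    # ~8 mV across the input inductor
print(round(i_in * 1e3), round(v_drop * 1e3, 1))  # 156 7.8
```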
You might also note that the output voltage specification holds for loads of 10% to 100%, and this is a significant clue to something you haven't considered: you'll probably need to load the output of each Traco with a load equivalent to 300 mW to guarantee the output voltage doesn't rise significantly above ±15 volts.
I've fallen foul of this before and cursed myself. Hey, it's not a biggie, but you'll curse if you haven't left room for this. Put this load directly on the output; there's no point in it going after the filter.
Regarding the use of the Tracos as shown: been there and done it with no problem (that I'm aware of)! In fact, I needed 18 V on one job and put the outputs of a 15 V and a 3V3 unit in series (near enough).
I'd make room for a 10 nF ceramic across the 120 µF caps too. Clearly I can't tell whether this is good enough for your application, because you don't really know. I've used linear voltage regulators on the outputs of Tracos several times because I didn't know either and I didn't want to take a chance.
Good luck
A few things to keep in mind:
Ground is not special. Not in reality, and not in LTSpice. Ground is nothing more than the potential that we've decided to be 0V. It's a label, and one that is totally contrived and arbitrary.
To drive my point home: it doesn't matter which net of your LTSpice circuit you pick as ground. If you move your ground from one net to a completely different one, the simulated behavior will not change. The reported node voltages will change, but only superficially (because you've changed what LTSpice is using for 0 V).
LTSpice can only simulate one circuit. Isolation or floating nodes are not supported.
That said, it sounds like you might be overthinking this. The only thing that you need to worry about when choosing your ground node is what you want LTSpice to reference all the voltages in the simulation to. That's all.
And when you want a 'second ground', what does that actually mean? It means you simply want a net that is, for all intents and purposes, not connected to ground, but is kept at the same potential. 'Kept at the same potential' here really just means that you want this to also be a 0V reference point.
What I typically do is use the already available 'COM' net option, which is just another net label and symbol provided for convenience. It isn't connected to ground; it's just connected to whatever you connect it to. I build my circuit exactly how I intend it, with the separate GND and COM grounds placed and connected just as they would be physically.
Then, once I am done, I connect COM to GND... through my trusty 1 EΩ resistor. That's right, exaohms. Is that perfectly isolated? No, but neither is your real-world circuit. The leakage through our 1 EΩ resistor is going to be less than a femtoamp, which is likely substantially (like, orders of magnitude) less than the leakage you'll get in the real deal.
But don't just use a resistor; put a 1 zF (yep, zeptofarad) capacitor in parallel. This will again be much, much lower than the real capacitive coupling that is almost certainly present when this is built physically, and it eliminates some issues where unrealistically high resistance values make the simulation extremely slow.
Of course, in your application, it would probably be better to make a rough estimate of the parasitic capacitive coupling you might have between your power ground and chassis ground and use that value instead of 1 zF. A few pF is not unusual.
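As a quick sanity check of the leakage claim (the 15 V across the "barrier" is an illustrative assumption, not from the original question):

```python
# Leakage through the 1 EOhm / 1 zF isolation hack.
R = 1e18                 # ohms, 1 exaohm
C = 1e-21                # farads, 1 zeptofarad
V = 15.0                 # volts across the barrier -- ASSUMED for illustration
i_leak = V / R           # 1.5e-17 A, well under a femtoamp
print(i_leak < 1e-15)    # True
```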
Here is an example of this in action. It's the test fixture for an isolated push-pull power supply. Note that the isolation is simulated using COM on the output, but with this little impedance hack it still behaves exactly as expected.
Regardless, it really is that simple. But it is also easy to convince ourselves that it isn't.