Electronic – How to debug a circuit that uses Charlieplexing and PWM with LEDs under ISIS

Tags: charlieplexing, led, pic, pwm, simulation

Using mikroC and Proteus ISIS, I have built a circuit around a PIC12F629 (link to datasheet PDF). It controls several LEDs using the Charlieplexing technique (and additionally PWMs them to change their brightness over time).

Example:

LED1 100%
LED2 75%
LED3 0%

A few ms later:

LED1 75%
LED2 100%
LED3 25%
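(For reference, the kind of charlieplexed soft-PWM loop that produces this behaviour might look roughly like the sketch below. This is only an illustration, not the asker's actual code; the three-LED arrangement on GP0..GP2, the pin assignments, and the duty values are assumptions.)

    // Minimal mikroC-style sketch: 3 LEDs charlieplexed on GP0..GP2 of a
    // PIC12F629, dimmed with software PWM. Pin assignments are assumptions.

    unsigned char duty[3] = {255, 192, 0};   // LED1 100%, LED2 75%, LED3 0%

    // For each LED: drive its two pins (TRIS bit = 0), float the third (bit = 1)
    unsigned char tris_mask[3]  = {0b00111100,   // LED1: GP0 anode, GP1 cathode
                                   0b00111010,   // LED2: GP2 anode, GP0 cathode
                                   0b00111001};  // LED3: GP1 anode, GP2 cathode
    unsigned char anode_mask[3] = {0b00000001, 0b00000100, 0b00000010};

    void main() {
        unsigned char led, tick;

        CMCON = 0x07;                          // comparators off: GP0..GP2 digital

        while (1) {
            for (led = 0; led < 3; led++) {
                TRISIO = tris_mask[led];       // only this LED's two pins are driven
                for (tick = 0; tick < 255; tick++) {
                    // LED is on for duty[led] ticks out of each 255-tick slice
                    GPIO = (tick < duty[led]) ? anode_mask[led] : 0;
                }
            }
        }
    }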

If I run the simulation step by step, everything works fine (LEDs turn on and off at each step when they are supposed to).

However, if I run the simulation at full speed, the only result I get is the LEDs blinking like crazy (they seem to turn on/off at random). Anyway, I know that if I ran the circuit with real components, the result would be totally different (like in my first example, with the percentages).

Here is what I think happens: under ISIS the LEDs turn on/off very fast, but not fast enough to recreate the persistence-of-vision effect you get in real life.

Is it possible, using ISIS, to simulate what I would get in real life?

It would be nice, for example, to have a special type of LED component in ISIS (to place on the circuit) that would render every 50 ms (not only when its state changes). It would render the LED brighter or darker depending on the signal since the last check (e.g., if the signal was "high" 90% of the time during the last 50 ms, render it as bright red; if it was "high" only 10% of the time, render it dark red; and so on).
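To illustrate the idea (this is just a hypothetical sketch of the averaging described above, not an existing ISIS feature): sample the LED pin at a fixed rate and, at the end of each 50 ms window, map the fraction of "high" samples to a brightness level. The sample count and the stand-in pin waveform below are made up.

    /* Hypothetical sketch of the proposed 50 ms averaging - not an ISIS feature. */
    #include <stdio.h>

    #define SAMPLES_PER_WINDOW 1000          /* samples taken in one 50 ms window */

    int main(void) {
        int high_count = 0;
        int i;

        for (i = 0; i < SAMPLES_PER_WINDOW; i++) {
            int pin_is_high = (i % 4) != 0;  /* stand-in for reading the pin: ~75% duty */
            if (pin_is_high)
                high_count++;
        }

        /* brightness 0..255, proportional to the time spent high in the window */
        printf("duty ~%d%%  ->  render brightness %d/255\n",
               (100 * high_count) / SAMPLES_PER_WINDOW,
               (255 * high_count) / SAMPLES_PER_WINDOW);
        return 0;
    }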

Best Answer

Proteus VSM (Virtual System Modeling) does not operate in real time, not even on very high-end desktop computers. The simulation is very processor-intensive, as you will notice if you bring up Windows Task Manager while the simulation is running. Depending on your computer's capabilities and the number of active simulation elements in your design, the actual simulation speed varies widely.

One way to estimate how much slower than real time the simulator runs is to incorporate a loop (not a timer ISR) in your simulated microcontroller code that flips some GPIO pin once every so many counts. Let's say we set this period to be around a tenth of a second in real time, by counting clock cycles of code.
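A minimal mikroC-style sketch of such a marker loop might look like this (assumptions: PIC12F629 on its 4 MHz internal oscillator, GP0 free to use as the marker pin; Delay_ms is mikroC's cycle-counted delay, so the 100 ms refers to simulated time):

    // Minimal sketch: toggle GP0 roughly every 0.1 s of simulated time
    void main() {
        CMCON  = 0x07;            // disable comparators so the GP pins are digital I/O
        TRISIO = 0b00111110;      // GP0 as output, everything else left as input
        GPIO   = 0;

        while (1) {
            GPIO = GPIO ^ 0x01;   // flip GP0
            Delay_ms(100);        // ~0.1 s of simulated time (cycle-counted busy loop)
        }
    }

Then time the GP0 toggles against a wall clock: dividing the observed interval between toggles by the 0.1 s they represent gives the slowdown factor.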

So, if the simulation flips that bit only once every 20 seconds, the simulation is running at 1/200th of real time (20 s ÷ 0.1 s = 200).

Add a couple of active simulation elements to the design, then run the simulation again and watch the bit-flipping, to see how much slower it gets.

That being said, there isn't really any practical way to achieve true real-time simulation of a non-trivial Proteus design with present-day consumer hardware. By throwing enough processing power at it, perhaps by overclocking your PC massively, real-time operation might become feasible, but I wouldn't hold my breath.