Electronics – How is this ground fault circuit working?


I am having a difficult time understanding a few things regarding the ground fault detection circuit below. I found this circuit while investigating ground fault detection. I intend to build my own circuit for use in my system and am using this one as a starting point.

Note that the circuit below is meant to be paired with this CT.

GFI Circuit

Now, my questions are…

  1. I assume the intent of D1 is to provide overvoltage protection. However, I don't see how that works with this particular diode. These are standard Schottky diodes, which don't conduct when reverse biased. Why not just use a TVS for this sort of thing? I must be missing something here. I feel silly asking…
  2. R17 becomes the burden resistor for the CT and develops a voltage proportional to the current. This particular circuit uses a value of 330 ohms. If I understand the CT datasheet correctly, that yields a voltage around 6.5 V at the rated current, which doesn't seem like a good choice. However, I am not sure I read the datasheet correctly. Is Te the turns ratio? How do I choose the value of R for a GFI situation? The datasheet recommends choosing R such that V < 0.8 VL, but it doesn't specify what VL is (or perhaps I should know this). (A rough calculation sketch follows this list.)
  3. The threshold for tripping is set by R15 and R14. C14 would add some delay. R16 and R20 act to amplify the current reading. How would appropriate values be determined here? In other words, what's safe or customary? (The sketch below also walks through the divider and gain arithmetic.)
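To put rough numbers on questions 2 and 3, here is a back-of-the-envelope sketch rather than a reading of the actual schematic or CT datasheet. Only R17 = 330 Ω comes from the circuit; the turns ratio, reference rail, and the R14/R15/R16/R20 values are placeholder assumptions chosen just to show the arithmetic.

```python
# Rough burden-resistor and threshold arithmetic for the GFI front end.
# All values except R17 = 330 ohms are placeholders for illustration --
# substitute the numbers from your CT datasheet and schematic.

N_TURNS = 1000          # assumed CT turns ratio (primary:secondary = 1:N)
R_BURDEN = 330.0        # R17, from the schematic (ohms)

def burden_voltage(primary_current_a: float) -> float:
    """Secondary voltage developed across the burden resistor."""
    secondary_current = primary_current_a / N_TURNS
    return secondary_current * R_BURDEN

# Example: a 30 mA residual (fault) current in the primary
print(f"V across R17 at 30 mA fault: {burden_voltage(0.030)*1e3:.2f} mV")

# Comparator threshold from an R15/R14 divider off a reference rail.
# These three numbers are assumptions, not read from the schematic.
V_RAIL = 5.0            # assumed reference/supply voltage
R15 = 10e3              # assumed upper divider resistor
R14 = 1.9e3             # assumed lower divider resistor
v_threshold = V_RAIL * R14 / (R14 + R15)
print(f"Divider threshold: {v_threshold:.2f} V")   # ~0.8 V with these values

# Inverting-amplifier gain set by R16 (feedback) and R20 (input) -- again assumed.
R16 = 100e3
R20 = 1e3
gain = R16 / R20        # magnitude of the inverting gain, 100 here
print(f"Amplifier gain: {gain:.0f}")
```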

Note that this is a one-off design which wouldn't necessarily have to conform to any regulations, but I'd still like to get a handle on this.

EDIT:

The AC leads (hot, hot, ground) will enter the enclosure and run to relay contacts. The leads then run from the relay contacts and exit the enclosure. I was planning to run only the two hot legs through the CT, right where they enter the enclosure.

Best Answer

The circuit is just a peak detector with a gain of 100 and a little bandwidth-limiting low-pass filtering, followed by a comparator that compares the output of the peak detector with 0.8 V.
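To make that chain concrete, here is a small behavioral model in Python/NumPy (inverting gain of 100, output limited to the positive rail, an ideal diode/capacitor peak hold, then the 0.8 V comparison). It is only a sketch, not a simulation of the real schematic; the 60 Hz fault frequency, 12 mV burden voltage, and 5 V rail are assumed example values.

```python
import numpy as np

# Behavioral sketch of: inverting gain of 100 (single supply, so the output
# can't go below 0 V), ideal diode into a hold capacitor, 0.8 V comparator.

fs = 100_000                      # sample rate, Hz
t = np.arange(0, 0.1, 1/fs)       # 100 ms of signal
fault_peak_v = 0.012              # assumed 12 mV peak across the burden resistor
v_burden = fault_peak_v * np.sin(2*np.pi*60*t)

v_amp = np.clip(-100 * v_burden, 0, 5.0)     # inverting, rail-limited 0..5 V

# Ideal peak hold: the capacitor keeps the highest voltage seen so far
v_hold = np.maximum.accumulate(v_amp)

trip = v_hold > 0.8
if trip.any():
    print(f"Comparator trips at t = {t[trip.argmax()]*1e3:.1f} ms")
else:
    print("Comparator never trips at this fault level")
```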

C14 is not for a delay; it's there to set the peak hold time. It's charged through the diode, D3, but has no discharge path, so it holds the highest voltage it sees. Normally there would be a large-value resistor in parallel to make this hold time more predictable; currently it's set by the comparator op-amp's input resistance and the capacitor's own leakage.
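As an illustration of why such a parallel resistor helps, a hypothetical bleed resistor across C14 gives a well-defined droop rate. Both component values below are assumptions, since C14's value isn't quoted here.

```python
import math

# How a parallel bleed resistor would make the hold time predictable.
C14 = 100e-9          # assumed 100 nF hold capacitor
R_BLEED = 1e6         # hypothetical resistor placed in parallel with C14

tau = R_BLEED * C14   # decay time constant of the held peak
# Time for the held voltage to sag by ~10% (from V0 to 0.9*V0):
t_10pct = -tau * math.log(0.9)
print(f"tau = {tau*1e3:.0f} ms, 10% droop after {t_10pct*1e3:.1f} ms")
```

With these example values the held peak sags only about 10% per 10 ms, so it still holds most of the peak across the 17-20 ms between successive negative peaks of a 50/60 Hz fault current.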

As the current in the CT increases (because of a fault), the voltage across R17 increases; this is clipped by D1. The peak detector amplifies negative signals better than positive ones (because it's inverting and its output can't go below zero), so it smooths out the bumps in the negative half of the AC as well as capturing, for a short time, the peak voltage of the negative cycle. If this amplified peak exceeds 0.8 V, the comparator output goes high.
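Working the same numbers backwards gives a feel for where this circuit would trip. The gain of 100, the 0.8 V threshold, and R17 = 330 Ω come from the description above; the turns ratio is an assumed placeholder, so the result is only illustrative.

```python
# Working backwards from the 0.8 V threshold to an approximate trip current.
N_TURNS = 1000                    # assumed CT turns ratio
R_BURDEN = 330.0                  # R17, ohms
GAIN = 100.0                      # inverting amplifier gain
V_THRESHOLD = 0.8                 # comparator threshold, volts

v_burden_trip = V_THRESHOLD / GAIN              # peak voltage needed across R17
i_secondary_trip = v_burden_trip / R_BURDEN     # peak CT secondary current
i_primary_trip = i_secondary_trip * N_TURNS     # peak residual (fault) current

print(f"Trips at roughly {i_primary_trip*1e3:.1f} mA peak residual current")
```

With these assumed values the trip point lands in the tens-of-milliamps range, closer to the 30 mA level typical of equipment-protection RCDs than to the 4-6 mA personnel-protection threshold used for North American GFCIs, so the real turns ratio and threshold matter a great deal.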

The circuit is missing a latch to capture the fault signal, so as it stands it would oscillate: when the load is removed by the detector circuit, the current drops to zero and, after a short time, the fault output de-asserts, turning the load back on.