Electronic – Intuitive meaning of setting determinant = 0 in oscillator analysis

oscillator, phasor

I'm reading a book on oscillators at the moment and the author is explaining a method of analysis based on the system determinant.
Given the following generic FET tuned oscillator arrangement:
[schematic: generic FET tuned oscillator]

and the corresponding system of equations determined by mesh/loop analysis:
[image: mesh/loop equations for the circuit]

The determinant is obtained:
[image: the system determinant]

The author then comments that the "conditions for oscillations can be obtained by setting the system determinant equal to zero".

I'm struggling to understand this in an intuitive way. I'm almost there (I think), I just need a little push!

Here's what I think this exercise is about: the system determinant is a way of testing whether the linear equations involved are independent. In the event that the determinant is 0, the lines are either coincident or parallel (either infinitely many solutions or no solutions). So, if these equations were referring to phasors, a zero determinant might suggest that the slopes of the phasors are equal, which in turn might suggest that the phase is zero (a condition for oscillation).
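To test that intuition numerically, here's a minimal sketch with made-up matrices (not the circuit above): a homogeneous system \$\textbf{M}\vec{x}=0\$ only admits nonzero solutions, i.e. nonzero mesh currents, when its determinant vanishes.

```python
# Made-up 2x2 examples (not the circuit above): a homogeneous system
# M*x = 0 has nonzero solutions exactly when det(M) == 0.
import sympy as sp

M_regular  = sp.Matrix([[2, 1],
                        [1, 3]])
M_singular = sp.Matrix([[2, 1],
                        [4, 2]])   # second row is twice the first

for M in (M_regular, M_singular):
    print("det =", M.det(), "| null-space basis:", M.nullspace())
```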

EDIT: a corollary of that thought is that the two equations are significant in the sense that one defines the "amplifier" and the other the "feedback". Then the phase relationship between the two is zero for oscillation. Is that what zero determinant is solving for?

But, I'm sort of guessing here. Can anyone give me a more comprehensive and intuitive understanding of what this means?

Best Answer

Sorry about the delay. I had to find time to view some of your references.

This is a more interesting question than I first realized. (I already did +1 your question, though.) Perhaps that's because I understand it better now. (Or perhaps I don't understand your question any better, but have acquired a different imagined picture of it and am merely projecting that you are seeing what I'm seeing.)

I've seen this more as an algebraic similarity, from an engineering viewpoint. By this I mean the recognition that the oscillatory part of a system that is grounded in Euler's formula, where the \$2^\textrm{nd}\$ derivative is self-similar except perhaps in sign, always yields an expression which, after trivial manipulation, looks exactly the same algebraically as an Eigenvalue problem and can therefore be solved by the same approach.

I now think you already got that much; I wasn't sure before, but I now see that you do understand that aspect. Instead, I think you are looking at the larger picture, as if you really are a mathematician and not an engineer. That's a good thing. But, to be honest, this is really more a question for mathematicians, and there is a group for that. However, they would probably want it couched in mathematical language to get your question, as well. Since I've struggled a little to see your perspective, I'll offer my thoughts, poor as they may be.


Before I proceed, let me just summarize the obvious between us.

In order to solve for oscillation, we assume it. Since there is no damping involved in oscillation, we can choose to assume a purely oscillatory solution. (A damped solution looks like \$\vec{V}e^{\left(\sigma+ j \omega\right) t}\$ but a purely oscillatory one looks like \$\vec{V}e^{j \omega t}\$.)

The vector \$\vec{V}\$ is just an arbitrary vector that may include variables for currents and voltages.

$$\begin{align*} \vec{X}&=\vec{V}e^{j \omega t}\\ \dot{\vec{X}}&=j \omega \vec{V}e^{j \omega t}\\ \ddot{\vec{X}}&=-\omega^2\vec{V}e^{j \omega t}=-\omega^2\vec{X} \end{align*}$$
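A quick symbolic check of that chain (sympy here, with a single scalar component standing in for the whole vector; nothing circuit-specific is assumed):

```python
# Symbolic check that the assumed solution reproduces itself under two
# time-derivatives, scaled by -omega^2. One scalar component V stands
# in for the whole vector.
import sympy as sp

t = sp.symbols('t', real=True)
omega = sp.symbols('omega', real=True, positive=True)
V = sp.symbols('V')
X = V * sp.exp(sp.I * omega * t)

assert sp.simplify(sp.diff(X, t) - sp.I * omega * X) == 0
assert sp.simplify(sp.diff(X, t, 2) + omega**2 * X) == 0
print("X'' == -omega**2 * X holds")
```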

If you take the system of equations as forming \$\textbf{A}\vec{\textbf{X}}=-\omega^2 \vec{\textbf{X}}\$, then it seems obvious that \$\lambda=-\omega^2\$ and the whole thing just "looks like" an Eigenvalue problem.
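As a concrete, entirely made-up instance: two symmetrically coupled resonators, with assumed values for \$L\$, \$C\$, and a coupling factor \$k\$, so that \$\ddot{\vec{X}}=\textbf{A}\vec{X}\$ and the Eigenvalues \$\lambda=-\omega^2\$ fall out numerically:

```python
# A made-up numeric instance of A @ X = -omega^2 * X: two symmetrically
# coupled resonators. L, C, and the coupling k are assumed values chosen
# only so that A is real with negative eigenvalues.
import numpy as np

L, C, k = 1e-3, 1e-9, 0.1              # henries, farads, coupling
w0_sq = 1.0 / (L * C)
A = -w0_sq * np.array([[1.0, -k],
                       [-k, 1.0]])     # ddot(X) = A @ X for this toy system

lam = np.linalg.eigvals(A)             # each lambda = -omega^2
omega = np.sqrt(-lam)
print("oscillation frequencies (Hz):", omega / (2 * np.pi))
```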

But that is more of a "pattern matching" answer, and by itself it isn't necessarily formed as a result of any kind of geometrical vision or intuition.

So that's the real question I perceive you are asking.


One thought that comes to mind arises from your video sequence. The video points out that Eigenvalues (Eigenvectors implied) find the vectors within the space that stay on a hyperplane (a lower-dimensional space) and are stretched but are not displaced from that plane. So I think perhaps this is a clue. Displacement from a subplane isn't stable/oscillatory: such vectors can corkscrew or wind around and may never return. All rotational-only vectors (which must return to their original place after enough rotation) formed from linear systems will stay on a subplane within the hyperspace.
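A plain 3-D rotation (nothing oscillator-specific, just a numeric sketch of that geometric claim) shows this directly: the real Eigenvalue 1 picks out the axis, and the conjugate pair \$e^{\pm j\theta}\$ describes vectors that circulate within the perpendicular plane without ever leaving it.

```python
# A plain 3-D rotation about the z-axis: eigenvalue 1 picks out the
# axis, the conjugate pair exp(+/- i*theta) describes vectors that
# rotate within the x-y plane and never leave it.
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

print("eigenvalues:", np.round(np.linalg.eigvals(R), 4))

v = np.array([1.0, 0.0, 0.0])          # starts in the x-y plane
for _ in range(5):
    v = R @ v
    print(np.round(v, 3))              # z stays 0: trapped in the subplane
```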

I also think the key here is the idea of linearity as discussed in your excellent videos. These tie in with superposition and the fact that electronic circuits behave as if they are linear (in that definition of addition and scaling). And of course Eigenvalues make this assumption, as well.

To highlight what I mean here, imagine that there could be, hypothetically, some odd mathematical transformation that would move vectors through the entire hyperspace and simultaneously do so in a way that doesn't trap them on a subplane within it. They wobble around and eventually somehow manage to get back to the same place in order to start the process over again. This would be oscillatory, and it would also NOT result in turning it into an Eigenvalue problem/solution. But....

It also would not be a linear transform. And therefore cannot be the result of electronic circuits nor can it be addressed using Eigenvalues. The axioms are broken now. It might be interesting for a mathematician. Just not interesting for an electronics designer.

And I think that's an important point.

I think I'm reaching towards the idea here that all electronics is linear, as it means to a mathematician (addition and scaling behavior). Therefore, all solution transformations must follow the mathematics discussed in your videos. There will be a subplane whose vectors remain in that plane under rotation. The Eigenvalue problem/solution is therefore always applicable and can always be used as a tool for solving oscillation problems.

For even dimensions of \$\textbf{A}\$ (a real matrix), the complex Eigenvalues come in conjugate pairs, and the rotation takes place within some even-dimensional subspace (or span) of the whole space. For odd dimensions, at least one real Eigenvalue must exist (along with others traveling in conjugate pairs, I suppose), and the rotation will again occupy some even-dimensional subspace.
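A quick numeric illustration of that pairing, using random real matrices (a demonstration, not a proof):

```python
# Complex eigenvalues of any real matrix come in conjugate pairs, so a
# real odd-dimensional matrix must keep at least one real eigenvalue.
import numpy as np

rng = np.random.default_rng(0)
for n in (2, 3, 4, 5):
    lam = np.linalg.eigvals(rng.standard_normal((n, n)))
    n_real = int(np.sum(np.abs(lam.imag) < 1e-9))
    print(f"n={n}: {n_real} real eigenvalue(s), "
          f"{(n - n_real) // 2} conjugate pair(s)")
```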

I don't know if this helps at all. But I really like your thoughtful question and that's my take on it, after gestating on it a little bit.