

Posted

Hello you all!

Here's a means to produce a sine wave voltage, very pure, with metrological amplitude, whose frequency can be varied over 2+ octaves in the audio range - this combination may serve from time to time.

It uses sums of square waves with accurate shape and time shift. A perfectly symmetric square wave has no even harmonics. Adding two squares shifted by T/6 suppresses all 3N harmonics, as the delay puts them in opposition; this makes the waveform well known in power electronics. Two of these waveforms can be added with a T/10 shift to suppress all 5N harmonics, then two of the latter with a T/14 shift, and so on. A filter removes the higher harmonics as needed.

QuasiSineWave.png.53f07800a567ef29aec022b01f60409d.png
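To illustrate, here is a quick numerical check of the shift-and-add principle - my own sketch in Python/NumPy, not the author's circuit: summing two symmetric squares shifted by T/6 cancels the 3rd, 9th, 15th... harmonics, and summing two such pairs shifted by T/10 also cancels the 5th, 15th... The period of 61440 samples is only chosen so that T/6 and T/10 are whole numbers of samples.

```python
import numpy as np

N = 61440                        # samples per fundamental period, divisible by 6 and 10

def square(shift):
    """Symmetric +-1 square of period N samples, delayed by `shift` samples."""
    return np.where((np.arange(N) - shift) % N < N // 2, 1.0, -1.0)

pair_3  = square(0) + square(N // 6)                           # cancels H3, H9, H15...
quad_35 = pair_3 + square(N // 10) + square(N // 6 + N // 10)  # also cancels H5, H15...

def dbc(x, n):
    """Level of harmonic n relative to the fundamental, in dB."""
    s = np.abs(np.fft.rfft(x))
    return 20 * np.log10(s[n] / s[1] + 1e-15)   # small floor avoids log(0)

for n in (2, 3, 5, 7, 9, 11):
    print(f"H{n:2d}: single {dbc(square(0), n):7.1f}   "
          f"pair {dbc(pair_3, n):7.1f}   quad {dbc(quad_35, n):7.1f} dBc")
```

The suppressed harmonics only come back through amplitude and timing mismatches, which is what the next paragraphs quantify.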

The approach makes sense, and may be preferred over direct digital synthesis, because well-chosen components and careful circuits can provide superior performance.

Counters produce accurate timings. If a fast output flip-flop outputs a zero 1ns earlier or later than a one, at 20kHz it leaves -90dBc of second harmonic and at 1kHz -116dBc, but at 1MHz a less interesting -56dBc. If the propagation times of the output flip-flops match to 0.5ns, at 20kHz they leave -100dBc of third, fifth, seventh... harmonic and at 1kHz -126dBc.

74AC Cmos output buffers usually have less than 15ohm and 25ohm impedance on the N and P sides. On a 100kohm load, the output voltage equals the power supply to +0 -200ppm. A 5ohm impedance mismatch contributes -102dBc to the third harmonic, and less to the higher ones.

Common resistor networks achieve practically identical temperatures and guarantee 100ppm matching, but measurements give rather 20ppm. This contributes -110dBc to the third harmonic, and less to the higher ones.
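As a back-of-envelope check of these two figures (my own model; the exact summing network may differ): a relative amplitude error alpha between the two T/6-shifted squares leaves a third-harmonic residual of roughly alpha/(3*sqrt(3)) relative to the summed fundamental, since their H3 phasors no longer cancel while H1 adds as 2*cos(30deg).

```python
import math

def residual_h3_dbc(alpha):
    """Residual H3 left by a relative amplitude mismatch `alpha` between the
    two T/6-shifted squares, relative to their summed fundamental."""
    return 20 * math.log10(alpha / (3 * math.sqrt(3)))

print(residual_h3_dbc(5 / 100e3))   # 5 ohm mismatch driving ~100 kohm: about -100 dBc
print(residual_h3_dbc(20e-6))       # 20 ppm resistor mismatch:        about -108 dBc
```

These land within a couple of dB of the -102dBc and -110dBc quoted above, so the simple model seems to capture the mechanism.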

QuasiSineDiagram.png.2736bbcba8d27a29eedae075713ceb2c.png

This example diagram would fit 74AC circuits. Programmable logic, Asic... reduce the package count and may use an adapted diagram. To suppress the harmonics that are multiples of 2, 3, 5 and 7, it uses 8 Cmos outputs and resistors here. As 3 divides 9, the first unsqueezed harmonic is the 11th.

A counter by 210 has complementary outputs, so that sending the proper subsets to 8-input gates lets RS flip-flops change their state at the adequate moments. Programmable logic may prefer GT, LE comparators and no RS. I would not run parallel counters by 6, 5 and 7 instead of 210, as these would inject harmonics.

The RS flip-flops need strong and fast outputs. Adding an octuple D flip-flop is reasonable, more so with programmable logic.

I feel it's paramount that the output flip-flops have their own regulated and filtered power supplies, for instance +-2.5V, and that the other logic circuits get separate supplies, again like +-2.5V, not touching the analog ground. That's a reason to add an octuple D flip-flop to a programmable logic chip. For metrological amplitude, the output supplies must be adjusted. All the output flip-flops must share the same power supplies, unless the voltages are identical to 50ppm of course.

A fixed filter can remove the higher harmonics if the fundamental varies by less than a factor of 11 minus some margin, and a tracking filter for wider tuning is easy, as its cutoff frequency is uncritical. The filter must begin with passive components because of the slew rate, and must use reasonably linear components.

----------

Almost three decades ago I tried the circuit squeezing up to the fifth harmonic, and it works as expected. Squeezing up to the third is even simpler, with a Johnson counter by 6 and two resistors. Measuring the spectrum isn't trivial; for instance Fft spectrometers can't do it, and most analog spectrometers need help from a linear high-pass filter that attenuates the fundamental.

Marc Schaefer, aka Enthalpy

  • 3 months later...
Posted

The previous method uses intermediate output voltages to suppress some harmonics. Alternatively, the waveform can have more transitions between just two output voltages. This needs only one accurate output and no matched resistors, but reinforces other harmonics and has some limitations. I suppose much of that exists already.

==========

To suppress one harmonic, three transitions per half-period suffice. Understanding it as a convolution with a three-Dirac signal spares some integrals and helps thinking further. Convolving multiplies the spectra. With the outer Diracs T/6N away from the central one, and of opposite sign, the Nth harmonic's cosine is 0.5 there, so its signed sum over the three positions (the Fourier integral) is zero. The sine is zero by symmetry, so the power of that harmonic is zero in the convolving signal, and in the output signal too.

ThreeTransitions.png.4e69c432171f571b09db2632f1d1470f.png

The transitions are placed accurately by a counter, for instance by 18 to suppress the 3rd harmonic, and the circuit fits nicely in a PAL for instance - with a separate CMOS flip-flop for the output, fast to propagate the rising and falling edges with nearly the same delay, and having its own clean power supplies. With a simpler analog filter, it already suffices to measure the second and third harmonic distortion of an audio amplifier, detect the nonlinear antitheft magnetic filaments in goods at a shop, and so on.
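For a concrete picture, here is one waveform consistent with the T/6N rule for a counter by 18 - my own reconstruction, not necessarily the figure above (a different phase or polarity gives the same spectrum): each edge of the basic square keeps its transition and gains one extra transition one tick before and one tick after it, so the output toggles at counts 0, 1, 8, 9, 10 and 17.

```python
import numpy as np

# Output level on each of the 18 counter states; toggles at counts 0,1,8,9,10,17.
pattern = np.array([+1, -1, -1, -1, -1, -1, -1, -1, +1,
                    -1, +1, +1, +1, +1, +1, +1, +1, -1], float)
x = np.repeat(pattern, 50)                  # hold each state, as the logic does
s = np.abs(np.fft.rfft(x))

for n in (2, 3, 5, 7, 9, 15):
    print(f"H{n:2d}: {20 * np.log10(s[n] / s[1] + 1e-15):7.1f} dBc")
```

H2, H3 and H15 come out as numerical zeros (limited only by double-precision rounding), while H5, H7 and H9 remain for the analog filter, in line with the intended uses above.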

==========

Can we convolve several times by signals that suppress different harmonics? For the maths, we can combine the convolving signals first. Suppressing one more harmonic takes 3* as many transitions. An output signal of two values needs alternating +Diracs and -Diracs in the combined convolving signal, which happens if the suppressed harmonics' ranks differ by more than 2, annoyingly. For instance the ranks 3 and 5 need more output values, hence summing resistors.

Could the starting signal be unsymmetrical, like positive for T/3 and negative for 2T/3, so as to contain no 3rd harmonic? This helps little. One 3-Dirac convolution could then suppress the 2nd harmonic but not the 4th: if aimed at rank N, it also suppresses the ranks 5*N, 7*N, 11*N, 13*N... and ranks 2 and 4 are too close for two 3-Dirac convolutions.

Can 3-Dirac convolutions combine with the previous multi-level signal? Yes... Think calmly about which rank to suppress by which method, so that no new output level is needed. Combined solutions tend to dilute the advantages while accumulating the drawbacks.

==========

Better, we can convolve the square by a signal with more alternated Diracs, putting more transitions at the output signal to cancel more harmonics. I take here the ranks 3 and 5 as an example (as the initial square already squeezes the ranks 2, 4, 6...) and extrapolate the observations to other ranks or more ranks.

FiveTransitions.png.3ea800211bce79a911f9dbb456071b6c.png

The seemingly harmless equations for the angular positions a1 and a2 of the -Diracs and +Diracs in the convolving signal are:
 1-2cos(3a1)+2cos(3a2)=0 for H3=0
 1-2cos(5a1)+2cos(5a2)=0 for H5=0
I found no analytic solution quickly. The fifth-degree equation deduced from them, with few nonzero coefficients, might exceptionally admit analytic solutions, but these wouldn't help design an electronic circuit.

Instead, I solved numerically
 a1~0.0656804005273 and a2~0.0925768876666 turns (of one fundamental period)
with the joined spreadsheet. Programming, or Maple, or supposedly Mathcad, Mathematica and others would do it too.
Squeeze35.zip (unzip, open with Gnumeric, Excel 97...)
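For readers without the spreadsheet, a minimal numerical solution of the two equations - my own sketch, assuming scipy is available; plain Newton iteration would do as well:

```python
import numpy as np
from scipy.optimize import fsolve

def residuals(a):
    a1, a2 = 2 * np.pi * a            # positions in turns -> radians
    return [1 - 2*np.cos(3*a1) + 2*np.cos(3*a2),   # H3 = 0
            1 - 2*np.cos(5*a1) + 2*np.cos(5*a2)]   # H5 = 0

a1, a2 = fsolve(residuals, x0=[0.06, 0.09])        # start near the expected values
print(a1, a2)   # ~0.0656804005 and ~0.0925768877 turns, matching the figures above
```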

These parts of a period are not exact fractions, at least with denominators up to 66 000. Fractions can approximate them, and I picked some favourable ones from the same spreadsheet to put on the previous Png. The best denominators (clock pulses per period of the output signal) against one harmonic are bad against the other, but I believe circuitry that places the transitions accurately needs one common counter, so I chose denominators decent against both harmonics. Such denominators are big: 4796 to exceed the signal purity of a Digital-to-Analog Converter. 23386 is manageable, since the output signal is limited by the propagation time mismatch roughly to the audio band: a clock around 23MHz for 1kHz output, and an unusual but feasible 460MHz for 20kHz; the signal purity resulting from this approximation alone, -122dBc before filtering, is excellent.
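A small check of that approximation step (my own sketch): round the exact positions to whole clock ticks out of a denominator T, and see what residual H3 and H5 the rounding alone leaves relative to H1. The three denominators are the ones mentioned in this and the next paragraph.

```python
import math

a1, a2 = 0.0656804005273, 0.0925768876666        # exact positions, in turns

def factor(n, p1, p2):
    """Spectrum multiplier 1 - 2cos(2pi*n*p1) + 2cos(2pi*n*p2) of the 5-transition wave."""
    return 1 - 2*math.cos(2*math.pi*n*p1) + 2*math.cos(2*math.pi*n*p2)

for T in (4796, 23386, 65027):
    p1, p2 = round(a1*T)/T, round(a2*T)/T        # positions quantised to the counter
    h1 = abs(factor(1, p1, p2))
    h3 = 20*math.log10(abs(factor(3, p1, p2))/3/h1)
    h5 = 20*math.log10(abs(factor(5, p1, p2))/5/h1)
    print(f"T={T:6d}  H3 {h3:7.1f} dBc   H5 {h5:7.1f} dBc")
```

With T=23386 this gives roughly -122dBc and -124dBc, in line with the figure quoted above.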

The circuit to squeeze two more harmonics than the square signal does is simple, notably with programmable logic. The 65027 denominator needs a 16-bit counter with complementary outputs, from which the And gates pick 16-bit configurations.

FiveTransitionsDiagram.png.b82b5f002a40e0151daea3fa73817593.png

Squeezing more harmonics, with more transitions per output period, would take more And gates. Denominators are supposedly less efficient or much bigger.

Marc Schaefer, aka Enthalpy

Posted

Continuously adjusting two transition positions lets one suppress two harmonics, but positions approximated by a counter need a fast clock. Here I propose to use 7 transitions and a slower clock.

7 transitions per fundamental half-period have shown no exact solution for both harmonics, but the number of position combinations lets chance provide some good ones.

SevenTransitions.png.2a94d730834196d5b87ad360ce27814a.png

This time, I let software make an exhaustive search over the number of clock ticks per fundamental period and the positions of the transitions. Dumb programmer but fast computer.

Search735.zip
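For the curious, here is a rough reimplementation sketch of that kind of search (I haven't seen the attached source, so details surely differ, and it assumes the same convolving-Dirac description as the 5-transition post): for each period T in clock ticks and each choice of three transition positions a<b<c below T/4, evaluate how far H3 and H5 sit below H1, and keep the best. Pure brute force, so the range is kept small here; the attached program goes much further.

```python
import math
from itertools import combinations

def spectrum_factor(T, pos, n):
    """Signed cosine sum of the convolving Diracs for harmonic n."""
    s, sign = 1.0, -1.0
    for p in pos:
        s += 2 * sign * math.cos(2 * math.pi * n * p / T)
        sign = -sign
    return s

best = (1.0, None, None)                            # (worst relative harmonic, T, positions)
for T in range(60, 260, 2):                         # small range so it runs in well under a minute
    for pos in combinations(range(1, T // 4), 3):   # 3 positions -> 7 transitions per half-period
        h1 = abs(spectrum_factor(T, pos, 1))
        if h1 < 0.5:                                # keep a usable fundamental (arbitrary cut)
            continue
        worst = max(abs(spectrum_factor(T, pos, n)) / n for n in (3, 5)) / h1
        if worst < best[0]:
            best = (worst, T, pos)

worst, T, pos = best
print(f"T={T} positions={pos}: worst of H3, H5 is {20 * math.log10(worst):.1f} dBc")
```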

3060 ticks per output period need a 60MHz clock for 20kHz: 74AC counters and pipelined logic achieve it, an Epld is more compact. The waveform squeezes H5 to 126dB below H1, and H3 to nothing. The residual amplitude, around 10^-16, fits the computer's inaccuracy and chance can't decently explain it, so it must be an algebraic solution for this harmonic.

This diagram is simpler. A toggle flip-flop reuses the logic for both output half-periods, increasingly useful with more transitions. The main counter has one bit less.

SevenTransitionsDiagram.png.a5267bd621e071abe52b5a4c436f04b6.png

Marc Schaefer, aka Enthalpy

 

Posted

Nine transitions per half-period reduce the clock frequency further: 1840 ticks per period, that's 36MHz for 20kHz, and keep H3 and H5 at 132dB and 126dB below H1.

Some solutions are exact (very probably) for one harmonic, but none is for both harmonics, up to 2500 ticks per period.

NineTransitions.png.4ed102d0c867333b1dcedfaa570e0848.png

Found again by software that exhausts the computer more than the programmer's imagination.

Search935.zip

The counter could go up for a quarter period of the output signal, down for the next quarter period, and again for the next half-period. That would use the same Nands four times per period, reducing their number. Send the adequate Nands Or'ed to the output flip-flop as previously.

With an integer number of ticks per quarter-period, such a counter must stay at the extreme values for two ticks, for instance with an extra flip-flop. Other period lengths and transition positions should spare that. No diagram, sorry.

Marc Schaefer, aka Enthalpy

Posted

Bingo with eleven transitions per half-period: three waveforms contain no third nor fifth harmonic to the computer's precision, with periods of 180, 180 and 420 clock ticks only. Very probably algebraic solutions, if someone proves it.

ElevenTransitions.png.2e5a84664bbfc306c4b0273d0e336908.png

Up to some 1000 ticks per period, the software finds only exact multiples of these three waveforms, including at their common multiple of 1260 ticks.

Search1135.zip

The former diagram suggestions save more gates here. The circuit fits nicely in programmable logic.

Marc Schaefer, aka Enthalpy

Posted

One algebraic pseudo-proof of H3=H5=0 would write cos(a*2pi/T) and the others as polynomials of cos(2pi/T) and check that the sum is zero.

This path is inaccessible to hand computation. Even the waveform with T=180 needs e=32, where the contribution to H5 is a polynomial of degree 32*5=160 in cos(2pi/T) containing only even powers.

Software could do this computation exactly using ratios of integer numbers. I call it a pseudo-proof because humans can't check the computation. They can only prove the algorithm and hope that the program, compiler, runtime libraries, OS, hardware... make no mistake.

The expansion of cos^160(2pi/T) alone has coefficients like the binomial coefficient (160 choose 80), which takes almost 160 bits to write, so the program or library must compute on rational numbers with arbitrary precision.
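A sketch of such a machine check, using sympy's exact arithmetic instead of hand-rolled rationals (my own suggestion, not the attached method): cos(k*theta) is the Chebyshev polynomial chebyshevt(k, cos theta), and the signed Dirac sum vanishes exactly if and only if, as a polynomial, it is divisible by the minimal polynomial of cos(2pi/T). Shown on the trivial 3-transition case (T=18, one position, H3); the T=180 positions from the spreadsheets can be plugged in the same way.

```python
from sympy import symbols, cos, pi, chebyshevt, minimal_polynomial, rem

def exact_harmonic_sum(T, positions, n):
    """Exact remainder of the signed Dirac sum for harmonic n; 0 means the
    harmonic is algebraically zero."""
    x = symbols('x')                               # stands for cos(2*pi/T)
    total, sign = 1, -1                            # central Dirac, then alternating signs
    for p in positions:
        total += 2 * sign * chebyshevt(n * p, x)   # cos(n*p*2*pi/T) as a polynomial in x
        sign = -sign
    return rem(total, minimal_polynomial(cos(2 * pi / T), x), x)

print(exact_harmonic_sum(18, [1], 3))              # -> 0: H3 of the counter-by-18 waveform
```

This sidesteps the 160-bit coefficients, since sympy carries exact integers of any size.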

Posted

Real logic gates cannot yield perfect square waves and introduce harmonics of their own.

If you want a pure sine, go analogue ;).

Posted

I've let the dumb program combine eleven transitions to minimize the harmonics 3, 5 and 7.

Search11357.zip

Up to T=1220, no exact solution was found, not even excellent ones, but better than a DAC: -87dBc. 1006 choices are too little to squeeze three harmonics by chance.

==================================================
   T    a    b    c    d    e   H3   H5   H7   H1
==================================================
 344   14   37   43   75   76  -71  -72  -78 0.75
 980   17   58   81  152  159 -105  -80  -86 0.79
1092   41  106  122  171  173  -95  -87  -87 0.81
==================================================

Marc Schaefer, aka Enthalpy

On 1/22/2018 at 10:29 PM, Bender said:

Real logic gates cannot yield perfect square waves and introduce harmonics of their own.

If you want a pure sine, go analogue ;).

Thanks for your interest!

I gave in the first message estimations of the distortions introduced by the gates.
I did build and measure that one, and the observations fit the estimations.
We're speaking of harmonics at -120dBc here, which isn't common.

While I have already produced a spectral purity of -143dBc by an analogue filter, this was at a fixed frequency. If the fundamental were to vary by a factor of two or more, I wouldn't like to design and build a tracking filter with such a performance.

Posted
8 hours ago, Enthalpy said:

If the fundamental were to vary by a factor of two or more, I wouldn't like to design and build a tracking filter with such a performance.

You don't need to. There are chips that pretty much do it for you. (Other chips are available)

https://www.intersil.com/en/products/timing-and-digital/dsp/dsp-digital-filters/HSP43220.html

With a 96 dB attenuation for out-of-band signals you could start with white noise and get a pretty good sine wave.

I'd still stick a low pass RC filter on the output.

As an exercise for the interested reader, imagine that you are feeding the signal into a traditional 600 ohm load and that it's an audio signal delivering 1 mW.
How good does harmonic rejection of a 1 kHz signal need to be before the Johnson noise in the input resistor exceeds the sum of the errors due to harmonics?
(i.e. at what point does improved harmonic reduction become pointless?)

 

 

Posted

Rising and falling edges propagate with different delays to logic outputs, but here's a way to eliminate the consequences.

Logic +1 and -1 states commonly decide outputs for the full duration of a symbol. An output symbol then begins with a rising or falling transition or none. Rising is done with a different delay than falling, which can be modelled by an added noise that has short peaks of duration tr-tf and amplitude +-2 at the transitions. This deterministic noise has a complicated pattern that adds unwanted harmonics to the output.

I propose instead to have one rising and one falling transition in each output symbol, and to let the position of one transition, for instance the rising one on the sketch, represent the symbol's logic state. I haven't seen this before, but didn't check either.

TrTfUnimportant.png.9d406517fb025d16a7e60d9d7d26eabd.png

Taking as a time reference the (here rising) edge that represents the logic +1:

  • The position of the edge representing the logic -1 is at a time interval that does not depend on tr-tf. Only the smaller clock jitter still has an effect.
  • The position of the (here falling) other edge is shifted by tr-tf, but this happens once per symbol independently of representing +1 or -1, so it adds a noise at the period of the symbols. Being independent of the represented sequence, it adds no unwanted harmonics.

Using such symbols, we no longer need flip-flops much faster than the analog output, which can now exceed the audio frequencies with good purity. The clock must be faster to define the positions of the (here rising) edges within a symbol, for instance 4* faster if the rising edges are at 1/4 and 3/4 of the symbol duration, but if a sequence has 180 symbols, as does one described here above, the clock is only 720* faster than the analog output.
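A toy model of this symbol format (my own sketch, with arbitrary numbers): each symbol of M samples starts with a falling edge at the boundary, and the rising edge sits at M/4 or 3M/4 depending on the logic state. A constant tr-tf mismatch then delays every falling edge by the same amount, so the error it causes is identical in every symbol, i.e. periodic at the symbol rate and independent of the data:

```python
import numpy as np

M, d = 64, 3                                   # samples per symbol, tr-tf mismatch in samples
bits = np.random.default_rng(0).integers(0, 2, size=200)

def waveform(bits, fall_delay):
    out = []
    for b in bits:
        rise = M // 4 if b else 3 * M // 4     # the rising-edge position encodes the bit
        sym = np.full(M, -1.0)
        sym[:fall_delay] = +1.0                # falling edge at the boundary, possibly delayed
        sym[rise:] = +1.0                      # rising edge, always present once per symbol
        out.append(sym)
    return np.concatenate(out)

error = waveform(bits, d) - waveform(bits, 0)  # effect of the tr-tf mismatch alone
print(np.all(error.reshape(len(bits), M) == error[:M]))   # True: same error in every symbol
```

With the conventional format, where the level holds for the whole symbol, the same mismatch produces error pulses only where the data changes, so the error follows the data and lands on its harmonics.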

This symbol representation has uses beyond the synthesis of harmonic-free sine. Notably, sigma-delta circuits can benefit from it: DAC, ADC (mind the stability) and power audio amplifiers.

Marc Schaefer, aka Enthalpy

Posted
On 1/28/2018 at 11:28 AM, John Cuthber said:

[After citing Enthalpy's "While I have already produced a spectral purity of -143dBc by an analogue filter, this was at a fixed frequency. If the fundamental were to vary by a factor of two or more, I wouldn't like to design and build a tracking filter with such a performance."]

You don't need to. There are chips that pretty much do it for you. (Other chips are available)

https://www.intersil.com/en/products/timing-and-digital/dsp/dsp-digital-filters/HSP43220.html

With a 96 dB attenuation for out-of-band signals you could start with white noise and get a pretty good sine wave.

I'd still stick a low pass RC filter on the output.

As an exercise for the interested reader, imagine that you are feeding the signal into a traditional 600 ohm load and that it's an audio signal delivering 1 mW.
How good does harmonic rejection of a 1 kHz signal need to be before the Johnson noise in the input resistor exceeds the sum of the errors due to harmonics?
(i.e. at what point does improved harmonic reduction become pointless?)

Hi JC and the others!

The nice toy from Intersil is a digital filter. Digital processing achieves about any performance, but when you convert to an analogue signal, the DAC spoils the spectrum. -60dBc signal purity is common, -80dBc is rare, and for the -120dBc or -140dBc I needed in some applications, there is no other means than an analogue filter, with components chosen for linearity.

Fortunately, my pure sine accepted a fixed frequency then, so the analogue filter was reasonably easy. "Only" a matter of isolation and clean routing for electromagnetic compatibility.

But if the frequency must vary by more than a factor of 2 or 3, you have no fixed corner frequency where you can put a filter cutoff to separate the varying fundamental from the varying harmonics. One approach, seriously difficult, is to build a (very linear) filter whose corner frequency follows the wanted fundamental frequency. The other approach, for which I propose the waveforms here, is a means to create a fundamental that is free of the lower harmonics, so that a fixed filter removing the higher harmonics suffices.

==========

Pure sine from a noise: if a filter has a very narrow bandwidth, its output resembles a sine over a limited duration. If you observe it over a longer time, the amplitude and phase of the pseudo-sine fluctuate. It's Heisenberg's energy-time uncertainty, call it bandwidth-time for electrical engineers.

==========

The exercise: it's a matter of power, independent of the ohmic value, and the bandwidth decides rather than the carrier frequency. Your power is 0dBm, the 300K noise is -174dBm/Hz. So if you measure over a 15kHz=42dBHz bandwidth you detect (depending on the certainty you want) harmonics of -132dBm = -132dBc, over 1Hz bandwidth -174dBc, and so on. Now, if it's an audio signal, our ears don't hear such a purity, and the loudspeakers introduce more distortion than the amplifier.
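The numbers, spelled out (assuming 300 K, as in the paragraph above):

```python
import math

k, T = 1.380649e-23, 300.0                 # Boltzmann constant, temperature in kelvin
dbm = lambda p: 10 * math.log10(p / 1e-3)  # power in W -> dBm

print(f"noise density: {dbm(k * T):.1f} dBm/Hz")          # about -174 dBm/Hz
for bw in (1.0, 15e3):
    # the signal is 0 dBm, so the noise floor in dBm is also its level in dBc
    print(f"bandwidth {bw:7.0f} Hz -> noise floor {dbm(k * T * bw):7.1f} dBm")
```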

I had built inductive transmitter and receiver to locate a rocket on the ground. With 1Hz bandwidth (triple heterodyne, synchronous local oscillators, and more) around 457kHz I detected -172dBm. Radioastronomers use correlation receivers that integrate over hours, and their noise temperature is more like 20K.

Posted

Digital filters designed explicitly to be tunable to a (digitally) given frequency are pretty commonplace.

So, if you start off with a fairly rubbish  sine wave (say an integrated square wave), you can get an output that's a very good sine wave.

It's often going to be noise, rather than harmonics, that dominates the "impurities" in the output.

If you don't need a very high frequency you can get a very good sine wave by "anding" two fast square waves with different frequencies together and filtering the result.

 

If you use, for example, a 1 MHz square wave and a 1.001 MHz square wave (derived from a PLL if you like) then the output is 1 kHz and most of the noise is still in the MHz region, which makes it easy to remove, even with a fixed filter. For audio purposes, you can ignore it - it's far too high to hear.

Posted

Random noise is often stronger than the harmonics, yes. But in some uses, often with a narrow band, the harmonics dominate. Measuring very weak harmonics is not easy; I already had to invest some time in it.

The and (or the xor) of two fast squares is, after removing the high frequencies, a triangle, full of harmonics.

Making a product (call it heterodyne) of two approximate sines makes a better sine. But with squares, both third harmonics beat too and produce a third harmonic of the beat frequency.
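A quick numerical look at that point (my own sketch, using the example frequencies quoted earlier): the ±1 version of the xor is just the product of the two ±1 squares, and the baseband beat it leaves is a triangle whose H3 sits only about 19 dB below the beat fundamental - exactly the third-harmonics-beating-together effect mentioned above.

```python
import numpy as np

fs, dur = 64e6, 1e-3                          # 64 MHz sampling, 1 ms -> 1 kHz FFT bins
t = np.arange(int(fs * dur)) / fs
sq = lambda f: np.where(np.sin(2 * np.pi * f * t) >= 0, 1.0, -1.0)

x = sq(1.000e6) * sq(1.001e6)                 # +-1 xor = product of +-1 squares
s = np.abs(np.fft.rfft(x))                    # bin k corresponds to k * 1 kHz

for k in (3, 5):
    print(f"H{k} of the 1 kHz beat: {20 * np.log10(s[k] / s[1]):6.1f} dBc")   # ~ -19, -28
```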

Posted

  

I had wanted T=4n for bad reasons. T=2n suffices and enables new combinations.

Improving slightly over Jan 22, 2018: T=374 gives weaker H3, H5, H7 with 11 transitions than T=344, while the bigger T=856 and T=1092 still outperform both. Still the dumb software, run up to T=1040.

==================================================
   T    a    b    c    d    e   H3   H5   H7   H1
==================================================
 374   14   32   34   42   46  -73  -91  -82 0.82
==================================================

----------

I've tried 15 transitions to minimize H3, H5, H7 with T=2n. Only up to T=434, which less stupid software would relieve.

15357.cpp

======================================================
 H3   H5   H7    H1     T   a   b   c   d   e   f   g
======================================================
-75  -73  -72   0.55  222   4  12  18  31  35  49  51
-84  -81  -96   0.81  368  11  25  30  45  46  49  51
-80 -inf -inf   0.49  420   7  18  37  47  70  77 102
-100 -73  -75         432  11  30  32  33  41  76  80
======================================================

Found no exact solution: only -81dBc with T=368. The number of trials is too small to squeeze three harmonics by chance.

Marc Schaefer, aka Enthalpy

 

  • 3 weeks later...
Posted

At last, +-1 waveforms that reduce nicely H3, H5 and H7. They take 21 transitions per half-period but only T=210.

  H3   H5   H7    H1   T   a   b   c   d   e   f   g   h   i   j
=================================================================
-104 -inf -inf  0.73 210   2   7  14  16  19  20  26  28  42  43 <<<<<
-110 -inf -inf  0.23 210   8  10  14  22  32  34  38  41  42  46 <<<<<
=================================================================
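These two rows are easy to re-check numerically (my own sketch, not the search program). I read the columns as a ±1 waveform whose transitions sit at ticks 0, ±a, ±b, ... around each edge of a basic square of period T, so each harmonic n is the square's harmonic times the signed sum 1 - 2cos(nwa) + 2cos(nwb) - ..., w = 2pi/T; that reading reproduces both the H1 column and the dBc figures.

```python
import math

def S(T, pos, n):
    """Signed cosine sum of the convolving Diracs for harmonic n."""
    s, sign = 1.0, -1.0
    for p in pos:
        s += 2 * sign * math.cos(2 * math.pi * n * p / T)
        sign = -sign
    return s

for pos in ([2, 7, 14, 16, 19, 20, 26, 28, 42, 43],
            [8, 10, 14, 22, 32, 34, 38, 41, 42, 46]):
    h1 = abs(S(210, pos, 1))
    print(f"H1={h1:4.2f}  " + "  ".join(
        f"H{n}={20*math.log10(abs(S(210, pos, n))/n/h1 + 1e-18):6.1f}dBc" for n in (3, 5, 7, 9)))
```

Values around -280 dBc and below are just double-precision rounding, i.e. the algebraic zeros suspected below.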

The amplitudes of H5 and H7 are algebraic zeros almost certainly. The first waveform has its H9 some 23dB below H1, while the second has a weaker H1, about 10dB below H9.

A 6-bit up-down counter takes only 11 big And gates to define all the transitions.

--------------------

The harmonics that are zero to the rounding accuracy with 64-bit floats remain so with 80-bit floats, both here and for the previous waveform that suppresses H3 and H5 using 11 transitions.

Marc Schaefer, aka Enthalpy

  • 2 weeks later...
Posted

The search programme could gain 10dB on H3, H5 and H7 with +-1 waveforms using 23, 25 and 27 transitions per half-period. Still the dumb algorithm, but the source is better written.

Search27357b.zip

Here's a selection of waveforms, with 21 transitions too. Among even T, 210 stands out by far. The H1 amplitude refers to a square wave, while H3, H5, H7, H9 are in dBc.

  H1   H3   H5   H7   H9 |   T  a  b  c  d  e  f  g  h  i  j  k  l  m
=====================================================================
0.73 -104  nil  nil  -23 | 210  2  7 14 16 19 20 26 28 42 43
0.34 -114  nil  nil    6 | 210  3  6  7 14 22 35 36 38 43 45 46
0.59 -111  nil  nil  -19 | 210  5  6 10 14 16 17 19 20 29 32 44 46
0.38 -114  nil  nil   -8 | 210  1  2  4  6 10 19 25 34 35 39 41 43 46
=====================================================================

Marc Schaefer, aka Enthalpy

Posted

The wide Nand gates that detect the transition times from the counter's outputs are welcome with programmable logic. With packages of fixed logic instead, decoding subgroups of counter outputs allows small Nands. This diagram for T=210 and 27 transitions per half-period needs only 16 packages. The by-105 counter and the odd number of transition locators make two cycles per sine period; the output JK rebuilds a complete period. The logic can be pipelined for speed; think calmly about which state decides the reset (or better, the preload), and then about the other transitions.

DecodedCounter.png.12c91b5392448a74e365c4f3a1b4d5fe.png

--------------------

Alternatively, diode-and-resistor circuits can make the logic between a 4-to-16 decoder and an 8-to-1 multiplexer. Few logic packages and 1 diode per transition. Or use a tiny PROM, easy to address by the counter.

--------------------

We can also split the counter into subfactors, like T=210=6*5*7=14*15. This enables Johnson counters, which comprise D flip-flops plus a few gates for N>=7, and are easier to decode and faster. For a count enable, feed the flip-flop outputs back through a multiplexer.

Traditionally, the subcounters run at different paces, and the carry outputs of the faster subcounters determine the count enable inputs of the slower ones. We can run them all at full speed instead: with factors that are relatively prime, they pass through all combinations of states in a period.

PrimeSubcounters.png.317e6c0dc5fd00833d7ac321d5e1c91a.png
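A one-line check of that claim (it is just the Chinese remainder theorem at work): three subcounters of mutually prime lengths 6, 5 and 7, all clocked together, visit every one of the 6*5*7 = 210 combined states exactly once per period, so their joint state still identifies each tick of the T=210 waveform.

```python
states = {(tick % 6, tick % 5, tick % 7) for tick in range(210)}
print(len(states))    # 210: all combinations occur, one per tick
```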

Subcounters ease several phased sine outputs, at 90°, at 120° and 240°... For instance with T=210=6*5*7, common logic can locate transitions from the /5 and /7 subcounters, and these transitions serve not only twice per period, but also for the three sines, as switched by the /6 subcounter. To my incomplete understanding, Or gates can group several located transitions if their interval is no multiple of 6. Notice the T states and RS flip-flops, not T/2 and JK, to ensure the relative phases.

A PROM is a strong contender for phased sine outputs.

--------------------

Here's a subdiagram to make tr-tf unimportant, as proposed here on Jan 28, 2018. 4T clock ticks per sine period in this example, adding a /4 subcounter whose carry out drives the count enable of the other (sub)counter(s). Or use a PROM 4* bigger.

TrTfDiagram.png.248fe237deff9247cc0ee6b88816e2ef.png

Marc Schaefer, aka Enthalpy

Posted

We can combine both methods to reduce more harmonics: add or subtract optimized +-1 waveforms with the proper phase shift. This combines the drawbacks, but also the advantages: for instance the number of summing resistors doubles for each suppressed harmonic, which at some point a +-1 waveform does more cheaply.

----------

Voltage differences appear in power electronics at full bridges and three-phase bridges. If two outputs are out of phase by half a period minus a fourteenth of a period, the load between them sees no H7, so using the waveforms of Jan 13, 2018 to Jan 21, 2018 that squeeze H3 and H5, the first strong one is H9. More commonly, the outputs can lag by 120°, which suppresses H3 and H9. This is done with square waves and improves with the coming +-1 waveforms that squeeze H5 and H7, leaving H11 as the first strong one.

BridgesFullThree.png.ea780fd4bf5989071a1a45907c02234c.png

  • Three square waves at 0°, 120° and 240° were common with thyristors, especially for very high power. They need an additional regulation of the supply voltage, often a buck.
  • With Igbt, sine waves made by Pwm are more fashionable. They need less filtering, avoid cogging at motors, adjust the output amplitude, but suffer switching losses.
  • The more elaborate +-1 waveforms I propose are intermediate. They need an additional regulation, but have small switching losses, and little filtering avoids harmonics and cogging.

Maybe useful for very high power, to minimize switching losses and save on costly filters. I see an emerging use for quick electric motors:
http://www.scienceforums.net/topic/73798-quick-electric-machines/

 

  • Machine tools demand a fast spindle hence a high three-phase frequency;
  • Centrifugal pumps and compressors demand fast rotating motors too;
  • Electric aeroplanes need a high three-phase frequency to lighten the motor, either with a small fast motor and a gear, or with a large ring motor at the fan's speed but with many poles for a light magnetic path.

The high frequency (several kHz) is hard to obtain by Pwm, as switching losses rise. But for fans, compressors, pumps... whose speed varies little, a fixed LC network filters my waveforms to a nice sine.

Rfid generators at low frequencies might perhaps benefit from such waveforms too, since they must filter their harmonics heavily to avoid interference, which is costly. RF transmitters maybe, for LW.

----------

The selected +-1 waveforms in this table squeeze H5 and H7 since the phased sum does the rest. 7 transitions per half-period ideally suppress H5 and H7 with T=210, more transitions bring no obvious advantage in this quest.

Power electronics tends to reduce the transitions that create switching losses, and wants a strong H1 voltage, while spectral purity isn't so stringent, so the table's top fits better, while the bottom is more for signal processing. One single transition more than the square wave puts the H5 voltage at 6% of the fundamental, two transitions at 0.7%. At 2kHz, 100ns accuracy on the transitions suffices easily, so a specialized oscillator isn't mandatory. 0.97 and 0.93 are fractions of the square wave's H1 voltage, and the usual coefficients like sqrt(3)/2 still apply.

  H1   H3   H5   H7   H9  H11 |   T  a  b  c  d  e
===================================================
0.97  -12  -25  -27  -19  -16 |  36  1
0.93  -15  -43  -43  -21  -14 | 180  8 11
0.90   -9  nil  -64  -16  -16 | 180  5 41 42
0.93  -16  nil  nil  -30  -39 | 210  5 14 16
0.90  -18  nil  -51  -21  -12 | 120  1  4 11 12
0.87   -8  nil  -61  -21  -15 | 120  2  3  4 26 27
0.77  -10  nil  -77   -7  -15 | 120  4 16 17 28 29
0.93  -15  nil  -77  -23  -17 | 180  1  7  9 12 13
===================================================
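The combined effect is easy to check with the same reading of the waveform as before (my sketch): summing two copies of the T=210, a,b,c = 5,14,16 output with a 60° lag multiplies harmonic n by |1 + exp(-j*n*pi/3)|, which vanishes for n = 3, 9, 15..., while the waveform itself already has no H5 or H7, so H11 is indeed the first significant harmonic left.

```python
import cmath, math

def S(T, pos, n):
    """Signed cosine sum of the convolving Diracs for harmonic n."""
    s, sign = 1.0, -1.0
    for p in pos:
        s += 2 * sign * math.cos(2 * math.pi * n * p / T)
        sign = -sign
    return s

T, pos = 210, (5, 14, 16)
amp = lambda n: abs(S(T, pos, n)) * abs(1 + cmath.exp(-1j * n * math.pi / 3)) / n
for n in (3, 5, 7, 9, 11, 13):
    print(f"H{n:2d}: {20 * math.log10(amp(n) / amp(1) + 1e-18):7.1f} dBc")
```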

Marc Schaefer, aka Enthalpy

Posted

The stone-old Proms like the 2716 make the waveforms easily, as they receive all addresses at once on pins distinct from the data. They are still available in small quantities. I didn't check whether more recent components exist nor how they are addressed.

Prom.png.0745c96eb4029582846bf69411ef1c77.png

Of the diagrams, the left example provides the drive signals for a three-phase power stage driving a motor, a transformer and transport line... The Prom behaves statically, so a counter and a set of flip-flops suffice. One of the eight output bits defines each waveform, possibly more than three to stack transformers. Only the switching losses in the power components limit the number of transitions per cycle.

The right example makes a sine exempt of H5 and H7 thanks to the chosen transitions, and of H3 and H9 by summing two waveforms shifted by 60°. T=210 a=5 b=14 c=16 is a logic candidate here, though more transitions can attenuate more harmonics, alone or helped by the resistors. More waveforms and resistors can attenuate more harmonics too. The waveforms can be longer too, for instance to create new phase shifts. Counting by 840 for T=210 here lets the Prom store 0001 and 0111 for each symbol to make tr-tf unimportant.

The dinosaur Proms consume power and limit the clock to about 10MHz hence the sine to 50kHz. Newer Proms (in a programmable logic chip?) could be much faster, but the general solution to speed is logic rather than Proms.

Marc Schaefer, aka Enthalpy

Posted

On the right side of the last message's diagram, I suggested separate supplies for the output flip-flops.

While it can be useful to filter individual supply lanes for the flip-flops (in separate packages with LC cells), the phased outputs attenuate the target harmonics only if the supply potentials match very accurately, and this is best obtained from a common regulator.

Posted

Here's what the waveform with T=210 a=5 b=14 c=16 from Mar 04, 2018 9:05 pm looks like. Adding two of them with a 60° lag or subtracting them with a 120° lag gives the same wave.

Waveform210_51416.png.93fa1ad63f23014e99c5e7004e76ade9.png

Electric motors sometimes run slower: at start on an electric plane, more often on an electric or hybrid car. The same waveform and filter would then drive the motor with a jagged voltage, but the drive electronics can use the same power components in PWM mode when running slowly.

For an electric motor, a counter with a fixed frequency suffices to place the transitions. Maybe a fast microcontroller can create the waveforms directly from its clock, or the controller tells the times of the coming transitions to comparators that refer to one fast big counter. This can be integrated on a special chip, optionally the same as the controller.

Marc Schaefer, aka Enthalpy

  • 2 weeks later...

  • 6 months later...
Posted

Here, at last, is an algebraic proof that H3=H5=0 for both T=180 sequences with 11 transitions
scienceforums on Jan 21, 2018 9:58 pm

Of the convolving sequence explained there
scienceforums on Jan 13, 2018 6:38 pm
I represent the complex amplitude of a given harmonic, H3 or H5, at each position of a Dirac, including the ones before zero. These amplitudes are also the contributions of each Dirac to said harmonic, by Fourier series. The angles are computed modulo one turn, that is 180 ticks here.

ProofT180aH3H5.png.3fff5eb5d13c9b56d9b0ed12a17300b0.png

ProofT180bH3H5.png.87572c7d2e995df1be6a06a6163cd727.png

Then, I use the known property that in a (commutative?) field, here the complex numbers, the sum of the n successive powers of a primitive nth root of 1 is zero.

SumRootsOne.png.3772bf3c8afa64b9ff0c010a647efdfb.png

On the diagrams above, I use n=3 and n=9. n=2 too if you decide that sqrt(1)=-1, or rather use 1-1=0. The 3rd and 9th roots of 1 are exp(j*2pi/3) and exp(j*2pi/9) in the complex plane. Any set of points regularly spaced on the unit circle has a null sum, including if all are multiplied by the same amount exp(j*angle).
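For reference, the property is just the finite geometric series, in the same notation:

$$\sum_{k=0}^{n-1} e^{\,j\,2\pi k/n} \;=\; \frac{e^{\,j\,2\pi}-1}{e^{\,j\,2\pi/n}-1} \;=\; 0 \qquad (n \ge 2),$$

and the sum stays zero when every term is multiplied by a common factor exp(j*angle), which is why any set of equally weighted, regularly spaced points on the unit circle sums to zero.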

  • The positions of the Diracs in the sequence are multiplied by the harmonic's order, modulo one turn.
  • I could decompose the constellations into sums of 2, 3 or 9 points regularly spaced. The set of 9 could also be 3 sets of 3. Spacing is 20, 60 and 90 ticks here, for 180 ticks per period, displayed as 40°, 120° and 180°. Note that these sequences occupy the point 1 twice.
  • Because the convolving sequence is even, the imaginary part of all harmonics' amplitude is zero. Interesting is the real part of the sums.
  • Conjugating some points may help to form regularly spaced sets. Here it wasn't necessary.
  • Adding to a constellation virtual points whose sum is zero may help form several regularly spaced sets. Here it wasn't necessary.

I don't know whether all good convolving sequences must be constructed that way, and leave this question to a mathematician.

Software searching only convolving sequences constructed that way may be faster than what I did, but programming is not trivial.

Here is the spreadsheet. The original xml is for Gnumeric, the exported xls for Excel loses some formatting.
ProofT180abH3H5.zip

Marc Schaefer, aka Enthalpy
