Tuesday, August 15, 2017

Analyzing "fake" solar eclipse viewing glasses - how good/bad are they?

Note:  Please read and heed the warnings in this article.

About a month and a half ago I ordered some "Eclipse Viewing Glasses" from Amazon - these being those cardboard things with plastic filters.  When I got them, I looked through them and saw that they were very dark - and in looking briefly at the sun through them they seemed OK.
Figure 1:
The suspect eclipse viewing glasses.
These are the typical cardboard frame glasses with very dark plastic lenses.
Click on the image for a slightly larger version.

I was surprised and chagrined when, a few days ago, I got an email from Amazon saying that they were unable to verify to their satisfaction that the supplier of these glasses had, in fact, used proper ISO rated filters and were refunding the purchase price. This didn't mean that they were defective - it's just that they couldn't "guarantee" that they weren't.

I was somewhat annoyed, of course, that this had happened too close to the eclipse for me to get "proper" replacement glasses in time, but I then started thinking:  These glasses look dark - how good - or bad - are they?

I decided to analyze them.


What follows is my own, personal analysis of "potentially defective" products that, even when used properly, may result in permanent eye damage.  This analysis was done using equipment at hand and should not be considered to be scientifically rigorous or precise.

DO NOT take what follows as a recommendation - or even an inference - that the glasses that I tested are safe, or that if you have similar-looking glasses, that they, too, are safe to use!

Figure 2:
The 60 watt LED light used for testing.  This "flashlight" consists of
a 60 watt Luminus white LED with a "secondary" lens placed in front of it.
The "primary" lens (a 7" diameter Fresnel) used to collimate the beam
was removed for this testing.
Click on the image for a larger version.
This analysis is relevant only to the glasses that I have and there is no guarantee that any glasses that you have are similar.  If you choose to use similar glasses that you might have, you are doing so at your own risk and I cannot be held liable for your actions!


White Light transmission test:

I happen to have on hand a homemade flashlight that uses a 60 watt white LED that, when viewed up close, would certainly be capable of causing eye damage when operating at full power - and this seemed to be a good, repeatable candidate for testing.  For measuring the brightness I used a PIN photodiode (a Hamamatsu S1223-01):  Relative intensity could be ascertained by measuring the photon-induced current with and without the filter in place.

Using my trusty Fluke 87V multimeter, with the photodiode placed 1/4" (about 6mm) away from the light's secondary lens I consistently measured a current of about 53 milliamps - a significantly higher current than I can get from exposing this same photodiode to the noonday sun.  In the darkened room, I then had the challenge of measuring far smaller currents.

Switching the Fluke to its "Hi Resolution" mode, I had, at the lowest range, a resolution of 10 nanoamps - but I was getting a consistent reading of several hundred nanoamps even when I covered the photodiode completely.  It finally occurred to me that the photodiode - being a diode - might be picking up stray RF from radio and TV stations as well as the ever-present electromagnetic field from the wires within our houses so I placed a 0.0022uF capacitor across it and now had a reading of -30 nanoamps, or -0.03 microamps.  Reversing the leads on the meter did not change this reading so I figured that this was due to an offset in the meter itself so I "zeroed" it out using the meter's "relative reading" function.  Just to make sure that all of the current that I was measuring was from the front of the photodiode I covered the back side with black electrical tape.
Figure 3:
A close up of the S1223-01 photodiode and capacitor in front of the LED.
The bypass capacitor was added to minimize rectification of stray RF
and EM fields which caused a slight "bias" in the low-current readings.
Click on the image for a larger version.

I then placed the plastic film lens of the glasses in front of the LED, atop the flashlight's secondary lens - and it melted.


Moving to a still-intact "unmelted" portion of the lens, I held it in front of the photodiode this time - again about 1/4" away - and got a consistent reading of 0.03-0.04 microamps, or 30-40 nanoamps.  Re-doing this measurement several times, I verified the numbers.

Because the intensity of the light is proportional to the photodiode current, we can be reasonably assured that the ratio of the "with glasses" and "without glasses" currents are indicative of the amount of attenuation afforded by these glasses, so:

53mA = 5.3*10^-2 amps - direct LED, no glasses
40nA = 4.0*10^-8 amps - through the glasses

The ratio is therefore:

5.3*10^-2 / 4.0*10^-8 = 1325000

What this implies is that there is a 1.325 million-fold reduction in the brightness of the light.  Compare this with #12 welding glass, which has about a 30000 (30k)-fold reduction of visible light and is about the absolute minimum considered to be "safe" for direct viewing, while #14 offers about a 300000 (300k)-fold reduction.  According to various sources (NASA, etc.) a reduction of 100000 (100k)-fold will yield safe direct viewing.  The commonly-available #10 welding glass offers only "about" a 10000 (10k)-fold reduction at best and is not considered to be safe for direct solar viewing.
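As a rough cross-check of these numbers, the measured currents can be converted to an optical density and an approximate welding-shade equivalent using the standard shade = (7/3) × OD + 1 relation - a back-of-the-envelope sketch, not a safety determination:

```python
import math

# Measured photodiode currents taken from the text above.
i_direct = 53e-3    # 53 mA - LED direct, no filter
i_filtered = 40e-9  # 40 nA - through the glasses

attenuation = i_direct / i_filtered        # ~1.33 million-fold
optical_density = math.log10(attenuation)  # ~6.1

# Welding shade numbers relate to optical density (approximately) by
# shade = (7/3) * OD + 1 - the relation behind the #10/#12/#14 figures.
shade = (7.0 / 3.0) * optical_density + 1.0

print(f"Attenuation: {attenuation:,.0f}-fold")
print(f"Optical density: {optical_density:.2f}")
print(f"Approximate equivalent welding shade: #{shade:.0f}")
```

By this reckoning the glasses land somewhere around shade #15 - darker than #14 - at least at the wavelengths where the LED emits and the photodiode responds.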
Figure 4:
The typical spectral output of a "white" LED (blue line) and
a typical silicon PIN photodiode (black line).  The distinct peak
is from the internal blue LED while the "yellow" Ce:YAG
phosphors emit longer wavelengths to produce a "white" light.
As can be seen, the sensitivity of the photodiode increases
with longer wavelengths while the spectral output of a white
LED drops.
Click on the image for a larger version.

This reading can't be taken entirely at face value as it assumes that the solar glasses have an even color response over the visible range - but in looking through them, they are distinctly red-orange.  Because the spectrum of the white LED is mostly red-yellow plus some blue (white LEDs use a blue LED and a phosphor to produce the rest of the spectrum) with very little infrared, we are doing a bit of an apples-to-oranges comparison.

In addition to this, the response of the photodiode itself is not "flat" over the visible spectrum, peaking in the near-infrared and trailing off at shorter wavelengths - that is, toward the blue end.  Figure 4, above, shows the relative peak light outputs of a typical "white" LED overlaid with the response of the photodiode and one can see that they are somewhat complementary.

To a limited degree, these two different curves will negate each other in that the sensitivity of the photodiode is tilted toward the "red" end of the spectrum.  With the inference being that these glasses may be "dark enough", I wanted to take some more measurements.

Photographing the sun:

As it happens I have a Baader ND 5.0 solar film filter for my 8" telescope to allow direct, safe viewing of the sun via the telescope.  Because I'd melted a pair of glasses in front of the LED, I wasn't willing to make the same measurement with this (expensive!) filter so I decided to place each filter in front of the camera lens and photograph the sun using identical exposure settings as seen in Figure 5, below.

Figure 5:
The Baader filter on the left and the suspect glasses on the right.
These pictures were taken through a 200mm zoom lens using a Sigma SD-1 camera set to ISO 200 at F8 and 1/320th of a second.  Both use identical, fixed "Daylight" white balance.
Click on the image for a larger version.

What is very apparent is that the Baader filter is pretty much neutral in tone while the glasses are quite red.  To get a more meaningful measurement, I used an image manipulation program to determine the relative brightness of the R, G and B channels with their values rescaled to 8 bits:  Because the camera that I used - a Sigma SD-1 - actually has true RGB channels with its Foveon sensor rather than the more typical Bayer matrix, these levels are reasonably accurate.

For the Baader filter:
  • Red = 163
  • Green = 167
  • Blue = 162
For the glasses:
  • Red = 211
  • Green = 67
  • Blue = 0 
Again, this seems to confirm that the glasses are quite red - with a bit of yellow thrown in, which explains the orange-ish color.  Clearly, the glasses let in more red than the Baader, but the visible energy overall would appear to be roughly comparable using this method.
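One can crudely check "roughly comparable" by linearizing these gamma-encoded pixel values (assuming a gamma of 2) and summing the channels:

```python
# A crude linearization of the Figure 5 channel readings:  pixel values
# are gamma-encoded, so - assuming a gamma of 2 - squaring the normalized
# values approximates the actual (linear) light in each channel.
baader = (163, 167, 162)   # R, G, B through the Baader filter
glasses = (211, 67, 0)     # R, G, B through the suspect glasses

def linear_sum(rgb, gamma=2.0):
    # Sum of the gamma-decoded (approximately linear) channel values
    return sum((v / 255.0) ** gamma for v in rgb)

ratio = linear_sum(baader) / linear_sum(glasses)
print(f"Baader/glasses visible-energy ratio: ~{ratio:.1f}:1")
```

This puts the two within a factor of two of each other - loosely consistent with "roughly comparable" - with the caveat that a simple channel sum ignores the eye's varying sensitivity across the spectrum.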

What the eye cannot see:

It is not just the visible light that can damage the eye's retina, but also ultraviolet and infrared - and these wavelengths are a problem because their invisibility will not trigger the normal, protective pupillary response.  I have no easy way to measure these glasses' attenuation of ultraviolet, but given the complete lack of blue - and the fact that many plastics do a pretty good job of blocking UV - I wasn't particularly worried about it.  If one were worried, ordinary glasses or a piece of polycarbonate plastic would likely block much of the UV that managed to get through.

Infrared is another concern - and the sun puts out a lot of it!  What's more is that many plastics - even strongly tinted ones - will transmit near infrared quite easily even though they may block visible light.  An example of this is the "theater gel" used to color stage lighting:  These gels can have a deep hue, but most are nearly transparent to infrared - and this also helps prevent them from instantly bursting into flame when placed in front of hot lights.

Because of this I decided to include near-infrared in my measurements.  In addition to my Sigma SD-1, I also have an older SD-14 and a property of both of these cameras is that they have easily-removable "hot mirrors" - which double as dust protectors.  What this means is that in a matter of seconds, one can adapt the camera to "see" infrared.  Using my SD-14 (that camera is mostly retired and I didn't want to get dust on the SD-1's sensor) I repeated the same test with the hot mirror removed as can be seen in Figure 6.

Figure 6:
The Baader filter on the left and the glasses on the right showing the relative brightness when photographed in visible light + near infrared.
The camera was set to ISO 100 at F25 and 1/400th of a second using the same 200mm lens as Figure 5.
Click on the image for a larger version.

According to published specifications (see this link) the response of the red channel of the Foveon sensor is fairly flat from about 575 to 775 nanometers and useful out to a bit past 900 nanometers, while the other channels - particularly the blue - have a bit of overlapping response; the hot mirror itself very strongly attenuates wavelengths longer than 675 nanometers.  What this means is that by analyzing the pictures in Figure 6, we can get an idea as to how much infrared the respective filters pass by noting the 8-bit converted RGB levels:

For the Baader filter:
  • Red = 111
  • Green = 0
  • Blue = 62
For the glasses:
  • Red = 224
  • Green = 0
  • Blue = 84 
While the cameras used for Figures 5 and 6 aren't the same, they use the same imager technology, which is known to have the same spectral response.  Taking into account the ISO differences, there is an approximate 3-4 F-stop difference between the two exposures (some of this due to the morning sun being higher when the infrared pictures were taken), indicating that there is a significant amount of infrared energy - particularly manifest by the fact that the exposure had to be reduced to the point where the green channel no longer shows any reading with the Baader filter.

(Follow this link for a comparison of the transmission spectra of common filter media and follow this link for a discussion about the Baader filter in particular.)

What is clear is that the glasses let in a significant amount more infrared than the Baader filter within the response curve of the sensor - but by how much?

The data indicates that the pixel brightness of the "Red+IR" channel of the glasses is twice that of the Baader filter, but if one accounts for the gamma correction applied to photographic images (read about that here - link) - presuming this gamma value to be 2 - we can determine that the actual difference between the two is closer to 4:1.
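The gamma arithmetic looks like this (assuming, as above, a gamma of 2):

```python
# Undoing the gamma encoding to compare the "Red+IR" channel readings.
# Assuming a gamma of 2, a ~2:1 pixel-value ratio becomes ~4:1 in light.
glasses_red = 224   # red channel, suspect glasses (Figure 6)
baader_red = 111    # red channel, Baader filter (Figure 6)
gamma = 2.0

pixel_ratio = glasses_red / baader_red              # ratio as encoded
linear_ratio = (glasses_red / baader_red) ** gamma  # ratio in actual light

print(f"Pixel-value ratio: {pixel_ratio:.2f}")
print(f"Linearized (light) ratio: {linear_ratio:.2f}")
```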

What does all of this mean?

In terms of visible light, these particular "fake" glasses appear to transmit about the same amount of light as the known-safe Baader filter - although the glasses aren't offering true color rendition, putting a distinct red-orange cast on the solar disk.  In the infrared range - likely between 675 and 950nm - the glasses seem to permit about 4 times the light of the Baader filter.  The "white light" measurements from the LED put the two in the same general league overall.

At this point it is worth reminding the reader that this Baader filter is considered to be "safe" when placed over a telescope - in this case, my 8" telescope - as the various glass/plastic lenses will adequately block any stray UV.  What this means is that despite the tremendous light-gathering advantage of this telescope over the naked eye, the Baader filter still has a generous safety margin.  (It should be noted that this Baader film is not advertised to be "safe for direct viewing".  Their direct-viewing film has stronger blue/UV and IR blocking.)
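To put that "tremendous light-gathering advantage" in numbers - a quick sketch assuming a 203mm (8") clear aperture and a typical 7mm dark-adapted pupil (both round figures, not measurements):

```python
# Rough light-gathering advantage of an 8" telescope over the naked eye.
# The 7mm pupil is a typical figure for a dark-adapted eye - an assumed
# value, not a measurement.
aperture_mm = 203.0  # 8 inch aperture
pupil_mm = 7.0       # assumed dark-adapted pupil diameter

# Light gathered scales with the area, so with the square of the diameter
advantage = (aperture_mm / pupil_mm) ** 2
print(f"Light-gathering advantage: ~{advantage:.0f}x")
```

In other words, the telescope collects on the order of 800 times as much light as the eye alone - and the Baader film still keeps the view safe.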

What may be inferred from this is that, based solely on the measurements that I obtained, these glasses seem to let in about 4 times as much infrared (e.g. >675nm) light as the Baader filter.

Again, I did not have the facility to determine if these glasses adequately block UVA/B radiation - but the combination of these glasses and good-quality sunglasses will block UV A/B - and provide additional light reduction overall.

Will I use them?

Based on my testing, these particular glasses seem to be reasonably safe in most of the ways that matter, but whatever "direct viewing" method I choose (e.g. these glasses or other alternatives) I will be conservative:  Taking only occasional glances.

(I will acquire some "bona-fide" glasses and analyze them when I get a chance.)

* * *
Once again:

What preceded was my own, personal analysis of potentially defective products that, even when used properly, may result in permanent eye damage.  This analysis was done using equipment at hand and should not be considered to be scientifically rigorous or precise.

DO NOT take what preceded as a recommendation - or even an inference - that the glasses that I tested are safe, or that if you have similar-looking glasses, that they, too, are safe to use!

This analysis is relevant only to the glasses that I have and there is no guarantee that any glasses that you have are similar.  If you choose to use similar glasses that you might have, you are doing so at your own risk and I cannot be held liable for your actions!



Thursday, July 20, 2017

A 173 mile (278km) all-electronics, FSO (Free Space Optical) contact: Part 1 - Scouting it out

Nearly 10 years ago - in October, 2007, to be precise - we (exactly "who" to be mentioned later) successfully managed a 173 mile, Earth-based all-electronic two-way contact between two remote mountain ranges in western Utah.

For many years before this I'd been mulling over in the back of my mind various ways that optical ("lightbeam") communications could be accomplished over long distances.  Years ago, I'd observed that even a modest, 2 AA-cell focused-beam flashlight could be easily seen over a distance of more than 30 miles (50km) and that sighting even the lowest-power Laser over similar distances was fairly trivial - even if holding a steady beam was not.  Other than keeping such ideas in the back of my head, I never really did more than this - at least until the summer of 2006, when I ran across a web site that intrigued me, the "Modulated Light DX page" written by Chris Long (now amateur radio operator VK3AML) and Dr. Mike Groth (VK7MJ).  While I'd been following the history and progress of such things all along, this and similar pages rekindled the intrigue, causing me to do additional research, and I began to build things.

Working up to the distance...

Over the winter of 2006-2007 I spent some time building, refining, and rebuilding various circuits having to do with optical communications.  Of particular interest to me were circuits used for detecting weak optical signals and it was those that I wanted to see if I could improve.  After considerable experimentation, head-scratching, cogitation, and testing, I was finally able to come up with a fairly simple optical receiver circuit that was at least 10dB more sensitive than other voice-bandwidth circuits that were out there.  Other experimentation was done on modulating light sources and the first serious attempt at this was building a PIC-based PWM (Pulse-Width Modulation) circuit followed, somewhat later, by a simpler current-linear modulator - both being approaches that seemed to work extremely well.

After this came the hard part:  Actually assembling the mechanical parts that made up the optical transceivers.  I decided to follow the field-proven Australian approach of using large, plastic, molded Fresnel lenses in conjunction with high-power LEDs for the source of light emissions with a second parallel lens and a photodiode for reception and the stated reasons for taking this approach seemed to me to be quite well thought-out and sound - both technically and practically.  This led to the eventual construction of an optical transceiver that consisted of a pair of identical Fresnel lenses, each being 318 x 250mm (12.5" x 9.8") mounted side-by-side in a rigid, wooden enclosure comprising an optical transceiver with parallel transmit and receive "beams."  In taking this approach, proper aiming of either the transmitter or receiver would guarantee that the other was already aimed - or very close to being properly aimed - requiring only a single piece of gear to be deployed with precision.
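The reason precise aiming matters so much with a non-Laser system of this sort comes down to simple beam geometry:  the transmitted beam's divergence is roughly the emitter size divided by the lens' focal length.  A sketch with purely illustrative numbers (the emitter size and focal length below are assumptions, not the actual figures for this transceiver):

```python
# Rough beam geometry for an LED-plus-Fresnel-lens transmitter.  The
# emitter size and focal length here are illustrative assumptions - NOT
# the actual figures for the transceiver described above.
emitter_mm = 3.0         # assumed size of the LED die/phosphor
focal_length_mm = 280.0  # assumed focal length of the Fresnel lens

# Small-angle approximation:  divergence = source size / focal length
divergence_rad = emitter_mm / focal_length_mm
path_km = 173 * 1.609    # the eventual 173-mile path

spot_width_km = divergence_rad * path_km
print(f"Divergence: ~{divergence_rad * 1000:.1f} milliradians")
print(f"Beam width at {path_km:.0f} km: ~{spot_width_km:.1f} km")
```

Even a beam a few kilometers wide at the far end demands pointing to a small fraction of a degree - hence the value of rigidly-parallel transmit and receive lenses, where aiming one aims the other.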

After completing this first transceiver I hastily built a second transceiver to be used at the "other" end of the test path.  Constructed of foam-core posterboard, picture frames and inexpensive, flexible vinyl "full-page" magnifier Fresnel lenses, this transceiver used my original, roughly-repackaged prototype circuits for its optical transmitter and receiver.  While it was neither pretty nor capable of particularly high performance, it filled the need of being the "other" unit with which communications could be carried out for testing:  After all, what good would a receiver be if there were no transmitters?

On March 31, 2007 we completed our first 2-way optical QSO with a path that crossed the Salt Lake Valley, a distance of about 24 km (15 miles.)  We were pleased to note that our signals were extremely strong and, despite the fact that our optical path crossed directly over downtown Salt Lake City, they seemed to have a 30-40dB signal-to-noise ratio - if you ignored some 120 Hz hum and the occasional "buzz" from an unseen, failing streetlight.  We also noted a fair amount of amplitude scintillation, but this wasn't too surprising considering that the streetlights visible from our locations also seemed to shimmer, being subject to the turbulence caused by the ever-present temperature inversion layer in the valley.

Bolstered by this success we conducted several other experiments over the next several months, continuing to improve and build more gear, gain experience, and refine our techniques.  Finally, for August 18, 2007, we decided on a more ambitious goal:  The spanning of a 107-mile optical path.  By this time, I'd completed a third optical transceiver using a pair of larger (430mm x 404mm, or 16.9" x 15.9") Fresnel lenses, and it significantly out-performed the "posterboard" version that had been used earlier.  On this occasion we were dismayed by the amount of haze in the air - the remnants of smoke that had blown into the area just that day from California wildfires.  Ron, K7RJ and company (his wife Elaine, N7BDZ and Gordon, K7HFV) who went to the northern end of the path (near Willard Peak, north of Ogden, Utah) experienced even more trials, having had to retreat on three occasions from their chosen vantage point due to brief, but intense thunderstorms.  Finally, just before midnight, a voice exchange was completed with some difficulty - despite the fact that they never could see the distant transmitter with the naked eye due to the combination of haze and light pollution - over this path, with the southern end (with Clint, KA7OEI and Tom, W7ETR) located near Mount Nebo, southeast of Payson, Utah.

Figure 1:
The predicted path projected onto a combination
map and satellite image.  At the south end
(bottom) is Swasey Peak while George Peak is
indicated at the north.
Click on the image for a larger version.
Finding a longer path:

Following the successful 107-mile exchange we decided that it was time to try an even-greater distance.  After staring at maps and poring over topographical data we found what we believed to be a 173-mile line-of-sight shot that seemed to provide reasonable accessibility at both ends - see figure 1.  This path spanned the Great Salt Lake Desert - some of the flattest, desolate, and most remote land in the continental U.S.  At the south end of this path was Swasey Peak, the tallest point in the House range, a series of mountains about 70 miles west of Delta, in west-central Utah.  Because Gordon had hiked this peak on more than one occasion we were confident that this goal was quite attainable.

At the north end of the path was George Peak in the Raft River range, an obscure line of mountains that run east and west in the extreme northwest corner of Utah, just south of the Idaho border.  None of us had ever been there before, but our research indicated that it should be possible to drive there using a high-clearance 4-wheel drive vehicle so, on August 25, 2007, Ron and Gordon piled into my Jeep (along with a 2nd spare tire swiped from Ron's Jeep as recommended by more than one account) and we headed north to investigate.

Getting there:

Following the Interstate highway nearly to the Idaho border, we turned west onto a state highway, following it as the road swung north into Idaho, passing the Raft River range, and we then turned off onto a gravel road to Standrod, Utah.  In this small town (a spread-out collection of houses, really) we turned onto a county road that began to take us up canyons on the northern slope of the range.  As we continued to climb, the road became rougher and we resorted to peering at maps and using our intuition to guide us onto the one road that would take us to the top of the mountain range.

Luckily, our guesses were correct and we soon found ourselves at the top of the ridge.  Traveling for a short distance, we ran into a problem:  The road stopped at a fence gate that was plastered with "No Trespassing" signs.  At this point, we simply began to follow what looked like a road that paralleled the fence only to discover, after traveling several hundred feet - and past a point at which we could safely turn around - that this "road" had degenerated into a rather precarious dirt path traversing a steep slope.  After driving several hundred more feet, fighting all the while to keep the Jeep on the road and moving in a generally forward direction, the path leveled out once again and rejoined what appeared to be the main road.  After a combination of both swearing at and praising deities we vowed that we would never travel on that "road" again and simply stay on what had appeared to have been the main road, regardless of what the signs on the gates said!

Looking for Swasey Peak:

Having passed these trials, we drove along the range's ridge top, looking to the south.  On this day, the air was quite hazy - probably due to wildfires that were burning in California, and in the distance we could vaguely spot, with our naked eyes, the outline of a mountain range that we thought to be the House range:  In comparing its outline and position with a computer-simulated view, it "looked" to be a fairly close match as best as we could guess.

Upon seeing this distant mountain we stopped to get a better look, but when we looked through binoculars or a telescope the distant outline seemed to disappear - only to reappear once again when viewed with the naked eye.  We finally realized what was happening:  Our eyes and brain are "wired" to look at objects, in part, by detecting their outlines, but in this case the haze reduced the contrast considerably.  With the naked eye, the distant mountain was quite small but with the enlarged image in the binoculars and telescope the apparent contrast gradient around the object's outline was greatly diminished.  The trick to being able to visualize the distant mountain turned out to be keeping the binoculars moving as our eyes and brain are much more sensitive to slight changes in brightness of moving objects than stationary ones.  After discovering this fact, we noticed with some amusement that the distant mountain seemed to vanish from sight once we stopped wiggling the binoculars only to magically reappear when we moved them again.  For later analysis we also took pictures at this same location and noted the GPS coordinates.

Continuing onwards, we drove along the ridge toward George Peak.  When we got near the GPS coordinates that I had marked for the peak we were somewhat disappointed - but not surprised:  The highest spot in the neighborhood, the peak, was one of several gentle, nondescript hills that rose above the road only by a few 10's of feet.  Stopping, we ate lunch, looked through binoculars and telescopes, took pictures, recorded GPS coordinates, and thought apprehensively about the return trip along the road.
Figure 2:
The predicted line-of-sight view (top) based on 1 arc-second SRTM terrain data between the Raft River range
and Swasey peak as seen from the north (Raft River) side.
On the bottom is an actual photograph of the same scene at the location used in the simulated view.  As can be seen,
more of the distant mountain can be seen than the prediction would indicate, this being due to the refraction of
the atmosphere slightly extending the visible horizon.  Under typical conditions, this "extension" amounts to
an increase to approximately 10/9th of the distance that geometry alone would predict.  This lower picture was produced
by "stacking" multiple images using software designed for astronomy.
Click on the image for a larger version.

Returning home:

Retracing our path - but not taking the "road" that had paralleled the fence line - we soon came to the gate that marked the boundary of the private land.  While many of the markings were the same at this gate, we noticed another sign - one that had been missing from the other end of the road - indicating that this was, in fact, a public right-of-way plus the admonition that those traveling through must stay on the road.  This sign seemed to register with what we thought we'd remembered about Utah laws governing the use of such roads and our initial interpretation of the county parcel maps:  Always leave a gate the way you found it, and don't go off the road!  With relief, we crossed this parcel with no difficulty and soon found ourselves at the other gate and in familiar territory.

Retracing our steps down the mountain we found ourselves hurtling along the state highway a bit more than an hour later - until I heard the unwelcome sound of a noisy tire.  Quickly pulling over I discovered that a large rock had embedded itself in the middle of the tread of a rear tire.  After 45 minutes of changing the tire and bringing the spare up to full pressure, we were again underway - but with only one spare remaining...

Analyzing the path:

Upon returning home I was able to analyze the photographs that I had taken.  Fortunately, my digital SLR camera takes pictures in "Raw" image mode, preserving the digital picture without loss caused by converting it to a lossy format like JPEG.  Through considerable contrast enhancement, the "stacking" of several similar images using an astronomical photo processing program and making a comparison against a computer-generated view, I discovered that the faint outline that we'd seen was not Swasey Peak but was, in fact, a range that was about 25 miles (40km) closer - the Fish Springs mountains - a mere 150 or so miles (240km) away.  Unnoticed (or invisible) at the time of our mountaintop visit was another small bump in the distance that was, in fact, Swasey Peak.

Interestingly, the first set of pictures were taken at a location that, according to the computer analysis, was barely line-of-sight with Swasey Peak.  At the time of the site visit we had assumed that the just-visible mountain that we'd seen in the distance was Swasey Peak and that there was some sort of parallax error in the computer simulation, but analysis revealed that not only was the computer simulation correct in its positioning of the distant features, but also that the apparent height of Swasey Peak above the horizon was being enhanced by atmospheric refraction - a property that the program did not take into account:  Figure 2 shows a comparison between the computer simulation and an actual photograph taken from this same location.
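The effect of that ~10/9 refraction "extension" on the visible horizon can be sketched with the usual small-angle horizon formula.  The observer height used here is purely illustrative - it is not the actual height of either peak above the intervening terrain:

```python
import math

# Distance to the optical horizon, with and without the ~10/9 refraction
# "extension" noted in the Figure 2 caption.  The observer height is an
# illustrative assumption, not a figure from the actual path.
EARTH_RADIUS_KM = 6371.0

def horizon_km(height_m, refraction_factor=1.0):
    # Small-angle horizon distance for an observer height_m above the terrain
    return refraction_factor * math.sqrt(2.0 * EARTH_RADIUS_KM * (height_m / 1000.0))

height = 1500.0  # assumed height above the surrounding desert floor, meters
print(f"Geometric horizon: ~{horizon_km(height):.0f} km")
print(f"With 10/9 refraction: ~{horizon_km(height, 10.0 / 9.0):.0f} km")
```

With two elevated endpoints, each contributes its own horizon distance, so even a modest refractive extension at both ends can noticeably change how much of a distant peak pokes above the horizon.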

Building confidence - A retry of the 107-mile path:

Having verified to our satisfaction that we could not only get to the top of the Raft River mountains but also that we had a line-of-sight path to Swasey Peak, we began to plan our next adventure.  Over the next several weeks we watched the weather and the air - but first, we wanted to try our 107-mile path again in clearer weather to make sure that our gear was working, to gain more experience with its setup and operation, and to see how well it would work over a long optical path given reasonably good seeing conditions:  If we had good success over the 107-mile path we felt confident that we should be able to manage a 173-mile path.

A few weeks later, on September 3, we got our chance:  Taking advantage of clear weather just after a storm front had moved through the area we went back to our respective locations - Ron, Gordon and Elaine at Inspiration Point while I went (with Dale, WB7FID) back to the location near Mt. Nebo.  This time, signal-to-noise ratios were 26dB better than before and voice was "armchair" copy.  Over the several hours of experimentation we were able to transmit not only voice, but SSTV (Slow-Scan Television) images over the LED link - even switching over to using a "raw" Laser Pointer for one experiment and a Laser module collimated by an 8" reflector telescope in another.

With our success on the clear-weather 107-mile path we waited for our window to attempt the 173-mile path between Swasey and George Peak but in the following weeks we were dismayed by the appearance of bad weather and/or frequent haze - some of the latter resulting from the still-burning wildfires around the western U.S.

To be continued!



Wednesday, June 21, 2017

Odd differences between two (nearly) identical PV systems

I've had my 18-panel (two groups of 9) PV (solar) electric system in service for about a year and recently I decided to expand it a bit after realizing that I could do so, myself, for roughly $1/watt, after tax incentives.  And so it was done, with a bit of help from a friend of mine who is better at bending conduit than I:  Another inverter and 18 more solar panels were set on the roof - all done using materials and techniques equal to or better than that which was originally done in terms of both quality and safety.

Adding to the old system:

The older inverter, a SunnyBoy SB 5000-TL, is rated for a nominal 5kW and, with its 18 panels - 9 each located on opposite faces of my east/west facing roof (the ridge line precisely oriented true north-south) - would, in real life, produce more than 3900 watts for only an hour or so around "local noon", and only on late spring or early fall days that were both exquisitely clear and very cool (e.g. below 70F, 21C).  I decided that the new inverter need not be a 5kW unit, so I chose the newer - and significantly less expensive - SunnyBoy SB3.8, an inverter nominally rated at 3.8kW.  The rated efficiencies of the two inverters are pretty much identical - both in the 97% range.
Figure 1:
The installed 3.8 kW inverter in operation with the 2kW
"SPS" (Secure Power System) island power outlet shown below.
Click on the image for a larger version.

One reason for choosing this lower-power inverter was to stay within the ratings of my main house distribution panel.  My older inverter, rated for 5kW, was (theoretically) capable of putting 22-25 amps onto the panel's bus, so a 30 amp breaker was used on that branch circuit, while the new inverter, capable of about 16 amps, needed only a 20 amp breaker.  This combined, theoretical maximum of 50 amps (breaker current ratings - not actual, real-world current from the inverters and their panels!) was within the "120% rule" of my 125 amp distribution panel with its 100 amp main breaker:  120% of 125 amps is 150 amps, so my ability to (theoretically) pull 100 amps from the utility plus the combined 50 amp breaker capacity of the two inverters (again, theoretical - not real-world) was within this rating.
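As a sketch, the "120% rule" arithmetic above can be expressed as a simple check.  The breaker and busbar values below are those of my installation - consult the actual electrical code and your own panel's labeling before relying on anything like this:

```python
# Sketch of the "120% rule" check described above (values from this
# installation; verify against your own panel and local code).
def passes_120_percent_rule(busbar_rating_a, main_breaker_a, inverter_breakers_a):
    """True if the main breaker plus all inverter branch breakers
    fit within 120% of the busbar rating."""
    return main_breaker_a + sum(inverter_breakers_a) <= 1.2 * busbar_rating_a

# 125 A busbar, 100 A main, 30 A (SB5000TL) + 20 A (SB3.8) branch breakers:
print(passes_120_percent_rule(125, 100, [30, 20]))  # 100 + 50 = 150 <= 150 -> True
```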

Comment:  The highest total power that I have seen from my system has been about 8000 watts - 3900 watts from the SB3.8 and just over 4100 watts from the SB 5000 - for a maximum of about 36 amps at 220 volts (abnormally low line voltage!) or about 33 amps total with a more typical 240 volt feed-in - well under the "50 amp" maximum.

For the new panels I installed eighteen 295 watt Solarworld units - a slight upgrade over the older 285 watt Suniva modules already in place.  In my calculations I determined that even with the new panels having approximately 3.5% more rated output (e.g. a peak of 5310 watts versus 5130 watts, assuming ideal temperature and illumination - the latter being impossible with the roof angles) the new inverter would "clip" (that is, hit its maximum output power while the panels were capable of even more) only a few tens of days per year - and then for only an hour or so at most on each occasion.  Since the ostensibly "oversized" panel array would produce commensurately more power at times other than the peak as well, I was not concerned about this occasional "clipping".
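The nameplate comparison works out as follows (rated watts only - ideal conditions that are never actually reached with these roof angles):

```python
# Nameplate comparison of the two arrays (idealized rated output only).
old_w = 18 * 285   # older Suniva array
new_w = 18 * 295   # newer Solarworld array
print(old_w, new_w, f"{(new_w / old_w - 1) * 100:.1f}%")  # 5130 5310 3.5%
```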

What was expected:

The two sets of panels, old and new, are located on the same roof, the old array being higher, nearer the ridge line, and the new being just below.  In my situation I get a bit of shading in the morning on the east side and a slight amount in the very late afternoon/evening in mid summer on the west side, and the geometry of the trees that cause this makes the shading of the new and old systems almost identical.

With this in mind, I would have expected the two systems to behave nearly identically.

But they don't!

Differences in produced power:

Having the ability to obtain graphs of each system over the course of a day, I was surprised to find that the production of the two, while similar, showed some interesting differences, as the chart below shows.

Figure 2:
The two systems, with nearly identical PV arrays.  The production of the older SB5000 inverter with the eighteen 285 watt panels is represented by the blue line while the newer SB3.8 inverter with eighteen 295 watt panels is represented by the red line:  Each system has nine east-facing panels and nine west-facing panels.  The dips in the graph are due to loss of solar irradiance due to clouds.  Because the data for this graph is collected every 15 minutes, some of the fine detail is lost so the "dip" in production at about 1:45PM was probably deeper than shown.
The total production of the SB3.8 system (red line) for the day was 27.3kWh while that of the SB5000TL system (blue line) was 25.4kWh - a difference of about 7% overall.
Click on the image for a larger version.
In this graph the blue line is the older SB5000TL inverter and the red line is the newer SB3.8 inverter.  Ideally, one would expect the newer inverter, with its 295 watt panels, to be just a few percent higher than the older inverter with its 285 watt panels, but the difference is closer to 10% during the peak hours, when there is no shading at all.

What might be the cause of this difference?
Figure 3:
 The two parallel east-facing arrays, the older one being closer to
the (north-south) peak of the roof.
Click on the image for a larger version.

Several possible explanations come to mind:
  1. The new panels are producing significantly more than their official ratings.  A few percent would seem likely, but 10%?
  2. The older panels have degraded more than expected in the year that they have been in service.
  3. The two manufacturers rate their panels differently.
  4. There may be thermal differences.  The "new" panels are lower on the roof and it is possible that the air being pulled in from the bottom by convection is cooler when it passes by the new panels, being warmer by the time it gets to the "old" panels.  If we take at face value that 3.5% of the 10% difference is due to the ratings - leaving 6.5% unaccounted for - this would need only about a 16C (29F) average panel temperature difference, but the temperature differences do not appear to be that large!
  5. The new panels don't heat up in the sun as much as the old.  The new panels are white in the interstitial gaps between individual cells and around the edges, while the old panels are completely black, possibly reducing the amount of heating.  Again, there doesn't seem to be a 16C (29F) difference.
  6. The new inverter is better at optimizing the power from the panels than the old one.
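The temperature figure in item 4 can be back-calculated.  This is a rough sketch assuming a typical crystalline-silicon power temperature coefficient of about -0.4%/C - an assumed, generic figure, not one taken from the actual panel datasheets:

```python
# Back-of-envelope check of item 4: how large a panel temperature
# difference would explain the unaccounted-for power difference?
# Assumes a generic -0.4 %/degC temperature coefficient (NOT from
# the actual Suniva/Solarworld datasheets).
temp_coeff_pct_per_c = 0.4      # % power loss per degC of warming (assumed)
unexplained_diff_pct = 6.5      # 10% observed minus 3.5% rating difference

delta_t_c = unexplained_diff_pct / temp_coeff_pct_per_c
print(f"Required temperature difference: {delta_t_c:.1f} C "
      f"({delta_t_c * 9 / 5:.0f} F)")   # ~16 C, roughly 29 F
```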
It's a bit difficult to make absolute measurements, but in the case of #2 - the possibility of the "old" panels degrading - I think that I can rule that out.  Comparing the peak production days for 2016 and 2017, both of which occurred in early May (a result of the combination of reasonably long days and cool temperatures), the best peak was about the same - approximately 28.25kWh on the "old" system - even after I'd installed the "new" panels on the east side.

I suspect that it is a combination of several of the above factors, probably excluding #2, but I have no real way of knowing the contribution of each.  What is surprising to me is that I have yet to see any obvious clipping on the new system in the production graphs, even though I have "caught" it pegged at about 3920 watts on several occasions around local noon - so it seems that my estimate of several dozen hours per year where this might happen is about right.

I'll continue to monitor the absolute and relative performance of the two sets of panels to see how they track over time.



Tuesday, June 13, 2017

Adding a useful signal strength indication to an old, inexpensive handie-talkie for transmitter hunting

A field strength meter is a very handy tool for locating a transmitter, but a sensitive field strength meter by itself has some limitations:  It will respond to practically any RF signal that enters its input.  This property has the effect of limiting the effective sensitivity of the field strength meter, as any nearby RF source (or even ones far away, if the meter is sensitive enough...) will effectively mask the desired signal if it is weaker than these "background" signals.
Figure 1:
The modified Icom IC-2A/T HT with a broadband
field strength meter paired with the AD8307-based field
strength meter mentioned and linked in the article, below.
Click on the image for a larger version.

This property can be mitigated somewhat by preceding the input of the meter with a simple tuned RF stage and, in most cases, this is adequate for finding (very) nearby transmitters.  A simple tuned circuit does have its limitations:
  • It is only broadly selective.  A simple, single-tuned filter will have a response encompassing several percent (at best) of the operating frequency.  This means that a 2 meter filter will respond to nearly any signal near or within the 2 meter band.
  • A very narrow filter can be tricky to tune.  This isn't usually too much of a problem as one can peak on the desired signal (if it is close enough to register) or use your own transmitter (on the same or nearby frequency) to provide a source of signal on which the filter may be tuned.
  • The filter does not usually enhance the absolute (weak signal) sensitivity unless an amplifier is used.
An obvious approach to solving this problem is to use a receiver, but while many FM receivers have "S-meters", very few have meters that are truly useful over a wide dynamic range - most firmly "peg" even on relatively modest signals, making them nearly unusable if the signal is any stronger than "medium weak".  While an adjustable attenuator (such as a step attenuator or offset attenuator) may be used, the range of the radio's S-meter itself may be so limited that it is difficult to watch the meter while adjusting the signal level to keep the reading on-scale.

Another possibility is to modify an existing receiver so that an external signal level meter with much greater range may be connected.

Picking a receiver:

When I decided to take this approach I began looking for a 2 meter (the primary band of interest) receiver with these properties:
  • It had to be cheap.  No need to explain this one!
  • It had to be synthesized.  It's very helpful to be able to change frequencies.
  • Having a 10.7 MHz IF was preferable.  The reasons for this will become apparent.
  • It had to have enough room inside it to allow the addition of some extra circuitry to allow "picking off" the IF signal.  After all, that's the entire point of this exercise.
  • It had to be easy to use.  Because one may not use this receiver too often, it's best not to pick something overly complicated that would require a manual to remind one how to do even the simplest of tasks.
  • The radio would still be a radio.  Another goal of the modification was that the radio had to work exactly as it was originally designed after you were done - that is, you could still use it as a transceiver!
Based on a combination of past familiarity with various 2 meter HTs and looking at prices on Ebay, at least three possibilities sprang to mind:
  • The Henry Tempo S-1.  This is a very basic 2 meter-only radio and was the very first synthesized HT available in the U.S.  One disadvantage is that, by default, it uses a threaded antenna connection rather than a more-standard BNC connector and would thus require the user to install one to allow it to be used with other types of antennas.  Another disadvantage is that it has a built-in, non-removable battery.  Its power supply voltage is limited to under 11 volts.  (The later Tempo S-15 has fewer of these disadvantages and may be better, but I am not too familiar with it.)
  • The Kenwood TH-21.  This, too, is a very basic 2 meter-only radio.  It uses a strange RCA (e.g. phono) like threaded connector, but this mates with easily-available RCA-BNC adapters.  Its disadvantage is that it is small enough that the added circuitry may not fit inside.  It, too, has a distinct limitation on its power supply voltage range and requires about 10 volts.
  • The Icom IC-2A/T.  This basic radio was, at one time, one of the most popular 2 meter HTs which means that there are still plenty of them around.  It can operate directly on 12 volts, has a standard BNC antenna connector, and has plenty of room inside the case for the addition of a small circuit.  (The "T" suffix indicates that it has a DTMF numeric keypad.  The "non-T" version such as the IC-2A is a bit less common, but would work just fine for this application.)
Each of these radios is a thumbwheel-switch tuned, synthesized, plain-vanilla radio. I chose the Icom IC-2AT (it is also the most common) and obtained one on Ebay for about $40 (including accessories) and another $24 bought a clone of an IC-8, an 8-cell alkaline battery holder (from Batteries America) that is normally populated with 2.5 amp-hour NiMH AA cells.  With its squelched receive current of around 20 milliamps I will often use this radio as a "listen around the house" radio since it will run for days and days!

"Why not use one of those cheap chinese radios?"

Upon reading this you may be thinking "why spend $$$ on an ancient radio when you can buy a cheap chinese radio that has lots of features for $30-ish?"

The reason is that these radios have neither a user-available "S" meter with good dynamic range nor an accessible IF (Intermediate Frequency) stage.  Because these radios are, in effect, direct conversion with DSP magic occurring on-chip, there is absolutely nowhere that one could connect an external meter - that signal simply does not exist!

While many of these "single-chip" radios do have some built-in S-meter circuitry, the manufacturers of these radios have, for whatever reason, not made it available to the user - at least not in a format that would be particularly useful for transmitter hunting.
Modifying the IC-2A/T (and circuit descriptions):

This radio is the largest of those mentioned above and has a reasonable amount of extra room inside its case for the addition of the few small circuits needed to complete the modification.  When done, this modification does not, in any way, affect otherwise normal operation of the radio:  It can still be used as it was intended!

An added IF buffer amplifier:

This radio uses the Motorola MC3357 (or an equivalent such as the MP5071) as the IF/demodulator.  This chip takes the 10.7 MHz IF from the front-end mixer and 1st IF amplifier stages and converts it to a lower IF (455 kHz) for further filtering and limiting, after which it is demodulated using a quadrature detector.  Unfortunately, the MC3357 lacks an RSSI (Receive Signal Strength Indicator) circuit - which partly explains why this radio doesn't have an S-meter.  Since we were planning to feed a sample of the IF from this receiver into our field strength meter anyway, this isn't too much of a problem.

Figure 2:
The source-follower amplifier tacked atop the IF amplifier chip.
Click on the image for a larger version.
We actually have a choice of two different IFs:  10.7 MHz and 455 kHz.  At first glance, 455 kHz might seem to be the better choice as it has already been amplified and it is at a lower frequency - but there's a problem:  It compresses easily.  Monitoring the 455 kHz line, one can easily "see" signals in the microvolt range, but by the time a signal reaches the -60 dBm range or so, this signal path is already starting to go into compression.  This is a serious problem, as -60 dBm is about the strength that one gets from a 100 watt 2 meter transmitter with a clear line-of-sight path at about 20 miles (about 30km), using unity-gain antennas on each end.  What this means is that if we were to use this signal tap, we might still be a fair distance away from the transmitter when the reading peaked.

The other choice is to tap the signal at the 10.7 MHz point, before it goes into the MC3357.  This signal, not having been amplified as much as the 455 kHz signal, does not begin to saturate until the input reaches about -40 dBm or so, reaching full saturation by about -35 dBm.  Given our example, above, -35 to -40dBm is roughly equivalent to a line-of-sight 100 watt 2 meter transmitter at 1-3 miles (approx. 1.6-5km) - which means that we'll get much closer before the signal path saturates - but we can easily deal with that as we'll discuss shortly.
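The distance figures quoted above can be sanity-checked against the standard free-space path-loss formula - a rough sketch only, assuming 0 dBi ("unity gain" relative to isotropic) antennas and a true line-of-sight path:

```python
import math

# Free-space link budget sketch for the example above: a 100 W, 2 m
# transmitter at 20 miles (about 32 km) line-of-sight, 0 dBi antennas.
def free_space_rx_dbm(tx_watts, freq_mhz, dist_km, gains_dbi=0.0):
    tx_dbm = 10 * math.log10(tx_watts * 1000)          # 100 W = +50 dBm
    fspl_db = 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)
    return tx_dbm + gains_dbi - fspl_db

print(f"{free_space_rx_dbm(100, 146.52, 32.2):.1f} dBm")  # -> -55.9 dBm
```

This lands within a few dB of the "-60 dBm or so" figure cited above - close enough for a back-of-envelope check, since real paths are rarely true free space.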

One point of concern was that the signal at this point has had less filtering than at 455 kHz, the latter having passed through a "sharper" bandpass filter.  While the filtering at 10.7 MHz is a bit broader, the four poles of crystal filtering do attenuate a signal 20 kHz away by at least 30 dB - so unless there's another very strong signal on an adjacent channel, it's not likely that there will be a problem.  As it turns out, the slightly "broader" response of the 10.7 MHz crystal filters is conducive to "offset tuning" - that is, deliberately tuning the radio off-frequency to reduce the signal level reading when you are near the transmitter being sought.

To be able to tap this signal without otherwise affecting the performance of the receiver requires a simple buffer amplifier, and a JFET source-follower does the job nicely (see Figure 6, below, for the diagram).  Consisting of only six components (two resistors, three capacitors and an MPF102 JFET - practically any N-channel JFET will do) this circuit is simply tack-soldered directly onto the MC3357 as shown in Figures 2 and 3.  It very effectively isolates the (more or less) 50 ohm load of the field strength meter from the high-impedance 10.7 MHz input of the MC3357, and it does so while drawing only about 700 microamps - just 3-4% of the radio's total current when squelched.

Figure 3:
A wider view of the modifications to the radio.
Click on the image for a larger version.
As can be seen from the pictures (figure 2 and 3) all of the required connections were made directly to the pins of the IC itself, with the 330 pF input capacitor connecting directly to pin 16.  The supply voltage is pulled from pin 4, and pins 12 and/or 15 are used for the ground connection. 

A word of warning:  Care should be taken when soldering directly to the pins of this (or any) IC to avoid damage.  It is a good idea to scrape the pin clean of oxide and use a hot soldering iron so that the connection can be made very quickly.  Excess heat and/or force on the pin can destroy the IC!  It's not that this particular IC is fragile, but this is care that should be taken.

Getting the IF signal outside the radio:

The next challenge was getting our sampled 10.7 MHz IF energy out of the radio's case.  While it may be possible to install another connector on the radio somewhere, it's easiest to use an existing connector - such as the microphone jack.

One of the goals of these modifications was to retain complete function of the radio as if it were stock, so the microphone jack had to continue to work as designed:  I needed to multiplex both the microphone audio (and keying) and the IF onto the tip of the microphone connector, as I wasn't really planning to use the signal meter and a remote microphone at the same time.  Because of the very large difference in frequencies (audio versus 10.7 MHz) it is very easy to separate the two using a capacitor and an inductor:  The 10.7 MHz IF signal is passed to the connector through a series capacitor while it is blocked from the radio's internal microphone/PTT line with a small choke:  Anything from 4.7uH to 100uH will work fine.
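To see why this frequency multiplexing works, it's worth looking at the reactances involved.  A minimal sketch using illustrative values - a 10 uH choke (within the 4.7-100 uH range suggested) and a 1000 pF coupling capacitor, neither taken from the actual radio:

```python
import math

def x_l(freq_hz, l_h):   # inductive reactance, ohms
    return 2 * math.pi * freq_hz * l_h

def x_c(freq_hz, c_f):   # capacitive reactance, ohms
    return 1 / (2 * math.pi * freq_hz * c_f)

# The choke looks "open" at the IF but "shorted" at audio:
print(f"10 uH choke at 10.7 MHz: {x_l(10.7e6, 10e-6):.0f} ohms")    # blocks IF
print(f"10 uH choke at 1 kHz:    {x_l(1e3, 10e-6):.4f} ohms")       # passes audio/PTT
# The series capacitor does the opposite:
print(f"1000 pF cap at 10.7 MHz: {x_c(10.7e6, 1000e-12):.0f} ohms") # passes IF
print(f"1000 pF cap at 1 kHz:    {x_c(1e3, 1000e-12)/1e6:.1f} Mohms") # blocks audio/DC
```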
Figure 4:
The modifications at the microphone jack.
Click on the image for a larger version.

The buffered IF signal is conducted to the microphone jack using some small coaxial cable:  RG-174 type will work, but I found some slightly smaller coax in a junked VCR.  To make the connections, the two screws on the side of the HT's frame were removed, allowing it to "hinge" open, giving easy access to the microphone connector.  The existing microphone wire connected to the "tip" connection was removed and the choke was placed in series with it, with the combination insulated with some heat-shrinkable tubing.

The coax from the buffer amp was then connected directly to the "tip" of the microphone connector.  One possible coax routing is shown in Figure 4 but note that this routing prevents the two halves of the chassis from being fully opened in the future unless it is disconnected from one end.  If this bothers you, a longer cable can be routed so that it follows along the hinge and then over to the buffer circuit.  Note:  It is important to use shielded cable for this connection as the cable is likely to be routed past the components "earlier" in the IF strip and instability could result if there is coupling.

Interfacing with the Field Strength meter:

Using RG-174 type coaxial cable, an adapter/interface cable was constructed with a 2.5mm connector on one end and a BNC on the other.  One important point is that a small series capacitor (0.001uF) is required in this line somewhere as a DC block on the microphone connector:  The IC-2A/T (like most HTs) detects a "key down" condition on the microphone by detecting a current flow on the microphone line and this series capacitor prevents current from flowing through the 50 ohm input termination on the field strength meter and "keying" the radio.

Dealing with L.O. leakage:

As soon as it was constructed I observed that even with no signal, the field strength meter showed a weak signal (about -60 to -65 dBm) present whenever the receiver was turned on, effectively reducing sensitivity by 20-25 dB.  As I suspected when I first noticed it, this signal was coming from two places:
  • The VHF local oscillator.  On the IC-2A/T, this oscillator operates 10.7 MHz lower than the receive frequency.  In other words, tuned to 146.520 MHz, the local oscillator is running at 135.82 MHz.
  • The 2nd IF local oscillator.  On the IC-2A/T this oscillator operates at 10.245 MHz - 455 kHz below the 10.7 MHz IF as part of the conversion to the second IF.
The magnitude of each of these signals was about the same, roughly -65 dBm or so.  The VHF local oscillator would be very easy to get rid of - a very simple lowpass filter (consisting of a single capacitor and inductor) would adequately suppress it - but the 10.245 MHz signal poses a problem, as it is too close to 10.7 MHz to be attenuated enough by a very simple L/C filter without also affecting the desired IF signal.

Figure 5:
The inline 10.7 MHz bandpass filter using a ceramic
filter.  The diagram for this may be seen in the upper-right
corner of Figure 6, below.
Click on the image for a larger version.
Fortunately, with the IF being 10.7 MHz, we have another (cheap!) option:  A 10.7 MHz ceramic IF filter.  These filters are ubiquitous, being used in nearly every FM broadcast receiver made since the 80s, so if you have a junked FM broadcast receiver kicking around, you'll likely have one or more of these in them.  Even if you don't have junk with a ceramic filter in it, they are relatively cheap ($1-$2) and readily available from many mail-order outlets.  This filter is shown in the upper-right corner of the diagram in Figure 6, below.

The precise type of filter is not important as these will typically have a bandpass between 150 kHz and 300 kHz wide (depending on the application) at their -6 dB points and will easily attenuate the 10.245 MHz local oscillator signal by at least 30 dB.  With this bandwidth it is possible to use a 10.7 MHz filter (which, themselves, vary in exact center frequency) for some of the "close - but not exact" IFs often found near 10.7 MHz, such as 10.695 or 10.75 MHz.  The only "gotcha" with these ceramic filters is that their input/output impedances are typically in the 300 ohm area and require a (very simple) matching network (an inductor and capacitor) on the input and output to interface them with a 50 ohm system.  The values used for matching are not critical and the inductor, ideally around 1.8uH, could be anything from 1.5 to 2.2 uH without much impact on performance other than a very slight change in insertion loss.
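The matching values follow from the standard L-network design equations.  A sketch, assuming a 330 ohm filter impedance (the actual filter's impedance may differ somewhat):

```python
import math

# L-network to match a ~330 ohm ceramic filter to a 50 ohm system
# at 10.7 MHz (assumed impedances; real filters vary).
f_hz, r_low, r_high = 10.7e6, 50.0, 330.0

q = math.sqrt(r_high / r_low - 1)                    # loaded Q of the network
l_series = (q * r_low) / (2 * math.pi * f_hz)        # series inductor (50 ohm side)
c_shunt = q / (2 * math.pi * f_hz * r_high)          # shunt capacitor (filter side)

print(f"L ~= {l_series * 1e6:.2f} uH, C ~= {c_shunt * 1e12:.0f} pF")
```

The series inductor works out to about 1.76 uH, consistent with the "around 1.8uH" value mentioned above.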

While this filter could have been crammed into the radio, I was concerned that the L.O. leakage might find its way into the connector somehow, bypassing the filter.  Instead, this circuit was constructed "dead bug" on a small scrap of circuit board material with sides, "potted" in thermoset ("hot melt") glue and covered with electrical tape, heat shrink tubing or "plastic dip" compound, with the entire circuit installed in the middle of the coax line (making a "lump.")  Alternatively, this filter could have been installed within the field strength meter itself, either on its own connector or sharing the main connector and being switchable in/out of the circuit.

Figure 6:
The diagram, drawn in the 1980s Icom style, showing the modified circuity and details of the added source-follower JFET amplifier (in the dashed-line box) along with the 10.7 MHz bandpass filter (upper-right) that is built into the cable.
Click on the image for a larger version.
With this additional filtering the L.O. leakage is reduced to a level below the detection threshold of the field strength meter, allowing sub-microvolt signals to be detected by the meter/radio combination.

Operation and use:

When using this system, I simply clip the radio to my belt and adjust it so that I can listen to what is going on.

There's approximately 30 dB of processing gain between the antenna and the 10.7 MHz IF output - that is, a -100 dBm signal on the antenna on 2 meters will show up as a -70 dBm signal at 10.7 MHz.  What this means is that sub-microvolt signals are just detectable at the bottom end of the range of the field strength meter.  From a distance, a simple gain antenna such as a 3-element "Tape Measure Yagi" (see the article "Tape Measure Beam Optimized for Direction Finding" - link) will establish a bearing, the antenna's gain providing both an effective signal boost of about 7dB (compared to an isotropic antenna) and directivity.
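For reference, the dBm figures used throughout convert to voltage across a 50 ohm input as follows - a quick sketch showing what "sub-microvolt" means at the antenna:

```python
import math

# Convert a power level in dBm to microvolts RMS across a given
# resistance (50 ohms unless stated otherwise).
def dbm_to_uv(dbm, r_ohms=50.0):
    watts = 10 ** (dbm / 10) / 1000
    return math.sqrt(watts * r_ohms) * 1e6

print(f"-100 dBm = {dbm_to_uv(-100):.2f} uV")   # -> -100 dBm = 2.24 uV
print(f"-107 dBm = {dbm_to_uv(-107):.2f} uV")   # -> -107 dBm = 1.00 uV
```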

While driving about looking for a signal I use a multi-antenna (so-called) "Doppler" type system with four antennas being electrically rotated to get the general bearings with the modified IC-2AT being the receiver in that system.  With the field strength meter connected I can hear its audio tone representing the signal strength without need to look at it.  As I near the signal source and the strength increases, I have both the directional indication and the rising pitch of the tone as dual confirmation that I am approaching it.

The major advantage of using the HT as the tunable "front end" of the field strength meter is that the meter gains greatly enhanced selectivity and sensitivity - but this is not without cost:  As noted before, this detection system will begin to saturate at about -40 dBm, fully saturating above -35 dBm - which is a "moderately strong" signal.  In "hidden-T" terms, it will "peg" when within a hundred feet or so of a 100 mW transmitter with a mediocre antenna.

When the signals become this strong, you can do one of several things:
  • Detune the receiver by 5, 10, 15 or even 20 kHz.  This will reduce the sensitivity by moving the signal slightly out of the passband of the 10.7 MHz IF filters.  This is usually a very simple and effective technique, although heavy modulation can cause the signal strength readings to vary.
  • Add attenuation to the front-end of the receiver.  The plastic case of the IC-2A/T is quite "leaky" in terms of RF ingress, but it is good enough for a step attenuator on the antenna lead to work nicely, extending the usable range to at least -10 dBm.  I use a switchable step attenuator for this and have found that I can drive right up to the location (house, yard, park) where the transmitter is located and still have sufficient adjustment range.
  • When you are really close (e.g. 10s of yards/meters) to the transmitter being sought you can forgo the receiver altogether, connecting the antenna directly to the field strength meter!
If you want to be really fancy, you can build the 10.7 MHz bandpass filter into the field strength meter and add switches so that you can switch attenuation in and out, as well as route the signal either to the receiver or to the field strength meter, using a resistive or hybrid splitter to make sure that the receiver still gets some signal from the antenna even when the field strength meter is connected to it.

What to use as the field-strength meter:

The field strength meter used is one based on the Analog Devices AD8307, which is useful from below 1 MHz to over 500 MHz, providing a nice, logarithmic output over a range from below -70dBm to above +10dBm.  It is, however, as broad as the proverbial "barn door", and this - combined with a sensitivity of "only" -70dBm - makes it nowhere near capable enough on its own for weak signals, especially if there are any other radio transmitters nearby, including radio and TV stations within a few 10s of miles/kilometers.  The integration of this broadband detector with the narrowband, tunable receiver IF, along with its gain, makes for a complete system useful for signals ranging from weak to strong.
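As a sketch of what the meter actually "sees", the AD8307's nominal transfer function is about 25 mV/dB with an intercept of roughly -84 dBm, per the Analog Devices datasheet (individual parts vary a bit, so these are nominal numbers only):

```python
# Nominal AD8307 transfer function (datasheet nominals: ~25 mV/dB
# slope, ~-84 dBm intercept; actual parts vary).
def ad8307_vout(p_in_dbm, slope_v_per_db=0.025, intercept_dbm=-84.0):
    return slope_v_per_db * (p_in_dbm - intercept_dbm)

print(f"{ad8307_vout(-70):.2f} V")  # -> 0.35 V near the bottom of the range
print(f"{ad8307_vout(+10):.2f} V")  # -> 2.35 V at the top
```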

The description of an audible field-strength meter may be found on the web page of the Utah Amateur Radio club in another article that I wrote, linked here:  Wide Dynamic Range Field Strength Meter - link.  One of the key elements of this circuit is that it includes an audio oscillator with a pitch that increases in proportion with the dB indication on the meter, allowing "eyes-off" assessment of the signal strength - very useful while one is walking about or in a vehicle.

There are also other web pages that describe the construction of an AD8307-based field strength meter (look for the "W7ZOI" power meter as a basis for this type of circuit) - and you can even buy pre-assembled boards on EvilBay (search on "AD8307 field strength meter").  The downside of most of these is that they do not include an audible signal strength indication to allow "eyes off" use, but this circuit could be easily added, adapted from that in the link above.

Another circuit worth considering is the venerable NE/SA605 or 615 which is, itself, a stand-alone receiver.  Of interest in this application is its "RSSI" (Receive Signal Strength Indicator) circuit, which has good sensitivity, is perfectly suited for use at 10.7 MHz, and has a nice logarithmic response with a wide dynamic range - nearly as much as the AD8307.  Exactly how one would use just the RSSI pin of this chip is beyond the scope of this article, but information on doing this may be found on the web in articles such as:
  • NXP Application note AN1996 - link (see figure 13, page 19 for an example using the RSSI function only)

Additional comments:
  • At first, I considered using the earphone jack for interfacing to the 10.7 MHz IF, but quickly realized that this would complicate things if I wanted to connect something to that jack (such as a pair of headphones or a Doppler unit!) while DFing.  I decided that I was unlikely to need an external microphone while looking for a transmitter!
  • I haven't tried it, but these modifications should be possible with the 222 MHz and 440 MHz versions (IC3 and IC4) of this radio - not to mention other radios of this type.
  • Although not extremely stable, you can listen to SSB and CW transmissions with the modified IC-2A/T by connecting a general-coverage/HF receiver to the 10.7 MHz IF output and tuning +/- 10.7 MHz.  Signals may be slightly "warbly" - but they should be easily copyable!
Finally, if you aren't able to build such a system and/or don't mind spending the money and you are interested in what is possibly the best receiver/signal strength meter combination device available, look at the VK3YNG Foxhunt Sniffer - link.  This integrates a 2 meter receiver (also capable of tuning the 121.5 MHz "ELT" frequency range) and a signal strength indicator capable of registering from less than -120dBm to well over +10dBm with an audible tone.

Comment:  This article is an edited/updated version of one that I posted on the Utah Amateur Radio Club site (link) a while ago.


This page stolen from "ka7oei.blogspot.com"

Wednesday, May 17, 2017

Teasing out the differences between the "AC" and "DC" versions of the Tesla PowerWall 2

Being naturally interested in such things, I've been following the announcements and information about the Tesla PowerWall 2 - the follow-on product of the (rarely seen - in the U.S., at least) "original" PowerWall.

Somewhat interestingly/frustratingly, clear, concise (and even vaguely technical) information on either version of the PowerWall 2 (yes, there are two versions - the "DC" and the "AC") has been a bit difficult to find, so in my research, what have I found?

This page or its contents are not intended to promote any of the products mentioned nor should it be considered to be an authoritative source.

It is simply a statement of opinion, conjecture and curiosity based on the information publicly available at the time of the original posting.

It is certain that, as time goes on, information referenced on this page may be officially verified, become commonplace, or be proven to be completely wrong.

Such is the nature of life!

The "DC" PowerWall 2:
  • Data sheets (two whole pages, each - almost!) for both the DC and AC versions of the PowerWall may be found here - link.
Unless you have a "hybrid" solar inverter, this one is NOT for you - and if you had such an inverter, you'd likely already know it.  A "hybrid" inverter is one that is specifically designed to pass some of the energy from the PV array (solar panels) into storage, such as a battery, and use that stored energy later.

Unlike its "AC" counterpart (more on this later) this version of the PowerWall 2 does NOT appear to have an AC (mains) connection of any type - let alone an inverter (neither are mentioned in the brochure) - but rather it is an energy back-up for the solar panels on the DC input(s) of the hybrid inverter.  "Excess" power from the panels may be used to charge the battery, and this stored energy could be used to feed the inverter when the load (e.g. house) exceeds that available from the panels - when it is cloudy, when the load exceeds the output of the PV array for a period of time, or when there is no sun at all (e.g. night).

Whether or not this version of the PowerWall can actually be (indirectly) charged via the AC mains (e.g.  via a hybrid inverter capable of working "backwards" to produce AC from the mains) would appear to depend entirely on the capability and configuration of the hybrid inverter and the system overall.

But, you might ask, why would you ever want to charge the battery from the utility rather than from solar?  You might want to do this if there were variable tariffs in your area - say, $0.30/kWh during the peak hours in the day, but only $0.15/kWh at night - in which case it would make sense to supplant the "expensive" power during the day with "cheap" power bought at night to charge it up:  Although there would be, perhaps, a 10% "round trip" loss in doing this, it would still save money overall and help "even out" the loading that a utility might see during peak hours.
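The arithmetic behind this tariff-shifting idea is simple enough to sketch out. The figures below are just the illustrative numbers from the paragraph above ($0.30/kWh peak, $0.15/kWh off-peak, ~10% round-trip loss) - actual rates and losses will vary:

```python
# Sketch of the tariff-arbitrage arithmetic described above (illustrative
# numbers only): charge the battery at the cheap overnight rate, discharge
# it during expensive daytime hours, and account for round-trip losses.

PEAK_RATE = 0.30        # $/kWh, daytime (example figure from the text)
OFFPEAK_RATE = 0.15     # $/kWh, overnight
ROUND_TRIP_EFF = 0.90   # ~10% round-trip loss

def daily_savings(kwh_delivered: float) -> float:
    """Net savings for supplying kwh_delivered to daytime loads from the battery."""
    kwh_purchased = kwh_delivered / ROUND_TRIP_EFF   # buy extra to cover losses
    cost_offpeak = kwh_purchased * OFFPEAK_RATE
    cost_peak_avoided = kwh_delivered * PEAK_RATE
    return cost_peak_avoided - cost_offpeak

print(f"Shifting 10 kWh/day saves about ${daily_savings(10.0):.2f}/day")
```

Even with the round-trip loss, shifting 10 kWh of daily usage at these example rates nets a bit over a dollar a day - modest, but it also helps "level" the utility's load as noted above.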

Whether or not this system would be helpful in a power outage is also dependent on the nature of the inverter to which it is connected:  Most grid-tie solar inverters become useless when the mains power disappears (e.g. cannot produce any power for the consumer - more on this later) - and this applies to both "series string" (e.g. a large inverter fed by high-voltage DC from a series of panels) and "microinverter" (small inverters at each of the panels) topologies.  Inverters configured for "island" operation (e.g. "free running" in the absence of a live power grid) or ones that can safely switch between "grid tie" and "island" modes would seem to be appropriate if you use the DC PowerWall and you want to keep your house "powered up" when there is a grid failure.

In other words, if you have a typical PV system that involves grid-tie inverters (series string or microinverter) and you have no "islanding" capability at present, the "DC" Power Wall is not for you!

The "AC" PowerWall 2:
  • Data sheets (two whole pages, each - almost!) for both the DC and AC versions of the PowerWall may be found here - LINK.
While the "AC" version seems to have the same battery storage capacity as the "DC" version (e.g. approx. 13.5kWh) it also has an integrated inverter and charger that interfaces with the AC mains that is apparently capable of supporting any standard voltage from 100 to 277 volts, 50 or 60 Hz, split or single phase.  This inverter, rated for approximately 7kW peak and 5-ish kW continuous, is sufficient to run many households.  Multiple units may be "stacked" (e.g. connected in parallel-type configuration - up to nine of them, according to the data sheet linked above) for additional storage and capacity.
Unlike the "DC" version, all of the power inflow/outflow is via the AC power feed, which is to say, it will both output AC power via its inverter and charge its battery via that same connection.  What this means is that it need not (and cannot, really) be directly connected to the PV (photovoltaic) system at all except, possibly, via a local network to gather statistics and provide control.  What seems clear is that this version has some means of monitoring the net flow into and out of the house to the utility, which means that the PowerWall could balance this out by "knowing" how much power it could use to charge its battery, or needed to output.

(The basic diagram of Figure 1, below, shows how such a system might be connected.  This diagram does not specifically represent a PowerWall, but rather how any battery-based inverter/charger system might be used to supply back-up power to a home.)

Because its power would be connected "indirectly" via AC power connections to the PV system it should (in theory) work with either a series-string or microinverter-type system - or, maybe even if you have no solar at all if you simply want to charge it during times of lower tariffs and pull the charge back out again during high tariffs.

(The Tesla brochure simply says "Support for wide range of usage scenarios" under the heading "Operating Modes" - which could be interpreted many ways, but at the time of the original posting of this article I have not actually seen an "official" suggestion of a use without any sort of solar power.)

What might such a system look like - schematically, at least?

How might this version of the PowerWall operate?  First, let's take a look at a diagram of how any sort of battery/inverter/charger like this might be configured for a house.
Figure 1:
Diagram of a generic battery-based "whole house" backup system based on obvious requirements.  This is a very basic diagram, showing most of the needed components that would be required to interface a battery-based inverter/charger with a typical house's electrical system and a PV (PhotoVoltaic/solar) charging system.
For those not familiar with North American power systems, typical residences are fed with 240 volt, center-tapped service from the utility's step-down transformer with this center-tap grounded at the service entrance.  This allows most devices to operate at 120 volts while those that consume large amounts of power (ranges, electric water heaters, electric dryers, air conditioners, etc.) are connected to a 240 volt circuit, which may or may not need the "neutral" lead at all.  In most other parts of the world there would be only "L1" and the "Neutral" operating at about 240 volts.
Click on the image for a larger version.

Referring to Figure 1, above:

Shown to the right of center is a switch that opens when the utility's power grid goes offline, isolating the house and the inverter/charger from the power grid.  Included with this switch is a voltage monitor (consisting of potential transducers, or "PTs") that can detect when the mains voltage has returned and stabilized and it is "safe" to reconnect to the grid.  The battery-based inverter/charger is connected across the house's mains so that it can both pull current from it to charge its battery as well as push power into the house in a back-up situation.

The "Net current monitoring" current transducers ("CTs") might be used to allow the inverter/charger to "zero out" the total current (and, thus, power) coming in from and going out to the power grid under normal situations - such as when its battery is being charged and extra power is being produced by the PV system - and also to control the charge rate so that only that "extra" power from the PV system is being used, assuring, as much as possible, a net-zero flow to/from the utility.  The "House current monitoring" is used to determine how much current is being used by the entire house while the "PV current monitoring" is used to determine the contribution of the PV system.

By knowing these things it is possible to determine how much excess/deficit there may be in terms of the production of the PV system with respect to actual usage by the household.  Not shown is the current monitoring that would, no doubt, be included in the inverter/charger itself.  Some of the shown current monitoring points may be redundant, as this information could be determined in other ways, but they are included for clarity.
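The "zero out" logic described above reduces to a simple calculation once the monitored powers are known. The sketch below is purely illustrative - the function names, sign convention, and power limits are my assumptions (the limits loosely based on the 5 kW-ish continuous rating quoted earlier), not anything published by Tesla:

```python
# Hypothetical sketch of the net-zero balancing described above: given power
# readings derived from the monitoring CTs, pick a battery charge/discharge
# rate that drives the net grid flow toward zero.  All names, signs, and
# limits here are assumptions for illustration only.

MAX_CHARGE_KW = 5.0      # assumed charger limit
MAX_DISCHARGE_KW = 5.0   # assumed continuous inverter limit

def battery_setpoint_kw(pv_kw: float, house_kw: float) -> float:
    """
    Positive result = charge the battery, negative = discharge.
    Charging absorbs PV surplus; discharging covers a PV deficit.
    """
    surplus = pv_kw - house_kw
    return max(-MAX_DISCHARGE_KW, min(MAX_CHARGE_KW, surplus))

def residual_grid_kw(pv_kw: float, house_kw: float) -> float:
    """Grid flow remaining after the battery does its part (positive = export)."""
    return (pv_kw - house_kw) - battery_setpoint_kw(pv_kw, house_kw)
```

For example, with 5 kW of PV and a 2 kW house load, the battery charges at 3 kW and the residual grid flow is zero; only when the surplus or deficit exceeds the unit's ratings does power flow to or from the utility.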

Finally, a local network (data) connection is shown for both the inverter/charger and the PV system so that they may communicate with each other, perhaps for control purposes, as well as communicate via the Internet so that statistics may be monitored and recorded and to allow firmware updates to be issued.

How it might operate in practice:

As can be seen in Figure 1 and determined from the explanation, we can see that the PV is connected to the input/output of the inverter/charger (which could be a PowerWall - or any other similar system) via the house wiring which means that there is a path to the PowerWall to charge its battery, and the same path out of it when it needs to supply power, along with means of monitoring power flow.

With a system akin to that depicted in Figure 1, consider these possible scenarios:
  1. Excess power is being produced by the PV system and put back into the grid and the PowerWall's battery is fully-charged.   Because the battery is fully-charged there is nowhere to put this extra power so it goes back into the grid, tracked by the utility's "Net Meter" in the same way that it would be without a PowerWall.
  2. Excess power is being produced by the PV system and the PowerWall's battery is not fully charged.  It will pull the amount of "excess" power that the PV system would normally be putting into the grid and charge its own battery at that same rate resulting in a net-zero amount of power being put into the grid.
  3. More power is being consumed by the user's household than is being produced by the solar array.  Depending on the state-of-charge and configuration of the PowerWall it may produce enough power to make up for the difference between what the PV system is producing and the user's needs.  At night this could (in theory) be 100% of the usage if the system were so-configured.
  4. Tariff leveling.  It would be theoretically possible to configure it so that whether or not solar was present and the utility charged a higher daytime than nighttime power rate, one could charge overnight from the mains and put out power during the day to reduce the power costs overall and to help "level" the utility's load.
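The first three scenarios above can be collapsed into a single decision rule, sketched below. This is a toy model under assumed names and thresholds - not Tesla's actual control law - and scenario #4 (tariff leveling) would add a time-of-day term omitted here for brevity:

```python
# Toy dispatch rule for scenarios 1-3 above (a sketch under assumed names and
# thresholds, not a documented Tesla behavior): the battery absorbs any PV
# surplus while it has room, supplies any deficit while it has charge, and
# otherwise the grid takes up the slack.

def powerwall_action(pv_kw: float, load_kw: float, soc: float) -> str:
    """soc is the battery state of charge, 0.0 (empty) to 1.0 (full)."""
    surplus = pv_kw - load_kw
    if surplus > 0:
        # Scenarios 1 and 2: surplus charges the battery unless it is full.
        return "export to grid" if soc >= 1.0 else "charge battery"
    if surplus < 0:
        # Scenario 3: cover the deficit from the battery if any charge remains.
        return "import from grid" if soc <= 0.0 else "discharge battery"
    return "idle"
```

So a sunny afternoon with a full battery exports to the grid exactly as it would without a PowerWall, while the same afternoon with a partly-charged battery yields net-zero grid flow.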
What about a power outage?

All of the above scenarios are to be expected - and they are more-or-less standard offerings for many of the battery-based products of this type - but what if the AC mains go down?  For the rest of this discussion we will ignore the "DC" version of the PowerWall, as its capability would rely on the configuration of the user's hybrid inverter and its capabilities/configuration when it comes to supplying backup, "islanded" AC power - although the combination of a DC PowerWall and the appropriate inverter could be functionally identical to an AC PowerWall.

As mentioned before, with a typical PV system - either "series string" (one large inverter) or distributed (e.g. "microinverter") - if the power grid goes offline the PV system becomes useless:  A PV system requires the power grid to be present to both synchronize itself and present an infinite "sink" into which it can always "push" all of the "extra" power that it is producing.  Were such units to not shut down, dangerous voltages could be "back-fed" into the power grid and be a hazard to anyone who might be trying to repair it.  It is for this reason that all grid-tie inverters are, by law, required to go offline and/or disconnect themselves completely from the power grid during a mains power outage.

The "AC" version of the Tesla PowerWall's system includes a switch that automatically isolates the house from the utility's power grid when there is a power failure.  Once this switch has isolated the house from the power grid the inverter built into the PowerWall can supply power to the house - at least as long as its battery lasts.

What about charging the battery during a power outage?

Here is where it seems to get a bit tricky.

If all grid-tie inverter systems go offline when the power grid fails, is it possible to use your PV system to assist, or even charge, the PowerWall during a grid failure?  In other words, can you use power from your PV system to recharge the PowerWall's battery or, at the very least, supply at least some of the power to extend its battery run-time?

In corresponding with a company representative - and corroborated by data openly published by Tesla (see the FAQ linked near the bottom of this posting) - the answer would appear to be "yes" - but exactly how this works is not very clear.

Based on rather vague information and knowing the behavior of the components involved it would seem to need to work this way:
  • The power (utility) grid goes down.
    • The user's PV system goes offline with the failure of the grid.
    • The PowerWall's switch opens, isolating the house completely from the grid - aside from the ability to monitor when the power grid comes back up.
    • The inverter in the PowerWall now takes the load of the (now isolated) house, producing AC power.
Were this all that happened, the house would again go dark once the battery in the PowerWall or similar "back-up power system" was depleted, but there seems to be more to it than this when a PV system is involved, as in:
  • When the back-up power system's inverter goes online, the PV system again sees what looks like the power grid and comes back online.
    • As it does, the back-up power system monitors the total power consumption and usage and any excess power being produced by the PV system is used to charge its battery.
    • If the PV system is producing less power than is being used, the back-up power system will supply the difference:  Its battery will still be discharged, but at a lower rate.  The house will still go dark when the battery is fully discharged.
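The islanded charge/discharge behavior described above lends itself to a toy hour-by-hour simulation. All of the numbers below are invented for illustration except the 13.5 kWh capacity figure quoted from the data sheet earlier:

```python
# Toy hour-by-hour simulation of the islanded behavior described above
# (all inputs invented): the battery drains when the house uses more than
# the PV produces and recharges from PV surplus, within its capacity.

CAPACITY_KWH = 13.5   # per the data sheet figure quoted above

def simulate_outage(pv_kw, load_kw, start_kwh=CAPACITY_KWH):
    """pv_kw/load_kw: per-hour lists.  Returns the battery level each hour."""
    level, history = start_kwh, []
    for pv, load in zip(pv_kw, load_kw):
        level += pv - load                          # surplus charges, deficit drains
        level = max(0.0, min(CAPACITY_KWH, level))  # clamp to physical limits
        history.append(level)
    return history

# Overnight outage: no sun, steady 1 kW house load for 6 hours.
print(simulate_outage(pv_kw=[0] * 6, load_kw=[1] * 6))
```

With no PV contribution a steady 1 kW load drains the battery at 1 kWh per hour; any daytime PV surplus in the input lists slows or reverses the drain, which is exactly the "discharged, but at a lower rate" behavior described above.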

What if you run the Power Wall down to the point where it goes offline and then the sun comes out the next day:  Is it possible to "bootstrap" the system to cause the PV to go online and start charging the battery, or are you "stuck" in an "offline" state where you can't produce PV power to charge the battery because there is no AC power, and can't produce AC power because the battery is dead?

It is entirely possible that the DC version of the Power Wall may be "immune" to this "catch-22" situation by allowing some "reserve" capacity to restart once PV power is again available for charging - but how would it know that?

But now it gets even trickier and a bit more vague.

What if there is extra power being produced by the PV system?

Grid tie PV systems expect the power grid to be an infinite sink of power - but what if, during a power failure, when your backup system is standing in as the power grid, your PV system is producing 5kW of solar energy and your house/inverter is using only 2kW:  Where does the extra 3kW of production go if it cannot be sunk into the utility grid, and how does one keep the PV system from "tripping out" and going offline?

To illustrate the problem, let us bring up a related scenario where we have a generator instead of some sort of battery-based back-up power system.

There is a very good reason why owners of grid-tie systems are warned against using them to "assist" a backup generator when that generator is acting as a substitute for the power grid.  What can happen is this:
  • The AC power goes out and the transfer switch connects the house to the generator.
  • The generator comes online and produces AC power.
  • If the AC power from the generator is stable enough (not all generators produce adequately stable power) the PV system will come back online thinking that the power grid has come back.
  • When the PV system comes back online and produces power, the generator's load decreases:  Most generators' motors will slightly speed up as the load is decreased.
    • When the generator's motor speeds up, the frequency goes high.  When this happens, the PV system will see that as unstable power and will go offline.
    • When the PV system goes off, the power is suddenly dumped on the generator and it is hit with the full load and slows back down.
  •  The cycle repeats, with the PV system and generator "fighting" each other as the PV system continually goes on and offline.
An even worse scenario is this:
  • The AC power goes out, the transfer switch connects the house to the generator.
  • The generator comes online and produces power.
  • The PV system comes up because it "sees" the generator as the power grid, but it's producing, say, 5kW while the house is, at the moment, using only 2kW.
  • The PV system, because it thinks that it is connected to the power grid, will try to shove that extra 3kW somewhere, causing one or more of the following to happen:
    • The generator speeds up as power is being "pushed" into it, its frequency will go high and trip the PV system offline, and/or:
    • If the PV system tries to push more power into the system than there is a place for it to go (e.g. the case, above, where the solar is producing 3kW more than is being used) the voltage will necessarily go up.  Assuming that the generator doesn't "overspeed" and trip-out and the frequency doesn't go up and trip the PV system offline, the PV system will increase the voltage, trying to "push" the extra power into a load where there is nowhere for it to go:
      • As the PV system tries to "push" its excess power into the generator, it will increase the output voltage.  At some point the PV system will trip out on overvoltage, and the same "on-off" cycle mentioned above will occur.
      • It is possible that the excess power from the PV will "motor" the generator (e.g. the input power tries to "spin" the generator/motor) - an extremely bad thing which will probably cause it to overheat and eventually be destroyed if this goes unchecked.
      • If it is an "inverter" type generator, it can't be "motored", but the excess power will probably cause the generator's inverter to get stuck in the same "trip out/restart" cycle or simply fault out in an "overload" condition - or the inverter might even be damaged/destroyed.
If having extra power from a grid-tie inverter is so difficult to deal with, what could you do with extra power that the PV system might be producing?

What if we have excess power and nowhere to put it?

The question that comes to mind now is "What does the PV system do when the PowerWall's battery is fully-charged and there is nowhere to put extra energy that might be being produced?"  What we have is the situation where our PV system is producing 5kW but we are using only 2kW, leaving an extra 3kW to go... where?

The answer to that question is not at all clear, but four possibilities come to mind:
  1. Divert the power elsewhere.  Some people with "island" systems utilize a feature of some solar power systems that indicate when excess power is available and use it to operate a diversion switch to shunt the excess power in an attempt to do something useful like run an electric water heater, pump water or simply produce waste heat with a large resistor bank.  Such features are usually available only on "island" systems (e.g. those that are entirely self-contained and not tied to the power grid) and with large battery banks.
  2. Disable the PV system temporarily.  If it is possible, simply disable the PV system for a while and drain, say, 5-10% of the power out of the back-up power system battery before turning it back on and recharging it.  This will cause the PV system to cycle on and offline, but it will do so relatively slowly and it should cause no harm.
  3. Tell the PV system to shut off.  One could somehow communicate with the PV system and "tell" it to produce only the needed amount of energy.  This is a bit of a fine line to walk, but it is theoretically possible provided such a feature is available on the PV system.
  4. Alter the power to cause the PV system to drop off-line.  One could, in theory, alter the conditions of the power being produced by the back-up power system inverter such that it causes the PV system to go offline and stay that way until it needs to come back online.
Analyzing the possibilities:

Let's eliminate #1 as that will not apply to a typical grid-tie system, so that leaves us with:

#2:  Disabling the PV system:

Of these three possibilities #2 would seem to be the most obvious and it could be done simply by having another switch/relay on the output of the PV system that disconnects it from the rest of the house, forcing it to go offline - but this has its complications.

For example, in my system the PV is connected into a separate sub-panel located in the garage:  If one were to disconnect this branch circuit entirely, the power in the garage would go on and off, depending on the state-of-charge of the PowerWall or other battery-based back-up power system.  Connecting a PV system to a sub-panel is not an unusual configuration, as it is not uncommon to find them connected to sub-panels that feed other systems - say, the air conditioner, kitchen, etc. (e.g. wherever a suitable circuit is available) - so I'm guessing that they do not do it this way, unless they do it at the point before the PV system connects to the panel.  Doing this would require a remotely-controlled switch - awkward to wire up in many situations, but not impossible.

As noted above, one would disable the PV system once the battery had fully charged but enable it again once the battery had run down a bit - say, to 90-95%.  This way, one would not be rapid-cycling the PV system and the vast majority of the back-up power system's battery storage capacity would be available.
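This disable/re-enable behavior is a classic hysteresis loop, and can be sketched in a few lines. The thresholds below are the example figures from the text (re-enable at about 90%, disable at full) - this is a generic control sketch, not a documented Tesla behavior:

```python
# One way to implement the on/off cycling described above (thresholds are the
# text's example figures, not a documented behavior of any product): keep the
# PV system disabled while the battery is full and re-enable it only once the
# state of charge has drifted down to about 90%.

class PvHysteresis:
    def __init__(self, low=0.90, high=1.00):
        self.low, self.high = low, high
        self.pv_enabled = True

    def update(self, soc: float) -> bool:
        """Return whether the PV system should be allowed online at this SoC."""
        if soc >= self.high:
            self.pv_enabled = False    # battery full: force the PV offline
        elif soc <= self.low:
            self.pv_enabled = True     # battery has room again: let PV return
        return self.pv_enabled         # between thresholds: keep current state
```

The gap between the two thresholds is what prevents rapid cycling: the battery must run down by roughly 10% before the PV system is switched back on.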

While this, too, should work, I suspect that it is not the method used, as the drawings in the brochures don't show any such connection - but then again, they don't show the main house disconnect that would have to be present.  It would probably work just fine provided the PV system gracefully came back online when it was time to do so (e.g. no user intervention to "reset" anything.)

#4:  Alter the operating conditions to cause the PV system to go offline:

Then there is #4, and one interesting possibility comes to mind - it may be a kludge, but it should work.

One of the parameters that could be altered would be the frequency at which the back-up power system's inverter operates (say, 2-3 Hz or so above and/or below the proper line frequency) and force the PV system offline with that variance.  Even though this minor frequency change is not likely to hurt anything (many generators' frequencies drift around much more than this with varying loads!) devices that use the power line frequency as a reference - such as clocks, and clocks within various appliances - would drift rather badly unless the frequency were "dithered" above and below the proper frequency so that its long term average was properly maintained.

I suspect that this is not a method that would be used, but it could work - at least in theory.
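The "dithering" idea above can be illustrated with a trivial sketch: offset the inverter frequency symmetrically above and below nominal so that the long-term average - and hence any line-powered clock - stays correct. The 2.5 Hz offset and interval scheme here are invented for illustration:

```python
# Sketch of the frequency "dithering" idea described above (illustrative
# numbers only): shift the inverter frequency alternately above and below
# nominal so that the PV system sees an out-of-spec frequency and stays
# offline, while the long-term average frequency remains correct.

NOMINAL_HZ = 60.0
DITHER_HZ = 2.5          # assumed to be enough offset to keep a grid-tie PV offline

def dither_schedule(n_intervals: int):
    """Alternate high/low offsets so the mean frequency remains nominal."""
    return [NOMINAL_HZ + (DITHER_HZ if i % 2 == 0 else -DITHER_HZ)
            for i in range(n_intervals)]

sched = dither_schedule(8)
print(sched)
print(sum(sched) / len(sched))   # averages back to 60.0 Hz
```

Because equal time is spent high and low, a synchronous clock gains during the "fast" intervals exactly what it loses during the "slow" ones.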

Edit - 20170719:
In digging around, I have determined that "dithering" the frequency is, in fact, one of several ways used by a battery-backed inverter to disable a PV inverter when the PV is producing more power than can be accommodated by the load and/or battery charger.  This technique, called "Frequency Shift Power Control" (FSPC) by at least one manufacturer (e.g. SunnyBoy), is designed to do this very thing.

A description of this technique may be found in section 6 of the document at this link.

Whether or not this is a control method used by the Power Wall is not known at this time.

#3:  "Talk" to the PV system and control the amount of power that it is producing:

That leaves us with #3:  Communicate with the PV system and "tell" it (perhaps using the "ModBus" interface) to produce only enough power to "zero" out the net usage.

The problem with this method is that it would depend on the capabilities of the PV inverter system and require that they support such specific remote control functions.  While it is very possible that some do, this method would be limited to those so-equipped.

#3 and #2:  "Talk" to the PV system to turn it on and off as needed:

Included in #3 could be a variant of method #2 and that would be to send a command to the inverter via its network connection to simply shut down and come back online as needed to keep the battery between, say, 90% and 100% charge as mentioned above.

This second variant of #3 seems most likely, as there is probably some sort of set of commands capable of this that would be widely implemented across vendors and models.

* * *

What do I think the likelihood to be?

I'm betting on the second variant of #3 where a command is sent to the PV system to tell it to turn off - at least until there is, again, somewhere to "send" excess power - but #4 is looking increasingly likely.

* * *

Having said all of this, there is a product FAQ that was put out by Tesla that seems to confirm the basic analysis - that is, its ability to run "stand alone" in the event of a power failure and that the charge can be maintained if there is sufficient excess PV capacity - read that FAQ here - LINK.

I'm investigating getting a PowerWall 2 system to augment my PV generation and provide "whole house" backup.  In the process I have been researching how it works and interfaces with both the utility and my existing PV system.

While I have occasionally asked questions of representatives of Tesla, nothing that they have said is anything that could not be easily found in publicly-released information on the internet, and as of the original date of this posting I haven't signed anything that could possibly keep me from talking about it.

However all of its interfacing and connectivity is done, it should be interesting!
Additional information may be found on the GreenTech Media web site:  "The New Tesla Powerwall Is Actually Two Different Products" - LINK.  This article and its follow-up comments seem to indicate that, at the time of their writing, there were only a few manufacturers of inverters - namely SolarEdge and SMA (a.k.a. SunnyBoy) - with which Tesla was installing/interfacing their systems, perhaps indicating some version of #2 or #3, above.  Clearly, the comments, mostly from several months ago, also offer various conjectures on how the system actually works.

* * *

Finally, if you can find more specific information - say from a public document or from others' experience and analysis that can add more to this, please pass it along!


This page stolen from "ka7oei.blogspot.com"