Scientific Seen

News, Commentary, and Tutorials from a Scientific Perspective

Solid-state lighting — using light-emitting diodes for general illumination — offers a host of advantages over incumbent technologies. For starters, LED lighting is five times (or more) as efficient as incandescent lighting, it has no components requiring hazardous-material waste provisions (as fluorescent lamps do), and LEDs respond instantly to electric current, eliminating the startup delays inherent in high-intensity discharge (HID) lamps and allowing the illumination level to be controlled. And those really are just for starters; there are many other advantages.

But LED lighting has one big disadvantage: it’s completely different from other lighting technology. For example, LEDs are generally driven with direct current (DC), as opposed to the AC that powers just about every other source. Another big difference is that LED packages themselves (the LED chip, encapsulant, and primary optics) shape and direct the light. Contrast that with the tungsten-alloy filament at the heart of an incandescent bulb. When the filament heats up, it puts out light (converting about 5 to 10 percent of its electrical power to visible light) in every direction.

Manufacturers can make specialty incandescent bulbs that have internal mirrors to direct the light in a certain way, but in general, one light bulb is just like another. You can replace a Sylvania bulb with a General Electric bulb with a Philips bulb, and you can put it in a desk lamp, an outdoor sconce, or a recessed fixture. Incandescent bulbs are commodities because the physics determines how a filament works, and there’s not much any manufacturer can do to distinguish its offering from another company’s. The good part is the interchangeability; the bad part is that the light output of incandescent lamps is cut down by the fixture efficiency (the fraction of light that actually gets where you want it to go). Because the light from the bulb itself is uncontrolled — spreading out in every direction — you can’t match a bulb to the fixture in which you’d like to put it, so light (and energy) is wasted.
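To put rough numbers on that waste, here’s a minimal sketch; the bulb output and fixture efficiency are assumed round values, not measurements from any particular fixture:

```python
# Illustrative only: how fixture losses cut into delivered light.
# Both numbers below are assumptions for the example, not measurements.

bulb_lumens = 800          # typical output of a 60 W-class incandescent bulb
fixture_efficiency = 0.60  # assumed fraction of light the fixture lets through

delivered = bulb_lumens * fixture_efficiency
print(f"{delivered:.0f} of {bulb_lumens} lumens reach the task area")  # 480 lm
```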

Matching the Light Source to the Task

What if, though, you could buy a different light bulb for every fixture, one tailored to minimize wasted energy for that one particular application? It would be expensive. And every few months or so it would burn out and you’d need to run down to the hardware store and buy a replacement bulb (selecting from the scores your retailer would need to keep in stock). So that’s not really a viable solution for incandescent bulbs. But it is for LEDs.
One of the advantages alluded to above is that LEDs can last a really long time (there are some issues with exactly how long, but that’s for another day). Long lifetime means that it’s reasonable to select an LED lighting solution specific to each application — because you won’t need to change it out for 5, 10, perhaps even 20 years or more. In practice, it’s a little more complicated than that, because there’s not just one LED that will go in every desk lamp or cove light. Every luminaire (light fixture) manufacturer selects and arranges LEDs in a different fashion. So if you really want to optimize LED lighting, you have to buy an entire fixture. It’s still a money-saving proposition (it’s not uncommon for industrial customers to see payback times of anywhere from several months to a few years), but it creates a dilemma for LED lighting.
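The payback arithmetic behind that proposition is simple enough to sketch; the wattages, prices, and hours below are hypothetical round numbers, not figures from any actual project:

```python
# Simple-payback sketch for an LED fixture upgrade.
# Every number below is an illustrative assumption, not data from the post.

def simple_payback_years(fixture_cost, old_watts, new_watts,
                         hours_per_year, dollars_per_kwh):
    """Years until the energy savings repay the up-front fixture cost."""
    kwh_saved_per_year = (old_watts - new_watts) * hours_per_year / 1000.0
    annual_savings = kwh_saved_per_year * dollars_per_kwh
    return fixture_cost / annual_savings

# Hypothetical industrial case: a 400 W fixture replaced by a 150 W LED
# fixture running around the clock at $0.12/kWh.
years = simple_payback_years(fixture_cost=300.0, old_watts=400,
                             new_watts=150, hours_per_year=8760,
                             dollars_per_kwh=0.12)
print(f"Payback: {years:.1f} years")  # ~1.1 years with these inputs
```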

LED lighting can be forced into a traditional incandescent bulb shape — but should it be?
Image from U.S. Department of Energy.

Retrofit or Redesign?

LED lighting manufacturers have two choices: they can make fixtures that optimize the distribution of light to take full advantage of the new capabilities offered by solid-state lighting, or they can design retrofit bulbs that can be put into place as one-for-one replacements in existing incandescent or fluorescent fixtures. At last week’s LED Show the dilemma was (quite literally) on display. The purists argue that forcing LEDs to mimic (crappy) incandescent or fluorescent sources will set the industry back, because customers will see the energy savings but few of LED lighting’s other advantages. The retrofit folks argue that the replacement bulb lowers the barriers to entry, giving LEDs an absolutely necessary foothold in the general illumination market. At times, there’s some visible contempt expressed by one side for the other.

The logical path (one that’s already being played out) seems to be that retrofit solutions get LEDs in the door, where they then get to display some of their other advantages. Those bonus advantages then create showcase solutions that become selling points for designs that fully embrace the LED lighting solution.


It’s interesting to watch the evolution of a technology.  I attended my first LED lighting conference in 1998 and I’ve attended at least one in every intervening year.  In those first years, Las Vegas was the primary customer.  Nowhere else was lighting such a significant expense — and such an essential part of marketing.  Higher efficiency and lower maintenance represented a huge savings for casino operators on The Strip.  But the ability of LEDs to manage the distribution of light (direct it to the consumer instead of spraying it into space) and the flexible control of LED lighting (to easily reconfigure displays) added to the energy savings to make LED lighting worth the investment.

But there’s a big difference between convincing casino operators to upgrade their high-value, high-maintenance displays and getting consumers to shell out big bucks to replace their 25-cent incandescent bulbs.  The fundamental problem is that LEDs are significantly different from their incandescent predecessors, but it’s difficult to take advantage of their new capabilities within the current infrastructure.  So people pushing for LED adoption have to limit their argument to two factors: it will save energy and it will reduce maintenance costs.  It’s kind of like arguing for the replacement of horse-drawn buggies with gas-powered automobiles, but having your arguments for change limited to discussions of lower hay costs and a reduced need for street cleaning.

LED lighting can be made to look like an incandescent bulb, but that’s like requiring a buggy whip on a Ferrari.

Even with that limited evaluation, LEDs are now well past the threshold of economic viability (after some intense political, economic, and technological growing pains), so just about any lighting project today needs at least to consider LEDs as an option, and for many developers they are the option of choice.  But now that LED general illumination is in place, system operators are realizing some other benefits.

Networked Capabilities of LED Lighting

At the LED Show (which started yesterday in — fittingly enough — Las Vegas), Kelly Cunningham of the California Lighting Technology Center (CLTC) at the University of California, Davis, described a networked implementation of LED lighting control on the university’s campus.  Outdoor LED lighting on the campus is triggered by passive infrared sensors that provide little more than a simple present-or-not signal.  Even with that limited input, the control system anticipates pedestrian and cyclist movement, bringing lighting up to full brightness before the traffic reaches the lit area.  The ability to remotely control and instantaneously modify the illumination level of LEDs is central to the operation of this kind of system, and the immediate benefits are impressive.

For example, occupancy sensors on 100 wall packs (those curious rectangular fixtures affixed to the outsides of buildings, washing the walls with light) registered only a 28 percent occupancy rate, leading to an 85 percent reduction in energy costs — over and above the reduction due to LED efficiency alone.  The CLTC implemented the same kind of system on a stretch of urban roadway.  Although the final report has not been released, Cunningham said the results are similar.
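Assuming fixtures run at full power only while occupancy is detected and drop to a low standby level otherwise, the savings arithmetic is easy to sketch; the standby dim level here is my assumption, not a number from the CLTC study:

```python
# Sketch of the savings from occupancy-based dimming, relative to LEDs
# left at full power all night. The standby dim level is an assumption,
# not a value from the CLTC study.

def energy_fraction(occupancy, standby_level):
    """Average power as a fraction of always-at-full-power operation."""
    return occupancy + (1.0 - occupancy) * standby_level

occupancy = 0.28  # fraction of time the sensors detected someone (from the talk)
standby = 0.10    # assumed dim level while the area is vacant

frac = energy_fraction(occupancy, standby)
print(f"Energy used: {frac:.0%}; savings: {1 - frac:.0%}")  # 35% used, 65% saved
```

With these assumed numbers the savings come to roughly 65 percent; reaching the reported 85 percent implies the vacant-state level was very low (or the lights were off entirely) and full brightness was held only briefly around each detection.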

That’s encouraging news for the industry, because those are the kinds of integrated lighting systems that insiders have been claiming would lead to additional levels of savings (and other capabilities, but that’s another story), and this provides another fairly significant example of the promise coming to fruition.  It also demonstrates another general truth: if you don’t have a capability, you have no idea what you’ll do with it; but once you develop it, you’ll find clever ways to apply it.  That’s true for networked lighting now.  In the near future, the precise control LEDs offer over the color, intensity, and distribution of light will be used to modify illumination in our work and home environments, enhancing our comfort and productivity in ways we can only glimpse today.


Remember the hullabaloo a few years ago about camcorders capable of infrared photography — folks modifying their camcorders to see through clothing? One of the more annoying elements of the press coverage was the label “x-ray” for that kind of image. Sure, it’s a quick way of saying the modified cameras can see through clothing that appears opaque to the eye, but the infrared wavelengths are about a thousand times less energetic than x-rays.
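A quick photon-energy comparison shows where that factor of a thousand comes from: photon energy goes as E = hc/λ, so a 1 µm NIR photon carries about a thousandth the energy of a 1 nm x-ray photon. A minimal sketch, with round wavelengths chosen purely for illustration:

```python
# Photon energy E = hc/wavelength, expressed in electron-volts.
# The wavelengths are round illustrative values.

HC_EV_NM = 1239.84  # h*c in eV·nm

def photon_energy_ev(wavelength_nm):
    return HC_EV_NM / wavelength_nm

nir = photon_energy_ev(1000.0)  # near-infrared, about 1 micron
xray = photon_energy_ev(1.0)    # soft x-ray, about 1 nanometer

print(f"NIR photon:   {nir:.2f} eV")   # ~1.24 eV
print(f"x-ray photon: {xray:.0f} eV")  # ~1240 eV, roughly 1000x the NIR photon
```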

And speaking of wavelength, there’s a lot of confusion about infrared cameras in general. The confusion stems from the fact that the infrared region of the spectrum is about 200 times as wide as the visible spectrum. The visible band is so narrow that light behaves nearly uniformly across it: reflect a beam of deep blue light off a mirror and it will act (just about) exactly like a beam of red light, even though those are the two ends of the visible spectrum.

The Infrared Spectrum

Not so for the infrared. The infrared is roughly and loosely divided into three (or more) regions: the near-infrared (NIR), the midwave-infrared (MWIR), and the longwave- (or far-) infrared (LWIR). Those regions act completely differently from each other. A material that absorbs energy in the NIR can be transparent in the MWIR and reflective in the LWIR.

Those regions of the infrared spectrum are generated in different ways as well. Every object in the universe emits radiation, at wavelengths that correspond to the object’s temperature. The human body, for example, emits light in the LWIR region. An army tank or an airplane emits in the MWIR. A hot stove emits in the NIR — and when it gets even hotter it emits in the visible…leading to our familiar experience of something being “red hot.” There are dozens of different types of infrared cameras, imaging different parts of the infrared spectrum.
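Wien’s displacement law makes those pairings concrete: a blackbody at temperature T peaks at wavelength λ = b/T, with b ≈ 2898 µm·K. A minimal sketch, where the temperatures are rough illustrative values rather than measurements:

```python
# Wien's displacement law: the peak emission wavelength of a warm body
# is b/T, with b ≈ 2898 µm·K. Temperatures are rough illustrative values.

WIEN_B_UM_K = 2898.0

def peak_wavelength_um(temp_kelvin):
    return WIEN_B_UM_K / temp_kelvin

for label, temp_k in [("human body (310 K)", 310.0),
                      ("hot engine (600 K)", 600.0),
                      ("red-hot stove (1300 K)", 1300.0)]:
    print(f"{label}: peak near {peak_wavelength_um(temp_k):.1f} um")
# ~9.3 um (LWIR), ~4.8 um (MWIR), ~2.2 um, with enough of a
# short-wavelength tail at 1300 K to glow visibly red.
```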

The modified camcorders that do the “x-ray” infrared photography work in the near-infrared, so they aren’t creating pictures from the infrared energy originating in the human body. They create images from reflected NIR. At night, when there is not much visible light around, these cameras would shine a NIR “flashlight” and capture the infrared wavelengths reflected off the subject. Even though the scene would look dark to the eye, the camera would capture a perceptible image.

Near-infrared photography creates subtly different — almost eerie — effects.
Image courtesy of Wikimedia Commons.

The sun and artificial light sources emit near-infrared radiation, but they also emit visible light. The detector in the camcorder can sense the NIR, but that’s usually not the image you want during the daytime, so an infrared-blocking filter is placed in front of the detector. To take the night-time photos, the infrared-blocking filter is flipped up, out of the optical path. If the filter is flipped up during the day, the detector still senses the NIR, but there’s so much visible light around that the infrared image is swamped. To get around that, some users put a visible-light-blocking filter in front of the camera. Then, during the daytime, the camera senses the NIR without all the extra visible light. That NIR image captures NIR reflected from the scene, just as visible-light images capture visible light reflected from the scene.

Seeing Through Clothes

So how did that “see through the clothes” thing work? Well, there are some materials that are transparent in the NIR and opaque in the visible. Some (usually thinner cotton) fabrics transmit near-infrared rather than reflecting it. Meanwhile, the undergarments, fabricated from different materials, do reflect NIR. The effect is almost as if the outer garment isn’t there.

Situations where the outer clothes are transparent in the NIR really aren’t all that common, but the uproar was enough to cause camcorder manufacturers to make it much more difficult to use their cameras to image NIR under daylight conditions. It’s a shame, because there are some really interesting effects possible with daytime infrared photography. Still, since there are some scumballs who do things like take “naked” photos of the Chinese diving team with their modified cameras, one can see why the manufacturers have tried to eliminate the infrared imaging capabilities of their cameras.

Related article at Salon.com.


Ran across this item at Renewable Energy World:

It wasn’t so long ago when some solar company executives – particularly those in the thin film business – dismissed the idea that innovation could still thrive in the world of crystalline silicon technology. The silicon technology was getting cheaper and factories were getting larger, and its dominance seemed unshakable, at least in the short-term. Why would anyone invest in new materials or production processes?

Turns out, a lot more can be done to chip away at the manufacturing cost. This is especially true when silicon technology companies are eager to set their products apart in a market that’s got way too many same-same solar panels.

Although Ms. Wang is certainly accurate in her reporting, I have to chuckle (metaphorically) at the “rush” to innovation. I think the rush is simply a sudden awareness (or a sudden awareness of the importance) in non-technical executive offices of the need for technological innovation at the material, cell, and panel level—a process that’s been continuous for decades. For example, a year and a half ago, at Photon’s PV conference, Centrotherm announced their “turnkey” CIGS factory, allowing anyone with the required capital to produce thin-film solar cells. The question then becomes: how does one distinguish oneself from the other producers?

Although the situation for silicon photovoltaics is slightly different, in that there are a few competing approaches to cell and panel fabrication, there is not a clear differentiation in the final cost and performance numbers (or the levelized cost of energy, as folks like to discuss nowadays). As long as there is no clear differentiator, there will continue to be a search for one.
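Since the levelized cost metric comes up, it’s worth showing what the calculation actually is: discounted lifetime costs divided by discounted lifetime energy production. A bare-bones sketch, with every input an invented illustrative number:

```python
# Bare-bones levelized cost of energy (LCOE): discounted lifetime costs
# divided by discounted lifetime energy. All inputs are illustrative
# assumptions, not figures from the article.

def lcoe_dollars_per_kwh(capex, annual_opex, annual_kwh, years, discount_rate):
    costs = capex  # up-front cost is not discounted
    energy = 0.0
    for t in range(1, years + 1):
        costs += annual_opex / (1.0 + discount_rate) ** t
        energy += annual_kwh / (1.0 + discount_rate) ** t
    return costs / energy

# Hypothetical 5 kW rooftop array: $10,000 installed, $100/yr upkeep,
# 7,000 kWh/yr output, 25-year life, 5% discount rate.
print(f"LCOE: ${lcoe_dollars_per_kwh(10000, 100, 7000, 25, 0.05):.3f}/kWh")
# ~$0.116/kWh for these inputs
```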

What kind of differentiator? Well, what are the customers saying? At PV America West in March of this year, a panel discussion about the solar utility business model had absolutely no discussion of the role of technological innovation on the part of their suppliers. When presented with the question of the relative importance of technological innovation, the panel members stated that it was important, but (essentially) only in the context of decreasing costs.

Certainly this is a bit too complicated to address in a short post (it would probably be a bit too complicated to address in a non-partisan two-year congressional investigation, for that matter…), but given the disappearance or erosion of subsidies, technological innovation and its concomitant cost reductions will become even more important in the future.

So, “rush”? I can’t disagree with the sense of urgency that word implies, but technological innovation in silicon photovoltaics and related technologies is nothing new.


Maybe you’ve made your own cheese, or installed a solar panel, or you’re filtering seawater for your aquarium. Anytime you have some kind of physical process in place, you’d probably like to monitor it so you know what’s going on and you know when you’re having a problem. With your own monitoring system built around digital panel meters, you can measure temperature, current, salinity, or just about anything else you need to track. But you need to know the reading displayed is accurate. That’s why you calibrate your meter.
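The workhorse technique is a two-point calibration: measure two known references, then map raw readings onto true values with a straight line. A minimal sketch; the ice-bath and boiling-water readings below are hypothetical examples:

```python
# Two-point linear calibration: solve reading = m*raw + b from two known
# reference points, then correct subsequent raw readings.
# The reference readings below are hypothetical examples.

def two_point_calibration(raw_lo, true_lo, raw_hi, true_hi):
    """Return a function that maps raw meter readings to true values."""
    m = (true_hi - true_lo) / (raw_hi - raw_lo)
    b = true_lo - m * raw_lo
    return lambda raw: m * raw + b

# Example: the meter reads 0.3 in an ice bath (0 C) and 99.1 in boiling
# water (100 C at sea level).
correct = two_point_calibration(0.3, 0.0, 99.1, 100.0)
print(f"Raw 48.0 -> {correct(48.0):.1f} degrees C")  # ~48.3 corrected
```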

Originally published in eHow, October 2011.
