Scientific Seen

News, Commentary, and Tutorials from a Scientific Perspective

The Commission internationale de l’éclairage (CIE) is releasing new documents on Human-Centric Lighting. The group, known in English as the International Commission on Illumination, notes the kinds of issues discussed in this forum a couple of months ago.

Specifically, their press release states:

Scientists, the lighting industry, lighting designers and other stakeholders in the lighting community have continued to identify options and to design products and solutions that make use of non-visual lighting effects in a beneficial way, despite the fact that the established knowledge in this field is still premature. Among the few points of general agreement is that the non-visual effects of light exposure depend on the spectrum, intensity, duration, timing and temporal pattern (light history) of the light exposure.

The two new documents will address how to quantify the human non-visual response to illumination (presumably in the same sort of statistically average way as the CIE chromaticity diagrams) and how to identify the human factors that are influenced by non-visual illumination. Just to clarify, “non-visual” in this context doesn’t mean invisible or outside the spectral response of the eye. It refers to light sensed by neurons that are not involved in image formation — that are not rods and cones. The primary receptors involved in the non-visual response are called intrinsically photosensitive retinal ganglion cells (ipRGCs). They have a spectral response that overlaps quite significantly with the action spectrum of green cones.

Intrinsically photosensitive retinal ganglion cells detect light and modify human metabolic activity, the goal of human-centric lighting.

An intrinsically photosensitive retinal ganglion cell, the primary receptor targeted by human-centric lighting efforts. Image from Ning Tian, M.D., Ph.D., photographed by Bryan William Jones, Ph.D., via webvision.med.utah.edu.

As with any field on the edge of knowledge, what we don’t know far outweighs what we do know, but the CIE is looking to turn that around. It would be nice to be able to design human-centric lighting based on sound understanding of its physiological effects.


By now even the most isolated consumer has at least heard of LED lighting. LED replacement bulbs are available in any hardware store larger than a closet. The case for the economics and quality of solid-state lighting has been won. Now the industry can concentrate on bringing true value to LED lighting. That value will not come in the form of lower energy use; it will come in the generation of light that enhances the human experience.

That means influencing some aspect of human comfort, health, or behavior by producing and modifying light’s intensity, spatial and angular distribution, spectral content, and possibly other characteristics. Before LEDs we had only limited control over the characteristics of light. Now we have nearly infinite degrees of control. With this control the question becomes: What is ideal lighting, and how can we produce it?
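To make that degree of control concrete, here is a minimal sketch of one such knob: tuning correlated color temperature (CCT) by mixing warm-white and cool-white LED channels. The channel CCTs and the linear-in-mired blending rule are simplifying assumptions of mine (a common first-order approximation), not a description of any particular product.

```python
def mix_for_cct(target_cct, warm_cct=2700.0, cool_cct=6500.0):
    """First-order tunable-white mix: channel power ratios blend
    approximately linearly in mired (1e6 / CCT) space.

    Returns (warm_fraction, cool_fraction). The 2700 K / 6500 K
    channel values are hypothetical fixture parameters.
    """
    m_target, m_warm, m_cool = (1e6 / c for c in (target_cct, warm_cct, cool_cct))
    warm = (m_target - m_cool) / (m_warm - m_cool)
    return warm, 1.0 - warm

warm, cool = mix_for_cct(4000)  # a neutral-white target
print(f"4000 K: {warm:.0%} warm channel, {cool:.0%} cool channel")
```

Intensity, angular distribution, and finer spectral shaping each add similar independently adjustable parameters, which is what makes the design space so large.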

Schoolchildren in LED-lit classroom.

Philips LED lighting offers teachers the option of four pre-set lighting scenarios, ‘Normal’, ‘Energy’, ‘Focus’ or ‘Calm’. There’s a positive response, but what does it mean? Courtesy of Philips Lighting.

Are we ready to answer that? The just-completed Strategies in Light Conference in Las Vegas featured a workshop led by a truly distinguished panel of experts claiming that human-centric lighting is ready now. The tenor and content of the discussion might be better summarized by a slightly different title. Perhaps something like, “Technology is Ready to Start Investigating the Desired Qualities of Human-Centric Lighting.”

The argument put forward by the presenters is that they are already proceeding with human-centric lighting projects designed to optimize or improve some aspect of human behavior. Stan Walerczyk introduced the new solid-state lighting at the Seattle Mariners’ stadium — the first Major League Baseball stadium to make the switch — which includes (in the home team locker room only) features to tweak circadian rhythm for optimum alertness at gametime. Doug Steel talked about the growing body of knowledge linking various health conditions with light exposure — and the corollary that light exposure can potentially improve health outcomes. Professor Robert Karlicek described methods for integrating “non-invasive” sensors into lighting systems to produce light optimized for a given activity — hands-free because the sensor’s system autonomously categorizes the human activity. Mike Lambert described some human-centric lighting pilot projects in schools, including a school for autistic children.

The message is that light is already being used in innovative ways to influence human health, behavior, and comfort.

The Big Questions for Human-Centric Lighting

But some big questions remain. Bob Karlicek summarized the situation with a single statement: “We really don’t know what we don’t know yet about how lighting will influence how people learn and work.”

We don’t even really have a grasp of the most basic question:
In what ways do the visual and non-visual light receptors influence human behavior, mood, and health? It’s likely to be a very complex answer with primitive roots. That is, the intrinsically photosensitive retinal ganglion cells (ipRGCs) are very highly conserved, in biological terms. That means they appear even in “primitive” organisms only distantly related to humans, which points to a) their early origin and b) their importance to survival. That early illumination response had nothing to do with the human brain, as it arose long before there were any humans (or, for that matter, any brains). So there are going to be some very basic biochemical responses to light that are far beyond our conscious (or even unconscious) control. On the other hand, the human brain does a really good job of processing and manipulating sensory response. Basically, we can convince ourselves we feel one way when our body is telling us we truly feel something completely different. And that’s even before we introduce “thinking,” which makes everything more complicated. Thus, the complex answer with primitive roots.

Light and the brain.

Light influences the biochemistry of the brain, endocrine systems, metabolic processes, and — well, who knows where it ends? Courtesy of the National Institute of General Medical Sciences.

That raises the next logical question:
How do we go about finding the answer to the previous question? One answer, proposed by Doug Steel at the SIL workshop, is to create environments where users get to choose their own lighting characteristics and then simply monitor what they do. He believes there is a natural tendency for people to drift away from their initial preferences — what they believe they like — toward lighting that maximizes whatever it is they’re trying to optimize. If engineers partner with clinical professionals (many of whom are being dropped by pharmaceutical companies), then he believes these types of studies will identify (or at least illuminate) the influence of light on human activity. On the other hand… there are so many confounding factors that the mechanisms might be hidden beneath layers of competing biochemical, emotional, and cognitive mechanisms. Professor George Brainard emphasized that point, reminding the panelists that there’s a well-developed, successful path for performing this sort of research, progressing from animal through clinical studies. Although I don’t recall anyone explicitly stating this, the panelists’ consensus seemed to be that a) there are already companies out there claiming either specific or nebulous health effects of their particular lighting systems, and b) traditional scientific discovery will take too long. So we should do what we can to provide at least some sort of scientific investigation into the effects of lighting, and we should do it right now.

Which brings up the final question, at least for this already long post:
What makes good lighting? In an elementary school classroom, is good lighting that which maximizes student performance or that which minimizes behavioral problems? In the work environment, is it that which maximizes worker productivity or worker comfort? If lighting makes me feel keyed up, as if I’ve been physically active, am I likely to eat more, even though I’ve been sedentary? Is lighting that makes me happy better than lighting that makes me more alert? And, distinct from those types of questions, there is the layer of light and health. You can imagine a scenario where I feel good and work hard under certain levels of illumination, but that illumination ends up triggering some photosensitive biochemical pathway that begins tumor growth.

My concern is that human studies right now will be almost necessarily one- or two-dimensional: looking at the effect of light on one or two aspects of human comfort, health, behavior, or activity. It’s likely that the complex structure of the human brain is reactive to illumination in many, perhaps contradictory, ways and if we really want answers we’re going to need to take as comprehensive an approach as possible.

The Human Centric Lighting Society is working on these sorts of questions, and you should take a look at their site if you want to get up to speed.


A few years ago XVIVO and Harvard University released a video of a scientific visualization entitled, “The Inner Life of the Cell.” I wasn’t a big fan. It was a fairy-tale vision of cellular activities.

If you saw a simulation of traffic flow on the highways and every vehicle in each lane was going the same speed, maintaining proper following distance, signalling and changing lanes only when necessary, merging and exiting with decorum—it might be nice, but it would be so fanciful that it would do more harm than good if you were trying to understand highway traffic in the real world. When you look at real-world traffic, you have difficulty believing anyone can travel the highways safely, but the simulation would make it hard to imagine there could ever be such a thing as a freeway collision.

A scientific visualization should induce a mental model that catalyzes an improved understanding of reality, and the 2006 simulation failed.

Of course, a simulation like this is going to be unrealistic. Molecules aren’t distinguished by hues, atoms don’t remain stationary with respect to their neighbors, and there’s no classical music soundtrack in a real cell. But the 2006 simulation was so far removed from reality that (in my opinion, of course) it served more to confuse than clarify perceptions of molecular activities in a living cell. My biggest peeve: all the molecules were shown in stately glides, as if a miniature synchronized swimming team were displaying the results of years of practice. On those scales, the “aqueous” environment behaves more like peanut butter. Kinesin molecules grab onto microtubules and pull because they must in order to make it through the thick goop, and none of that difficulty was shown.

Protein Packing in the Cell

A lot to admire in the new XVIVO/Harvard scientific visualization of molecular activities within a living cell.

Now the same scientific visualization team has created a new video, “Inner Life of a Cell—Protein Packing.” This one is so much better. It’s a much more crowded world, and the actions of proteins are limited by interference from all their neighbors. None of the small molecules are shown (not a criticism—if water, ions, and sugars were visible you wouldn’t be able to visualize anything through the resulting mess), but many of the proteins are shown. Of course it’s not “accurate”—it’s a visualization!—but this one is much more representative of the kind of confused and crowded environment within living cells. The new simulation makes it much clearer that the normal processes of life are challenged with every motion, and the new video makes it easier to be awed by the mere fact that we are alive. Heartily recommended!


It’s interesting to watch the evolution of a technology.  I attended my first LED lighting conference in 1998 and I’ve attended at least one in every intervening year.  In those first years, Las Vegas was the primary customer.  Nowhere else was lighting such a significant expense — and such an essential part of marketing.  Higher efficiency and lower maintenance represented a huge savings for casino operators on The Strip.  But the ability of LEDs to manage the distribution of light (direct it to the consumer instead of spraying it into space) and the flexible control of LED lighting (to easily reconfigure displays) added to the energy savings to make LED lighting worth the investment.

But there’s a big difference between convincing casino operators to upgrade their high-value, high-maintenance displays and getting consumers to shell out big bucks to replace their 25-cent incandescent bulbs.  The fundamental problem is that LEDs are significantly different from their incandescent predecessors, but it’s difficult to take advantage of their new capabilities within the current infrastructure.  So people pushing for LED adoption have to limit their argument to two factors: it will save energy and it will reduce maintenance costs.  It’s kind of like arguing for replacement of horse-drawn buggies with gas-powered automobiles, but having your arguments for change limited to discussions of lower hay costs and reduced need for street cleaning.

LED lighting squeezed into a familiar incandescent bulb form factor.

LED lighting can be made to look like an incandescent bulb, but that’s like requiring a buggy whip on a Ferrari.

Even with that limited evaluation, LEDs are now well past the point where they are economically viable (after some intense political, economic, and technological growing pains), so just about any lighting project today needs to at least consider LEDs as an option, and for many developers they are the option of choice.  But now that LED general illumination is in place, system operators are realizing some other benefits.

Networked Capabilities of LED Lighting

At the LED Show (starting yesterday in — fittingly enough — Las Vegas) Kelly Cunningham of the California Lighting Technology Center (CLTC) at the University of California-Davis described a networked implementation of LED lighting control on the university’s campus.  Outdoor LED lighting at the campus is triggered by passive infrared sensors that provide little more than a simple present-or-not signal.  Even with that limited input, the control system anticipates pedestrian and cyclist movement, bringing lighting up to full brightness levels before the traffic reaches the lit area.  The ability to remotely control and instantaneously modify the illumination level of LEDs is central to the operation of this kind of system, and the immediate benefits are impressive.
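The logic of a controller like the one Cunningham described can be sketched in a greatly simplified form. The class name, hold time, and standby level below are all hypothetical values of mine; the point is just the shape of the behavior: a present-or-not trigger brings the light to full, and a quiet period lets it fall back.

```python
class OccupancyDimmer:
    """Toy sketch of a PIR-triggered outdoor dimming controller.

    All names and numbers are hypothetical, chosen only to illustrate
    the anticipate-then-decay behavior described above.
    """
    HOLD_S = 120   # seconds to hold full brightness after the last trigger
    STANDBY = 0.1  # standby output (fraction of full brightness)

    def __init__(self):
        self.level = self.STANDBY
        self.last_trigger = None  # timestamp of the most recent PIR event

    def on_motion(self, now):
        """PIR reports presence: go to full before the traffic arrives."""
        self.last_trigger = now
        self.level = 1.0

    def tick(self, now):
        """Periodic update: decay to standby once the hold time expires."""
        if self.last_trigger is None or now - self.last_trigger > self.HOLD_S:
            self.level = self.STANDBY
        return self.level
```

In a real deployment the ramp would be gradual and sensors on neighboring poles would trigger fixtures ahead of the traveler, but the simple present-or-not input plus remote, instantaneous level control are the essential ingredients.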

For example, sensors on 100 wall packs (those curious rectangular fixtures affixed to the outside of buildings, washing the walls with light) detected only a 28 percent occupancy rate, leading to an 85 percent reduction in energy costs — over and above the reduction due to LED efficiency alone.  The CLTC implemented the same kind of system on a stretch of urban roadway.  Although the final report has not been released, Cunningham said the results are similar.
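The arithmetic behind occupancy savings is easy to sketch. The model below is a back-of-the-envelope simplification (full power when occupied, a fixed dim level when vacant); the 10 percent standby level is an assumption of mine rather than a CLTC figure, which is why its result comes out more conservative than the reported 85 percent.

```python
def controls_energy_fraction(occupancy, dim_level):
    """Fraction of always-on energy used by an occupancy-dimmed fixture.

    Simplified model: full power while occupied, dim_level (a fraction
    of full power) while vacant. Both inputs are in [0, 1].
    """
    return occupancy + (1.0 - occupancy) * dim_level

# Hypothetical example: 28% occupancy, 10% standby output when vacant
frac = controls_energy_fraction(0.28, 0.10)
print(f"Controls alone cut energy use by {1 - frac:.0%} versus always-on")
```

Deeper standby dimming, scheduling, and daylight harvesting all push the vacant-hours term toward zero, which is how field results can exceed this simple estimate.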

That’s encouraging news for the industry, because those are the kinds of integrated lighting systems that insiders have been claiming would lead to additional levels of savings (and other capabilities, but that’s another story), and this provides another fairly significant example of the promise coming to fruition.  It also demonstrates a general truth: before you have a capability, you have no idea what you’ll do with it; once you develop it, you find clever ways to apply it.  That’s true for networked lighting now.  In the near future, the precise control LEDs offer over color, intensity, and distribution of light will be used to modify illumination in our work and home environments, enhancing our comfort and productivity in ways we can only glimpse today.


Remember the hullabaloo a few years ago about camcorders capable of infrared photography — folks modifying their camcorders to see through clothing? One of the more annoying elements of the press coverage was the label “x-ray” for that kind of image. Sure, it’s a quick way of saying the modified cameras can see through clothing that appears opaque to the eye, but the infrared wavelengths are about a thousand times less energetic than x-rays.

And speaking of wavelength, there’s a lot of confusion about infrared cameras in general. The confusion stems from the fact that the infrared region of the spectrum is about 200 times as wide as the visible light spectrum. Consider what that means: if you reflect a beam of deep blue light off a mirror, it will act (just about) exactly like a beam of red light — the two ends of the visible light spectrum behave almost exactly the same way.

The Infrared Spectrum

Not so for the infrared. The infrared is roughly divided into three (or more) regions: the near-infrared (NIR), the midwave-infrared (MWIR), and the longwave- (or far-) infrared (LWIR). Those regions act completely differently from each other. A material that absorbs energy in the NIR can be transparent in the MWIR and reflective in the LWIR.

Those regions of the infrared spectrum are generated in different ways as well. Every object in the universe emits radiation, at wavelengths that correspond to the object’s temperature. The human body, for example, emits light in the LWIR region. An army tank or an airplane emits in the MWIR. A hot stove emits in the NIR — and when it gets even hotter it emits in the visible…leading to our familiar experience of something being “red hot.” There are dozens of different types of infrared cameras, imaging different parts of the infrared spectrum.
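The temperature-to-wavelength link above is Wien’s displacement law: the peak emission wavelength of a blackbody is b / T, with b ≈ 2898 µm·K. A quick sketch (the example temperatures are round, illustrative values, not measurements):

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in micrometer-kelvins

def peak_wavelength_um(temp_k):
    """Wavelength (µm) of peak blackbody emission at temperature temp_k (K)."""
    return WIEN_B_UM_K / temp_k

# Illustrative temperatures only
for label, temp_k in [("human skin", 305),
                      ("hot engine parts", 800),
                      ("glowing stove element", 1500)]:
    print(f"{label} ({temp_k} K): peak near {peak_wavelength_um(temp_k):.1f} um")
```

Those peaks land around 9.5 µm (LWIR), 3.6 µm (MWIR), and 1.9 µm (toward the near-infrared end) respectively, which is exactly why thermal-imaging cameras for people, vehicles, and furnaces are built around different detector bands.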

The modified camcorders that do the “x-ray” infrared photography work in the near-infrared, so they aren’t creating pictures from the infrared energy originating in the human body; they create images from reflected NIR. At night, when there is little visible light around, these cameras shine a NIR “flashlight” and capture the infrared wavelengths reflected off the scene. Even though the scene looks dark to the eye, the camera captures a perceptible image.

Near-infrared photography creates subtly different — almost eerie — effects. Image courtesy of Wikimedia Commons.

The sun and artificial light sources emit near-infrared radiation, but they also emit visible light. The detector in the camcorder could sense the NIR, but it’s usually not the image you want during the daytime, so an infrared blocking filter is put in front of the detector. To take the night-time photos, the infrared blocking filter is flipped up, out of the optical path. If the filter is flipped up during the day, the detector senses the NIR, but there’s so much visible light around that the infrared image would be swamped by the visible light. To get around that, some users put a visible light blocking filter in front of the camera. Then, during the daytime, the camera senses the NIR without all the extra visible light. That NIR image captures NIR reflected from the scene, in the same way that visible light images capture light reflected from the scene.

Seeing Through Clothes

So how did that “see through the clothes” thing work? Well, some materials are transparent in the NIR while opaque in the visible. Some (usually thinner) fabrics transmit near-infrared rather than reflecting it, while the undergarments beneath, fabricated from different materials, do reflect NIR. The effect is almost as if the outer garment isn’t there.

Outer clothing that is transparent in the NIR really isn’t very common, but the uproar over the situation was enough to cause camcorder manufacturers to make it much more difficult to use their cameras to image NIR under daylight conditions. It’s a shame, because some really interesting effects are possible with daytime infrared photography. Still, since there are some scumballs who do things like take “naked” photos of the Chinese diving team with their modified cameras, one can see why the manufacturers have tried to eliminate the infrared imaging capabilities of their cameras.

Read More
Related article at Salon.com.
