Scientific Seen

News, Commentary, and Tutorials from a Scientific Perspective

The Reality (Star) of the Nobel Prize

Posted by rgaughan on 13 May 2015
Posted in Uncategorized  | 1 Comment

I’ve had the good fortune to interview half a dozen or more Nobel Prize winners, either before or after their awards. They’ve been universally humble, informative, and courteous. If each of them were incarnated as a television avatar, they’d all be Fred Rogers, putting on the slippers and sweater and settling in to chat about reaction chemistry, electron degeneracy pressure, optical trapping, whatever. Monday of this week we got a little more of the same at the CLEO Conference opening plenaries. Stefan Hell and W.E. Moerner both gave very informative discussions about their super-resolution microscopy techniques. Then Eric Betzig took the podium, and step aside Teresa Giudice, because the real Nobel Prize Winners of Ashburn, Virginia was on the air.

“The Nobel committee made a f***ing mistake,” he said.

Tell us how you really feel, Dr. Betzig.

It was rather refreshing, especially because I happen to agree with him about the particular mistake he was referring to at that moment (I’m sure that will make him feel better). The mistake was the omission of Mats Gustafsson’s work developing Structured Illumination Microscopy. Mats himself was not eligible because he died of brain cancer in 2011, but having reported on both his and Stefan Hell’s work when they were first making news and beyond, I thought both developments deserved to share the Nobel. After that, I have to admit that the field was so crowded with super-resolution techniques that I wouldn’t have been able to pick out PALM, STORM, or NSOM from whatever other alphabet soup localization methods were being developed; so I’m glad people much smarter than me were able to pick out Drs. Moerner and Betzig for recognition.
But back to Betzig’s point: there are a variety of methods out there, including those honored with the Nobel Award, but we can get so caught up in the race to better resolution that we forget to weigh other aspects that are equally (or more) important for performance. Specifically, if the goal is to measure processes in living cells, then many of the super-resolution techniques aren’t optimal because of long exposure times, high illumination intensities, and/or the need to fix (i.e., kill) cells. According to Betzig, although SIM may not be able to quote the resolution numbers of some of these other methods, SIM’s performance in live-cell imaging speaks for itself.

SIM image compared to conventional fluorescence.

The structured illumination microscopy image on the right shows far more detail than possible with the conventional fluorescence image on the upper left, even when computationally enhanced as in the lower left. Image courtesy of the Singapore Agency for Science, Technology and Research.

Betzig also lamented the fact that his colleague Harald Hess — who had essentially shared equally in the work they did developing localization microscopy methods — was unable to share in the prize. And when it comes down to it, Betzig’s complaints really come down to one thing: the Nobel Award is limited to three recipients, and, while the Nobel is designed to recognize outstanding achievement, it has the unintended (and sometimes undeserved) consequence of classifying other work as less outstanding. Of course, this isn’t limited to Nobel Prizes — the same classification goes for sports figures, novelists, movie directors. It’s the Oscar-winning film that gets re-booked into theaters, while other equally good or better films don’t get the extra boost they deserve. There is a fine line (and some element of chance and, yes, human error) that separates the “winners” from the “losers.” Betzig brought to our attention how arbitrary that line is and is encouraging us not to allow ourselves to relegate SIM to second-class status just because it wasn’t acknowledged with a Nobel Award.

And if it takes a little colorful language to bring that to our attention — Hey, I’m not going to argue with him; he just got a Nobel Prize.


A few years ago XVIVO and Harvard University released a video of a scientific visualization entitled, “The Inner Life of the Cell.” I wasn’t a big fan. It was a fairy-tale vision of cellular activities.

If you saw a simulation of traffic flow on the highways and every vehicle in each lane was going the same speed, maintaining proper following distance, signalling and changing lanes only when necessary, merging and exiting with decorum—it might be nice, but it would be so fanciful that it would do more harm than good if you were trying to understand highway traffic in the real world. When you look at real-world traffic, you have difficulty believing anyone can travel the highways safely, but the simulation would make it hard to imagine there could ever be such a thing as a freeway collision.

A scientific visualization should induce a mental model that catalyzes an improved understanding of reality, and the 2006 simulation failed.

Of course, a simulation like this is going to be unrealistic. Molecules aren’t distinguished by hues, atoms don’t remain stationary with respect to their neighbors, and there’s no classical music soundtrack in a real cell. But the 2006 simulation was so far removed from reality that (in my opinion, of course) it served more to confuse than clarify perceptions about molecular activities in a living cell. My biggest peeve: all the molecules were shown in stately glides, as if a miniature synchronized swimming team were displaying the results of years of practice. On those scales, the “aqueous” environment behaves more like peanut butter. Kinesin molecules grab onto microtubules and pull because that’s the only way to make it through the thick goop they travel in, and none of that difficulty was shown.

Protein Packing in the Cell

A lot to admire in the new XVIVO/Harvard scientific visualization of molecular activities within a living cell.

Now the same scientific visualization team has created a new video, “Inner Life of a Cell—Protein Packing.” This one is so much better. It’s a much more crowded world, and the actions of proteins are limited by interference from all their neighbors. None of the small molecules are shown (not a criticism—if water, ions, and sugars were visible you wouldn’t be able to visualize anything through the resulting mess), but many of the proteins are shown. Of course it’s not “accurate”—it’s a visualization!—but this one is much more representative of the kind of confused and crowded environment within living cells. The new simulation makes it much clearer that the normal processes of life are challenged with every motion, and the new video makes it easier to be awed by the mere fact that we are alive. Heartily recommended!


Remember the hullabaloo a few years ago about camcorders capable of infrared photography — folks modifying their camcorders to see through clothing? One of the more annoying elements of the press coverage was the label “x-ray” for that kind of image. Sure, it’s a quick way of saying the modified cameras can see through clothing that appears opaque to the eye, but the infrared wavelengths are about a thousand times less energetic than x-rays.

And speaking of wavelength, there’s a lot of confusion about infrared cameras in general. The confusion stems from the fact that the infrared region of the spectrum is about 200 times as wide as the visible light spectrum. Visible light, by contrast, behaves almost uniformly across its narrow range: if you reflect a beam of deep blue light off a mirror, it will act (just about) exactly like a beam of red light, so the two ends of the visible spectrum act almost exactly the same way.

The Infrared Spectrum

Not so for the infrared. The infrared is roughly and loosely divided into three (or more) regions: the near-infrared (NIR), the midwave-infrared (MWIR), and the longwave- (or far-) infrared (LWIR). Those regions behave completely differently from one another. A material that absorbs energy in the NIR can be transparent in the MWIR and reflective in the LWIR.
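The distinct behavior of these bands is easier to keep straight with rough numeric boundaries. The cutoffs below are illustrative assumptions of mine, not figures from this post; conventions vary by field, and many schemes insert a shortwave band (SWIR) between the near- and midwave-infrared.

```python
# Rough infrared band boundaries in micrometers. These cutoffs are a
# simplification for illustration; exact definitions differ by discipline.
IR_BANDS = [
    ("NIR",  0.7,  3.0),   # near-infrared
    ("MWIR", 3.0,  8.0),   # midwave-infrared
    ("LWIR", 8.0, 15.0),   # longwave- (or far-) infrared
]

def ir_band(wavelength_um):
    """Return the band label for a wavelength in micrometers, or None if outside these IR bands."""
    for name, lo, hi in IR_BANDS:
        if lo <= wavelength_um < hi:
            return name
    return None

print(ir_band(0.9))   # a modified camcorder's "x-ray" imaging wavelength: NIR
print(ir_band(10.0))  # human-body thermal emission: LWIR
```

A classifier like this makes the post's point concrete: a single "infrared camera" label tells you very little until you know which of these bands its detector actually senses.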

Those regions of the infrared spectrum are generated in different ways as well. Every object in the universe emits radiation, at wavelengths that correspond to the object’s temperature. The human body, for example, emits light in the LWIR region. An army tank or an airplane emits in the MWIR. A hot stove emits in the NIR — and when it gets even hotter it emits in the visible…leading to our familiar experience of something being “red hot.” There are dozens of different types of infrared cameras, imaging different parts of the infrared spectrum.
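The link between an object's temperature and its emission wavelength in the examples above follows Wien's displacement law, which gives the peak wavelength of blackbody emission as inversely proportional to temperature. A minimal sketch, with temperatures that are rough illustrative guesses rather than figures from the article:

```python
# Wien's displacement law: lambda_peak = b / T,
# with Wien's constant b = 2.898e-3 m*K.
WIEN_B = 2.898e-3  # m*K

def peak_wavelength_um(temp_kelvin):
    """Peak blackbody emission wavelength, in micrometers."""
    return WIEN_B / temp_kelvin * 1e6

# Rough, assumed temperatures for the examples in the text.
for label, temp_k in [("human body (310 K)", 310.0),
                      ("engine exhaust (600 K)", 600.0),
                      ("red-hot stove (1100 K)", 1100.0),
                      ("sun (5800 K)", 5800.0)]:
    print(f"{label}: peak near {peak_wavelength_um(temp_k):.2f} um")
```

Running this puts the human body's peak near 9 µm (LWIR), hot exhaust near 5 µm (MWIR), and the sun's peak in the visible, which is exactly why such different detector technologies are needed across the infrared.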

The modified camcorders that do the “x-ray” infrared photography work in the near-infrared spectrum, so they aren’t creating pictures from the infrared energy originating in the human body. They create images from reflected NIR. At night, when there is little visible light around, these cameras shine a NIR “flashlight” and capture the infrared wavelengths reflected off the scene. Even though the scene looks dark to the eye, the camera captures a perceptible image.

Near-infrared photography creates subtly different, almost eerie, effects. Image courtesy of Wikimedia Commons.

The sun and artificial light sources emit near-infrared radiation, but they also emit visible light. The detector in the camcorder could sense the NIR, but it’s usually not the image you want during the daytime, so an infrared blocking filter is put in front of the detector. To take the night-time photos, the infrared blocking filter is flipped up, out of the optical path. If the filter is flipped up during the day, the detector senses the NIR, but there’s so much visible light around that the infrared image would be swamped by the visible light. To get around that, some users put a visible light blocking filter in front of the camera. Then, during the daytime, the camera senses the NIR without all the extra visible light. That NIR image captures NIR reflected from the scene, in the same way that visible light images capture light reflected from the scene.

Seeing Through Clothes

So how did that “see through the clothes” thing work? Well, some materials are transparent in the NIR and opaque in the visible. Some (usually thinner cotton) fabrics transmit near-infrared light rather than reflecting or absorbing it. Meanwhile, undergarments, fabricated from different materials, do reflect NIR. The effect is almost as if the outer garment isn’t there.

It’s actually not common for outer clothing to be transparent in the NIR, but the uproar over the situation was enough to cause camcorder manufacturers to make it much more difficult to use their cameras to image NIR under daylight conditions. It’s a shame, because some really interesting effects are possible with daytime infrared photography. Still, since there are some scumballs who do things like take “naked” photos of the Chinese diving team with their modified cameras, one can see why the manufacturers have tried to eliminate the infrared imaging capabilities of their cameras.


Devices constructed of photonic crystals promise to enable the precise, localized control of optical propagation. The fabrication of such crystals usually involves a complex and expensive multistep process. However, researchers at the University of Toronto recently developed a self-assembly crystallization process that points to the cost-effective production of functional photonic bandgap circuit components.

Adopting techniques that are analogous to self-assembly in natural opals, Geoffrey A. Ozin and his colleagues, Hernan Miguez and San Ming Wang at the university’s Materials Chemistry Research Group, condense silica microspheres from colloidal suspensions into crystalline opal structures.

Naturally occurring opal forms when silica spheres precipitate from solutions or colloidal suspensions into seams or crevices in surrounding rock. Most opal is amorphous, but the silica spheres in precious opal organize into a face-centered-cubic structure well-suited for photonic bandgap crystals because of the index variation between the silica spheres and the interstitial material.

Originally published in Photonics Spectra, September 2002