
Picture Processes

Tools

Planetary missions now mostly use CCD cameras. A CCD camera aboard a craft is basically a CCD ("charge-coupled device") electronic chip, color filters, and a lens. A CCD camera features a certain number of pixels, arranged as a series of lines with each line divided into individual picture elements. A CCD camera's definition may be 1,024 x 1,024 pixels, for example. Each pixel captures a brightness value on a scale from 0 (black) to 4,095 (white), and the values are translated into digital code (0s and 1s). Each landscape or scene is taken through various filters. Basic filters are red, green, and blue (RGB). Most interplanetary cameras have additional filters (up to 18 or 24, for example) specially suited for capturing particular wavelengths, such as in the infrared or the ultraviolet.

As an example of how the possibilities have evolved, the MESSENGER mission to Mercury, launched in 2004, is equipped with a Wide Angle Camera (WAC), part of the Mercury Dual Imaging System (MDIS), featuring 11 narrow-band color filters. In comparison, the Mariner 10 mission, which reached Mercury in 1974, used a vidicon camera with only two visible-light filters and one ultraviolet filter. Cameras proper may be narrow- or wide-angle, refractors or reflectors. Some missions also feature a camera with built-in red-green-blue processing similar to commercial cameras, while a wheel of additional filters may be placed ahead of the camera's optics. More generally, a technique called 'push-frame' compensates for an instrument's weak resolution in terms of pixels: it consists in exposing a series of images as the craft progresses along its orbit
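As an illustration of the figures above, here is a minimal Python sketch of how such frames might be represented, assuming a hypothetical 1,024 x 1,024 sensor with 12-bit pixels and random values standing in for a real scene; it is not any mission's actual software.

```python
import numpy as np

# Assumed, for illustration only: a 1,024 x 1,024 CCD frame of 12-bit values,
# one black-and-white frame exposed per color filter.
WIDTH, HEIGHT = 1024, 1024
BIT_DEPTH = 12
MAX_VALUE = 2**BIT_DEPTH - 1  # 4,095 = white, 0 = black

rng = np.random.default_rng(0)

def expose_frame(filter_name: str) -> np.ndarray:
    """Simulate one exposure through a single color filter.

    A real CCD reads charge out line by line; here random values stand in
    for a scene.
    """
    frame = rng.integers(0, MAX_VALUE + 1, size=(HEIGHT, WIDTH), dtype=np.uint16)
    print(f"{filter_name}: {frame.shape[1]} x {frame.shape[0]} pixels, "
          f"values {frame.min()}-{frame.max()} on the 12-bit scale")
    return frame

# One frame per basic filter.
frames = {name: expose_frame(name) for name in ("red", "green", "blue")}

# For a quick-look display, the 12-bit values are often rescaled to 8 bits (0-255).
display = (frames["red"].astype(np.float32) / MAX_VALUE * 255).astype(np.uint8)
```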

Beaming the Pictures

Once taken, the pictures are digitally encoded by the onboard computer, which further compresses them to eliminate redundant or less valuable data. The data are then sent to Earth. Each pixel is 12 bits. The transmission rate varies between 40 and 165 kilobytes per second. The data stream is captured at Earth by the antennas of NASA's Deep Space Network (DSN). The DSN is a complex of three sites around the world (Goldstone, Calif.; Madrid, Spain; Canberra, Australia) where giant antennas track the planetary missions. JPL, a division of the California Institute of Technology, manages the DSN for NASA. The three locations around the globe ensure that every craft is in sight of the network 24 hours a day, every day. The data received are then relayed, through various communication links, to JPL in Pasadena, Calif.
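A rough back-of-the-envelope calculation, based only on the figures quoted above (a 1,024 x 1,024 frame, 12 bits per pixel, 40 to 165 kilobytes per second) and ignoring the onboard compression, gives an idea of the time a single image takes to come down:

```python
# Assumed frame size; compression is neglected, so real downlink times are shorter.
PIXELS = 1024 * 1024
BITS_PER_PIXEL = 12

raw_bytes = PIXELS * BITS_PER_PIXEL / 8          # about 1.57 million bytes uncompressed
for rate_kb_per_s in (40, 165):
    seconds = raw_bytes / (rate_kb_per_s * 1000)
    print(f"at {rate_kb_per_s} kB/s: about {seconds:.0f} s per uncompressed image")
# Prints roughly 39 s at 40 kB/s and 10 s at 165 kB/s.
```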

illustration of the RGB imaging process

Processing the Pictures

At JPL, the Multimission Image Processing Laboratory (MIPL) is in charge of the image processing. One of its main tasks is to reconstruct the data, which arrive as streams of digitally encoded bits, using computers to produce two-dimensional images. It was at the MIPL that the first digital cameras and digital image processing were conceived, as early as the 1960s, as a way to convert television signals into zeros and ones so they could be beamed back to Earth more easily. From JPL, the pictures are dispatched to missions' science teams and engineers, to archive systems, or to the public through the various NASA websites
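A minimal sketch of the reconstruction idea, assuming a flat stream of one-byte samples and a known 1,024 x 1,024 frame size; real telemetry is 12-bit, compressed, and packetized, and MIPL's actual pipeline is far more involved.

```python
import numpy as np

WIDTH, HEIGHT = 1024, 1024
rng = np.random.default_rng(1)

# Stand-in for the byte stream relayed from the DSN (one byte per sample
# here for simplicity).
received = rng.integers(0, 256, size=WIDTH * HEIGHT, dtype=np.uint8).tobytes()

# Reconstruction: lay the one-dimensional stream back out line by line
# to recover a two-dimensional image.
image = np.frombuffer(received, dtype=np.uint8).reshape(HEIGHT, WIDTH)
print(image.shape)  # (1024, 1024)
```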

The basis of image processing is RGB, that is, red, green, blue. Each original picture, if not taken outside the visible spectrum, is taken as three black-and-white shots: one through a red filter, one through a green one, and one through a blue one. A filter lets through only light of its own color, blocking all other colors. A red filter, for example, lets red pass and blocks the green and blue portions of visible light. When the processing team receives the three shots, it simply reassigns a red, green, or blue hue to the appropriate black-and-white picture; merging the three pictures renders back the view of the object in visible light, just as we would see it aboard a spacecraft.

Pictures taken in the visible may be improved in a variety of ways, for example by assigning the red, green, and blue differently to the three black-and-white shots. This is used to highlight certain features of an object; "enhanced color pictures" are of this sort. The term "false color picture" also relates to such manipulations, as opposed to a "true color picture," which matches what the eye would see. Strictly speaking, "false color" better describes pictures whose three basic shots were taken in wavelengths outside the visible range, in the infrared or the ultraviolet for example. In this case, red, green, and blue are arbitrarily reassigned to the basic shots, leading to a color picture which does not necessarily match what the eye would see. Generally, such pictures taken outside the visible wavelengths are also subject to various manipulations of color reassignment. The false color system was introduced to science in the 1800s by William Smith, the British originator of geology, when he color-coded the types of rocks on a map of Great Britain. An example of such a rearrangement of colors: an image of a planet like Mercury may be taken through three filters that transmit light with wavelengths of 1,000, 700, and 430 nanometers (infrared, far red, and violet, respectively), and the separated images are placed in the red, green, and blue channels when processed into an image on Earth. That is a way to accentuate differences on the planet's surface. As another example, hydrogen atoms, oxygen atoms, and nitrogen ions may be imaged through three narrow wavelengths of red light
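The recombination described above amounts to stacking the three black-and-white frames into the red, green, and blue channels of one color image; false or enhanced color is the same stacking with the channels reassigned. Below is a minimal sketch, using random arrays in place of real filter frames and skipping the calibration and alignment a real pipeline would perform.

```python
import numpy as np

def compose_rgb(red: np.ndarray, green: np.ndarray, blue: np.ndarray) -> np.ndarray:
    """Stack three grayscale frames (values 0.0-1.0) into one color image."""
    return np.stack([red, green, blue], axis=-1)

def compose_false_color(ir_1000nm, red_700nm, violet_430nm):
    """Same stacking, but with out-of-visible frames placed in R, G, and B."""
    return np.stack([ir_1000nm, red_700nm, violet_430nm], axis=-1)

# Toy frames for demonstration only.
shape = (256, 256)
r, g, b = (np.random.default_rng(i).random(shape) for i in range(3))

true_color = compose_rgb(r, g, b)            # approximates what the eye would see
false_color = compose_false_color(r, g, b)   # channels arbitrarily reassigned
print(true_color.shape)  # (256, 256, 3)
```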

'Natural colors,' as far as pictures taken by a rover on a planet are concerned, may also mean what the colors would look like if we were to view the scene ourselves on the planet. 'White-balancing' is a processing method which shows an estimate of the colors of the terrain as if it were illuminated under Earth-like conditions
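One common way to approximate Earth-like illumination is a simple 'gray-world' white balance, shown below as a sketch offered only to illustrate the idea; it is not the calibration method mission teams actually use (they rely on calibration targets and their own pipelines).

```python
import numpy as np

def gray_world_balance(rgb: np.ndarray) -> np.ndarray:
    """Scale each channel so the scene's average color becomes a neutral gray."""
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means.mean() / means              # boost the weaker channels
    return np.clip(rgb * gains, 0.0, 1.0)

# Example: a reddish-looking scene (values 0.0-1.0) pulled toward neutral lighting.
scene = np.random.default_rng(2).random((128, 128, 3)) * np.array([0.9, 0.5, 0.3])
balanced = gray_world_balance(scene)
```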

The Case of the Infrared Pictures

infrared daytime and nighttime views

Infrared pictures are taken in the infrared (longer-wavelength) part of the spectrum. They work on the basis of the heat released by a surface. Daytime infrared images mainly distinguish sun-facing slopes from colder ones (the former appear bright because they are warm, the latter dark because they are colder) and dust from rocky surfaces (by day, dusty areas warm more quickly than rocks and hence appear brighter). Nighttime infrared images show the latter aspect only, as the difference between sun-facing and colder slopes quickly dissipates at night. The infrared then shows the different rates of cooling of materials: rocks appear bright because they cool more slowly, retaining their heat, while dusty and fine-grained areas appear dark because they cool more quickly. More generally, through instruments using infrared sensitivity, a mission can distinguish the chemical compounds found at a planetary site. As carbon dioxide or water absorb and reflect characteristically different wavelengths of infrared, the instrument looks at the missing parts of the spectra received back from the surface.
Picture based on material by NASA/JPL/Arizona State University
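The day/night reasoning above can be caricatured in a few lines: pixels that stay bright in the nighttime infrared cool slowly and are likely rocky, while pixels that go dark cool quickly and are likely dusty. The normalized input and the threshold below are arbitrary assumptions for illustration.

```python
import numpy as np

def classify_surface(night_ir: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Label each pixel of a normalized (0.0-1.0) nighttime infrared frame."""
    # Bright at night -> slow cooling -> rock-like; dark -> fast cooling -> dust-like.
    return np.where(night_ir > threshold, "rock-like", "dust-like")

night_frame = np.random.default_rng(3).random((4, 4))
print(classify_surface(night_frame))
```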

Website Manager: G. Guichard, site 'Amateur Astronomy,' http://stars5.6te.net. Page Editor: G. Guichard. Last edited: 12/28/2010. Contact us at ggwebsites@outlook.com