Optical Navigation

The process of using images of celestial bodies (e.g. planets, moons, asteroids) against a starfield background to navigate a spacecraft is often referred to as optical navigation (OPNAV) by the spaceflight community. Images collected by cameras onboard a spacecraft have been used for navigation since the early days of planetary exploration. The concept was first demonstrated on the Mariner 6 and Mariner 7 missions to Mars in 1969. OPNAV images were first used to navigate on Mariner 9 in 1971 and were first required for mission success during the Voyager 1 and Voyager 2 flybys of Jupiter in 1979. Since then, OPNAV techniques have been used extensively for navigation during encounters with planets, moons, asteroids, and comets.

Despite its long history in spaceflight, current OPNAV capabilities are still unable to deliver the levels of performance and autonomy desired for future space exploration missions. Consequently, the field is rich with exciting basic research questions that have the promise to fundamentally alter how we explore space.

Current research in Rensselaer’s SEAL addresses nearly all aspects of the OPNAV problem. Three areas of special note are as follows:

Horizon-Based Optical Navigation

Horizon-based OPNAV uses the location of a planet or moon’s lit horizon (or lit limb) in an image for navigation. The concept was demonstrated manually by astronauts during the Gemini, Apollo, and Skylab programs using a space sextant. Operationally, horizon-based methods have been used on camera images of planets or moons against starfield backgrounds.

Rensselaer’s SEAL has developed new methods for accurate horizon localization in an image and for using the measured horizon location to estimate the relative position between the spacecraft and the observed body. Unlike past methods, our algorithms are non-iterative and are easily implemented without analyst supervision, making them ideal for autonomous spacecraft navigation.
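
To give a flavor of what a non-iterative horizon-based solution looks like, consider the simplified case of a purely spherical body of known radius: each unit line-of-sight vector to a lit-limb point satisfies a constraint that becomes linear after a change of variables, so the camera-to-body position follows from a single least-squares solve. The Python sketch below is only a minimal illustration of that idea; it is not the flight implementation, it ignores the general ellipsoidal case and measurement uncertainty, and the observation geometry in the example is hypothetical.

    import numpy as np

    def horizon_position_sphere(limb_los, radius):
        """Estimate the camera-to-body-center position from limb line-of-sight vectors.

        Minimal non-iterative sketch for a *spherical* body: each unit vector u_i
        pointing from the camera to a lit-limb point satisfies
            u_i . r = sqrt(||r||^2 - R^2),
        so n = r / sqrt(||r||^2 - R^2) solves the linear system u_i . n = 1.
        """
        U = np.asarray(limb_los, dtype=float)            # m x 3 unit vectors
        n, *_ = np.linalg.lstsq(U, np.ones(len(U)), rcond=None)
        scale = radius / np.sqrt(n @ n - 1.0)            # equals sqrt(||r||^2 - R^2)
        return scale * n                                 # relative position r

    if __name__ == "__main__":
        # Hypothetical geometry: camera roughly 10,000 km from a Dione-sized body.
        r_true = np.array([8000.0, -5000.0, 3000.0])     # km
        R = 561.0                                        # km
        rng = np.random.default_rng(0)
        # Build unit vectors on the horizon cone about the true direction.
        r_hat = r_true / np.linalg.norm(r_true)
        theta = np.arcsin(R / np.linalg.norm(r_true))    # apparent angular radius
        e1 = np.cross(r_hat, [0.0, 0.0, 1.0])
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(r_hat, e1)
        phis = rng.uniform(0.0, 2.0 * np.pi, 50)
        U = (np.cos(theta) * r_hat[None, :]
             + np.sin(theta) * (np.cos(phis)[:, None] * e1 + np.sin(phis)[:, None] * e2))
        print(horizon_position_sphere(U, R))             # ~ [8000, -5000, 3000]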

Our horizon-based OPNAV algorithms are expected to fly on the upcoming NASA Orion Exploration Mission 1 (EM-1). We are also applying our methods to OPNAV images from legacy missions (e.g. Cassini) to improve the science return from those missions.

Our OPNAV algorithms autonomously find a body’s lit limb, accurately localize it in the image using knowledge of the body’s shape, and ultimately determine the body-spacecraft relative position. This example shows a real image of Dione (one of Saturn’s moons) captured by the Cassini spacecraft on June 15, 2008 (raw image N1592196595, available on the NASA Planetary Data System). Overlaid on the Dione image are: rays of incoming sunlight, some of which intersect the lit limb (white lines); the projection of the best-fit horizon (red ellipse); and the orientation of Dione’s principal axes (cyan). The 3D visualization of the Cassini-Dione-Saturn geometry was generated using Cosmographia and shows the actual geometry at this epoch.

Landmark-Based Optical Navigation

When closer to an object, it is often advantageous to navigate with landmarks instead of the lit limb. This approach has been used extensively for navigation relative to small bodies (e.g. asteroids and comets) and has more recently been used for precision landing applications.

Rensselaer’s SEAL is investigating landmark-based OPNAV methods for spacecraft in orbit about large central bodies. We are particularly interested in landmark re-identification under varying lighting conditions and in navigation with unknown landmarks.
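
To make the underlying geometry concrete, the sketch below assumes the simplest possible setting: the spacecraft attitude is already known (e.g. from a star tracker), several landmarks with known planet-fixed coordinates have been matched in an image, and each match yields a unit line-of-sight vector from the spacecraft to its landmark. The spacecraft position then follows from a linear least-squares intersection of the bearing rays. This is only an illustration of the measurement geometry; the landmark coordinates are hypothetical, and the sketch says nothing about the re-identification problem itself.

    import numpy as np

    def position_from_landmarks(landmarks, los_dirs):
        """Least-squares spacecraft position from bearings to known landmarks.

        landmarks : (m, 3) known landmark positions (planet-fixed frame)
        los_dirs  : (m, 3) unit line-of-sight vectors from the spacecraft to each
                    landmark, already rotated into the same frame (attitude known)

        Each observation constrains the spacecraft position r through
            (I - u_i u_i^T) (p_i - r) = 0,
        i.e. the vector from r to landmark p_i must be parallel to u_i.
        Stacking these constraints gives a linear system solved in a least-squares sense.
        """
        A_blocks, b_blocks = [], []
        for p, u in zip(np.asarray(landmarks, float), np.asarray(los_dirs, float)):
            P = np.eye(3) - np.outer(u, u)   # projector onto the plane normal to u
            A_blocks.append(P)
            b_blocks.append(P @ p)
        A = np.vstack(A_blocks)
        b = np.concatenate(b_blocks)
        r, *_ = np.linalg.lstsq(A, b, rcond=None)
        return r

    if __name__ == "__main__":
        # Hypothetical landmarks (km) and a true spacecraft position to recover.
        landmarks = np.array([[100.0, 0.0, 0.0],
                              [0.0, 150.0, 20.0],
                              [-80.0, 60.0, -40.0],
                              [30.0, -90.0, 70.0]])
        r_true = np.array([500.0, 400.0, 300.0])
        los = landmarks - r_true
        los /= np.linalg.norm(los, axis=1, keepdims=True)
        print(position_from_landmarks(landmarks, los))   # ~ [500, 400, 300]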

Geometric Camera Calibration

Image-based navigation requires that we understand the relation between an object’s relative position and its apparent location in an image. We arrive at this understanding by performing a geometric camera calibration. Generating such calibrations is standard practice during an optical system’s pre-flight testing, but launch vibrations and on-orbit thermal cycling often require the calibration to be repeated in space.
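
As a concrete, deliberately simplified statement of that relation, an ideal pinhole model maps a point at camera-frame position (X, Y, Z) to the pixel location (fx·X/Z + cx, fy·Y/Z + cy), where fx, fy, cx, and cy are intrinsic parameters. The tiny sketch below illustrates this with hypothetical parameter values; real cameras also require the distortion terms discussed next.

    import numpy as np

    def pinhole_project(p_cam, fx, fy, cx, cy):
        """Ideal (distortion-free) pinhole projection of a camera-frame point.

        p_cam is (X, Y, Z) with Z > 0 along the boresight, fx and fy are focal
        lengths in pixels, and (cx, cy) is the principal point. The parameter
        values used below are hypothetical.
        """
        X, Y, Z = p_cam
        return np.array([fx * X / Z + cx, fy * Y / Z + cy])

    # Example: a point 1000 km down the boresight and slightly off-axis.
    print(pinhole_project((5.0, -3.0, 1000.0), fx=2500.0, fy=2500.0, cx=512.0, cy=512.0))
    # -> [524.5 504.5]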

Rensselaer’s SEAL has developed new methods for autonomous geometric camera calibration using ensembles of starfield images. By comparing the observed locations of stars in many images to their expected locations from a star catalog, we may compute parameters describing both lens distortion and the camera’s projection geometry (sometimes called the camera’s intrinsic parameters).
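
A minimal sketch of this idea follows. It assumes that catalog stars have already been matched and their inertial directions rotated into the camera frame, it models distortion with a single radial coefficient (a simple Brown-Conrady-style term, not any particular flight camera model), and it fits the parameters by nonlinear least squares on the pixel residuals. All parameter values are hypothetical, and the attitude-estimation and star-identification steps a real calibration needs are omitted.

    import numpy as np
    from scipy.optimize import least_squares

    def project(params, star_dirs):
        """Pinhole projection with a single radial distortion coefficient k1.

        params    : [fx, fy, cx, cy, k1] intrinsic and distortion parameters
        star_dirs : (m, 3) unit vectors to stars, expressed in the camera frame
        Returns the (m, 2) predicted pixel coordinates.
        """
        fx, fy, cx, cy, k1 = params
        x = star_dirs[:, 0] / star_dirs[:, 2]        # normalized image-plane coordinates
        y = star_dirs[:, 1] / star_dirs[:, 2]
        d = 1.0 + k1 * (x**2 + y**2)                 # radial distortion factor
        return np.column_stack([fx * d * x + cx, fy * d * y + cy])

    def calibrate(star_dirs, pixels, guess):
        """Fit camera parameters by minimizing the star reprojection residuals."""
        def residuals(p):
            return (project(p, star_dirs) - pixels).ravel()
        return least_squares(residuals, guess).x

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        true_params = np.array([2500.0, 2500.0, 512.0, 512.0, -0.05])  # hypothetical camera
        # Simulated star directions spread over a wide field of view.
        xy = rng.uniform(-0.15, 0.15, size=(200, 2))
        dirs = np.column_stack([xy, np.ones(200)])
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        obs = project(true_params, dirs) + rng.normal(0.0, 0.1, size=(200, 2))  # 0.1-px noise
        est = calibrate(dirs, obs, np.array([2400.0, 2400.0, 500.0, 500.0, 0.0]))
        print(est)   # should be close to true_params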

These SEAL-developed camera calibration algorithms will be flown on NASA’s Orion EM-1 mission.