1st Annual RPI Workshop on Image-Based Modeling and Navigation for Space Applications

June 4-5, 2018 – Workshop
June 6-8, 2018 – Short Course
Rensselaer Polytechnic Institute, Troy, NY

IDEFIX/Ariane Rocket Body 42P

Poorly-resolved sequence of images of IDEFIX/Ariane Rocket Body 42P. Images from GT-SORT and provided courtesy of Dr. Marcus Holzinger.

Comet 67P/Churyumov-Gerasimenko

Comet 67P/Churyumov-Gerasimenko. Original image provided as a .IMG file in the archive delivery from: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA.

Photo of Dione

Dione. Original image provided as a .IMG file from: Porco, C.P., CASSINI ORBITER SATURN ISSNA/ISSWA 2 EDR VERSION 1.0, CO-S-ISSNA/ISSWA-2-EDR-V1.0, 2005.

Abstracts Due: 2 February 2018
Author Notification: 23 February 2018
Manuscript Deadline: 23 May 2018

Registration is CLOSED.

DOWNLOAD FULL PROGRAM (10MB PDF)


Although images of space objects have been used for decades, many common challenges remain in the use of image data for planetary science, optical navigation, and space situational awareness. Many of these challenges may be addressed by recent advancements in imaging sensors, computer vision algorithms, and computing resources - all of which set the stage for a fundamental change in how we will exploit image data for future space missions. This workshop aims to facilitate the dissemination of ideas among all of the communities using space image data and to identify areas of common need. The unique workshop format will include oral presentations, poster sessions, and focused breakout sessions to discuss topics of interest.

Keynote Speakers

Dr. Shyam Bhaskaran
NASA Jet Propulsion Laboratory
Keynote Speaker

Image-Based Deep Space Navigation: Finding our Way Around the Solar System One Picture at a Time

Monday, June 4, 2018, 2:00-3:00, CBIS Auditorium

Since the dawn of the Space Age, spacecraft have carried cameras to take pictures of the Solar System bodies they encountered, including planets, planetary satellites, asteroids, and comets. These images obviously have tremendous scientific value, as much of the knowledge we have gained about these bodies came from the pictures. They are also of great value for public relations and outreach; images of our neighbors in the Solar System spark the imagination, provide inspiration for generations of scientists and engineers to pursue careers in space science, and allow the public at large to share in the excitement of space exploration. For space navigators, the images are also a powerful tool for precisely navigating a spacecraft to its intended target. This discipline, called Optical Navigation (OPNAV), has enabled remarkable accuracies; examples include the Cassini spacecraft’s ability to target flybys of Titan (Saturn’s largest moon) to less than 1 km, and the Deep Impact spacecraft’s collision with comet Tempel 1 at a speed of over 36,000 km/h. This talk will provide a brief background on deep space navigation, the fundamentals of OPNAV techniques, and examples of their current usage. It will also touch on the history of OPNAV, how it has enabled autonomous spacecraft navigation, and its future potential.
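The measurement at the heart of OPNAV can be illustrated with a toy pinhole-camera model (a sketch with made-up camera parameters, not the flight software discussed in the talk): the pixel location of an observed target maps to a unit line-of-sight vector in the camera frame.

```python
import numpy as np

def pixel_to_los(u, v, fx, fy, cx, cy):
    """Convert a pixel coordinate (u, v) to a unit line-of-sight vector
    in the camera frame using an ideal pinhole model.
    fx, fy: focal lengths in pixels; (cx, cy): principal point."""
    los = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return los / np.linalg.norm(los)

# Hypothetical 1024x1024 detector with the principal point at its center
los = pixel_to_los(612.4, 488.9, fx=2900.0, fy=2900.0, cx=512.0, cy=512.0)
```

In practice, OPNAV filters fuse many such line-of-sight observations (of beacons, limbs, or landmarks) with a dynamics model to estimate the spacecraft state; this sketch only shows the geometric front end.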

Dr. Shyam Bhaskaran began his professional career at the Jet Propulsion Laboratory in January of 1992 as an orbit determination specialist on the Galileo mission. Since then, he has served as a member and lead of the navigation team for many missions. He is one of the principal architects of the autonomous navigation system used on the Deep Space 1, STARDUST, and Deep Impact missions. He then moved into management, serving as the Mission Management Office Team Chief for Guidance, Navigation and Control, during which time he oversaw the successful navigation efforts of STARDUST's flyby of comet Wild 2, the Genesis Earth return, and Deep Impact's encounter with comet Tempel 1. Since 2005, Dr. Bhaskaran has been the Supervisor for the Outer Planet Navigation Group in the Mission Design and Navigation Section. As manager of this group, he is responsible for the technical and administrative oversight of 15 engineers performing navigation analysis and operations for several current and future missions, including Cassini, Dawn, Juno, and Europa Clipper. Dr. Bhaskaran's honors include two NASA Exceptional Achievement Medals, awards for Technical Excellence, and a NASA Space Act Award for his work on the Deep Space 1 Autonomous Navigation System. He has also received over 20 NASA Group Achievement Awards for his work on the various missions listed above.

Dr. Bhaskaran received a B.S. (1985) and M.S. (1987) from the University of Texas at Austin, and a Ph.D. (1991) from the University of Colorado at Boulder, all in Aerospace Engineering.

Dr. Jan-Michael Frahm
University of North Carolina at Chapel Hill
Keynote Speaker

Large-scale Robust Estimation of 3D World Models Leveraging Crowd-Sourced Data

Tuesday, June 5, 2018, 2:00–3:00, CBIS Auditorium

Crowd-sourced imagery (images and video) is the richest data source available for 3D reconstruction of the world. The tremendous amount of imagery provided by photo/video-sharing websites not only covers the world’s appearance, but also reflects the temporal evolution of the world and its dynamic parts. It has long been a goal of computer vision to obtain lifelike virtual models from such rich imagery. The major current research challenges are the scale of the data (e.g., the Yahoo 100-million-image dataset represents only a small fraction of what is needed to model our world), robustness, the completeness of the registration, and the lack of data for dynamic elements. Specifically, we currently face significant challenges in processing Internet-scale crowd-sourced imagery within a reasonable time frame given limited computing resources. This is particularly true as we move toward automatically creating content for personal virtual and augmented reality. The talk discusses our work on highly efficient image registration for the reconstruction of static 3D models from world-scale photo collections on a single PC in the span of six days, as well as our related work on image-based search to address scalability. It will also discuss efforts to overcome the challenges of achieving registration completeness and robustness. Our novel streaming reconstruction approach aims to ease these challenges in estimating a 3D world model from unorganized image data. Additionally, our work towards overcoming the lack of observations for the reconstruction of scene dynamics will be presented. This includes, for example, reconstructing people and fountains from crowd-sourced imagery and videos to achieve the goal of bringing the 3D models to life.
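The "robust estimation" in the title is typically realized with RANSAC-style hypothesize-and-verify schemes. A minimal, generic RANSAC sketch (a toy line fit, not the streaming reconstruction system described above):

```python
import numpy as np

def ransac_line(points, n_iters=200, tol=0.1, rng=None):
    """Fit y = m*x + b to 2D points with RANSAC: repeatedly fit a line
    to a random 2-point sample and keep the model with the most inliers."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:          # degenerate sample: vertical line, skip
            continue
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        resid = np.abs(points[:, 1] - (m * points[:, 0] + b))
        n_in = int((resid < tol).sum())
        if n_in > best_inliers:
            best_model, best_inliers = (m, b), n_in
    return best_model, best_inliers

# 50 points on y = 2x + 1, with every 10th point grossly corrupted
x = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([x, 2.0 * x + 1.0])
pts[::10, 1] += 25.0
(m, b), n_in = ransac_line(pts, rng=0)
```

The same hypothesize-and-verify pattern scales up to two-view geometry (fundamental/essential matrix estimation) and camera registration, where the sample is a minimal set of feature correspondences rather than two points.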

Jan-Michael Frahm is an Associate Professor at the University of North Carolina at Chapel Hill, where he heads the 3D computer vision group. He received his Dr.-Ing. in computer vision in 2005 from the Christian-Albrechts University of Kiel, Germany, and his Diploma in Computer Science from the University of Lübeck, Germany. His research interests include a variety of topics at the intersection of computer vision, computer graphics, and robotics. He has worked on structure from motion for single- and multi-camera systems for static and dynamic scenes to create 3D models of the scene; real-time multi-view stereo to create dense scene geometry from camera images; improved robust and fast estimation methods from noisy data in various stages of the reconstruction process; high-performance feature tracking for salient image-point motion extraction; high-frame-rate camera tracking; and the development of data-parallel algorithms for commodity graphics hardware for efficient 3D reconstruction. He has over 100 peer-reviewed publications and is editor-in-chief of the Elsevier journal Image and Vision Computing.

Schedule

DOWNLOAD FULL PROGRAM (10MB PDF)

Monday: June 4, 2018

7:30-8:30

Breakfast & Registration

EMPAC Main Lobby
EMPAC Evelyn’s Café

8:30-9:00

Welcome

  EMPAC Studio Beta

9:00-9:40

Fundamentals of Optical Navigation
William Owen (NASA JPL)

  EMPAC Studio Beta

9:40-10:00

Optical Navigation Preparations for the New Horizons Kuiper-Belt Extended Mission
Derek Nelson, Erik Lessac-Chenen, Coralie Jackman, Fred Pelletier, and John Pelgrift (KinetX)

Full Manuscript EMPAC Studio Beta

10:00-10:20

Testing of the Orion Optical Navigation Image Processing Algorithm
Rebecca Johanning and Christopher D’Souza (NASA JSC)

Abstract EMPAC Studio Beta

10:20-10:40

Optical Navigation Algorithm Performance
Cinnamon A. Wright, Andrew J. Liounis, and Benjamin Ashman (NASA GSFC)

Full Manuscript EMPAC Studio Beta

10:40-11:00

Break

11:00-11:20

Draper’s Current Vision Navigation Efforts for Space Applications
Stephen Steffes (Draper)

Abstract EMPAC Studio Beta

11:20-11:40

Image Processing and Attitude Estimation Performance of Star Camera with Extended Bodies in the Field of View
Malak Samaan (Odyssey Space Research), John Christian (RPI), Steve Lockhart (NASA JSC), and Greg Holt (NASA JSC)

Abstract EMPAC Studio Beta

11:40-12:00

Break

12:00-12:20

Tumbling Small Body Spin State Estimation Using Independently Simulated Images
Corwin Olson (UT-Austin), Ryan P. Russell (UT-Austin), and Shyam Bhaskaran (NASA JPL)

Full Manuscript EMPAC Studio Beta
12:20-12:40

Image Processing for Relative Navigation in Space Rendezvous, Small Body Navigation and Descent and Landing for Planets and Asteroids
Manuel Sanchez-Gestido (ESA), Klaus Janschek (Technical University Dresden), Sergey Dyblenko (Technical University Dresden), Sabine Ludwig (Jena-Optronik), Uwe Schmidt (Jena-Optronik), Jesus Gil-Fernandez (ESA), Irene Huertas (ESA), Olivier Dubois-Matra (ESA), Guillermo Ortega (ESA)

Full Manuscript EMPAC Studio Beta

12:40-2:00

Lunch

  EMPAC Mezzanine

2:00-3:00

Day 1 Keynote: Shyam Bhaskaran (NASA JPL)

  CBIS Auditorium

3:00-3:20

Break

3:20-3:40

Autonomous Limb-based Shape Modeling and Optical Navigation
Jay McMahon and Daniel Scheeres (CU-Boulder)

Abstract EMPAC Studio Beta

3:40-4:00

Limb Based Optical Navigation for Irregular Bodies
Andrew Liounis (NASA GSFC)

Full Manuscript EMPAC Studio Beta

4:00-4:20

Fitting Performance for Horizon-Based Optical Navigation with Cassini Images of Dione and Rhea
Courtney Hollenberg and John Christian (RPI)

Abstract EMPAC Studio Beta

4:20-4:40

Break

4:40-5:00

Far-Field Object Characterization in the Presence of Measurement Noise
C. Frueh and S. Fan (Purdue)

Abstract EMPAC Studio Beta

5:00-5:20

Star Tracker Attitude Solution from Spinning Platform with Rapid Streak Detection
N. Houtz and C. Frueh (Purdue)

Abstract EMPAC Studio Beta

5:30-6:00

EMPAC Tour (optional)    

6:00-9:00

Dinner, Posters, Student Paper Competition

  EMPAC Main Lobby


Evening Poster Session

Rendering an Image from Stars to Asteroids
N.L. Mehta, P.S. Ericksen, and C. A. Sawyer (JHU APL)

Abstract

A Tool for Evaluating Surface Coverage
Andrew J. Liounis (NASA GSFC), Kenneth Getzandanner (NASA GSFC), John Weirich (PSI), Eric Palmer (PSI), and Olivier Barnouin (JHU APL)


Astrobotic Research and Development: Vision-Based Navigation Solutions for Spacecraft
Kerry Snyder, Eric Amoroso, Fraser Kitchell, and Andrew Horchler (Astrobotic Technology, Inc.)

Abstract

Software Architecture for Closed-loop Autonomous Optical Navigation Scenarios
Thibaud Teil and Hanspeter Schaub (CU-Boulder)

Full Manuscript

Selection of Opportunistic Landmarks for Vision-Based Terrain Relative Navigation Above a Planetary Surface
Kevin Kobylka and John Christian (RPI)

Full Manuscript

Camera Calibration and State Estimation for the Cislunar Explorers Optical Navigation Subsystem
Michael Wang, Anshuman Das, Kyle Doyle, and Mason Peck (Cornell)

Full Manuscript


Tuesday: June 5, 2018

7:30-8:30

Breakfast & Registration

EMPAC Main Lobby
EMPAC Evelyn’s Café

8:30-8:50

SPC at 30
Robert Gaskell (PSI)

Abstract EMPAC Studio Beta

8:50-9:10

The Small Body Mapping Tool (SBMT) for Accessing, Visualizing, and Analyzing Spacecraft Data in Three Dimensions
O. S. Barnouin, C. M. Ernst, R. T. Daly, M.I. Zimmerman, J.M. Peachy, and the Small Body Mapping Tool Team (JHU APL)

Abstract EMPAC Studio Beta

9:10-9:30

Theoretical and Empirical Performance Evaluation of Stereophotoclinometry in Support of OSIRIS-REx
E. E. Palmer (PSI), J. R. Weirich (PSI), R. W. Gaskell (PSI), O. S. Barnouin (JHU APL), M. Daly (York Univ.), and D. S. Lauretta (Univ. of Arizona)

Abstract EMPAC Studio Beta

9:30-9:50

Break
9:50–10:10

In-Flight Calibration of the OSIRIS-REx Optical Navigation Imagers
John Y. Pelgrift (KinetX), Eric M. Sahr (KinetX), Derek S. Nelson (KinetX), Coralie D. Jackman (KinetX), Lylia Benhacine (Draper), Brent J. Bos (NASA GSFC), Bashar Rizk (Univ. of Arizona), Christian d’Aubigny (Univ. of Arizona),  Dathon Golish (Univ. of Arizona), Daniella DellaGiustina (Univ. of Arizona), and Dante S. Lauretta (Univ. of Arizona)

Full Manuscript EMPAC Studio Beta
10:10–10:30

Optical Navigation Simulation and Performance Analysis for OSIRIS-REx Proximity Operations
Coralie Jackman (KinetX), Leilah McCarthy (KinetX), Eric Sahr (KinetX), Derek Nelson (KinetX), Jason Leonard (KinetX), Peter Antreasian (KinetX), Eric Palmer (PSI), John Weirich (PSI), and Andrew Liounis (NASA GSFC)

Abstract EMPAC Studio Beta
10:30–10:50

Image Correlation in Unknown Environments for OSIRIS-REx Asteroid Sample Collection
Courtney Mario (Draper)

Abstract EMPAC Studio Beta
10:50–11:10

Break
11:10–11:30

Mapping Bennu with Sunlight and Lasers: The SPCOLA methods
J. H. Roberts (JHU APL), O. S. Barnouin (JHU APL), R. W. Gaskell (PSI), E. E. Palmer (PSI), J. R. Weirich (PSI), M. Daly (York Univ.), J. Seabrook (York Univ.), R. C. Espiritu (JHU APL), A. H. Nair (JHU APL), M. E. Perry (JHU APL), and D. S. Lauretta (Univ. of Arizona)

Abstract EMPAC Studio Beta
11:30–11:50

Vision Navigation using the ISS Selfie video data
Fredy Monterroza (Draper)

Abstract EMPAC Studio Beta
11:50–1:00

Lunch

  EMPAC Mezzanine
1:00–2:00

Breakout Sessions


EMPAC Studio Beta
CBIS Meeting Rooms

2:00–3:00

Day 2 Keynote: Jan-Michael Frahm (UNC at Chapel Hill)

  CBIS Auditorium
3:00–3:30

Break
3:30–3:50

Assessment of Vision-Based Terrain Relative Navigation Algorithms
Lylia Benhacine (Draper)

Abstract EMPAC Studio Beta
3:50–4:10

RVS3000-3D: LIDAR meets Neural Networks
Christoph Schmitt, Johannes Both, and Florian Kolb (Jena-Optronik)

  EMPAC Studio Beta
4:10–4:30

Optical High-Dynamic-Range Spacecraft Relative Navigation: Advances in Algorithms and Hardware-in-the-Loop Validation
Joshua Sullivan, Sumant Sharma, Connor Beierle, and Simone D’Amico (Stanford)

  EMPAC Studio Beta
4:30–4:40

Closing Remarks

  EMPAC Studio Beta


Breakout Sessions

There will be three parallel breakout sessions from 1:00–2:00 on June 5. Each session will begin with a brief introduction by the moderator to frame the topic, followed by an open dialogue amongst session participants. Everyone is encouraged to actively participate in the conversation! These informal sessions are an opportunity for workshop attendees to explore challenges of interest to the community, share new developments of note, and identify important areas of future work. We hope that the discussions will enable the exchange of new ideas and be a catalyst for new collaborations.

Breakout Session #1

Cis-Lunar Optical Navigation (OPNAV)

Moderator: Scott Cryan, NASA Johnson Space Center

With plans for cis-lunar spacecraft (manned and unmanned) to become less reliant on Earth-based assets, there is a desire and need to perform GNC autonomously. The ability to use passive sensors, such as visible and infrared cameras, to support navigation is highly desirable. This session will discuss performing absolute navigation as well as relative navigation (monitoring of incoming visiting vehicles) with respect to a target vehicle, and the desired accuracies needed to support both.

Breakout Session #2

Information Overload: Challenges for Data Management in Small Body Modeling

Moderator: James Roberts, Johns Hopkins University Applied Physics Laboratory

Over the past 20 years, tremendous advances have been made in the collection, analysis, and visualization of spacecraft data used to model the topography of small planetary bodies. Shape models of a dozen near-Earth and main-belt asteroids, a few comets, and several small moons are widely available, derived from ground-based radar and in-flight LIDAR as well as from visible images using both stereophotogrammetry and stereophotoclinometry.

This session will explore a new set of challenges we face: dealing with all this data! Our data management approach must be able to handle both data-rich and data-poor scenarios. Orbital campaigns often produce a wealth of data from a variety of sensors that must be combined, interpreted, and visualized to produce meaningful science products. Conversely, flybys of small bodies on the way to another target often provide only limited observations during the brief encounter. This leads to a strong need for making the most of one flyby’s worth of data so that we can get the best shape models possible of these fleeting targets. In both cases (orbit and flyby), there is a need to manage data from disparate sensors in the most efficient, insightful, and transparent way possible. This is the focus of our breakout session.

Breakout Session #3

Identifying the Next Generation of Computer Vision Problems for Space Applications

Moderator: John Christian, Rensselaer Polytechnic Institute

Space-based imagery is often expensive to obtain and limited in geometric diversity. This has led the broader space community to develop sophisticated methods for extracting the maximum amount of information from limited collections of images. Meanwhile, the field of computer vision has matured considerably, and what was first thought doable as a simple summer project at MIT in 1966 has blossomed into one of the most fruitful fields of research of the last half-century. Despite these parallel histories, the space exploration and computer vision communities have not interacted as much as would be hoped. As a result, we find that many classic computer vision algorithms cannot be effectively applied to space imagery due to implicit (and sometimes very subtle) assumptions about operating in a terrestrial environment.

In this breakout session, participants are encouraged to share the challenges they’ve encountered while implementing standard computer vision methods in the space domain. Our objective is to better understand the current shortcomings and to identify a fundamental set of computer vision problems that need to be solved. This may be used to better engage the computer science community with challenging problem sets of direct relevance to space science and space exploration.
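As one concrete, hypothetical illustration of such a subtle terrestrial assumption, consider a naive brightness-weighted centroid: it implicitly assumes a zero background, which is nearly true against deep space but fails badly under stray light or a non-black background. A minimal numpy sketch with made-up image parameters:

```python
import numpy as np

def centroid(img):
    """Naive brightness-weighted centroid; implicitly assumes the
    background level is zero."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    w = img.sum()
    return (xs * img).sum() / w, (ys * img).sum() / w

# Synthetic point source at (x=70, y=40) in a 100x100 frame
ys, xs = np.mgrid[0:100, 0:100]
blob = np.exp(-((xs - 70.0)**2 + (ys - 40.0)**2) / (2 * 2.0**2))

cx_clean, cy_clean = centroid(blob)         # zero background: unbiased
cx_bias, cy_bias = centroid(blob + 0.05)    # uniform stray light: badly biased
cx_fixed, cy_fixed = centroid(np.clip(blob + 0.05 - 0.05, 0.0, None))  # subtract first
```

With the uniform 0.05 background added, the centroid is dragged far toward the frame center; subtracting an estimated background level before centroiding restores the answer. The point is that the zero-background assumption is usually left implicit in terrestrial pipelines.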

Technical Papers, Presentations, and Posters:

Abstracts are sought in all aspects of image analysis for space applications, and may be submitted to one of the following three categories: presentation (manuscript optional), poster (manuscript optional), student competition (manuscript required). Example topics of particular interest include:

  • Image processing techniques as applied to space imagery
  • Estimation of a body’s photometric properties from images
  • Stereophotoclinometry (SPC)
  • Lightcurve inversion
  • Optical navigation (OPNAV)
  • 3D modeling of space objects from 2D imagery
  • Image-based terrain relative navigation (TRN)
  • Simultaneous localization and mapping (SLAM)
  • Space imaging hardware

Outstanding technical papers from the workshop will be invited for submission to a special issue of the Journal of the Astronautical Sciences.

Breakout Session Proposals:

In addition to the technical presentations and posters, the workshop will feature a series of breakout sessions where attendees can meet in small groups to discuss topics of interest to the community. We are accepting proposals for breakout session topics. Breakout session proposals should be no more than one (1) page in length.

Student Paper Competition:

Papers with a student as the primary author are eligible for submission to the student paper competition. Student authors must attend the workshop and will present their work in a special poster session. Papers will be evaluated on technical merit and impact to the field, with the top papers receiving awards.


Submit Your Abstract: 
Abstract submission is now closed.

Stereophotoclinometry (SPC) Short Course

Instructors: Dr. Eric Palmer and Dr. Bob Gaskell, Planetary Science Institute
June 6-8, 2018 at Rensselaer Polytechnic Institute

Stereophotoclinometry (SPC) is a technique for building digital terrain models that combines the accuracy of stereophotogrammetry with the high-resolution capabilities of photoclinometry.  It has been used on numerous missions to provide support for navigation and scientific topographic analysis (Dawn, Rosetta, Cassini, Hayabusa, NEAR).

This short course will provide training on both the theoretical background of SPC and the pragmatic details of running the software. It will teach students to create and modify LANDMARKS/MAPLETS, register and use images, evaluate the quality of SPC products, and apply techniques to fix errors. By the end of the class, each student will have produced a low-resolution shape model of an asteroid and will have the skills to continue this work to the limit of the source images.
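To give a flavor of the photoclinometry half of SPC, here is a deliberately simplified 1-D, small-slope sketch (a toy under a Lambertian reflectance assumption, not the SPC software taught in the course): observed brightness constrains the local surface slope, which is then integrated into relative heights.

```python
import numpy as np

def slopes_from_brightness(I, albedo, sun_zenith):
    """Small-slope 1-D photoclinometry: for a Lambertian surface lit from
    zenith angle `sun_zenith` (radians), brightness is approximately
    I ~ albedo * (cos(sun_zenith) - slope * sin(sun_zenith)),
    so the per-pixel slope dz/dx can be recovered directly."""
    r = I / albedo
    return (np.cos(sun_zenith) - r) / np.sin(sun_zenith)

def heights_from_slopes(slopes, dx):
    """Integrate dz/dx into a relative height profile with z[0] = 0."""
    return np.concatenate([[0.0], np.cumsum(slopes[:-1] * dx)])

# Synthesize brightness from a known gentle ramp, then invert it
dx, a, A = 1.0, np.radians(30.0), 0.9
true_slope = np.full(100, 0.02)                 # constant 2% grade
I = A * (np.cos(a) - true_slope * np.sin(a))    # forward (rendering) model
s = slopes_from_brightness(I, A, a)             # inverse (photoclinometry) step
z = heights_from_slopes(s, dx)
```

Real SPC works in 2-D, iterates between rendering maplets under many illumination geometries and adjusting heights, and blends in stereo constraints; this sketch only shows the shading-to-slope inversion at its core.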

This course is predominantly hands-on, so it is imperative that the software be installed on your laptop prior to training. SPC runs in a Unix environment, with the procedures and tools optimized for macOS. PC users are advised to dual-boot into Linux. Familiarity with the Unix command line will be significantly beneficial.

Registration and participation in the SPC short course is limited to U.S. Persons.

Registration

Registration for the Workshop is now closed. If you have any questions about registration, please contact John Christian at chrisj9@rpi.edu

Travel & Lodging

EMPAC @RPI

EMPAC PHOTO © PETER AARON / ESTO

Venue: 

The workshop will be held on the campus of Rensselaer Polytechnic Institute (RPI) at the state-of-the-art Curtis R. Priem Experimental Media and Performing Arts Center (EMPAC).

Lodging: 

The block of rooms at the Hilton Garden Inn Troy is now gone. As of this update (5/17/18), there are still a number of regular rooms available. If you find the hotel is full, we recommend you consider hotels in the Latham area (a 10-minute drive). Here are a few back-up hotels that we recommend, both of which have easy access to the NY-7 bypass to Troy:

Transportation: 

Rensselaer Polytechnic Institute is located in Troy, NY, with easy access to Albany International Airport (ALB) and the Albany-Rensselaer Amtrak station.

Sponsors

Rensselaer Polytechnic Institute
http://www.rpi.edu
American Astronautical Society
http://astronautical.org/
New York Space Grant Consortium
http://astro.cornell.edu/specialprograms/spacegrant/

Planning Committee

Workshop General Chair:

Workshop Planning Committee:

  • Lylia Benhacine, C.S. Draper Laboratory
  • Shyam Bhaskaran, NASA Jet Propulsion Laboratory
  • Christopher D’Souza, NASA Johnson Space Center
  • Marcus Holzinger, Georgia Institute of Technology
  • Coralie Jackman, KinetX
  • Andrew Liounis, NASA Goddard Space Flight Center
  • Jim Masciarelli, Ball Aerospace & Technology Corp.
  • Sandipan Mishra, Rensselaer Polytechnic Institute
  • Richard Radke, Rensselaer Polytechnic Institute
  • James Roberts, JHU Applied Physics Laboratory