Navigating surgical fluorescence cameras using near-infrared optical tracking.
van Oosterom, Matthias; den Houting, David; van de Velde, Cornelis; van Leeuwen, Fijs
2018-05-01
Fluorescence guidance facilitates real-time intraoperative visualization of the tissue of interest. However, due to attenuation, the application of fluorescence guidance is restricted to superficial lesions. To overcome this shortcoming, we have previously applied three-dimensional surgical navigation to position the fluorescence camera in reach of the superficial fluorescent signal. Unfortunately, in open surgery, the near-infrared (NIR) optical tracking system (OTS) used for navigation also interfered with NIR fluorescence imaging. To support future implementation of navigated fluorescence cameras, different aspects of this interference were characterized and solutions were sought. Two commercial fluorescence cameras for open surgery were studied in (surgical) phantom and human tissue setups using two different NIR OTSs and one OTS-simulating light-emitting diode setup. Following the outcome of these measurements, OTS settings were optimized. Measurements indicated the OTS interference was caused by: (1) spectral overlap between the OTS light and camera, (2) OTS light intensity, (3) OTS duty cycle, (4) OTS frequency, (5) fluorescence camera frequency, and (6) fluorescence camera sensitivity. By optimizing points 2 to 4, navigation of fluorescence cameras during open surgery could be facilitated. Optimization of OTS and camera compatibility can be used to support navigated fluorescence guidance concepts.
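Points 2 to 4 above are temporal: the interference scales with how much of each camera exposure coincides with an OTS infrared flash. A minimal numpy sketch of that relationship, using hypothetical square-wave models for the OTS pulse train and the camera shutter (all frequencies and duty cycles below are illustrative, not measured values from the paper):

import numpy as np

def contaminated_fraction(ots_freq_hz, ots_duty, cam_freq_hz, cam_exposure_s,
                          n_frames=100, dt=1e-6):
    """Fraction of total camera integration time overlapped by OTS IR pulses
    (brute-force time grid; idealized square-wave pulse/shutter models)."""
    t = np.arange(0.0, n_frames / cam_freq_hz, dt)
    # OTS emits an IR pulse for a fraction `ots_duty` of every period.
    ots_on = (t * ots_freq_hz) % 1.0 < ots_duty
    # Camera integrates for cam_exposure_s at the start of every frame period.
    cam_open = (t * cam_freq_hz) % 1.0 < cam_exposure_s * cam_freq_hz
    return (ots_on & cam_open).sum() / cam_open.sum()

# Lowering the OTS duty cycle (point 3) cuts the contaminated exposure time
# roughly proportionally when the two clocks are not synchronized:
print(contaminated_fraction(30.0, 0.50, 25.0, 0.010))  # ~0.5
print(contaminated_fraction(30.0, 0.05, 25.0, 0.010))  # ~0.05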
Interplanetary approach optical navigation with applications
NASA Technical Reports Server (NTRS)
Jerath, N.
1978-01-01
The use of optical data from onboard television cameras for the navigation of interplanetary spacecraft during the planet approach phase is investigated. Three optical data types were studied: the planet limb with auxiliary celestial references, the satellite-star method, and the planet-star two-camera method. Analysis and modeling issues related to the nature and information content of the optical methods were examined. Dynamic and measurement system modeling, data sequence design, measurement extraction, model estimation, and orbit determination, as they relate to optical navigation, are discussed, and the various error sources are analyzed. The methodology developed was applied to the Mariner 9 and the Viking Mars missions. Navigation accuracies were evaluated at the control and knowledge points, with particular emphasis devoted to the combined use of radio and optical data. A parametric probability analysis technique was developed to evaluate navigation performance as a function of system reliabilities.
NASA Astrophysics Data System (ADS)
Theil, S.; Ammann, N.; Andert, F.; Franz, T.; Krüger, H.; Lehner, H.; Lingenauber, M.; Lüdtke, D.; Maass, B.; Paproth, C.; Wohlfeil, J.
2018-03-01
Since 2010 the German Aerospace Center has been working on the project Autonomous Terrain-based Optical Navigation (ATON). Its objective is the development of technologies which allow autonomous navigation of spacecraft in orbit around, and during landing on, celestial bodies like the Moon, planets, asteroids and comets. The project developed different image processing techniques and optical navigation methods as well as sensor data fusion. The setup, which is applicable to many exploration missions, consists of an inertial measurement unit, a laser altimeter, a star tracker and one or multiple navigation cameras. In the past years, several milestones have been achieved. It started with the setup of a simulation environment including the detailed simulation of camera images. This was continued by hardware-in-the-loop tests in the Testbed for Robotic Optical Navigation (TRON) where images were generated by real cameras in a simulated, downscaled lunar landing scene. Data were recorded in helicopter flight tests and post-processed in real time to increase the maturity of the algorithms and to optimize the software. Recently, two more milestones have been achieved. In late 2016, the whole navigation system setup was flown on an unmanned helicopter while processing all sensor information onboard in real time. For the latest milestone the navigation system was tested in closed loop on the unmanned helicopter: the ATON navigation system provided the navigation state for the guidance and control of the unmanned helicopter, replacing the GPS-based standard navigation system. The paper gives an introduction to the ATON project and its concept, briefly describes the methods and algorithms of ATON, and presents and discusses the flight test results of the latest two milestones.
Autonomous Vision Navigation for Spacecraft in Lunar Orbit
NASA Astrophysics Data System (ADS)
Bader, Nolan A.
NASA aims to achieve unprecedented navigational reliability for the first manned lunar mission of the Orion spacecraft in 2023. A technique for accomplishing this is to integrate autonomous feature tracking as an added means of improving position and velocity estimation. In this thesis, a template matching algorithm and optical sensor are tested along three simulated lunar trajectories using linear covariance techniques under various conditions. A preliminary characterization of the camera gives insight into its ability to determine azimuth and elevation angles to points on the surface of the Moon. A navigation performance analysis shows that an optical camera sensor can aid in decreasing position and velocity errors, particularly in a loss-of-communication scenario. Furthermore, it is found that camera quality and computational capability are the driving factors affecting the performance of such a system.
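The template-matching step such a system builds on can be sketched with OpenCV: locate a known surface-feature template in a camera image, then convert the match location into the azimuth and elevation angles the navigation filter consumes. This is a generic pinhole-camera sketch with illustrative parameter names, not the thesis implementation:

import numpy as np
import cv2

def track_feature(image, template, fov_rad):
    """Find `template` in `image` (both grayscale uint8) and return the
    line-of-sight azimuth/elevation of the match, assuming a pinhole
    camera with square pixels and boresight at the image center."""
    res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)           # best match location
    # Matched-template center, in pixels from the image center.
    cx = top_left[0] + template.shape[1] / 2 - image.shape[1] / 2
    cy = top_left[1] + template.shape[0] / 2 - image.shape[0] / 2
    focal_px = (image.shape[1] / 2) / np.tan(fov_rad / 2)
    az = np.arctan2(cx, focal_px)                        # azimuth
    el = np.arctan2(-cy, focal_px)                       # elevation
    return az, el, score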
Stray light lessons learned from the Mars reconnaissance orbiter's optical navigation camera
NASA Astrophysics Data System (ADS)
Lowman, Andrew E.; Stauder, John L.
2004-10-01
The Optical Navigation Camera (ONC) is a technical demonstration slated to fly on NASA's Mars Reconnaissance Orbiter in 2005. Conventional navigation methods have reduced accuracy in the days immediately preceding Mars orbit insertion. The resulting uncertainty in spacecraft location limits rover landing sites to relatively safe areas, away from interesting features that may harbor clues to past life on the planet. The ONC will provide accurate navigation on approach for future missions by measuring the locations of the satellites of Mars relative to background stars. Because Mars will be a bright extended object just outside the camera's field of view, stray light control at small angles is essential. The ONC optomechanical design was analyzed by stray light experts and appropriate baffles were implemented. However, stray light testing revealed significantly higher levels of light than expected at the most critical angles. The primary error source proved to be the interface between ground glass surfaces (and the paint that had been applied to them) and the polished surfaces of the lenses. This paper will describe troubleshooting and correction of the problem, as well as other lessons learned that affected stray light performance.
Terminal navigation analysis for the 1980 comet Encke slow flyby mission
NASA Technical Reports Server (NTRS)
Jacobson, R. A.; Mcdanell, J. P.; Rinker, G. C.
1973-01-01
The initial results of a terminal navigation analysis for the proposed 1980 solar electric slow flyby mission to the comet Encke are presented. The navigation technique employs onboard optical measurements with the scientific television camera, groundbased observations of the spacecraft and comet, and groundbased orbit determination and thrust vector update computation. The knowledge and delivery accuracies of the spacecraft are evaluated as a function of the important parameters affecting the terminal navigation. These include optical measurement accuracy, thruster noise level, duration of the planned terminal coast period, comet ephemeris uncertainty, guidance initiation time, guidance update frequency, and optical data rate.
Optical Navigation Preparations for New Horizons Pluto Flyby
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.; Dumont, Philip J.; Jackman, Coralie D.
2012-01-01
The New Horizons spacecraft will encounter Pluto and its satellites in July 2015. As was the case for the Voyager encounters with Jupiter, Saturn, Uranus and Neptune, mission success will depend heavily on accurate spacecraft navigation, and accurate navigation will be impossible without the use of pictures of the Pluto system taken by the onboard cameras. We describe the preparations made by the New Horizons optical navigators: picture planning, image processing algorithms, software development and testing, and results from in-flight imaging.
Optical designs for the Mars '03 rover cameras
NASA Astrophysics Data System (ADS)
Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.
2001-12-01
In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.
GPS free navigation inspired by insects through monocular camera and inertial sensors
NASA Astrophysics Data System (ADS)
Liu, Yi; Liu, J. G.; Cao, H.; Huang, Y.
2015-12-01
Navigation without GPS or other prior knowledge of the environment has been studied for many decades. Advances in technology have made sensors compact enough to be integrated easily into micro- and hand-held devices. Researchers have recently found that bees and fruit flies navigate effectively and efficiently using optical-flow information, processed with only their miniature brains. We present a navigation system inspired by these studies of insects, using a calibrated camera and inertial sensors. The system utilizes SLAM theory and can operate in many GPS-denied environments. Simulation and experimental results are presented for validation and quantification.
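The optical-flow cue itself is easy to reproduce. A sketch using OpenCV's pyramidal Lucas-Kanade tracker on two consecutive grayscale frames (a generic illustration of the measurement, not the authors' system):

import numpy as np
import cv2

def flow_vectors(prev_gray, next_gray, max_corners=200):
    """Track sparse features between consecutive frames; the resulting flow
    field is the egomotion cue bees and fruit flies are thought to use."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=8)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

# The mean flow magnitude scales with velocity divided by scene distance,
# the ratio insects appear to regulate when landing or flying in corridors.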
Orion Optical Navigation Progress Toward Exploration Mission 1
NASA Technical Reports Server (NTRS)
Holt, Greg N.; D'Souza, Christopher N.; Saley, David
2018-01-01
Optical navigation of human spacecraft was proposed on Gemini and implemented successfully on Apollo as a means of autonomously operating the vehicle in the event of lost communication with controllers on Earth. It shares a history with the "method of lunar distances" that was used in the 18th century and gained some notoriety after its use by Captain James Cook during his 1768 Pacific voyage of the HMS Endeavour. The Orion emergency return system utilizing optical navigation has matured in design over the last several years, and is currently undergoing the final implementation and test phase in preparation for Exploration Mission 1 (EM-1) in 2019. The software is being developed as a Government Furnished Equipment (GFE) project, delivered as an application within the Core Flight Software of the Orion camera controller module. The mathematical formulation behind the initial ellipse fit in the image processing is detailed by Christian. The non-linear least-squares refinement then follows the technique of Mortari as an estimation process of the planetary limb using the sigmoid function. The Orion optical navigation system uses a body-fixed camera, a decision that was driven by mass and mechanism constraints. The general concept of operations involves a 2-hour pass once every 24 hours, with passes specifically placed before all maneuvers to supply accurate navigation information to guidance and targeting. The pass lengths are limited by thermal constraints on the vehicle, since the OpNav attitude generally deviates from the thermally stable tail-to-sun attitude maintained during the rest of the orbit coast phase. Calibration is scheduled prior to every pass due to the unknown nature of thermal effects on the lens distortion and the mounting platform deformations between the camera and star trackers. The calibration technique is described in detail by Christian et al. and simultaneously estimates the Brown-Conrady coefficients and the star tracker/camera interlock angles. Accurate attitude information is provided by the star trackers during each pass. Figure 1 shows the various phases of lunar return navigation when the vehicle is in autonomous operation with lost ground communication. The midcourse maneuvers are placed to control the entry interface conditions to the desired corridor for safe landing. The general form of optical navigation on Orion is that still images of the Moon or Earth are processed to find the apparent angular diameter and centroid in the camera focal plane. This raw data is transformed into range and bearing angle measurements using planetary data and precise star tracker inertial attitude. The measurements are then sent to the main flight computer's Kalman filter to update the onboard state vector. The images are, of course, collected over an arc to converge the state and estimate velocity. The same basic technique was used by Apollo to satisfy loss-of-comm requirements, but Apollo used manual crew sightings with a vehicle-integral sextant instead of autonomously processing optical imagery. The software development is past its Critical Design Review, and is progressing through test and certification for human rating. In support of this, a hardware-in-the-loop test rig was developed in the Johnson Space Center Electro-Optics Lab to exercise the OpNav system prior to integrated testing on the Orion vehicle. Figure 2 shows the rig, which the test team has dubbed OCILOT (Orion Camera In the Loop Optical Testbed).
Analysis performed to date shows a delivery that satisfies an allowable entry corridor as shown in Figure 3.
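The measurement transformation described above (apparent angular diameter and centroid converted to range and bearing) reduces, for a spherical body seen by an ideal pinhole camera, to a few lines. The sketch below illustrates the geometry only; the constants and the simple spherical-limb model are assumptions, and this is not the flight algorithm:

import numpy as np

MOON_RADIUS_KM = 1737.4

def range_and_bearing(centroid_px, ang_diameter_rad, focal_px, principal_pt_px,
                      body_radius_km=MOON_RADIUS_KM):
    """Convert an imaged body's centroid and apparent angular diameter into a
    range and a bearing unit vector in the camera frame (pinhole sketch)."""
    # A sphere of radius R at range r subtends a half-angle rho with
    # sin(rho) = R / r, so r = R / sin(rho).
    rng_km = body_radius_km / np.sin(ang_diameter_rad / 2.0)
    dx = centroid_px[0] - principal_pt_px[0]
    dy = centroid_px[1] - principal_pt_px[1]
    los = np.array([dx, dy, focal_px])        # boresight along +z
    return rng_km, los / np.linalg.norm(los)

# Sanity check: the Moon's ~0.518 deg angular diameter implies ~384,000 km.
rng, u = range_and_bearing((1024, 1024), np.deg2rad(0.518), 2800.0, (1024, 1024))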
Relative optical navigation around small bodies via Extreme Learning Machine
NASA Astrophysics Data System (ADS)
Law, Andrew M.
To perform close-proximity operations in a low-gravity environment, relative and absolute positions are vital to the maneuver; navigation is therefore inseparably integrated into space travel. Extreme Learning Machine (ELM) is presented as an optical navigation method around small celestial bodies. Optical navigation uses visual observation instruments such as a camera to acquire useful data and determine spacecraft position. The required input data for operation is merely a single image strip and a nadir image. ELM is a machine-learning single-hidden-layer feedforward network (SLFN), a type of neural network (NN). The algorithm is built on the premise that input weights and biases can be randomly assigned and do not require back-propagation. The learned model is the set of output-layer weights, which are used to calculate a prediction. Together, Extreme Learning Machine Optical Navigation (ELM OpNav) utilizes optical images and the ELM algorithm to train the machine to navigate around a target body. In this thesis the asteroid Vesta is the designated celestial body. The trained ELMs estimate the position of the spacecraft during operation with a single data set. The results show the approach is promising and potentially suitable for on-board navigation.
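The ELM recipe described above, random untrained input weights plus a closed-form least-squares solve for the output layer, fits in a few lines of numpy. A generic sketch of the algorithm (training pairs would be image-derived features and known spacecraft positions; this is not the thesis code):

import numpy as np

def elm_train(X, Y, n_hidden=500, seed=0):
    """Extreme Learning Machine: assign input weights randomly, then solve
    the output weights in closed form; no back-propagation is needed."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer responses
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # learned output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta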
Optical guidance vidicon test program
NASA Technical Reports Server (NTRS)
Eiseman, A. R.; Stanton, R. H.; Voge, C. C.
1976-01-01
A laboratory and field test program was conducted to quantify the optical navigation parameters of the Mariner vidicons. A scene simulator and a camera were designed and built for vidicon tests under a wide variety of conditions. Laboratory tests characterized error sources important to the optical navigation process and field tests verified star sensitivity and characterized comet optical guidance parameters. The equipment, tests and data reduction techniques used are described. Key test results are listed. A substantial increase in the understanding of the use of selenium vidicons as detectors for spacecraft optical guidance was achieved, indicating a reduction in residual offset errors by a factor of two to four to the single pixel level.
Yang, Xiaofeng; Wu, Wei; Wang, Guoan
2015-04-01
This paper presents a surgical optical navigation system with non-invasive, real-time positioning characteristics for open surgical procedures. The design is based on the principle of near-infrared fluorescence molecular imaging and combines in vivo fluorescence excitation technology, multi-channel spectral camera technology, and image fusion software. A visible and near-infrared ring LED excitation source, multi-channel band-pass filters, a two-CCD spectral camera sensor, and a computer system were integrated, and a new surgical optical navigation system was successfully developed. When a near-infrared fluorescent agent is injected, the system displays anatomical images of the tissue surface and near-infrared fluorescence functional images of the surgical field simultaneously. The system can identify lymphatic vessels, lymph nodes, and tumor margins that the surgeon cannot detect intraoperatively with the naked eye, effectively guiding removal of tumor tissue and improving the success rate of surgery. The technology has obtained a national patent, patent No. ZI. 2011 1 0292374. 1.
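The display-side fusion step can be sketched as a pseudo-color overlay: blend the NIR fluorescence channel onto the visible image wherever the signal exceeds a threshold. A minimal numpy illustration assuming co-registered channels; the actual system's two-CCD optics and fusion software are more involved:

import numpy as np

def fuse_nir_overlay(color_bgr, nir_gray, threshold=40):
    """Blend a pseudo-colored NIR fluorescence channel (H x W uint8) onto a
    visible-light image (H x W x 3 uint8); the NIR signal is shown in green."""
    out = color_bgr.astype(np.float32)
    mask = nir_gray > threshold                           # fluorescent pixels
    alpha = (nir_gray[mask].astype(np.float32) / 255.0)[:, None]
    green = np.array([0.0, 255.0, 0.0])                   # BGR pseudo-color
    out[mask] = (1.0 - alpha) * out[mask] + alpha * green
    return out.astype(np.uint8)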
Evaluation of optical data for Mars approach navigation.
NASA Technical Reports Server (NTRS)
Jerath, N.
1972-01-01
Investigation of several optical data types which can be obtained from science and engineering instruments normally aboard interplanetary spacecraft. TV cameras are assumed to view planets or satellites and stars for celestial references. Also, spacecraft attitude sensors are assumed to yield celestial references. The investigation of approach phases of typical Mars missions showed that the navigation accuracy was greatly enhanced with the addition of optical data to radio data. Viewing stars and the planet Mars was found most advantageous ten days before Mars encounter, and viewing Deimos or Phobos and stars was most advantageous within ten days of encounter.
Orion Optical Navigation Progress Toward Exploration Mission 1
NASA Technical Reports Server (NTRS)
Holt, Greg N.; D'Souza, Christopher N.; Saley, David
2018-01-01
Optical navigation of human spacecraft was proposed on Gemini and implemented successfully on Apollo as a means of autonomously operating the vehicle in the event of lost communication with controllers on Earth. The Orion emergency return system utilizing optical navigation has matured in design over the last several years, and is currently undergoing the final implementation and test phase in preparation for Exploration Mission 1 (EM-1) in 2019. The software development is past its Critical Design Review, and is progressing through test and certification for human rating. The filter architecture uses a square-root-free UDU covariance factorization. Linear Covariance Analysis (LinCov) was used to analyze the measurement models and the measurement error models on a representative EM-1 trajectory. The Orion EM-1 flight camera was calibrated at the Johnson Space Center (JSC) electro-optics lab. To permanently stake the focal length of the camera a 500 mm focal length refractive collimator was used. Two Engineering Design Unit (EDU) cameras and an EDU star tracker were used for a live-sky test in Denver. In-space imagery with high-fidelity truth metadata is rare so these live-sky tests provide one of the closest real-world analogs to operational use. A hardware-in-the-loop test rig was developed in the Johnson Space Center Electro-Optics Lab to exercise the OpNav system prior to integrated testing on the Orion vehicle. The software is verified with synthetic images. Several hundred off-nominal images are also used to analyze robustness and fault detection in the software. These include effects such as stray light, excess radiation damage, and specular reflections, and are used to help verify the tuning parameters chosen for the algorithms such as earth atmosphere bias, minimum pixel intensity, and star detection thresholds.
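The square-root-free UDU form mentioned above stores the covariance P as U D U^T, with U unit upper triangular and D diagonal, which preserves symmetry and positive definiteness numerically. A small numpy sketch of the factorization itself (the flight filter propagates U and D directly; this is not the flight code):

import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as P = U diag(D) U^T,
    with U unit upper triangular (Bierman's square-root-free form)."""
    n = P.shape[0]
    P = P.copy()
    U, D = np.eye(n), np.zeros(n)
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        # Downdate the remaining leading submatrix.
        P[:j, :j] -= np.outer(U[:j, j], U[:j, j]) * D[j]
    return U, D

P = np.array([[4.0, 2.0], [2.0, 3.0]])
U, D = ud_factorize(P)
assert np.allclose(U @ np.diag(D) @ U.T, P)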
Visual Odometry for Autonomous Deep-Space Navigation
NASA Technical Reports Server (NTRS)
Robinson, Shane; Pedrotty, Sam
2016-01-01
Visual odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (called visual odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances the technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, and loss-of-comm navigation.
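The core of a monocular visual-odometry step, relative pose from matched features in two images, is standard enough to sketch with OpenCV. This is the textbook essential-matrix formulation, not necessarily the project's algorithm:

import numpy as np
import cv2

def relative_pose(pts1, pts2, K):
    """Relative rotation R and unit-scale translation t between two views,
    from matched pixel coordinates (N x 2) and the 3x3 intrinsic matrix K."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t   # monocular: translation is recoverable only up to scale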
Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System
NASA Astrophysics Data System (ADS)
Bethmann, F.; Luhmann, T.
2012-07-01
The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements show a particular sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
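The Monte-Carlo idea can be sketched directly: perturb the image observations of the locator targets with Gaussian noise, re-estimate the 6DOF pose each time, and take the spread of the solutions as the expected measurement deviation. A compact stand-in using OpenCV's PnP solver (the paper's simulator also varies camera/object geometry and locator configuration):

import numpy as np
import cv2

def pose_spread(obj_pts, K, rvec, tvec, sigma_px=0.1, trials=1000, seed=0):
    """1-sigma position deviation per axis for a locator with 3D target
    coordinates obj_pts (N x 3 float), intrinsics K, and true pose rvec/tvec."""
    rng = np.random.default_rng(seed)
    img_pts, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, None)
    solutions = []
    for _ in range(trials):
        noisy = img_pts + rng.normal(0.0, sigma_px, img_pts.shape)
        ok, _, t = cv2.solvePnP(obj_pts, noisy.astype(np.float32), K, None)
        if ok:
            solutions.append(t.ravel())
    return np.std(np.array(solutions), axis=0)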
Plenoptic Imager for Automated Surface Navigation
NASA Technical Reports Server (NTRS)
Zollar, Byron; Milder, Andrew; Mayo, Michael
2010-01-01
An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
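The data rearrangement at the heart of plenoptic ranging can be sketched in a few lines: pixels at the same offset under every microlens, collected across the array, form one sub-aperture view, and the disparity between such views encodes object range. This assumes an ideal square microlens grid perfectly aligned to the sensor, which real hardware only approximates:

import numpy as np

def sub_aperture_views(raw, n_u):
    """Split a plenoptic raw image (H x W, with an n_u x n_u pixel patch
    under each microlens) into an n_u x n_u grid of sub-aperture views."""
    H, W = raw.shape
    views = raw.reshape(H // n_u, n_u, W // n_u, n_u)
    return views.transpose(1, 3, 0, 2)  # indexed [u, v, lens_row, lens_col]

# Range then follows from the parallax between views, e.g. by correlating
# views[0, 0] against views[n_u - 1, n_u - 1] patch by patch.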
Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio
2010-01-01
This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
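The geometric core of such an alignment is Wahba's problem: find the rotation that best maps the camera-frame line-of-sight unit vectors onto the same directions expressed in the body frame. A standard SVD (Kabsch) sketch, not the authors' exact estimation procedure:

import numpy as np

def align_frames(v_cam, v_body):
    """Rotation R such that v_body is approximately (R @ v_cam.T).T, for
    paired N x 3 arrays of unit vectors expressed in the two frames."""
    B = v_body.T @ v_cam                   # attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))     # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt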
Model-based software engineering for an optical navigation system for spacecraft
NASA Astrophysics Data System (ADS)
Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.
2017-09-01
The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) have become trivial.
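As a toy illustration of the approach, the sketch below declares modules with typed ports and mechanically checks connection consistency, the kind of formal-level check the ATON model performs before interfaces and glue code are generated. It is far simpler than the real model, and every name in it is invented:

# Hypothetical module model: each module declares typed input/output ports.
MODULES = {
    "camera":      {"inputs": {}, "outputs": {"image": "Image"}},
    "feature_det": {"inputs": {"image": "Image"},
                    "outputs": {"features": "FeatureList"}},
    "nav_filter":  {"inputs": {"features": "FeatureList", "imu": "ImuData"},
                    "outputs": {}},
}
CONNECTIONS = [("camera.image", "feature_det.image"),
               ("feature_det.features", "nav_filter.features")]

def check_connections(modules, connections):
    """Type/consistency check on the formal model before code generation."""
    for src, dst in connections:
        sm, sp = src.split(".")
        dm, dp = dst.split(".")
        assert modules[sm]["outputs"][sp] == modules[dm]["inputs"][dp], \
            f"type mismatch on {src} -> {dst}"

check_connections(MODULES, CONNECTIONS)  # adding a second camera = one entry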
Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission
NASA Astrophysics Data System (ADS)
Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.
2018-02-01
NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
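As a quick consistency check on the quoted optics numbers, the instantaneous field of view implied by the total field of view and the pixel count reproduces the stated per-pixel scale:

import numpy as np

# 44 deg x 32 deg field of view on a 2592 x 1944 pixel detector:
ifov_x = np.deg2rad(44.0) / 2592     # ~0.296 mrad/pixel
ifov_y = np.deg2rad(32.0) / 1944     # ~0.287 mrad/pixel
print(ifov_x * 1e3, ifov_y * 1e3)    # both consistent with ~0.28 mrad/pixel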
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
NASA Astrophysics Data System (ADS)
Ou, Yangwei; Zhang, Hongbo; Li, Bin
2018-04-01
The purpose of this paper is to show that absolute orbit determination can be achieved based on spacecraft formation. The relative position vectors expressed in the inertial frame are used as measurements. In this scheme, the optical camera is applied to measure the relative line-of-sight (LOS) angles, i.e., the azimuth and elevation. Lidar (light detection and ranging) or radar is used to measure the range, and we assume that high-accuracy inertial attitude is available. When more deputies are included in the formation, the formation configuration is optimized from the perspective of Fisher information theory. Considering the limitation on the field of view (FOV) of the cameras, the visibility of the spacecraft and the installation of the cameras are investigated. In simulations, an extended Kalman filter (EKF) is used to estimate the position and velocity. The results show that the navigation accuracy can be enhanced by using more deputies and that the installation of the cameras significantly affects the navigation performance.
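The measurement construction described (camera azimuth/elevation combined with the lidar/radar range, then rotated with the known inertial attitude) can be sketched as follows; the angle conventions and frame names are assumptions of this illustration:

import numpy as np

def relative_position_inertial(az, el, rng_m, C_ib):
    """Relative-position measurement in the inertial frame from line-of-sight
    angles (rad), measured range (m), and body-to-inertial attitude C_ib."""
    los_body = np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])           # unit LOS in the body frame
    return C_ib @ (rng_m * los_body)            # feeds the EKF as z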
NASA Astrophysics Data System (ADS)
Bu, Yanlong; Zhang, Qiang; Ding, Chibiao; Tang, Geshi; Wang, Hang; Qiu, Rujin; Liang, Libo; Yin, Hejun
2017-02-01
This paper presents an interplanetary optical navigation algorithm based on two spherical celestial bodies. The remarkable characteristic of the method is that the key navigation parameters can be estimated entirely from the known sizes and ephemerides of two celestial bodies; in particular, positioning is realized from a single image and no longer relies on traditional terrestrial radio tracking. Actual Earth-Moon group photos captured by China's Chang'e-5T1 probe were used to verify the effectiveness of the algorithm. From 430,000 km away from the Earth, the camera pointing accuracy reaches 0.01° (one sigma) and the inertial positioning error is less than 200 km, while ground-control and human-resource costs are greatly reduced. The algorithm is flexible, easy to implement, and can serve as a reference for interplanetary autonomous navigation in the solar system.
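A minimal geometric stand-in for single-image positioning from two spherical bodies: each body's known radius and measured apparent angular radius gives a range along the measured inertial line of sight, and each range implies an observer position relative to that body's ephemeris position. Averaging the two fixes is a crude simplification of the paper's estimator:

import numpy as np

def position_from_two_bodies(u1, u2, p1, p2, R1, R2, rho1, rho2):
    """Observer position from unit LOS vectors u_i (inertial), body ephemeris
    positions p_i, body radii R_i, and apparent angular radii rho_i (rad)."""
    r1 = R1 / np.sin(rho1)        # range to body 1 from its apparent size
    r2 = R2 / np.sin(rho2)
    obs1 = p1 - r1 * u1           # position implied by body 1
    obs2 = p2 - r2 * u2           # position implied by body 2
    return 0.5 * (obs1 + obs2)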
Miniaturized camera system for an endoscopic capsule for examination of the colonic mucosa
NASA Astrophysics Data System (ADS)
Wippermann, Frank; Müller, Martin; Wäny, Martin; Voltz, Stephan
2014-09-01
Today's standard procedure for the examination of the colon uses a digital endoscope located at the tip of a tube encasing wires for camera readout, fibers for illumination, and mechanical structures for steering and navigation. On the other hand, there are swallowable capsules incorporating a miniaturized camera which are more cost effective, disposable, and less unpleasant for the patient during examination, but cannot be navigated along the path through the colon. We report on the development of a miniaturized endoscopic camera as part of a completely wireless capsule which can be safely and accurately navigated and controlled from the outside using an electromagnet. The endoscope is based on a global-shutter CMOS imager with 640x640 pixels and a pixel size of 3.6 μm featuring through-silicon vias. Hence, the required electronic connectivity is made at its back side using a ball grid array, enabling the smallest lateral dimensions. The layout of the f/5 objective with 100° diagonal field of view aims for low production cost and employs polymeric lenses produced by injection molding. Due to the need for at least one-time autoclaving, high-temperature-resistant polymers were selected. Optical and mechanical design considerations are given along with experimental data obtained from realized demonstrators.
Completely optical orientation determination for an unstabilized aerial three-line camera
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2010-10-01
Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision involves considerable effort unless extensive camera stabilization is used. But stabilization, too, entails high cost, weight, and power consumption. This contribution shows that it is possible to derive the complete absolute exterior orientation of an unstabilized line camera from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then be determined reliably using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with the DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.
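The homologous-point step can be sketched with OpenCV. The paper uses the SURF operator; ORB is substituted below because SURF ships only with opencv-contrib builds:

import numpy as np
import cv2

def homologous_points(img1, img2, n_features=2000):
    """Detect and match features between overlapping, pre-corrected line
    images; the matched points feed the bundle adjustment."""
    orb = cv2.ORB_create(nfeatures=n_features)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    return pts1, pts2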
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e., image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself (its optical system, image sensor, and electronics) limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground, depends only on the camera itself and is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial-frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, i.e., removes the imaging effects imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case where the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
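The compensation step pairs the extracted IMTF with a textbook constrained least-squares (CLS) filter. A numpy sketch, assuming the MTF has already been sampled on the image's discrete frequency grid and using an illustrative regularization weight:

import numpy as np

def cls_restore(image, mtf, gamma=0.01):
    """Constrained least-squares restoration:
    F = conj(H) / (|H|^2 + gamma |C|^2) * G, where H is the sampled (I)MTF
    and C is a Laplacian smoothness constraint in the frequency domain."""
    G = np.fft.fft2(image)
    H = mtf.astype(np.complex128)
    lap = np.zeros(image.shape)
    lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]   # discrete Laplacian
    C = np.fft.fft2(lap)
    F = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2) * G
    return np.real(np.fft.ifft2(F))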
NASA Astrophysics Data System (ADS)
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
2007-03-01
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
GPS/Optical/Inertial Integration for 3D Navigation Using Multi-Copter Platforms
NASA Technical Reports Server (NTRS)
Dill, Evan T.; Young, Steven D.; Uijt De Haag, Maarten
2017-01-01
In concert with the continued advancement of a UAS traffic management system (UTM), the proposed uses of autonomous unmanned aerial systems (UAS) have become more prevalent in both the public and private sectors. To facilitate this anticipated growth, a reliable three-dimensional (3D) positioning, navigation, and mapping (PNM) capability will be required to enable operation of these platforms in challenging environments where global navigation satellite systems (GNSS) may not be available continuously. This is especially true when the platform's mission requires maneuvering through different and difficult environments such as outdoor open-sky, outdoor under foliage, outdoor urban, and indoor, and may include transitions between these environments. There may not be a single method that solves the PNM problem for all environments. The research presented in this paper is a subset of a broader research effort described in [1], and is focused on combining data from dissimilar sensor technologies to create an integrated navigation and mapping method that can enable reliable operation in both outdoor and structured indoor environments. The integrated navigation and mapping design utilizes a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), a monocular digital camera, and three short-to-medium-range laser scanners. This paper describes specifically the techniques necessary to effectively integrate the monocular camera data within the established mechanization. To evaluate the developed algorithms, a hexacopter was built, equipped with the discussed sensors, and both hand-carried and flown through representative environments. This paper highlights the effect that the monocular camera has on the aforementioned sensor integration scheme's reliability, accuracy, and availability.
The NEAR Multispectral Imager.
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1998-06-01
The Multispectral Imager, one of the primary instruments on the Near Earth Asteroid Rendezvous (NEAR) spacecraft, uses a five-element refractive optics telescope, an eight-position filter wheel, and a charge-coupled device detector to acquire images over its sensitive wavelength range of ≈400-1100 nm. The primary science objectives of the Multispectral Imager are to determine the morphology and composition of the surface of asteroid 433 Eros. The camera will have a critical role in navigating to the asteroid. Seven narrowband spectral filters have been selected to provide multicolor imaging for comparative studies with previous observations of asteroids in the same class as Eros. The eighth filter is broadband and will be used for optical navigation. An overview of the instrument is presented, and design parameters and tradeoffs are discussed.
OSIRIS-REx Asteroid Sample Return Mission Image Analysis
NASA Astrophysics Data System (ADS)
Chevres Fernandez, Lee Roger; Bos, Brent
2018-01-01
NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu carries five integrated instruments from national and international partners, including the Touch-And-Go Camera System (TAGCAMS) three-camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by TAGCAMS during cruise to Bennu was performed using custom codes developed in MATLAB, and the in-flight performance of the cameras was assessed from flight imagery. One specific area of investigation was bad-pixel mapping: a recent phase of the mission, the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable”, possibly under-responsive, pixels using image segmentation analysis. Ongoing work on point-spread-function morphology and camera linearity and responsivity will also be used for calibration and further analysis in preparation for proximity operations around Bennu. These analyses provide a broader understanding of the camera system's performance, which will in turn aid in the fly-down to the asteroid by allowing selection of a suitable landing and sampling location.
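A minimal numpy stand-in for the bad-pixel screening idea (the mission analysis used custom MATLAB codes and image segmentation; the statistic and threshold below are illustrative):

import numpy as np

def map_bad_pixels(frames, k=5.0):
    """Flag 'questionable' pixels from a stack of frames (n, H, W): pixels
    whose temporal mean or variance deviates from the detector-wide
    statistics by more than k standard deviations."""
    stack = np.asarray(frames, dtype=np.float64)
    mean, var = stack.mean(axis=0), stack.var(axis=0)
    bad = (np.abs(mean - mean.mean()) > k * mean.std()) | \
          (np.abs(var - var.mean()) > k * var.std())
    return bad   # boolean H x W bad-pixel map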
Intraocular camera for retinal prostheses: Refractive and diffractive lens systems
NASA Astrophysics Data System (ADS)
Hauer, Michelle Christine
The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.
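The fixed-focus accommodation claim rests on standard depth-of-field geometry: the hyperfocal distance H = f^2/(N c) + f shrinks quadratically with focal length f, so a millimeter-scale lens at an f-number near unity is sharp from a few centimeters to infinity. A sketch with illustrative values (not the thesis design parameters):

def hyperfocal_mm(f_mm, n_stop, coc_mm):
    """Hyperfocal distance; focusing there renders everything from half that
    distance to infinity acceptably sharp (circle of confusion coc_mm)."""
    return f_mm ** 2 / (n_stop * coc_mm) + f_mm

# e.g. f = 2 mm, f/1, and a coarse blur tolerance matched to a low-resolution
# retinal stimulator array:
print(hyperfocal_mm(2.0, 1.0, 0.05))   # ~82 mm, i.e. sharp from ~4 cm outward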
NASA Tech Briefs, October 2005
NASA Technical Reports Server (NTRS)
2005-01-01
Topics covered include: Insect-Inspired Optical-Flow Navigation Sensors; Chemical Sensors Based on Optical Ring Resonators; A Broad-Band Phase-Contrast Wave-Front Sensor; Progress in Insect-Inspired Optical Navigation Sensors; Portable Airborne Laser System Measures Forest-Canopy Height; Deployable Wide-Aperture Array Antennas; Faster Evolution of More Multifunctional Logic Circuits; Video-Camera-Based Position-Measuring System; N-Type delta Doping of High-Purity Silicon Imaging Arrays; Avionics System Architecture Tool; Updated Chemical Kinetics and Sensitivity Analysis Code; Predicting Flutter and Forced Response in Turbomachinery; Upgrades of Two Computer Codes for Analysis of Turbomachinery; Program Facilitates CMMI Appraisals; Grid Visualization Tool; Program Computes Sound Pressures at Rocket Launches; Solar-System Ephemeris Toolbox; Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras; Corrosion-Prevention Capabilities of a Water-Borne, Silicone-Based, Primerless Coating; Sol-Gel Process for Making Pt-Ru Fuel-Cell Catalysts; Making Activated Carbon for Storing Gas; System Regulates the Water Contents of Fuel-Cell Streams; Five-Axis, Three-Magnetic-Bearing Dynamic Spin Rig; Modifications of Fabrication of Vibratory Microgyroscopes; Chamber for Growing and Observing Fungi; Electroporation System for Sterilizing Water; Thermoelectric Air/Soil Energy-Harvesting Device; Flexible Metal-Fabric Radiators; Actuated Hybrid Mirror Telescope; Optical Design of an Optical Communications Terminal; Algorithm for Identifying Erroneous Rain-Gauge Readings; Condition Assessment and End-of-Life Prediction System for Electric Machines and Their Loads; Lightweight Thermal Insulation for a Liquid-Oxygen Tank; Stellar Gyroscope for Determining Attitude of a Spacecraft; and Lifting Mechanism for the Mars Explorer Rover.
Navigation and Remote Sensing Payloads and Methods of the Sarvant Unmanned Aerial System
NASA Astrophysics Data System (ADS)
Molina, P.; Fortuny, P.; Colomina, I.; Remy, M.; Macedo, K. A. C.; Zúnigo, Y. R. C.; Vaz, E.; Luebeck, D.; Moreira, J.; Blázquez, M.
2013-08-01
In a large number of scenarios and missions, the technical, operational and economical advantages of UAS-based photogrammetry and remote sensing over traditional airborne and satellite platforms are apparent. Airborne Synthetic Aperture Radar (SAR) or combined optical/SAR operation in remote areas might be a case of a typical "dull, dirty, dangerous" mission suitable for unmanned operation - in harsh environments such as rain forest areas in Brazil, topographic mapping of small to medium sparsely inhabited remote areas with UAS-based photogrammetry and remote sensing seems to be a reasonable paradigm. An example of such a system is the SARVANT platform, a fixed-wing aerial vehicle with a six-meter wingspan and a maximum take-off weight of 140 kilograms, able to carry a fifty-kilogram payload. SARVANT includes a multi-band (X and P) interferometric SAR payload, as the P-band enables the topographic mapping of densely tree-covered areas, providing terrain profile information. Moreover, the combination of X- and P-band measurements can be used to extract biomass estimations. Finally, the long-term plan is to incorporate surveying capabilities at optical bands as well and to deliver real-time imagery to a control station. This paper focuses on the remote-sensing concept in SARVANT, composed of the aforementioned SAR sensor and envisioning a double optical camera configuration to cover the visible and the near-infrared spectrum. The flexibility in the optical payload selection, ranging from professional, medium-format cameras to mass-market, small-format cameras, is discussed as a driver in the SARVANT development. The paper also focuses on the navigation and orientation payloads, including the sensors (IMU and GNSS), the measurement acquisition system and the proposed navigation and orientation methods. The latter includes the Fast AT procedure, which performs close to traditional Integrated Sensor Orientation (ISO) and better than Direct Sensor Orientation (DiSO), and features the advantage of not requiring the massive image-processing load for the generation of tie points, although it does require some Ground Control Points (GCPs). This technique is further supported by the availability of a high-quality INS/GNSS trajectory, motivated by single-pass and repeat-pass SAR interferometry requirements.
Design and Development of the WVU Advanced Technology Satellite for Optical Navigation
NASA Astrophysics Data System (ADS)
Straub, Miranda
In order to meet the demands of future space missions, it is beneficial for spacecraft to have the capability to support autonomous navigation. This is true for both crewed and uncrewed vehicles. For crewed vehicles, autonomous navigation would allow the crew to safely navigate home in the event of a communication system failure. For uncrewed missions, autonomous navigation reduces the demand on ground-based infrastructure and could allow for more flexible operation. One promising technique for achieving these goals is through optical navigation. To this end, the present work considers how camera images of the Earth's surface could enable autonomous navigation of a satellite in low Earth orbit. Specifically, this study will investigate the use of coastlines and other natural land-water boundaries for navigation. Observed coastlines can be matched to a pre-existing coastline database in order to determine the location of the spacecraft. This paper examines how such measurements may be processed in an on-board extended Kalman filter (EKF) to provide completely autonomous estimates of the spacecraft state throughout the duration of the mission. In addition, future work includes implementing this work on a CubeSat mission within the WVU Applied Space Exploration Lab (ASEL). The mission titled WVU Advanced Technology Satellite for Optical Navigation (WATSON) will provide students with an opportunity to experience the life cycle of a spacecraft from design through operation while hopefully meeting the primary and secondary goals defined for mission success. The spacecraft design process, although simplified by CubeSat standards, will be discussed in this thesis as well as the current results of laboratory testing with the CubeSat model in the ASEL.
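As context for the on-board filter, a generic EKF measurement update for a coastline-derived fix, treating the matched coastline as a direct observation of the position portion of the state x = [r, v] (a textbook sketch; the thesis filter's states and models may differ):

import numpy as np

def ekf_position_update(x, P, z, R_meas):
    """One EKF update with measurement z = position fix (3-vector) and
    measurement covariance R_meas, for a 6-state [position, velocity] filter."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # z = H x + noise
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R_meas                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new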
NASA Astrophysics Data System (ADS)
Daly, Michael J.; Muhanna, Nidal; Chan, Harley; Wilson, Brian C.; Irish, Jonathan C.; Jaffray, David A.
2014-02-01
A freehand, non-contact diffuse optical tomography (DOT) system has been developed for multimodal imaging with intraoperative cone-beam CT (CBCT) during minimally-invasive cancer surgery. The DOT system is configured for near-infrared fluorescence imaging with indocyanine green (ICG) using a collimated 780 nm laser diode and a nearinfrared CCD camera (PCO Pixelfly USB). Depending on the intended surgical application, the camera is coupled to either a rigid 10 mm diameter endoscope (Karl Storz) or a 25 mm focal length lens (Edmund Optics). A prototype flatpanel CBCT C-Arm (Siemens Healthcare) acquires low-dose 3D images with sub-mm spatial resolution. A 3D mesh is extracted from CBCT for finite-element DOT implementation in NIRFAST (Dartmouth College), with the capability for soft/hard imaging priors (e.g., segmented lymph nodes). A stereoscopic optical camera (NDI Polaris) provides real-time 6D localization of reflective spheres mounted to the laser and camera. Camera calibration combined with tracking data is used to estimate intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) lens parameters. Source/detector boundary data is computed from the tracked laser/camera positions using radiometry models. Target registration errors (TRE) between real and projected boundary points are ~1-2 mm for typical acquisition geometries. Pre-clinical studies using tissue phantoms are presented to characterize 3D imaging performance. This translational research system is under investigation for clinical applications in head-and-neck surgery including oral cavity tumour resection, lymph node mapping, and free-flap perforator assessment.
Simulation-based camera navigation training in laparoscopy-a randomized trial.
Nilsson, Cecilia; Sorensen, Jette Led; Konge, Lars; Westen, Mikkel; Stadeager, Morten; Ottesen, Bent; Bjerrum, Flemming
2017-05-01
Inexperienced operating assistants are often tasked with the important role of handling camera navigation during laparoscopic surgery. Incorrect handling can lead to poor visualization, increased operating time, and frustration for the operating surgeon-all of which can compromise patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. A randomized, single-center superiority trial with three groups: the first group practiced simulation-based camera navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera navigation skills during a laparoscopic cholecystectomy. The secondary outcome was technical skills after training, using a previously developed model for testing camera navigational skills. The exploratory outcome measured participants' motivation toward the task as an operating assistant. Thirty-six participants were randomized. No significant difference was found in the primary outcome between the three groups (p = 0.279). The secondary outcome showed no significant difference between the intervention groups, with total times of 167 s (95% CI, 118-217) and 194 s (95% CI, 152-236) for the camera group and the procedure group, respectively (p = 0.369). Both intervention groups were significantly faster than the control group, 307 s (95% CI, 202-412), p = 0.018 and p = 0.045, respectively. On the exploratory outcome, the control group scored higher on two dimensions, interest/enjoyment (p = 0.030) and perceived choice (p = 0.033). Simulation-based training improves the technical skills required for camera navigation, regardless of whether trainees practice camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher interest/enjoyment and perceived choice than the camera group.
Miniature wide field-of-view star trackers for spacecraft attitude sensing and navigation
NASA Technical Reports Server (NTRS)
Mccarty, William; Curtis, Eric; Hull, Anthony; Morgan, William
1993-01-01
This paper introduces a family of miniature, wide field-of-view star trackers for low-cost, high-performance spacecraft attitude determination and navigation applications. These devices, derivatives of the WFOV Star Tracker Camera developed cooperatively by OCA Applied Optics and the Lawrence Livermore National Laboratory for the Brilliant Pebbles program, offer a suite of options addressing a wide range of spacecraft attitude measurement and control requirements. These sensors employ much wider fields than are customary (ranging between 20 and 60 degrees) to assure enough bright stars for quick and accurate attitude determinations without long integration intervals. The key benefits of this approach are light weight, low power, reduced data processing loads, and high information carrier rates for wide ACS bandwidths. Devices described range from the proven OCA/LLNL WFOV Star Tracker Camera (a low-cost, space-qualified star-field imager utilizing the spacecraft's own computer for centroiding and position-finding), to a new autonomous subsystem design featuring dual-redundant cameras and completely self-contained star-field data processing with output quaternion solutions accurate to 100 micro-rad, 3 sigma, for stand-alone applications.
Estimation of velocities via optical flow
NASA Astrophysics Data System (ADS)
Popov, A.; Miller, A.; Miller, B.; Stepanyan, K.
2017-02-01
This article presents an approach to using optical flow (OF) as a general navigation aid, providing information about a vehicle's linear and angular velocities. The term "OF" comes from opto-electronic devices, where it corresponds to a video sequence of images related to the camera motion either over static surfaces or a set of objects. Even if the positions of these objects are unknown in advance, one can estimate the camera motion from just the video sequence itself and some metric information, such as the distance between the objects or the range to the surface. This approach is applicable to any passive observation system which is able to produce a sequence of images, such as a radio locator or sonar. Here the UAV application of the OF is considered since it is historically
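A minimal sketch of the idea, assuming a downward-looking camera, a known range to the surface, and a small-angle approximation: dense flow gives an angular rate, and the range scales it to a linear velocity. The Farneback estimator below stands in for whatever flow algorithm the authors use; parameter values are assumptions.

```python
# Sketch: dense optical flow plus known range -> translational velocity.
import cv2
import numpy as np

def velocity_from_flow(prev_gray, curr_gray, range_m, focal_px, dt):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Median flow (pixels/frame) is robust to outliers; convert to rad/s,
    # then scale by range to get ground-relative linear velocity (m/s).
    median_flow = np.median(flow.reshape(-1, 2), axis=0)
    angular_rate = median_flow / focal_px / dt       # rad/s per image axis
    return angular_rate * range_m                    # m/s, small-angle approx
```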
Image-based path planning for automated virtual colonoscopy navigation
NASA Astrophysics Data System (ADS)
Hong, Wei
2008-03-01
Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening, by reconstructing three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.
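A toy sketch of the image-based planning step, under the simplifying assumption that the target region is the neighborhood of the deepest pixel and the safe check is a clearance threshold at the image center; the paper's actual landmark extraction is more elaborate.

```python
# Sketch: steer the fly-through camera toward the deepest region of a
# depth image rendered at the current position. Thresholds are assumed.
import numpy as np

def next_view_direction(depth, fov_rad):
    h, w = depth.shape
    # Target region: the neighborhood of the deepest pixel (the lumen).
    ty, tx = np.unravel_index(np.argmax(depth), depth.shape)
    # Convert the pixel offset from the image center into view angles.
    yaw = (tx - w / 2) / w * fov_rad
    pitch = (ty - h / 2) / h * fov_rad
    safe = depth[h // 2, w // 2] > 5.0   # assumed clearance threshold (mm)
    return yaw, pitch, safe
```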
Indoor integrated navigation and synchronous data acquisition method for Android smartphone
NASA Astrophysics Data System (ADS)
Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng
2015-08-01
Smartphones are widely used at present. Most smartphones have cameras and a variety of sensors, such as a gyroscope, accelerometer and magnetometer. Indoor navigation based on smartphones is very important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer of a smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero velocity update and integrated navigation. Synchronous data acquisition from the sensors (gyroscope, accelerometer and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data acquisition method is introduced, which uses the camera class of Android to record images and timestamps from the smartphone camera. Two kinds of sensor data acquisition methods are introduced and compared. The first method records sensor data and time with the SensorManager of Android. The second method implements the open, close, data receiving and saving functions in C, and calls the sensor functions in Java through the JNI interface. Data acquisition software was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools) and NDK (Native Development Kit). The software can record camera data, sensor data and time simultaneously. Data acquisition experiments were performed with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first method of sensor data acquisition is convenient but sometimes loses sensor data, while the second method offers much better real-time performance and far less data loss. A checkerboard image was recorded, and the corner points of the checkerboard were detected with the Harris method. The sensor data of the gyroscope, accelerometer and magnetometer were recorded for about 30 minutes, and the bias stability and noise characteristics of the sensors were analyzed. Besides indoor integrated navigation, the integrated navigation and synchronous data acquisition method can be applied to outdoor navigation.
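Once sensor samples and camera frames are logged with timestamps, the synchronization described above reduces, in post-processing, to interpolating the sensor streams onto the frame times; the sketch below assumes simple timestamped arrays rather than the authors' Android recording format.

```python
# Sketch: align high-rate sensor streams to camera frame timestamps.
import numpy as np

def align_to_frames(sensor_t, sensor_vals, frame_t):
    """Linearly interpolate each sensor channel onto camera frame times."""
    return np.stack(
        [np.interp(frame_t, sensor_t, sensor_vals[:, i])
         for i in range(sensor_vals.shape[1])], axis=1)

gyro_t = np.array([0.000, 0.005, 0.010, 0.015])      # 200 Hz gyro (s)
gyro = np.random.randn(4, 3)                          # rad/s, placeholder
frame_t = np.array([0.0, 0.0333])                     # ~30 fps camera (s)
gyro_at_frames = align_to_frames(gyro_t, gyro, frame_t)
```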
Autonomous Navigation for Deep Space Missions
NASA Technical Reports Server (NTRS)
Bhaskaran, Shyam
2012-01-01
Navigation (determining where the spacecraft is at any given time, controlling its path to achieve desired targets), performed using ground-in-the-loop techniques: (1) Data includes 2-way radiometric (Doppler, range), interferometric (Delta-Differential One-way Range), and optical (images of natural bodies taken by onboard camera) (2) Data received on the ground, processed to determine orbit, commands sent to execute maneuvers to control orbit. A self-contained, onboard, autonomous navigation system can: (1) Eliminate delays due to round-trip light time (2) Eliminate the human factors in ground-based processing (3) Reduce turnaround time from navigation update to minutes, down to seconds (4) React to late-breaking data. At JPL, we have developed the framework and computational elements of an autonomous navigation system, called AutoNav. It was originally developed as one of the technologies for the Deep Space 1 mission, launched in 1998; subsequently used on three other spacecraft, for four different missions. The primary use has been on comet missions to track comets during flybys, and impact one comet.
Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)
NASA Astrophysics Data System (ADS)
Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy
2013-05-01
GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.
Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.
Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing
2016-01-01
The accuracy of optical tracking systems is of central importance, and with the improvements reported in this regard, such systems have been applied to an increasing number of operations. To further enhance the accuracy of these systems and to reduce the effects of synchronization and visual-field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronization errors, and an error distribution map over the field of view. Synchronization control maximizes the parallel processing capability of the FPGA, and synchronization error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors across the field of view can be read from the aforementioned error distribution map. Doctors can therefore perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments involving the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and differing fields of view.
High-Resolution Mars Camera Test Image of Moon (Infrared)
NASA Technical Reports Server (NTRS)
2005-01-01
This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.
Augmented Reality-Based Navigation System for Wrist Arthroscopy: Feasibility
Zemirline, Ahmed; Agnus, Vincent; Soler, Luc; Mathoulin, Christophe L.; Liverneaux, Philippe A.; Obdeijn, Miryam
2013-01-01
Purpose In video surgery, and more specifically in arthroscopy, one of the major problems is positioning the camera and instruments within the anatomic environment. The concept of computer-guided video surgery has already been used in ear, nose, and throat (ENT), gynecology, and even in hip arthroscopy. These systems, however, rely on optical or mechanical sensors, which turn out to be restricting and cumbersome. The aim of our study was to develop and evaluate the accuracy of a navigation system based on electromagnetic sensors in video surgery. Methods We used an electromagnetic localization device (Aurora, Northern Digital Inc., Ontario, Canada) to track the movements in space of both the camera and the instruments. We have developed a dedicated application in the Python language, using the VTK library for the graphic display and the OpenCV library for camera calibration. Results A prototype has been designed and evaluated for wrist arthroscopy. It allows display of the theoretical position of instruments onto the arthroscopic view with useful accuracy. Discussion The augmented reality view represents valuable assistance when surgeons want to position the arthroscope or locate their instruments. It makes the maneuver more intuitive, increases comfort, saves time, and enhances concentration. PMID:24436832
Optic flow-based collision-free strategies: From insects to robots.
Serres, Julien R; Ruffier, Franck
2017-09-01
Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e., they cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on the feedback loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots, with the aim of mimicking flying insects' abilities and better understanding their flight. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
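The ratio in points (i) and (ii) above can be written as a one-line model: for pure translation at speed v past an object at distance D, seen at bearing θ from the direction of travel, the optic flow magnitude is ω = (v/D)·sin θ. A small illustration:

```python
# Worked form of the speed-over-distance ratio in the abstract.
import numpy as np

def translational_optic_flow(speed, distance, bearing_rad):
    """Optic flow magnitude (rad/s) at a given bearing from the direction
    of travel, for pure translation."""
    return (speed / distance) * np.sin(bearing_rad)

# A bee flying 2 m/s past a wall 0.5 m away sees, abeam (90 degrees),
# an optic flow of 4 rad/s; doubling the distance halves the flow.
print(translational_optic_flow(2.0, 0.5, np.pi / 2))   # 4.0
```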
Zhang, Zeshu; Pei, Jing; Wang, Dong; Gan, Qi; Ye, Jian; Yue, Jian; Wang, Benzhong; Povoski, Stephen P; Martin, Edward W; Hitchcock, Charles L; Yilmaz, Alper; Tweedle, Michael F; Shao, Pengfei; Xu, Ronald X
2016-01-01
Surgical resection remains the primary curative treatment for many early-stage cancers, including breast cancer. The development of intraoperative guidance systems for identifying all sites of disease and improving the likelihood of complete surgical resection is an area of active ongoing research, as this can lead to a decrease in the need of subsequent additional surgical procedures. We develop a wearable goggle navigation system for dual-mode optical and ultrasound imaging of suspicious lesions. The system consists of a light source module, a monochromatic CCD camera, an ultrasound system, a Google Glass, and a host computer. It is tested in tissue-simulating phantoms and an ex vivo human breast tissue model. Our experiments demonstrate that the surgical navigation system provides useful guidance for localization and core needle biopsy of simulated tumor within the tissue-simulating phantom, as well as a core needle biopsy and subsequent excision of Indocyanine Green (ICG)-fluorescing sentinel lymph nodes. Our experiments support the contention that this wearable goggle navigation system can be potentially very useful and fully integrated by the surgeon for optimizing many aspects of oncologic surgery. Further engineering optimization and additional in vivo clinical validation work is necessary before such a surgical navigation system can be fully realized in the everyday clinical setting.
A goggle navigation system for cancer resection surgery
NASA Astrophysics Data System (ADS)
Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald
2014-02-01
We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head mount display (HMD) device, a near infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and the background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real-time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.
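The real-time display step described above (co-registering the NIR fluorescence image onto the background video) can be sketched as a warp-and-blend; the homography is assumed to come from a prior co-registration, and the pseudo-color choice is illustrative.

```python
# Sketch: blend a co-registered NIR fluorescence frame onto the
# background video frame for the head-mounted display.
import cv2
import numpy as np

def overlay(background_bgr, fluorescence_gray, H_fl2bg, alpha=0.4):
    """background_bgr: 8-bit color frame; fluorescence_gray: 8-bit NIR
    frame; H_fl2bg: 3x3 homography from a prior co-registration."""
    h, w = background_bgr.shape[:2]
    warped = cv2.warpPerspective(fluorescence_gray, H_fl2bg, (w, h))
    colored = cv2.applyColorMap(warped, cv2.COLORMAP_JET)
    return cv2.addWeighted(background_bgr, 1.0 - alpha, colored, alpha, 0)
```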
Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture
2006-07-01
The vehicle integrates wireless data and emergency-stop radios, a GPS receiver and antenna, an inertial navigation unit, dual stereo cameras, and infrared sensors; wheel motors and camera controls serve as actuators, with signal scaling and filtering between sensing and command paths. The sensors used in the sensory processing module include the two pairs of stereo color cameras, the physical bumper and infrared bumper sensors, and motor current feedback.
A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.
2009-01-01
The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black-and-white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and they act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black-and-white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black-and-white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.
The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation
NASA Astrophysics Data System (ADS)
Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team
2002-12-01
The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover. Pancam utilizes two 1024x2048 Mitel frame transfer CCD detector arrays, each having a 1024x1024 active imaging area and 32 optional additional reference pixels per row for offset monitoring. Each array is combined with optics and a small filter wheel to become one "eye" of a multispectral, stereoscopic imaging system. The optics for both cameras consist of identical 3-element symmetrical lenses with an effective focal length of 42 mm and a focal ratio of f/20, yielding an IFOV of 0.28 mrad/pixel or a rectangular FOV of 16° × 16° per eye. The two eyes are separated by 30 cm horizontally and have a 1° toe-in to provide adequate parallax for stereo imaging. The cameras are boresighted with adjacent wide-field stereo Navigation Cameras, as well as with the Mini-TES instrument. The Pancam optical design is optimized for best focus at 3 meters range, and allows Pancam to maintain acceptable focus from infinity to within 1.5 meters of the rover, with a graceful degradation (defocus) at closer ranges. Each eye also contains a small 8-position filter wheel to allow multispectral sky imaging, direct Sun imaging, and surface mineralogic studies in the 400-1100 nm wavelength region. Pancam has been designed and calibrated to operate within specifications from -55°C to +5°C. An onboard calibration target and fiducial marks provide the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.
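As a quick plausibility check on the stated optics, the IFOV follows from pixel pitch over focal length; the ~12 μm pitch assumed below is not stated in the abstract.

```python
# IFOV = pixel_pitch / focal_length, assuming a ~12 um CCD pixel pitch.
pixel_pitch_m = 12e-6
focal_length_m = 0.042
print(pixel_pitch_m / focal_length_m * 1e3)  # ~0.286 mrad/pixel, matching 0.28
```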
NASA Astrophysics Data System (ADS)
Suzuki, H.; Yamada, M.; Kouyama, T.; Tatsumi, E.; Kameda, S.; Honda, R.; Sawada, H.; Ogawa, N.; Morota, T.; Honda, C.; Sakatani, N.; Hayakawa, M.; Yokota, Y.; Yamamoto, Y.; Sugita, S.
2018-01-01
Hayabusa2, the first sample return mission to a C-type asteroid, was launched by the Japan Aerospace Exploration Agency (JAXA) on December 3, 2014 and will arrive at the asteroid in the middle of 2018 to collect samples from its surface, which may contain both hydrated minerals and organics. The optical navigation camera (ONC) system on board Hayabusa2 consists of three individual framing CCD cameras: ONC-T for a telescopic nadir view, ONC-W1 for a wide-angle nadir view, and ONC-W2 for a wide-angle slant view. The cameras will be used to observe the surface of Ryugu and to measure the global asteroid shape, local morphologies, and visible spectroscopic properties. Thus, image data obtained by the ONC will provide essential information for selecting landing (sampling) sites on the asteroid. This study reports the results of initial inflight calibration based on observations of the Earth, Mars, the Moon, and stars to verify and characterize the optical performance of the ONC, such as flat-field sensitivity, spectral sensitivity, point-spread function (PSF), distortion, and stray light of ONC-T, and distortion for ONC-W1 and W2. We found some potential problems that may influence our science observations. These include changes in the flat-field sensitivity of all bands from the values measured in the pre-flight calibration, and the existence of stray light that arises under certain conditions of spacecraft attitude with respect to the sun. The countermeasures for these problems were evaluated using data obtained during initial in-flight calibration. The results of our inflight calibration indicate that the error of spectroscopic measurements around 0.7 μm using the 0.55, 0.70, and 0.86 μm bands of ONC-T can be lower than 0.7% after these countermeasures and pixel binning. This result suggests that ONC-T should be able to detect the typical strength (∼3%) of the serpentine absorption band often found on CM chondrites and low-albedo asteroids with ≥ 4σ confidence.
Altair Navigation During Trans-Lunar Cruise, Lunar Orbit, Descent and Landing
NASA Technical Reports Server (NTRS)
Ely, Todd A.; Heyne, Martin; Riedel, Joseph E.
2010-01-01
The Altair lunar lander navigation system is driven by a set of requirements that not only specify a need to land within 100 m of a designated spot on the Moon, but also to be capable of a safe return to an orbiting Orion capsule in the event of loss of Earth ground support. These requirements lead to the need for a robust and capable on-board navigation system that works in conjunction with an Earth ground navigation system that relies primarily on ground-based radiometric tracking. The resulting system combines a multiplicity of data types, including navigation state updates from the ground-based navigation system, passive optical imaging from a gimbaled camera, a stable inertial measurement unit, and a capable radar altimeter and velocimeter. The focus of this paper is on navigation performance during the trans-lunar cruise, lunar orbit, and descent/landing mission phases, with the goals of characterizing knowledge and delivery errors at key mission events, bounding the statistical delta-V costs for executing the mission, and determining the landing dispersions due to navigation. This study examines the nominal performance that can be obtained using the current best estimates of the vehicle, sensor, and environment models. Performance of the system under a variety of sensor outages and parametric trades is also examined.
Landmark-aided localization for air vehicles using learned object detectors
NASA Astrophysics Data System (ADS)
DeAngelo, Mark Patrick
This research presents two methods to localize an aircraft without GPS using fixed landmarks observed from an optical sensor. Onboard absolute localization is useful for vehicle navigation free from an external network. The objective is to achieve practical navigation performance using available autopilot hardware and a downward-pointing camera. The first method uses computer vision cascade object detectors, which are trained to detect predetermined, distinct landmarks prior to a flight. The first method also concurrently explores aircraft localization using roads between landmark updates. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement updates when landmarks are detected. The sensor measurements and landmark coordinates extracted from the aircraft's camera images are combined in an unscented Kalman filter to obtain an estimate of the aircraft's position and wind velocities. The second method uses computer vision object detectors to detect abundant generic landmarks, referred to as buildings, fields, trees, and road intersections, from aerial perspectives. Various landmark attributes and spatial relationships to other landmarks are used to help associate observed landmarks with reference landmarks. The computer vision algorithms automatically extract reference landmarks from maps, which are processed offline before a flight. During a flight, the aircraft navigates with attitude, heading, airspeed, and altitude measurements and obtains measurement corrections by processing aerial photos with similar generic landmark detection techniques. The method also combines sensor measurements and landmark coordinates in an unscented Kalman filter to obtain an estimate of the aircraft's position and wind velocities.
Construct and face validity of a virtual reality-based camera navigation curriculum.
Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J
2012-10-01
Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.
Localization and Mapping Using a Non-Central Catadioptric Camera System
NASA Astrophysics Data System (ADS)
Khurana, M.; Armenakis, C.
2018-05-01
This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system which consists of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
Piao, Jin-Chun; Kim, Shin-Dug
2017-11-07
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications on mobile devices. First, the SLAM system is implemented based on a visual-inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m on the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when the different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the performance improvement achieved by the proposed method.
Tracking multiple surgical instruments in a near-infrared optical system.
Cai, Ken; Yang, Rongqian; Lin, Qinyong; Wang, Zhigang
2016-12-01
Surgical navigation systems can assist doctors in performing more precise and more efficient surgical procedures and in avoiding various accidents. The near-infrared optical system (NOS) is an important component of surgical navigation systems. However, several surgical instruments are used during surgery, and effectively tracking all of them is challenging. A stereo matching algorithm using two intersecting lines and surgical instrument codes is proposed in this paper. In our NOS, the markers on the surgical instruments can be captured by two near-infrared cameras. After automatically searching for and extracting their subpixel coordinates in the left and right images, the coordinates of the real and pseudo markers are determined by the two intersecting lines. Finally, the pseudo markers are removed to achieve accurate stereo matching by summing the codes for the distances between a specific marker and the other two markers on the surgical instrument. Experimental results show that the markers on the different surgical instruments can be automatically and accurately recognized, and that the NOS can accurately track multiple surgical instruments.
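A simplified version of the disambiguation idea, assuming candidate 3D marker positions have already been triangulated: the true markers are the subset whose pairwise distances reproduce the tool's known distance code. The three-marker geometry and tolerance below are illustrative, not the paper's coding scheme.

```python
# Sketch: pick the marker triple whose pairwise distances match the
# tool's known inter-marker distance signature, rejecting stereo ghosts.
import numpy as np
from itertools import combinations

def find_tool(points, tool_dists, tol=1.0):
    """points: (N,3) candidate markers; tool_dists: sorted known pairwise
    distances (mm) of the tool's three markers. Returns matching indices."""
    for triple in combinations(range(len(points)), 3):
        p = points[list(triple)]
        d = sorted(np.linalg.norm(p[i] - p[j])
                   for i, j in combinations(range(3), 2))
        if np.allclose(d, tool_dists, atol=tol):
            return triple
    return None
```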
Fluorescence spectroscopy using indocyanine green for lymph node mapping
NASA Astrophysics Data System (ADS)
Haj-Hosseini, Neda; Behm, Pascal; Shabo, Ivan; Wârdell, Karin
2014-02-01
The principle of cancer treatment has for years been radical resection of the primary tumor. In oncologic surgeries where the affected cancer site is close to the lymphatic system, it is equally important to examine the draining lymph nodes for metastasis (lymph node mapping). As a replacement for conventional radioactive labeling, indocyanine green (ICG) has shown successful results in lymph node mapping; however, most of the ICG fluorescence detection techniques developed are based on camera imaging. In this work, fluorescence spectroscopy using a fiber-optic probe was evaluated on a tissue-like ICG phantom with ICG concentrations of 6-64 μM and on breast tissue from five patients. Fiber-optic spectroscopy was able to detect ICG fluorescence at low intensities; it is therefore expected to detect signals below the threshold of conventional imaging systems when used intraoperatively. The probe allows spectral characterization of the fluorescence and navigation within the tissue, as opposed to camera imaging, which is limited to a view of the surface of the tissue.
Clementine Observes the Moon, Solar Corona, and Venus
NASA Technical Reports Server (NTRS)
1997-01-01
In 1994, during its flight, the Clementine spacecraft returned images of the Moon. In addition to the geologic mapping cameras, the Clementine spacecraft also carried two Star Tracker cameras for navigation. These lightweight (0.3 kg) cameras kept the spacecraft on track by constantly observing the positions of stars, reminiscent of the age-old seafaring tradition of sextant/star navigation. These navigation cameras were also used to take some spectacular wide-angle images of the Moon.
In this picture the Moon is seen illuminated solely by light reflected from the Earth--Earthshine! The bright glow on the lunar horizon is caused by light from the solar corona; the sun is just behind the lunar limb. Caught in this image is the planet Venus at the top of the frame.
Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments
Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis
2012-01-01
Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations. PMID:22736999
Real-time optical flow estimation on a GPU for a skid-steered mobile robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-04-01
Accurate egomotion estimation is required for mobile robot navigation. Often the egomotion is estimated using optical flow algorithms. For an accurate estimation of optical flow, most modern algorithms require large memory resources and high processor speed. However, the simple single-board computers that control the motion of a robot usually do not provide such resources. On the other hand, most modern single-board computers are equipped with an embedded GPU that could be used in parallel with the CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used. Layer segmentation is performed using a graph-cut algorithm with a time-derivative-based energy function. Such an approach makes the algorithm both fast and robust in low-light and low-texture conditions. The algorithm's implementation for a Raspberry Pi Model B computer is discussed. For evaluation of the algorithm, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera. The evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules mobile robot. The algorithm was also evaluated using the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
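The phase-correlation core that such an algorithm builds on can be sketched in a few lines: the normalized cross-power spectrum of two image blocks has an inverse transform peaked at their relative shift. This NumPy version is a CPU illustration of the principle, not the GPU implementation.

```python
# Sketch: block shift estimation by phase correlation.
import numpy as np

def phase_correlate(block_a, block_b, eps=1e-9):
    Fa, Fb = np.fft.fft2(block_a), np.fft.fft2(block_b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed pixel shifts.
    h, w = corr.shape
    return (dx if dx <= w // 2 else dx - w,
            dy if dy <= h // 2 else dy - h)
```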
Generation of High-Resolution Geo-referenced Photo-Mosaics From Navigation Data
NASA Astrophysics Data System (ADS)
Delaunoy, O.; Elibol, A.; Garcia, R.; Escartin, J.; Fornari, D.; Humphris, S.
2006-12-01
Optical images of the ocean floor are a rich source of data for understanding biological and geological processes. However, due to the attenuation of light in sea water, the area covered by optical systems is very limited, and a large number of images are therefore needed to cover an area of interest, as individually they do not provide a global view of the surveyed area. Generating a composite view (or photo-mosaic) from multiple overlapping images is usually the most practical and flexible solution to visually cover a wide area, allowing the analysis of the site in one single representation of the ocean floor. In most camera surveys carried out nowadays, some sort of positioning information is available (e.g., USBL, DVL, INS, gyros, etc.). For a towed camera, an estimate of the tether length together with the mother ship's GPS reading can also serve as navigation data. In any case, a photo-mosaic can be built just by taking into account the position and orientation of the camera. On the other hand, most of the regions of interest to the scientific community are quite large (>1 km2), and since better resolution is always required, the final photo-mosaic can be very large (>1,000,000 × 1,000,000 pixels) and cannot be handled by commonly available software. For this reason, we have developed a software package able to load a navigation file and the sequence of acquired images to automatically build a geo-referenced mosaic. This navigated mosaic provides a global view of the site of interest at the maximum available resolution. The developed package includes a viewer, allowing the user to load, view and annotate these geo-referenced photo-mosaics on a personal computer. A software library has been developed to allow the viewer to manage such very big images; the size of the resulting mosaic is therefore limited only by the size of the hard drive. Work is being carried out to apply image processing techniques to the navigated mosaic, with the intention of locally improving image alignment. Tests have been conducted using the data acquired during the cruise LUSTRE'96 (LUcky STRike Exploration, 37°17'N 32°17'W) by WHOI. During this cruise, the ARGO-II tethered vehicle acquired ~21,000 images over a ~1 km2 area of the seafloor to map the geology of this hydrothermal field at high resolution. The obtained geo-referenced photo-mosaic has a resolution of 1.5 cm per pixel, with a coverage of ~25% of the Lucky Strike area. Data and software will be made publicly available.
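A minimal sketch of navigation-only mosaicking of the kind described, assuming a flat seafloor, a known ground resolution, and heading from the navigation file; real systems resample properly instead of the nearest-pixel pasting used here.

```python
# Sketch: drop an image into a geo-referenced canvas using only the
# vehicle's position and heading. All parameters are illustrative.
import numpy as np

def place_image(mosaic, image, east_m, north_m, heading_rad,
                origin_en, m_per_px):
    """Rotate the grayscale image by heading and paste it at its geo
    position on the mosaic canvas (north up)."""
    h, w = image.shape
    cy, cx = h / 2, w / 2
    col0 = int((east_m - origin_en[0]) / m_per_px)
    row0 = int((origin_en[1] - north_m) / m_per_px)
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    for r in range(h):
        for q in range(w):
            # Rotate the pixel offset about the image center.
            e = c * (q - cx) - s * (r - cy)
            n = s * (q - cx) + c * (r - cy)
            rr, cc = row0 - int(n), col0 + int(e)
            if 0 <= rr < mosaic.shape[0] and 0 <= cc < mosaic.shape[1]:
                mosaic[rr, cc] = image[r, q]
```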
Underwater Multi-Vehicle Trajectory Alignment and Mapping Using Acoustic and Optical Constraints
Campos, Ricard; Gracias, Nuno; Ridao, Pere
2016-01-01
Multi-robot formations are an important advance in recent robotic developments, as they allow a group of robots to merge their capacities and perform surveys in a more convenient way. With the aim of keeping the costs and acoustic communications to a minimum, cooperative navigation of multiple underwater vehicles is usually performed at the control level. In order to maintain the desired formation, individual robots just react to simple control directives extracted from range measurements or ultra-short baseline (USBL) systems. Thus, the robots are unaware of their global positioning, which presents a problem for the further processing of the collected data. The aim of this paper is two-fold. First, we present a global alignment method to correct the dead reckoning trajectories of multiple vehicles to resemble the paths followed during the mission using the acoustic messages passed between vehicles. Second, we focus on the optical mapping application of these types of formations and extend the optimization framework to allow for multi-vehicle geo-referenced optical 3D mapping using monocular cameras. The inclusion of optical constraints is not performed using the common bundle adjustment techniques, but in a form improving the computational efficiency of the resulting optimization problem and presenting a generic process to fuse optical reconstructions with navigation data. We show the performance of the proposed method on real datasets collected within the Morph EU-FP7 project. PMID:26999144
An Intraocular Camera for Retinal Prostheses: Restoring Sight to the Blind
NASA Astrophysics Data System (ADS)
Stiles, Noelle R. B.; McIntosh, Benjamin P.; Nasiatka, Patrick J.; Hauer, Michelle C.; Weiland, James D.; Humayun, Mark S.; Tanguay, Armand R., Jr.
Implantation of an intraocular retinal prosthesis represents one possible approach to the restoration of sight in those with minimal light perception due to photoreceptor degenerating diseases such as retinitis pigmentosa and age-related macular degeneration. In such an intraocular retinal prosthesis, a microstimulator array attached to the retina is used to electrically stimulate still-viable retinal ganglion cells that transmit retinotopic image information to the visual cortex by means of the optic nerve, thereby creating an image percept. We describe herein an intraocular camera that is designed to be implanted in the crystalline lens sac and connected to the microstimulator array. Replacement of an extraocular (head-mounted) camera with the intraocular camera restores the natural coupling of head and eye motion associated with foveation, thereby enhancing visual acquisition, navigation, and mobility tasks. This research is in no small part inspired by the unique scientific style and research methodologies that many of us have learned from Prof. Richard K. Chang of Yale University, and is included herein as an example of the extent and breadth of his impact and legacy.
Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith
2007-07-01
The aim was to reduce the operating time in computer-assisted navigated total knee replacement (TKR) by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared camera can be aimed precisely at the trackers located on the knee being operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capability of the camera. Without the laser pointer, the camera had to be moved a mean of 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.
ULTOR® Passive Pose and Position Engine For Spacecraft Relative Navigation
NASA Technical Reports Server (NTRS)
Hannah, S. Joel
2008-01-01
The ULTOR® Passive Pose and Position Engine (P3E) technology, developed by Advanced Optical Systems, Inc (AOS), uses real-time image correlation to provide relative position and pose data for spacecraft guidance, navigation, and control. Potential data sources include a wide variety of sensors, including visible and infrared cameras. ULTOR® P3E has been demonstrated on a number of host processing platforms. NASA is integrating ULTOR® P3E into its Relative Navigation System (RNS), which is being developed for the upcoming Hubble Space Telescope (HST) Servicing Mission 4 (SM4). During SM4, ULTOR® P3E will perform real-time pose and position measurements during both the approach and departure phases of the mission. This paper describes the RNS implementation of ULTOR® P3E, and presents results from NASA's hardware-in-the-loop simulation testing against the HST mockup.
Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P
2017-01-01
Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247cm), 2) the distance from the grid to the patient tracker device (range 20 to 40cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance from each measured point to the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.
Piao, Jin-Chun; Kim, Shin-Dug
2017-01-01
Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the proposed method. PMID:29112143
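The adaptive execution module can be pictured as a small dispatcher that falls back to the full visual–inertial optimization only when the cheap optical-flow tracker is unreliable. The sketch below is a guess at that structure: the two odometry objects, the thresholds, and the method names are illustrative stubs, not the paper's API.

```python
# Hedged sketch of an adaptive execution module: dynamically choose between a
# full visual-inertial odometry (VIO) update and a cheaper optical-flow-based
# visual odometry, based on motion magnitude and tracking quality.
class AdaptiveSlamFrontend:
    def __init__(self, vio, flow_vo, flow_thresh=2.0, quality_thresh=0.6):
        self.vio = vio                   # accurate but expensive backend (stub)
        self.flow_vo = flow_vo           # fast optical-flow pose tracker (stub)
        self.flow_thresh = flow_thresh   # median optical flow [px/frame]
        self.quality_thresh = quality_thresh  # fraction of inlier tracks

    def process(self, frame, imu_batch):
        flow_mag, quality = self.flow_vo.track(frame)
        # small motion and good tracking -> the cheap tracker is sufficient
        if flow_mag < self.flow_thresh and quality > self.quality_thresh:
            return self.flow_vo.pose()
        # otherwise fall back to the full visual-inertial optimization
        return self.vio.update(frame, imu_batch)
```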
Terrain shape estimation from optical flow, using Kalman filtering
NASA Astrophysics Data System (ADS)
Hoff, William A.; Sklair, Cheryl W.
1990-01-01
As one moves through a static environment, the visual world as projected on the retina seems to flow past. This apparent motion, called optical flow, can be an important source of depth perception for autonomous robots. An important application is in planetary exploration: the landing vehicle must find a safe landing site in rugged terrain, and an autonomous rover must be able to navigate safely through this terrain. In this paper, we describe a solution to this problem. Image edge points are tracked between frames of a motion sequence, and the range to the points is calculated from the displacement of the edge points and the known motion of the camera. Kalman filtering is used to incrementally improve the range estimates to those points, and provide an estimate of the uncertainty in each range. Errors in camera motion and image point measurement can also be modelled with Kalman filtering. A surface is then interpolated to these points, providing a complete map from which hazards such as steeply sloping areas can be detected. Using the method of extended Kalman filtering, our approach allows arbitrary camera motion. Preliminary results of an implementation are presented, and show that the resulting range accuracy is on the order of 1-2% of the range.
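The incremental refinement can be illustrated with a scalar Kalman filter on a single tracked edge point, assuming a known lateral camera baseline between frames so that the point's image displacement yields a triangulated range. The focal length, noise levels, and first-order variance propagation below are illustrative, not the paper's values.

```python
import numpy as np

# Hedged sketch: refine the range to one tracked edge point with a scalar
# Kalman filter. An edge point at range z shifts by roughly d = f*b/z pixels
# for a lateral inter-frame baseline b.
f = 800.0          # focal length [px]
sigma_d = 0.5      # edge localization noise [px]
z_hat, P = 50.0, 25.0**2   # initial range estimate and variance [m, m^2]
z_true = 20.0

rng = np.random.default_rng(0)
for _ in range(20):
    b = 0.1                                        # known baseline [m]
    d = f * b / z_true + rng.normal(0.0, sigma_d)  # measured displacement [px]
    z_meas = f * b / d                             # range from triangulation
    R = (f * b / d**2) ** 2 * sigma_d**2           # first-order meas. variance
    K = P / (P + R)                                # Kalman gain (H = 1)
    z_hat = z_hat + K * (z_meas - z_hat)           # measurement update
    P = (1.0 - K) * P
print(f"range estimate {z_hat:.2f} m, std {P**0.5:.2f} m")
```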
Evaluation of the ROSA™ Spine robot for minimally invasive surgical procedures.
Lefranc, M; Peltier, J
2016-10-01
The ROSA® robot (Medtech, Montpellier, France) is a new medical device designed to assist the surgeon during minimally invasive spine procedures. The device comprises a patient-side cart (bearing the robotic arm and a workstation) and an optical navigation camera. The ROSA® Spine robot enables accurate pedicle screw placement. Thanks to its robotic arm and navigation abilities, the robot monitors movements of the spine throughout the entire surgical procedure and thus enables accurate, safe arthrodesis for the treatment of degenerative lumbar disc diseases, exactly as planned by the surgeon. Development perspectives include (i) assistance at all levels of the spine, (ii) improved planning abilities (virtualization of the entire surgical procedure) and (iii) use for almost any percutaneous spinal procedure not limited to screw positioning, such as percutaneous endoscopic lumbar discectomy, intracorporeal implant positioning, over-the-top laminectomy or radiofrequency ablation.
Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.
2013-01-01
This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies an orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
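A hedged sketch of the rotational part of such an alignment: if the same line-of-sight unit vectors (e.g., to the fixated area) can be expressed in both the camera frame and the INS frame at several times, the fixed rotation between the frames follows in closed form from the Kabsch/SVD construction. This is an illustrative stand-in, not the software's actual algorithm.

```python
import numpy as np

# Hedged sketch: recover a fixed camera-to-INS rotation from corresponding
# unit direction vectors expressed in both frames (Kabsch/SVD method).
def kabsch_rotation(v_cam, v_ins):
    """v_cam, v_ins: (N, 3) arrays of corresponding unit vectors. Returns R
    such that v_ins ~= v_cam @ R.T (R maps camera frame to INS frame)."""
    H = v_cam.T @ v_ins
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force a proper rotation (det(R) = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```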
Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.
Gakne, Paul Verlaine; O'Keefe, Kyle
2018-04-17
This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera will be used to classify the acquired images into sky and non-sky (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) will be rejected and not considered for the final position solution computation. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for urban area ego-motion estimation in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly-coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions and better accuracy than the GNSS-only and loosely-coupled GNSS/vision solutions, by 20 percent and 82 percent (in the worst case) respectively, in a deep urban canyon, even in conditions with fewer than four GNSS satellites.
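The satellite-rejection step can be sketched as a projection of each satellite's azimuth/elevation into the upward-facing image followed by a mask lookup. The equidistant fisheye model and the precomputed binary sky mask below are assumptions for illustration; the paper computes the sky/non-sky segmentation from the imagery itself.

```python
import numpy as np

# Hedged sketch: keep a GNSS satellite only if its line of sight projects onto
# a sky pixel of the upward-facing camera's segmented image.
def keep_satellite(az, el, sky_mask, f_px, cx, cy):
    """az, el in radians; sky_mask: HxW bool image (True = open sky)."""
    theta = np.pi / 2 - el               # zenith angle of the satellite
    r = f_px * theta                     # equidistant fisheye projection
    u = int(round(cx + r * np.sin(az)))  # image column
    v = int(round(cy - r * np.cos(az)))  # image row (y points down)
    h, w = sky_mask.shape
    if not (0 <= u < w and 0 <= v < h):
        return False                     # outside the ~90 degree field of view
    return bool(sky_mask[v, u])          # True -> use this satellite
```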
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.
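For reference, the well-accepted OpenCV baseline mentioned above looks roughly like the following (rdCalib itself is commercial, so its API is not shown). The chessboard pattern size and the file names are placeholders.

```python
import cv2
import numpy as np

# Hedged sketch of conventional multi-image chessboard calibration with OpenCV.
pattern = (9, 6)                                    # inner corners per row/col
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, img_size = [], [], None
for name in [f"calib_{i:02d}.png" for i in range(30)]:  # ~30 views, as in the text
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)
        img_size = gray.shape[::-1]

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)
print("reprojection RMS [px]:", rms, "\nintrinsics K:\n", K)
```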
van Oosterom, Matthias N; van der Poel, Henk G; Navab, Nassir; van de Velde, Cornelis J H; van Leeuwen, Fijs W B
2018-03-01
To provide an overview of the developments made for virtual- and augmented-reality navigation procedures in urological interventions/surgery. Navigation efforts have demonstrated potential in the field of urology by supporting guidance for various disorders. The navigation approaches differ between the individual indications, but seem interchangeable to a certain extent. An increasing number of pre- and intra-operative imaging modalities has been used to create detailed surgical roadmaps, namely: (cone-beam) computed tomography, MRI, ultrasound, and single-photon emission computed tomography. Registration of these surgical roadmaps with the real-life surgical view has occurred in different forms (e.g. electromagnetic, mechanical, vision, or near-infrared optical-based), whereby the combination of approaches was suggested to provide superior outcome. Soft-tissue deformations demand the use of confirmatory interventional (imaging) modalities. This has resulted in the introduction of new intraoperative modalities such as drop-in US, transurethral US, (drop-in) gamma probes and fluorescence cameras. These noninvasive modalities provide an alternative to invasive technologies that expose the patients to X-ray doses. Whereas some reports have indicated navigation setups provide equal or better results than conventional approaches, most trials have been performed in relatively small patient groups and clear follow-up data are missing. The reported computer-assisted surgery research concepts provide a glimpse into the future application of navigation technologies in the field of urology.
Asteroid approach covariance analysis for the Clementine mission
NASA Technical Reports Server (NTRS)
Ionasescu, Rodica; Sonnabend, David
1993-01-01
The Clementine mission is designed to test Strategic Defense Initiative Organization (SDIO) technology, the Brilliant Pebbles and Brilliant Eyes sensors, by mapping the moon surface and flying by the asteroid Geographos. The capability of two of the instruments available on board the spacecraft, the lidar (laser radar) and the UV/Visible camera, is used in the covariance analysis to obtain the spacecraft delivery uncertainties at the asteroid. These uncertainties are due primarily to asteroid ephemeris uncertainties. Onboard optical navigation reduces the uncertainty in the knowledge of the spacecraft position in the direction perpendicular to the incoming asymptote to a one-sigma value of under 1 km, at the closest approach distance of 100 km. The uncertainty in the knowledge of the encounter time is about 0.1 seconds for a flyby velocity of 10.85 km/s. The magnitude of these uncertainties is due largely to Center Finding Errors (CFE). These systematic errors represent the accuracy expected in locating the center of the asteroid in the optical navigation images, in the absence of a topographic model for the asteroid. The direction of the incoming asymptote cannot be estimated accurately until minutes before the asteroid flyby, and correcting for it would require autonomous navigation. Orbit determination errors dominate over maneuver execution errors, and the final delivery accuracy attained is basically the orbit determination uncertainty before the final maneuver.
Crew-Aided Autonomous Navigation
NASA Technical Reports Server (NTRS)
Holt, Greg N.
2015-01-01
A sextant provides manual capability to perform star/planet-limb sightings and offers a cheap, simple, robust backup navigation source for exploration missions independent from the ground. Sextant sightings from spacecraft were first exercised in Gemini and flew as the lost-communication backup for all Apollo missions. This study characterized error sources of navigation-grade sextants for feasibility of taking star and planetary limb sightings from inside a spacecraft. A series of similar studies was performed in the early/mid-1960s in preparation for Apollo missions. This study modernized and updated those findings in addition to showing feasibility using Linear Covariance analysis techniques. The human eyeball is a remarkable piece of optical equipment and provides many advantages over camera-based systems, including dynamic range and detail resolution. This technique utilizes those advantages and provides important autonomy to the crew in the event of lost communication with the ground. It can also provide confidence and verification of low-TRL automated onboard systems. The technique is extremely flexible and is not dependent on any particular vehicle type. The investigation involved procuring navigation-grade sextants and characterizing their performance under a variety of conditions encountered in exploration missions. The JSC optical sensor lab and Orion mockup were the primary testing locations. For the accuracy assessment, a group of test subjects took sextant readings on calibrated targets while instrument/operator precision was measured. The study demonstrated repeatability of star/planet-limb sightings with bias and standard deviation around 10 arcseconds, then used high-fidelity simulations to verify those accuracy levels met the needs for targeting mid-course maneuvers in preparation for Earth reentry.
Lagudi, Antonio; Bianco, Gianfranco; Muzzupappa, Maurizio; Bruno, Fabio
2016-04-14
The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It allows for compensating the drawbacks of the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to make a point-to-point correspondence. This paper presents a multi-sensor registration for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimation of the unknown geometric transformation is obtained by a registration of the two 3D point clouds, but it turns out to be strongly affected by noise and data dispersion. A robust and optimal estimation is obtained by a statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.
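The per-pose estimation step has a standard closed form: given corresponding 3D points from the two sensors, the rigid transform follows from the Kabsch/Umeyama SVD construction, after which the per-pose results can be combined statistically as the paper describes. The sketch below shows only that closed form; the names are illustrative.

```python
import numpy as np

# Hedged sketch: closed-form rigid transform between corresponding 3D points
# from the stereo and acoustic sensors (Kabsch/Umeyama).
def rigid_transform(P, Q):
    """Return R, t minimizing ||Q - (R P + t)|| for (N, 3) correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Per-pose estimates could then be combined robustly, e.g. a median of the
# translations and a chordal mean of the rotations after outlier rejection.
```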
Overview of the Multi-Spectral Imager on the NEAR spacecraft
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1996-07-01
The Multi-Spectral Imager on the Near Earth Asteroid Rendezvous (NEAR) spacecraft is a 1 Hz frame rate CCD camera sensitive in the visible and near infrared bands (~400-1100 nm). MSI is the primary instrument on the spacecraft to determine morphology and composition of the surface of asteroid 433 Eros. In addition, the camera will be used to assist in navigation to the asteroid. The instrument uses refractive optics and has an eight position spectral filter wheel to select different wavelength bands. The MSI optical focal length of 168 mm gives a 2.9° × 2.25° field of view. The CCD is passively cooled and the 537×244 pixel array output is digitized to 12 bits. Electronic shuttering increases the effective dynamic range of the instrument by more than a factor of 100. A one-time deployable cover protects the instrument during ground testing operations and launch. A reduced aperture viewport permits full field of view imaging while the cover is in place. A Data Processing Unit (DPU) provides the digital interface between the spacecraft and the Camera Head and uses an RTX2010 processor. The DPU provides an eight frame image buffer, lossy and lossless data compression routines, and automatic exposure control. An overview of the instrument is presented and design parameters and trade-offs are discussed.
Optical navigation during the Voyager Neptune encounter
NASA Technical Reports Server (NTRS)
Riedel, J. E.; Owen, W. M., Jr.; Stuve, J. A.; Synnott, S. P.; Vaughan, R. M.
1990-01-01
Optical navigation techniques were required to successfully complete the planetary exploration phase of the NASA deep-space Voyager mission. The last of Voyager's planetary encounters, with Neptune, posed unique problems from an optical navigation standpoint. In this paper we briefly review general aspects of the optical navigation process as practiced during the Voyager mission, and discuss in detail particular features of the Neptune encounter which affected optical navigation. New approaches to the centerfinding problem were developed for both stars and extended bodies, and these are described. Results of the optical navigation data analysis are presented, as well as a description of the optical orbit determination system and results of its use during encounter. Partially as a result of the optical navigation processing, results of scientific significance were obtained. These results include the discovery and orbit determination of several new satellites of Neptune and the determination of the size of Triton, Neptune's largest moon.
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex with an ever increasing need of fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
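Two of the compared pipelines are readily reproduced with the OpenCV library the paper uses; the sketch below shows SIFT features matched with FLANN (with the usual ratio test) and Farneback dense optical flow. The file names and parameter values are placeholders.

```python
import cv2

# Hedged sketch of two of the compared pipelines using OpenCV.
prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# --- trained feature pipeline: SIFT + FLANN (KD-tree) with a ratio test ---
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(prev, None)
kp2, des2 = sift.detectAndCompute(curr, None)
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
matches = [m for m, n in flann.knnMatch(des1, des2, k=2)
           if m.distance < 0.7 * n.distance]

# --- dense pipeline: Farneback optical flow over the whole frame ---
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
print(len(matches), "SIFT matches; flow field shape", flow.shape)
```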
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
Water Ice Clouds as Seen from the Mars Exploration Rovers
NASA Astrophysics Data System (ADS)
Wolff, M. J.; Clancy, R. T.; Banfield, D.; Cuozzo, K.
2005-12-01
Water ice clouds that bear a striking resemblance to terrestrial cirrus (e.g., "Mare's tails") have been observed by the Panoramic Camera (Pancam), the Navigation Camera (Navcam), the Hazard Camera (Hazcam), and the Miniature Thermal Emission Spectrometer (Mini-TES) on board the Mars Exploration Rovers (MER). Such phenomena represent an opportunity to characterize local and regional scale meteorology as well as our understanding of the processes involved. However, a necessary first step is to adequately describe some basic properties of the detected clouds: 1) when are the clouds present (i.e., local time, season, etc.)? 2) where are the clouds present? That is to say, what is the relative frequency between the two rover sites as well as the connection to detections from orbiting spacecraft. 3) what are the observed morphologies? 4) what are the projected velocities (i.e., wind speeds and directions) associated with the clouds? 5) what is the abundance of water ice nuclei (i.e., optical depth)? Our talk will summarize our progress in answering the above questions, as well as provide initial results in connecting the observations to more global behavior in the Martian climate.
Augmentation method of XPNAV in Mars orbit based on Phobos and Deimos observations
NASA Astrophysics Data System (ADS)
Rong, Jiao; Luping, Xu; Zhang, Hua; Cong, Li
2016-11-01
Autonomous navigation for Mars probe spacecraft is required to reduce operation costs and enhance navigation performance in the future. X-ray pulsar-based navigation (XPNAV) is a potential candidate to meet this requirement. This paper addresses the use of Mars' natural satellites to improve XPNAV for Mars probe spacecraft. Two observation variables, the field angle and the direction vectors of Mars' natural satellites, are added into the XPNAV positioning system. The measurement model of the field angle and direction vectors is formulated by processing satellite images of Mars obtained from an optical camera. This measurement model is integrated into the spacecraft orbit dynamics to build the filter model. In order to estimate the position and velocity error of the spacecraft and reduce the impact of the system noise on navigation precision, an adaptive divided difference filter (ADDF) is applied. Numerical simulation results demonstrate that the performance of the ADDF is better than that of the unscented Kalman filter (UKF), the DDF, and the EKF. In view of the invisibility of Mars' natural satellites in some cases, a visibility condition analysis is given and the augmented XPNAV in different visibility conditions is numerically simulated. The simulation results show that the navigation precision is evidently improved by using the augmented XPNAV based on the field angle and natural satellites' direction vectors of Mars in comparison with the conventional XPNAV.
High resolution hybrid optical and acoustic sea floor maps (Invited)
NASA Astrophysics Data System (ADS)
Roman, C.; Inglis, G.
2013-12-01
This abstract presents a method for creating hybrid optical and acoustic sea floor reconstructions at centimeter scale grid resolutions with robotic vehicles. Multibeam sonar and stereo vision are two common sensing modalities with complementary strengths that are well suited for data fusion. We have recently developed an automated two stage pipeline to create such maps. The steps can be broken down as navigation refinement and map construction. During navigation refinement a graph-based optimization algorithm is used to align 3D point clouds created with both the multibeam sonar and stereo cameras. The process combats the typical growth in navigation error that has a detrimental effect on map fidelity and typically introduces artifacts at small grid sizes. During this process we are able to automatically register local point clouds created by each sensor to themselves and to each other where they overlap in a survey pattern. The process also estimates the sensor offsets, such as heading, pitch and roll, that describe how each sensor is mounted to the vehicle. The end results of the navigation step are a refined vehicle trajectory that ensures the point clouds from each sensor are consistently aligned, and the individual sensor offsets. In the mapping step, grid cells in the map are selectively populated by choosing data points from each sensor in an automated manner. The selection process is designed to pick points that preserve the best characteristics of each sensor and honor some specific map quality criteria to reduce outliers and ghosting. In general, the algorithm selects dense 3D stereo points in areas of high texture and point density. In areas where the stereo vision is poor, such as in a scene with low contrast or texture, multibeam sonar points are inserted in the map. This process is automated and results in a hybrid map populated with data from both sensors. Additional cross modality checks are made to reject outliers in a robust manner. The final hybrid map retains the strengths of both sensors and shows improvement over the single modality maps and a naively assembled multi-modal map where all the data points are included and averaged. Results will be presented from marine geological and archaeological applications using a 1350 kHz BlueView multibeam sonar and 1.3 megapixel digital still cameras.
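The selective grid-population step can be sketched as a per-cell rule that prefers stereo in textured areas and falls back to sonar elsewhere. The threshold and data layout below are illustrative, and the cross-modality outlier checks are omitted.

```python
import numpy as np

# Hedged sketch: populate one map cell from either stereo or sonar points.
def fuse_cell(stereo_z, stereo_texture, sonar_z, texture_thresh=0.2):
    """stereo_z/sonar_z: depths falling in this cell; texture in [0, 1]."""
    if len(stereo_z) > 0 and stereo_texture > texture_thresh:
        return np.median(stereo_z)   # stereo wins in high-texture areas
    if len(sonar_z) > 0:
        return np.median(sonar_z)    # sonar fills texture-poor areas
    return np.nan                    # leave the cell empty
```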
Relative attitude dynamics and control for a satellite inspection mission
NASA Astrophysics Data System (ADS)
Horri, Nadjim M.; Kristiansen, Kristian U.; Palmer, Phil; Roberts, Mark
2012-02-01
The problem of conducting an inspection mission from a chaser satellite orbiting a target spacecraft is considered. It is assumed that both satellites follow nearly circular orbits. The relative orbital motion is described by the Hill-Clohessy-Wiltshire equation. In the case of an elliptic relative orbit, it is shown that an inspection mission is feasible when the chaser is inertially pointing, provided that the camera mounted on the chaser satellite has sufficiently large field of view. The same possibility is shown when the optical axis of the chaser's camera points in, or opposite to, the tangential direction of the local vertical local horizontal frame. For an arbitrary relative orbit and arbitrary initial conditions, the concept of relative Euler angles is defined for this inspection mission. The expression of the desired relative angular velocity vector is derived as a function of Cartesian coordinates of the relative orbit. A quaternion feedback controller is then designed and shown to perform relative attitude control with admissible internal torques. Three different types of relative orbits are considered, namely the elliptic, Pogo and drifting relative orbits. Measurements of the relative orbital motion are assumed to be available from optical navigation.
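A quaternion feedback law of the kind mentioned is typically a PD law on the vector part of the error quaternion and the angular-velocity error; a minimal sketch, with illustrative gains and a scalar-last quaternion convention, follows.

```python
import numpy as np

# Hedged sketch of a standard quaternion feedback attitude controller.
def quat_mul(q, p):                      # scalar-last convention [x, y, z, w]
    qv, qw = q[:3], q[3]
    pv, pw = p[:3], p[3]
    return np.r_[qw * pv + pw * qv + np.cross(qv, pv), qw * pw - qv @ pv]

def quat_conj(q):
    return np.r_[-q[:3], q[3]]

def control_torque(q, q_des, w, w_des, Kp=0.5, Kd=2.0):
    q_err = quat_mul(quat_conj(q_des), q)   # rotation from desired to actual
    sign = 1.0 if q_err[3] >= 0 else -1.0   # take the short way around
    return -Kp * sign * q_err[:3] - Kd * (w - w_des)
```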
VCSELs in short-pulse operation for time-of-flight applications
NASA Astrophysics Data System (ADS)
Moench, Holger; Gronenborn, Stephan; Gu, Xi; Gudde, Ralph; Herper, Markus; Kolb, Johanna; Miller, Michael; Smeets, Michael; Weigl, Alexander
2018-02-01
VCSEL arrays are the ideal light source for 3D imaging applications. The narrow emission spectrum and the ability to produce short pulses make them superior to LEDs. Combined with fast photodiodes or special camera chips, spatial information can be obtained, which is needed in diverse applications like camera autofocus, indoor navigation, 3D object recognition, augmented reality or autonomously driving vehicles. Pulse operation at the ns scale and at low duty cycle can work with significantly higher current than traditionally used for VCSELs in continuous-wave operation. With reduced thermal limitations at low average heat dissipation, very high currents become feasible, and tens of watts of output power have been realized with small VCSEL chips. The optical emission pattern of VCSELs can be tailored to the desired field of view using beam shaping elements. Such optical elements also enable laser-safe class 1 products. A detailed analysis of the complete system and the operation mode is required to calculate the maximum permitted power for a safe system. The good VCSEL properties like robustness, stability over temperature and the potential for integrated solutions open a huge potential for VCSELs in new mass applications in the consumer and automotive markets.
HyMoTrack: A Mobile AR Navigation System for Complex Indoor Environments.
Gerstweiler, Georg; Vonach, Emanuel; Kaufmann, Hannes
2015-12-24
Navigating unknown, large indoor environments with static 2D maps is a challenge, especially when time is a critical factor. In order to provide a mobile assistant, capable of supporting people while navigating in indoor locations, an accurate and reliable localization system is required in almost every corner of the building. We present a solution to this problem through a hybrid tracking system specifically designed for complex indoor spaces, which runs on mobile devices like smartphones or tablets. The developed algorithm only uses the sensors built into standard mobile devices, especially the inertial sensors and the RGB camera. The combination of multiple optical tracking technologies, such as 2D natural features and features of more complex three-dimensional structures, guarantees the robustness of the system. All processing is done locally and no network connection is needed. State-of-the-art indoor tracking approaches mainly use radio-frequency signals like Wi-Fi or Bluetooth for localizing a user. In contrast to these approaches, the main advantage of the developed system is the capability of delivering a continuous 3D position and orientation of the mobile device with centimeter accuracy. This makes it usable for localization and 3D augmentation purposes, e.g. navigation tasks or location-based information visualization.
Mars Science Laboratory Engineering Cameras
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.
2012-01-01
NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
Optical surgical navigation system causes pulse oximeter malfunction.
Satoh, Masaaki; Hara, Tetsuhito; Tamai, Kenji; Shiba, Juntaro; Hotta, Kunihisa; Takeuchi, Mamoru; Watanabe, Eiju
2015-01-01
An optical surgical navigation system is used as a navigator to facilitate surgical approaches, and pulse oximeters provide valuable information for anesthetic management. However, saw-tooth waves on the monitor of a pulse oximeter and the inability of the pulse oximeter to accurately record percutaneous arterial oxygen saturation were observed when a surgeon started an optical navigation system. The current case is thought to be the first report of this navigation system interfering with pulse oximetry. The causes of pulse jamming and how to manage an optical navigation system are discussed.
Infrared thermal imagers for avionic applications
NASA Astrophysics Data System (ADS)
Uda, Gianni; Livi, Massimo; Olivieri, Monica; Sabatini, Maurizio; Torrini, Daniele; Baldini, Stefano; Bardazzi, Riccardo; Falli, Pietro; Maestrini, Mauro
1999-07-01
This paper deals with the design of two second-generation thermal imagers that Alenia Difesa OFFICINE GALILEO has successfully developed for the Navigation FLIR of the NH90 Tactical Transportation Helicopter (NH90 TTH) and for the Electro-Optical Surveillance and Tracking System for the Italian 'Guardia di Finanza' ATR42 Maritime Patrol Aircraft (ATR42 MPA). Small size, low weight and low power consumption have been the main design goals of the two programs. In particular, the NH90 TTH thermal imager is a compact camera operating in the 8-12 micrometer band with a single wide field of view. The thermal imager developed for the ATR42 MPA features an objective with three remotely switchable fields of view, equipped with diffractive optics. Performance goals, innovative design aspects and test results of these two thermal imagers are reported.
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots for serving humans or as industrial robots replacing humans. To carry out these tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. For this, structured lighting is utilized as the basis of the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is based on the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among all the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency and accuracy of this sensor system for 3D environment sensing and recognition.
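The optical triangulation principle behind such a sensor reduces to intersecting the camera ray through a detected stripe pixel with the known laser plane. The intrinsics and plane parameters below are illustrative, and the epipolar bookkeeping between the two cameras and the projector is omitted.

```python
import numpy as np

# Hedged sketch: triangulate a laser-stripe pixel by ray-plane intersection.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # camera intrinsics
n = np.array([0.0, -0.94, 0.34])   # laser plane normal (camera frame)
d = 0.15                           # plane offset: n . X = d

def triangulate(u, v):
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction of pixel ray
    s = d / (n @ ray)                               # scale where ray meets plane
    return s * ray                                  # 3D point, camera frame

print(triangulate(350.0, 300.0))
```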
Artificial vision support system (AVS²) for improved prosthetic vision.
Fink, Wolfgang; Tarbell, Mark A
2014-11-01
State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS²) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS², using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS² is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
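The core processing chain can be sketched as edge enhancement followed by area-averaging down to the electrode-array resolution. The 10x6 grid and the Laplacian-based enhancement below are illustrative choices, not the device's actual modules.

```python
import cv2
import numpy as np

# Hedged sketch: enhance edges, then pixelate a frame to an electrode grid.
def pixelate_for_implant(frame_gray, grid=(10, 6)):
    # emphasize contrast transitions before the destructive downsampling
    edges = cv2.Laplacian(frame_gray, cv2.CV_32F, ksize=3)
    enhanced = np.clip(frame_gray.astype(np.float32) + 0.5 * np.abs(edges),
                       0, 255)
    # area-average down to one value per electrode
    coarse = cv2.resize(enhanced, grid, interpolation=cv2.INTER_AREA)
    return coarse.astype(np.uint8)   # grid[1] x grid[0] "phosphene" image
```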
Left Panorama of Spirit's Landing Site
NASA Technical Reports Server (NTRS)
2004-01-01
This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.
Vision Aided Inertial Navigation System Augmented with a Coded Aperture
2011-03-24
as the change in blur at different distances from the pixel plane can be inferred. Cameras with a micro lens array (called plenoptic cameras) ... images from 8 slightly different perspectives [14,43]. Dappled photography is similar to the plenoptic camera approach, except that a cosine mask
ERIC Educational Resources Information Center
Ruiz, Michael J.
1982-01-01
The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…
A Bionic Camera-Based Polarization Navigation Sensor
Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai
2014-01-01
Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029
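The measurement principle of such a sensor can be illustrated with the linear Stokes parameters: intensities behind analyzers at 0°, 45° and 90° determine the angle of polarization in closed form. The sketch below shows only this principle, not the paper's calibration or noise reduction.

```python
import numpy as np

# Hedged sketch: angle of polarization from three analyzer orientations.
def angle_of_polarization(i0, i45, i90):
    s1 = i0 - i90                 # Stokes Q
    s2 = 2.0 * i45 - (i0 + i90)   # Stokes U
    return 0.5 * np.arctan2(s2, s1)   # radians

# Example: light polarized at 30 degrees (Malus's law, unit intensity).
true = np.deg2rad(30)
i = lambda a: np.cos(true - a) ** 2
print(np.rad2deg(angle_of_polarization(i(0), i(np.pi/4), i(np.pi/2))))  # ~30
```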
Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.
Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F
2016-09-16
Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents mostly depends on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder, so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV's navigation system. First, however, the pose between the two sensors must be known; it is obtained with the improved calibration method proposed here. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
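The P3P step mentioned above is available off the shelf in OpenCV; a minimal sketch with illustrative coordinates and intrinsics follows (OpenCV's P3P solver expects exactly four correspondences). This shows the generic solver only, not the article's deep integration of the laser rangefinder and camera.

```python
import cv2
import numpy as np

# Hedged sketch: camera pose from four 3D-2D correspondences via P3P. The 3D
# points would come from the 2D laser rangefinder; here they are made up, and
# the 2D points are their exact projections under an identity pose.
obj_pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.1],
                    [0.0, 0.4, 2.2], [0.5, 0.4, 2.0]], np.float32)
img_pts = np.array([[320.0, 240.0], [462.9, 240.0],
                    [320.0, 349.1], [470.0, 360.0]], np.float32)
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]], np.float32)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None,
                              flags=cv2.SOLVEPNP_P3P)
R, _ = cv2.Rodrigues(rvec)   # expect R near identity, t near zero here
print(ok, "\nR =\n", R, "\nt =", tvec.ravel())
```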
NASA Technical Reports Server (NTRS)
Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.
2003-01-01
Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.
Miniaturized GPS/MEMS IMU integrated board
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2012-01-01
This patent documents the research and development of a miniaturized GPS/MEMS IMU integrated navigation system. A Laser Dynamic Range Imager (LDRI) based alignment algorithm for space applications is discussed. Two navigation cameras are also included to measure range and range rate, which can be integrated into the GPS/MEMS IMU system to enhance the navigation solution.
NASA Technical Reports Server (NTRS)
Stuart, J. R.
1984-01-01
The evolution of NASA's planetary navigation techniques is traced, and radiometric and optical data types are described. Doppler navigation; the Deep Space Network; differenced two-way range techniques; differential very long baseline interferometry; and optical navigation are treated. The Doppler system enables a spacecraft in cruise at high absolute declination to be located within a total angular uncertainty of 1/4 microrad. The two-station range measurement provides a 1 microrad backup at low declinations. Optical data locate the spacecraft relative to the target to an angular accuracy of 5 microrad. Earth-based radio navigation and its less accurate but target-relative counterpart, optical navigation, thus form complementary measurement sources, which together provide a powerful sensory system for producing high-precision orbit estimates.
Lensless imaging for wide field of view
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Yagi, Yasushi
2015-02-01
It is desirable to engineer a small camera with a wide field of view (FOV) because of current developments in wearable cameras and computing products, such as action cameras and Google Glass. However, typical approaches for achieving a wide FOV, such as attaching a fisheye lens or convex mirrors, require a trade-off between optics size and FOV. We propose camera optics that achieve a wide FOV while remaining small and lightweight. The proposed optics are a completely lensless, catoptric design containing four mirrors: two for wide viewing and two for focusing the image on the camera sensor. The optics are simple and easily miniaturized, since they use only mirrors and are therefore not susceptible to chromatic aberration. We have implemented prototype optics of our lensless concept, attached them to commercial charge-coupled device/complementary metal oxide semiconductor cameras, and conducted experiments to evaluate the feasibility of the proposed design.
Kotze, Ben; Jordaan, Gerrit
2014-08-25
Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimised by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated along a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path, with the route reconfigurable by the operator without programming alteration or intervention. A low-resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed. PMID:25157548
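As a rough illustration of the colour-signed path idea (the paper's implementation is in Matlab®; this is a hypothetical Python/OpenCV sketch), the designated path colour can be thresholded and the offset of its centroid turned into a steering command:

```python
import cv2
import numpy as np

def steering_command(frame_bgr, lower_hsv=(100, 80, 80), upper_hsv=(130, 255, 255)):
    """Return a steering value in [-1, 1] from the offset of a blue-signed path."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    m = cv2.moments(mask)
    if m["m00"] == 0:          # no path pixels found in this frame
        return None
    cx = m["m10"] / m["m00"]   # centroid column of the detected path region
    width = frame_bgr.shape[1]
    return (cx - width / 2) / (width / 2)  # negative: steer left; positive: right
```

A different HSV range per AGV would select that vehicle's own colour-signed route.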
Detection of obstacles on runway using Ego-Motion compensation and tracking of significant features
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar (Principal Investigator); Camps, Octavia (Principal Investigator); Gandhi, Tarak; Devadiga, Sadashiva
1996-01-01
This report describes a method for obstacle detection on a runway for autonomous navigation and landing of an aircraft. Detection is performed in the presence of extraneous features such as tiremarks. Suitable features are extracted from the image, and warping based on approximately known camera and plane parameters is performed to compensate for ego-motion as far as possible. The residual disparity after warping is estimated using an optical flow algorithm. Features are tracked from frame to frame to obtain more reliable estimates of their motion. Corrections are made to the motion parameters using the residual disparities and a robust method, and features with large residual disparities are signaled as obstacles. A sensitivity analysis of the procedure is also presented. Nelson's optical flow constraint is proposed to separate moving obstacles from stationary ones. A Bayesian framework is used at every stage so that the confidence in the estimates can be determined.
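A schematic version of the two-stage idea (warp away the ground-plane motion, then flag large residual flow) might look like the following Python/OpenCV sketch; the homography H and the threshold are assumptions, not values from the report.

```python
import cv2
import numpy as np

def residual_flow_obstacles(prev_gray, curr_gray, H, thresh=2.0):
    h, w = curr_gray.shape
    # Step 1: ego-motion compensation by warping the previous frame with the
    # planar homography derived from approximate camera and plane parameters.
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    # Step 2: residual disparity estimated with dense optical flow.
    flow = cv2.calcOpticalFlowFarneback(warped_prev, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Pixels whose motion is not explained by the ground plane are obstacle
    # candidates; on-plane features such as tiremarks warp away.
    return mag > thresh
```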
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation scheme. One such system under exploration involves the use of a laser-camera arrangement to estimate the relative position of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.
Navigation-guided optic canal decompression for traumatic optic neuropathy: Two case reports.
Bhattacharjee, Kasturi; Serasiya, Samir; Kapoor, Deepika; Bhattacharjee, Harsha
2018-06-01
Two cases of traumatic optic neuropathy presented with profound loss of vision. Both cases had received a course of intravenous corticosteroids elsewhere but did not improve. They underwent navigation-guided optic canal decompression via an external transcaruncular approach, following which both cases showed visual improvement. Postoperative visual evoked potentials and optical coherence tomography of the retinal nerve fibre layer showed improvement. These case reports emphasize the role of stereotactic navigation technology in optic canal decompression for traumatic optic neuropathy.
NASA Astrophysics Data System (ADS)
Chi, Chongwei; Zhang, Qian; Kou, Deqiang; Ye, Jinzuo; Mao, Yamin; Qiu, Jingdan; Wang, Jiandong; Yang, Xin; Du, Yang; Tian, Jie
2014-02-01
Precise intraoperative localization and accurate resection of tumors and metastases are currently a focus of international attention. Methods such as X-ray imaging, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) play an important role in accurate preoperative diagnosis, but most are inapplicable intraoperatively. We have proposed a surgical navigation system based on optical molecular imaging technology for intraoperative detection of tumors and metastases. The system collects images from two CCD cameras for real-time fluorescence and color imaging. For image processing, a template matching algorithm is used for multispectral image fusion. For the tumor detection application, the highly metastatic mouse breast cancer cell line 4T1-luc was used to establish a tumor model of matrix metalloproteinase (MMP)-expressing breast cancer. The tumor-bearing nude mice were given a tail vein injection of the MMP 750FAST probe (PerkinElmer, Inc., USA) and imaged with both bioluminescence and fluorescence to assess in vivo binding of the probe to the tumor and metastatic sites. Hematoxylin and eosin (H&E) staining was performed to confirm the presence of tumor and metastasis. Only one tumor could be observed visually in vivo, whereas liver metastases were detected under the surgical navigation system; all were confirmed by histology. This approach helps surgeons find orthotopic tumors and metastases during intraoperative resection and visualize tumor borders for precise positioning. Further investigation is needed for future application in the clinic.
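The multispectral fusion step could be sketched as follows; this is a hypothetical single-channel Python/OpenCV illustration of template-matching-based registration and overlay, not the system's actual code. It assumes the fluorescence image is smaller than the color image.

```python
import cv2
import numpy as np

def fuse(color_gray, fluo, alpha=0.5):
    # Locate the fluorescence field of view inside the colour image by
    # normalised cross-correlation template matching.
    res = cv2.matchTemplate(color_gray, fluo, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(res)  # best-match corner (x, y)
    x, y = top_left
    h, w = fluo.shape
    # Alpha-blend the registered fluorescence signal over the colour image.
    fused = color_gray.astype(np.float32)
    fused[y:y+h, x:x+w] = (1 - alpha) * fused[y:y+h, x:x+w] + alpha * fluo
    return fused.astype(np.uint8)
```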
Graafland, Maurits; Bok, Kiki; Schreuder, Henk W R; Schijven, Marlies P
2014-06-01
Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause a suboptimal view of the operating field, thereby increasing the risk for errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents, operating room nurses, and medical students. Operating room nurses and medical students are currently not included as key user groups in structured laparoscopic training programs. A new virtual reality laparoscopic camera navigation (LCN) module was therefore developed specifically for these key user groups. This multicenter prospective cohort study assesses the face validity and construct validity of the LCN module on the Simendo virtual reality simulator. Face validity was assessed through a questionnaire on resemblance to reality and perceived usability of the instrument among experts and trainees. Construct validity was assessed by comparing the scores of groups with different levels of experience on outcome parameters of speed and movement proficiency. The results show uniform and positive evaluation of the LCN module among expert users and trainees, signifying face validity. The expert and intermediate-experience groups performed significantly better in task time and camera stability during three repetitions than the less experienced user groups (P < .007). Comparison of learning curves showed significant improvement of proficiency in time and camera stability for all groups during three repetitions (P < .007). These results demonstrate the face validity and construct validity of the LCN module. The module is suitable for use in training curricula for operating room nurses and novice surgical trainees, aimed at improving team performance in minimally invasive surgery. © The Author(s) 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riot, V J; Olivier, S; Bauman, B
2012-05-24
The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.
Prol, Fabricio dos Santos; El Issaoui, Aimad; Hakala, Teemu
2018-01-01
The use of Personal Mobile Terrestrial System (PMTS) has increased considerably for mobile mapping applications because these systems offer dynamic data acquisition with ground perspective in places where the use of wheeled platforms is unfeasible, such as forests and indoor buildings. PMTS has become more popular with emerging technologies, such as miniaturized navigation sensors and off-the-shelf omnidirectional cameras, which enable low-cost mobile mapping approaches. However, most of these sensors have not been developed for high-accuracy metric purposes and therefore require rigorous methods of data acquisition and data processing to obtain satisfactory results for some mapping applications. To contribute to the development of light, low-cost PMTS and potential applications of these off-the-shelf sensors for forest mapping, this paper presents a low-cost PMTS approach comprising an omnidirectional camera with off-the-shelf navigation systems and its evaluation in a forest environment. Experimental assessments showed that the integrated sensor orientation approach using navigation data as the initial information can increase the trajectory accuracy, especially in covered areas. The point cloud generated with the PMTS data had accuracy consistent with the Ground Sample Distance (GSD) range of omnidirectional images (3.5–7 cm). These results are consistent with those obtained for other PMTS approaches. PMID:29522467
A projective surgical navigation system for cancer resection
NASA Astrophysics Data System (ADS)
Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald
2016-03-01
Near-infrared (NIR) fluorescence imaging can provide precise and real-time information about tumor location during cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we designed a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector, and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera under excitation illumination from the LED source. The images are then projected back onto the surgical field by the mini projector. The imaging performance of this projective navigation system was characterized on a tumor-simulating phantom, and image-guided surgical resection was demonstrated in an ex vivo chicken tissue model. In all experiments, the images projected by the projector matched well with the locations of fluorescence emission. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for preoperative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
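A minimal sketch of the calibrate-then-project idea, assuming a fixed planar surgical field so that a single camera-to-projector homography suffices (the real system's C++/OpenCV calibration and correction are more involved); all point values and image sizes are placeholders:

```python
import cv2
import numpy as np

# Corresponding points: where calibration markers appear in the CCD camera
# image, and the projector pixels that illuminate those same markers.
cam_pts = np.array([[100, 80], [520, 90], [510, 400], [110, 390]], np.float32)
proj_pts = np.array([[0, 0], [799, 0], [799, 599], [0, 599]], np.float32)

# Homography mapping camera coordinates to projector coordinates.
H, _ = cv2.findHomography(cam_pts, proj_pts)

def to_projector(fluorescence_img, proj_size=(800, 600)):
    # Warp the processed fluorescence image into projector space so the
    # projected pattern lands back on the emitting tissue.
    return cv2.warpPerspective(fluorescence_img, H, proj_size)
```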
Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa
2017-09-01
Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, presenting an increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution for these techniques. Our objective was to quantify the expected accuracy error associated with the 2MM and 4MM thickness Sterile-Z Patient Drape® using Medtronic O-Arm® Surgical Imaging with the StealthStation® S7® Navigation System, and to investigate the contribution of camera-to-reference-frame distance to this error. A testing jig was placed on a radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig, and the geometry error of the reference frame was captured for three drape configurations: no drape, 2MM drape, and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained, and seven 3D acquisitions were measured for each drape configuration. Data were analyzed by two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred with the 2MM drape at the 'far' camera distance. The 4MM drape induced an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the no-drape condition, the accuracy error of 0.11 mm when using a 4MM film drape is minimal and clinically insignificant.
Research on a solid state-streak camera based on an electro-optic crystal
NASA Astrophysics Data System (ADS)
Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang
2006-06-01
With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, streak cameras are widely used to measure ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and measuring laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity versus position (or wavelength) information on the ultrafast process. Current streak cameras are based on a sweep electric pulse and an image converting tube with a wavelength-sensitive photocathode covering the x-ray to near-infrared region; such cameras are comparatively costly and complex. This paper describes the design and performance of a new type of streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera achieves time resolution by direct photon-beam deflection using the electro-optic effect, and can replace current streak cameras from the visible to the near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10⁻¹² m/V), and an optimized optical system may lead to a time resolution better than 1 ns.
NASA Technical Reports Server (NTRS)
Coulbourn, W. C.; Egan, W. G. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Attempts to correlate optical aircraft remote sensing of water quality with the optical data from the ERTS-1 satellite using calibrated imagery of Charlotte Amalie harbor, St. Thomas, Virgin Islands are reported. The harbor at Charlotte Amalie has a concentration of a number of factors affecting water quality: untreated sewage, land runoff, and sediment from navigation and dredging operations. Calibration procedures have been originated and applied to ERTS-1 and I2S camera imagery. The results indicate that the ERTS-1 and I2S imagery are correlated with optical in situ measurements of the harbor water. The aircraft green photographic and ERTS-1 MSS-4 bands have been found most suitable for monitoring the scattered light levels under the conditions of the investigation. The chemical parameters of the harbor water were found to be correlated to the optical properties for two stations investigated in detail. The biological properties of the harbor water (chlorophyll and carotenoids), correlate inversely with the optical data near the pollution sources compared to further away. Calibration procedures developed in this investigation were essential to the interpretation of the photographic and ERTS-1 photometric responses.
Rhee, Seung Joon; Park, Shi Hwan; Cho, He Myung
2014-01-01
Purpose: The purpose of this study was to compare and analyze the precision of optical and electromagnetic navigation systems in total knee arthroplasty (TKA). Materials and Methods: We retrospectively reviewed 60 patients who underwent TKA using an optical navigation system and 60 patients who underwent TKA using an electromagnetic navigation system from June 2010 to March 2012. The mechanical axis measured on preoperative radiographs and by the intraoperative navigation systems was compared between the groups. The postoperative positions of the femoral and tibial components in the sagittal and coronal planes were assessed. Results: The difference in the mechanical axis measured on the preoperative radiograph and by the intraoperative navigation systems was 0.6 degrees more varus in the electromagnetic navigation system group than in the optical navigation system group, but showed no statistically significant difference between the two groups (p>0.05). The positions of the femoral and tibial components in the sagittal and coronal planes on the postoperative radiographs also showed no statistically significant difference between the two groups (p>0.05). Conclusions: In TKA, both optical and electromagnetic navigation systems showed high accuracy and reproducibility, and the measurements from the postoperative radiographs showed no significant difference between the two groups. PMID:25505703
Runway Detection From Map, Video and Aircraft Navigational Data
2016-03-01
Master's thesis by Jose R. Espinosa Gloria, March 2016 (Thesis Advisor: Roberto Cristi; Co-Advisor: Oleg …). In the Mexican Navy, unmanned aerial vehicles (UAV) have been equipped with daylight and infrared cameras. Processing the video information obtained from these …
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical-flow-based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional least sum of squares (LS) procedure, which can be unstable in the presence of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method, for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 frames in the ascending colon, and from 410 to 1316 frames in the transverse colon.
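The LMS idea can be illustrated with a generic random-sampling estimator; this Python sketch fits a linear placeholder model y = X p rather than the paper's egomotion equations, and the trial count is an arbitrary assumption.

```python
import numpy as np

def lms_fit(X, y, n_trials=500, seed=0):
    """Least Median of Squares: pick the parameters that minimize the
    median squared residual over many minimal random subsets."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    best_p, best_med = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=k, replace=False)   # minimal subset
        try:
            p = np.linalg.solve(X[idx], y[idx])      # exact fit to the subset
        except np.linalg.LinAlgError:
            continue                                  # degenerate subset
        med = np.median((y - X @ p) ** 2)            # median squared residual
        if med < best_med:
            best_p, best_med = p, med
    # Unlike least squares, the median is barely affected by outliers
    # (e.g., grossly wrong optical flow vectors).
    return best_p
```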
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary
2011-01-01
TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speeds. The limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.
NASA Technical Reports Server (NTRS)
Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)
2016-01-01
A single, compact, low-power deep space positioning system (DPS) configured to determine the location of a spacecraft anywhere in the solar system and provide state information relative to the Earth, Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine the state of a spacecraft in a solar system. The second camera is located behind, or adjacent to, a secondary reflector of the first camera in the body of a telescope.
Satellite Imagery Assisted Road-Based Visual Navigation System
NASA Astrophysics Data System (ADS)
Volkova, A.; Gibbens, P. W.
2016-06-01
There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel integrated approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level, it correlates the features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used
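The database-matching level might be sketched as follows, assuming ORB features and a brute-force Hamming matcher; the paper does not specify its descriptor or matcher, so this Python/OpenCV sketch is purely illustrative, with database construction from satellite tiles assumed to happen offline.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_to_database(frame_gray, db_descriptors):
    """Match features in one camera frame against the satellite-imagery
    feature database; the caller maps matches back to geo-referenced points."""
    kp, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return []                       # textureless frame, nothing to match
    matches = bf.match(desc, db_descriptors)
    # Keep only the strongest correspondences for inertial-frame localisation.
    return sorted(matches, key=lambda m: m.distance)[:50]
```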
NASA Astrophysics Data System (ADS)
Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.
2018-04-01
Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work of designing and building an autonomous, vision-based, GPS-denied unmanned vehicle and developing Adaptive Robust Feature Matching (ARFM) based Visual Odometry (VO) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master computer, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and Bundle Adjustment (BA). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.
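As background to the 3D-reconstruction module, depth from a rectified stereo pair follows Z = f·B/d (focal length times baseline over disparity). A hypothetical Python/OpenCV sketch with placeholder focal length and baseline, not the paper's parameters:

```python
import cv2
import numpy as np

def depth_map(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Dense depth (metres) from a rectified 8-bit stereo pair."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=7)
    # SGBM returns fixed-point disparities scaled by 16.
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    with np.errstate(divide="ignore", invalid="ignore"):
        depth = np.where(disp > 0, focal_px * baseline_m / disp, 0.0)
    return depth  # zeros mark invalid disparities
```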
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques that greatly reduce this long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene, which increases pose estimation accuracy and reduces failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation; using the matched landmarks, a pose correction technique eliminates the accumulated navigation drift. Finally, to further improve the robustness of the system, measurements from a low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1-5% localization error) over long-distance navigation, both indoors and outdoors. Real-world experiments on a human-worn system show that location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
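The fusion framework can be caricatured with a small Kalman filter; the state layout, measurement models, and noise levels below are illustrative assumptions, not the paper's filter design. The state [x, y, vx, vy] is propagated by a constant-velocity model and corrected by GPS position fixes and visual-odometry velocity estimates.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.]])
Q = np.eye(4) * 0.01                               # process noise (assumed)
H_gps = np.array([[1., 0, 0, 0], [0, 1, 0, 0]])    # GPS measures position
H_vo = np.array([[0., 0, 1, 0], [0, 0, 0, 1]])     # VO measures velocity
R_gps, R_vo = np.eye(2) * 4.0, np.eye(2) * 0.05    # measurement noise (assumed)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Per time step: x, P = predict(x, P); then update() with whichever of the
# GPS or VO measurements arrived in that interval.
```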
Autonomous vision-based navigation for proximity operations around binary asteroids
NASA Astrophysics Data System (ADS)
Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo
2018-02-01
Future missions to small bodies demand a higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's Asteroid Impact Mission (AIM). The main objective of AIM is the detailed characterization of the binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., with no collision in the presence of failures (e.g., the spacecraft entering safe mode), perturbations (e.g., a non-spherical gravity field), and errors (e.g., maneuver execution errors). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera field of view (FOV); therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.
Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation
Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.
2003-01-01
The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics designs were optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio of f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight-position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from −55° to +5°C. An onboard calibration target and fiducial marks provide the capability to validate the radiometric and geometric calibration on Mars. Copyright 2003 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Celik, Koray
This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, demonstrating the feasibility of a monocular camera as the sole proximity-sensing, object-avoidance, mapping, and path-planning mechanism for flying and navigating small- to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable; it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, systems, procedures, and algorithms that together form airborne systems measuring absolute range from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.
Optical Transient Monitor (OTM) for BOOTES Project
NASA Astrophysics Data System (ADS)
Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.
2003-04-01
The Optical Transient Monitor (OTM) is software for controlling the three wide-field and ultra-wide-field cameras of the BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is PC based and is a powerful tool for taking images from two SBIG CCD cameras at the same time, or from one camera only. The control program for the BOOTES cameras runs under Windows 98 or MS-DOS, and a version for Windows 2000 is in preparation. Five main modes of operation are supported. The OTM program can control the cameras and evaluate image data without human interaction.
Insect-Inspired Optical-Flow Navigation Sensors
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven
2005-01-01
Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now under development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: the concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motion through the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speed and altitude in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
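A toy version of such an insect-inspired optical-flow steering cue, assuming dense Farneback flow from a camera rather than a mouse-chip sensor, could look like the following Python/OpenCV sketch:

```python
import cv2
import numpy as np

def flow_balance(prev_gray, curr_gray):
    """Compare flow magnitude in the left and right image halves; the side
    that "flows" faster is nearer, so steer away from it."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    # Positive value: more flow on the left, so turn right; and vice versa.
    return left - right
```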
University of Pennsylvania MAGIC 2010 Final Report
2011-01-10
Simultaneous localization and mapping (SLAM) techniques are employed to build a local map of the environment surrounding the robot, using readings from the two complementary LIDAR sensors. (Figure 2, system architecture: Sensor UGV and Disrupter UGV subsystems with GPS, IMU, LIDAR, and camera sensors feeding localization, laser control, task planner, and strategy/plan components.) The system is comprised of the following subsystems: • Sensor UGV: mobile UGVs with LIDAR and camera sensors, GPS, and …
Lu, Hao; Zhao, Kaichun; Wang, Xiaochu; You, Zheng; Huang, Kaoli
2016-01-01
Bio-inspired imaging polarization navigation, which provides navigation information by sensing the polarization of skylight, has advantages of high precision and interference resistance over polarization navigation sensors that use photodiodes. Although many types of imaging polarimeters exist, they may not be suitable for research on imaging polarization navigation algorithms. To verify the algorithm, a real-time imaging orientation determination system was designed and implemented. Essential calibration procedures for this type of system, including camera parameter calibration and calibration of the response inconsistency of the complementary metal oxide semiconductor sensors, were discussed, designed, and implemented, and the calibration results were used to undistort and rectify the multi-camera system. An orientation determination experiment was conducted. The results indicated that the system could acquire and compute the polarized skylight images throughout the calibrations and resolve orientation with the algorithm in real time. An orientation determination algorithm based on image processing was tested on the system, and its performance and properties were evaluated. The rate of the algorithm was over 1 Hz, the error was over 0.313°, and the population standard deviation was 0.148° without any data filtering. PMID:26805851
NASA Technical Reports Server (NTRS)
2007-01-01
On sol 1149 (March 28, 2007) of its mission, NASA's Mars Exploration Rover Spirit caught a wind gust with its navigation camera. A series of navigation camera images were strung together to create this movie. The front of the gust is observable because it was strong enough to lift dust. From assessing the trajectory of this gust, the atmospheric science team concludes that it may have passed over the rover. There was, however, no noticeable increase in power associated with this gust. In the past, dust devils and gusts have wiped the solar panels free of dust, making it easier for the panels to absorb sunlight.
Real-time polarization imaging algorithm for camera-based polarization navigation sensors.
Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli
2017-04-10
Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point-source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes benefit from high spatial resolution but incur a heavy computation load, since the pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations. In this paper, the polarization imaging and pattern recognition algorithms are optimized, by exploiting the orthogonality of the Stokes parameters together with the features of the solar meridian and the patterns of polarized skylight, so that they reduce to several linear calculations without affecting precision. The algorithm contains a pattern recognition algorithm based on a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity; the running time decreased from several thousand milliseconds to several tens of milliseconds. Simulations and experiments showed that the algorithm can measure orientation without reducing precision, and it can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
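The core linearization can be made concrete: with intensity images taken behind polarizers at 0°, 45°, 90°, and 135°, the Stokes parameters and the angle of polarization need only subtractions and one arctangent per pixel. A minimal sketch (array names are assumptions, not the paper's code):

```python
import numpy as np

def angle_of_polarization(i0, i45, i90, i135):
    """Per-pixel angle of polarization from four polarizer-channel images."""
    s1 = i0.astype(np.float64) - i90       # Stokes Q
    s2 = i45.astype(np.float64) - i135     # Stokes U
    return 0.5 * np.arctan2(s2, s1)        # AoP in radians, range (-pi/2, pi/2]
```

The symmetry of this AoP map about the solar meridian is what a Hough-transform pattern recognition step can then exploit to recover heading.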
Exploring the imaging properties of thin lenses for cryogenic infrared cameras
NASA Astrophysics Data System (ADS)
Druart, Guillaume; Verdet, Sebastien; Guerineau, Nicolas; Magli, Serge; Chambon, Mathieu; Grulois, Tatiana; Matallah, Noura
2016-05-01
Designing a cryogenic camera is a good strategy for miniaturizing and simplifying an infrared camera that uses a cooled detector. Integrating the optics inside the cold shield makes it simple to athermalize the design, guarantees a cold pupil, and relaxes the constraint of a long back focal length for short-focal-length systems. In this way, cameras made of a single lens or two lenses are viable systems with good optical features and stable image correction. However, this approach places a significant additional optical mass inside the dewar and thus increases the cool-down time of the camera. ONERA is currently exploring a minimalist strategy: giving an imaging function to the thin optical plates already found in conventional dewars, yielding a cryogenic camera with the same cool-down time as a traditional dewar without an imaging function. Two examples will be presented. The first is a camera using a dual-band infrared detector, made of a lens outside the dewar and a lens inside the cold shield, the latter carrying the main optical power of the system; we were able to design a cold plano-convex lens with a thickness of less than 1 mm. The second example is an evolution of a former cryogenic camera called SOIE, in which the cold meniscus was replaced by a plano-convex Fresnel lens, decreasing the optical thermal mass by 66%. The performances of both cameras will be compared.
Optical registration of spaceborne low light remote sensing camera
NASA Astrophysics Data System (ADS)
Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long
2018-02-01
To meet the high-precision requirements of optical registration for a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. This paper proposes a system-integration optical registration scheme, and an analysis of its registration accuracy, for a spaceborne low-light remote sensing camera with short focal depth and wide field of view, including an analysis of the parallel misalignment of the CCD. Actual registration results show that imaging is clear and that the MTF and optical registration accuracy meet requirements, providing an important guarantee of acquiring high-quality image data on orbit.
Elmi-Terander, Adrian; Skulason, Halldor; Söderman, Michael; Racadio, John; Homan, Robert; Babic, Drazenko; van der Vaart, Nijs; Nachabe, Rami
2016-11-01
A cadaveric laboratory study. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement without the use of x-ray fluoroscopy, and thus open the route to image-guided minimally invasive therapy in the thoracic spine. ARSN encompasses a surgical table, a motorized flat-detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Two neurosurgeons placed 94 pedicle screws in the thoracic spine of four cadavers, using ARSN on one side of the spine (47 screws) and a free-hand technique on the contralateral side. X-ray fluoroscopy was not used for either technique. Four independent reviewers assessed the postoperative scans using the Gertzbein grading. Morphometric measurements of the pedicles' axial and sagittal widths and angles, as well as the vertebrae's axial and sagittal rotations, were performed to identify risk factors for breaches. ARSN was feasible and superior to the free-hand technique with respect to overall accuracy (85% vs. 64%, P < 0.05), with significantly more perfectly placed screws (51% vs. 30%, P < 0.05) and significantly fewer breaches beyond 4 mm (2% vs. 25%, P < 0.05). All morphometric dimensions, except vertebral body axial rotation, were risk factors for larger breaches with the free-hand method. ARSN without fluoroscopy was feasible and demonstrated higher accuracy than the free-hand technique for thoracic pedicle screw placement.
NASA Astrophysics Data System (ADS)
Caress, D. W.; Hobson, B.; Thomas, H. J.; Henthorn, R.; Martin, E. J.; Bird, L.; Rock, S. M.; Risi, M.; Padial, J. A.
2013-12-01
The Monterey Bay Aquarium Research Institute is developing a low-altitude, high-resolution seafloor mapping capability that combines multibeam sonar with stereo photographic imagery. The goal is to obtain spatially quantitative, repeatable renderings of the seafloor with fidelity at scales of 5 cm or better from altitudes of 2-3 m. The initial test surveys using this sensor system are being conducted from a remotely operated vehicle (ROV); ultimately we intend to field this survey system from an autonomous underwater vehicle (AUV). This presentation focuses on the current sensor configuration, methods for data processing, and results from recent test surveys. Bathymetry data are collected using a 400-kHz Reson 7125 multibeam sonar. This configuration produces 512 beams across a 135°-wide swath; each beam has a 0.5° acrosstrack by 1.0° alongtrack angular width. At a 2-m altitude, the nadir beams have a 1.7-cm acrosstrack and 3.5-cm alongtrack footprint. Dual Allied Vision Technology GX1920 2.8-Mpixel color cameras provide color stereo photography of the seafloor. The camera housings have been fitted with corrective optics achieving a 90° field of view through a dome port. Illumination is provided by dual 100-J xenon strobes. Position, depth, and attitude data are provided by a Kearfott SeaDevil Inertial Navigation System (INS) integrated with a 300-kHz RDI Doppler velocity log (DVL); a separate Paroscientific pressure sensor is mounted adjacent to the INS. The INS Kalman filter is aided by the DVL velocity and pressure data, achieving navigational drift rates of less than 0.05% of the distance traveled during surveys. The sensors are mounted on a toolsled fitted below MBARI's ROV Doc Ricketts, with the sonars, cameras, and strobes all pointed vertically down. During surveys the ROV flies at a 2-m altitude at speeds of 0.1-0.2 m/s. During a four-day R/V Western Flyer cruise in June 2013, we successfully collected multibeam and camera survey data from a 2-m altitude at three sites in the deep Monterey Canyon axis. The survey lines were spaced 1.5 m apart and were flown at speeds of 0.1-0.2 m/s while the sonars pinged at 3 Hz and the cameras operated at 0.5 Hz. All three low-altitude surveys are at ~2850 m depth and lie within the 1-m lateral resolution bathymetry of a 2009, 50-m altitude MBARI Mapping AUV survey. Site 1 has the greatest topography, being centered on a 15-m diameter, 7-m high flat boulder surrounded by an 80-m diameter, 6-m deep scour pit. Site 2 is located within a field of ~3-m high apparent sediment waves with ~80-m wavelengths. Site 0 is flat and includes chemosynthetic clam communities. At a 2-m altitude, the multibeam bathymetry swath is more than 7 m wide and the camera images are 4 m wide. Following navigation adjustment to match features in overlapping bathymetry swaths, we achieve 5-cm lateral resolution topography overlain with ~1-mm scale photographic imagery.
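The quoted footprints follow from simple geometry: a beam of angular width θ illuminates about 2h·tan(θ/2) of seafloor from altitude h. A quick check in Python:

```python
import math

def footprint(altitude_m, beamwidth_deg):
    # Width of seafloor illuminated by a beam of the given angular width.
    return 2 * altitude_m * math.tan(math.radians(beamwidth_deg) / 2)

print(footprint(2.0, 0.5))  # ~0.017 m: the 1.7-cm acrosstrack figure
print(footprint(2.0, 1.0))  # ~0.035 m: the 3.5-cm alongtrack figure
```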
Thomas Leps Internship Abstract
NASA Technical Reports Server (NTRS)
Leps, Thomas
2016-01-01
An optical navigation system is being flown as the backup to the primary Deep Space Network (DSN) telemetry for navigation and guidance purposes on Orion. This is required to ensure Orion can recover from a loss of communication, which would simultaneously cause a loss of DSN telemetry. Images taken of the Moon and Earth are used to give range and position information to the navigation computer for trajectory calculations and maneuver execution. To get telemetry data from these images, the size and location of the Moon must be calculated with high accuracy and precision: the reentry envelope for the Orion EM-1 mission requires the centroid and radius of the Moon images to be determined to within 1/3 of a pixel, 3 sigma. To ensure this accuracy and precision can be attained, I was tasked with building precise dot-grid images for camera calibration as well as building a hardware-in-the-loop test stand for flight software and hardware proofing. To calibrate the Op-Nav camera, a dot grid is imaged with the camera; the error between the imaged dot locations and the actual dot locations can be used to build a distortion map of the camera and lens system, so that images can be corrected to display true locations. To build the dot-grid images I used the Electro Optics Lab (EOL) optical bench Bright Object Simulator System and gimbal. The gimbal was slewed to a series of elevations and azimuths, and an image of the collimated single-point light source was taken at each position. After a series of 99 images was taken at different locations, the single light spots were extracted from each image and added to a composite image containing all 99 points. During the development of these grids it was noticed that an intermittent error in the artificial "star" locations occurred. Prior to the summer this error was attributed to glitches in the gimbal's pointing direction, and the gimbal was going to be replaced; however, after examining the issue further I determined it to be a software problem. I have since narrowed the likely source of the error down to a Software Development Kit released by the camera supplier PixeLink, and I developed a workaround to build star grids for calibration until the software bug can be isolated and fixed. I was also tasked with building a hardware-in-the-loop test stand to test the full Op-Nav system. A 4K screen displays simulated lunar and terrestrial images from a possible Orion trajectory. These images are projected through a collimator and captured with an Op-Nav camera controlled by an Intel NUC computer running flight software. The flight software then analyzes the images to determine attitude and position; these data are reconstructed into a trajectory and matched to the simulated trajectory to determine the accuracy of the attitude and position estimates. For the system to work it must be precisely and accurately aligned, so I developed an alignment procedure that allows the screen, collimator, and camera to be squared, centered, and made collinear with each other to within a micron spatially and 5 arcseconds in rotation. I also designed a rigid mount for the screen that was machined on site in Building 10 by another intern. While I was working in the EOL we received a $500k Orion star tracker for alignment procedure testing. Due to my prior experience in electronics development, as an ancillary duty I was tasked with building the cables required to operate and power the star tracker.
If any errors were made building these cables, the star tracker would be destroyed; I was honored that the director of the lab entrusted such a critical component to me. This internship has cemented my view on public space exploration. I have always preferred the public sector to the private sector because, as a scientist, the most interesting aspects of space for me are not necessarily the most profitable. I was concerned, however, that the public sector was faltering and that to help advance human space exploration I would be forced into the private sector. I now know that, at least at JSC, human spaceflight is still progressing and exciting work is still being done. I am now actively seeking employment at JSC after I complete my Ph.D. and have met with my branch chiefs and mentor to discuss transitioning to a graduate co-op position.
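Returning to the dot-grid calibration described above: one way to turn such a grid into a camera distortion model, sketched with OpenCV and hypothetical inputs (the EOL procedure itself is not published here, and the grid geometry and image size are placeholders):

```python
import cv2
import numpy as np

def distortion_from_dot_grid(truth_xy, measured_xy, image_size=(2048, 2048)):
    """Fit lens distortion from one planar view: known (truth) dot positions
    vs. the pixel centroids where the dots actually landed."""
    obj = np.hstack([truth_xy, np.zeros((len(truth_xy), 1))]).astype(np.float32)
    img = measured_xy.astype(np.float32)
    # Seed the intrinsics, then refine intrinsics + distortion together.
    K0 = cv2.initCameraMatrix2D([obj], [img], image_size)
    rms, K, dist, _, _ = cv2.calibrateCamera([obj], [img], image_size, K0, None,
                                             flags=cv2.CALIB_USE_INTRINSIC_GUESS)
    return K, dist, rms  # intrinsics, distortion coefficients, reprojection error
```

With K and dist in hand, cv2.undistort() corrects flight images toward the truth locations.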
NASA Technical Reports Server (NTRS)
1982-01-01
A design concept that will implement a mapping capability for the Orbital Camera Payload System (OCPS) when ground control points are not available is discussed. Through the use of stellar imagery collected by a pair of cameras whose optical axes are structurally related to the large format camera's optical axis, such pointing information is made available.
Cameras Improve Navigation for Pilots, Drivers
NASA Technical Reports Server (NTRS)
2012-01-01
Advanced Scientific Concepts Inc. (ASC), of Santa Barbara, California, received SBIR awards and other funding from the Jet Propulsion Laboratory, Johnson Space Center, and Langley Research Center to develop and refine its 3D flash LIDAR technologies for space applications. Today, ASC's NASA-derived technology is sold to assist with collision avoidance, navigation, and object tracking.
Angle of sky light polarization derived from digital images of the sky under various conditions.
Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Yang, Yi; Ning, Yu
2017-01-20
Skylight polarization is used for navigation by some birds and insects, and it also has potential for human navigation applications. Its advantages include relative immunity from interference and the absence of error accumulation over time. However, there are presently few practical applications of polarization navigation technology, mainly because of its weak robustness under cloudy weather conditions. In this paper, real-time measurement of the skylight polarization pattern across the sky is achieved with a wide field-of-view camera. The images were processed under a new reference coordinate system to clearly display the symmetrical distribution of the angle of polarization with respect to the solar meridian. A new algorithm for extracting the image's axis of symmetry is proposed, in which the real-time azimuth angle between the camera and the solar meridian is accurately calculated. Our experimental results under different weather conditions show that polarization navigation has high accuracy, is strongly robust, and performs well during fog and haze, clouds, and strong sunlight.
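For reference, the per-pixel angle of polarization discussed above can be computed from intensity images taken through a linear polarizer at three orientations via the Stokes parameters. The sketch below is the standard construction, assumed here for illustration; it is not the paper's symmetry-axis extraction algorithm.

```python
# Per-pixel angle of polarization (AoP) from three images taken through a
# linear polarizer at 0, 45, and 90 degrees (standard Stokes construction).
import numpy as np

def angle_of_polarization(i0, i45, i90):
    """i0, i45, i90: float arrays of sky intensity at the three angles."""
    s0 = i0 + i90                    # total intensity
    s1 = i0 - i90                    # Stokes Q
    s2 = 2.0 * i45 - s0              # Stokes U
    aop = 0.5 * np.arctan2(s2, s1)   # radians
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of pol.
    return aop, dop

# The solar meridian appears as the axis about which the AoP map is
# mirror-symmetric; locating it yields the camera-to-sun azimuth.
```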
The Trans-Visible Navigator: A See-Through Neuronavigation System Using Augmented Reality.
Watanabe, Eiju; Satoh, Makoto; Konno, Takehiko; Hirai, Masahiro; Yamaguchi, Takashi
2016-03-01
The neuronavigator has become indispensable for brain surgery and works in the manner of point-to-point navigation. Because the positional information is indicated on a personal computer (PC) monitor, surgeons are required to rotate the orientation of the magnetic resonance imaging/computed tomography scans to match the surgical field. In addition, they must frequently alternate their gaze between the surgical field and the PC monitor. To overcome these difficulties, we developed an augmented reality-based navigation system with whole-operation-room tracking. A tablet PC is used for visualization. The patient's head is captured by the back-face camera of the tablet. Three-dimensional images of intracranial structures are extracted from magnetic resonance imaging/computed tomography and are superimposed on the video image of the head. When viewed from various directions around the head, intracranial structures are displayed at the corresponding angles as seen from the camera direction, giving the surgeon the sensation of seeing through the head. Whole-operation-room tracking is realized using a VICON tracking system with 6 cameras. A phantom study showed a spatial resolution of about 1 mm. The present system was evaluated in 6 patients who underwent tumor resection surgery, and we showed that the system is useful for planning skin incisions as well as craniotomy and for localizing superficial tumors. The main advantage of the present system is that it achieves volumetric navigation, in contrast to conventional point-to-point navigation. It extends augmented reality images directly onto real surgical images, thus helping the surgeon to integrate these two sources of information intuitively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras
Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong
2014-01-01
Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions, which do not affect the robot's movement, from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. A multiple relevance vector machine (RVM) classifier is then designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679
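The first two steps of this pipeline (discarding irrelevant regions, then clustering regions of interest) can be illustrated with a simple depth/height gate followed by connected-component labeling. The thresholds and function below are illustrative assumptions; the paper's RVM classification stage is not reproduced.

```python
# Sketch: gate out regions that cannot affect the robot, then cluster the
# remainder into candidate obstacles. Thresholds are illustrative.
import numpy as np
from scipy import ndimage

def candidate_obstacles(depth_m, height_m, max_range=5.0, max_height=1.5):
    """depth_m, height_m: per-pixel range and height arrays from a ToF camera."""
    relevant = (depth_m < max_range) & (height_m < max_height)
    labels, n = ndimage.label(relevant)      # connected-component clustering
    regions = []
    for k in range(1, n + 1):
        mask = labels == k
        if mask.sum() < 50:                  # ignore tiny clusters (noise)
            continue
        regions.append({"pixels": int(mask.sum()),
                        "mean_range_m": float(depth_m[mask].mean())})
    return regions
```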
Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael
2014-12-01
Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P > .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery. © The Author(s) 2014.
Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera
NASA Astrophysics Data System (ADS)
Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.
2011-12-01
This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot-mounted low-cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low-cost range camera, originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information provided by the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position, and escape routes. Additionally, semantic information like stairs, elevators, or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low-cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each pose estimation), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors are: 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
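As a rough illustration of the pose-recovery step, the sketch below estimates camera position and orientation by minimizing reprojection error over matched 3D-2D points, a simplified stand-in for the constrained bundle adjustment used in the paper; the pinhole model and focal length are assumptions.

```python
# Minimal pose-recovery sketch: given 3D model points matched to 2D video
# features, estimate camera pose by minimizing reprojection error.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(pose, pts3d, pts2d, f):
    rvec, t = pose[:3], pose[3:]
    Rm = Rotation.from_rotvec(rvec).as_matrix()
    pc = pts3d @ Rm.T + t                # model points in the camera frame
    proj = f * pc[:, :2] / pc[:, 2:3]    # pinhole projection (focal length f)
    return (proj - pts2d).ravel()

def recover_pose(pts3d, pts2d, f=800.0):
    """Returns a 6-vector: rotation vector (3) and translation (3)."""
    sol = least_squares(residuals, np.zeros(6), args=(pts3d, pts2d, f))
    return sol.x
```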
Deep-space navigation applications of improved ground-based optical astrometry
NASA Technical Reports Server (NTRS)
Null, G. W.; Owen, W. M., Jr.; Synnott, S. P.
1992-01-01
Improvements in ground-based optical astrometry will eventually be required for navigation of interplanetary spacecraft when these spacecraft communicate at optical wavelengths. Although such spacecraft may be some years off, preliminary versions of the astrometric technology can also be used to obtain navigational improvements for the Galileo and Cassini missions. This article describes a technology-development and observational program to accomplish this, including a cooperative effort with U.S. Naval Observatory Flagstaff Station. For Galileo, Earth-based astrometry of Jupiter's Galilean satellites may improve their ephemeris accuracy by a factor of 3 to 6. This would reduce the requirements for onboard optical navigation pictures, so that more of the data transmission capability (currently limited by high-gain antenna deployment problems) can be used for science data. Also, observations of European Space Agency (ESA) Hipparcos stars with asteroid 243 Ida may provide significantly improved navigation accuracy for a planned August 1993 Galileo spacecraft encounter.
Wang, Hao; Jiang, Jie; Zhang, Guangjun
2017-04-21
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot image the target celestial body and stars simultaneously well-exposed, because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve this problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field-of-view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established for the case when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night-sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
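The star-centroiding accuracy analyzed in such studies typically rests on an intensity-weighted center-of-gravity estimate; a minimal sketch follows, with window size and background handling as illustrative assumptions rather than the authors' procedure.

```python
# Intensity-weighted center-of-gravity star centroiding (illustrative).
import numpy as np

def star_centroid(img, x0, y0, half=5, bg=None):
    """Refine an approximate integer star location (x0, y0) in a
    (2*half+1) x (2*half+1) window of a grayscale image."""
    win = img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(float)
    if bg is None:
        bg = np.median(win)                 # simple background estimate
    w = np.clip(win - bg, 0, None)          # background-subtracted weights
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    s = w.sum()
    return x0 + (w * xs).sum() / s, y0 + (w * ys).sum() / s
```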
NASA Astrophysics Data System (ADS)
Kadosh, Itai; Sarusi, Gabby
2017-10-01
The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night-vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such a layer should be attached in close proximity to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization, formed and fitted mechanically to the visible objective design but with different lenses, in order to maintain commonality and as a proof of concept. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lenses' sizes and the mechanical housing, so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.
Mach-zehnder based optical marker/comb generator for streak camera calibration
Miller, Edward Kirk
2015-03-03
This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. The markers or combs are indicia that serve as timing pulses (markers) or as a constant-frequency train of optical pulses (comb) imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher-frequency optical signal, which is output through a fiber-coupled link to the streak camera.
Autonomous Deep-Space Optical Navigation Project
NASA Technical Reports Server (NTRS)
D'Souza, Christopher
2014-01-01
This project will advance the autonomous deep-space navigation capability applied to the Autonomous Rendezvous and Docking (AR&D) Guidance, Navigation and Control (GNC) system by testing it on hardware, particularly in a flight processor, with a goal of limited testing in the Integrated Power, Avionics and Software (IPAS) environment with the ARCM (Asteroid Retrieval Crewed Mission) DRO (Distant Retrograde Orbit) AR&D scenario. The technology to be harnessed is called 'optical flow', also known as 'visual odometry'. It is being matured in automotive and SLAM (Simultaneous Localization and Mapping) applications but has yet to be applied to spacecraft navigation. In light of the tremendous potential of this technique, we believe that NASA needs to design an optical navigation architecture that will use it, one flexible enough to be applicable to navigating around planetary bodies, such as asteroids.
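As an illustration of the kind of optical-flow front end such a system might use, the sketch below tracks sparse features between consecutive frames with OpenCV's pyramidal Lucas-Kanade tracker; a visual-odometry pipeline would then recover camera motion from the matched points. This is a generic sketch, not the project's implementation.

```python
# Sparse feature tracking between consecutive frames (pyramidal
# Lucas-Kanade), the typical front end of a visual-odometry pipeline.
import cv2

def track_features(prev_gray, next_gray):
    """prev_gray, next_gray: consecutive 8-bit grayscale frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                 pts, None)
    ok = status.ravel() == 1
    # Matched point pairs; camera motion would be estimated from these.
    return pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
```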
Orion Optical Navigation for Loss of Communication Lunar Return Contingencies
NASA Technical Reports Server (NTRS)
Getchius, Joel; Hanak, Chad; Kubitschek, Daniel G.
2010-01-01
The Orion Crew Exploration Vehicle (CEV) will replace the Space Shuttle and serve as the next-generation spaceship to carry humans back to the Moon for the first time since the Apollo program. For nominal lunar mission operations, the Mission Control Navigation team will utilize radiometric measurements to determine the position and velocity of Orion and uplink state information to support Lunar return. However, in the loss of communications contingency return scenario, Orion must safely return the crew to the Earth's surface. The navigation design solution for this loss of communications scenario is optical navigation consisting of lunar landmark tracking in low lunar orbit and star-horizon angular measurements coupled with apparent planetary diameter for Earth return trajectories. This paper describes the optical measurement errors and the navigation filter that will process those measurements to support navigation for safe crew return.
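In its simplest form, the apparent-diameter measurement mentioned above reduces to a known geometric relation: a body of radius R subtending an angle theta lies at range R / sin(theta / 2). The snippet below illustrates only this geometry; the flight filter processes such measurements with full error models.

```python
# Range from the apparent angular diameter of a body of known radius.
import math

EARTH_RADIUS_KM = 6378.137

def range_from_apparent_diameter(theta_rad, body_radius_km=EARTH_RADIUS_KM):
    """Range from observer to body center, given the subtended angle."""
    return body_radius_km / math.sin(theta_rad / 2.0)

# Example: Earth subtending 2 degrees is roughly 365,000 km away.
print(range_from_apparent_diameter(math.radians(2.0)))
```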
Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund
NASA Technical Reports Server (NTRS)
Hagyard, Mona J.
1992-01-01
The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image on four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.
Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.
2001-01-01
The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.
Method used to test the imaging consistency of binocular camera's left-right optical system
NASA Astrophysics Data System (ADS)
Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui
2016-09-01
For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing overall imaging consistency. Conventional optical-system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method to measure the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table, and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained based on a multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for carrying out imaging consistency testing for binocular cameras. When the 3σ spread of the grayscale difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered achieved. This method can be used effectively and paves the way for imaging consistency testing of binocular cameras.
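The consistency metric of the third step can be sketched directly: compute the grayscale difference D(x, y) between co-registered left and right images and its standard deviation σ. Interpreting the 5% criterion against an 8-bit full scale is my assumption.

```python
# Imaging-consistency metric: grayscale difference D(x, y) between
# co-registered left/right images and its standard deviation sigma.
import numpy as np

def imaging_consistency(left, right):
    """left, right: co-registered grayscale images (assumed 8-bit)."""
    D = left.astype(float) - right.astype(float)
    sigma = D.std()
    # Acceptance rule from the paper, with full scale assumed to be 255:
    # the 3-sigma spread of D should not exceed 5% of full scale.
    passed = 3.0 * sigma <= 0.05 * 255.0
    return D, sigma, passed
```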
Free-form reflective optics for mid-infrared camera and spectrometer on board SPICA
NASA Astrophysics Data System (ADS)
Fujishiro, Naofumi; Kataza, Hirokazu; Wada, Takehiko; Ikeda, Yuji; Sakon, Itsuki; Oyabu, Shinki
2017-11-01
SPICA (Space Infrared Telescope for Cosmology and Astrophysics) is an astronomical mission optimized for mid- and far-infrared astronomy with a cryogenically cooled 3-m class telescope, envisioned for launch in the early 2020s. The Mid-infrared Camera and Spectrometer (MCS) is a focal plane instrument for SPICA with imaging and spectroscopic observing capabilities in the mid-infrared wavelength range of 5–38 μm. MCS consists of two relay optical modules and the following four scientific optical modules: WFC (Wide Field Camera; 5' x 5' field of view, f/11.7 and f/4.2 cameras), LRS (Low Resolution Spectrometer; 2'.5 long slits, prism dispersers, f/5.0 and f/1.7 cameras, spectral resolving power R ∼ 50-100), MRS (Mid Resolution Spectrometer; echelles, integral field units by image slicer, f/3.3 and f/1.9 cameras, R ∼ 1100-3000) and HRS (High Resolution Spectrometer; immersed echelles, f/6.0 and f/3.6 cameras, R ∼ 20000-30000). Here, we present the optical design and expected optical performance of MCS. Most parts of the MCS optics adopt an off-axis reflective system, covering the wide wavelength range of 5–38 μm without chromatic aberration and minimizing problems due to changes in the shapes and refractive indices of materials from room temperature to cryogenic temperature. In order to achieve the demanding specification requirements of wide field of view, small F-number, and large spectral resolving power with compact size, we employed the paraxial and aberration analysis of off-axial optical systems (Araki 2005 [1]), a design method using free-form surfaces for compact reflective optics such as head-mounted displays. As a result, we have successfully designed compact reflective optics for MCS with as-built performance of diffraction-limited image resolution.
Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph
2017-09-26
Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobots in complex and dynamically changing environments, a highly demanding capability, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle's movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for autonomous operation in the complex dynamic settings and unpredictable situations expected in a variety of realistic nanoscale scenarios.
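The planner's core task, generating a collision-free route to a reference position from the camera-derived environment map, can be illustrated with a breadth-first search over an occupancy grid. This is a generic sketch under assumed data structures, not the paper's artificial-intelligence planner or its magnetic steering loop.

```python
# Collision-free route on an occupancy grid via breadth-first search.
from collections import deque

def plan_path(grid, start, goal):
    """grid: 2D list of booleans, True = obstacle; start, goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                               # no collision-free route exists
```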
Comet Wild 2 Up Close and Personal
NASA Technical Reports Server (NTRS)
2004-01-01
On January 2, 2004, NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced 'Vilt-2'). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are from a single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but 'stretched' so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter.
NASA Technical Reports Server (NTRS)
2004-01-01
Dubbed 'Carousel,' the rock in this image was the target of the Mars Exploration Rover Opportunity science team's outcrop 'scuff test.' The image on the left, taken by the rover's navigation camera on sol 48 of the mission (March 12, 2004), shows the rock pre-scuff. On sol 51 (March 15, 2004), Opportunity slowly rotated its left front wheel on the rock, abrading it in the same way that geology students use a scratch test to determine the hardness of minerals. The image on the right, taken by the rover's navigation camera on sol 51, shows the rock post-scuff. In this image, it is apparent that Opportunity scratched the surface of 'Carousel' and deposited dirt that it was carrying in its wheel rims.
Feedback from video for virtual reality Navigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsap, L V
2000-10-27
Important preconditions for wide acceptance of virtual reality (VR) systems include their comfort, ease, and naturalness of use. Most existing trackers suffer from discomfort-related issues. For example, body-based trackers (hand controllers, joysticks, helmet attachments, etc.) restrict spontaneity and naturalness of motion, while ground-based devices (e.g., hand controllers) limit the workspace by literally binding an operator to the ground. There are similar problems with controls. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed and a robust skin-color segmentation to account for illumination variations.
Optical fiducial timing system for X-ray streak cameras with aluminum coated optical fiber ends
Nilson, David G.; Campbell, E. Michael; MacGowan, Brian J.; Medecki, Hector
1988-01-01
An optical fiducial timing system is provided for use with interdependent groups of X-ray streak cameras (18). The aluminum-coated (80) ends of optical fibers (78) are positioned with the photocathodes (20, 60, 70) of the X-ray streak cameras (18). The other ends of the optical fibers (78) are placed together in a bundled array (90). A fiducial optical signal (96), comprised of 2ω or 1ω laser light, after introduction to the bundled array (90), travels to the aluminum-coated (82) optical fiber ends and ejects quantities of electrons (84) that are recorded on the data recording media (52) of the X-ray streak cameras (18). Since both 2ω and 1ω laser light can travel long distances in optical fiber with only slight attenuation, the initial areal power density of the fiducial optical signal (96) is well below the damage threshold of the fused silica or other material that comprises the optical fibers (78, 90). Thus the fiducial timing system can be used repeatedly over long durations of time.
Three dimensional modelling for the target asteroid of HAYABUSA
NASA Astrophysics Data System (ADS)
Demura, H.; Kobayashi, S.; Asada, N.; Hashimoto, T.; Saito, J.
The Hayabusa program is the first sample-return mission of Japan. It was launched on May 9, 2003, and will arrive at the target asteroid 25143 Itokawa in June 2005. The spacecraft has three optical navigation cameras: two wide-angle cameras and a telescopic one. The telescope with a filter wheel is named AMICA (Asteroid Multiband Imaging CAmera). We are going to model the shape of the target asteroid with this telescope (expected resolution: 1 m/pixel at 10 km distance; field of view: 5.7 square degrees; MPP-type CCD with 1024 x 1000 pixels). Because the size of Hayabusa is about 1 x 1 x 1 m, our goal is shape modeling with about 1 m precision, on the basis of a camera system that scans via the rotation of the asteroid. This image-based modeling requires sequential images via AMICA and a history of the distance between the asteroid and Hayabusa provided by a laser range finder. We established a system of hierarchically recursive search with sub-pixel matching of ground control points, which are picked up with the SUSAN operator. The matched dataset is filtered under an epipolar-geometry constraint, and the resulting group of three-dimensional points is converted to a polygon model by Delaunay triangulation. The current status of our development of the shape modeling is presented.
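The final meshing step can be sketched as follows, assuming (as a simplification) that the surface points can be parameterized by longitude and latitude about the body center before a 2D Delaunay triangulation is applied; the actual pipeline may handle the spherical topology (e.g., the seam at ±180° longitude) more carefully.

```python
# Convert matched 3D surface points to a triangle mesh via Delaunay
# triangulation in an assumed longitude/latitude parameterization.
import numpy as np
from scipy.spatial import Delaunay

def triangulate_shape(points_xyz):
    """points_xyz: (N, 3) array of surface points about the body center."""
    x, y, z = points_xyz.T
    lon = np.arctan2(y, x)
    lat = np.arctan2(z, np.hypot(x, y))
    tri = Delaunay(np.column_stack([lon, lat]))   # 2D triangulation
    return tri.simplices                          # (M, 3) vertex indices
```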
Scalar wave-optical reconstruction of plenoptic camera images.
Junker, André; Stenau, Tim; Brenner, Karl-Heinz
2014-09-01
We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to continuing miniaturization of hardware components it can be assumed that in combination with low-aperture optical systems this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects.
NASA Astrophysics Data System (ADS)
Figl, Michael; Birkfellner, Wolfgang; Watzinger, Franz; Wanschitz, Felix; Hummel, Johann; Hanel, Rudolf A.; Ewers, Rolf; Bergmann, Helmar
2002-05-01
Two main concepts of head-mounted displays (HMD) for augmented reality (AR) visualization exist: the optical and the video see-through type. Several research groups have pursued both approaches for utilizing HMDs in computer-aided surgery. While the hardware requirements for a video see-through HMD to achieve acceptable time delay and frame rate seem enormous, the clinical acceptance of such a device is doubtful from a practical point of view. Starting from previous work in displaying additional computer-generated graphics in operating microscopes, we have adapted a miniature head-mounted operating microscope for AR by integrating two very small computer displays. To calibrate the projection parameters of this so-called Varioscope AR, we used Tsai's algorithm for camera calibration. Connection to a surgical navigation system was performed by defining an open interface to the control unit of the Varioscope AR. The control unit consists of a standard PC with a dual-head graphics adapter to render and display the desired augmentation of the scene. We connected this control unit to a computer-aided surgery (CAS) system via the TCP/IP interface. In this paper we present the control unit for the HMD and its software design. We tested two different optical tracking systems: the Flashpoint (Image Guided Technologies, Boulder, CO), which provided about 10 frames per second, and the Polaris (Northern Digital, Ontario, Canada), which provided at least 30 frames per second, both with a time delay of one frame.
Optical design of endoscopic shape-tracker using quantum dots embedded in fiber bundles
NASA Astrophysics Data System (ADS)
Eisenstein, Jessica; Gavalis, Robb; Wong, Peter Y.; Cao, Caroline G. L.
2009-08-01
Colonoscopy is the current gold standard for colon cancer screening and diagnosis. However, the near-blind navigation process employed during colonoscopy results in endoscopist disorientation and scope looping, leading to missed detection of tumors, incorrect localization, and pain for the patient. A fiber-optic bend sensor, which would fit into the working channel of a colonoscope, is developed to aid navigation through the colon during colonoscopy. The bend sensor is composed of a bundle of seven fibers doped with quantum dots (QDs). Each fiber within the bundle contains a unique region made up of three zones with differently-colored QDs, spaced 120° apart circumferentially on the fiber. During bending at the QD region, light lost from the fiber's core is coupled into one of the QD zones, inducing fluorescence of the corresponding color whose intensity is proportional to the degree of bending. A complementary metal-oxide-semiconductor camera is used to obtain an image of the fluorescing end faces of the fiber bundle. The location of the fiber within the bundle, the color of fluorescence, and the fluorescence intensity are used to determine the bundle's bending location, direction, and degree of curvature, respectively. Preliminary results obtained using a single fiber with three QD zones and a seven-fiber bundle containing one active fiber with two QDs (180° apart) demonstrate the feasibility of the concept. Further developments on fiber orientation during bundling and the design of a graphical user interface to communicate bending information are also discussed.
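One plausible way to resolve the three zone intensities (spaced 120° apart) into a bend direction and magnitude is to treat each reading as the component of the bend vector along that zone's direction. The sketch below is an illustrative model of this idea, not the paper's calibration.

```python
# Resolve three QD-zone fluorescence intensities (zones at 0, 120, and
# 240 degrees around the fiber) into a bend direction and magnitude.
import numpy as np

def bend_from_zones(i0, i120, i240):
    angles = np.radians([0.0, 120.0, 240.0])
    intensities = np.array([i0, i120, i240], dtype=float)
    vx = np.sum(intensities * np.cos(angles))
    vy = np.sum(intensities * np.sin(angles))
    direction_deg = np.degrees(np.arctan2(vy, vx))   # where the fiber bends
    magnitude = np.hypot(vx, vy)                     # proportional to curvature
    return direction_deg, magnitude
```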
NASA Astrophysics Data System (ADS)
Armstrong, Roy A.; Singh, Hanumant
2006-09-01
Optical imaging of coral reefs and other benthic communities present below one attenuation depth, the limit of effective airborne and satellite remote sensing, requires the use of in situ platforms such as autonomous underwater vehicles (AUVs). The Seabed AUV, which was designed for high-resolution underwater optical and acoustic imaging, was used to characterize several deep insular shelf reefs of Puerto Rico and the US Virgin Islands using digital imagery. The digital photo transects obtained by the Seabed AUV provided quantitative data on living coral, sponge, gorgonian, and macroalgal cover as well as coral species richness and diversity. Rugosity, an index of structural complexity, was derived from the pencil-beam acoustic data. The AUV benthic assessments could provide the required information for selecting unique areas of high coral cover, biodiversity and structural complexity for habitat protection and ecosystem-based management. Data from Seabed sensors and related imaging technologies are being used to conduct multi-beam sonar surveys, 3-D image reconstruction from a single camera, photo mosaicking, image based navigation, and multi-sensor fusion of acoustic and optical data.
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Multi-color pyrometry imaging system and method of operating the same
Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde
2017-03-21
A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different than the first predetermined wavelength band.
Optical Design and Optimization of Translational Reflective Adaptive Optics Ophthalmoscopes
NASA Astrophysics Data System (ADS)
Sulai, Yusufu N. B.
The retina serves as the primary detector for the biological camera that is the eye. It is composed of numerous classes of neurons and support cells that work together to capture and process an image formed by the eye's optics, which is then transmitted to the brain. Loss of sight due to retinal or neuro-ophthalmic disease can prove devastating to one's quality of life, and the ability to examine the retina in vivo is invaluable in the early detection and monitoring of such diseases. Adaptive optics (AO) ophthalmoscopy is a promising diagnostic tool in early stages of development, still facing significant challenges before it can become a clinical tool. The work in this thesis is a collection of projects with the overarching goal of broadening the scope and applicability of this technology. We begin by providing an optical design approach for AO ophthalmoscopes that reduces the aberrations that degrade the performance of the AO correction. Next, we demonstrate how to further improve image resolution through the use of amplitude pupil apodization and non-common path aberration correction. This is followed by the development of a viewfinder which provides a larger field of view for retinal navigation. Finally, we conclude with the development of an innovative non-confocal light detection scheme which improves the non-invasive visualization of retinal vasculature and reveals the cone photoreceptor inner segments in healthy and diseased eyes.
Validation of Inertial and Optical Navigation Techniques for Space Applications with UAVS
NASA Astrophysics Data System (ADS)
Montaño, J.; Wis, M.; Pulido, J. A.; Latorre, A.; Molina, P.; Fernández, E.; Angelats, E.; Colomina, I.
2015-09-01
PERIGEO is an R&D project, funded by the INNPRONTA 2011-2014 programme of the Spanish CDTI, which aims to investigate the use of UAV technologies and processes for the validation of space-oriented technologies. For this purpose, among different space missions and technologies, a set of activities for absolute and relative navigation is being carried out to deal with the attitude and position estimation problem from a temporal image sequence from a camera in the visible spectrum and/or a Light Detection and Ranging (LIDAR) sensor. The process is covered entirely: from sensor measurements and data acquisition (images, LiDAR ranges and angles), through data pre-processing (calibration and co-registration of camera and LIDAR data), to feature and landmark extraction from the images and image/LiDAR-based state estimation. In addition to the image-processing area, a classical navigation system based on inertial sensors is also included in the research. The reason for combining both approaches is to retain navigation capability in environments or missions where a radio beacon or reference signal, such as the GNSS satellites, is not available (for example, an atmospheric flight on Titan). The rationale behind the combination of these systems is that they complement each other. The INS is capable of providing accurate position, velocity, and full attitude estimations at high data rates; however, it needs an absolute reference observation to compensate for the errors that accumulate over time due to inertial sensor inaccuracies. On the other hand, imaging observables can provide absolute and relative positioning and attitude estimations; however, they require the sensor head to point toward the ground (which may not be possible while the carrying platform is maneuvering) to provide accurate estimations, and they cannot deliver the data rates of some hundreds of Hz that an INS can. This mutual complementarity has been observed in PERIGEO, and because of this the two are combined into one system. The inertial navigation system implemented in PERIGEO is based on a classical loosely coupled INS/GNSS approach that is very similar to the implementation of the INS/imaging navigation system mentioned above. The activities envisaged in PERIGEO cover algorithm development and validation and technology testing on UAVs under representative conditions. Past activities have covered the design and development of the algorithms and systems. This paper presents the most recent activities and results in the area of image processing for robust estimation within PERIGEO, which are related to the hardware platform definition (including sensors) and its integration in UAVs. Results from the tests performed during flight campaigns in representative outdoor environments will also be presented and analyzed (the tests will be performed by the time of the full paper submission), together with a roadmap for future developments.
RESTORATION OF ATMOSPHERICALLY DEGRADED IMAGES. VOLUME 3.
AERIAL CAMERAS, LASERS, ILLUMINATION, TRACKING CAMERAS, DIFFRACTION, PHOTOGRAPHIC GRAIN, DENSITY, DENSITOMETERS, MATHEMATICAL ANALYSIS, OPTICAL SCANNING, SYSTEMS ENGINEERING, TURBULENCE, OPTICAL PROPERTIES, SATELLITE TRACKING SYSTEMS.
Experiment D009: Simple navigation
NASA Technical Reports Server (NTRS)
Silva, R. M.; Jorris, T. R.; Vallerie, E. M., III
1971-01-01
Space position-fixing techniques have been investigated by collecting data on the observable phenomena of space flight that could be used to solve the problem of autonomous navigation by the use of optical data and manual computations to calculate the position of a spacecraft. After completion of the developmental and test phases, the product of the experiment would be a manual-optical technique of orbital space navigation that could be used as a backup to onboard and ground-based spacecraft-navigation systems.
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high-speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention particularly utilizes an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length, in whole multiples of the first channel's optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes
ERIC Educational Resources Information Center
Browning, N. Andrew; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
Visually-based navigation is a key competence during spatial cognition. Animals avoid obstacles and approach goals in novel cluttered environments using optic flow to compute heading with respect to the environment. Most navigation models try either to explain data or to demonstrate navigational competence in real-world environments without regard…
Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption
NASA Astrophysics Data System (ADS)
Mathavaraj, S.; Pandiyan, R.; Padhi, R.
2017-12-01
A Legendre pseudospectral, multi-phase, constrained fuel-optimal trajectory design approach is presented in this paper. The objective is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (but is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These constraints include phase-1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase-2 ('attitude hold'), holding the lander attitude for 35 s of vision-camera processing to obtain the navigation error; and phase-3 ('braking with precise navigation') from the end of phase-2 to 100 m altitude over the landing site, where navigation accuracy is good (due to vision-camera navigation inputs). At the end of phase-1, there are constraints on position and attitude. In phase-2, the attitude must be held throughout. At the end of phase-3, the constraints include accuracy in position, velocity, and attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.
Using Arago's spot to monitor optical axis shift in a Petzval refractor.
Bruns, Donald G
2017-03-10
Measuring the change in the optical alignment of a camera attached to a telescope is necessary to perform astrometric measurements. Camera movement when the telescope is refocused changes the plate constants, invalidating the calibration. Monitoring the shift in the optical axis requires a stable internal reference source. This is easily implemented in a Petzval refractor by adding an illuminated pinhole and a small obscuration that creates a spot of Arago on the camera. Measurements of the optical axis shift for a commercial telescope are given as an example.
Real-time full-motion color Flash lidar for target detection and identification
NASA Astrophysics Data System (ADS)
Nelson, Roy; Coppock, Eric; Craig, Rex; Craner, Jeremy; Nicks, Dennis; von Niederhausern, Kurt
2015-05-01
Greatly improved understanding of areas and objects of interest can be gained when real time, full-motion Flash LiDAR is fused with inertial navigation data and multi-spectral context imagery. On its own, full-motion Flash LiDAR provides the opportunity to exploit the z dimension for improved intelligence vs. 2-D full-motion video (FMV). The intelligence value of this data is enhanced when it is combined with inertial navigation data to produce an extended, georegistered data set suitable for a variety of analyses. Further, when fused with multispectral context imagery the typical point cloud now becomes a rich 3-D scene which is intuitively obvious to the user and allows rapid cognitive analysis with little or no training. Ball Aerospace has developed and demonstrated a real-time, full-motion LiDAR system that fuses context imagery (VIS to MWIR demonstrated) and inertial navigation data in real time, and can stream these information-rich geolocated/fused 3-D scenes from an airborne platform. In addition, since the higher-resolution context camera is boresighted and frame-synchronized to the LiDAR camera and the LiDAR camera is an array sensor, techniques have been developed to rapidly interpolate the LiDAR pixel values, creating a point cloud that has the same resolution as the context camera, effectively creating a high definition (HD) LiDAR image. This paper presents a design overview of the Ball TotalSight™ LIDAR system along with typical results over urban and rural areas collected from both rotary and fixed-wing aircraft. We conclude with a discussion of future work.
Comparison Between RGB and Rgb-D Cameras for Supporting Low-Cost Gnss Urban Navigation
NASA Astrophysics Data System (ADS)
Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.
2018-05-01
Pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, which prevent correct reception of the satellite signal. The bridging of GNSS outages, as well as the reconstruction of vehicle attitude, can be achieved by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use, and raw-data accessibility. The latter was selected for the high quality of the acquired images and for the possibility of mounting fixed-focal-length lenses, with lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. The filter differs depending on the visual data acquisition system, because RGB-D cameras acquire both RGB and depth data, allowing the scale problem, which is typical of image-only solutions, to be solved. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that using a Kinect device to support a u-blox low-cost receiver led to a trajectory with decimeter accuracy, about 15% better than the one obtained when using the Canon EOS M camera.
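A heavily simplified planar sketch of the fusion idea follows: an extended Kalman filter whose state is position, heading, and speed, with GNSS fixes updating position and image-derived heading updating attitude. The paper's filter is richer (full 3D attitude, and scale handling for the RGB-only case), and all noise values below are assumptions.

```python
# Simplified 2D EKF fusing GNSS positions with camera-derived heading.
# Heading from vision is assumed to be pre-integrated from the relative
# rotations between successive image pairs; angle wrap-around is ignored
# for brevity.
import numpy as np

class GnssVisionEkf:
    def __init__(self):
        self.x = np.zeros(4)            # state: [x, y, heading, speed]
        self.P = np.eye(4)

    def predict(self, dt, q=0.1):
        px, py, th, v = self.x
        self.x = np.array([px + v * np.cos(th) * dt,
                           py + v * np.sin(th) * dt, th, v])
        F = np.array([[1, 0, -v * np.sin(th) * dt, np.cos(th) * dt],
                      [0, 1,  v * np.cos(th) * dt, np.sin(th) * dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        self.P = F @ self.P @ F.T + q * dt * np.eye(4)

    def _update(self, z, H, R):
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_gnss(self, pos_xy, sigma=3.0):
        H = np.array([[1., 0, 0, 0], [0, 1., 0, 0]])
        self._update(np.asarray(pos_xy), H, sigma**2 * np.eye(2))

    def update_visual_heading(self, heading, sigma=0.02):
        H = np.array([[0., 0, 1., 0]])
        self._update(np.array([heading]), H, np.array([[sigma**2]]))
```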
New Airborne Sensors and Platforms for Solving Specific Tasks in Remote Sensing
NASA Astrophysics Data System (ADS)
Kemper, G.
2012-07-01
A huge number of small and medium-sized sensors have entered the market. Today's mid-format sensors reach 80 MPix and make it possible to run medium-sized projects, comparable with the first large-format digital cameras about 6 years ago. New high-quality lenses and new developments in integration have prepared the market for photogrammetric work. Companies such as Phase One or Hasselblad and integrators such as Trimble, Optec, and others have utilized these cameras for professional image production. In combination with small camera stabilizers they can also be used in small aircraft, making the equipment compact and easily transportable, e.g., for rapid-assessment purposes. The combination of different camera sensors enables multi- or hyperspectral installations, useful, e.g., for agricultural or environmental projects. Arrays of oblique-viewing cameras are on the market as well; in many cases these are small and medium-format sensors combined as rotating or shifting devices, or simply as a fixed setup. Besides proper camera installation and integration, the software that controls the hardware and guides the pilot has to solve many more tasks than a normal FMS did in the past. Small and relatively cheap laser scanners (e.g., Riegl) are on the market, and properly combining them with MS cameras and integrated planning and navigation is a challenge that has been solved by various software packages. Turnkey solutions are available, e.g., for monitoring power line corridors, where taking images is just a part of the job. Integrating thermal camera systems with laser scanning and video capture must be combined with specific information about the objects, stored in a database and linked when approaching the navigation point.
Effect of Olfactory Stimulus on the Flight Course of a Honeybee, Apis mellifera, in a Wind Tunnel.
Ikeno, Hidetoshi; Akamatsu, Tadaaki; Hasegawa, Yuji; Ai, Hiroyuki
2013-12-31
It is known that the honeybee, Apis mellifera, uses olfactory stimuli as important information for orienting to food sources. Several studies on olfactory-induced orientation flight have been conducted in wind tunnels and in the field. These studies suggest that optical sensing provides the main navigational information, supplemented by olfactory signals, and that the flight course follows from the combination of these sensory inputs. However, it is not clear how olfactory information is reflected in flight navigation. In this study, we analyzed the detailed properties of flight oriented toward an odor source in a wind tunnel. We recorded flying bees with a video camera to analyze the flight area, speed, angular velocity, and trajectory. After bees were trained to be attracted to a feeder, flight trajectories with and without an olfactory stimulus located upwind of the feeder were compared. The results showed that honeybees flew back and forth in the proximity of the odor source, with the search range corresponding approximately to the area of odor spread. It was also shown that the angular velocity differed inside and outside the odor spread area, and that trajectories tended to be bent or curved just outside the area.
Comparison of Factorization-Based Filtering for Landing Navigation
NASA Technical Reports Server (NTRS)
McCabe, James S.; Brown, Aaron J.; DeMars, Kyle J.; Carson, John M., III
2017-01-01
This paper develops and analyzes methods for fusing inertial navigation data with external data, such as data obtained from an altimeter and a star camera. The particular filtering techniques are based upon factorized forms of the Kalman filter, specifically the UDU and Cholesky factorizations. The factorized Kalman filters are utilized to ensure numerical stability of the navigation solution. Simulations are carried out to compare the performance of the different approaches along a lunar descent trajectory using inertial and external data sources. It is found that the factorized forms improve upon conventional filtering techniques in terms of ensuring numerical stability for the investigated landing navigation scenario.
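A minimal sketch of a square-root (Cholesky-factor) Kalman measurement update in the spirit of the factorized filters compared here, assuming a scalar measurement; this is Potter's classic algorithm, not the paper's specific UDU implementation.

```python
# Hedged sketch of Potter's square-root Kalman measurement update.
# The covariance is carried as a factor S with P = S @ S.T, which keeps
# the covariance symmetric and positive definite numerically.
import numpy as np

def potter_update(x, S, z, H, R):
    """Scalar measurement z = H @ x + v, v ~ N(0, R); returns (x+, S+)."""
    V = S.T @ H                        # n-vector
    sigma = 1.0 / (V @ V + R)
    K = sigma * (S @ V)                # Kalman gain
    x = x + K * (z - H @ x)
    alpha = sigma / (1.0 + np.sqrt(sigma * R))
    S = S - alpha * np.outer(S @ V, V)
    return x, S
```

Propagating the factor S instead of the full covariance P is what provides the numerical stability that the paper investigates for the landing navigation scenario.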
Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Liu, Zejin
2015-10-20
Stable information from the skylight polarization pattern can be used for navigation, with advantages such as good anti-interference performance and no cumulative error effect. Existing methods of skylight polarization measurement, however, are either weak in real-time performance or require a complex system. Inspired by the navigational capability of the Cataglyphis ant and its compound eyes, we introduce a new approach that acquires the all-sky image under different polarization directions with one camera and without a rotating polarizer, so as to detect the polarization pattern across the full sky in a single snapshot. Our system is based on a handheld light field camera with a wide-angle lens and a triplet linear polarizer placed over its aperture stop. Experimental results agree with the theoretical predictions. Both the real-time detection and the simple, low-cost architecture demonstrate the advantages of the proposed approach.
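A minimal sketch of how the polarization pattern could be recovered per pixel from three such images; the polarizer orientations (0°, 60°, 120°) are an assumption for illustration, and the paper's light-field processing chain is not reproduced.

```python
# Hedged sketch: linear Stokes parameters, degree and angle of linear
# polarization from three intensity images behind polarizers at 0/60/120 deg.
import numpy as np

def stokes_from_three(i0, i60, i120):
    s0 = (2.0 / 3.0) * (i0 + i60 + i120)          # total intensity
    s1 = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)    # 0/90 deg preference
    s2 = (2.0 / np.sqrt(3.0)) * (i60 - i120)      # 45/135 deg preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)                # radians, per pixel
    return dolp, aop
```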
ERIC Educational Resources Information Center
Hoge, Robert Joaquin
2010-01-01
Within the sphere of education, navigating a digital world has become a matter of necessity for the developing professional, as with the advent of Document Camera Technology (DCT). This study explores the pedagogical implications of implementing DCT, to see if there is a relationship between teachers' comfort with DCT and the…
Comet Wild 2 Up Close and Personal
2004-01-02
On January 2, 2004 NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced "Vilt-2"). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure: the frame on the left depicts the comet as the human eye would see it; the frame on the right depicts the same image "stretched" so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter. http://photojournal.jpl.nasa.gov/catalog/PIA05571
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera that cooperates with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car, in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Each obstacle is then separated from the others into an independent region and given a confidence level indicating whether it is approaching. Results on an open dataset and on our own autonomous navigation car show that the method is effective for real-time detection of distant moving obstacles.
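A minimal sketch of this pipeline, assuming OpenCV and an approximately planar background so that a homography can model the camera's ego-motion; the paper's own compensation scheme and confidence levels are not reproduced.

```python
# Hedged sketch: frame differencing after homography-based ego-motion
# compensation; residual motion regions are reported as candidate obstacles.
import cv2
import numpy as np

def detect_moving_obstacles(prev_gray, curr_gray, min_area=200):
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return []
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    # RANSAC homography models the dominant (background) motion.
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```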
Image Intensifier Modules For Use With Commercially Available Solid State Cameras
NASA Astrophysics Data System (ADS)
Murphy, Howard; Tyler, Al; Lake, Donald W.
1989-04-01
A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled from two or three electronic subassemblies at very low labor cost, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent in a small, lightweight, and rugged image sensing component. Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens input units. Fiber optic faceplate cameras are used for a wide variety of applications; a common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low light level and/or short exposure time situations.
Mini AERCam: A Free-Flying Robot for Space Inspection
NASA Technical Reports Server (NTRS)
Fredrickson, Steven
2001-01-01
The NASA Johnson Space Center Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a free-flying camera system for remote viewing and inspection of human spacecraft. The AERCam project team is currently developing a miniaturized version of AERCam known as Mini AERCam, a spherical nanosatellite 7.5 inches in diameter. Mini AERCam development builds on the success of AERCam Sprint, a 1997 Space Shuttle flight experiment, by integrating new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving these productivity-enhancing capabilities in a smaller package depends on aggressive component miniaturization. Technology innovations being incorporated include micro-electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for laboratory demonstration on an air-bearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the air-bearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides on-orbit views of the Space Shuttle and International Space Station unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by space-walking crewmembers.
Spickermann, Gunnar; Friederich, Fabian; Roskos, Hartmut G; Bolívar, Peter Haring
2009-11-01
We present a 64x48 pixel 2D electro-optical terahertz (THz) imaging system using a photonic mixing device time-of-flight camera as an optical demodulating detector array. The combination of electro-optic detection with a time-of-flight camera increases sensitivity drastically, enabling the use of a nonamplified laser source for high-resolution real-time THz electro-optic imaging.
Application of PLZT electro-optical shutter to diaphragm of visible and mid-infrared cameras
NASA Astrophysics Data System (ADS)
Fukuyama, Yoshiyuki; Nishioka, Shunji; Chonan, Takao; Sugii, Masakatsu; Shirahata, Hiromichi
1997-04-01
Pb0.91La0.09(Zr0.65,Ti0.35)0.9775O3 (PLZT 9/65/35), commonly used as an electro-optical shutter, exhibits large phase retardation at low applied voltage. The shutter has the following features: (1) high shutter speed, (2) wide optical transmittance, and (3) high optical density in the 'OFF' state. If applied as the diaphragm of a video camera, it could protect the sensor from intense light. We have tested the basic characteristics of the PLZT electro-optical shutter and its imaging resolving power. The ratio of optical transmittance between the 'ON' and 'OFF' states was 1.1 × 10^3. The response time of the PLZT shutter from the 'ON' state to the 'OFF' state was 10 microseconds. The MTF reduction when placing the PLZT shutter in front of the visible video camera lens was only 12 percent at a spatial frequency of 38 cycles/mm, the sensor resolution of the video camera. Moreover, we captured visible images with the Si-CCD video camera: a He-Ne laser ghost image was observed in the 'ON' state, whereas the ghost image was totally shut out in the 'OFF' state. From these tests, the PLZT shutter has been found useful as the diaphragm of a visible video camera. The measured optical transmittance of a PLZT wafer with no antireflection coating was 78 percent over the range from 2 to 6 microns.
Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor
NASA Astrophysics Data System (ADS)
Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui
2018-05-01
At present, both point-source and imaging polarization navigation devices can only output angle information, which means that the velocity of the carrier cannot be extracted from the polarization field pattern directly. Optical flow is an image-based method for calculating the velocity of pixel movement in an image. For ordinary optical flow, however, both the pixel-value differences and the calculation accuracy are reduced in weak light. Polarization imaging technology can improve both the detection accuracy and the recognition probability of a target, because it acquires extra multi-dimensional polarization information about the target's radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarized optical flow algorithm is proposed, and it is verified that the algorithm adapts well to weak light and can extend the application range of polarization navigation sensors. This research lays a foundation for future day-and-night, all-weather polarization navigation applications.
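A minimal sketch of the combination described (an assumption for illustration, not the authors' algorithm): run a standard dense optical flow on degree-of-linear-polarization (DoLP) images instead of raw intensity, so that low-light frames still carry usable contrast.

```python
# Hedged sketch: dense optical flow computed on DoLP images.
import cv2
import numpy as np

def polarized_flow(dolp_prev, dolp_curr):
    # DoLP is in [0, 1]; rescale to 8-bit for OpenCV's Farneback flow.
    a = (np.clip(dolp_prev, 0, 1) * 255).astype(np.uint8)
    b = (np.clip(dolp_curr, 0, 1) * 255).astype(np.uint8)
    return cv2.calcOpticalFlowFarneback(a, b, None, pyr_scale=0.5, levels=3,
                                        winsize=15, iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
```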
Demonstrations of Optical Spectra with a Video Camera
ERIC Educational Resources Information Center
Kraftmakher, Yaakov
2012-01-01
The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…
Optical Design of the LSST Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olivier, S S; Seppala, L; Gilmore, K
2008-07-16
The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, modified Paul-Baker design, with an 8.4-meter primary mirror, a 3.4-m secondary, and a 5.0-m tertiary feeding a camera system that includes a set of broad-band filters and refractive corrector lenses to produce a flat focal plane with a field of view of 9.6 square degrees. Optical design of the camera lenses and filters is integrated with optical design of telescope mirrors to optimize performance, resulting in excellent image quality over the entire field from ultra-violet to near infra-red wavelengths. The LSST camera optics design consists of three refractive lenses with clear aperture diameters of 1.55 m, 1.10 m and 0.69 m, and six interchangeable, broad-band filters with clear aperture diameters of 0.75 m. We describe the methodology for fabricating, coating, mounting and testing these lenses and filters, and we present the results of detailed tolerance analyses, demonstrating that the camera optics will perform to the specifications required to meet their performance goals.
Hinken, David; Schinke, Carsten; Herlufsen, Sandra; Schmidt, Arne; Bothe, Karsten; Brendel, Rolf
2011-03-01
We report in detail on the luminescence imaging setup developed in our laboratory over recent years. In this setup, the luminescence emission of silicon solar cells or silicon wafers is analyzed quantitatively. Charge carriers are excited either electrically (electroluminescence), using a power supply for carrier injection, or optically (photoluminescence), using a laser as the illumination source. The luminescence emission arising from radiative recombination of the stimulated charge carriers is measured with spatial resolution using a camera. We give details of the various components, including the cameras, the optical filters for electro- and photoluminescence, the semiconductor laser, and the four-quadrant power supply. We compare a silicon charge-coupled device (CCD) camera with a back-illuminated silicon CCD camera comprising an electron-multiplier gain and with a complementary metal-oxide-semiconductor indium gallium arsenide camera. For the detection of the luminescence emission of silicon, we analyze the dominant noise sources along with the signal-to-noise ratio of all three cameras under different operating conditions.
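As a hedged aside (the abstract does not give the authors' noise model), the signal-to-noise ratio of such a luminescence camera is commonly estimated from shot noise, dark current, and read noise:

\mathrm{SNR} = \frac{\eta\,N_{\mathrm{ph}}}{\sqrt{\eta\,N_{\mathrm{ph}} + N_{\mathrm{dark}}\,t + \sigma_{\mathrm{read}}^{2}}}

where \eta is the quantum efficiency, N_{\mathrm{ph}} the number of luminescence photons reaching a pixel, N_{\mathrm{dark}} the dark-current generation rate, t the exposure time, and \sigma_{\mathrm{read}} the read noise in electrons.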
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with a CMOS multi-pixel optical sensor. The CAOS-CMOS camera engages the classic CMOS sensor light-staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable-gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems.
Automated sea floor extraction from underwater video
NASA Astrophysics Data System (ADS)
Kelly, Lauren; Rahmes, Mark; Stiver, James; McCluskey, Mike
2016-05-01
Ocean floor mapping using video is a method to simply and cost-effectively record large areas of the seafloor. Obtaining visual and elevation models has noteworthy applications in search and recovery missions. Hazards to navigation are abundant and pose a significant threat to the safety, effectiveness, and speed of naval operations and commercial vessels. This project's objective was to develop a workflow to automatically extract metadata from marine video and create optical image and elevation surface mosaics. Three developments made this possible. First, optical character recognition (OCR) by means of two-dimensional correlation, using a known character set, allowed for the capture of metadata from the image files. Second, exploiting the image metadata (i.e., latitude, longitude, heading, camera angle, and depth readings) allowed for the determination of the location and orientation of each image frame in the mosaic; image registration further improved the accuracy of mosaicking. Finally, overlapping data allowed us to determine height information: a disparity map was created using the parallax from overlapping viewpoints of a given area, and the relative height data were utilized to create a three-dimensional, textured elevation map.
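A minimal sketch of the first step, assuming OpenCV: recognizing characters burned into video frames by two-dimensional correlation against templates of a known character set. The project's actual glyph set and thresholds are not given in the abstract.

```python
# Hedged sketch: OCR of overlaid characters by normalized 2D correlation
# against templates of a known character set (nearby duplicate hits would
# need suppression in practice).
import cv2

def read_overlay(gray_frame, templates, threshold=0.8):
    """templates: dict mapping a character to its grayscale template image."""
    hits = []
    for ch, tmpl in templates.items():
        res = cv2.matchTemplate(gray_frame, tmpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = (res >= threshold).nonzero()
        hits.extend((int(x), ch) for x in xs)
    # Order the recognized characters left to right.
    return ''.join(ch for _, ch in sorted(hits))
```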
NASA Astrophysics Data System (ADS)
Hertel, R. J.; Hoilman, K. A.
1982-01-01
The effects of model vibration, camera and window nonlinearities, and aerodynamic disturbances in the optical path on the measurement of target position are examined. Window distortion, temperature and pressure changes, laminar and turbulent boundary layers, shock waves, target intensity, and target vibration are also studied. A general computer program was developed to trace optical rays through these disturbances. The use of a charge-injection-device camera as an alternative to the image dissector camera was also examined.
Gaspra Optical Navigation Image
1996-02-08
This time-exposure picture of the asteroid Gaspra and background stars is one of four optical navigation images made by NASA's Galileo imaging system to improve knowledge of Gaspra's location for the spacecraft flyby. http://photojournal.jpl.nasa.gov/catalog/PIA00229
NASA Technical Reports Server (NTRS)
1976-01-01
Development of the F/48, F/96 Planetary Camera for the Large Space Telescope is discussed. Instrument characteristics, optical design, and CCD camera submodule thermal design are considered along with structural subsystem and thermal control subsystem. Weight, electrical subsystem, and support equipment requirements are also included.
Intelligent navigation and accurate positioning of an assist robot in indoor environments
NASA Astrophysics Data System (ADS)
Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke
2017-12-01
Robot navigation and accurate positioning in indoor environments are still challenging tasks, especially in robot applications that assist disabled and/or elderly people in museum or art gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot, guiding it safely to the goal location by imitating the supervisor's motions and positioning it at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, using a low-cost camera to track the target picture and a laser range finder for safe navigation. Results show that a neural controller trained with the Conjugate Gradient Backpropagation algorithm gives a robust response, guiding the mobile robot accurately to the goal position.
Visual homing with a pan-tilt based stereo camera
NASA Astrophysics Data System (ADS)
Nirmal, Paramesh; Lyons, Damian M.
2013-01-01
Visual homing is a navigation method based on comparing a stored image of the goal location with the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale-change information from SIFT (Homing in Scale Space, HiSS). HiSS uses SIFT feature scale-change information to determine the distance between the robot and the goal location. Since the scale component is discrete, with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images, combined with stereo data obtained from the stereo camera, to extend the keypoint vector with a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
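A minimal sketch (names and structure assumed for illustration) of augmenting a 2D keypoint with a depth value triangulated from stereo disparity:

```python
# Hedged sketch: extend a keypoint with depth z = f * B / d, where f is the
# focal length in pixels, B the stereo baseline, and d the disparity.
from dataclasses import dataclass

@dataclass
class DepthKeypoint:
    x: float       # pixel column in the left image
    y: float       # pixel row
    scale: float   # SIFT scale
    z: float       # metric depth from stereo

def with_depth(x, y, scale, disparity, focal_px, baseline_m):
    # Standard pinhole stereo triangulation; disparity must be positive.
    z = focal_px * baseline_m / disparity
    return DepthKeypoint(x, y, scale, z)
```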
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Position Accuracy Analysis of a Robust Vision-Based Navigation
NASA Astrophysics Data System (ADS)
Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.
2018-05-01
Using images to determine camera position and attitude is a consolidated method, widespread in applications such as UAV navigation. In harsh environments, where GNSS can be degraded or denied, image-based positioning is a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the test area. A position accuracy analysis is performed, and the effect of the proposed robust method is validated.
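A minimal sketch of one common way to make such positioning robust to blunders, assuming OpenCV: estimate the camera pose from 2D-3D correspondences with RANSAC so that outlying matches are rejected. The paper's own robust estimator is not specified in the abstract.

```python
# Hedged sketch: robust camera pose from matches between image features
# and 3D map points, using RANSAC-based PnP to reject blunders.
import cv2
import numpy as np

def robust_pose(pts3d, pts2d, K, dist=None):
    """pts3d: Nx3 map points; pts2d: Nx2 image points; K: 3x3 intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float32), np.asarray(pts2d, np.float32),
        K, dist, reprojectionError=3.0, confidence=0.99)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)          # rotation matrix
    cam_pos = -R.T @ tvec.reshape(3)    # camera position in the map frame
    return cam_pos, R, inliers
```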
Mariner Mars 1971 optical navigation demonstration
NASA Technical Reports Server (NTRS)
Born, G. H.; Duxbury, T. C.; Breckenridge, W. G.; Acton, C. H.; Mohan, S.; Jerath, N.; Ohtakay, H.
1974-01-01
The feasibility of using a combination of spacecraft-based optical data and earth-based Doppler data to perform near-real-time approach navigation was demonstrated by the Mariner Mars 71 Project. The important findings, conclusions, and recommendations are documented. A summary along with publications and papers giving additional details on the objectives of the demonstration are provided. Instrument calibration and performance as well as navigation and science results are reported.
Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging
Elmi-Terander, Adrian; Skulason, Halldor; Söderman, Michael; Racadio, John; Homan, Robert; Babic, Drazenko; van der Vaart, Nijs; Nachabe, Rami
2016-01-01
Study Design. A cadaveric laboratory study. Objective. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Summary of Background Data. Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement, without the use of x-ray fluoroscopy, and thus open the route to image-guided minimally invasive therapy in the thoracic spine. Methods. ARSN encompasses a surgical table, a motorized flat detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Two neurosurgeons placed 94 pedicle screws in the thoracic spine of four cadavers using ARSN on one side of the spine (47 screws) and the free-hand technique on the contralateral side. X-ray fluoroscopy was not used for either technique. Four independent reviewers assessed the postoperative scans using the Gertzbein grading. Morphometric measurements of the pedicles' axial and sagittal widths and angles, as well as the vertebrae's axial and sagittal rotations, were performed to identify risk factors for breaches. Results. ARSN was feasible and superior to the free-hand technique with respect to overall accuracy (85% vs. 64%, P < 0.05), with significant increases in perfectly placed screws (51% vs. 30%, P < 0.05) and reductions in breaches beyond 4 mm (2% vs. 25%, P < 0.05). All morphometric dimensions, except for vertebral body axial rotation, were risk factors for larger breaches when screws were placed with the free-hand method. Conclusion. ARSN without fluoroscopy was feasible and demonstrated higher accuracy than the free-hand technique for thoracic pedicle screw placement. Level of Evidence: N/A PMID:27513166
NASA Astrophysics Data System (ADS)
Leroux, B.; Cali, J.; Verdun, J.; Morel, L.; He, H.
2017-08-01
Airborne LiDAR systems require Direct Georeferencing (DG) in order to compute the coordinates of surveyed points in the mapping frame. A UAV platform is no exception, but its payload has to be lighter than that installed onboard manned aircraft, so the manufacturer needs an alternative to heavy sensors and navigation systems. For georeferencing these data, a possible solution is to replace the Inertial Measurement Unit (IMU) with a camera and record the optical flow. The frames are then processed photogrammetrically so as to extract the External Orientation Parameters (EOP) and, therefore, the path of the camera. The major advantages of this method, called Visual Odometry (VO), are its low cost, the absence of IMU-induced drift, and the option of using Ground Control Points (GCPs) as in airborne photogrammetric surveys. In this paper we present a test bench designed to assess the reliability and accuracy of the attitude estimated from VO outputs. The test bench consists of a trolley that embeds a GNSS receiver, an IMU sensor, and a camera. The LiDAR is replaced by a tacheometer in order to survey control points that are already known. We have also developed a methodology, applied to this test bench, for the calibration of the external parameters and the computation of the surveyed point coordinates. Several tests have revealed a difference of about 2-3 centimeters between the measured control point coordinates and the known values.
NASA Astrophysics Data System (ADS)
Darwiesh, M.; El-Sherif, Ashraf F.; El-Ghandour, Hatem; Aly, Hussein A.; Mokhtar, A. M.
2011-03-01
Optical imaging systems are widely used in applications including tracking for portable scanners; input pointing devices for laptop computers, cell phones, and cameras; fingerprint-identification scanners; optical navigation for target tracking; and optical computer mice. We present experimental work measuring and analyzing the laser speckle pattern (LSP) produced by different optical sources (various color LEDs, a 3 mW diode laser, and a 10 mW He-Ne laser) on different operating surfaces (Gabor hologram diffusers), and how these affect the performance of optical imaging systems in terms of speckle size and signal-to-noise ratio (where the signal is represented by the speckle patches that carry information, and the noise by the remaining part of the selected image). Theoretical and experimental colorimetry studies for the optical sources used are presented: color correction is applied to the color images captured by the optical imaging system to produce realistic images containing most of the information, by selecting a suitable gray scale; this is done by calculating accurate Red-Green-Blue (RGB) color components using the measured source spectra and the International Telecommunication Union (ITU-R 709) color matching functions for CRT phosphors (Trinitron, SONY model). These studies establish the relations between the signal-to-noise ratio and the different diffusers for each light source. The source-surface coupling is discussed, with the conclusion that the performance of the optical imaging system for a given source varies from worst to best depending on the operating surface. The sensor-surface coupling is studied for the He-Ne laser case: the speckle size ranges from 4.59 to 4.62 μm, approximately the same for all produced diffusers (consistent with the fact that speckle size is independent of the illuminated surface), whereas the calculated signal-to-noise ratio takes values ranging from 0.71 to 0.92 for the different diffusers. This means that the surface texture affects the performance of the optical sensor, since all images were captured under the same conditions [same source (He-Ne laser), same experimental set-up distances, and same sensor (CCD camera)].
1999-08-01
Electro-Optic Sensor Integration Technology (NEOSIT) software application. The design is highly modular and based on COTS tools to facilitate integration with sensors, navigation and digital data sources already installed on different host
Optical Navigation Image of Ganymede
1996-06-06
NASA's Galileo spacecraft, now in orbit around Jupiter, returned this optical navigation image on June 3, 1996, showing that the spacecraft is accurately targeted for its first flyby of the giant moon Ganymede on June 27. http://photojournal.jpl.nasa.gov/catalog/PIA00273
Image navigation as a means to expand the boundaries of fluorescence-guided surgery
NASA Astrophysics Data System (ADS)
Brouwer, Oscar R.; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L.; Wendler, Thomas; Valdés-Olmos, Renato A.; van der Poel, Henk G.; van Leeuwen, Fijs W. B.
2012-05-01
Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.
View From Camera Not Used During Curiosity's First Six Months on Mars
2017-12-08
This view of Curiosity's left-front and left-center wheels and of marks made by wheels on the ground in the "Yellowknife Bay" area comes from one of six cameras used on Mars for the first time more than six months after the rover landed. The left Navigation Camera (Navcam) linked to Curiosity's B-side computer took this image during the 223rd Martian day, or sol, of Curiosity's work on Mars (March 22, 2013). The wheels are 20 inches (50 centimeters) in diameter. Curiosity carries a pair of main computers, redundant to each other, in order to have a backup available if one fails. Each of the computers, A-side and B-side, also has other redundant subsystems linked to just that computer. Curiosity operated on its A-side from before the August 2012 landing until Feb. 28, when engineers commanded a switch to the B-side in response to a memory glitch on the A-side. One set of activities after switching to the B-side computer has been to check the six engineering cameras that are hard-linked to that computer. The rover's science instruments, including five science cameras, can each be operated by either the A-side or B-side computer, whichever is active. However, each of Curiosity's 12 engineering cameras is linked to just one of the computers. The engineering cameras are the Navigation Camera (Navcam), the Front Hazard-Avoidance Camera (Front Hazcam) and Rear Hazard-Avoidance Camera (Rear Hazcam). Each of those three named cameras has four cameras as part of it: two stereo pairs of cameras, with one pair linked to each computer. Only the pairs linked to the active computer can be used, and the A-side computer was active from before landing, in August, until Feb. 28. All six of the B-side engineering cameras have been used during March 2013 and checked out OK. Image Credit: NASA/JPL-Caltech
Fusion of laser and image sensory data for 3-D modeling of the free navigation space
NASA Technical Reports Server (NTRS)
Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.
1994-01-01
A fusion technique that combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.
NASA Astrophysics Data System (ADS)
Moore, Lori
Plenoptic cameras and Shack-Hartmann wavefront sensors are lenslet-based optical systems that do not form a conventional image. The addition of a lens array into these systems allows for the aberrations generated by the combination of the object and the optical components located prior to the lens array to be measured or corrected with post-processing. This dissertation provides a ray selection method to determine the rays that pass through each lenslet in a lenslet-based system. This first-order, ray trace method is developed for any lenslet-based system with a well-defined fore optic, where in this dissertation the fore optic is all of the optical components located prior to the lens array. For example, in a plenoptic camera the fore optic is a standard camera lens. Because a lens array at any location after the exit pupil of the fore optic is considered in this analysis, it is applicable to both plenoptic cameras and Shack-Hartmann wavefront sensors. Only a generic, unaberrated fore optic is considered, but this dissertation establishes a framework for considering the effect of an aberrated fore optic in lenslet-based systems. The rays from the fore optic that pass through a lenslet placed at any location after the fore optic are determined. This collection of rays is reduced to three rays that describe the entire lenslet ray set. The lenslet ray set is determined at the object, image, and pupil planes of the fore optic. The consideration of the apertures that define the lenslet ray set for an on-axis lenslet leads to three classes of lenslet-based systems. Vignetting of the lenslet rays is considered for off-axis lenslets. Finally, the lenslet ray set is normalized into terms similar to the field and aperture vector used to describe the aberrated wavefront of the fore optic. The analysis in this dissertation is complementary to other first-order models that have been developed for a specific plenoptic camera layout or Shack-Hartmann wavefront sensor application. This general analysis determines the location where the rays of each lenslet pass through the fore optic establishing a framework to consider the effect of an aberrated fore optic in a future analysis.
Close-range photogrammetry with video cameras
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1985-01-01
Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks
NASA Astrophysics Data System (ADS)
Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min
2015-10-01
Vehicle positioning has been the subject of extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error-prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the positions of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information, such as speed, lane changes, and driver's condition, through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate a target vehicle's position from only two image points of the target vehicle using stereo vision; for this, we use the rear LEDs on target vehicles as image points. Simulation results show that our neural-network-based method achieves better accuracy than the computer-vision method.
Target Acquisition for Projectile Vision-Based Navigation
2014-03-01
The report covers target acquisition for projectile vision-based navigation; its recoverable front matter lists sections on future work and references, simulation results (Appendix A), and a derivation of the ground resolution of a diffraction-limited pinhole camera (Appendix B), along with figures on visual acquisition and target recognition results and on differential object and image areas for a pinhole camera.
Optical performance analysis of plenoptic camera systems
NASA Astrophysics Data System (ADS)
Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas
2014-09-01
Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera, allowing it to capture both spatial and angular information within a single shot. This plenoptic camera can obtain depth information and provide it for a multitude of applications, e.g., artificial re-focusing of photographs. Without the need for active illumination, it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in decreased depth resolution. Moreover, the gain in angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of the possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.
The Sensor Test for Orion RelNav Risk Mitigation (STORRM) Development Test Objective
NASA Technical Reports Server (NTRS)
Christian, John A.; Hinkel, Heather; D'Souza, Christopher N.; Maguire, Sean; Patangan, Mogi
2011-01-01
The Sensor Test for Orion Relative-Navigation Risk Mitigation (STORRM) Development Test Objective (DTO) flew aboard the Space Shuttle Endeavour on STS-134 in May-June 2011 and was designed to characterize the performance of the flash LIDAR and docking camera (DC) being developed for the Orion Multi-Purpose Crew Vehicle. The flash LIDAR, called the Vision Navigation Sensor (VNS), will be the primary navigation instrument used by the Orion vehicle during rendezvous, proximity operations, and docking. The DC will be used by the Orion crew for piloting cues during docking. This paper provides an overview of the STORRM test objectives and the concept of operations. It continues with a description of STORRM's major hardware components, which include the VNS, the docking camera, and supporting avionics. Next, an overview of crew and analyst training activities describes how the STORRM team prepared for flight. An overview of in-flight data collection and analysis is then presented, and key findings and results from the project are summarized. Finally, the paper concludes with lessons learned from the STORRM DTO.
A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor
Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.
2015-01-01
For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
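A minimal sketch of the corner-plus-depth idea, with thresholds and the steering rule assumed for illustration; the paper's actual decision logic is not given in the abstract.

```python
# Hedged sketch: corners backed by near depth readings are treated as
# obstacles; steer toward the image half with fewer of them.
import cv2
import numpy as np

def suggest_direction(gray, depth_m, stop_dist=1.0):
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=300,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return "go"
    h, w = gray.shape
    near = [int(x) for (x, y) in corners.reshape(-1, 2)
            if 0 < depth_m[int(y), int(x)] < stop_dist]
    if not near:
        return "go"
    left = sum(1 for x in near if x < w // 2)
    right = len(near) - left
    if left and right:
        return "stop"            # obstacles on both sides
    return "move right" if left else "move left"
```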
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-08-30
Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields, where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if drones operate in areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.
A Fully Sensorized Cooperative Robotic System for Surgical Interventions
Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.
2012-01-01
In this research, a fully sensorized cooperative robot system for the manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III, especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. New control strategies for robot manipulation in the clinical environment are also introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the entire possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551
Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation
Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin
2014-01-01
Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact; therefore, monocular vision methods are preferred over the stereo vision systems commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between the camera and the laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer across frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
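A minimal sketch of the scale-recovery idea, assuming the laser spot has been located in the up-to-scale monocular reconstruction; names and structure are illustrative, not the paper's implementation.

```python
# Hedged sketch: one direct laser distance fixes the global scale of an
# up-to-scale monocular reconstruction.
import numpy as np

def rescale_trajectory(positions, d_laser_m, d_vo):
    """positions: Nx3 up-to-scale VO positions; d_vo: the laser spot's
    distance in the same up-to-scale units; returns metric positions."""
    s = d_laser_m / d_vo   # global scale from one direct distance measurement
    return s * np.asarray(positions)
```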
You are here: Earth as seen from Mars
2004-03-11
This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. The inset shows a combination of four panoramic camera images zoomed in on Earth. The arrow points to Earth. Earth was too faint to be detected in images taken with the panoramic camera's color filters. http://photojournal.jpl.nasa.gov/catalog/PIA05547
Designing a wearable navigation system for image-guided cancer resection surgery
Shao, Pengfei; Ding, Houzhu; Wang, Jinkun; Liu, Peng; Ling, Qiang; Chen, Jiayu; Xu, Junbin; Zhang, Shiwu; Xu, Ronald
2015-01-01
A wearable surgical navigation system is developed for intraoperative imaging of surgical margin in cancer resection surgery. The system consists of an excitation light source, a monochromatic CCD camera, a host computer, and a wearable headset unit in either of the following two modes: head-mounted display (HMD) and Google glass. In the HMD mode, a CMOS camera is installed on a personal cinema system to capture the surgical scene in real-time and transmit the image to the host computer through a USB port. In the Google glass mode, a wireless connection is established between the glass and the host computer for image acquisition and data transport tasks. A software program is written in Python to call OpenCV functions for image calibration, co-registration, fusion, and display with augmented reality. The imaging performance of the surgical navigation system is characterized in a tumor simulating phantom. Image-guided surgical resection is demonstrated in an ex vivo tissue model. Surgical margins identified by the wearable navigation system are co-incident with those acquired by a standard small animal imaging system, indicating the technical feasibility for intraoperative surgical margin detection. The proposed surgical navigation system combines the sensitivity and specificity of a fluorescence imaging system and the mobility of a wearable goggle. It can be potentially used by a surgeon to identify the residual tumor foci and reduce the risk of recurrent diseases without interfering with the regular resection procedure. PMID:24980159
Analysis of the effect on optical equipment caused by solar position in target flight measure
NASA Astrophysics Data System (ADS)
Zhu, Shun-hua; Hu, Hai-bin
2012-11-01
Optical equipment is widely used to measure flight parameters in target flight performance tests, but such equipment is sensitive to the sun's rays. To avoid the disadvantage of sunlight shining directly into the optical equipment's camera lens when measuring target flight parameters, the angle between the observation direction and the line connecting the camera lens and the sun should be kept large. This article introduces the calculation method for the solar azimuth and altitude relative to the optical equipment at any time and any place on Earth, the equipment observation direction model, and the model for calculating the angle between the observation direction and the sun line. A simulation of the effect of solar position on the optical equipment at different times, dates, months, and target flight directions is also given.
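A minimal sketch of a textbook solar-position approximation (not the article's exact model), from which the sun-avoidance angle can then be computed against the observation direction:

```python
# Hedged sketch: approximate solar elevation and azimuth from day of year,
# local solar time, and observer latitude (Cooper's declination formula).
import math

def solar_position(day_of_year, solar_hour, lat_deg):
    """Returns (elevation_deg, azimuth_deg from north, clockwise)."""
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(lat_deg)
    sin_el = (math.sin(lat) * math.sin(decl)
              + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    el = math.asin(sin_el)
    # Azimuth from south, then shifted to a from-north convention.
    az = math.atan2(math.sin(hour_angle),
                    math.cos(hour_angle) * math.sin(lat)
                    - math.tan(decl) * math.cos(lat))
    return math.degrees(el), (math.degrees(az) + 180.0) % 360.0
```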
Potential for application of an acoustic camera in particle tracking velocimetry.
Wu, Fu-Chun; Shao, Yun-Chuan; Wang, Chi-Kuei; Liou, Jim
2008-11-01
We explored the potential and limitations for applying an acoustic camera as the imaging instrument of particle tracking velocimetry. The strength of the acoustic camera is its usability in low-visibility environments where conventional optical cameras are ineffective, while its applicability is limited by lower temporal and spatial resolutions. We conducted a series of experiments in which acoustic and optical cameras were used to simultaneously image the rotational motion of tracer particles, allowing for a comparison of the acoustic- and optical-based velocities. The results reveal that the greater fluctuations associated with the acoustic-based velocities are primarily attributed to the lower temporal resolution. The positive and negative biases induced by the lower spatial resolution are balanced, with the positive ones greater in magnitude but the negative ones greater in quantity. These biases reduce with the increase in the mean particle velocity and approach minimum as the mean velocity exceeds the threshold value that can be sensed by the acoustic camera.
Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system
NASA Astrophysics Data System (ADS)
Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo
2010-02-01
A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis method using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically, but also can employ three reference views, namely left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element using stereo matching between the reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh is classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.
Baranski, Przemyslaw; Strumillo, Pawel
2012-01-01
The paper presents an algorithm for estimating a pedestrian location in an urban environment. The algorithm is based on the particle filter and uses different data sources: a GPS receiver, inertial sensors, probability maps and a stereo camera. Inertial sensors are used to estimate a relative displacement of a pedestrian. A gyroscope estimates a change in the heading direction. An accelerometer is used to count a pedestrian's steps and their lengths. The so-called probability maps help to limit GPS inaccuracy by imposing constraints on pedestrian kinematics, e.g., it is assumed that a pedestrian cannot cross buildings, fences, etc. This limits position inaccuracy to ca. 10 m. Incorporation of depth estimates derived from a stereo camera that are compared to the 3D model of an environment has enabled further reduction of positioning errors. As a result, for 90% of the time, the algorithm is able to estimate a pedestrian location with an error smaller than 2 m, compared to an error of 6.5 m for navigation based solely on GPS. PMID:22969321
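A minimal version of the described particle filter, with step/heading propagation, a GPS likelihood, and a toy building mask playing the role of the probability map, could look like the sketch below. All parameters, the building geometry, and the walking scenario are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = rng.normal([0.0, 0.0], 5.0, size=(N, 2))  # initial GPS spread (m)

def inside_building(p):
    # Toy stand-in for the probability map: one rectangular building.
    return (np.abs(p[:, 0] - 15.0) < 4.0) & (np.abs(p[:, 1] - 12.0) < 4.0)

def step(particles, step_len, heading, gps_xy, gps_sigma=10.0):
    # Propagate with noisy step length (accelerometer) and heading (gyro).
    L = step_len + rng.normal(0.0, 0.1, N)
    th = heading + rng.normal(0.0, 0.05, N)
    particles = particles + np.column_stack([L * np.cos(th), L * np.sin(th)])
    # Map constraint: a pedestrian cannot walk through buildings.
    w = np.where(inside_building(particles), 0.0, 1.0)
    # GPS likelihood.
    d2 = np.sum((particles - gps_xy) ** 2, axis=1)
    w *= np.exp(-0.5 * d2 / gps_sigma ** 2)
    s = w.sum()
    w = w / s if s > 0 else np.full(N, 1.0 / N)
    # Systematic resampling.
    idx = np.searchsorted(np.cumsum(w), (rng.random() + np.arange(N)) / N)
    return particles[np.minimum(idx, N - 1)]

for k in range(50):  # walk east, one 0.7 m step per GPS fix
    particles = step(particles, 0.7, 0.0, np.array([0.7 * (k + 1), 0.0]))
print("estimated position:", particles.mean(axis=0))
```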
Wavefront Sensing With Switched Lenses for Defocus Diversity
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
In an alternative hardware design for an apparatus used in image-based wavefront sensing, defocus diversity is introduced by means of fixed lenses that are mounted in a filter wheel (see figure) so that they can be alternately switched into a position in front of the focal plane of an electronic camera recording the image formed by the optical system under test. [The terms image-based, wavefront sensing, and defocus diversity are defined in the first of the three immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] Each lens in the filter wheel is designed so that the optical effect of placing it at the assigned position is equivalent to the optical effect of translating the camera a specified defocus distance along the optical axis. Heretofore, defocus diversity has been obtained by translating the imaging camera along the optical axis to various defocus positions. Because data must be taken at multiple, accurately measured defocus positions, it is necessary to mount the camera on a precise translation stage that must be calibrated for each defocus position and/or to use an optical encoder for measurement and feedback control of the defocus positions. Additional latency is introduced into the wavefront sensing process as the camera is translated to the various defocus positions. Moreover, if the optical system under test has a large focal length, the required defocus values are large, making it necessary to use a correspondingly bulky translation stage. By eliminating the need for translation of the camera, the alternative design simplifies and accelerates the wavefront-sensing process. This design is cost-effective in that the filter-wheel/lens mechanism can be built from commercial catalog components. After initial calibration of the defocus value of each lens, a selected defocus value is introduced by simply rotating the filter wheel to place the corresponding lens in front of the camera. The rotation of the wheel can be automated by use of a motor drive, and further calibration is not necessary. Because a camera-translation stage is no longer needed, the size of the overall apparatus can be correspondingly reduced.
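The equivalence between an inserted lens and a camera translation can be checked with a thin-lens calculation. The sketch below uses one common sign convention (the converging beam is treated as a virtual object a distance t behind the lens) and invented numbers; a flight design would of course use the full optical prescription rather than this first-order estimate.

```python
def focus_shift(f_lens_mm, t_mm):
    """Focus position behind a thin lens inserted a distance t_mm before the
    nominal focus of a converging beam (virtual-object convention), and the
    resulting defocus shift (negative = focus pulled toward the lens)."""
    v = f_lens_mm * t_mm / (f_lens_mm + t_mm)  # from 1/v = 1/f + 1/t
    return v, v - t_mm

for f in (500.0, 1000.0, -1000.0):             # positive and negative lenses
    v, dz = focus_shift(f, 50.0)
    print(f"f = {f:7.1f} mm -> focus at {v:6.2f} mm, defocus {dz:+6.2f} mm")
```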
Light field analysis and its applications in adaptive optics and surveillance systems
NASA Astrophysics Data System (ADS)
Eslami, Mohammed Ali
An image can only be as good as the optics of a camera or any other imaging system allows it to be. An imaging system is merely a transformation that takes a 3D world coordinate to a 2D image plane, which can be done through both linear and non-linear transfer functions. Depending on the application at hand, some models of imaging systems are easier to use than others. The best-known models are 1) the pinhole model, 2) the thin-lens model, and 3) the thick-lens model for optical systems. Light-field analysis is used to describe the connection between these different models. A novel figure of merit is presented for choosing one optical model over another for a given application. After analyzing these optical systems, their application to plenoptic cameras for adaptive optics is introduced. A new technique is described that uses a plenoptic camera to extract information about a localized distorted planar wavefront. CODEV simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual PTZ surveillance system to track a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the camera is moved to force the target back into the region of interest. Once the master camera is moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras. The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets with high accuracy as they move through space.
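The lookup-table interpolation step described for the master/slave handoff might look like the following sketch. The calibration grid and the synthetic offset model standing in for a real calibrated table are invented for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

master_pan = np.linspace(-90, 90, 7)    # calibration grid (degrees)
master_tilt = np.linspace(-30, 30, 5)
# Calibrated slave pan/tilt at each grid node (synthetic offset model here;
# a real table would come from the encoder-based calibration procedure).
P, T = np.meshgrid(master_pan, master_tilt, indexing="ij")
slave_pan_table = P * 0.95 + 3.0
slave_tilt_table = T * 1.02 - 1.5

interp_pan = RegularGridInterpolator((master_pan, master_tilt), slave_pan_table)
interp_tilt = RegularGridInterpolator((master_pan, master_tilt), slave_tilt_table)

def slave_pose(master_pan_deg, master_tilt_deg):
    """Slave pan/tilt command for a given master pose, by interpolation."""
    q = np.array([[master_pan_deg, master_tilt_deg]])
    return float(interp_pan(q)), float(interp_tilt(q))

print(slave_pose(12.5, -4.0))
```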
2004-03-13
This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. Earth is the tiny white dot in the center. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. http://photojournal.jpl.nasa.gov/catalog/PIA05560
Seeing in a different light—using an infrared camera to teach heat transfer and optical phenomena
NASA Astrophysics Data System (ADS)
Pei Wong, Choun; Subramaniam, R.
2018-05-01
The infrared camera is a useful tool in physics education to ‘see’ in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.
Seeing in a Different Light--Using an Infrared Camera to Teach Heat Transfer and Optical Phenomena
ERIC Educational Resources Information Center
Wong, Choun Pei; Subramaniam, R.
2018-01-01
The infrared camera is a useful tool in physics education to 'see' in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.
Integrating Terrain Maps Into a Reactive Navigation Strategy
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Werger, Barry; Seraji, Homayoun
2006-01-01
An improved method of processing information for autonomous navigation of a robotic vehicle across rough terrain involves the integration of terrain maps into a reactive navigation strategy. Somewhat more precisely, the method involves the incorporation, into navigation logic, of data equivalent to regional traversability maps. The terrain characteristic is mapped using a fuzzy-logic representation of the difficulty of traversing the terrain. The method is robust in that it integrates a global path-planning strategy with sensor-based regional and local navigation strategies to ensure a high probability of success in reaching a destination and avoiding obstacles along the way. The sensor-based strategies use cameras aboard the vehicle to observe the regional terrain, defined as the area of the terrain that covers the immediate vicinity near the vehicle to a specified distance a few meters away.
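The fuzzy-logic traversability representation mentioned above can be sketched in a few lines of Python. The membership breakpoints below are invented for the example, not the values used by the authors; the point is only the fuzzify-and-combine pattern.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def traversability(slope_deg, roughness_cm):
    flat = tri(slope_deg, -1.0, 0.0, 15.0)       # degree of "terrain is flat"
    smooth = tri(roughness_cm, -1.0, 0.0, 10.0)  # degree of "terrain is smooth"
    return np.minimum(flat, smooth)              # min-rule conjunction

print(traversability(5.0, 2.0))    # easy terrain: grade near 1
print(traversability(12.0, 8.0))   # marginal terrain: low grade
```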
Target Trailing With Safe Navigation With Colregs for Maritime Autonomous Surface Vehicles
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki (Inventor); Aghazarian, Hrand (Inventor); Huntsberger, Terrance L. (Inventor); Howard, Andrew B. (Inventor); Wolf, Michael T. (Inventor); Zarzhitsky, Dimitri V. (Inventor)
2014-01-01
Systems and methods for operating autonomous waterborne vessels in a safe manner. The systems include hardware for identifying the locations and motions of other vessels, as well as the locations of stationary objects that represent navigation hazards. By applying to the data obtained a computational method that uses a Velocity Obstacles-based maritime navigation algorithm to avoid hazards and obey COLREGS, the autonomous vessel computes a safe and effective path to be followed in order to accomplish a desired navigational end result, while operating in a manner so as to avoid hazards and to maintain compliance with standard navigational procedures defined by international agreement. The systems and methods have been successfully demonstrated on water with radar and stereo cameras as the perception sensors, and integrated with a higher-level planner for trailing a maneuvering target.
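The Velocity Obstacles test at the heart of such planners can be sketched as a cone test on the relative velocity: a candidate own-ship velocity is unsafe if the relative velocity points within the cone of bearings that pass inside a safety radius around the other vessel. The example below is geometry only (no COLREGS preference rules), with invented vessel states and a simple least-deviation heading search.

```python
import numpy as np

def in_velocity_obstacle(own_vel, own_pos, other_pos, other_vel, safe_radius):
    """True if the candidate own velocity leads within safe_radius of the
    other vessel, assuming both keep constant velocity (the VO cone test)."""
    r = np.asarray(other_pos, float) - np.asarray(own_pos, float)
    v_rel = np.asarray(own_vel, float) - np.asarray(other_vel, float)
    dist, v = np.linalg.norm(r), np.linalg.norm(v_rel)
    if dist <= safe_radius:
        return True                                # already too close
    if v == 0.0:
        return False                               # no relative motion
    half_angle = np.arcsin(safe_radius / dist)     # collision-cone half-angle
    angle = np.arccos(np.clip(r @ v_rel / (dist * v), -1.0, 1.0))
    return angle <= half_angle

# Least-deviating safe heading at constant speed (illustrative search).
speed = 3.0
headings = np.radians(np.arange(-180, 180, 5))
safe = [h for h in headings
        if not in_velocity_obstacle(speed * np.array([np.cos(h), np.sin(h)]),
                                    [0.0, 0.0], [200.0, 0.0], [-2.0, 0.0], 50.0)]
print("chosen heading (deg):", np.degrees(min(safe, key=abs)) if safe else None)
```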
Condenser for illuminating a ringfield camera with synchrotron emission light
Sweatt, W.C.
1996-04-30
The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, which images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors. 9 figs.
Condenser for illuminating a ringfield camera with synchrotron emission light
Sweatt, William C.
1996-01-01
The present invention relates generally to the field of condensers for collecting light from a synchrotron radiation source and directing the light into a ringfield of a lithography camera. The present invention discloses a condenser comprising collecting, processing, and imaging optics. The collecting optics are comprised of concave and convex spherical mirrors that collect the light beams. The processing optics, which receive the light beams, are comprised of flat mirrors that converge and direct the light beams into a real entrance pupil of the camera in a symmetrical pattern. In the real entrance pupil are located flat mirrors, common to the beams emitted from the preceding mirrors, for generating substantially parallel light beams and for directing the beams toward the ringfield of a camera. Finally, the imaging optics are comprised of a spherical mirror, also common to the beams emitted from the preceding mirrors, which images the real entrance pupil through the resistive mask and into the virtual entrance pupil of the camera. Thus, the condenser is comprised of a plurality of beams with four mirrors corresponding to a single beam plus two common mirrors.
Applying UV cameras for SO2 detection to distant or optically thick volcanic plumes
Kern, Christoph; Werner, Cynthia; Elias, Tamar; Sutton, A. Jeff; Lübcke, Peter
2013-01-01
Ultraviolet (UV) camera systems represent an exciting new technology for measuring two dimensional sulfur dioxide (SO2) distributions in volcanic plumes. The high frame rate of the cameras allows the retrieval of SO2 emission rates at time scales of 1 Hz or higher, thus allowing the investigation of high-frequency signals and making integrated and comparative studies with other high-data-rate volcano monitoring techniques possible. One drawback of the technique, however, is the limited spectral information recorded by the imaging systems. Here, a framework for simulating the sensitivity of UV cameras to various SO2 distributions is introduced. Both the wavelength-dependent transmittance of the optical imaging system and the radiative transfer in the atmosphere are modeled. The framework is then applied to study the behavior of different optical setups and used to simulate the response of these instruments to volcanic plumes containing varying SO2 and aerosol abundances located at various distances from the sensor. Results show that UV radiative transfer in and around distant and/or optically thick plumes typically leads to a lower sensitivity to SO2 than expected when assuming a standard Beer–Lambert absorption model. Furthermore, camera response is often non-linear in SO2 and dependent on distance to the plume and plume aerosol optical thickness and single scatter albedo. The model results are compared with camera measurements made at Kilauea Volcano (Hawaii) and a method for integrating moderate resolution differential optical absorption spectroscopy data with UV imagery to retrieve improved SO2 column densities is discussed.
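For reference, the standard Beer-Lambert retrieval that the paper uses as its baseline model looks like the sketch below; the cross-section value is an order-of-magnitude placeholder, not a number from the paper. The paper's point is that radiative transfer in distant or optically thick plumes makes the true column larger than this simple estimate.

```python
import numpy as np

SIGMA_SO2 = 1.0e-18  # cm^2/molecule; order-of-magnitude placeholder near 310 nm

def so2_column(i_plume, i_background, sigma=SIGMA_SO2):
    """Apparent SO2 column density (molecules/cm^2) from one pixel pair."""
    tau = -np.log(i_plume / i_background)   # apparent optical depth
    return tau / sigma

print(f"{so2_column(800.0, 1000.0):.2e} molecules/cm^2")   # ~2.2e17
```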
Combustion pinhole-camera system
Witte, A.B.
1982-05-19
A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.
Combustion pinhole camera system
Witte, A.B.
1984-02-21
A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor. 2 figs.
Combustion pinhole camera system
Witte, Arvel B.
1984-02-21
A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.
Almost Like Being at Bonneville
2004-03-17
NASA's Mars Exploration Rover Spirit took this 3-D navigation camera mosaic of the crater called Bonneville. The rover's solar panels can be seen in the foreground. 3D glasses are necessary to view this image.
2012-08-20
With the addition of four high-resolution Navigation Camera, or Navcam, images taken on Aug. 18 (Sol 12), Curiosity's 360-degree landing-site panorama now includes the highest point on Mount Sharp visible from the rover.
Top of Mars Rover Curiosity Remote Sensing Mast
2011-04-06
The remote sensing mast on NASA's Mars rover Curiosity holds two science instruments for studying the rover's surroundings and two stereo navigation cameras for use in driving the rover and planning rover activities.
Liquid lens: advances in adaptive optics
NASA Astrophysics Data System (ADS)
Casey, Shawn Patrick
2010-12-01
'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost-effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.
Wide field/planetary camera optics study. [for the large space telescope
NASA Technical Reports Server (NTRS)
1979-01-01
Design feasibility of the baseline optical design concept was established for the wide field/planetary camera (WF/PC) and will be used with the space telescope (ST) to obtain high angular resolution astronomical information over a wide field. The design concept employs internal optics to relay the ST image to a CCD detector system. Optical design performance predictions, sensitivity and tolerance analyses, manufacturability of the optical components, and acceptance testing of the two mirror Cassegrain relays are discussed.
Navigation of a care and welfare robot
NASA Astrophysics Data System (ADS)
Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu
2005-12-01
In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions. Therefore, the robot needs to have high controllability and the capability to make precise movements. Our robot can recognize a line using cameras and can be controlled along reference directions by comparison with the original cell-map information; furthermore, it moves safely on the basis of a center-line permanently established in the building. Communication between the robot and a centralized control center enables the robot's autonomous movement in the hospital. Through a navigation system using the cell-map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.
Mars Exploration Rover Navigation Camera in-flight calibration
NASA Astrophysics Data System (ADS)
Soderblom, Jason M.; Bell, James F.; Johnson, Jeffrey R.; Joseph, Jonathan; Wolff, Michael J.
2008-06-01
The Navigation Camera (Navcam) instruments on the Mars Exploration Rover (MER) spacecraft provide support for both tactical operations as well as scientific observations where color information is not necessary: large-scale morphology, atmospheric monitoring including cloud observations and dust devil movies, and context imaging for both the thermal emission spectrometer and the in situ instruments on the Instrument Deployment Device. The Navcams are a panchromatic stereoscopic imaging system built using identical charge-coupled device (CCD) detectors and nearly identical electronics boards as the other cameras on the MER spacecraft. Previous calibration efforts were primarily focused on providing a detailed geometric calibration in line with the principal function of the Navcams, to provide data for the MER navigation team. This paper provides a detailed description of a new Navcam calibration pipeline developed to provide an absolute radiometric calibration that we estimate to have an absolute accuracy of 10% and a relative precision of 2.5%. Our calibration pipeline includes steps to model and remove the bias offset, the dark current charge that accumulates in both the active and readout regions of the CCD, and the shutter smear. It also corrects pixel-to-pixel responsivity variations using flat-field images, and converts from raw instrument-corrected digital number values per second to units of radiance (W m^-2 nm^-1 sr^-1), or to radiance factor (I/F). We also describe here the initial results of two applications where radiance-calibrated Navcam data provide unique information for surface photometric and atmospheric aerosol studies.
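The calibration chain described in the abstract (bias, dark current, flat-field, exposure normalization, radiance scaling, and I/F conversion) has the general shape sketched below. This is schematic with placeholder coefficients, not the MER pipeline itself; the published pipeline also models readout-region dark charge and shutter smear separately, which are omitted here.

```python
import numpy as np

def calibrate(raw_dn, bias_dn, dark_rate_dn_s, flat, t_exp_s, resp_rad_per_dn_s):
    """Raw DN frame -> radiance in W m^-2 nm^-1 sr^-1 (schematic)."""
    dn = raw_dn.astype(np.float64)
    dn -= bias_dn                        # remove electronic bias offset
    dn -= dark_rate_dn_s * t_exp_s       # remove accumulated dark current
    dn /= flat                           # pixel-to-pixel responsivity
    return (dn / t_exp_s) * resp_rad_per_dn_s

def radiance_factor(radiance, solar_spec_irr_1au, sun_dist_au):
    """I/F: radiance relative to a normally illuminated Lambertian surface."""
    return radiance * np.pi * sun_dist_au ** 2 / solar_spec_irr_1au

raw = np.full((1024, 1024), 2200.0)
L = calibrate(raw, bias_dn=120.0, dark_rate_dn_s=2.0,
              flat=np.ones((1024, 1024)), t_exp_s=0.25,
              resp_rad_per_dn_s=1.0e-5)
# ~1.8 W m^-2 nm^-1 is a rough 1-AU solar spectral irradiance near 600 nm.
print(L.mean(), radiance_factor(L.mean(), 1.8, 1.52))
```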
Laparoscopic assistance by operating room nurses: Results of a virtual-reality study.
Paschold, M; Huber, T; Maedge, S; Zeissig, S R; Lang, H; Kneist, W
2017-04-01
Laparoscopic assistance is often entrusted to a less experienced resident, medical student, or operating room nurse. Data regarding laparoscopic training for operating room nurses are not available. The aim of the study was to analyse the initial performance level and learning curves of operating room nurses in basic laparoscopic surgery compared with medical students and surgical residents to determine their ability to assist with this type of procedure. The study was designed to compare the initial virtual reality performance level and learning curves of user groups to analyse competence in laparoscopic assistance. The study subjects were operating room nurses, medical students, and first year residents. Participants performed three validated tasks (camera navigation, peg transfer, fine dissection) on a virtual reality laparoscopic simulator three times in 3 consecutive days. Laparoscopic experts were enrolled as a control group. Participants filled out questionnaires before and after the course. Nurses and students were comparable in their initial performance (p>0.05). Residents performed better in camera navigation than students and nurses and reached the expert level for this task. Residents, students, and nurses had comparable bimanual skills throughout the study, while experts performed significantly better in bimanual manoeuvres at all times (p<0.05). The included user groups had comparable skills for bimanual tasks. Residents with limited experience reached the expert level in camera navigation. With training, nurses, students, and first year residents are equally capable of assisting in basic laparoscopic procedures. Copyright © 2017 Elsevier Ltd. All rights reserved.
International testing of a Mars rover prototype
NASA Astrophysics Data System (ADS)
Kemurjian, Alexsandr Leonovich; Linkin, V.; Friedman, L.
1993-03-01
Tests on a prototype engineering model of the Russian Mars 96 Rover were conducted by an international team in and near Death Valley in the United States in late May, 1992. These tests were part of a comprehensive design and testing program initiated by the three Russian groups responsible for the rover development. The specific objectives of the May tests were: (1) evaluate rover performance over different Mars-like terrains; (2) evaluate state-of-the-art teleoperation and autonomy development for Mars rover command, control and navigation; and (3) organize an international team to contribute expertise and capability on the rover development for the flight project. The range and performance that can be planned for the Mars mission is dependent on the degree of autonomy that will be possible to implement on the mission. Current plans are for limited autonomy, with Earth-based teleoperation for the nominal navigation system. Several types of television systems are being investigated for inclusion in the navigation system including panoramic camera, stereo, and framing cameras. The tests used each of these in teleoperation experiments. Experiments were included to consider use of such TV data in autonomy algorithms. Image processing and some aspects of closed-loop control software were also tested. A micro-rover was tested to help consider the value of such a device as a payload supplement to the main rover. The concept is for the micro-rover to serve like a mobile hand, with its own sensors including a television camera.
Coherent infrared imaging camera (CIRIC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.
1995-07-01
New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.
NASA Astrophysics Data System (ADS)
Zelazny, Amy; Benson, Robert; Deegan, John; Walsh, Ken; Schmidt, W. David; Howe, Russell
2013-06-01
We describe the benefits to camera system SWaP-C associated with the use of aspheric molded glasses and optical polymers in the design and manufacture of optical components and elements. Both camera objectives and display eyepieces, typical for night vision man-portable EO/IR systems, are explored. We discuss optical trade-offs, system performance, and cost reductions associated with this approach in both visible and non-visible wavebands, specifically NIR and LWIR. Example optical models are presented, studied, and traded using this approach.
A novel graphical user interface for ultrasound-guided shoulder arthroscopic surgery
NASA Astrophysics Data System (ADS)
Tyryshkin, K.; Mousavi, P.; Beek, M.; Pichora, D.; Abolmaesumi, P.
2007-03-01
This paper presents a novel graphical user interface developed for a navigation system for ultrasound-guided computer-assisted shoulder arthroscopic surgery. The envisioned purpose of the interface is to assist the surgeon in determining the position and orientation of the arthroscopic camera and other surgical tools within the anatomy of the patient. The user interface features real time position tracking of the arthroscopic instruments with an optical tracking system, and visualization of their graphical representations relative to a three-dimensional shoulder surface model of the patient, created from computed tomography images. In addition, the developed graphical interface facilitates fast and user-friendly intra-operative calibration of the arthroscope and the arthroscopic burr, capture and segmentation of ultrasound images, and intra-operative registration. A pilot study simulating the computer-aided shoulder arthroscopic procedure on a shoulder phantom demonstrated the speed, efficiency and ease-of-use of the system.
1996-01-13
The Near Earth Asteroid Rendezvous (NEAR) spacecraft undergoing preflight preparation in the Spacecraft Assembly Encapsulation Facility-2 (SAEF-2) at Kennedy Space Center (KSC). NEAR will perform two critical mission events - the Mathilde flyby and the Deep-Space Maneuver. NEAR will fly by Mathilde, a 38-mile (61-km) diameter C-type asteroid, making use of its imaging system to obtain useful optical navigation images. The primary science instrument will be the camera, but measurements of magnetic fields and mass also will be made. The Deep-Space Maneuver (DSM) will be executed about a week after the Mathilde flyby. The DSM represents the first of two major burns of the 100-pound bipropellant (hydrazine/nitrogen tetroxide) thruster during the NEAR mission. This maneuver is necessary to lower the perihelion distance of NEAR's trajectory. The DSM will be conducted in two segments to minimize the possibility of an overburn situation.
JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium
1990-10-29
screen; the relative attitude is then determined. 2) Video Sensor System: Specific patterns (grapple target, etc.) drawn on the target spacecraft, or the ... entire target spacecraft, is imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours and ... standard figure called "grapple target" located in the vicinity of the grapple fixture on the target spacecraft is imaged by camera. Contour lines and ...
NASA Technical Reports Server (NTRS)
1999-01-01
A survey is presented of NASA-developed technologies and systems that were reaching commercial application in the course of 1999. Attention is given to the contributions of each major NASA Research Center. Representative 'spinoff' technologies include the predictive AI engine monitoring system EMPAS, the GPS-based Wide Area Augmentation System for aircraft navigation, a CMOS-Active Pixel Sensor camera-on-a-chip, a marine spectroradiometer, portable fuel cells, hyperspectral camera technology, and a rapid-prototyping process for ceramic components.
Mars Exploration Rover engineering cameras
Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.
2003-01-01
NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 x 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.
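The quoted figures can be cross-checked with simple arithmetic: dividing the field of view by the detector width approximates the stated angular resolutions, and multiplying by range gives the ground sample distance. The sketch below is a flat-terrain approximation; the quoted 0.82 mrad/pixel presumably also accounts for optical distortion.

```python
import numpy as np

def pixel_scale_mrad(fov_deg, n_pixels):
    """Approximate angular resolution in mrad/pixel."""
    return np.radians(fov_deg) / n_pixels * 1e3

def ground_sample_m(range_m, scale_mrad):
    """Ground sample distance at a given range (flat terrain)."""
    return range_m * scale_mrad * 1e-3

navcam = pixel_scale_mrad(45.0, 1024)    # ~0.77 mrad/pixel (0.82 quoted)
hazcam = pixel_scale_mrad(124.0, 1024)   # ~2.1 mrad/pixel, matches the text
print(navcam, hazcam, ground_sample_m(10.0, navcam))  # ~8 mm/pixel at 10 m
```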
Cheng, Xuemin; Yang, Yikang; Hao, Qun
2016-01-01
The thermal environment is an important factor in the design of optical systems. This study investigated the thermal analysis technology of optical systems for navigation guidance and control in supersonic aircraft by developing empirical equations for the front temperature gradient and rear thermal diffusion distance, and for basic factors such as flying parameters and the structure of the optical system. Finite element analysis (FEA) was used to study the relationship between flying and front dome parameters and the system temperature field. Systematic deduction was then conducted based on the effects of the temperature field on the physical geometry and ray tracing performance of the front dome and rear optical lenses, by deriving the relational expressions between the system temperature field and the spot size and positioning precision of the rear optical lens. The optical systems used for navigation guidance and control in supersonic aircraft when the flight speed is in the range of 1–5 Ma were analysed using the derived equations. Using this new method it was possible to control the precision within 10% when considering the light spot received by the four-quadrant detector, and computation time was reduced compared with the traditional method of separately analysing the temperature field of the front dome and rear optical lens using FEA. Thus, the method can effectively increase the efficiency of parameter analysis and computation in an airborne optical system, facilitating the systematic, effective and integrated thermal analysis of airborne optical systems for navigation guidance and control. PMID:27763515
Cheng, Xuemin; Yang, Yikang; Hao, Qun
2016-10-17
The thermal environment is an important factor in the design of optical systems. This study investigated the thermal analysis technology of optical systems for navigation guidance and control in supersonic aircraft by developing empirical equations for the front temperature gradient and rear thermal diffusion distance, and for basic factors such as flying parameters and the structure of the optical system. Finite element analysis (FEA) was used to study the relationship between flying and front dome parameters and the system temperature field. Systematic deduction was then conducted based on the effects of the temperature field on the physical geometry and ray tracing performance of the front dome and rear optical lenses, by deriving the relational expressions between the system temperature field and the spot size and positioning precision of the rear optical lens. The optical systems used for navigation guidance and control in supersonic aircraft when the flight speed is in the range of 1-5 Ma were analysed using the derived equations. Using this new method it was possible to control the precision within 10% when considering the light spot received by the four-quadrant detector, and computation time was reduced compared with the traditional method of separately analysing the temperature field of the front dome and rear optical lens using FEA. Thus, the method can effectively increase the efficiency of parameter analysis and computation in an airborne optical system, facilitating the systematic, effective and integrated thermal analysis of airborne optical systems for navigation guidance and control.
Modeling of digital information optical encryption system with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.
2015-10-01
State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should allow a high-speed optical encryption system to be built. Results of modeling of a digital information optical encryption system with spatially incoherent illumination are presented. Input information is displayed on the first SLM and the encryption element on the second SLM. Factors taken into account are: resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), low bit error rate, and high cryptographic strength.
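Conceptually, an optical system under spatially incoherent illumination acts on intensities, so encryption of this kind can be modeled as convolution of the input frame with an encryption-key point-spread function. The sketch below simulates that model with FFTs and a regularized inverse filter for decryption; it illustrates the principle only and is not the authors' system, and all sizes and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0                          # input "data page"
key = rng.random((64, 64))
key /= key.sum()                                 # random intensity PSF (the key)

F = np.fft.fft2
encrypted = np.real(np.fft.ifft2(F(img) * F(key)))   # incoherent convolution

def decrypt(enc, key, eps=1e-3):
    """Regularized inverse filter with the known key PSF."""
    K = F(key)
    return np.real(np.fft.ifft2(F(enc) * np.conj(K) / (np.abs(K) ** 2 + eps)))

restored = decrypt(encrypted, key)
print("max reconstruction error:", np.abs(restored - img).max())
```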
Natural user interface as a supplement of the holographic Raman tweezers
NASA Astrophysics Data System (ADS)
Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel
2014-09-01
Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via the mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras, or the Kinect game console instead. We proposed a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking, and speech recognition. For this purpose we exploited the "Leap Motion" and "MyGaze" low-cost sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes signals from the sensors and sends control commands to the HRT, which subsequently controls the positions of the trapping beams, the micropositioning stage, and the acquisition system for Raman spectra. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") serving for the manipulation of particles are displayed on a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful if both hands are busy. The proposed methods make manual control of HRT more efficient, and they are also a good platform for its future semi-automated and fully automated operation.
A lightweight, inexpensive robotic system for insect vision.
Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex
2017-09-01
Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is either prohibitively expensive, difficult to reproduce, cannot accurately simulate insect vision characteristics, and/or is too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment which in turn hampers the progress on understanding of how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
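A minimal dense optic-flow computation of the kind evaluated in the paper can be run with OpenCV's Farneback estimator; the translating random pattern below stands in for real camera data, and the parameter values are conventional defaults rather than the paper's settings.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
frame0 = (rng.random((120, 160)) * 255).astype(np.uint8)
frame1 = np.roll(frame0, shift=3, axis=1)   # pure 3-pixel horizontal motion

flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
# Mean flow should be approximately (3, 0), degraded slightly at the borders.
print("mean flow (x, y):", flow[..., 0].mean(), flow[..., 1].mean())
```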
Effect of camera angulation on adaptation of CAD/CAM restorations.
Parsell, D E; Anderson, B C; Livingston, H M; Rudd, J I; Tankersley, J D
2000-01-01
A significant concern with computer-assisted design/computer-assisted manufacturing (CAD/CAM)-produced prostheses is the accuracy of adaptation of the restoration to the preparation. The objective of this study is to determine the effect of operator-controlled camera misalignment on restoration adaptation. A CEREC 2 CAD/CAM unit (Sirona Dental Systems, Bensheim, Germany) was used to capture the optical impressions and machine the restorations. A Class I preparation was used as the standard preparation for optical impressions. Camera angles along the mesio-distal and buccolingual alignment were varied from the ideal orientation. Occlusal marginal gaps and sample height, width, and length were measured and compared to preparation dimensions. For clinical correlation, clinicians were asked to take optical impressions of mesio-occlusal preparations (Class II) on all four second molar sites, using a patient simulator. On the adjacent first molar occlusal surfaces, a preparation was machined such that camera angulation could be calculated from information taken from the optical impression. Degree of tilt and plane of tilt were compared to the optimum camera positions for those preparations. One-way analysis of variance and Dunnett C post hoc testing (alpha = 0.01) revealed little significant degradation in fit with camera angulation. Only the apical length fit was significantly degraded by excessive angulation. The CEREC 2 CAD/CAM system was found to be relatively insensitive to operator-induced errors attributable to camera misalignments of less than 5 degrees in either the buccolingual or the mesiodistal plane. The average camera tilt error generated by clinicians for all sites was 1.98 +/- 1.17 degrees.
Optical analysis of a compound quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Wall, S. D.; Burcher, E. E.; Huck, F. O.
1974-01-01
A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept is primarily limited by a trade-off between resolution and object field; this approach leads to a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which includes a field lens between the camera and auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement is at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.
SHOK—The First Russian Wide-Field Optical Camera in Space
NASA Astrophysics Data System (ADS)
Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.
2018-02-01
Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each of the cameras is placed in the gamma-ray burst detection area of the other devices located onboard the "Lomonosov" spacecraft. SHOK provides measurements of optical emissions with a magnitude limit of ~9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths in a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emissions from the gamma-ray burst error boxes detected by the BDRG device and triggered by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft has two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock, consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.
Coaxial fundus camera for ophthalmology
NASA Astrophysics Data System (ADS)
de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.
2015-09-01
A fundus camera for ophthalmology is a high-definition device which needs to provide low-light illumination of the human retina, high resolution at the retina, and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.
2004-02-10
This is a three-dimensional stereo anaglyph of an image taken by the front navigation camera onboard NASA's Mars Exploration Rover Spirit, showing an interesting patch of rippled soil. 3D glasses are necessary to view this image.
Opportunity Surroundings After 25 Miles on Mars
2014-08-14
This July 29, 2014, panorama combines several images from the navigation camera on NASA's Mars Exploration Rover Opportunity to show the rover's surroundings after surpassing 25 miles (40.23 kilometers) of total driving on Mars.
NASA Technical Reports Server (NTRS)
Acton, C. H., Jr.; Ohtakay, H.
1975-01-01
Optical navigation uses spacecraft television pictures of a target body against a known star background in a process which relates the spacecraft trajectory to the target body. This technology was used in the Mariner-Venus-Mercury mission, with the optical data processed in near-real-time, simulating a mission critical environment. Optical data error sources were identified, and a star location error analysis was carried out. Several methods for selecting limb crossing coordinates were used, and a limb smear compensation was introduced. Omission of planetary aberration corrections was the source of large optical residuals.
Integration of Kinect and Low-Cost Gnss for Outdoor Navigation
NASA Astrophysics Data System (ADS)
Pagliaria, D.; Pinto, L.; Reguzzoni, M.; Rossi, L.
2016-06-01
Since its launch on the market, the Microsoft Kinect sensor has represented a great revolution in the field of low-cost navigation, especially for indoor robotic applications. In fact, this system is endowed with a depth camera, as well as a visual RGB camera, at a cost of about 200. The characteristics and the potential of the Kinect sensor have been widely studied for indoor applications. The second generation of this sensor has been announced to be capable of acquiring data even outdoors, under direct sunlight. The task of navigating from an indoor to an outdoor environment (and vice versa) is very demanding because the sensors that work properly in one environment are typically unsuitable in the other. In this sense the Kinect could represent an interesting device for bridging the navigation solution between outdoor and indoor environments. In this work the accuracy and the field of application of the new generation of Kinect sensor have been tested outdoors, considering different lighting conditions and the reflective properties of the emitted ray on different materials. Moreover, an integrated system with a low-cost GNSS receiver has been studied, with the aim of taking advantage of the GNSS positioning when the satellite visibility conditions are good enough. A kinematic test has been performed outdoors using a Kinect sensor and a GNSS receiver, and it is presented here.
Final Optical Design of PANIC, a Wide-Field Infrared Camera for CAHA
NASA Astrophysics Data System (ADS)
Cárdenas, M. C.; Gómez, J. Rodríguez; Lenzen, R.; Sánchez-Blanco, E.
We present the Final Optical Design of PANIC (PAnoramic Near Infrared camera for Calar Alto), a wide-field infrared imager for the Ritchey-Chrétien focus of the Calar Alto 2.2 m telescope. This will be the first instrument built under the German-Spanish consortium that manages the Calar Alto observatory. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. The optical design produces a well-defined internal pupil, which allows the thermal background to be reduced by a cryogenic pupil stop. A mosaic of four 2k × 2k Hawaii-2RG detectors, made by Teledyne, will give a field of view of 31.9 arcmin × 31.9 arcmin.
Afocal viewport optics for underwater imaging
NASA Astrophysics Data System (ADS)
Slater, Dan
2014-09-01
A conventional camera can be adapted for underwater use by enclosing it in a sealed waterproof pressure housing with a viewport. The viewport, as an optical interface between water and air needs to consider both the camera and water optical characteristics while also providing a high pressure water seal. Limited hydrospace visibility drives a need for wide angle viewports. Practical optical interfaces between seawater and air vary from simple flat plate windows to complex water contact lenses. This paper first provides a brief overview of the physical and optical properties of the ocean environment along with suitable optical materials. This is followed by a discussion of the characteristics of various afocal underwater viewport types including flat windows, domes and the Ivanoff corrector lens, a derivative of a Galilean wide angle camera adapter. Several new and interesting optical designs derived from the Ivanoff corrector lens are presented including a pair of very compact afocal viewport lenses that are compatible with both in water and in air environments and an afocal underwater hyper-hemispherical fisheye lens.
Beacons for supporting lunar landing navigation
NASA Astrophysics Data System (ADS)
Theil, Stephan; Bora, Leonardo
2017-03-01
Current and future planetary exploration missions involve a landing on the target celestial body. Almost all of these landing missions are currently relying on a combination of inertial and optical sensor measurements to determine the current flight state with respect to the target body and the desired landing site. As soon as an infrastructure at the landing site exists, the requirements as well as conditions change for vehicles landing close to this existing infrastructure. This paper investigates the options for ground-based infrastructure supporting the onboard navigation system and analyzes the impact on the achievable navigation accuracy. For that purpose, the paper starts with an existing navigation architecture based on optical navigation and extends it with measurements to support navigation with ground infrastructure. A scenario of lunar landing is simulated and the provided functions of the ground infrastructure as well as the location with respect to the landing site are evaluated. The results are analyzed and discussed.
Systems analysis for ground-based optical navigation
NASA Technical Reports Server (NTRS)
Null, G. W.; Owen, W. M., Jr.; Synnott, S. P.
1992-01-01
Deep-space telecommunications systems will eventually operate at visible or near-infrared regions to provide increased information return from interplanetary spacecraft. This would require an onboard laser transponder in place of (or in addition to) the usual microwave transponder, as well as a network of ground-based and/or space-based optical observing stations. This article examines the expected navigation systems to meet these requirements. Special emphasis is given to optical astrometric (angular) measurements of stars, solar system target bodies, and (when available) laser-bearing spacecraft, since these observations can potentially provide the locations of both spacecraft and target bodies. The role of astrometry in the navigation system and the development options for astrometric observing systems are also discussed.
Adaptive Optics For Imaging Bright Objects Next To Dim Ones
NASA Technical Reports Server (NTRS)
Shao, Michael; Yu, Jeffrey W.; Malbet, Fabien
1996-01-01
Adaptive optics used in imaging optical systems, according to proposal, to enhance high-dynamic-range images (images of bright objects next to dim objects). Designed to alter wavefronts to correct for effects of scattering of light from small bumps on imaging optics. Original intended application of concept in advanced camera installed on Hubble Space Telescope for imaging of such phenomena as large planets near stars other than Sun. Also applicable to other high-quality telescopes and cameras.
Design of the high resolution optical instrument for the Pleiades HR Earth observation satellites
NASA Astrophysics Data System (ADS)
Lamard, Jean-Luc; Gaudin-Delrieu, Catherine; Valentini, David; Renard, Christophe; Tournier, Thierry; Laherrere, Jean-Marc
2017-11-01
As part of its contribution to Earth observation from space, ALCATEL SPACE designed, built and tested the high-resolution cameras for the European intelligence satellites HELIOS I and II. Through these programmes, ALCATEL SPACE has earned an international reputation, and its capability and experience in high-resolution instrumentation are recognised by most customers. Following the SPOT program, it was decided to go ahead with the PLEIADES HR program. PLEIADES HR is the optical high-resolution component of a larger optical and radar multi-sensor system, ORFEO, which is developed in cooperation between France and Italy for dual civilian and defense use. ALCATEL SPACE has been entrusted by CNES with the development of the high-resolution camera of the Earth observation satellites PLEIADES HR. The first optical satellite of the PLEIADES HR constellation will be launched in mid-2008; the second will follow in 2009. To minimize development costs, a mini-satellite approach has been selected, leading to a compact concept for the camera design. The paper describes the design and performance budgets of this novel high-resolution and large-field-of-view optical instrument, with emphasis on its technological features. This new generation of camera represents a breakthrough in comparison with the previous SPOT cameras owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. Recent advances in detector technology, optical fabrication and electronics make it possible for the PLEIADES HR camera to achieve its image quality performance goals while staying within weight and size restrictions normally considered suitable only for much lower performance systems. This camera design delivers superior performance using an innovative low-power, low-mass, scalable architecture, which provides a versatile approach for a variety of imaging requirements and allows for a wide range of accommodation possibilities with a mini-satellite-class platform.
A simple optical tweezers for trapping polystyrene particles
NASA Astrophysics Data System (ADS)
Shiddiq, Minarni; Nasir, Zulfa; Yogasari, Dwiyana
2013-09-01
Optical tweezers are optical traps. For decades, they have served as an optical tool that can trap and manipulate particles ranging from the very small, such as DNA, to larger objects such as bacteria. The trapping force comes from the radiation pressure of laser light focused onto a group of particles. Optical tweezers have been used in many research areas such as atomic physics, medical physics, biophysics, and chemistry. Here, a simple optical tweezers setup has been constructed using a modified Leybold laboratory optical microscope. The ocular lens of the microscope was removed to provide access for the laser light and a digital camera. Light from a Coherent diode laser with wavelength λ = 830 nm and power 50 mW is sent through an oil-immersion objective lens (magnification 100×, NA 1.25) into a cell made from microscope slides containing polystyrene particles. Polystyrene particles with sizes of 3 μm and 10 μm are used. A Thorlabs CMOS camera (type DCC1545M, USB interface) with a 35 mm Thorlabs camera lens is connected to a desktop computer and used to monitor the trapping and measure the stiffness of the trap. The camera is accompanied by software that enables the user to capture and save images; the images are analyzed using ImageJ and a Scion macro. The polystyrene particles were trapped successfully. The stiffness of the trap depends on the size of the particles and the power of the laser: it increases linearly with power and decreases as the particle size increases.
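The abstract does not state how stiffness is extracted from the camera images; one common approach, sketched below under that assumption, is the equipartition method, which needs only the variance of the tracked bead positions. The function name and the 25 nm test figure are illustrative, not taken from the paper.

import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(x_positions_m, temperature_k=295.0):
    # Equipartition: 0.5 * k * <x^2> = 0.5 * kB * T, so k = kB * T / var(x).
    x = np.asarray(x_positions_m, dtype=float)
    return KB * temperature_k / np.var(x - x.mean())

# Synthetic test: bead positions with 25 nm rms fluctuations (illustrative).
rng = np.random.default_rng(0)
x = rng.normal(0.0, 25e-9, 10_000)
print(f"k = {trap_stiffness(x):.2e} N/m")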
Parallel-Processing Software for Correlating Stereo Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric
2007-01-01
A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
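As a rough illustration of the subimage-per-CPU idea (not the JPL program itself), the sketch below splits a stereo pair into horizontal strips, hands each strip to a worker process, and scores candidate disparities with zero-mean cross-correlation. All names and parameters are invented for the example.

import numpy as np
from multiprocessing import Pool

def correlate_strip(args):
    # For each block-sized patch in the left strip, find the horizontal
    # offset (disparity) in the right strip with the highest zero-mean
    # cross-correlation score.
    left, right, row0, block, search = args
    rows, cols = left.shape
    disp = np.zeros((rows // block, cols // block))
    for i in range(0, rows - block, block):
        for j in range(0, cols - block - search, block):
            patch = left[i:i + block, j:j + block].astype(float)
            patch -= patch.mean()
            best_score, best_d = -np.inf, 0
            for d in range(search):
                cand = right[i:i + block, j + d:j + d + block].astype(float)
                cand -= cand.mean()
                score = float((patch * cand).sum())
                if score > best_score:
                    best_score, best_d = score, d
            disp[i // block, j // block] = best_d
    return row0, disp

def parallel_disparity(left_img, right_img, n_cpus=4, block=8, search=16):
    # Segment the scene into horizontal strips and assign one per CPU.
    strips = np.array_split(np.arange(left_img.shape[0]), n_cpus)
    jobs = [(left_img[s[0]:s[-1] + 1], right_img[s[0]:s[-1] + 1], int(s[0]),
             block, search) for s in strips]
    with Pool(n_cpus) as pool:
        return sorted(pool.map(correlate_strip, jobs), key=lambda r: r[0])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (64, 128)).astype(np.uint8)
    right = np.roll(left, 5, axis=1)  # synthetic 5-pixel disparity
    results = parallel_disparity(left, right, n_cpus=2)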
NASA Astrophysics Data System (ADS)
Da Deppo, V.; Naletto, G.; Nicolosi, P.; Zambolin, P.; De Cecco, M.; Debei, S.; Parzianello, G.; Ramous, P.; Zaccariotto, M.; Fornasier, S.; Verani, S.; Thomas, N.; Barthol, P.; Hviid, S. F.; Sebastian, I.; Meller, R.; Sierks, H.; Keller, H. U.; Barbieri, C.; Angrilli, F.; Lamy, P.; Rodrigo, R.; Rickman, H.; Wenzel, K. P.
2017-11-01
Rosetta is one of the cornerstone missions of the European Space Agency and will rendezvous with comet 67P/Churyumov-Gerasimenko in 2014. The imaging instrument on board the satellite is OSIRIS (Optical, Spectroscopic and Infrared Remote Imaging System), a cooperation among several European institutes, which consists of two cameras: a Narrow Angle Camera (NAC) and a Wide Angle Camera (WAC). The WAC optical design is innovative: it adopts an all-reflecting, unvignetted and unobstructed two-mirror configuration which covers a 12° × 12° field of view with an F/5.6 aperture and gives a nominal contrast ratio of about 10^-4. The flight model of this camera has been successfully integrated and tested in our laboratories, and has finally been integrated on the satellite, which is now waiting to be launched in February 2004. In this paper we describe the optical characteristics of the camera and summarize the results obtained so far with the preliminary calibration data. The analysis of the optical performance of this model shows good agreement between theoretical performance and experimental results.
Outer planet probe navigation. [considering Pioneer space missions
NASA Technical Reports Server (NTRS)
Friedman, L.
1974-01-01
A series of navigation studies in conjunction with outer planet Pioneer missions were performed to determine navigation requirements and measurement systems for targeting probes. Particular cases are established where optical navigation is important and others where radio-only navigation is sufficient. Considered are a direct Saturn mission, a Saturn-Uranus mission, a Jupiter-Uranus mission, and a Titan probe mission.
Rover-based visual target tracking validation and mission infusion
NASA Technical Reports Server (NTRS)
Kim, Won S.; Steele, Robert D.; Ansar, Adnan I.; Ali, Khaled; Nesnas, Issa
2005-01-01
The Mars Exploration Rovers (MER'03), Spirit and Opportunity, represent the state of the art in rover operations on Mars. This paper presents validation experiments of different visual tracking algorithms using the rover's navigation camera.
Opportunity View Leaving Cape York
2013-06-07
NASA Mars Exploration Rover Opportunity used its navigation camera to acquire this view looking toward the southwest. The scene includes tilted rocks at the edge of a bench surrounding Cape York, with Burns formation rocks exposed in Botany Bay.
Opportunity Surroundings on 3,000th Sol, Vertical Projection
2012-09-07
This 360-degree vertical projection, assembled from images taken by the navigation camera on NASA Mars Exploration Rover Opportunity, shows terrain surrounding the position where the rover spent its 3,000th Martian day.
Opportunity Surroundings on 3,000th Sol, Polar Projection
2012-09-07
This 360-degree polar projection, assembled from images taken by the navigation camera on NASA Mars Exploration Rover Opportunity, shows terrain surrounding the position where the rover spent its 3,000th Martian day.
NASA Technical Reports Server (NTRS)
2007-01-01
On sol 1120 (February 26, 2007), the navigation camera aboard NASA's Mars Exploration Rover Spirit captured one of the best dust devils it has seen in its three-plus-year mission. The series of navigation camera images were put together to make a dust devil movie. The dust devil column is clearly defined and is clearly bent in the downwind direction. Near the end of the movie, the base of the dust devil becomes much wider. The atmospheric science team thinks that this is because the dust devil encountered some sand and therefore produced a 'saltation skirt,' an apron of material that is thrown out of the dust devil because it is too large to be carried up into suspension. Also near the end of the movie the dust devil seems to move faster across the surface. This is because Spirit began taking pictures less frequently, and not because the dust devil sped up.
Spatial calibration of an optical see-through head mounted display
Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew
2010-01-01
We present here a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry. PMID:18599125
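For readers unfamiliar with the photogrammetric machinery the authors borrow, a minimal sketch of the underlying step is given below using OpenCV's standard calibration call. The surrounding setup (collecting 3D-2D correspondences from the tracked object and HMD display features) is specific to the paper and only summarised in comments; the function name is an assumption for illustration.

import numpy as np
import cv2

def calibrate_hmd_camera(object_points, image_points, image_size):
    # object_points: list of (N, 3) float32 arrays of known 3D feature
    # locations (tracked object and HMD display features), one per view.
    # image_points: list of (N, 1, 2) float32 arrays of the corresponding
    # detections from the camera placed inside the HMD.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    # K contains the focal lengths and optic centre (intrinsics); rvecs and
    # tvecs give each view's extrinsic pose; rms is the re-projection error
    # in pixels, the figure of merit the paper reports.
    return rms, K, dist, rvecs, tvecs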
A small field of view camera for hybrid gamma and optical imaging
NASA Astrophysics Data System (ADS)
Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.
2014-12-01
The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
Sarnadskiĭ, V N
2007-01-01
The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.
NASA Technical Reports Server (NTRS)
Hertel, R. J.
1979-01-01
An electro-optical method to measure the aeroelastic deformations of wind tunnel models is examined. The multitarget tracking performance of one of the two electronic cameras comprising the stereo pair is modeled and measured. The properties of the targets at the model, the camera optics, target illumination, number of targets, acquisition time, target velocities, and tracker performance are considered. The electronic camera system is shown to be capable of locating, measuring, and following the positions of 5 to 50 targets attached to the model at measuring rates up to 5000 targets per second.
Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor
Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung
2017-01-01
Unmanned aerial vehicles (UAVs), commonly known as drones, have proved to be useful not only on battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers in the navigation and control loop, which allows for smart GPS features of drone navigation. However, problems arise if drones operate in heterogeneous areas with no GPS signal, so it is important to research the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on a visible-light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775
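The paper's tracker is considerably more elaborate, but the core operation of locating a known marker in each camera frame can be sketched with plain normalised cross-correlation; treat this as a hedged stand-in rather than the authors' algorithm.

import cv2

def track_marker(frame_gray, template_gray):
    # Score every placement of the marker template over the frame with
    # normalised cross-correlation and return the best match centre.
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    h, w = template_gray.shape[:2]
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    return center, score  # score near 1.0 indicates a confident detection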
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set-up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a 2nd camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring using both one and two plenoptic cameras.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram
2017-11-07
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
Under-vehicle autonomous inspection through undercarriage signatures
NASA Astrophysics Data System (ADS)
Schoenherr, Edward; Smuda, Bill
2005-05-01
Increased threats to gate security have created a recent need for improved vehicle inspection methods at security checkpoints in various fields of defense and security. A fast, reliable system of under-vehicle inspection is desirable: one that detects possibly harmful or unwanted materials hidden on vehicle undercarriages and notifies the user of their presence while allowing the user a safe standoff distance from the inspection site. An autonomous under-vehicle inspection system would provide this. The proposed system would function as follows: a low-clearance tele-operated robotic platform would be equipped with sonar/laser range-finding sensors as well as a video camera. As a vehicle to be inspected enters a checkpoint, the robot would autonomously navigate under the vehicle, using algorithms to detect tire locations for waypoints. During this navigation, data would be collected from the sonar/laser range-finding hardware. This range data would be used to compile an impression of the vehicle undercarriage. Once this impression is complete, the system would compare it to a database of pre-scanned undercarriage impressions. Based on vehicle makes and models, any variance between the undercarriage being inspected and the impression compared against in the database would be marked as potentially threatening. If such variances exist, the robot would navigate to these locations and place the video camera in such a manner that the location in question can be viewed from a standoff position through a TV monitor. At this time, manual control of the robot navigation and camera can be taken to allow further, more detailed inspection of the area/materials in question. After-market vehicle modifications would present some difficulty, yet with enough pre-screening of such modifications, the system should still prove accurate. Also, impression scans taken in the field can be stored and tagged with a vehicle's license plate number, and future inspections of that vehicle can be compared to already screened and cleared impressions of the same vehicle in order to search for variance.
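A toy version of the variance-flagging step might look like the following: compare the range-derived height profile of the inspected undercarriage against the stored impression for the same make and model, and report contiguous regions whose depth differs by more than a threshold. The threshold and minimum-extent values are invented for illustration.

import numpy as np

def flag_variances(scan, reference, threshold_m=0.05, min_extent=3):
    # Compare an undercarriage height profile against the stored impression
    # and return index ranges where the depth differs by more than
    # threshold_m metres over at least min_extent consecutive samples.
    diff = np.abs(np.asarray(scan, float) - np.asarray(reference, float))
    suspect = diff > threshold_m
    regions, start = [], None
    for i, flagged in enumerate(suspect):
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            if i - start >= min_extent:   # ignore single-sample noise
                regions.append((start, i))
            start = None
    if start is not None and len(suspect) - start >= min_extent:
        regions.append((start, len(suspect)))
    return regions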
Science Activity Planner for the MER Mission
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Crockett, Thomas M.; Fox, Jason M.; Joswig, Joseph C.; Powell, Mark W.; Shams, Khawaja S.; Torres, Recaredo J.; Wallick, Michael N.; Mittman, David S.
2008-01-01
The Maestro Science Activity Planner is a computer program that assists human users in planning operations of the Mars Exploration Rover (MER) mission and visualizing scientific data returned from the MER rovers. Relative to its predecessors, this program is more powerful and easier to use. The program is built on the Java Eclipse open-source platform around a Web-browser-based user-interface paradigm to provide an intuitive user interface to Mars rovers and landers, and it affords a combination of advanced display and simulation capabilities. For example, a map view of terrain can be generated from images acquired by the High Resolution Imaging Science Experiment (HiRISE) instrument aboard the Mars Reconnaissance Orbiter spacecraft and overlaid with images from a navigation camera (more precisely, a stereoscopic pair of cameras) aboard a rover, and an interactive, annotated rover traverse path can be incorporated into the overlay. It is also possible to construct an overhead perspective mosaic image of terrain from navigation-camera images. This program can be adapted to similar use on other outer-space missions and is potentially adaptable to numerous terrestrial applications involving analysis of data, operations of robots, and planning of such operations for acquisition of scientific data.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-01-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles
Benjamin, R.F.
1983-10-18
An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.
NASA Astrophysics Data System (ADS)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-11-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles
Benjamin, Robert F.
1987-01-01
An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.
Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles
Benjamin, R.F.
1987-03-10
An apparatus is disclosed for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously. 3 figs.
Augmented virtuality for arthroscopic knee surgery.
Li, John M; Bardana, Davide D; Stewart, A James
2011-01-01
This paper describes a computer system to visualize the location and alignment of an arthroscope using augmented virtuality. A 3D computer model of the patient's joint (from CT) is shown, along with a model of the tracked arthroscopic probe and the projection of the camera image onto the virtual joint. A user study, using plastic bones instead of live patients, was made to determine the effectiveness of this navigated display; the study showed that the navigated display improves target localization in novice residents.
Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System
2015-03-26
[Fragmentary excerpt from the thesis: it introduces a pinhole camera model in which light reflected or projected from objects in the outside-world scene enters through the aperture; an analog-to-digital interface that converts raw images of the scene into digital information a computer can use; a digital image coordinate system (Figure 2.7, used with permission [30]); and the angular field of view, defined as the angle of the world scene viewed by the camera.]
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is a key factor in the geometric quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.
Infrared stereo calibration for unmanned ground vehicle navigation
NASA Astrophysics Data System (ADS)
Harguess, Josh; Strange, Shawn
2014-06-01
The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new, and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
Inertial navigation sensor integrated obstacle detection system
NASA Technical Reports Server (NTRS)
Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)
1992-01-01
A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor; the active detection typically utilizes a laser. The passive sensor suite includes binocular stereo, motion stereo and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.
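The derotation step, in which inertial rates are used to strip the rotation-induced part of the optical flow so that the remainder encodes range, can be sketched as follows. The sign convention is one common form of the motion-field equations and may differ from the patent's; all names are illustrative.

import numpy as np

def derotate_flow(points, flow, omega):
    # points: (N, 2) normalised image coordinates (x, y)
    # flow:   (N, 2) measured optical flow vectors at those points
    # omega:  body rates (wx, wy, wz) from the inertial sensor, rad/s
    # Subtract the rotation-induced flow component, leaving the
    # translational component used for range interpolation.
    x, y = points[:, 0], points[:, 1]
    wx, wy, wz = omega
    u_rot = x * y * wx - (1.0 + x**2) * wy + y * wz
    v_rot = (1.0 + y**2) * wx - x * y * wy - x * wz
    return flow - np.stack([u_rot, v_rot], axis=1)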
Effects of Optical Artifacts in a Laser-Based Spacecraft Navigation Sensor
NASA Technical Reports Server (NTRS)
LeCroy, Jerry E.; Howard, Richard T.; Hallmark, Dean S.
2007-01-01
Testing of the Advanced Video Guidance Sensor (AVGS) used for proximity operations navigation on the Orbital Express ASTRO spacecraft exposed several unanticipated imaging system artifacts and aberrations that required correction to meet critical navigation performance requirements. Mitigation actions are described for a number of system error sources, including lens aberration, optical train misalignment, laser speckle, target image defects, and detector nonlinearity/noise characteristics. Sensor test requirements and protocols are described, along with a summary of test results from sensor confidence tests and system performance testing.
Effects of Optical Artifacts in a Laser-Based Spacecraft Navigation Sensor
NASA Technical Reports Server (NTRS)
LeCroy, Jerry E.; Hallmark, Dean S.; Howard, Richard T.
2007-01-01
Testing of the Advanced Video Guidance Sensor (AVGS) used for proximity operations navigation on the Orbital Express ASTRO spacecraft exposed several unanticipated imaging system artifacts and aberrations that required correction to meet critical navigation performance requirements. Mitigation actions are described for a number of system error sources, including lens aberration, optical train misalignment, laser speckle, target image defects, and detector nonlinearity/noise characteristics. Sensor test requirements and protocols are described, along with a summary of test results from sensor confidence tests and system performance testing.
Mars Exploration Rover Navigation Camera in-flight calibration
Soderblom, J.M.; Bell, J.F.; Johnson, J. R.; Joseph, J.; Wolff, M.J.
2008-01-01
The Navigation Camera (Navcam) instruments on the Mars Exploration Rover (MER) spacecraft provide support for both tactical operations as well as scientific observations where color information is not necessary: large-scale morphology, atmospheric monitoring including cloud observations and dust devil movies, and context imaging for both the thermal emission spectrometer and the in situ instruments on the Instrument Deployment Device. The Navcams are a panchromatic stereoscopic imaging system built using identical charge-coupled device (CCD) detectors and nearly identical electronics boards as the other cameras on the MER spacecraft. Previous calibration efforts were primarily focused on providing a detailed geometric calibration in line with the principal function of the Navcams, to provide data for the MER navigation team. This paper provides a detailed description of a new Navcam calibration pipeline developed to provide an absolute radiometric calibration that we estimate to have an absolute accuracy of 10% and a relative precision of 2.5%. Our calibration pipeline includes steps to model and remove the bias offset, the dark current charge that accumulates in both the active and readout regions of the CCD, and the shutter smear. It also corrects pixel-to-pixel responsivity variations using flat-field images, and converts from raw instrument-corrected digital number values per second to units of radiance (W m-2 nm-1 sr-1), or to radiance factor (I/F). We also describe here the initial results of two applications where radiance-calibrated Navcam data provide unique information for surface photometric and atmospheric aerosol studies. Copyright 2008 by the American Geophysical Union.
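The calibration steps enumerated above follow a standard ordering that can be written compactly. The sketch below is a simplified stand-in for the authors' pipeline (shutter-smear removal is omitted), with all calibration products assumed to be supplied from ground or in-flight data and all names invented for illustration.

import numpy as np

def calibrate_navcam(raw_dn, exposure_s, bias, dark_rate_dn_per_s, flat, rad_coeff):
    # bias: fixed-pattern offset (DN); dark_rate_dn_per_s: dark-current rate;
    # flat: normalised pixel-to-pixel responsivity; rad_coeff: DN/s -> radiance.
    dn = raw_dn.astype(float) - bias          # remove bias offset
    dn -= dark_rate_dn_per_s * exposure_s     # remove accumulated dark current
    dn /= flat                                # flat-field correction
    return (dn / exposure_s) * rad_coeff      # radiance, W m-2 nm-1 sr-1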
Parametric Covariance Model for Horizon-Based Optical Navigation
NASA Technical Reports Server (NTRS)
Hikes, Jacob; Liounis, Andrew J.; Christian, John A.
2016-01-01
This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.
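The Note's closed-form covariance is not reproduced in this abstract, but one of the trends it captures, that the limb-fit position error shrinks with the number of horizon points, is easy to confirm with a toy Monte Carlo like the one below (Kasa circle fit; radius, arc, and noise values invented for illustration).

import numpy as np

def centre_error_vs_points(n_points_list, noise_px=0.5, trials=1000):
    # Planet limb modelled as a circle; fit noisy points on a 120-degree lit
    # arc with the linear (Kasa) circle fit and average the centre error.
    rng = np.random.default_rng(1)
    radius, arc = 200.0, np.deg2rad(120.0)
    results = {}
    for n in n_points_list:
        theta = np.linspace(-arc / 2, arc / 2, n)
        errs = []
        for _ in range(trials):
            x = radius * np.cos(theta) + rng.normal(0.0, noise_px, n)
            y = radius * np.sin(theta) + rng.normal(0.0, noise_px, n)
            A = np.column_stack([2 * x, 2 * y, np.ones(n)])
            b = x**2 + y**2
            cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]
            errs.append(np.hypot(cx, cy))
        results[n] = float(np.mean(errs))
    return results

print(centre_error_vs_points([20, 80, 320]))  # error shrinks roughly as 1/sqrt(N)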
Comparison of standing volume estimates using optical dendrometers
Neil A. Clark; Stanley J. Zarnoch; Alexander Clark; Gregory A. Reams
2001-01-01
This study compared height and diameter measurements and volume estimates on 20 hardwood and 20 softwood stems using traditional optical dendrometers, an experimental camera instrument, and mechanical calipers. Multiple comparison tests showed significant differences among the means for lower stem diameters when the camera was used. There were no significant...
Comparison of Standing Volume Estimates Using Optical Dendrometers
Neil A. Clark; Stanley J. Zarnoch; Alexander Clark; Gregory A. Reams
2001-01-01
This study compared height and diameter measurements and volume estimates on 20 hardwood and 20 softwood stems using traditional optical dendrometers, an experimental camera instrument, and mechanical calipers. Multiple comparison tests showed significant differences among the means for lower stem diameters when the camera was used. There were no significant...
Optical gas imaging (OGI) cameras have the unique ability to exploit the electromagnetic properties of fugitive chemical vapors to make invisible gases visible. This ability is extremely useful for industrial facilities trying to mitigate product losses from escaping gas and fac...
Optical design of portable nonmydriatic fundus camera
NASA Astrophysics Data System (ADS)
Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang
2016-03-01
Fundus cameras are widely used in screening and diagnosis of retinal disease; they are simple and widely used medical equipment. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which made patients experience vertigo and blurred vision. Nonmydriatic operation is the trend in fundus camera design. A desktop fundus camera is not easy to carry and is suitable only for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm light for imaging, and 808 nm light for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to suppress stray light from the cornea center. The focus of the camera is adjusted by repositioning the CCD along the optical axis. The diopter range is between -20 m^-1 and +20 m^-1.
Drive Direction Image by Opportunity After Surpassing 20 Miles
2011-07-19
NASA Mars Exploration Rover Opportunity used its navigation camera to record this view in the eastward driving direction after completing a drive on July 17, 2011, that took the rover's total driving distance on Mars beyond 20 miles.
2012-08-08
These are the first two full-resolution images of the Martian surface from the Navigation cameras on NASA's Curiosity rover, which are located on the rover's head, or mast. The rim of Gale Crater can be seen in the distance beyond the pebbly ground.
Scooped Material on Rover Observation Tray
2012-10-25
Sample material from the fourth scoop of Martian soil collected by NASA Mars rover Curiosity is on the rover's observation tray in this image taken during the mission's 78th Martian sol, Oct. 24, 2012, by Curiosity's left Navigation Camera.
Opportunity Surroundings After Sol 2363 Drive
2010-09-29
This mosaic of images from the navigation camera on NASA Mars Exploration Rover Opportunity shows the surroundings of the rover's location following a drive on Sept. 16, 2010. The terrain includes light-toned bedrock and darker ripples of wind-blown sand.
Skirting an Obstacle, Opportunity Sol 1867
2009-07-15
This view from the navigation camera on NASA Mars Exploration Rover Opportunity shows tracks left as the rover backed out of a wind-formed ripple after its wheels had started to dig too deeply into the dust and sand of the ripple.
Underwater Inspection of Navigation Structures with an Acoustic Camera
2013-08-01
[Fragmentary excerpt: scanning instructions (rotate the camera at a slow angular speed while recording images; after scanning, review the recorded data) and software requirements for the review tool (Core x86 or newer, 2 GB RAM, 120 GB disk space; Windows XP, Vista, or Windows 7, 32/64-bit; Sun Java JDK 1.6 Update 16 or newer), followed by limitations and tips for proper scanning.]
Hybrid optical acoustic seafloor mapping
NASA Astrophysics Data System (ADS)
Inglis, Gabrielle
The oceanographic research and industrial communities have a persistent demand for detailed three-dimensional seafloor maps which convey both shape and texture. Such data products are used for archeology, geology, ship inspection, biology, and habitat classification. A variety of sensing modalities and processing techniques are available to produce these maps, and each has its own potential benefits and related challenges. Multibeam sonar and stereo vision are two such sensors with complementary strengths, making them ideally suited for data fusion. Data fusion approaches, however, have seen only limited application to underwater mapping, and there are no established methods for creating hybrid 3D reconstructions from two underwater sensing modalities. This thesis develops a processing pipeline to synthesize hybrid maps from multi-modal survey data. It is helpful to think of this processing pipeline as having two distinct phases: navigation refinement and map construction. This thesis extends existing work in underwater navigation refinement by incorporating methods which increase measurement consistency between both multibeam and camera. The result is a self-consistent 3D point cloud comprised of camera and multibeam measurements. In the map construction phase, a subset of the multi-modal point cloud retaining the best characteristics of each sensor is selected to be part of the final map. To quantify the desired traits of a map, several characteristics of a useful map are distilled into specific criteria. The different ways that hybrid maps can address these criteria provide justification for producing them as an alternative to current methodologies. The processing pipeline implements multi-modal data fusion and outlier rejection with emphasis on different aspects of map fidelity. The resulting point cloud is evaluated in terms of how well it addresses the map criteria. The final hybrid maps retain the strengths of both sensors and show significant improvement over the single-modality maps and naively assembled multi-modal maps.
The Sydney University PAPA camera
NASA Astrophysics Data System (ADS)
Lawson, Peter R.
1994-04-01
The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.
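The Gray-coded mask-plate means each photon's strip-pattern readout differs by a single bit between adjacent positions; decoding a Gray address back to a binary pixel coordinate takes only a few lines. This is a generic decoder, not the camera's actual electronics.

def gray_to_binary(gray_address: int) -> int:
    # XOR-fold the successively shifted value to undo the Gray encoding.
    binary = gray_address
    mask = gray_address >> 1
    while mask:
        binary ^= mask
        mask >>= 1
    return binary

# e.g. gray_to_binary(0b110) == 4; each axis of the 256x256 detector
# would use an 8-bit address.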
ARNICA, the Arcetri near-infrared camera: Astronomical performance assessment.
NASA Astrophysics Data System (ADS)
Hunt, L. K.; Lisi, F.; Testi, L.; Baffa, C.; Borelli, S.; Maiolino, R.; Moriondo, G.; Stanga, R. M.
1996-01-01
The Arcetri near-infrared camera ARNICA was built as a users' instrument for the Infrared Telescope at Gornergrat (TIRGO), and is based on a 256x256 NICMOS 3 detector. In this paper, we discuss ARNICA's optical and astronomical performance at the TIRGO and at the William Herschel Telescope on La Palma. Optical performance is evaluated in terms of plate scale, distortion, point spread function, and ghosting. Astronomical performance is characterized by camera efficiency, sensitivity, and spatial uniformity of the photometry.
640x480 PtSi Stirling-cooled camera system
NASA Astrophysics Data System (ADS)
Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.
1992-09-01
A Stirling-cooled 3-5 micron camera system has been developed. The camera employs a monolithic 640 x 480 PtSi-MOS focal plane array. The camera system achieves an NEDT = 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad, the vertical and horizontal Minimum Resolvable Temperature are in the range of MRT = 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.
NASA Astrophysics Data System (ADS)
Waki, Masaki; Uruno, Shigenori; Ohashi, Hiroyuki; Manabe, Tetsuya; Azuma, Yuji
We propose an optical fiber connection navigation system that uses visible light communication for an integrated distribution module in a central office. The system maintains an accurate database, requires less-skilled labor to operate, and eliminates human error. The system can reduce working time by up to 88.0% compared with the conventional procedure, without human error, for the connection/removal of optical fiber cords, and is economical as regards installation and operation.
Ohnsorge, J A K; Weisskopf, M; Siebert, C H
2005-01-01
Optoelectronic navigation for computer-assisted orthopaedic surgery (CAOS) is based on a firm connection of bone with passive reflectors or active light-emitting diodes in a specific three-dimensional pattern. Even a so-called "minimally invasive" dynamic reference base (DRB) requires fixation with screws or clamps via incision of the skin; consequently, an originally percutaneous intervention would unnecessarily be extended to an open procedure. Thus, computer-assisted navigation is rarely applied. Due to their tree-like design, most DRBs interfere with the surgeon's actions and are therefore at permanent risk of being accidentally dislocated. Accordingly, the optical communication between the camera and the operative site may repeatedly be interrupted. The aim of the research was the development of a less bulky, more comfortable, stable and safely trackable device that can be fixed truly percutaneously. With engineering support from the industrial partner, the radiolucent epiDRB was developed. It can be fixed with two or more pins and gains additional stability from its epicutaneous position. Its intraoperative applicability and reliability were experimentally tested. Its low centre of gravity and flat design allow the device to be located directly in the area of interest. Thanks to its epicutaneous position and its particular shape, the epiDRB may be tracked continuously by the navigation system without hindering the surgeon's actions; hence, the risk of accidental displacement is minimised and the line of sight remains unaffected. With the newly developed epiDRB, computer-assisted navigation becomes easier and safer to handle, even in punctures and other percutaneous procedures at the spine as much as at the extremities, without a disproportionate amount of additional trauma. Due to the special design, referencing of more than one vertebral body is possible at one time, thus decreasing radiation exposure and increasing efficiency.
A Depth-Based Head-Mounted Visual Display to Aid Navigation in Partially Sighted Individuals
Hicks, Stephen L.; Wilson, Iain; Muhammed, Louwai; Worsfold, John; Downes, Susan M.; Kennard, Christopher
2013-01-01
Independent navigation for blind individuals can be extremely difficult due to the inability to recognise and avoid obstacles. Assistive techniques such as white canes, guide dogs, and sensory substitution provide a degree of situational awareness by relying on touch or hearing, but as yet there are no techniques that attempt to make use of any residual vision that the individual is likely to retain. Residual vision can be restricted to awareness of the orientation of a light source, and hence any information presented on a wearable display would have to be limited and unambiguous. For improved situational awareness, i.e. for the detection of obstacles, displaying the size and position of nearby objects, rather than including finer surface details, may be sufficient. To test whether a depth-based display could be used to navigate a small obstacle course, we built a real-time head-mounted display with a depth camera and software to detect the distance to nearby objects. Distance was represented as brightness on a low-resolution display positioned close to the eyes without the benefit of focusing optics. A set of sighted participants were monitored as they learned to use this display to navigate the course. All were able to do so, and time and velocity rapidly improved with practice with no increase in the number of collisions. In a second experiment, a cohort of severely sight-impaired individuals of varying aetiologies performed a search task using a similar low-resolution head-mounted display. The majority of participants were able to use the display to respond to objects in their central and peripheral fields at a similar rate to sighted controls. We conclude that the skill to use a depth-based display for obstacle avoidance can be rapidly acquired, and the simplified nature of the display may be appropriate for the development of an aid for sight-impaired individuals. PMID:23844067
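The core mapping, nearer objects rendered brighter on a coarse display, is simple enough to sketch. The near/far limits, level count, and block size below are illustrative choices, not those of the prototype; the code assumes a depth image at least 16 pixels on each side.

import numpy as np

def depth_to_brightness(depth_m, near=0.5, far=4.0, levels=8):
    # Map depth to brightness: 1.0 = nearest, 0.0 = at or beyond `far`.
    d = np.clip(depth_m, near, far)
    bright = 1.0 - (d - near) / (far - near)
    # Quantise to a few levels to mimic the unambiguous low-detail display.
    bright = np.round(bright * (levels - 1)) / (levels - 1)
    # Downsample to coarse display "pixels" by 16x16 block averaging.
    h, w = bright.shape
    bh, bw = h // 16, w // 16
    return bright[:bh * 16, :bw * 16].reshape(bh, 16, bw, 16).mean(axis=(1, 3))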
Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)
NASA Technical Reports Server (NTRS)
Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph
2015-01-01
The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.
Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A
2013-01-01
This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787
Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A
2013-06-01
This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces.
Orbital-science investigation: Part C: photogrammetry of Apollo 15 photography
Wu, Sherman S.C.; Schafer, Francis J.; Jordan, Raymond; Nakata, Gary M.; Derick, James L.
1972-01-01
Mapping of large areas of the Moon by photogrammetric methods was not seriously considered until the Apollo 15 mission. In this mission, a mapping camera system and a 61-cm optical-bar high-resolution panoramic camera, as well as a laser altimeter, were used. The mapping camera system comprises a 7.6-cm metric terrain camera and a 7.6-cm stellar camera mounted in a fixed angular relationship (an angle of 96° between the two camera axes). The metric camera has a glass focal-plane plate with reseau grids. The ground-resolution capability from an altitude of 110 km is approximately 20 m. Because of the auxiliary stellar camera and the laser altimeter, the resulting metric photography can be used not only for medium- and small-scale cartographic or topographic maps, but it also can provide a basis for establishing a lunar geodetic network. The optical-bar panoramic camera has a 135- to 180-line resolution, which is approximately 1 to 2 m of ground resolution from an altitude of 110 km. Very large scale specialized topographic maps for supporting geologic studies of lunar-surface features can be produced from the stereoscopic coverage provided by this camera.
Luo, Xiongbiao; Jayarathne, Uditha L; McLeod, A Jonathan; Mori, Kensaku
2014-01-01
Endoscopic navigation generally integrates different modalities of sensory information in order to continuously locate an endoscope relative to suspicious tissues in the body during interventions. Current electromagnetic tracking techniques for endoscopic navigation have limited accuracy due to tissue deformation and magnetic field distortion. To avoid these limitations and improve the endoscopic localization accuracy, this paper proposes a new endoscopic navigation framework that uses an optical mouse sensor to measure the endoscope movements along its viewing direction. We then enhance the differential evolution algorithm by modifying its mutation operation. Based on the enhanced differential evolution method, these movement measurements and image structural patches in endoscopic videos are fused to accurately determine the endoscope position. An evaluation on a dynamic phantom demonstrated that our method provides a more accurate navigation framework. Compared to state-of-the-art methods, it improved the navigation accuracy from 2.4 to 1.6 mm and reduced the processing time from 2.8 to 0.9 seconds.
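The paper's contribution is a modified mutation operation; for orientation, the classic DE/rand/1 mutation it departs from is shown here (the enhancement itself is not described in the abstract, so it is not reproduced).

import numpy as np

def de_mutate(population, f=0.8, rng=None):
    # Classic DE/rand/1: for each target vector i, combine three distinct,
    # randomly chosen population members into a mutant vector.
    if rng is None:
        rng = np.random.default_rng()
    n, _ = population.shape
    mutants = np.empty_like(population)
    for i in range(n):
        choices = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(choices, size=3, replace=False)
        mutants[i] = population[r1] + f * (population[r2] - population[r3])
    return mutants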
Integrated communications and optical navigation system
NASA Astrophysics Data System (ADS)
Mueller, J.; Pajer, G.; Paluszek, M.
2013-12-01
The Integrated Communications and Optical Navigation System (ICONS) is a flexible navigation system for spacecraft that does not require global positioning system (GPS) measurements. The navigation solution is computed using an Unscented Kalman Filter (UKF) that can accept any combination of range, range-rate, planet chord width, landmark, and angle measurements using any celestial object. Both absolute and relative orbit determination is supported. The UKF employs a full nonlinear dynamical model of the orbit including gravity models and disturbance models. The ICONS package also includes attitude determination algorithms using the UKF algorithm with the Inertial Measurement Unit (IMU). The IMU is used as the dynamical base for the attitude determination algorithms. This makes the sensor a more capable plug-in replacement for a star tracker, thus reducing the integration and test cost of adding this sensor to a spacecraft. Recent additions include an integrated optical communications system which adds communications, and integrated range and range rate measurement and timing. The paper includes test results from trajectories based on the NASA New Horizons spacecraft.
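At the heart of the UKF is the unscented transform. A minimal sigma-point generator is sketched below, assuming the usual scaled formulation; it is illustrative rather than the ICONS flight code.

import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    # Generate the 2n+1 sigma points plus mean (wm) and covariance (wc)
    # weights for state x with covariance P.
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)   # columns are scaled sqrt(P) axes
    pts = np.vstack([x, x + S.T, x - S.T])  # rows of S.T are columns of S
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return pts, wm, wc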
Telecommunications and navigation systems design for manned Mars exploration missions
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.
1989-06-01
This paper discusses typical manned Mars exploration needs for telecommunications, including preliminary navigation support functions. It is a brief progress report on an ongoing study program within the current NASA JPL Deep Space Network (DSN) activities. A typical Mars exploration case is defined, and support approaches comparing microwave and optical frequency performance for both local in situ and Mars-earth links are described. Optical telecommunication and navigation technology development opportunities in a Mars exploration program are also identified. A local Mars system telecommunication relay and navigation capability for service support of all Mars missions has been proposed as part of an overall solar system communications network. The effects of light-time delay and occultations on real-time mission decision-making are discussed; the availability of increased local mass data storage may be more important than increasing peak data rates to earth. The long-term frequency use plan will most likely include a mix of microwave, millimeter-wave and optical link capabilities to meet a variety of deep space mission needs.
Telecommunications and navigation systems design for manned Mars exploration missions
NASA Technical Reports Server (NTRS)
Hall, Justin R.; Hastrup, Rolf C.
1989-01-01
This paper discusses typical manned Mars exploration needs for telecommunications, including preliminary navigation support functions. It is a brief progress report on an ongoing study program within the current NASA JPL Deep Space Network (DSN) activities. A typical Mars exploration case is defined, and support approaches comparing microwave and optical frequency performance for both local in situ and Mars-earth links are described. Optical telecommunication and navigation technology development opportunities in a Mars exploration program are also identified. A local Mars system telecommunication relay and navigation capability for service support of all Mars missions has been proposed as part of an overall solar system communications network. The effects of light-time delay and occultations on real-time mission decision-making are discussed; the availability of increased local mass data storage may be more important than increasing peak data rates to earth. The long-term frequency use plan will most likely include a mix of microwave, millimeter-wave and optical link capabilities to meet a variety of deep space mission needs.
A Kinect™ camera based navigation system for percutaneous abdominal puncture
NASA Astrophysics Data System (ADS)
Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao
2016-08-01
Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of Kinect™ was released recently; we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and to compare its needle insertion guidance performance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were targeted under guidance from the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable, that the second-generation Kinect™-based navigation is superior to the first-generation Kinect™, and that it has potential for clinical application in percutaneous abdominal puncture.
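The ICP matching stage referred to above is standard enough to sketch: given the coarse initial pose from the 2D-shape correspondence search, alternate nearest-neighbour matching with a closed-form (SVD/Kabsch) rigid update. This is a generic point-to-point ICP, not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    # Align the intraoperative depth-camera surface (source, Nx3) to the
    # preoperative CT surface (target, Mx3); a coarse initial pose should
    # already have been applied to `source`.
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)              # closest CT point per source point
        matched = target[idx]
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total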
Reconditioning of Cassini Narrow-Angle Camera
2002-07-23
These five images of single stars, taken at different times with the narrow-angle camera on NASA's Cassini spacecraft, show the effects of haze collecting on the camera optics, followed by the successful removal of the haze through warming treatments.
Evaluation of modified portable digital camera for screening of diabetic retinopathy.
Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi
2009-01-01
To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to ×2.4, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60 degrees) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique for acquiring fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.
First 3-D Panorama of Spirit Landing Site
2004-01-05
This sprawling look at the Martian landscape surrounding the Mars Exploration Rover Spirit is the first 3-D stereo image from the rover's navigation camera. "Sleepy Hollow" can be seen to the center left of the image. 3D glasses are necessary.
Streaks on Opportunity Solar Panel After Uphill Drive
2016-03-31
This image from the navigation camera on the mast of NASA's Mars Exploration Rover Opportunity shows streaks of dust or sand on the vehicle's rear solar panel after a series of drives during which the rover was pointed steeply uphill.
Spirit Look Ahead After Sol 1866 Drive
2009-07-16
This scene combines three frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the 1,866th Martian day, or sol, of Spirit's mission on Mars (April 3, 2009). It spans 120 degrees, with south at the center.
Spirit Look Ahead After Sol 1869 Drive
2009-07-16
This scene combines three frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the 1,869th Martian day, or sol, of Spirit's mission on Mars (April 6, 2009). It spans 120 degrees, with south at the center.
Image Relayed by MAVEN Mars Orbiter from Curiosity Mars Rover
2014-11-10
The first demonstration of NASA's MAVEN Mars orbiter's capability to relay data from a Mars surface mission, on Nov. 6, 2014, included this image, taken Oct. 23, 2014, by Curiosity's Navigation Camera, showing part of the Pahrump Hills outcrop.
Time-resolved X-ray excited optical luminescence using an optical streak camera
NASA Astrophysics Data System (ADS)
Ward, M. J.; Regier, T. Z.; Vogt, J. M.; Gordon, R. A.; Han, W.-Q.; Sham, T. K.
2013-03-01
We report the development of a time-resolved XEOL (TR-XEOL) system that employs an optical streak camera. We have conducted TR-XEOL experiments at the Canadian Light Source (CLS), operating in single-bunch mode with a 570 ns dark gap and 35 ps electron bunch pulse, and at the Advanced Photon Source (APS), operating in top-up mode with a 153 ns dark gap and 33.5 ps electron bunch pulse. To illustrate the power of this technique, we measured the TR-XEOL of solid-solution nanopowders of gallium nitride-zinc oxide, and for the first time have been able to resolve near-band-gap (NBG) optical luminescence emission from these materials. Herein we discuss the development of the streak-camera TR-XEOL technique and its application to the study of these novel materials.
Exact optics - III. Schwarzschild's spectrograph camera revised
NASA Astrophysics Data System (ADS)
Willstrop, R. V.
2004-03-01
Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.
Optimetrics for Precise Navigation
NASA Technical Reports Server (NTRS)
Yang, Guangning; Heckler, Gregory; Gramling, Cheryl
2017-01-01
Optimetrics for Precise Navigation will be implemented on existing optical communication links. The ranging and Doppler measurements are conducted over the communication data frames and clock, and the measurement accuracy is two orders of magnitude better than that of TDRSS. The approach has further advantages. The high optical carrier frequency provides (1) immunity from the ionospheric and interplanetary plasma noise floor, which limits the performance of RF tracking, and (2) high antenna gain, which reduces terminal size and volume and enables high-precision tracking on CubeSats and deep-space smallsats. High optical pointing precision provides spacecraft orientation; minimal additional hardware is needed to implement precise optimetrics over the optical comm link; and continuous optical carrier-phase measurement will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.
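As a rough illustration of the two observables involved, the sketch below computes a two-way range from a time-tagged data frame and a line-of-sight velocity from the carrier-phase rate. It is our own toy formulation; the wavelength and sign conventions are illustrative assumptions, not taken from the paper.

```python
# Toy optimetrics observables (illustrative assumptions, not the paper's design).
C = 299_792_458.0  # speed of light, m/s

def two_way_range(t_transmit, t_receive):
    """Range (m) from the round-trip light time of a time-tagged data frame."""
    return C * (t_receive - t_transmit) / 2.0

def doppler_velocity(delta_phase_cycles, dt, wavelength=1550e-9):
    """Line-of-sight velocity (m/s) from carrier-phase change over dt seconds;
    positive means the range is increasing (assumed sign convention)."""
    freq_shift = delta_phase_cycles / dt      # Hz
    return -freq_shift * wavelength
```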
Dual beam optical interferometer
NASA Technical Reports Server (NTRS)
Gutierrez, Roman C. (Inventor)
2003-01-01
A dual beam interferometer device is disclosed that enables moving an optics module in a direction that changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns detected by an element, such as a camera, which thereby detects a characteristic of the surface.
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
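A minimal sketch of the acquisition bookkeeping such LED-synchronized imaging implies, assuming the microcontroller strobes LED k on frame k mod N (the released package is far more general; names here are ours):

```python
# Demultiplex an interleaved multispectral frame stream (illustrative sketch).
def demultiplex_frames(frames, n_leds):
    """Split frames into one stack per LED wavelength, frame k -> LED k mod n."""
    return [frames[i::n_leds] for i in range(n_leds)]

# Example with 3 wavelengths: frames 0, 3, 6, ... were lit by LED 0, etc.
stacks = demultiplex_frames(list(range(12)), 3)
assert stacks[0] == [0, 3, 6, 9]
```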
Image quality testing of assembled IR camera modules
NASA Astrophysics Data System (ADS)
Winters, Daniel; Erichsen, Patrik
2013-10-01
Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), for broadband light or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing of the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces, and its suitability for fully automated measurements in mass production.
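For reference, the NUC tables mentioned above are commonly built from a two-point calibration against uniform blackbody frames at two temperatures. The sketch below is that generic scheme, not the specific procedure of the test system described in the paper:

```python
# Generic two-point non-uniformity correction (NUC) sketch.
import numpy as np

def two_point_nuc(cold, hot, t_cold, t_hot):
    """Per-pixel gain/offset from uniform blackbody frames at two temperatures.
    (A real implementation must also handle dead pixels where hot == cold.)"""
    gain = (t_hot - t_cold) / (hot - cold)
    offset = t_cold - gain * cold
    return gain, offset

def correct(frame, gain, offset):
    """Apply the correction, yielding a temperature-linear output image."""
    return gain * frame + offset
```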
NASA Astrophysics Data System (ADS)
Baruch, Daniel; Abookasis, David
2017-04-01
The application of optical techniques as tools for biomedical research has generated substantial interest because such methodologies can simultaneously measure biochemical and morphological parameters of tissue. Ongoing optimization of optical techniques may establish such tools as alternatives or complements to conventional methodologies. The common approach shared by current optical techniques lies in the independent acquisition of tissue's optical properties (i.e., absorption and reduced scattering coefficients) from reflected or transmitted light. These optical parameters, in turn, provide detailed information on both the concentrations of clinically relevant chromophores and macroscopic structural variations in tissue. We couple a noncontact optical setup with a simple analysis algorithm to obtain the absorption and scattering coefficients of biological samples under test. Technically, a portable picoprojector projects serial sinusoidal patterns at low and high spatial frequencies, while the reflected diffuse light is acquired simultaneously through a single spectrometer and two separate CCD cameras, each fitted with a bandpass filter at a nonisosbestic or an isosbestic wavelength. The two detection channels fill the gaps in each other's capabilities, together acquiring the optical properties of tissue at high spectral and spatial resolution. Experiments were performed on tissue-mimicking phantoms as well as on the hands of healthy human volunteers to quantify their optical properties as a proof of concept for the present technique. In a separate experiment, we derived the optical properties of the hand skin from the measured diffuse reflectance, based on a recently developed camera model. Additionally, oxygen saturation levels of tissue measured by the system were found to agree well with reference values. Taken together, the present results demonstrate the potential of this integrated setup for diagnostic and research applications.
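Structured-illumination measurements of this kind are usually demodulated from three patterns shifted by 120 degrees; the sketch below shows that standard step (the abstract does not spell out the authors' processing chain, so this is the textbook version):

```python
# Standard three-phase demodulation for sinusoidally structured illumination.
import numpy as np

def demodulate(i1, i2, i3):
    """AC and DC modulation amplitudes from patterns shifted by 0, 120, 240 deg."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt((i1 - i2)**2 + (i2 - i3)**2 + (i3 - i1)**2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc
```

Ratioing the AC amplitude against the same measurement on a calibration phantom yields diffuse reflectance versus spatial frequency, from which the absorption and reduced scattering coefficients can be inverted with a light-propagation model.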
Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics
NASA Astrophysics Data System (ADS)
Furxhi, Orges; Frascati, Joe; Driggers, Ronald
2018-04-01
Panoramic imaging is inherently wide field of view, and high-sensitivity uncooled Long Wave Infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back-working-distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial off-the-shelf (COTS) low F-number lens. This approach results in low component cost and effortless integration with pre-calibrated, commercially available cameras and lenses.
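Displaying the donut image as a conventional panorama is a polar-to-Cartesian resampling step. A hypothetical nearest-neighbour sketch follows; the abstract does not describe the prototype's display processing, and the center and radii would in practice come from calibration:

```python
# Hypothetical unwrap of an annular (donut) panorama into a rectangle.
import numpy as np

def unwrap_donut(img, cx, cy, r_in, r_out, width=2048):
    """Nearest-neighbour polar-to-Cartesian resampling of the donut image."""
    height = int(round(r_out - r_in))
    theta = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radius = np.linspace(r_in, r_out, height)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")
    x = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    y = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[y, x]   # bilinear interpolation would give a smoother result
```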
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active depth from defocus (DFD) system design suited for long-working-range camera autofocus applications. The design consists of an active illumination module that projects a scene-illuminating, coherent, conditioned optical radiation pattern which maintains its sharpness over multiple axial distances, allowing an increased DFD working-distance range. The imager module of the system, responsible for the actual DFD operation, deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration conducted in the laboratory compares the effectiveness of the coherent conditioned radiation module against a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example in the tiny camera housings of smartphones and tablets. Applications for the proposed system include autofocus in modern digital cameras.
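The geometric relation that any DFD scheme ultimately inverts follows from the thin-lens equation; the numeric sketch below uses made-up parameters and ignores the wave-optics and coherent-illumination aspects that the paper actually addresses:

```python
# Thin-lens defocus blur as a function of object distance (toy numbers).
def blur_diameter(z, f, aperture, z_focus):
    """Blur-circle diameter on the sensor for an object at distance z (metres)."""
    v = 1.0 / (1.0 / f - 1.0 / z)            # image distance of the object
    v_f = 1.0 / (1.0 / f - 1.0 / z_focus)    # sensor position (focused plane)
    return aperture * abs(v - v_f) / v

# Example: 50 mm f/2 lens (25 mm aperture) focused at 2 m, object at 4 m.
print(blur_diameter(4.0, 0.05, 0.025, 2.0))  # ~3.2e-4 m, i.e. ~0.32 mm of blur
```

A DFD system measures the blur and inverts this relation for z; in the design above, the ECVFL changes the focus setting electronically rather than mechanically.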
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement, and practical single-camera stereo digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto the two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles, and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and is strongly robust against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
Martian Terrain Near Curiosity Precipice Target
2016-12-06
This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140
A low-cost test-bed for real-time landmark tracking
NASA Astrophysics Data System (ADS)
Csaszar, Ambrus; Hanan, Jay C.; Moreels, Pierre; Assad, Christopher
2007-04-01
A low-cost vehicle test-bed system was developed to iteratively test, refine, and demonstrate navigation algorithms before attempting to transfer them to more advanced rover prototypes. The platform used here was a modified radio-controlled (RC) car. A microcontroller board and an onboard laptop computer allow for either autonomous or remote operation via a computer workstation. The sensors onboard the vehicle represent the types currently used on NASA-JPL rover prototypes. For dead-reckoning navigation, optical wheel encoders, a single-axis gyroscope, and a 2-axis accelerometer were used. An ultrasound ranger is available to calculate distance as a substitute for the stereo vision systems presently used on rovers. The prototype also carries a small laptop computer with a USB camera and wireless transmitter to send real-time video to an off-board computer. A real-time user interface was implemented that combines an automatic image feature selector, tracking parameter controls, a streaming video viewer, and user-generated or autonomous driving commands. Using the test-bed, real-time landmark tracking was demonstrated by autonomously driving the vehicle through the JPL Mars yard. The algorithms tracked rocks as waypoints, generating coordinates for calculating relative motion and for visually servoing to science targets. A limitation of the current system is serial computing: each additional landmark is tracked in sequence. However, since each landmark is tracked independently, adding targets would not significantly diminish system speed if the tracking were transferred to appropriate parallel hardware.
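The encoder-plus-gyro dead reckoning used on the test-bed reduces to a short pose integration; here is a minimal sketch with our own variable names (the vehicle's actual estimator is not detailed in the abstract):

```python
# Minimal dead-reckoning pose integration from wheel encoders and a gyro.
import math

def dead_reckon(x, y, heading, d_dist, d_heading):
    """Advance the pose by one sample, using the midpoint heading."""
    mid = heading + d_heading / 2.0
    return (x + d_dist * math.cos(mid),
            y + d_dist * math.sin(mid),
            heading + d_heading)

pose = (0.0, 0.0, 0.0)
for d_dist, d_heading in [(0.1, 0.0), (0.1, 0.05), (0.1, 0.05)]:  # synthetic samples
    pose = dead_reckon(*pose, d_dist, d_heading)
```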
Combined hostile fire and optics detection
NASA Astrophysics Data System (ADS)
Brännlund, Carl; Tidström, Jonas; Henriksson, Markus; Sjöqvist, Lars
2013-10-01
Snipers and other optically guided weapon systems are serious threats in military operations. We have studied a SWIR (Short Wave Infrared) camera-based system with the capability to detect and locate snipers, both before and after a shot, over a large field of view. The high-frame-rate SWIR camera allows resolution of the temporal profile of muzzle flashes, the infrared signature associated with the ejection of the bullet from the rifle. The capability to detect and discriminate sniper muzzle flashes with this system was verified by FOI in earlier studies. In this work we have extended the system by adding a laser channel for optics detection. A laser diode with a slit-shaped beam profile is scanned over the camera field of view to detect retro-reflections from optical sights. The optics detection system has been tested at various distances up to 1.15 km, showing the feasibility of detecting rifle scopes in full daylight. The high-speed camera makes it possible to discriminate false alarms by analyzing the temporal data. The intensity variation caused by atmospheric turbulence enables discrimination of small sights from larger reflectors due to aperture averaging, even though the targets cover only a single pixel. It is shown that optics detection can be integrated with muzzle flash detection by adding a scanning rectangular laser slit. The overall optics detection capability from continuous surveillance of a relatively large field of view looks promising. This type of multifunctional system may become an important tool for detecting snipers before and after a shot.
A rigid and thermally stable all ceramic optical support bench assembly for the LSST Camera
NASA Astrophysics Data System (ADS)
Kroedel, Matthias; Langton, J. Brian; Wahl, Bill
2017-09-01
This paper presents the ceramic design, fabrication and metrology results, and assembly plan of the LSST camera optical bench structure, which exploits the unique manufacturing features of HB-Cesic technology. The optical bench assembly consists of a rigid "Grid" supporting individual raft plates, which mount the sensor assemblies by way of a rigid kinematic support system, to meet extremely stringent requirements for focal-plane planarity and stability.
NASA Technical Reports Server (NTRS)
Nabors, Sammy
2015-01-01
NASA offers companies an optical system that provides a unique panoramic perspective with a single camera. NASA's Marshall Space Flight Center has developed a technology that combines a panoramic refracting optic (PRO) lens with a unique detection system to acquire a true 360-degree field of view. Although current imaging systems can acquire panoramic images, they must use up to five cameras to obtain the full field of view. MSFC's technology obtains its panoramic images from one vantage point.
NASA Astrophysics Data System (ADS)
Ćwiok, M.; Dominik, W.; Małek, K.; Mankiewicz, L.; Mrowca-Ciułacz, J.; Nawrocki, K.; Piotrowski, L. W.; Sitek, P.; Sokołowski, M.; Wrochna, G.; Żarnecki, A. F.
2007-06-01
The experiment “Pi of the Sky” is designed to search for prompt optical emission from GRB sources. 32 CCD cameras covering 2 steradians will monitor the sky continuously. The data will be analysed on-line in search of optical flashes. The prototype with 2 cameras, operating at Las Campanas (Chile) since 2004, has recognised several outbursts of flaring stars and has set limits for a few GRBs.
Computer-aided system for detecting runway incursions
NASA Astrophysics Data System (ADS)
Sridhar, Banavar; Chatterji, Gano B.
1994-07-01
A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.
Chang, Victoria C; Tang, Shou-Jiang; Swain, C Paul; Bergs, Richard; Paramo, Juan; Hogg, Deborah C; Fernandez, Raul; Cadeddu, Jeffrey A; Scott, Daniel J
2013-08-01
The influence of endoscopic video camera (VC) image quality on surgical performance has not been studied. Flexible endoscopes are used as substitutes for laparoscopes in natural orifice translumenal endoscopic surgery (NOTES), but their optics are originally designed for intralumenal use. Manipulable wired or wireless independent VCs might offer advantages for NOTES but are still under development. To measure the optical characteristics of 4 VC systems and to compare their impact on the performance of surgical suturing tasks. VC systems included a laparoscope (Storz 10 mm), a flexible endoscope (Olympus GIF 160), and 2 prototype deployable cameras (magnetic anchoring and guidance system [MAGS] Camera and PillCam). In a randomized fashion, the 4 systems were evaluated regarding standardized optical characteristics and surgical manipulations of previously validated ex vivo (fundamentals of laparoscopic surgery model) and in vivo (live porcine Nissen model) tasks; objective metrics (time and errors/precision) and combined surgeon (n = 2) performance were recorded. Subtle differences were detected for color tests, and field of view was variable (65°-115°). Suitable resolution was detected up to 10 cm for the laparoscope and MAGS camera but only at closer distances for the endoscope and PillCam. Compared with the laparoscope, surgical suturing performances were modestly lower for the MAGS camera and significantly lower for the endoscope (ex vivo) and PillCam (ex vivo and in vivo). This study documented distinct differences in VC systems that may be used for NOTES in terms of both optical characteristics and surgical performance. Additional work is warranted to optimize cameras for NOTES. Deployable systems may be especially well suited for this purpose.
NASA Astrophysics Data System (ADS)
Motta, Danilo A.; Serillo, André; de Matos, Luciana; Yasuoka, Fatima M. M.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.
2014-03-01
Glaucoma is the second leading cause of blindness in the world, and the number of cases tends to increase as the life expectancy of the population rises. Glaucoma refers to eye conditions that lead to damage of the optic nerve. This nerve carries visual information from the eye to the brain, so damage to it compromises the patient's visual quality. In the majority of cases the damage to the optic nerve is irreversible and results from increased intraocular pressure. One of the main challenges is detecting the disease early, because no symptoms are present in its initial stage; when it is detected, it is often already at an advanced stage. Currently the evaluation of the optic disc is made with sophisticated fundus cameras, which are inaccessible to the majority of the Brazilian population. The purpose of this project is to develop a dedicated fundus camera, without fluorescein angiography or a red-free system, to acquire 3D images of the optic disc region. The innovation is a new simplified design of a stereo-optical system that enables 3D image capture and, at the same time, quantitative measurements of the excavation and topography of the optic nerve, something traditional fundus cameras do not do. Dedicated hardware and software are being developed for this ophthalmic instrument to permit rapid capture and printing of high-resolution 3D images and videos of the optic disc region (20° field of view) in mydriatic and nonmydriatic modes.
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.
2002-06-01
Sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs in the study and development of emerging technologies.
Sample-Collection Drill Hole on Martian Sandstone Target Windjana
2014-05-06
This image from the Navigation Camera (Navcam) on NASA's Curiosity Mars rover shows two holes at top center drilled into a sandstone target called Windjana. The farther hole, with the larger pile of tailings around it, is a full-depth sampling hole.
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...
2016-11-28
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Measuring the spatial resolution of an optical system in an undergraduate optics laboratory
NASA Astrophysics Data System (ADS)
Leung, Calvin; Donnelly, T. D.
2017-06-01
Two methods of quantifying the spatial resolution of a camera are described, performed, and compared, with the objective of designing an imaging-system experiment for students in an undergraduate optics laboratory. With the goal of characterizing the resolution of a typical digital single-lens reflex (DSLR) camera, we motivate, introduce, and show agreement between traditional test-target contrast measurements and the technique of using Fourier analysis to obtain the modulation transfer function (MTF). The advantages and drawbacks of each method are compared. Finally, we explore the rich optical physics at work in the camera system by calculating the MTF as a function of wavelength and f-number. For example, we find that the Canon 40D demonstrates better spatial resolution at short wavelengths, in accordance with scalar diffraction theory, but is not diffraction-limited, being significantly affected by spherical aberration. The experiment and data analysis routines described here can be built and written in an undergraduate optics lab setting.
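One concrete version of the Fourier method referred to above recovers the MTF from an edge image: differentiate the edge-spread function to get the line-spread function, then take its Fourier magnitude. The one-dimensional sketch below uses a synthetic edge; the paper's test targets and analysis details differ:

```python
# MTF from a 1-D edge-spread function (simplified knife-edge sketch).
import numpy as np
from scipy.special import erf

def mtf_from_edge(esf):
    lsf = np.gradient(esf)               # edge-spread -> line-spread function
    lsf = lsf * np.hanning(lsf.size)     # window to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                  # normalise to 1 at zero frequency

# Synthetic blurred step edge standing in for a photographed test target.
x = np.linspace(-5, 5, 256)
mtf = mtf_from_edge(0.5 * (1.0 + erf(x)))
```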
Calibration of Viking imaging system pointing, image extraction, and optical navigation measure
NASA Technical Reports Server (NTRS)
Breckenridge, W. G.; Fowler, J. W.; Morgan, E. M.
1977-01-01
Pointing control and knowledge accuracy of the Viking Orbiter science instruments are governed by the scan platform. Calibration of the scan platform and the imaging system was accomplished through mathematical models. The calibration procedure and the results obtained for the two Viking spacecraft are described, including both ground and in-flight scan platform calibrations and the additional calibrations unique to optical navigation.
NASA Astrophysics Data System (ADS)
Masciotti, James M.; Rahim, Shaheed; Grover, Jarrett; Hielscher, Andreas H.
2007-02-01
We present a design for a frequency-domain instrument that allows simultaneous gathering of magnetic resonance and diffuse optical tomographic imaging data. This small-animal imaging system combines the high anatomical resolution of magnetic resonance imaging (MRI) with the high temporal resolution and physiological information provided by diffuse optical tomography (DOT). The DOT hardware comprises laser diodes and an intensified CCD camera, which are modulated up to 1 GHz by radio frequency (RF) signal generators. An optical imaging head is designed to fit inside the 4 cm inner diameter of a 9.4 T MRI system. Graded-index fibers are used to transfer light between the optical hardware and the imaging head within the RF coil. Fiducial markers are integrated into the imaging head to allow determination of the positions of the source and detector fibers on the MR images and to permit co-registration of MR and optical tomographic images. Detector fibers are arranged compactly and focused through a camera lens onto the photocathode of the intensified CCD camera.
A preliminary optical design for the JANUS camera of ESA's space mission JUICE
NASA Astrophysics Data System (ADS)
Greggio, D.; Magrin, D.; Ragazzoni, R.; Munari, M.; Cremonese, G.; Bergomi, M.; Dima, M.; Farinato, J.; Marafatto, L.; Viotto, V.; Debei, S.; Della Corte, V.; Palumbo, P.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Schmitz, N.; Schipani, P.; Lara, L.
2014-08-01
JANUS (Jovis, Amorum ac Natorum Undique Scrutator) will be the onboard camera of the ESA JUICE satellite dedicated to the study of Jupiter and its moons, in particular Ganymede and Europa. This optical channel will provide surface maps with a plate scale of 15 microrad/pixel, with both narrow- and broad-band filters in the spectral range between 0.35 and 1.05 micrometers, over a field of view of 1.72 × 1.29 degrees. The current optical design is a TMA (three-mirror anastigmat), with an on-axis pupil and off-axis field of view. The optical stop is located at the secondary mirror, providing an effective collecting area of 7854 mm² (100 mm entrance pupil diameter) and allowing simple internal baffling for first-order straylight rejection. The nominal optical performance is close to diffraction-limited and assures a nominal MTF better than 63% over the whole field of view. We describe here the optical design of the camera adopted as the baseline, together with the trade-offs that led us to this solution.
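The quoted numbers are mutually consistent, as a quick check shows; note that the detector format below is our inference from the plate scale and field of view, not a figure stated in the abstract:

```python
# Consistency check: 15 microrad/pixel over a 1.72 x 1.29 degree field.
import math
plate_scale = 15e-6                    # rad per pixel
fov_x = math.radians(1.72)             # ~30.0 mrad
fov_y = math.radians(1.29)             # ~22.5 mrad
print(fov_x / plate_scale, fov_y / plate_scale)  # ~2000 x 1500 pixels implied
```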
Quantitative optical metrology with CMOS cameras
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.
2004-08-01
Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high-accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, full-field-of-view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence the selection and applicability of an optical technique include the sensitivity, accuracy, and precision required for a particular application. In this paper, sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.
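The optical-phase determination central to OEH is commonly done by phase stepping; the generic four-step formula is sketched below (shown as the textbook version, not necessarily the authors' exact algorithm):

```python
# Generic four-step phase-shifting phase extraction.
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Wrapped phase (radians) from four frames with 90-degree phase shifts,
    assuming I_k = A + B*cos(phi + k*pi/2)."""
    return np.arctan2(i270 - i90, i0 - i180)
```

The wrapped phase is then unwrapped and scaled by the interferometer's sensitivity vector to yield shape or deformation.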
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
Virtual-stereo fringe reflection technique for specular free-form surface testing
NASA Astrophysics Data System (ADS)
Ma, Suodong; Li, Bo
2016-11-01
Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple surfaces, testing such optics is usually more complex and difficult, which has been a major barrier to their manufacture and application. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of a simple system structure, high measurement accuracy, and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It achieves absolute profiles with the help of a single biprism and a single camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.
Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huen, T.
1987-07-01
A solid-state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images carries two time scales simultaneously exposed with the signal, allowing timing and cross timing; the latter is achieved with exposure-modulation marking of the time tick marks. The purpose of using two time scales is discussed. The design is based on a microcomputer, resulting in a compact and easy-to-use instrument. The light source is a small red light-emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy derives from a precision 100 MHz quartz crystal, divided down to a 10 MHz system frequency. The light is guided by two small 100-micron-diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere within the streak duration. This system has been used successfully in Fabry-Perot laser velocimeters for over four years in our laboratory. The microcomputer control section is also being used to provide optical fiducials to mechanical rotor cameras.
A positional estimation technique for an autonomous land vehicle in an unstructured environment
NASA Technical Reports Server (NTRS)
Talluri, Raj; Aggarwal, J. K.
1990-01-01
This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment. The solution presented makes use of the DEM information and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
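A condensed toy version of the horizon-matching search can make the idea concrete. The code below predicts the horizon elevation profile from candidate DEM cells and picks the best match to the observed profile; the geometric pruning heuristics that make the real search tractable are omitted, and all names are ours:

```python
# Toy DEM horizon matching for position estimation (heuristics omitted).
import numpy as np

def horizon_profile(dem, cell_size, i, j, h_cam, n_az=360, max_r=200):
    """Maximum elevation angle seen from DEM cell (i, j) in each azimuth."""
    angles = np.full(n_az, -np.pi / 2)
    z0 = dem[i, j] + h_cam
    for a, az in enumerate(np.linspace(0, 2 * np.pi, n_az, endpoint=False)):
        for r in range(1, max_r):
            ii, jj = int(i + r * np.sin(az)), int(j + r * np.cos(az))
            if not (0 <= ii < dem.shape[0] and 0 <= jj < dem.shape[1]):
                break
            ang = np.arctan2(dem[ii, jj] - z0, r * cell_size)
            angles[a] = max(angles[a], ang)
    return angles

def locate(dem, cell_size, h_cam, observed, candidates):
    """Return the candidate cell whose predicted horizon best fits the image."""
    errs = [np.mean((horizon_profile(dem, cell_size, i, j, h_cam) - observed) ** 2)
            for i, j in candidates]
    return candidates[int(np.argmin(errs))]
```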
A USB 2.0 computer interface for the UCO/Lick CCD cameras
NASA Astrophysics Data System (ADS)
Wei, Mingzhi; Stover, Richard J.
2004-09-01
The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low-speed bidirectional command and control. RS232 is increasingly a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board plugged into the mainboard of the image acquisition computer to accept the fiber directly, or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer, and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0, the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system, which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera runs on a variety of PC-style computers as well as an HP laptop.
Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study
NASA Astrophysics Data System (ADS)
Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom
2018-02-01
This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good-quality navigation solutions with low position (<300 m) and velocity (<0.15 m/s) errors as the spacecraft approaches periapse.
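The predict/update cycle of an EKF of the kind described can be written generically; the skeleton below is our own formulation (state, e.g., position and velocity; measurements, e.g., bearings to Mars and its moons), not the authors' filter:

```python
# Generic EKF predict/update skeleton (illustrative, not the paper's filter).
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One cycle: f/h are the dynamics and measurement functions,
    F/H their Jacobians, Q/R the process and measurement noise covariances."""
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update with measurement z
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R              # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```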
NASA Astrophysics Data System (ADS)
Swain, Pradyumna; Mark, David
2004-09-01
The emergence of curved CCD detectors, as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras, represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral, ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy, conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors simplifies the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised around curved CCD applications, in conjunction with large-format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associated wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat-CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are in the process of evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.
Versatile microsecond movie camera
NASA Astrophysics Data System (ADS)
Dreyfus, R. W.
1980-03-01
A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.
Star Observations by Asteroid Multiband Imaging Camera (AMICA) on Hayabusa (MUSES-C) Cruising Phase
NASA Astrophysics Data System (ADS)
Saito, J.; Hashimoto, T.; Kubota, T.; Hayabusa AMICA Team
MUSES-C is the first Japanese asteroid mission and also a technology demonstration mission to the S-type asteroid 25143 Itokawa (1998SF36). It was launched on May 9, 2003, and renamed Hayabusa after the spacecraft was confirmed to be on its interplanetary orbit. The spacecraft performed an Earth swingby for gravitational assist on its way to Itokawa in May 2004. Arrival at Itokawa is scheduled for the summer of 2005. During the visit to Itokawa, remote-sensing observations with AMICA, NIRS (Near Infrared Spectrometer), XRS (X-ray Fluorescence Spectrometer), and LIDAR will be performed, and the spacecraft will descend and collect surface samples at touchdown. The captured asteroid sample will be returned to Earth in the middle of 2007. The telescopic optical navigation camera (ONC-T), with seven bandpass filters (and one wide-band filter) and polarizers, is called AMICA (Asteroid Multiband Imaging CAmera) when it is used for scientific observations. AMICA's seven bandpass filters are nearly equivalent to the seven filters of the ECAS (Eight Color Asteroid Survey) system, so the spectroscopic data obtained will be compared with previous ECAS observations. AMICA also has four polarizers, located on one edge of the CCD chip and covering 1.1 × 1.1 degrees each. Using the polarizers of AMICA, we can obtain polarimetric information on the target asteroid's surface. Since last November, we have planned and successfully carried out test observations of some stars and planets with AMICA. Here we briefly report these observations and their calibration against ground-based observational data, and we also present the current status of AMICA.
Local navigation and fuzzy control realization for autonomous guided vehicle
NASA Astrophysics Data System (ADS)
El-Konyaly, El-Sayed H.; Saraya, Sabry F.; Shehata, Raef S.
1996-10-01
This paper addresses the problem of local navigation for an autonomous guided vehicle (AGV) in a structured environment that contains static and dynamic obstacles. Information about the environment is obtained via a CCD camera. The problem is formulated as a dynamic feedback control problem in which speed and steering decisions are made on the fly while the AGV is moving. A decision element (DE) that uses local information is proposed. The DE guides the vehicle in the environment by producing appropriate navigation decisions. Dynamic models of a three-wheeled vehicle for driving and steering mechanisms are derived. The interaction between them is performed via the local feedback DE. A controller, based on fuzzy logic, is designed to drive the vehicle safely in an intelligent and human-like manner. The effectiveness of the navigation and control strategies in driving the AGV is illustrated and evaluated.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been fitted with an omnidirectional visual sensor that has proved very useful in obtaining information on the environment around the mobile robot for position reckoning. This type of navigation system is discussed in this article. The sensor is composed of one TV camera with a fish-eye lens, a reference target on the ceiling, and hybrid image processing circuits. The position of the robot with respect to the floor is calculated by integrating the information obtained from the visual sensor and a gyroscope mounted in the mobile robot, and a simple guidance algorithm based on PTP control is discussed. An experimental trial showed that the proposed system is both valid and useful for the navigation of an indoor vehicle.
Crew-Aided Autonomous Navigation Project
NASA Technical Reports Server (NTRS)
Holt, Greg
2015-01-01
Manual capability to perform star/planet-limb sightings provides a cheap, simple, and robust backup navigation source for exploration missions, independent from the ground. Sextant sightings from spacecraft were first exercised in Gemini and flew as the loss-of-communications backup for all Apollo missions. This study seeks to procure navigation-grade sextants and characterize their error sources, to assess the feasibility of taking star and planetary-limb sightings from inside a spacecraft. A series of similar studies was performed in the early-to-mid 1960s in preparation for the Apollo missions, and one goal of this study is to modernize and update those findings. The technique has the potential to deliver significant risk mitigation, validation, and backup to more complex, low-TRL automated camera-based systems under development.
NASA Technical Reports Server (NTRS)
Galante, Joseph M.; Eepoel, John Van; Strube, Matt; Gill, Nat; Gonzalez, Marcelo; Hyslop, Andrew; Patrick, Bryan
2012-01-01
Argon is a flight-ready sensor suite with two visual cameras, a flash LIDAR, an on-board flight computer, and associated electronics. Argon was designed to provide sensing capabilities for relative navigation during proximity, rendezvous, and docking operations between spacecraft. A rigorous ground test campaign assessed the performance capability of the Argon navigation suite in measuring the relative pose of high-fidelity satellite mock-ups during a variety of simulated rendezvous and proximity maneuvers, facilitated by robot manipulators under lighting conditions representative of the orbital environment. A brief description of the Argon suite and the test setup is given, as well as an analysis of the performance of the system in simulated proximity and rendezvous operations.
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-01-01
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each is equipped with various types of sensors, such as GPS, cameras, and infrared and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings. The integration of sensor fusion helps solve this dilemma and enhances overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot's wheels), and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and a line-following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766
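To make the flavour of such a rule base concrete, here is a two-sensor, two-rule toy in the same spirit (the paper's controller has nine inputs and 24 rules; the membership shapes and ranges below are invented):

```python
# Toy fuzzy obstacle avoidance with two rules (illustrative only).
def trimf(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wheel_speeds(d_left, d_right, v_max=1.0):
    """Slow the wheel opposite a 'near' obstacle so the robot turns away."""
    near_left = trimf(d_left, -0.5, 0.0, 0.5)    # obstacle within ~0.5 m, left
    near_right = trimf(d_right, -0.5, 0.0, 0.5)  # obstacle within ~0.5 m, right
    # Rule 1: IF obstacle near right THEN slow the left wheel (turn left).
    # Rule 2: IF obstacle near left  THEN slow the right wheel (turn right).
    return v_max * (1.0 - near_right), v_max * (1.0 - near_left)
```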
Evaluation of multispectral plenoptic camera
NASA Astrophysics Data System (ADS)
Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin
2013-01-01
Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements as well as of the microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e., a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of the spectral plenoptic camera based on both simulation and measurements obtained from the prototype.
Space telescope phase B definition study. Volume 2A: Science instruments, f48/96 planetary camera
NASA Technical Reports Server (NTRS)
Grosso, R. P.; Mccarthy, D. J.
1976-01-01
The analysis and preliminary design of the f48/96 planetary camera for the space telescope are discussed. The camera design is for application to the axial module position of the optical telescope assembly.
Highly Portable Airborne Multispectral Imaging System
NASA Technical Reports Server (NTRS)
Lehnemann, Robert; Mcnamee, Todd
2001-01-01
A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.
NASA Astrophysics Data System (ADS)
Crause, Lisa A.; Carter, Dave; Daniels, Alroy; Evans, Geoff; Fourie, Piet; Gilbank, David; Hendricks, Malcolm; Koorts, Willie; Lategan, Deon; Loubser, Egan; Mouries, Sharon; O'Connor, James E.; O'Donoghue, Darragh E.; Potter, Stephen; Sass, Craig; Sickafoose, Amanda A.; Stoffels, John; Swanevelder, Pieter; Titus, Keegan; van Gend, Carel; Visser, Martin; Worters, Hannah L.
2016-08-01
SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) is the extensively upgraded Cassegrain Spectrograph on the South African Astronomical Observatory's 74-inch (1.9-m) telescope. The inverse-Cassegrain collimator mirrors and woefully inefficient Maksutov-Cassegrain camera optics have been replaced, along with the CCD and SDSU controller. All moving mechanisms are now governed by a programmable logic controller, allowing remote configuration of the instrument via an intuitive new graphical user interface. The new collimator produces a larger beam to match the optically faster Folded-Schmidt camera design, and nine surface-relief diffraction gratings offer various wavelength ranges and resolutions across the optical domain. The new camera optics (a fused silica Schmidt plate, a slotted fold flat and a spherically figured primary mirror, both Zerodur, and a fused silica field-flattener lens forming the cryostat window) reduce the camera's central obscuration to increase the instrument throughput. The physically larger and more sensitive CCD extends the available wavelength range; weak arc lines are now detectable down to 325 nm and the red end extends beyond one micron. A rear-of-slit viewing camera has streamlined the observing process by enabling accurate target placement on the slit and facilitating telescope focus optimisation. An interactive quick-look data reduction tool further enhances the user-friendliness of SpUpNIC.
Approaching Endeavour Crater, Sol 2,680
2011-10-10
This image from the navigation camera on NASA's Mars Exploration Rover Opportunity shows the view ahead on the day before the rover reached the rim of Endeavour crater. It was taken during the 2,680th Martian day, or sol, of the rover's work on Mars.
Inconspicuous echolocation in hoary bats (Lasiurus cinereus)
Aaron J. Corcoran; Theodore J. Weller
2018-01-01
Echolocation allows bats to occupy diverse nocturnal niches. Bats almost always use echolocation, even when other sensory stimuli are available to guide navigation. Here, using arrays of calibrated infrared cameras and ultrasonic microphones, we demonstrate that hoary bats (Lasiurus cinereus) use previously unknown echolocation behaviours that...
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Coraor, Lee
2000-01-01
The research reported here is a part of NASA's Synthetic Vision System (SVS) project for the development of a High Speed Civil Transport Aircraft (HSCT). One of the components of the SVS is a module for detection of potential obstacles in the aircraft's flight path by analyzing the images captured by an on-board camera in real-time. Design of such a module includes the selection and characterization of robust, reliable, and fast techniques and their implementation for execution in real-time. This report describes the results of our research in realizing such a design. It is organized into three parts. Part I. Data modeling and camera characterization; Part II. Algorithms for detecting airborne obstacles; and Part III. Real time implementation of obstacle detection algorithms on the Datacube MaxPCI architecture. A list of publications resulting from this grant as well as a list of relevant publications resulting from prior NASA grants on this topic are presented.
Adding polarimetric imaging to depth map using improved light field camera 2.0 structure
NASA Astrophysics Data System (ADS)
Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu
2017-06-01
Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, where the imaging system is required to offer high resolution, broad band, and a single-lens structure. This paper describes such an imaging system based on a light field 2.0 camera structure, which can calculate the polarization state and the depth from a reference plane for every object point within a single shot. The structure, comprising a modified main lens, a multi-quadrant Polaroid, a honeycomb-like microlens array, and a high-resolution CCD, is equivalent to an "eye array" with three or more polarization-imaging "glasses" in front of each "eye". Depth can therefore be calculated by matching the relative offset of the corresponding patch on neighboring "eyes", and the polarization state from their relative intensity differences, with the two resolutions approximately equal to each other. An application to navigation under clear sky shows that this method has high accuracy and strong robustness.
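The "relative intensity difference" step maps onto the standard linear Stokes relations. A short sketch, assuming the four Polaroid quadrants are oriented at 0°, 45°, 90°, and 135° (the paper may use a different arrangement):

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-quadrant intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    s1 = i0 - i90                           # horizontal vs. vertical component
    s2 = i45 - i135                         # +45 vs. -45 component
    dolp = np.sqrt(s1**2 + s2**2) / s0      # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)          # angle of polarization
    return s0, s1, s2, dolp, aop
```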
NASA Astrophysics Data System (ADS)
Scaduto, Lucimara C. N.; Malavolta, Alexandre T.; Modugno, Rodrigo G.; Vales, Luiz F.; Carvalho, Erica G.; Evangelista, Sérgio; Stefani, Mario A.; de Castro Neto, Jarbas C.
2017-11-01
The first Brazilian remote sensing multispectral camera (MUX) is currently under development at Opto Eletronica S.A. It consists of a four-spectral-band sensor covering the 450 nm to 890 nm wavelength range. This camera will provide images with a 20 m ground resolution at nadir. The MUX camera is part of the payload of the upcoming Sino-Brazilian satellites CBERS 3&4 (China-Brazil Earth Resource Satellite). The preliminary alignment between the optical system and the CCD sensor, located at the focal plane assembly, was obtained in air, in a clean-room environment. A collimator was used for the performance evaluation of the camera. The preliminary performance of the optical channel was registered by compensating the collimator focus position for changes in the test environment, since an air-to-vacuum transition defocuses this camera. It is therefore necessary to confirm that the alignment of the camera ensures its best performance under the orbital vacuum condition. For this reason, and as a further step in the development process, the MUX camera Qualification Model was tested and evaluated inside a thermo-vacuum chamber under an as-orbit vacuum environment. In this study, the influence of temperature fields was neglected. This paper reports on the performance evaluation and discusses the results for this camera operating under the test conditions mentioned. The overall optical tests and results show that the "in air" adjustment method was suitable, as a critical activity, to guarantee that the equipment meets its design requirements.
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
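For readers unfamiliar with SAD block matching, the disparity step can be sketched in a few lines; the window size and disparity search range below are illustrative assumptions, not the wheelchair system's actual parameters.

```python
import numpy as np

def sad_disparity(left, right, window=5, max_disp=32):
    """Disparity map from a rectified grayscale stereo pair via SAD matching."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1].astype(np.int32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)    # best-matching horizontal shift
    return disp
```

Depth then follows from depth = focal_length x baseline / disparity, which is what the geometric projection step turns into the 3D point map.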
Nonlinear Plasma Experiments in Geospace with Gigawatts of RF Power at HAARP
NASA Astrophysics Data System (ADS)
Sheerin, J. P.; Rayyan, N.; Watkins, B. J.; Bristow, W. A.; Bernhardt, P. A.
2014-10-01
The HAARP phased-array HF transmitter at Gakona, AK delivers up to 3.6 GW (ERP) of HF power in the range of 2.8 - 10 MHz to the ionosphere with millisecond pointing, power modulation, and frequency agility. HAARP's unique features have enabled the conduct of a number of nonlinear plasma experiments in the interaction region of overdense ionospheric plasma including stimulated electromagnetic emissions (SEE), artificial aurora, artificial ionization layers, VLF wave-particle interactions in the magnetosphere, strong Langmuir turbulence (SLT) and suprathermal electron acceleration. Diagnostics include the Modular UHF Ionospheric Radar (MUIR) sited at HAARP, the SuperDARN-Kodiak HF radar, spacecraft radio beacons, HF receivers to record stimulated electromagnetic emissions (SEE) and telescopes and cameras for optical emissions. We report on short timescale ponderomotive overshoot effects, artificial field-aligned irregularities (AFAI), the aspect angle dependence of the intensity of the plasma line, and suprathermal electrons. Applications are made to the study and control of irregularities affecting spacecraft communication and navigation systems.
Maiden Voyage of the Under-Ice Float
NASA Astrophysics Data System (ADS)
Shcherbina, A.; D'Asaro, E. A.; Light, B.; Deming, J. W.; Rehm, E.
2016-02-01
The Under-Ice Float (UIF) is a new autonomous platform for sea ice and upper ocean observations in the marginal ice zone (MIZ). UIF is based on the Mixed Layer Lagrangian Float design, inheriting its accurate buoyancy control and relatively heavy payload capability. A major challenge for sustained autonomous observations in the MIZ is detection of open water for navigation and telemetry surfacings. UIF employs a new surface classification algorithm based on spectral analysis of surface roughness sensed by an upward-looking sonar. A prototype UIF was deployed in the MIZ of the central Arctic Ocean in late August 2015. The main payload of the first UIF was a bio-optical suite consisting of upward- and downward-looking hyperspectral radiometers; temperature, salinity, chlorophyll, turbidity, and dissolved oxygen sensors; and a high-definition photo camera. In the early stages of its mission, the float successfully avoided ice, detected leads, surfaced in open water, and transmitted data and photographs. We will present the analysis of these observations from the full UIF mission extending into the freeze-up season.
Insect-Inspired Flight Control for Unmanned Aerial Vehicles
NASA Technical Reports Server (NTRS)
Thakoor, Sarita; Stange, G.; Srinivasan, M.; Chahl, Javaan; Hine, Butler; Zornetzer, Steven
2005-01-01
Flight-control and navigation systems inspired by the structure and function of the visual system and brain of insects have been proposed for a class of developmental miniature robotic aircraft called "biomorphic flyers", described earlier in "Development of Biomorphic Flyers" (NPO-30554), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 54. These form a subset of biomorphic explorers, which, as reported in several articles in past issues of NASA Tech Briefs ["Biomorphic Explorers" (NPO-20142), Vol. 22, No. 9 (September 1998), page 71; "Bio-Inspired Engineering of Exploration Systems" (NPO-21142), Vol. 27, No. 5 (May 2003), page 54; and "Cooperative Lander-Surface/Aerial Microflyer Missions for Mars Exploration" (NPO-30286), Vol. 28, No. 5 (May 2004), page 36], are proposed small robots, equipped with microsensors and communication systems, that would incorporate crucial functions of mobility, adaptability, and even cooperative behavior. These functions are inherent to biological organisms but are challenging frontiers for technical systems. Biomorphic flyers could be used on Earth or remote planets to explore sites that are otherwise difficult or impossible to reach. An example of an exploratory search/surveillance task currently being tested is obtaining high-resolution aerial imagery using a variety of miniaturized electronic cameras. The control functions to be implemented by the systems in development include holding altitude, avoiding hazards, following terrain, navigating by reference to recognizable terrain features, stabilizing flight, and landing smoothly. Flying insects perform these and other functions remarkably well, even though insect brains contain fewer than 10^-4 as many neurons as does the human brain. Although most insects have immobile, fixed-focus eyes and lack stereoscopy (and hence cannot perceive depth directly), they utilize a number of ingenious strategies for perceiving, and navigating in, three dimensions. Despite their lack of stereoscopy, insects infer distances to potential obstacles and other objects from image motion cues that result from their own motions in the environment. The motion of texture in images as a source of motion cues is denoted generally as optic or optical flow. Computationally, a strategy based on optical flow is simpler than stereoscopy for avoiding hazards and following terrain. Hence, this strategy offers the potential to design vision-based control computing subsystems that are more compact, weigh less, and demand less power than subsystems of equivalent capability based on a conventional stereoscopic approach.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding are widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease with which aspherical surfaces can be obtained. For digital camera and camcorder objectives, it is desirable that there be no image point variation with temperature change, in spite of employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed a 16x compact zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with macro function, utilizing the advantage of a plastic lens that can be given a mechanically functional shape at its outer flange. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by high-precision optical elements. This camera module is therefore manufactured without optical adjustment on an automatic assembly line, and achieves both high productivity and high performance. Reported here are the constructions and technical topics of the image-taking objectives described above.
Advanced imaging research and development at DARPA
NASA Astrophysics Data System (ADS)
Dhar, Nibir K.; Dat, Ravi
2012-06-01
Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics, and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology. CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slow due to market volume and many technological barriers in detector materials, optics, and fundamental limits imposed by the scaling laws of optics. There is of course much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology for both visible and infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small pixel pitch, and broadband and multiband detectors and focal plane arrays.
Optical touch sensing: practical bounds for design and performance
NASA Astrophysics Data System (ADS)
Bläßle, Alexander; Janbek, Bebart; Liu, Lifeng; Nakamura, Kanna; Nolan, Kimberly; Paraschiv, Victor
2013-02-01
Touch-sensitive screens are used in many applications ranging in size from smartphones and tablets to display walls and collaborative surfaces. In this study, we consider optical touch sensing, a technology best suited for large-scale touch surfaces. Optical touch sensing utilizes cameras and light sources placed along the edge of the display. Within this framework, we first find the number of cameras sufficient for identifying a convex polygon touching the screen, using a continuous light source on the boundary of a circular domain. We then find the number of cameras necessary to distinguish between two circular objects in a circular or rectangular domain. Finally, we use Matlab to simulate the polygonal mesh formed by distributing cameras and light sources on a circular domain. From this, we compute the number of polygons in the mesh and the maximum polygon area, which characterize the accuracy of the configuration. We close with a summary, conclusions, and pointers to possible future research directions.
Clinical Validation of a Smartphone-Based Adapter for Optic Disc Imaging in Kenya.
Bastawrous, Andrew; Giardini, Mario Ettore; Bolster, Nigel M; Peto, Tunde; Shah, Nisha; Livingstone, Iain A T; Weiss, Helen A; Hu, Sen; Rono, Hillary; Kuper, Hannah; Burton, Matthew
2016-02-01
Visualization and interpretation of the optic nerve and retina are essential parts of most physical examinations. To design and validate a smartphone-based retinal adapter enabling image capture and remote grading of the retina. This validation study compared the grading of optic nerves from smartphone images with those of a digital retinal camera. Both image sets were independently graded at Moorfields Eye Hospital Reading Centre. Nested within the 6-year follow-up (January 7, 2013, to March 12, 2014) of the Nakuru Eye Disease Cohort in Kenya, 1460 adults (2920 eyes) 55 years and older were recruited consecutively from the study. A subset of 100 optic disc images from both methods were further used to validate a grading app for the optic nerves. Data analysis was performed April 7 to April 12, 2015. Vertical cup-disc ratio for each test was compared in terms of agreement (Bland-Altman and weighted κ) and test-retest variability. A total of 2152 optic nerve images were available from both methods (also 371 from the reference camera but not the smartphone, 170 from the smartphone but not the reference camera, and 227 from neither the reference camera nor the smartphone). Bland-Altman analysis revealed a mean difference of 0.02 (95% CI, -0.21 to 0.17) and a weighted κ coefficient of 0.69 (excellent agreement). The grades of an experienced retinal photographer were compared with those of a lay photographer (no health care experience before the study), and no observable difference in image acquisition quality was found. Nonclinical photographers using the low-cost smartphone adapter were able to acquire optic nerve images at a standard that enabled independent remote grading of the images comparable to those acquired using a desktop retinal camera operated by an ophthalmic assistant. The potential for task shifting and the detection of avoidable causes of blindness in the most at-risk communities makes this an attractive public health intervention.
Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2013-03-01
Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10], and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
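As a simplified illustration of the FOE idea: under pure camera translation, all flow vectors radiate from a single image point, which can be recovered by least squares. This sketch assumes pure translation and is far simpler than the paper's full egomotion scheme.

```python
import numpy as np

def find_foe(points, flows):
    """Least-squares focus of expansion from pixel positions and flow vectors.

    points: Nx2 pixel coordinates; flows: Nx2 optical-flow vectors at them.
    Each flow line through p with direction f yields n . foe = n . p,
    where n is perpendicular to f.
    """
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)   # perpendiculars to flow
    b = (n * points).sum(axis=1)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe
```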
Holographic motion picture camera with Doppler shift compensation
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1976-01-01
A holographic motion picture camera is reported for producing three-dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster-moving objects.
Standard design for National Ignition Facility x-ray streak and framing cameras.
Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S
2010-10-01
The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.
Computational photography with plenoptic camera and light field capture: tutorial.
Lam, Edmund Y
2015-11-01
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording mediums, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We will then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we will consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
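The refocusing the tutorial describes reduces, in its simplest discrete form, to shift-and-add over the sub-aperture images. A minimal sketch, where the slope parameter alpha is an assumption of this example's parameterization:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocus of a 4D light field indexed as L[u, v, y, x]."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its aperture offset;
            # varying alpha moves the synthetic focal plane.
            dy = int(round((u - U // 2) * alpha))
            dx = int(round((v - V // 2) * alpha))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Fourier-domain formulations compute the same result more efficiently, at the cost of more involved mathematics.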
Optical aberration correction for simple lenses via sparse representation
NASA Astrophysics Data System (ADS)
Cui, Jinlin; Huang, Wei
2018-04-01
Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions, calibrated at different depths, can be used to restore visual images in a short time; this applies generally to nonblind deconvolution methods, addressing the excessive processing time caused by the number of point spread functions. The optical design software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained by CODE V can be used to process real images from a single-lens camera, which provides an alternative approach to conveniently and accurately obtaining the point spread functions of single-lens cameras.
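The paper's sparse-representation restoration is involved; as a far simpler stand-in that shows the underlying non-blind deconvolution problem (known PSF, recover the sharp image), here is classical Wiener deconvolution. This is not the authors' method, only the baseline problem it improves upon.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Non-blind Wiener deconvolution with a known PSF.

    nsr is an assumed noise-to-signal power ratio acting as a regularizer.
    """
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener filter in frequency domain
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```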
Preliminary Design of a Lightning Optical Camera and ThundEr (LOCATE) Sensor
NASA Technical Reports Server (NTRS)
Phanord, Dieudonne D.; Koshak, William J.; Rybski, Paul M.; Arnold, James E. (Technical Monitor)
2001-01-01
The preliminary design of an optical/acoustical instrument is described for making highly accurate real-time determinations of the location of cloud-to-ground (CG) lightning. The instrument, named the Lightning Optical Camera And ThundEr (LOCATE) sensor, will also image the clear and cloud-obscured lightning channel produced by CGs and cloud flashes, and will record the transient optical waveforms produced by these discharges. The LOCATE sensor will consist of a full (360 degrees) field-of-view optical camera for obtaining CG channel image and azimuth, a sensitive thunder microphone for obtaining CG range, and a fast photodiode system for time-resolving the lightning optical waveform. The optical waveform data will be used to discriminate CGs from cloud flashes. Together, the optical azimuth and thunder range are used to locate CGs, and it is anticipated that a network of LOCATE sensors would determine CG source location to well within 100 meters. All of this would be accomplished at a relatively inexpensive cost compared to present RF lightning location technologies; of course, the range detection is limited and will be quantified in the future. The LOCATE sensor technology would have practical applications for electric power utility companies, government (e.g., NASA Kennedy Space Center lightning safety and warning), golf resort lightning safety, telecommunications, and other industries.
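The thunder-ranging principle is one line of arithmetic: range is the flash-to-thunder delay times the local speed of sound. A sketch, assuming 343 m/s (dry air at 20 °C):

```python
def cg_range_m(delay_s, c_sound=343.0):
    """Range to a cloud-to-ground strike from the flash-to-thunder delay."""
    return c_sound * delay_s

print(cg_range_m(0.29))   # a 0.29 s delay puts the strike roughly 100 m away
```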
Mechanical Design of the LSST Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordby, Martin; Bowden, Gordon; Foss, Mike
2008-06-13
The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
Research on three-dimensional reconstruction method based on binocular vision
NASA Astrophysics Data System (ADS)
Li, Jinlin; Wang, Zhihui; Wang, Minjun
2018-03-01
As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision, with broad application prospects in many fields such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard calibration method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matching points and 3D object points can be built using the calibrated camera parameters, which yields the 3D information.
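A sketch of the matching-and-reconstruction chain using OpenCV, which provides both Zhang-style chessboard calibration and the SGBM matcher named above. The filenames, matcher settings, and placeholder Q matrix are assumptions; in practice Q comes from cv2.stereoRectify on the calibrated pair.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point

Q = np.eye(4, dtype=np.float32)                 # placeholder reprojection matrix
points3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (X, Y, Z)
```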
Method and system for providing autonomous control of a platform
NASA Technical Reports Server (NTRS)
Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)
2012-01-01
The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).
Optical design of the SuMIRe/PFS spectrograph
NASA Astrophysics Data System (ADS)
Pascal, Sandrine; Vives, Sébastien; Barkhouser, Robert; Gunn, James E.
2014-07-01
The SuMIRe Prime Focus Spectrograph (PFS), developed for the 8-m class SUBARU telescope, will consist of four identical spectrographs, each receiving 600 fibers from a 2394-fiber robotic positioner at the telescope prime focus. Each spectrograph includes three spectral channels to cover the wavelength range [0.38-1.26] um with a resolving power ranging between 2000 and 4000. A medium resolution mode is also implemented to reach a resolving power of 5000 at 0.8 um. Each spectrograph is made of four optical units: the entrance unit, which produces three corrected collimated beams, and three camera units (one per spectral channel: "blue", "red", and "NIR"). The beam is split using two large dichroics, and in each arm the light is dispersed by large VPH gratings (about 280x280 mm). The proposed optical design was optimized to achieve the requested image quality while simplifying the manufacturing of the whole optical system. The camera design consists of an innovative Schmidt camera observing a large field of view (10 degrees) with a very fast beam (F/1.09). To achieve such performance, the classical spherical mirror is replaced by a catadioptric mirror (i.e., a meniscus lens with a reflective surface on the rear side of the glass, like a Mangin mirror). This article focuses on the optical architecture of the PFS spectrograph and the performance achieved. We first describe the global optical design of the spectrograph. Then we focus on the Mangin-Schmidt camera design. The analysis of the optical performance and the results obtained are presented in the last section.
A near-Infrared SETI Experiment: Alignment and Astrometric precision
NASA Astrophysics Data System (ADS)
Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan
2016-06-01
Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument, aiming to search for fast nanosecond laser pulses, was commissioned on the Nickel 1-m telescope at Lick Observatory. The NIROSETI instrument uses an optical guide camera, a SONY ICX694 CCD from Point Grey, to align selected sources onto two 200 µm near-infrared Avalanche Photo Diodes (APDs), each with a field of view of 2.5"x2.5". These APD detectors operate at very high bandwidths and can detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, static optical distortion solution, and relative orientation with respect to the APD detectors. We determined the guide camera plate scale to be 55.9 ± 2.7 milliarcseconds/pixel and the magnitude limit to be 18.15 mag (+1.07/-0.58) in V-band. We will present the full distortion solution of the guide camera, its orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.
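The quoted plate scale can be sanity-checked from first principles: plate scale = 206265 x pixel pitch / focal length (arcsec/pixel). The Nickel's f/17 focal ratio and the ICX694's 4.54 µm pixel pitch used below are assumed nominal values, not numbers from the paper.

```python
ARCSEC_PER_RAD = 206265.0
focal_length_m = 1.0 * 17        # 1 m aperture x assumed f/17 focal ratio
pixel_pitch_m = 4.54e-6          # assumed ICX694 pixel pitch

plate_scale_mas = 1000 * ARCSEC_PER_RAD * pixel_pitch_m / focal_length_m
print(f"{plate_scale_mas:.1f} mas/pixel")   # ~55 mas/pixel, near the measured 55.9
```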
An onboard navigation system which fulfills Mars aerocapture guidance requirements
NASA Technical Reports Server (NTRS)
Brand, Timothy J.; Fuhry, Douglas P.; Shepperd, Stanley W.
1989-01-01
The development of a candidate autonomous onboard Mars approach navigation scheme capable of supporting aerocapture into Mars orbit is discussed. An aerocapture guidance and navigation system which can run independently of the pre-aerocapture navigation was used to define a preliminary set of accuracy requirements at entry interface. These requirements are used to evaluate the proposed pre-aerocapture navigation scheme, which uses optical sightings on Deimos with a star tracker and an inertial measurement unit as the source of navigation information. Preliminary results suggest that the approach will adequately support aerocapture into Mars orbit.
KAPAO Prime: Design and Simulation
NASA Astrophysics Data System (ADS)
McGonigle, Lorcan; Choi, P. I.; Severson, S. A.; Spjut, E.
2013-01-01
KAPAO (KAPAO A Pomona Adaptive Optics instrument) is a dual-band natural guide star adaptive optics system designed to measure and remove atmospheric aberration over UV-NIR wavelengths from Pomona College’s telescope atop Table Mountain. We present here the final optical system, KAPAO Prime, designed in the Zemax optical design software, which uses custom off-axis paraboloid mirrors (OAPs) to relay light appropriately to a Shack-Hartmann wavefront sensor, deformable mirror, and science cameras. KAPAO Prime achieves diffraction-limited imaging over the full 81" field of view of our optical camera at f/33, as well as over the smaller field of view of our NIR camera at f/50. In Zemax, tolerances of 1% on OAP focal length and off-axis distance were shown to contribute an additional 4 nm of wavefront error (98% confidence) over the field of view of our optical camera; the contribution from surface irregularity was determined analytically to be 40 nm for OAPs specified to λ/10 surface irregularity (632.8 nm). Modeling of the temperature deformation of the breadboard in SolidWorks revealed 70 micron contractions along the edges of the board for a decrease of 75°F; when applied to the OAP positions, such displacements from the optimal layout are predicted to contribute an additional 20 nanometers of wavefront error. Flexure modeling of the breadboard due to gravity is on-going. We hope to begin alignment and testing of KAPAO Prime in Q1 2013.
Performance prediction of optical image stabilizer using SVM for shaker-free production line
NASA Astrophysics Data System (ADS)
Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo
2016-04-01
Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to a non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
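A minimal scikit-learn sketch of the proposed classifier; the feature-vector layout and the data files are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Feature columns (assumed): gyro noise spectral density, Hall linearity,
# actuator linearity, cross-axis movement. Label: 1 = module passes shaker test.
X = np.load("ois_features.npy")     # hypothetical training data
y = np.load("ois_labels.npy")

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print("predicted pass/fail:", clf.predict(X[:5]))
```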
NASA Astrophysics Data System (ADS)
Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Kondo, Yasushi
2008-11-01
Visualization of explosion phenomena is very important and essential for evaluating the performance of explosives. The phenomena, however, generate blast waves and fragments from casings, so we must protect our visualization equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used for tests in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken by the camera with the lens directly coupled to the camera head. This confirms that the system is very useful for visualizing dangerous events, e.g., at an explosion site, and for visualization at angles that would be unachievable under normal circumstances.
NASA Technical Reports Server (NTRS)
Kassak, John E.
1991-01-01
The objective of the operational television (OTV) technology was to develop a multiple-camera system (up to 256 cameras) for NASA Kennedy installations, in which camera video, synchronization, control, and status data are transmitted bidirectionally via a single fiber cable at distances in excess of five miles. It is shown that benefits such as improved video performance, immunity from electromagnetic and radio-frequency interference, elimination of repeater stations, and greater system configuration flexibility can be realized by applying the proven fiber-optic transmission concept. The control system will marry the lens, pan-and-tilt, and camera control functions into a modular, Local Area Network (LAN)-based control network. Such a system does not exist commercially at present, since the television broadcast industry's current practice is to divorce the positional controls from the camera control system. The application software developed for this system will have direct applicability to similar systems in industry using LAN-based control systems.
Smartphone and Curriculum Opportunities for College Faculty
ERIC Educational Resources Information Center
Migdalski, Scott T.
2017-01-01
The ever-increasing popularity of the smartphone continues to impact many professions. Physicians use the device for medication dosing, professional drivers use the GPS application, mariners use the navigation maps, builders use materials-estimator applications, property appraisers use the camera capability, and students use the device to search…
Enabling Communication and Navigation Technologies for Future Near Earth Science Missions
NASA Technical Reports Server (NTRS)
Israel, David J.; Heckler, Gregory; Menrad, Robert; Hudiburg, John; Boroson, Don; Robinson, Bryan; Cornwell, Donald
2016-01-01
In 2015, the Earth Regimes Network Evolution Study (ERNESt) proposed an architectural concept and technologies that evolve to enable space science and exploration missions out to the 2040 timeframe. The architectural concept evolves the current instantiations of the Near Earth Network and Space Network with new technologies to provide a global communication and navigation network that provides communication and navigation services to a wide range of space users in the near Earth domain. The technologies included High Rate Optical Communications, Optical Multiple Access (OMA), Delay Tolerant Networking (DTN), User Initiated Services (UIS), and advanced Position, Navigation, and Timing technology. This paper describes the key technologies and their current technology readiness levels. Examples of science missions that could be enabled by the technologies and the projected operational benefits of the architecture concept to missions are also described.
Mobile robots IV; Proceedings of the Meeting, Philadelphia, PA, Nov. 6, 7, 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, W.J.; Chun, W.H.
1990-01-01
The present conference on mobile robot systems discusses high-speed machine perception based on passive sensing, wide-angle optical ranging, three-dimensional path planning for flying/crawling robots, navigation of autonomous mobile intelligence in an unstructured natural environment, mechanical models for the locomotion of a four-articulated-track robot, a rule-based command language for a semiautonomous Mars rover, and a computer model of the structured light vision system for a Mars rover. Also discussed are optical flow and three-dimensional information for navigation, feature-based reasoning trail detection, a symbolic neural-net production system for obstacle avoidance and navigation, intelligent path planning for robot navigation in an unknown environment, behaviors from a hierarchical control system, stereoscopic TV systems, the REACT language for autonomous robots, and a man-amplifying exoskeleton.
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces, in real time, a two-dimensional cross-correlation of images received by a stereo camera system. The optical image from each camera is projected onto a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo cameras. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
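A digital analogue of what the patent realizes optically: the 2D cross-correlation of a stereo pair computed via FFT, whose peak offset tracks disparity and hence target range. Purely illustrative; the invention forms this correlation with light valves, not numerically.

```python
import numpy as np

def xcorr_peak(a, b):
    """(dy, dx) offset maximizing the circular cross-correlation of two images."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return np.unravel_index(np.argmax(c), c.shape)
```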
Machine-Vision Aids for Improved Flight Operations
NASA Technical Reports Server (NTRS)
Menon, P. K.; Chatterji, Gano B.
1996-01-01
The development of machine vision based pilot aids to help reduce night approach and landing accidents is explored. The techniques developed are motivated by the desire to use available information sources for navigation, such as the airport lighting layout, attitude sensors, and the Global Positioning System, to derive more precise aircraft position and orientation information. The fact that the airport lighting geometry is known and that images of the airport lighting can be acquired by the camera has led to the synthesis of machine vision based algorithms for runway-relative aircraft position and orientation estimation. The main contribution of this research is the synthesis of seven navigation algorithms based on two broad families of solutions. The first family of solution methods consists of techniques that reconstruct the airport lighting layout from the camera image and then estimate the aircraft position components by comparing the reconstructed lighting layout geometry with the known model of the airport lighting layout geometry. The second family of methods comprises techniques that synthesize the image of the airport lighting layout using a camera model and estimate the aircraft position and orientation by comparing this image with the actual image of the airport lighting acquired by the camera. Algorithms 1 through 4 belong to the first family of solutions, while Algorithms 5 through 7 belong to the second. Algorithms 1 and 2 are parameter optimization methods, Algorithms 3 and 4 are feature correspondence methods, and Algorithms 5 through 7 are Kalman filter centered algorithms. Results of computer simulation are presented to demonstrate the performance of all seven algorithms developed.
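The second family of methods hinges on rendering the known lighting layout through a camera model. A minimal pinhole-projection sketch (the intrinsics and pose parameterization here are illustrative assumptions):

```python
import numpy as np

def project_lights(points_world, R, t, f, cx, cy):
    """Project 3D airport-light positions into pixel coordinates."""
    cam = (R @ points_world.T).T + t          # world frame -> camera frame
    u = f * cam[:, 0] / cam[:, 2] + cx        # perspective division
    v = f * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```

Pose estimation then minimizes the residual between these predicted light positions and the ones detected in the camera image, which is the essence of Algorithms 5 through 7.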
NASA Astrophysics Data System (ADS)
Tate, Tyler H.; McGregor, Davis; Barton, Jennifer K.
2017-02-01
The optical design for a dual-modality endoscope based on piezo scanning-fiber technology is presented, including a novel technique to combine forward-viewing navigation with side-viewing OCT. Potential applications include navigating body lumens such as the fallopian tubes, biliary ducts, and cardiovascular system. A custom cover plate provides a rotationally symmetric double reflection of the OCT beam, deviating and focusing it out the side of the endoscope for cross-sectional imaging of the tubal lumen. Considerations in the choice of the scanning fiber are explored, and a new technique to increase the divergence angle of the scanning fiber to improve system performance is presented. Resolution and the scanning density required to achieve Nyquist sampling of the full image are considered. The novel optical design lays the groundwork for a new approach to integrating side-viewing OCT into multimodality endoscopes for small-lumen imaging.
Arain, Nabeel A; Cadeddu, Jeffrey A; Best, Sara L; Roshek, Thomas; Chang, Victoria; Hogg, Deborah C; Bergs, Richard; Fernandez, Raul; Webb, Erin M; Scott, Daniel J
2012-04-01
This study aimed to evaluate the surgeon performance and workload of a next-generation magnetically anchored camera compared with laparoscopic and flexible endoscopic imaging systems for laparoscopic and single-site laparoscopy (SSL) settings. The cameras included a 5-mm 30° laparoscope (LAP), a magnetically anchored (MAGS) camera, and a flexible endoscope (ENDO). The three camera systems were evaluated using standardized optical characteristic tests. Each system was used in random order for visualization during performance of a standardized suturing task by four surgeons. Each participant performed three to five consecutive repetitions as a surgeon and also served as a camera driver for other surgeons. Ex vivo testing was conducted in a laparoscopic multiport and SSL layout using a box trainer. In vivo testing was performed only in the multiport configuration and used a previously validated live porcine Nissen model. Optical testing showed superior resolution for MAGS at 5 and 10 cm compared with LAP or ENDO. The field of view ranged from 39 to 99°. The depth of focus was almost three times greater for MAGS (6-270 mm) than for LAP (2-88 mm) or ENDO (1-93 mm). Both ex vivo and in vivo multiport combined surgeon performance was significantly better for LAP than for ENDO, but no significant differences were detected for MAGS. For multiport testing, workload ratings were significantly less ex vivo for LAP and MAGS than for ENDO and less in vivo for LAP than for MAGS or ENDO. For ex vivo SSL, no significant performance differences were detected, but camera drivers rated the workload significantly less for MAGS than for LAP or ENDO. The data suggest that the improved imaging element of the next-generation MAGS camera has optical and performance characteristics that meet or exceed those of the LAP or ENDO systems and that the MAGS camera may be especially useful for SSL. Further refinements of the MAGS camera are encouraged.
The GCT camera for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium
2017-12-01
The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm2 pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.
Omnidirectional Underwater Camera Design and Calibration
Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David
2015-01-01
This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS, in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
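The core step of such a ray-tracing FOV simulator is Snell refraction at each housing interface. A minimal sketch for a single flat air/water interface (a flat port normal to z is an assumption of this example; the paper's housings may differ):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (media n1 -> n2)."""
    cos_i = -np.dot(n, d)
    sin2_t = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    return (n1 / n2) * d + (n1 / n2 * cos_i - np.sqrt(1.0 - sin2_t)) * n

ray = np.array([0.5, 0.0, 0.8660254])        # 30 deg off the port normal, in air
bent = refract(ray, np.array([0.0, 0.0, -1.0]), 1.0, 1.33)
print(bent)   # the ray steepens in water, shrinking the effective field of view
```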
Concave Surround Optics for Rapid Multi-View Imaging
2006-11-01
In this paper we present an optical system capable of rapidly moving the viewpoint around a scene. Our system... flexibility, large camera arrays are typically expensive and require significant effort to calibrate temporally, geometrically and chromatically... hard to assemble and calibrate. ...thus is amenable to capturing dynamic events, avoiding the need to construct and calibrate an array of cameras. We demonstrate the system with a high...
Time-resolved optical measurements of the post-detonation combustion of aluminized explosives
NASA Astrophysics Data System (ADS)
Carney, Joel R.; Miller, J. Scott; Gump, Jared C.; Pangilinan, G. I.
2006-06-01
The dynamic observation and characterization of light emission following the detonation and subsequent combustion of an aluminized explosive is described. The temporal, spatial, and spectral specificity of the light emission are achieved using a combination of optical diagnostics. Aluminum and aluminum monoxide emission peaks are monitored as a function of time and space using streak camera based spectroscopy in a number of light collection configurations. Peak areas of selected aluminum containing species are tracked as a function of time to ascertain the relative kinetics (growth and decay of emitting species) during the energetic event. At the chosen streak camera sensitivity, aluminum emission is observed for 10μs following the detonation of a confined 20g charge of PBXN-113, while aluminum monoxide emission persists longer than 20μs. A broadband optical emission gauge, shock velocity gauge, and fast digital framing camera are used as supplemental optical diagnostics. In-line, collimated detection is determined to be the optimum light collection geometry because it is independent of distance between the optics and the explosive charge. The chosen optical configuration also promotes a constant cylindrical collection volume that should facilitate future modeling efforts.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
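The near-field criterion quoted above is easy to reproduce: the Fraunhofer distance of an aperture D at wavelength lambda is d_F = 2D²/lambda, and targets closer than d_F are in the near field of the collecting optics. A quick check (Python; 550 nm is an assumed visible wavelength and 3.67 m the AEOS aperture), consistent with the figures quoted in the abstract:

```python
def fraunhofer_distance_km(aperture_m, wavelength_m=550e-9):
    """Far-field (Fraunhofer) boundary d_F = 2 D**2 / lambda, in km."""
    return 2.0 * aperture_m**2 / wavelength_m / 1000.0

print(fraunhofer_distance_km(1.00))  # ~3,600 km: a 1 m telescope covers low LEO
print(fraunhofer_distance_km(3.67))  # ~49,000 km: AEOS reaches well past LEO
```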
PRISM Spectrograph Optical Design
NASA Technical Reports Server (NTRS)
Chipman, Russell A.
1995-01-01
The objective of this contract is to explore optical design concepts for the PRISM spectrograph and produce a preliminary optical design. An exciting optical configuration has been developed which will allow both wavelength bands to be imaged onto the same detector array. At present the optical design is only partially complete because PRISM will require a fairly elaborate optical system to meet its specification for throughput (area × solid angle). The most complex part of the design, the spectrograph camera, is complete, providing proof of principle that a feasible design is attainable. This camera requires 3 aspheric mirrors to fit inside the 20 × 60 cm cross-section package. A complete design with reduced throughput (1/9th) has been prepared. The design documents the optical configuration concept. A suitable dispersing prism material, CdTe, has been identified for the prism spectrograph, after a comparison of many materials.
Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy
NASA Technical Reports Server (NTRS)
1984-01-01
Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge-coupled device. The camera consists of an X-ray-sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.
Optical design and development of a snapshot light-field laryngoscope
NASA Astrophysics Data System (ADS)
Zhu, Shuaishuai; Jin, Peng; Liang, Rongguang; Gao, Liang
2018-02-01
The convergence of recent advances in optical fabrication and digital processing yields a new generation of imaging technology: light-field (LF) cameras, which bridge the realms of applied mathematics, optics, and high-performance computing. Herein, for the first time, we introduce the paradigm of LF imaging into laryngoscopy. The resultant probe can image the three-dimensional shape of the vocal folds within a single camera exposure. Furthermore, to improve the spatial resolution, we developed an image fusion algorithm, providing a simple solution to a long-standing problem in LF imaging.
Horizon Based Orientation Estimation for Planetary Surface Navigation
NASA Technical Reports Server (NTRS)
Bouyssounouse, X.; Nefian, A. V.; Deans, M.; Thomas, A.; Edwards, L.; Fong, T.
2016-01-01
Planetary rovers navigate in extreme environments for which a Global Positioning System (GPS) is unavailable, maps are restricted to the relatively low resolution provided by orbital imagery, and compass information is often lacking due to weak or nonexistent magnetic fields. However, accurate rover localization is particularly important for mission success: reaching the science targets, avoiding negative obstacles visible only in orbital maps, and maintaining good communication connections with the ground. This paper describes a horizon-based solution for precise rover orientation estimation. The horizon detected in imagery provided by the on-board navigation cameras is matched with the horizon rendered over the existing terrain model. The set of rotation parameters (roll, pitch, yaw) that minimizes the cost function between the two horizon curves corresponds to the rover's estimated pose.
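A minimal sketch of the described matching step, assuming a hypothetical render_horizon(roll, pitch, yaw) that renders the model horizon as one row index per image column (the function name, data layout, and the Nelder-Mead choice are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def horizon_cost(rpy, detected_rows, render_horizon):
    """Sum of squared per-column row differences between the horizon
    detected in the navigation-camera image and the horizon rendered
    from the terrain model at attitude rpy = (roll, pitch, yaw)."""
    return float(np.sum((detected_rows - render_horizon(*rpy)) ** 2))

# rpy0 would be an IMU-based initial guess; the optimizer refines it until
# the rendered horizon curve overlays the detected one:
# est = minimize(horizon_cost, rpy0, args=(detected_rows, render_horizon),
#                method="Nelder-Mead").x
```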
A novel optical system design of light field camera
NASA Astrophysics Data System (ADS)
Wang, Ye; Li, Wenhua; Hao, Chenyang
2016-01-01
The structure of main lens - micro-lens array (MLA) - imaging sensor is usually adopted in the optical system of a light field camera, and the MLA is the most important part of the optical system, having the function of collecting and recording the amplitude and phase information of the light field. In this paper, a novel optical system structure is proposed. The novel optical system is based on the 4f optical structure, and a micro-aperture array (MAA) is used instead of the MLA to realize the acquisition of the 4D light field information. We analyze the principle by which the novel optical system can acquire the light field information. At the same time, a simple MAA, a line grating optical system, is designed with the ZEMAX software in this paper. The novel optical system is simulated by a line grating optical system, and multiple images are obtained in the image plane. The imaging quality of the novel optical system is analyzed.
Visual Servoing via Navigation Functions
2002-02-06
kernel was adequate). The PC is equipped with a Data Translations12 DT3155 frame grabber connected to a standard 30Hz NTSC video camera. Using MATLAB’s C...Richard M. Murray, Zexiang Li, and S. Shankar Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, Reading, Mass., 1994. [26] Dan Pedoe
Performance Characteristic Mems-Based IMUs for UAVs Navigation
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Hansen, J. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, A. B.
2015-08-01
Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, the mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide an uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) and other low-cost navigation sensors for various UAV applications is an important research task. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK) positioning, and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.
Cheap streak camera based on the LD-S-10 intensifier tube
NASA Astrophysics Data System (ADS)
Dashevsky, Boris E.; Krutik, Mikhail I.; Surovegin, Alexander L.
1992-01-01
Basic properties of a new streak camera and its test results are reported. To intensify images on its screen, we employed modular G1 tubes, the LD-A-1.0 and LD-A-0.33, enabling magnifications of 1.0 and 0.33, respectively. If necessary, the LD-A-0.33 tube may be substituted by any other image intensifier of the LDA series, the choice to be determined by the size of the CCD matrix with fiber-optical windows. The reported camera employs a 12.5-mm-long CCD strip consisting of 1024 pixels, each 12 × 500 μm in size. Registered radiation was imaged on a 5 × 0.04 mm slit diaphragm tightly connected with the LD-S-10 fiber-optical input window. Electrons escaping the cathode are accelerated in a 5 kV electric field and focused onto a phosphor screen covering a fiber-optical plate as they travel between deflection plates. Sensitivity of the latter was 18 V/mm, which implies that the total deflecting voltage was 720 V per 40 mm of the screen surface, since reversed-polarity scan pulses of +360 V and -360 V were applied across the deflection plates. The streak camera provides full scan times over the screen of 15, 30, 50, 100, 250, and 500 ns. Timing of the electrically or optically driven camera was done using a 10 ns step-controlled-delay (0 - 500 ns) circuit.
Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.
2014-10-01
A plenoptic camera is a camera that can retrieve the direction and intensity distribution of the light rays collected by the camera, enabling multiple reconstruction functions such as refocusing at different depths and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results will be demonstrated, and an improved version of this modified plenoptic camera will be discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems in making intelligent analyses and corrections.
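Both plenoptic and Shack-Hartmann sensors ultimately integrate measured local slopes back into a wavefront. A minimal zonal reconstruction in one dimension shows the least-squares structure of that step (a sketch, not the authors' algorithm; real reconstructors solve the two-dimensional analogue):

```python
import numpy as np

def reconstruct_1d(slopes, dx):
    """Recover a wavefront profile w from local slopes s_i ~ (w[i+1]-w[i])/dx
    by linear least squares; one extra row pins the unobservable piston."""
    n = len(slopes) + 1
    A = np.zeros((n, n))
    for i in range(len(slopes)):
        A[i, i], A[i, i + 1] = -1.0 / dx, 1.0 / dx
    A[-1, 0] = 1.0                       # fix w[0] = 0 (piston reference)
    b = np.append(slopes, 0.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

w_true = np.sin(np.linspace(0, np.pi, 33))           # synthetic wavefront
slopes = np.diff(w_true) / 0.1                       # simulated slope data
print(np.allclose(reconstruct_1d(slopes, 0.1), w_true, atol=1e-8))  # True
```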
Baohua, Li; Wenjie, Lai; Yun, Chen; Zongming, Liu
2013-01-01
An autonomous navigation algorithm using a sensor that integrates a star sensor (FOV1) and an ultraviolet earth sensor (FOV2) is presented. Star images are sampled by FOV1, and ultraviolet earth images are sampled by FOV2. The star identification and star tracking algorithms are executed on FOV1, and the optical axis direction of FOV1 in the J2000.0 coordinate system is then calculated. The center vector of the earth in the FOV2 coordinate system is calculated from the coordinates of the ultraviolet earth image. The autonomous navigation data of the satellite are calculated by the integrated sensor from the optical axis direction of FOV1 and the center vector of the earth from FOV2. The position accuracy of the autonomous navigation for the satellite is improved from 1000 meters to 300 meters, and the velocity accuracy is improved from 100 m/s to 20 m/s. At the same time, the periodic sinusoidal errors of the autonomous navigation solution are eliminated. The autonomous navigation for a satellite with a sensor integrating an ultraviolet earth sensor and a star sensor is highly robust. PMID:24250261
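The geometric core of such a fix can be stated compactly: with the inertial attitude from FOV1 and the Earth-centre direction from FOV2, the satellite sits at geocentric range r opposite that direction. A hedged Python sketch (estimating r from the apparent angular radius of the Earth disc is a standard trick assumed here, not a detail taken from the abstract):

```python
import numpy as np

R_EARTH_KM = 6378.137

def position_fix(R_body_to_j2000, u_earth_body, rho_rad):
    """Satellite position in J2000 from the star-sensor attitude matrix,
    the body-frame unit vector toward the Earth's centre (from the UV
    sensor), and the apparent angular radius rho of the Earth disc."""
    r_km = R_EARTH_KM / np.sin(rho_rad)            # geocentric range
    u = u_earth_body / np.linalg.norm(u_earth_body)
    return -r_km * (R_body_to_j2000 @ u)           # opposite the Earth direction

# Example: nadir-pointing body frame, Earth disc of ~60 deg angular radius
print(position_fix(np.eye(3), np.array([0.0, 0.0, 1.0]), np.radians(60)))
```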
Multiple beacons for supporting lunar landing navigation
NASA Astrophysics Data System (ADS)
Theil, Stephan; Bora, Leonardo
2018-02-01
The exploration and potential future exploitation of solar system bodies require technologies for precise and safe landings. Current navigation systems for landing probes rely on a combination of inertial and optical sensor measurements to determine the current flight state with respect to the target body and the desired landing site. With a future transition from single exploration missions to more frequent exploration and then exploitation missions, the implementation and operation of these missions change, since it can be expected that a ground infrastructure on the target body will be available in the vicinity of the landing site. In a previous paper, the impact of a single ground-based beacon on the navigation performance was investigated depending on the type of radiometric measurements and on the location of the beacon with respect to the landing site. This paper extends that investigation to options for multiple ground-based beacons supporting the on-board navigation system and analyzes the impact on the achievable navigation accuracy. For that purpose, the paper briefly introduces the existing navigation architecture based on optical navigation and its extension with radiometric measurements. The same lunar landing scenario as in the previous paper is simulated, and the results are analyzed and discussed. They show that a single beacon at a large distance along the landing trajectory, as well as multiple beacons close to the landing site, can improve the navigation performance. The results also show how much the beacons can enlarge the landing area over which sufficient navigation performance is achieved.
The sequence measurement system of the IR camera
NASA Astrophysics Data System (ADS)
Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo
2011-08-01
Currently, IR cameras are broadly used in opto-electronic tracking, opto-electronic measurement, fire control and opto-electronic countermeasure fields, but the output sequence (timing) of most IR cameras applied in practice is complex, and the sequence documentation supplied by manufacturers is not detailed. Because continuous image transmission and image processing systems require the detailed sequence of the IR cameras, a sequence measurement system for IR cameras was designed, and a detailed sequence measurement procedure for the applied IR camera was carried out. FPGA programming combined with online observation using the SignalTap tool is applied in the sequence measurement system; the precise sequence of the IR camera's output signal is obtained, and detailed documentation is supplied to the continuous image transmission system, image processing system, etc. The sequence measurement system comprises a CameraLink input interface, an LVDS input interface, the FPGA, a CameraLink output interface, and so on; among these, the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted by the system, and because image processing and image memory cards usually use CameraLink as their input interface, the output of the sequence measurement system is likewise designed as a CameraLink interface. The system thus performs the sequence measurement of the IR camera and at the same time serves as an interface converter for some cameras. Inside the FPGA, the sequence measurement program, pixel clock modification, SignalTap file configuration and SignalTap online observation are integrated to realize precise measurement of the IR camera. The sequence measurement program, written in Verilog and combined with online observation using the SignalTap tool, counts the number of lines in one frame and the number of pixels in one line, and also determines the line offset and row offset of the image. Aimed at the complex sequence of IR camera output signals, the sequence measurement system accurately measures the sequence of the camera applied in the project, supplies detailed sequence documentation to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters of fval, lval, pixclk, line offset and row offset. Experiments show that the sequence measurement system obtains precise measurement results and works stably, laying a foundation for the downstream systems.
Smartphone Fundus Photography.
Nazari Khanamiri, Hossein; Nakatsuka, Austin; El-Annan, Jaafar
2017-07-06
Smartphone fundus photography is a simple technique to obtain ocular fundus pictures using a smartphone camera and a conventional handheld indirect ophthalmoscopy lens. This technique is indispensable when photographic documentation of the optic nerve, retina, and retinal vessels is necessary but a fundus camera is not available. The main advantage of this technique is the widespread availability of smartphones, which allows documentation of macula and optic nerve changes in many settings where it was not previously possible. Following the well-defined steps detailed here, such as proper alignment of the phone camera, the handheld lens, and the patient's pupil, is the key to obtaining a clear retina picture with no interfering light reflections and aberrations. In this paper, the optical principles of indirect ophthalmoscopy and fundus photography will be reviewed first. Then, the step-by-step method to record a good quality retinal image using a smartphone will be explained.
Geometric and Optic Characterization of a Hemispherical Dome Port for Underwater Photogrammetry
Menna, Fabio; Nocerino, Erica; Fassi, Francesco; Remondino, Fabio
2016-01-01
The popularity of automatic photogrammetric techniques has promoted many experiments in underwater scenarios, leading to quite impressive visual results, even by non-experts. Despite these achievements, a deep understanding of camera and lens behavior, as well as of the optical phenomena involved in underwater operation, is fundamental to better plan field campaigns and anticipate the achievable results. The paper presents a geometric investigation of a consumer-grade underwater camera housing, manufactured by NiMAR and equipped with a 7′′ dome port. After a review of flat and dome ports, the work analyzes, using simulations and real experiments, the main optical phenomena involved when operating a camera underwater. Specific aspects of photogrammetric acquisitions are considered, with tests in the laboratory and in a swimming pool. Results and considerations are shown and commented upon. PMID:26729133
Advanced Navigation Strategies For Asteroid Sample Return Missions
NASA Technical Reports Server (NTRS)
Getzandanner, K.; Bauman, J.; Williams, B.; Carpenter, J.
2010-01-01
Flyby and rendezvous missions to asteroids have been accomplished using navigation techniques derived from experience gained in planetary exploration. This paper presents analysis of advanced navigation techniques required to meet unique challenges for precision navigation to acquire a sample from an asteroid and return it to Earth. These techniques rely on tracking data types such as spacecraft-based laser ranging and optical landmark tracking in addition to the traditional Earth-based Deep Space Network radio metric tracking. A systematic study of navigation strategy, including the navigation event timeline and reduction in spacecraft-asteroid relative errors, has been performed using simulation and covariance analysis on a representative mission.
Designing the optimal semi-warm NIR spectrograph for SALT via detailed thermal analysis
NASA Astrophysics Data System (ADS)
Wolf, Marsha J.; Sheinis, Andrew I.; Mulligan, Mark P.; Wong, Jeffrey P.; Rogers, Allen
2008-07-01
The near infrared (NIR) upgrade to the Robert Stobie Spectrograph (RSS) on the Southern African Large Telescope (SALT), RSS/NIR, extends the spectral coverage of all modes of the optical spectrograph. The RSS/NIR is a low to medium resolution spectrograph with broadband, spectropolarimetric, and Fabry-Perot imaging capabilities. The optical and NIR arms can be used simultaneously to extend spectral coverage from 3200 Å to approximately 1.6 μm. Both arms utilize high efficiency volume phase holographic gratings via articulating gratings and cameras. The NIR camera incorporates a HAWAII-2RG detector with an Epps optical design consisting of 6 spherical elements and providing subpixel rms image sizes of 7.5 +/- 1.0 μm over all wavelengths and field angles. The NIR spectrograph is semi-warm, sharing a common slit plane and partial collimator with the optical arm. A pre-dewar, cooled to below ambient temperature, houses the final NIR collimator optic, the grating/Fabry-Perot etalon, the polarizing beam splitter, and the first three camera optics. The last three camera elements, blocking filters, and detector are housed in a cryogenically cooled dewar. The semi-warm design concept has long been proposed as an economical way to extend optical instruments into the NIR, however, success has been very limited. A major portion of our design effort entails a detailed thermal analysis using non-sequential ray tracing to interactively guide the mechanical design and determine a truly realizable long wavelength cutoff over which astronomical observations will be sky-limited. In this paper we describe our thermal analysis, design concepts for the staged cooling scheme, and results to be incorporated into the overall mechanical design and baffling.
Optomechanical stability design of space optical mapping camera
NASA Astrophysics Data System (ADS)
Li, Fuqiang; Cai, Weijun; Zhang, Fengqin; Li, Na; Fan, Junjie
2018-01-01
According to the interior orientation elements and imaging quality requirements that mapping applications place on a mapping camera, and combined with an off-axis three-mirror anastigmat (TMA) system, a high-stability optomechanical design of a space optical mapping camera is introduced in this paper. The configuration is a coaxial TMA system used in an off-axis situation. Firstly, the overall optical arrangement is described and an overview of the optomechanical packaging is provided. Zerodur glass, carbon fiber composite and carbon-fiber-reinforced silicon carbide (C/SiC) are widely used in the optomechanical structure, because their low coefficients of thermal expansion (CTE) reduce the thermal sensitivity of the mirrors and focal plane. Flexible and unloading supports are used in the reflector and camera supporting structure. The use of epoxy structural adhesives for bonding optics to the metal structure is also introduced in this paper. The primary mirror is mounted by means of a three-point ball-joint flexure system attached to the back of the mirror. Then, in order to predict flexural displacements due to gravity, static finite element analysis (FEA) is performed on the primary mirror. The optical performance, peak-to-valley (PV) and root-mean-square (RMS) wavefront errors, is measured before and after assembly. Also, dynamic finite element analysis (FEA) of the whole optical arrangement is carried out to investigate the optomechanical performance. Finally, in order to evaluate the stability of the design, thermal vacuum and vibration tests are carried out, with the Modulation Transfer Function (MTF) and the elements of interior orientation presented as evaluation indices. Before and after the thermal vacuum and vibration tests, the MTF, focal distance and position of the principal point of the optical system are measured, and the results are as expected.
Direct endoscopic video registration for sinus surgery
NASA Astrophysics Data System (ADS)
Mirota, Daniel; Taylor, Russell H.; Ishii, Masaru; Hager, Gregory D.
2009-02-01
Advances in computer vision have made possible robust 3D reconstruction of monocular endoscopic video. These reconstructions accurately represent the visible anatomy and, once registered to pre-operative CT data, enable a navigation system to track directly through video, eliminating the need for an external tracking system. Video registration provides the means for a direct interface between an endoscope and a navigation system and allows a shorter chain of rigid-body transformations to be used to solve the patient/navigation-system registration. To solve this registration step we propose a new 3D-3D registration algorithm based on Trimmed Iterative Closest Point (TrICP) and the z-buffer algorithm. The algorithm takes as input a 3D point cloud of relative scale with the origin at the camera center, an isosurface from the CT, and an initial guess of the scale and location. Our algorithm utilizes only the polygons of the isosurface visible from the current camera location during each iteration, to minimize the search area of the target region and robustly reject outliers of the reconstruction. We present example registrations in the sinus passage applicable to both sinus surgery and transnasal surgery. To evaluate our algorithm's performance we compare it to registration via Optotrak and present closest point-to-surface distance errors. We show our algorithm has a mean closest distance error of 0.2268 mm.
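A minimal Python sketch of the trimming idea at the heart of TrICP (match, keep only the best fraction of correspondences, re-solve the rigid transform); the paper's scale estimation and visible-polygon (z-buffer) culling are omitted here:

```python
import numpy as np
from scipy.spatial import cKDTree

def trimmed_icp(src, dst, trim=0.7, iters=30):
    """Align point cloud src (N x 3) to dst (M x 3), keeping only the best
    `trim` fraction of nearest-neighbour matches each iteration so that
    reconstruction outliers cannot drag the registration."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        d, idx = tree.query(src @ R.T + t)
        keep = np.argsort(d)[: int(trim * len(src))]     # the trimming step
        p, q = src[keep], dst[idx[keep]]
        pc, qc = p.mean(axis=0), q.mean(axis=0)
        U, _, Vt = np.linalg.svd((p - pc).T @ (q - qc))  # Kabsch/SVD solve
        if np.linalg.det(Vt.T @ U.T) < 0:                # avoid reflections
            Vt[-1] *= -1
        R = Vt.T @ U.T
        t = qc - R @ pc
    return R, t
```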
A navigation and control system for an autonomous rescue vehicle in the space station environment
NASA Technical Reports Server (NTRS)
Merkel, Lawrence
1991-01-01
A navigation and control system was designed and implemented for an orbital autonomous rescue vehicle envisioned to retrieve astronauts or equipment in the case that they become disengaged from the space station. The rescue vehicle, termed the Extra-Vehicular Activity Retriever (EVAR), has an on-board inertial measurement unit and GPS receivers for self state estimation, a laser range imager (LRI) and cameras for object state estimation, and a data link for reception of space station state information. The states of the retriever and objects (obstacles and the target object) are estimated by inertial state propagation which is corrected via measurements from the GPS, the LRI system, or the camera system. Kalman filters are utilized to perform sensor fusion and estimate the state propagation errors. Control actuation is performed by a Manned Maneuvering Unit (MMU). Phase plane control techniques are used to control the rotational and translational state of the retriever. The translational controller provides station-keeping or motion along either Clohessy-Wiltshire trajectories or straight line trajectories in the LVLH frame of any sufficiently observed object or of the space station. The software was used to successfully control a prototype EVAR on an air bearing floor facility and a simulated EVAR operating in a simulated orbital environment. The designs of the navigation system and the control system are presented. Also discussed are the hardware systems and the overall software architecture.
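For the Clohessy-Wiltshire trajectories mentioned above, relative motion about a circular-orbit target has a closed-form solution. A sketch of the standard state-transition matrix (Python; x radial, y along-track, z cross-track; the mean-motion value is an illustrative LEO assumption, not a figure from the paper):

```python
import numpy as np

def cw_propagate(state, n, t):
    """Closed-form Clohessy-Wiltshire propagation of the relative state
    [x, y, z, vx, vy, vz] over time t; n is the target mean motion (rad/s)."""
    c, s = np.cos(n * t), np.sin(n * t)
    Phi = np.array([
        [4 - 3*c,       0, 0,    s/n,          2*(1 - c)/n,      0],
        [6*(s - n*t),   1, 0,   -2*(1 - c)/n,  (4*s - 3*n*t)/n,  0],
        [0,             0, c,    0,            0,                s/n],
        [3*n*s,         0, 0,    c,            2*s,              0],
        [-6*n*(1 - c),  0, 0,   -2*s,          4*c - 3,          0],
        [0,             0, -n*s, 0,            0,                c],
    ])
    return Phi @ state

# A retriever released 10 m radially above the station drifts along-track:
n = 0.0011  # rad/s, roughly a 95-minute orbit
print(cw_propagate(np.array([10.0, 0, 0, 0, 0, 0]), n, 600.0))
```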
The opto-cryo-mechanical design of the short wavelength camera for the CCAT Observatory
NASA Astrophysics Data System (ADS)
Parshley, Stephen C.; Adams, Joseph; Nikola, Thomas; Stacey, Gordon J.
2014-07-01
The CCAT observatory is a 25-m class Gregorian telescope designed for submillimeter observations that will be deployed at Cerro Chajnantor (~5600 m) in the high Atacama Desert region of Chile. The Short Wavelength Camera (SWCam) for CCAT is an integral part of the observatory, enabling the study of star formation at high and low redshifts. SWCam will be a facility instrument, available at first light and operating in the telluric windows at wavelengths of 350, 450, and 850 μm. In order to trace the large curvature of the CCAT focal plane, and to suit the available instrument space, SWCam is divided into seven sub-cameras, each configured to a particular telluric window. A fully refractive optical design in each sub-camera will produce diffraction-limited images. The material of choice for the optical elements is silicon, due to its excellent transmission in the submillimeter and its high index of refraction, enabling thin lenses of a given power. The cryostat's vacuum windows double as the sub-cameras' field lenses and are ~30 cm in diameter. The other lenses are mounted at 4 K. The sub-cameras will share a single cryostat providing thermal intercepts at 80, 15, 4, 1 and 0.1 K, with cooling provided by pulse tube cryocoolers and a dilution refrigerator. The use of the intermediate temperature stage at 15 K minimizes the load at 4 K and reduces operating costs. We discuss our design requirements, specifications, key elements and expected performance of the optical, thermal and mechanical design for the short wavelength camera for CCAT.
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
Vehicular camera pedestrian detection research
NASA Astrophysics Data System (ADS)
Liu, Jiahui
2018-03-01
With the rapid development of science and technology, highway traffic and transportation have become far more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China, so protecting people's personal and property safety while facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track moving targets ahead of the vehicle. This approach is popular in the domains of intelligent-vehicle safety, autonomous navigation and traffic-system research. Based on pedestrian video obtained by a vehicular camera, this paper studies pedestrian detection and tracking and the underlying algorithms.
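The abstract does not name its detection algorithm; as a baseline illustration of vehicular-camera pedestrian detection, the classical HOG-plus-linear-SVM detector that ships with OpenCV can be run on dash-camera video (the input file name is hypothetical):

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("dashcam.mp4")          # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Sliding-window HOG detection over an image pyramid
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) == 27:                   # Esc quits
        break
cap.release()
```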
Spacecraft hazard avoidance utilizing structured light
NASA Technical Reports Server (NTRS)
Liebe, Carl Christian; Padgett, Curtis; Chapsky, Jacob; Wilson, Daniel; Brown, Kenneth; Jerebets, Sergei; Goldberg, Hannah; Schroeder, Jeffrey
2006-01-01
At JPL, a <5 kg free-flying micro-inspector spacecraft is being designed for host-vehicle inspection. The spacecraft includes a hazard avoidance sensor to navigate relative to the vehicle being inspected. Structured light was selected for hazard avoidance because of its low mass and cost. Structured light is a method of remotely sensing the 3-dimensional structure of nearby objects utilizing a laser, a grating, and a single regular APS camera. The laser beam is split into 400 different beams by a grating to form a regularly spaced grid of laser beams that are projected into the field of view of an APS camera. The laser source and the APS camera are separated, forming the base of a triangle. The distances to all beam intersections with the host are calculated by triangulation.
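The per-beam range computation reduces to plane triangulation over the laser-camera baseline. A worked sketch (Python; the 0.3 m baseline and beam angles are illustrative, not the micro-inspector's actual geometry):

```python
import numpy as np

def spot_range(b, alpha, beta):
    """Distance from the camera to a projected laser spot: b is the
    laser-camera baseline, alpha the known beam angle and beta the camera
    ray angle, both measured from the baseline (radians). Law of sines."""
    gamma = np.pi - alpha - beta       # angle at the laser spot
    return b * np.sin(alpha) / np.sin(gamma)

print(spot_range(0.3, np.radians(80), np.radians(70)))  # ~0.59 m
```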
Robotic Vehicle Communications Interoperability
1988-08-01
[Record excerpt: flattened tables listing teleoperated-vehicle control functions (engine starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning) and electro-optic sensor options (video, radar, IR thermal imaging system, image intensifier, laser ranger; video camera selector: forward, stereo, rear; sensor controls).]
Single-Fiber Optical Link For Video And Control
NASA Technical Reports Server (NTRS)
Galloway, F. Houston
1993-01-01
Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode- or single-mode fiber types. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.
Preliminary optical design of PANIC, a wide-field infrared camera for CAHA
NASA Astrophysics Data System (ADS)
Cárdenas, M. C.; Rodríguez Gómez, J.; Lenzen, R.; Sánchez-Blanco, E.
2008-07-01
In this paper, we present the preliminary optical design of PANIC (PAnoramic Near Infrared camera for Calar Alto), a wide-field infrared imager for the Calar Alto 2.2 m telescope. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. A mosaic of four Teledyne Hawaii-2RG 2k × 2k detectors is used, giving a field of view of 31.9 arcmin × 31.9 arcmin. This cryogenic instrument has been optimized for the Y, J, H and K bands. Special care has been taken in the selection of the standard IR materials used for the optics in order to maximize the instrument throughput and to include the z band. The main challenges of this design are: to produce a well-defined internal pupil which allows the thermal background to be reduced by a cryogenic pupil stop; the correction of off-axis aberrations due to the large field available; the correction of chromatic aberration over the wide spectral coverage; and the capability of introducing narrow-band filters (~1%) into the system while minimizing the degradation of the filter passband without a collimated stage in the camera. We show the optomechanical error budget and the compensation strategy that allow our as-built design to meet the required performance from an optical point of view. Finally, we demonstrate the flexibility of the design by showing the performance of PANIC on the CAHA 3.5 m telescope.
Multiple-aperture optical design for micro-level cameras using 3D-printing method
NASA Astrophysics Data System (ADS)
Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung
2018-02-01
The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto the complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics is manufactured using femtosecond two-photon direct laser writing, and the figure error, which can achieve sub-micron accuracy, is suitable for the optical system. Because the size of the micro-level camera is of the order of several hundred micrometers, the resolution is greatly reduced and is strongly limited by the Nyquist frequency of the pixel pitch. To improve the reduced resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOV can then achieve high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of a lens with a larger FOV, so after stitching, the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. Foveated imaging using stitched FOVs breaks the resolution limitation of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera
NASA Astrophysics Data System (ADS)
Liu, Rui-Xue; Zheng, Xian-Liang; Li, Da-Yu; Xia, Ming-Liang; Hu, Li-Fa; Cao, Zhao-Liang; Mu, Quan-Quan; Xuan, Li
2014-09-01
With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce distressing feelings and to avoid potential drug-induced diseases, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-staring system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera moved in linkage for focusing on and imaging different layers. Four subjects with diverse degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-staring system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light can be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-staring system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the crucial spatial fidelity to fully compensate high-order aberrations. The Strehl ratio for a subject with -8 diopter myopia was improved to 0.78, which is nearly diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and the imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were successfully imaged.
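The quoted Strehl ratio can be related to residual wavefront error through the Marechal approximation S ~ exp(-(2*pi*sigma/lambda)**2). Inverting it at the reported S = 0.78 (Python; 550 nm is an assumed imaging wavelength, not stated in the abstract):

```python
import numpy as np

lam = 550e-9                     # assumed imaging wavelength, m
S = 0.78                         # reported Strehl ratio
sigma = lam / (2 * np.pi) * np.sqrt(-np.log(S))
print(f"{sigma * 1e9:.0f} nm RMS residual (~lambda/{lam / sigma:.0f})")
# -> ~44 nm RMS, i.e. roughly lambda/13: close to diffraction-limited
```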
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
Optical Navigation Image of Ganymede
NASA Technical Reports Server (NTRS)
1996-01-01
NASA's Galileo spacecraft, now in orbit around Jupiter, returned this optical navigation image June 3, 1996, showing that the spacecraft is accurately targeted for its first flyby of the giant moon Ganymede on June 27. The missing data in the frame is the result of a special editing feature recently added to the spacecraft's computer to transmit navigation images more quickly. This is the first in a series of optical navigation frames, highly edited onboard the spacecraft, that will be used to fine-tune the spacecraft's trajectory as Galileo approaches Ganymede. The image, used for navigation purposes only, is the product of new computer processing capabilities on the spacecraft that allow Galileo to send back only the information required to show the spacecraft is properly targeted and that Ganymede is where navigators calculate it to be. 'This navigation image is totally different from the pictures we'll be taking for scientific study of Ganymede when we get close to it later this month,' said Galileo Project Scientist Dr. Torrence Johnson. On June 27, Galileo will fly just 844 kilometers (524 miles) above Ganymede and return the most detailed, full-frame, high-resolution images and other measurements of the satellite ever obtained. Icy Ganymede is the largest moon in the solar system and three-quarters the size of Mars. It is one of the four large Jovian moons that are special targets of study for the Galileo mission. Of the more than 5 million bits contained in a single image, Galileo performed on-board editing to send back a mere 24,000 bits containing the essential information needed to assure proper targeting. Only the light-to-dark transitions of the crescent Ganymede and reference star locations were transmitted to Earth. The navigation image was taken from a distance of 9.8 million kilometers (6.1 million miles). On June 27th, the spacecraft will be 10,000 times closer to Ganymede.
NASA Astrophysics Data System (ADS)
Gill, E.; Honfi Camilo, L.; Kuystermans, P.; Maas, A. S. B. B.; Buutfeld, B. A. M.; van der Pols, R. H.
2008-09-01
This paper summarizes a study performed by ten students at the Delft University of Technology on a lunar exploration vehicle suited for competing in the Google Lunar X Prize. The design philosophy aimed at a quick and simple design process, to comply with the mission constraints. This is achieved by using conventional technology and performing the mission with two identical rovers, increasing reliability and simplicity of systems. Both rovers are, however, capable of operating independently. The required subsystems have been designed for survival and operation on the lunar surface for an estimated mission lifetime of five days. This preliminary study shows that it is possible for two nano-rovers to perform the basic exploration tasks. The mission has been devised such that after launch the rovers endure a 160-hour voyage to the Moon, after which they will land on Sinus Medii with a dedicated lunar transfer/lander vehicle. The mission outline itself has the two nano-rovers travelling in the same direction, moving simultaneously. This mission characteristic allows a quick take-over of the required tasks by the second rover in case of one rover breakdown. The main structure of the rovers will consist of Aluminium 2219 T851, due to its good thermal properties and high hardness. Because of the small dimensions of the rovers, the vehicles will use rigid caterpillar tracks as the locomotion system. The track systems are sealed from lunar dust using closed tracks to prevent interference with the mechanisms. This also prevents any damage to the electronics inside the tracks. For the movement speed a velocity of 0.055 m/s has been determined. This is about 90% of the maximum rover velocity, allowing direct control from Earth. The rovers are operated by a direct control loop, involving the mission control center. In order to direct the rovers safely, a continuous video link with the Earth is necessary to assess their immediate surroundings. Two forward-pointing navigational cameras aid the human controller by obtaining stereoscopic images. An additional navigational camera in the rear is used as a contingency to drive rearwards. All navigational cameras have a maximal resolution of 640 by 480 pixels. Each rover has one main High Definition (HD) camera capable of acquiring still images and videos. These cameras have a resolution of 1920 by 1080 pixels and a frame rate of 60 frames per second. Resolution and sampling rates can be modified to accommodate data transmission constraints. To comply with the self-portrait requirement imposed by the Google Lunar X Prize, the rovers will take images of each other, capturing 50% of the surface exploration system on the still image. As a contingency, both vehicles are also capable of composing self-portraits from an assembly of multiple images of their own structure, similar to the panoramic images. The camera is positioned above the rover on a mast providing two degrees of freedom, allowing the camera to rotate 360° horizontally and from -45° to 90° vertically. Both rovers are equipped with an omni-directional antenna. A WiMax system is used for all communication with the lander vehicle. The communication is done via the commonly used TCP/IP, which can be easily integrated in the software systems of the mission. The lander vehicle itself will act as a relay station for the data transfer with the ground station on Earth. The selected Digital Signal Processor (DSP) has been specifically designed for compressing raw HD format using little power. The DSP is capable of compressing the raw video data while at the same time performing remaining tasks such as navigation. Since the DSP is designed for Earth use, it has to be adapted to cope with the lunar environment. This can be achieved by proper implementation of radiation shielding. As the primary power source Gallium-Arsenide solar panels are used. These are the most efficient solar panels to date. Additionally, a Lithium-Ion battery is used as the secondary power source. In total at least 45 Wh of energy are needed to complete the mission. A passive thermal system has been found to comply with the thermal requirements of the rovers. Therefore white paint and optical solar reflectors are used. These have a high emissivity and low absorption. The most striking characteristic of the rover mission is the miniaturization of components, allowing a small and low-mass rover design. Also, the use of adapted off-the-shelf components would dramatically reduce costs with respect to proven space-grade components. The typically short mission lifetime allows this approach. It must be noted however that to ensure correct functionality of these components in space, they have to be customized and adapted to cope with vacuum and high radiation levels. Based on the achieved results, the Delft University of Technology is currently looking for partnerships in further development of a design capable of competing in the Google Lunar X Prize.
Electronographic cameras for space astronomy.
NASA Technical Reports Server (NTRS)
Carruthers, G. R.; Opal, C. B.
1972-01-01
Magnetically-focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We are also developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.
A compact high-speed pnCCD camera for optical and x-ray applications
NASA Astrophysics Data System (ADS)
Ihle, Sebastian; Ordavo, Ivan; Bechteler, Alois; Hartmann, Robert; Holl, Peter; Liebel, Andreas; Meidinger, Norbert; Soltau, Heike; Strüder, Lothar; Weber, Udo
2012-07-01
We developed a camera with a 264 × 264 pixel pnCCD with 48 μm pixel size (thickness 450 μm) for X-ray and optical applications. It has a high quantum efficiency and can be operated at frame rates up to 400/1000 Hz (noise ≈ 2.5 e- ENC / ≈ 4.0 e- ENC). High-speed astronomical observations can be performed at low light levels. Results of test measurements will be presented. The camera is well suited for ground-based preparation measurements for future X-ray missions. For single X-ray photons, the spatial position can be determined with significant sub-pixel resolution.
Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry
NASA Astrophysics Data System (ADS)
Kersten, J.; Rodehorst, V.
2016-06-01
Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimates, usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem of monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time-capable techniques for outlier detection and drift reduction in frame-to-frame VO are available, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
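A minimal frame-to-frame step with RANSAC-based relative orientation, sketched with OpenCV (monocular essential-matrix form for brevity, not the paper's exact pipeline; the stereo setup would triangulate with the known baseline, which also fixes the scale of t):

```python
import cv2
import numpy as np

def relative_pose(img0, img1, K):
    """One frame-to-frame VO step: ORB features, brute-force matching,
    essential matrix with RANSAC outlier rejection, pose recovery."""
    orb = cv2.ORB_create(2000)
    k0, d0 = orb.detectAndCompute(img0, None)
    k1, d1 = orb.detectAndCompute(img1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
    p0 = np.float32([k0[m.queryIdx].pt for m in matches])
    p1 = np.float32([k1[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t   # t is unit-norm here; stereo resolves the scale ambiguity
```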
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors, with two arms, two wheels and two claws, and is designed to realize the functions of observing, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networking in the 700 MHz band. An expert system programmed with Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed in the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute navigation and inspection tasks.
Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera
NASA Technical Reports Server (NTRS)
Grosso, R. P.; Mccarthy, D. J.
1976-01-01
The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application in the radial bay of the optical telescope assembly and has an on-axis field of view of 3 arc-minutes by 3 arc-minutes.
Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design
2015-10-01
… the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part … also completed for relevant members of the study team. … The head-mounted camera setup has been established (a modified GoPro Hero 3 with external …
Wavefront measurement of plastic lenses for mobile-phone applications
NASA Astrophysics Data System (ADS)
Huang, Li-Ting; Cheng, Yuan-Chieh; Wang, Chung-Yen; Wang, Pei-Jen
2016-08-01
In camera lenses for mobile-phone applications, all lens elements are designed with aspheric surfaces because of the requirement for a minimal total track length of the lens. Because the designs are diffraction-limited and require precision assembly procedures, element inspection and lens performance measurement have become cumbersome in the production of mobile-phone cameras. Recently, wavefront measurements based on Shack-Hartmann sensors have been successfully implemented on injection-molded plastic lenses with aspheric surfaces. However, the application of wavefront measurement to small plastic lenses has yet to be studied both theoretically and experimentally. In this paper, both an in-house-built and a commercial wavefront measurement system, configured on two different optical structures, were investigated by measuring the wavefront aberrations of two lens elements from a mobile-phone camera. First, the wet-cell method was employed to verify aberrations caused by residual birefringence in an injection-molded lens. Then, two lens elements of a mobile-phone camera, one with large positive and one with large negative power, were measured, with the aberrations expressed in Zernike polynomials, to illustrate the effectiveness of wavefront measurement for troubleshooting defects in optical performance.
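For readers unfamiliar with expressing aberrations in Zernike polynomials, the following minimal numpy sketch fits a measured wavefront map to a few low-order Zernike modes by least squares; the Noll-style ordering and normalization used here are an assumption, not necessarily those of the systems in the paper.

```python
import numpy as np

def zernike_basis(rho, theta):
    """First few Zernike polynomials (Noll-ordered) on the unit disk."""
    return np.stack([
        np.ones_like(rho),                        # Z1: piston
        2 * rho * np.cos(theta),                  # Z2: tilt x
        2 * rho * np.sin(theta),                  # Z3: tilt y
        np.sqrt(3) * (2 * rho**2 - 1),            # Z4: defocus
        np.sqrt(6) * rho**2 * np.sin(2 * theta),  # Z5: oblique astigmatism
        np.sqrt(6) * rho**2 * np.cos(2 * theta),  # Z6: vertical astigmatism
    ], axis=-1)

def fit_zernike(wavefront, mask):
    """Least-squares Zernike coefficients for a measured wavefront map,
    sampled on a grid with a boolean pupil mask."""
    h, w = wavefront.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    valid = mask & (rho <= 1.0)
    A = zernike_basis(rho[valid], theta[valid])
    coeffs, *_ = np.linalg.lstsq(A, wavefront[valid], rcond=None)
    return coeffs  # same units as the wavefront map (e.g., waves)
```

The residual after removing the fitted modes indicates higher-order aberrations, which is what makes such a decomposition useful for troubleshooting.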
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams that capture moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in the image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming occlusions: an object in partial or full view in one camera may be fully visible in another. Object registration is achieved by locating common features of the moving object across simultaneous frames, and perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting the changes caused by the objects' movement across frames in time, both in each stream and in the combined video information. The path of each object is determined heuristically. Detection accuracy depends on the speed of the object as well as on variations in its direction of motion. Faster cameras increase accuracy but limit the processing time available per frame, constraining the complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that the scenes of at least two nearby cameras overlap. An object can then be tracked continuously over long distances or across multiple cameras, which is applicable, for example, to wireless sensor networks for surveillance or navigation.
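A minimal sketch of the two-camera scheme is given below: moving objects are tagged per stream by background subtraction, and detections from the second camera are registered into the first view before fusion. The pre-calibrated homography used for registration assumes an approximately planar scene and is an illustrative simplification of the feature-based registration described above.

```python
import numpy as np
import cv2

# One background subtractor per synchronized camera stream.
subtractors = [cv2.createBackgroundSubtractorMOG2() for _ in range(2)]

def detect_centroids(frame, sub, min_area=200):
    """Tag moving objects in one camera's frame via background subtraction."""
    fg = sub.apply(frame)
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cents = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        cents.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return cents

def fuse_detections(cents1, cents2, H, tol=25.0):
    """Map camera-2 detections into the camera-1 view with a pre-calibrated
    homography H (planar-scene assumption) and merge detections closer than
    tol pixels, so an object occluded in one view is kept via the other."""
    mapped = []
    if cents2:
        pts = np.float32(cents2).reshape(-1, 1, 2)
        mapped = [tuple(p) for p in cv2.perspectiveTransform(pts, H).reshape(-1, 2)]
    fused = list(cents1)
    for p in mapped:
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) > tol for q in fused):
            fused.append(p)
    return fused
```

Per-frame fused detections would then be linked over time by the heuristic path logic the abstract describes.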
Design of microcontroller based system for automation of streak camera.
Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P
2010-08-01
A microcontroller-based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high-voltage ramp signal is generated by a step-generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed to synchronize the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface enables the user to program the camera settings and capture the image; the image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.
Design framework for a spectral mask for a plenoptic camera
NASA Astrophysics Data System (ADS)
Berkner, Kathrin; Shroff, Sapna A.
2012-01-01
Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Camera designs that capture directional ray information enable applications such as digital refocusing, rotation, or depth estimation. Only a few designs address capturing the spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, the spectral dimension of the plenoptic function can be sampled. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally coded plenoptic camera optical system via wave-propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimizing the spectral mask for a few sample applications.
Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul
2016-02-01
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
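The view-matching strategy evaluated in such simulations can be stated compactly. The sketch below implements rotational image-difference matching on unwrapped panoramic views, where columns correspond to azimuth; it is a generic illustration of the technique, not the authors' code.

```python
import numpy as np

def heading_from_view_match(snapshot, current):
    """Rotational image-difference view matching: roll the current
    panoramic view in azimuth and return the shift that best matches the
    stored snapshot, as an angle in degrees. With low-resolution,
    wide-field views the difference surface is typically smooth enough
    for robust orientation."""
    snap = snapshot.astype(np.float32)
    cur = current.astype(np.float32)
    n_cols = snap.shape[1]
    rmsd = [np.sqrt(np.mean((np.roll(cur, s, axis=1) - snap) ** 2))
            for s in range(n_cols)]
    return 360.0 * int(np.argmin(rmsd)) / n_cols
```

Treating separate azimuthal sectors of the view as independent sensors, as the abstract suggests, would amount to running this matching on column slices and combining the resulting directional estimates.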
NASA Astrophysics Data System (ADS)
Jakubovic, Raphael; Gupta, Shuarya; Guha, Daipayan; Mainprize, Todd; Yang, Victor X. D.
2017-02-01
Cranial neurosurgical procedures are especially delicate considering that the surgeon must localize the subsurface anatomy with limited exposure and without the ability to see beyond the surface of the surgical field. Surgical accuracy is imperative, as even minor surgical errors can cause major neurological deficits. Traditionally, surgical precision was highly dependent on surgical skill. However, the introduction of intraoperative surgical navigation has shifted this paradigm, and navigation has become the current standard of care for cranial neurosurgery. Intraoperative image-guided navigation systems allow the surgeon to visualize the three-dimensional subsurface anatomy using pre-acquired computed tomography (CT) or magnetic resonance (MR) images. The patient anatomy is fused to the pre-acquired images using various registration techniques, and surgical tools are typically localized using optical tracking methods. Although these techniques positively impact complication rates, surgical accuracy is limited by the accuracy of the navigation system, and as such quantification of surgical error is required. While many different measures of registration accuracy have been presented, true navigation accuracy can only be quantified post-operatively by comparing a ground-truth landmark to the intraoperative visualization. In this study we quantified the accuracy of cranial neurosurgical procedures using a novel optical surface-imaging navigation system to visualize the three-dimensional surface anatomy. A tracked probe was placed on the screws of cranial fixation plates during surgery, and the reported position of the centre of each screw was compared to its coordinates in the post-operative CT or MR images, thus quantifying cranial neurosurgical error.
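The error metric itself is straightforward; a minimal sketch, assuming the tracked positions and the post-operative image coordinates have already been expressed in a common frame (the example point lists are hypothetical):

```python
import numpy as np

def navigation_error(reported_xyz, truth_xyz):
    """Euclidean distance between the navigation system's reported probe
    position on a fixation screw and the same screw's centre located in
    the post-operative CT/MR volume, both in the same coordinate frame."""
    return float(np.linalg.norm(np.asarray(reported_xyz) - np.asarray(truth_xyz)))

# Hypothetical example points (mm) for two screws:
reported = [(10.2, -3.1, 55.0), (12.8, 4.0, 60.3)]
truth = [(10.0, -3.0, 55.4), (13.1, 3.8, 60.0)]
errors = [navigation_error(p, q) for p, q in zip(reported, truth)]
print(f"mean error {np.mean(errors):.2f} mm, max error {np.max(errors):.2f} mm")
```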
Cadaveric in-situ testing of optical coherence tomography system-based skull base surgery guidance
NASA Astrophysics Data System (ADS)
Sun, Cuiru; Khan, Osaama H.; Siegler, Peter; Jivraj, Jamil; Wong, Ronnie; Yang, Victor X. D.
2015-03-01
Optical coherence tomography (OCT) has extensive potential for clinical impact in the field of neurological disease. A hand-held, forward-viewing, bayonet-shaped neurosurgical OCT probe has been developed. In this study, we tested the feasibility of integrating this imaging probe with modern navigation technology for guidance and monitoring of skull base surgery. Cadaver heads were used to simulate relevant surgical approaches for the treatment of sellar, parasellar and skull base pathology. A high-resolution 3D CT scan of the cadaver head provided the baseline data for navigation. The cadaver head was mounted in existing 3- or 4-point fixation systems. Tracking markers were attached to the OCT probe, and the surgeon-probe-OCT interface was calibrated. 2D OCT images were shown to the surgeon in real time during surgery, together with the optical tracking images. The intraoperative video and the multimodality imaging data set, consisting of real-time OCT images and the OCT probe location registered to the neurosurgical navigation, were assessed. The integration of intraoperative OCT imaging with navigation technology provides the surgeon with updated image information, which is important for dealing with tissue shifts and deformations during surgery. Preliminary results demonstrate that a clinical neurosurgical navigation system can provide gross anatomical localization for the hand-held OCT probe. The near-histological imaging resolution of intraoperative OCT can improve the identification of microstructural and morphological differences. The OCT imaging data, combined with neurosurgical navigation tracking, have the potential to improve image interpretation and the precision and accuracy of the therapeutic procedure.
Topography of the 81P/Wild 2 Nucleus Derived from Stardust Stereoimages
NASA Technical Reports Server (NTRS)
Kirk, R. L.; Duxbury, T. C.; Horz, F.; Brownlee, D. E.; Newburn, R. L.; Tsou, P.
2005-01-01
On 2 January 2004, the Stardust spacecraft flew by the nucleus of comet 81P/Wild 2 with a closest-approach distance of approx. 240 km. During the encounter, the Stardust Optical Navigation Camera (ONC) obtained 72 images of the nucleus with exposure times alternating between 10 ms (near-optimal for most of the nucleus surface) and 100 ms (used for navigation, and revealing additional details in the coma and dark portions of the surface). Phase angles varied from 72 deg. to near zero to 103 deg. during the encounter, allowing the entire sunlit portion of the surface to be imaged. As many as 20 of the images near closest approach are of sufficiently high resolution to be used in mapping the nucleus surface; of these, two pairs of short-exposure images were used to create the nucleus shape model and derived products reported here. The best image resolution obtained was approx. 14 m/pixel, giving approx. 300 pixels across the nucleus. The Stardust Wild 2 dataset is therefore markedly superior, from a stereomapping perspective, to the Deep Space 1 MICAS images of comet Borrelly. The key subset of the latter (3 images) covered only about a quarter of the surface, at phase angles of approx. 50-60 deg., with fewer than 50 x 160 pixels across the nucleus, yet it sufficed for groups at the USGS and DLR to produce digital elevation models (DEMs) and to study the morphology and photometry of that nucleus in detail.
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching, high-speed image shutter device for 3D image capture and its application to a system prototype are presented. For 3D image capture, the system uses the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of the optical modulation. The optical shutter device is specially designed and fabricated with low-resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The proposed optical shutter device enables the capture of a full HD depth image with millimetre-scale depth accuracy, the largest depth-image resolution among the state of the art, which has so far been limited to VGA. The 3D camera prototype realizes a concurrent color/depth sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capture device have a crucial impact on the 3D ecosystem of the IT industry, especially as a means of 3D image sensing in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, the 3D camera system prototype, and image test results.
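For context, the standard continuous-wave TOF relations (not quoted from the paper) connect the measured phase shift to depth and set the unambiguous range at a 20 MHz modulation frequency:

```latex
% Depth from measured phase shift, and unambiguous range at f_mod = 20 MHz.
\[
  d = \frac{c\,\Delta\phi}{4\pi f_{\mathrm{mod}}},
  \qquad
  d_{\max} = \frac{c}{2 f_{\mathrm{mod}}}
           = \frac{3\times 10^{8}\,\mathrm{m/s}}{2 \times 20\times 10^{6}\,\mathrm{Hz}}
           = 7.5\,\mathrm{m}.
\]
```

Millimetre-scale accuracy over the 7.5 m unambiguous range thus corresponds to resolving the modulation phase to roughly one part in 7500 of a cycle.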
Imaging of optically diffusive media by use of opto-elastography
NASA Astrophysics Data System (ADS)
Bossy, Emmanuel; Funke, Arik R.; Daoudi, Khalid; Tanter, Mickael; Fink, Mathias; Boccara, Claude
2007-02-01
We present a camera-based optical detection scheme designed to detect the transient motion created by the acoustic radiation force in elastic media. An optically diffusive tissue-mimicking phantom was illuminated with coherent laser light, and a high-speed camera (2 kHz frame rate) was used to acquire and cross-correlate consecutive speckle patterns. Time-resolved transient decorrelations of the optical speckle were measured as the result of the localised motion induced in the medium by the radiation force and the subsequently propagating shear waves. As opposed to classical acousto-optic techniques, which are sensitive to vibrations induced by compressional waves at ultrasonic frequencies, the proposed technique is sensitive only to the low-frequency transient motion induced in the medium by the radiation force. It therefore provides a way to assess both the optical and the shear mechanical properties.
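A minimal sketch of the correlation processing described above, assuming the recorded frames are already available as 2D arrays; the zero-mean normalized correlation used here is a generic choice, not necessarily the authors' exact estimator:

```python
import numpy as np

def speckle_correlation_series(frames):
    """Zero-mean normalized correlation between consecutive speckle frames;
    dips in the series mark transient motion (the radiation-force push and
    the passing shear wave). At a 2 kHz frame rate each value spans 0.5 ms."""
    series = []
    for a, b in zip(frames[:-1], frames[1:]):
        a = np.asarray(a, np.float64)
        b = np.asarray(b, np.float64)
        a = a - a.mean()
        b = b - b.mean()
        series.append(float((a * b).sum() /
                            np.sqrt((a ** 2).sum() * (b ** 2).sum())))
    return np.array(series)
```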
NASA Astrophysics Data System (ADS)
Duan, Pengfei; Lei, Wenping
2017-11-01
A number of disciplines (mechanics, structures, thermal, and optics) are needed to design and build a space camera. Separate design models are normally constructed with each discipline's CAD/CAE tools. Design and analysis are conducted largely in parallel, subject to the requirements levied on each discipline, and technical interaction between the disciplines is limited and infrequent. As a result, a unified view of the space camera design across discipline boundaries is not directly possible with this approach, and generating one would require a laborious, manual, and error-prone process. A collaborative environment built on an abstract model and performance templates allows engineering data and CAD/CAE results to be shared across these discipline boundaries within a common interface, helping to achieve rapid multidisciplinary design and to evaluate optical performance directly under environmental loads. A small interdisciplinary engineering team from the Beijing Institute of Space Mechanics and Electricity has recently conducted a structural/thermal/optical (STOP) analysis of a space camera within this collaborative environment. A STOP analysis evaluates the changes in image quality that arise from structural deformations as the thermal environment of the camera changes throughout its orbit. STOP analyses were conducted for four different test conditions applied during final thermal vacuum (TVAC) testing of the payload on the ground. The STOP simulation process begins with importing an integrated CAD model of the camera geometry into the collaborative environment, within which: (1) independent thermal and structural meshes are generated; (2) the thermal mesh and the relevant engineering data for material properties and thermal boundary conditions are used to compute temperature distributions at the nodal points of both meshes with Thermal Desktop, a COTS thermal design and analysis code; (3) the thermally induced structural deformations of the camera are evaluated in Nastran, an industry-standard code for structural design and analysis; (4) the thermal and structural results are imported into SigFit, another COTS tool, which computes the deformations and best-fit rigid-body displacements of the optical surfaces; and (5) SigFit creates a modified optical prescription that is imported into CODE V for evaluation of the optical performance impacts. The integrated STOP analysis was validated using the TVAC test data. For the four TVAC tests, the relative errors between the simulated and measured temperatures at the monitoring points were around 5%, and in some test conditions as low as 1%. For the image-quality metric MTF, the relative error between simulation and test was 8.3% in the worst condition and below 5% in all others. The validation demonstrated that the collaborative design and simulation environment can perform an integrated STOP analysis of a space camera efficiently. Furthermore, the collaborative environment allows an interdisciplinary analysis that formerly might have taken several months to be completed in two or three weeks, which is well suited to concept studies in the early stages of a project.
Qualification Tests of Micro-camera Modules for Space Applications
NASA Astrophysics Data System (ADS)
Kimura, Shinichi; Miyasaka, Akira
Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.