Hand-eye calibration using a target registration error model.
Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M
2017-10-01
Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
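The abstract above formulates hand-eye calibration as a registration between tracked 3D points (the stylus tip) and camera-space viewing lines (its image projection). Below is a minimal sketch of that point-to-line registration step, assuming rays through the camera origin and using SciPy's optimizer; the synthetic data, variable names, and zero initial guess are illustrative assumptions, not the authors' implementation or their TRE-guided measurement selection.

```python
# Minimal sketch: estimate the rigid hand-eye transform mapping stylus-tip
# points in tracker coordinates onto their viewing rays in camera coordinates.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def point_line_residuals(params, pts_tracker, ray_dirs):
    """Perpendicular distance of each transformed point from its viewing ray
    (all rays pass through the camera origin)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    p_cam = pts_tracker @ R.T + t                      # points in camera frame
    along = np.sum(p_cam * ray_dirs, axis=1, keepdims=True)
    return (p_cam - along * ray_dirs).ravel()          # perpendicular components


def calibrate(pts_tracker, ray_dirs):
    sol = least_squares(point_line_residuals, np.zeros(6),
                        args=(pts_tracker, ray_dirs))
    R = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    rms = np.sqrt(np.mean(sol.fun ** 2))
    return R, sol.x[3:], rms


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    R_true = Rotation.from_euler("xyz", [10, -5, 20], degrees=True).as_matrix()
    t_true = np.array([30.0, -15.0, 200.0])            # mm, illustrative
    pts_cam = rng.uniform([-50, -50, 150], [50, 50, 300], size=(15, 3))
    rays = pts_cam / np.linalg.norm(pts_cam, axis=1, keepdims=True)
    pts_tracker = (pts_cam - t_true) @ R_true          # same points, tracker frame
    R_est, t_est, rms = calibrate(pts_tracker, rays)
    print(f"RMS point-to-line distance: {rms:.2e} mm")
    print("translation recovery error (mm):", np.round(t_est - t_true, 4))
```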
Earth-orbiting extreme ultraviolet spectroscopic mission: SPRINT-A/EXCEED
NASA Astrophysics Data System (ADS)
Yoshikawa, I.; Tsuchiya, F.; Yamazaki, A.; Yoshioka, K.; Uemizu, K.; Murakami, G.; Kimura, T.; Kagitani, M.; Terada, N.; Kasaba, Y.; Sakanoi, T.; Ishii, H.; Uji, K.
2012-09-01
The EXCEED (Extreme Ultraviolet Spectroscope for Exospheric Dynamics) mission is an Earth-orbiting extreme ultraviolet (EUV) spectroscopic mission and the first in the SPRINT series being developed by ISAS/JAXA. It will be launched in the summer of 2013. EUV spectroscopy is suitable for observing tenuous gases and plasmas around planets in the solar system (e.g., Mercury, Venus, Mars, Jupiter, and Saturn). An advantage of remote-sensing observation is that it provides a direct picture of the plasma dynamics and distinguishes explicitly between spatial and temporal variability. One of the primary observation targets is the inner magnetosphere of Jupiter, whose plasma dynamics are dominated by planetary rotation. Previous observations have shown that a few percent of the electrons in the inner magnetosphere are hot, with temperatures 100 times higher than that of the background thermal electrons. Though the hot electrons have a significant impact on the energy balance in the inner magnetosphere, their generation process has not yet been elucidated. In the EUV range, a number of emission lines originate from plasmas distributed in Jupiter's inner magnetosphere. The EXCEED spectrograph is designed to have a wavelength range of 55-145 nm with a minimum spectral resolution of 0.4 nm, enabling the electron temperature and ion composition in the inner magnetosphere to be determined. Another primary objective is to investigate an unresolved problem concerning the escape of planetary atmospheres to space. Although there have been some in-situ observations by orbiters, our knowledge is still limited. The EXCEED mission plans to make imaging observations of plasmas around Venus and Mars to determine the amounts of escaping atmosphere. The instrument's field of view (FOV) is wide enough to image, at one time, the region from the solar wind-planetary plasma interaction region down to the tail region. This will provide us with information about outward-flowing plasmas, e.g., their composition, rate, and dependence on solar activity. EXCEED has two mission instruments: the EUV spectrograph and a target guide camera that is sensitive to visible light. The EUV spectrograph is designed to have a wavelength range of 55-145 nm with a spectral resolution of 0.4-1.0 nm. The spectrograph slits have a FOV of 400 × 140 arcseconds (maximum). The optics of the instrument consist of a primary mirror with a diameter of 20 cm, a laminar-type grating, and a 5-stage micro-channel plate assembly with a resistive anode encoder. To achieve high efficiencies, the surfaces of the primary mirror and the grating are coated with CVD-SiC. Because of the large primary mirror and high efficiencies, good temporal resolution and complete spatial coverage for Io plasma torus observation are expected. Based on a feasibility study using the spectral diagnosis method, it is shown that EXCEED can determine the Io plasma torus parameters, such as electron density, temperature, and hot-electron fraction, using an exposure time of 50 minutes. The target guide camera will be used to capture the target and guide the observation area of interest to the slit. Emissions from outside the slit's FOV will be reflected by the front of the slit and guided to the target guide camera. The guide camera's FOV is 240" × 240". The camera will take an image every 3 seconds and the image is sent to a mission data processor (MDP), which calculates the centroid of the image.
During an observation, the bus system controls the attitude to keep the centroid position of the target in the guide camera with an accuracy of ±5 arc-seconds. With the help of the target guide camera, we will take spectral images with a long exposure time of 50 minutes and good spatial resolution of 20 arc-seconds.
A near-Infrared SETI Experiment: Alignment and Astrometric precision
NASA Astrophysics Data System (ADS)
Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan
2016-06-01
Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument aiming to search for fast nanosecond laser pulses has been commissioned on the Nickel 1-m telescope at Lick Observatory. The NIROSETI instrument makes use of an optical guide camera, a SONY ICX694 CCD from PointGrey, to align our selected sources into two 200 µm near-infrared Avalanche Photo Diodes (APDs) with a field-of-view of 2.5"×2.5" each. These APD detectors operate at very fast bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, the static optical distortion solution, and the relative orientation with respect to the APD detectors. We determined the guide camera plate scale as 55.9 ± 2.7 milliarcseconds/pixel and a magnitude limit of 18.15 mag (+1.07/-0.58) in the V band. We will present the full distortion solution of the guide camera, its orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.
NASA Astrophysics Data System (ADS)
Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.
2017-02-01
In the development of new near-infrared (NIR) fluorescence dyes for image guided surgery, there is a need for new NIR sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to the present clinical systems that are only optimized for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom to measure the sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate to create a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue with controlled thickness could be placed on the spot to mimic a fluorescent `cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image guided surgery. The image of the spot obtained with the cameras was captured and analyzed using ImageJ software. Enhanced CCD night vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved to be useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.
Science observations with the IUE using the one-gyro mode
NASA Technical Reports Server (NTRS)
Imhoff, C.; Pitts, R.; Arquilla, R.; Shrader, Chris R.; Perez, M. R.; Webb, J.
1990-01-01
The International Ultraviolet Explorer (IUE) attitude control system originally included an inertial reference package containing six gyroscopes for three axis stabilization. The science instrument includes a prime and redundant Field Error Sensor (FES) camera for target acquisition and offset guiding. Since launch, four of the six gyroscopes have failed. The current attitude control system utilizes the remaining two gyros and a Fine Sun Sensor (FSS) for three axis stabilization. When the next gyro fails, a new attitude control system will be uplinked which will rely on the remaining gyro and the FSS for general three axis stabilization. In addition to the FSS, the FES cameras will be required to assist in maintaining fine attitude control during target acquisition. This has required thoroughly determining the characteristics of the FES cameras and the spectrograph aperture plate as well as devising new target acquisition procedures. The results of this work are presented.
The Road To The Objective Force. Armaments for the Army Transformation
2001-06-18
Vehicle Fire Support Vehicle •TOW 2B Anti-Tank Capability Under Armor •Detection of NBC Hazards Mortar Carrier •Dismounted M121 120mm MRT Initially...engaged from under armor M6 Launchers (x4) Staring Array Thermal Sight Height reduction for air transport Day Camera Target Acq Sight Armament Remote...PM BCT ANTI-TANK GUIDED MISSILE VEHICLE • TOWII • ITAS (Raytheon) - 2 Missiles • IBAS Day Camera • Missile is Remotely Fired Under Armor • M6 Smoke
Opto-mechanical system design of test system for near-infrared and visible target
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Zhu, Guodong; Wang, Yuchao
2014-12-01
Guidance precision is a key index of guided-weapon accuracy. Factors affecting guidance precision include information-processing precision, control-system accuracy, and laser-irradiation accuracy; among these, laser-irradiation precision is particularly important. To meet the need for precision testing of laser irradiators, this paper presents a laser precision test system. The system consists of a modified Cassegrain telescope, a wide-dynamic-range CCD camera, a tracking turntable, and an industrial PC, and images visible-light and near-infrared targets simultaneously with a near-IR camera. Analysis of the design shows that, for a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.
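A quick back-of-the-envelope check of the quoted figure: a 43 mm lateral measurement error at a 1000 m range corresponds to an angular error of about 43 microradians (roughly 9 arcsec). The short sketch below is purely illustrative arithmetic, not part of the described system.

```python
# Convert the quoted 43 mm precision at 1000 m into an angular error.
import math

range_m = 1000.0
lateral_error_m = 0.043
angular_error_rad = lateral_error_m / range_m
print(f"{angular_error_rad * 1e6:.1f} urad = "
      f"{math.degrees(angular_error_rad) * 3600:.1f} arcsec")
```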
Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles
NASA Technical Reports Server (NTRS)
Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick
2012-01-01
Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.
Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2012-10-01
In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained through a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
Opto-mechanical design of the G-CLEF flexure control camera system
NASA Astrophysics Data System (ADS)
Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson
2016-08-01
The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). The G-CLEF is a fiber-fed, optical-band echelle spectrograph that is capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera, which monitors the field images focused on a fiber mirror to control the flexure and the focus errors within the GCFEA. The FCC consists of five optical components: a collimator including triple lenses for producing a pupil, neutral density filters allowing us to use a much brighter star as a target or a guide, a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror, a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane, and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified since the PDR in April 2015.
ProtoDESI: First On-Sky Technology Demonstration for the Dark Energy Spectroscopic Instrument
NASA Astrophysics Data System (ADS)
Fagrelius, Parker; Abareshi, Behzad; Allen, Lori; Ballester, Otger; Baltay, Charles; Besuner, Robert; Buckley-Geer, Elizabeth; Butler, Karen; Cardiel, Laia; Dey, Arjun; Duan, Yutong; Elliott, Ann; Emmet, William; Gershkovich, Irena; Honscheid, Klaus; Illa, Jose M.; Jimenez, Jorge; Joyce, Richard; Karcher, Armin; Kent, Stephen; Lambert, Andrew; Lampton, Michael; Levi, Michael; Manser, Christopher; Marshall, Robert; Martini, Paul; Paat, Anthony; Probst, Ronald; Rabinowitz, David; Reil, Kevin; Robertson, Amy; Rockosi, Connie; Schlegel, David; Schubnell, Michael; Serrano, Santiago; Silber, Joseph; Soto, Christian; Sprayberry, David; Summers, David; Tarlé, Greg; Weaver, Benjamin A.
2018-02-01
The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the universe using the baryon acoustic oscillations technique. The spectra of 35 million galaxies and quasars over 14,000 square degrees will be measured during a 5-year survey. A new prime focus corrector for the Mayall telescope at Kitt Peak National Observatory will deliver light to 5,000 individually targeted fiber-fed robotic positioners. The fibers in turn feed ten broadband multi-object spectrographs. We describe the ProtoDESI experiment, which was installed and commissioned on the 4-m Mayall telescope from 2016 August 14 to September 30. ProtoDESI was an on-sky technology demonstration with the goal of reducing technical risks associated with aligning optical fibers with targets using robotic fiber positioners and maintaining the stability required to operate DESI. The ProtoDESI prime focus instrument, consisting of three fiber positioners, illuminated fiducials, and a guide camera, was installed behind the existing Mosaic corrector on the Mayall telescope. A fiber view camera was mounted in the Cassegrain cage of the telescope and provided feedback metrology for positioning the fibers. ProtoDESI also provided a platform for early integration of hardware with the DESI Instrument Control System that controls the subsystems, provides communication with the Telescope Control System, and collects instrument telemetry data. Lacking a spectrograph, ProtoDESI monitored the output of the fibers using a fiber photometry camera mounted on the prime focus instrument. ProtoDESI was successful in acquiring targets with the robotically positioned fibers and demonstrated that the DESI guiding requirements can be met.
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced a 3 mm distance error and a 2.5° orientation error on average. These errors arise from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
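The core step in the abstract above is recovering camera pose from matches between intraoperative image features and 3D model points. The sketch below is a pose-only reprojection-error refinement of that flavor, not the authors' constrained bundle adjustment; the pinhole intrinsics, synthetic points, and noise level are assumed values for illustration.

```python
# Minimal sketch: recover camera pose by minimizing reprojection error of
# matched 3D model points against their 2D image observations.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

FX = FY = 800.0          # assumed focal length in pixels
CX, CY = 320.0, 240.0    # assumed principal point


def project(points_world, rvec, tvec):
    R = Rotation.from_rotvec(rvec).as_matrix()
    p_cam = points_world @ R.T + tvec
    u = FX * p_cam[:, 0] / p_cam[:, 2] + CX
    v = FY * p_cam[:, 1] / p_cam[:, 2] + CY
    return np.column_stack([u, v])


def reprojection_residuals(params, points_world, observed_uv):
    return (project(points_world, params[:3], params[3:]) - observed_uv).ravel()


def estimate_pose(points_world, observed_uv, x0):
    sol = least_squares(reprojection_residuals, x0,
                        args=(points_world, observed_uv))
    return sol.x[:3], sol.x[3:]


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.uniform([-0.05, -0.05, 0.15], [0.05, 0.05, 0.25], (30, 3))  # metres
    rvec_true, tvec_true = np.array([0.05, -0.02, 0.1]), np.array([0.01, 0.0, 0.02])
    uv = project(pts, rvec_true, tvec_true) + rng.normal(0, 0.5, (30, 2))  # px noise
    rvec, tvec = estimate_pose(pts, uv, x0=np.zeros(6))
    print("rotation error (deg):", np.degrees(np.linalg.norm(rvec - rvec_true)))
    print("translation error (mm):", 1000 * np.linalg.norm(tvec - tvec_true))
```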
KleinJan, G H; Brouwer, O R; Mathéron, H M; Rietbergen, D D D; Valdés Olmos, R A; Wouters, M W; van den Berg, N S; van Leeuwen, F W B
2016-01-01
To assess if combined fluorescence- and radio-guided occult lesion localization (hybrid ROLL) is feasible in patients scheduled for surgical resection of non-palpable (18)F-FDG-avid lesions on PET/CT. Four patients with (18)F-FDG-avid lesions on follow-up PET/CT that were not palpable during physical examination but were suspected to harbor metastasis were enrolled. Guided by ultrasound, the hybrid tracer indocyanine green (ICG)-(99m)Tc-nanocolloid was injected centrally in the target lesion. SPECT/CT imaging was used to confirm tracer deposition. Intraoperatively, lesions were localized using a hand-held gamma ray detection probe, a portable gamma camera, and a fluorescence camera. After excision, the gamma camera was used to check the wound bed for residual activity. A total of six (18)F-FDG-avid lymph nodes were identified and scheduled for hybrid ROLL. Comparison of the PET/CT images with the acquired SPECT/CT after hybrid tracer injection confirmed accurate tracer deposition. No side effects were observed. Combined radio- and fluorescence-guidance enabled localization and excision of the target lesion in all patients. Five of the six excised lesions proved tumor-positive at histopathology. The hybrid ROLL approach appears to be feasible and can facilitate the intraoperative localization and excision of non-palpable lesions suspected to harbor tumor metastases. In addition to the initial radioguided detection, the fluorescence component of the hybrid tracer enables high-resolution intraoperative visualization of the target lesion. The procedure needs further evaluation in a larger cohort and wider range of malignancies to substantiate these preliminary findings. Copyright © 2016 Elsevier España, S.L.U. y SEMNIM. All rights reserved.
Photon collider: a four-channel autoguider solution
NASA Astrophysics Data System (ADS)
Hygelund, John C.; Haynes, Rachel; Burleson, Ben; Fulton, Benjamin J.
2010-07-01
The "Photon Collider" uses a compact array of four off axis autoguider cameras positioned with independent filtering and focus. The photon collider is two way symmetric and robustly mounted with the off axis light crossing the science field which allows the compact single frame construction to have extremely small relative deflections between guide and science CCDs. The photon collider provides four independent guiding signals with a total of 15 square arc minutes of sky coverage. These signals allow for simultaneous altitude, azimuth, field rotation and focus guiding. Guide cameras read out without exposure overhead increasing the tracking cadence. The independent focus allows the photon collider to maintain in focus guide stars when the main science camera is taking defocused exposures as well as track for telescope focus changes. Independent filters allow auto guiding in the science camera wavelength bandpass. The four cameras are controlled with a custom web services interface from a single Linux based industrial PC, and the autoguider mechanism and telemetry is built around a uCLinux based Analog Devices BlackFin embedded microprocessor. Off axis light is corrected with a custom meniscus correcting lens. Guide CCDs are cooled with ethylene glycol with an advanced leak detection system. The photon collider was built for use on Las Cumbres Observatory's 2 meter Faulks telescopes and currently used to guide the alt-az mount.
STS-52 CANEX-2 Canadian Target Assembly (CTA) held by RMS over OV-102's PLB
1992-11-01
STS052-71-057 (22 Oct-1 Nov 1992) --- This 70mm frame, photographed with a handheld Hasselblad camera aimed through Columbia's aft flight deck windows, captures the operation of the Space Vision System (SVS) experiment above the cargo bay. Target dots have been placed on the Canadian Target Assembly (CTA), a small satellite, in the grasp of the Canadian-built remote manipulator system (RMS) arm. SVS utilized a Shuttle TV camera to monitor the dots strategically arranged on the satellite, to be tracked. As the satellite moved via the arm, the SVS computer measured the changing position of the dots and provided real-time television display of the location and orientation of the CTA. This type of displayed information is expected to help an operator guide the RMS or the Mobile Servicing System (MSS) of the future when berthing or deploying satellites. Also visible in the frame is the U.S. Microgravity Payload (USMP-01).
Pancam Imaging of the Mars Exploration Rover Landing Sites in Gusev Crater and Meridiani Planum
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Squyres, S. W.; Arvidson, R. E.; Arneson, H. M.; Bass, D.; Cabrol, N.; Calvin, W.; Farmer, J.; Farrand, W. H.
2004-01-01
The Mars Exploration Rovers carry four Panoramic Camera (Pancam) instruments (two per rover) that have obtained high resolution multispectral and stereoscopic images for studies of the geology, mineralogy, and surface and atmospheric physical properties at both rover landing sites. The Pancams are also providing significant mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach imaging products.
NASA Astrophysics Data System (ADS)
Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent
2003-10-01
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan on implementing additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty and the activity of the tracked object in relation to sensitive features of the environment.
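The abstract above relies on the Itti and Koch centre-surround saliency idea. Below is a highly simplified, intensity-only sketch of that idea: fine-scale minus coarse-scale Gaussian-blurred images highlight locally conspicuous regions. The real model also uses colour, orientation, and motion channels plus cross-scale normalisation; the sigma values and synthetic frame here are illustrative assumptions.

```python
# Toy intensity-channel centre-surround saliency map.
import numpy as np
from scipy.ndimage import gaussian_filter


def intensity_saliency(gray, center_sigmas=(2, 4), surround_sigmas=(8, 16)):
    sal = np.zeros_like(gray, dtype=float)
    for c in center_sigmas:
        for s in surround_sigmas:
            center = gaussian_filter(gray, c)
            surround = gaussian_filter(gray, s)
            sal += np.abs(center - surround)      # centre-surround difference
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)              # normalise to [0, 1]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.normal(0.5, 0.05, (120, 160))
    frame[50:60, 70:80] += 0.6                    # a conspicuous bright patch
    sal = intensity_saliency(frame)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    print("most salient location (row, col):", y, x)   # near the patch
```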
Bioluminescence Tomography–Guided Radiation Therapy for Preclinical Research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Bin; Wang, Ken Kang-Hsin, E-mail: kwang27@jhmi.edu; Yu, Jingjing
Purpose: In preclinical radiation research, it is challenging to localize soft tissue targets based on cone beam computed tomography (CBCT) guidance. As a more effective method to localize soft tissue targets, we developed an online bioluminescence tomography (BLT) system for the small-animal radiation research platform (SARRP). We demonstrated BLT-guided radiation therapy and validated targeting accuracy based on a newly developed reconstruction algorithm. Methods and Materials: The BLT system was designed to dock with the SARRP for image acquisition and to be detached before radiation delivery. A 3-mirror system was devised to reflect the bioluminescence emitted from the subject to a stationary charge-coupled device (CCD) camera. Multispectral BLT and the incomplete variables truncated conjugate gradient method with a permissible region shrinking strategy were used as the optimization scheme to reconstruct bioluminescent source distributions. To validate BLT targeting accuracy, a small cylindrical light source with high CBCT contrast was placed in a phantom and also in the abdomen of a mouse carcass. The center of mass (CoM) of the source was recovered from BLT and used to guide radiation delivery. The accuracy of the BLT-guided targeting was validated with films and compared with the CBCT-guided delivery. In vivo experiments were conducted to demonstrate BLT localization capability for various source geometries. Results: Online BLT was able to recover the CoM of the embedded light source with an average accuracy of 1 mm compared with CBCT localization. Differences between BLT- and CBCT-guided irradiation shown on the films were consistent with the source localization revealed in the BLT and CBCT images. In vivo results demonstrated that our BLT system could potentially be applied for multiple targets and tumors. Conclusions: The online BLT/CBCT/SARRP system provides an effective solution for soft tissue targeting, particularly for small, nonpalpable, or orthotopic tumor models.
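The targeting metric in the abstract above is the offset between the centre of mass (CoM) of the reconstructed bioluminescent source and the CBCT-derived target position. The sketch below computes that metric on a synthetic voxel grid; the grid size, voxel spacing, and Gaussian "reconstruction" are illustrative assumptions, not the authors' reconstruction algorithm.

```python
# Toy check of the CoM-based targeting accuracy metric.
import numpy as np
from scipy.ndimage import center_of_mass

voxel_mm = 0.5                                     # assumed isotropic voxel size
true_center = np.array([30.2, 28.7, 31.5])         # "CBCT" target, voxel units

# stand-in for a reconstructed source distribution: a blurred blob at the target
zz, yy, xx = np.mgrid[0:60, 0:60, 0:60]
r2 = (zz - true_center[0])**2 + (yy - true_center[1])**2 + (xx - true_center[2])**2
recon = np.exp(-r2 / (2 * 3.0**2))

com = np.array(center_of_mass(recon))              # intensity-weighted CoM
error_mm = np.linalg.norm(com - true_center) * voxel_mm
print(f"CoM-to-CBCT offset: {error_mm:.3f} mm")
```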
Smartphone-Guided Needle Angle Selection During CT-Guided Procedures.
Xu, Sheng; Krishnasamy, Venkatesh; Levy, Elliot; Li, Ming; Tse, Zion Tsz Ho; Wood, Bradford John
2018-01-01
In CT-guided intervention, translation from a planned needle insertion angle to the actual insertion angle is estimated only with the physician's visuospatial abilities. An iPhone app was developed to reduce reliance on operator ability to estimate and reproduce angles. The iPhone app overlays the planned angle on the smartphone's camera display in real time based on the smartphone's orientation. The needle's angle is selected by visually comparing the actual needle with the guideline in the display. If the smartphone's screen is perpendicular to the planned path, the smartphone shows the Bull's-Eye View mode, in which the angle is selected once the needle's hub overlaps its tip in the camera view. In phantom studies, we evaluated the accuracies of the hardware, the Guideline mode, and the Bull's-Eye View mode and showed the app's clinical efficacy. A proof-of-concept clinical case was also performed. The hardware accuracy was 0.37° ± 0.27° (mean ± SD). The mean error and navigation time were 1.0° ± 0.9° and 8.7 ± 2.3 seconds for a senior radiologist with 25 years' experience and 1.5° ± 1.3° and 8.0 ± 1.6 seconds for a junior radiologist with 4 years' experience. The accuracy of the Bull's-Eye View mode was 2.9° ± 1.1°. Combined CT and smartphone guidance was significantly more accurate than CT-only guidance for the first needle pass (p = 0.046), which led to a smaller final targeting error (mean distance from needle tip to target, 2.5 vs 7.9 mm). Mobile devices can be useful for guiding needle-based interventions. The hardware is low cost and widely available. The method is accurate, effective, and easy to implement.
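The quantity underlying the app described above is the angular difference between the planned insertion direction (from the CT plan) and the needle axis implied by the device's current orientation, which the operator tries to drive to zero. The vectors and orientation handling in the sketch below are illustrative assumptions, not the published app's code.

```python
# Toy angle comparison between a planned needle path and the current needle axis.
import numpy as np


def angle_between(v1, v2):
    v1, v2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))


# planned path: 20 degrees off vertical in the scanner's sagittal plane (assumed)
planned = np.array([0.0, np.sin(np.radians(20)), np.cos(np.radians(20))])
# needle axis inferred from the device's orientation sensor (assumed reading)
current = np.array([0.02, 0.30, 0.95])

print(f"angle to planned path: {angle_between(planned, current):.1f} deg")
```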
Chun, Hyeong Jin; Han, Yong Duk; Park, Yoo Min; Kim, Ka Ram; Lee, Seok Jae; Yoon, Hyun C
2018-03-06
To overcome the time and space constraints in disease diagnosis via the biosensing approach, we developed a new signal-transducing strategy that can be applied to colorimetric optical biosensors. Our study is focused on implementation of a signal transduction technology that can directly translate the color intensity signals (which require complicated optical equipment for analysis) into signals that can be easily counted with the naked eye. Based on the selective light absorption and wavelength-filtering principles, our new optical signaling transducer was built from a common computer monitor and a smartphone. In this signal transducer, the liquid crystal display (LCD) panel of the computer monitor served as a light source and a signal guide generator. In addition, the smartphone was used as an optical receiver and signal display. As a biorecognition layer, a transparent and soft material-based biosensing channel was employed, generating a blue output via a target-specific bienzymatic chromogenic reaction. Using graphics editor software, we displayed the optical signal guide patterns containing multiple polygons (a triangle, circle, pentagon, heptagon, and 3/4 circle, each associated with a specified color ratio) on the LCD monitor panel. During observation of signal guide patterns displayed on the LCD monitor panel using a smartphone camera via the target analyte-loaded biosensing channel as a color-filtering layer, the number of observed polygons changed according to the concentration of the target analyte via the spectral correlation between absorbance changes in a solution of the biosensing channel and color emission properties of each type of polygon. By simple counting of the changes in the number of polygons registered by the smartphone camera, we could efficiently measure the concentration of a target analyte in a sample without complicated and expensive optical instruments. In a demonstration test on glucose as a model analyte, we could easily measure the concentration of glucose in the range from 0 to 10 mM.
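The readout described above reduces to counting how many displayed polygons remain visible through the colour-filtering biosensing channel as analyte concentration changes. The toy sketch below mimics that counting logic; the per-polygon colour ratios, the absorbance-versus-concentration relation, and the visibility threshold are all invented for illustration and are not from the paper.

```python
# Toy model of the polygon-counting colorimetric readout.
import numpy as np

# fraction of each polygon's emission that passes a blue filter (assumed)
polygon_blue_fraction = {"triangle": 0.95, "circle": 0.75, "pentagon": 0.55,
                         "heptagon": 0.35, "3/4 circle": 0.15}


def visible_polygons(glucose_mM, visibility_threshold=0.5):
    # assumed linear growth of blue reaction-product absorbance with concentration
    absorbance = 0.08 * glucose_mM
    transmitted_nonblue = 10 ** (-absorbance)      # Beer-Lambert-style attenuation
    visible = []
    for name, blue_frac in polygon_blue_fraction.items():
        perceived = blue_frac + (1 - blue_frac) * transmitted_nonblue
        if perceived >= visibility_threshold:
            visible.append(name)
    return visible


for c in (0, 2, 5, 10):
    print(f"{c:>2} mM glucose -> {len(visible_polygons(c))} polygons visible")
```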
Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation
Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.
2003-01-01
The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics designs were optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio of f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight-position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from -55°C to +5°C. An onboard calibration target and fiducial marks provide the capability to validate the radiometric and geometric calibration on Mars. Copyright 2003 by the American Geophysical Union.
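The optical numbers quoted above are mutually consistent: the instantaneous field of view (IFOV) equals pixel pitch divided by focal length, and the full field of view is the IFOV times the 1024-pixel active array width. The short check below uses only the figures quoted in this abstract; the implied pixel pitch is derived, not stated in the text.

```python
# Consistency check of the quoted Pancam optical parameters.
import math

focal_length_mm = 43.0
ifov_mrad = 0.27
active_pixels = 1024

implied_pixel_pitch_um = ifov_mrad * 1e-3 * focal_length_mm * 1e3  # micrometres
fov_deg = math.degrees(ifov_mrad * 1e-3 * active_pixels)

print(f"implied pixel pitch ~{implied_pixel_pitch_um:.1f} um")
print(f"full field of view ~{fov_deg:.1f} deg")   # ~16 deg, matching the text
```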
Automatic Focus Adjustment of a Microscope
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance
2005-01-01
AUTOFOCUS is a computer program for use in a control system that automatically adjusts the position of an instrument arm that carries a microscope equipped with an electronic camera. In the original intended application of AUTOFOCUS, the imaging microscope would be carried by an exploratory robotic vehicle on a remote planet, but AUTOFOCUS could also be adapted to similar applications on Earth. Initially control software other than AUTOFOCUS brings the microscope to a position above a target to be imaged. Then the instrument arm is moved to lower the microscope toward the target: nominally, the target is approached from a starting distance of 3 cm in 10 steps of 3 mm each. After each step, the image in the camera is subjected to a wavelet transform, which is used to evaluate the texture in the image at multiple scales to determine whether and by how much the microscope is approaching focus. A focus measure is derived from the transform and used to guide the arm to bring the microscope to the focal height. When the analysis reveals that the microscope is in focus, image data are recorded and transmitted.
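The abstract above describes deriving a focus measure from a wavelet transform of each camera frame and stepping the arm toward the height that maximises it. The sketch below is an illustrative wavelet-energy focus measure of that kind (not the AUTOFOCUS code), built with PyWavelets; the wavelet choice, blur levels, and random test image are assumptions.

```python
# Toy wavelet-based focus measure: high-frequency subband energy rises with sharpness.
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter


def wavelet_focus_measure(gray, wavelet="db2"):
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), wavelet)
    return float(np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))                 # stand-in for an in-focus frame
    for blur_sigma in (0.0, 1.0, 2.0, 4.0):        # increasing defocus
        frame = gaussian_filter(sharp, blur_sigma) if blur_sigma else sharp
        print(f"sigma={blur_sigma:3.1f}  focus measure="
              f"{wavelet_focus_measure(frame):10.1f}")
```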
Baikejiang, Reheman; Zhang, Wei; Li, Changqing
2017-01-01
Diffuse optical tomography (DOT) has attracted attention over the last two decades due to its intrinsic sensitivity in imaging chromophores of tissues such as hemoglobin, water, and lipid. However, DOT has not been clinically accepted yet due to its low spatial resolution caused by strong optical scattering in tissues. Structural guidance provided by an anatomical imaging modality enhances DOT imaging substantially. Here, we propose a computed tomography (CT) guided multispectral DOT imaging system for breast cancer imaging. To validate its feasibility, we have built a prototype DOT imaging system which consists of a laser at the wavelength of 650 nm and an electron multiplying charge coupled device (EMCCD) camera. We have validated the CT guided DOT reconstruction algorithms with numerical simulations and phantom experiments, in which different imaging setup parameters, such as the number of projection measurements and the width of the measurement patch, have been investigated. Our results indicate that an air-cooled EMCCD camera is sufficient for transmission-mode DOT imaging. We have also found that measurements at six angular projections are sufficient for DOT to reconstruct the optical targets with 2 and 4 times absorption contrast when the CT guidance is applied. Finally, we have described our future research plan for integrating a multispectral DOT imaging system into a breast CT scanner.
Design of an infrared camera based aircraft detection system for laser guide star installations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedman, H.; Macintosh, B.
1996-03-05
There have been incidents in which the irradiance resulting from laser guide stars has temporarily blinded pilots or passengers of aircraft. An aircraft detection system based on passive near-infrared cameras (instead of active radar) is described in this report.
Two-Camera Acquisition and Tracking of a Flying Target
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter
2008-01-01
A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
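The hand-off step described above amounts to detecting the target in the wide-field image, converting its pixel position to azimuth/elevation via the calibration, and commanding the gimbal. The sketch below illustrates that step under a simplified affine pixel-to-angle calibration; the plate scale, pointing constants, and synthetic frame are assumptions, not the described apparatus.

```python
# Toy wide-field detection and pixel -> azimuth/elevation hand-off to a gimbal.
import numpy as np

# assumed affine pixel->angle calibration for the wide-field camera (degrees)
AZ0, EL0 = 180.0, 45.0            # pointing of the image centre
DEG_PER_PX = 0.09                 # plate scale


def brightest_pixel(frame):
    row, col = np.unravel_index(np.argmax(frame), frame.shape)
    return row, col


def pixel_to_az_el(row, col, shape):
    d_col = col - (shape[1] - 1) / 2.0
    d_row = row - (shape[0] - 1) / 2.0
    return AZ0 + d_col * DEG_PER_PX, EL0 - d_row * DEG_PER_PX


if __name__ == "__main__":
    frame = np.zeros((480, 640))
    frame[120, 500] = 1.0                          # simulated luminous target
    r, c = brightest_pixel(frame)
    az, el = pixel_to_az_el(r, c, frame.shape)
    print(f"command gimbal to az={az:.2f} deg, el={el:.2f} deg")
```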
User guide for the USGS aerial camera Report of Calibration.
Tayman, W.P.
1984-01-01
Calibration and testing of aerial mapping cameras includes the measurement of optical constants and the check for proper functioning of a number of complicated mechanical and electrical parts. For this purpose the US Geological Survey performs an operational type photographic calibration. This paper is not strictly a scientific paper but rather a 'user guide' to the USGS Report of Calibration of an aerial mapping camera for compliance with both Federal and State mapping specifications. -Author
The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation
NASA Astrophysics Data System (ADS)
Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team
2002-12-01
The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360o of azimuth and from zenith to nadir, providing a complete view of the scene around the rover. Pancam utilizes two 1024x2048 Mitel frame transfer CCD detector arrays, each having a 1024x1024 active imaging area and 32 optional additional reference pixels per row for offset monitoring. Each array is combined with optics and a small filter wheel to become one "eye" of a multispectral, stereoscopic imaging system. The optics for both cameras consist of identical 3-element symmetrical lenses with an effective focal length of 42 mm and a focal ratio of f/20, yielding an IFOV of 0.28 mrad/pixel or a rectangular FOV of 16o\\x9D 16o per eye. The two eyes are separated by 30 cm horizontally and have a 1o toe-in to provide adequate parallax for stereo imaging. The cameras are boresighted with adjacent wide-field stereo Navigation Cameras, as well as with the Mini-TES instrument. The Pancam optical design is optimized for best focus at 3 meters range, and allows Pancam to maintain acceptable focus from infinity to within 1.5 meters of the rover, with a graceful degradation (defocus) at closer ranges. Each eye also contains a small 8-position filter wheel to allow multispectral sky imaging, direct Sun imaging, and surface mineralogic studies in the 400-1100 nm wavelength region. Pancam has been designed and calibrated to operate within specifications from -55oC to +5oC. An onboard calibration target and fiducial marks provide the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.
Field-of-View Guiding Camera on the HISAKI (SPRINT-A) Satellite
NASA Astrophysics Data System (ADS)
Yamazaki, A.; Tsuchiya, F.; Sakanoi, T.; Uemizu, K.; Yoshioka, K.; Murakami, G.; Kagitani, M.; Kasaba, Y.; Yoshikawa, I.; Terada, N.; Kimura, T.; Sakai, S.; Nakaya, K.; Fukuda, S.; Sawai, S.
2014-11-01
The HISAKI (SPRINT-A) satellite is an Earth-orbiting extreme ultraviolet (EUV) spectroscopic mission launched on 14 September 2013 by the Epsilon-1 launch vehicle. The extreme ultraviolet spectroscope (EXCEED) onboard the satellite will investigate plasma dynamics in Jupiter's inner magnetosphere and atmospheric escape from Venus and Mars. EUV spectroscopy is useful for measuring electron density, electron temperature, and ion composition in a plasma environment. EXCEED also has the advantage of measuring the spatial distribution of plasmas around the planets. To measure the radial plasma distribution in the Jovian inner magnetosphere and to separate plasma emissions from the ionosphere, exosphere, and tail (for Venus and Mars), the pointing accuracy of the spectroscope should be smaller than the spatial structures of interest (20 arc-seconds). For satellites in low Earth orbit (LEO), pointing displacement is generally caused by a change of alignment between the satellite bus module and the telescope due to the changing thermal inputs from the Sun and Earth. The HISAKI satellite is designed to compensate for this displacement by tracking the target using a Field-Of-View (FOV) guiding camera. Initial checkout of the attitude control for the EXCEED observation shows that the pointing accuracy was kept within 2 arc-seconds in the "track mode" used for Jupiter observations. For observations of Mercury, Venus, Mars, and Saturn, the entire disk will be guided inside the slit to observe plasma around the planets. Since the FOV camera does not capture the disk in this case, the satellite uses a star tracker (STT) to hold the attitude ("hold mode"). Pointing accuracy during this mode has been 20-25 arc-seconds. It has been confirmed that the attitude control works well as designed.
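The guiding loop described above images the target with the FOV camera, computes the image centroid, and feeds the offset from the desired slit position back to the attitude control system. The sketch below illustrates the centroid-and-offset step only; the plate scale, slit position, and synthetic guide frame are assumptions, not HISAKI flight parameters.

```python
# Toy guide-camera centroiding and pointing-offset computation.
import numpy as np
from scipy.ndimage import center_of_mass

ARCSEC_PER_PIXEL = 2.0                       # assumed guide-camera plate scale
SLIT_POSITION = np.array([60.0, 60.0])       # desired target centroid (row, col)


def pointing_offset_arcsec(guide_frame):
    """Offset of the target centroid from the slit position, in arc-seconds."""
    background_subtracted = np.clip(guide_frame - np.median(guide_frame), 0.0, None)
    centroid = np.array(center_of_mass(background_subtracted))
    return (centroid - SLIT_POSITION) * ARCSEC_PER_PIXEL


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frame = rng.normal(5.0, 0.5, (120, 120))                    # sky + read noise
    yy, xx = np.mgrid[0:120, 0:120]
    frame += 1000 * np.exp(-((yy - 63)**2 + (xx - 58)**2) / (2 * 3.0**2))  # target
    d_row, d_col = pointing_offset_arcsec(frame)
    print(f"attitude correction: {d_row:+.1f} arcsec, {d_col:+.1f} arcsec")
```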
NASA Astrophysics Data System (ADS)
Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried
2017-09-01
Occlusion is one of the most difficult challenges in the area of visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target by taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of observations over all targets being tracked. When it cannot adequately observe a target, a smart camera estimates the quality of observation of the target from the viewpoints of other assisting cameras. If a camera with better observation of the target is found, the tracking task of the target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras. Thus, the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison with state-of-the-art trackers, which our method outperforms.
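The camera-selection idea above scores each camera's view of a target with an energy that grows with person-person overlap and with occlusion by static objects, then hands the target to the camera with the lowest energy. The sketch below illustrates that selection logic; the energy form, weights, and bounding boxes are invented assumptions, not the authors' formulation.

```python
# Toy occlusion-aware observation energy and camera selection.
def iou_fraction(box_a, box_b):
    """Fraction of box_a covered by box_b; boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / area_a if area_a > 0 else 1.0


def observation_energy(target_box, other_person_boxes, object_occlusion_frac,
                       w_person=1.0, w_object=1.5):
    person_term = sum(iou_fraction(target_box, b) for b in other_person_boxes)
    return w_person * person_term + w_object * object_occlusion_frac


if __name__ == "__main__":
    # the same target seen from two cameras (assumed detections)
    views = {
        "cam_A": dict(target=(100, 80, 160, 260),
                      others=[(130, 90, 200, 270)], obj_frac=0.10),
        "cam_B": dict(target=(300, 60, 360, 250),
                      others=[], obj_frac=0.05),
    }
    scores = {name: observation_energy(v["target"], v["others"], v["obj_frac"])
              for name, v in views.items()}
    best = min(scores, key=scores.get)
    print(scores, "-> track with", best)
```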
Visual control of robots using range images.
Pomares, Jorge; Gil, Pablo; Torres, Fernando
2010-01-01
In recent years, 3D-vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.
Imaging probe for breast cancer localization
NASA Astrophysics Data System (ADS)
Soluri, A.; Scafè, R.; Capoccetti, F.; Burgio, N.; Schiaratura, A.; Pani, R.; Pellegrini, R.; Cinti, M. N.; Mechella, M.; Amanti, A.; David, V.; Scopinaro, F.
2003-01-01
High-spatial-resolution, small field-of-view (FOV), fully portable scintillation cameras are lower in cost and weight than large-FOV, non-transportable Anger gamma cameras. Portable cameras allow easy transfer of the detector, and thus of radioisotope imaging, to where the biopsy procedure takes place. In this paper we describe a preliminary experience of radionuclide breast cancer (BC) imaging with a 22.8×22.8 mm² FOV minicamera, already used by our group for sentinel node detection under the name Imaging Probe (IP). In this work, IP BC detection was performed with the aim of guiding biopsy, in particular open biopsy, or to help or modify fine-needle or needle addressing when the main guiding method was echography or digital radiography. The IP prototype weighs about 1 kg. This small scintillation camera is based on the compact position-sensitive photomultiplier tube Hamamatsu R7600-00-C8, coupled to a CsI(Tl) scintillation array with 2.6×2.6×5.0 mm³ crystal-pixel size. The spatial resolution of the IP was 2.5 mm full-width at half-maximum in laboratory tests. The IP was provided with acquisition software allowing a quick change of the number of pixels in the computer acquisition frame, and with an on-line image-smoothing program. Both programs were developed to allow nuclear physicians to quickly locate the target source while the patient was anesthetized in the operating room under sterile conditions. 99mTc Sestamibi (MIBI) was injected at a dose of 740 MBq, 1 h before imaging and biopsy, in 14 patients with suspected or known BC. Scintigraphic images were acquired before and after biopsy in each patient. The operator was allowed to take into account the scintigraphic images as well as previously performed X-ray mammograms and echographies. High-resolution IP images were able to guide biopsy toward the cancer, or toward washout zones of the cancer that are thought to be chemoresistant, in 7 patients out of 10. Four patients, in whom IP and MIBI were not able to guide biopsy, did not show cancer. Two patients, in whom biopsy was performed in the high-washout zone, did show Multi Drug Resistance (MDR) gene product at immunohistochemistry on the biopsy samples. Specific radioactivity was measured on the biopsy specimens, and the measurements confirmed the heterogeneous distribution of MIBI within the cancers. Our study confirms the ability of the IP to guide breast biopsy even when the mini-camera has to be manually handled by trained physicians during the operation.
NASA Technical Reports Server (NTRS)
Brower, S. J.; Ridd, M. K.
1984-01-01
The use of the Environmental Protection Agency (EPA) Enviropod camera system is detailed in this handbook which contains a step-by-step guide for mission planning, flights, film processing, indexing, and documentation. Information regarding Enviropod equipment and specifications is included.
Development of digital shade guides for color assessment using a digital camera with ring flashes.
Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan
2011-02-01
Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and camera white-balance setups would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either a light-emitting diode (LED) or an electronic ring flash. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived with Photoshop software and served as digital shade guides. We found high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated with CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using the digital shade guides, verified by ten operators, improved from 67% with AWB to 93% with CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white-balance setups showed less reliable and relatively low matching ability. In conclusion, the reliability of color matching with digital images is strongly influenced by the illuminants and the camera's white-balance setup, while digital shade guides derived under LED illuminants with CWB show applicable potential in the field of color assessment.
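A minimal sketch of how shade matching with such digital guides can work: each guide entry is stored as an L*a*b* triple and a sample is assigned to the entry with the smallest CIE76 colour difference. The guide values below are hypothetical placeholders, not measurements from the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB triples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def best_match(sample_lab, shade_guide):
    """Return the shade-guide entry closest to the sampled L*a*b* value."""
    return min(shade_guide, key=lambda name: delta_e_ab(sample_lab, shade_guide[name]))

# Hypothetical guide values, e.g. derived from CWB/LED images of known shade tabs.
guide = {"A1": (76.0, 1.5, 16.0), "A2": (73.5, 2.0, 19.0), "B1": (77.0, 0.5, 13.0)}
print(best_match((74.0, 1.8, 18.2), guide))   # -> 'A2'
```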
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
…target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image…fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The…moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.
Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young
2016-04-01
In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position in the patient. However, frequent fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information for the surgical instruments and the target. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This provides easy augmentation of the camera image and the X-ray image. Further, the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation with the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in 3D space with minimum radiation exposure and to verify whether the instruments reach the surgical target observed in the fluoroscopic images.
Analysis of calibration accuracy of cameras with different target sizes for large field of view
NASA Astrophysics Data System (ADS)
Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan
2018-03-01
Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding and machinery manufacturing. Camera calibration for a large field of view is a critical part of visual measurement. A large-scale target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of target size to camera field of view must be studied to ensure the calibration precision required for a wide field of view. In this paper, the cameras are calibrated with a series of checkerboard and circular calibration targets of different dimensions. The ratios of the target size to the camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain the camera parameters for each position. Then, the distribution curves of the mean reprojection error of the reconstructed feature points are analysed for the different ratios. The experimental data demonstrate that as the ratio of target size to camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio exceeds 45%.
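For reference, the reprojection error compared across the different target-size ratios can be obtained directly from a standard checkerboard calibration; the minimal OpenCV sketch below (assuming a 9x6 inner-corner board and images in ./calib/, both assumptions for illustration) returns the RMS reprojection error as its first output.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # assumed inner-corner layout
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.png"):              # assumed image location
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# First return value is the RMS reprojection error in pixels.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```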
Brighton, Caroline H.; Thomas, Adrian L. R.
2017-01-01
The ability to intercept uncooperative targets is key to many diverse flight behaviors, from courtship to predation. Previous research has looked for simple geometric rules describing the attack trajectories of animals, but the underlying feedback laws have remained obscure. Here, we use GPS loggers and onboard video cameras to study peregrine falcons, Falco peregrinus, attacking stationary targets, maneuvering targets, and live prey. We show that the terminal attack trajectories of peregrines are not described by any simple geometric rule as previously claimed, and instead use system identification techniques to fit a phenomenological model of the dynamical system generating the observed trajectories. We find that these trajectories are best—and exceedingly well—modeled by the proportional navigation (PN) guidance law used by most guided missiles. Under this guidance law, turning is commanded at a rate proportional to the angular rate of the line-of-sight between the attacker and its target, with a constant of proportionality (i.e., feedback gain) called the navigation constant (N). Whereas most guided missiles use navigation constants falling on the interval 3 ≤ N ≤ 5, peregrine attack trajectories are best fitted by lower navigation constants (median N < 3). This lower feedback gain is appropriate at the lower flight speed of a biological system, given its presumably higher error and longer delay. This same guidance law could find use in small visually guided drones designed to remove other drones from protected airspace. PMID:29203660
Brighton, Caroline H; Thomas, Adrian L R; Taylor, Graham K
2017-12-19
The ability to intercept uncooperative targets is key to many diverse flight behaviors, from courtship to predation. Previous research has looked for simple geometric rules describing the attack trajectories of animals, but the underlying feedback laws have remained obscure. Here, we use GPS loggers and onboard video cameras to study peregrine falcons, Falco peregrinus, attacking stationary targets, maneuvering targets, and live prey. We show that the terminal attack trajectories of peregrines are not described by any simple geometric rule as previously claimed, and instead use system identification techniques to fit a phenomenological model of the dynamical system generating the observed trajectories. We find that these trajectories are best, and exceedingly well, modeled by the proportional navigation (PN) guidance law used by most guided missiles. Under this guidance law, turning is commanded at a rate proportional to the angular rate of the line-of-sight between the attacker and its target, with a constant of proportionality (i.e., feedback gain) called the navigation constant (N). Whereas most guided missiles use navigation constants falling on the interval 3 ≤ N ≤ 5, peregrine attack trajectories are best fitted by lower navigation constants (median N < 3). This lower feedback gain is appropriate at the lower flight speed of a biological system, given its presumably higher error and longer delay. This same guidance law could find use in small visually guided drones designed to remove other drones from protected airspace. Copyright © 2017 the Author(s). Published by PNAS.
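A minimal sketch of the proportional navigation law described above, in two dimensions: the commanded turn rate is the navigation constant N times the rotation rate of the line of sight, with N below 3 as fitted to the falcon trajectories. The numerical scenario is purely illustrative.

```python
import math

def pn_turn_rate(attacker_pos, attacker_vel, target_pos, target_vel, N=2.5):
    """Commanded turn rate (rad/s) under 2-D proportional navigation:
    N times the line-of-sight rotation rate."""
    rx, ry = target_pos[0] - attacker_pos[0], target_pos[1] - attacker_pos[1]
    vx, vy = target_vel[0] - attacker_vel[0], target_vel[1] - attacker_vel[1]
    r2 = rx * rx + ry * ry
    los_rate = (rx * vy - ry * vx) / r2     # time derivative of atan2(ry, rx)
    return N * los_rate

# Target 100 m dead ahead, drifting across the line of sight at 5 m/s:
print(pn_turn_rate((0, 0), (20, 0), (100, 0), (0, 5)))  # 2.5 * 5/100 = 0.125 rad/s
```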
Mast Camera and Its Calibration Target on Curiosity Rover
2013-03-18
This set of images illustrates the twin cameras of the Mastcam instrument on NASA's Curiosity Mars rover (upper left), the Mastcam calibration target (lower center), and the locations of the cameras and target on the rover.
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach is possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also takes much fewer iteration steps in comparison with conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts to MTF (resolution), SNR and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as in comparing various systems by presenting the exact same scenes to the cameras in a repeatable way.
Control Program for an Optical-Calibration Robot
NASA Technical Reports Server (NTRS)
Johnston, Albert
2005-01-01
A computer program provides semiautomatic control of a moveable robot used to perform optical calibration of video-camera-based optoelectronic sensor systems that will be used to guide automated rendezvous maneuvers of spacecraft. The function of the robot is to move a target and hold it at specified positions. With the help of limit switches, the software first centers or finds the target. Then the target is moved to a starting position. Thereafter, with the help of an intuitive graphical user interface, an operator types in coordinates of specified positions, and the software responds by commanding the robot to move the target to the positions. The software has capabilities for correcting errors and for recording data from the guidance-sensor system being calibrated. The software can also command that the target be moved in a predetermined sequence of motions between specified positions and can be run in an advanced control mode in which, among other things, the target can be moved beyond the limits set by the limit switches.
Reitsamer, H; Groiss, H P; Franz, M; Pflug, R
2000-01-31
We present a computer-guided microelectrode positioning system that is routinely used in our laboratory for intracellular electrophysiology and functional staining of retinal neurons. Wholemount preparations of isolated retina are kept in a superfusion chamber on the stage of an inverted microscope. Cells and layers of the retina are visualized by Nomarski interference contrast using infrared light in combination with a CCD camera system. After five-point calibration has been performed, the electrode can be guided to any point inside the calibrated volume without moving the retina. Electrode deviations from target cells can be corrected by the software, further improving the precision of this system. The good visibility of cells avoids prelabeling with fluorescent dyes and makes it possible to work under completely dark-adapted conditions.
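A least-squares fit of a 3-D affine map is one simple way to realise this kind of point-based calibration; the sketch below is a hypothetical stand-in for the instrument's five-point routine (its exact model is not described above), fitting dst ≈ A·src + t from corresponding point pairs between two coordinate frames.

```python
import numpy as np

def fit_affine_3d(src_pts, dst_pts):
    """Least-squares 3-D affine map (A, t) with dst ~ A @ src + t, from at
    least four non-coplanar point pairs (five are used in the system above)."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    X = np.hstack([src, np.ones((len(src), 1))])     # (n, 4) design matrix
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)      # (4, 3) solution
    return M[:3].T, M[3]

# Synthetic calibration points (e.g. microscope-stage vs. manipulator frame).
src = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
dst = [[2, 1, 0], [3, 1, 0], [2, 2, 0.1], [2, 1, 1], [3.0, 2.0, 1.1]]
A, t = fit_affine_3d(src, dst)
print(A.round(2), t.round(2))
```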
Astrophotography Basics: Meteors, Comets, Eclipses, Aurorae, Star Trails. Revised.
ERIC Educational Resources Information Center
Eastman Kodak Co., Rochester, NY.
This pamphlet gives an introduction to the principles of astronomical picture-taking. Chapters included are: (1) "Getting Started" (describing stationary cameras, sky charts and mapping, guided cameras, telescopes, brightness of astronomical subjects, estimating exposure, film selection, camera filters, film processing, and exposure for…
Lincoln Penny on Mars in Camera Calibration Target
2012-09-10
The penny in this image is part of a camera calibration target on NASA's Mars rover Curiosity. The MAHLI camera on the rover took this image of the MAHLI calibration target during the 34th Martian day of Curiosity's work on Mars, Sept. 9, 2012.
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and functions of consumer-grade digital cameras have increased remarkably thanks to modern semiconductor and digital technology, and many low-priced consumer-grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is in great demand in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star-tracker issue, and many target location algorithms have been proposed. It is widely accepted that the least-squares model with ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirmation of the subpixel target location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motivation, this paper empirically tests several subpixel target location algorithms and an indicator for estimating the accuracy, using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
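As a baseline against which the subpixel algorithms above are compared, the intensity-weighted centroid of a background-subtracted patch is the simplest estimator (least-squares ellipse fitting is generally more accurate). The small self-contained sketch below checks the idea on a synthetic blob; it is illustrative only.

```python
import numpy as np

def weighted_centroid(patch):
    """Sub-pixel centre of a bright circular target as the intensity-weighted
    centroid of a background-subtracted image patch."""
    p = patch.astype(float) - patch.min()
    total = p.sum()
    ys, xs = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    return (xs * p).sum() / total, (ys * p).sum() / total

# Synthetic 2-D Gaussian blob centred at (10.3, 7.8)
ys, xs = np.mgrid[0:16, 0:21].astype(float)
blob = np.exp(-((xs - 10.3) ** 2 + (ys - 7.8) ** 2) / 8.0)
print(weighted_centroid(blob))   # ~ (10.3, 7.8)
```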
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
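For intuition, the pointing problem in the ideal case reduces to two arctangents. The sketch below deliberately ignores the camera-to-masthead offsets and the off-centre optical axis that the closed-form solutions above actually account for, and the axis convention is an assumption.

```python
import math

def pan_tilt_to_target(target_xyz):
    """Ideal-case pan/tilt (radians) that point the boresight at a target
    expressed in the masthead base frame (assumed: x forward, y left, z up)."""
    x, y, z = target_xyz
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt

print(pan_tilt_to_target((2.0, 0.5, -0.3)))   # ~ (0.245, -0.145) rad
```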
NASA Astrophysics Data System (ADS)
Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling
2018-06-01
Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
Machine vision guided sensor positioning system for leaf temperature assessment
NASA Technical Reports Server (NTRS)
Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)
2001-01-01
A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
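The maximum enclosed circle on a segmented leaf can be found as the peak of a Euclidean distance transform of the leaf mask; the OpenCV sketch below is a generic illustration of that idea on a synthetic elliptical mask, not the paper's exact algorithm.

```python
import cv2
import numpy as np

def largest_enclosed_circle(leaf_mask):
    """Centre (x, y) and radius in pixels of the largest circle inside a binary
    leaf mask, taken as the maximum of the Euclidean distance transform. The
    centre would serve as the aim point for the IR temperature sensor."""
    dist = cv2.distanceTransform(leaf_mask.astype(np.uint8), cv2.DIST_L2, 5)
    _, radius, _, centre = cv2.minMaxLoc(dist)
    return centre, radius

mask = np.zeros((200, 200), np.uint8)
cv2.ellipse(mask, (100, 100), (80, 50), 0, 0, 360, 1, -1)   # synthetic "leaf"
print(largest_enclosed_circle(mask))   # centre near (100, 100), radius ~ 50
```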
A calibration method based on virtual large planar target for cameras with large FOV
NASA Astrophysics Data System (ADS)
Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu
2018-02-01
In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces the precision of calibration, while a large target is difficult to manufacture, carry and employ. In order to solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method can not only achieve calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with a large FOV can be effectively tackled by the proposed method, which also offers good operability.
Camera calibration: active versus passive targets
NASA Astrophysics Data System (ADS)
Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli
2011-11-01
Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
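A minimal sketch of the four-step phase-shift decoding that underlies such active targets: the wrapped phase recovered per pixel encodes the display coordinate independently of local gain and offset, which is why the localisation is largely insensitive to defocus. The synthetic single-pixel check below is illustrative only.

```python
import numpy as np

def decode_phase(i0, i90, i180, i270):
    """Wrapped phase from four frames shifted by 0, 90, 180 and 270 degrees."""
    return np.arctan2(i270 - i90, i0 - i180)

# Synthetic check: one pixel viewing phase 1.2 rad with arbitrary gain and offset.
true_phi, a, b = 1.2, 0.4, 0.5
frames = [b + a * np.cos(true_phi + s) for s in (0.0, np.pi/2, np.pi, 3*np.pi/2)]
print(decode_phase(*frames))   # ~ 1.2, independent of a and b
```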
A novel method to reduce time investment when processing videos from camera trap studies.
Swinnen, Kristijn R R; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig
2014-01-01
Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead were empty recordings or recordings of other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we assumed that recordings with the target species contain on average much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs.
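A simple frame-differencing filter in the spirit of the method described above might look like the OpenCV sketch below: a recording is flagged as "likely target" when the fraction of changed pixels between successive frames exceeds a threshold in any frame. The thresholds are illustrative, would need per-site tuning, and this is not the authors' exact filter.

```python
import cv2
import numpy as np

def has_enough_motion(video_path, pixel_thresh=25, frame_frac=0.02):
    """Return True if any frame pair differs in more than `frame_frac` of its pixels."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    motion = False
    while ok and not motion:
        ok, frame = cap.read()
        if not ok:
            break
        diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
        changed = np.count_nonzero(diff > pixel_thresh) / diff.size
        motion = changed > frame_frac
        prev = frame
    cap.release()
    return motion
```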
Endoscopic laser range scanner for minimally invasive, image guided kidney surgery
NASA Astrophysics Data System (ADS)
Friets, Eric; Bieszczad, Jerry; Kynor, David; Norris, James; Davis, Brynmor; Allen, Lindsay; Chambers, Robert; Wolf, Jacob; Glisson, Courtenay; Herrell, S. Duke; Galloway, Robert L.
2013-03-01
Image guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes implementation of an endoscopic laser range scanner (eLRS) system for accurate, intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates are determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds. Lower resolution scans of 27,500 points are acquired in one second. Measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 +/- 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images, onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.
Pose estimation of industrial objects towards robot operation
NASA Astrophysics Data System (ADS)
Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu
2017-10-01
With the advantages of wide range, non-contact operation and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance and other engineering practice. However, due to the influence of complicated industrial environments, outside interference factors, the lack of object features, camera restrictions and other limitations, visual estimation of target pose still faces many challenges. Focusing on these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape characteristics of objects against a prior 3D model database of targets, the method recognises the target. A pose estimate of the object is then determined using a monocular vision measurement model. The experimental results show that this method can estimate the pose of rigid objects from poor image information, and it provides a guiding basis for the operation of industrial robots.
Space-based infrared sensors of space target imaging effect analysis
NASA Astrophysics Data System (ADS)
Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang
2018-02-01
Target identification is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing ballistic targets above the planet's atmosphere; it then simulates the infrared imaging of such exo-atmospheric ballistic targets from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyses the effects of camera line-of-sight jitter, camera system noise, and waveband on the target image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, S; Charpentier, P; Sayler, E
2015-06-15
Purpose: Isocenter shifts and rotations to correct patient setup errors and organ motion cannot remedy some shape changes of large targets. We are investigating new methods for quantifying target deformation for real-time IGRT of breast and chest wall cancer. Methods: Ninety-five patients with breast or chest wall cancer were accrued in an IRB-approved clinical trial of IGRT using 3D surface images acquired at daily setup and beam-on time via an in-room camera. Shifts and rotations relative to the planned reference surface were determined using iterative-closest-point alignment. Local surface displacements and target deformation are measured via a ray-surface intersection and principal component analysis (PCA) of the external surface, respectively. Isocenter shift, upper-abdominal displacement, and the vectors of the surface projected onto the two principal components, PC1 and PC2, were evaluated for sensitivity and accuracy in detecting target deformation. Setup errors for some deformed targets were estimated by registering the target volume, inner surface, or external surface in weekly CBCT, or their outlines on weekly EPI. Results: Setup differences according to the inner surface, external surface, or target volume could be 1.5 cm. Video surface-guided setup agreed with EPI results to within <0.5 cm, while CBCT results sometimes (∼20%) differed from EPI (>0.5 cm) due to target deformation for some large breasts and some chest walls undergoing deep-breath-hold irradiation. The square root of PC1 and PC2 is very sensitive to external surface deformation and irregular breathing. Conclusion: PCA of external surfaces is a quick and simple way to detect target deformation in IGRT of breast and chest wall cancer. Setup corrections based on the target volume, inner surface, and external surface can be significantly different. Thus, checking for target shape changes is essential for accurate image-guided patient setup and motion tracking of large deformable targets. NIH grant support: the first author as consultant and the last author as PI.
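Generically, PCA-based deformation scoring of this kind can be sketched as follows: daily surface samples are expressed as displacements from a reference surface and projected onto the first two principal components, so a fraction whose shape change cannot be explained by a rigid shift stands out. This is an illustrative sketch with synthetic data, not the trial's exact formulation.

```python
import numpy as np

def pca_deformation_scores(surfaces, reference):
    """PC1/PC2 scores of daily surface displacements. `surfaces` is (n_days, n_points),
    each row the signed displacement sampled at fixed surface points."""
    disp = surfaces - reference
    disp_c = disp - disp.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(disp_c, full_matrices=False)
    return disp_c @ vt[:2].T                         # (n_days, 2) scores

rng = np.random.default_rng(0)
ref = np.zeros(200)
days = rng.normal(0.0, 0.5, (10, 200))
days[7] += np.linspace(0, 5, 200)                    # one fraction with a deformation
print(np.abs(pca_deformation_scores(days, ref))[:, 0].round(1))  # day 7 stands out
```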
Homography-based multiple-camera person-tracking
NASA Astrophysics Data System (ADS)
Turk, Matthew R.
2009-01-01
Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
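A minimal sketch of the plane-induced homography step: once corresponding feet locations of the same person have been collected in two overlapping views, OpenCV's findHomography fits the ground-plane mapping, which can then transfer a detection from one camera into the other for consistent labelling. The point data below are synthetic, with the second view generated from an assumed ground-plane mapping plus noise.

```python
import cv2
import numpy as np

feet_a = np.array([[120, 410], [230, 395], [340, 380], [455, 372],
                   [150, 300], [260, 290], [370, 282], [480, 275]], np.float32)

# Synthetic second view: an assumed ground-plane homography plus pixel noise.
H_true = np.array([[0.9, 0.05, 20], [-0.02, 1.1, 35], [1e-4, 0, 1]])
pts = cv2.perspectiveTransform(feet_a.reshape(-1, 1, 2).astype(np.float64), H_true)
feet_b = (pts.reshape(-1, 2) +
          np.random.default_rng(1).normal(0, 0.5, (8, 2))).astype(np.float32)

H, inliers = cv2.findHomography(feet_a, feet_b, cv2.RANSAC, 3.0)

# Transfer the camera-A feet points into camera B and report the RMS error.
proj = cv2.perspectiveTransform(feet_a.reshape(-1, 1, 2).astype(np.float64), H)
print(np.sqrt(np.mean(np.sum((proj.reshape(-1, 2) - feet_b) ** 2, axis=1))))  # ~0.5 px
```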
NASA Astrophysics Data System (ADS)
Malof, Jordan M.; Collins, Leslie M.
2016-05-01
Many remote sensing modalities have been developed for buried target detection (BTD), each one offering relative advantages over the others. There has been interest in combining several modalities into a single BTD system that benefits from the advantages of each constituent sensor. Recently an approach was developed, called multi-state management (MSM), that aims to achieve this goal by separating BTD system operation into discrete states, each with different sensor activity and system velocity. Additionally, a modeling approach, called Q-MSM, was developed to quickly analyze multi-modality BTD systems operating with MSM. This work extends previous work by demonstrating how Q-MSM modeling can be used to design BTD systems operating with MSM, and to guide research to yield the most performance benefits. In this work an MSM system is considered that combines a forward-looking infrared (FLIR) camera and a ground penetrating radar (GPR). Experiments are conducted using a dataset of real, field-collected, data which demonstrates how the Q-MSM model can be used to evaluate performance benefits of altering, or improving via research investment, various characteristics of the GPR and FLIR systems. Q-MSM permits fast analysis that can determine where system improvements will have the greatest impact, and can therefore help guide BTD research.
Augmented reality system for CT-guided interventions: system description and initial phantom trials
NASA Astrophysics Data System (ADS)
Sauer, Frank; Schoepf, Uwe J.; Khamene, Ali; Vogt, Sebastian; Das, Marco; Silverman, Stuart G.
2003-05-01
We are developing an augmented reality (AR) image guidance system, in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appear firmly anchored in the scene, without any noticeable swimming, jitter, or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.
Scene analysis for effective visual search in rough three-dimensional-modeling scenes
NASA Astrophysics Data System (ADS)
Wang, Qi; Hu, Xiaopeng
2016-11-01
Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when there exist similar distracters in the background. We propose a target search method in rough three-dimensional-modeling scenes based on a vision salience theory and camera imaging model. We give the definition of salience of objects (or features) and explain the way that salience measurements of objects are calculated. Also, we present one type of search path that guides to the target through salience objects. Along the search path, when the previous objects are localized, the search region of each subsequent object decreases, which is calculated through imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters containing similar visual features with the target, leading to an improvement of search speed by over 50%.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super-resolution (SR) algorithm, which uses an HR color image as a guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides better quality HR depth images, both numerically and visually.
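As a simplified stand-in for the fusion filter described above, the sketch below implements a minimal joint (cross) bilateral filter with a grayscale guide: weights combine spatial distance and guide-intensity similarity, so depth edges snap to colour edges when the filter is applied to a naively upsampled depth map. The full method additionally uses a guided-filter term, which this sketch omits.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=2.0, sigma_r=12.0):
    """Naive joint bilateral filter of `depth` using a same-size grayscale `guide`."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    g = guide.astype(float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((g[y0:y1, x0:x1] - g[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = w_s * w_r
            out[y, x] = (wgt * depth[y0:y1, x0:x1]).sum() / wgt.sum()
    return out

# Example use: upsample a low-res depth map 4x with nearest neighbour, then filter
# it against the high-res grayscale guide so the edges realign.
# depth_hr = joint_bilateral_filter(np.kron(depth_lr, np.ones((4, 4))), guide_hr)
```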
Method and apparatus for coherent imaging of infrared energy
Hutchinson, D.P.
1998-05-12
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting. 8 figs.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Laser guide star pointing camera for ESO LGS Facilities
NASA Astrophysics Data System (ADS)
Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.
2014-08-01
Every observatory routinely using LGS-AO has experienced the long time needed to bring the laser guide star into the wavefront sensor field of view and acquire it. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer in the case of multiple-LGS systems. In this framework, optimizing the absolute pointing accuracy of the LGS systems is relevant to boosting the time efficiency of both science and technical observations. In this paper we show the rationale, the design and the feasibility tests of an LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS while the VLT is doing the initial active optics cycles to adjust its own optics on a natural star target, after a preset. The LPC minimizes the accuracy needed for LGS pointing model calibrations, while allowing sub-arcsec LGS absolute pointing accuracy to be reached. This considerably reduces the LGS acquisition time and observation operation overheads. The LPC is a smart CCD camera, fed by a 150 mm diameter aperture of a Maksutov telescope, mounted on the top ring of the VLT UT4, running Linux and acting as server for the 4LGSF client. The smart camera is able to recognize the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request it returns the offsets to give to the LGS to position them at the required sky coordinates. As a byproduct, once calibrated, the LPC can calculate upon request, for each LGS, its return flux, its FWHM and the uplink beam scattering levels.
Calibration Target for Curiosity Arm Camera
2012-09-10
This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.
Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets
NASA Astrophysics Data System (ADS)
Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter
2017-06-01
The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: one is the calibration procedure, which establishes the relationship between the camera system and the theodolite system; the other is automatic target detection in the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30-times magnification of the telescope. The calibration consists of 7 parameters to estimate. We use coded targets, which are common tools for orientation in photogrammetry, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.
Integration of optical imaging with a small animal irradiator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An
Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances between 0.7 and 1 mm between the actual and optimized source positions. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prooijen, Monique van; Breen, Stephen
Purpose: Our treatment for choroidal melanoma utilizes the GTC frame. The patient looks at a small LED to stabilize target position. The LED is attached to a metal arm attached to the GTC frame. A camera on the arm allows therapists to monitor patient compliance. To move to mask-based immobilization we need a new LED/camera attachment mechanism. We used a Hazard-Risk Analysis (HRA) to guide the design of the new tool. Method: A pre-clinical model was built with input from therapy and machine shop personnel. It consisted of an aluminum frame placed in aluminum guide posts attached to the couch top. Further development was guided by the Department of Defense Standard Practice - System Safety hazard risk analysis technique. Results: An Orfit mask was selected because it allowed access to indexes on the couch top which assist with setup reproducibility. The first HRA table was created considering mechanical failure modes of the device. Discussions with operators and manufacturers identified other failure modes and solutions. HRA directed the design towards a safe clinical device. Conclusion: A new immobilization tool has been designed using hazard-risk analysis which resulted in an easier-to-use and safer tool compared to the initial design. The remaining risks are all low probability events and not dissimilar from those currently faced with the GTC setup. Given the gains in ease of use for therapists and patients as well as the lower costs for the hospital, we will implement this new tool.
Automation of the targeting and reflective alignment concept
NASA Technical Reports Server (NTRS)
Redfield, Robin C.
1992-01-01
The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.
Characterization of SWIR cameras by MRC measurements
NASA Astrophysics Data System (ADS)
Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.
2014-05-01
Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp), and the irradiance has to be measured in W/m² instead of lux = lumen/m². Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices, and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.
Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio
2014-11-01
We developed a new ultrahigh-sensitive CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescent viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitive camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and the endoscopic images in each setting were taken for further comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the image quality produced by the two cameras was comparable. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescent-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination in addition to fluorescent images under high illumination in the field of laparoscopic surgery.
Vision-mediated interaction with the Nottingham caves
NASA Astrophysics Data System (ADS)
Ghali, Ahmed; Bayomi, Sahar; Green, Jonathan; Pridmore, Tony; Benford, Steve
2003-05-01
The English city of Nottingham is widely known for its rich history and compelling folklore. A key attraction is the extensive system of caves to be found beneath Nottingham Castle. Regular guided tours are made of the Nottingham caves, during which castle staff tell stories and explain historical events to small groups of visitors while pointing out relevant cave locations and features. The work reported here is part of a project aimed at enhancing the experience of cave visitors, and providing flexible storytelling tools to their guides, by developing machine vision systems capable of identifying specific actions of guides and/or visitors and triggering audio and/or video presentations as a result. Attention is currently focused on triggering audio material by directing the beam of a standard domestic flashlight towards features of interest on the cave wall. Cameras attached to the walls or roof provide image sequences within which torch light and cave features are detected and their relative positions estimated. When a target feature is illuminated the corresponding audio response is generated. We describe the architecture of the system, its implementation within the caves and the results of initial evaluations carried out with castle guides and members of the public.
Infrared needle mapping to assist biopsy procedures and training.
Shar, Bruce; Leis, John; Coucher, John
2018-04-01
A computed tomography (CT) biopsy is a radiological procedure which involves using a needle to withdraw tissue or a fluid specimen from a lesion of interest inside a patient's body. The needle is progressively advanced into the patient's body, guided by the most recent CT scan. CT guided biopsies invariably expose patients to high dosages of radiation, due to the number of scans required whilst the needle is advanced. This study details the design of a novel method to aid biopsy procedures using infrared cameras. Two cameras are used to image the biopsy needle area, from which the proposed algorithm computes an estimate of the needle endpoint, which is projected onto the CT image space. This estimated position may be used to guide the needle between scans, and results in a reduction in the number of CT scans that need to be performed during the biopsy procedure. The authors formulate a 2D augmentation system which compensates for camera pose, and show that multiple low-cost infrared imaging devices provide a promising approach.
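Estimating a 3-D needle endpoint from two camera views, as described above, ultimately reduces to a triangulation step. Below is a minimal sketch assuming two calibrated cameras with known projection matrices; the matrices and pixel coordinates are hypothetical placeholders, not the authors' calibration.

```python
# Hedged sketch of two-camera (DLT) triangulation of a single 3-D point.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation of one 3-D point from two normalized pixel observations."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                            # homogeneous -> Euclidean

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])  # camera 2 offset by 10 cm
print(triangulate(P1, P2, uv1=(0.02, 0.01), uv2=(-0.03, 0.01)))  # ~(0.04, 0.02, 2.0)
```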
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that, when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
NASA Technical Reports Server (NTRS)
Hertel, R. J.
1979-01-01
An electro-optical method to measure the aeroelastic deformations of wind tunnel models is examined. The multitarget tracking performance of one of the two electronic cameras comprising the stereo pair is modeled and measured. The properties of the targets at the model, the camera optics, target illumination, number of targets, acquisition time, target velocities, and tracker performance are considered. The electronic camera system is shown to be capable of locating, measuring, and following the positions of 5 to 50 targets attached to the model at measuring rates up to 5000 targets per second.
High-resolution mini gamma camera for diagnosis and radio-guided surgery in diabetic foot infection
NASA Astrophysics Data System (ADS)
Scopinaro, F.; Capriotti, G.; Di Santo, G.; Capotondi, C.; Micarelli, A.; Massari, R.; Trotta, C.; Soluri, A.
2006-12-01
The diagnosis of diabetic foot osteomyelitis is often difficult. 99mTc-WBC (White Blood Cell) scintigraphy plays a key role in the diagnosis of bone infections. The spatial resolution of the Anger camera is not always sufficient to differentiate soft-tissue from bone infection. The aim of the present study is to verify whether an HRD (High-Resolution Detector) can improve diagnosis and help guide surgery. Patients were studied with an HRD having a 25.7×25.7 mm² FOV, 2 mm spatial resolution and 18% energy resolution. The patients underwent surgery and, when necessary, bone biopsy, both guided by the HRD. Four patients were positive at the Anger camera without specific signs of osteomyelitis. HRS (High-Resolution Scintigraphy) showed hot spots in the same patients. In two of them the hot spot was bar-shaped and localized at the small phalanx. The presence of bone infection was confirmed at surgery, which was successfully guided by HRS. 99mTc-WBC HRS was able to diagnose pedal infection and to guide the surgery of the diabetic foot, opening a new way in the treatment of the infected diabetic foot.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
Multi-target camera tracking, hand-off and display LDRD 158819 final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
Rocket instrument for far-UV spectrophotometry of faint astronomical objects.
Hartig, G F; Fastie, W G; Davidsen, A F
1980-03-01
A sensitive sounding rocket instrument for moderate (~10 Å) resolution far-UV (λ1160–λ1750 Å) spectrophotometry of faint astronomical objects has been developed. The instrument employs a photon-counting microchannel plate imaging detector and a concave grating spectrograph behind a 40-cm Dall-Kirkham telescope. A unique remote-control pointing system, incorporating an SIT vidicon aspect camera, two star trackers, and a tone-encoded command telemetry link, permits the telescope to be oriented to within 5 arc sec of any target for which suitable guide stars can be found. The design, construction, calibration, and flight performance of the instrument are discussed.
Stellar Oscillations Network Group
NASA Astrophysics Data System (ADS)
Grundahl, F.; Kjeldsen, H.; Christensen-Dalsgaard, J.; Arentoft, T.; Frandsen, S.
2007-06-01
Stellar Oscillations Network Group (SONG) is an initiative aimed at designing and building a network of 1m-class telescopes dedicated to asteroseismology and planet hunting. SONG will have 8 identical telescope nodes each equipped with a high-resolution spectrograph and an iodine cell for obtaining precision radial velocities and a CCD camera for guiding and imaging purposes. The main asteroseismology targets for the network are the brightest (V < 6) stars. In order to improve performance and reduce maintenance costs the instrumentation will only have very few modes of operation. In this contribution we describe the motivations for establishing a network, the basic outline of SONG and the expected performance.
Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm
NASA Astrophysics Data System (ADS)
Gao, X.; Li, M.; Xing, L.; Liu, Y.
2018-04-01
We designed a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate the 3D laser scanner and the digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a digital-camera distortion model to the traditional DLT algorithm; after iterative refinement, it solves for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and the digital camera. Experiments show that this method is reliable.
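For context, a minimal sketch of the linear DLT solve underlying such a joint calibration is given below; the camera distortion terms that the paper adds are omitted here, and all point values are synthetic placeholders rather than the authors' data.

```python
# Hedged sketch of the basic DLT step: given >= 6 world points and their image
# positions, solve linearly (via SVD) for the 3x4 projection matrix up to scale.
import numpy as np

def dlt(world_pts, image_pts):
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 4)

# Synthetic usage: project 6 known points with a reference camera, then recover it.
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 5.],
                   [0., 0., 1., 1.]])
world = np.random.rand(6, 3) * 2 + np.array([0.0, 0.0, 3.0])
img = []
for w in world:
    x = P_true @ np.append(w, 1.0)
    img.append((x[0] / x[2], x[1] / x[2]))
P_est = dlt(world, img)
print(P_est * (P_true[2, 3] / P_est[2, 3]))   # ~P_true, up to the common scale
```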
Kugler, Günter; 't Hart, Bernard M; Kohlbecher, Stefan; Bartl, Klaus; Schumann, Frank; Einhäuser, Wolfgang; Schneider, Erich
2015-01-01
People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with a high foveal image resolution (EyeSeeCam). Search performance as well as fixations of objects of different colors were examined. Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies exhibited a strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only few identification errors. In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency.
Observation of planets by a circumpolar stratospheric telescope
NASA Astrophysics Data System (ADS)
Yamamoto, M.; Taguchi, M.; Yoshida, K.; Sakamoto, Y.; Nakano, T.; Shoji, Y.; Takahashi, Y.; Hamamoto, K.; Nakamoto, J.; Imai, M.
2012-12-01
Phenomena in the planetary atmospheres and plasmaspheres have been studied by various methods using emissions emitted from there in the spectral regions from radio wave to X-ray. Optical observation of a planet has been performed by ground-based telescopes, satellite telescopes and orbiters. A balloon-borne telescope is proposed as another platform for optical remote sensing of planets. Since it floats in the stratosphere at an altitude of about 32 km, fine weather conditions, excellent seeing and high transmittance of the atmosphere in the near ultraviolet and infrared regions are expected. In particular, a planet can be continuously monitored by a long-period circumpolar flight. For these reasons we have been developing a balloon-borne telescope system for planetary observations from the polar stratosphere. In this system a Schmidt-Cassegrain telescope with a 300-mm clear aperture is mounted on a gondola whose attitude is controlled by control moment gyros, an active decoupling motor, and attitude sensors. The gondola can float in the stratosphere for periods longer than 1 week. Pointing stability of 0.1" rms will be achieved by the cooperative operation of the following three-stage pointing devices: a gondola-attitude control system, two-axis telescope gimbals for coarse guiding, and a tip/tilt mirror mount for guiding error correction. The optical path is divided into three paths to an ultraviolet camera, an infrared camera and a position-sensitive photomultiplier tube for detection of guiding error. The size of the gondola is 1 m by 1 m by 2.7 m high, and the weight is 784 kg including 300 kg of ballast. The first experiment of the balloon-borne telescope system was conducted on June 3, 2009 at Taikicho, Hokkaido, targeting Venus. However, it failed due to a problem with an onboard computer. The balloon-borne telescope was redesigned for the second experiment in August 2012, when the target planet is also Venus. In the presentation, the balloon-borne telescope system, the ground-test results of its pointing performance and the results of the balloon experiment in 2012 will be reported.
Near infra-red astronomy with adaptive optics and laser guide stars at the Keck Observatory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Max, C.E.; Gavel, D.T.; Olivier, S.S.
1995-08-03
A laser guide star adaptive optics system is being built for the W. M. Keck Observatory's 10-meter Keck II telescope. Two new near infra-red instruments will be used with this system: a high-resolution camera (NIRC 2) and an echelle spectrometer (NIRSPEC). The authors describe the expected capabilities of these instruments for high-resolution astronomy, using adaptive optics with either a natural star or a sodium-layer laser guide star as a reference. They compare the expected performance of these planned Keck adaptive optics instruments with that predicted for the NICMOS near infra-red camera, which is scheduled to be installed on the Hubble Space Telescope in 1997.
Combined hostile fire and optics detection
NASA Astrophysics Data System (ADS)
Brännlund, Carl; Tidström, Jonas; Henriksson, Markus; Sjöqvist, Lars
2013-10-01
Snipers and other optically guided weapon systems are serious threats in military operations. We have studied a SWIR (Short Wave Infrared) camera-based system with capability to detect and locate snipers both before and after shot over a large field-of-view. The high frame rate SWIR-camera allows resolution of the temporal profile of muzzle flashes which is the infrared signature associated with the ejection of the bullet from the rifle. The capability to detect and discriminate sniper muzzle flashes with this system has been verified by FOI in earlier studies. In this work we have extended the system by adding a laser channel for optics detection. A laser diode with slit-shaped beam profile is scanned over the camera field-of-view to detect retro reflection from optical sights. The optics detection system has been tested at various distances up to 1.15 km showing the feasibility to detect rifle scopes in full daylight. The high speed camera gives the possibility to discriminate false alarms by analyzing the temporal data. The intensity variation, caused by atmospheric turbulence, enables discrimination of small sights from larger reflectors due to aperture averaging, although the targets only cover a single pixel. It is shown that optics detection can be integrated in combination with muzzle flash detection by adding a scanning rectangular laser slit. The overall optics detection capability by continuous surveillance of a relatively large field-of-view looks promising. This type of multifunctional system may become an important tool to detect snipers before and after shot.
Cryogenic optical systems for the rapid infrared imager/spectrometer (RIMAS)
NASA Astrophysics Data System (ADS)
Capone, John I.; Content, David A.; Kutyrev, Alexander S.; Robinson, Frederick D.; Lotkin, Gennadiy N.; Toy, Vicki L.; Veilleux, Sylvain; Moseley, Samuel H.; Gehrels, Neil A.; Vogel, Stuart N.
2014-07-01
The Rapid Infrared Imager/Spectrometer (RIMAS) is designed to perform follow-up observations of transient astronomical sources at near infrared (NIR) wavelengths (0.9 - 2.4 microns). In particular, RIMAS will be used to perform photometric and spectroscopic observations of gamma-ray burst (GRB) afterglows to complement the Swift satellite's science goals. Upon completion, RIMAS will be installed on Lowell Observatory's 4.3 meter Discovery Channel Telescope (DCT) located in Happy Jack, Arizona. The instrument's optical design includes a collimator lens assembly, a dichroic to divide the wavelength coverage into two optical arms (0.9 - 1.4 microns and 1.4 - 2.4 microns respectively), and a camera lens assembly for each optical arm. Because the wavelength coverage extends out to 2.4 microns, all optical elements are cooled to ~70 K. Filters and transmission gratings are located on wheels prior to each camera, allowing the instrument to be quickly configured for photometry or spectroscopy. An athermal optomechanical design is being implemented to prevent lenses from losing their room-temperature alignment as the system is cooled. The thermal expansion of the materials used in this design has been measured in the lab. Additionally, RIMAS has a guide camera consisting of four lenses to aid observers in passing light from target sources through spectroscopic slits. Efforts to align these optics are ongoing.
NASA Astrophysics Data System (ADS)
Hertel, R. J.; Hoilman, K. A.
1982-01-01
The effects of model vibration, camera and window nonlinearities, and aerodynamic disturbances in the optical path on the measurement of target position are examined. Window distortion, temperature and pressure changes, laminar and turbulent boundary layers, shock waves, target intensity, and target vibration are also studied. A general computer program was developed to trace optical rays through these disturbances. The use of a charge injection device camera as an alternative to the image dissector camera was examined.
The guidance methodology of a new automatic guided laser theodolite system
NASA Astrophysics Data System (ADS)
Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua
2008-12-01
Spatial coordinate measurement systems such as theodolites, laser trackers and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming, which does not meet the needs of online industrial measurement; laser trackers and total stations, in turn, need reflective targets and therefore cannot realize noncontact, automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic and noncontact measurement with high precision and efficiency. It is comprised of two sub-systems: the basic measurement system and the control and guidance system. The former is formed by two laser motorized theodolites to accomplish the fundamental measurement tasks, while the latter consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information of the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation so that the camera can scan every measured point in the measuring field; the azimuth of the corresponding point is then calculated so that the laser motorized theodolites can move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, and the emphasis is laid on the guidance methodology for moving the laser points from the theodolites towards the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. With the field-of-view angle of the vision system unit and the world coordinates of the control and guidance system obtained through coordinate transformation, the azimuth information of the measurement area that the camera points at can be attained. The momentary horizontal and vertical changes of the mechanical displacement movement are also considered and calculated to provide real-time azimuth information of the pointed measurement area, by which the motorized theodolite will move accordingly. This methodology realizes the predetermined location of the laser points within the camera-pointed scope, so that it accelerates the measuring process and implements approximate guidance instead of manual operations. The simulation results show that the proposed method of automatic guidance is effective and feasible, providing good tracking performance for the predetermined location of the laser points.
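To make the azimuth computation concrete, here is a minimal small-angle sketch mapping a target's pixel offset in the guide camera to horizontal and vertical pointing offsets; the image size and field-of-view values are illustrative assumptions, not the system's actual parameters.

```python
# Hedged sketch: convert a pixel offset from the image centre into approximate
# horizontal/vertical pointing angles for the motorized theodolite.
import math

def pixel_to_azimuth(u, v, width=1280, height=1024, hfov_deg=30.0, vfov_deg=24.0):
    """Small-angle mapping from image coordinates (pixels) to pointing offsets (deg)."""
    az = (u - width / 2) / width * hfov_deg
    el = (height / 2 - v) / height * vfov_deg
    return az, el

print(pixel_to_azimuth(900, 400))   # e.g. a target right of and above the image centre
```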
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
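The background-subtraction step described above can be sketched as follows; the frames are synthetic stand-ins for camera images, and the threshold is an assumed value.

```python
# Hedged sketch: remove pixels common to the pre-illumination frame, keep only the
# laser spot, and take its centroid for the subsequent disparity/ranging analysis.
import numpy as np

def laser_spot_centroid(frame_off, frame_on, threshold=40):
    diff = frame_on.astype(int) - frame_off.astype(int)
    mask = diff > threshold                      # pixels that brightened = laser spot
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

off = np.zeros((480, 640), dtype=np.uint8)
on = off.copy()
on[200:204, 300:304] = 255                       # synthetic laser spot
print(laser_spot_centroid(off, on))              # ~(301.5, 201.5)
```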
Report Of The HST Strategy Panel: A Strategy For Recovery
1991-01-01
orbit change out: the Wide Field/Planetary Camera II (WFPC II), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) and the Space ...are the Space Telescope Imaging Spectrograph (STIS), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), and the second Wide Field and...expected to fail to lock due to duplicity was 20%; on-orbit data indicates that 10% may be a better estimate, but the guide stars were preselected
Coherent infrared imaging camera (CIRIC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.
1995-07-01
New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.
Wide-field Fluorescent Microscopy and Fluorescent Imaging Flow Cytometry on a Cell-phone
Zhu, Hongying; Ozcan, Aydogan
2013-01-01
Fluorescent microscopy and flow cytometry are widely used tools in biomedical research and clinical diagnosis. However these devices are in general relatively bulky and costly, making them less effective in the resource limited settings. To potentially address these limitations, we have recently demonstrated the integration of wide-field fluorescent microscopy and imaging flow cytometry tools on cell-phones using compact, light-weight, and cost-effective opto-fluidic attachments. In our flow cytometry design, fluorescently labeled cells are flushed through a microfluidic channel that is positioned above the existing cell-phone camera unit. Battery powered light-emitting diodes (LEDs) are butt-coupled to the side of this microfluidic chip, which effectively acts as a multi-mode slab waveguide, where the excitation light is guided to uniformly excite the fluorescent targets. The cell-phone camera records a time lapse movie of the fluorescent cells flowing through the microfluidic channel, where the digital frames of this movie are processed to count the number of the labeled cells within the target solution of interest. Using a similar opto-fluidic design, we can also image these fluorescently labeled cells in static mode by e.g. sandwiching the fluorescent particles between two glass slides and capturing their fluorescent images using the cell-phone camera, which can achieve a spatial resolution of e.g. ~ 10 μm over a very large field-of-view of ~ 81 mm2. This cell-phone based fluorescent imaging flow cytometry and microscopy platform might be useful especially in resource limited settings, for e.g. counting of CD4+ T cells toward monitoring of HIV+ patients or for detection of water-borne parasites in drinking water. PMID:23603893
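The per-frame counting step described above can be sketched with a simple connected-component count; this is a minimal illustration using a synthetic frame and an assumed threshold, not the authors' processing pipeline.

```python
# Hedged sketch: threshold one fluorescence frame and count bright connected blobs
# as labelled cells, using SciPy's connected-component labelling.
import numpy as np
from scipy import ndimage

def count_cells(frame, threshold=50):
    mask = frame > threshold
    _, num_blobs = ndimage.label(mask)
    return num_blobs

frame = np.zeros((120, 160), dtype=np.uint8)
frame[10:14, 20:24] = 200          # two synthetic "cells"
frame[60:63, 90:93] = 180
print(count_cells(frame))           # 2
```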
Wide-field fluorescent microscopy and fluorescent imaging flow cytometry on a cell-phone.
Zhu, Hongying; Ozcan, Aydogan
2013-04-11
Fluorescent microscopy and flow cytometry are widely used tools in biomedical research and clinical diagnosis. However these devices are in general relatively bulky and costly, making them less effective in the resource limited settings. To potentially address these limitations, we have recently demonstrated the integration of wide-field fluorescent microscopy and imaging flow cytometry tools on cell-phones using compact, light-weight, and cost-effective opto-fluidic attachments. In our flow cytometry design, fluorescently labeled cells are flushed through a microfluidic channel that is positioned above the existing cell-phone camera unit. Battery powered light-emitting diodes (LEDs) are butt-coupled to the side of this microfluidic chip, which effectively acts as a multi-mode slab waveguide, where the excitation light is guided to uniformly excite the fluorescent targets. The cell-phone camera records a time lapse movie of the fluorescent cells flowing through the microfluidic channel, where the digital frames of this movie are processed to count the number of the labeled cells within the target solution of interest. Using a similar opto-fluidic design, we can also image these fluorescently labeled cells in static mode by e.g. sandwiching the fluorescent particles between two glass slides and capturing their fluorescent images using the cell-phone camera, which can achieve a spatial resolution of e.g. ~10 μm over a very large field-of-view of ~81 mm². This cell-phone based fluorescent imaging flow cytometry and microscopy platform might be useful especially in resource limited settings, for e.g. counting of CD4+ T cells toward monitoring of HIV+ patients or for detection of water-borne parasites in drinking water.
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible band CAOS smart camera is successfully demonstrated operating in the CDMA-mode using up to 4096-bit Walsh-design CAOS pixel codes with a maximum 10 KHz code bit rate, giving a 0.4096 second CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright-light spectrally diverse targets.
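The CDMA-mode decoding principle, in which each pixel is time-modulated by its own Walsh code and recovered by correlating the point detector's signal against that code, can be sketched as follows; the toy sizes below are for illustration only and do not reflect the demonstrated 4096-bit codes or hardware.

```python
# Hedged sketch of Walsh-code (CDMA) encoding and matched-filter decoding.
import numpy as np

def walsh(n):
    """Hadamard/Walsh matrix of order n (n a power of two); rows are the codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh(8)                                  # 8 CAOS "pixels", 8-bit codes
pixel_values = np.array([0.0, 3.0, 0.5, 7.0, 0.0, 1.0, 0.0, 2.0])
detector_signal = codes.T @ pixel_values          # summed signal seen by the point detector over time
recovered = codes @ detector_signal / codes.shape[1]   # per-code correlation (matched filter)
print(recovered)                                  # matches pixel_values
```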
Creating History Documentaries: A Step-by-Step Guide to Video Projects in the Classroom.
ERIC Educational Resources Information Center
Escobar, Deborah
This guide offers an easy introduction to social studies teachers wanting to challenge their students with creative media by bringing the past to life. The 14-step guide shows teachers and students the techniques needed for researching, scripting, and editing a historical documentary. Using a video camera and computer software, students can…
Design and Development of a High Speed Sorting System Based on Machine Vision Guiding
NASA Astrophysics Data System (ADS)
Zhang, Wenchang; Mei, Jiangping; Ding, Yabin
In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot controls a suction-cup gripper to grasp disordered objects from one moving conveyor and place them on another in order. A CCD camera captures an image each time the conveyor moves a distance ds. Object positions and shapes are obtained after image processing. A target tracking method based on a servo motor synchronized with the conveyor is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-control strategy.
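A minimal sketch of the conveyor-synchronized tracking idea follows: the grasp target's current position is the vision-detected position at capture time plus the belt travel reported by the conveyor encoder since then. The function name, encoder scale and belt direction are illustrative assumptions, not values from the paper's controller.

```python
# Hedged sketch: dead-reckon an object's current position from its position at
# image capture plus the conveyor travel measured by an encoder.
def current_object_position(x_at_capture, y_at_capture,
                            encoder_at_capture, encoder_now,
                            mm_per_count=0.05, belt_direction=(1.0, 0.0)):
    travel = (encoder_now - encoder_at_capture) * mm_per_count
    return (x_at_capture + belt_direction[0] * travel,
            y_at_capture + belt_direction[1] * travel)

print(current_object_position(120.0, 45.0, encoder_at_capture=10_000, encoder_now=14_000))
```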
The role of general nuclear medicine in breast cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, Lacey R, E-mail: lgreene@csu.edu.au; Wilkinson, Deborah; Faculty of Science, Charles Sturt University, Wagga Wagga, New South Wales
The rising incidence of breast cancer worldwide has prompted many improvements to current care. Routine nuclear medicine is a major contributor to a full gamut of clinical studies such as early lesion detection and stratification; guiding, monitoring, and predicting response to therapy; and monitoring progression, recurrence or metastases. Developments in instrumentation such as the high-resolution dedicated breast device coupled with the diagnostic versatility of conventional cameras have reinserted nuclear medicine as a valuable tool in the broader clinical setting. This review outlines the role of general nuclear medicine, concluding that targeted radiopharmaceuticals and versatile instrumentation position nuclear medicine as a powerful modality for patients with breast cancer.
Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teruya, A. T.; Palmer, N. E.; Schneider, M. B.
2013-09-01
The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3 – 5 keV for the “hard” channels. The original cameras utilize a large format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.
Method and system for providing autonomous control of a platform
NASA Technical Reports Server (NTRS)
Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)
2012-01-01
The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).
JPRS Report, Science & Technology, Japan, 27th Aircraft Symposium
1990-10-29
screen; the relative attitude is then determined. 2) Video Sensor System: Specific patterns (grapple target, etc.) drawn on the target spacecraft, or the...entire target spacecraft, is imaged by camera. Navigation information is obtained by on-board image processing, such as extraction of contours and...standard figure called "grapple target" located in the vicinity of the grapple fixture on the target spacecraft is imaged by camera. Contour lines and
Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.
Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom
2017-02-23
We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration
Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom
2017-01-01
We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community. PMID:28287588
System for critical infrastructure security based on multispectral observation-detection module
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław
2013-10-01
Recent terrorist attacks, and the possibility of such actions in the future, have driven the development of security systems for critical infrastructures that embrace sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on a ring with two-zone fencing and visible-light cameras with illumination, is being effectively displaced by multisensor systems that consist of: visible technology - day/night cameras registering the optical contrast of a scene; thermal technology - inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars - microwave and millimetre wavelengths that record and detect reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the installation conditions and sensor parameters. This procedure enables us to construct a system with correlated range, resolution, field of view and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of cameras, automatic guiding of cameras to an object detected by the radar, tracking of the object and localization of the object on the digital map, as well as target identification and alerting. Based on a "plug and play" architecture, this system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders over a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering. The paper presents the structure and some elements of a critical infrastructure protection solution which is based on a modular multisensor security system. The system description is focused mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.
Cinematography; A Guide for Film Makers and Film Teachers.
ERIC Educational Resources Information Center
Malkiewicz, J. Kris
Concentrating on the work of the cinematographer--the man behind the camera or in charge of the shooting--this book also touches on techniques of sound recording, cutting, and production logistics. Technical discussions designed to provide the basic principles and techniques of cinematography are presented about cameras, films and sensitometry,…
A guide for recording esthetic and biologic changes with photographs
Arthur W. Magill; R.H. Twiss
1965-01-01
Photography has long been a useful tool for recording and analyzing environmental conditions. Permanent camera points can be established to help detect and analyze changes in the esthetics and ecology of wildland resources. This note describes the usefulness of permanent camera points and outlines procedures for establishing points and recording data.
Who Goes There? Linking Remote Cameras and Schoolyard Science to Empower Action
ERIC Educational Resources Information Center
Tanner, Dawn; Ernst, Julie
2013-01-01
Taking Action Opportunities (TAO) is a curriculum that combines guided reflection, a focus on the local environment, and innovative use of wildlife technology to empower student action toward improving the environment. TAO is experientially based and uses remote cameras as a tool for schoolyard exploration. Through TAO, students engage in research…
Architecture of PAU survey camera readout electronics
NASA Astrophysics Data System (ADS)
Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo
2012-07-01
PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cable complete the path to connect to each CCD. The overall signal distribution and grounding scheme is shown in this paper.
Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras
NASA Astrophysics Data System (ADS)
Gómez Martínez, Harvey; Giorgi, Gabriele; Eissfeller, Bernd
2017-10-01
This paper presents a methodology for estimating the position and orientation of a rocket body in orbit - the target - undergoing a roto-translational motion, with respect to a chaser spacecraft, whose task is to match the target dynamics for a safe rendezvous. During the rendezvous maneuver the chaser employs a Time-of-Flight camera that acquires a point cloud of 3D coordinates mapping the sensed target surface. Once the system identifies the target, it initializes the chaser-to-target relative position and orientation. After initialization, a tracking procedure enables the system to sense the evolution of the target's pose between frames. The proposed algorithm is evaluated using simulated point clouds, generated with a CAD model of the Cosmos-3M upper stage and the PMD CamCube 3.0 camera specifications.
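The frame-to-frame pose tracking described above ultimately rests on estimating a rigid transform between corresponding 3-D points of successive point clouds. Below is a minimal sketch of that step (a Kabsch/SVD solve, as used inside ICP-style tracking), assuming correspondences are already known; in practice they would come from nearest-neighbour matching.

```python
# Hedged sketch: least-squares rigid transform (R, t) between matched 3-D point sets.
import numpy as np

def rigid_transform(src, dst):
    """Find R, t such that R @ src_i + t ~= dst_i in the least-squares sense."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

src = np.random.rand(50, 3)
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = src @ R_true.T + np.array([0.1, 0.2, 0.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))     # True [0.1 0.2 0. ]
```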
2006-10-01
patients with breast cancer underwent scanning with a hybrid camera which combined a dual-head SPECT camera and a low-dose, single-slice CT scanner (GE...investigated a novel approach which combines the output of a dual-head SPECT camera and a low-dose, single-slice CT scanner (GE Hawkeye®). This... scanner (Hawkeye®, GE Medical Systems) is attempted in this study. This device is widely available in the cardiology community and has the potential to
Line following using a two camera guidance system for a mobile robot
NASA Astrophysics Data System (ADS)
Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.
1996-10-01
Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line following algorithm images two windows and locates their centroids; with the knowledge that the points are on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
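The steering quantities described above (the line's angle and its minimum distance from the robot centroid) can be sketched as follows, assuming the two window centroids have already been mapped to ground-plane coordinates with the robot centroid at the origin; the coordinate values are illustrative.

```python
# Hedged sketch: heading error and perpendicular offset of the followed line,
# computed from two ground-plane points (near and far window centroids).
import math

def line_angle_and_offset(p_near, p_far):
    dx, dy = p_far[0] - p_near[0], p_far[1] - p_near[1]
    angle = math.atan2(dx, dy)                    # heading error w.r.t. the robot's forward (y) axis
    # perpendicular distance from the origin to the infinite line through the two points
    offset = abs(dx * p_near[1] - dy * p_near[0]) / math.hypot(dx, dy)
    return math.degrees(angle), offset

print(line_angle_and_offset(p_near=(0.30, 0.5), p_far=(0.35, 1.5)))  # ~ (2.9 deg, 0.27 m)
```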
Target Acquisition for Projectile Vision-Based Navigation
2014-03-01
Report front matter and figure-caption fragments only: Future Work; References; Appendix A, Simulation Results; Appendix B, Derivation of Ground Resolution for a Diffraction-Limited Pinhole Camera; simulation results for visual acquisition (left) and target recognition (right); Figure B-1, differential object and image areas for a pinhole camera. ...projectile and target (measured in terms of the angle) will depend on target heading. In particular, because we have aligned the x axis along the
Kugler, Günter; 't Hart, Bernard M.; Kohlbecher, Stefan; Bartl, Klaus; Schumann, Frank; Einhäuser, Wolfgang; Schneider, Erich
2015-01-01
Background: People with color vision deficiencies report numerous limitations in daily life, restricting, for example, their access to some professions. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. Methods: A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. All participants wore a mobile eye-tracking-driven camera with a high foveal image resolution (EyeSeeCam). Search performance as well as fixations of objects of different colors were examined. Results: Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, participants with color vision deficiencies exhibited a strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only few identification errors. Conclusions: In contrast to controls, participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) is largely unaffected by the color vision deficiency. PMID:26733851
NASA Astrophysics Data System (ADS)
McElvain, Jon; Campbell, Scott P.; Miller, Jonathan; Jin, Elaine W.
2010-01-01
The dead leaves model was recently introduced as a method for measuring the spatial frequency response (SFR) of camera systems. The target consists of a series of overlapping opaque circles with a uniform gray level distribution and radii distributed as r⁻³. Unlike the traditional knife-edge target, the SFR derived from the dead leaves target will be penalized for systems that employ aggressive noise reduction. Initial studies have shown that the dead leaves SFR correlates well with sharpness/texture blur preference, and thus the target can potentially be used as a surrogate for more expensive subjective image quality evaluations. In this paper, the dead leaves target is analyzed for measurement of camera system spatial frequency response. It was determined that the power spectral density (PSD) of the ideal dead leaves target does not exhibit simple power law dependence, and scale invariance is only loosely obeyed. An extension to the ideal dead leaves PSD model is proposed, including a correction term to account for system noise. With this extended model, the SFR of several camera systems with a variety of formats was measured, ranging from 3 to 10 megapixels; the effects of handshake motion blur are also analyzed via the dead leaves target.
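One common way to express a noise-corrected dead-leaves SFR estimate, consistent with (though not necessarily identical to) the extended model described above, is sketched here; PSD_noise denotes the assumed system-noise term.

```latex
% Hedged sketch of a noise-corrected dead-leaves SFR estimate; the authors'
% exact correction term may differ from this generic form.
\[
  \mathrm{SFR}(f) \;=\;
  \sqrt{\frac{\mathrm{PSD}_{\mathrm{meas}}(f) - \mathrm{PSD}_{\mathrm{noise}}(f)}
             {\mathrm{PSD}_{\mathrm{target}}(f)}}
\]
```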
NASA Technical Reports Server (NTRS)
2008-01-01
We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs, the ability to process any number of reference targets and any number of camera images, user-friendly editing features, including zoom in/out, translate, and load/unload, routines to help mark reference points with a Find function, while comparing them with the reference point database file, and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV, using for example a survey theodolite or a Faroarm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on respective liquid crystal light valves. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
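For context, the standard parallel-camera relation linking the correlation-peak disparity to target range is sketched below; the focal length, baseline and disparity values are illustrative assumptions, not numbers from the patent.

```python
# Hedged sketch: range from stereo disparity for parallel cameras, Z = f * B / d.
def stereo_range(focal_length_mm, baseline_mm, disparity_mm):
    return focal_length_mm * baseline_mm / disparity_mm

print(stereo_range(focal_length_mm=25.0, baseline_mm=120.0, disparity_mm=0.6))  # 5000 mm
```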
NASA Astrophysics Data System (ADS)
Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.
2016-03-01
Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for use with our augmented reality visualization system for laparoscopic surgery.
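For reference, the OpenCV baseline used in the comparison follows the usual multi-image checkerboard workflow; a hedged sketch is shown below. The checkerboard geometry, square size, and image folder are illustrative assumptions and are unrelated to the custom rdCalib target.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard: 9x6 inner corners, 5 mm squares (illustration only).
pattern, square = (9, 6), 5.0
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_frames/*.png"):            # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Assumes at least one image with a detected pattern.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nintrinsics:\n", K)
```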
Dynamics of laser-guided alternating current high voltage discharges
NASA Astrophysics Data System (ADS)
Daigle, J.-F.; Théberge, F.; Lassonde, P.; Kieffer, J.-C.; Fujii, T.; Fortin, J.; Châteauneuf, M.; Dubois, J.
2013-10-01
The dynamics of laser-guided alternating current high voltage discharges are characterized using a streak camera. Laser filaments were used to trigger and guide the discharges produced by a commercial Tesla coil. The streak images revealed that the dynamics of the guided alternating current high voltage corona differ from those of a direct current source. The measured effective corona velocity and the absence of leader streamers confirmed that it evolves in a pure leader regime.
Development of a machine vision system for automated structural assembly
NASA Technical Reports Server (NTRS)
Sydow, P. Daniel; Cooper, Eric G.
1992-01-01
Research is being conducted at the LaRC to develop a telerobotic assembly system designed to construct large space truss structures. This research program was initiated within the past several years, and a ground-based test-bed was developed to evaluate and expand the state of the art. Test-bed operations currently use predetermined ('taught') points for truss structural assembly. Total dependence on the use of taught points for joint receptacle capture and strut installation is neither robust nor reliable enough for space operations. Therefore, a machine vision sensor guidance system is being developed to locate and guide the robot to a passive target mounted on the truss joint receptacle. The vision system hardware includes a miniature video camera, passive targets mounted on the joint receptacles, target illumination hardware, and an image processing system. Discrimination of the target from background clutter is accomplished through standard digital processing techniques. Once the target is identified, a pose estimation algorithm is invoked to determine the location, in three-dimensional space, of the target relative to the robot's end-effector. Preliminary test results of the vision system in the Automated Structural Assembly Laboratory with a range of lighting and background conditions indicate that it is fully capable of successfully identifying joint receptacle targets throughout the required operational range. Controlled optical bench test results indicate that the system can also provide the pose estimation accuracy needed to define the target position.
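The pose estimation step, recovering the target's location relative to the end-effector from its image, can be illustrated with a standard perspective-n-point solution. The target geometry, detected centroids, and intrinsics below are hypothetical placeholders, not values from the test-bed.

```python
import numpy as np
import cv2

# Hypothetical 3-D coordinates of the passive target features on the joint
# receptacle (mm, target frame) and their detected 2-D image centroids (px).
target_pts = np.array([[0, 0, 0], [40, 0, 0], [40, 40, 0], [0, 40, 0]], np.float32)
image_pts = np.array([[312, 248], [401, 251], [398, 339], [309, 336]], np.float32)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
dist = np.zeros(5)                                            # assume no distortion

ok, rvec, tvec = cv2.solvePnP(target_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                                    # rotation as a 3x3 matrix
print("target position relative to camera (mm):", tvec.ravel())
```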
Analysis of Photogrammetry Data from ISIM Mockup
NASA Technical Reports Server (NTRS)
Nowak, Maria; Hill, Mike
2007-01-01
During ground testing of the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST), the ISIM Optics group plans to use a Photogrammetry Measurement System for cryogenic calibration of specific target points on the ISIM composite structure, the Science Instrument optical benches, and other GSE equipment. This testing will occur in the Space Environmental Systems (SES) chamber at Goddard Space Flight Center. Close-range photogrammetry is a 3-dimensional metrology technique that uses triangulation to locate custom targets in 3 coordinates via a collection of digital photographs taken from various locations and orientations. These photos are connected using coded targets (special targets that are recognized by the software and thus allow the images to be correlated) to provide a 3-dimensional map of the targets, scaled via well-calibrated scale bars. Photogrammetry solves for the camera locations and the coordinates of the targets simultaneously through the bundling procedure contained in the V-STARS software, proprietary software owned by Geodetic Systems Inc. The primary objectives of the metrology performed on the ISIM mock-up were (1) to quantify the accuracy of the INCA3 photogrammetry camera on a representative full-scale version of the ISIM structure at ambient temperature by comparing the measurements obtained with this camera to measurements using the Leica laser tracker system and (2) to empirically determine the smallest increment of target position movement that can be resolved by the PG camera in the test setup, i.e., its precision or resolution. In addition, the geometrical details of the test setup defined during the mockup testing, such as target locations and camera positions, will contribute to the final design of the photogrammetry system to be used on the ISIM Flight Structure.
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.
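The linearize-and-minimize strategy mentioned for the surface tracking objective is the classic Gauss-Newton iteration. A generic sketch follows, demonstrated on a toy exponential-fit problem rather than the actual non-rigid registration energy.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20, tol=1e-9):
    """Generic Gauss-Newton loop: linearise the residual about the current
    estimate and solve the normal equations J^T J dx = -J^T r each iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy example: fit y = a * exp(b * t) to noisy samples.
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(res, jac, [1.0, 0.0]))   # converges near (2.0, -1.3)
```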
Low, slow, small target recognition based on spatial vision network
NASA Astrophysics Data System (ADS)
Cheng, Zhao; Guo, Pei; Qi, Xin
2018-03-01
Traditional photoelectric monitoring uses a large number of identical cameras. Ensuring full coverage of the monitored area with this approach requires many cameras, which produces redundant, overlapping coverage, raises costs, and results in waste. In order to reduce the monitoring cost and to solve the difficult problem of finding, identifying and tracking a low-altitude, slow-speed, small target, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation experiment results demonstrate that the proposed method has good performance.
Supermarket Special Departments. [Student Manual] and Answer Book/Teacher's Guide.
ERIC Educational Resources Information Center
Gaskill, Melissa Lynn; Summerall, Mary
This document on food marketing for supermarket special departments contains both a student's manual and an answer book/teacher's guide. The student's manual contains the following 11 assignments: (1) supermarkets of today; (2) merchandising; (3) pharmacy and cosmetics department; (4) housewares and home hardware; (5) video/camera/electronics…
Target-Tracking Camera for a Metrology System
NASA Technical Reports Server (NTRS)
Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David
2009-01-01
An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrology system is used to determine the varying relative positions of radiating elements of an airborne synthetic-aperture-radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: Because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
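For a one-dimensional PSD, the spot position follows directly from the two end-contact photocurrents, which is why no pixel readout or centroid computation is needed. A minimal sketch of that relation, with illustrative current values, is shown below.

```python
def psd_position(i_a, i_b, length_mm):
    """One-dimensional position-sensitive detector: the light-spot position
    along the detector follows from the photocurrents at the two end contacts."""
    return 0.5 * length_mm * (i_b - i_a) / (i_a + i_b)

# Example: equal currents place the spot at the centre (0 mm offset);
# unequal currents shift the estimate toward the stronger contact.
print(psd_position(1.0e-6, 1.0e-6, 10.0))   # -> 0.0 mm
print(psd_position(0.8e-6, 1.2e-6, 10.0))   # -> +1.0 mm toward contact B
```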
Nonholonomic camera-space manipulation using cameras mounted on a mobile base
NASA Astrophysics Data System (ADS)
Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun
1998-10-01
The body of work called `Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D `success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering the problem of a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom are ubiquitous in industry: fork lifts and earth moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult due to a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental `path dependent' nature of nonholonomic kinematics. This work focuses on the sensor space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.
NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET).
Milocco, Alberto; Conroy, Sean; Popovichev, Sergey; Sergienko, Gennady; Huber, Alexander
2017-10-26
The neutron and gamma radiations in large fusion reactors are responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations would be useful to good practice in the operation of the video systems.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
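The near-field bound quoted above can be checked with the conventional Fraunhofer-distance expression 2D²/λ; the sketch below assumes a 550 nm observing wavelength and reproduces the order of magnitude of the 1-m and AEOS figures.

```python
def fraunhofer_distance_km(aperture_m, wavelength_nm=550.0):
    """Conventional far-field boundary 2*D^2/lambda; targets closer than this
    are in the near field where a light field camera can recover depth."""
    return 2.0 * aperture_m ** 2 / (wavelength_nm * 1e-9) / 1e3

print(fraunhofer_distance_km(1.0))    # ~3,600 km for a 1-m telescope
print(fraunhofer_distance_km(3.67))   # ~49,000 km for the 3.67-m AEOS aperture
```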
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, i.e. its optical system, image sensor, and electronics, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground because it depends only on the camera itself, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the camera's own degradation of the image, i.e. it removes the imaging effects imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared with the case when the IMTF is not used. This opens the door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
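The compensation step pairs the measured IMTF with a constrained least-squares filter. A frequency-domain sketch of such a filter is given below; the Laplacian smoothness constraint and the regularization weight are generic textbook choices, and the MTF is assumed to be sampled on the image's (unshifted) FFT grid.

```python
import numpy as np

def cls_restore(image, mtf, gamma=1e-2):
    """Constrained least-squares restoration: divide by the measured MTF in the
    frequency domain while regularising with a Laplacian smoothness constraint.
    `mtf` must have the same shape and frequency ordering as np.fft.fft2(image)."""
    G = np.fft.fft2(image)
    # Frequency response of the discrete Laplacian (the smoothness constraint).
    lap = np.zeros(image.shape)
    lap[0, 0], lap[0, 1], lap[1, 0], lap[0, -1], lap[-1, 0] = -4, 1, 1, 1, 1
    P = np.fft.fft2(lap)
    H = mtf
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```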
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
Method and apparatus for coherent imaging of infrared energy
Hutchinson, Donald P.
1998-01-01
A coherent camera system performs ranging, spectroscopy, and thermal imaging. Local oscillator radiation is combined with target scene radiation to enable heterodyne detection by the coherent camera's two-dimensional photodetector array. Versatility enables deployment of the system in either a passive mode (where no laser energy is actively transmitted toward the target scene) or an active mode (where a transmitting laser is used to actively illuminate the target scene). The two-dimensional photodetector array eliminates the need to mechanically scan the detector. Each element of the photodetector array produces an intermediate frequency signal that is amplified, filtered, and rectified by the coherent camera's integrated circuitry. By spectroscopic examination of the frequency components of each pixel of the detector array, a high-resolution, three-dimensional or holographic image of the target scene is produced for applications such as air pollution studies, atmospheric disturbance monitoring, and military weapons targeting.
Calculation for simulation of archery goal value using a web camera and ultrasonic sensor
NASA Astrophysics Data System (ADS)
Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti
2017-08-01
A digital indoor archery simulator based on embedded systems is being developed as a solution to the limited availability of adequate fields or open space, especially in big cities. Developing the device requires a simulation that calculates the score achieved on the target, based on a parabolic-motion model parameterised by the arrow's initial velocity and direction of motion. The simulator device is to be complemented with an initial-velocity measuring device using ultrasonic sensors and a direction-measuring device using a digital camera. The methodology follows a research-and-development approach for the application software, based on modeling and simulation. The research objective is to create a simulation application that calculates the score of arrows on the target, as a preliminary stage in the development of the archery simulator device. Implementing the score calculation in the application program yields an archery simulation game that can serve as a reference for developing an indoor digital archery simulator with embedded systems using ultrasonic sensors and web cameras. The developed application compares the calculated impact point against the outer radius of the target circle imaged by a camera at a distance of three meters.
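A minimal sketch of the parabolic-motion scoring idea is shown below; the ring width, shooting distance, and launch values are illustrative assumptions rather than parameters from the paper.

```python
import math

def arrow_impact(v0, elev_deg, azim_deg, range_m, g=9.81):
    """Parabolic-motion approximation: given the launch speed (from the
    ultrasonic sensor) and direction (from the camera), return the lateral and
    vertical offsets on a target plane range_m metres away."""
    elev, azim = math.radians(elev_deg), math.radians(azim_deg)
    vx = v0 * math.cos(elev) * math.cos(azim)   # toward the target
    vy = v0 * math.cos(elev) * math.sin(azim)   # lateral
    vz = v0 * math.sin(elev)                    # vertical
    t = range_m / vx                            # time to reach the target plane
    return vy * t, vz * t - 0.5 * g * t ** 2

def score(y, z, ring_width_m=0.061):
    """Map the radial miss distance to a 10..1 ring value (0 = off target);
    the ring width here is an assumed value, not from the paper."""
    r = math.hypot(y, z)
    return 10 - int(r // ring_width_m) if r < 10 * ring_width_m else 0

offset = arrow_impact(v0=55.0, elev_deg=1.2, azim_deg=0.3, range_m=18.0)
print(offset, score(*offset))
```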
Calibration of the Nikon 200 for Close Range Photogrammetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheriff, Lassana; /City Coll., N.Y. /SLAC
2010-08-25
The overall objective of this project is to study the stability and reproducibility of the calibration parameters of the Nikon D200 camera with a Nikkor 20 mm lens for close-range photogrammetric surveys. The well-known 'central perspective projection' model is used to determine the camera parameters for interior orientation. The Brown model extends it with the introduction of radial distortion and other less critical variables. The calibration process requires a dense network of targets to be photographed at different angles. For faster processing, reflective coded targets are chosen. Two scenarios have been used to check the reproducibility of the parameters. The first one uses a flat 2D wall with 141 coded targets and 12 custom targets that were previously measured with a laser tracker. The second one is a 3D Unistrut structure with a combination of coded targets and 3D reflective spheres. The study has shown that this setup is only stable during a short period of time. In conclusion, this camera is acceptable when calibrated before each use. Future work should include actual field tests and possible mechanical improvements, such as securing the lens to the camera body.
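The Brown extension of the central perspective model adds radial and decentring terms to the ideal projection. A small sketch of that mapping follows; the coefficient values are purely illustrative, not the calibrated D200 parameters.

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Brown-Conrady model: map ideal (undistorted) normalised image
    coordinates to their distorted positions with radial (k1..k3) and
    tangential / decentring (p1, p2) terms."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# Illustrative coefficients only; the actual D200/20-mm values would come from
# the bundle adjustment over the coded-target network.
print(brown_distort(0.2, -0.1, k1=-0.12, k2=0.03, k3=0.0, p1=1e-4, p2=-2e-4))
```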
Intelligent person identification system using stereo camera-based height and stride estimation
NASA Astrophysics Data System (ADS)
Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo
2005-05-01
In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by using a threshold in the YCbCr color model; by correlating this segmented face area with the right input image, the location coordinates of the target face can be acquired. These values are then used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the angles of the pan and tilt, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person could be identified with these extracted height and stride parameters.
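The triangulation and height extraction can be illustrated with the standard parallel-stereo depth relation plus the pan/tilt geometry; the focal length, baseline, tilt angles, and camera height below are assumed values for illustration only.

```python
import math

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Standard parallel-stereo relation: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def target_height(depth_m, tilt_top_deg, tilt_bottom_deg, camera_height_m):
    """Rough height estimate from the tilt angles that bring the head and the
    feet of the person onto the image centre (all values are assumptions)."""
    top = camera_height_m + depth_m * math.tan(math.radians(tilt_top_deg))
    bottom = camera_height_m + depth_m * math.tan(math.radians(tilt_bottom_deg))
    return top - bottom

z = depth_from_disparity(f_px=900.0, baseline_m=0.12, disparity_px=36.0)   # 3.0 m
print(z, target_height(z, tilt_top_deg=5.0, tilt_bottom_deg=-26.0, camera_height_m=1.5))
```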
A target detection multi-layer matched filter for color and hyperspectral cameras
NASA Astrophysics Data System (ADS)
Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.
2018-05-01
In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, the use of matched filtering with spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter where ordinary spatially matched filters are used before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
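One common way to realize the spectral layer of such a filter is the covariance-whitened matched filter; a sketch under that assumption is shown below (the paper's multi-layer formulation may differ in detail).

```python
import numpy as np

def spectral_matched_filter(cube, target_spectrum):
    """Classic matched filter on a hyperspectral cube of shape (rows, cols, bands):
    whiten with the background covariance and correlate with the known target
    spectrum, returning a per-pixel detection score."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(bands)   # regularised estimate
    cov_inv = np.linalg.inv(cov)
    s = target_spectrum - mu
    w = cov_inv @ s / (s @ cov_inv @ s)          # normalised filter weights
    scores = (X - mu) @ w
    return scores.reshape(rows, cols)
```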
NASA Astrophysics Data System (ADS)
Dickensheets, David L.; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind
2016-02-01
Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~ 90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.
Land-based infrared imagery for marine mammal detection
NASA Astrophysics Data System (ADS)
Graber, Joseph; Thomson, Jim; Polagye, Brian; Jessup, Andrew
2011-09-01
A land-based infrared (IR) camera is used to detect endangered Southern Resident killer whales in Puget Sound, Washington, USA. The observations are motivated by a proposed tidal energy pilot project, which will be required to monitor for environmental effects. Potential monitoring methods also include visual observation, passive acoustics, and active acoustics. The effectiveness of observations in the infrared spectrum is compared to observations in the visible spectrum to assess the viability of infrared imagery for cetacean detection and classification. Imagery was obtained at Lime Kiln Park, Washington from 7/6/10-7/9/10 using a FLIR Thermovision A40M infrared camera (7.5-14μm, 37°HFOV, 320x240 pixels) under ideal atmospheric conditions (clear skies, calm seas, and wind speed 0-4 m/s). Whales were detected during both day (9 detections) and night (75 detections) at distances ranging from 42 to 162 m. The temperature contrast between dorsal fins and the sea surface ranged from 0.5 to 4.6 °C. Differences in emissivity from sea surface to dorsal fin are shown to aid detection at high incidence angles (near grazing). A comparison to theory is presented, and observed deviations from theory are investigated. A guide for infrared camera selection based on site geometry and desired target size is presented, with specific considerations regarding marine mammal detection. Atmospheric conditions required to use visible and infrared cameras for marine mammal detection are established and compared with 2008 meteorological data for the proposed tidal energy site. Using conservative assumptions, infrared observations are predicted to provide a 74% increase in hours of possible detection, compared with visual observations.
Graphic Arts: Process Camera, Stripping, and Platemaking. Teacher Guide.
ERIC Educational Resources Information Center
Feasley, Sue C., Ed.
This curriculum guide is the second in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task…
1970 Supplement to the Guide to Microreproduction Equipment.
ERIC Educational Resources Information Center
Ballou, Hubbard W., Ed.
The time period covered by this guide runs from the end of 1968 to the middle of 1970. Microreproduction cameras, microform readers, reader/printers, processors, contact printers, computer output microfilm equipment, and other special microform equipment and accessories produced during this time span are listed. Most of the equipment is domestic,…
Light field analysis and its applications in adaptive optics and surveillance systems
NASA Astrophysics Data System (ADS)
Eslami, Mohammed Ali
An image can only be as good as the optics of a camera or any other imaging system allows it to be. An imaging system is merely a transformation that takes a 3D world coordinate to a 2D image plane. This can be done through both linear and non-linear transfer functions. Depending on the application at hand, it is easier to use some models of imaging systems over others. The most well-known models for optical systems are the 1) pinhole model, 2) thin lens model and 3) thick lens model. Using light-field analysis, the connection between these different models is described. A novel figure of merit is presented for choosing one optical model over another for certain applications. After analyzing these optical systems, their applications in plenoptic cameras for adaptive optics are introduced. A new technique that uses a plenoptic camera to extract information about a localized distorted planar wavefront is described. CODEV simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of angles that can be extracted, assuming a paraxial imaging system. As a final application, a novel dual-PTZ surveillance system to track a target through space is presented. 22X optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship based on encoder readouts rather than complicated image processing algorithms for real-time target tracking. As the target moves out of a region of interest in the master camera, the master camera is moved to force the target back into the region of interest. Once the master camera is moved, a precalibrated lookup table is interpolated to compute the relationship between the master and slave cameras. The homography that relates the pixels of the master camera to the pan/tilt settings of the slave camera then continues to follow the planar trajectories of targets with high accuracy as they move through space.
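As a concrete example of the simplest of the three models, the pinhole projection of a world point onto the image plane can be written as a 3x4 matrix applied in homogeneous coordinates; the intrinsic values below are assumptions for illustration.

```python
import numpy as np

def pinhole_project(K, R, t, points_world):
    """Pinhole model: x ~ K [R | t] X, mapping 3-D world points to 2-D pixels."""
    P = K @ np.hstack([R, t.reshape(3, 1)])                 # 3x4 projection matrix
    Xh = np.hstack([points_world, np.ones((len(points_world), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]                             # perspective divide

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])   # assumed intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
print(pinhole_project(K, R, t, np.array([[0.1, -0.05, 2.0]])))  # point 2 m ahead
```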
Automated Camera Array Fine Calibration
NASA Technical Reports Server (NTRS)
Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang
2008-01-01
Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.
On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements
ERIC Educational Resources Information Center
Bangou, Francis
2014-01-01
The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…
OSIRIS-REx Asteroid Sample Return Mission Image Analysis
NASA Astrophysics Data System (ADS)
Chevres Fernandez, Lee Roger; Bos, Brent
2018-01-01
NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners. NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch-And-Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB. Assessment of the TAGCAMS in-flight performance using flight imagery was done to characterize camera performance. One specific area of investigation that was targeted was bad pixel mapping. A recent phase of the mission, known as the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable” pixels, possibly under-responsive, using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu. These analyses will provide a broader understanding of the functionality of the camera system, which will in turn aid the fly-down to the asteroid by allowing selection of a suitable landing and sampling location.
NASA Astrophysics Data System (ADS)
Mi, Yuhe; Huang, Yifan; Li, Lin
2015-08-01
Based on the location technique of beacon photogrammetry, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate the two charge-coupled device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
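The background-elimination and ranging steps can be sketched as image differencing followed by the usual disparity-to-range relation; the threshold, focal length, and baseline values below are assumptions for illustration.

```python
import numpy as np

def isolate_laser_spot(frame_before, frame_with_laser, threshold=30):
    """Remove the common background by differencing the images taken with the
    laser off and on; only the laser spot (plus noise) survives the threshold."""
    diff = frame_with_laser.astype(np.int16) - frame_before.astype(np.int16)
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()                 # spot centroid in pixels

def range_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Stereometric range from the horizontal disparity of the laser spot."""
    return focal_px * baseline_m / (x_left - x_right)
```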
2008-08-18
CAPE CANAVERAL, Fla. – In the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center, a technician guides a crane for attachment to the radiator on the Wide Field Camera 3, or WFC3. The WFC3 will be transferred to the Super Lightweight Interchangeable Carrier. WFC3 is part of the payload on space shuttle Atlantis' STS-125 mission for the fifth and final Hubble servicing flight to NASA's Hubble Space Telescope. The radiator is the "outside" of WFC3 that will be exposed to space and will expel heat out of Hubble and into space through black body radiation. As Hubble enters the last stage of its life, WFC3 will be Hubble's next evolutionary step, allowing Hubble to peer ever further into the mysteries of the cosmos. WFC3 will study a diverse range of objects and phenomena, from young and extremely distant galaxies, to much more nearby stellar systems, to objects within our very own solar system. WFC3 will take the place of Wide Field Planetary Camera 2, which astronauts will bring back to Earth aboard the shuttle. Launch of Atlantis is targeted at 1:34 a.m. EDT Oct. 8. Photo credit: NASA/Amanda Diller
Bowles, H; Sánchez, N; Tapias, A; Paredes, P; Campos, F; Bluemel, C; Valdés Olmos, R A; Vidal-Sicart, S
Radio-guided surgery has been developed for application in those diseases scheduled for surgical management, particularly in areas of complex anatomy. This is based on the use of pre-operative scintigraphic planar, tomographic and fused SPECT/CT images, and the possibility of 3D reconstruction for the subsequent intraoperative locating of active lesions using handheld devices (detection probes, gamma cameras, etc.). New tracers and technologies have also been incorporated into these surgical procedures. The combination of visual and acoustic signals during the intraoperative procedure has become possible with new portable imaging modalities. In daily practice, the images offered by these techniques and devices combine perioperative nuclear medicine imaging with the superior resolution of additional optical guidance in the operating room. In many ways they provide real-time images, allowing accurate guidance during surgery, a reduction in the time required for tissue location and an anatomical environment for surgical recognition. All these approaches have been included in the concept known as (radio) Guided intraOperative Scintigraphic Tumour Targeting (GOSTT). This article offers a general view of different nuclear medicine and allied technologies used for several GOSTT procedures, and illustrates the crossing of technological frontiers in radio-guided surgery.
OPSO - The OpenGL based Field Acquisition and Telescope Guiding System
NASA Astrophysics Data System (ADS)
Škoda, P.; Fuchs, J.; Honsa, J.
2006-07-01
We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field- and slit-viewing CCD cameras with image intensifiers provide only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image-enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include sliding averaging of hundreds of frames with bad-pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross-cuts of the seeing disk within a given intensity range, and more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI is, however, dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using the shared memory segments.
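A rough sketch of the enhancement core (sliding averaging with bad-pixel masking and outlier rejection) is given below in Python/NumPy rather than the C/OpenGL implementation described; the buffer depth and clipping threshold are illustrative values.

```python
import numpy as np
from collections import deque

class SlidingAverager:
    """Running average over the last N grabbed frames with a static bad-pixel
    mask and simple outlier rejection, in the spirit of the OPSO core."""
    def __init__(self, bad_pixel_mask, depth=100, clip_sigma=4.0):
        self.mask = bad_pixel_mask          # True where the pixel is defective
        self.buf = deque(maxlen=depth)
        self.clip = clip_sigma

    def add(self, frame):
        frame = frame.astype(np.float32)
        frame[self.mask] = np.nan           # bad pixels never contribute
        self.buf.append(frame)

    def result(self):
        stack = np.stack(self.buf)
        med = np.nanmedian(stack, axis=0)
        std = np.nanstd(stack, axis=0) + 1e-6
        stack = np.where(np.abs(stack - med) > self.clip * std, np.nan, stack)
        # Outliers and bad pixels are ignored; fully masked pixels stay NaN.
        return np.nanmean(stack, axis=0)
```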
Autonomous pedestrian localization technique using CMOS camera sensors
NASA Astrophysics Data System (ADS)
Chun, Chanwoo
2014-09-01
We present a pedestrian localization technique that does not need infrastructure. The proposed angle-only measurement method requires specially manufactured shoes. Each shoe has two CMOS cameras and two markers, such as LEDs, attached on the inward side. The line-of-sight (LOS) angles towards the two markers on the forward shoe are measured using the two cameras on the rear shoe. Our simulation results show that a pedestrian walking in a shopping mall wearing this device can be accurately guided to the front of a destination store located 100 m away, if the floor plan of the mall is available.
An electrically tunable plenoptic camera using a liquid crystal microlens array.
Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng
2015-05-01
Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion on the background. This results in mixed motion in the scene, and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will lead to many false-positive detection results. In this paper, we suggest a procedure to be used with the traditional moving object detection methods that relaxes the stationary camera restriction, by introducing additional steps before and after the detection. We also describe an FPGA implementation along with the algorithm. The target application is a road vehicle's rear-view camera system. PMID:26712761
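One common pre-detection step of the kind described, compensating the camera's ego-motion before differencing, can be sketched with OpenCV feature tracking and a RANSAC homography; this is a generic illustration under the assumption of grayscale 8-bit frames, not the paper's FPGA pipeline.

```python
import cv2
import numpy as np

def motion_mask(prev, curr, diff_thresh=25):
    """Warp the previous frame onto the current one with a homography estimated
    from tracked background features, then difference the aligned frames so
    that only independently moving objects remain."""
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=400, qualityLevel=0.01, minDistance=7)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
    good0, good1 = p0[st.ravel() == 1], p1[st.ravel() == 1]
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev, H, (curr.shape[1], curr.shape[0]))
    diff = cv2.absdiff(curr, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```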
Laser-Induced-Fluorescence Photogrammetry and Videogrammetry
NASA Technical Reports Server (NTRS)
Danehy, Paul; Jones, Tom; Connell, John; Belvin, Keith; Watson, Kent
2004-01-01
An improved method of dot-projection photogrammetry and an extension of the method to encompass dot-projection videogrammetry overcome some deficiencies of dot-projection photogrammetry as previously practiced. The improved method makes it possible to perform dot-projection photogrammetry or videogrammetry on targets that have previously not been amenable to dot-projection photogrammetry because they do not scatter enough light. Such targets include ones that are transparent, specularly reflective, or dark. In standard dot-projection photogrammetry, multiple beams of white light are projected onto the surface of an object of interest (denoted the target) to form a known pattern of bright dots. The illuminated surface is imaged in one or more cameras oriented at a nonzero angle or angles with respect to a central axis of the illuminating beams. The locations of the dots in the image(s) contain stereoscopic information on the locations of the dots, and, hence, on the location, shape, and orientation of the illuminated surface of the target. The images are digitized and processed to extract this information. Hardware and software to implement standard dot-projection photogrammetry are commercially available. Success in dot-projection photogrammetry depends on achieving sufficient signal-to-noise ratios: that is, it depends on scattering of enough light by the target so that the dots as imaged in the camera(s) stand out clearly against the ambient-illumination component of the image of the target. In one technique used previously to increase the signal-to-noise ratio, the target is illuminated by intense, pulsed laser light and the light entering the camera(s) is band-pass filtered at the laser wavelength. Unfortunately, speckle caused by the coherence of the laser light engenders apparent movement in the projected dots, thereby giving rise to errors in the measurement of the centroids of the dots and corresponding errors in the computed shape and location of the surface of the target. The improved method is denoted laser-induced-fluorescence photogrammetry.
Look to the Sky. An All-Purpose Interdisciplinary Guide to Astronomy. Grades 4-12.
ERIC Educational Resources Information Center
DeBruin, Jerry; Murad, Don
This guide features materials and activities about stars for integration into other academic disciplines. Part one describes how to begin to look to the sky, including usage of the camera, binoculars, and telescope. Part two, "Keep Up to Date," introduces information on resource materials, such as astronomy books, magazines, newsletters,…
ERIC Educational Resources Information Center
McMillan, Samuel, Ed.; Quinto, Frances, Ed.
Designed as a teacher's guide to stimulate student interest, creativity, and achievement, this teaching guide includes 132 projects which involve the use of photography as an instructional tool. The volume is divided into subject areas with grade levels ranging from kindergarten through higher education. Most projects are multidisciplinary, and…
Backlighting Direct-Drive Cryogenic DT Implosions on OMEGA
NASA Astrophysics Data System (ADS)
Stoeckl, C.
2016-10-01
X-ray backlighting has been frequently used to measure the in-flight characteristics of an imploding shell in both direct- and indirect-drive inertial confinement fusion implosions. These measurements provide unique insight into the early time and stagnation stages of an implosion and guide the modeling efforts to improve the target designs. Backlighting a layered DT implosion on OMEGA is a particular challenge because the opacity of the DT shell is low, the shell velocity is high, the size and wall thickness of the shell is small, and the self-emission from the hot core at the onset of burn is exceedingly bright. A framing-camera-based crystal imaging system with a Si Heα backlighter at 1.865keV driven by 10-ps short pulses from OMEGA EP was developed to meet these radiography challenges. A fast target inserter was developed to accurately place the Si backlighter foil at a distance of 5 mm to the implosion target following the removal of the cryogenic shroud and an ultra-stable triggering system was implemented to reliably trigger the framing camera coincident with the arrival of the OMEGA EP pulse. This talk will report on a series of implosions in which the DT shell is imaged for a range of convergence ratios and in-flight aspect ratios. The images acquired have been analyzed for low-mode shape variations, the DT shell thickness, the level of ablator mixing into the DT fuel (even 0.1% of carbon mix can be reliably inferred), the areal density of the DT shell, and the impact of the support stalk. The measured implosion performance will be compared with hydrodynamic simulations that include imprint (up to mode 200), cross-beam energy transfer, nonlocal thermal transport, and initial low-mode perturbations such as power imbalance and target misalignment. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Target recognitions in multiple-camera closed-circuit television using color constancy
NASA Astrophysics Data System (ADS)
Soori, Umair; Yuen, Peter; Han, Ji Wen; Ibrahim, Izzati; Chen, Wentao; Hong, Kan; Merfort, Christian; James, David; Richardson, Mark
2013-04-01
People tracking in crowded scenes from closed-circuit television (CCTV) footage has been a popular and challenging task in computer vision. Due to the limited spatial resolution in the CCTV footage, the color of people's dress may offer an alternative feature for their recognition and tracking. However, there are many factors, such as variable illumination conditions, viewing angles, and camera calibration, that may induce illusive modification of intrinsic color signatures of the target. Our objective is to recognize and track targets in multiple camera views using color as the detection feature, and to understand if a color constancy (CC) approach may help to reduce these color illusions due to illumination and camera artifacts and thereby improve target recognition performance. We have tested a number of CC algorithms using various color descriptors to assess the efficiency of target recognition from a real multicamera Imagery Library for Intelligent Detection Systems (i-LIDS) data set. Various classifiers have been used for target detection, and the figure of merit to assess the efficiency of target recognition is achieved through the area under the receiver operating characteristics (AUROC). We have proposed two modifications of luminance-based CC algorithms: one with a color transfer mechanism and the other using a pixel-wise sigmoid function for an adaptive dynamic range compression, a method termed enhanced luminance reflectance CC (ELRCC). We found that both algorithms improve the efficiency of target recognitions substantially better than that of the raw data without CC treatment, and in some cases the ELRCC improves target tracking by over 100% within the AUROC assessment metric. The performance of the ELRCC has been assessed over 10 selected targets from three different camera views of the i-LIDS footage, and the averaged target recognition efficiency over all these targets is found to be improved by about 54% in AUROC after the data are processed by the proposed ELRCC algorithm. This amount of improvement represents a reduction of probability of false alarm by about a factor of 5 at the probability of detection of 0.5. Our study concerns mainly the detection of colored targets; and issues for the recognition of white or gray targets will be addressed in a forthcoming study.
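The exact ELRCC formulation is not given in the abstract; the following is a minimal sketch of a pixel-wise sigmoid dynamic-range compression applied to a luminance channel, in the spirit described above, with the gain and mid-point parameters as assumptions.

```python
# Illustrative pixel-wise sigmoid compression of luminance; chromatic ratios
# are preserved by rescaling the RGB channels with the same factor.
import numpy as np

def sigmoid_compress(rgb, gain=8.0, midpoint=None):
    """rgb: float array in [0, 1] with shape (H, W, 3)."""
    rgb = np.clip(rgb.astype(np.float64), 1e-6, 1.0)
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    if midpoint is None:
        midpoint = lum.mean()                      # adapt to scene brightness
    compressed = 1.0 / (1.0 + np.exp(-gain * (lum - midpoint)))
    # stretch the sigmoid output back to span [0, 1]
    lo = 1.0 / (1.0 + np.exp(gain * midpoint))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))
    compressed = (compressed - lo) / (hi - lo)
    scale = compressed / lum
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```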
NASA Astrophysics Data System (ADS)
Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Yoshimura, S.; Seto, Y.; Kato, S.; Takahashi, M.; Momose, T.
2016-08-01
As well as pre-operative roadmapping by 18F-Fluoro-2-deoxy-2-D-glucose (FDG) positron emission tomography, intra-operative localization of the tracer is important to identify local margins for less-invasive surgery, especially FDG-guided surgery. The objective of this paper is to develop a laparoscopic Compton camera and system aimed at use for intra-operative FDG imaging for accurate and less-invasive dissections. The laparoscopic Compton camera consists of four layers of a 12-pixel cross-shaped array of GFAG crystals (2× 2× 3 mm3) and through silicon via multi-pixel photon counters and dedicated individual readout electronics based on a dynamic time-over-threshold method. Experimental results yielded a spatial resolution of 4 mm (FWHM) for a 10 mm working distance and an absolute detection efficiency of 0.11 cps kBq-1, corresponding to an intrinsic detection efficiency of ˜0.18%. In an experiment using a NEMA-like well-shaped FDG phantom, a φ 5× 10 mm cylindrical hot spot was clearly obtained even in the presence of a background distribution surrounding the Compton camera and the hot spot. We successfully obtained reconstructed images of a resected lymph node and primary tumor ex vivo after FDG administration to a patient having esophageal cancer. These performance characteristics indicate a new possibility of FDG-directed surgery by using a Compton camera intra-operatively.
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
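A minimal sketch of the optimisation described above, assuming the basis photos (one per light position) and the target sketch are available as arrays: non-negative least squares selects weights for the weighted sum, and a second pass keeps only the few strongest lights. The `keep` count and the NNLS formulation are illustrative choices, not necessarily the authors' method.

```python
# Choose non-negative weights so a weighted sum of basis photos matches the
# user's target sketch; images may be downsampled first to keep NNLS tractable.
import numpy as np
from scipy.optimize import nnls

def select_lights(basis_images, target, keep=4):
    """basis_images: (N, H, W) float array; target: (H, W). Returns weights."""
    N = basis_images.shape[0]
    A = basis_images.reshape(N, -1).T          # each column is one basis photo
    b = target.ravel()
    w, _ = nnls(A, b)                          # dense non-negative solution
    top = np.argsort(w)[-keep:]                # keep the strongest lights only
    w_sub, _ = nnls(A[:, top], b)              # re-solve on that small subset
    weights = np.zeros(N)
    weights[top] = w_sub
    return weights

# relit = np.tensordot(select_lights(basis, sketch), basis, axes=1)
```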
Intelligent navigation and accurate positioning of an assist robot in indoor environments
NASA Astrophysics Data System (ADS)
Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke
2017-12-01
Robot navigation and accurate positioning in indoor environments remain challenging tasks, especially in applications that assist disabled and/or elderly people in museum or art gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot so that it reaches the goal location safely, imitating the supervisor's motions and positioning itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions and uses a low-cost camera to track the target picture and a laser range finder to ensure safe navigation. Results show that the neural controller trained with the Conjugate Gradient Backpropagation algorithm gives a robust response, guiding the mobile robot accurately to the goal position.
Holländer, Sebastian W; Klingen, Hans Joachim; Fritz, Marliese; Djalali, Peter; Birk, Dieter
2014-11-01
Despite advances in instruments and techniques in laparoscopic surgery, one thing remains uncomfortable: camera assistance. The aim of this study was to investigate the benefit of a joystick-guided camera holder (SoloAssist®, Aktormed, Barbing, Germany) for laparoscopic surgery and to compare robotic assistance with human assistance. 1033 consecutive laparoscopic procedures were performed with the assistance of the SoloAssist®. Failures and aborts were documented, and nine surgeons were interviewed by questionnaire regarding their experiences. In 71 of 1033 procedures, robotic assistance was aborted and the procedure was continued manually, mostly because of frequent changes of position, narrow spaces, and adverse angles. One case of short circuit was reported. Emergency stop was necessary in three cases due to uncontrolled movement into the abdominal cavity. Eight of nine surgeons prefer robotic to human assistance, mostly because of the steady image and self-control. The SoloAssist® robot is a reliable system for laparoscopic procedures. Emergency shutdown was necessary in only three cases. Some minor weak spots were identified. Most surgeons prefer robotic assistance to human assistance. We feel that the SoloAssist® makes standard laparoscopic surgery more comfortable and that further development is desirable, but it cannot fully replace a human assistant.
Kim, Joongheon; Kim, Jong-Kook
2016-01-01
This paper addresses computation procedures for estimating the impact of interference in 60 GHz IEEE 802.11ad uplink access, used to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The large-scale visual information acquired from the surveillance cameras is used to organize the big-data database, so this estimation is essential for constructing a centralized, cloud-enabled surveillance database. The performance estimation study captures the interference imposed on a target cloud access point by the multiple interference components generated by 60 GHz wireless transmissions from nearby surveillance cameras to their own associated cloud access points. For this uplink interference scenario, the interference impact on the main wireless transmission, from a target surveillance camera to its associated cloud access point, is estimated for a number of settings, taking into account 60 GHz radiation characteristics and antenna radiation pattern models.
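A hedged sketch of this kind of uplink interference estimate, assuming free-space path loss at 60 GHz and a simple cone antenna-gain model; the transmit power, noise figure, beamwidth and gain values are placeholders rather than the paper's settings.

```python
# Uplink SINR at a target access point with directional transmit/receive gains
# and aggregated interference from neighbouring camera uplinks (all assumed).
import numpy as np

C = 3e8
FREQ = 60e9                                      # 60 GHz carrier
TX_DBM = 10.0                                    # assumed camera transmit power
NOISE_DBM = -174 + 10 * np.log10(2.16e9) + 7     # thermal noise over 2.16 GHz + NF

def fspl_db(d_m):
    return 20 * np.log10(4 * np.pi * d_m * FREQ / C)

def cone_gain_db(boresight, direction, beamwidth_deg=30, g_main=15, g_side=-10):
    """Main-lobe gain inside the half-beamwidth, flat side-lobe level outside."""
    cosang = np.dot(boresight, direction) / (
        np.linalg.norm(boresight) * np.linalg.norm(direction))
    ang = np.degrees(np.arccos(np.clip(cosang, -1, 1)))
    return g_main if ang <= beamwidth_deg / 2 else g_side

def uplink_sinr_db(cam, ap, interferers):
    """cam, ap: 2-D positions (numpy arrays) of the target camera and its AP;
    interferers: list of (position, boresight) of other transmitting cameras."""
    d = np.linalg.norm(ap - cam)
    sig = TX_DBM + 2 * cone_gain_db(ap - cam, ap - cam) - fspl_db(d)
    interf_mw = 0.0
    for pos, bore in interferers:
        di = np.linalg.norm(ap - pos)
        p = TX_DBM + cone_gain_db(bore, ap - pos) \
            + cone_gain_db(cam - ap, pos - ap) - fspl_db(di)
        interf_mw += 10 ** (p / 10)
    denom_mw = interf_mw + 10 ** (NOISE_DBM / 10)
    return sig - 10 * np.log10(denom_mw)
```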
An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.
Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing
2017-03-20
In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods.
An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks
Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing
2017-01-01
In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods. PMID:28335537
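A minimal sketch of a cluster-head choice driven by remaining energy and distance-to-target, as described in the abstract above; the linear weighting and normalisation are illustrative assumptions rather than the authors' exact cost function.

```python
# Score each candidate camera node by normalised residual energy and proximity
# to the target, then pick the best-scoring node as cluster head.
import numpy as np

def select_cluster_head(nodes, target_xy, w_energy=0.6, w_dist=0.4):
    """nodes: list of dicts {'id', 'xy': (x, y), 'energy': joules}."""
    energies = np.array([n['energy'] for n in nodes], dtype=float)
    dists = np.array([np.hypot(n['xy'][0] - target_xy[0],
                               n['xy'][1] - target_xy[1]) for n in nodes])
    e_norm = energies / energies.max()           # more residual energy is better
    d_norm = 1.0 - dists / dists.max()           # closer to the target is better
    scores = w_energy * e_norm + w_dist * d_norm
    return nodes[int(np.argmax(scores))]['id']

# head = select_cluster_head(
#     [{'id': 1, 'xy': (0, 0), 'energy': 4.2},
#      {'id': 2, 'xy': (3, 1), 'energy': 5.0}], target_xy=(2.5, 0.5))
```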
Visual Target Tracking in the Presence of Unknown Observer Motion
NASA Technical Reports Server (NTRS)
Williams, Stephen; Lu, Thomas
2009-01-01
Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
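A generic constant-velocity Kalman filter of the kind such a tracking phase could use once observer motion has been removed by registration; the state layout and noise levels are assumptions, not the paper's implementation.

```python
# Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = the
# registered image position of the target.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, dt=1.0, q=1.0, r=4.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)                 # process noise (assumed)
        self.R = r * np.eye(2)                 # measurement noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                      # predicted image position

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```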
Analysis of the effect on optical equipment caused by solar position in target flight measure
NASA Astrophysics Data System (ADS)
Zhu, Shun-hua; Hu, Hai-bin
2012-11-01
Optical equipment is widely used to measure flight parameters in target flight performance tests, but the equipment is sensitive to the Sun's rays. To prevent sunlight from shining directly into the camera lens while target flight parameters are being measured, the angle between the observation direction and the line connecting the camera lens to the Sun should be kept large. This article introduces a method for calculating the solar azimuth and altitude at the equipment's location for any time and place on Earth, a model of the equipment's observation direction, and a model for calculating the angle between the observation direction and the lens-to-Sun line. It also presents simulations of the effect of solar position on the optical equipment for different times, dates, months and target flight directions.
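A hedged sketch of the geometric check described above: a low-accuracy solar-position approximation (declination plus hour angle) and the angle between the observation direction and the camera-to-Sun direction. The formulas and conventions here are standard approximations, not the article's models; for real test planning a validated ephemeris would replace this.

```python
# Approximate solar elevation/azimuth and the separation angle between the
# line of sight and the Sun.  Azimuth is measured clockwise from north.
import numpy as np

def solar_elevation_azimuth(day_of_year, solar_hour, lat_deg):
    decl = np.radians(23.45) * np.sin(np.radians(360 * (284 + day_of_year) / 365))
    lat = np.radians(lat_deg)
    hour_angle = np.radians(15.0 * (solar_hour - 12.0))
    sin_el = (np.sin(lat) * np.sin(decl)
              + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
    el = np.arcsin(sin_el)
    cos_az = (np.sin(decl) - np.sin(el) * np.sin(lat)) / (np.cos(el) * np.cos(lat))
    az = np.arccos(np.clip(cos_az, -1, 1))
    if hour_angle > 0:                         # afternoon: Sun is to the west
        az = 2 * np.pi - az
    return np.degrees(el), np.degrees(az)

def sun_separation_deg(obs_dir_enu, sun_el_deg, sun_az_deg):
    """Angle between the observation direction (east, north, up vector) and
    the camera-to-Sun direction; keep this large to avoid direct glare."""
    el, az = np.radians(sun_el_deg), np.radians(sun_az_deg)
    sun_dir = np.array([np.cos(el) * np.sin(az),     # east
                        np.cos(el) * np.cos(az),     # north
                        np.sin(el)])                 # up
    cosang = np.dot(obs_dir_enu, sun_dir) / np.linalg.norm(obs_dir_enu)
    return np.degrees(np.arccos(np.clip(cosang, -1, 1)))
```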
Vision-guided gripping of a cylinder
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1991-01-01
The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
Daytime Aspect Camera for Balloon Altitudes
NASA Technical Reports Server (NTRS)
Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.
2002-01-01
We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40 km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600- to 1000-nm region of the spectrum, successfully provides daytime aspect information of approx. 10 arcsec resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models used to design the camera, but the daytime stellar magnitude limit was lower than expected due to longitudinal chromatic aberration in the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.
A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.
Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi
2016-08-30
This paper proposes a novel infrared camera array guidance system capable of tracking a fixed-wing unmanned air vehicle (UAV) and providing its real-time position and speed during landing. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and a near-infrared laser lamp; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system can guide the UAV to land safely and accurately in real time. Moreover, the measurement and control range of our system exceeds 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.
NASA Astrophysics Data System (ADS)
de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.
2011-05-01
Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow field of view camera on a pan-tilt pedestal. With a narrow field of view camera tracking a moving target, the object is constant and the background changes. A panoramic camera can model the entire scene, or background, and the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground-truth data set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, allowing its position in the array's field of view to be determined. Common tracking techniques, including level sets, Kalman filters and particle filters, were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques, including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques, were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility and sea clutter such as white caps is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
NASA Astrophysics Data System (ADS)
Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Linkosalmi, Maiju; Melih Tanis, Cemal; Tuovinen, Juha-Pekka; Nadir Arslan, Ali
2018-01-01
In recent years, monitoring of the status of ecosystems using low-cost web (IP) or time lapse cameras has received wide interest. With broad spatial coverage and high temporal resolution, networked cameras can provide information about snow cover and vegetation status, serve as ground truths to Earth observations and be useful for gap-filling of cloudy areas in Earth observation time series. Networked cameras can also play an important role in supplementing laborious phenological field surveys and citizen science projects, which also suffer from observer-dependent observation bias. We established a network of digital surveillance cameras for automated monitoring of phenological activity of vegetation and snow cover in the boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. Here, we document the network, basic camera information and access to images in the permanent data repository (http://www.zenodo.org/communities/phenology_camera/). Individual DOI-referenced image time series consist of half-hourly images collected between 2014 and 2016 (https://doi.org/10.5281/zenodo.1066862). Additionally, we present an example of a colour index time series derived from images from two contrasting sites.
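One widely used colour index that such image series support is the green chromatic coordinate, GCC = G/(R+G+B), averaged over a region of interest; the sketch below is illustrative, and the ROI handling and file layout are assumptions rather than the network's actual processing chain.

```python
# Compute the green chromatic coordinate of one camera image over a fixed ROI.
import numpy as np
from PIL import Image

def gcc(image_path, roi=None):
    """roi: (top, bottom, left, right) pixel bounds; None uses the full frame."""
    rgb = np.asarray(Image.open(image_path).convert('RGB'), dtype=np.float64)
    if roi is not None:
        t, b, l, r = roi
        rgb = rgb[t:b, l:r]
    channel_sums = rgb.reshape(-1, 3).sum(axis=0)
    return channel_sums[1] / channel_sums.sum()

# series = [gcc(p, roi=(100, 400, 200, 600)) for p in sorted(image_paths)]
```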
Apollo 8 Mission image, Target of Opportunity (T/O) 10
1968-12-21
Apollo 8, Moon, Target of Opportunity (T/O) 10, various targets. Latitude 18 degrees South, Longitude 163.50 degrees West. Camera Tilt Mode: High Oblique. Direction: South. Sun Angle: 12 degrees. Original Film Magazine was labeled E. Camera Data: 70mm Hasselblad; F-Stop: F-5.6; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White, ASA 40. Other Photographic Coverage: Lunar Orbiter 1 (LO I) S-3. Flight Date: December 21-27, 1968.
Technology development: Future use of NASA's large format camera is uncertain
NASA Astrophysics Data System (ADS)
Rey, Charles F.; Fliegel, Ilene H.; Rohner, Karl A.
1990-06-01
The Large Format Camera, developed as a project to verify an engineering concept or design, has been flown only once, in 1984, on the shuttle Challenger. Since this flight, the camera has been in storage. NASA had expected that, following the camera's successful demonstration, other government agencies or private companies with special interests in photographic applications would absorb the costs for further flights using the Large Format Camera. But, because shuttle transportation costs for the Large Format Camera were estimated to be approximately $20 million (in 1987 dollars) per flight and the market for selling Large Format Camera products was limited, NASA was not successful in interesting other agencies or private companies in paying the costs. Using the camera on the space station does not appear to be a realistic alternative. Using the camera aboard NASA's Earth Resources Research (ER-2) aircraft may be feasible. Until the final disposition of the camera is decided, NASA has taken actions to protect it from environmental deterioration. The Government Accounting Office (GAO) recommends that the NASA Administrator should consider, first, using the camera on an aircraft such as the ER-2. NASA plans to solicit the private sector for expressions of interest in such use of the camera, at no cost to the government, and will be guided by the private sector response. Second, GAO recommends that if aircraft use is determined to be infeasible, NASA should consider transferring the camera to a museum, such as the National Air and Space Museum.
Could digital imaging be an alternative for digital colorimeters?
Caglar, Alper; Yamanel, Kivanc; Gulsahi, Kamran; Bagis, Bora; Ozcan, Mutlu
2010-12-01
This study evaluated the colour parameters of composite and ceramic shade guides determined using a colorimeter and digital imaging method with illuminants at different colour temperatures. Two different resin composite shade guides, namely Charisma (Heraeus Kulzer) and Premise (Kerr Corporation), and two different ceramic shade guides, Vita Lumin Vacuum (VITA Zahnfabrik) and Noritake (Noritake Co.), were evaluated at three different colour temperatures (2,700 K, 2,700-6,500 K, and 6500 K) of illuminants. Ten shade tabs were selected (A1, A2, A3, A3,5, A4, B1, B2, B3, C2 and C3) from each shade guide. CIE Lab values were obtained using digital imaging and a colorimeter (ShadeEye NCC Dental Chroma Meter, Shofu Inc.). The data were analysed using two-way ANOVA, and Pearson's correlation. While mean L* values of both composite and ceramic shade guides were not affected from the colour temperature, L* values obtained with the colorimeter showed significantly lower values than those of the digital imaging (p < 0.01). At combined 2,700-6500 K colour temperature, the means of a* values obtained from colorimeter and digital imaging did not show significant differences (p > 0.05). For both composite and ceramic shade guides, L* and b* values obtained from colorimeter and digital imaging method presented a high level of correlation. High-level correlations were also acquired for a* values in all shade guides except for the Charisma composite shade guide. Digital imaging method could be an alternative for the colorimeters unless the proper object-camera distance, digital camera settings and suitable illumination conditions could be supplied. However, variations in shade guides, especially for composites, may affect the correlation.
2001-04-04
One of NASA's newest education publications made its debut at the annual National Council of Teachers of Mathematics (NCTM) conference held in Orlando, Florida, April 5-7. How High Is It? An Educator's Guide with Activities Focused on Scale Models of Distances was presented by Carla Rosenberg of the National Center for Microgravity Research at Glenn Research Center. Rosenberg, an author of the Guide, led teachers in several hands-on activities from the Guide. This image is from a digital still camera; higher resolution is not available.
How High Is It? Workshop at NCTM
NASA Technical Reports Server (NTRS)
2001-01-01
One of NASA's newest education publications made its debut at the annual National Council of Teachers of Mathematics (NCTM) conference held in Orlando, Florida, April 5-7. How High Is It? An Educator's Guide with Activities Focused on Scale Models of Distances was presented by Carla Rosenberg of the National Center for Microgravity Research at Glenn Research Center. Rosenberg, an author of the Guide, led teachers in several hands-on activities from the Guide. This image is from a digital still camera; higher resolution is not available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chemerisov, S.; Bailey, J.; Heltemes, T.
A series of four one-day irradiations was conducted with 100Mo-enriched disk targets. After irradiation, the enriched disks were removed from the target and dissolved. The resulting solution was processed using a NorthStar RadioGenix™ 99mTc generator either at Argonne National Laboratory or at the NorthStar Medical Radioisotopes facility. Runs on the RadioGenix system produced inconsistent analytical results for 99mTc in the Tc/Mo solution. These inconsistencies were attributed to impurities in the solution or improper column packing. During the irradiations, the performance of the optical transition radiation (OTR) and infrared cameras was tested in a high radiation field. The OTR cameras survived all irradiations, while the IR cameras failed every time. The addition of X-ray and neutron shielding improved camera survivability and decreased the number of upsets.
NASA Astrophysics Data System (ADS)
Olweny, Ephrem O.; Tan, Yung K.; Faddegon, Stephen; Jackson, Neil; Wehner, Eleanor F.; Best, Sara L.; Park, Samuel K.; Thapa, Abhas; Cadeddu, Jeffrey A.; Zuzak, Karel J.
2012-03-01
Digital light processing hyperspectral imaging (DLP® HSI) was adapted for use during laparoscopic surgery by coupling a conventional laparoscopic light guide with a DLP-based Agile Light source (OL 490, Optronic Laboratories, Orlando, FL), incorporating a 0° laparoscope, and a customized digital CCD camera (DVC, Austin, TX). The system was used to characterize renal ischemia in a porcine model.
Soft x-ray streak camera for laser fusion applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stradling, G.L.
This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction of laser fusion and laser fusion diagnostics is presented. The need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.
NASA Astrophysics Data System (ADS)
Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.
2007-03-01
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
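The abstract does not give the authors' pose-estimation code (their implementation is in MITK); the following is a generic sketch of the same step using a PnP solve on the tracked navigation-aid points, with the calibration matrix and correspondences as placeholders.

```python
# Recover the endoscope pose from the 3-D positions of the segmented
# navigation aids (TRUS frame) and their tracked 2-D image locations.
import cv2
import numpy as np

def endoscope_pose(aid_points_3d, aid_points_2d, K, dist_coeffs):
    """aid_points_3d: (N, 3) in the TRUS frame; aid_points_2d: (N, 2) pixels;
    N >= 4 points assumed.  Returns rotation matrix and translation of the
    TRUS frame expressed in camera coordinates."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(aid_points_3d, dtype=np.float64),
        np.asarray(aid_points_2d, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# With the pose known, a planned target X (3-D, TRUS frame) projects into the
# video image as x = K @ (R @ X + tvec), followed by perspective division,
# which is how hidden structures can be overlaid for Augmented Reality.
```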
Optofluidic Fluorescent Imaging Cytometry on a Cell Phone
Zhu, Hongying; Mavandadi, Sam; Coskun, Ahmet F.; Yaglidere, Oguzhan; Ozcan, Aydogan
2012-01-01
Fluorescent microscopy and flow cytometry are widely used tools in biomedical sciences. Cost-effective translation of these technologies to remote and resource-limited environments could create new opportunities especially for telemedicine applications. Toward this direction, here we demonstrate the integration of imaging cytometry and fluorescent microscopy on a cell phone using a compact, lightweight, and cost-effective optofluidic attachment. In this cell-phone-based optofluidic imaging cytometry platform, fluorescently labeled particles or cells of interest are continuously delivered to our imaging volume through a disposable microfluidic channel that is positioned above the existing camera unit of the cell phone. The same microfluidic device also acts as a multilayered optofluidic waveguide and efficiently guides our excitation light, which is butt-coupled from the side facets of our microfluidic channel using inexpensive light-emitting diodes. Since the excitation of the sample volume occurs through guided waves that propagate perpendicular to the detection path, our cell-phone camera can record fluorescent movies of the specimens as they are flowing through the microchannel. The digital frames of these fluorescent movies are then rapidly processed to quantify the count and the density of the labeled particles/cells within the target solution of interest. We tested the performance of our cell-phone-based imaging cytometer by measuring the density of white blood cells in human blood samples, which provided a decent match to a commercially available hematology analyzer. We further characterized the imaging quality of the same platform to demonstrate a spatial resolution of ~2 μm. This cell-phone-enabled optofluidic imaging flow cytometer could especially be useful for rapid and sensitive imaging of bodily fluids for conducting various cell counts (e.g., toward monitoring of HIV+ patients) or rare cell analysis as well as for screening of water quality in remote and resource-poor settings. PMID:21774454
Optofluidic fluorescent imaging cytometry on a cell phone.
Zhu, Hongying; Mavandadi, Sam; Coskun, Ahmet F; Yaglidere, Oguzhan; Ozcan, Aydogan
2011-09-01
Fluorescent microscopy and flow cytometry are widely used tools in biomedical sciences. Cost-effective translation of these technologies to remote and resource-limited environments could create new opportunities especially for telemedicine applications. Toward this direction, here we demonstrate the integration of imaging cytometry and fluorescent microscopy on a cell phone using a compact, lightweight, and cost-effective optofluidic attachment. In this cell-phone-based optofluidic imaging cytometry platform, fluorescently labeled particles or cells of interest are continuously delivered to our imaging volume through a disposable microfluidic channel that is positioned above the existing camera unit of the cell phone. The same microfluidic device also acts as a multilayered optofluidic waveguide and efficiently guides our excitation light, which is butt-coupled from the side facets of our microfluidic channel using inexpensive light-emitting diodes. Since the excitation of the sample volume occurs through guided waves that propagate perpendicular to the detection path, our cell-phone camera can record fluorescent movies of the specimens as they are flowing through the microchannel. The digital frames of these fluorescent movies are then rapidly processed to quantify the count and the density of the labeled particles/cells within the target solution of interest. We tested the performance of our cell-phone-based imaging cytometer by measuring the density of white blood cells in human blood samples, which provided a decent match to a commercially available hematology analyzer. We further characterized the imaging quality of the same platform to demonstrate a spatial resolution of ~2 μm. This cell-phone-enabled optofluidic imaging flow cytometer could especially be useful for rapid and sensitive imaging of bodily fluids for conducting various cell counts (e.g., toward monitoring of HIV+ patients) or rare cell analysis as well as for screening of water quality in remote and resource-poor settings.
NASA Astrophysics Data System (ADS)
Kim, Do-Hwi; Han, Kuk-Il; Choi, Jun-Hyuk; Kim, Tae-Kuk
2017-05-01
The infrared (IR) signal emitted by any object above 0 K has been used to detect and recognize the characteristics of that object. Recently, more sensitive IR sensors have been applied in various guided missiles, and they have a crucial influence on an object's survivability. A ship in the marine environment is especially vulnerable to attack by IR-guided missiles, since there are nearly no objects behind which to conceal it. To increase the object's survivability, its IR signal needs to be analyzed properly under various marine environmental conditions. The IR signature of a naval ship consists of the energy emitted by the ship's surface and the energy reflected from external sources. Surface properties such as emissivity and absorptivity vary with the paints applied to the ship, and the reflected IR signal is also affected by the surface radiative properties, the sensor's geometric position and the climatic conditions of the marine environment. Since direct measurement of the IR signal with an IR camera is a costly and time-consuming job, computer simulation methods are being developed rapidly to replace those experimental tasks. In this study, we demonstrate a way of analyzing the IR signal characteristics, using background IR signals measured with an IR camera and target IR signals estimated from computer simulation, to find the seasonal trends of the IR threat to a naval ship. In this process, measured weather data are used to obtain more accurate IR signal conditions for the naval ship. The seasonal change of the IR signal contrast between the naval ship and the marine background shows that the highest contrast radiant intensity (CRI) value appears in early summer.
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
NASA Technical Reports Server (NTRS)
Vaughan, O. H., Jr.
1990-01-01
Information on the data obtained from the Mesoscale Lightning Experiment flown on STS-26 is provided. The experiment used onboard TV cameras and a 35 mm film camera to obtain data. Data from the 35 mm camera are presented. During the mission, the crew had difficulty locating the various targets of opportunity with the TV cameras. To obtain as much data as possible in the short observational timeline allowed due to other commitments, the crew opted to use the hand-held 35 mm camera.
View of camera station located northeast of Building 70022, facing ...
View of camera station located northeast of Building 70022, facing northwest - Naval Ordnance Test Station Inyokern, Randsburg Wash Facility Target Test Towers, Tower Road, China Lake, Kern County, CA
Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target
NASA Astrophysics Data System (ADS)
Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.
2016-06-01
In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. Therefore, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated based on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.
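The authors' exact projection model is not reproduced here; the sketch below only illustrates the ingredients named above: a standard radial plus tangential lateral distortion in the image plane and a two-parameter correction applied to the inverse depth (standing in for the inverse virtual depth), with d1 and d2 as assumed parameters.

```python
# Illustrative focused-plenoptic-style projection with lateral (Brown) and
# assumed depth distortion terms; not the paper's exact parameterisation.
import numpy as np

def distort_lateral(x, y, k1, k2, p1, p2):
    """x, y: normalised image coordinates (metric, z = 1)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

def project(X, fx, fy, cx, cy, k1, k2, p1, p2, d1, d2):
    """X: 3-D point in camera coordinates.  Returns pixel coordinates and a
    distorted inverse depth 1/z' = (1/z) * (1 + d1 + d2 * r2)."""
    x, y = X[0] / X[2], X[1] / X[2]
    xd, yd = distort_lateral(x, y, k1, k2, p1, p2)
    u, v = fx * xd + cx, fy * yd + cy
    r2 = x * x + y * y
    inv_depth_d = (1.0 / X[2]) * (1 + d1 + d2 * r2)   # assumed depth-distortion form
    return u, v, inv_depth_d
```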
Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype
NASA Astrophysics Data System (ADS)
Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille
2012-06-01
The domain of the low light imaging systems progresses very fast, thanks to detection and electronic multiplication technology evolution, such as the emCCD (electron multiplying CCD) or the ebCMOS (electron bombarded CMOS). We present an ebCMOS camera system that is able to track every 2 ms more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The Multiple-Target-Tracking designed and implemented on a rugged workstation is described. The results and the performances of the system on the identification and tracking are presented and discussed.
Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide
2016-07-27
a five year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in...installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely...assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and
Development of an LYSO based gamma camera for positron and scinti-mammography
NASA Astrophysics Data System (ADS)
Liang, H.-C.; Jan, M.-L.; Lin, W.-C.; Yu, S.-F.; Su, J.-L.; Shen, L.-H.
2009-08-01
In this research, the characteristics of combining PSPMTs (position-sensitive photomultiplier tubes) to form a larger detection area are studied. A home-made linear divider circuit was built for signal merging and readout. Borosilicate glass was chosen for scintillation light sharing in the crossover region, and the deterioration caused by this light guide was characterised. The influences of the light guide and the crossover region on the separable crystal size were evaluated. Based on the test results, a gamma camera with a 90 × 90 mm2 crystal block, composed of 2 mm LYSO crystal pixels, was designed and fabricated. Measured performance showed that this camera works well with both 511 keV and lower-energy gammas. The light-loss behaviour within the crossover region was analysed and understood. Count rate measurements showed that the 176Lu intrinsic background did not severely affect single-photon imaging and accounted for less than 1/3 of all acquired events. These results show that, using light-sharing techniques, multiple PSPMTs can be combined in both X and Y directions to build a large-area imaging detector. The camera design also retains the capability for both positron and single-photon breast imaging applications. In the current configuration the separable crystal size is 2 mm, with 2 mm thick glass used for light sharing.
Martian Terrain Near Curiosity Precipice Target
2016-12-06
This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140
Orbital docking system centerline color television camera system test
NASA Technical Reports Server (NTRS)
Mongan, Philip T.
1993-01-01
A series of tests was run to verify that the design of the centerline color television camera (CTVC) system is adequate optically for the STS-71 Space Shuttle Orbiter docking mission with the Mir space station. In each test, a mockup of the Mir consisting of hatch, docking mechanism, and docking target was positioned above the Johnson Space Center's full fuselage trainer, which simulated the Orbiter with a mockup of the external airlock and docking adapter. Test subjects viewed the docking target through the CTVC under 30 different lighting conditions and evaluated target resolution, field of view, light levels, light placement, and methods of target alignment. Test results indicate that the proposed design will provide adequate visibility through the centerline camera for a successful docking, even with a reasonable number of light failures. It is recommended that the flight deck crew have individual switching capability for docking lights to provide maximum shadow management and that centerline lights be retained to deal with light failures and user preferences. Procedures for light management should be developed and target alignment aids should be selected during simulated docking runs.
Construct and face validity of a virtual reality-based camera navigation curriculum.
Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J
2012-10-01
Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple camera tracking paradigm needs to not only spot unknown targets and track them, but also needs to know how to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion allowing it to find targets in motion, even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos over an interval of 1000 frames each from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints. This is how long a fingerprint is good for when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us if a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
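A minimal sketch of the "single Gaussian model" fingerprint idea: the mean and covariance of the object's colour pixels, compared between sightings with a Bhattacharyya distance. The feature choice and the decision threshold are assumptions, not the paper's exact recipe.

```python
# Build a compact colour fingerprint of a segmented master object and compare
# two fingerprints for reacquisition or camera handoff.
import numpy as np

def gaussian_fingerprint(pixels_rgb):
    """pixels_rgb: (N, 3) float array of the object's pixels."""
    mu = pixels_rgb.mean(axis=0)
    cov = np.cov(pixels_rgb, rowvar=False) + 1e-6 * np.eye(3)   # regularised
    return mu, cov

def bhattacharyya(fp_a, fp_b):
    mu1, c1 = fp_a
    mu2, c2 = fp_b
    c = 0.5 * (c1 + c2)
    dmu = (mu1 - mu2).reshape(-1, 1)
    term1 = 0.125 * float(dmu.T @ np.linalg.inv(c) @ dmu)
    term2 = 0.5 * np.log(np.linalg.det(c)
                         / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return term1 + term2

# reacquired = bhattacharyya(stored_fp, candidate_fp) < 1.0   # assumed threshold
```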
Patel, Manish N; Hemal, Ashok K
2016-10-01
Optical imaging is a relatively inexpensive, fast, and sensitive addition to a surgeon's arsenal for the non-invasive detection of malignant dissemination. Optical cameras operating in the near-infrared spectrum can successfully identify injected indocyanine green in lymphatic channels and sentinel lymph nodes. This technology is now being used in the operating room to assist lymph node dissection and improve the prognosis of patients diagnosed with muscle-invasive bladder cancer. Indocyanine green has the potential for many more applications due to its versatility. In the future, it could be used for lymphangiography during nephroureterectomy for upper tract urothelial carcinoma and during adrenal surgery for partial or radical adrenalectomy. Further investigations at multiple centers will validate this technique and its efficiency.
State-Estimation Algorithm Based on Computer Vision
NASA Technical Reports Server (NTRS)
Bayard, David; Brugarolas, Paul
2007-01-01
An algorithm and software to implement the algorithm are being developed as means to estimate the state (that is, the position and velocity) of an autonomous vehicle, relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an imagedata- processing computer that would generate feature-recognition data products.
On-sky performance of the tip-tilt correction system for GLAS using an EMCCD camera
NASA Astrophysics Data System (ADS)
Skvarč, Jure; Tulloch, Simon
2008-07-01
Adaptive optics systems based on laser guide stars still need a natural guide star (NGS) to correct for the image motion caused by the atmosphere and by imperfect telescope tracking. The ability to properly compensate for this motion using a faint NGS is critical to achieve large sky coverage. For the laser guide system (GLAS) on the 4.2 m William Herschel Telescope we designed and tested in the laboratory and on-sky a tip-tilt correction system based on a PC running Linux and an EMCCD technology camera. The control software allows selection of different centroiding algorithms and loop control methods as well as the control parameters. Parameter analysis has been performed using tip-tilt only correction before the laser commissioning and the selected sets of parameters were then used during commissioning of the laser guide star system. We have established the SNR of the guide star as a function of magnitude, depending on the image sampling frequency and on the dichroic used in the optical system; achieving a measurable improvement using full AO correction with NGSes down to magnitude range R=16.5 to R=18. A minimum SNR of about 10 was established to be necessary for a useful correction. The system was used to produce 0.16 arcsecond images in H band using bright NGS and laser correction during GLAS commissioning runs.
Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.
2014-10-01
A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, allowing for multiple reconstruction functions such as refocusing at different depths and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though it was designed to process incoherent images, we found that the plenoptic camera shows high potential in solving coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems to make intelligent analysis and corrections.
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the proposal, realization and verification of a newly developed method for measuring the noiseless, locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling and digitizing steps, which are influenced by additive and spatially discrete photodetector, aliasing and quantization noises. The method applies to the still-camera automatic working regime and uses a static, two-dimensional, spatially continuous, light-reflecting random target with white-noise properties. The theoretical justification of this random-target method is developed using a proposed simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the measurable output and input power spectral densities. The random-target and resultant image data were acquired and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples and other results of the performed measurements demonstrate sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
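The core of the random-target method, expressing the MTF as a normalized, smoothed ratio of the output and input power spectral densities, can be sketched in one dimension with synthetic data. The smoothing choice and the normalization point below are assumptions made for the illustration, not the paper's choices.

```python
import numpy as np

def psd_1d(signal):
    """Power spectral density of a 1-D signal (periodogram estimate)."""
    spectrum = np.fft.rfft(signal - signal.mean())
    return np.abs(spectrum) ** 2 / len(signal)

def mtf_from_random_target(target, image, smooth=5):
    """MTF estimate as sqrt(PSD_out / PSD_in), smoothed and normalized
    at a low, non-zero spatial frequency."""
    ratio = np.sqrt(psd_1d(image) / np.maximum(psd_1d(target), 1e-12))
    kernel = np.ones(smooth) / smooth
    ratio = np.convolve(ratio, kernel, mode="same")   # simple smoothing
    return ratio / ratio[smooth]

# Synthetic test: white-noise target blurred by a Gaussian camera response.
rng = np.random.default_rng(2)
target = rng.normal(size=4096)
x = np.arange(-15, 16)
psf = np.exp(-x ** 2 / (2 * 2.5 ** 2))
psf /= psf.sum()
image = np.convolve(target, psf, mode="same")
mtf = mtf_from_random_target(target, image)
print(mtf[:10])   # decreasing curve, roughly 1 at low spatial frequency
```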
Maritime microwave radar and electro-optical data fusion for homeland security
NASA Astrophysics Data System (ADS)
Seastrand, Mark J.
2004-09-01
US Customs is responsible for monitoring all incoming air and maritime traffic, including the island of Puerto Rico as a US territory. Puerto Rico offers potentially obscure points of entry to drug smugglers. This environment sets forth a formula for an illegal drug trade based relatively near the continental US. The US Customs Caribbean Air and Marine Operations Center (CAMOC), located in Puntas Salinas, has the charter to monitor maritime and Air Traffic Control (ATC) radars. The CAMOC monitors ATC radars and advises the Air and Marine Branch of US Customs of suspicious air activity. In turn, the US Coast Guard and/or US Customs will launch air and sea assets as necessary. The addition of a coastal radar and camera system provides US Customs with a maritime monitoring capability for the northwestern end of Puerto Rico (Figure 1). Command and control of the radar and camera is executed at the CAMOC, located 75 miles away. The Maritime Microwave Surveillance Radar performs search, primary target acquisition and target tracking, while the Midwave Infrared (MWIR) camera performs target identification. This wide-area surveillance, using a combination of radar and MWIR camera, offers the CAMOC a cost- and manpower-effective approach to monitor, track and identify maritime targets.
Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors
Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.
2017-01-01
Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.
Robust human detection, tracking, and recognition in crowded urban areas
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
In this paper, we present algorithms we recently developed to support an automated security surveillance system for very crowded urban areas. In our approach to human detection, the color features are obtained by taking differences of the R, G, B channels and converting R, G, B to HSV (Hue, Saturation, Value) space. Morphological patch filtering and regional minimum and maximum segmentation on the extracted features are applied for target detection. The human tracking approach includes: 1) track candidate selection by color and intensity feature matching; 2) three separate parallel trackers for color, bright (above mean intensity), and dim (below mean intensity) detections, respectively; 3) adaptive track gate size selection for reducing the false tracking probability; and 4) forward position prediction based on previous moving speed and direction, to continue tracking even when detections are missed from frame to frame. Human target recognition is improved with a Super-Resolution Image Enhancement (SRIE) process, which can improve target resolution by 3-5 times and can simultaneously process many tracked targets. Our approach can project tracks from one camera to another camera with a different perspective viewing angle to obtain additional biometric features from different perspective angles, and can continue tracking the same person from the second camera even though the person has moved out of the field of view (FOV) of the first camera, using `Tracking Relay'. Finally, the multiple cameras at different view poses have been geo-rectified to the nadir view plane and geo-registered with Google Earth (or another GIS) to obtain accurate positions (latitude, longitude, and altitude) of the tracked human for pin-point targeting and for a large-area top view of total human motion activity. Preliminary tests of our algorithms indicate that a high probability of detection can be achieved for both moving and stationary humans. Our algorithms can simultaneously track more than 100 human targets with an average tracking period (time length) longer than that of the current state of the art.
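Items 3 and 4 of the tracking approach, adaptive gating and forward position prediction when detections are missed, can be illustrated in a few lines. The constant-velocity extrapolation and the gate growth factor below are our own simplifications, not values from the paper.

```python
def predict_position(history, frames_ahead=1):
    """Extrapolate a track from its last two confirmed pixel positions,
    assuming roughly constant speed and direction between frames."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0
    return x1 + frames_ahead * vx, y1 + frames_ahead * vy

def adaptive_gate(base_size, frames_missed, growth=1.2):
    """Grow the track gate while detections are missing, so the predicted
    position still falls inside the search region."""
    return base_size * growth ** frames_missed

track = [(100, 50), (104, 53)]           # pixel positions in the last two frames
print(predict_position(track, 3))        # where to look 3 frames later
print(adaptive_gate(15, 3))              # enlarged search gate in pixels
```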
Efficient visual grasping alignment for cylinders
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
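The radius-estimation step in the recognition phase relies on how the apparent size of the cylinder changes after a known camera motion toward it. Under a simple pinhole approximation (our simplification, with our own symbols): if the cylinder subtends w1 pixels at unknown range Z and w2 pixels after moving a known distance d closer, then w1 ≈ 2Rf/Z and w2 ≈ 2Rf/(Z − d), giving Z = d·w2/(w2 − w1) and R = w1·Z/(2f). A small sketch:

```python
def cylinder_range_and_radius(w1, w2, d, focal_px):
    """Estimate range Z and radius R of a cylinder from its apparent widths
    w1, w2 (pixels) before and after moving a known distance d toward it,
    assuming a pinhole camera with focal length focal_px (pixels)."""
    if w2 <= w1:
        raise ValueError("apparent width must grow when approaching the target")
    Z = d * w2 / (w2 - w1)            # range before the move
    R = w1 * Z / (2.0 * focal_px)     # cylinder radius
    return Z, R

# Example: width grows from 80 to 100 px after moving 0.10 m closer, f = 800 px.
print(cylinder_range_and_radius(80, 100, 0.10, 800))   # Z = 0.5 m, R = 0.025 m
```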
Robust and Effective Component-based Banknote Recognition for the Blind
Hasanuzzaman, Faiz M.; Yang, Xiaodong; Tian, YingLi
2012-01-01
We develop a novel camera-based computer vision technology to automatically recognize banknotes for assisting visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: high true recognition rate and low false recognition rate, 2) robustness: handles a variety of currency designs and bills in various conditions, 3) high efficiency: recognizes banknotes quickly, and 4) ease of use: helps blind users to aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect if there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system are evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm achieves a 100% true recognition rate and a 0% false recognition rate. Our banknote recognition system is also tested by blind users. PMID:22661884
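The component-based matching idea, local features detected on a reference bill plus a spatial-consistency check in the camera view, can be sketched with OpenCV. The paper uses SURF; the snippet below substitutes ORB only because it ships with stock OpenCV, and the file names and match-count threshold are placeholders.

```python
import cv2
import numpy as np

def bill_present(reference_gray, frame_gray, min_good_matches=25):
    """Return True if enough spatially consistent feature matches are found
    between a reference banknote image and the current camera frame."""
    detector = cv2.ORB_create(nfeatures=1000)          # stand-in for SURF
    kp1, des1 = detector.detectAndCompute(reference_gray, None)
    kp2, des2 = detector.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_good_matches:
        return False
    # Enforce a consistent spatial relationship with a homography + RANSAC.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H is not None and int(mask.sum()) >= min_good_matches

# Usage (assumed file names):
# ref = cv2.imread("us_20_front.png", cv2.IMREAD_GRAYSCALE)
# frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
# print(bill_present(ref, frame))
```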
Kriechbaumer, Thomas; Blackburn, Kim; Breckon, Toby P.; Hamilton, Oliver; Rivas Casado, Monica
2015-01-01
Autonomous survey vessels can increase the efficiency and availability of wide-area river environment surveying as a tool for environment protection and conservation. A key challenge is the accurate localisation of the vessel, where bank-side vegetation or urban settlement preclude the conventional use of line-of-sight global navigation satellite systems (GNSS). In this paper, we evaluate unaided visual odometry, via an on-board stereo camera rig attached to the survey vessel, as a novel, low-cost localisation strategy. Feature-based and appearance-based visual odometry algorithms are implemented on a six-degrees-of-freedom platform operating under guided motion but with stochastic variation in yaw, pitch and roll. Evaluation is based on a 663 m-long trajectory (>15,000 image frames) and statistical error analysis against ground truth position from a target tracking tachymeter integrating electronic distance and angular measurements. The position error of the feature-based technique (mean of ±0.067 m) is three times smaller than that of the appearance-based algorithm. From multi-variable statistical regression, we are able to attribute this error to the depth of tracked features from the camera in the scene and variations in platform yaw. Our findings inform effective strategies to enhance stereo visual localisation for the specific application of river monitoring. PMID:26694411
Autonomous Exploration for Gathering Increased Science
NASA Technical Reports Server (NTRS)
Bornstein, Benjamin J.; Castano, Rebecca; Estlin, Tara A.; Gaines, Daniel M.; Anderson, Robert C.; Thompson, David R.; DeGranville, Charles K.; Chien, Steve A.; Tang, Benyang; Burl, Michael C.;
2010-01-01
The Autonomous Exploration for Gathering Increased Science System (AEGIS) provides automated targeting for remote sensing instruments on the Mars Exploration Rover (MER) mission, which at the time of this reporting has had two rovers exploring the surface of Mars (see figure). Currently, targets for rover remote-sensing instruments must be selected manually based on imagery already on the ground with the operations team. AEGIS enables the rover flight software to analyze imagery onboard in order to autonomously select and sequence targeted remote-sensing observations in an opportunistic fashion. In particular, this technology will be used to automatically acquire sub-framed, high-resolution, targeted images taken with the MER panoramic cameras. This software provides: 1) Automatic detection of terrain features in rover camera images, 2) Feature extraction for detected terrain targets, 3) Prioritization of terrain targets based on a scientist target feature set, and 4) Automated re-targeting of rover remote-sensing instruments at the highest priority target.
Toslak, Devrim; Liu, Changgeng; Alam, Minhaj Nur; Yao, Xincheng
2018-06-01
A portable fundus imager is essential for emerging telemedicine screening and point-of-care examination of eye diseases. However, existing portable fundus cameras have limited field of view (FOV) and frequently require pupillary dilation. We report here a miniaturized indirect ophthalmoscopy-based nonmydriatic fundus camera with a snapshot FOV up to 67° external angle, which corresponds to a 101° eye angle. The wide-field fundus camera consists of a near-infrared light source (LS) for retinal guidance and a white LS for color retinal imaging. By incorporating digital image registration and glare elimination methods, a dual-image acquisition approach was used to achieve reflection artifact-free fundus photography.
Mini gamma camera, camera system and method of use
Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.
2001-01-01
A gamma camera comprising, essentially and in order from the front outer or gamma-ray-impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer, where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
NASA Technical Reports Server (NTRS)
1994-01-01
In laparoscopic surgery, tiny incisions are made in the patient's body and a laparoscope (an optical tube with a camera at the end) is inserted. The camera's image is projected onto two video screens, whose views guide the surgeon through the procedure. AESOP, a medical robot developed by Computer Motion, Inc. with NASA assistance, eliminates the need for a human assistant to operate the camera. The surgeon uses a foot pedal control to move the device, allowing him to use both hands during the surgery. Miscommunication is avoided; AESOP's movement is smooth and steady, and the memory vision is invaluable. Operations can be completed more quickly, and the patient spends less time under anesthesia. AESOP has been approved by the FDA.
Lawrence L.C. Jones; Martin G. Raphael
1993-01-01
Inexpensive camera systems have been successfully used to detect the occurrence of martens, fishers, and other wildlife species. The use of cameras is becoming widespread, and we give suggestions for standardizing techniques so that comparisons of data can occur across the geographic range of the target species. Details are given on equipment needs, setting up the...
NASA Technical Reports Server (NTRS)
1996-01-01
PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.
IR Camera Report for the 7 Day Production Test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holloway, Michael Andrew
2016-02-22
The following report gives a summary of the IR camera performance results and data for the 7-day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run, our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
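A minimal version of that pipeline, per-exposure weight maps from a detail measure and well-exposedness, refinement with a guided filter using the source image as guidance, then a weighted sum, might look like the following grayscale sketch. The Laplacian stands in for the gradient measure and all parameter values are illustrative, not the paper's.

```python
import numpy as np
import cv2

def box(img, r):
    return cv2.blur(img, (2 * r + 1, 2 * r + 1))

def guided_filter(guide, src, r=8, eps=1e-3):
    """Classic gray-scale guided filter; guide and src are float32 images."""
    mean_I, mean_p = box(guide, r), box(src, r)
    cov_Ip = box(guide * src, r) - mean_I * mean_p
    var_I = box(guide * guide, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a, r) * guide + box(b, r)

def fuse_exposures(images, sigma=0.2):
    """Fuse a list of grayscale exposures (float32 in [0, 1]) into one image."""
    weights = []
    for img in images:
        detail = np.abs(cv2.Laplacian(img, cv2.CV_32F))                  # detail measure
        well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))    # closeness to mid-gray
        w = detail * well_exposed + 1e-6
        weights.append(guided_filter(img, w))                            # guidance = source image
    weights = np.clip(np.stack(weights), 1e-6, None)
    weights /= weights.sum(axis=0)
    return np.clip(sum(w * i for w, i in zip(weights, images)), 0, 1)

# Usage (assumed files): under-, mid- and over-exposed shots of the same scene.
# exposures = [cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
#              for f in ("under.png", "mid.png", "over.png")]
# cv2.imwrite("fused.png", (fuse_exposures(exposures) * 255).astype(np.uint8))
```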
Soft X-ray streak camera for laser fusion applications
NASA Astrophysics Data System (ADS)
Stradling, G. L.
1981-04-01
The development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development is reviewed as well as laser fusion and laser fusion diagnostics. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.
ERIC Educational Resources Information Center
Mihalka, Gwendolyn C.; Bolton, Gerre M.
This guide for film study and film making in the secondary English class arranges materials in a sequential order and divides them into four major sections. The section on background information includes: Table of Contents, Design for Use of the Guide, Rationale for the Use of Film Study in the English Class, Objectives for a Film Study and Film…
The Last Meter: Blind Visual Guidance to a Target.
Manduchi, Roberto; Coughlan, James M
2014-01-01
Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.
Effects of camera location on the reconstruction of 3D flare trajectory with two cameras
NASA Astrophysics Data System (ADS)
Özsaraç, Seçkin; Yeşilkaya, Muhammed
2015-05-01
Flares are valuable electronic warfare assets in the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point seen by multiple cameras is a common problem. Camera placement, camera calibration, determination of corresponding pixels between the images of different cameras, and the triangulation algorithm all affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on flare trajectory estimation performance by simulation. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, the image plane coordinates of the flare in both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we use two sources of error. One models the uncertainty in the determination of the camera view vectors, i.e. the orientations of the cameras are measured with noise. The second noise source models the imperfections of the corresponding pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated by triangulation using the corresponding pixel indices, the view vectors, and the FOV of the cameras. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
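The triangulation step, recovering the 3D flare position from one viewing ray per camera, can be illustrated with the standard midpoint construction between two skew rays; the camera positions, view vectors, and noise levels in the sketch are invented for the example.

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Closest point to two rays x = p_i + t_i * d_i (camera position p_i,
    view vector d_i toward the flare)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    # Ray parameters minimizing the distance between the two rays.
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2
    return 0.5 * (q1 + q2)            # midpoint of the shortest segment

# Toy example: flare at (500, 200, 1000) m, two cameras 2 km apart.
flare = np.array([500.0, 200.0, 1000.0])
cam1, cam2 = np.array([0.0, 0.0, 0.0]), np.array([2000.0, 0.0, 0.0])
rng = np.random.default_rng(3)
ray1 = flare - cam1 + rng.normal(0, 1.0, 3)   # noisy view vectors
ray2 = flare - cam2 + rng.normal(0, 1.0, 3)
print(triangulate_midpoint(cam1, ray1, cam2, ray2))
```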
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process of positioning and moving to a target red ball by the NAO robot through its camera system is analyzed and improved using a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and tested experimentally. Since the existing error of the current NAO robot is not a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Moreover, two USB cameras were used in our experiments, with the Hough circle method used to identify the ball and the HSV color space model used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of the error in ball tracking from 0.68 to 0.20.
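A minimal version of such a dual-camera pipeline, HSV thresholding for red, Hough circle detection, then range from the horizontal disparity between the two ball centres, could look like the following. Rectified, horizontally aligned cameras and all parameter values are assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def find_red_ball(bgr):
    """Return (x, y, r) of the most prominent red circle in a BGR frame, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    mask = cv2.GaussianBlur(mask, (9, 9), 2)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=20, minRadius=5, maxRadius=200)
    return None if circles is None else circles[0][0]   # (x, y, r)

def range_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Distance Z = f * B / d for rectified, horizontally aligned cameras."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    return focal_px * baseline_m / disparity

# Usage (assumed frames and calibration values):
# left, right = cv2.imread("left.png"), cv2.imread("right.png")
# bl, br = find_red_ball(left), find_red_ball(right)
# print(range_from_disparity(bl[0], br[0], focal_px=700.0, baseline_m=0.12))
```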
NASA Astrophysics Data System (ADS)
Zhang, Hua; Zeng, Luan
2017-11-01
Binocular stereoscopic vision can be used for close-range observation of space targets from a space-based platform. To address the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, imaging it on the same focal plane as the target, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the pose of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the image plane, while the physical position of the standard reference object does not change. The cameras' external parameters can therefore be re-calibrated from the observed relationship of the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.
Astatine-211 imaging by a Compton camera for targeted radiotherapy.
Nagao, Yuto; Yamaguchi, Mitsutaka; Watanabe, Shigeki; Ishioka, Noriko S; Kawachi, Naoki; Watabe, Hiroshi
2018-05-24
Astatine-211 is a promising radionuclide for targeted radiotherapy. Imaging the distribution of targeted radiotherapeutic agents in a patient's body is required for the optimization of treatment strategies. We proposed to image ²¹¹At with high-energy photons to overcome some problems of conventional planar or single-photon emission computed tomography imaging. We performed an imaging experiment of a point-like ²¹¹At source using a Compton camera, and demonstrated the capability of imaging ²¹¹At with high-energy photons for the first time. Copyright © 2018 Elsevier Ltd. All rights reserved.
Videogrammetric Model Deformation Measurement Technique
NASA Technical Reports Server (NTRS)
Burner, A. W.; Liu, Tian-Shu
2001-01-01
The theory, methods, and applications of the videogrammetric model deformation (VMD) measurement technique used at NASA for wind tunnel testing are presented. The VMD technique, based on non-topographic photogrammetry, can determine static and dynamic aeroelastic deformation and attitude of a wind-tunnel model. Hardware of the system includes a video-rate CCD camera, a computer with an image acquisition frame grabber board, illumination lights, and retroreflective or painted targets on a wind tunnel model. Custom software includes routines for image acquisition, target-tracking/identification, target centroid calculation, camera calibration, and deformation calculations. Applications of the VMD technique at five large NASA wind tunnels are discussed.
Development of the FPI+ as facility science instrument for SOFIA cycle four observations
NASA Astrophysics Data System (ADS)
Pfüller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Krabbe, Alfred
2016-08-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a heavily modified Boeing 747SP aircraft accommodating a 2.5 m infrared telescope. This airborne observation platform takes astronomers to flight altitudes of up to 13.7 km (45,000 ft) and therefore allows an unobstructed view of the infrared universe at wavelengths between 0.3 μm and 1600 μm. SOFIA is currently completing its fourth cycle of observations and utilizes eight different imaging and spectroscopic science instruments. New instruments for SOFIA's cycle 4 observations are the High-resolution Airborne Wideband Camera-plus (HAWC+) and the Focal Plane Imager (FPI+). The latter is an integral part of the telescope assembly and is used on every SOFIA flight to ensure precise tracking on the desired targets. The FPI+ is used as a visible-light photometer in its role as facility science instrument. Since the upgrade of the FPI camera and electronics in 2013, it uses a thermo-electrically cooled, science-grade EM-CCD sensor inside a commercial-off-the-shelf Andor camera. The back-illuminated sensor has a peak quantum efficiency of 95% and the dark current is as low as 0.01 e-/pix/sec. With this new hardware the telescope has successfully tracked on 16th magnitude stars, and thus the sky coverage, i.e. the fraction of the sky that has suitable tracking stars, has increased to 99%. Before its use as an integrated tracking imager, the same type of camera had been used as a standalone diagnostic tool to analyze the telescope pointing stability at frequencies up to 200 Hz (imaging at 400 fps). These measurements help to improve the telescope pointing control algorithms and therefore reduce the image jitter in the focal plane. Science instruments benefit from this improvement with smaller image sizes for longer exposure times. The FPI+ has also been used to support astronomical observations such as stellar occultations by the dwarf planet Pluto and a number of exoplanet transits. The observation of occultation events especially benefits from the high camera sensitivity, fast readout capability, and low read noise, and it was possible to achieve high time resolution in the photometric light curves. This paper will give an overview of the development from the standalone diagnostic camera to the upgraded guiding/tracking camera, fully integrated into the telescope while still offering the diagnostic capabilities, and finally to its use as a facility science instrument on SOFIA.
Video-Camera-Based Position-Measuring System
NASA Technical Reports Server (NTRS)
Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert
2005-01-01
A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets also can be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of corresponding image-processing filters and targets, the vision-based position-measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.
Methods for multiple-telescope beam imaging and guiding in the near-infrared
NASA Astrophysics Data System (ADS)
Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.
2018-05-01
Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.
A Daytime Aspect Camera for Balloon Altitudes
NASA Technical Reports Server (NTRS)
Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.; Six, N. Frank (Technical Monitor)
2001-01-01
We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600-1000 nm region of the spectrum, successfully provided daytime aspect information of approximately 10 arcsecond resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models, but the daytime stellar magnitude limit was lower than expected due to dispersion of red light by the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.
A quasi-dense matching approach and its calibration application with Internet photos.
Wan, Yanli; Miao, Zhenjiang; Wu, Q M Jonathan; Wang, Xifu; Tang, Zhen; Wang, Zhifei
2015-03-01
This paper proposes a quasi-dense matching approach to the automatic acquisition of camera parameters, which is required for recovering 3-D information from 2-D images. An affine transformation-based optimization model and a new matching cost function are used to acquire quasi-dense correspondences with high accuracy in each pair of views. These correspondences can be effectively detected and tracked at the sub-pixel level in multiviews with our neighboring view selection strategy. A two-layer iteration algorithm is proposed to optimize 3-D quasi-dense points and camera parameters. In the inner layer, different optimization strategies based on local photometric consistency and a global objective function are employed to optimize the 3-D quasi-dense points and camera parameters, respectively. In the outer layer, quasi-dense correspondences are resampled to guide a new estimation and optimization process of the camera parameters. We demonstrate the effectiveness of our algorithm with several experiments.
Brassine, Eléanor; Parker, Daniel
2015-01-01
Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574
Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera
NASA Astrophysics Data System (ADS)
He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning
2017-12-01
This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform the image scan without the difficulty of translating the receiver above the target, as traditional LLS imaging systems must. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling shutter operation of the CMOS camera. The fan-beam lasers were turned on/off to illuminate narrow zones on the target in good correspondence with the exposure lines during the rolling procedure of the camera's electronic shutter. Frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and nonscanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
Learning visuomotor transformations for gaze-control and grasping.
Hoffmann, Heiko; Schenck, Wolfram; Möller, Ralf
2005-08-01
For reaching to and grasping of an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
NASA Astrophysics Data System (ADS)
Dayton, M.; Datte, P.; Carpenter, A.; Eckart, M.; Manuel, A.; Khater, H.; Hargrove, D.; Bell, P.
2017-08-01
The National Ignition Facility's (NIF) harsh radiation environment can cause electronics to malfunction during high-yield DT shots. Until now there has been little experience fielding electronic-based cameras in the target chamber under these conditions; hence, the performance of electronic components in NIF's radiation environment was unknown. It is possible to purchase radiation-tolerant devices; however, they are usually qualified for radiation environments different from NIF's, such as space flight or nuclear reactors. This paper presents the results from a series of online experiments that used two different prototype camera systems built from non-radiation-hardened components and one commercially available camera that permanently failed at a relatively low total integrated dose. The custom design built in Livermore endured a 5 × 10¹⁵ neutron shot without upset, while the other custom design upset at 2 × 10¹⁴ neutrons. These results agreed with offline testing done with a flash x-ray source and a 14 MeV neutron source, which suggested a methodology for developing and qualifying electronic systems for NIF. Further work will likely lead to the use of embedded electronic systems in the target chamber during high-yield shots.
24/7 security system: 60-FPS color EMCCD camera with integral human recognition
NASA Astrophysics Data System (ADS)
Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.
2007-04-01
An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
Research on target tracking algorithm based on spatio-temporal context
NASA Astrophysics Data System (ADS)
Li, Baiping; Xu, Sanmei; Kang, Hongjuan
2017-07-01
In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During the tracking process, camera shaking or occlusion may lead to tracking failure; the proposed algorithm solves this problem effectively. The method takes the spatio-temporal context algorithm as its core. The target region in the first frame is selected manually with the mouse, and the spatio-temporal context algorithm is then used to track the target through the sequence of frames. During this process, a similarity measure function based on a perceptual hash algorithm is used to judge the tracking results. If tracking fails, the initial value of the mean-shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shaking or target occlusion.
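The similarity check used to judge the tracking results can be illustrated with a basic average-hash comparison. The paper does not state which perceptual-hash variant it uses, so the 8×8 hash size and the Hamming-similarity threshold below are assumptions.

```python
import cv2
import numpy as np

def average_hash(gray, hash_size=8):
    """Perceptual hash: resize, threshold against the mean, return a bit array."""
    small = cv2.resize(gray, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def hamming_similarity(h1, h2):
    """Fraction of matching bits between two hashes (1.0 = identical)."""
    return 1.0 - np.count_nonzero(h1 != h2) / h1.size

def tracking_ok(template_gray, tracked_patch_gray, threshold=0.85):
    """Declare tracking failure when the tracked patch no longer resembles the target."""
    return hamming_similarity(average_hash(template_gray),
                              average_hash(tracked_patch_gray)) >= threshold

# Usage (assumed image patches):
# if not tracking_ok(first_frame_target, current_tracked_region):
#     pass  # re-initialize, e.g. reset the mean-shift tracker as described above
```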
Conceptual design of a neutron camera for MAST Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiszflog, M., E-mail: matthias.weiszflog@physics.uu.se; Sangaroon, S.; Cecconello, M.
2014-11-15
This paper presents two different conceptual designs of neutron cameras for Mega Ampere Spherical Tokamak (MAST) Upgrade. The first one consists of two horizontal cameras, one equatorial and one vertically down-shifted by 65 cm. The second design, viewing the plasma in a poloidal section, also consists of two cameras, one radial and the other one with a diagonal view. Design parameters for the different cameras were selected on the basis of neutron transport calculations and on a set of target measurement requirements taking into account the predicted neutron emissivities in the different MAST Upgrade operating scenarios. Based on a comparison of the cameras' profile resolving power, the horizontal cameras are suggested as the best option.
Fluorescence-guided mapping of sentinel lymph nodes in gynecological malignancies
NASA Astrophysics Data System (ADS)
Hirsch, Ole; Szyc, Łukasz; Muallem, Mustafa Zelal; Ignat, Iulia; Chekerov, Radoslav; Macdonald, Rainer; Sehouli, Jalid; Braicu, Ioana; Grosenick, Dirk
2017-07-01
We have successfully applied a custom-made handheld fluorescence camera for intraoperative fluorescence detection of indocyanine green in a feasibility study on sentinel lymph node mapping in patients with vulvar, cervical, endometrial and ovarian cancer.
Kotze, Ben; Jordaan, Gerrit
2014-08-25
Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed.
Kuo, Wen-Kai; Syu, Siang-He; Lin, Peng-Zhi; Yu, Hsin Her
2016-02-01
This paper reports on a transmitted-type dual-channel guided-mode resonance (GMR) sensor system that uses phase-shifting interferometry (PSI) to achieve tunable phase detection sensitivity. Five interference images are captured for the PSI phase calculation within ∼15 s by using a liquid crystal retarder and a USB web camera. The GMR sensor structure is formed by a nanoimprinting process, and the dual-channel sensor device structure for molding is fabricated using a 3D printer. By changing the rotation angle of the analyzer in front of the camera in the PSI system, the sensor detection sensitivity can be tuned. The proposed system may achieve high throughput as well as high sensitivity. The experimental results show that an optimal detection sensitivity of 6.82×10⁻⁴ RIU can be achieved.
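With five phase-shifted interferograms, the wrapped phase at each pixel can be computed with the standard five-step (Hariharan) formula φ = arctan[2(I2 − I4) / (2I3 − I1 − I5)]. Whether this is the exact formula used by the authors is not stated, so the snippet below is a generic illustration only.

```python
import numpy as np

def five_step_phase(I1, I2, I3, I4, I5):
    """Wrapped phase map from five interferograms with successive 90-degree
    phase steps centred on the third frame (Hariharan algorithm)."""
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

# Synthetic check: build five frames from a known phase profile and recover it.
x = np.linspace(0, 4 * np.pi, 256)
phase_true = 0.8 * np.sin(x) + 0.3 * x
frames = [1.0 + 0.7 * np.cos(phase_true + k * np.pi / 2) for k in (-2, -1, 0, 1, 2)]
phase_est = five_step_phase(*frames)
# The recovered phase matches the true phase up to 2*pi wrapping.
print(np.allclose(np.angle(np.exp(1j * (phase_est - phase_true))), 0, atol=1e-6))
```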
Prototype of a single probe Compton camera for laparoscopic surgery
NASA Astrophysics Data System (ADS)
Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.
2017-02-01
Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we develop a novel intraoperative probe with a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd₃Al₂Ga₃O₁₂ (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source, with an ARM of ∼22.1°, and the effectiveness of the proposed system.
A Kinect™ camera based navigation system for percutaneous abdominal puncture
NASA Astrophysics Data System (ADS)
Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao
2016-08-01
Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of Kinect™ was released recently, we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and compare its performance on needle insertion guidance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For phantom study, a two-factor experiment was designed to evaluate the effect of the operator’s skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator’s skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were inserted guided by the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal component were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable, and that the second generation Kinect™-based navigation is superior to the first-generation Kinect™, and has potential of clinical application in percutaneous abdominal puncture.
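The physical-to-image registration step, matching the surface from the Kinect™ depth image to the surface extracted from preoperative CT with ICP, can be sketched in its simplest point-to-point form. This omits the 2D-shape-based initialisation and any outlier handling described above; it is purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (N x 3 each)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    """Point-to-point ICP: align the depth-camera surface (source) to the CT surface (target)."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                  # closest CT point for each depth point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy check: recover a known rotation/translation of a random surface patch.
rng = np.random.default_rng(4)
ct_surface = rng.uniform(-1, 1, (500, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
depth_surface = ct_surface @ R_true.T + np.array([0.05, -0.02, 0.1])
R_est, t_est = icp(depth_surface, ct_surface)
print(np.round(R_est @ R_true, 3))   # approximately the identity if registration succeeded
```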
Resolution for color photography
NASA Astrophysics Data System (ADS)
Hubel, Paul M.; Bautsch, Markus
2006-02-01
Although it is well known that luminance resolution is most important, the ability to accurately render colored details, color textures, and colored fabrics cannot be overlooked. This includes the ability to accurately render single-pixel color details as well as avoiding color aliasing. All consumer digital cameras on the market today record in color and the scenes people are photographing are usually color. Yet almost all resolution measurements made on color cameras are done using a black and white target. In this paper we present several methods for measuring and quantifying color resolution. The first method, detailed in a previous publication, uses a slanted-edge target of two colored surfaces in place of the standard black and white edge pattern. The second method employs the standard black and white targets recommended in the ISO standard, but records these onto the camera through colored filters thus giving modulation between black and one particular color component; red, green, and blue color separation filters are used in this study. The third method, conducted at Stiftung Warentest, an independent consumer organization of Germany, uses a white-light interferometer to generate fringe-pattern targets of varying color and spatial frequency.
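For the first method (a slanted edge between two coloured patches), the measurement reduces to a per-channel slanted-edge SFR computation. The sketch below follows the generic procedure (locate the edge row by row, fit its slope, build an oversampled edge-spread function, differentiate and Fourier-transform), applied separately to each colour channel of the ROI. The oversampling factor and windowing are generic choices, not the paper's settings, and a slightly slanted edge is assumed so that the supersampled bins fill.

```python
import numpy as np

def slanted_edge_mtf(channel, oversample=4):
    """Approximate MTF (SFR) of one colour channel from an ROI containing a
    near-vertical, slightly slanted edge. Returns (frequency in cycles/pixel, MTF)."""
    img = channel.astype(float)
    rows, cols = img.shape
    # 1) locate the edge in each row as the centroid of the gradient magnitude
    grad = np.abs(np.diff(img, axis=1))
    x = np.arange(grad.shape[1]) + 0.5
    edge_x = (grad * x).sum(axis=1) / (grad.sum(axis=1) + 1e-12)
    # 2) fit a straight edge to obtain its slope
    slope, intercept = np.polyfit(np.arange(rows), edge_x, 1)
    # 3) signed distance of every pixel from the fitted edge (along the normal)
    yy, xx = np.mgrid[0:rows, 0:cols]
    dist = (xx - (slope * yy + intercept)) / np.sqrt(1 + slope ** 2)
    # 4) bin into an oversampled edge-spread function (ESF)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=img.ravel()) / np.maximum(counts, 1)
    # 5) line-spread function, windowed, then FFT magnitude gives the MTF
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0] + 1e-12
    freq = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)
    return freq, mtf
```

Running this on the red, green and blue channels of the same ROI gives per-channel SFR curves that can be compared against the usual luminance result.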
System Architecture of the Dark Energy Survey Camera Readout Electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Theresa (FERMILAB); Ballester, Otger
2010-05-27
The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4 m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K×4K fully depleted Charge-Coupled Devices (CCDs) for imaging and 12 2K×2K CCDs for guiding, alignment and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.
Recent developments for the Large Binocular Telescope Guiding Control Subsystem
NASA Astrophysics Data System (ADS)
Golota, T.; De La Peña, M. D.; Biddick, C.; Lesser, M.; Leibold, T.; Miller, D.; Meeks, R.; Hahn, T.; Storm, J.; Sargent, T.; Summers, D.; Hill, J.; Kraus, J.; Hooper, S.; Fisher, D.
2014-07-01
The Large Binocular Telescope (LBT) has eight Acquisition, Guiding, and wavefront Sensing Units (AGw units). They provide guiding and wavefront sensing capability at eight different locations at both direct and bent Gregorian focal stations. Recent additions of focal stations for the PEPSI and MODS instruments doubled the number of focal stations in use, along with the respective motion and camera-controller server computers and the software infrastructure communicating with the Guiding Control Subsystem (GCS). This paper describes the improvements made to the LBT GCS and explains how these changes have led to better maintainability and increased reliability. It also discusses the current GCS status and reviews potential upgrades to further improve its performance.
The Dark Energy Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flaugher, B.; Diehl, H. T.; Alvarez, O.
2015-11-15
The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
The Dark Energy Camera
Flaugher, B.
2015-04-11
The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15 μm x 15 μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
Motion camera based on a custom vision sensor and an FPGA architecture
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel
1998-09-01
A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC computer which is used for high level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
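A sketch of the time-of-travel idea implemented in the FPGA: each motion event arrives as an (x, y, timestamp) tuple under the event-address protocol, and a velocity vector is estimated from the time difference between events at neighbouring pixels. The Python below only illustrates the logic, not the FPGA architecture; the pixel pitch and event format are assumptions.

```python
import numpy as np

PIXEL_PITCH_UM = 20.0   # assumed pixel pitch of the custom sensor, in micrometres

def velocities_from_events(events, shape):
    """events: iterable of (x, y, t_us) motion-edge events in arrival order.
    Returns a list of (x, y, vx, vy) velocity estimates in um/us (numerically equal to m/s)."""
    last_t = np.full(shape, np.nan)         # last event time seen at each pixel
    out = []
    for x, y, t in events:
        # time-of-travel from the 4-connected neighbours that fired earlier
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < shape[1] and 0 <= ny < shape[0]:
                dt = t - last_t[ny, nx]
                if np.isfinite(dt) and dt > 0:
                    speed = PIXEL_PITCH_UM / dt       # um per microsecond
                    # motion points from the earlier-firing neighbour toward this pixel
                    out.append((x, y, -dx * speed, -dy * speed))
        last_t[y, x] = t
    return out

# Example: an edge sweeping right at one pixel per 100 us (i.e. 0.2 m/s for 20 um pixels)
evts = [(10, 5, 0.0), (11, 5, 100.0), (12, 5, 200.0)]
print(velocities_from_events(evts, shape=(32, 32)))
```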
Optimising Camera Traps for Monitoring Small Mammals
Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce
2013-01-01
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled, however this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
DNA targeting specificity of RNA-guided Cas9 nucleases.
Hsu, Patrick D; Scott, David A; Weinstein, Joshua A; Ran, F Ann; Konermann, Silvana; Agarwala, Vineeta; Li, Yinqing; Fine, Eli J; Wu, Xuebing; Shalem, Ophir; Cradick, Thomas J; Marraffini, Luciano A; Bao, Gang; Zhang, Feng
2013-09-01
The Streptococcus pyogenes Cas9 (SpCas9) nuclease can be efficiently targeted to genomic loci by means of single-guide RNAs (sgRNAs) to enable genome editing. Here, we characterize SpCas9 targeting specificity in human cells to inform the selection of target sites and avoid off-target effects. Our study evaluates >700 guide RNA variants and SpCas9-induced indel mutation levels at >100 predicted genomic off-target loci in 293T and 293FT cells. We find that SpCas9 tolerates mismatches between guide RNA and target DNA at different positions in a sequence-dependent manner, sensitive to the number, position and distribution of mismatches. We also show that SpCas9-mediated cleavage is unaffected by DNA methylation and that the dosage of SpCas9 and sgRNA can be titrated to minimize off-target modification. To facilitate mammalian genome engineering applications, we provide a web-based software tool to guide the selection and validation of target sequences as well as off-target analyses.
Application of infrared uncooled cameras in surveillance systems
NASA Astrophysics Data System (ADS)
Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.
2013-10-01
The recent necessity to protect military bases, convoys and patrols gave a serious impetus to the development of multisensor security systems for perimeter protection. Among the most important devices used in such systems are IR cameras. The paper discusses technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective ranges of detection depend on the class of the sensor used and the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions. It also decreases the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities of detecting a human being. A comparison of commercially available IR cameras capable of achieving the desired ranges was made. The required spatial resolution for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds the range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
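The range estimation discussed above rests on resolution-on-target reasoning (the Johnson criteria, refined by the TTP metric). A back-of-the-envelope version is sketched below: the number of resolvable cycles across the target's critical dimension is compared with the cycle counts conventionally required for detection, recognition and identification. The camera parameters and cycle criteria are typical textbook values, not those used in the paper, and optics, atmosphere and noise are ignored.

```python
# Assumed uncooled LWIR camera and target parameters (illustrative only)
PIXEL_PITCH_M = 17e-6        # 17 um microbolometer pixels
FOCAL_LENGTH_M = 0.05        # 50 mm lens
TARGET_CRITICAL_DIM_M = 1.8  # rough critical dimension of a standing person

# Classic Johnson-style cycle criteria (typical 50%-probability values)
CYCLES_REQUIRED = {"detection": 1.0, "recognition": 4.0, "identification": 8.0}

def cycles_on_target(range_m):
    """Resolvable cycles across the target at a given range, limited here only by
    detector sampling (one cycle = two pixels)."""
    ifov = PIXEL_PITCH_M / FOCAL_LENGTH_M            # instantaneous FOV, rad per pixel
    return (TARGET_CRITICAL_DIM_M / range_m) / (2.0 * ifov)

def max_range(task):
    """Range at which the cycle criterion for the given task is just met."""
    return TARGET_CRITICAL_DIM_M * FOCAL_LENGTH_M / (2.0 * PIXEL_PITCH_M * CYCLES_REQUIRED[task])

for task in CYCLES_REQUIRED:
    print(f"{task:14s}: ~{max_range(task):5.0f} m  ({cycles_on_target(max_range(task)):.1f} cycles)")
```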
STS-116 MS Fuglesang uses digital camera on the STBD side of the S0 Truss during EVA 4
2006-12-19
S116-E-06882 (18 Dec. 2006) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses a digital still camera during the mission's fourth session of extravehicular activity (EVA) while Space Shuttle Discovery was docked with the International Space Station. Astronaut Robert L. Curbeam Jr. (out of frame), mission specialist, worked in tandem with Fuglesang, using specially-prepared, tape-insulated tools, to guide the array wing neatly inside its blanket box during the 6-hour, 38-minute spacewalk.
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of applications of homeland security, as well as serve the Army in tasks of improved situational awareness (SA) in defense and offensive operations, and as a sensor node in tactical Intelligence Surveillance Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480 pixel IR camera with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
On the development of radiation tolerant surveillance camera from consumer-grade components
NASA Astrophysics Data System (ADS)
Klemen, Ambrožič; Luka, Snoj; Lars, Öhlin; Jan, Gunnarsson; Niklas, Barringer
2017-09-01
In this paper an overview of the process of designing a radiation tolerant surveillance camera from consumer grade components and commercially available particle shielding materials is given. This involves utilization of the Monte-Carlo particle transport code MCNP6 and ENDF/B-VII.0 nuclear data libraries, as well as testing the physical electrical systems against γ radiation, utilizing JSI TRIGA mk. II fuel elements as γ-ray sources. A new, aluminum, 20 cm × 20 cm × 30 cm irradiation facility with an electrical power and signal wire guide-tube to the reactor platform was designed, constructed, and used for irradiation of large electronic and optical component assemblies with activated fuel elements. Electronic components to be used in the camera were tested against γ-radiation in an independent manner to determine their radiation tolerance. Several camera designs were proposed and simulated using MCNP to determine incident particle and dose attenuation factors. Data obtained from the measurements and MCNP simulations will be used to finalize the design of three surveillance camera models with different radiation tolerances.
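The shielding attenuation factors obtained from the MCNP runs can be sanity-checked, to first order, with simple exponential attenuation, I = I0·exp(−μx), neglecting build-up and scattering. The coefficients below are rough literature values near 1 MeV and are assumptions for illustration; the design work itself relies on full MCNP6 transport, not on this formula.

```python
import math

# Approximate mass attenuation coefficients near 1 MeV (cm^2/g) and densities (g/cm^3);
# rough literature values, used here only for a first-order estimate.
MATERIALS = {
    "lead":      {"mu_rho": 0.070, "density": 11.35},
    "steel":     {"mu_rho": 0.060, "density": 7.85},
    "aluminium": {"mu_rho": 0.061, "density": 2.70},
}

def transmission(material, thickness_cm):
    """Uncollided transmission fraction I/I0 through a slab (no build-up factor)."""
    m = MATERIALS[material]
    mu = m["mu_rho"] * m["density"]          # linear attenuation coefficient, 1/cm
    return math.exp(-mu * thickness_cm)

for mat in MATERIALS:
    for t in (1.0, 2.0, 5.0):
        print(f"{mat:9s} {t:4.1f} cm: I/I0 = {transmission(mat, t):.3f}")
```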
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 in spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
Vacuum compatible miniature CCD camera head
Conder, Alan D.
2000-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
Standoff aircraft IR characterization with ABB dual-band hyper spectral imager
NASA Astrophysics Data System (ADS)
Prel, Florent; Moreau, Louis; Lantagne, Stéphane; Bullis, Ritchie D.; Roy, Claude; Vallières, Christian; Levesque, Luc
2012-09-01
Remote sensing infrared characterization of rapidly evolving events generally involves the combination of a spectro-radiometer and infrared camera(s) as separate instruments. Time synchronization, spatial coregistration, consistent radiometric calibration and managing several systems are important challenges to overcome; they complicate the target infrared characterization data processing and increase the sources of errors affecting the final radiometric accuracy. MR-i is a dual-band hyperspectral imaging spectro-radiometer that combines two 256 x 256 pixel infrared cameras and an infrared spectro-radiometer into a single instrument. This field instrument generates spectral datacubes in the MWIR and LWIR. It is designed to acquire the spectral signatures of rapidly evolving events. The design is modular. The spectrometer has two output ports configured with two simultaneously operated cameras to either widen the spectral coverage or to increase the dynamic range of the measured amplitudes. Various telescope options are available for the input port. Recent platform developments and field trial measurement performance will be presented for a system configuration dedicated to the characterization of airborne targets.
Markerless laser registration in image-guided oral and maxillofacial surgery.
Marmulla, Rüdiger; Lüth, Tim; Mühling, Joachim; Hassfeld, Stefan
2004-07-01
The use of registration markers in computer-assisted surgery is combined with high logistic costs and efforts. Markerless patient registration using laser scan surface registration techniques is a new challenging method. The present study was performed to evaluate the clinical accuracy in finding defined target points within the surgical site after markerless patient registration in image-guided oral and maxillofacial surgery. Twenty consecutive patients with different cranial diseases were scheduled for computer-assisted surgery. Data set alignment between the surgical site and the computed tomography (CT) data set was performed by markerless laser scan surface registration of the patient's face. Intraoral rigidly attached registration markers were used as target points, which had to be detected by an infrared pointer. The Surgical Segment Navigator SSN++ has been used for all procedures. SSN++ is an investigative product based on the SSN system that had previously been developed by the presenting authors with the support of Carl Zeiss (Oberkochen, Germany). SSN++ is connected to a Polaris infrared camera (Northern Digital, Waterloo, Ontario, Canada) and to a Minolta VI 900 3D digitizer (Tokyo, Japan) for high-resolution laser scanning. Minimal differences in shape between the laser scan surface and the surface generated from the CT data set could be detected. Nevertheless, high-resolution laser scan of the skin surface allows for a precise patient registration (mean deviation 1.1 mm, maximum deviation 1.8 mm). Radiation load, logistic costs, and efforts arising from the planning of computer-assisted surgery of the head can be reduced because native (markerless) CT data sets can be used for laser scan-based surface registration.
Video camera system for locating bullet holes in targets at a ballistics tunnel
NASA Technical Reports Server (NTRS)
Burner, A. W.; Rummler, D. R.; Goad, W. K.
1990-01-01
A system consisting of a single charge coupled device (CCD) video camera, computer controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind resistant, ultra match ammunition. The system was designed to take data rapidly (10 sec between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
Enhanced technologies for unattended ground sensor systems
NASA Astrophysics Data System (ADS)
Hartup, David C.
2010-04-01
Progress in several technical areas is being leveraged to advantage in Unattended Ground Sensor (UGS) systems. This paper discusses advanced technologies that are appropriate for use in UGS systems. While some technologies provide evolutionary improvements, other technologies result in revolutionary performance advancements for UGS systems. Some specific technologies discussed include wireless cameras and viewers, commercial PDA-based system programmers and monitors, new materials and techniques for packaging improvements, low power cueing sensor radios, advanced long-haul terrestrial and SATCOM radios, and networked communications. Other technologies covered include advanced target detection algorithms, high pixel count cameras for license plate and facial recognition, small cameras that provide large stand-off distances, video transmissions of target activity instead of still images, sensor fusion algorithms, and control center hardware. The impact of each technology on the overall UGS system architecture is discussed, along with the advantages provided to UGS system users. Areas of analysis include required camera parameters as a function of stand-off distance for license plate and facial recognition applications, power consumption for wireless cameras and viewers, sensor fusion communication requirements, and requirements to practically implement video transmission through UGS systems. Examples of devices that have already been fielded using technology from several of these areas are given.
Fuzzy logic control for camera tracking system
NASA Technical Reports Server (NTRS)
Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant
1992-01-01
A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
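A minimal sketch of the kind of fuzzy controller described: the horizontal offset of the target from the image centre is fuzzified with triangular membership functions, a small rule base maps offset to a pan-rate command, and the output is defuzzified as a weighted average of singleton consequents. The membership breakpoints, rule table and rates are illustrative assumptions, not values from the Space Station system; a matching set of rules would drive the tilt axis.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Fuzzy sets on the pixel error (target x minus image centre), assumed +/-160 px range
ERR_SETS = {"neg": (-160, -160, 0), "zero": (-40, 0, 40), "pos": (0, 160, 160)}
# Singleton pan-rate commands (deg/s) associated with each rule consequent
PAN_CMDS = {"left": -5.0, "hold": 0.0, "right": 5.0}
RULES = {"neg": "left", "zero": "hold", "pos": "right"}   # error set -> pan consequent

def pan_rate(err_px):
    """Weighted-average defuzzification over singleton consequents."""
    weights, outputs = [], []
    for name, (a, b, c) in ERR_SETS.items():
        weights.append(float(tri(err_px, a, b, c)))
        outputs.append(PAN_CMDS[RULES[name]])
    weights, outputs = np.array(weights), np.array(outputs)
    return float((weights * outputs).sum() / (weights.sum() + 1e-9))

for e in (-120, -20, 0, 60):
    print(f"error {e:+4d} px -> pan rate {pan_rate(e):+.2f} deg/s")
```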
Deadpool: A how-to-build guide
USDA-ARS?s Scientific Manuscript database
An easy-to-customize, low-cost, low disturbance proximal sensing cart for field-based high-throughput phenotyping is described. General dimensions and build guidelines are provided. The cart, named Deadpool, supports mounting multiple proximal sensors and cameras for characterizing plant traits grow...
Robotic retroperitoneal partial nephrectomy: a step-by-step guide.
Ghani, Khurshid R; Porter, James; Menon, Mani; Rogers, Craig
2014-08-01
To describe a step-by-step guide for successful implementation of the retroperitoneal approach to robotic partial nephrectomy (RPN). PATIENTS AND METHODS: The patient is placed in the flank position and the table fully flexed to increase the space between the 12th rib and iliac crest. Access to the retroperitoneal space is obtained using a balloon-dilating device. Ports include a 12-mm camera port, two 8-mm robotic ports and a 12-mm assistant port placed in the anterior axillary line cephalad to the anterior superior iliac spine, and 7-8 cm caudal to the ipsilateral robotic port. Positioning and port placement strategies for successful technique include: (i) Docking robot directly over the patient's head parallel to the spine; (ii) incision for camera port ≈1.9 cm (1 fingerbreadth) above the iliac crest, lateral to the triangle of Petit; (iii) Seldinger technique insertion of kidney-shaped balloon dilator into retroperitoneal space; (iv) Maximising distance between all ports; (v) Ensuring camera arm is placed in the outer part of the 'sweet spot'. The retroperitoneal approach to RPN permits direct access to the renal hilum, no need for bowel mobilisation and excellent visualisation of posteriorly located tumours. © 2014 The Authors. BJU International © 2014 BJU International.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a predetermined reference point, determined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
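The patent's processing chain (image differencing, centroid extraction, disparity-to-range conversion) can be captured generically as below: subtract the unilluminated frame, take the intensity-weighted centroid of the remaining laser spot, and triangulate from its offset relative to the infinite-range reference point. The baseline, focal length and reference coordinate are placeholder assumptions, not values from the patent.

```python
import numpy as np

# Assumed camera/laser geometry (placeholders for illustration)
FOCAL_LENGTH_PX = 800.0     # focal length expressed in pixels
BASELINE_M = 0.10           # laser-to-camera baseline, metres
REF_X_PX = 320.0            # image x of the laser spot at infinite range (from calibration)

def laser_spot_centroid(frame_off, frame_on, threshold=30):
    """Difference the two frames, keep only pixels brighter in the laser-on frame,
    and return the intensity-weighted centroid (x, y) of the laser spot."""
    diff = frame_on.astype(float) - frame_off.astype(float)
    diff[diff < threshold] = 0.0
    total = diff.sum()
    if total == 0:
        return None
    ys, xs = np.indices(diff.shape)
    return (xs * diff).sum() / total, (ys * diff).sum() / total

def range_from_disparity(spot_x_px):
    """Triangulate using the disparity relative to the infinite-range reference point."""
    disparity = abs(spot_x_px - REF_X_PX)
    if disparity < 1e-6:
        return float("inf")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

# Synthetic example: a 480x640 scene with a laser spot 40 px from the reference point
off = np.zeros((480, 640), dtype=np.uint8)
on = off.copy()
on[240, 280] = 255
c = laser_spot_centroid(off, on)
print("spot centroid:", c, "-> range ~", round(range_from_disparity(c[0]), 2), "m")
```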
Calibration Target as Seen by Mars Hand Lens Imager
2012-02-07
During pre-flight testing, the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars rover Curiosity took this image of the MAHLI calibration target from a distance of 3.94 inches (10 centimeters) away from the target.
Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination
Fasano, Giancarmine; Grassi, Michele
2017-01-01
In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal. PMID:28946651
Hardware in the Loop Performance Assessment of LIDAR-Based Spacecraft Pose Determination.
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele
2017-09-24
In this paper an original, easy to reproduce, semi-analytic calibration approach is developed for hardware-in-the-loop performance assessment of pose determination algorithms processing point cloud data, collected by imaging a non-cooperative target with LIDARs. The laboratory setup includes a scanning LIDAR, a monocular camera, a scaled-replica of a satellite-like target, and a set of calibration tools. The point clouds are processed by uncooperative model-based algorithms to estimate the target relative position and attitude with respect to the LIDAR. Target images, acquired by a monocular camera operated simultaneously with the LIDAR, are processed applying standard solutions to the Perspective-n-Points problem to get high-accuracy pose estimates which can be used as a benchmark to evaluate the accuracy attained by the LIDAR-based techniques. To this aim, a precise knowledge of the extrinsic relative calibration between the camera and the LIDAR is essential, and it is obtained by implementing an original calibration approach which does not need ad-hoc homologous targets (e.g., retro-reflectors) easily recognizable by the two sensors. The pose determination techniques investigated by this work are of interest to space applications involving close-proximity maneuvers between non-cooperative platforms, e.g., on-orbit servicing and active debris removal.
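The benchmark pose above comes from solving the Perspective-n-Points problem on monocular images of known points on the target. A hedged sketch using OpenCV's solvePnP is given below; the fiducial coordinates, intrinsics and pose are synthetic placeholders (the image points are generated by projecting a known pose and then recovered), and the authors' actual marker detection and calibration pipeline is not reproduced.

```python
import numpy as np
import cv2

# Placeholder 3D point coordinates on the satellite mock-up (target frame, metres)
object_points = np.array([
    [0.00, 0.00, 0.00],
    [0.30, 0.00, 0.00],
    [0.30, 0.20, 0.00],
    [0.00, 0.20, 0.00],
    [0.10, 0.05, 0.10],
    [0.20, 0.15, 0.08],
], dtype=np.float64)

# Assumed pinhole intrinsics from a prior camera calibration
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0,    0.0,   1.0]])
dist = np.zeros((5, 1))                 # negligible distortion assumed for the sketch

# Synthesize image measurements from a known "true" pose, then recover that pose.
rvec_true = np.array([0.05, -0.10, 0.02]).reshape(3, 1)
tvec_true = np.array([0.10, -0.05, 2.50]).reshape(3, 1)   # target ~2.5 m from the camera
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)              # target-to-camera rotation matrix
print("recovered position (m):", tvec.ravel())   # should match tvec_true
print("recovered attitude:\n", R)
```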
Multi-target detection and positioning in crowds using multiple camera surveillance
NASA Astrophysics Data System (ADS)
Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng
2018-04-01
In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinates system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between each pixel in the region of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm can provide appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem where a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem and the three-dimensional position of the targets can also be obtained by pixel clustering. Finally, we verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
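The correspondence search described above amounts to finding a permutation (a one-to-one matching) that minimises a cost built from the line-of-sight gap, grayscale difference and height constraints. The sketch below uses the equivalent linear assignment formulation solved with the Hungarian algorithm (SciPy's linear_sum_assignment) rather than the authors' linear-programming relaxation; the weights, gates and toy data are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative constraint weights and gates (assumptions, not the paper's values)
W_GAP, W_GRAY = 1.0, 0.02
MAX_GAP_M, HEIGHT_RANGE_M, BIG = 0.3, (1.2, 2.1), 1e3

def match_pixels(gap, gray_diff, height):
    """gap[i, j]      : distance between the lines of sight of pixel i (cam A) and j (cam B), m
       gray_diff[i, j]: grayscale difference between the two pixels
       height[i, j]   : height of the midpoint of the common perpendicular, m
    Returns the optimal one-to-one pixel correspondences under the constraints."""
    cost = W_GAP * gap + W_GRAY * gray_diff
    infeasible = (gap > MAX_GAP_M) | (height < HEIGHT_RANGE_M[0]) | (height > HEIGHT_RANGE_M[1])
    cost = np.where(infeasible, BIG, cost)
    rows, cols = linear_sum_assignment(cost)   # optimal permutation (assignment)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < BIG]

# Tiny example with three candidate pixels in each view
gap       = np.array([[0.05, 0.40, 0.50], [0.45, 0.04, 0.35], [0.60, 0.50, 0.08]])
gray_diff = np.array([[3.0, 40.0, 55.0], [35.0, 5.0, 25.0], [60.0, 30.0, 4.0]])
height    = np.full((3, 3), 1.6)
print(match_pixels(gap, gray_diff, height))     # expected: [(0, 0), (1, 1), (2, 2)]
```

The matched pairs would then be triangulated and clustered to obtain one 3D position per person, as the abstract describes.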
Slant path range gated imaging of static and moving targets
NASA Astrophysics Data System (ADS)
Steinvall, Ove; Elmqvist, Magnus; Karlsson, Kjell; Gustafsson, Ove; Chevalier, Tomas
2012-06-01
This paper will report experiments and analysis of slant path imaging using 1.5 μm and 0.8 μm gated imaging. The investigation is a follow-up to the measurements reported last year at the laser radar conference at SPIE Orlando. The sensor, a SWIR camera, collected both passive and active images along a 2 km long path over an airfield. The sensor was elevated by a lift in steps from 1.6-13.5 meters. Targets were resolution charts and also human targets. The human target was holding various items and also performing certain tasks, some of high relevance in defence and security. One of the main purposes of this investigation was to compare the recognition of these human targets and their activities with the resolution information obtained from conventional resolution charts. The data collection of human targets was also made from our roof-top laboratory at about 13 m height above ground. The turbulence was measured along the path with anemometers and scintillometers. The camera was collecting both passive and active images in the SWIR region. We also included the Obzerv camera working at 0.8 μm in some tests. The paper will present images for both passive and active modes obtained at different elevations and discuss the results from both technical and system perspectives.
A portable fluorescence microscopic imaging system for cholecystectomy
NASA Astrophysics Data System (ADS)
Ye, Jian; Yang, Chaoyu; Gan, Qi; Ma, Rong; Zhang, Zeshu; Chang, Shufang; Shao, Pengfei; Zhang, Shiwu; Liu, Chenhai; Xu, Ronald
2016-03-01
In this paper we propose a portable fluorescence microscopic imaging system to prevent iatrogenic biliary injuries from occurring during cholecystectomy due to misidentification of the cystic structures. The system consisted of a light source module, a CMOS camera, a Raspberry Pi computer and a 5 inch HDMI LCD. Specifically, the light source module was composed of 690 nm and 850 nm LEDs, allowing the CMOS camera to simultaneously acquire both fluorescence and background images. The system was controlled by the Raspberry Pi using Python programming with the OpenCV library under Linux. We chose indocyanine green (ICG) as a fluorescent contrast agent and then measured fluorescence intensities of aqueous ICG solutions at different concentration levels with our fluorescence microscopic system, comparing the results with the commercial Xenogen IVIS system. The spatial resolution of the proposed fluorescence microscopic imaging system was measured with a 1951 USAF resolution target, and the dynamic response was evaluated quantitatively with an automatic displacement platform. Finally, we verified the technical feasibility of the proposed system in mouse bile duct models, performing both correct and incorrect gallbladder resection. Our experiments showed that the proposed system can provide clear visualization of the confluence between the cystic duct and common bile duct or common hepatic duct, suggesting that this is a potential method for guiding cholecystectomy. The proposed portable system cost only about $300 in total, potentially promoting its use in resource-limited settings.
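A small sketch of how the two simultaneously acquired frames might be combined for display on such a system: the fluorescence channel is normalised, pseudocoloured and blended onto the background image wherever the signal exceeds a threshold. This is an assumed visualisation step written with OpenCV in Python (the platform named in the abstract); the prototype's frame acquisition, LED timing and actual display pipeline are not reproduced here.

```python
import cv2
import numpy as np

def overlay_fluorescence(background_gray, fluorescence_gray, alpha=0.6, threshold=20):
    """Pseudocolour the fluorescence channel and blend it onto the background image.
    Both inputs are single-channel 8-bit frames of the same size."""
    norm = cv2.normalize(fluorescence_gray, None, 0, 255, cv2.NORM_MINMAX)
    colour = cv2.applyColorMap(norm.astype(np.uint8), cv2.COLORMAP_JET)
    base = cv2.cvtColor(background_gray, cv2.COLOR_GRAY2BGR)
    blended = cv2.addWeighted(base, 1.0, colour, alpha, 0)
    # keep the background untouched where there is no meaningful fluorescence signal
    mask = (norm > threshold)
    return np.where(mask[..., None], blended, base)

# Synthetic demonstration frames (in the real system these would come from the CMOS
# camera under near-infrared background and ICG-band fluorescence illumination)
bg = np.full((240, 320), 90, dtype=np.uint8)
fl = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(fl, (160, 120), 25, 200, -1)        # a bright "duct" region
out = overlay_fluorescence(bg, fl)
print(out.shape, out.dtype)
```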
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers evaluation of the camera warm-up time, evaluation of the distance measurement error, and a study of how the camera orientation with respect to the observed object influences the distance measurements. The second concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.
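The distance-calibration idea, characterising the systematic error against known distances and then applying a correction, can be sketched as a simple least-squares fit. Below, a polynomial error model is fitted to synthetic measured-versus-true distances and subtracted; the error shape, polynomial degree and data are assumptions for illustration, not the SR-4000 model derived in the paper.

```python
import numpy as np

def fit_distance_correction(measured_m, true_m, degree=5):
    """Fit a polynomial error model e(d) = measured - true as a function of the
    measured distance, so that corrected = measured - e(measured)."""
    error = np.asarray(measured_m) - np.asarray(true_m)
    return np.poly1d(np.polyfit(measured_m, error, degree))

def correct(measured_m, error_model):
    return np.asarray(measured_m) - error_model(measured_m)

# Synthetic calibration data: true distances on a rail, measured with a smooth periodic bias
true_d = np.linspace(1.0, 4.0, 25)
rng = np.random.default_rng(0)
meas_d = true_d + 0.01 * np.sin(2 * np.pi * true_d / 3.0) + rng.normal(0, 0.002, true_d.size)

model = fit_distance_correction(meas_d, true_d)
before = np.abs(meas_d - true_d).mean()
after = np.abs(correct(meas_d, model) - true_d).mean()
print(f"mean |error| before: {before * 1000:.1f} mm, after: {after * 1000:.1f} mm")
```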
Attitude identification for SCOLE using two infrared cameras
NASA Technical Reports Server (NTRS)
Shenhar, Joram
1991-01-01
An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and an on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. A description is provided of the algorithm in the camera identification mode as well as the mode of target tracking. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.
Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras
NASA Astrophysics Data System (ADS)
Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota
2017-02-01
Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate Indoor Space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed using a Human-Operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera and a computer. By using the system, accurate indoor 3D Visual Data is reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons and so forth.
GUIDE-Seq enables genome-wide profiling of off-target cleavage by CRISPR-Cas nucleases
Nguyen, Nhu T.; Liebers, Matthew; Topkar, Ved V.; Thapar, Vishal; Wyvekens, Nicolas; Khayter, Cyd; Iafrate, A. John; Le, Long P.; Aryee, Martin J.; Joung, J. Keith
2014-01-01
CRISPR RNA-guided nucleases (RGNs) are widely used genome-editing reagents, but methods to delineate their genome-wide off-target cleavage activities have been lacking. Here we describe an approach for global detection of DNA double-stranded breaks (DSBs) introduced by RGNs and potentially other nucleases. This method, called Genome-wide Unbiased Identification of DSBs Enabled by Sequencing (GUIDE-Seq), relies on capture of double-stranded oligodeoxynucleotides into these breaks. Application of GUIDE-Seq to thirteen RGNs in two human cell lines revealed wide variability in RGN off-target activities and unappreciated characteristics of off-target sequences. The majority of identified sites were not detected by existing computational methods or ChIP-Seq. GUIDE-Seq also identified RGN-independent genomic breakpoint ‘hotspots’. Finally, GUIDE-Seq revealed that truncated guide RNAs exhibit substantially reduced RGN-induced off-target DSBs. Our experiments define the most rigorous framework for genome-wide identification of RGN off-target effects to date and provide a method for evaluating the safety of these nucleases prior to clinical use. PMID:25513782
Optical Meteor Systems Used by the NASA Meteoroid Environment Office
NASA Technical Reports Server (NTRS)
Kingery, A. M.; Blaauw, R. C.; Cooke, W. J.; Moser, D. E.
2015-01-01
The NASA Meteoroid Environment Office (MEO) uses two main meteor camera networks to characterize the meteoroid environment: an all-sky system and a wide-field system to study cm- and mm-size meteors, respectively. The NASA All Sky Fireball Network consists of fifteen meteor video cameras in the United States, with plans to expand to eighteen cameras by the end of 2015. The camera design and All-Sky Guided and Real-time Detection (ASGARD) meteor detection software [1, 2] were adopted from the University of Western Ontario's Southern Ontario Meteor Network (SOMN). After seven years of operation, the network has detected over 12,000 multi-station meteors, including meteors from at least 53 different meteor showers. The network is used for speed distribution determination, characterization of meteor showers and sporadic sources, and for informing the public on bright meteor events. The NASA Wide Field Meteor Network was established in December of 2012 with two cameras and expanded to eight cameras in December of 2014. The two-camera configuration saw 5470 meteors over two years of operation, and the network has detected 3423 meteors in the first five months of operation (Dec 12, 2014 - May 12, 2015) with eight cameras. We expect to see over 10,000 meteors per year with the expanded system. The cameras have a 20 degree field of view and an approximate limiting meteor magnitude of +5. The network's primary goal is determining the nightly shower and sporadic meteor fluxes. Both camera networks function almost fully autonomously with little human interaction required for upkeep and analysis. The cameras send their data to a central server for storage and automatic analysis. Every morning the server automatically generates an e-mail and web page containing an analysis of the previous night's events. The current status of the networks will be described, along with preliminary results. In addition, future projects, including CCD photometry and a broadband meteor color camera system, will be discussed.
Structural basis for microRNA targeting
Schirle, Nicole T.; Sheu-Gruttadauria, Jessica; MacRae, Ian J.
2014-10-31
MicroRNAs (miRNAs) control expression of thousands of genes in plants and animals. miRNAs function by guiding Argonaute proteins to complementary sites in messenger RNAs (mRNAs) targeted for repression. In this paper, we determined crystal structures of human Argonaute-2 (Ago2) bound to a defined guide RNA with and without target RNAs representing miRNA recognition sites. These structures suggest a stepwise mechanism, in which Ago2 primarily exposes guide nucleotides (nt) 2 to 5 for initial target pairing. Pairing to nt 2 to 5 promotes conformational changes that expose nt 2 to 8 and 13 to 16 for further target recognition. Interactions with the guide-target minor groove allow Ago2 to interrogate target RNAs in a sequence-independent manner, whereas an adenosine binding-pocket opposite guide nt 1 further facilitates target recognition. Spurious slicing of miRNA targets is avoided through an inhibitory coordination of one catalytic magnesium ion. Finally, these results explain the conserved nucleotide-pairing patterns in animal miRNA target sites first observed over two decades ago.
Genome-scale measurement of off-target activity using Cas9 toxicity in high-throughput screens.
Morgens, David W; Wainberg, Michael; Boyle, Evan A; Ursu, Oana; Araya, Carlos L; Tsui, C Kimberly; Haney, Michael S; Hess, Gaelen T; Han, Kyuho; Jeng, Edwin E; Li, Amy; Snyder, Michael P; Greenleaf, William J; Kundaje, Anshul; Bassik, Michael C
2017-05-05
CRISPR-Cas9 screens are powerful tools for high-throughput interrogation of genome function, but can be confounded by nuclease-induced toxicity at both on- and off-target sites, likely due to DNA damage. Here, to test potential solutions to this issue, we design and analyse a CRISPR-Cas9 library with 10 variable-length guides per gene and thousands of negative controls targeting non-functional, non-genic regions (termed safe-targeting guides), in addition to non-targeting controls. We find this library has excellent performance in identifying genes affecting growth and sensitivity to the ricin toxin. The safe-targeting guides allow for proper control of toxicity from on-target DNA damage. Using this toxicity as a proxy to measure off-target cutting, we demonstrate with tens of thousands of guides both the nucleotide position-dependent sensitivity to single mismatches and the reduction of off-target cutting using truncated guides. Our results demonstrate a simple strategy for high-throughput evaluation of target specificity and nuclease toxicity in Cas9 screens.
Genome-scale measurement of off-target activity using Cas9 toxicity in high-throughput screens
Morgens, David W.; Wainberg, Michael; Boyle, Evan A.; Ursu, Oana; Araya, Carlos L.; Tsui, C. Kimberly; Haney, Michael S.; Hess, Gaelen T.; Han, Kyuho; Jeng, Edwin E.; Li, Amy; Snyder, Michael P.; Greenleaf, William J.; Kundaje, Anshul; Bassik, Michael C.
2017-01-01
CRISPR-Cas9 screens are powerful tools for high-throughput interrogation of genome function, but can be confounded by nuclease-induced toxicity at both on- and off-target sites, likely due to DNA damage. Here, to test potential solutions to this issue, we design and analyse a CRISPR-Cas9 library with 10 variable-length guides per gene and thousands of negative controls targeting non-functional, non-genic regions (termed safe-targeting guides), in addition to non-targeting controls. We find this library has excellent performance in identifying genes affecting growth and sensitivity to the ricin toxin. The safe-targeting guides allow for proper control of toxicity from on-target DNA damage. Using this toxicity as a proxy to measure off-target cutting, we demonstrate with tens of thousands of guides both the nucleotide position-dependent sensitivity to single mismatches and the reduction of off-target cutting using truncated guides. Our results demonstrate a simple strategy for high-throughput evaluation of target specificity and nuclease toxicity in Cas9 screens. PMID:28474669
Instrumentation for Aim Point Determination in the Close-in Battle
2007-12-01
Figure 4. Rugged camcorder with remote “lipstick” camera (http://www.samsung.com/Products/Camcorder/DigitalMemory/files/scx210wl.pdf). ... One way of making a measurement is to mount a small “lipstick” camera to the rifle with a mount similar to the laser-tag transmitter mount. (...technology.com/contractors/surveillance/viotac-inc/viotac-inc1.html)
User-assisted visual search and tracking across distributed multi-camera networks
NASA Astrophysics Data System (ADS)
Raja, Yogesh; Gong, Shaogang; Xiang, Tao
2011-11-01
Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
Composite Wavelet Filters for Enhanced Automated Target Recognition
NASA Technical Reports Server (NTRS)
Chiang, Jeffrey N.; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin
2012-01-01
Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low-resolution sonar and camera videos taken from unmanned vehicles. These sonar images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both sonar and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this paper.
Highly Complementary Target RNAs Promote Release of Guide RNAs from Human Argonaute2
De, Nabanita; Young, Lisa; Lau, Pick-Wei; Meisner, Nicole-Claudia; Morrissey, David V.; MacRae, Ian J.
2013-01-01
Argonaute proteins use small RNAs to guide the silencing of complementary target RNAs in many eukaryotes. Although small RNA biogenesis pathways are well studied, mechanisms for removal of guide RNAs from Argonaute are poorly understood. Here we show that the Argonaute2 (Ago2) guide RNA complex is extremely stable, with a half-life on the order of days. However, highly complementary target RNAs destabilize the complex and significantly accelerate release of the guide RNA from Ago2. This “unloading” activity can be enhanced by mismatches between the target and the guide 5′ end and attenuated by mismatches to the guide 3′ end. The introduction of 3′ mismatches leads to more potent silencing of abundant mRNAs in mammalian cells. These findings help to explain why the 3′ ends of mammalian microRNAs (miRNAs) rarely match their targets, suggest a mechanism for sequence-specific small RNA turnover, and offer insights for controlling small RNAs in mammalian cells. PMID:23664376
Structural Basis for Guide RNA Processing and Seed-Dependent DNA Targeting by CRISPR-Cas12a.
Swarts, Daan C; van der Oost, John; Jinek, Martin
2017-04-20
The CRISPR-associated protein Cas12a (Cpf1), which has been repurposed for genome editing, possesses two distinct nuclease activities: endoribonuclease activity for processing its own guide RNAs and RNA-guided DNase activity for target DNA cleavage. To elucidate the molecular basis of both activities, we determined crystal structures of Francisella novicida Cas12a bound to guide RNA and in complex with an R-loop formed by a non-cleavable guide RNA precursor and a full-length target DNA. Corroborated by biochemical experiments, these structures reveal the mechanisms of guide RNA processing and pre-ordering of the seed sequence in the guide RNA that primes Cas12a for target DNA binding. Furthermore, the R-loop complex structure reveals the strand displacement mechanism that facilitates guide-target hybridization and suggests a mechanism for double-stranded DNA cleavage involving a single active site. Together, these insights advance our mechanistic understanding of Cas12a enzymes and may contribute to further development of genome editing technologies. Copyright © 2017 Elsevier Inc. All rights reserved.
NUSC Technical Publications Guide.
1985-05-01
Facility personnel, especially that of A. Castelluzzo, E. Deland, J. Gesel, and E. Szlosek (all of Code 4343). Reviewed and Approved: 14 July 1980. ...their technical content and format. Review and approve the manual outline, the review manuscript, and the final camera-reproducible copy. Conduct in...
Erosion Patterns May Guide Mars Rover to Rocks Recently Exposed
2013-12-09
These two images come from the HiRISE camera on NASA's Mars Reconnaissance Orbiter. Images of locations in Gale Crater taken from orbit around Mars reveal evidence of erosion in recent geological times and the development of small scarps, or vertical surfaces.
Hyperspectral imaging using a color camera and its application for pathogen detection
USDA-ARS?s Scientific Manuscript database
This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...
The calibration of video cameras for quantitative measurements
NASA Technical Reports Server (NTRS)
Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.
1993-01-01
Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.
Photogrammetry System and Method for Determining Relative Motion Between Two Bodies
NASA Technical Reports Server (NTRS)
Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)
2014-01-01
A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.
Printed circuit board for a CCD camera head
Conder, Alan D.
2002-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
Composite x-ray pinholes for time-resolved microphotography of laser compressed targets.
Attwood, D T; Weinstein, B W; Wuerker, R F
1977-05-01
Composite x-ray pinholes having dichroic properties are presented. These pinholes permit both x-ray imaging and visible alignment with micron accuracy by presenting different apparent apertures in these widely disparate regions of the spectrum. Their use is mandatory in certain applications in which the x-ray detection consists of a limited number of resolvable elements whose use one wishes to maximize. Mating the pinhole camera with an x-ray streaking camera is described, along with experiments which spatially and temporally resolve the implosion of laser irradiated targets.
Accurate shade image matching by using a smartphone camera.
Tam, Weng-Kong; Lee, Hsi-Jian
2017-04-01
Dental shade matching by using digital images may be feasible when suitable color features are properly manipulated. Separating the color features into feature spaces facilitates favorable matching. We propose using support vector machines (SVM), which are outstanding classifiers, in shade classification. A total of 1300 shade tab images were captured using a smartphone camera with auto-mode settings and no flash. The images were shot at angled distances of 14-20cm from a shade guide at a clinic equipped with light tubes that produced a 4000K color temperature. The Group 1 samples comprised 1040 tab images, for which the shade guide was randomly positioned in the clinic, and the Group 2 samples comprised 260 tab images, for which the shade guide had a fixed position in the clinic. Rectangular content was cropped manually on each shade tab image and further divided into 10×2 blocks. The color features extracted from the blocks were described using a feature vector. The feature vectors in each group underwent SVM training and classification by using the "leave-one-out" strategy. The top one and three accuracies of Group 1 were 0.86 and 0.98, respectively, and those of Group 2 were 0.97 and 1.00, respectively. This study provides a feasible technique for dental shade classification that uses the camera of a mobile device. The findings reveal that the proposed SVM classification might outperform the shade-matching results of previous studies that have performed similarity measurements of ΔE levels or used an S, a*, b* feature set. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
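The sketch below illustrates the classification step under stated assumptions: per-block colour features are concatenated into one vector and an RBF SVM is scored with the leave-one-out strategy using scikit-learn. The synthetic data and all parameter values are placeholders, not the paper's features or settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def block_features(tab_image_blocks):
    """Mean colour of each of the 10x2 cropped blocks, concatenated into one
    feature vector (a stand-in for the paper's colour features)."""
    return np.concatenate([b.reshape(-1, 3).mean(axis=0) for b in tab_image_blocks])

# toy data: 26 shade classes, 10 synthetic 60-dim samples each (placeholder for real tab images)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.05, size=(10, 60)) for c in np.linspace(0, 1, 26)])
y = np.repeat(np.arange(26), 10)

clf = SVC(kernel="rbf", C=10, gamma="scale")
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()   # leave-one-out accuracy
print(f"top-1 leave-one-out accuracy: {acc:.2f}")
```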
Moving target feature phenomenology data collection at China Lake
NASA Astrophysics Data System (ADS)
Gross, David C.; Hill, Jeff; Schmitz, James L.
2002-08-01
This paper describes the DARPA Moving Target Feature Phenomenology (MTFP) data collection conducted at the China Lake Naval Weapons Center's Junction Ranch in July 2001. The collection featured both X-band and Ku-band radars positioned on top of Junction Ranch's Parrot Peak. The test included seven targets used in eleven configurations with vehicle motion consisting of circular, straight-line, and 90-degree turning motion. Data was collected at 10-degree and 17-degree depression angles. Key parameters in the collection were polarization, vehicle speed, and road roughness. The collection also included a canonical target positioned at Junction Ranch's tilt-deck turntable. The canonical target included rotating wheels (military truck tire and civilian pick-up truck tire) and a flat plate with variable positioned corner reflectors. The canonical target was also used to simulate a rotating antenna and a vibrating plate. The target vehicles were instrumented with ARDS pods for differential GPS and roll, pitch and yaw measurements. Target motion was also documented using a video camera slaved to the X-band radar antenna and by a video camera operated near the target site.
Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.
2017-08-01
Multi-lens multispectral cameras (MSCs), such as the Micasense Rededge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, the multi-sensor geometry of the multi-lens structure induces significant band misregistration in the original images, so band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed to perform band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and to use the calibrated results for image transformation and lens distortion correction. Since the calibration uncertainty leads to different amounts of systematic error, the last step is to optimize the results in order to acquire better co-registration accuracy. Because parallax can cause significant band misregistration when images are taken close to the targets, four datasets acquired from the Rededge and Sequoia, including aerial and close-range imagery, were used to evaluate the performance of RABBIT. The results for the aerial images show that RABBIT can achieve sub-pixel accuracy, which is suitable for the band co-registration of any multi-lens MSC. The close-range images show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is at an equal distance from the camera.
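A generic feature-based co-registration sketch (not the RABBIT method, which relies on rig calibration): one band is warped onto a reference band via ORB matches and a RANSAC homography using OpenCV. The file names in the usage comment are hypothetical.

```python
import cv2
import numpy as np

def coregister_band(ref_band, src_band):
    """Warp one spectral band onto a reference band: match ORB features,
    fit a homography with RANSAC, and resample the source band into the
    reference geometry (generic sketch, not the RABBIT algorithm)."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_band, None)
    k2, d2 = orb.detectAndCompute(src_band, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    pts_ref = np.float32([k1[m.queryIdx].pt for m in matches])
    pts_src = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_src, pts_ref, cv2.RANSAC, 3.0)
    h, w = ref_band.shape[:2]
    return cv2.warpPerspective(src_band, H, (w, h))

# usage (hypothetical file names):
# green = cv2.imread("band_green.tif", cv2.IMREAD_GRAYSCALE)
# nir   = cv2.imread("band_nir.tif", cv2.IMREAD_GRAYSCALE)
# nir_registered = coregister_band(green, nir)
```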
Eccentricity error identification and compensation for high-accuracy 3D optical measurement
He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z
2016-01-01
The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement system. The identification and compensation of the circular target systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require taking care of the geometric parameters of the measurement system regarding target and camera. Therefore, the proposed approach is very flexible in practical applications, and in particular, it is also applicable in the case of only one image with a single target available. The experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation. PMID:26900265
Eccentricity error identification and compensation for high-accuracy 3D optical measurement.
He, Dong; Liu, Xiaoli; Peng, Xiang; Ding, Yabin; Gao, Bruce Z
2013-07-01
The circular target has been widely used in various three-dimensional optical measurements, such as camera calibration, photogrammetry and structured light projection measurement system. The identification and compensation of the circular target systematic eccentricity error caused by perspective projection is an important issue for ensuring accurate measurement. This paper introduces a novel approach for identifying and correcting the eccentricity error with the help of a concentric circles target. Compared with previous eccentricity error correction methods, our approach does not require taking care of the geometric parameters of the measurement system regarding target and camera. Therefore, the proposed approach is very flexible in practical applications, and in particular, it is also applicable in the case of only one image with a single target available. The experimental results are presented to prove the efficiency and stability of the proposed approach for eccentricity error compensation.
The Sensor Irony: How Reliance on Sensor Technology is Limiting Our View of the Battlefield
2010-05-10
thermal) camera, as well as a laser illuminator/range finder. Similar to the MQ-1, the MQ-9 Reaper is primarily a strike asset for emerging targets... Wescam 14TS. Both systems have an electro-optical (daylight) TV camera, an infra-red (thermal) camera, as well as a laser illuminator/range finder...
Video sensor with range measurement capability
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)
2008-01-01
A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
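A hedged sketch of the underlying single-camera laser-parallax geometry (not the patented processing): with the laser mounted a known baseline from the camera, a spot's pixel offset gives a viewing angle and hence a range. All parameter values are illustrative.

```python
import math

def spot_range(pixel_offset, focal_length_px, baseline_m, laser_tilt_deg=0.0):
    """Estimate the range to a laser spot seen by a single camera.

    Simple parallax model: the laser is offset from the camera by a known
    baseline; the spot's pixel offset from the principal point and the focal
    length in pixels give the viewing angle, and baseline/angle geometry
    gives the range. A sketch only, not the patented method.
    """
    view_angle = math.atan2(pixel_offset, focal_length_px)   # angle to the spot
    total = view_angle + math.radians(laser_tilt_deg)        # include any laser tilt
    if total <= 0:
        raise ValueError("geometry gives no intersection")
    return baseline_m / math.tan(total)

# toy usage: 5 cm baseline, 800 px focal length, spot 40 px off-centre -> ~1 m
print(f"range ~ {spot_range(40, 800, 0.05):.2f} m")
```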
NASA Astrophysics Data System (ADS)
House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor
2017-03-01
PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy is acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align CT image with phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: Accuracy of the camera yielded a median position error of 3.3mm (95th percentile 6.7mm) and orientation error of 1.6° (95th percentile 4.3°) in a 20x16x10cm workspace, constantly maintaining proper marker orientation. The model and surface correctly aligned demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.
Compact CdZnTe-based gamma camera for prostate cancer imaging
NASA Astrophysics Data System (ADS)
Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.
2011-06-01
In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera have been completed. The results show better than 6-mm resolution at a distance of 1 cm. Details of the test results are discussed in this paper.
Dynamic characteristics of far-field radiation of current modulated phase-locked diode laser arrays
NASA Technical Reports Server (NTRS)
Elliott, R. A.; Hartnett, K.
1987-01-01
A versatile and powerful streak camera/frame grabber system for studying the evolution of the near and far field radiation patterns of diode lasers was assembled and tested. Software needed to analyze and display the data acquired with the streak camera/frame grabber system was written, and the total package was used to record and perform preliminary analyses on the behavior of two types of laser: a ten-emitter gain-guided array and a flared waveguide Y-coupled array. Examples of the information which can be gathered with this system are presented.
NASA Astrophysics Data System (ADS)
Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team
2017-12-01
The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, and the reconstruction of recycling fluxes from synthetic observations is demonstrated.
'Illinois' and 'New York' Wiped Clean
NASA Technical Reports Server (NTRS)
2004-01-01
This panoramic camera image was taken by NASA's Mars Exploration Rover Spirit on sol 79 after completing a two-location brushing on the rock dubbed 'Mazatzal.' A coating of fine, dust-like material was successfully removed from targets named 'Illinois' (right) and 'New York' (left), revealing the weathered rock underneath. In this image, Spirit's panoramic camera mast assembly, or camera head, can be seen shadowing Mazatzal's surface. This approximate true color image was taken with the 601, 535 and 482 nanometer filters.
The centers of the two brushed spots are approximately 10 centimeters (3.9 inches) apart and will be aggressively analyzed by the instruments on the robotic arm on sol 80. Plans for sol 81 are to grind into the New York target to get past any weathered rock and expose the original, internal rock underneath.
Sniper detection using infrared camera: technical possibilities and limitations
NASA Astrophysics Data System (ADS)
Kastek, M.; Dulski, R.; Trzaskawka, P.; Bieszczad, G.
2010-04-01
The paper discusses the technical possibilities of building an effective system for sniper detection using infrared cameras. The phenomena that make it possible to detect sniper activity in the infrared spectrum are described and the physical limitations are analyzed. Cooled and uncooled detectors were considered. Three phases of sniper activity were taken into consideration: before, during and after the shot. On the basis of experimental data, the target-defining parameters essential for assessing the capability of an infrared camera to detect sniper activity were determined. A sniper's body and muzzle flash were analyzed as targets. Detection ranges were simulated for the assumed sniper-detection scenario. An infrared sniper detection system capable of fulfilling these requirements is discussed, and the results of the analyses and simulations are presented.
Making Movies: From Script to Screen.
ERIC Educational Resources Information Center
Bobker, Lee R.
This book is a guide to the making of films. It covers preparation (scripting, storyboarding, budgeting, casting, and crew selection), filming (directing, camera operating, and sound recording), and postproduction (editing, sound dubbing, laboratory processing, and trial screening). Distribution of films is discussed in detail. Possible careers in…
Inconspicuous echolocation in hoary bats (Lasiurus cinereus)
Aaron J. Corcoran; Theodore J. Weller
2018-01-01
Echolocation allows bats to occupy diverse nocturnal niches. Bats almost always use echolocation, even when other sensory stimuli are available to guide navigation. Here, using arrays of calibrated infrared cameras and ultrasonic microphones, we demonstrate that hoary bats (Lasiurus cinereus) use previously unknown echolocation behaviours that...
Josephs, Eric A.; Kocak, D. Dewran; Fitzgibbon, Christopher J.; McMenemy, Joshua; Gersbach, Charles A.; Marszalek, Piotr E.
2015-01-01
CRISPR-associated endonuclease Cas9 cuts DNA at variable target sites designated by a Cas9-bound RNA molecule. Cas9's ability to be directed by single ‘guide RNA’ molecules to target nearly any sequence has been recently exploited for a number of emerging biological and medical applications. Therefore, understanding the nature of Cas9's off-target activity is of paramount importance for its practical use. Using atomic force microscopy (AFM), we directly resolve individual Cas9 and nuclease-inactive dCas9 proteins as they bind along engineered DNA substrates. High-resolution imaging allows us to determine their relative propensities to bind with different guide RNA variants to targeted or off-target sequences. Mapping the structural properties of Cas9 and dCas9 to their respective binding sites reveals a progressive conformational transformation at DNA sites with increasing sequence similarity to its target. With kinetic Monte Carlo (KMC) simulations, these results provide evidence of a ‘conformational gating’ mechanism driven by the interactions between the guide RNA and the 14th–17th nucleotide region of the targeted DNA, the stabilities of which we find correlate significantly with reported off-target cleavage rates. KMC simulations also reveal potential methodologies to engineer guide RNA sequences with improved specificity by considering the invasion of guide RNAs into targeted DNA duplex. PMID:26384421
Si, Xingfeng; Kays, Roland
2014-01-01
Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing the minimum trapping effort needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and the survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period. PMID:24868493
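A generic rarefaction sketch of the kind of species-accumulation analysis described, assuming detection records as (camera-day, species) pairs; it is not the paper's exact analysis or dataset.

```python
import numpy as np

def rarefaction_curve(records, n_resamples=200, rng=None):
    """Species accumulation by camera-days (generic rarefaction sketch).

    records : list of (camera_day, species) detection tuples
    Returns the mean number of species detected after 1..N sampled camera-days.
    """
    rng = rng or np.random.default_rng(0)
    days = sorted({d for d, _ in records})
    by_day = {d: {s for dd, s in records if dd == d} for d in days}
    curve = np.zeros(len(days))
    for _ in range(n_resamples):
        seen = set()
        for i, d in enumerate(rng.permutation(days)):   # random ordering of camera-days
            seen |= by_day[d]
            curve[i] += len(seen)
    return curve / n_resamples

# toy usage: 30 camera-days, 5 species with different detection rates
rng = np.random.default_rng(1)
recs = [(d, s) for d in range(30) for s in range(5) if rng.random() < 0.1 * (s + 1)]
print(np.round(rarefaction_curve(recs), 1))
```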
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation. Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
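As a quick check of the quoted figures, the snippet below converts a brightness ratio to decibels, assuming the 20·log10 convention commonly used to rate image-sensor dynamic range; the 12,600:1 ratio used here is illustrative only.

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Dynamic range in decibels, 20*log10(max/min), assuming the usual
    image-sensor convention (an assumption, not stated in the abstract)."""
    return 20.0 * math.log10(max_signal / min_signal)

# an ~82 dB range corresponds to roughly a 12,600:1 brightness ratio
print(f"{dynamic_range_db(12600, 1):.1f} dB")
```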
NASA Technical Reports Server (NTRS)
Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)
1985-01-01
Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.
Design and realization of an AEC&AGC system for the CCD aerial camera
NASA Astrophysics Data System (ADS)
Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun
2015-08-01
An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. A conventional AEC and AGC algorithm is not suitable for the aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output so that the image is better suited for viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe and complex environments.
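A minimal sketch of one AEC/AGC iteration under stated assumptions (the shutter is adjusted first, and gain is raised only once the shutter reaches a motion-blur limit); the target level, limits, and names are illustrative, not the paper's controller.

```python
import math

def auto_exposure_step(mean_brightness, shutter_us, gain_db, target=120.0,
                       shutter_max=2000.0, gain_max=24.0):
    """One iteration of a simple AEC/AGC loop: nudge the electronic shutter
    toward the brightness target; once the shutter saturates at its
    motion-blur limit, make up the rest with analogue gain.
    All parameter names and limits are illustrative placeholders."""
    error = target / max(mean_brightness, 1.0)          # >1 means under-exposed
    new_shutter = min(shutter_us * error, shutter_max)  # exposure does what it can
    residual = error * shutter_us / new_shutter         # remaining correction factor
    new_gain = min(max(gain_db + 20.0 * math.log10(residual), 0.0), gain_max)
    return new_shutter, new_gain

# toy usage: a dark frame (mean 40/255) with the shutter already near its limit
print(auto_exposure_step(40.0, shutter_us=1800.0, gain_db=0.0))
```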
NASA Astrophysics Data System (ADS)
Morison, Ian
2017-02-01
1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.
Autonomous Selection of a Rover Laser Target on Mars
2016-07-21
NASA's Curiosity Mars rover autonomously selects some of the targets for the laser and telescopic camera of the rover's Chemistry and Camera (ChemCam) instrument. For example, on-board software analyzed the image on the left, chose the target highlighted with the yellow dot, and pointed ChemCam to acquire laser analysis and the image on the right. Most ChemCam targets are still selected by scientists discussing rocks or soil seen in images the rover has sent to Earth, but the autonomous targeting provides an added capability. It can offer a head start on acquiring composition information at a location just reached by a drive. The software for target selection and instrument pointing is called AEGIS, for Autonomous Exploration for Gathering Increased Science. The image on the left was taken by the left eye of Curiosity's stereo Navigation Camera (Navcam) a few minutes after the rover completed a drive of about 43 feet (13 meters) on July 14, 2016, during the 1,400th Martian day, or sol, of the rover's work on Mars. Using AEGIS for target selection and pointing based on the Navcam imagery, Curiosity's ChemCam zapped a grid of nine points on a rock chosen for meeting criteria set by the science team. In this run, parameters were set to find bright-toned outcrop rock rather than darker rocks, which in this area tend to be loose on the surface. Within less than 30 minutes after the Navcam image was taken, ChemCam had used its laser on all nine points and had taken before-and-after images of the target area with its remote micro-imager (RMI) camera. The image at right combines those two RMI exposures. The nine laser targets are marked in red at the center. On the Navcam image at left, the yellow dot identifies the selected target area, which is about 2.2 inches (5.6 centimeters) in diameter. An unannotated version of this Sol 1400 Navcam image is available. ChemCam records spectra of glowing plasma generated when the laser hits a target point. These spectra provide information about the chemical elements present in the target. The light-toned patch of bedrock identified by AEGIS on Sol 1400 appears, geochemically, to belong to the "Stimson" sandstone unit of lower Mount Sharp. In mid-2016, Curiosity typically uses AEGIS for selecting a ChemCam target more than once per week. http://photojournal.jpl.nasa.gov/catalog/PIA20762
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: site traffic monitoring, parking lot surveillance, in-vehicle sensing and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications and computing are stimulating the development of new applications that transform the traditional vision system into a pervasive network of smart cameras. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area surveillance and traffic monitoring. Whereas dense camera networks, in which most cameras have large overlapping fields of view, are well studied, we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so that most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this review paper, we present a comprehensive survey of recent results on topology learning, object appearance modeling and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
2010-01-01
open garage leading to the building interior. The UAV is positioned north of a potential ingress to the building. As the mission begins, the UAV...camera, the difficulty in detecting and navigating around obstacles using this non-stereo camera necessitated a precomputed map of all obstacles and
The development of large-aperture test system of infrared camera and visible CCD camera
NASA Astrophysics Data System (ADS)
Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying
2015-10-01
Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many kinds of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for an infrared camera and a visible CCD camera uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, which also improves the image quality of the wide-field collimator and the test accuracy. Its performance is the same as that of comparable foreign systems at a much lower cost. It will have a good market.
Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring
NASA Astrophysics Data System (ADS)
Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.
2014-12-01
Hadrontherapy is an innovative radiation therapy modality for which one of the key advantages is the target conformality allowed by the physical properties of ion species. However, to fully exploit its potential, online monitoring is required to assess treatment quality, namely through monitoring devices relying on the detection of secondary radiation. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements to simultaneously have a solution that is relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.
Compact fluorescence and white-light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; tan Hehir, Cristina
2012-02-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument is comprised of a 405nm laser and a white light LED source for excitation and illumination. A single 90 gram color CCD camera is coupled to a 10mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows for simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
A compact fluorescence and white light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; Tan Hehir, Cristina
2012-03-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument is comprised of a 405nm laser and a white light LED source for excitation and illumination. A single 90 gram color CCD camera is coupled to a 10mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows for simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
Partial DNA-guided Cas9 enables genome editing with reduced off-target activity
Yin, Hao; Song, Chun-Qing; Suresh, Sneha; Kwan, Suet-Yan; Wu, Qiongqiong; Walsh, Stephen; Ding, Junmei; Bogorad, Roman L; Zhu, Lihua Julie; Wolfe, Scot A; Koteliansky, Victor; Xue, Wen; Langer, Robert; Anderson, Daniel G
2018-01-01
CRISPR–Cas9 is a versatile RNA-guided genome editing tool. Here we demonstrate that partial replacement of RNA nucleotides with DNA nucleotides in CRISPR RNA (crRNA) enables efficient gene editing in human cells. This strategy of partial DNA replacement retains on-target activity when used with both crRNA and sgRNA, as well as with multiple guide sequences. Partial DNA replacement also works for crRNA of Cpf1, another CRISPR system. We find that partial DNA replacement in the guide sequence significantly reduces off-target genome editing through focused analysis of off-target cleavage, measurement of mismatch tolerance and genome-wide profiling of off-target sites. Using the structure of the Cas9–sgRNA complex as a guide, the majority of the 3′ end of crRNA can be replaced with DNA nucleotides, and the 5′- and 3′-DNA-replaced crRNA enables efficient genome editing. Cas9 guided by a DNA–RNA chimera may provide a generalized strategy to reduce both the cost and the off-target genome editing in human cells. PMID:29377001
Testing of a Composite Wavelet Filter to Enhance Automated Target Recognition in SONAR
NASA Technical Reports Server (NTRS)
Chiang, Jeffrey N.
2011-01-01
Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low resolution SONAR and camera videos taken from Unmanned Underwater Vehicles (UUVs). These SONAR images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both SONAR and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this report.
Low power multi-camera system and algorithms for automated threat detection
NASA Astrophysics Data System (ADS)
Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin
2013-05-01
A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and back-end detection algorithm consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered-down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to the baseline normal operation. This work was applied to Phase 2 of DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
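A sketch of the duty-cycling idea, assuming a hypothetical camera-driver interface (power_on/power_off/grab_frame); it is not the CT2WS implementation, but it shows how powering one camera at a time yields roughly N-fold power savings for an N-camera array.

```python
class StubCamera:
    """Placeholder for a real camera driver (hypothetical interface)."""
    def __init__(self, cam_id):
        self.cam_id, self.powered = cam_id, False
    def power_on(self):
        self.powered = True
    def power_off(self):
        self.powered = False
    def grab_frame(self):
        return f"frame-from-cam{self.cam_id}"

def round_robin_capture(cameras, cycles=2, frames_per_burst=4, detector=print):
    """Cycle through fixed cameras, powering only one at a time (a sketch of
    the duty-cycling idea). With N cameras and one active at any moment,
    capture power drops roughly N-fold versus running all cameras at once."""
    for _ in range(cycles):
        for cam in cameras:
            cam.power_on()
            burst = [cam.grab_frame() for _ in range(frames_per_burst)]
            cam.power_off()
            detector(burst)              # run target detection on the burst

round_robin_capture([StubCamera(i) for i in range(3)])
```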
Guided exploration in virtual environments
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas
2001-06-01
We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.
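A minimal potential-field sketch of the guidance idea, assuming point attractors for objects of current interest and point repulsors for obstacles; the gains, step size and 3D representation are illustrative, not the CyberStage implementation.

```python
import numpy as np

def guidance_step(cam_pos, targets, obstacles, step=0.1, k_att=1.0, k_rep=0.5):
    """One potential-field update for a guided-exploration camera: objects of
    current interest attract the viewpoint, obstacles repel it (sketch only)."""
    cam_pos = np.asarray(cam_pos, float)
    force = np.zeros(3)
    for t in np.atleast_2d(targets):                 # attraction toward interesting objects
        force += k_att * (np.asarray(t, float) - cam_pos)
    for o in np.atleast_2d(obstacles):               # short-range repulsion from obstacles
        d = cam_pos - np.asarray(o, float)
        dist = np.linalg.norm(d) + 1e-6
        force += k_rep * d / dist**3
    return cam_pos + step * force                    # next suggested camera position

# toy usage: one exhibit of interest, one wall point to avoid
print(guidance_step([0, 0, 0], targets=[[2, 0, 0]], obstacles=[[1, 0.2, 0]]))
```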
Automatic Orientation of Large Blocks of Oblique Images
NASA Astrophysics Data System (ADS)
Rupnik, E.; Nex, F.; Remondino, F.
2013-05-01
Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to their advantages, such as ease of interpretation, completeness through mitigation of occluded areas, as well as system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. The camera configuration poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable. For large image blocks this is certainly an impediment. Within the theoretical part of the work we review three common least-squares adjustment methods and recap possible ways of orienting a multi-camera system. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon Eos 1D Mark III) with different focal lengths. Oblique cameras are rotated in the four looking directions (forward, backward, left and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a developed tool to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold, in terms of computational time and of the success of the Bundle Block Adjustment. It exploits the georeferenced information provided by the Applanix system to constrain feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pix.
RM-10A robotic manipulator system
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, J.R.; Coughlan, J.B.; Harvey, H.W.
1988-01-01
The REMOTE RM-10A is a man-replacement manipulator system that has been developed specifically for use in radioactive and other hazardous environments. It can be teleoperated, with man-in-the-loop, for unstructured tasks or programmed to perform routine tasks automatically much like robots in the automated manufacturing industry. The RM-10A is a servomanipulator utilizing a closed-loop, microprocessor-based control system. The system consists of a slave assembly, master control station, and interconnecting cabling. The slave assembly is the part of the system that enters the hostile environment. It is man-like in size and configuration with two identical arms attached to a torso structure. Each arm attaches to the torso using two captive screws and two guide pins. The guide pins position and stabilize an arm during removal and reinstallation and also align the two electrical connectors located in the arm support plate and torso. These features allow easy remote replacement of an arm, and commonality of the arms allows interchangeability. The water-resistant slave assembly is equipped with gaskets and O-ring seals in the torso and arm and camera assemblies. In addition, each slave arm's elbow, wrist, and tong are protected by replaceable polyurethane boots. An upper camera assembly, consisting of a color television (TV) camera, 6:1 zoom lens, and a pan/tilt unit, mounts to the torso to provide remote viewing capability.
2012-08-17
This image shows the calibration target for the Chemistry and Camera (ChemCam) instrument on NASA's Curiosity rover. The calibration target is one square and a group of nine circles that look dark in the black-and-white image.
MAHLI Calibration Target in Ultraviolet Light
2012-02-07
During pre-flight testing in March 2011, the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars rover Curiosity took this image of the MAHLI calibration target under illumination from MAHLI's two ultraviolet LEDs (light-emitting diodes).
SFR test fixture for hemispherical and hyperhemispherical camera systems
NASA Astrophysics Data System (ADS)
Tamkin, John M.
2017-08-01
Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.
Toward image guided robotic surgery: system validation.
Herrell, Stanley D; Kwartowitz, David Morgan; Milhoua, Paul M; Galloway, Robert L
2009-02-01
Navigation for current robotic assisted surgical techniques is primarily accomplished through a stereo pair of laparoscopic camera images. These images provide standard optical visualization of the surface but provide no subsurface information. Image guidance methods allow the visualization of subsurface information to determine the current position in relationship to that of tracked tools. A robotic image guided surgical system was designed and implemented based on our previous laboratory studies. A series of experiments using tissue mimicking phantoms with injected target lesions was performed. The surgeon was asked to resect "tumor" tissue with and without the augmentation of image guidance using the da Vinci robotic surgical system. Resections were performed and compared to an ideal resection based on the radius of the tumor measured from preoperative computerized tomography. A quantity called the resection ratio, that is the ratio of resected tissue compared to the ideal resection, was calculated for each of 13 trials and compared. The mean +/- SD resection ratio of procedures augmented with image guidance was smaller than that of procedures without image guidance (3.26 +/- 1.38 vs 9.01 +/- 1.81, p <0.01). Additionally, procedures using image guidance were shorter (average 8 vs 13 minutes). It was demonstrated that there is a benefit from the augmentation of laparoscopic video with updated preoperative images. Incorporating our image guided system into the da Vinci robotic system improved overall tissue resection, as measured by our metric. Adding image guidance to the da Vinci robotic surgery system may result in the potential for improvements such as the decreased removal of benign tissue while maintaining an appropriate surgical margin.
Robust 3D Position Estimation in Wide and Unconstrained Indoor Environments
Mossel, Annette
2015-01-01
In this paper, a system for 3D position estimation in wide, unconstrained indoor environments is presented that employs infrared optical outside-in tracking of rigid-body targets with a stereo camera rig. To overcome limitations of state-of-the-art optical tracking systems, a pipeline for robust target identification and 3D point reconstruction has been investigated that enables camera calibration and tracking in environments with poor illumination, static and moving ambient light sources, occlusions and harsh conditions, such as fog. For evaluation, the system has been successfully applied in three different wide and unconstrained indoor environments, (1) user tracking for virtual and augmented reality applications, (2) handheld target tracking for tunneling and (3) machine guidance for mining. The results of each use case are discussed to embed the presented approach into a larger technological and application context. The experimental results demonstrate the system’s capabilities to track targets up to 100 m. Comparing the proposed approach to prior art in optical tracking in terms of range coverage and accuracy, it significantly extends the available tracking range, while only requiring two cameras and providing a relative 3D point accuracy with sub-centimeter deviation up to 30 m and low-centimeter deviation up to 100 m. PMID:26694388
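A generic stereo-triangulation sketch of the outside-in reconstruction step, assuming calibrated 3x4 projection matrices and detected 2D marker centroids (OpenCV); the toy camera geometry is hypothetical, and the full pipeline (marker identification, robustness to fog and occlusion) is omitted.

```python
import cv2
import numpy as np

def triangulate_marker(P_left, P_right, uv_left, uv_right):
    """Reconstruct a 3D marker position from one stereo observation.
    P_left/P_right are 3x4 projection matrices from stereo calibration;
    uv_* are the 2D blob centroids of an IR marker in each image."""
    pts4d = cv2.triangulatePoints(P_left, P_right,
                                  np.float32(uv_left).reshape(2, 1),
                                  np.float32(uv_right).reshape(2, 1))
    return (pts4d[:3] / pts4d[3]).ravel()            # de-homogenise

# toy usage: two ideal cameras 0.5 m apart along x, both looking down +z
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P_left  = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.], [0.]])])
point = np.array([0.1, 0.05, 3.0])                   # ground-truth marker, 3 m away
uv_l = P_left  @ np.append(point, 1.0); uv_l = uv_l[:2] / uv_l[2]
uv_r = P_right @ np.append(point, 1.0); uv_r = uv_r[:2] / uv_r[2]
print(triangulate_marker(P_left, P_right, uv_l, uv_r))   # ~ [0.1, 0.05, 3.0]
```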
A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.
2003-06-01
The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.
CRISPR-Cas9 nuclear dynamics and target recognition in living cells
Ma, Hanhui; Tu, Li-Chun; Zhang, Shaojie; Grunwald, David
2016-01-01
The bacterial CRISPR-Cas9 system has been repurposed for genome engineering, transcription modulation, and chromosome imaging in eukaryotic cells. However, the nuclear dynamics of clustered regularly interspaced short palindromic repeats (CRISPR)–associated protein 9 (Cas9) guide RNAs and target interrogation are not well defined in living cells. Here, we deployed a dual-color CRISPR system to directly measure the stability of both Cas9 and guide RNA. We found that Cas9 is essential for guide RNA stability and that the nuclear Cas9–guide RNA complex levels limit the targeting efficiency. Fluorescence recovery after photobleaching measurements revealed that single mismatches in the guide RNA seed sequence reduce the target residence time from >3 h to as low as <2 min in a nucleotide identity- and position-dependent manner. We further show that the duration of target residence correlates with cleavage activity. These results reveal that CRISPR discriminates between genuine versus mismatched targets for genome editing via radical alterations in residence time. PMID:27551060
Pancam: A Multispectral Imaging Investigation on the NASA 2003 Mars Exploration Rover Mission
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Dingizian, A.; Brown, D.; Morris, R. V.; Arneson, H. M.; Johnson, M. J.
2003-01-01
One of the six science payload elements carried on each of the NASA Mars Exploration Rovers (MER; Figure 1) is the Panoramic Camera System, or Pancam. Pancam consists of three major components: a pair of digital CCD cameras, the Pancam Mast Assembly (PMA), and a radiometric calibration target. The PMA provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. The calibration target provides a set of reference color and grayscale standards for calibration validation, and a shadow post for quantification of the direct vs. diffuse illumination of the scene. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover in up to 12 unique wavelengths. The major characteristics of Pancam are summarized.
NASA Technical Reports Server (NTRS)
1982-01-01
Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies, and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. A multispectral viewer, such as Spectral Data's Model 75, then creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.
Feasibility evaluation of a motion detection system with face images for stereotactic radiosurgery.
Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo
2011-01-01
In stereotactic radiosurgery we can irradiate a targeted volume precisely with a narrow high-energy x-ray beam, and thus motion of the targeted area may cause side effects in normal organs. This paper describes our motion detection system with three USB cameras. To reduce the effect of changes in illuminance in the tracking area we used an infrared light and USB cameras that were sensitive to the infrared light. Motion detection of a patient was performed by tracking his/her ears and nose with the three USB cameras, where pattern matching between a predefined template image for each view and the acquired images was done by an exhaustive search method with general-purpose computing on a graphics processing unit (GPGPU). The results of the experiments showed that the measurement accuracy of our system was less than 0.7 mm, amounting to less than half of that of our previous system.
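The pattern-matching step can be illustrated with a simple normalized cross-correlation search. The following Python sketch uses OpenCV's CPU implementation as a stand-in for the exhaustive GPGPU search described above; the function and variable names are illustrative assumptions, not the authors' code.

```python
import cv2

def track_landmark(frame_gray, template_gray):
    """Locate a predefined template (e.g. an ear or nose patch) in one camera frame.

    Returns the top-left corner of the best match and its correlation score.
    A CPU stand-in for the exhaustive GPGPU search described in the abstract.
    """
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val
```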
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656
Wang, Fei; Dong, Hang; Chen, Yanan; Zheng, Nanning
2016-12-09
Strong demand for accurate non-cooperative target measurement has arisen recently in assembly and capture tasks. Spherical objects are among the most common targets in these applications. However, the performance of traditional vision-based reconstruction methods is limited in practice when handling poorly textured targets. In this paper, we propose a novel multi-sensor fusion system for measuring and reconstructing textureless non-cooperative spherical targets. Our system consists of four simple lasers and a visual camera. This paper presents a complete framework for estimating the geometric parameters of textureless spherical targets: (1) an approach to calibrate the extrinsic parameters between a camera and simple lasers; and (2) a method to reconstruct the 3D positions of the laser spots on the target surface and refine the results via an optimization scheme. The experimental results show that our proposed calibration method obtains an accurate calibration result, comparable to state-of-the-art LRF-based methods, and that our calibrated system can estimate the geometric parameters with high accuracy in real time.
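Once the 3D positions of the laser spots on the sphere are reconstructed, the geometric parameters (center and radius) can be estimated with a least-squares sphere fit. The sketch below shows one standard linear formulation; it is a generic illustration under that assumption, not the paper's optimization scheme.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to 3D laser-spot positions (N x 3 array).

    Returns (center, radius). With only four laser spots the system is exactly
    determined; spots accumulated over several poses over-determine it.
    """
    points = np.asarray(points, dtype=float)
    # x^2 + y^2 + z^2 = 2*c.x + (r^2 - |c|^2), written as A @ w = b
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    return center, radius
```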
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving-target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents them from running in real time at full frame resolution. Hence, to address moving-target detection in UAV video, we propose a heterogeneous CPU-GPU moving-target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for the application.
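A minimal CPU-only sketch of the background-registration-plus-frame-differencing idea is shown below, using feature-based homography estimation to compensate the moving background before differencing. It is one assumed implementation for illustration, not the authors' CPU-GPU code.

```python
import cv2
import numpy as np

def moving_target_mask(prev_gray, curr_gray, diff_thresh=25):
    """Register the previous frame to the current one (compensating UAV motion)
    and return a binary mask of moving pixels via frame differencing."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))   # background registration
    diff = cv2.absdiff(curr_gray, prev_warped)                # frame differencing
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```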
CRISPR-Cas9 Structures and Mechanisms.
Jiang, Fuguo; Doudna, Jennifer A
2017-05-22
Many bacterial clustered regularly interspaced short palindromic repeats (CRISPR)-CRISPR-associated (Cas) systems employ the dual RNA-guided DNA endonuclease Cas9 to defend against invading phages and conjugative plasmids by introducing site-specific double-stranded breaks in target DNA. Target recognition strictly requires the presence of a short protospacer adjacent motif (PAM) flanking the target site, and subsequent R-loop formation and strand scission are driven by complementary base pairing between the guide RNA and target DNA, Cas9-DNA interactions, and associated conformational changes. The use of CRISPR-Cas9 as an RNA-programmable DNA targeting and editing platform is simplified by a synthetic single-guide RNA (sgRNA) mimicking the natural dual trans-activating CRISPR RNA (tracrRNA)-CRISPR RNA (crRNA) structure. This review aims to provide an in-depth mechanistic and structural understanding of Cas9-mediated RNA-guided DNA targeting and cleavage. Molecular insights from biochemical and structural studies provide a framework for rational engineering aimed at altering catalytic function, guide RNA specificity, and PAM requirements and reducing off-target activity for the development of Cas9-based therapies against genetic diseases.
ERIC Educational Resources Information Center
Greer, Martin L.
Directed to the class or individual with limited film making equipment, this paper presents a "hands on" guide to the production of animated cartoons. Its 14 sections deal with the following topics: understanding animation; choosing subject matter for an animation; writing a script; getting the timing right; choosing a camera and projector;…
Evaluation of Dental Shade Guide Variability Using Cross-Polarized Photography.
Gurrea, Jon; Gurrea, Marta; Bruguera, August; Sampaio, Camila S; Janal, Malvin; Bonfante, Estevam; Coelho, Paulo G; Hirata, Ronaldo
2016-01-01
This study evaluated color variability in the A hue between the VITA Classical (VITA Zahnfabrik) shade guide and four other VITA-coded ceramic shade guides using a Canon EOS 60D camera and software (Photoshop CC, Adobe). A total of 125 photographs were taken, 5 per shade tab for each of 5 shades (A1 to A4) from the following shade guides: VITA Classical (control), IPS e.max Ceram (Ivoclar Vivadent), IPS d.SIGN (Ivoclar Vivadent), Initial ZI (GC), and Creation CC (Creation Willi Geller). Photos were processed with Adobe Photoshop CC to allow standardized evaluation of hue, chroma, and value between shade tabs. None of the VITA-coded shade tabs fully matched the VITA Classical shade tab for hue, chroma, or value. The VITA-coded shade guides evaluated herein showed an overall unmatched shade in all tabs when compared with the control, suggesting that shade selection should be made using the guide produced by the manufacturer of the ceramic intended for the final restoration.
A Kinect(™) camera based navigation system for percutaneous abdominal puncture.
Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao
2016-08-07
Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of Kinect™ was released recently; we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture and to compare its needle insertion guidance performance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen needle insertions into artificial liver tumors were guided by the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable, that the second-generation Kinect™-based navigation is superior to the first-generation Kinect™, and that the system has potential for clinical application in percutaneous abdominal puncture.
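The surface-matching step relies on point-to-point ICP. The following Python sketch shows a minimal ICP loop (nearest-neighbour correspondences plus an SVD-based rigid update); it assumes a rough initial alignment, as provided in the paper by the 2D shape-based correspondence search, and is not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Minimal point-to-point ICP aligning a depth-camera surface (source, N x 3)
    to a CT-derived surface (target, M x 3), assuming a reasonable initial pose."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)                    # nearest-neighbour correspondences
        matched = tgt[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                         # apply incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```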
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human subjects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The algorithm addresses three main concerns: (1) capturing imagery of each subject's face for biometric purposes, (2) maximizing the video quality of the captured subjects, and (3) minimizing hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan and tilt angles of capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
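The abstract does not specify the exact form of the objective function, so the sketch below is only a hypothetical weighted score over the capture conditions it lists (camera-subject distance, pan-tilt angles, face visibility); all weights, terms, and the preferred distance are assumptions.

```python
import numpy as np

def capture_score(distance_m, pan_deg, tilt_deg, face_visibility,
                  weights=(0.4, 0.2, 0.2, 0.2), preferred_distance=8.0):
    """Hypothetical scoring of expected capture conditions for one PTZ camera /
    subject pair; illustrative only, not the paper's objective function."""
    d_term = np.exp(-((distance_m - preferred_distance) / preferred_distance) ** 2)
    pan_term = max(np.cos(np.radians(pan_deg)), 0.0)    # prefer near-frontal pan
    tilt_term = max(np.cos(np.radians(tilt_deg)), 0.0)  # prefer small tilt
    w = np.asarray(weights, dtype=float)
    return float(w @ [d_term, pan_term, tilt_term, face_visibility])
```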
Curiosity First Rock Star, Up-Close
2012-08-17
This close-up image shows the first target NASA's Curiosity rover aims to zap with its Chemistry and Camera (ChemCam) instrument. ChemCam will fire laser pulses at the target and analyze the resulting spark with a telescope to identify the chemical elements in the target.
Adaptive target binarization method based on a dual-camera system
NASA Astrophysics Data System (ADS)
Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing
2018-01-01
An adaptive target binarization method based on a dual-camera system containing two dynamic vision sensors is proposed. First, a denoising preprocessing procedure is introduced to remove the noise events generated by the sensors. Second, the complete edge of the target is retrieved and represented by events using an event-mosaicking method. Third, the region of the target is confirmed by an event-to-event matching method. Finally, a postprocessing procedure using morphological opening and closing operations is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots and compared with other well-known binarization methods. The experimental results, based on visual and misclassification error criteria, show that the proposed method performs well and is more robust for the binarization of degraded images.
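The morphological postprocessing step can be written in a few lines with OpenCV; the kernel shape and size below are assumed parameters for illustration.

```python
import cv2

def clean_binary_mask(mask, kernel_size=3):
    """Remove isolated artifacts and fill small holes in a binary target mask,
    mirroring the morphological opening/closing step described in the abstract."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # drop small speckles
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    return closed
```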
A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes
Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung
2016-01-01
Wilderness search and rescue entails performing a wide range of work in complex environments and over large regions. Given the concerns inherent in large regions due to limited distribution of rescue resources, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-identification target positioning, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael
2014-12-01
Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P > .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery. © The Author(s) 2014.
The Zwicky Transient Facility Camera
NASA Astrophysics Data System (ADS)
Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.
2016-08-01
The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.
A novel SPECT camera for molecular imaging of the prostate
NASA Astrophysics Data System (ADS)
Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir
2011-10-01
The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.
A user's guide to the Mariner 9 television reduced data record
NASA Technical Reports Server (NTRS)
Seidman, J. B.; Green, W. B.; Jepsen, P. L.; Ruiz, R. M.; Thorpe, T. E.
1973-01-01
The Mariner 9 television experiment used two cameras to photograph Mars from an orbiting spacecraft. For quantitative analysis of the image data transmitted to earth, the pictures were processed by digital computer to remove camera-induced distortions. The removal process was performed by the JPL Image Processing Laboratory (IPL) using calibration data measured during prelaunch testing of the cameras. The Reduced Data Record (RDR) is the set of data which results from the distortion-removal, or decalibration, process. The principal elements of the RDR are numerical data on magnetic tape and photographic data. Numerical data are the result of correcting for geometric and photometric distortions and residual-image effects. Photographic data are reproduced on negative and positive transparency films, strip contact and enlargement prints, and microfiche positive transparency film. The photographic data consist of two versions of each TV frame created by applying two special enhancement processes to the numerical data.
Smartphone-Based Cardiac Rehabilitation Program: Feasibility Study.
Chung, Heewon; Ko, Hoon; Thap, Tharoeun; Jeong, Changwon; Noh, Se-Eung; Yoon, Kwon-Ha; Lee, Jinseok
2016-01-01
We introduce a cardiac rehabilitation program (CRP) that utilizes only a smartphone, with no external devices. As an efficient guide for cardiac rehabilitation exercise, we developed an application to automatically indicate the exercise intensity by comparing the estimated heart rate (HR) with the target heart rate zone (THZ). The HR is estimated using video images of a fingertip taken by the smartphone's built-in camera. The introduced CRP app includes pre-exercise, exercise with intensity guidance, and post-exercise. In the pre-exercise period, information such as THZ, exercise type, exercise stage order, and duration of each stage are set up. In the exercise with intensity guidance, the app estimates HR from the pulse obtained using the smartphone's built-in camera and compares the estimated HR with the THZ. Based on this comparison, the app adjusts the exercise intensity to shift the patient's HR to the THZ during exercise. In the post-exercise period, the app manages the ratio of the estimated HR to the THZ and provides a questionnaire on factors such as chest pain, shortness of breath, and leg pain during exercise, as objective and subjective evaluation indicators. As a key issue, HR estimation upon signal corruption due to motion artifacts is also considered. Through the smartphone-based CRP, we estimated the HR accuracy as mean absolute error and root mean squared error of 6.16 and 4.30 bpm, respectively, with signal corruption due to motion artifacts being detected by combining the turning point ratio and kurtosis.
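A simplified stand-in for the camera-based HR estimation and intensity guidance is sketched below: the mean green-channel intensity per frame serves as the photoplethysmographic signal, peaks are counted to estimate HR, and the estimate is compared with the target heart rate zone. The zone limits, peak-spacing threshold, and function names are assumptions, not the study's implementation (which additionally detects motion-artifact corruption via the turning point ratio and kurtosis).

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_hr(green_means, fps):
    """Estimate heart rate (bpm) from the mean green-channel intensity of each
    fingertip video frame; a simplified stand-in for the app's PPG pipeline."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()
    # require peaks at least 0.33 s apart (i.e. HR <= 180 bpm)
    peaks, _ = find_peaks(signal, distance=int(0.33 * fps))
    duration_s = len(signal) / fps
    return 60.0 * len(peaks) / duration_s

def intensity_guidance(hr_bpm, thz=(100.0, 130.0)):
    """Compare the estimated HR with an (assumed) target heart rate zone."""
    low, high = thz
    if hr_bpm < low:
        return "increase intensity"
    if hr_bpm > high:
        return "decrease intensity"
    return "maintain intensity"
```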
Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption
NASA Astrophysics Data System (ADS)
Mathavaraj, S.; Pandiyan, R.; Padhi, R.
2017-12-01
A Legendre pseudospectral, multi-phase, constrained fuel-optimal trajectory design approach is presented in this paper. The objective is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (but is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These constraints include phase-1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase-2 ('attitude hold'), holding the lander attitude for 35 s for vision camera processing to obtain the navigation error; and phase-3 ('braking with precise navigation') from the end of phase-2 to 100 m altitude over the landing site, where navigation accuracy is good (due to vision camera navigation inputs). At the end of phase-1, there are constraints on position and attitude. In phase-2, the attitude must be held throughout. At the end of phase-3, the constraints include accuracy in position, velocity, and attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.
Yamanel, Kivanc; Caglar, Alper; Özcan, Mutlu; Gulsah, Kamran; Bagis, Bora
2010-12-01
This study evaluated the color parameters of resin composite shade guides determined using a colorimeter and a digital imaging method. Four composite shade guides, namely two nanohybrid (Grandio [Voco GmbH, Cuxhaven, Germany]; Premise [KerrHawe SA, Bioggio, Switzerland]) and two hybrid (Charisma [Heraeus Kulzer GmbH & Co. KG, Hanau, Germany]; Filtek Z250 [3M ESPE, Seefeld, Germany]), were evaluated. Ten shade tabs were selected (A1, A2, A3, A3.5, A4, B1, B2, B3, C2, C3) from each shade guide. CIE Lab values were obtained using digital imaging and a colorimeter (ShadeEye NCC Dental Chroma Meter, Shofu Inc., Kyoto, Japan). The data were analyzed using two-way analysis of variance and the Bonferroni post hoc test. Overall, the mean ΔE values from different composite pairs demonstrated statistically significant differences when evaluated with the colorimeter (p < 0.001), but there was no significant difference with the digital imaging method (p = 0.099). With both measurement methods in total, 80% of the shade guide pairs from different composites (97/120) showed color differences greater than 3.7 (moderately perceptible mismatch), and 49% (59/120) had an obvious mismatch (ΔE > 6.8). For all shade pairs evaluated, the most significant shade mismatches were obtained between Grandio-Filtek Z250 (p = 0.021) and Filtek Z250-Premise (p = 0.01) regarding mean ΔE values, whereas the best shade match was between Grandio-Charisma (p = 0.255) regardless of the measurement method. The best color match (mean ΔE values) was recorded for the A1, A2, and A3 shade pairs in both methods. When proper object-camera distance, digital camera settings, and suitable illumination conditions are provided, the digital imaging method can be used in the assessment of color parameters. Interchanging use of shade guides from different composite systems should be avoided during color selection. © 2010, COPYRIGHT THE AUTHORS. JOURNAL COMPILATION © 2010, WILEY PERIODICALS, INC.
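The abstract quotes ΔE thresholds of 3.7 (moderately perceptible mismatch) and 6.8 (obvious mismatch); assuming the common CIE76 definition of ΔE (the study's exact formula is not stated here), a comparison of two Lab measurements looks like the sketch below, with purely illustrative Lab values.

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """Euclidean CIE76 colour difference between two CIE Lab triplets."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

# Thresholds used in the study: > 3.7 moderately perceptible, > 6.8 obvious mismatch
pair = delta_e_cie76((68.2, 2.1, 18.4), (64.9, 4.0, 24.1))   # illustrative Lab values
print(f"dE = {pair:.1f}",
      "obvious mismatch" if pair > 6.8 else
      "perceptible mismatch" if pair > 3.7 else "acceptable match")
```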
Dramatic Enhancement of Genome Editing by CRISPR/Cas9 Through Improved Guide RNA Design
Farboud, Behnom; Meyer, Barbara J.
2015-01-01
Success with genome editing by the RNA-programmed nuclease Cas9 has been limited by the inability to predict effective guide RNAs and DNA target sites. Not all guide RNAs have been successful, and even those that were, varied widely in their efficacy. Here we describe and validate a strategy for Caenorhabditis elegans that reliably achieved a high frequency of genome editing for all targets tested in vivo. The key innovation was to design guide RNAs with a GG motif at the 3′ end of their target-specific sequences. All guides designed using this simple principle induced a high frequency of targeted mutagenesis via nonhomologous end joining (NHEJ) and a high frequency of precise DNA integration from exogenous DNA templates via homology-directed repair (HDR). Related guide RNAs having the GG motif shifted by only three nucleotides showed severely reduced or no genome editing. We also combined the 3′ GG guide improvement with a co-CRISPR/co-conversion approach. For this co-conversion scheme, animals were only screened for genome editing at designated targets if they exhibited a dominant phenotype caused by Cas9-dependent editing of an unrelated target. Combining the two strategies further enhanced the ease of mutant recovery, thereby providing a powerful means to obtain desired genetic changes in an otherwise unaltered genome. PMID:25695951
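The design rule itself is easy to automate: check that the 20-nt target-specific guide sequence ends in GG (the PAM is not part of the guide). The snippet below is an illustrative check, not the authors' tool; the candidate sequences are made up.

```python
def has_3prime_gg(guide_20nt):
    """Check whether a 20-nt target-specific guide sequence ends in the GG motif
    recommended by the study (the PAM itself is not part of the guide)."""
    g = guide_20nt.strip().upper().replace("U", "T")
    return len(g) == 20 and g.endswith("GG")

# Example: only the first hypothetical candidate satisfies the 3' GG design rule
for candidate in ("GATCACGTTGACCTTACAGG", "GATCACGTTGACCTTACAGT"):
    print(candidate, has_3prime_gg(candidate))
```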
Sensor-guided threat countermeasure system
Stuart, Brent C.; Hackel, Lloyd A.; Hermann, Mark R.; Armstrong, James P.
2012-12-25
A countermeasure system for use by a target to protect against an incoming sensor-guided threat. The system includes a laser system for producing a broadband beam and means for directing the broadband beam from the target to the threat. The countermeasure method comprises the steps of producing a broadband beam and directing the broadband beam from the target to blind or confuse the incoming sensor-guided threat.
2005-12-19
Using the JMars targeting software, eighth grade students from Charleston Middle School in Charleston, IL, selected the location of -8.37N and 276.66E for capture by the THEMIS visible camera during Mars Odyssey's sixth orbit of Mars on Nov. 22, 2005.
New Airborne Sensors and Platforms for Solving Specific Tasks in Remote Sensing
NASA Astrophysics Data System (ADS)
Kemper, G.
2012-07-01
A huge number of small and medium sized sensors have entered the market. Today's medium-format sensors reach 80 MPix and allow projects of medium size to be flown, comparable with the first large-format digital cameras about 6 years ago. New high-quality lenses and new developments in integration have prepared the market for photogrammetric work. Companies such as Phase One or Hasselblad and producers or integrators such as Trimble, Optec, and others have utilized these cameras for professional image production. In combination with small camera stabilizers they can also be used in small aircraft, making the equipment compact and easily transportable, e.g. for rapid assessment purposes. The combination of different camera sensors enables multispectral or hyperspectral installations, useful for example in agricultural or environmental projects. Arrays of oblique-viewing cameras are on the market as well; in many cases these are small and medium format sensors combined as rotating or shifting devices, or simply as a fixed setup. Beside the proper camera installation and integration, the software that controls the hardware and guides the pilot has to solve many more tasks than a normal FMS did in the past. Small and relatively cheap laser scanners (e.g. Riegl) are on the market, and their proper combination with multispectral cameras and integrated planning and navigation is a challenge that has been solved by different software packages. Turnkey solutions are available, e.g. for monitoring power line corridors, where taking images is just a part of the job. Integration of thermal camera systems with laser scanners and video capture must be combined with specific information about the objects, stored in a database and linked when approaching the navigation point.
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
Aerial surveillance based on hierarchical object classification for ground target detection
NASA Astrophysics Data System (ADS)
Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo
2015-03-01
Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; for instance, the camera sensor is now integrated. Mounted cameras provide the flexibility to monitor several regions of interest by displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The samples are encoded in a hierarchical tree with different sampling densities. The coding space corresponds to a high-dimensional binary space. Properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features that correspond to the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, showing that the hierarchical algorithm works efficiently under several conditions.
Designing manufacturable filters for a 16-band plenoptic camera using differential evolution
NASA Astrophysics Data System (ADS)
Doster, Timothy; Olson, Colin C.; Fleet, Erin; Yetzbacher, Michael; Kanaev, Andrey; Lebow, Paul; Leathers, Robert
2017-05-01
A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 flat-topped non-overlapping filter set which comes with the plenoptic camera and to full-hyperspectral-resolution data which was previously acquired.
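A hedged sketch of the approach is given below: each filter is parameterized by a beta-shaped transmission curve (center, width, and two taper parameters), and SciPy's differential evolution searches the joint 64-dimensional parameter space. The objective used here (condition number of the coding matrix for a random stand-in endmember dictionary) is only a surrogate for the paper's CS-reconstruction-error and target-separation objectives; all bounds, sizes, and numbers are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import beta

wavelengths = np.linspace(400, 1000, 301)          # nm, illustrative band

def beta_filter(center, width, a, b, wl=wavelengths):
    """Transmission curve from a beta distribution remapped to [center-w/2, center+w/2];
    a, b control taper/asymmetry. Peak-normalized so values lie in [0, 1]."""
    x = (wl - (center - width / 2.0)) / width
    t = np.where((x > 0) & (x < 1), beta.pdf(np.clip(x, 1e-6, 1 - 1e-6), a, b), 0.0)
    peak = t.max()
    return t / peak if peak > 0 else t

def objective(params, endmembers):
    """Surrogate design objective (assumption): make the 16-filter coding matrix of a
    fixed endmember dictionary as well conditioned as possible."""
    filters = np.array([beta_filter(*params[i:i + 4]) for i in range(0, len(params), 4)])
    coding = filters @ endmembers.T                 # 16 x n_endmembers coding matrix
    return np.linalg.cond(coding)

rng = np.random.default_rng(0)
endmembers = rng.random((8, wavelengths.size))      # stand-in spectral dictionary
bounds = [(420, 980), (20, 200), (1.5, 8), (1.5, 8)] * 16   # per-filter parameter bounds
result = differential_evolution(objective, bounds, args=(endmembers,),
                                maxiter=15, popsize=8, seed=0, polish=False)
print("best surrogate objective:", result.fun)
```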
NASA Astrophysics Data System (ADS)
Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.
2015-03-01
In minimally invasive surgical interventions direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems for guidance. These spatial localization and tracking systems function much like the Global Positioning Systems (GPS) that we are all well familiar with. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and augment the video feed with computer-generated information, such as rendering of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and methodology for augmenting the real world with virtual models extracted from medical images to provide enhanced visualization beyond the surface view achieved using traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay according to fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
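Following intrinsic calibration, the extrinsic camera pose with respect to the phantom can be recovered from a handful of known fiducials with a standard PnP solve. The sketch below is a generic OpenCV illustration of that step, not the authors' software; the fiducial coordinates and intrinsics are assumed inputs.

```python
import cv2
import numpy as np

def camera_pose_from_fiducials(object_pts, image_pts, K, dist_coeffs=None):
    """Estimate camera pose relative to a phantom from >= 4 known 3D fiducials
    (object_pts, N x 3, phantom frame) and their detected pixel positions
    (image_pts, N x 2), given intrinsics K from a prior calibration.
    Returns the 4x4 phantom-to-camera transform used to render the virtual overlay."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(np.asarray(object_pts, np.float32),
                                  np.asarray(image_pts, np.float32),
                                  K, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T
```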
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
Absolute colorimetric characterization of a DSLR camera
NASA Astrophysics Data System (ADS)
Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo
2014-03-01
A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, respectively devoted to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m². The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.
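The colorimetric characterization module amounts to estimating a 3x3 matrix that maps linearized camera RGB to CIE XYZ from measurements of the target patches. A minimal least-squares sketch is shown below; the absolute luminance scaling belongs to the separate module and is only noted in the comment, and the variable names are assumptions.

```python
import numpy as np

def colorimetric_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M mapping linearized camera RGB to CIE XYZ,
    fitted from target-patch measurements so that xyz ≈ rgb @ M.T.
    Absolute scaling to cd/m^2 (the luminance module) is handled separately."""
    rgb = np.asarray(rgb, dtype=float)     # N x 3 camera responses
    xyz = np.asarray(xyz, dtype=float)     # N x 3 reference tristimulus values
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T

# Usage: XYZ_est = M @ rgb_pixel  for each linearized RGB pixel of a new exposure
```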
Mechanism of duplex DNA destabilization by RNA-guided Cas9 nuclease during target interrogation
Mekler, Vladimir; Minakhin, Leonid; Severinov, Konstantin
2017-01-01
The prokaryotic clustered regularly interspaced short palindromic repeats (CRISPR)-associated 9 (Cas9) endonuclease cleaves double-stranded DNA sequences specified by guide RNA molecules and flanked by a protospacer adjacent motif (PAM) and is widely used for genome editing in various organisms. The RNA-programmed Cas9 locates the target site by scanning genomic DNA. We sought to elucidate the mechanism of initial DNA interrogation steps that precede the pairing of target DNA with guide RNA. Using fluorometric and biochemical assays, we studied Cas9/guide RNA complexes with model DNA substrates that mimicked early intermediates on the pathway to the final Cas9/guide RNA–DNA complex. The results show that Cas9/guide RNA binding to PAM favors separation of a few PAM-proximal protospacer base pairs allowing initial target interrogation by guide RNA. The duplex destabilization is mediated, in part, by Cas9/guide RNA affinity for unpaired segments of nontarget strand DNA close to PAM. Furthermore, our data indicate that the entry of double-stranded DNA beyond a short threshold distance from PAM into the Cas9/single-guide RNA (sgRNA) interior is hindered. We suggest that the interactions unfavorable for duplex DNA binding promote DNA bending in the PAM-proximal region during early steps of Cas9/guide RNA–DNA complex formation, thus additionally destabilizing the protospacer duplex. The mechanism that emerges from our analysis explains how the Cas9/sgRNA complex is able to locate the correct target sequence efficiently while interrogating numerous nontarget sequences associated with correct PAMs. PMID:28484024
MO-AB-BRA-02: A Novel Scatter Imaging Modality for Real-Time Image Guidance During Lung SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redler, G; Bernard, D; Templeton, A
2015-06-15
Purpose: A novel scatter imaging modality is developed and its feasibility for image-guided radiation therapy (IGRT) during stereotactic body radiation therapy (SBRT) for lung cancer patients is assessed using analytic and Monte Carlo models as well as experimental testing. Methods: During treatment, incident radiation interacts and scatters from within the patient. The presented methodology forms an image of patient anatomy from the scattered radiation for real-time localization of the treatment target. A radiographic flat panel-based pinhole camera provides spatial information regarding the origin of detected scattered radiation. An analytical model is developed, which provides a mathematical formalism for describing the scatter imaging system. Experimental scatter images are acquired by irradiating an object using a Varian TrueBeam accelerator. The differentiation between tissue types is investigated by imaging simple objects of known compositions (water, lung, and cortical bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is fabricated and imaged to investigate image quality for various quantities of delivered radiation. Monte Carlo N-Particle (MCNP) code is used for validation and testing by simulating scatter image formation using the experimental pinhole camera setup. Results: Analytical calculations, MCNP simulations, and experimental results when imaging the water, lung, and cortical bone equivalent objects show close agreement, thus validating the proposed models and demonstrating that scatter imaging differentiates these materials well. Lung tumor phantom images have sufficient contrast-to-noise ratio (CNR) to clearly distinguish tumor from surrounding lung tissue. CNR = 4.1 and CNR = 29.1 for 10 MU and 5000 MU images (equivalent to 0.5- and 250-second images), respectively. Conclusion: Lung SBRT provides favorable treatment outcomes, but depends on accurate target localization. A comprehensive approach, employing multiple simulation techniques and experiments, is taken to demonstrate the feasibility of a novel scatter imaging modality for the necessary real-time image guidance.
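Assuming the conventional definition CNR = |mu_tumor - mu_background| / sigma_background (the abstract does not spell out its formula), the figure of merit quoted above can be computed as in the sketch below; the ROI masks are assumed inputs.

```python
import numpy as np

def contrast_to_noise_ratio(image, tumor_mask, background_mask):
    """CNR between a tumor ROI and a surrounding lung-tissue ROI in a scatter image,
    using the common definition CNR = |mu_t - mu_b| / sigma_b (assumed here)."""
    tumor = image[tumor_mask]
    background = image[background_mask]
    return float(abs(tumor.mean() - background.mean()) / background.std())
```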
The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity
NASA Astrophysics Data System (ADS)
Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.
2009-08-01
The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
Comparing scat detection dogs, cameras, and hair snares for surveying carnivores
Long, Robert A.; Donovan, T.M.; MacKay, Paula; Zielinski, William J.; Buzas, Jeffrey S.
2007-01-01
Carnivores typically require large areas of habitat, exist at low natural densities, and exhibit elusive behavior - characteristics that render them difficult to study. Noninvasive survey methods increasingly provide means to collect extensive data on carnivore occupancy, distribution, and abundance. During the summers of 2003-2004, we compared the abilities of scat detection dogs, remote cameras, and hair snares to detect black bears (Ursus americanus), fishers (Martes pennanti), and bobcats (Lynx rufus) at 168 sites throughout Vermont. All 3 methods detected black bears; neither fishers nor bobcats were detected by hair snares. Scat detection dogs yielded the highest raw detection rate and probability of detection (given presence) for each of the target species, as well as the greatest number of unique detections (i.e., occasions when only one method detected the target species). We estimated that the mean probability of detecting the target species during a single visit to a site with a detection dog was 0.87 for black bears, 0.84 for fishers, and 0.27 for bobcats. Although the cost of surveying with detection dogs was higher than that of remote cameras or hair snares, the efficiency of this method rendered it the most cost-effective survey method.
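The per-visit detection probabilities reported above translate directly into cumulative detection probabilities over repeated visits via 1 - (1 - p)^n, assuming independent visits; the snippet below is a worked illustration, not part of the study's analysis.

```python
# Probability of at least one detection over n independent visits,
# given the per-visit detection probabilities reported for scat detection dogs.
per_visit = {"black bear": 0.87, "fisher": 0.84, "bobcat": 0.27}
n_visits = 3
for species, p in per_visit.items():
    cumulative = 1.0 - (1.0 - p) ** n_visits
    print(f"{species}: P(detect in {n_visits} visits) = {cumulative:.2f}")
```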
Inferred UV Fluence Focal-Spot Profiles from Soft X-Ray Pinhole Camera Measurements on OMEGA
NASA Astrophysics Data System (ADS)
Theobald, W.; Sorce, C.; Epstein, R.; Keck, R. L.; Kellogg, C.; Kessler, T. J.; Kwiatkowski, J.; Marshall, F. J.; Seka, W.; Shvydky, A.; Stoeckl, C.
2017-10-01
The drive uniformity of OMEGA cryogenic implosions is affected by UV beam fluence variations on target, which require careful monitoring at full laser power. This is routinely performed with multiple pinhole cameras equipped with charge-injection devices (CIDs) that record the x-ray emission in the 3- to 7-keV photon energy range from an Au-coated target. The technique relies on knowledge of the relation between x-ray fluence F_x and UV fluence F_UV, F_x ∝ F_UV^γ, with a measured γ = 3.42 for the CID-based diagnostic and a 1-ns laser pulse. It is demonstrated here that using a back-thinned charge-coupled-device camera with softer filtration for x-rays with photon energies <2 keV and a well-calibrated pinhole provides a lower γ ≈ 2 and a larger dynamic range in the measured UV fluence. Inferred UV fluence profiles were measured for 100-ps and 1-ns laser pulses and were compared to directly measured profiles from a UV equivalent-target-plane diagnostic. Good agreement between both techniques is reported for selected beams. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Iron-Nickel Meteorite Zapped by Mars Rover Laser
2016-11-02
The dark, golf-ball-size object in this composite, colorized view from the Chemistry and Camera (ChemCam) instrument on NASA's Curiosity Mars rover shows a grid of shiny dots where ChemCam had fired laser pulses used for determining the chemical elements in the target's composition. The analysis confirmed that this object, informally named "Egg Rock," is an iron-nickel meteorite. Iron-nickel meteorites are a common class of space rocks found on Earth, and previous examples have been found on Mars, but Egg Rock is the first on Mars to be examined with a laser-firing spectrometer. The laser pulses on Oct. 30, 2016, induced bursts of glowing gas at the target, and ChemCam's spectrometer read the wavelengths of light from those bursts to gain information about the target's composition. The laser pulses also burned through the dark outer surface, exposing bright interior material. This view combines two images taken later the same day by ChemCam's remote micro-imager (RMI) camera, with color added from an image taken by Curiosity's Mast Camera (Mastcam). A Mastcam image of Egg Rock is at PIA21134. http://photojournal.jpl.nasa.gov/catalog/PIA21133
Time-resolved soft-x-ray studies of energy transport in layered and planar laser-driven targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stradling, G.L.
New low-energy x-ray diagnostic techniques are used to explore energy-transport processes in laser heated plasmas. Streak cameras are used to provide 15-psec time-resolution measurements of subkeV x-ray emission. A very thin (50 µg/cm²) carbon substrate provides a low-energy x-ray transparent window to the transmission photocathode of this soft x-ray streak camera. Active differential vacuum pumping of the instrument is required. The use of high-sensitivity, low secondary-electron energy-spread CsI photocathodes in x-ray streak cameras is also described. Significant increases in sensitivity with only a small and intermittent decrease in dynamic range were observed. These coherent, complementary advances in subkeV, time-resolved x-ray diagnostic capability are applied to energy-transport investigations of 1.06-µm laser plasmas. Both solid disk targets of a variety of Z's as well as Be-on-Al layered-disk targets were irradiated with 700-psec laser pulses of selected intensity between 3 × 10^14 W/cm^2 and 1 × 10^15 W/cm^2.
Virtual Laparoscopic Training System Based on VCH Model.
Tang, Jiangzhou; Xu, Lang; He, Longjun; Guan, Songluan; Ming, Xing; Liu, Qian
2017-04-01
Laparoscopy has been widely used to perform abdominal surgeries, as it is advantageous in that patients experience lower post-surgical trauma, shorter convalescence, and less pain compared to traditional surgery. Laparoscopic surgeries require precision; therefore, it is imperative to train surgeons to reduce operative risk. Laparoscopic simulators offer a highly realistic surgical environment by using virtual reality technology, and they can improve the training efficiency of laparoscopic surgery. This paper presents a virtual laparoscopic surgery system. The proposed system utilizes the Visible Chinese Human (VCH) to construct the virtual models and simulates real-time deformation with both an improved special mass-spring model and morph target animation. Meanwhile, an external device that integrates two five-degrees-of-freedom (5-DOF) manipulators was designed and built to interact with the virtual system. In addition, the proposed system provides a modular tool based on Unity3D to define the functions and features of instruments and organs, which helps users build surgical training scenarios quickly. The proposed virtual laparoscopic training system offers two training modes: skills training and surgery training. In the skills-training mode, surgeons are trained in basic operations, such as laparoscopic camera handling, needle manipulation, grasping, electric coagulation, and suturing. In the surgery-training mode, surgeons can practice cholecystectomy and removal of hepatic cysts with guided or non-guided teaching.
OpenCV and TYZX : video surveillance for tracking.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Jim; Spencer, Andrew; Chu, Eric
2008-08-01
As part of the National Security Engineering Institute (NSEI) project, several sensors were developed in conjunction with an assessment algorithm. A camera system was developed in-house to track the locations of personnel within a secure room. In addition, a commercial, off-the-shelf (COTS) tracking system developed by TYZX was examined. TYZX is a Bay Area start-up that has developed its own tracking hardware and software which we use as COTS support for robust tracking. This report discusses the pros and cons of each camera system, how they work, a proposed data fusion method, and some visual results. Distributed, embedded image processing solutions show the most promise in their ability to track multiple targets in complex environments and in real-time. Future work on the camera system may include three-dimensional volumetric tracking by using multiple simple cameras, Kalman or particle filtering, automated camera calibration and registration, and gesture or path recognition.
Pattern-Recognition System for Approaching a Known Target
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang
2008-01-01
A closed-loop pattern-recognition system is designed to provide guidance for maneuvering a small exploratory robotic vehicle (rover) on Mars to return to a landed spacecraft to deliver soil and rock samples that the spacecraft would subsequently bring back to Earth. The system could be adapted to terrestrial use in guiding mobile robots to approach known structures that humans could not approach safely, for such purposes as reconnaissance in military or law-enforcement applications, terrestrial scientific exploration, and removal of explosive or other hazardous items. The system has been demonstrated in experiments in which the Field Integrated Design and Operations (FIDO) rover (a prototype Mars rover equipped with a video camera for guidance) is made to return to a mockup of Mars-lander spacecraft. The FIDO rover camera autonomously acquires an image of the lander from a distance of 125 m in an outdoor environment. Then under guidance by an algorithm that performs fusion of multiple line and texture features in digitized images acquired by the camera, the rover traverses the intervening terrain, using features derived from images of the lander truss structure. Then by use of precise pattern matching for determining the position and orientation of the rover relative to the lander, the rover aligns itself with the bottom of ramps extending from the lander, in preparation for climbing the ramps to deliver samples to the lander. The most innovative aspect of the system is a set of pattern-recognition algorithms that govern a three-phase visual-guidance sequence for approaching the lander. During the first phase, a multifeature fusion algorithm integrates the outputs of a horizontal-line-detection algorithm and a wavelet-transform-based visual-area-of-interest algorithm for detecting the lander from a significant distance. The horizontal-line-detection algorithm is used to determine candidate lander locations based on detection of a horizontal deck that is part of the lander.
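A minimal illustration of the horizontal-line-detection idea named above, using OpenCV's probabilistic Hough transform as a stand-in; this is not the FIDO flight algorithm, and the angle threshold and detector parameters are assumptions.

```python
# Sketch: find roughly horizontal line segments (e.g. a lander deck edge)
# in an image, keeping only segments whose tilt is below a small threshold.
import cv2
import numpy as np

def horizontal_line_candidates(image_bgr, max_tilt_deg=5.0):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=60, maxLineGap=10)
    candidates = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if abs(angle) < max_tilt_deg:   # keep near-horizontal segments
                candidates.append((x1, y1, x2, y2))
    return candidates
```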
Video Guidance Sensors Using Remotely Activated Targets
NASA Technical Reports Server (NTRS)
Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.
2004-01-01
Four updated video guidance sensor (VGS) systems have been proposed. As described in a previous NASA Tech Briefs article, a VGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. The VGS provides relative position and attitude (6-DOF) information between the VGS and its target. In the original intended application, the two vehicles would be spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the first two of the four VGS systems as now proposed, the tracked vehicle would include active targets that would light up on command from the tracking vehicle, and a video camera on the tracking vehicle would be synchronized with, and would acquire images of, the active targets. The video camera would also acquire background images during the periods between target illuminations. The images would be digitized and the background images would be subtracted from the illuminated-target images. Then the position and orientation of the tracked vehicle relative to the tracking vehicle would be computed from the known geometric relationships among the positions of the targets in the image, the positions of the targets relative to each other and to the rest of the tracked vehicle, and the position and orientation of the video camera relative to the rest of the tracking vehicle. The major difference between the first two proposed systems and prior active-target VGS systems lies in the techniques for synchronizing the flashing of the active targets with the digitization and processing of image data. In the prior active-target VGS systems, synchronization was effected, variously, by use of either a wire connection or the Global Positioning System (GPS). In three of the proposed VGS systems, the synchronizing signal would be generated on, and transmitted from, the tracking vehicle. In the first proposed VGS system, the tracking vehicle would transmit a pulse of light. Upon reception of the pulse, circuitry on the tracked vehicle would activate the target lights. During the pulse, the target image acquired by the camera would be digitized. When the pulse was turned off, the target lights would be turned off and the background video image would be digitized. The second proposed system would function similarly to the first proposed system, except that the transmitted synchronizing signal would be a radio pulse instead of a light pulse. In this system, the signal receptor would be a rectifying antenna. If the signal contained sufficient power, the output of the rectifying antenna could be used to activate the target lights, making it unnecessary to include a battery or other power supply for the targets on the tracked vehicle.
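A hedged sketch of the synchronized-target processing described above: the background frame is subtracted from the illuminated frame, and the bright target blobs are located by their centroids. Frame data, threshold, and function names are illustrative, not the flight VGS implementation.

```python
# Sketch: isolate active targets by background subtraction and compute
# their image-plane centroids for the subsequent pose computation.
import numpy as np
from scipy import ndimage

def locate_targets(illuminated, background, threshold=40):
    diff = illuminated.astype(np.int16) - background.astype(np.int16)
    mask = diff > threshold                     # keep only the lit targets
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return centroids                            # one (row, col) per target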
NASA Astrophysics Data System (ADS)
Clarke, Fraser; Lynn, James; Thatte, Niranjan; Tecza, Matthias
2014-08-01
We have developed a simple but effective guider for use with the Oxford-SWIFT integral field spectrograph on the Palomar 200-inch telescope. The guider uses mainly off-the-shelf components, including commercial amateur astronomy software to interface with the CCD camera, calculate guiding corrections, and send guide commands to the telescope. The only custom piece of software is a driver that provides an interface between the Palomar telescope control system and the industry-standard 'ASCOM' system. Using existing commercial software provided a very cheap guider (<$5000) with minimal (<15 minutes) commissioning time. The final system provides sub-arcsecond guiding, and could easily be adapted to any other professional telescope.
Measuring Stellar Temperatures: An Astrophysical Laboratory for Undergraduate Students
ERIC Educational Resources Information Center
Cenadelli, D.; Zeni, M.
2008-01-01
While astrophysics is a fascinating subject, it hardly lends itself to laboratory experiences accessible to undergraduate students. In this paper, we describe a feasible astrophysical laboratory experience in which the students are guided to take several stellar spectra, using a telescope, a spectrograph and a CCD camera, and perform a full data…
Guide for the Preparation of Scientific Papers for Publication. Second Edition.
ERIC Educational Resources Information Center
Martinsson, Anders
Updating a 1968 publication, this document presents rules and explanatory comments for use by authors and editors involved in the preparation of a scientific manuscript for professional typesetting prior to publication. It is noted that the guidelines should also be useful for authors producing camera-ready typescript with word processing…
Lights, Camera, Action: Facilitating the Design and Production of Effective Instructional Videos
ERIC Educational Resources Information Center
Di Paolo, Terry; Wakefield, Jenny S.; Mills, Leila A.; Baker, Laura
2017-01-01
This paper outlines a rudimentary process intended to guide faculty in K-12 and higher education through the steps involved to produce video for their classes. The process comprises four steps: planning, development, delivery and reflection. Each step is infused with instructional design information intended to support the collaboration between…
Photographic documentation, a practical guide for non professional forensic photography.
Ozkalipci, Onder; Volpellier, Muriel
2010-01-01
Forensic photography is essential for documentation of evidence of torture. Consent of the alleged victim should be sought in all cases. The article gives information about when and how to take pictures of what as well as image authentication, audit trail, storage, faulty pictures and the kind of camera to use.
Graphic Communications--Preparatory Area. Book I--Typography and Modern Typesetting. Student Manual.
ERIC Educational Resources Information Center
Hertz, Andrew
Designed to develop in the student skills in all of the preparatory functions of the graphic communications industry, this student guide covers copy preparation, art preparation, typography, camera, stripping, production management, and forms design, preparation, and analysis. In addition to the skills areas, material is included on the history of…
Guide to Using Onionskin Analysis Code (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fugate, Michael Lynn; Morzinski, Jerome Arthur
2016-09-15
This document is a guide to using R-code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called test series. The hope is that the new test series of curves is statistically similar to the baseline population.
Gan, Qi; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Hu, Chuanzhen; Shao, Pengfei; Xu, Ronald X.
2016-01-01
We propose a projective navigation system for fluorescence imaging and image display in a natural mode of visual perception. The system consists of an excitation light source, a monochromatic charge coupled device (CCD) camera, a host computer, a projector, a proximity sensor and a Complementary metal–oxide–semiconductor (CMOS) camera. With perspective transformation and calibration, our surgical navigation system is able to achieve an overall imaging speed higher than 60 frames per second, with a latency of 330 ms, a spatial sensitivity better than 0.5 mm in both vertical and horizontal directions, and a projection bias less than 1 mm. The technical feasibility of image-guided surgery is demonstrated in both agar-agar gel phantoms and an ex vivo chicken breast model embedding Indocyanine Green (ICG). The biological utility of the system is demonstrated in vivo in a classic model of ICG hepatic metabolism. Our benchtop, ex vivo and in vivo experiments demonstrate the clinical potential for intraoperative delineation of disease margin and image-guided resection surgery. PMID:27391764
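A minimal sketch of the perspective-transformation step mentioned above: a homography estimated from calibration point pairs maps camera pixel coordinates to projector coordinates. The point values and resolution below are placeholders, not data from the paper, and OpenCV is assumed.

```python
# Sketch: map CCD-camera pixel coordinates to projector coordinates via a
# homography estimated once from corresponding calibration points.
import cv2
import numpy as np

cam_pts  = np.float32([[102, 88], [910, 95], [905, 700], [110, 695]])
proj_pts = np.float32([[0, 0], [1280, 0], [1280, 800], [0, 800]])

H, _ = cv2.findHomography(cam_pts, proj_pts)

def camera_to_projector(points_xy):
    pts = np.float32(points_xy).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```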
Thermal Imaging of Medical Saw Blades and Guides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinwiddie, Ralph Barton; Steffner, Thomas E
2007-01-01
Better Than New, LLC, has developed a surface treatment to reduce the friction and wear of orthopedic saw blades and guides. The medical saw blades were thermally imaged while sawing through fresh animal bone, and an IR camera was used to measure the blade temperature as it exited the bone. The thermal performance of as-manufactured saw blades was compared to surface-treated blades, and a freshly used blade was used for temperature calibration purposes in order to account for any emissivity changes due to organic transfer layers. Thermal imaging indicates that the treated saw blades cut faster and cooler than untreated blades. In orthopedic surgery, saw guides are used to perfectly size the bone to accept a prosthesis. However, binding can occur between the blade and guide because of misalignment. This condition increases the saw blade temperature and may result in tissue damage. Both treated and untreated saw guides were also studied. The treated saw guide operated at a significantly lower temperature than the untreated guide. Saw blades and guides that operate at a cooler temperature are expected to reduce the amount of tissue damage (thermal necrosis) and may reduce the number of post-operative complications.
Ground moving target geo-location from monocular camera mounted on a micro air vehicle
NASA Astrophysics Data System (ADS)
Guo, Li; Ang, Haisong; Zheng, Xiangming
2011-08-01
The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. A micro air vehicle (MAV) operates with characteristics including low-altitude flight, limited payload, and low-accuracy onboard sensors. According to these characteristics, a method is developed to determine the location of a moving ground target imaged from the air using a monocular camera mounted on an MAV. This method eliminates the requirements for terrain databases (elevation maps) and altimeters that provide the MAV's and target's altitudes. Instead, the proposed method only requires MAV flight status provided by its inherent onboard navigation system, which includes an inertial measurement unit (IMU) and a global positioning system (GPS). The key is to obtain accurate information on the altitude of the moving ground target. First, an optical flow method extracts static background feature points. A local region is set around the target in the current image; features in this region that lie on the same plane as the target are extracted and retained as aided features. Then, an inverse-velocity method calculates the locations of these points by integrating them with the aircraft status. The altitude of the object, calculated from the position information of these aided features, is combined with the aircraft status and image coordinates to geo-locate the target. Meanwhile, a framework with a Bayesian estimator is employed to eliminate noise caused by the camera, IMU, and GPS. First, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for the estimation of aircraft states and aided feature locations, which define the moving target's local environment. Second, an unscented transformation (UT) method determines the estimated mean and covariance of the target location from the aircraft states and aided feature locations, and then exports them to the moving-target Kalman filter (KF). Experimental results show that our method can instantaneously geo-locate the moving target from a single operator click and can achieve 15 m accuracy for an MAV flying 200 m above the ground.
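A simplified sketch of the final geo-location step under the assumptions above: cast a ray through the target pixel using the camera intrinsics and MAV pose, then intersect it with a horizontal plane at the target altitude estimated from the aided features. Variable names, the ENU convention, and the pinhole model are assumptions, not the paper's exact formulation.

```python
# Sketch: geo-locate a pixel by ray/plane intersection, given camera
# intrinsics K, camera-to-world rotation, camera position (ENU, metres)
# and the estimated target altitude.
import numpy as np

def geolocate(pixel, K, R_cam_to_world, cam_pos_enu, target_alt):
    u, v = pixel
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # camera-frame ray
    ray_world = R_cam_to_world @ ray_cam
    # scale the ray so its vertical component reaches the target altitude
    s = (target_alt - cam_pos_enu[2]) / ray_world[2]
    return cam_pos_enu + s * ray_world                    # ENU coordinates
```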
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we present two approaches addressing visual target tracking and localization in a complex urban environment: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian mixture model. Foreground segmentation, on the other hand, was achieved by the connected components analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on the nearest 4-neighbor method, and in fine estimation, we use Euclidean interpolation to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in a complex urban environment.
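An illustrative sketch of the RGB-histogram association idea, not the authors' exact implementation: each detection's colour histogram is compared with each track's histogram to build an association matrix. OpenCV is assumed; bin count and the correlation metric are choices made here for illustration.

```python
# Sketch: build a track-vs-detection association matrix from normalized
# RGB histograms; higher values indicate more similar colour distributions.
import cv2
import numpy as np

def rgb_histogram(patch_bgr, bins=16):
    hist = cv2.calcHist([patch_bgr], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def association_matrix(track_hists, detection_hists):
    m = np.zeros((len(track_hists), len(detection_hists)))
    for i, th in enumerate(track_hists):
        for j, dh in enumerate(detection_hists):
            # correlation: 1.0 means identical colour distribution
            m[i, j] = cv2.compareHist(th, dh, cv2.HISTCMP_CORREL)
    return m
```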
Event-Driven Random-Access-Windowing CCD Imaging System
NASA Technical Reports Server (NTRS)
Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William
2004-01-01
A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).
Agostini, Denis; Marie, Pierre-Yves; Ben-Haim, Simona; Rouzet, François; Songy, Bernard; Giordano, Alessandro; Gimelli, Alessia; Hyafil, Fabien; Sciagrà, Roberto; Bucerius, Jan; Verberne, Hein J; Slart, Riemer H J A; Lindner, Oliver; Übleis, Christopher; Hacker, Marcus
2016-12-01
The trade-off between resolution and count sensitivity dominates the performance of standard gamma cameras and dictates the need for relatively high doses of radioactivity of the radiopharmaceuticals used in order to limit image acquisition duration. The introduction of cadmium-zinc-telluride (CZT)-based cameras may overcome some of the limitations of conventional gamma cameras. CZT cameras used for the evaluation of myocardial perfusion have been shown to have a higher count sensitivity compared to conventional single photon emission computed tomography (SPECT) techniques. CZT image quality is further improved by the development of a dedicated three-dimensional iterative reconstruction algorithm, based on maximum likelihood expectation maximization (MLEM), which corrects for the loss in spatial resolution due to the line response function of the collimator. All these innovations significantly reduce imaging time and result in a lower radiation exposure for the patient compared with standard SPECT. To guide current and possible future users of the CZT technique for myocardial perfusion imaging, the Cardiovascular Committee of the European Association of Nuclear Medicine, starting from the experience of its members, has decided to examine the current literature regarding procedures and clinical data on CZT cameras. The committee hereby aims 1) to identify the main acquisition protocols; 2) to evaluate the diagnostic and prognostic value of CZT-derived myocardial perfusion, and finally 3) to determine the impact of CZT on radiation exposure.
NASA Technical Reports Server (NTRS)
2005-01-01
On April 7, 2005, the Deep Impact spacecraft's Impactor Target Sensor camera recorded this image of M11, the Wild Duck cluster, a galactic open cluster located 6 thousand light years away. The camera is located on the impactor spacecraft, which will image comet Tempel 1 beginning 22 hours before impact until about 2 seconds before impact. Impact with comet Tempel 1 is planned for July 4, 2005.
Development of a real time multiple target, multi camera tracker for civil security applications
NASA Astrophysics Data System (ADS)
Åkerlund, Hans
2009-09-01
A surveillance system has been developed that can use multiple TV-cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS Networked Intelligent Video Surveillance. Persons in the images are tracked and displayed on a 3D map of the surveyed area.
Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai
2017-01-01
Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946
Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai
2017-09-09
Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g. 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.
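A minimal bundle-adjustment sketch related to the procedure described above, under simplifying assumptions (known intrinsics K, no lens distortion); it is not the authors' in-process solver. Camera poses and 3D target points are refined jointly by minimising the reprojection error with SciPy.

```python
# Sketch: reprojection residuals for a simple bundle adjustment.
# params packs camera poses (rotation vector + translation) and 3D points.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, obs_uv):
    poses = params[:n_cams * 6].reshape(n_cams, 6)   # rvec (3) + tvec (3)
    pts3d = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for c, p, uv in zip(cam_idx, pt_idx, obs_uv):
        R = Rotation.from_rotvec(poses[c, :3]).as_matrix()
        xyz_cam = R @ pts3d[p] + poses[c, 3:]
        proj = K @ xyz_cam
        res.append(proj[:2] / proj[2] - uv)           # pixel error
    return np.concatenate(res)

# Usage (with an initial estimate x0 of poses and points):
# sol = least_squares(residuals, x0, method='trf',
#                     args=(n_cams, n_pts, K, cam_idx, pt_idx, obs_uv))
```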
Sample-Collection Drill Hole on Martian Sandstone Target Windjana
2014-05-06
This image from the Navigation Camera (Navcam) on NASA's Curiosity Mars rover shows two holes at top center drilled into a sandstone target called Windjana. The farther hole, with a larger pile of tailings around it, is a full-depth sampling hole.
2016-04-04
Test Operations Procedure (TOP) 03-2-827: Test Procedures for Video Target Scoring. This TOP describes typical equipment and procedures to set up and operate a Video Target Scoring System (VTSS). Subject terms: Video Target Scoring System, VTSS, witness screens, camera, target screen, light pole.
NASA Astrophysics Data System (ADS)
Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki
We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog, aiding people such as the visually impaired or the aged. This paper presents a recognition algorithm that detects the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, using binocular images obtained from a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas that lie in the path of a person accompanied by the guide robot.
ARGOS wavefront sensing: from detection to correction
NASA Astrophysics Data System (ADS)
Orban de Xivry, Gilles; Bonaglia, M.; Borelli, J.; Busoni, L.; Connot, C.; Esposito, S.; Gaessler, W.; Kulas, M.; Mazzoni, T.; Puglisi, A.; Rabien, S.; Storm, J.; Ziegleder, J.
2014-08-01
Argos is the ground-layer adaptive optics system for the Large Binocular Telescope. In order to perform its wide-field correction, Argos uses three laser guide stars which sample the atmospheric turbulence. To perform the correction, Argos has at its disposal three different wavefront sensing measurements: its three laser guide stars, an NGS tip-tilt sensor, and a third wavefront sensor. We present the wavefront sensing architecture and its individual components, in particular: the finalized Argos pnCCD camera detecting the 3 laser guide stars at 1 kHz with high quantum efficiency and 4 e- noise; the Argos tip-tilt sensor based on quad-cell avalanche photodiodes; and the Argos wavefront computer. Being in the middle of commissioning, we present the first wavefront sensing configurations and operations performed at LBT, and discuss further improvements in the measurements of the 3 laser guide star slopes as detected by the pnCCD.
Distributed memory approaches for robotic neural controllers
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involved variations of Adaptive Vector Quantizers or Self-Organizing Maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest neighbor rule. Both approaches are tested based on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest neighbor pattern recognition techniques.
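An illustrative sketch of the radial-basis idea behind RBSDM: the mapping from camera-plane target coordinates to joint coordinates is approximated as a weighted sum of Gaussians centred on stored training patterns. This is a generic radial-basis interpolator, not the paper's network; class and parameter names are assumptions.

```python
# Sketch: a radial-basis memory that maps input patterns (e.g. stereo image
# coordinates of the target) to outputs (e.g. manipulator joint coordinates).
import numpy as np

class RadialBasisMemory:
    def __init__(self, centers, targets, sigma=0.5):
        self.centers = np.asarray(centers)      # stored input patterns
        self.sigma = sigma
        phi = self._kernel(self.centers)        # design matrix
        # least-squares fit of the output weights
        self.weights, *_ = np.linalg.lstsq(phi, np.asarray(targets),
                                           rcond=None)

    def _kernel(self, x):
        d2 = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def predict(self, x):
        return self._kernel(np.atleast_2d(x)) @ self.weights
```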
Measurement of Flat Slab Deformations by the Multi-Image Photogrammetry Method
NASA Astrophysics Data System (ADS)
Marčiš, Marián; Fraštia, Marek; Augustín, Tomáš
2017-12-01
The use of photogrammetry during load tests of building components is common practice all over the world. It is very effective thanks to its contactless approach, 3D measurement, fast data collection, and partial or full automation of image processing, and it can deliver very accurate results. Multi-image convergent photogrammetry supported by artificial coded targets is the most accurate photogrammetric method when the targets are detected in an image with an accuracy better than 0.1 pixel. It is possible to achieve an accuracy of 0.03 mm for all the points measured on the observed object if the camera is close enough to the object and the positions of the camera and the number of shots are precisely planned. This contribution deals with the design of a special hanging frame for a DSLR camera used during the photogrammetric measurement of the deformation of a flat concrete slab. The results of the photogrammetric measurements are compared to the results from traditional contact measurement techniques during load tests.
NASA Astrophysics Data System (ADS)
Havens, Timothy C.; Spain, Christopher J.; Ho, K. C.; Keller, James M.; Ton, Tuan T.; Wong, David C.; Soumekh, Mehrdad
2010-04-01
Forward-looking ground-penetrating radar (FLGPR) has received a significant amount of attention for use in explosive-hazards detection. A drawback to FLGPR is that it results in an excessive number of false detections. This paper presents our analysis of the explosive-hazards detection system tested by the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD). The NVESD system combines an FLGPR with a visible-spectrum color camera. We present a target detection algorithm that uses a locally-adaptive detection scheme with spectrum-based features. The remaining FLGPR detections are then projected into the camera imagery and image-based features are collected. A one-class classifier is then used to reduce the number of false detections. We show that our proposed FLGPR target detection algorithm, coupled with our camera-based false alarm (FA) reduction method, is effective at reducing the number of FAs in test data collected at a US Army test facility.
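A hedged sketch of the false-alarm reduction stage: a one-class classifier is trained on image features collected at confirmed target locations, and FLGPR detections whose camera features fall outside the learned class are rejected. scikit-learn's OneClassSVM stands in here for the paper's classifier, which may differ; parameters are illustrative.

```python
# Sketch: one-class classification to prune FLGPR false alarms using
# image-based features extracted around each projected detection.
import numpy as np
from sklearn.svm import OneClassSVM

def train_target_model(target_features):
    model = OneClassSVM(kernel='rbf', gamma='scale', nu=0.1)
    model.fit(np.asarray(target_features))
    return model

def keep_detections(model, detections, features):
    labels = model.predict(np.asarray(features))   # +1 inlier, -1 outlier
    return [d for d, lab in zip(detections, labels) if lab == 1]
```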
Optoelectronic System Measures Distances to Multiple Targets
NASA Technical Reports Server (NTRS)
Liebe, Carl Christian; Abramovici, Alexander; Bartman, Randall; Chapsky, Jacob; Schmalz, John; Coste, Keith; Litty, Edward; Lam, Raymond; Jerebets, Sergei
2007-01-01
An optoelectronic metrology apparatus now at the laboratory-prototype stage of development is intended to repeatedly determine distances of as much as several hundred meters, at submillimeter accuracy, to multiple targets in rapid succession. The underlying concept of optoelectronic apparatuses that can measure distances to targets is not new; such apparatuses are commonly used in general surveying and machining. However, until now such apparatuses have been, variously, constrained to (1) a single target or (2) multiple targets with a low update rate and a requirement for some a priori knowledge of target geometry. When fully developed, the present apparatus would enable measurement of distances to more than 50 targets at an update rate greater than 10 Hz, without a requirement for a priori knowledge of target geometry. The apparatus (see figure) includes a laser ranging unit (LRU) that includes an electronic camera (photo receiver), the field of view of which contains all relevant targets. Each target, mounted at a fiducial position on an object of interest, consists of a small lens at the output end of an optical fiber that extends from the object of interest back to the LRU. For each target and its optical fiber, there is a dedicated laser that is used to illuminate the target via the optical fiber. The targets are illuminated, one at a time, with laser light that is modulated at a frequency of 10.01 MHz. The modulated laser light is emitted by the target, from where it returns to the camera (photodetector), where it is detected. Both the outgoing and incoming 10.01-MHz laser signals are mixed with a 10-MHz local-oscillator to obtain beat notes at 10 kHz, and the difference between the phases of the beat notes is measured by a phase meter. This phase difference serves as a measure of the total length of the path traveled by light going out through the optical fiber and returning to the camera (photodetector) through free space. Because the portion of the path length inside the optical fiber is not ordinarily known and can change with temperature, it is also necessary to measure the phase difference associated with this portion and subtract it from the aforementioned overall phase difference to obtain the phase difference proportional to only the free-space path length, which is the distance that one seeks to measure. Therefore, the apparatus includes a photodiode and a circulator that enable measurement of the phase difference associated with propagation from the LRU inside the fiber to the target, reflection from the fiber end, and propagation back inside the fiber to the LRU. Because this phase difference represents twice the optical path length of the fiber, this phase difference is divided in two before subtraction from the aforementioned total-path-length phase difference. Radiation-induced changes in the photodetectors in this apparatus can affect the measurements. To enable calibration for the purpose of compensation for these changes, the apparatus includes an additional target at a known short distance, located inside the camera. If the measured distance to this target changes, then the change is applied to the other targets.
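A worked sketch of the phase-to-distance conversion described above: the free-space distance follows from the phase difference at the 10.01 MHz modulation frequency after subtracting half the fibre round-trip phase. The cycle-ambiguity handling and variable names are assumptions for illustration, not the flight processing.

```python
# Sketch: convert the measured phase difference to a free-space distance.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 10.01e6            # modulation frequency, Hz

def free_space_distance(phase_total_rad, phase_fiber_roundtrip_rad, n_cycles=0):
    # remove the one-way fibre contribution (half the round-trip phase)
    phase_free = phase_total_rad - 0.5 * phase_fiber_roundtrip_rad
    # each 2*pi of phase corresponds to one modulation wavelength (~29.95 m)
    wavelength = C / F_MOD
    return (phase_free / (2 * np.pi) + n_cycles) * wavelength
```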
NASA Astrophysics Data System (ADS)
Sampat, Nitin; Grim, John F.; O'Hara, James E.
1998-04-01
The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device and not the printer. When a user is printing images from a camera, he/she needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The image also exhibited an improved tonal scale and was visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
Dental Shade Guide Variability for Hues B, C, and D Using Cross-Polarized Photography.
Sampaio, Camila S; Gurrea, Jon; Gurrea, Marta; Bruguera, August; Atria, Pablo J; Janal, Malvin; Bonfante, Estevam A; Coelho, Paulo G; Hirata, Ronaldo
2018-04-20
This study evaluated the color variability of hues B, C, and D between the VITA Classical shade guide (Vita Zahnfabrik) and four other VITA-coded ceramic shade guides using a digital camera (Canon EOS 60D) and computer software (Adobe Photoshop CC). A cross-polarizing filter was used to standardize external light sources influencing color match. A total of 275 pictures were taken, 5 per shade tab, for 11 shades (B1, B2, B3, B4, C1, C2, C3, C4, D2, D3, and D4), from the following shade guides: VITA Classical (control); IPS e.max Ceram (Ivoclar Vivadent); IPS d.SIGN (Ivoclar Vivadent); Initial ZI (GC); and Creation CC (Creation Willi Geller). Pictures were evaluated using Adobe Photoshop CC for standardization of hue, chroma, and value between shade tabs. The VITA-coded shade guides evaluated here showed shades that did not match the control in any of their tabs, suggesting that shade selection should be made with the corresponding manufacturer's guide for the ceramic intended for the final restoration.
Research on a solid state-streak camera based on an electro-optic crystal
NASA Astrophysics Data System (ADS)
Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang
2006-06-01
With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, a streak camera is widely utilized in measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and making measurements of laser-induced discharge. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. The current streak camera is based on a sweep electric pulse and an image converting tube with a wavelength-sensitive photocathode ranging from the x-ray to the near-infrared region. This kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new-style streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera achieves time resolution by direct photon-beam deflection using the electro-optic effect, and it can replace the current streak camera from the visible to the near-infrared region. After computer-aided simulation, we designed a crystal streak camera which has the potential for a time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10^-12 m/V), and an optimized optical system may lead to a time resolution better than 1 ns.
Numerical analysis of wavefront measurement characteristics by using plenoptic camera
NASA Astrophysics Data System (ADS)
Lv, Yang; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun
2016-01-01
To take advantage of a large-diameter telescope for high-resolution imaging of extended targets, it is necessary to detect and compensate the wave-front aberrations induced by atmospheric turbulence. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the atmospheric turbulence in an astronomical observation. In order to recover the wave-front phase tomographically, a method for simultaneous large field-of-view (FOV), multi-perspective wave-front detection is urgently needed, and the plenoptic camera possesses this unique advantage. Our paper focuses on the capability of the plenoptic camera to extract the wave-front from different perspectives simultaneously. In this paper, we built the corresponding theoretical model and simulation system to discuss the wave-front measurement characteristics of a plenoptic camera used as a wave-front sensor. We evaluated the performance of the plenoptic camera with different types of wave-front aberration corresponding to different application scenarios. Finally, we performed multi-perspective wave-front sensing in simulation, employing the plenoptic camera as the wave-front sensor. Our study of wave-front measurement characteristics is helpful for selecting and designing the parameters of a plenoptic camera used as a multi-perspective, large-FOV wave-front sensor, which is expected to solve the problem of large-FOV wave-front detection and can be used for AO in giant telescopes.
Kidd, David G; Brethwaite, Andrew
2014-05-01
This study identified the areas behind vehicles where younger and older children are not visible and measured the extent to which vehicle technologies improve visibility. Rear visibility of targets simulating the heights of a 12-15-month-old, a 30-36-month-old, and a 60-72-month-old child was assessed in 21 2010-2013 model year passenger vehicles with a backup camera or a backup camera plus parking sensor system. The average blind zone for a 12-15-month-old was twice as large as it was for a 60-72-month-old. Large SUVs had the worst rear visibility and small cars had the best. Increases in rear visibility provided by backup cameras were larger than the non-visible areas detected by parking sensors, but parking sensors detected objects in areas near the rear of the vehicle that were not visible in the camera or other fields of view. Overall, backup cameras and backup cameras plus parking sensors reduced the blind zone by around 90 percent on average and have the potential to prevent backover crashes if drivers use the technology appropriately. Copyright © 2014 Elsevier Ltd. All rights reserved.
Target for 100,000th Laser Shot by Curiosity on Mars
2013-12-05
Since landing on Mars in August 2012, NASA's Curiosity Mars rover has fired the laser on its Chemistry and Camera (ChemCam) instrument more than 100,000 times at rock and soil targets up to about 23 feet (7 meters) away.
Stereo View of Martian Rock Target 'Funzie'
2018-02-08
The surface of the Martian rock target in this stereo image includes small hollows with a "swallowtail" shape characteristic of some gypsum crystals, most evident in the lower left quadrant. These hollows may have resulted from the original crystallizing mineral subsequently dissolving away. The view appears three-dimensional when seen through blue-red glasses with the red lens on the left. The scene spans about 2.5 inches (6.5 centimeters). This rock target, called "Funzie," is near the southern, uphill edge of "Vera Rubin Ridge" on lower Mount Sharp. The stereo view combines two images taken from slightly different angles by the Mars Hand Lens Imager (MAHLI) camera on NASA's Curiosity Mars rover, with the camera about 4 inches (10 centimeters) above the target. Fig. 1 and Fig. 2 are the separate "right-eye" and "left-eye" images, taken on Jan. 11, 2018, during the 1,932nd Martian day, or sol, of the rover's work on Mars. Right-eye and left-eye images are available at https://photojournal.jpl.nasa.gov/catalog/PIA22212
Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker
Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung
2017-01-01
Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments performed in addition to tests of the usability and on-screen keyboard use of the proposed method show that it is better than previous methods. PMID:28420114
Harbour surveillance with cameras calibrated with AIS data
NASA Astrophysics Data System (ADS)
Palmieri, F. A. N.; Castaldo, F.; Marino, G.
The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing to fusion centers information that can be added to data coming from Radar, Lidar, AIS, etc. Camera systems used as localizers, however, must be properly calibrated in changing scenarios where there is often limited choice over the positions at which they are deployed. Automatic Identification System (AIS) data, which include position, course, and vessel identity and are freely available through inexpensive receivers for some of the vessels appearing within the field of view, provide the opportunity to achieve proper camera calibration for the localization of vessels not equipped with AIS transponders. In this paper we assume a pinhole model for the camera geometry and propose perspective matrix computation using AIS positional data. Images obtained from calibrated cameras are then matched, and pixel association is utilized for the localization of other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.
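An illustrative calibration sketch for the step described above: given AIS vessel positions converted to a local metric frame and the pixel coordinates where those vessels appear, the camera pose is estimated with a PnP solver, assuming known intrinsics. The point values, intrinsics, and use of OpenCV are assumptions, not data or code from the paper.

```python
# Sketch: estimate the camera's perspective matrix from AIS/pixel pairs.
import cv2
import numpy as np

world_pts = np.float32([[120.0, 430.0, 0.0],     # vessel positions (m, local frame)
                        [310.0, 870.0, 0.0],
                        [-95.0, 640.0, 0.0],
                        [220.0, 1550.0, 0.0]])
image_pts = np.float32([[512, 388], [790, 351], [301, 362], [655, 340]])
K = np.float32([[1400, 0, 640], [0, 1400, 360], [0, 0, 1]])   # intrinsics

ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, distCoeffs=None)
R, _ = cv2.Rodrigues(rvec)
projection_matrix = K @ np.hstack([R, tvec])      # 3x4 perspective matrix
```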
Study on portable optical 3D coordinate measuring system
NASA Astrophysics Data System (ADS)
Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao
2009-05-01
A portable optical 3D coordinate measuring system based on digital Close Range Photogrammetry (CRP) technology and binocular stereo vision theory is investigated. Three ultra-red LEDs with high stability are set on a hand-held target to provide measuring features and to establish the target coordinate system. Field orientation calibration based on ray intersection is performed for the convergent binocular measurement system, composed of two cameras, using a reference ruler. The hand-held target, controlled by Bluetooth wireless communication, is moved freely to implement contact measurement. The position of the ceramic contact ball is accurately pre-calibrated. The coordinates of the target feature points are obtained with a binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball and residual error correction, the object point can be resolved by a transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability, and high degree of automation. Tests show that the measuring precision is close to ±0.1 mm/m.
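A minimal sketch of the binocular reconstruction step: the LED feature points are triangulated from the calibrated stereo pair. P1 and P2 stand for the 3x4 projection matrices obtained from the field calibration; pixel coordinates and the use of OpenCV are illustrative assumptions.

```python
# Sketch: triangulate the three LED target points from a calibrated stereo pair.
import cv2
import numpy as np

def triangulate_leds(P1, P2, pts_left, pts_right):
    pts_left = np.float32(pts_left).T             # 2xN arrays for OpenCV
    pts_right = np.float32(pts_right).T
    X_h = cv2.triangulatePoints(P1, P2, pts_left, pts_right)
    return (X_h[:3] / X_h[3]).T                   # Nx3 Euclidean points
```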
Synchronous high speed multi-point velocity profile measurement by heterodyne interferometry
NASA Astrophysics Data System (ADS)
Hou, Xueqin; Xiao, Wen; Chen, Zonghui; Qin, Xiaodong; Pan, Feng
2017-02-01
This paper presents a synchronous multipoint velocity profile measurement system, which acquires the vibration velocities as well as images of vibrating objects by combining optical heterodyne interferometry and a high-speed CMOS-DVR camera. The high-speed CMOS-DVR camera records a sequence of images of the vibrating object. Then, by extracting and processing multiple pixels at the same time, a digital demodulation technique is implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. This method is validated with an experiment. A piezoelectric ceramic plate with standard vibration characteristics is used as the vibrating target, which is driven by a standard sinusoidal signal.
Real-time proton beam range monitoring by means of prompt-gamma detection with a collimated camera
NASA Astrophysics Data System (ADS)
Roellinghoff, F.; Benilov, A.; Dauvergne, D.; Dedes, G.; Freud, N.; Janssens, G.; Krimmer, J.; Létang, J. M.; Pinto, M.; Prieels, D.; Ray, C.; Smeets, J.; Stichelbaut, F.; Testa, E.
2014-03-01
A prompt-gamma profile was measured at WPE-Essen using 160 MeV protons impinging on a movable PMMA target. A single collimated detector was used with time-of-flight (TOF) to reduce the background due to neutrons. The precision with which the target entrance rise and the Bragg peak falloff could be retrieved was determined as a function of incident proton number by a fitting procedure using independent data sets. Assuming the sensitivity of this camera design is improved by using a greater number of detectors, retrieval precisions of 1 to 2 mm (rms) are expected for a clinical pencil beam. TOF significantly improves the contrast-to-noise ratio and the performance of the method.
A New Technique for Precision Photometry Using Alt/Az Telescopes
NASA Astrophysics Data System (ADS)
Kirkaptrick, Colin; Stacey, Piper; Swift, Jonathan
2018-06-01
We present and test a new method for flat field calibration of images obtained on telescopes with altitude-azimuth (Alt-Az) mounts. Telescopes using Alt-Az mounts typically employ a field “de-rotator” to account for changing parallactic angles of targets observed across the sky, or for long exposures of a single target. This “de-rotation” results in a changing orientation of the telescope optics with respect to the camera. This, in turn, can result in a flat field that is a function of camera orientation due to, for example, vignetting. In order to account for these changes we develop and test a new flat field technique using the observations of known transiting exoplanets.
Skylab 2: Photographic index and scene identification
NASA Technical Reports Server (NTRS)
Underwood, R. W.; Holland, J. W.
1973-01-01
A quick reference guide to the photographic imagery obtained on Skylab 2 is presented. Place names and descriptors used give sufficient information to identify frames for discussion purposes and are not intended to be used for ground nadir or geographic coverage purposes. The photographs are further identified with respect to the type of camera used in taking the pictures.
Camera for Quasars in the Early Universe (CQUEAN)
NASA Astrophysics Data System (ADS)
Kim, Eunbin; Park, W.; Lim, J.; Jeong, H.; Kim, J.; Oh, H.; Pak, S.; Im, M.; Kuehne, J.
2010-05-01
The early universe at z ≳ 7 is where the first stars, galaxies, and quasars formed, starting the re-ionization of the universe. The discovery and study of quasars in the early universe allow us to witness the beginning of the history of astronomical objects. In order to perform a medium-deep, medium-wide imaging survey of quasars, we are developing an optical CCD camera, CQUEAN (Camera for QUasars in EArly uNiverse), which uses a 1024 x 1024 pixel deep-depletion CCD. It has enhanced QE compared with conventional CCDs in the wavelength band around 1 μm, so it will be an efficient tool for observing quasars at z > 7. It will be attached to the 2.1 m telescope at McDonald Observatory, USA. A focal reducer is designed to secure a larger field of view at the Cassegrain focus of the 2.1 m telescope. For long, stable exposures, an auto-guiding system will be implemented using another CCD camera viewing an off-axis field. All these instruments will be controlled by software written in Python on a Linux platform. CQUEAN is expected to see first light during the summer of 2010.
Development of the radial neutron camera system for the HL-2A tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y. P., E-mail: zhangyp@swip.ac.cn; Yang, J. W.; Liu, Yi
2016-06-15
A new radial neutron camera system has been developed and operated recently on the HL-2A tokamak to measure the spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera are in good agreement with the results of both a standard ²³⁵U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.
Darmanis, Spyridon; Toms, Andrew; Durman, Robert; Moore, Donna; Eyres, Keith
2007-07-01
The aim was to reduce the operating time in computer-assisted navigated total knee replacement (TKR) by improving communication between the infrared camera and the trackers placed on the patient. The innovation involves placing a routinely used laser pointer on top of the camera, so that the infrared cameras focus precisely on the trackers located on the knee to be operated on. A prospective randomized study was performed involving 40 patients divided into two groups, A and B. Both groups underwent navigated TKR, but for group B patients a laser pointer was used to improve the targeting capability of the cameras. Without the laser pointer, the camera had to be moved a mean of 9.2 times in order to identify the trackers. With the introduction of the laser pointer, this was reduced to 0.9 times. Accordingly, the additional mean time required without the laser pointer was 11.6 minutes. Time delays are a major problem in computer-assisted surgery, and our technical suggestion can contribute towards reducing the delays associated with this particular application.
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.
2011-01-01
The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.
ERIC Educational Resources Information Center
Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.
2017-01-01
Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…
New Galaxy-hunting Sky Camera Sees Redder Better | Berkeley Lab
Mosaic-3 is now one of the best cameras on the planet for studying outer space at red wavelengths. Its primary mission is to carry out a survey of roughly one-eighth of the sky (5,500 square degrees), one layer in the galaxy survey that is locating targets for DESI.
R&D 100, 2016: Ultrafast X-ray Imager
Porter, John; Claus, Liam; Sanchez, Marcos; Robertson, Gideon; Riley, Nathan; Rochau, Greg
2018-06-13
The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.
R&D 100, 2016: Ultrafast X-ray Imager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, John; Claus, Liam; Sanchez, Marcos
The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.
van Duren, B H; Sugand, K; Wescott, R; Carrington, R; Hart, A
2018-05-01
Hip fractures contribute to a significant clinical burden globally, with over 1.6 million cases per annum and up to a 30% mortality rate within the first year. Insertion of a dynamic hip screw (DHS) is a frequently performed procedure to treat extracapsular neck of femur fractures. Poorly performed DHS fixation of extracapsular neck of femur fractures can result in poor mobilisation, chronic pain, and an increased cut-out rate requiring revision surgery. A realistic, affordable, and portable fluoroscopic simulation system can improve performance metrics in trainees, including the tip-apex distance (the only clinically validated outcome), and thereby improve patient outcomes. We developed a digital fluoroscopic imaging simulator using orthogonal cameras to track coloured markers attached to the guide-wire, creating a virtual overlay on fluoroscopic images of the hip. To test the accuracy with which the augmented reality system could track a guide-wire, a standard workshop femur was used to calibrate the system, with a positional marker fixed to indicate the apex; this allowed the guide-wire tip-apex distance (TAD) calculated by the system to be compared with that physically measured. Tests were undertaken to determine: (1) how well the apex could be targeted; (2) the accuracy of the calculated TAD; and (3) the number of iterations through the algorithm giving the optimal accuracy-time trade-off. The calculated TAD was found to have an average root mean square error of 4.2 mm. The accuracy of the algorithm was shown to increase with the number of iterations up to 20, beyond which the error asymptotically converged to 2 mm. This work demonstrates a novel augmented reality simulation of guide-wire insertion in DHS surgery; to our knowledge this has not been previously achieved. In contrast to virtual reality, augmented reality is able to simulate fluoroscopy while allowing the trainee to interact with real instrumentation and perform the procedure on workshop bone models. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.
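For reference, the tip-apex distance reported above is commonly computed with Baumgaertner's magnification-corrected formulation from two orthogonal views; the helper below is a generic sketch of that formula, not the simulator's code, and the argument names are illustrative.

```python
def tip_apex_distance(x_ap_mm, x_lat_mm, d_true_mm, d_ap_mm, d_lat_mm):
    """Tip-apex distance (Baumgaertner) from two orthogonal fluoroscopic views.
    x_ap_mm, x_lat_mm : tip-to-apex distances measured on the AP and lateral images
    d_true_mm         : known true diameter of the lag screw / guide-wire
    d_ap_mm, d_lat_mm : apparent diameters on each image (magnification correction)"""
    return x_ap_mm * (d_true_mm / d_ap_mm) + x_lat_mm * (d_true_mm / d_lat_mm)
```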
Progress with the lick adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavel, D T; Olivier, S S; Bauman, B
2000-03-01
Progress and results of observations with the Lick Observatory Laser Guide Star Adaptive Optics System are presented. This system is optimized for diffraction-limited imaging in the near infrared, 1-2 micron wavelength bands. We describe our development efforts in a number of component areas including a redesign of the optical bench layout, the commissioning of a new infrared science camera, and improvements to the software and user interface. There is also an ongoing effort to characterize the system performance with both natural and laser guide stars and to fold these data into a refined system model. Such a model can be used to help plan future observations, for example, predicting the point-spread function as a function of seeing and guide star magnitude.
Computerized digital dermoscopy.
Gewirtzman, A J; Braun, R P
2003-01-01
Within the past 15 years, dermoscopy has become a widely used non-invasive technique for physicians to better visualize pigmented lesions. Dermoscopy has helped trained physicians to better diagnose pigmented lesions. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to obtain second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products begins to grow, so does the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments that can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated, as well as in the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.
PredGuid+A: Orion Entry Guidance Modified for Aerocapture
NASA Technical Reports Server (NTRS)
Lafleur, Jarret
2013-01-01
PredGuid+A software was developed to enable a unique numerical predictor-corrector aerocapture guidance capability that builds on heritage Orion entry guidance algorithms. The software can be used for both planetary entry and aerocapture applications. Furthermore, PredGuid+A implements a new Delta-V minimization guidance option that can take the place of traditional targeting guidance and can result in substantial propellant savings. PredGuid+A allows the user to set a mode flag and input a target orbit's apoapsis and periapsis. Using bank angle control, the guidance will then guide the vehicle to the appropriate post-aerocapture orbit using one of two algorithms: Apoapsis Targeting or Delta-V Minimization (as chosen by the user). Recently, the PredGuid guidance algorithm was adapted for use in skip-entry scenarios for NASA's Orion multi-purpose crew vehicle (MPCV). To leverage flight heritage, most of Orion's entry guidance routines are adapted from the Apollo program.
Hypervelocity impact studies using a rotating mirror framing laser shadowgraph camera
NASA Technical Reports Server (NTRS)
Parker, Vance C.; Crews, Jeanne Lee
1988-01-01
The need to study the effects of the impact of micrometeorites and orbital debris on various space-based systems has brought together the technologies of several companies and individuals in order to provide a successful instrumentation package. A light gas gun was employed to accelerate small projectiles to speeds in excess of 7 km/sec. Their impact on various targets is being studied with the help of a specially designed continuous-access rotating-mirror framing camera. The camera provides 80 frames of data at up to 1 × 10⁶ frames/sec with exposure times of 20 nsec.
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moiré, holographic, and heterodyne interferometry techniques were also examined. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanners and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
The First Light of the Subaru Laser Guide Star Adaptive Optics System
NASA Astrophysics Data System (ADS)
Takami, H.; Hayano, Y.; Oya, S.; Hattori, M.; Watanabe, M.; Guyon, O.; Eldred, M.; Colley, S.; Saito, Y.; Itoh, M.; Dinkins, M.
Subaru Telescope has been operating a 36-element curvature-sensor AO system at the Cassegrain focus since 2000. We have developed a new AO system for the Nasmyth focus. The AO system has a 188-element curvature wavefront sensor and a bimorph deformable mirror; it is the largest-format system for this type of sensor. The deformable mirror also has 188 elements, with a 90 mm effective aperture and a 130 mm blank size. The real-time controller is a 4-CPU real-time Linux computer, and the update speed is now 1.5 kHz. The AO system also has a laser guide star system. The laser is a sum-frequency solid-state laser generating 589 nm light. We have achieved 4.7 W output power with excellent beam quality (M² = 1.1) and good stability. The laser is installed in a clean room on the Nasmyth platform. The laser beam is transferred by a 35 m photonic crystal optical fiber to the 50 cm laser launching telescope mounted behind the Subaru secondary mirror. The field of view of the low-order wavefront sensor for the tilt guide star in LGS mode is 2.7 arcmin in diameter. The AO system had first light with a natural guide star in October 2006; the Strehl ratio was > 0.5 in the K band under 0.8 arcsec visible seeing. We also projected the laser beam on the sky during the same engineering run. Three instruments will be used with the AO system: the infrared camera and spectrograph (IRCS), a high dynamic range IR camera (HiCIAO) for exosolar planet detection, and a visible 3D spectrograph.
Calibration and verification of thermographic cameras for geometric measurements
NASA Astrophysics Data System (ADS)
Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.
2011-03-01
Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire both temperature and geometric information, although calibration and verification procedures are usually applied only to the thermal data; black bodies are used for these purposes. The geometric information, however, is important for many fields such as architecture, civil engineering, and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability, and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of companies. A grid based on burning lamps is used for the geometric calibration of the thermographic cameras. The artefact designed for geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material property because both the thermographic and visible cameras are able to detect it. Two thermographic cameras from the Flir and Nec manufacturers, and one visible camera from Jai, are calibrated, verified, and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm in all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher for the visible camera, and the geometric comparison between thermographic cameras shows slightly better results for the Nec camera.
Magnetic field effect on spoke behaviour
NASA Astrophysics Data System (ADS)
Hnilica, Jaroslav; Slapanska, Marta; Klein, Peter; Vasina, Petr
2016-09-01
Investigations of the non-reactive high power impulse magnetron sputtering (HiPIMS) discharge using high-speed camera imaging, optical emission spectroscopy, and electrical probes have shown that the plasma is not homogeneously distributed over the target surface, but is concentrated in regions of higher local plasma density called spokes, which rotate above the erosion racetrack. The effect of the magnetic field on spoke behaviour was studied by high-speed camera imaging in a HiPIMS discharge using a 3 inch titanium target. The camera employed enabled us to record two successive images within the same pulse with a time delay of 3 μs between them, which allowed us to determine the number of spokes, the spoke rotation velocity, and the spoke rotation frequency. The experimental conditions covered a pressure range from 0.15 to 5 Pa, discharge currents up to 350 A, and magnetic fields of 37, 72, and 91 mT. Increasing the magnetic field influenced the number of spokes observed at the same pressure and the same discharge current. Moreover, the investigation revealed different characteristic spoke shapes depending on the magnetic field strength: both diffusive and triangular shapes were observed for the same target material. The spoke rotation velocity was independent of the magnetic field strength. This research has been financially supported by the Czech Science Foundation within project 15-00863S.
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing
2008-02-01
Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.
Superficial vessel reconstruction with a multiview camera system
Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan
2016-01-01
We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide the deformation methods to compensate for the brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three mono-complementary metal-oxide semiconductor cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering where the 3-D points are compared to surfaces obtained using the thin-plate spline with decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation in virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean square error and mean error) are ∼1 mm. PMID:26759814
3D thermography for improving temperature measurements in thermal vacuum testing
NASA Astrophysics Data System (ADS)
Robinson, D. W.; Simpson, R.; Parian, J. A.; Cozzani, A.; Casarosa, G.; Sablerolle, S.; Ertel, H.
2017-09-01
The application of thermography to thermal vacuum (TV) testing of spacecraft is becoming a vital additional tool in the mapping of structures during thermal cycles and thermal balance (TB) testing. Many of the customers at the European Space Agency (ESA) test centre, the European Space Research and Technology Centre (ESTEC), The Netherlands, now make use of a thermal camera during TB-TV campaigns. This complements the use of embedded thermocouples on the structure, providing the prospect of monitoring temperatures at high resolution and high frequency. For simple flat structures with a well-defined emissivity, it is possible to determine the surface temperatures with reasonable confidence. However, for most real spacecraft and sub-systems, the complexity of the structure's shape and its test environment creates inter-reflections from external structures. This, together with the additional complication of angular and spectral variations of the spacecraft surface emissivity, makes the interpretation of the radiation detected by a thermal camera more difficult in terms of determining a validated temperature with high confidence and well-defined uncertainty. One solution to this problem is: to map the geometry of the test specimen and thermal test environment; to model the surface temperatures and emissivity variations of the structures and materials; and to use this model to correct the apparent temperatures recorded by the thermal camera. This approach has been used by a team from NPL (National Physical Laboratory), Psi-tran, and PhotoCore, working with ESA, to develop a 3D thermography system that provides a means to validate thermal camera temperatures, based on a combination of thermal imaging photogrammetry and ray-tracing scene modeling. The system has been tested at ESTEC in ambient conditions with a dummy spacecraft structure containing a representative set of surface temperatures, shapes, and spacecraft materials, and with hot external sources and a high power lamp as a sun simulator. The results are presented here with estimated temperature measurement uncertainties and defined confidence levels according to the internationally accepted Guide to Uncertainty of Measurement as used in the IEC/ISO 17025 test and measurement standard. This work is understood to represent the first application of well-understood thermal imaging theory, commercial photogrammetry software, and open-source ray-tracing software (adapted to realize the Planck function for thermal wavebands and target emission) combined into a complete system for determining true surface temperatures in complex spacecraft-testing applications.
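The ray-tracing correction mentioned above relies on evaluating the Planck function over the camera's thermal waveband. The sketch below shows such an in-band radiance calculation for an assumed 8-14 μm LWIR band; the band limits and emissivity are placeholders, and the project's actual implementation is considerably more involved.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-3 sr^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

def band_radiance(temp_k, lam_min=8e-6, lam_max=14e-6, n=500, emissivity=1.0):
    """In-band radiance (W m^-2 sr^-1) over an assumed LWIR waveband,
    obtained by numerically integrating the Planck function."""
    lam = np.linspace(lam_min, lam_max, n)
    return emissivity * np.trapz(planck_spectral_radiance(lam, temp_k), lam)
```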
Krishnan, Prakash; Tarricone, Arthur; K-Raman, Purushothaman; Majeed, Farhan; Kapur, Vishal; Gujja, Karthik; Wiley, Jose; Vasquez, Miguel; Lascano, Rheoneil A; Quiles, Katherine G; Distin, Tashanne; Fontenelle, Ran; Atallah-Lajam, Farah; Kini, Annapoorna; Sharma, Samin
2018-01-01
The aim of this study was to compare 1-year outcomes for patients with femoropopliteal in-stent restenosis using directional atherectomy guided by intravascular ultrasound (IVUS) versus directional atherectomy guided by angiography. This was a retrospective analysis for patients with femoropopliteal in-stent restenosis treated with IVUS-guided directional atherectomy versus directional atherectomy guided by angiography from a single center between March 2012 and February 2016. Clinically driven target lesion revascularization was the primary endpoint and was evaluated through medical chart review as well as phone call follow up. Directional atherectomy guided by IVUS reduces clinically driven target lesion revascularization for patients with femoropopliteal in-stent restenosis.
Design and simulation of a sensor for heliostat field closed loop control
NASA Astrophysics Data System (ADS)
Collins, Mike; Potter, Daniel; Burton, Alex
2017-06-01
Significant research has been completed in pursuit of capital cost reductions for heliostats [1],[2]. The camera-array closed-loop control concept has the potential to radically alter the way heliostats are controlled and installed by replacing high-quality open-loop targeting systems with low-quality targeting devices that rely on measurement of image position to remove tracking errors during operation. Although the system could be used for any heliostat size, it significantly benefits small heliostats by reducing actuation costs, enabling large numbers of heliostats to be calibrated simultaneously, and enabling calibration of heliostats that produce low irradiance (similar to or less than ambient light) on Lambertian calibration targets, such as small heliostats that are far from the tower. A simulation method for the camera array has been designed and verified experimentally. The simulation tool demonstrates that closed-loop calibration or control is possible using this device.
Frost on Mars Rover Opportunity
NASA Technical Reports Server (NTRS)
2004-01-01
Frost can form on surfaces if enough water is present and the temperature is sufficiently low. On each of NASA's Mars Exploration Rovers, the calibration target for the panoramic camera provides a good place to look for such events. A thin frost was observed by Opportunity's panoramic camera on the rover's 257th sol (Oct. 13, 2004) 11 minutes after sunrise (left image). The presence of the frost is most clearly seen on the post in the center of the target, particularly when compared with the unsegmented outer ring of the target, which is white. The post is normally black. For comparison, note the difference in appearance in the image on the right, taken about three hours later, after the frost had dissipated. Frost has not been observed at Spirit, where the amount of atmospheric water vapor is observed to be appreciably lower. Both images were taken through a filter centered at a wavelength of 440 nanometers (blue).
Study on the measurement system of the target polarization characteristics and test
NASA Astrophysics Data System (ADS)
Fu, Qiang; Zhu, Yong; Zhang, Su; Duan, Jin; Yang, Di; Zhan, Juntong; Wang, Xiaoman; Jiang, Hui-Lin
2015-10-01
Polarization imaging detection adds polarization information to conventional intensity imaging and is widely applied in military, civil, and other fields, so research on the polarization characteristics of targets is particularly important. The polarization reflection model, which describes the distribution of scattered light energy and its polarization characteristics over the reflecting hemisphere, is introduced in this paper, and a measurement system for target polarization characteristics is proposed. The system consists of an illumination source, a measuring turntable, and a camera; the illumination is a direct light source, using either a laser or a xenon lamp, which can be exchanged according to the needs of the test. A hemispherical structure is used for the measurement, with the material sample placed near its base; azimuth and pitch rotation mechanisms allow manual adjustment of the observation azimuth and elevation angles. The measuring camera works with a motor-controlled rotating polarizer to perform polarization tests at different polarizer orientations, ensuring measurement accuracy and imaging resolution. The test platform was set up from existing laboratory equipment, using a 532 nm laser, a camera with a linear polarizer, and transmitting and receiving optical systems. The polarization scattering properties of targets of different materials, such as wood, metal, and plastic, were measured at different azimuth and zenith angles and under different exposure conditions, implementing polarization bidirectional reflectance distribution function (pBRDF) measurement over the hemispherical space.
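Although the paper's own processing chain is not spelled out here, polarization characteristics of this kind are typically derived from intensity images taken through a linear polarizer at several orientations. The sketch below shows the standard Stokes-parameter reduction for 0°, 45°, 90°, and 135° measurements and is offered only as background, not as the authors' algorithm.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters, degree and angle of linear polarization from
    intensity images taken through a linear polarizer at 0, 45, 90 and 135 deg."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
    s1 = i0 - i90                               # horizontal vs. vertical
    s2 = i45 - i135                             # +45 vs. -45 degrees
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear pol.
    aolp = 0.5 * np.arctan2(s2, s1)                          # angle of linear pol.
    return s0, s1, s2, dolp, aolp
```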
Structural basis for the recognition of guide RNA and target DNA heteroduplex by Argonaute
Miyoshi, Tomohiro; Ito, Kosuke; Murakami, Ryo; Uchiumi, Toshio
2016-01-01
Argonaute proteins are key players in the gene silencing mechanisms mediated by small nucleic acids in all domains of life from bacteria to eukaryotes. However, little is known about the Argonaute protein that recognizes guide RNA/target DNA. Here, we determine the 2 Å crystal structure of Rhodobacter sphaeroides Argonaute (RsAgo) in a complex with 18-nucleotide guide RNA and its complementary target DNA. The heteroduplex maintains Watson–Crick base-pairing even in the 3′-region of the guide RNA between the N-terminal and PIWI domains, suggesting a recognition mode by RsAgo for stable interaction with the target strand. In addition, the MID/PIWI interface of RsAgo has a system that specifically recognizes the 5′ base-U of the guide RNA, and the duplex-recognition loop of the PAZ domain is important for the DNA silencing activity. Furthermore, we show that Argonaute discriminates the nucleic acid type (RNA/DNA) by recognition of the duplex structure of the seed region. PMID:27325485
Structural basis for the recognition of guide RNA and target DNA heteroduplex by Argonaute.
Miyoshi, Tomohiro; Ito, Kosuke; Murakami, Ryo; Uchiumi, Toshio
2016-06-21
Argonaute proteins are key players in the gene silencing mechanisms mediated by small nucleic acids in all domains of life from bacteria to eukaryotes. However, little is known about the Argonaute protein that recognizes guide RNA/target DNA. Here, we determine the 2 Å crystal structure of Rhodobacter sphaeroides Argonaute (RsAgo) in a complex with 18-nucleotide guide RNA and its complementary target DNA. The heteroduplex maintains Watson-Crick base-pairing even in the 3'-region of the guide RNA between the N-terminal and PIWI domains, suggesting a recognition mode by RsAgo for stable interaction with the target strand. In addition, the MID/PIWI interface of RsAgo has a system that specifically recognizes the 5' base-U of the guide RNA, and the duplex-recognition loop of the PAZ domain is important for the DNA silencing activity. Furthermore, we show that Argonaute discriminates the nucleic acid type (RNA/DNA) by recognition of the duplex structure of the seed region.
3D Rainbow Particle Tracking Velocimetry
NASA Astrophysics Data System (ADS)
Aguirre-Pablo, Andres A.; Xiong, Jinhui; Idoughi, Ramzi; Aljedaani, Abdulrahman B.; Dun, Xiong; Fu, Qiang; Thoroddsen, Sigurdur T.; Heidrich, Wolfgang
2017-11-01
A single color camera is used to reconstruct a 3D-3C velocity flow field. The camera records the 2D (X, Y) position and the colored scattered light intensity (encoding Z) of white polyethylene tracer particles in a flow. The main advantage of using a color camera is the capability of combining different intensity levels in each color channel to obtain more depth levels. The illumination system consists of an LCD projector placed perpendicular to the camera. Colored intensity gradients are projected onto the particles to encode the depth position (Z) of each particle, benefiting from the possibility of varying the color profiles and projection frequencies up to 60 Hz. Chromatic aberrations and distortions are estimated and corrected using a 3D laser-engraved calibration target. The camera-projector system characterization is presented, considering the size and depth position of the particles. The use of these components dramatically reduces the cost and complexity of traditional 3D-PTV systems.
Machine Vision for Relative Spacecraft Navigation During Approach to Docking
NASA Technical Reports Server (NTRS)
Chien, Chiun-Hong; Baker, Kenneth
2011-01-01
This paper describes a machine vision system for relative spacecraft navigation during the terminal phase of approach to docking that: 1) matches high contrast image features of the target vehicle, as seen by a camera that is bore-sighted to the docking adapter on the chase vehicle, to the corresponding features in a 3d model of the docking adapter on the target vehicle and 2) is robust to on-orbit lighting. An implementation is provided for the case of the Space Shuttle Orbiter docking to the International Space Station (ISS), with quantitative test results using a full scale, medium fidelity mock-up of the ISS docking adapter mounted on a 6-DOF motion platform at the NASA Marshall Spaceflight Center Flight Robotics Laboratory and qualitative test results using recorded video from the Orbiter Docking System Camera (ODSC) during multiple orbiter to ISS docking missions. The Natural Feature Image Registration (NFIR) system consists of two modules: 1) Tracking, which tracks the target object from image to image and estimates the position and orientation (pose) of the docking camera relative to the target object, and 2) Acquisition, which recognizes the target object if it is in the docking camera field of view and provides an approximate pose that is used to initialize tracking. Detected image edges are matched to the 3d model edges whose predicted location, based on the pose estimate and its first time derivative from the previous frame, is closest to the detected edge. Mismatches are eliminated using a rigid motion constraint. The remaining 2d image to 3d model matches are used to make a least squares estimate of the change in relative pose from the previous image to the current image. The changes in position and in attitude are used as data for two Kalman filters whose outputs are smoothed estimates of position and velocity plus attitude and attitude rate that are then used to predict the location of the 3d model features in the next image.
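The smoothing step described above can be illustrated with a single predict/update cycle of a constant-velocity Kalman filter for one position axis. This is a textbook sketch, not the NFIR filter itself, and the noise parameters are arbitrary placeholders.

```python
import numpy as np

def kalman_cv_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One predict/update step of a constant-velocity Kalman filter for one axis.
    x = [position, velocity], P = 2x2 covariance, z = measured relative position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])              # process noise
    H = np.array([[1.0, 0.0]])                       # we only measure position
    R = np.array([[r]])                              # measurement noise

    x = F @ x                                        # predict
    P = F @ P @ F.T + Q
    y = z - H @ x                                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + (K @ y).ravel()                          # update state
    P = (np.eye(2) - K @ H) @ P                      # update covariance
    return x, P
```

Run once per frame with the pose-change measurement as `z`; the filtered velocity estimate is what allows the tracker to predict feature locations in the next image.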
A new compact, high sensitivity neutron imaging system
NASA Astrophysics Data System (ADS)
Caillaud, T.; Landoas, O.; Briat, M.; Rossé, B.; Thfoin, I.; Philippe, F.; Casner, A.; Bourgade, J. L.; Disdier, L.; Glebov, V. Yu.; Marshall, F. J.; Sangster, T. C.; Park, H. S.; Robey, H. F.; Amendt, P.
2012-10-01
We have developed a new small neutron imaging system (SNIS) diagnostic for the OMEGA laser facility. The SNIS uses a penumbral coded aperture and has been designed to record images from low yield (10⁹-10¹⁰ neutrons) implosions such as those using deuterium as the fuel. This camera was tested at OMEGA in 2009 on a rugby hohlraum energetics experiment where it recorded an image at a yield of 1.4 × 10¹⁰. The resolution of this image was 54 μm and the camera was located only 4 meters from target chamber centre. We recently improved the instrument by adding a cooled CCD camera. The sensitivity of the new camera has been fully characterized using a linear accelerator and a ⁶⁰Co γ-ray source. The calibration showed that the signal-to-noise ratio could be improved by using raw binning detection.
3D imaging and wavefront sensing with a plenoptic objective
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.
2011-06-01
Plenoptic cameras have been developed over the last years as a passive method for 3d scanning. Several superresolution algorithms have been proposed in order to overcome the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real time systems have been implemented using specialized hardware such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we will present our own implementations related to the aforementioned aspects but also two new developments consisting of a portable plenoptic objective to transform every conventional 2d camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades the telescope images due to the refractive index changes associated with the turbulence. These changes require high speed processing that justifies the use of GPUs and FPGAs. Sodium artificial Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of the finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera, and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.
X-ray imaging using digital cameras
NASA Astrophysics Data System (ADS)
Winch, Nicola M.; Edgar, Andrew
2012-03-01
The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
The research of adaptive-exposure on spot-detecting camera in ATP system
NASA Astrophysics Data System (ADS)
Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu
2013-08-01
High-precision acquisition, tracking, and pointing (ATP) is one of the key techniques of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that the ATP system can obtain the position information of the communication terminal. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in a satellite-to-earth laser communication ATP system requires high precision in target detection: the positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise. However, the beacon intensity changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector will be insufficient when the camera underexposes the beacon because of low light intensity, and it will saturate when the camera overexposes because of high light intensity. The accuracy of the centroid algorithm degrades if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is reduced accordingly. To maintain accuracy, space-based cameras should regulate the exposure time in real time according to the light intensity. The algorithm of an adaptive-exposure technique for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analysis, a CMOS camera for a space-based laser communication system is described, which uses the adaptive-exposure algorithm to adjust the exposure time. Results from an imaging experiment verify the design and prove that it can restrain the reduction of positioning accuracy caused by changes in light intensity, so the camera maintains stable, high positioning accuracy during communication.
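The centroid computation and exposure regulation described above can be sketched as follows. The centroid is the standard intensity-weighted centre of mass; the exposure rule is a simple proportional controller of our own for illustration, not the paper's adaptive-exposure algorithm, and the full-scale and target-peak values are placeholders.

```python
import numpy as np

def spot_centroid(image, threshold=0.0):
    """Intensity-weighted centroid (centre of mass) of a beacon spot image."""
    img = np.clip(image.astype(float) - threshold, 0.0, None)
    total = img.sum()
    if total == 0:
        return None                      # no spot detected
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

def adjust_exposure(image, exposure, full_scale=4095, target_peak=0.7, gain=0.5):
    """Illustrative proportional adaptive-exposure rule: steer the spot's peak
    value towards target_peak of the detector's full scale."""
    peak = image.max() / full_scale
    if peak <= 0:
        return exposure * 2.0            # badly underexposed, open up quickly
    return exposure * (1.0 + gain * (target_peak - peak) / target_peak)
```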
Single-Command Approach and Instrument Placement by a Robot on a Target
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang
2005-01-01
AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.
[Research Award providing funds for a tracking video camera
NASA Technical Reports Server (NTRS)
Collett, Thomas
2000-01-01
The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3 m square arena as they navigate singly indoors guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is further than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. It raises many questions for future research that we are now pursuing.
A structured light system to guide percutaneous punctures in interventional radiology
NASA Astrophysics Data System (ADS)
Nicolau, S. A.; Brenot, J.; Goffin, L.; Graebling, P.; Soler, L.; Marescaux, J.
2008-04-01
Interventional radiology is a new medical field which allows percutaneous punctures on patients for tumour destruction or tissue analysis. The patient lies on a CT or MRI table and the practitioner guides the needle insertion iteratively using repeated acquisitions (2D slices). We aim at designing a guidance system to reduce the number of CT/MRI acquisitions, and therefore decrease the irradiation and shorten the duration of the intervention. We propose a system composed of two calibrated cameras and a structured light videoprojector. The cameras track the needle manipulated by the practitioner at 15 Hz, and software displays the needle position with respect to a preoperative segmented image of the patient. To register the preoperative image in the camera frame, we first reconstruct the patient skin in 3D using the structured light. Then, the surface registration between the reconstructed skin and the segmented skin from the preoperative image is performed using the Iterative Closest Point (ICP) algorithm. Ensuring the quality of this registration is the most challenging task of the system; indeed, a surface registration cannot converge correctly if the surfaces to be registered are too smooth. The main contribution of our work is the evaluation on patients of the conditions that can ensure a correct registration of the preoperative skin surface with the reconstructed one. Furthermore, in case of unfavourable conditions, we propose a method to create enough singularities on the patient abdomen so that convergence is guaranteed. In the coming months, we plan to evaluate the full system during standard needle insertions on patients.
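The surface registration step uses the Iterative Closest Point algorithm; a minimal point-to-point ICP sketch is given below for orientation. The paper's variant, convergence criteria, and outlier handling may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iter=50, tol=1e-6):
    """Minimal point-to-point ICP: rigidly align 'source' (N,3) to 'target' (M,3).
    Returns rotation R (3x3) and translation t such that R @ p + t maps source to target."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        # 1. closest-point correspondences
        dist, idx = tree.query(src)
        matched = target[idx]
        # 2. best rigid transform for these correspondences (Kabsch / SVD)
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_m - R_step @ mu_s
        # 3. apply the increment and accumulate the overall transform
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```

As the abstract notes, this kind of registration only converges reliably when the skin surface has enough geometric singularities to constrain the fit.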
A New Method for Wide-field Near-IR Imaging with the Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Momcheva, Ivelina G.; van Dokkum, Pieter G.; van der Wel, Arjen; Brammer, Gabriel B.; MacKenty, John; Nelson, Erica J.; Leja, Joel; Muzzin, Adam; Franx, Marijn
2017-01-01
We present a new technique for wide and shallow observations using the near-infrared channel of Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST). Wide-field near-IR surveys with HST are generally inefficient, as guide star acquisitions make it impractical to observe more than one pointing per orbit. This limitation can be circumvented by guiding with gyros alone, which is possible as long as the telescope has three functional gyros. The method presented here allows us to observe mosaics of eight independent WFC3-IR pointings in a single orbit by utilizing the fact that HST drifts by only a very small amount in the 25 s between non-destructive reads of unguided exposures. By shifting the reads and treating them as independent exposures the full resolution of WFC3 can be restored. We use this “drift and shift” (DASH) method in the Cycle 23 COSMOS-DASH program, which will obtain 456 WFC3 H160 pointings in 57 orbits, covering an area of 0.6 degrees in the COSMOS field down to H160 = 25. When completed, the program will more than triple the area of extra-galactic survey fields covered by near-IR imaging at HST resolution. We demonstrate the viability of the method with the first four orbits (32 pointings) of this program. We show that the resolution of the WFC3 camera is preserved, and that structural parameters of galaxies are consistent with those measured in guided observations.
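The essence of the drift-and-shift idea, treating differences between non-destructive reads as independent exposures and registering them before co-adding, can be sketched as below. This toy version assumes integer-pixel drifts estimated by phase correlation, whereas the real pipeline must also handle up-the-ramp fitting, geometric distortion, and cosmic rays.

```python
import numpy as np

def integer_shift(ref, img):
    """Estimate the integer-pixel shift of img relative to ref by phase correlation."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map the peak location to signed shifts
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def drift_and_shift(reads):
    """Combine the non-destructive reads of an unguided exposure.
    reads: list of cumulative read frames; successive differences are treated as
    independent short exposures, registered to the first one, and co-added."""
    diffs = [reads[i + 1] - reads[i] for i in range(len(reads) - 1)]
    stack = diffs[0].astype(float)
    for d in diffs[1:]:
        dy, dx = integer_shift(diffs[0], d)
        stack += np.roll(d, (dy, dx), axis=(0, 1))
    return stack
```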
Kaethner, R J; Stuermer, C A
1992-08-01
In a variety of species, developing retinal axons branch initially more widely in their visual target centers and only gradually restrict their terminal arbors to smaller and defined territories. Retinotectal axons in fish, however, appeared to grow in a directed manner and to arborize only at their retinotopic target sites. To visualize the dynamics of retinal axon growth and arbor formation in fish, time-lapse recordings were made of individual retinal ganglion cell axons in the tectum in live zebrafish embryos. Axons were labeled with the fluorescent carbocyanine dyes DiI or DiO inserted as crystals into defined regions of the retina, viewed with 40x and 100x objectives with an SIT camera, and recorded, with exposure times of 200 msec at 30 or 60 sec intervals, over time periods of up to 13 hr. (1) Growth cones advanced rapidly, but the advance was punctuated by periods of rest. During the rest periods, the growth cones broadened and developed filopodia, but during extension they were more streamlined. (2) Growth cones traveled unerringly in the direction of their retinotopic targets without branching en route. At their target and only there, the axons began to form terminal arborizations, a process that involved the emission and retraction of numerous short side branches. The area that was permanently occupied or touched by transient branches of the terminal arbor--"the exploration field"--was small and almost circular and covered not more than 5.3% of the entire tectal surface area, but represented up to six times the size of the arbor at any one time. These findings are consistent with the idea that retinal axons are guided to their retinotopic target sites by sets of positional markers, with a graded distribution over the axes of the tectum.
NASA Astrophysics Data System (ADS)
de Villiers, Jason; Jermy, Robert; Nicolls, Fred
2014-06-01
This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length, and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb line method, allows many radial and tangential distortion coefficients and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted to undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined so as to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
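For context, the Brown model referred to above maps undistorted to distorted image coordinates with polynomial radial terms and two tangential terms. The sketch below is a generic implementation of that model; the coefficient count and principal-point handling are illustrative, not the system's actual code.

```python
import numpy as np

def brown_distort(x, y, k_radial, p1, p2, cx=0.0, cy=0.0):
    """Apply a Brown lens distortion model to undistorted normalized image
    coordinates (x, y). k_radial is a sequence of radial coefficients (k1, k2, ...),
    p1 and p2 are tangential coefficients, (cx, cy) is the principal point."""
    xd, yd = x - cx, y - cy
    r2 = xd**2 + yd**2
    radial = 1.0 + sum(k * r2**(i + 1) for i, k in enumerate(k_radial))
    x_t = 2 * p1 * xd * yd + p2 * (r2 + 2 * xd**2)
    y_t = p1 * (r2 + 2 * yd**2) + 2 * p2 * xd * yd
    return cx + xd * radial + x_t, cy + yd * radial + y_t
```

The inverse (distorted to undistorted) transformation mentioned in the abstract has no closed form for high-order models and is typically fitted as a second coefficient set, which is what allows both directions to run at deterministic rates.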
Mortezavi, Ashkan; Märzendorfer, Olivia; Donati, Olivio F; Rizzi, Gianluca; Rupp, Niels J; Wettstein, Marian S; Gross, Oliver; Sulser, Tullio; Hermanns, Thomas; Eberli, Daniel
2018-02-21
We evaluated the diagnostic accuracy of multiparametric magnetic resonance imaging and multiparametric magnetic resonance imaging/transrectal ultrasound fusion guided targeted biopsy against that of transperineal template saturation prostate biopsy to detect prostate cancer. We retrospectively analyzed the records of 415 men who consecutively presented for prostate biopsy between November 2014 and September 2016 at our tertiary care center. Multiparametric magnetic resonance imaging was performed using a 3 Tesla device without an endorectal coil, followed by transperineal template saturation prostate biopsy with the BiopSee® fusion system. Additional fusion guided targeted biopsy was done in men with a suspicious lesion on multiparametric magnetic resonance imaging, defined as Likert score 3 to 5. Any Gleason pattern 4 or greater was defined as clinically significant prostate cancer. The detection rates of multiparametric magnetic resonance imaging and fusion guided targeted biopsy were compared with the detection rate of transperineal template saturation prostate biopsy using the McNemar test. We obtained a median of 40 (range 30 to 55) and 3 (range 2 to 4) transperineal template saturation prostate biopsy and fusion guided targeted biopsy cores, respectively. Of the 124 patients (29.9%) without a suspicious lesion on multiparametric magnetic resonance imaging, 32 (25.8%) were found to have clinically significant prostate cancer on transperineal template saturation prostate biopsy. Of the 291 patients (70.1%) with a Likert score of 3 to 5, clinically significant prostate cancer was detected in 129 (44.3%) by multiparametric magnetic resonance imaging fusion guided targeted biopsy, in 176 (60.5%) by transperineal template saturation prostate biopsy and in 187 (64.3%) by the combined approach. Overall 58 cases (19.9%) of clinically significant prostate cancer would have been missed if fusion guided targeted biopsy had been performed exclusively. The sensitivity of multiparametric magnetic resonance imaging and fusion guided targeted biopsy for clinically significant prostate cancer was 84.6% and 56.7% with a negative likelihood ratio of 0.35 and 0.46, respectively. Multiparametric magnetic resonance imaging alone should not be performed as a triage test due to a substantial number of false-negative cases with clinically significant prostate cancer. Systematic biopsy outperformed fusion guided targeted biopsy. Therefore, it will remain crucial in the diagnostic pathway of prostate cancer. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System
Manduchi, R.; Coughlan, J.; Ivanchenko, V.
2016-01-01
We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755
Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.
2015-01-01
Background: The ability to discriminate between two similar or progressively dissimilar colours is important for many animals as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images and digital imaging devices with many more than three channels, hyperspectral cameras, have been recently developed to produce image spectrophotometers to recover reflectance spectra at individual pixel locations. We compare a linearised RGB and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings: (1) The colour discrimination power of the RGB device is dependent on colour similarity between the samples whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently from their chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion: (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides means to explore other important aspects of colour vision like the perception of certain types of camouflage and colour constancy where multiple, narrow-band sensors increase resolution. PMID:25965264
Camera Development for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Moncada, Roberto Jose
2017-01-01
With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Dam, M A; Mignant, D L; Macintosh, B A
In this paper, the adaptive optics (AO) system at the W.M. Keck Observatory is characterized. The authors calculate the error budget of the Keck AO system operating in natural guide star mode with a near infrared imaging camera. By modeling the control loops and recording residual centroids, the measurement noise and bandwidth errors are obtained. The error budget is consistent with the images obtained. Results of sky performance tests are presented: the AO system is shown to deliver images with average Strehl ratios of up to 0.37 at 1.58 μm using a bright guide star and 0.19 for a magnitude 12 star.
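A standard way to connect a residual wavefront-error budget of this kind to an expected Strehl ratio is the extended Maréchal approximation, S ≈ exp[−(2πσ/λ)²]. The sketch below uses that generic relation only; the 250 nm residual figure is an illustrative assumption chosen to show it is of the right order to reproduce a Strehl of about 0.37 at 1.58 μm, not a number taken from the paper.

    import math

    def strehl(residual_wfe_nm, wavelength_nm):
        """Extended Marechal approximation: S = exp(-(2*pi*sigma/lambda)^2)."""
        sigma_rad = 2.0 * math.pi * residual_wfe_nm / wavelength_nm
        return math.exp(-sigma_rad**2)

    # Illustrative residual WFE only: ~250 nm rms gives S ~ 0.37 at 1.58 um
    print(strehl(250.0, 1580.0))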
Compact instrument for fluorescence image-guided surgery
NASA Astrophysics Data System (ADS)
Wang, Xinghua; Bhaumik, Srabani; Li, Qing; Staudinger, V. Paul; Yazdanfar, Siavash
2010-03-01
Fluorescence image-guided surgery (FIGS) is an emerging technique in oncology, neurology, and cardiology. To adapt intraoperative imaging for various surgical applications, increasingly flexible and compact FIGS instruments are necessary. We present a compact, portable FIGS system and demonstrate its use in cardiovascular mapping in a preclinical model of myocardial ischemia. Our system uses fiber optic delivery of laser diode excitation, custom optics with high collection efficiency, and compact consumer-grade cameras as a low-cost and compact alternative to open surgical FIGS systems. Dramatic size and weight reduction increases flexibility and access, and allows for handheld use or unobtrusive positioning over the surgical field.
Multispectral photography for earth resources
NASA Technical Reports Server (NTRS)
Wenderoth, S.; Yost, E.; Kalia, R.; Anderson, R.
1972-01-01
A guide for producing accurate multispectral results for earth resource applications is presented along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.
Navy Budget (1992): Potential Reductions in Research, Development, Test, and Evaluation Programs
1991-09-01
Army’s fiber optic guided missile employs a video camera and single spool fiber payout system to provide a continuous data link to a ground station for...January 1991 the Navy’s technical design agent for the MK-48 torpedo has been directing a major research and testing effort. The results of these
TENTACLE Multi-Camera Immersive Surveillance System Phase 2
2015-04-16
successful in solving the most challenging video analytics problems and taking the advanced research concepts into working systems for end-users in both...commercial, space and military applications. Notable successes include winning the DARPA Urban Challenge, software autonomy to guide the NASA robots (Spirit... challenging urban environments. CMU is developing a scalable and extensible architecture, improving search/pursuit/tracking capabilities, and addressing
Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission
NASA Astrophysics Data System (ADS)
Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.
2018-02-01
NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
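The quoted detector format, field of view and pixel scale can be cross-checked with a line of arithmetic (FOV divided by pixel count). The sketch below is only that consistency check on the numbers given in the abstract; the small difference from the quoted ~0.28 mrad/pixel is expected because both figures are rounded and the mapping is not exactly linear across the field.

    import math

    width_px, height_px = 2592, 1944
    fov_x_deg, fov_y_deg = 44.0, 32.0

    scale_x = math.radians(fov_x_deg) / width_px * 1e3    # mrad per pixel
    scale_y = math.radians(fov_y_deg) / height_px * 1e3

    print(f"{scale_x:.3f} mrad/px, {scale_y:.3f} mrad/px")  # ~0.296 and ~0.287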
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available, ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
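The deterioration of depth accuracy with range follows the familiar triangulation behaviour in which, for a fixed effective baseline and a fixed disparity noise, the depth error grows roughly with the square of the range, dZ ≈ Z²·σ_d/(f·b). The sketch below evaluates that generic model only; the baseline, focal length and noise values are illustrative assumptions, not parameters of the camera examined in the paper.

    import numpy as np

    def depth_error(z, baseline_m, focal_px, disp_noise_px):
        """Generic triangulation error model: dZ = Z^2 * sigma_d / (f * b)."""
        return z**2 * disp_noise_px / (focal_px * baseline_m)

    z = np.array([30.0, 50.0, 100.0])   # ranges in metres
    # Assumed (illustrative) baseline, focal length and disparity noise
    err = depth_error(z, baseline_m=0.05, focal_px=5000.0, disp_noise_px=0.2)
    print(np.round(err, 2))             # error grows roughly quadratically with range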
Depth perception camera for autonomous vehicle applications
NASA Astrophysics Data System (ADS)
Kornreich, Philipp
2013-05-01
An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. Since it provides numeric information of the distance from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision. This eliminates the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at this pixel. The light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. The light guides contain a p-n junction and a pair of contacts along their length, and they also contain light-sensing elements along their length. The device uses ambient light that is only coherent in spherical-shell-shaped light packets of thickness of one coherence length. Each of the frequency components of the broad-band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel.
NASA Astrophysics Data System (ADS)
Xie, Yijing; Thom, Maria; Ebner, Michael; Wykes, Victoria; Desjardins, Adrien; Miserocchi, Anna; Ourselin, Sebastien; McEvoy, Andrew W.; Vercauteren, Tom
2017-11-01
In high-grade glioma surgery, tumor resection is often guided by intraoperative fluorescence imaging. 5-aminolevulinic acid-induced protoporphyrin IX (PpIX) provides fluorescent contrast between normal brain tissue and glioma tissue, thus achieving improved tumor delineation and prolonged patient survival compared with conventional white-light-guided resection. However, commercially available fluorescence imaging systems rely solely on visual assessment of fluorescence patterns by the surgeon, which makes the resection more subjective than necessary. We developed a wide-field spectrally resolved fluorescence imaging system utilizing a Generation II scientific CMOS camera and an improved computational model for the precise reconstruction of the PpIX concentration map. In our model, the tissue's optical properties and illumination geometry, which distort the fluorescent emission spectra, are considered. We demonstrate that the CMOS-based system can detect low PpIX concentration at short camera exposure times, while providing high-pixel resolution wide-field images. We show that total variation regularization improves the contrast-to-noise ratio of the reconstructed quantitative concentration map by approximately twofold. Quantitative comparison between the estimated PpIX concentration and tumor histopathology was also investigated to further evaluate the system.
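The reported roughly twofold improvement in contrast-to-noise ratio from total variation regularization can be illustrated generically. The sketch below applies scikit-image's TV denoiser to a synthetic concentration map and measures CNR before and after; the phantom geometry, noise level and regularization weight are assumptions for illustration, and the off-the-shelf denoiser stands in for, rather than reproduces, the authors' regularized reconstruction.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(0)

    # Synthetic "concentration map": a bright disc (tumour) on a dark background
    yy, xx = np.mgrid[:128, :128]
    truth = np.where((xx - 64)**2 + (yy - 64)**2 < 20**2, 1.0, 0.1)
    noisy = truth + rng.normal(0, 0.2, truth.shape)

    def cnr(img, fg=truth > 0.5, bg=truth <= 0.5):
        """Contrast-to-noise ratio between foreground and background regions."""
        return (img[fg].mean() - img[bg].mean()) / img[bg].std()

    denoised = denoise_tv_chambolle(noisy, weight=0.15)
    print(f"CNR noisy:       {cnr(noisy):.1f}")
    print(f"CNR TV-denoised: {cnr(denoised):.1f}")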
Mechanistic Insights into Archaeal and Human Argonaute Substrate Binding and Cleavage Properties
Willkomm, Sarah; Zander, Adrian; Grohmann, Dina; Restle, Tobias
2016-01-01
Argonaute (Ago) proteins from all three domains of life are key players in processes that specifically regulate cellular nucleic acid levels. Some of these Ago proteins, among them human Argonaute2 (hAgo2) and Ago from the archaeal organism Methanocaldococcus jannaschii (MjAgo), are able to cleave nucleic acid target strands that are recognised via an Ago-associated complementary guide strand. Here we present an in-depth kinetic side-by-side analysis of hAgo2 and MjAgo guide and target substrate binding as well as target strand cleavage, which enabled us to disclose similarities and differences in the mechanistic pathways as a function of the chemical nature of the substrate. Testing all possible guide-target combinations (i.e. RNA/RNA, RNA/DNA, DNA/RNA and DNA/DNA) with both Ago variants we demonstrate that the molecular mechanism of substrate association is highly conserved among archaeal-eukaryotic Argonautes. Furthermore, we show that hAgo2 binds RNA and DNA guide strands in the same fashion. On the other hand, despite striking homology between the two Ago variants, MjAgo cannot orientate guide RNA substrates in a way that allows interaction with the target DNA in a cleavage-compatible orientation. PMID:27741323
Improving CRISPR-Cas specificity with chemical modifications in single-guide RNAs.
Ryan, Daniel E; Taussig, David; Steinfeld, Israel; Phadnis, Smruti M; Lunstad, Benjamin D; Singh, Madhurima; Vuong, Xuan; Okochi, Kenji D; McCaffrey, Ryan; Olesiak, Magdalena; Roy, Subhadeep; Yung, Chong Wing; Curry, Bo; Sampson, Jeffrey R; Bruhn, Laurakay; Dellinger, Douglas J
2018-01-25
CRISPR systems have emerged as transformative tools for altering genomes in living cells with unprecedented ease, inspiring keen interest in increasing their specificity for perfectly matched targets. We have developed a novel approach for improving specificity by incorporating chemical modifications in guide RNAs (gRNAs) at specific sites in their DNA recognition sequence ('guide sequence') and systematically evaluating their on-target and off-target activities in biochemical DNA cleavage assays and cell-based assays. Our results show that a chemical modification (2'-O-methyl-3'-phosphonoacetate, or 'MP') incorporated at select sites in the ribose-phosphate backbone of gRNAs can dramatically reduce off-target cleavage activities while maintaining high on-target performance, as demonstrated in clinically relevant genes. These findings reveal a unique method for enhancing specificity by chemically modifying the guide sequence in gRNAs. Our approach introduces a versatile tool for augmenting the performance of CRISPR systems for research, industrial and therapeutic applications. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Krishnan, Prakash; Tarricone, Arthur; K-Raman, Purushothaman; Majeed, Farhan; Kapur, Vishal; Gujja, Karthik; Wiley, Jose; Vasquez, Miguel; Lascano, Rheoneil A.; Quiles, Katherine G.; Distin, Tashanne; Fontenelle, Ran; Atallah-Lajam, Farah; Kini, Annapoorna; Sharma, Samin
2017-01-01
Background: The aim of this study was to compare 1-year outcomes for patients with femoropopliteal in-stent restenosis using directional atherectomy guided by intravascular ultrasound (IVUS) versus directional atherectomy guided by angiography. Methods and results: This was a retrospective analysis of patients with femoropopliteal in-stent restenosis treated with IVUS-guided directional atherectomy versus directional atherectomy guided by angiography at a single center between March 2012 and February 2016. Clinically driven target lesion revascularization was the primary endpoint and was evaluated through medical chart review as well as phone-call follow-up. Conclusions: Directional atherectomy guided by IVUS reduces clinically driven target lesion revascularization for patients with femoropopliteal in-stent restenosis. PMID:29265002
Small format digital photogrammetry for applications in the earth sciences
NASA Astrophysics Data System (ADS)
Rieke-Zapp, Dirk
2010-05-01
Photogrammetry is often considered one of the most precise and versatile surveying techniques. The same camera and analysis software can be used for measurements from sub-millimetre to kilometre scale. Such a measurement device is well suited for application by earth scientists working in the field. In this case a small toolset and a straightforward setup best fit the needs of the operator. While a digital camera is typically already part of the field equipment of an earth scientist, the main focus of the field work is often not surveying. A lack of photogrammetric training at the same time requires an easy-to-learn, straightforward surveying technique. A photogrammetric method was developed, aimed primarily at earth scientists, for taking accurate measurements in the field while minimizing the extra bulk and weight of the required equipment. The work included several challenges. A) Definition of an upright coordinate system without heavy and bulky tools like a total station or GNSS sensor. B) Optimization of image acquisition and geometric stability of the image block. C) Identification of a small camera suitable for precise measurements in the field. D) Optimization of the workflow from image acquisition to preparation of images for stereo measurements. E) Introduction of students and non-photogrammetrists to the workflow. Wooden spheres were used as target points in the field. They were more rugged than the ping-pong balls used in a previous setup and available in different sizes. Distances between three spheres were introduced as scale information in a photogrammetric adjustment. The distances were measured with a laser distance meter accurate to 1 mm (1 sigma). The vertical angle between the spheres was measured with the same laser distance meter. The precision of the measurement was 0.3° (1 sigma), which is sufficient, i.e. better than inclination measurements with a geological compass. The upright coordinate system is important to measure the dip angle of geologic features in outcrop. The planimetric coordinate system would be arbitrary, but may easily be oriented to compass north by introducing a direction measurement from a compass. Wooden spheres and a Leica Disto D3 laser distance meter added less than 0.150 kg to the field equipment, considering that a suitable digital camera was already part of it. Identification of a small digital camera suitable for precise measurements was a major part of this work. A group of cameras was calibrated several times over different periods of time on a testfield. Further evaluation involved an accuracy assessment in the field, comparing distances between signalized points calculated from a photogrammetric setup with coordinates derived from a total station survey. The smallest camera in the test required calibration on the job, as the interior orientation changed significantly between testfield calibration and use in the field. We attribute this to the fact that the lens was retracted when the camera was switched off. Fairly stable camera geometry in a compact-size camera with a lens retracting system was accomplished for the Sigma DP1 and DP2 cameras. While the pixel count of these cameras was less than for the Ricoh, the pixel pitch in the Sigma cameras was much larger. Hence, the same mechanical movement would have a smaller per-pixel effect for the Sigma cameras than for the Ricoh camera.
A large pixel pitch may therefore compensate for some camera instability, explaining why cameras with large sensors and larger pixel pitch typically yield better accuracy in object space. Both Sigma cameras weigh approximately 0.250 kg and may even be suitable for use with ultralight aerial vehicles (UAV), which have payload restrictions of 0.200 to 0.300 kg. A set of other available cameras was also tested on a calibration field and on location, showing once again that it is difficult to infer geometric stability from camera specifications. Image acquisition with geometrically stable cameras was fairly straightforward, covering the area of interest with stereo pairs for analysis. We limited our tests to setups with three to five images to minimize the amount of post-processing. The laser dot of the laser distance meter was not visible to the naked eye for distances farther than 5-7 m, which also limited the maximum stereo area that may be covered with this technique. Extrapolating the setup to fairly large areas showed no significant decrease in the accuracy accomplished in object space. Working with a Sigma SD14 SLR camera on a 6 x 18 x 20 m3 volume, the maximum length measurement error ranged between 20 and 30 mm, depending on image setup and analysis. For smaller outcrops even the compact cameras yielded maximum length measurement errors in the mm range, which was considered sufficient for measurements in the earth sciences. In many cases the resolution per pixel was the limiting factor of image analysis rather than accuracy. A field manual was developed guiding novice users and students through this technique. The technique does not trade precision for ease of use; successful users of the presented method therefore easily grow into more advanced photogrammetric methods for high-precision applications. Originally, camera calibration was not part of the methodology for the novice operators. The recent introduction of Camera Calibrator, a low-cost, well-automated software package for camera calibration, allows beginners to calibrate their camera within a couple of minutes. The complete set of calibration parameters can be applied in ERDAS LPS software, easing the workflow. Image orientation was performed in LPS 9.2 software, which was also used for further image analysis.
NASA Technical Reports Server (NTRS)
Everett, Louis J.
1994-01-01
The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from the alignment it is expected to be at, TRAC may give incorrect feedback for the control of the robot. A simple example is if the robot operator thinks the camera is right side up but the camera is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.
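The sensor-registration problem described above, recovering the fixed end-effector-to-camera transform from deliberate robot motions, is commonly posed as the matrix equation AX = XB, where A is a motion of the end effector, B the corresponding motion seen by the camera, and X the unknown offset. The sketch below is a generic two-step (rotation, then translation) least-squares solution in the spirit of Tsai–Lenz, not the specific algorithm in the report; it assumes at least two motions with non-parallel rotation axes.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def solve_ax_xb(A_list, B_list):
        """Estimate X (end-effector -> camera) from motion pairs A_i X = X B_i.

        A_list, B_list: lists of 4x4 homogeneous motions of the end effector (A)
        and the camera (B), with at least two non-parallel rotation axes.
        """
        # Rotation: align the rotation vectors of B_i with those of A_i (Kabsch/SVD)
        a_axes = np.array([R.from_matrix(A[:3, :3]).as_rotvec() for A in A_list])
        b_axes = np.array([R.from_matrix(B[:3, :3]).as_rotvec() for B in B_list])
        H = b_axes.T @ a_axes
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Rx = Vt.T @ D @ U.T

        # Translation: stack (R_Ai - I) t = Rx t_Bi - t_Ai and solve least squares
        M, v = [], []
        for A, B in zip(A_list, B_list):
            M.append(A[:3, :3] - np.eye(3))
            v.append(Rx @ B[:3, 3] - A[:3, 3])
        t, *_ = np.linalg.lstsq(np.vstack(M), np.concatenate(v), rcond=None)

        X = np.eye(4)
        X[:3, :3], X[:3, 3] = Rx, t
        return X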
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
Helmet-mounted displays in long-range-target visual acquisition
NASA Astrophysics Data System (ADS)
Wilkins, Donald F.
1999-07-01
Aircrews have always sought a tactical advantage within the visual range (WVR) arena -- usually defined as 'see the opponent first.' Even with radar and identification friend or foe (IFF) systems, the pilot who visually acquires his opponent first has a significant advantage. The Helmet Mounted Cueing System (HMCS) equipped with a camera offers an opportunity to correct the problems with the previous approaches. By utilizing real-time image enhancement techniques and feeding the image to the pilot on the HMD, the target can be visually acquired well beyond the range provided by the unaided eye. This paper will explore the camera and display requirements for such a system and place those requirements within the context of other requirements, such as weight.
Wang, Xu; Yang, Cheng-Xiong; Chen, Jia-Tong; Yan, Xiu-Ping
2014-04-01
The targetability of a theranostic probe is one of the keys to assuring its theranostic efficiency. Here we show the design and fabrication of a dual-targeting upconversion nanoplatform for two-color fluorescence imaging-guided photodynamic therapy (PDT). The nanoplatform was prepared from 3-aminophenylboronic acid functionalized upconversion nanocrystals (APBA-UCNPs) and hyaluronated fullerene (HAC60) via a specific diol-borate condensation. The two specific ligands of aminophenylboronic acid and hyaluronic acid provide synergistic targeting effects, high targetability, and hence a dramatically elevated uptake of the nanoplatform by cancer cells. The high generation yield of singlet oxygen (¹O₂) due to multiplexed Förster resonance energy transfer between APBA-UCNPs (donor) and HAC60 (acceptor) allows effective therapy. The present nanoplatform shows great potential for highly selective tumor-targeted imaging-guided PDT.
High Resolution Active Optics Observations from the Kepler Follow-up Observation Program
NASA Astrophysics Data System (ADS)
Gautier, Thomas N.; Ciardi, D. R.; Marcy, G. W.; Hirsch, L.
2014-01-01
The ground based follow-up observation program for candidate exoplanets discovered with the Kepler observatory has supported a major effort for high resolution imaging of candidate host stars using adaptive optics wave-front correction (AO), speckle imaging and lucky imaging. These images allow examination of the sky as close as a few tenths of an arcsecond from the host stars to detect background objects that might be the source of the Kepler transit signal instead of the host star. This poster reports on the imaging done with AO cameras on the Keck, Palomar 5m and Shane 3m (Lick Observatory) which have been used to obtain high resolution images of over 500 Kepler Object of Interest (KOI) exoplanet candidate host stars. All observations were made at near infrared wavelengths in the J, H and K bands, mostly using the host target star as the AO guide star. Details of the sensitivity to background objects actually attained by these observations and the number of background objects discovered are presented. Implications to the false positive rate of the Kepler candidates are discussed.
Generation and Performance of Automated Jarosite Mineral Detectors for Vis/NIR Spectrometers at Mars
NASA Technical Reports Server (NTRS)
Gilmore, M. S.; Bornstein, B.; Merrill, M. D.; Castano, R.; Greenwood, J. P.
2005-01-01
Sulfate salt discoveries at the Eagle and Endurance craters in Meridiani Planum by the Mars Exploration Rover Opportunity have proven mineralogically the existence and involvement of water in Mars' past. Visible and near-infrared spectrometers like the Mars Express OMEGA, the Mars Reconnaissance Orbiter CRISM and the 2009 Mars Science Laboratory rover cameras are powerful tools for the identification of water-bearing salts and other high-priority minerals at Mars. The increasing spectral resolution and rover mission lifetimes represented by these missions currently necessitate data compression in order to ease downlink restrictions. On-board data processing techniques can be used to guide the selection, measurement and return of scientifically important data from relevant targets, thus easing bandwidth stress and increasing scientific return. We have developed an automated support vector machine (SVM) detector operating in the visible/near-infrared (VisNIR, 300-2500 nm) spectral range trained to recognize the mineral jarosite (typically KFe3(SO4)2(OH)6), positively identified by the Mössbauer spectrometer at Meridiani Planum. Additional information is included in the original extended abstract.
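A detector of this kind is, at its core, a supervised classifier over reflectance spectra. The sketch below shows how such a spectral SVM could be assembled with scikit-learn on synthetic Vis/NIR spectra; the band position, spectral shapes and training set are placeholders invented for illustration, not the mission spectra or the authors' training data.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    wavelengths = np.linspace(300, 2500, 220)          # nm, Vis/NIR sampling

    def synth_spectrum(has_mineral):
        """Toy spectrum: broad continuum plus an absorption band if labelled positive."""
        s = 0.6 + 0.1 * np.sin(wavelengths / 400.0) + rng.normal(0, 0.02, wavelengths.size)
        if has_mineral:
            s -= 0.15 * np.exp(-((wavelengths - 2265.0) / 40.0) ** 2)  # placeholder band
        return s

    X = np.array([synth_spectrum(i % 2 == 0) for i in range(400)])
    y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X[:300], y[:300])
    print("held-out accuracy:", clf.score(X[300:], y[300:]))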
High-precision Orbit Fitting and Uncertainty Analysis of (486958) 2014 MU69
NASA Astrophysics Data System (ADS)
Porter, Simon B.; Buie, Marc W.; Parker, Alex H.; Spencer, John R.; Benecchi, Susan; Tanga, Paolo; Verbiscer, Anne; Kavelaars, J. J.; Gwyn, Stephen D. J.; Young, Eliot F.; Weaver, H. A.; Olkin, Catherine B.; Parker, Joel W.; Stern, S. Alan
2018-07-01
NASA’s New Horizons spacecraft will conduct a close flyby of the cold-classical Kuiper Belt Object (KBO) designated (486958) 2014 MU69 on 2019 January 1. At a heliocentric distance of 44 au, “MU69” will be the most distant object ever visited by a spacecraft. To enable this flyby, we have developed an extremely high-precision orbit fitting and uncertainty processing pipeline, making maximal use of the Hubble Space Telescope’s Wide Field Camera 3 (WFC3) and pre-release versions of the ESA Gaia Data Release 2 (DR2) catalog. This pipeline also enabled successful predictions of a stellar occultation by MU69 in 2017 July. We describe how we process the WFC3 images to match the Gaia DR2 catalog, extract positional uncertainties for this extremely faint target (typically 140 photons per WFC3 exposure), and translate those uncertainties into probability distribution functions for MU69 at any given time. We also describe how we use these uncertainties to guide New Horizons, plan stellar occultations of MU69, and derive MU69's orbital evolution and long-term stability.
A novel ultrasound-guided shoulder arthroscopic surgery
NASA Astrophysics Data System (ADS)
Tyryshkin, K.; Mousavi, P.; Beek, M.; Chen, T.; Pichora, D.; Abolmaesumi, P.
2006-03-01
This paper presents a novel ultrasound-guided computer system for arthroscopic surgery of the shoulder joint. Intraoperatively, the system tracks and displays the surgical instruments, such as arthroscope and arthroscopic burrs, relative to the anatomy of the patient. The purpose of this system is to improve the surgeon's perception of the three-dimensional space within the anatomy of the patient in which the instruments are manipulated and to provide guidance towards the targeted anatomy. Pre-operatively, computed tomography images of the patient are acquired to construct virtual three-dimensional surface models of the shoulder bone structure. Intra-operatively, live ultrasound images of pre-selected regions of the shoulder are captured using an ultrasound probe whose three-dimensional position is tracked by an optical camera. These images are used to register the surface model to the anatomy of the patient in the operating room. An initial alignment is obtained by matching at least three points manually selected on the model to their corresponding points identified on the ultrasound images. The registration is then improved with an iterative closest point or a sequential least squares estimation technique. In the present study the registration results of these techniques are compared. After the registration, surgical instruments are displayed relative to the surface model of the patient on a graphical screen visible to the surgeon. Results of laboratory experiments on a shoulder phantom indicate acceptable registration results and sufficiently fast overall system performance to be applicable in the operating room.
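The registration pipeline described above (a paired-point initial alignment from at least three manually matched points, refined by iterative closest point) can be sketched compactly. The code below is a generic illustration using the Horn/Kabsch closed-form rigid fit and a basic ICP loop with SciPy's KD-tree; it is not the system's implementation, and the convergence checks and outlier handling a clinical system would need are omitted.

    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(src, dst):
        """Least-squares rigid transform (R, t) mapping paired points src -> dst."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Rm = Vt.T @ D @ U.T
        return Rm, cd - Rm @ cs

    def icp(model_pts, us_pts, R0, t0, iters=30):
        """Refine an initial alignment by repeatedly matching each ultrasound
        point to its closest model point and re-solving the rigid fit."""
        tree = cKDTree(model_pts)
        Rm, t = R0, t0
        for _ in range(iters):
            moved = us_pts @ Rm.T + t
            _, idx = tree.query(moved)
            Rm, t = rigid_fit(us_pts, model_pts[idx])
        return Rm, t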
NASA Astrophysics Data System (ADS)
Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee
2016-05-01
Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has a built-in Global Positioning System (GPS), which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
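The geolocation calculation itself, observer position plus compass bearing, elevation angle and laser range, reduces to a local east-north-up offset applied to the GPS fix. The sketch below is a minimal flat-earth version of that geometry; the function name, the spherical-earth metres-per-degree factors and the example numbers are assumptions for illustration, not the Firefly implementation.

    import math

    def geolocate(lat_deg, lon_deg, alt_m, bearing_deg, elevation_deg, range_m):
        """Target position from observer position, compass bearing, elevation
        angle and laser range, using a local flat-earth approximation
        (adequate for ranges of a few kilometres)."""
        cos_el = math.cos(math.radians(elevation_deg))
        north = range_m * cos_el * math.cos(math.radians(bearing_deg))
        east  = range_m * cos_el * math.sin(math.radians(bearing_deg))
        up    = range_m * math.sin(math.radians(elevation_deg))

        # Approximate metres per degree of latitude/longitude (spherical earth)
        m_per_deg_lat = 111_320.0
        m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat_deg))

        return (lat_deg + north / m_per_deg_lat,
                lon_deg + east / m_per_deg_lon,
                alt_m + up)

    # Example: target 2 km away on a bearing of 045 deg, level line of sight
    print(geolocate(51.5, -0.12, 30.0, 45.0, 0.0, 2000.0))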
NASA Astrophysics Data System (ADS)
Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi
2011-03-01
Nano-biophotonics applications will benefit from new fluorescent microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) on large biological structures (membranes) with fast frame rates (1000 Hz). This trend tends to push the photon detectors to the single-photon counting regime and the camera acquisition system to real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper aims to offer a different approach from that of electron-multiplied CCD (EMCCD) technology and to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to respond to this challenge, thanks to the linear gain of the accelerating high voltage of the photo-cathode, to the possible ultra-fast frame rate of CMOS sensors and to the single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS with its acquisition system. The proof of concept for single-photon-based tracking of multiple single emitters is the main result of this paper.
NASA Astrophysics Data System (ADS)
Zhao, Ziyue; Gan, Xiaochuan; Zou, Zhi; Ma, Liqun
2018-01-01
The dynamic envelope measurement plays a very important role in the external dimension design of high-speed trains. Currently, there is no digital measurement system to solve this problem. This paper develops an optoelectronic measurement system using monocular digital cameras, and presents research on the measurement theory, visual target design, calibration algorithm design, software programming and so on. This system consists of several CMOS digital cameras, several luminous targets for measuring, a scale bar, data processing software and a terminal computer. The system has such advantages as large measurement scale, high degree of automation, strong anti-interference ability, noise rejection and real-time measurement. In this paper, we address key technologies such as the transformation, storage and processing of the multiple cameras' high-resolution digital images. The experimental data show that the repeatability of the system is within 0.02 mm and the distance error of the system is within 0.12 mm over the whole workspace. These experiments verify the rationality of the system scheme and the correctness, precision and effectiveness of the relevant methods.
Accurate attitude determination of the LACE satellite
NASA Technical Reports Server (NTRS)
Miglin, M. F.; Campion, R. E.; Lemos, P. J.; Tran, T.
1993-01-01
The Low-power Atmospheric Compensation Experiment (LACE) satellite, launched in February 1990 by the Naval Research Laboratory, uses a magnetic damper on a gravity gradient boom and a momentum wheel with its axis perpendicular to the plane of the orbit to stabilize and maintain its attitude. Satellite attitude is determined using three types of sensors: a conical Earth scanner, a set of sun sensors, and a magnetometer. The Ultraviolet Plume Instrument (UVPI), on board LACE, consists of two intensified CCD cameras and a gimballed pointing mirror. The primary purpose of the UVPI is to image rocket plumes from space in the ultraviolet and visible wavelengths. Secondary objectives include imaging stars, atmospheric phenomena, and ground targets. The problem facing the UVPI experimenters is that the sensitivity of the LACE satellite attitude sensors is not always adequate to correctly point the UVPI cameras. Our solution is to point the UVPI cameras at known targets and use the information thus gained to improve attitude measurements. This paper describes the three methods developed to determine improved attitude values using the UVPI for both real-time operations and post-observation analysis.
In-Situ Cameras for Radiometric Correction of Remotely Sensed Data
NASA Astrophysics Data System (ADS)
Kautz, Jess S.
The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and of the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full-scale implementation.
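The empirical line method (ELM) referred to above is, per band, a linear regression between at-sensor values over calibration targets of known ground reflectance; the fitted gain and offset are then applied to every pixel in the scene. The sketch below shows that generic calculation; the function names, the two-target setup and the numbers are placeholders for illustration, not values or code from the dissertation.

    import numpy as np

    def elm_fit(target_dn, target_reflectance):
        """Per-band linear fit reflectance = gain * DN + offset from calibration targets.

        target_dn:          (n_targets, n_bands) at-sensor values over the targets
        target_reflectance: (n_targets, n_bands) known ground reflectances
        """
        n_bands = target_dn.shape[1]
        gains, offsets = np.empty(n_bands), np.empty(n_bands)
        for b in range(n_bands):
            gains[b], offsets[b] = np.polyfit(target_dn[:, b], target_reflectance[:, b], 1)
        return gains, offsets

    def elm_apply(scene_dn, gains, offsets):
        """Apply the per-band correction to a (rows, cols, n_bands) scene."""
        return scene_dn * gains + offsets

    # Synthetic example with bright and dark calibration targets and 4 bands
    dn   = np.array([[2000., 1800., 1600., 1500.], [400., 380., 350., 330.]])
    refl = np.array([[0.48, 0.47, 0.46, 0.45],     [0.05, 0.05, 0.04, 0.04]])
    g, o = elm_fit(dn, refl)
    scene = np.full((2, 2, 4), 1200.0)
    print(elm_apply(scene, g, o))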
Joseph, Thomas T; Osman, Roman
2012-01-01
In RNA interference, a guide strand derived from a short dsRNA such as a microRNA (miRNA) is loaded into Argonaute, the central protein in the RNA Induced Silencing Complex (RISC) that silences messenger RNAs on a sequence-specific basis. The positions of any mismatched base pairs in an miRNA determine which Argonaute subtype is used. Subsequently, the Argonaute-guide complex binds and silences complementary target mRNAs; certain Argonautes cleave the target. Mismatches between guide strand and the target mRNA decrease cleavage efficiency. Thus, loading and silencing both require that signals about the presence of a mismatched base pair are communicated from the mismatch site to effector sites. These effector sites include the active site, to prevent target cleavage; the binding groove, to modify nucleic acid binding affinity; and surface allosteric sites, to control recruitment of additional proteins to form the RISC. To examine how such signals may be propagated, we analyzed the network of internal allosteric pathways in Argonaute exhibited through correlations of residue-residue interactions. The emerging network can be described as a set of pathways emanating from the core of the protein near the active site, distributed into the bulk of the protein, and converging upon a distributed cluster of surface residues. Nucleotides in the guide strand "seed region" have a stronger relationship with the protein than other nucleotides, concordant with their importance in sequence selectivity. Finally, any of several seed region guide-target mismatches cause certain Argonaute residues to have modified correlations with the rest of the protein. This arises from the aggregation of relatively small interaction correlation changes distributed across a large subset of residues. These residues are in effector sites: the active site, binding groove, and surface, implying that direct functional consequences of guide-target mismatches are mediated through the cumulative effects of a large number of internal allosteric pathways.
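The analysis described above rests on a generic recipe: compute correlations between time series of residue-residue interactions, keep the strong ones as network edges, and look for pathways connecting a perturbation site to effector sites. The sketch below illustrates that recipe on random placeholder data with NumPy and NetworkX; the sizes, threshold, injected correlation and site indices are all assumptions, not the authors' trajectory data or pipeline.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)

    # Placeholder trajectory: per-frame interaction values for 50 residue pairs
    n_frames, n_pairs = 1000, 50
    traj = rng.normal(size=(n_frames, n_pairs))
    traj[:, 10] += 0.8 * traj[:, 25]          # inject one correlated pair of interactions

    corr = np.corrcoef(traj, rowvar=False)    # pairwise correlation of interactions

    # Keep only strong correlations as network edges
    G = nx.Graph()
    G.add_nodes_from(range(n_pairs))
    strong = np.argwhere(np.triu(np.abs(corr), k=1) > 0.5)
    G.add_edges_from(map(tuple, strong))

    # Pathways between a "mismatch site" and an "effector site" (indices arbitrary)
    print(nx.has_path(G, 10, 25), list(nx.shortest_path(G, 10, 25)))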