Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
NASA Technical Reports Server (NTRS)
Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
Influence of range-gated intensifiers on underwater imaging system SNR
NASA Astrophysics Data System (ADS)
Wang, Xia; Hu, Ling; Zhi, Qiang; Chen, Zhen-yue; Jin, Wei-qi
2013-08-01
Range-gated technology has been a hot research field in recent years due to its high effective back scattering eliminating. As a result, it can enhance the contrast between a target and its background and extent the working distance of the imaging system. The underwater imaging system is required to have the ability to image in low light level conditions, as well as the ability to eliminate the back scattering effect, which means that the receiver has to be high-speed external trigger function, high resolution, high sensitivity, low noise, higher gain dynamic range. When it comes to an intensifier, the noise characteristics directly restrict the observation effect and range of the imaging system. The background noise may decrease the image contrast and sharpness, even covering the signal making it impossible to recognize the target. So it is quite important to investigate the noise characteristics of intensifiers. SNR is an important parameter reflecting the noise features of a system. Through the use of underwater laser range-gated imaging prediction model, and according to the linear SNR system theory, the gated imaging noise performance of the present market adopted super second generation and generation Ⅲ intensifiers were theoretically analyzed. Based on the active laser underwater range-gated imaging model, the effect to the system by gated intensifiers and the relationship between the system SNR and MTF were studied. Through theoretical and simulation analysis to the image intensifier background noise and SNR, the different influence on system SNR by super second generation and generation Ⅲ ICCD was obtained. Range-gated system SNR formula was put forward, and compared the different effect influence on the system by using two kind of ICCDs was compared. According to the matlab simulation, a detailed analysis was carried out theoretically. All the work in this paper lays a theoretical foundation to further eliminating back scattering effect, improving image SNR, designing and manufacturing higher performance underwater range-gated imaging systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, E; Chen, H; Polf, J
2016-06-15
Purpose: To report on the initial developments of a clinical 3-dimensional (3D) prompt gamma (PG) imaging system for proton radiotherapy range verification. Methods: The new imaging system under development consists of a prototype Compton camera to measure PG emission during proton beam irradiation and software to reconstruct, display, and analyze 3D images of the PG emission. For initial test of the system, PGs were measured with a prototype CC during a 200 cGy dose delivery with clinical proton pencil beams (ranging from 100 MeV – 200 MeV) to a water phantom. Measurements were also carried out with the CC placedmore » 15 cm from the phantom for a full range 150 MeV pencil beam and with its range shifted by 2 mm. Reconstructed images of the PG emission were displayed by the clinical PG imaging software and compared to the dose distributions of the proton beams calculated by a commercial treatment planning system. Results: Measurements made with the new PG imaging system showed that a 3D image could be reconstructed from PGs measured during the delivery of 200 cGy of dose, and that shifts in the Bragg peak range of as little as 2 mm could be detected. Conclusion: Initial tests of a new PG imaging system show its potential to provide 3D imaging and range verification for proton radiotherapy. Based on these results, we have begun work to improve the system with the goal that images can be produced from delivery of as little as 20 cGy so that the system could be used for in-vivo proton beam range verification on a daily basis.« less
Research on range-gated laser active imaging seeker
NASA Astrophysics Data System (ADS)
You, Mu; Wang, PengHui; Tan, DongJie
2013-09-01
Compared with other imaging methods such as millimeter wave imaging, infrared imaging and visible light imaging, laser imaging provides both a 2-D array of reflected intensity data as well as 2-D array of range data, which is the most important data for use in autonomous target acquisition .In terms of application, it can be widely used in military fields such as radar, guidance and fuse. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology .The seeker system presented here consist of two important part, one is laser image system, which uses a negative lens to diverge the light from a pulse laser to flood illuminate a target, return light is collected by a camera lens, each laser pulse triggers the camera delay and shutter. The other is stabilization gimbals, which is designed to be a rotatable structure both in azimuth and elevation angles. The laser image system consists of transmitter and receiver. The transmitter is based on diode pumped solid-state lasers that are passively Q-switched at 532nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths less than 900nm.The receiver is image intensifier tube's micro channel plate coupled into high sensitivity charge coupled device camera. The image has been taken at range over one kilometer and can be taken at much longer range in better weather. Image frame frequency can be changed according to requirement of guidance with modifiable range gate, The instantaneous field of views of the system was found to be 2×2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the outdoor field. Two different kinds of buildings have been chosen as target, which is located at range from 200m up to 1000m.To simulate dynamic process of range change between missile and target, the seeker system has been placed on the truck vehicle running along the road in an expected speed. The test result shows qualified image and good performance of the seeker system.
High dynamic range coding imaging system
NASA Astrophysics Data System (ADS)
Wu, Renfan; Huang, Yifan; Hou, Guangqi
2014-10-01
We present a high dynamic range (HDR) imaging system design scheme based on coded aperture technique. This scheme can help us obtain HDR images which have extended depth of field. We adopt Sparse coding algorithm to design coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. With the guide of the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use some existing algorithms to fuse and display a HDR image by those LDR images. We build an optical simulation model and get some simulation images to verify the novel system.
Fast exposure time decision in multi-exposure HDR imaging
NASA Astrophysics Data System (ADS)
Piao, Yongjie; Jin, Guang
2012-10-01
Currently available imaging and display system exists the problem of insufficient dynamic range, and the system cannot restore all the information for an high dynamic range (HDR) scene. The number of low dynamic range(LDR) image samples and fastness of exposure time decision impacts the real-time performance of the system dramatically. In order to realize a real-time HDR video acquisition system, this paper proposed a fast and robust method for exposure time selection in under and over exposure area which is based on system response function. The method utilized the monotony of the imaging system. According to this characteristic the exposure time is adjusted to an initial value to make the median value of the image equals to the middle value of the system output range; then adjust the exposure time to make the pixel value on two sides of histogram be the middle value of the system output range. Thus three low dynamic range images are acquired. Experiments show that the proposed method for adjusting the initial exposure time can converge in two iterations which is more fast and stable than average gray control method. As to the exposure time adjusting in under and over exposed area, the proposed method can use the dynamic range of the system more efficiently than fixed exposure time method.
Geometrical calibration of an AOTF hyper-spectral imaging system
NASA Astrophysics Data System (ADS)
Špiclin, Žiga; Katrašnik, Jaka; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2010-02-01
Optical aberrations present an important problem in optical measurements. Geometrical calibration of an imaging system is therefore of the utmost importance for achieving accurate optical measurements. In hyper-spectral imaging systems, the problem of optical aberrations is even more pronounced because optical aberrations are wavelength dependent. Geometrical calibration must therefore be performed over the entire spectral range of the hyper-spectral imaging system, which is usually far greater than that of the visible light spectrum. This problem is especially adverse in AOTF (Acousto- Optic Tunable Filter) hyper-spectral imaging systems, as the diffraction of light in AOTF filters is dependent on both wavelength and angle of incidence. Geometrical calibration of hyper-spectral imaging system was performed by stable caliber of known dimensions, which was imaged at different wavelengths over the entire spectral range. The acquired images were then automatically registered to the caliber model by both parametric and nonparametric transformation based on B-splines and by minimizing normalized correlation coefficient. The calibration method was tested on an AOTF hyper-spectral imaging system in the near infrared spectral range. The results indicated substantial wavelength dependent optical aberration that is especially pronounced in the spectral range closer to the infrared part of the spectrum. The calibration method was able to accurately characterize the aberrations and produce transformations for efficient sub-pixel geometrical calibration over the entire spectral range, finally yielding better spatial resolution of hyperspectral imaging system.
110 °C range athermalization of wavefront coding infrared imaging systems
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong
2017-09-01
110 °C range athermalization is significant but difficult for designing infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with less manufacturing errors and a decoding method based on shrinkage function. The qualitative experiments prove that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) working well over a temperature range of 110 °C; (2) extending the focal depth up to 15.2 times; (3) achieving a decoded image being approximate to its corresponding in-focus infrared image, with a mean structural similarity index (MSSIM) value greater than 0.85.
Target recognition for ladar range image using slice image
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Wang, Liang
2015-12-01
A shape descriptor and a complete shape-based recognition system using slice images as geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by three-dimensional Hough transform and the corresponding mathematical transformation. The system consists of two processes, the model library construction and recognition. In the model library construction process, a series of range images are obtained after the model object is sampled at preset attitude angles. Then, all the range images are converted into slice images. The number of slice images is reduced by clustering analysis and finding a representation to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice image in the model library. The recognition results depend on the comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and comparison between the slice image representation method and moment invariants representation method is performed. The experimental results show that whether in conditions without noise or with ladar noise, the system has a high recognition rate and low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.
Plenoptic Imager for Automated Surface Navigation
NASA Technical Reports Server (NTRS)
Zollar, Byron; Milder, Andrew; Milder, Andrew; Mayo, Michael
2010-01-01
An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
Electrically optofluidic zoom system with a large zoom range and high-resolution image.
Li, Lei; Yuan, Rong-Ying; Wang, Jin-Hui; Wang, Qiong-Hua
2017-09-18
We report an electrically controlled optofluidic zoom system which can achieve a large continuous zoom change and high-resolution image. The zoom system consists of an optofluidic zoom objective and a switchable light path which are controlled by two liquid optical shutters. The proposed zoom system can achieve a large tunable focal length range from 36mm to 92mm. And in this tuning range, the zoom system can correct aberrations dynamically, thus the image resolution is high. Due to large zoom range, the proposed imaging system incorporates both camera configuration and telescope configuration into one system. In addition, the whole system is electrically controlled by three electrowetting liquid lenses and two liquid optical shutters, therefore, the proposed system is very compact and free of mechanical moving parts. The proposed zoom system has potential to take place of conventional zoom systems.
Model-based restoration using light vein for range-gated imaging systems.
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen
2016-09-10
The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make the state-of-the-art restoration method fail. In this paper, we present a restoration method especially for range-gated imaging systems. The degradation process is divided into two parts: the static part and the dynamic part. For the static part, we establish the physical model of the imaging system according to the laser transmission theory, and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the fuzzy parameter of the atmospheric disturbance and platform movement, which make contributions to the dynamic PSF. Finally, combined with the static and dynamic PSF, an iterative updating framework is used to restore the image. Compared with the state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieve better performance in a range-gated imaging system.
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that could correct off-axis aberrations of the tilt relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and the field experiments proved the validity and reliability our work.
Local dynamic range compensation for scanning electron microscope imaging system.
Sim, K S; Huang, Y H
2015-01-01
This is the extended project by introducing the modified dynamic range histogram modification (MDRHM) and is presented in this paper. This technique is used to enhance the scanning electron microscope (SEM) imaging system. By comparing with the conventional histogram modification compensators, this technique utilizes histogram profiling by extending the dynamic range of each tile of an image to the limit of 0-255 range while retains its histogram shape. The proposed technique yields better image compensation compared to conventional methods. © Wiley Periodicals, Inc.
Toward 1-mm depth precision with a solid state full-field range imaging system
NASA Astrophysics Data System (ADS)
Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.
2006-02-01
Previously, we demonstrated a novel heterodyne based solid-state full-field range-finding imaging system. This system is comprised of modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided with 1 Hz difference between the LEDs and image intensifier. A sequence of images of the resulting beating intensifier output are captured and processed to determine phase and hence distance to the object for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision in the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
Video-rate or high-precision: a flexible range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.
2008-02-01
A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high resolution (512-by-512) pixels and high precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition is fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. Mw centroid of the laser illuminated spot is dm obtained and compared with a predetermined reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the, range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.
2007-09-01
We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements by heterodyning intensity-modulated illumination with a gain modulation intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m has been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
Target recognition of log-polar ladar range images using moment invariants
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong
2017-01-01
The ladar range image has received considerable attentions in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system combined moment invariants and backpropagation neural network are selected as shape descriptor and shape classifier, respectively. In order to fully analyze the effect of log-polar sampling pattern on recognition result, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly by log-polar range images, translation, rotation and scaling invariant properties of combined moments will be invalid (ii) when object is located in the center of field of view, recognition rate of log-polar range images is less sensitive to the changing of field of view (iii) as object position changes from center to edge of field of view, recognition performance of log-polar range images will decline dramatically (iv) log-polar range images has a better noise robustness than Cartesian range images. Finally, we give a suggestion that it is better to divide field of view into recognition area and searching area in the real application.
Effects of Resolution, Range, and Image Contrast on Target Acquisition Performance.
Hollands, Justin G; Terhaar, Phil; Pavlovic, Nada J
2018-05-01
We sought to determine the joint influence of resolution, target range, and image contrast on the detection and identification of targets in simulated naturalistic scenes. Resolution requirements for target acquisition have been developed based on threshold values obtained using imaging systems, when target range was fixed, and image characteristics were determined by the system. Subsequent work has examined the influence of factors like target range and image contrast on target acquisition. We varied the resolution and contrast of static images in two experiments. Participants (soldiers) decided whether a human target was located in the scene (detection task) or whether a target was friendly or hostile (identification task). Target range was also varied (50-400 m). In Experiment 1, 30 participants saw color images with a single target exemplar. In Experiment 2, another 30 participants saw monochrome images containing different target exemplars. The effects of target range and image contrast were qualitatively different above and below 6 pixels per meter of target for both tasks in both experiments. Target detection and identification performance were a joint function of image resolution, range, and contrast for both color and monochrome images. The beneficial effects of increasing resolution for target acquisition performance are greater for closer (larger) targets.
A new compact, cost-efficient concept for underwater range-gated imaging: the UTOFIA project
NASA Astrophysics Data System (ADS)
Mariani, Patrizio; Quincoces, Iñaki; Galparsoro, Ibon; Bald, Juan; Gabiña, Gorka; Visser, Andy; Jónasdóttir, Sigrun; Haugholt, Karl Henrik; Thorstensen, Jostein; Risholm, Petter; Thielemann, Jens
2017-04-01
Underwater Time Of Flight Image Acquisition system (UTOFIA) is a recently launched H2020 project (H2020 - 633098) to develop a compact and cost-effective underwater imaging system especially suited for observations in turbid environments. The UTOFIA project targets technology that can overcome the limitations created by scattering, by introducing cost-efficient range-gated imaging for underwater applications. This technology relies on a image acquisition principle that can extends the imaging range of the cameras 2-3 times respect to other cameras. Moreover, the system will simultaneously capture 3D information of the observed objects. Today range-gated imaging is not widely used, as it relies on specialised optical components making systems large and costly. Recent technology developments have made it possible a significant (2-3 times) reduction in size, complexity and cost of underwater imaging systems, whilst addressing the scattering issues at the same time. By acquiring simultaneous 3D data, the system allows to accurately measure the absolute size of marine life and their spatial relationship to their habitat, enhancing the precision of fish stock monitoring and ecology assessment, hence supporting proper management of marine resources. Additionally, the larger observed volume and the improved image quality make the system suitable for cost-effective underwater surveillance operations in e.g. fish farms, underwater infrastructures. The system can be integrated into existing ocean observatories for real time acquisition and can greatly advance present efforts in developing species recognition algorithms, given the additional features provided, the improved image quality and the independent illumination source based on laser. First applications of the most recent prototype of the imaging system will be provided including inspection of underwater infrastructures and observations of marine life under different environmental conditions.
System and Method for Scan Range Gating
NASA Technical Reports Server (NTRS)
Lindemann, Scott (Inventor); Zuk, David M. (Inventor)
2017-01-01
A system for scanning light to define a range gated signal includes a pulsed coherent light source that directs light into the atmosphere, a light gathering instrument that receives the light modified by atmospheric backscatter and transfers the light onto an image plane, a scanner that scans collimated light from the image plane to form a range gated signal from the light modified by atmospheric backscatter, a control circuit that coordinates timing of a scan rate of the scanner and a pulse rate of the pulsed coherent light source so that the range gated signal is formed according to a desired range gate, an optical device onto which an image of the range gated signal is scanned, and an interferometer to which the image of the range gated signal is directed by the optical device. The interferometer is configured to modify the image according to a desired analysis.
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of a 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images with various positions to the micro lens array. Based on resolution priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and adjusts the vergence-accommodation conflict of a traditional augmented reality display.
Image dissector camera system study
NASA Technical Reports Server (NTRS)
Howell, L.
1984-01-01
Various aspects of a rendezvous and docking system using an image dissector detector as compared to a GaAs detector were discussed. Investigation into a gimbled scanning system is also covered and the measured video response curves from the image dissector camera are presented. Rendezvous will occur at ranges greater than 100 meters. The maximum range considered was 1000 meters. During docking, the range, range-rate, angle, and angle-rate to each reflector on the satellite must be measured. Docking range will be from 3 to 100 meters. The system consists of a CW laser diode transmitter and an image dissector receiver. The transmitter beam is amplitude modulated with three sine wave tones for ranging. The beam is coaxially combined with the receiver beam. Mechanical deflection of the transmitter beam, + or - 10 degrees in both X and Y, can be accomplished before or after it is combined with the receiver beam. The receiver will have a field-of-view (FOV) of 20 degrees and an instantaneous field-of-view (IFOV) of two milliradians (mrad) and will be electronically scanned in the image dissector. The increase in performance obtained from the GaAs photocathode is not needed to meet the present performance requirements.
Kuzmak, P. M.; Dayhoff, R. E.
1992-01-01
There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide-range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner. PMID:1482906
Kuzmak, P M; Dayhoff, R E
1992-01-01
There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide-range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
Single-frequency 3D synthetic aperture imaging with dynamic metasurface antennas.
Boyarsky, Michael; Sleasman, Timothy; Pulido-Mancera, Laura; Diebold, Aaron V; Imani, Mohammadreza F; Smith, David R
2018-05-20
Through aperture synthesis, an electrically small antenna can be used to form a high-resolution imaging system capable of reconstructing three-dimensional (3D) scenes. However, the large spectral bandwidth typically required in synthetic aperture radar systems to resolve objects in range often requires costly and complex RF components. We present here an alternative approach based on a hybrid imaging system that combines a dynamically reconfigurable aperture with synthetic aperture techniques, demonstrating the capability to resolve objects in three dimensions (3D), with measurements taken at a single frequency. At the core of our imaging system are two metasurface apertures, both of which consist of a linear array of metamaterial irises that couple to a common waveguide feed. Each metamaterial iris has integrated within it a diode that can be biased so as to switch the element on (radiating) or off (non-radiating), such that the metasurface antenna can produce distinct radiation profiles corresponding to different on/off patterns of the metamaterial element array. The electrically large size of the metasurface apertures enables resolution in range and one cross-range dimension, while aperture synthesis provides resolution in the other cross-range dimension. The demonstrated imaging capabilities of this system represent a step forward in the development of low-cost, high-performance 3D microwave imaging systems.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.
Full range line-field parallel swept source imaging utilizing digital refocusing
NASA Astrophysics Data System (ADS)
Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.
2015-12-01
We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
Improved proton CT imaging using a bismuth germanium oxide scintillator.
Tanaka, Sodai; Nishio, Teiji; Tsuneda, Masato; Matsushita, Keiichiro; Kabuki, Shigeto; Uesaka, Mitsuru
2018-02-02
Range uncertainty is among the most formidable challenges associated with the treatment planning of proton therapy. Proton imaging, which includes proton radiography and proton computed tomography (pCT), is a useful verification tool. We have developed a pCT detection system that uses a thick bismuth germanium oxide (BGO) scintillator and a CCD camera. The current method is based on a previous detection system that used a plastic scintillator, and implements improved image processing techniques. In the new system, the scintillation light intensity is integrated along the proton beam path by the BGO scintillator, and acquired as a two-dimensional distribution with the CCD camera. The range of a penetrating proton is derived from the integrated light intensity using a light-to-range conversion table, and a pCT image can be reconstructed. The proton range in the BGO scintillator is shorter than in the plastic scintillator, so errors due to extended proton ranges can be reduced. To demonstrate the feasibility of the pCT system, an experiment was performed using a 70 MeV proton beam created by the AVF930 cyclotron at the National Institute of Radiological Sciences. The accuracy of the light-to-range conversion table, which is susceptible to errors due to its spatial dependence, was investigated, and the errors in the acquired pixel values were less than 0.5 mm. Images of various materials were acquired, and the pixel-value errors were within 3.1%, which represents an improvement over previous results. We also obtained a pCT image of an edible chicken piece, the first of its kind for a biological material, and internal structures approximately one millimeter in size were clearly observed. This pCT imaging system is fast and simple, and based on these findings, we anticipate that we can acquire 200 MeV pCT images using the BGO scintillator system.
Improved proton CT imaging using a bismuth germanium oxide scintillator
NASA Astrophysics Data System (ADS)
Tanaka, Sodai; Nishio, Teiji; Tsuneda, Masato; Matsushita, Keiichiro; Kabuki, Shigeto; Uesaka, Mitsuru
2018-02-01
Range uncertainty is among the most formidable challenges associated with the treatment planning of proton therapy. Proton imaging, which includes proton radiography and proton computed tomography (pCT), is a useful verification tool. We have developed a pCT detection system that uses a thick bismuth germanium oxide (BGO) scintillator and a CCD camera. The current method is based on a previous detection system that used a plastic scintillator, and implements improved image processing techniques. In the new system, the scintillation light intensity is integrated along the proton beam path by the BGO scintillator, and acquired as a two-dimensional distribution with the CCD camera. The range of a penetrating proton is derived from the integrated light intensity using a light-to-range conversion table, and a pCT image can be reconstructed. The proton range in the BGO scintillator is shorter than in the plastic scintillator, so errors due to extended proton ranges can be reduced. To demonstrate the feasibility of the pCT system, an experiment was performed using a 70 MeV proton beam created by the AVF930 cyclotron at the National Institute of Radiological Sciences. The accuracy of the light-to-range conversion table, which is susceptible to errors due to its spatial dependence, was investigated, and the errors in the acquired pixel values were less than 0.5 mm. Images of various materials were acquired, and the pixel-value errors were within 3.1%, which represents an improvement over previous results. We also obtained a pCT image of an edible chicken piece, the first of its kind for a biological material, and internal structures approximately one millimeter in size were clearly observed. This pCT imaging system is fast and simple, and based on these findings, we anticipate that we can acquire 200 MeV pCT images using the BGO scintillator system.
Test technology on divergence angle of laser range finder based on CCD imaging fusion
NASA Astrophysics Data System (ADS)
Shi, Sheng-bing; Chen, Zhen-xing; Lv, Yao
2016-09-01
Laser range finder has been equipped with all kinds of weapons, such as tank, ship, plane and so on, is important component of fire control system. Divergence angle is important performance and incarnation of horizontal resolving power for laser range finder, is necessary appraised test item in appraisal test. In this paper, based on high accuracy test on divergence angle of laser range finder, divergence angle test system is designed based on CCD imaging, divergence angle of laser range finder is acquired through fusion technology for different attenuation imaging, problem that CCD characteristic influences divergence angle test is solved.
NASA Technical Reports Server (NTRS)
Kweon, In SO; Hebert, Martial; Kanade, Takeo
1989-01-01
A three-dimensional perception system for building a geometrical description of rugged terrain environments from range image data is presented with reference to the exploration of the rugged terrain of Mars. An intermediate representation consisting of an elevation map that includes an explicit representation of uncertainty and labeling of the occluded regions is proposed. The locus method used to convert range image to an elevation map is introduced, along with an uncertainty model based on this algorithm. Both the elevation map and the locus method are the basis of a terrain matching algorithm which does not assume any correspondences between range images. The two-stage algorithm consists of a feature-based matching algorithm to compute an initial transform and an iconic terrain matching algorithm to merge multiple range images into a uniform representation. Terrain modeling results on real range images of rugged terrain are presented. The algorithms considered are a fundamental part of the perception system for the Ambler, a legged locomotor.
Design of CMOS imaging system based on FPGA
NASA Astrophysics Data System (ADS)
Hu, Bo; Chen, Xiaolai
2017-10-01
In order to meet the needs of engineering applications for high dynamic range CMOS camera under the rolling shutter mode, a complete imaging system is designed based on the CMOS imaging sensor NSC1105. The paper decides CMOS+ADC+FPGA+Camera Link as processing architecture and introduces the design and implementation of the hardware system. As for camera software system, which consists of CMOS timing drive module, image acquisition module and transmission control module, the paper designs in Verilog language and drives it to work properly based on Xilinx FPGA. The ISE 14.6 emulator ISim is used in the simulation of signals. The imaging experimental results show that the system exhibits a 1280*1024 pixel resolution, has a frame frequency of 25 fps and a dynamic range more than 120dB. The imaging quality of the system satisfies the requirement of the index.
High dynamic range CMOS (HDRC) imagers for safety systems
NASA Astrophysics Data System (ADS)
Strobel, Markus; Döttling, Dietmar
2013-04-01
The first part of this paper describes the high dynamic range CMOS (HDRC®) imager - a special type of CMOS image sensor with logarithmic response. The powerful property of a high dynamic range (HDR) image acquisition is detailed by mathematical definition and measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters will be discussed including the pixel design for the global shutter readout. The second part will give an outline on the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring enabling new and more flexible solutions compared to existing safety guards.
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic image is an important technology of photoelectric information acquisition, providing higher dynamic range and more image details, and it can better reflect the real environment, light and color information. Currently, the method of high dynamic range image synthesis based on different exposure image sequences cannot adapt to the dynamic scene. It fails to overcome the effects of moving targets, resulting in the phenomenon of ghost. Therefore, a new high dynamic range image acquisition method based on multiplex cameras system was proposed. Firstly, different exposure images sequences were captured with the camera array, using the method of derivative optical flow based on color gradient to get the deviation between images, and aligned the images. Then, the high dynamic range image fusion weighting function was established by combination of inverse camera response function and deviation between images, and was applied to generated a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic images in dynamic scene, and achieves good results.
Is there a preference for linearity when viewing natural images?
NASA Astrophysics Data System (ADS)
Kane, David; Bertamío, Marcelo
2015-01-01
The system gamma of the imaging pipeline, defined as the product of the encoding and decoding gammas, is typically greater than one and is stronger for images viewed with a dark background (e.g. cinema) than those viewed in lighter conditions (e.g. office displays).1-3 However, for high dynamic range (HDR) images reproduced on a low dynamic range (LDR) monitor, subjects often prefer a system gamma of less than one,4 presumably reflecting the greater need for histogram equalization in HDR images. In this study we ask subjects to rate the perceived quality of images presented on a LDR monitor using various levels of system gamma. We reveal that the optimal system gamma is below one for images with a HDR and approaches or exceeds one for images with a LDR. Additionally, the highest quality scores occur for images where a system gamma of one is optimal, suggesting a preference for linearity (where possible). We find that subjective image quality scores can be predicted by computing the degree of histogram equalization of the lightness distribution. Accordingly, an optimal, image dependent system gamma can be computed that maximizes perceived image quality.
Radiation dose and magnification in pelvic X-ray: EOS™ imaging system versus plain radiographs.
Chiron, P; Demoulin, L; Wytrykowski, K; Cavaignac, E; Reina, N; Murgier, J
2017-12-01
In plain pelvic X-ray, magnification makes measurement unreliable. The EOS™ (EOS Imaging, Paris France) imaging system is reputed to reproduce patient anatomy exactly, with a lower radiation dose. This, however, has not been assessed according to patient weight, although both magnification and irradiation are known to vary with weight. We therefore conducted a prospective comparative study, to compare: (1) image magnification and (2) radiation dose between the EOS imaging system and plain X-ray. The EOS imaging system reproduces patient anatomy exactly, regardless of weight, unlike plain X-ray. A single-center comparative study of plain pelvic X-ray and 2D EOS radiography was performed in 183 patients: 186 arthroplasties; 104 male, 81 female; mean age 61.3±13.7years (range, 24-87years). Magnification and radiation dose (dose-area product [DAP]) were compared between the two systems in 186 hips in patients with a mean body-mass index (BMI) of 27.1±5.3kg/m 2 (range, 17.6-42.3kg/m 2 ), including 7 with morbid obesity. Mean magnification was zero using the EOS system, regardless of patient weight, compared to 1.15±0.05 (range, 1-1.32) on plain X-ray (P<10 -5 ). In patients with BMI<25, mean magnification on plain X-ray was 1.15±0.05 (range, 1-1.25) and, in patients with morbid obesity, 1.22±0.06 (range, 1.18-1.32). The mean radiation dose was 8.19±2.63dGy/cm 2 (range, 1.77-14.24) with the EOS system, versus 19.38±12.37dGy/cm 2 (range, 4.77-81.75) with plain X-ray (P<10 -4 ). For BMI >40, mean radiation dose was 9.36±2.57dGy/cm 2 (range, 7.4-14.2) with the EOS system, versus 44.76±22.21 (range, 25.2-81.7) with plain X-ray. Radiation dose increased by 0.20dGy with each extra BMI point for the EOS system, versus 0.74dGy for plain X-ray. Magnification did not vary with patient weight using the EOS system, unlike plain X-ray, and radiation dose was 2.5-fold lower. 3, prospective case-control study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Towards a robust HDR imaging system
NASA Astrophysics Data System (ADS)
Long, Xin; Zeng, Xiangrong; Huangpeng, Qizi; Zhou, Jinglun; Feng, Jing
2016-07-01
High dynamic range (HDR) images can show more details and luminance information in general display device than low dynamic image (LDR) images. We present a robust HDR imaging system which can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Recently, digital cameras are offering technical advantages rapidly. However, the shot image is different from the sight image generated when that scenery is seen with the naked eye. There are blown-out highlights and crushed blacks in the image that photographed the scenery of wide dynamic range. The problems are hardly generated in the sight image. These are contributory cause of difference between the shot image and the sight image. Blown-out highlights and crushed blacks are caused by the difference of dynamic range between the image sensor installed in a digital camera such as CCD and CMOS and the human visual system. Dynamic range of the shot image is narrower than dynamic range of the sight image. In order to solve the problem, we propose an automatic method to decide an effective exposure range in superposition of edges. We integrate multi-step exposure images using the method. In addition, we try to erase pseudo-edges using the process to blend exposure values. Afterwards, we get a pseudo wide dynamic range image automatically.
Poddar, Raju; Cortés, Dennis E.; Werner, John S.; Mannis, Mark J.
2013-01-01
Abstract. A high-speed (100 kHz A-scans/s) complex conjugate resolved 1 μm swept source optical coherence tomography (SS-OCT) system using coherence revival of the light source is suitable for dense three-dimensional (3-D) imaging of the anterior segment. The short acquisition time helps to minimize the influence of motion artifacts. The extended depth range of the SS-OCT system allows topographic analysis of clinically relevant images of the entire depth of the anterior segment of the eye. Patients with the type 1 Boston Keratoprosthesis (KPro) require evaluation of the full anterior segment depth. Current commercially available OCT systems are not suitable for this application due to limited acquisition speed, resolution, and axial imaging range. Moreover, most commonly used research grade and some clinical OCT systems implement a commercially available SS (Axsun) that offers only 3.7 mm imaging range (in air) in its standard configuration. We describe implementation of a common swept laser with built-in k-clock to allow phase stable imaging in both low range and high range, 3.7 and 11.5 mm in air, respectively, without the need to build an external MZI k-clock. As a result, 3-D morphology of the KPro position with respect to the surrounding tissue could be investigated in vivo both at high resolution and with large depth range to achieve noninvasive and precise evaluation of success of the surgical procedure. PMID:23912759
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
Kilovoltage energy imaging with a radiotherapy linac with a continuously variable energy range.
Roberts, D A; Hansen, V N; Thompson, M G; Poludniowski, G; Niven, A; Seco, J; Evans, P M
2012-03-01
In this paper, the effect on image quality of significantly reducing the primary electron energy of a radiotherapy accelerator is investigated using a novel waveguide test piece. The waveguide contains a novel variable coupling device (rotovane), allowing for a wide continuously variable energy range of between 1.4 and 9 MeV suitable for both imaging and therapy. Imaging at linac accelerating potentials close to 1 MV was investigated experimentally and via Monte Carlo simulations. An imaging beam line was designed, and planar and cone beam computed tomography images were obtained to enable qualitative and quantitative comparisons with kilovoltage and megavoltage imaging systems. The imaging beam had an electron energy of 1.4 MeV, which was incident on a water cooled electron window consisting of stainless steel, a 5 mm carbon electron absorber and 2.5 mm aluminium filtration. Images were acquired with an amorphous silicon detector sensitive to diagnostic x-ray energies. The x-ray beam had an average energy of 220 keV and half value layer of 5.9 mm of copper. Cone beam CT images with the same contrast to noise ratio as a gantry mounted kilovoltage imaging system were obtained with doses as low as 2 cGy. This dose is equivalent to a single 6 MV portal image. While 12 times higher than a 100 kVp CBCT system (Elekta XVI), this dose is 140 times lower than a 6 MV cone beam imaging system and 6 times lower than previously published LowZ imaging beams operating at higher (4-5 MeV) energies. The novel coupling device provides for a wide range of electron energies that are suitable for kilovoltage quality imaging and therapy. The imaging system provides high contrast images from the therapy portal at low dose, approaching that of gantry mounted kilovoltage x-ray systems. Additionally, the system provides low dose imaging directly from the therapy portal, potentially allowing for target tracking during radiotherapy treatment. There is the scope with such a tuneable system for further energy reduction and subsequent improvement in image quality.
A CMOS-based large-area high-resolution imaging system for high-energy x-ray applications
NASA Astrophysics Data System (ADS)
Rodricks, Brian; Fowler, Boyd; Liu, Chiao; Lowes, John; Haeffner, Dean; Lienert, Ulrich; Almer, John
2008-08-01
CCDs have been the primary sensor in imaging systems for x-ray diffraction and imaging applications in recent years. CCDs have met the fundamental requirements of low noise, high-sensitivity, high dynamic range and spatial resolution necessary for these scientific applications. State-of-the-art CMOS image sensor (CIS) technology has experienced dramatic improvements recently and their performance is rivaling or surpassing that of most CCDs. The advancement of CIS technology is at an ever-accelerating pace and is driven by the multi-billion dollar consumer market. There are several advantages of CIS over traditional CCDs and other solid-state imaging devices; they include low power, high-speed operation, system-on-chip integration and lower manufacturing costs. The combination of superior imaging performance and system advantages makes CIS a good candidate for high-sensitivity imaging system development. This paper will describe a 1344 x 1212 CIS imaging system with a 19.5μm pitch optimized for x-ray scattering studies at high-energies. Fundamental metrics of linearity, dynamic range, spatial resolution, conversion gain, sensitivity are estimated. The Detective Quantum Efficiency (DQE) is also estimated. Representative x-ray diffraction images are presented. Diffraction images are compared against a CCD-based imaging system.
Plastic fiber scintillator response to fast neutrons
NASA Astrophysics Data System (ADS)
Danly, C. R.; Sjue, S.; Wilde, C. H.; Merrill, F. E.; Haight, R. C.
2014-11-01
The Neutron Imaging System at NIF uses an array of plastic scintillator fibers in conjunction with a time-gated imaging system to form an image of the neutron emission from the imploded capsule. By gating on neutrons that have scattered from the 14.1 MeV DT energy to lower energy ranges, an image of the dense, cold fuel around the hotspot is also obtained. An unmoderated spallation neutron beamline at the Weapons Neutron Research facility at Los Alamos was used in conjunction with a time-gated imaging system to measure the yield of a scintillating fiber array over several energy bands ranging from 1 to 15 MeV. The results and comparison to simulation are presented.
Plastic fiber scintillator response to fast neutrons.
Danly, C R; Sjue, S; Wilde, C H; Merrill, F E; Haight, R C
2014-11-01
The Neutron Imaging System at NIF uses an array of plastic scintillator fibers in conjunction with a time-gated imaging system to form an image of the neutron emission from the imploded capsule. By gating on neutrons that have scattered from the 14.1 MeV DT energy to lower energy ranges, an image of the dense, cold fuel around the hotspot is also obtained. An unmoderated spallation neutron beamline at the Weapons Neutron Research facility at Los Alamos was used in conjunction with a time-gated imaging system to measure the yield of a scintillating fiber array over several energy bands ranging from 1 to 15 MeV. The results and comparison to simulation are presented.
Range determination for scannerless imaging
Muguira, Maritza Rosa; Sackos, John Theodore; Bradley, Bart Davis; Nellums, Robert
2000-01-01
A new method of operating a scannerless range imaging system (e.g., a scannerless laser radar) has been developed. This method is designed to compensate for nonlinear effects which appear in many real-world components. The system operates by determining the phase shift of the laser modulation, which is a physical quantity related physically to the path length between the laser source and the detector, for each pixel of an image.
Standoff concealed weapon detection using a 350-GHz radar imaging system
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.; Severtsen, Ronald H.; McMakin, Douglas L.; Hatchell, Brian K.; Valdez, Patrick L. J.
2010-04-01
The sub-millimeter (sub-mm) wave frequency band from 300 - 1000 GHz is currently being developed for standoff concealed weapon detection imaging applications. This frequency band is of interest due to the unique combination of high resolution and clothing penetration. The Pacific Northwest National Laboratory (PNNL) is currently developing a 350 GHz, active, wideband, three-dimensional, radar imaging system to evaluate the feasibility of active sub-mm imaging for standoff detection. Standoff concealed weapon and explosive detection is a pressing national and international need for both civilian and military security, as it may allow screening at safer distances than portal screening techniques. PNNL has developed a prototype active wideband 350 GHz radar imaging system based on a wideband, heterodyne, frequency-multiplier-based transceiver system coupled to a quasi-optical focusing system and high-speed rotating conical scanner. This prototype system operates at ranges up to 10+ meters, and can acquire an image in 10 - 20 seconds, which is fast enough to scan cooperative personnel for concealed weapons. The wideband operation of this system provides accurate ranging information, and the images obtained are fully three-dimensional. During the past year, several improvements to the system have been designed and implemented, including increased imaging speed using improved balancing techniques, wider bandwidth, and improved image processing techniques. In this paper, the imaging system is described in detail and numerous imaging results are presented.
Multi-beam range imager for autonomous operations
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.
1993-01-01
For space operations from the Space Station Freedom the real time range imager will be very valuable in terms of refuelling, docking as well as space exploration operations. For these applications as well as many other robotics and remote ranging applications, a small potable, power efficient, robust range imager capable of a few tens of km ranging with 10 cm accuracy is needed. The system developed is based on a well known pseudo-random modulation technique applied to a laser transmitter combined with a novel range resolution enhancement technique. In this technique, the transmitter is modulated by a relatively low frequency of an order of a few MHz to enhance the signal to noise ratio and to ease the stringent systems engineering requirements while accomplishing a very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof of experiment.
Development and testing of the EVS 2000 enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joe J.; Arnoldy, Dan; Zeylmaker, Richard; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.
Target recognition of ladar range images using slice image: comparison of four improved algorithms
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang
2017-07-01
Compared with traditional 3-D shape data, ladar range images possess properties of strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor to resolve this problem. We propose four improved algorithms on target recognition of ladar range images using slice image. In order to improve resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in these four improved algorithms. In order to improve rotation invariance of the slice image, three new improved feature descriptors-which are feature slice image, slice-Zernike moments, and slice-Fourier moments-are applied to the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in the aspects of the three invariances, recognition rate, and execution time. The final experiment results show that the improvements for these four algorithms reach the desired effect, the three invariances of feature descriptors are not directly related to the final recognition performance of recognition systems, and these four improved recognition systems have different performances under different conditions.
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve in a wide range of applications of homeland security, as well as serve the Army in tasks of improved situational awareness (SA) in defense and offensive operations, and as a sensor node in tactical Intelligence Surveillance Reconnaissance (ISR). The novel ViperView TM high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480 pixels IR camera with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the ATPIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, J; University of Sydney, Sydney, NSW; Sykes, J
Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings; and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion.more » Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had larger range, with 0.8°, 1.2° and 1.9° for 1σ in XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use, within a normal range of acquisition settings.« less
2014-01-01
Background Subcutaneous veins localization is usually performed manually by medical staff to find suitable vein to insert catheter for medication delivery or blood sample function. The rule of thumb is to find large and straight enough vein for the medication to flow inside of the selected blood vessel without any obstruction. The problem of peripheral difficult venous access arises when patient’s veins are not visible due to any reason like dark skin tone, presence of hair, high body fat or dehydrated condition, etc. Methods To enhance the visibility of veins, near infrared imaging systems is used to assist medical staff in veins localization process. Optimum illumination is crucial to obtain a better image contrast and quality, taking into consideration the limited power and space on portable imaging systems. In this work a hyperspectral image quality assessment is done to get the optimum range of illumination for venous imaging system. A database of hyperspectral images from 80 subjects has been created and subjects were divided in to four different classes on the basis of their skin tone. In this paper the results of hyper spectral image analyses are presented in function of the skin tone of patients. For each patient, four mean images were constructed by taking mean with a spectral span of 50 nm within near infrared range, i.e. 750–950 nm. Statistical quality measures were used to analyse these images. Conclusion It is concluded that the wavelength range of 800 to 850 nm serve as the optimum illumination range to get best near infrared venous image quality for each type of skin tone. PMID:25087016
Biomedical sensing and imaging for the anterior segment of the eye
NASA Astrophysics Data System (ADS)
Eom, Tae Joong; Yoo, Young-Sik; Lee, Yong-Eun; Kim, Beop-Min; Joo, Choun-Ki
2015-07-01
Eye is an optical system composed briefly of cornea, lens, and retina. Ophthalmologists can diagnose status of patient's eye from information provided by optical sensors or images as well as from history taking or physical examinations. Recently, we developed a prototype of optical coherence tomography (OCT) image guided femtosecond laser cataract surgery system. The system combined a swept-source OCT and a femtosecond (fs) laser and afford the 2D and 3D structure information to increase the efficiency and safety of the cataract procedure. The OCT imaging range was extended to achieve the 3D image from the cornea to lens posterior. A prototype of OCT image guided fs laser cataract surgery system. The surgeons can plan the laser illumination range for the nuclear division and segmentation, and monitor the whole cataract surgery procedure using the real time OCT. The surgery system was demonstrated with an extracted pig eye and in vivo rabbit eye to verify the system performance and stability.
Luminescence imaging of water during uniform-field irradiation by spot scanning proton beams
NASA Astrophysics Data System (ADS)
Komori, Masataka; Sekihara, Eri; Yabe, Takuya; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi
2018-06-01
Luminescence was found during pencil-beam proton irradiation to water phantom and range could be estimated from the luminescence images. However, it is not yet clear whether the luminescence imaging is applied to the uniform fields made of spot-scanning proton-beam irradiations. For this purpose, imaging was conducted for the uniform fields having spread out Bragg peak (SOBP) made by spot scanning proton beams. We designed six types of the uniform fields with different ranges, SOBP widths and irradiation fields. One of the designed fields was irradiated to water phantom and a cooled charge coupled device camera was used to measure the luminescence image during irradiations. We estimated the ranges, field widths, and luminescence intensities from the luminescence images and compared those with the dose distribution calculated by a treatment planning system. For all types of uniform fields, we could obtain clear images of the luminescence showing the SOBPs. The ranges and field widths evaluated from the luminescence were consistent with those of the dose distribution calculated by a treatment planning system within the differences of ‑4 mm and ‑11 mm, respectively. Luminescence intensities were almost proportional to the SOBP widths perpendicular to the beam direction. The luminescence imaging could be applied to uniform fields made of spot scanning proton beam irradiations. Ranges and widths of the uniform fields with SOBP could be estimated from the images. The luminescence imaging is promising for the range and field width estimations in proton therapy.
Use of the variable gain settings on SPOT
Chavez, P.S.
1989-01-01
Often the brightness or digital number (DN) range of satellite image data is less than optimal and uses only a portion of the available values (0 to 255) because the range of reflectance values is small. Most imaging systems have been designed with only two gain settings, normal and high. The SPOT High Resolution Visible (HRV) imaging system has the capability to collect image data using one of eight different gain settings. With the proper procedure this allows the brightness or reflectance resolution, which is directly related to the range of DN values recorded, to be optimized for any given site as compared to using a single set of gain settings everywhere. -from Author
Processing techniques for digital sonar images from GLORIA.
Chavez, P.S.
1986-01-01
Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author
NASA Astrophysics Data System (ADS)
Jing, Joseph C.; Chou, Lidek; Su, Erica; Wong, Brian J. F.; Chen, Zhongping
2016-12-01
The upper airway is a complex tissue structure that is prone to collapse. Current methods for studying airway obstruction are inadequate in safety, cost, or availability, such as CT or MRI, or only provide localized qualitative information such as flexible endoscopy. Long range optical coherence tomography (OCT) has been used to visualize the human airway in vivo, however the limited imaging range has prevented full delineation of the various shapes and sizes of the lumen. We present a new long range OCT system that integrates high speed imaging with a real-time position tracker to allow for the acquisition of an accurate 3D anatomical structure in vivo. The new system can achieve an imaging range of 30 mm at a frame rate of 200 Hz. The system is capable of generating a rapid and complete visualization and quantification of the airway, which can then be used in computational simulations to determine obstruction sites.
A high-resolution full-field range imaging system
NASA Astrophysics Data System (ADS)
Carnegie, D. A.; Cree, M. J.; Dorrington, A. A.
2005-08-01
There exist a number of applications where the range to all objects in a field of view needs to be obtained. Specific examples include obstacle avoidance for autonomous mobile robots, process automation in assembly factories, surface profiling for shape analysis, and surveying. Ranging systems can be typically characterized as being either laser scanning systems where a laser point is sequentially scanned over a scene or a full-field acquisition where the range to every point in the image is simultaneously obtained. The former offers advantages in terms of range resolution, while the latter tend to be faster and involve no moving parts. We present a system for determining the range to any object within a camera's field of view, at the speed of a full-field system and the range resolution of some point laser scans. Initial results obtained have a centimeter range resolution for a 10 second acquisition time. Modifications to the existing system are discussed that should provide faster results with submillimeter resolution.
Mackenzie, Alistair; Dance, David R; Workman, Adam; Yip, Mary; Wells, Kevin; Young, Kenneth C
2012-05-01
Undertaking observer studies to compare imaging technology using clinical radiological images is challenging due to patient variability. To achieve a significant result, a large number of patients would be required to compare cancer detection rates for different image detectors and systems. The aim of this work was to create a methodology where only one set of images is collected on one particular imaging system. These images are then converted to appear as if they had been acquired on a different detector and x-ray system. Therefore, the effect of a wide range of digital detectors on cancer detection or diagnosis can be examined without the need for multiple patient exposures. Three detectors and x-ray systems [Hologic Selenia (ASE), GE Essential (CSI), Carestream CR (CR)] were characterized in terms of signal transfer properties, noise power spectra (NPS), modulation transfer function, and grid properties. The contributions of the three noise sources (electronic, quantum, and structure noise) to the NPS were calculated by fitting a quadratic polynomial at each spatial frequency of the NPS against air kerma. A methodology was developed to degrade the images to have the characteristics of a different (target) imaging system. The simulated images were created by first linearizing the original images such that the pixel values were equivalent to the air kerma incident at the detector. The linearized image was then blurred to match the sharpness characteristics of the target detector. Noise was then added to the blurred image to correct for differences between the detectors and any required change in dose. The electronic, quantum, and structure noise were added appropriate to the air kerma selected for the simulated image and thus ensuring that the noise in the simulated image had the same magnitude and correlation as the target image. A correction was also made for differences in primary grid transmission, scatter, and veiling glare. The method was validated by acquiring images of a CDMAM contrast detail test object (Artinis, The Netherlands) at five different doses for the three systems. The ASE CDMAM images were then converted to appear with the imaging characteristics of target CR and CSI detectors. The measured threshold gold thicknesses of the simulated and target CDMAM images were closely matched at normal dose level and the average differences across the range of detail diameters were -4% and 0% for the CR and CSI systems, respectively. The conversion was successful for images acquired over a wide dose range. The average difference between simulated and target images for a given dose was a maximum of 11%. The validation shows that the image quality of a digital mammography image obtained with a particular system can be degraded, in terms of noise magnitude and color, sharpness, and contrast to account for differences in the detector and antiscatter grid. Potentially, this is a powerful tool for observer studies, as a range of image qualities can be examined by modifying an image set obtained at a single (better) image quality thus removing the patient variability when comparing systems.
SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, S; Uesaka, M; Nishio, T
2016-06-15
Purpose: In the treatment planning of proton therapy, Water Equivalent Length (WEL), which is the parameter for the calculation of dose and the range of proton, is derived by X-ray CT (xCT) image and xCT-WEL conversion. However, about a few percent error in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for an evaluation of the error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires the two-dimensional image of integrated value ofmore » the scintillation light toward the beam direction. The pCT image is reconstructed by FBP method using a correction between the light intensity and residual range of proton beam. An experiment for the demonstration of this system was performed with 70-MeV proton beam provided by NIRS cyclotron. The pCT image of several objects reconstructed from the experimental data was evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A finestructure of approximately 1 mm was clearly observed. The position resolution of pCT image was almost the same as that of xCT image. And the error of proton CT pixel value was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed the pCT imaging system using a thick scintillator and a CCD camera. And the system was evaluated with the experiment by use of 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.« less
Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors
NASA Technical Reports Server (NTRS)
Matthies, Larry; Grandjean, Pierrick
1993-01-01
Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.
Laser range profiling for small target recognition
NASA Astrophysics Data System (ADS)
Steinvall, Ove; Tulldahl, Michael
2016-05-01
The detection and classification of small surface and airborne targets at long ranges is a growing need for naval security. Long range ID or ID at closer range of small targets has its limitations in imaging due to the demand on very high transverse sensor resolution. It is therefore motivated to look for 1D laser techniques for target ID. These include vibrometry, and laser range profiling. Vibrometry can give good results but is also sensitive to certain vibrating parts on the target being in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a certain sector. The same laser can also be used for active imaging when the target comes closer and is angular resolved. The present paper will show both experimental and simulated results for laser range profiling of small boats out to 6-7 km range and a UAV mockup at close range (1.3 km). We obtained good results with the profiling system both for target detection and recognition. Comparison of experimental and simulated range waveforms based on CAD models of the target support the idea of having a profiling system as a first recognition sensor and thus narrowing the search space for the automatic target recognition based on imaging at close ranges. The naval experiments took place in the Baltic Sea with many other active and passive EO sensors beside the profiling system. Discussion of data fusion between laser profiling and imaging systems will be given. The UAV experiments were made from the rooftop laboratory at FOI.
A hybrid continuous-wave terahertz imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolganova, Irina N., E-mail: in.dolganova@gmail.com; Zaytsev, Kirill I., E-mail: kirzay@gmail.ru; Metelkina, Anna A.
2015-11-15
A hybrid (active-passive mode) terahertz (THz) imaging system and an algorithm for imaging synthesis are proposed to enhance the THz image quality. The concept of image contrast is used to compare active and passive THz imaging. Combining the measurement of the self-emitted radiation of the object with the back-scattered source radiation measurement, it becomes possible to use the THz image to retrieve maximum information about the object. The experimental results confirm the advantages of hybrid THz imaging systems, which can be generalized for a wide range of applications in the material sciences, chemical physics, bio-systems, etc.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in lighter levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
Imaging of blood cells based on snapshot Hyper-Spectral Imaging systems
NASA Astrophysics Data System (ADS)
Robison, Christopher J.; Kolanko, Christopher; Bourlai, Thirimachos; Dawson, Jeremy M.
2015-05-01
Snapshot Hyper-Spectral imaging systems are capable of capturing several spectral bands simultaneously, offering coregistered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial Snapshot Hyper-Spectral imaging system, the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera; attached to a microscope at varying objective powers and illumination intensity. Hyperspectral data consisting of 25, 443x313 hyperspectral bands with ~3nm spacing were captured over the range of 419 to 494nm. Open-source hyper-spectral data cube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cells features are most prominent in the 428-442nm band for blood samples viewed under 20x and 50x magnification over a varying range of illumination intensities. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counting.
Ultrasound Imaging System Video
NASA Technical Reports Server (NTRS)
2002-01-01
In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimension image enlargement of the heart and other organs, muscles, and blood vessels. It is capable of high resolution imaging in a wide range of applications, both research and diagnostic, such as Echocardiography (ultrasound of the heart), abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.
Dynamic granularity of imaging systems
Geissel, Matthias; Smith, Ian C.; Shores, Jonathon E.; ...
2015-11-04
Imaging systems that include a specific source, imaging concept, geometry, and detector have unique properties such as signal-to-noise ratio, dynamic range, spatial resolution, distortions, and contrast. Some of these properties are inherently connected, particularly dynamic range and spatial resolution. It must be emphasized that spatial resolution is not a single number but must be seen in the context of dynamic range and consequently is better described by a function or distribution. We introduce the “dynamic granularity” G dyn as a standardized, objective relation between a detector’s spatial resolution (granularity) and dynamic range for complex imaging systems in a given environmentmore » rather than the widely found characterization of detectors such as cameras or films by themselves. We found that this relation can partly be explained through consideration of the signal’s photon statistics, background noise, and detector sensitivity, but a comprehensive description including some unpredictable data such as dust, damages, or an unknown spectral distribution will ultimately have to be based on measurements. Measured dynamic granularities can be objectively used to assess the limits of an imaging system’s performance including all contributing noise sources and to qualify the influence of alternative components within an imaging system. Our article explains the construction criteria to formulate a dynamic granularity and compares measured dynamic granularities for different detectors used in the X-ray backlighting scheme employed at Sandia’s Z-Backlighter facility.« less
Results of ACTIM: an EDA study on spectral laser imaging
NASA Astrophysics Data System (ADS)
Hamoir, Dominique; Hespel, Laurent; Déliot, Philippe; Boucher, Yannick; Steinvall, Ove; Ahlberg, Jörgen; Larsson, Hakan; Letalick, Dietmar; Lutzmann, Peter; Repasi, Endre; Ritt, Gunnar
2011-11-01
The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising military applications, system analyses, a roadmap and recommendations. Passive multi- and hyper-spectral imaging allows discriminating between materials. But the measured radiance in the sensor is difficult to relate to spectral reflectance due to the dependence on e.g. solar angle, clouds, shadows... In turn, active spectral imaging offers a complete control of the illumination, thus eliminating these effects. In addition it allows observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage, camouflage...) or retrieving polarization information. When 3D, it is suited to producing numerical terrain models and to performing geometry-based identification. Hence fusing the knowledge of ladar and passive spectral imaging will result in new capabilities. We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long range observation for identification, (2) mid-range mapping for reconnaissance, (3) shorter range perception for threat detection. We present the system analyses that have been performed for confirming the interests, limitations and requirements of spectral active imaging in these three prioritized applications.
NASA Astrophysics Data System (ADS)
Lawrence, Kurt C.; Park, Bosoon; Windham, William R.; Mao, Chengye; Poole, Gavin H.
2003-03-01
A method to calibrate a pushbroom hyperspectral imaging system for "near-field" applications in agricultural and food safety has been demonstrated. The method consists of a modified geometric control point correction applied to a focal plane array to remove smile and keystone distortion from the system. Once a FPA correction was applied, single wavelength and distance calibrations were used to describe all points on the FPA. Finally, a percent reflectance calibration, applied on a pixel-by-pixel basis, was used for accurate measurements for the hyperspectral imaging system. The method was demonstrated with a stationary prism-grating-prism, pushbroom hyperspectral imaging system. For the system described, wavelength and distance calibrations were used to reduce the wavelength errors to <0.5 nm and distance errors to <0.01mm (across the entrance slit width). The pixel-by-pixel percent reflectance calibration, which was performed at all wavelengths with dark current and 99% reflectance calibration-panel measurements, was verified with measurements on a certified gradient Spectralon panel with values ranging from about 14% reflectance to 99% reflectance with errors generally less than 5% at the mid-wavelength measurements. Results from the calibration method, indicate the hyperspectral imaging system has a usable range between 420 nm and 840 nm. Outside this range, errors increase significantly.
NASA Astrophysics Data System (ADS)
Rangarajan, Swathi; Chou, Li-Dek; Coughlan, Carolyn; Sharma, Giriraj; Wong, Brian J. F.; Ramalingam, Tirunelveli S.
2016-02-01
Fourier domain optical coherence tomography (FD-OCT) is a noninvasive imaging modality that has previously been used to image the human larynx. However, differences in anatomical geometry and short imaging range of conventional OCT limits its application in a clinical setting. In order to address this issue, we have developed a gradient-index (GRIN) lens rod-based hand-held probe in conjunction with a long imaging range 200 kHz Vertical-Cavity Surface Emitting Lasers (VCSEL) swept-source optical coherence tomography (SS-OCT) system for high speed real-time imaging of the human larynx in an office setting. This hand-held probe is designed to have a long and dynamically tunable working distance to accommodate the differences in anatomical geometry of human test subjects. A nominal working distance (~6 cm) of the probe is selected to have a lateral resolution <100 um within a depth of focus of 6.4 mm, which covers more than half of the 12 mm imaging range of the VCSEL laser. The maximum lateral scanning range of the probe at 6 cm working distance is approximately 8.4 mm, and imaging an area of 8.5 mm by 8.5 mm is accomplished within a second. Using the above system, we will demonstrate real-time cross-sectional OCT imaging of larynx during phonation in vivo in human and ex-vivo in pig vocal folds.
NASA Astrophysics Data System (ADS)
Marchand, Paul J.; Szlag, Daniel; Bouwens, Arno; Lasser, Theo
2018-03-01
Visible light optical coherence tomography has shown great interest in recent years for spectroscopic and high-resolution retinal and cerebral imaging. Here, we present an extended-focus optical coherence microscopy system operating from the visible to the near-infrared wavelength range for high axial and lateral resolution imaging of cortical structures in vivo. The system exploits an ultrabroad illumination spectrum centered in the visible wavelength range (λc = 650 nm, Δλ ˜ 250 nm) offering a submicron axial resolution (˜0.85 μm in water) and an extended-focus configuration providing a high lateral resolution of ˜1.4 μm maintained over ˜150 μm in depth in water. The system's axial and lateral resolution are first characterized using phantoms, and its imaging performance is then demonstrated by imaging the vasculature, myelinated axons, and neuronal cells in the first layers of the somatosensory cortex of mice in vivo.
Interactive data-processing system for metallurgy
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1978-01-01
Equipment indicates that system can rapidly and accurately process metallurgical and materials-processing data for wide range of applications. Advantages include increase in contract between areas on image, ability to analyze images via operator-written programs, and space available for storing images.
Evaluation of image quality in terahertz pulsed imaging using test objects.
Fitzgerald, A J; Berry, E; Miles, R E; Zinovev, N N; Smith, M A; Chamberlain, J M
2002-11-07
As with other imaging modalities, the performance of terahertz (THz) imaging systems is limited by factors of spatial resolution, contrast and noise. The purpose of this paper is to introduce test objects and image analysis methods to evaluate and compare THz image quality in a quantitative and objective way, so that alternative terahertz imaging system configurations and acquisition techniques can be compared, and the range of image parameters can be assessed. Two test objects were designed and manufactured, one to determine the modulation transfer functions (MTF) and the other to derive image signal to noise ratio (SNR) at a range of contrasts. As expected the higher THz frequencies had larger MTFs, and better spatial resolution as determined by the spatial frequency at which the MTF dropped below the 20% threshold. Image SNR was compared for time domain and frequency domain image parameters and time delay based images consistently demonstrated higher SNR than intensity based parameters such as relative transmittance because the latter are more strongly affected by the sources of noise in the THz system such as laser fluctuations and detector shot noise.
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for the 2-D imaging application via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. Furthermore, the imaging formulation for our method is developed through a Fourier integral processing, and the parameters of antenna array including the cross-range resolution, required size, and sampling interval are also examined. Different from the spatial sequential procedure sampling the scattered echoes during multiple snapshot illuminations in inverse synthetic aperture radar (ISAR) imaging, the proposed method utilizes a spatial parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation in ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter band could be located in the same range bin, and thus, the range alignment in classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided for testing our proposed method.
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.
2011-03-01
We compare true 8 and 14 bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin is artificially bit-reduced during post-processing. However, in agreement with the results reported previously, we also observe that in our system that real-world 8-bit image shows more artifacts than the image acquired by numerically truncating to 8-bits from the raw 14-bit image data, especially in low intensity image area. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is a reduced imaging dynamic range which can manifest itself as an increase in image artefacts due to strong Fresnel reflection.
NASA Astrophysics Data System (ADS)
Yang, Victor X. D.; Gordon, Maggie L.; Tang, Shou-Jiang; Marcon, Norman E.; Gardiner, Geoffrey; Qi, Bing; Bisland, Stuart; Seng-Yue, Emily; Lo, Stewart; Pekar, Julius; Wilson, Brian C.; Vitkin, I. Alex
2003-09-01
We previously described a fiber based Doppler optical coherence tomography system [1] capable of imaging embryo cardiac blood flow at 4~16 frames per second with wide velocity dynamic range [2]. Coupling this system to a linear scanning fiber optical catheter design that minimizes friction and vibrations, we report here the initial results of in vivo endoscopic Doppler optical coherence tomography (EDOCT) imaging in normal rat and human esophagus. Microvascular flow in blood vessels less than 100 µm diameter was detected using a combination of color-Doppler and velocity variance imaging modes, during clinical endoscopy using a mobile EDOCT system.
Nankivil, Derek; Waterman, Gar; LaRocca, Francesco; Keller, Brenton; Kuo, Anthony N.; Izatt, Joseph A.
2015-01-01
We describe the first handheld, swept source optical coherence tomography (SSOCT) system capable of imaging both the anterior and posterior segments of the eye in rapid succession. A single 2D microelectromechanical systems (MEMS) scanner was utilized for both imaging modes, and the optical paths for each imaging mode were optimized for their respective application using a combination of commercial and custom optics. The system has a working distance of 26.1 mm and a measured axial resolution of 8 μm (in air). In posterior segment mode, the design has a lateral resolution of 9 μm, 7.4 mm imaging depth range (in air), 4.9 mm 6dB fall-off range (in air), and peak sensitivity of 103 dB over a 22° field of view (FOV). In anterior segment mode, the design has a lateral resolution of 24 μm, imaging depth range of 7.4 mm (in air), 6dB fall-off range of 4.5 mm (in air), depth-of-focus of 3.6 mm, and a peak sensitivity of 99 dB over a 17.5 mm FOV. In addition, the probe includes a wide-field iris imaging system to simplify alignment. A fold mirror assembly actuated by a bi-stable rotary solenoid was used to switch between anterior and posterior segment imaging modes, and a miniature motorized translation stage was used to adjust the objective lens position to correct for patient refraction between −12.6 and + 9.9 D. The entire probe weighs less than 630 g with a form factor of 20.3 x 9.5 x 8.8 cm. Healthy volunteers were imaged to illustrate imaging performance. PMID:26601014
Space imaging measurement system based on fixed lens and moving detector
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Doshida, Minoru; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu
2006-08-01
We have developed the Space Imaging Measurement System based on the fixed lens and fast moving detector to the control of the autonomous ground vehicle. The space measurement is the most important task in the development of the autonomous ground vehicle. In this study we move the detector back and forth along the optical axis at the fast rate to measure the three-dimensional image data. This system is just appropriate to the autonomous ground vehicle because this system does not send out any optical energy to measure the distance and keep the safety. And we use the digital camera of the visible ray range. Therefore it gives us the cost reduction of the three-dimensional image data acquisition with respect to the imaging laser system. We can combine many pieces of the narrow space imaging measurement data to construct the wide range three-dimensional data. This gives us the improvement of the image recognition with respect to the object space. To develop the fast movement of the detector, we build the counter mass balance in the mechanical crank system of the Space Imaging Measurement System. And then we set up the duct to prevent the optical noise due to the ray not coming through lens. The object distance is derived from the focus distance which related to the best focused image data. The best focused image data is selected from the image of the maximum standard deviation in the standard deviations of series images.
Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K
2016-11-01
Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to expand further in vivo OCT applications. This paper describes a solution where we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, associated with the use of a wide-angle camera lens in the sample arm to provide a FOV of ~20 x 20 cm 2 . The akinetic swept source operates at 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure to the programmable akinetic light source so that the sensitivity of the OCT system over ~10 cm ranging distance is substantially improved for imaging of large volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm 2 ). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications.
Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K.
2016-01-01
Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to expand further in vivo OCT applications. This paper describes a solution where we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, associated with the use of a wide-angle camera lens in the sample arm to provide a FOV of ~20 x 20 cm2. The akinetic swept source operates at 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure to the programmable akinetic light source so that the sensitivity of the OCT system over ~10 cm ranging distance is substantially improved for imaging of large volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm2). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications. PMID:27896012
Real-time image processing of TOF range images using a reconfigurable processor system
NASA Astrophysics Data System (ADS)
Hussmann, S.; Knoll, F.; Edeler, T.
2011-07-01
During the last years, Time-of-Flight sensors achieved a significant impact onto research fields in machine vision. In comparison to stereo vision system and laser range scanners they combine the advantages of active sensors providing accurate distance measurements and camera-based systems recording a 2D matrix at a high frame rate. Moreover low cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the currently implemented 4-phase-shift algorithm in this type of sensors. The most time critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
NASA Astrophysics Data System (ADS)
Xu, Jingjiang; Song, Shaozhen; Men, Shaojie; Wang, Ruikang K.
2017-11-01
There is an increasing demand for imaging tools in clinical dermatology that can perform in vivo wide-field morphological and functional examination from surface to deep tissue regions at various skin sites of the human body. The conventional spectral-domain optical coherence tomography-based angiography (SD-OCTA) system is difficult to meet these requirements due to its fundamental limitations of the sensitivity roll-off, imaging range as well as imaging speed. To mitigate these issues, we demonstrate a swept-source OCTA (SS-OCTA) system by employing a swept source based on a vertical cavity surface-emitting laser. A series of comparisons between SS-OCTA and SD-OCTA are conducted. Benefiting from the high system sensitivity, long imaging range, and superior roll-off performance, the SS-OCTA system is demonstrated with better performance in imaging human skin than the SD-OCTA system. We show that the SS-OCTA permits remarkable deep visualization of both structure and vasculature (up to ˜2 mm penetration) with wide field of view capability (up to 18×18 mm2), enabling a more comprehensive assessment of the morphological features as well as functional blood vessel networks from the superficial epidermal to deep dermal layers. It is expected that the advantages of the SS-OCTA system will provide a ground for clinical translation, benefiting the existing dermatological practice.
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still provides a fundamental limitation on range accuracy. These works propose a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of N . This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
Multiple energy synchrotron biomedical imaging system
NASA Astrophysics Data System (ADS)
Bassey, B.; Martinson, M.; Samadi, N.; Belev, G.; Karanfil, C.; Qi, P.; Chapman, D.
2016-12-01
A multiple energy imaging system that can extract multiple endogenous or induced contrast materials as well as water and bone images would be ideal for imaging of biological subjects. The continuous spectrum available from synchrotron light facilities provides a nearly perfect source for multiple energy x-ray imaging. A novel multiple energy x-ray imaging system, which prepares a horizontally focused polychromatic x-ray beam, has been developed at the BioMedical Imaging and Therapy bend magnet beamline at the Canadian Light Source. The imaging system is made up of a cylindrically bent Laue single silicon (5,1,1) crystal monochromator, scanning and positioning stages for the subjects, flat panel (area) detector, and a data acquisition and control system. Depending on the crystal’s bent radius, reflection type, and the horizontal beam width of the filtered synchrotron radiation (20-50 keV) used, the size and spectral energy range of the focused beam prepared varied. For example, with a bent radius of 95 cm, a (1,1,1) type reflection and a 50 mm wide beam, a 0.5 mm wide focused beam of spectral energy range 27 keV-43 keV was obtained. This spectral energy range covers the K-edges of iodine (33.17 keV), xenon (34.56 keV), cesium (35.99 keV), and barium (37.44 keV) some of these elements are used as biomedical and clinical contrast agents. Using the developed imaging system, a test subject composed of iodine, xenon, cesium, and barium along with water and bone were imaged and their projected concentrations successfully extracted. The estimated dose rate to test subjects imaged at a ring current of 200 mA is 8.7 mGy s-1, corresponding to a cumulative dose of 1.3 Gy and a dose of 26.1 mGy per image. Potential biomedical applications of the imaging system will include projection imaging that requires any of the extracted elements as a contrast agent and multi-contrast K-edge imaging.
SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications
NASA Astrophysics Data System (ADS)
Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.
2005-08-01
A scientific camera system having high dynamic range designed and manufactured by Thermo Electron for scientific and medical applications is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4 Mega-pixel imaging sensor has a raw dynamic range of 82dB. Each high-transparent pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running a Linux embedded operating system. The imager is cooled to -40C via three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DRTM algorithm that is designed to extend the effective dynamic range of the camera by several orders of magnitude up to 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.
NASA Astrophysics Data System (ADS)
Lynam, Jeff R.
2001-09-01
A more highly integrated, electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology- II (ACT-II) program for enhanced manportable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide-array of sensor technologies that provides the foot-bound soldier and UGV significant advantages and capabilities in lightweight, fieldable, target location, ranging and imaging systems. The unit incorporates a wide field-of-view, 5DEG x 3DEG, uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination is done with a triggered, flash-lamp pumped, eyesafe micro-laser operating in the 1.5 micron region, and is used in conjunction with a range-gated, electron-bombarded CCD digital camera to then image the target objective in a more- narrow, 0.3$DEG, field-of-view. Target range determination is acquired using the integrated LRF and a target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank and corrected magnetic azimuth. Range gate timing and coordinated receiver optics focus control allow for target imaging operations to be optimized. The onboard control electronics provide power efficient, system operations for extended field use periods from the internal, rechargeable battery packs. Image data storage, transmission, and processing performance capabilities are also being incorporated to provide the best all-around support, for the electronic battlefield, in this type of system. The paper will describe flash laser illumination technology, EBCCD camera technology with flash laser detection system, and image resolution improvement through frame averaging.
The Ansel Adams zone system: HDR capture and range compression by chemical processing
NASA Astrophysics Data System (ADS)
McCann, John J.
2010-02-01
We tend to think of digital imaging and the tools of PhotoshopTM as a new phenomenon in imaging. We are also familiar with multiple-exposure HDR techniques intended to capture a wider range of scene information, than conventional film photography. We know about tone-scale adjustments to make better pictures. We tend to think of everyday, consumer, silver-halide photography as a fixed window of scene capture with a limited, standard range of response. This description of photography is certainly true, between 1950 and 2000, for instant films and negatives processed at the drugstore. These systems had fixed dynamic range and fixed tone-scale response to light. All pixels in the film have the same response to light, so the same light exposure from different pixels was rendered as the same film density. Ansel Adams, along with Fred Archer, formulated the Zone System, staring in 1940. It was earlier than the trillions of consumer photos in the second half of the 20th century, yet it was much more sophisticated than today's digital techniques. This talk will describe the chemical mechanisms of the zone system in the parlance of digital image processing. It will describe the Zone System's chemical techniques for image synthesis. It also discusses dodging and burning techniques to fit the HDR scene into the LDR print. Although current HDR imaging shares some of the Zone System's achievements, it usually does not achieve all of them.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C.
Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a referencemore » beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor which was dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. This is due to the dominance of secondary quantum noise in CR. The use of the quantum noise correction factor reduced the difference from the model to the real NPS to generally within 4%. The use of the quantum noise correction improved the conversion of ASEh image to CRc image but had no difference for the conversion to CSI images. Conclusions: A practical method for estimating the NPS at any dose and over a range of beam qualities for mammography has been demonstrated. The noise model was incorporated into a methodology for converting an image to appear as if acquired on a different detector. The method can now be extended to work for a wide range of beam qualities and can be applied to the conversion of mammograms.« less
Rosman, David A; Duszak, Richard; Wang, Wenyi; Hughes, Danny R; Rosenkrantz, Andrew B
2018-02-01
The objective of our study was to use a new modality and body region categorization system to assess changing utilization of noninvasive diagnostic imaging in the Medicare fee-for-service population over a recent 20-year period (1994-2013). All Medicare Part B Physician Fee Schedule services billed between 1994 and 2013 were identified using Physician/Supplier Procedure Summary master files. Billed codes for diagnostic imaging were classified using the Neiman Imaging Types of Service (NITOS) coding system by both modality and body region. Utilization rates per 1000 beneficiaries were calculated for families of services. Among all diagnostic imaging modalities, growth was greatest for MRI (+312%) and CT (+151%) and was lower for ultrasound, nuclear medicine, and radiography and fluoroscopy (range, +1% to +31%). Among body regions, service growth was greatest for brain (+126%) and spine (+74%) imaging; showed milder growth (range, +18% to +67%) for imaging of the head and neck, breast, abdomen and pelvis, and extremity; and showed slight declines (range, -2% to -7%) for cardiac and chest imaging overall. The following specific imaging service families showed massive (> +100%) growth: cardiac CT, cardiac MRI, and breast MRI. NITOS categorization permits identification of temporal shifts in noninvasive diagnostic imaging by specific modality- and region-focused families, providing a granular understanding and reproducible analysis of global changes in imaging overall. Service family-level perspectives may help inform ongoing policy efforts to optimize imaging utilization and appropriateness.
A study of the effects of strong magnetic fields on the image resolution of PET scanners
NASA Astrophysics Data System (ADS)
Burdette, Don J.
Very high resolution images can be achieved in small animal PET systems utilizing solid state silicon pad detectors. In such systems using detectors with sub-millimeter intrinsic resolutions, the range of the positron is the largest contribution to the image blur. The size of the positron range effect depends on the initial positron energy and hence the radioactive tracer used. For higher energy positron emitters, such as 68Ga and 94mTc, the variation of the annihilation point dominates the spatial resolution. In this study two techniques are investigated to improve the image resolution of PET scanners limited by the range of the positron. One, the positron range can be reduced by embedding the PET field of view in a strong magnetic field. We have developed a silicon pad detector based PET instrument that can operate in strong magnetic fields with an image resolution of 0.7 mm FWHM to study this effect. Two, iterative reconstruction methods can be used to statistically correct for the range of the positron. Both strong magnetic fields and iterative reconstruction algorithms that statistically account for the positron range distribution are investigated in this work.
NASA Astrophysics Data System (ADS)
Chen, Q. G.; Zhu, H. H.; Xu, Y.; Lin, B.; Chen, H.
2015-08-01
A quantitative method to discriminate caries lesions for a fluorescence imaging system is proposed in this paper. The autofluorescence spectral investigation of 39 teeth samples classified by the International Caries Detection and Assessment System levels was performed at 405 nm excitation. The major differences in the different caries lesions focused on the relative spectral intensity range of 565-750 nm. The spectral parameter, defined as the ratio of wavebands at 565-750 nm to the whole spectral range, was calculated. The image component ratio R/(G + B) of color components was statistically computed by considering the spectral parameters (e.g. autofluorescence, optical filter, and spectral sensitivity) in our fluorescence color imaging system. Results showed that the spectral parameter and image component ratio presented a linear relation. Therefore, the image component ratio was graded as <0.66, 0.66-1.06, 1.06-1.62, and >1.62 to quantitatively classify sound, early decay, established decay, and severe decay tissues, respectively. Finally, the fluorescence images of caries were experimentally obtained, and the corresponding image component ratio distribution was compared with the classification result. A method to determine the numerical grades of caries using a fluorescence imaging system was proposed. This method can be applied to similar imaging systems.
Analog signal processing for optical coherence imaging systems
NASA Astrophysics Data System (ADS)
Xu, Wei
Optical coherence tomography (OCT) and optical coherence microscopy (OCM) are non-invasive optical coherence imaging techniques, which enable micron-scale resolution, depth resolved imaging capability. Both OCT and OCM are based on Michelson interferometer theory. They are widely used in ophthalmology, gastroenterology and dermatology, because of their high resolution, safety and low cost. OCT creates cross sectional images whereas OCM obtains en face images. In this dissertation, the design and development of three increasingly complicated analog signal processing (ASP) solutions for optical coherence imaging are presented. The first ASP solution was implemented for a time domain OCT system with a Rapid Scanning Optical Delay line (RSOD)-based optical signal modulation and logarithmic amplifier (Log amp) based demodulation. This OCT system can acquire up to 1600 A-scans per second. The measured dynamic range is 106dB at 200A-scan per second. This OCT signal processing electronics includes an off-the-shelf filter box with a Log amp circuit implemented on a PCB board. The second ASP solution was developed for an OCM system with synchronized modulation and demodulation and compensation for interferometer phase drift. This OCM acquired micron-scale resolution, high dynamic range images at acquisition speeds up to 45,000 pixels/second. This OCM ASP solution is fully custom designed on a perforated circuit board. The third ASP solution was implemented on a single 2.2 mm x 2.2 mm complementary metal oxide semiconductor (CMOS) chip. This design is expandable to a multiple channel OCT system. A single on-chip CMOS photodetector and ASP channel was used for coherent demodulation in a time domain OCT system. Cross-sectional images were acquired with a dynamic range of 76dB (limited by photodetector responsivity). When incorporated with a bump-bonded InGaAs photodiode with higher responsivity, the expected dynamic range is close to 100dB.
Heterodyne range imaging as an alternative to photogrammetry
NASA Astrophysics Data System (ADS)
Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard
2007-01-01
Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simplistic fast data processing. In this paper we fist review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.
Color image processing and vision system for an automated laser paint-stripping system
NASA Astrophysics Data System (ADS)
Hickey, John M., III; Hise, Lawson
1994-10-01
Color image processing in machine vision systems has not gained general acceptance. Most machine vision systems use images that are shades of gray. The Laser Automated Decoating System (LADS) required a vision system which could discriminate between substrates of various colors and textures and paints ranging from semi-gloss grays to high gloss red, white and blue (Air Force Thunderbirds). The changing lighting levels produced by the pulsed CO2 laser mandated a vision system that did not require a constant color temperature lighting for reliable image analysis.
Image interpolation used in three-dimensional range data compression.
Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian
2016-05-20
Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
NASA Astrophysics Data System (ADS)
Saari, H.; Akujärvi, A.; Holmlund, C.; Ojanen, H.; Kaivosoja, J.; Nissinen, A.; Niemeläinen, O.
2017-10-01
The accurate determination of the quality parameters of crops requires a spectral range from 400 nm to 2500 nm (Kawamura et al., 2010, Thenkabail et al., 2002). Presently the hyperspectral imaging systems that cover this wavelength range consist of several separate hyperspectral imagers and the system weight is from 5 to 15 kg. In addition the cost of the Short Wave Infrared (SWIR) cameras is high ( 50 k€). VTT has previously developed compact hyperspectral imagers for drones and Cubesats for Visible and Very near Infrared (VNIR) spectral ranges (Saari et al., 2013, Mannila et al., 2013, Näsilä et al., 2016). Recently VTT has started to develop a hyperspectral imaging system that will enable imaging simultaneously in the Visible, VNIR, and SWIR spectral bands. The system can be operated from a drone, on a camera stand, or attached to a tractor. The targeted main applications of the DroneKnowledge hyperspectral system are grass, peas, and cereals. In this paper the characteristics of the built system are shortly described. The system was used for spectral measurements of wheat, several grass species and pea plants fixed to the camera mount in the test fields in Southern Finland and in the green house. The wheat, grass and pea field measurements were also carried out using the system mounted on the tractor. The work is part of the Finnish nationally funded DroneKnowledge - Towards knowledge based export of small UAS remote sensing technology project.
NASA Astrophysics Data System (ADS)
Tsuji, Hidenobu; Imaki, Masaharu; Kotake, Nobuki; Hirai, Akihito; Nakaji, Masaharu; Kameyama, Shumpei
2017-03-01
We demonstrate a range imaging pulsed laser sensor with two-dimensional scanning of a transmitted beam and a scanless receiver using a high-aspect avalanche photodiode (APD) array for the eye-safe wavelength. The system achieves a high frame rate and long-range imaging with a relatively simple sensor configuration. We developed a high-aspect APD array for the wavelength of 1.5 μm, a receiver integrated circuit, and a range and intensity detector. By combining these devices, we realized 160×120 pixels range imaging with a frame rate of 8 Hz at a distance of about 50 m.
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables on the quality of the correction, that is, the dark image, the base correction image, and the reference level, and the range of application of the correction using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image and taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
NASA Astrophysics Data System (ADS)
Lin, Hsin-Hon; Chang, Hao-Ting; Chao, Tsi-Chian; Chuang, Keh-Shih
2017-08-01
In vivo range verification plays an important role in proton therapy to fully utilize the benefits of the Bragg peak (BP) for delivering high radiation dose to tumor, while sparing the normal tissue. For accurately locating the position of BP, camera equipped with collimators (multi-slit and knife-edge collimator) to image prompt gamma (PG) emitted along the proton tracks in the patient have been proposed for range verification. The aim of the work is to compare the performance of multi-slit collimator and knife-edge collimator for non-invasive proton beam range verification. PG imaging was simulated by a validated GATE/GEANT4 Monte Carlo code to model the spot-scanning proton therapy and cylindrical PMMA phantom in detail. For each spot, 108 protons were simulated. To investigate the correlation between the acquired PG profile and the proton range, the falloff regions of PG profiles were fitted with a 3-line-segment curve function as the range estimate. Factors including the energy window setting, proton energy, phantom size, and phantom shift that may influence the accuracy of detecting range were studied. Results indicated that both collimator systems achieve reasonable accuracy and good response to the phantom shift. The accuracy of range predicted by multi-slit collimator system is less affected by the proton energy, while knife-edge collimator system can achieve higher detection efficiency that lead to a smaller deviation in predicting range. We conclude that both collimator systems have potentials for accurately range monitoring in proton therapy. It is noted that neutron contamination has a marked impact on range prediction of the two systems, especially in multi-slit system. Therefore, a neutron reduction technique for improving the accuracy of range verification of proton therapy is needed.
Handheld microwave bomb-detecting imaging system
NASA Astrophysics Data System (ADS)
Gorwara, Ashok; Molchanov, Pavlo
2017-05-01
Proposed novel imaging technique will provide all weather high-resolution imaging and recognition capability for RF/Microwave signals with good penetration through highly scattered media: fog, snow, dust, smoke, even foliage, camouflage, walls and ground. Image resolution in proposed imaging system is not limited by diffraction and will be determined by processor and sampling frequency. Proposed imaging system can simultaneously cover wide field of view, detect multiple targets and can be multi-frequency, multi-function. Directional antennas in imaging system can be close positioned and installed in cell phone size handheld device, on small aircraft or distributed around protected border or object. Non-scanning monopulse system allows dramatically decrease in transmitting power and at the same time provides increased imaging range by integrating 2-3 orders more signals than regular scanning imaging systems.
Development of Fluorescence Imaging Lidar for Boat-Based Coral Observation
NASA Astrophysics Data System (ADS)
Sasano, Masahiko; Imasato, Motonobu; Yamano, Hiroya; Oguma, Hiroyuki
2016-06-01
A fluorescence imaging lidar system installed in a boat-towable buoy has been developed for the observation of reef-building corals. Long-range fluorescent images of the sea bed can be recorded in the daytime with this system. The viability of corals is clear in these fluorescent images because of the innate fluorescent proteins. In this study, the specifications and performance of the system are shown.
NASA Astrophysics Data System (ADS)
Hennessy, Ricky; Koo, Chiwan; Ton, Phuc; Han, Arum; Righetti, Raffaella; Maitland, Kristen C.
2011-03-01
Ultrasound poroelastography can quantify structural and mechanical properties of tissues such as stiffness, compressibility, and fluid flow rate. This novel ultrasound technique is being explored to detect tissue changes associated with lymphatic disease. We have constructed a macroscopic fluorescence imaging system to validate ultrasonic fluid flow measurements and to provide high resolution imaging of microfluidic phantoms. The optical imaging system is composed of a white light source, excitation and emission filters, and a camera with a zoom lens. The field of view can be adjusted from 100 mm x 75 mm to 10 mm x 7.5 mm. The microfluidic device is made of polydimethylsiloxane (PDMS) and has 9 channels, each 40 μm deep with widths ranging from 30 μm to 200 μm. A syringe pump was used to propel water containing 15 μm diameter fluorescent microspheres through the microchannels, with flow rates ranging from 0.5 μl/min to 10 μl/min. Video was captured at a rate of 25 frames/sec. The velocity of the microspheres in the microchannels was calculated using an algorithm that tracked the movement of the fluorescent microspheres. The imaging system was able to measure particle velocities ranging from 0.2 mm/sec to 10 mm/sec. The range of flow velocities of interest in lymph vessels is between 1 mm/sec to 10 mm/sec; therefore our imaging system is sufficient to measure particle velocity in phantoms modeling lymphatic flow.
Development of proton CT imaging system using plastic scintillator and CCD camera
NASA Astrophysics Data System (ADS)
Tanaka, Sodai; Nishio, Teiji; Matsushita, Keiichiro; Tsuneda, Masato; Kabuki, Shigeto; Uesaka, Mitsuru
2016-06-01
A proton computed tomography (pCT) imaging system was constructed for evaluation of the error of an x-ray CT (xCT)-to-WEL (water-equivalent length) conversion in treatment planning for proton therapy. In this system, the scintillation light integrated along the beam direction is obtained by photography using the CCD camera, which enables fast and easy data acquisition. The light intensity is converted to the range of the proton beam using a light-to-range conversion table made beforehand, and a pCT image is reconstructed. An experiment for demonstration of the pCT system was performed using a 70 MeV proton beam provided by the AVF930 cyclotron at the National Institute of Radiological Sciences. Three-dimensional pCT images were reconstructed from the experimental data. A thin structure of approximately 1 mm was clearly observed, with spatial resolution of pCT images at the same level as that of xCT images. The pCT images of various substances were reconstructed to evaluate the pixel value of pCT images. The image quality was investigated with regard to deterioration including multiple Coulomb scattering.
NASA Astrophysics Data System (ADS)
Conard, S. J.; Weaver, H. A.; Núñez, J. I.; Taylor, H. W.; Hayes, J. R.; Cheng, A. F.; Rodgers, D. J.
2017-09-01
The Long-Range Reconnaissance Imager (LORRI) is a high-resolution imaging instrument on the New Horizons spacecraft. LORRI collected over 5000 images during the approach and fly-by of the Pluto system in 2015, including the highest resolution images of Pluto and Charon and the four much smaller satellites (Styx, Nix, Kerberos, and Hydra) near the time of closest approach on 14 July 2015. LORRI is a narrow field of view (0.29°), Ritchey-Chrétien telescope with a 20.8 cm diameter primary mirror and a three-lens field flattener. The telescope has an effective focal length of 262 cm. The focal plane unit consists of a 1024 × 1024 pixel charge-coupled device (CCD) detector operating in frame transfer mode. LORRI provides panchromatic imaging over a bandpass that extends approximately from 350 nm to 850 nm. The instrument operates in an extreme thermal environment, viewing space from within the warm spacecraft. For this reason, LORRI has a silicon carbide optical system with passive thermal control, designed to maintain focus without adjustment over a wide temperature range from -100 C to +50 C. LORRI operated flawlessly throughout the encounter period, providing both science and navigation imaging of the Pluto system. We describe the preparations for the Pluto system encounter, including pre-encounter rehearsals, calibrations, and navigation imaging. In addition, we describe LORRI operations during the encounter, and the resulting imaging performance. Finally, we also briefly describe the post-Pluto encounter imaging of other Kuiper belt objects and the plans for the upcoming encounter with KBO 2014 MU69.
A Practical and Portable Solids-State Electronic Terahertz Imaging System
Smart, Ken; Du, Jia; Li, Li; Wang, David; Leslie, Keith; Ji, Fan; Li, Xiang Dong; Zeng, Da Zhang
2016-01-01
A practical compact solid-state terahertz imaging system is presented. Various beam guiding architectures were explored and hardware performance assessed to improve its compactness, robustness, multi-functionality and simplicity of operation. The system performance in terms of image resolution, signal-to-noise ratio, the electronic signal modulation versus optical chopper, is evaluated and discussed. The system can be conveniently switched between transmission and reflection mode according to the application. A range of imaging application scenarios was explored and images of high visual quality were obtained in both transmission and reflection mode. PMID:27110791
Performance evaluation of infrared imaging system in field test
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie
2014-11-01
Infrared imaging system has been applied widely in both military and civilian fields. Since the infrared imager has various types and different parameters, for system manufacturers and customers, there is great demand for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first generation IR imager was developed, the standard method to assess the performance has been the MRTD or related improved methods which are not perfect adaptable for current linear scanning imager or 2D staring imager based on FPA detector. For this problem, this paper describes an evaluation method based on the triangular orientation discrimination metric which is considered as the effective and emerging method to evaluate the synthesis performance of EO system. To realize the evaluation in field test, an experiment instrument is developed. And considering the importance of operational environment, the field test is carried in practical atmospheric environment. The test imagers include panoramic imaging system and staring imaging systems with different optics and detectors parameters (both cooled and uncooled). After showing the instrument and experiment setup, the experiment results are shown. The target range performance is analyzed and discussed. In data analysis part, the article gives the range prediction values obtained from TOD method, MRTD method and practical experiment, and shows the analysis and results discussion. The experimental results prove the effectiveness of this evaluation tool, and it can be taken as a platform to give the uniform performance prediction reference.
Kim, Min Young; Lee, Hyunkee; Cho, Hyungsuck
2008-04-10
One major research issue associated with 3D perception by robotic systems is the creation of efficient sensor systems that can generate dense range maps reliably. A visual sensor system for robotic applications is developed that is inherently equipped with two types of sensor, an active trinocular vision and a passive stereo vision. Unlike in conventional active vision systems that use a large number of images with variations of projected patterns for dense range map acquisition or from conventional passive vision systems that work well on specific environments with sufficient feature information, a cooperative bidirectional sensor fusion method for this visual sensor system enables us to acquire a reliable dense range map using active and passive information simultaneously. The fusion algorithms are composed of two parts, one in which the passive stereo vision helps active vision and the other in which the active trinocular vision helps the passive one. The first part matches the laser patterns in stereo laser images with the help of intensity images; the second part utilizes an information fusion technique using the dynamic programming method in which image regions between laser patterns are matched pixel-by-pixel with help of the fusion results obtained in the first part. To determine how the proposed sensor system and fusion algorithms can work in real applications, the sensor system is implemented on a robotic system, and the proposed algorithms are applied. A series of experimental tests is performed for a variety of configurations of robot and environments. The performance of the sensor system is discussed in detail.
Pulsed holographic system for imaging through spatially extended scattering media
NASA Astrophysics Data System (ADS)
Kanaev, A. V.; Judd, K. P.; Lebow, P.; Watnik, A. T.; Novak, K. M.; Lindle, J. R.
2017-10-01
Imaging through scattering media is a highly sought capability for military, industrial, and medical applications. Unfortunately, nearly all recent progress was achieved in microscopic light propagation and/or light propagation through thin or weak scatterers which is mostly pertinent to medical research field. Sensing at long ranges through extended scattering media, for example turbid water or dense fog, still represents significant challenge and the best results are demonstrated using conventional approaches of time- or range-gating. The imaging range of such systems is constrained by their ability to distinguish a few ballistic photons that reach the detector from the background, scattered, and ambient photons, as well as from detector noise. Holography can potentially enhance time-gating by taking advantage of extra signal filtering based on coherence properties of the ballistic photons as well as by employing coherent addition of multiple frames. In a holographic imaging scheme ballistic photons of the imaging pulse are reflected from a target and interfered with the reference pulse at the detector creating a hologram. Related approaches were demonstrated previously in one-way imaging through thin biological samples and other microscopic scale scatterers. In this work, we investigate performance of holographic imaging systems under conditions of extreme scattering (less than one signal photon per pixel signal), demonstrate advantages of coherent addition of images recovered from holograms, and discuss image quality dependence on the ratio of the signal and reference beam power.
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-01-01
With their significant features, the applications of complementary metal-oxide semiconductor (CMOS) image sensors covers a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field. PMID:27879789
Near real-time stereo vision system
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Matthies, Larry H. (Inventor)
1993-01-01
The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system is comprised of two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
NASA Astrophysics Data System (ADS)
Göhler, Benjamin; Lutzmann, Peter
2017-10-01
Primarily, a laser gated-viewing (GV) system provides range-gated 2D images without any range resolution within the range gate. By combining two GV images with slightly different gate positions, 3D information within a part of the range gate can be obtained. The depth resolution is higher (super-resolution) than the minimal gate shift step size in a tomographic sequence of the scene. For a state-of-the-art system with a typical frame rate of 20 Hz, the time difference between the two required GV images is 50 ms which may be too long in a dynamic scenario with moving objects. Therefore, we have applied this approach to the reset and signal level images of a new short-wave infrared (SWIR) GV camera whose read-out integrated circuit supports correlated double sampling (CDS) actually intended for the reduction of kTC noise (reset noise). These images are extracted from only one single laser pulse with a marginal time difference in between. The SWIR GV camera consists of 640 x 512 avalanche photodiodes based on mercury cadmium telluride with a pixel pitch of 15 μm. A Q-switched, flash lamp pumped solid-state laser with 1.57 μm wavelength (OPO), 52 mJ pulse energy after beam shaping, 7 ns pulse length and 20 Hz pulse repetition frequency is used for flash illumination. In this paper, the experimental set-up is described and the operating principle of CDS is explained. The method of deriving super-resolution depth information from a GV system by using CDS is introduced and optimized. Further, the range accuracy is estimated from measured image data.
Real-time, continuous-wave terahertz imaging using a microbolometer focal-plane array
NASA Technical Reports Server (NTRS)
Hu, Qing (Inventor); Min Lee, Alan W. (Inventor)
2010-01-01
The present invention generally provides a terahertz (THz) imaging system that includes a source for generating radiation (e.g., a quantum cascade laser) having one or more frequencies in a range of about 0.1 THz to about 10 THz, and a two-dimensional detector array comprising a plurality of radiation detecting elements that are capable of detecting radiation in that frequency range. An optical system directs radiation from the source to an object to be imaged. The detector array detects at least a portion of the radiation transmitted through the object (or reflected by the object) so as to form a THz image of that object.
Method of passive ranging from infrared image sequence based on equivalent area
NASA Astrophysics Data System (ADS)
Yang, Weiping; Shen, Zhenkang
2007-11-01
The information of range between missile and targets is important not only to missile controlling component, but also to automatic target recognition, so studying the technique of passive ranging from infrared images has important theoretic and practical meanings. Here we tried to get the range between guided missile and target and help to identify targets or dodge a hit. The issue of distance between missile and target is currently a hot and difficult research content. As all know, infrared imaging detector can not range so that it restricts the functions of the guided information processing system based on infrared images. In order to break through the technical puzzle, we investigated the principle of the infrared imaging, after analysing the imaging geometric relationship between the guided missile and the target, we brought forward the method of passive ranging based on equivalent area and provided mathematical analytic formulas. Validating Experiments demonstrate that the presented method has good effect, the lowest relative error can reach 10% in some circumstances.
Small SWAP 3D imaging flash ladar for small tactical unmanned air systems
NASA Astrophysics Data System (ADS)
Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.
2015-05-01
The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m3 and <350 W power. The system is modeled using LadarSIM, a MATLAB® and Simulink®- based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.
Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation
NASA Astrophysics Data System (ADS)
Bourree, Loig E.
2014-05-01
Identification of potential threats in low-light conditions through imaging is commonly achieved through closed-circuit television (CCTV) and surveillance cameras by combining the extended near infrared (NIR) response (800-10000nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for purposes of long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long-range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensors may be reduced to increase the photon integration time and thus improve the signal to noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light level imaging. By combining this CIS with field programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read noise imaging with enhanced signal-to-noise ratio at quarter moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD for long-range observation applications.
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-03-13
With their significant features, the applications of complementary metal-oxidesemiconductor (CMOS) image sensors covers a very extensive range, from industrialautomation to traffic applications such as aiming systems, blind guidance, active/passiverange finders, etc. In this paper CMOS image sensor-based active and passive rangefinders are presented. The measurement scheme of the proposed active/passive rangefinders is based on a simple triangulation method. The designed range finders chieflyconsist of a CMOS image sensor and some light sources such as lasers or LEDs. Theimplementation cost of our range finders is quite low. Image processing software to adjustthe exposure time (ET) of the CMOS image sensor to enhance the performance oftriangulation-based range finders was also developed. An extensive series of experimentswere conducted to evaluate the performance of the designed range finders. From theexperimental results, the distance measurement resolutions achieved by the active rangefinder and the passive range finder can be better than 0.6% and 0.25% within themeasurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests onapplications of the developed CMOS image sensor-based range finders to the automotivefield were also conducted. The experimental results demonstrated that our range finders arewell-suited for distance measurements in this field.
Cyclops: single-pixel imaging lidar system based on compressive sensing
NASA Astrophysics Data System (ADS)
Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.
2017-11-01
Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in the support of autonomous guidance, navigation and control operations, as they can provide very accurate, wide range, high-resolution distance measurements as required for the exploration missions. Imaging LIDARs can be used at critical stages of these exploration missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Despite these devices have been commercially available and used for long in diverse metrology and ranging applications, their size, mass and power consumption are still far from being suitable and attractive for space exploratory missions. Here, we describe a compact Single-Pixel Imaging LIDAR System that is based on a compressive sensing technique. The application of the compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster scanning and array based counterparts in terms of noise performance, and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability is expected in contrast with systems using large format photodetector arrays. Furthermore, the event of failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation. Range Finder: outputs the average distance between the system and the area of the target under illumination; Attitude Meter: provides the slope of the target surface based on distance measurements in three areas of the target; 3D Imager: produces 3D ranged images of the target surface. The implemented prototype demonstrated a frame rate of 30 mHz for 16x16 pixels images, a transversal (xy) resolution of 2 cm at 10 m for images with 64x64 pixels and the range (z) resolution proved to be better than 1 cm. The experimental results obtained for the "3D imaging" mode of operation demonstrated that it was possible to reconstruct spherical smooth surfaces. The proposed solution demonstrates a great potential for: miniaturization; increase spatial resolution without using large format detector arrays; eliminate the need for scanning mechanisms; implementing simple and robust configurations.
Applications of superconducting bolometers in security imaging
NASA Astrophysics Data System (ADS)
Luukanen, A.; Leivo, M. M.; Rautiainen, A.; Grönholm, M.; Toivanen, H.; Grönberg, L.; Helistö, P.; Mäyrä, A.; Aikio, M.; Grossman, E. N.
2012-12-01
Millimeter-wave (MMW) imaging systems are currently undergoing deployment World-wide for airport security screening applications. Security screening through MMW imaging is facilitated by the relatively good transmission of these wavelengths through common clothing materials. Given the long wavelength of operation (frequencies between 20 GHz to ~ 100 GHz, corresponding to wavelengths between 1.5 cm and 3 mm), existing systems are suited for close-range imaging only due to substantial diffraction effects associated with practical aperture diameters. The present and arising security challenges call for systems that are capable of imaging concealed threat items at stand-off ranges beyond 5 meters at near video frame rates, requiring substantial increase in operating frequency in order to achieve useful spatial resolution. The construction of such imaging systems operating at several hundred GHz has been hindered by the lack of submm-wave low-noise amplifiers. In this paper we summarize our efforts in developing a submm-wave video camera which utilizes cryogenic antenna-coupled microbolometers as detectors. Whilst superconducting detectors impose the use of a cryogenic system, we argue that the resulting back-end complexity increase is a favorable trade-off compared to complex and expensive room temperature submm-wave LNAs both in performance and system cost.
Design and testing of a dual-band enhanced vision system
NASA Astrophysics Data System (ADS)
Way, Scott P.; Kerr, Richard; Imamura, Joseph J.; Arnoldy, Dan; Zeylmaker, Dick; Zuro, Greg
2003-09-01
An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts. It has the ability to provide a single image from uncooled infrared imagers combined with SWIR, NIR or LLLTV sensors. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions but can also be used in a variety of applications where the fusion of dual band or multiband imagery is required. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for the fusion system.
Active confocal imaging for visual prostheses
Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli
2014-01-01
There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focused plane with objects in it. After capturing a confocal image, a de-cluttering process removes the clutter based on blur difference. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low resolution and limited dynamic range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light field imaging. PMID:25448710
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breitbach, Elizabeth K.; Maltz, Jonathan S.; Gangadharan, Bijumon
2011-11-15
Purpose: To quantify the improvement in megavoltage cone beam computed tomography (MVCBCT) image quality enabled by the combination of a 4.2 MV imaging beam line (IBL) with a carbon electron target and a detector system equipped with a novel sintered pixelated array (SPA) of translucent Gd{sub 2}O{sub 2}S ceramic scintillator. Clinical MVCBCT images are traditionally acquired with the same 6 MV treatment beam line (TBL) that is used for cancer treatment, a standard amorphous Si (a-Si) flat panel imager, and the Kodak Lanex Fast-B (LFB) scintillator. The IBL produces a greater fluence of keV-range photons than the TBL, to whichmore » the detector response is more optimal, and the SPA is a more efficient scintillator than the LFB. Methods: A prototype IBL + SPA system was installed on a Siemens Oncor linear accelerator equipped with the MVision{sup TM} image guided radiation therapy (IGRT) system. A SPA strip consisting of four neighboring tiles and measuring 40 cm by 10.96 cm in the crossplane and inplane directions, respectively, was installed in the flat panel imager. Head- and pelvis-sized phantom images were acquired at doses ranging from 3 to 60 cGy with three MVCBCT configurations: TBL + LFB, IBL + LFB, and IBL + SPA. Phantom image quality at each dose was quantified using the contrast-to-noise ratio (CNR) and modulation transfer function (MTF) metrics. Head and neck, thoracic, and pelvic (prostate) cancer patients were imaged with the three imaging system configurations at multiple doses ranging from 3 to 15 cGy. The systems were assessed qualitatively from the patient image data. Results: For head and neck and pelvis-sized phantom images, imaging doses of 3 cGy or greater, and relative electron densities of 1.09 and 1.48, the CNR average improvement factors for imaging system change of TBL + LFB to IBL + LFB, IBL + LFB to IBL + SPA, and TBL + LFB to IBL + SPA were 1.63 (p < 10{sup -8}), 1.64 (p < 10{sup -13}), 2.66 (p < 10{sup -9}), respectively. For all imaging doses, soft tissue contrast was more easily differentiated on IBL + SPA head and neck and pelvic images than TBL + LFB and IBL + LFB. IBL + SPA thoracic images were comparable to IBL + LFB images, but less noisy than TBL + LFB images at all imaging doses considered. The mean MTFs over all imaging doses were comparable, at within 3%, for all imaging system configurations for both the head- and pelvis-sized phantoms. Conclusions: Since CNR scales with the square root of imaging dose, changing from TBL + LFB to IBL + LFB and IBL + LFB to IBL + SPA reduces the imaging dose required to obtain a given CNR by factors of 0.38 and 0.37, respectively. MTFs were comparable between imaging system configurations. IBL + SPA patient image quality was always better than that of the TBL + LFB system and as good as or better than that of the IBL + LFB system, for a given dose.« less
Shkirkova, Kristina; Akam, Eftitan Y; Huang, Josephine; Sheth, Sunil A; Nour, May; Liang, Conrad W; McManus, Michael; Trinh, Van; Duckwiler, Gary; Tarpley, Jason; Vinuela, Fernando; Saver, Jeffrey L
2017-12-01
Background Rapid dissemination and coordination of clinical and imaging data among multidisciplinary team members are essential for optimal acute stroke care. Aim To characterize the feasibility and utility of the Synapse Emergency Room mobile (Synapse ERm) informatics system. Methods We implemented the Synapse ERm system for integration of clinical data, computerized tomography, magnetic resonance, and catheter angiographic imaging, and real-time stroke team communications, in consecutive acute neurovascular patients at a Comprehensive Stroke Center. Results From May 2014 to October 2014, the Synapse ERm application was used by 33 stroke team members in 84 Code Stroke alerts. Patient age was 69.6 (±17.1), with 41.5% female. Final diagnosis was: ischemic stroke 64.6%, transient ischemic attack 7.3%, intracerebral hemorrhage 6.1%, and cerebrovascular-mimic 22.0%. Each patient Synapse ERm record was viewed by a median of 10 (interquartile range 6-18) times by a median of 3 (interquartile range 2-4) team members. The most used feature was computerized tomography, magnetic resonance, and catheter angiography image display. In-app tweet team, communications were sent by median 1 (interquartile range 0-1, range 0-13) users per case and viewed by median 1 (interquartile range 0-3, range 0-44) team members. Use of the system was associated with rapid treatment times, faster than national guidelines, including median door-to-needle 51.0 min (interquartile range 40.5-69.5) and median door-to-groin 94.5 min (interquartile range 85.5-121.3). In user surveys, the mobile information platform was judged easy to employ in 91% (95% confidence interval 65%-99%) of uses and of added help in stroke management in 50% (95% confidence interval 22%-78%). Conclusion The Synapse ERm mobile platform for stroke team distribution and integration of clinical and imaging data was feasible to implement, showed high ease of use, and moderate perceived added utility in therapeutic management.
Moving target detection in flash mode against stroboscopic mode by active range-gated laser imaging
NASA Astrophysics Data System (ADS)
Zhang, Xuanyu; Wang, Xinwei; Sun, Liang; Fan, Songtao; Lei, Pingshun; Zhou, Yan; Liu, Yuliang
2018-01-01
Moving target detection is important for the application of target tracking and remote surveillance in active range-gated laser imaging. This technique has two operation modes based on the difference of the number of pulses per frame: stroboscopic mode with the accumulation of multiple laser pulses per frame and flash mode with a single shot of laser pulse per frame. In this paper, we have established a range-gated laser imaging system. In the system, two types of lasers with different frequency were chosen for the two modes. Electric fan and horizontal sliding track were selected as the moving targets to compare the moving blurring between two modes. Consequently, the system working in flash mode shows more excellent performance in motion blurring against stroboscopic mode. Furthermore, based on experiments and theoretical analysis, we presented the higher signal-to-noise ratio of image acquired by stroboscopic mode than flash mode in indoor and underwater environment.
Motion effects in multistatic millimeter-wave imaging systems
NASA Astrophysics Data System (ADS)
Schiessl, Andreas; Ahmed, Sherif Sayed; Schmidt, Lorenz-Peter
2013-10-01
At airport security checkpoints, authorities are demanding improved personnel screening devices for increased security. Active mm-wave imaging systems deliver the high quality images needed for reliable automatic detection of hidden threats. As mm-wave imaging systems assume static scenarios, motion effects caused by movement of persons during the screening procedure can degrade image quality, so very short measurement time is required. Multistatic imaging array designs and fully electronic scanning in combination with digital beamforming offer short measurement time together with high resolution and high image dynamic range, which are critical parameters for imaging systems used for passenger screening. In this paper, operational principles of such systems are explained, and the performance of the imaging systems with respect to motion within the scenarios is demonstrated using mm-wave images of different test objects and standing as well as moving persons. Electronic microwave imaging systems using multistatic sparse arrays are suitable for next generation screening systems, which will support on the move screening of passengers.
NASA Astrophysics Data System (ADS)
Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of algebraic reconstruction technique (ART) method under variations in key imaging parameters, quality metrics were conducted using LUNGMAN phantom included grand-glass opacity (GGO) tumor. Reconstructed images were acquired from the total 41 projection images over a total angular range of +/-20°. We evaluated contrast-to-noise ratio (CNR) and artifacts spread function (ASF) to investigate the effect of reconstruction parameters such as number of iterations, relaxation parameter and initial guess on image quality. We found that proper value of ART relaxation parameter could improve image quality from the same projection. In this study, proper value of relaxation parameters for zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of ASF were acquired in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, BP initial guess for ART method could provide better image quality than ZI initial guess. In conclusion, ART method with proper reconstruction parameters could improve image quality due to the limited angular range in CDT system.
Forward and backward tone mapping of high dynamic range images based on subband architecture
NASA Astrophysics Data System (ADS)
Bouzidi, Ines; Ouled Zaid, Azza
2015-01-01
This paper presents a novel High Dynamic Range (HDR) tone mapping (TM) system based on sub-band architecture. Standard wavelet filters of Daubechies, Symlets, Coiflets and Biorthogonal were used to estimate the proposed system performance in terms of Low Dynamic Range (LDR) image quality and reconstructed HDR image fidelity. During TM stage, the HDR image is firstly decomposed in sub-bands using symmetrical analysis-synthesis filter bank. The transform coefficients are then rescaled using a predefined gain map. The inverse Tone Mapping (iTM) stage is straightforward. Indeed, the LDR image passes through the same sub-band architecture. But, instead of reducing the dynamic range, the LDR content is boosted to an HDR representation. Moreover, in our TM sheme, we included an optimization module to select the gain map components that minimize the reconstruction error, and consequently resulting in high fidelity HDR content. Comparisons with recent state-of-the-art methods have shown that our method provides better results in terms of visual quality and HDR reconstruction fidelity using objective and subjective evaluations.
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times with the fixed lesser interval. Next, the fusion of grey consistency and logarithm demodulation are applied to obtain full and lower noise projection with a high dynamic range (HDR). In addition, for the noise suppression problem of the analytical method, EM-TV (expectation maximization-total Jvariation) iteration reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide a higher reconstruction quality relative to the fusion reconstruction method.
Multispectral THz-VIS passive imaging system for hidden threats visualization
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw
2013-10-01
Terahertz imaging, is the latest entry into the crowded field of imaging technologies. Many applications are emerging for the relatively new technology. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, clothes, wood, and ceramics that are usually opaque at optical wavelengths. The T-rays have large potential in the field of hidden objects detection because it is not harmful to humans. The main difficulty in the THz imaging systems is low image quality thus it is justified to combine THz images with the high-resolution images from a visible camera. An imaging system is usually composed of various subsystems. Many of the imaging systems use imaging devices working in various spectral ranges. Our goal is to build a system harmless to humans for screening and detection of hidden objects using a THz and VIS cameras.
NASA Astrophysics Data System (ADS)
Ready, John Francis, III
Proton beam usage to treat cancer has recently experienced rapid growth, as it offers the ability to target dose delivery in a patient more precisely than traditional x-ray treatment methods. Protons stop within the patient, delivering the maximum dose at the end of their track--a phenomenon described as the Bragg peak. However, because a large dose is delivered to a small volume, proton therapy is very sensitive to errors in patient setup and treatment planning calculations. Additionally, because all primary beam particles stop in the patient, there is no direct information available to verify dose delivery. These factors contribute to the range uncertainty in proton therapy, which ultimately hinders its clinical usefulness. A reliable method of proton range verification would allow the clinician to fully utilize the precise dose delivery of the Bragg peak. Several methods to verify proton range detect secondary emissions, especially prompt gamma ray (PG) emissions. However, detection of PGs is challenging due to their high energy (2-10 MeV) and low attenuation coefficients, which limit PG interactions in materials. Therefore, detection and collimation methods must be specifically designed for prompt gamma ray imaging (PGI) applications. In addition, production of PGs relies on delivering a dose of radiation to the patient. Ideally, verification of the Bragg peak location exposes patients to a minimal dose, thus limiting the PG counts available to the imaging system. An additional challenge for PGI is the lack of accurate simulation models, which limit the study of PG production characteristics and the relationship between PG distribution and dose delivery. Specific limitations include incorrect modeling of the reaction cross sections, gamma emission yields, and angular distribution of emission for specific photon energies. While simulations can still be valuable assets in designing a system to detect and image PGs, until new models are developed and incorporated into Monte Carlo simulation packages, simulations cannot be used to study the production and location of PG emissions during proton therapy. This work presents a novel system to image PGs emitted during proton therapy to verify proton beam range. The imaging system consists of a multi-slit collimator paired with a position-sensitive LSO scintillation detector. This innovative design is the first collimated imaging system to implement two-dimensional (2-D) imaging for PG proton beam range verification, while also providing a larger field of view than compared to single-slit collimator systems. Other, uncollimated imaging systems have been explored for PGI applications, such as Compton cameras. However, Compton camera designs are severely limited by counting rate capabilities. A recent Compton camera study reported count rate capability of about 5 kHz. However, at a typical clinical beam current of 1.0 nA, the estimated PG emission rate would be 6 x 108 per second. After accounting for distance to the detector and interaction efficiencies, the detection system will still be overwhelmed with counts in the MHz range, causing false coincidences and hindering the operation of the imaging system. Initial measurements using 50 MeV protons demonstrated the ability of our system to reconstruct 2-D PG distributions at clinical beam currents. A Bragg peak localization precision of 1 mm (2sigma) was achieved with delivery of (1.7 +/- 0.8) x 108 protons into a PMMA target, suggesting the ability of the system to detect relative shifts in proton range while delivering fewer protons than used in a typical treatment fraction. This is key, as the ideal system allows the clinician to verify proton range when delivering only a small portion of the prescribed dose, preventing the mistreatment of the patient. Additionally, the absolute position of the Bragg peak was identified to within 1.6 mm (2sigma) with 5.6 x 1010 protons delivered. These promising results warrant further investigation and system optimization for clinical implementation. While further measurements at clinical beam energy levels will be required to verify system performance, these preliminary results provide evidence that 2-D image reconstruction, with 1-2 mm accuracy, is possible with this design. Implementing such a system in the clinical setting would greatly improve proton therapy cancer treatment outcomes.
High-performance sub-terahertz transmission imaging system for food inspection
Ok, Gyeongsik; Park, Kisang; Chun, Hyang Sook; Chang, Hyun-Joo; Lee, Nari; Choi, Sung-Wook
2015-01-01
Unlike X-ray systems, a terahertz imaging system can distinguish low-density materials in a food matrix. For applying this technique to food inspection, imaging resolution and acquisition speed ought to be simultaneously enhanced. Therefore, we have developed the first continuous-wave sub-terahertz transmission imaging system with a polygonal mirror. Using an f-theta lens and a polygonal mirror, beam scanning is performed over a range of 150 mm. For obtaining transmission images, the line-beam is incorporated with sample translation. The imaging system demonstrates that a pattern with 2.83 mm line-width at 210 GHz can be identified with a scanning speed of 80 mm/s. PMID:26137392
NASA Astrophysics Data System (ADS)
Yin, Biwei; Liang, Chia-Pin; Vuong, Barry; Tearney, Guillermo J.
2017-02-01
Conventional OCT images, obtained using a focused Gaussian beam have a lateral resolution of approximately 30 μm and a depth of focus (DOF) of 2-3 mm, defined as the confocal parameter (twice of Gaussian beam Rayleigh range). Improvement of lateral resolution without sacrificing imaging range requires techniques that can extend the DOF. Previously, we described a self-imaging wavefront division optical system that provided an estimated one order of magnitude DOF extension. In this study, we further investigate the properties of the coaxially focused multi-mode (CAFM) beam created by this self-imaging wavefront division optical system and demonstrate its feasibility for real-time biological tissue imaging. Gaussian beam and CAFM beam fiber optic probes with similar numerical apertures (objective NA≈0.5) were fabricated, providing lateral resolutions of approximately 2 μm. Rigorous lateral resolution characterization over depth was performed for both probes. The CAFM beam probe was found to be able to provide a DOF that was approximately one order of magnitude greater than that of Gaussian beam probe. By incorporating the CAFM beam fiber optic probe into a μOCT system with 1.5 μm axial resolution, we were able to acquire cross-sectional images of swine small intestine ex vivo, enabling the visualization of subcellular structures, providing high quality OCT images over more than a 300 μm depth range.
Phase calibration target for quantitative phase imaging with ptychography.
Godden, T M; Muñiz-Piniella, A; Claverley, J D; Yacoot, A; Humphry, M J
2016-04-04
Quantitative phase imaging (QPI) utilizes refractive index and thickness variations that lead to optical phase shifts. This gives contrast to images of transparent objects. In quantitative biology, phase images are used to accurately segment cells and calculate properties such as dry mass, volume and proliferation rate. The fidelity of the measured phase shifts is of critical importance in this field. However to date, there has been no standardized method for characterizing the performance of phase imaging systems. Consequently, there is an increasing need for protocols to test the performance of phase imaging systems using well-defined phase calibration and resolution targets. In this work, we present a candidate for a standardized phase resolution target, and measurement protocol for the determination of the transfer of spatial frequencies, and sensitivity of a phase imaging system. The target has been carefully designed to contain well-defined depth variations over a broadband range of spatial frequencies. In order to demonstrate the utility of the target, we measure quantitative phase images on a ptychographic microscope, and compare the measured optical phase shifts with Atomic Force Microscopy (AFM) topography maps and surface profile measurements from coherence scanning interferometry. The results show that ptychography has fully quantitative nanometer sensitivity in optical path differences over a broadband range of spatial frequencies for feature sizes ranging from micrometers to hundreds of micrometers.
A system for the real-time display of radar and video images of targets
NASA Technical Reports Server (NTRS)
Allen, W. W.; Burnside, W. D.
1990-01-01
Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for tha analysis and documentation of radar images and their associated targets in a menu-driven, user oriented environment.
Feng, Sheng; Lotz, Thomas; Chase, J Geoffrey; Hann, Christopher E
2010-01-01
Digital Image Elasto Tomography (DIET) is a non-invasive elastographic breast cancer screening technology, based on image-based measurement of surface vibrations induced on a breast by mechanical actuation. Knowledge of frequency response characteristics of a breast prior to imaging is critical to maximize the imaging signal and diagnostic capability of the system. A feasibility analysis for a non-invasive image based modal analysis system is presented that is able to robustly and rapidly identify resonant frequencies in soft tissue. Three images per oscillation cycle are enough to capture the behavior at a given frequency. Thus, a sweep over critical frequency ranges can be performed prior to imaging to determine critical imaging settings of the DIET system to optimize its tumor detection performance.
NASA Astrophysics Data System (ADS)
Rasmi, Chelur K.; Padmanabhan, Sreedevi; Shirlekar, Kalyanee; Rajan, Kanhirodan; Manjithaya, Ravi; Singh, Varsha; Mondal, Partha Pratim
2017-12-01
We propose and demonstrate a light-sheet-based 3D interrogation system on a microfluidic platform for screening biological specimens during flow. To achieve this, a diffraction-limited light-sheet (with a large field-of-view) is employed to optically section the specimens flowing through the microfluidic channel. This necessitates optimization of the parameters for the illumination sub-system (illumination intensity, light-sheet width, and thickness), microfluidic specimen platform (channel-width and flow-rate), and detection sub-system (camera exposure time and frame rate). Once optimized, these parameters facilitate cross-sectional imaging and 3D reconstruction of biological specimens. The proposed integrated light-sheet imaging and flow-based enquiry (iLIFE) imaging technique enables single-shot sectional imaging of a range of specimens of varying dimensions, ranging from a single cell (HeLa cell) to a multicellular organism (C. elegans). 3D reconstruction of the entire C. elegans is achieved in real-time and with an exposure time of few hundred micro-seconds. A maximum likelihood technique is developed and optimized for the iLIFE imaging system. We observed an intracellular resolution for mitochondria-labeled HeLa cells, which demonstrates the dynamic resolution of the iLIFE system. The proposed technique is a step towards achieving flow-based 3D imaging. We expect potential applications in diverse fields such as structural biology and biophysics.
Three-dimensional imaging of hold baggage for airport security
NASA Astrophysics Data System (ADS)
Kolokytha, S.; Speller, R.; Robson, S.
2014-06-01
This study describes a cost-effective check-in baggage screening system, based on "on-belt tomosynthesis" (ObT) and close-range photogrammetry, that is designed to address the limitations of the most common system used, conventional projection radiography. The latter's limitations can lead to loss of information and an increase in baggage handling time, as baggage is manually searched or screened with more advanced systems. This project proposes a system that overcomes such limitations creating a cost-effective automated pseudo-3D imaging system, by combining x-ray and optical imaging to form digital tomograms. Tomographic reconstruction requires a knowledge of the change in geometry between multiple x-ray views of a common object. This is uniquely achieved using a close range photogrammetric system based on a small network of web-cameras. This paper presents the recent developments of the ObT system and describes recent findings of the photogrammetric system implementation. Based on these positive results, future work on the advancement of the ObT system as a cost-effective pseudo-3D imaging of hold baggage for airport security is proposed.
TU-A-201-01: Introduction to In-Room Imaging System Characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.
2016-06-15
Recent years have seen a widespread proliferation of available in-room image guidance systems for radiation therapy target localization with many centers having multiple in-room options. In this session, available imaging systems for in-room IGRT will be reviewed highlighting the main differences in workflow efficiency, targeting accuracy and image quality as it relates to target visualization. Decision-making strategies for integrating these tools into clinical image guidance protocols that are tailored to specific disease sites like H&N, lung, pelvis, and spine SBRT will be discussed. Learning Objectives: Major system characteristics of a wide range of available in-room imaging systems for IGRT. Advantagesmore » / disadvantages of different systems for site-specific IGRT considerations. Concepts of targeting accuracy and time efficiency in designing clinical imaging protocols.« less
Commercial applications for optical data storage
NASA Astrophysics Data System (ADS)
Tas, Jeroen
1991-03-01
Optical data storage has spurred the market for document imaging systems. These systems are increasingly being used to electronically manage the processing, storage and retrieval of documents. Applications range from straightforward archives to sophisticated workflow management systems. The technology is developing rapidly and within a few years optical imaging facilities will be incorporated in most of the office information systems. This paper gives an overview of the status of the market, the applications and the trends of optical imaging systems.
Miniaturization and Optimization of Nanoscale Resonant Oscillators
2013-09-07
carried out over a range of core sizes. Using a double 4-f imaging system in conjunction with a pump filter ( Semrock RazorEdge long wavelength pass...Using a double 4-f imaging system in conjunction with a pump filter ( Semrock RazorEdge long wavelength pass), the samples are imaged onto either an
Screening of adulterants in powdered foods and ingredients using line-scan Raman chemical imaging.
USDA-ARS?s Scientific Manuscript database
A newly developed line-scan Raman imaging system using a 785 nm line laser was used to authenticate powdered foods and ingredients. The system was used to collect hyperspectral Raman images in the range of 102–2865 wavenumber from three representative food powders mixed with selected adulterants eac...
NASA Astrophysics Data System (ADS)
Alzeyadi, Ahmed; Yu, Tzuyang
2018-03-01
Nondestructive evaluation (NDE) is an indispensable approach for the sustainability of critical civil infrastructure systems such as bridges and buildings. Recently, microwave/radar sensors are widely used for assessing the condition of concrete structures. Among existing imaging techniques in microwave/radar sensors, synthetic aperture radar (SAR) imaging enables researchers to conduct surface and subsurface inspection of concrete structures in the range-cross-range representation of SAR images. The objective of this paper is to investigate the range effect of concrete specimens in SAR images at various ranges (15 cm, 50 cm, 75 cm, 100 cm, and 200 cm). One concrete panel specimen (water-to-cement ratio = 0.45) of 30-cm-by-30-cm-by-5-cm was manufactured and scanned by a 10 GHz SAR imaging radar sensor inside an anechoic chamber. Scatterers in SAR images representing two corners of the concrete panel were used to estimate the width of the panel. It was found that the range-dependent pattern of corner scatters can be used to predict the width of concrete panels. Also, the maximum SAR amplitude decreases when the range increases. An empirical model was also proposed for width estimation of concrete panels.
NASA Astrophysics Data System (ADS)
Deng, Zhiwei; Li, Xicai; Shi, Junsheng; Huang, Xiaoqiao; Li, Feiyan
2018-01-01
Depth measurement is the most basic measurement in various machine vision, such as automatic driving, unmanned aerial vehicle (UAV), robot and so on. And it has a wide range of use. With the development of image processing technology and the improvement of hardware miniaturization and processing speed, real-time depth measurement using dual cameras has become a reality. In this paper, an embedded AM5728 and the ordinary low-cost dual camera is used as the hardware platform. The related algorithms of dual camera calibration, image matching and depth calculation have been studied and implemented on the hardware platform, and hardware design and the rationality of the related algorithms of the system are tested. The experimental results show that the system can realize simultaneous acquisition of binocular images, switching of left and right video sources, display of depth image and depth range. For images with a resolution of 640 × 480, the processing speed of the system can be up to 25 fps. The experimental results show that the optimal measurement range of the system is from 0.5 to 1.5 meter, and the relative error of the distance measurement is less than 5%. Compared with the PC, ARM11 and DMCU hardware platforms, the embedded AM5728 hardware is good at meeting real-time depth measurement requirements in ensuring the image resolution.
Advances in time-of-flight PET
Surti, Suleman; Karp, Joel S.
2016-01-01
This paper provides a review and an update on time-of-flight PET imaging with a focus on PET instrumentation, ranging from hardware design to software algorithms. We first present a short introduction to PET, followed by a description of TOF PET imaging and its history from the early days. Next, we introduce the current state-of-art in TOF PET technology and briefly summarize the benefits of TOF PET imaging. This is followed by a discussion of the various technological advancements in hardware (scintillators, photo-sensors, electronics) and software (image reconstruction) that have led to the current widespread use of TOF PET technology, and future developments that have the potential for further improvements in the TOF imaging performance. We conclude with a discussion of some new research areas that have opened up in PET imaging as a result of having good system timing resolution, ranging from new algorithms for attenuation correction, through efficient system calibration techniques, to potential for new PET system designs. PMID:26778577
Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E
2018-01-01
Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.
Noise analysis for near-field 3D FM-CW radar imaging systems
NASA Astrophysics Data System (ADS)
Sheen, David M.
2015-05-01
Near field radar imaging systems are used for demanding security applications including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit performance in several ways. Practical imaging systems can employ arrays with low gain antennas and relatively large signal distribution networks that have substantial losses which limit transmit power and increase the effective noise figure of the receiver chain, resulting in substantial thermal noise. Phase noise can also limit system performance. The signal coupled from transmitter to receiver is much larger than expected target signals. Phase noise from this coupled signal can set the system noise floor if the oscillator is too noisy. Frequency modulated continuous wave (FM-CW) radar transceivers used in short range systems are relatively immune to the effects of the coupled phase noise due to range correlation effects. This effect can reduce the phase-noise floor such that it is below the thermal noise floor for moderate performance oscillators. Phase noise is also manifested in the range response around bright targets, and can cause smaller targets to be obscured. Noise in synthetic aperture imaging systems is mitigated by the processing gain of the system. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of these noise sources on a fast-chirping FM-CW system.
IRIS: a novel spectral imaging system for the analysis of cultural heritage objects
NASA Astrophysics Data System (ADS)
Papadakis, V. M.; Orphanos, Y.; Kogou, S.; Melessanaki, K.; Pouli, P.; Fotakis, C.
2011-06-01
A new portable spectral imaging system is herein presented capable of acquiring images of high resolution (2MPixels) ranging from 380 nm up to 950 nm. The system consists of a digital color CCD camera, 15 interference filters covering all the sensitivity range of the detector and a robust filter changing system. The acquisition software has been developed in "LabView" programming language allowing easy handling and modification by end-users. The system has been tested and evaluated on a series of objects of Cultural Heritage (CH) value including paintings, encrusted stonework, ceramics etc. This paper aims to present the system, as well as, its application and advantages in the analysis of artworks with emphasis on the detailed compositional and structural information of layered surfaces based on reflection & fluorescence spectroscopy. Specific examples will be presented and discussed on the basis of system improvements.
Color (RGB) imaging laser radar
NASA Astrophysics Data System (ADS)
Ferri De Collibus, M.; Bartolini, L.; Fornetti, G.; Francucci, M.; Guarneri, M.; Nuvoli, M.; Paglia, E.; Ricci, R.
2008-03-01
We present a new color (RGB) imaging 3D laser scanner prototype recently developed in ENEA, Italy). The sensor is based on AM range finding technique and uses three distinct beams (650nm, 532nm and 450nm respectively) in monostatic configuration. During a scan the laser beams are simultaneously swept over the target, yielding range and three separated channels (R, G and B) of reflectance information for each sampled point. This information, organized in range and reflectance images, is then elaborated to produce very high definition color pictures and faithful, natively colored 3D models. Notable characteristics of the system are the absence of shadows in the acquired reflectance images - due to the system's monostatic setup and intrinsic self-illumination capability - and high noise rejection, achieved by using a narrow field of view and interferential filters. The system is also very accurate in range determination (accuracy better than 10 -4) at distances up to several meters. These unprecedented features make the system particularly suited to applications in the domain of cultural heritage preservation, where it could be used by conservators for examining in detail the status of degradation of frescoed walls, monuments and paintings, even at several meters of distance and in hardly accessible locations. After providing some theoretical background, we describe the general architecture and operation modes of the color 3D laser scanner, by reporting and discussing first experimental results and comparing high-definition color images produced by the instrument with photographs of the same subjects taken with a Nikon D70 digital camera.
Chen, Liang; Carlton Jones, Anoma Lalani; Mair, Grant; Patel, Rajiv; Gontsarova, Anastasia; Ganesalingam, Jeban; Math, Nikhil; Dawson, Angela; Aweid, Basaam; Cohen, David; Mehta, Amrish; Wardlaw, Joanna; Rueckert, Daniel; Bentley, Paul
2018-05-15
Purpose To validate a random forest method for segmenting cerebral white matter lesions (WMLs) on computed tomographic (CT) images in a multicenter cohort of patients with acute ischemic stroke, by comparison with fluid-attenuated recovery (FLAIR) magnetic resonance (MR) images and expert consensus. Materials and Methods A retrospective sample of 1082 acute ischemic stroke cases was obtained that was composed of unselected patients who were treated with thrombolysis or who were undergoing contemporaneous MR imaging and CT, and a subset of International Stroke Thrombolysis-3 trial participants. Automated delineations of WML on images were validated relative to experts' manual tracings on CT images, and co-registered FLAIR MR imaging, and ratings were performed by using two conventional ordinal scales. Analyses included correlations between CT and MR imaging volumes, and agreements between automated and expert ratings. Results Automated WML volumes correlated strongly with expert-delineated WML volumes at MR imaging and CT (r 2 = 0.85 and 0.71 respectively; P < .001). Spatial-similarity of automated maps, relative to WML MR imaging, was not significantly different to that of expert WML tracings on CT images. Individual expert WML volumes at CT correlated well with each other (r 2 = 0.85), but varied widely (range, 91% of mean estimate; median estimate, 11 mL; range of estimated ranges, 0.2-68 mL). Agreements (κ) between automated ratings and consensus ratings were 0.60 (Wahlund system) and 0.64 (van Swieten system) compared with agreements between individual pairs of experts of 0.51 and 0.67, respectively, for the two rating systems (P < .01 for Wahlund system comparison of agreements). Accuracy was unaffected by established infarction, acute ischemic changes, or atrophy (P > .05). Automated preprocessing failure rate was 4%; rating errors occurred in a further 4%. Total automated processing time averaged 109 seconds (range, 79-140 seconds). Conclusion An automated method for quantifying CT cerebral white matter lesions achieves a similar accuracy to experts in unselected and multicenter cohorts. © RSNA, 2018 Online supplemental material is available for this article.
NASA Astrophysics Data System (ADS)
Turpin, Terry M.; Lafuse, James L.
1993-02-01
ImSynTM is an image synthesis technology, developed and patented by Essex Corporation. ImSynTM can provide compact, low cost, and low power solutions to some of the most difficult image synthesis problems existing today. The inherent simplicity of ImSynTM enables the manufacture of low cost and reliable photonic systems for imaging applications ranging from airborne reconnaissance to doctor's office ultrasound. The initial application of ImSynTM technology has been to SAR processing; however, it has a wide range of applications such as: image correlation, image compression, acoustic imaging, x-ray tomographic (CAT, PET, SPECT), magnetic resonance imaging (MRI), microscopy, range- doppler mapping (extended TDOA/FDOA). This paper describes ImSynTM in terms of synthetic aperture microscopy and then shows how the technology can be extended to ultrasound and synthetic aperture radar. The synthetic aperture microscope (SAM) enables high resolution three dimensional microscopy with greater dynamic range than real aperture microscopes. SAM produces complex image data, enabling the use of coherent image processing techniques. Most importantly SAM produces the image data in a form that is easily manipulated by a digital image processing workstation.
Yap, Timothy E; Archer, Timothy J; Gobbe, Marine; Reinstein, Dan Z
2016-02-01
To compare corneal thickness measurements between three imaging systems. In this retrospective study of 81 virgin and 58 post-laser refractive surgery corneas, central and minimum corneal thickness were measured using optical coherence tomography (OCT), very high-frequency digital ultrasound (VHF digital ultrasound), and a Scheimpflug imaging system. Agreement between methods was analyzed using mean differences (bias) (OCT - VHF digital ultrasound, OCT - Scheimpflug, VHF digital ultrasound - Scheimpflug) and Bland-Altman analysis with 95% limits of agreement (LoA). Virgin cornea mean central corneal thickness was 508.3 ± 33.2 µm (range: 434 to 588 µm) for OCT, 512.7 ± 32.2 µm (range: 440 to 587 µm) for VHF digital ultrasound, and 530.2 ± 32.6 µm (range: 463 to 612 µm) for Scheimpflug imaging. OCT and VHF digital ultrasound showed the closest agreement with a bias of -4.37 µm, 95% LoA ±12.6 µm. Least agreement was between OCT and Scheimpflug imaging with a bias of -21.9 µm, 95% LoA ±20.7 µm. Bias between VHF digital ultrasound and Scheimpflug imaging was -17.5 µm, 95% LoA ±19.0 µm. In post-laser refractive surgery corneas, mean central corneal thickness was 417.9 ± 47.1 µm (range: 342 to 557 µm) for OCT, 426.3 ± 47.1 µm (range: 363 to 563 µm) for VHF digital ultrasound, and 437.0 ± 48.5 µm (range: 359 to 571 µm) for Scheimpflug imaging. Closest agreement was between OCT and VHF digital ultrasound with a bias of -8.45 µm, 95% LoA ±13.2 µm. Least agreement was between OCT and Scheimpflug imaging with a bias of -19.2 µm, 95% LoA ±19.2 µm. Bias between VHF digital ultrasound and Scheimpflug imaging was -10.7 µm, 95% LoA ±20.0 µm. No relationship was observed between difference in central corneal thickness measurements and mean central corneal thickness. Results were similar for minimum corneal thickness. Central and minimum corneal thickness was measured thinnest by OCT and thickest by Scheimpflug imaging in both groups. A clinically significant bias existed between Scheimpflug imaging and the other two modalities. Copyright 2016, SLACK Incorporated.
Cooper, Virgil N; Oshiro, Thomas; Cagnon, Christopher H; Bassett, Lawrence W; McLeod-Stockmann, Tyler M; Bezrukiy, Nikita V
2003-10-01
Digital detectors in mammography have wide dynamic range in addition to the benefit of decoupled acquisition and display. How wide the dynamic range is and how it compares to film-screen systems in the clinical x-ray exposure domain are unclear. In this work, we compare the effective dynamic ranges of film-screen and flat panel mammography systems, along with the dynamic ranges of their component image receptors in the clinical x-ray exposure domain. An ACR mammography phantom was imaged using variable mAs (exposure) values for both systems. The dynamic range of the contrast-limited film-screen system was defined as that ratio of mAs (exposure) values for a 26 kVp Mo/Mo (HVL=0.34 mm Al) beam that yielded passing phantom scores. The same approach was done for the noise-limited digital system. Data from three independent observers delineated a useful phantom background optical density range of 1.27 to 2.63, which corresponded to a dynamic range of 2.3 +/- 0.53. The digital system had a dynamic range of 9.9 +/- 1.8, which was wider than the film-screen system (p<0.02). The dynamic range of the film-screen system was limited by the dynamic range of the film. The digital detector, on the other hand, had an estimated dynamic range of 42, which was wider than the dynamic range of the digital system in its entirety by a factor of 4. The generator/tube combination was the limiting factor in determining the digital system's dynamic range.
NASA Astrophysics Data System (ADS)
Špiclin, Žiga; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2012-03-01
Spatial resolution of hyperspectral imaging systems can vary significantly due to axial optical aberrations that originate from wavelength-induced index-of-refraction variations of the imaging optics. For systems that have a broad spectral range, the spatial resolution will vary significantly both with respect to the acquisition wavelength and with respect to the spatial position within each spectral image. Variations of the spatial resolution can be effectively characterized as part of the calibration procedure by a local image-based estimation of the pointspread function (PSF) of the hyperspectral imaging system. The estimated PSF can then be used in the image deconvolution methods to improve the spatial resolution of the spectral images. We estimated the PSFs from the spectral images of a line grid geometric caliber. From individual line segments of the line grid, the PSF was obtained by a non-parametric estimation procedure that used an orthogonal series representation of the PSF. By using the non-parametric estimation procedure, the PSFs were estimated at different spatial positions and at different wavelengths. The variations of the spatial resolution were characterized by the radius and the fullwidth half-maximum of each PSF and by the modulation transfer function, computed from images of USAF1951 resolution target. The estimation and characterization of the PSFs and the image deconvolution based spatial resolution enhancement were tested on images obtained by a hyperspectral imaging system with an acousto-optic tunable filter in the visible spectral range. The results demonstrate that the spatial resolution of the acquired spectral images can be significantly improved using the estimated PSFs and image deconvolution methods.
Three-dimensional tracking and imaging laser scanner for space operations
NASA Astrophysics Data System (ADS)
Laurin, Denis G.; Beraldin, J. A.; Blais, Francois; Rioux, Marc; Cournoyer, Luc
1999-05-01
This paper presents the development of a laser range scanner (LARS) as a three-dimensional sensor for space applications. The scanner is a versatile system capable of doing surface imaging, target ranging and tracking. It is capable of short range (0.5 m to 20 m) and long range (20 m to 10 km) sensing using triangulation and time-of-flight (TOF) methods respectively. At short range (1 m), the resolution is sub-millimeter and drops gradually with distance (2 cm at 10 m). For long range, the TOF provides a constant resolution of plus or minus 3 cm, independent of range. The LARS could complement the existing Canadian Space Vision System (CSVS) for robotic manipulation. As an active vision system, the LARS is immune to sunlight and adverse lighting; this is a major advantage over the CSVS, as outlined in this paper. The LARS could also replace existing radar systems used for rendezvous and docking. There are clear advantages of an optical system over a microwave radar in terms of size, mass, power and precision. Equipped with two high-speed galvanometers, the laser can be steered to address any point in a 30 degree X 30 degree field of view. The scanning can be continuous (raster scan, Lissajous) or direct (random). This gives the scanner the ability to register high-resolution 3D images of range and intensity (up to 4000 X 4000 pixels) and to perform point target tracking as well as object recognition and geometrical tracking. The imaging capability of the scanner using an eye-safe laser is demonstrated. An efficient fiber laser delivers 60 mW of CW or 3 (mu) J pulses at 20 kHz for TOF operation. Implementation of search and track of multiple targets is also demonstrated. For a single target, refresh rates up to 137 Hz is possible. Considerations for space qualification of the scanner are discussed. Typical space operations, such as docking, object attitude tracking, and inspections are described.
Exposure Range For Cine Radiographic Procedures
NASA Astrophysics Data System (ADS)
Moore, Robert J.
1980-08-01
Based on the author's experience, state-of-the-art cine radiographic equipment of the type used in modern cardiovascular laboratories for selective coronary arteriography must perform at well-defined levels to produce cine images with acceptable quantum mottle, contrast, and detail, as judged by consensus of across section of American cardiologists/radiologists experienced in viewing such images. Accordingly, a "standard" undertable state-of-the-art cine radiographic imaging system is postulated to answer the question of what patient exposure range is necessary to obtain cine images of acceptable quality. It is shown that such a standard system would be expected to produce a 'tabletop exposure of about 25 milliRoentgens per frame for the "standard" adult patient, plus-or-minus 33% for accept-able variation of system parameters. This means that for cine radiography at 60 frames per second (30 frames per second) the exposure rate range based on this model is 60 to 120 Roentgens per minute (30 to 60 Roentgens per minute). The author contends that studies at exposure levels below these will yield cine images of questionable diagnostic value; studies at exposure levels above these may yield cine images of excellent visual quality but having little additional diagnostic value, at the expense of added patient/personnel radiation exposure and added x-ray tube heat loading.
TU-A-201-00: Image Guidance Technologies and Management Strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2016-06-15
Recent years have seen a widespread proliferation of available in-room image guidance systems for radiation therapy target localization with many centers having multiple in-room options. In this session, available imaging systems for in-room IGRT will be reviewed highlighting the main differences in workflow efficiency, targeting accuracy and image quality as it relates to target visualization. Decision-making strategies for integrating these tools into clinical image guidance protocols that are tailored to specific disease sites like H&N, lung, pelvis, and spine SBRT will be discussed. Learning Objectives: Major system characteristics of a wide range of available in-room imaging systems for IGRT. Advantagesmore » / disadvantages of different systems for site-specific IGRT considerations. Concepts of targeting accuracy and time efficiency in designing clinical imaging protocols.« less
FIZICS: fluorescent imaging zone identification system, a novel macro imaging system.
Skwish, Stephen; Asensio, Francisco; King, Greg; Clarke, Glenn; Kath, Gary; Salvatore, Michael J; Dufresne, Claude
2004-12-01
Constantly improving biological assay development continues to drive technological requirements. Recently, a specification was defined for capturing white light and fluorescent images of agar plates ranging in size from the NUNC Omni tray (96-well footprint, 128 x 85 mm) to the NUNC Bio Assay Dish (245 x 245 mm). An evaluation of commercially available products failed to identify any system capable of fluorescent macroimaging with discrete wavelength selection. To address the lack of a commercially available system, a custom imaging system was designed and constructed. This system provides the same capabilities of many commercially available systems with the added ability to fluorescently image up to a 245 x 245 mm area using wavelengths in the visible light spectrum.
Initial test of MITA/DIMM with an operational CBP system
NASA Astrophysics Data System (ADS)
Baldwin, Kevin; Hanna, Randall; Brown, Andrea; Brown, David; Moyer, Steven; Hixson, Jonathan G.
2018-05-01
The MITA (Motion Imagery Task Analyzer) project was conceived by CBP OA (Customs and Border Protection - Office of Acquisition) and executed by JHU/APL (Johns Hopkins University/Applied Physics Laboratory) and CERDEC NVESD MSD (Communications and Electronics Research Development Engineering Command Night Vision and Electronic Sensors Directorate Modeling and Simulation Division). The intent was to develop an efficient methodology whereby imaging system performance could be quickly and objectively characterized in a field setting. The initial design, development, and testing spanned a period of approximately 18 months with the initial project coming to a conclusion after testing of the MITA system in June 2017 with a fielded CBP system. The NVESD contribution to MITA was thermally heated target resolution boards deployed to support a range close to the sensor and, when possible, at range with the targets of interest. JHU/APL developed a laser DIMM (Differential Image Motion Monitor) system designed to measure the optical turbulence present along the line of sight of the imaging system during the time of image collection. The imagery collected of the target board was processed to calculate the in situ system resolution. This in situ imaging system resolution and the time-correlated turbulence measured by the DIMM system were used in NV-IPM (Night Vision Integrated Performance Model) to calculate the theoretical imaging system performance. Overall, this proves the MITA concept feasible. However, MITA is still in the initial phases of development and requires further verification and validation to ensure accuracy and reliability of both the instrument and the imaging system performance predictions.
Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy
NASA Technical Reports Server (NTRS)
Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, legal representative, Alicia (Inventor); Gursel, Yekta (Inventor)
2012-01-01
An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.
Camera, handlens, and microscope optical system for imaging and coupled optical spectroscopy
NASA Technical Reports Server (NTRS)
Mungas, Greg S. (Inventor); Boynton, John (Inventor); Sepulveda, Cesar A. (Inventor); Nunes de Sepulveda, Alicia (Inventor); Gursel, Yekta (Inventor)
2011-01-01
An optical system comprising two lens cells, each lens cell comprising multiple lens elements, to provide imaging over a very wide image distance and within a wide range of magnification by changing the distance between the two lens cells. An embodiment also provides scannable laser spectroscopic measurements within the field-of-view of the instrument.
Automatic image registration performance for two different CBCT systems; variation with imaging dose
NASA Astrophysics Data System (ADS)
Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.
2014-03-01
The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images with residual errors increasing below 4mGy. It was not possible to acquire images with doses lower than ~5mGy with the OBI system and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30mm sphere) occurred in 5% to 9% of registrations except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
NASA Astrophysics Data System (ADS)
Liu, Shuangquan; Zhang, Bin; Wang, Xin; Li, Lin; Chen, Yan; Liu, Xin; Liu, Fei; Shan, Baoci; Bai, Jing
2011-02-01
A dual-modality imaging system for simultaneous fluorescence molecular tomography (FMT) and positron emission tomography (PET) of small animals has been developed. The system consists of a noncontact 360°-projection FMT module and a flat panel detector pair based PET module, which are mounted orthogonally for the sake of eliminating cross interference. The FMT images and PET data are simultaneously acquired by employing dynamic sampling mode. Phantom experiments, in which the localization and range of radioactive and fluorescence probes are exactly indicated, have been carried out to verify the feasibility of the system. An experimental tumor-bearing mouse is also scanned using the dual-modality simultaneous imaging system, the preliminary fluorescence tomographic images and PET images demonstrate the in vivo performance of the presented dual-modality system.
In vivo imaging of the rodent eye with swept source/Fourier domain OCT
Liu, Jonathan J.; Grulkowski, Ireneusz; Kraus, Martin F.; Potsaid, Benjamin; Lu, Chen D.; Baumann, Bernhard; Duker, Jay S.; Hornegger, Joachim; Fujimoto, James G.
2013-01-01
Swept source/Fourier domain OCT is demonstrated for in vivo imaging of the rodent eye. Using commercial swept laser technology, we developed a prototype OCT imaging system for small animal ocular imaging operating in the 1050 nm wavelength range at an axial scan rate of 100 kHz with ~6 µm axial resolution. The high imaging speed enables volumetric imaging with high axial scan densities, measuring high flow velocities in vessels, and repeated volumetric imaging over time. The 1050 nm wavelength light provides increased penetration into tissue compared to standard commercial OCT systems at 850 nm. The long imaging range enables multiple operating modes for imaging the retina, posterior eye, as well as anterior eye and full eye length. A registration algorithm using orthogonally scanned OCT volumetric data sets which can correct motion on a per A-scan basis is applied to compensate motion and merge motion corrected volumetric data for enhanced OCT image quality. Ultrahigh speed swept source OCT is a promising technique for imaging the rodent eye, proving comprehensive information on the cornea, anterior segment, lens, vitreous, posterior segment, retina and choroid. PMID:23412778
UTOFIA: an underwater time-of-flight image acquisition system
NASA Astrophysics Data System (ADS)
Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris
2017-10-01
In this article the development of a newly designed Time-of-Flight (ToF) image sensor for underwater applications is described. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition) funded by the EU within the Horizon 2020 framework. This project aims to develop a camera based on range gating that extends the visible range compared to conventional cameras by a factor of 2 to 3 and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor a pixel structure that suits best to the requirements has been selected. Within an extensive characterization underwater the capability of distance measurements in turbid environments is demonstrated.
The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation.
Ritschl, Ludwig; Kuntz, Jan; Fleischmann, Christof; Kachelrieß, Marc
2016-05-01
In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. The proposed trajectory leads to reconstructions without limited angle artifacts. Compared to the limited angle reconstructions of 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited angle scan. The method proposed here employs 3D imaging using C-arms with less than 180° rotation range adding full 3D functionality to a C-arm device retaining both handling comfort and the usability of 2D imaging. This method has a clear potential for clinical use especially to meet the increasing demand for an intraoperative 3D imaging.
The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritschl, Ludwig; Fleischmann, Christof; Kuntz, Jan, E-mail: j.kuntz@dkfz.de
Purpose: In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled datamore » set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. Methods: The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. Results: The proposed trajectory leads to reconstructions without limited angle artifacts. Compared to the limited angle reconstructions of 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited angle scan. Conclusions: The method proposed here employs 3D imaging using C-arms with less than 180° rotation range adding full 3D functionality to a C-arm device retaining both handling comfort and the usability of 2D imaging. This method has a clear potential for clinical use especially to meet the increasing demand for an intraoperative 3D imaging.« less
Multiscale optical imaging of rare-earth-doped nanocomposites in a small animal model
NASA Astrophysics Data System (ADS)
Higgins, Laura M.; Ganapathy, Vidya; Kantamneni, Harini; Zhao, Xinyu; Sheng, Yang; Tan, Mei-Chee; Roth, Charles M.; Riman, Richard E.; Moghe, Prabhas V.; Pierce, Mark C.
2018-03-01
Rare-earth-doped nanocomposites have appealing optical properties for use as biomedical contrast agents, but few systems exist for imaging these materials. We describe the design and characterization of (i) a preclinical system for whole animal in vivo imaging and (ii) an integrated optical coherence tomography/confocal microscopy system for high-resolution imaging of ex vivo tissues. We demonstrate these systems by administering erbium-doped nanocomposites to a murine model of metastatic breast cancer. Short-wave infrared emissions were detected in vivo and in whole organ imaging ex vivo. Visible upconversion emissions and tissue autofluorescence were imaged in biopsy specimens, alongside optical coherence tomography imaging of tissue microstructure. We anticipate that this work will provide guidance for researchers seeking to image these nanomaterials across a wide range of biological models.
Two-color temporal focusing multiphoton excitation imaging with tunable-wavelength excitation
NASA Astrophysics Data System (ADS)
Lien, Chi-Hsiang; Abrigo, Gerald; Chen, Pei-Hsuan; Chien, Fan-Ching
2017-02-01
Wavelength tunable temporal focusing multiphoton excitation microscopy (TFMPEM) is conducted to visualize optical sectioning images of multiple fluorophore-labeled specimens through the optimal two-photon excitation (TPE) of each type of fluorophore. The tunable range of excitation wavelength was determined by the groove density of the grating, the diffraction angle, the focal length of lenses, and the shifting distance of the first lens in the beam expander. Based on a consideration of the trade-off between the tunable-wavelength range and axial resolution of temporal focusing multiphoton excitation imaging, the presented system demonstrated a tunable-wavelength range from 770 to 920 nm using a diffraction grating with groove density of 830 lines/mm. TPE fluorescence imaging examination of a fluorescent thin film indicated that the width of the axial confined excitation was 3.0±0.7 μm and the shifting distance of the temporal focal plane was less than 0.95 μm within the presented wavelength tunable range. Fast different wavelength excitation and three-dimensionally rendered imaging of Hela cell mitochondria and cytoskeletons and mouse muscle fibers were demonstrated. Significantly, the proposed system can improve the quality of two-color TFMPEM images through different excitation wavelengths to obtain higher-quality fluorescent signals in multiple-fluorophore measurements.
3D super resolution range-gated imaging for canopy reconstruction and measurement
NASA Astrophysics Data System (ADS)
Huang, Hantao; Wang, Xinwei; Sun, Liang; Lei, Pingshun; Fan, Songtao; Zhou, Yan
2018-01-01
In this paper, we proposed a method of canopy reconstruction and measurement based on 3D super resolution range-gated imaging. In this method, high resolution 2D intensity images are grasped by active gate imaging, and 3D images of canopy are reconstructed by triangular-range-intensity correlation algorithm at the same time. A range-gated laser imaging system(RGLIS) is established based on 808 nm diode laser and gated intensified charge-coupled device (ICCD) camera with 1392´1040 pixels. The proof experiments have been performed for potted plants located 75m away and trees located 165m away. The experiments show it that can acquire more than 1 million points per frame, and 3D imaging has the spatial resolution about 0.3mm at the distance of 75m and the distance accuracy about 10 cm. This research is beneficial for high speed acquisition of canopy structure and non-destructive canopy measurement.
Mayberry, Addison; Perkins, David L.; Holcomb, Daniel E.
2018-01-01
Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples. PMID:29509786
Luminescence imaging of water during carbon-ion irradiation for range estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Komori, Masataka; Koyama, Shuji
Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap-water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions withmore » those of a phantom containing pure-water. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the measured carbon-ion ranges from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom’s luminescence image produced seven times higher luminescence and had a 13% shorter range than that of the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator showed ∼15 000 times higher light than that of water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.« less
Nanohole-array-based device for 2D snapshot multispectral imaging
Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.
2013-01-01
We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.
Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A
2017-01-01
Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82 Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82 Rb or 13 N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Spaceborne electronic imaging systems
NASA Technical Reports Server (NTRS)
1971-01-01
Criteria and recommended practices for the design of the spaceborne elements of electronic imaging systems are presented. A spaceborne electronic imaging system is defined as a device that collects energy in some portion of the electromagnetic spectrum with detector(s) whose direct output is an electrical signal that can be processed (using direct transmission or delayed transmission after recording) to form a pictorial image. This definition encompasses both image tube systems and scanning point-detector systems. The intent was to collect the design experience and recommended practice of the several systems possessing the common denominator of acquiring images from space electronically and to maintain the system viewpoint rather than pursuing specialization in devices. The devices may be markedly different physically, but each was designed to provide a particular type of image within particular limitations. Performance parameters which determine the type of system selected for a given mission and which influence the design include: Sensitivity, Resolution, Dynamic range, Spectral response, Frame rate/bandwidth, Optics compatibility, Image motion, Radiation resistance, Size, Weight, Power, and Reliability.
Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin
2013-01-01
One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of a layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
Bok, Tae-Hoon; Kim, Juho; Bae, Jinho; Lee, Chong Hyun; Paeng, Dong-Guk
2014-09-24
The mechanical scanning of a single element transducer has been mostly utilized for high-frequency ultrasound imaging. However, it requires space for the mechanical motion of the transducer. In this paper, a rotational scanning ultrasound biomicroscopy (UBM) system equipped with a high-frequency angled needle transducer is designed and implemented in order to minimize the space required. It was applied to ex vivo ultrasound imaging of porcine posterior ocular tissues through a minimal incision hole of 1 mm in diameter. The retina and sclera for the one eye were visualized in the relative rotating angle range of 270°~330° and at a distance range of 6~7 mm, whereas the tissues of the other eye were observed in relative angle range of 160°~220° and at a distance range of 7.5~9 mm. The layer between retina and sclera seemed to be bent because the distance between the transducer tip and the layer was varied while the transducer was rotated. Certin features of the rotation system such as the optimal scanning angle, step angle and data length need to be improved for ensure higher accuracy and precision. Moreover, the focal length should be considered for the image quality. This implementation represents the first report of a rotational scanning UBM system.
Bok, Tae-Hoon; Kim, Juho; Bae, Jinho; Lee, Chong Hyun; Paeng, Dong-Guk
2014-01-01
The mechanical scanning of a single element transducer has been mostly utilized for high-frequency ultrasound imaging. However, it requires space for the mechanical motion of the transducer. In this paper, a rotational scanning ultrasound biomicroscopy (UBM) system equipped with a high-frequency angled needle transducer is designed and implemented in order to minimize the space required. It was applied to ex vivo ultrasound imaging of porcine posterior ocular tissues through a minimal incision hole of 1 mm in diameter. The retina and sclera for the one eye were visualized in the relative rotating angle range of 270° ∼ 330° and at a distance range of 6 ∼ 7 mm, whereas the tissues of the other eye were observed in relative angle range of 160° ∼ 220° and at a distance range of 7.5 ∼ 9 mm. The layer between retina and sclera seemed to be bent because the distance between the transducer tip and the layer was varied while the transducer was rotated. Certin features of the rotation system such as the optimal scanning angle, step angle and data length need to be improved for ensure higher accuracy and precision. Moreover, the focal length should be considered for the image quality. This implementation represents the first report of a rotational scanning UBM system. PMID:25254305
133Xe contamination found in internal bacteria filter of xenon ventilation system.
Hackett, Michael T; Collins, Judith A; Wierzbinski, Rebecca S
2003-09-01
We report on (133)Xe contamination found in the reusable internal bacteria filter of our xenon ventilation system. Internal bacteria filters (n = 6) were evaluated after approximately 1 mo of normal use. The ventilation system was evacuated twice to eliminate (133)Xe in the system before removal of the filter. Upon removal, the filter was monitored using a survey meter with an energy-compensated probe and was imaged on a scintillation camera. The filter was monitored and imaged over several days and was stored in a fume hood. Estimated (133)Xe activity in each filter immediately after removal ranged from 132 to 2,035 kBq (3.6-55.0 micro Ci), based on imaging. Initial surface radiation levels ranged from 0.4 to 4.5 micro Sv/h (0.04-0.45 mrem/h). The (133)Xe activity did not readily leave the filter over time (i.e., time to reach half the counts of the initial decay-corrected image ranged from <6 to >72 h). The majority of the image counts (approximately 70%) were seen in 2 distinctive areas in the filter. They corresponded to sites where the manufacturer used polyurethane adhesive to attach the fiberglass filter medium to the filter housing. (133)Xe contamination within the reusable internal bacteria filter of our ventilation system was easily detected by a survey meter and imaging. Although initial activities and surface radiation levels were low, radiation safety practices would dictate that a (133)Xe-contaminated bacteria filter be stored preferably in a fume hood until it cannot be distinguished from background before autoclaving or disposal.
A trunk ranging system based on binocular stereo vision
NASA Astrophysics Data System (ADS)
Zhao, Xixuan; Kan, Jiangming
2017-07-01
Trunk ranging is an essential function for autonomous forestry robots. Traditional trunk ranging systems based on personal computers are not convenient in practical application. This paper examines the implementation of a trunk ranging system based on the binocular vision theory via TI's DaVinc DM37x system. The system is smaller and more reliable than that implemented using a personal computer. It calculates the three-dimensional information from the images acquired by binocular cameras, producing the targeting and ranging results. The experimental results show that the measurement error is small and the system design is feasible for autonomous forestry robots.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
Limited Angle Dual Modality Breast Imaging
NASA Astrophysics Data System (ADS)
More, Mitali J.; Li, Heng; Goodale, Patricia J.; Zheng, Yibin; Majewski, Stan; Popov, Vladimir; Welch, Benjamin; Williams, Mark B.
2007-06-01
We are developing a dual modality breast scanner that can obtain x-ray transmission and gamma ray emission images in succession at multiple viewing angles with the breast held under mild compression. These views are reconstructed and fused to obtain three-dimensional images that combine structural and functional information. Here, we describe the dual modality system and present results of phantom experiments designed to test the system's ability to obtain fused volumetric dual modality data sets from a limited number of projections, acquired over a limited (less than 180 degrees) angular range. We also present initial results from phantom experiments conducted to optimize the acquisition geometry for gamma imaging. The optimization parameters include the total number of views and the angular range over which these views should be spread, while keeping the total number of detected counts fixed. We have found that in general, for a fixed number of views centered around the direction perpendicular to the direction of compression, in-plane contrast and SNR are improved as the angular range of the views is decreased. The improvement in contrast and SNR with decreasing angular range is much greater for deeper lesions and for a smaller number of views. However, the z-resolution of the lesion is significantly reduced with decreasing angular range. Finally, we present results from limited angle tomography scans using a system with dual, opposing heads.
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. Linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved by sensor calibration results, and is used to blend images to make panoramas reflect the objective luminance more objectively. This compensates for the limitation of stitching images that are more realistic only through the smoothing method. The dynamic range limitation of can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
Detecting adulterants in milk powder using high-throughput Raman chemical imaging
USDA-ARS?s Scientific Manuscript database
This study used a line-scan high-throughput Raman imaging system to authenticate milk powder. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source. The system was used to acquire hyperspectral Raman images in a wavenumber range of 103–2881 cm-1 from the skim milk...
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it Full Frame CCD, and Interline CCD, a CMOS sensor or the newer Foveon buried photodiode sensors. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics, sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging system currently on the market.
Flash LIDAR Systems for Planetary Exploration
NASA Astrophysics Data System (ADS)
Dissly, Richard; Weinberg, J.; Weimer, C.; Craig, R.; Earhart, P.; Miller, K.
2009-01-01
Ball Aerospace offers a mature, highly capable 3D flash-imaging LIDAR system for planetary exploration. Multi mission applications include orbital, standoff and surface terrain mapping, long distance and rapid close-in ranging, descent and surface navigation and rendezvous and docking. Our flash LIDAR is an optical, time-of-flight, topographic imaging system, leveraging innovations in focal plane arrays, readout integrated circuit real time processing, and compact and efficient pulsed laser sources. Due to its modular design, it can be easily tailored to satisfy a wide range of mission requirements. Flash LIDAR offers several distinct advantages over traditional scanning systems. The entire scene within the sensor's field of view is imaged with a single laser flash. This directly produces an image with each pixel already correlated in time, making the sensor resistant to the relative motion of a target subject. Additionally, images may be produced at rates much faster than are possible with a scanning system. And because the system captures a new complete image with each flash, optical glint and clutter are easily filtered and discarded. This allows for imaging under any lighting condition and makes the system virtually insensitive to stray light. Finally, because there are no moving parts, our flash LIDAR system is highly reliable and has a long life expectancy. As an industry leader in laser active sensor system development, Ball Aerospace has been working for more than four years to mature flash LIDAR systems for space applications, and is now under contract to provide the Vision Navigation System for NASA's Orion spacecraft. Our system uses heritage optics and electronics from our star tracker products, and space qualified lasers similar to those used in our CALIPSO LIDAR, which has been in continuous operation since 2006, providing more than 1.3 billion laser pulses to date.
NASA Astrophysics Data System (ADS)
Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas
2005-05-01
3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3-dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined, is improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system"s ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new, Range-Based, Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enable improved system standardization.
Reliability of a novel thermal imaging system for temperature assessment of healthy feet.
Petrova, N L; Whittam, A; MacDonald, A; Ainarkar, S; Donaldson, A N; Bevans, J; Allen, J; Plassmann, P; Kluwe, B; Ring, F; Rogers, L; Simpson, R; Machin, G; Edmonds, M E
2018-01-01
Thermal imaging is a useful modality for identifying preulcerative lesions ("hot spots") in diabetic foot patients. Despite its recognised potential, at present, there is no readily available instrument for routine podiatric assessment of patients at risk. To address this need, a novel thermal imaging system was recently developed. This paper reports the reliability of this device for temperature assessment of healthy feet. Plantar skin foot temperatures were measured with the novel thermal imaging device (Diabetic Foot Ulcer Prevention System (DFUPS), constructed by Photometrix Imaging Ltd) and also with a hand-held infrared spot thermometer (Thermofocus® 01500A3, Tecnimed, Italy) after 20 min of barefoot resting with legs supported and extended in 105 subjects (52 males and 53 females; age range 18 to 69 years) as part of a multicentre clinical trial. The temperature differences between the right and left foot at five regions of interest (ROIs), including 1st and 4th toes, 1st, 3rd and 5th metatarsal heads were calculated. The intra-instrument agreement (three repeated measures) and the inter-instrument agreement (hand-held thermometer and thermal imaging device) were quantified using intra-class correlation coefficients (ICCs) and the 95% confidence intervals (CI). Both devices showed almost perfect agreement in replication by instrument. The intra-instrument ICCs for the thermal imaging device at all five ROIs ranged from 0.95 to 0.97 and the intra-instrument ICCs for the hand-held-thermometer ranged from 0.94 to 0.97. There was substantial to perfect inter-instrument agreement between the hand-held thermometer and the thermal imaging device and the ICCs at all five ROIs ranged between 0.94 and 0.97. This study reports the performance of a novel thermal imaging device in the assessment of foot temperatures in healthy volunteers in comparison with a hand-held infrared thermometer. The newly developed thermal imaging device showed very good agreement in repeated temperature assessments at defined ROIs as well as substantial to perfect agreement in temperature assessment with the hand-held infrared thermometer. In addition to the reported non-inferior performance in temperature assessment, the thermal imaging device holds the potential to provide an instantaneous thermal image of all sites of the feet (plantar, dorsal, lateral and medial views). Diabetic Foot Ulcer Prevention System NCT02317835, registered December 10, 2014.
TU-A-201-02: Treatment Site-Specific Considerations for Clinical IGRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wijesooriya, K.
2016-06-15
Recent years have seen a widespread proliferation of available in-room image guidance systems for radiation therapy target localization with many centers having multiple in-room options. In this session, available imaging systems for in-room IGRT will be reviewed highlighting the main differences in workflow efficiency, targeting accuracy and image quality as it relates to target visualization. Decision-making strategies for integrating these tools into clinical image guidance protocols that are tailored to specific disease sites like H&N, lung, pelvis, and spine SBRT will be discussed. Learning Objectives: Major system characteristics of a wide range of available in-room imaging systems for IGRT. Advantagesmore » / disadvantages of different systems for site-specific IGRT considerations. Concepts of targeting accuracy and time efficiency in designing clinical imaging protocols.« less
Improved linearity using harmonic error rejection in a full-field range imaging system
NASA Astrophysics Data System (ADS)
Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.
2008-02-01
Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
Features and limitations of mobile tablet devices for viewing radiological images.
Grunert, J H
2015-03-01
Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security and rang of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the Germany Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in terms of how the mobile systems connected to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868 - 157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.
A THz heterodyne instrument for biomedical imaging applications
NASA Technical Reports Server (NTRS)
Siegel, Peter H.
2004-01-01
An ultra-wide-dynamic-range heterodyne imaging system operating at 2.5 THz is described. The instrument employs room temperature Schottky barrier diode mixers and far infrared gas laser sources developed for NASA space applications. A dynamic range of over 100dB at fixed intermediate frequencies has been realized. Amplitude/phase tracking circuitry results in stability of 0.02 dB and +-2 degrees of phase. The system is being employed to characterize biological (human and animal derived tissues) and a variety of materials of interest to NASA. This talk will describe the instrument and some of the early imaging experiments on everything from mouse tail to aerogel.
Segmentation, modeling and classification of the compact objects in a pile
NASA Technical Reports Server (NTRS)
Gupta, Alok; Funka-Lea, Gareth; Wohn, Kwangyoen
1990-01-01
The problem of interpreting dense range images obtained from the scene of a heap of man-made objects is discussed. A range image interpretation system consisting of segmentation, modeling, verification, and classification procedures is described. First, the range image is segmented into regions and reasoning is done about the physical support of these regions. Second, for each region several possible three-dimensional interpretations are made based on various scenarios of the objects physical support. Finally each interpretation is tested against the data for its consistency. The superquadric model is selected as the three-dimensional shape descriptor, plus tapering deformations along the major axis. Experimental results obtained from some complex range images of mail pieces are reported to demonstrate the soundness and the robustness of our approach.
NASA Astrophysics Data System (ADS)
Grasser, R.; Peyronneaudi, Benjamin; Yon, Kevin; Aubry, Marie
2015-10-01
CILAS, subsidiary of Airbus Defense and Space, develops, manufactures and sales laser-based optronics equipment for defense and homeland security applications. Part of its activity is related to active systems for threat detection, recognition and identification. Active surveillance and active imaging systems are often required to achieve identification capacity in case for long range observation in adverse conditions. In order to ease the deployment of active imaging systems often complex and expensive, CILAS suggests a new concept. It consists on the association of two apparatus working together. On one side, a patented versatile laser platform enables high peak power laser illumination for long range observation. On the other side, a small camera add-on works as a fast optical switch to select photons with specific time of flight only. The association of the versatile illumination platform and the fast optical switch presents itself as an independent body, so called "flash module", giving to virtually any passive observation systems gated active imaging capacity in NIR and SWIR.
Q selection for an electro-optical earth imaging system: theoretical and experimental results.
Cochrane, Andy; Schulz, Kevin; Kendrick, Rick; Bell, Ray
2013-09-23
This paper explores practical design considerations for selecting Q for an electro-optical earth imaging system, where Q is defined as (λ FN) / pixel pitch. Analytical methods are used to show that, under imaging conditions with high SNR, increasing Q with fixed aperture cannot lead to degradation of image quality regardless of the angular smear rate of the system. The potential for degradation of image quality under low SNR is bounded by an increase of the detector noise scaling as Q. An imaging test bed is used to collect representative imagery for various Q configurations. The test bed includes real world errors such as image smear and haze. The value of Q is varied by changing the focal length of the imaging system. Imagery is presented over a broad range of parameters.
Li, Xiaofang; Deng, Linhong; Lu, Hu; He, Bin
2014-08-01
A measurement system based on the image processing technology and developed by LabVIEW was designed to quickly obtain the range of motion (ROM) of spine. NI-Vision module was used to pre-process the original images and calculate the angles of marked needles in order to get ROM data. Six human cadaveric thoracic spine segments T7-T10 were selected to carry out 6 kinds of loads, including left/right lateral bending, flexion, extension, cis/counterclockwise torsion. The system was used to measure the ROM of segment T8-T9 under the loads from 1 Nm to 5 Nm. The experimental results showed that the system is able to measure the ROM of the spine accurately and quickly, which provides a simple and reliable tool for spine biomechanics investigators.
See around the corner using active imaging
NASA Astrophysics Data System (ADS)
Steinvall, Ove; Elmqvist, Magnus; Larsson, Håkan
2011-11-01
This paper investigates the prospects of "seeing around the corner" using active imaging. A monostatic active imaging system offers interesting capabilities in the presence of glossy reflecting objects. Examples of such surfaces are windows in buildings and cars, calm water, signs and vehicle surfaces. During daylight it might well be possible to use mirrorlike reflection by the naked eye or a CCD camera for non-line of sight imaging. However the advantage with active imaging is that one controls the illumination. This will not only allow for low light and night utilization but also for use in cases where the sun or other interfering lights limit the non-line of sight imaging possibility. The range resolution obtained by time gating will reduce disturbing direct reflections and allow simultaneous view in several directions using range discrimination. Measurements and theoretical considerations in this report support the idea of using laser to "see around the corner". Examples of images and reflectivity measurements will be presented together with examples of potential system applications.
Design of a new type synchronous focusing mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jintao; Tan, Ruijun; Chen, Zhou; Zhang, Yongqi; Fu, Panlong; Qu, Yachen
2018-05-01
Aiming at the dual channel telescopic imaging system composed of infrared imaging system, low-light-level imaging system and image fusion module, In the fusion of low-light-level images and infrared images, it is obvious that using clear source images is easier to obtain high definition fused images. When the target is imaged at 15m to infinity, focusing is needed to ensure the imaging quality of the dual channel imaging system; therefore, a new type of synchronous focusing mechanism is designed. The synchronous focusing mechanism realizes the focusing function through the synchronous translational imaging devices, mainly including the structure of the screw rod nut, the shaft hole coordination structure and the spring steel ball eliminating clearance structure, etc. Starting from the synchronous focusing function of two imaging devices, the structure characteristics of the synchronous focusing mechanism are introduced in detail, and the focusing range is analyzed. The experimental results show that the synchronous focusing mechanism has the advantages of ingenious design, high focusing accuracy and stable and reliable operation.
NASA Astrophysics Data System (ADS)
Dhalla, Al-Hafeez Zahir
Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered, and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT systems. This inherent imaging depth, which is specific to only Fourier Domain OCT, arises due to two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arises in either high lateral resolution or extended depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina, and extended depth imaging of the ocular anterior segment. In this dissertation, techniques for extending the imaging range of OCT systems are developed. These techniques include the use of a high spectral purity swept source laser in a full-field OCT system, as well as the use of a peculiar phenomenon known as coherence revival to resolve the complex conjugate ambiguity in swept source OCT. In addition, a technique for extending the depth of focus of OCT systems by using a polarization-encoded, dual-focus sample arm is demonstrated. Along the way, other related advances are also presented, including the development of techniques to reduce crosstalk and speckle artifacts in full-field OCT, and the use of fast optical switches to increase the imaging speed of certain low-duty cycle swept source OCT systems. Finally, the clinical utility of these techniques is demonstrated by combining them to demonstrate high-speed, high resolution, extended-depth imaging of both the anterior and posterior eye simultaneously and in vivo.
A novel snapshot polarimetric imager
NASA Astrophysics Data System (ADS)
Wong, Gerald; McMaster, Ciaran; Struthers, Robert; Gorman, Alistair; Sinclair, Peter; Lamb, Robert; Harvey, Andrew R.
2012-10-01
Polarimetric imaging (PI) is of increasing importance in determining additional scene information beyond that of conventional images. For very long-range surveillance, image quality is degraded due to turbulence. Furthermore, the high magnification required to create images with sufficient spatial resolution suitable for object recognition and identification require long focal length optical systems. These are incompatible with the size and weight restrictions for aircraft. Techniques which allow detection and recognition of an object at the single pixel level are therefore likely to provide advance warning of approaching threats or long-range object cueing. PI is a technique that has the potential to detect object signatures at the pixel level. Early attempts to develop PI used rotating polarisers (and spectral filters) which recorded sequential polarized images from which the complete Stokes matrix could be derived. This approach has built-in latency between frames and requires accurate registration of consecutive frames to analyze real-time video of moving objects. Alternatively, multiple optical systems and cameras have been demonstrated to remove latency, but this approach increases cost and bulk of the imaging system. In our investigation we present a simplified imaging system that divides an image into two orthogonal polarimetric components which are then simultaneously projected onto a single detector array. Thus polarimetric data is recorded without latency on a single snapshot. We further show that, for pixel-level objects, the data derived from only two orthogonal states (H and V) is sufficient to increase the probability of detection whilst reducing false alarms compared to conventional unpolarised imaging.
A 3D camera for improved facial recognition
NASA Astrophysics Data System (ADS)
Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim
2004-12-01
We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able of locating the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images, and is reconstructed by forming a 'wire mesh' of range points and morphing the 2 D image over this structure. The camera can be used to record the images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
In vivo verification of particle therapy: how Compton camera configurations affect 3D image quality
NASA Astrophysics Data System (ADS)
Mackin, D.; Draeger, E.; Peterson, S.; Polf, J.; Beddar, S.
2017-05-01
The steep dose gradients enabled by the Bragg peaks of particle therapy beams are a double edged sword. They enable highly conformal dose distributions, but even small deviations from the planned beam range can cause overdosing of healthy tissue or under-dosing of the tumour. To reduce this risk, particle therapy treatment plans include margins large enough to account for all the sources of range uncertainty, which include patient setup errors, patient anatomy changes, and CT number to stopping power ratios. Any system that could verify the beam range in vivo, would allow reduced margins and more conformal dose distributions. Toward our goal developing such a system based on Compton camera (CC) imaging, we studied how three configurations (single camera, parallel opposed, and orthogonal) affect the quality of the 3D images. We found that single CC and parallel opposed configurations produced superior images in 2D. The increase in parallax produced by an orthogonal CC configuration was shown to be beneficial in producing artefact free 3D images.
A commercialized photoacoustic microscopy system with switchable optical and acoustic resolutions
NASA Astrophysics Data System (ADS)
Pu, Yang; Bi, Renzhe; Olivo, Malini; Zhao, Xiaojie
2018-02-01
A focused-scanning photoacoustic microscopy (PAM) is available to help advancing life science research in neuroscience, cell biology, and in vivo imaging. At this early stage, the only one manufacturer of PAM systems, MicroPhotoAcoustics (MPA; Ronkonkoma, NY), MPA has developed a commercial PAM system with switchable optical and acoustic resolution (OR- and AR-PAM), using multiple patents licensed from the lab of Lihong Wang, who pioneered photoacoustics. The system includes different excitation sources. Two kilohertz-tunable, Q-switched, Diode Pumped Solid-State (DPSS) lasers offering a up to 30kHz pulse repetition rate and 9 ns pulse duration with 532 and 559 nm to achieve functional photoacoustic tomography for sO2 (oxygen saturation of hemoglobin) imaging in OR-PAM. A Ti:sapphire laser from 700 to 900 nm to achieve deep-tissue imaging. OR-PAM provides up to 1 mm penetration depth and 5 μm lateral resolution. while AR-PAM offers up to 3 mm imaging depth and 45 μm lateral resolution. The scanning step sizes for OR- and AR-PAM are 0.625 and 6.25 μm, respectively. Researchers have used the system for a range of applications, including preclinical neural imaging; imaging of cell nuclei in intestine, ear, and leg; and preclinical human imaging of finger cuticle. With the continuation of new technological advancements and discoveries, MPA plans to further advance PAM to achieve faster imaging speed, higher spatial resolution at deeper tissue layer, and address a broader range of biomedical applications.
Unsynchronized scanning with a low-cost laser range finder for real-time range imaging
NASA Astrophysics Data System (ADS)
Hatipoglu, Isa; Nakhmani, Arie
2017-06-01
Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage, agriculture, forestry, reverse engineering. One of the most popular range-measuring technologies is laser scanner due to its several advantages: long range, high precision, real-time measurement capabilities, and no dependence on lighting conditions. However, laser scanners are very costly. Their high cost prevents widespread use in applications. Due to the latest developments in technology, now, low-cost, reliable, faster, and light-weight 1D laser range finders (LRFs) are available. A low-cost 1D LRF with a scanning mechanism, providing the ability of laser beam steering for additional dimensions, enables to capture a depth map. In this work, we present an unsynchronized scanning with a low-cost LRF to decrease scanning period and reduce vibrations caused by stop-scan in synchronized scanning. Moreover, we developed an algorithm for alignment of unsynchronized raw data and proposed range image post-processing framework. The proposed technique enables to have a range imaging system for a fraction of the price of its counterparts. The results prove that the proposed method can fulfill the need for a low-cost laser scanning for range imaging for static environments because the most significant limitation of the method is the scanning period which is about 2 minutes for 55,000 range points (resolution of 250x220 image). In contrast, scanning the same image takes around 4 minutes in synchronized scanning. Once faster, longer range, and narrow beam LRFs are available, the methods proposed in this work can produce better results.
Helmet-mounted displays in long-range-target visual acquisition
NASA Astrophysics Data System (ADS)
Wilkins, Donald F.
1999-07-01
Aircrews have always sought a tactical advantage within the visual range (WVR) arena -- usually defined as 'see the opponent first.' Even with radar and interrogation foe/friend (IFF) systems, the pilot who visually acquires his opponent first has a significant advantage. The Helmet Mounted Cueing System (HMCS) equipped with a camera offers an opportunity to correct the problems with the previous approaches. By utilizing real-time image enhancement technique and feeding the image to the pilot on the HMD, the target can be visually acquired well beyond the range provided by the unaided eye. This paper will explore the camera and display requirements for such a system and place those requirements within the context of other requirements, such as weight.
A 100-200 MHz ultrasound biomicroscope.
Knspik, D A; Starkoski, B; Pavlin, C J; Foster, F S
2000-01-01
The development of higher frequency ultrasound imaging systems affords a unique opportunity to visualize living tissue at the microscopic level. This work was undertaken to assess the potential of ultrasound imaging in vivo using the 100-200 MHz range. Spherically focused lithium niobate transducers were fabricated. The properties of a 200 MHz center frequency device are described in detail. This transducer showed good sensitivity with an insertion loss of 18 dB at 200 MHz. Resolution of 14 /spl mu/m in the lateral direction and 12 /spl mu/m in the axial direction was achieved with f/1.14 focusing. A linear mechanical scan system and a scan converter were used to generate B-scan images at a frame rate up to 12 frames per second. System performance in B-mode imaging is limited by frequency dependent attenuation in tissues. An alternative technique, zone-focus image collection, was investigated to extend depth of field. Images of coronary arteries, the eye, and skin are presented along with some preliminary correlations with histology. These results demonstrate the feasibility of ultrasound biomicroscopy In the 100-200 MHz range. Further development of ultrasound backscatter imaging at frequencies up to and above 200 MHz will contribute valuable information about tissue microstructure.
Preliminary experimental results from a MARS Micro-CT system.
He, Peng; Yu, Hengyong; Thayer, Patrick; Jin, Xin; Xu, Qiong; Bennett, James; Tappenden, Rachael; Wei, Biao; Goldstein, Aaron; Renaud, Peter; Butler, Anthony; Butler, Phillip; Wang, Ge
2012-01-01
The Medipix All Resolution System (MARS) system is a commercial spectral/multi-energy micro-CT scanner designed and assembled by the MARS Bioimaging, Ltd. in New Zealand. This system utilizes the state-of-the-art Medipix photon-counting, energy-discriminating detector technology developed by a collaboration at European Organization for Nuclear Research (CERN). In this paper, we report our preliminary experimental results using this system, including geometrical alignment, photon energy characterization, protocol optimization, and spectral image reconstruction. We produced our scan datasets with a multi-material phantom, and then applied ordered subset-simultaneous algebraic reconstruction technique (OS-SART) to reconstruct images in different energy ranges and principal component analysis (PCA) to evaluate spectral deviation among the energy ranges.
Automatic panoramic thermal integrated sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.
2005-05-01
Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defense and offensive operations, as well as serve as a sensor node in tactical Intelligence Surveillance Reconnaissance (ISR). The ViperView is as an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480 pixels IR camera with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the ATPIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.
Xu, Jingjiang; Song, Shaozhen; Wei, Wei; Wang, Ruikang K
2017-01-01
Wide-field vascular visualization in bulk tissue that is of uneven surface is challenging due to the relatively short ranging distance and significant sensitivity fall-off for most current optical coherence tomography angiography (OCTA) systems. We report a long ranging and ultra-wide-field OCTA (UW-OCTA) system based on an akinetic swept laser. The narrow instantaneous linewidth of the swept source with its high phase stability, combined with high-speed detection in the system enable us to achieve long ranging (up to 46 mm) and almost negligible system sensitivity fall-off. To illustrate these advantages, we compare the basic system performances between conventional spectral domain OCTA and UW-OCTA systems and their functional imaging of microvascular networks in living tissues. In addition, we show that the UW-OCTA is capable of different depth-ranging of cerebral blood flow within entire brain in mice, and providing unprecedented blood perfusion map of human finger in vivo . We believe that the UW-OCTA system has promises to augment the existing clinical practice and explore new biomedical applications for OCT imaging.
Xu, Jingjiang; Song, Shaozhen; Wei, Wei; Wang, Ruikang K.
2016-01-01
Wide-field vascular visualization in bulk tissue that is of uneven surface is challenging due to the relatively short ranging distance and significant sensitivity fall-off for most current optical coherence tomography angiography (OCTA) systems. We report a long ranging and ultra-wide-field OCTA (UW-OCTA) system based on an akinetic swept laser. The narrow instantaneous linewidth of the swept source with its high phase stability, combined with high-speed detection in the system enable us to achieve long ranging (up to 46 mm) and almost negligible system sensitivity fall-off. To illustrate these advantages, we compare the basic system performances between conventional spectral domain OCTA and UW-OCTA systems and their functional imaging of microvascular networks in living tissues. In addition, we show that the UW-OCTA is capable of different depth-ranging of cerebral blood flow within entire brain in mice, and providing unprecedented blood perfusion map of human finger in vivo. We believe that the UW-OCTA system has promises to augment the existing clinical practice and explore new biomedical applications for OCT imaging. PMID:28101428
2015-09-30
changes in near-shore water columns and support companion laser imaging system tests. The physical, biological and optical oceanographic data...developed under this project will be used as input to optical and environmental models to assess the performance characteristics of laser imaging systems...OBJECTIVES We proposed to characterize the physical, biological and optical fields present during deployments of the Streak Tube Imaging Lidar
NASA Astrophysics Data System (ADS)
Han, Bin
This dissertation describes a research project to test the clinical utility of a time-resolved proton radiographic (TRPR) imaging system by performing comprehensive Monte Carlo simulations of a physical device coupled with realistic lung cancer patient anatomy defined by 4DCT for proton therapy. A time-resolved proton radiographic imaging system was modeled through Monte Carlo simulations. A particle-tracking feature was employed to evaluate the performance of the proton imaging system, especially in its ability to visualize and quantify proton range variations during respiration. The Most Likely Path (MLP) algorithm was developed to approximate the multiple Coulomb scattering paths of protons for the purpose of image reconstruction. Spatial resolution of ˜ 1 mm and range resolution of 1.3% of the total range were achieved using the MLP algorithm. Time-resolved proton radiographs of five patient cases were reconstructed to track tumor motion and to calculate water equivalent length variations. By comparing with direct 4DCT measurement, the accuracy of tumor tracking was found to be better than 2 mm in five patient cases. Utilizing tumor tracking information to reduce margins to the planning target volume, a gated treatment plan was compared with un-gated treatment plan. The equivalent uniform dose (EUD) and the normal tissue complication probability (NTCP) were used to quantify the gain in the quality of treatments. The EUD of the OARs was found to be reduced up to 11% and the corresponding NTCP of organs at risk (OARs) was found to be reduced up to 16.5%. These results suggest that, with image guidance by proton radiography, dose to OARs can be reduced and the corresponding NTCPs can be significantly reduced. The study concludes that the proton imaging system can accurately track the motion of the tumor and detect the WEL variations, leading to potential gains in using image-guided proton radiography for lung cancer treatments.
TU-AB-207-01: Introduction to Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sechopoulos, I.
2015-06-15
Digital Tomosynthesis (DT) is becoming increasingly common in breast imaging and many other applications. DT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into a tomographic data set. The angular range and number of projections is determined both by the imaging task and equipment manufacturer. For example, in breast imaging between 9 and 25 projections are acquired over a range of 15° to 60°. It is equally valid to treat DT as the digital analog of classical tomography - for example, linear tomography. In fact,more » the name “tomosynthesis” is an acronym for “synthetic tomography”. DT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DT systems is a hybrid between CT and classical tomographic methods. This lecture will consist of three presentations that will provide a complete overview of DT, including a review of the fundamentals of DT, a discussion of testing methods for DT systems, and a description of the clinical applications of DT. While digital breast tomosynthesis will be emphasized, analogies will be drawn to body imaging to illustrate and compare tomosynthesis methods. Learning Objectives: To understand the fundamental principles behind tomosynthesis, including the determinants of image quality and dose. To learn how to test the performance of tomosynthesis imaging systems. To appreciate the uses of tomosynthesis in the clinic and the future applications of tomosynthesis.« less
TU-AB-207-03: Tomosynthesis: Clinical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maidment, A.
2015-06-15
Digital Tomosynthesis (DT) is becoming increasingly common in breast imaging and many other applications. DT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into a tomographic data set. The angular range and number of projections is determined both by the imaging task and equipment manufacturer. For example, in breast imaging between 9 and 25 projections are acquired over a range of 15° to 60°. It is equally valid to treat DT as the digital analog of classical tomography - for example, linear tomography. In fact,more » the name “tomosynthesis” is an acronym for “synthetic tomography”. DT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DT systems is a hybrid between CT and classical tomographic methods. This lecture will consist of three presentations that will provide a complete overview of DT, including a review of the fundamentals of DT, a discussion of testing methods for DT systems, and a description of the clinical applications of DT. While digital breast tomosynthesis will be emphasized, analogies will be drawn to body imaging to illustrate and compare tomosynthesis methods. Learning Objectives: To understand the fundamental principles behind tomosynthesis, including the determinants of image quality and dose. To learn how to test the performance of tomosynthesis imaging systems. To appreciate the uses of tomosynthesis in the clinic and the future applications of tomosynthesis.« less
TU-AB-207-00: Digital Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
Digital Tomosynthesis (DT) is becoming increasingly common in breast imaging and many other applications. DT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into a tomographic data set. The angular range and number of projections is determined both by the imaging task and equipment manufacturer. For example, in breast imaging between 9 and 25 projections are acquired over a range of 15° to 60°. It is equally valid to treat DT as the digital analog of classical tomography - for example, linear tomography. In fact,more » the name “tomosynthesis” is an acronym for “synthetic tomography”. DT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DT systems is a hybrid between CT and classical tomographic methods. This lecture will consist of three presentations that will provide a complete overview of DT, including a review of the fundamentals of DT, a discussion of testing methods for DT systems, and a description of the clinical applications of DT. While digital breast tomosynthesis will be emphasized, analogies will be drawn to body imaging to illustrate and compare tomosynthesis methods. Learning Objectives: To understand the fundamental principles behind tomosynthesis, including the determinants of image quality and dose. To learn how to test the performance of tomosynthesis imaging systems. To appreciate the uses of tomosynthesis in the clinic and the future applications of tomosynthesis.« less
TU-AB-207-02: Testing of Body and Breast Tomosynthesis Sytems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A.
2015-06-15
Digital Tomosynthesis (DT) is becoming increasingly common in breast imaging and many other applications. DT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into a tomographic data set. The angular range and number of projections is determined both by the imaging task and equipment manufacturer. For example, in breast imaging between 9 and 25 projections are acquired over a range of 15° to 60°. It is equally valid to treat DT as the digital analog of classical tomography - for example, linear tomography. In fact,more » the name “tomosynthesis” is an acronym for “synthetic tomography”. DT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DT systems is a hybrid between CT and classical tomographic methods. This lecture will consist of three presentations that will provide a complete overview of DT, including a review of the fundamentals of DT, a discussion of testing methods for DT systems, and a description of the clinical applications of DT. While digital breast tomosynthesis will be emphasized, analogies will be drawn to body imaging to illustrate and compare tomosynthesis methods. Learning Objectives: To understand the fundamental principles behind tomosynthesis, including the determinants of image quality and dose. To learn how to test the performance of tomosynthesis imaging systems. To appreciate the uses of tomosynthesis in the clinic and the future applications of tomosynthesis.« less
Study on super-resolution three-dimensional range-gated imaging technology
NASA Astrophysics Data System (ADS)
Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao
2018-04-01
Range-gated three dimensional imaging technology is a hotspot in recent years, because of the advantages of high spatial resolution, high range accuracy, long range, and simultaneous reflection of target reflectivity information. Based on the study of the principle of intensity-related method, this paper has carried out theoretical analysis and experimental research. The experimental system adopts the high power pulsed semiconductor laser as light source, gated ICCD as the imaging device, can realize the imaging depth and distance flexible adjustment to achieve different work mode. The imaging experiment of small imaging depth is carried out aiming at building 500m away, and 26 group images were obtained with distance step 1.5m. In this paper, the calculation method of 3D point cloud based on triangle method is analyzed, and 15m depth slice of the target 3D point cloud are obtained by using two frame images, the distance precision is better than 0.5m. The influence of signal to noise ratio, illumination uniformity and image brightness on distance accuracy are analyzed. Based on the comparison with the time-slicing method, a method for improving the linearity of point cloud is proposed.
Structured light: theory and practice and practice and practice...
NASA Astrophysics Data System (ADS)
Keizer, Richard L.; Jun, Heesung; Dunn, Stanley M.
1991-04-01
We have developed a structured light system for noncontact 3-D measurement of human body surface areas and volumes. We illustrate the image processing steps and algorithms used to recover range data from a single camera image, reconstruct a complete surface from one or more sets of range data, and measure areas and volumes. The development of a working system required the solution to a number of practical problems in image processing and grid labeling (the stereo correspondence problem for structured light). In many instances we found that the standard cookbook techniques for image processing failed. This was due in part to the domain (human body), the restrictive assumptions of the models underlying the cookbook techniques, and the inability to consistently predict the outcome of the image processing operations. In this paper, we will discuss some of our successes and failures in two key steps in acquiring range data using structured light: First, the problem of detecting intersections in the structured light grid, and secondly, the problem of establishing correspondence between projected and detected intersections. We will outline the problems and solutions we have arrived at after several years of trial and error. We can now measure range data with an r.m.s. relative error of 0.3% and measure areas on the human body surface within 3% and volumes within 10%. We have found that the solution to building a working vision system requires the right combination of theory and experimental verification.
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
NASA Astrophysics Data System (ADS)
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
Radiometric infrared focal plane array imaging system for thermographic applications
NASA Technical Reports Server (NTRS)
Esposito, B. J.; Mccafferty, N.; Brown, R.; Tower, J. R.; Kosonocky, W. F.
1992-01-01
This document describes research performed under the Radiometric Infrared Focal Plane Array Imaging System for Thermographic Applications contract. This research investigated the feasibility of using platinum silicide (PtSi) Schottky-barrier infrared focal plane arrays (IR FPAs) for NASA Langley's specific radiometric thermal imaging requirements. The initial goal of this design was to develop a high spatial resolution radiometer with an NETD of 1 percent of the temperature reading over the range of 0 to 250 C. The proposed camera design developed during this study and described in this report provides: (1) high spatial resolution (full-TV resolution); (2) high thermal dynamic range (0 to 250 C); (3) the ability to image rapid, large thermal transients utilizing electronic exposure control (commandable dynamic range of 2,500,000:1 with exposure control latency of 33 ms); (4) high uniformity (0.5 percent nonuniformity after correction); and (5) high thermal resolution (0.1 C at 25 C background and 0.5 C at 250 C background).
Radiometric infrared focal plane array imaging system for thermographic applications
NASA Astrophysics Data System (ADS)
Esposito, B. J.; McCafferty, N.; Brown, R.; Tower, J. R.; Kosonocky, W. F.
1992-11-01
This document describes research performed under the Radiometric Infrared Focal Plane Array Imaging System for Thermographic Applications contract. This research investigated the feasibility of using platinum silicide (PtSi) Schottky-barrier infrared focal plane arrays (IR FPAs) for NASA Langley's specific radiometric thermal imaging requirements. The initial goal of this design was to develop a high spatial resolution radiometer with an NETD of 1 percent of the temperature reading over the range of 0 to 250 C. The proposed camera design developed during this study and described in this report provides: (1) high spatial resolution (full-TV resolution); (2) high thermal dynamic range (0 to 250 C); (3) the ability to image rapid, large thermal transients utilizing electronic exposure control (commandable dynamic range of 2,500,000:1 with exposure control latency of 33 ms); (4) high uniformity (0.5 percent nonuniformity after correction); and (5) high thermal resolution (0.1 C at 25 C background and 0.5 C at 250 C background).
Habitable Exoplanet Imager Optical Telescope Concept Design
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2017-01-01
Habitable Exoplanet Imaging Mission (HabEx) is a concept for a mission to directly image and characterize planetary systems around Sun-like stars. In addition to the search for life on Earth-like exoplanets, HabEx will enable a broad range of general astrophysics science enabled by 100 to 2500 nm spectral range and 3 x 3 arc-minute FOV. HabEx is one of four mission concepts currently being studied for the 2020 Astrophysics Decadal Survey.
Habitable Exoplanet Imager: Optical Telescope Structural Design and Performance Prediction
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2017-01-01
Habitable Exoplanet Imaging Mission (HabEx) is a concept for a mission to directly image and characterize planetary systems around Sun-like stars. In addition to the search for life on Earth-like exoplanets, HabExwill enable a broad range of general astrophysics science enabled by 100 to 2500 nm spectral range and 3 x 3 arc-minute FOV. HabExis one of four mission concepts currently being studied for the 2020 Astrophysics Decadal Survey.
Light-pollution measurement with the Wide-field all-sky image analyzing monitoring system
NASA Astrophysics Data System (ADS)
Vítek, S.
2017-07-01
The purpose of this experiment was to measure light pollution in the capital of Czech Republic, Prague. As a measuring instrument is used calibrated consumer level digital single reflex camera with IR cut filter, therefore, the paper reports results of measuring and monitoring of the light pollution in the wavelength range of 390 - 700 nm, which most affects visual range astronomy. Combining frames of different exposure times made with a digital camera coupled with fish-eye lens allow to create high dynamic range images, contain meaningful values, so such a system can provide absolute values of the sky brightness.
Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.
Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed
2009-06-01
Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method utilized as initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests are given to provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations, and demonstrate the merits of this novel 3-D reconstruction paradigm.
Haga, Yoshihiro; Chida, Koichi; Inaba, Yohei; Kaga, Yuji; Meguro, Taiichiro; Zuguchi, Masayuki
2016-02-01
As the use of diagnostic X-ray equipment with flat panel detectors (FPDs) has increased, so has the importance of proper management of FPD systems. To ensure quality control (QC) of FPD system, an easy method for evaluating FPD imaging performance for both stationary and moving objects is required. Until now, simple rotatable QC phantoms have not been available for the easy evaluation of the performance (spatial resolution and dynamic range) of FPD in imaging moving objects. We developed a QC phantom for this purpose. It consists of three thicknesses of copper and a rotatable test pattern of piano wires of various diameters. Initial tests confirmed its stable performance. Our moving phantom is very useful for QC of FPD images of moving objects because it enables visual evaluation of image performance (spatial resolution and dynamic range) easily.
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R2 = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
NASA Astrophysics Data System (ADS)
Dayhoff, Ruth E.; Maloney, Daniel L.
1990-08-01
The effective delivery of health care has become increasingly dependent on a wide range of medical data which includes a variety of images. Manual and computer-based medical records ordinarily do not contain image data, leaving the physician to deal with a fragmented patient record widely scattered throughout the hospital. The Department of Veterans Affairs (VA) is currently installing a prototype hospital information system (HIS) workstation network to demonstrate the feasibility of providing image management and communications (IMAC) functionality as an integral part of an existing hospital information system. The core of this system is a database management system adapted to handle images as a new data type. A general model for this integration is discussed and specifics of the hospital-wide network of image display workstations are given.
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes the post-processing influence assessment experiment, the experiment includes three steps: the physical simulation, image processing, and image quality assessment. The physical simulation models sampled imaging system in laboratory, the imaging system parameters are tested, the digital image serving as image processing input are produced by this imaging system with the same imaging system parameters. The gathered optical sampled images with the tested imaging parameters are processed by 3 digital image processes, including calibration pre-processing, lossy compression with different compression ratio and image post-processing with different core. Image quality assessment method used is just noticeable difference (JND) subject assessment based on ISO20462, through subject assessment of the gathered and processing images, the influence of different imaging parameters and post-processing to image quality can be found. The six JND subject assessment experimental data can be validated each other. Main conclusions include: image post-processing can improve image quality; image post-processing can improve image quality even with lossy compression, image quality with higher compression ratio improves less than lower ratio; with our image post-processing method, image quality is better, when camera MTF being within a small range.
Optical design and testing: introduction.
Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin
2014-10-10
Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.
Satellite image collection optimization
NASA Astrophysics Data System (ADS)
Martin, William
2002-09-01
Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical for both satisfying customer orders and building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of what if adjustments to an image collection plan. Used for both long range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite.
Influence of long-range Coulomb interaction in velocity map imaging.
Barillot, T; Brédy, R; Celep, G; Cohen, S; Compagnon, I; Concina, B; Constant, E; Danakas, S; Kalaitzis, P; Karras, G; Lépine, F; Loriot, V; Marciniak, A; Predelus-Renois, G; Schindler, B; Bordas, C
2017-07-07
The standard velocity-map imaging (VMI) analysis relies on the simple approximation that the residual Coulomb field experienced by the photoelectron ejected from a neutral or ion system may be neglected. Under this almost universal approximation, the photoelectrons follow ballistic (parabolic) trajectories in the externally applied electric field, and the recorded image may be considered as a 2D projection of the initial photoelectron velocity distribution. There are, however, several circumstances where this approximation is not justified and the influence of long-range forces must absolutely be taken into account for the interpretation and analysis of the recorded images. The aim of this paper is to illustrate this influence by discussing two different situations involving isolated atoms or molecules where the analysis of experimental images cannot be performed without considering long-range Coulomb interactions. The first situation occurs when slow (meV) photoelectrons are photoionized from a neutral system and strongly interact with the attractive Coulomb potential of the residual ion. The result of this interaction is the formation of a more complex structure in the image, as well as the appearance of an intense glory at the center of the image. The second situation, observed also at low energy, occurs in the photodetachment from a multiply charged anion and it is characterized by the presence of a long-range repulsive potential. Then, while the standard VMI approximation is still valid, the very specific features exhibited by the recorded images can be explained only by taking into consideration tunnel detachment through the repulsive Coulomb barrier.
Sodickson, Daniel K.
2010-01-01
Cardiovascular magnetic resonance imaging (CVMRI) is of proven clinical value in the non-invasive imaging of cardiovascular diseases. CVMRI requires rapid image acquisition, but acquisition speed is fundamentally limited in conventional MRI. Parallel imaging provides a means for increasing acquisition speed and efficiency. However, signal-to-noise (SNR) limitations and the limited number of receiver channels available on most MR systems have in the past imposed practical constraints, which dictated the use of moderate accelerations in CVMRI. High levels of acceleration, which were unattainable previously, have become possible with many-receiver MR systems and many-element, cardiac-optimized RF-coil arrays. The resulting imaging speed improvements can be exploited in a number of ways, ranging from enhancement of spatial and temporal resolution to efficient whole heart coverage to streamlining of CVMRI work flow. In this review, examples of these strategies are provided, following an outline of the fundamentals of the highly accelerated imaging approaches employed in CVMRI. Topics discussed include basic principles of parallel imaging; key requirements for MR systems and RF-coil design; practical considerations of SNR management, supported by multi-dimensional accelerations, 3D noise averaging and high field imaging; highly accelerated clinical state-of-the art cardiovascular imaging applications spanning the range from SNR-rich to SNR-limited; and current trends and future directions. PMID:17562047
Design and application of an array extended blackbody
NASA Astrophysics Data System (ADS)
Zhang, Ya-zhou; Fan, Xiao-li; Lei, Hao; Zhou, Zhi-yuan
2018-02-01
An array extended blackbody is designed to quantitatively measure and evaluate the performance of infrared imaging systems. The theory, structure, control software and application of blackbody are introduced. The parameters of infrared imaging systems such as the maximum detectable range, detection sensitivity, spatial resolution and temperature resolution can be measured.
No scanning depth imaging system based on TOF
NASA Astrophysics Data System (ADS)
Sun, Rongchun; Piao, Yan; Wang, Yu; Liu, Shuo
2016-03-01
To quickly obtain a 3D model of real world objects, multi-point ranging is very important. However, the traditional measuring method usually adopts the principle of point by point or line by line measurement, which is too slow and of poor efficiency. In the paper, a no scanning depth imaging system based on TOF (time of flight) was proposed. The system is composed of light source circuit, special infrared image sensor module, processor and controller of image data, data cache circuit, communication circuit, and so on. According to the working principle of the TOF measurement, image sequence was collected by the high-speed CMOS sensor, and the distance information was obtained by identifying phase difference, and the amplitude image was also calculated. Experiments were conducted and the experimental results show that the depth imaging system can achieve no scanning depth imaging function with good performance.
Multiscale optical imaging of rare-earth-doped nanocomposites in a small animal model.
Higgins, Laura M; Ganapathy, Vidya; Kantamneni, Harini; Zhao, Xinyu; Sheng, Yang; Tan, Mei-Chee; Roth, Charles M; Riman, Richard E; Moghe, Prabhas V; Pierce, Mark C
2018-03-01
Rare-earth-doped nanocomposites have appealing optical properties for use as biomedical contrast agents, but few systems exist for imaging these materials. We describe the design and characterization of (i) a preclinical system for whole animal in vivo imaging and (ii) an integrated optical coherence tomography/confocal microscopy system for high-resolution imaging of ex vivo tissues. We demonstrate these systems by administering erbium-doped nanocomposites to a murine model of metastatic breast cancer. Short-wave infrared emissions were detected in vivo and in whole organ imaging ex vivo. Visible upconversion emissions and tissue autofluorescence were imaged in biopsy specimens, alongside optical coherence tomography imaging of tissue microstructure. We anticipate that this work will provide guidance for researchers seeking to image these nanomaterials across a wide range of biological models. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Development of a High-Throughput Microwave Imaging System for Concealed Weapons Detection
2016-07-15
hardware. Index Terms—Microwave imaging, multistatic radar, Fast Fourier Transform (FFT). I. INTRODUCTION Near-field microwave imaging is a non-ionizing...configuration, but its computational demands are extreme. Fast Fourier Transform (FFT) imaging has long been used to efficiently construct images sampled with...Simulated image of 25 point scatterers imaged at range 1.5m, with array layout depicted in Fig. 3. Left: image formed with Equation (5) ( Fourier
Highly Protable Airborne Multispectral Imaging System
NASA Technical Reports Server (NTRS)
Lehnemann, Robert; Mcnamee, Todd
2001-01-01
A portable instrumentation system is described that includes and airborne and a ground-based subsytem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small,munpilotaed, remotely controlled airplance that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) reciever.
NASA Astrophysics Data System (ADS)
Jaillon, Franck; Makita, Shuichi; Yasuno, Yoshiaki
2012-03-01
Ability of a new version of one-micrometer dual-beam optical coherence angiography (OCA) based on Doppler optical coherence tomography (OCT), is demonstrated for choroidal vasculature imaging. A particular feature of this system is the adjustable time delay between two probe beams. This allows changing the measurable velocity range of moving constituents such as blood without alteration of the scanning protocol. Since choroidal vasculature is made of vessels having blood flows with different velocities, this technique provides a way of discriminating vessels according to the velocity range of their inner flow. An example of choroid imaging of a normal emmetropic eye is here given. It is shown that combining images acquired with different velocity ranges provides an enhanced vasculature representation. This method may be then useful for pathological choroid characterization.
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.
2014-03-01
We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near infrared wavelengths for mapping the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements including a melanocytic nevus, and venous occlusion conditions were investigated and compared with other ratiometric spectral imaging approaches. Access to the broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.
A preclinical Talbot-Lau prototype for x-ray dark-field imaging of human-sized objects.
Hauke, C; Bartl, P; Leghissa, M; Ritschl, L; Sutter, S M; Weber, T; Zeidler, J; Freudenberger, J; Mertelmeier, T; Radicke, M; Michel, T; Anton, G; Meinel, F G; Baehr, A; Auweter, S; Bondesson, D; Gaass, T; Dinkel, J; Reiser, M; Hellbach, K
2018-06-01
Talbot-Lau x-ray interferometry provides information about the scattering and refractive properties of an object - in addition to the object's attenuation features. Until recently, this method was ineligible for imaging human-sized objects as it is challenging to adapt Talbot-Lau interferometers (TLIs) to the relevant x-ray energy ranges. In this work, we present a preclinical Talbot-Lau prototype capable of imaging human-sized objects with proper image quality at clinically acceptable dose levels. The TLI is designed to match a setup of clinical relevance as closely as possible. The system provides a scan range of 120 × 30 cm 2 by using a scanning beam geometry. Its ultimate load is 100 kg. High aspect ratios and fine grid periods of the gratings ensure a reasonable setup length and clinically relevant image quality. The system is installed in a university hospital and is, therefore, exposed to the external influences of a clinical environment. To demonstrate the system's capabilities, a full-body scan of a euthanized pig was performed. In addition, freshly excised porcine lungs with an extrinsically provoked pneumothorax were mounted into a human thorax phantom and examined with the prototype. Both examination sequences resulted in clinically relevant image quality - even in the case of a skin entrance air kerma of only 0.3 mGy which is in the range of human thoracic imaging. The presented case of a pneumothorax and a reader study showed that the prototype's dark-field images provide added value for pulmonary diagnosis. We demonstrated that a dedicated design of a Talbot-Lau interferometer can be applied to medical imaging by constructing a preclinical Talbot-Lau prototype. We experienced that the system is feasible for imaging human-sized objects and the phase-stepping approach is suitable for clinical practice. Hence, we conclude that Talbot-Lau x-ray imaging has potential for clinical use and enhances the diagnostic power of medical x-ray imaging. © 2018 American Association of Physicists in Medicine.
Biomedical imaging with THz waves
NASA Astrophysics Data System (ADS)
Nguyen, Andrew
2010-03-01
We discuss biomedical imaging using radio waves operating in the terahertz (THz) range between 300 GHz to 3 THz. Particularly, we present the concept for two THz imaging systems. One system employs single antenna, transmitter and receiver operating over multi-THz-frequency simultaneously for sensing and imaging small areas of the human body or biological samples. Another system consists of multiple antennas, a transmitter, and multiple receivers operating over multi-THz-frequency capable of sensing and imaging simultaneously the whole body or large biological samples. Using THz waves for biomedical imaging promises unique and substantial medical benefits including extremely small medical devices, extraordinarily fine spatial resolution, and excellent contrast between images of diseased and healthy tissues. THz imaging is extremely attractive for detection of cancer in the early stages, sensing and imaging of tissues near the skin, and study of disease and its growth versus time.
NASA Astrophysics Data System (ADS)
Pierzchalski, Arkadiusz; Marecka, Monika; Müller, Hans-Willy; Bocsi, József; Tárnok, Attila
2009-02-01
Flow cytometers (FCM) are built for particle measurements. In principle, concentration measurement of a homogeneous solution is not possible with FCM due to the lack of a trigger signal. In contrast to FCM slide based cytometry systems could act as tools for the measurement of concentrations using volume defined cell counting chambers. These chambers enable to analyze a well defined volume. Sensovation AG (Stockach, Germany) introduced an automated imaging system that combines imaging with cytometric features analysis. Aim of this study was to apply this imaging system to quantify the fluorescent molecule concentrations. The Lumisens (Sensovation AG) slide-based technology based on fluorescence digital imaging microscopy was used. The instrument is equipped with an inverted microscope, blue and red LEDs, double band-pass filters and a high-resolution cooled 16-bit digital camera. The instrument was focussed on the bottom of 400μm deep 6 chamber slides (IBIDI GmbH, Martinsried, Germany) or flat bottom 96 well plates (Greiner Bio One GmbH, Frickenhausen, Germany). Fluorescent solutions were imaged under 90% pixel saturation in a broad concentration range (FITC: 0.0002-250 μg/ml, methylene blue (MethB): 0.0002-250 μg/ml). Exposition times were recorded. Images were analysed by the iCys (CompuCyte Corp., Cambridge, MA, USA) image analysis software with the phantom contour function. Relative fluorescence intensities were calculated from mean fluorescence intensities per phantom contours divided by the exposition time. Solution concentrations could be distinguished over a broad dynamic range of 3.5 to 5.5 decades log (range FITC: 0.0002-31.25μg/ml, MethB: 0.0076-31.25μg/ml) with a good linear relationship between dye concentration and relative fluorescence intensity. The minimal number of fluorescent molecules per pixel as determined by the mean fluorescence intensity and the molecular weight of the fluorochrome were about 800 molecules FITC and ~2.000 MethB. The novel slide-based imaging system is suitable for detection of fluorescence differences over a broad range of concentrations. This approach may lead to novel assays for measuring concentration differences in cell free solutions and cell cultures e.g. in secretion assays.
Wide-Angle, Flat-Field Telescope
NASA Technical Reports Server (NTRS)
Hallam, K. L.; Howell, B. J.; Wilson, M. E.
1987-01-01
All-reflective system unvignetted. Wide-angle telescope uses unobstructed reflecting elements to produce flat image. No refracting elements, no chromatic aberration, and telescope operates over spectral range from infrared to far ultraviolet. Telescope used with such image detectors as photographic firm, vidicons, and solid-state image arrays.
High speed parallel spectral-domain OCT using spectrally encoded line-field illumination
NASA Astrophysics Data System (ADS)
Lee, Kye-Sung; Hur, Hwan; Bae, Ji Yong; Kim, I. Jong; Kim, Dong Uk; Nam, Ki-Hwan; Kim, Geon-Hee; Chang, Ki Soo
2018-01-01
We report parallel spectral-domain optical coherence tomography (OCT) at 500 000 A-scan/s. This is the highest-speed spectral-domain (SD) OCT system using a single line camera. Spectrally encoded line-field scanning is proposed to increase the imaging speed in SD-OCT effectively, and the tradeoff between speed, depth range, and sensitivity is demonstrated. We show that three imaging modes of 125k, 250k, and 500k A-scan/s can be simply switched according to the sample to be imaged considering the depth range and sensitivity. To demonstrate the biological imaging performance of the high-speed imaging modes of the spectrally encoded line-field OCT system, human skin and a whole leaf were imaged at the speed of 250k and 500k A-scan/s, respectively. In addition, there is no sensitivity dependence in the B-scan direction, which is implicit in line-field parallel OCT using line focusing of a Gaussian beam with a cylindrical lens.
Miniaturized GPS/MEMS IMU integrated board
NASA Technical Reports Server (NTRS)
Lin, Ching-Fang (Inventor)
2012-01-01
This invention documents the efforts on the research and development of a miniaturized GPS/MEMS IMU integrated navigation system. A miniaturized GPS/MEMS IMU integrated navigation system is presented; Laser Dynamic Range Imager (LDRI) based alignment algorithm for space applications is discussed. Two navigation cameras are also included to measure the range and range rate which can be integrated into the GPS/MEMS IMU system to enhance the navigation solution.
NASA Astrophysics Data System (ADS)
Chen, Hao; Zhang, Xinggan; Bai, Yechao; Tang, Lan
2017-01-01
In inverse synthetic aperture radar (ISAR) imaging, the migration through resolution cells (MTRCs) will occur when the rotation angle of the moving target is large, thereby degrading image resolution. To solve this problem, an ISAR imaging method based on segmented preprocessing is proposed. In this method, the echoes of large rotating target are divided into several small segments, and every segment can generate a low-resolution image without MTRCs. Then, each low-resolution image is rotated back to the original position. After image registration and phase compensation, a high-resolution image can be obtained. Simulation and real experiments show that the proposed algorithm can deal with the radar system with different range and cross-range resolutions and significantly compensate the MTRCs.
Characterization of a novel anthropomorphic plastinated lung phantom
Yoon, Sungwon; Henry, Robert W.; Bouley, Donna M.; Bennett, N. Robert; Fahrig, Rebecca
2008-01-01
Phantoms are widely used during the development of new imaging systems and algorithms. For development and optimization of new imaging systems such as tomosynthesis, where conventional image quality metrics may not be applicable, a realistic phantom that can be used across imaging systems is desirable. A novel anthropomorphic lung phantom was developed by plastination of an actual pig lung. The plastinated phantom is characterized and compared with reference to in vivo images of the same tissue prior to plastination using high resolution 3D CT. The phantom is stable over time and preserves the anatomical features and relative locations of the in vivo sample. The volumes for different tissue types in the phantom are comparable to the in vivo counterparts, and CT numbers for different tissue types fall within a clinically useful range. Based on the measured CT numbers, the phantom cardiac tissue experienced a 92% decrease in bulk density and the phantom pulmonary tissue experienced a 78% decrease in bulk density compared to their in vivo counterparts. By-products in the phantom from the room temperature vulcanizing silicone and plastination process are also identified. A second generation phantom, which eliminates most of the by-products, is presented. Such anthropomorphic phantoms can be used to evaluate a wide range of novel imaging systems. PMID:19175148
Study on high power ultraviolet laser oil detection system
NASA Astrophysics Data System (ADS)
Jin, Qi; Cui, Zihao; Bi, Zongjie; Zhang, Yanchao; Tian, Zhaoshuo; Fu, Shiyou
2018-03-01
Laser Induce Fluorescence (LIF) is a widely used new telemetry technology. It obtains information about oil spill and oil film thickness by analyzing the characteristics of stimulated fluorescence and has an important application in the field of rapid analysis of water composition. A set of LIF detection system for marine oil pollution is designed in this paper, which uses 355nm high-energy pulsed laser as the excitation light source. A high-sensitivity image intensifier is used in the detector. The upper machine sends a digital signal through a serial port to achieve nanoseconds range-gated width control for image intensifier. The target fluorescence spectrum image is displayed on the image intensifier by adjusting the delay time and the width of the pulse signal. The spectral image is coupled to CCD by lens imaging to achieve spectral display and data analysis function by computer. The system is used to detect the surface of the floating oil film in the distance of 25m to obtain the fluorescence spectra of different oil products respectively. The fluorescence spectra of oil products are obvious. The experimental results show that the system can realize high-precision long-range fluorescence detection and reflect the fluorescence characteristics of the target accurately, with broad application prospects in marine oil pollution identification and oil film thickness detection.
NASA Astrophysics Data System (ADS)
Kaar, M.; Semturs, F.; Hummel, J.; Hoffmann, R.; Figl, M.
2015-03-01
Technical quality assurance (TQA) procedures for mammography systems usually include tests with a contrast-detail phantom. These phantoms contain multiple objects of varying dimensions arranged on a flat body. Exposures of the phantom are then evaluated by an observer, either human or software. One well-known issue of this method is that dose distribution is not uniform across the image area of any mammography system, mainly due to the heel effect. The purpose of this work is to investigate to what extent image quality differs across the detector plane. We analyze a total of 320 homogeneous mammography exposures from 32 radiology institutes. Systems of different models and manufacturers, both computed radiography (CR) and direct radiography (DR) are included. All images were taken from field installations operated within the nationwide Austrian mammography screening program, which includes mandatory continuous TQA. We calculate signal-to-noise ratios (SNR) for 15 regions of interest arranged to cover the area of the phantom. We define the 'signal range' of an image and compare this value categorized by technologies. We found the deviations of SNR greater in anterior-posterior than in lateral direction. SNR ranges are significantly higher for CR systems than for DR systems.
NASA Astrophysics Data System (ADS)
Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim
2014-09-01
Visualization of intracellular organelles is achieved using a newly developed high throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than the existing point-based scanning techniques. The advantages of the developed system are many, including, single-shot scanning of specimens flowing through the microfluidic channel at flow rate ranging from micro- to nano- lit./min. Moreover, this opens-up in-vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow-rate of 700 nl/min, and simultaneous visualization of fluorescently-labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Wesley; Sattarivand, Mike
Objective: To optimize dual-energy parameters of ExacTrac stereoscopic x-ray imaging system for lung SBRT patients Methods: Simulated spectra and a lung phantom were used to optimize filter material, thickness, kVps, and weighting factors to obtain bone subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number (Z) range [3–83] based on a metric defined to separate spectrums of high and low energies. Both energies used the same filter due to time constraints of image acquisition in lung SBRT imaging. A lung phantom containing bone, soft tissue, and a tumor mimicking material was imaged with filter thicknessesmore » range [0–1] mm and kVp range [60–140]. A cost function based on contrast-to-noise-ratio of bone, soft tissue, and tumor, as well as image noise content, was defined to optimize filter thickness and kVp. Using the optimized parameters, dual-energy images of anthropomorphic Rando phantom were acquired and evaluated for bone subtraction. Imaging dose was measured with dual-energy technique using tin filtering. Results: Tin was the material of choice providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-only image in the lung phantom was obtained using 0.3 mm tin and [140, 80] kVp pair. Dual-energy images of the Rando phantom had noticeable bone elimination when compared to no filtration. Dose was lower with tin filtering compared to no filtration. Conclusions: Dual-energy soft-tissue imaging is feasible using ExacTrac stereoscopic imaging system utilizing a single tin filter for both high and low energies and optimized acquisition parameters.« less
Information-Theoretic Assessment of Sample Imaging Systems
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Park, Stephen K.; Rahman, Zia-ur
1999-01-01
By rigorously extending modern communication theory to the assessment of sampled imaging systems, we develop the formulations that are required to optimize the performance of these systems within the critical constraints of image gathering, data transmission, and image display. The goal of this optimization is to produce images with the best possible visual quality for the wide range of statistical properties of the radiance field of natural scenes that one normally encounters. Extensive computational results are presented to assess the performance of sampled imaging systems in terms of information rate, theoretical minimum data rate, and fidelity. Comparisons of this assessment with perceptual and measurable performance demonstrate that (1) the information rate that a sampled imaging system conveys from the captured radiance field to the observer is closely correlated with the fidelity, sharpness and clarity with which the observed images can be restored and (2) the associated theoretical minimum data rate is closely correlated with the lowest data rate with which the acquired signal can be encoded for efficient transmission.
Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C
2003-01-01
The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
Optimization of a dual mode Rowland mount spectrometer used in the 120-950 nm wavelength range
NASA Astrophysics Data System (ADS)
McDowell, M. W.; Bouwer, H. K.
In a recent article, several configurations were described whereby a Rowland mount spectrometer could be modified to cover a wavelength range of 120-950 nm. In one of these configurations, large additional image aberration is introduced which severely limits the spectral resolving power. In the present article, the theoretical imaging properties of this configuration are considered and a simple method is proposed to reduce this aberration. The optimized system possesses an image quality similar to the conventional Rowland mount with the image surface slightly displaced from the Rowland circle but concentric to it.
Quantitative mapping of solute accumulation in a soil-root system by magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Haber-Pohlmeier, S.; Vanderborght, J.; Pohlmeier, A.
2017-08-01
Differential uptake of water and solutes by plant roots generates heterogeneous concentration distributions in soils. Noninvasive observations of root system architecture and concentration patterns therefore provide information about root water and solute uptake. We present the application of magnetic resonance imaging (MRI) to image and monitor root architecture and the distribution of a tracer, GdDTPA2- (Gadolinium-diethylenetriaminepentacetate) noninvasively during an infiltration experiment in a soil column planted with white lupin. We show that inversion recovery preparation within the MRI imaging sequence can quantitatively map concentrations of a tracer in a complex root-soil system. Instead of a simple T1 weighting, the procedure is extended by a wide range of inversion times to precisely map T1 and subsequently to cover a much broader concentration range of the solute. The derived concentrations patterns were consistent with mass balances and showed that the GdDTPA2- tracer represents a solute that is excluded by roots. Monitoring and imaging the accumulation of the tracer in the root zone therefore offers the potential to determine where and by which roots water is taken up.
NASA Astrophysics Data System (ADS)
Lee, Sung Hyun; Sunaguchi, Naoki; Hirano, Yoshiyuki; Kano, Yosuke; Liu, Chang; Torikoshi, Masami; Ohno, Tatsuya; Nakano, Takashi; Kanai, Tatsuaki
2018-02-01
In this study, we investigate the performance of the Gunma University Heavy Ion Medical Center’s ion computed tomography (CT) system, which measures the residual range of a carbon-ion beam using a fluoroscopy screen, a charge-coupled-device camera, and a moving wedge absorber and collects CT reconstruction images from each projection angle. Each 2D image was obtained by changing the polymethyl methacrylate (PMMA) thickness, such that all images for one projection could be expressed as the depth distribution in PMMA. The residual range as a function of PMMA depth was related to the range in water through a calibration factor, which was determined by comparing the PMMA-equivalent thickness measured by the ion CT system to the water-equivalent thickness measured by a water column. Aluminium, graphite, PMMA, and five biological phantoms were placed in a sample holder, and the residual range for each was quantified simultaneously. A novel method of CT reconstruction to correct for the angular deflection of incident carbon ions in the heterogeneous region utilising the Bragg peak reduction (BPR) is also introduced in this paper, and its performance is compared with other methods present in the literature such as the decomposition and differential methods. Stopping power ratio values derived with the BPR method from carbon-ion CT images matched closely with the true water-equivalent length values obtained from the validation slab experiment.
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
A short-range optical wireless transmission method based on LED
NASA Astrophysics Data System (ADS)
Miao, Meiyuan; Chen, Ailin; Zhu, Mingxing; Li, Ping; Gao, Yingming; Zou, Nianyu
2016-10-01
As to electromagnetic wave interfere and only one to one transmission problem of Bluetooth, a short-range LED optical wireless transmission method is proposed to be complementary technology in this paper. Furthermore achieved image transmission through this method. The system makes C52 to be the mater controller, transmitter got data from terminals by USB and sends modulated signals with LED. Optical signal is detected by PD, through amplified, filtered with shaping wave from, and demodulated on receiver. Then send to terminals like PC and reverted back to original image. Analysis the performance from peak power and average power, power consumption of transmitter, relationship of bit error rate and modulation mode, and influence of ambient light, respectively. The results shows that image can be received accurately which uses this method. The most distant transmission distance can get to 1m with transmitter LED source of 1w, and the transfer rate is 14.4Kbit/s with OOK modulation mode on stabilization system, the ambient light effect little to LED transmission system in normal light environment. The method is a convenient to carry LED wireless short range transmission for mobile transmission equipment as a supplement of Bluetooth short-range transmission for its ISM band interfere, and the analysis method in this paper can be a reference for other similar systems. It also proves the system is feasibility for next study.
Best-next-view algorithm for three-dimensional scene reconstruction using range images
NASA Astrophysics Data System (ADS)
Banta, J. E.; Zhien, Yu; Wang, X. Z.; Zhang, G.; Smith, M. T.; Abidi, Mongi A.
1995-10-01
The primary focus of the research detailed in this paper is to develop an intelligent sensing module capable of automatically determining the optimal next sensor position and orientation during scene reconstruction. To facilitate a solution to this problem, we have assembled a system for reconstructing a 3D model of an object or scene from a sequence of range images. Candidates for the best-next-view position are determined by detecting and measuring occlusions to the range camera's view in an image. Ultimately, the candidate which will reveal the greatest amount of unknown scene information is selected as the best-next-view position. Our algorithm uses ray tracing to determine how much new information a given sensor perspective will reveal. We have tested our algorithm successfully on several synthetic range data streams, and found the system's results to be consistent with an intuitive human search. The models recovered by our system from range data compared well with the ideal models. Essentially, we have proven that range information of physical objects can be employed to automatically reconstruct a satisfactory dynamic 3D computer model at a minimal computational expense. This has obvious implications in the contexts of robot navigation, manufacturing, and hazardous materials handling. The algorithm we developed takes advantage of no a priori information in finding the best-next-view position.
Wireless Command-and-Control of UAV-Based Imaging LANs
NASA Technical Reports Server (NTRS)
Herwitz, Stanley; Dunagan, S. E.; Sullivan, D. V.; Slye, R. E.; Leung, J. G.; Johnson, L. F.
2006-01-01
Dual airborne imaging system networks were operated using a wireless line-of-sight telemetry system developed as part of a 2002 unmanned aerial vehicle (UAV) imaging mission over the USA s largest coffee plantation on the Hawaiian island of Kauai. A primary mission objective was the evaluation of commercial-off-the-shelf (COTS) 802.11b wireless technology for reduction of payload telemetry costs associated with UAV remote sensing missions. Predeployment tests with a conventional aircraft demonstrated successful wireless broadband connectivity between a rapidly moving airborne imaging local area network (LAN) and a fixed ground station LAN. Subsequently, two separate LANs with imaging payloads, packaged in exterior-mounted pressure pods attached to the underwing of NASA's Pathfinder-Plus UAV, were operated wirelessly by ground-based LANs over independent Ethernet bridges. Digital images were downlinked from the solar-powered aircraft at data rates of 2-6 megabits per second (Mbps) over a range of 6.5 9.5 km. An integrated wide area network enabled payload monitoring and control through the Internet from a range of ca. 4000 km during parts of the mission. The recent advent of 802.11g technology is expected to boost the system data rate by about a factor of five.
A novel design of dual-channel optical system of star-tracker based on non-blind area PAL system
NASA Astrophysics Data System (ADS)
Luo, Yujie; Bai, Jian
2016-07-01
Star-tracker plays an important role in satellite navigation. Considering the satellites on near-Earth orbit, the system usually has two optical systems: one for observing the profile of Earth and the other for capturing the positions of stars. In this paper, we demonstrate a novel kind of dual-channel optical observation system of star-tracker with non-blind area PAL imaging system based on dichroic filter, which can combine both different observation channels into an integrated structure and realize the feature of miniaturization. According to the practical usage of star-tracker and the features of dichroic filter, we set the ultraviolet band as the PAL channel to observe the Earth with the FOV ranging from 40°-60°, and set the visible band as the front imaging channel to capture the stars far away from this system with the FOV ranging from 0°-20°. Consequently, the rays of both channels are converged on the same image plane, improving the efficiency of pixels of detector and reducing the weight and size of whole star-tracker system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorum, O.H.; Hoover, A.; Jones, J.P.
This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of viewmore » and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.« less
[Research on Spectral Polarization Imaging System Based on Static Modulation].
Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng
2015-04-01
The main disadvantages of traditional spectral polarization imaging system are: complex structure, with moving parts, low throughput. A novel method of spectral polarization imaging system is discussed, which is based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain real-time information of spectral and four Stokes polarization messages. Compared with the conventional methods, the advantages of the imaging system are compactness, low mass and no moving parts, no electrical control, no slit and big throughput. The system structure and the basic theory are introduced. The experimental system is established in the laboratory. The experimental system consists of reimaging optics, polarization intensity module, interference imaging module, and CCD data collecting and processing module. The spectral range is visible and near-infrared (480-950 nm). The white board and the plane toy are imaged by using the experimental system. The ability of obtaining spectral polarization imaging information is verified. The calibration system of static polarization modulation is set up. The statistical error of polarization degree detection is less than 5%. The validity and feasibility of the basic principle is proved by the experimental result. The spectral polarization data captured by the system can be applied to object identification, object classification and remote sensing detection.
Topographic mapping using a monopulse SAR system
NASA Technical Reports Server (NTRS)
Zink, M.; Oettl, H.; Freeman, A.
1993-01-01
Terrain height variations in mountainous areas cause two problems in the radiometric correction of SAR images: the first being that the wrong elevation angle may be used in correcting for the radiometric variation of the antenna pattern; the second that the local incidence angle used in correcting the projection of the pixel area from slant range to ground range coordinates may vary from that given by the flat earth assumption. We propose a novel design of a SAR system which exploits the monopulse principle to determine the elevation angle and thus the height at the different parts of the image. The key element of such a phase monopulse system is an antenna, which can be divided into a lower and upper half in elevation using a monopulse comparator. In addition to the usual sum pattern, the elevation difference pattern can be generated by a -pi phase shift on one half of the antenna. From the ratios of images radiometrically modulated by the difference and sum antenna pattern in cross-track direction, we can derive the appropriate elevation angle at any point in the image. Together with the slant range we can calculate the height of the platform above this point using information on the antenna pointing and the platform attitude. This operation, repeated at many locations throughout the image, allows us to build up a topographic map of the height of the aircraft above each location. Inversion of this map, using the precisely determined aircraft altitude and the accurate flight path, leads to the actual topography of the imaged surface. The precise elevation of one point in the image could also be used to convert the height map to a topographic map. In this paper, we present design considerations for a corresponding airborne SAR system in X-Band and give estimates of the error due to system noise and azimuth ambiguities as well as the expected performance and precision in topographic mapping.
High-dynamic-range scene compression in humans
NASA Astrophysics Data System (ADS)
McCann, John J.
2006-02-01
Single pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable scene- scenedependent filters are necessary to mimic human vision. Although spatial-frequency filters can model human dependent appearances, the problem still remains that an analysis of the scene is still needed to calculate the scene-dependent strengths of each of the filters for each frequency.
Analysis of Interactive Graphics Display Equipment for an Automated Photo Interpretation System.
1982-06-01
System provides the hardware and software for a range of graphics processor tasks. The IMAGE System employs the RSX- II M real - time operating . system in...One hard copy unit serves up to four work stations. The executive program of the IMAGE system is the DEC RSX- 11 M real - time operating system . In...picture controller. The PDP 11/34 executes programs concurrently under the RSX- I IM real - time operating system . Each graphics program consists of a
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
Simulation of digital mammography images
NASA Astrophysics Data System (ADS)
Workman, Adam
2005-04-01
A number of different technologies are available for digital mammography. However, it is not clear how differences in the physical performance aspects of the different imaging technologies affect clinical performance. Randomised controlled trials provide a means of gaining information on clinical performance however do not provide direct comparison of the different digital imaging technologies. This work describes a method of simulating the performance of different digital mammography systems. The method involves modifying the imaging performance parameters of images from a small field of view (SFDM), high resolution digital imaging system used for spot imaging. Under normal operating conditions this system produces images with higher signal-to-noise ratio (SNR) over a wide spatial frequency range than current full field digital mammography (FFDM) systems. The SFDM images can be 'degraded" by computer processing to simulate the characteristics of a FFDM system. Initial work characterised the physical performance (MTF, NPS) of the SFDM detector and developed a model and method for simulating signal transfer and noise properties of a FFDM system. It was found that the SNR properties of the simulated FFDM images were very similar to those measured from an actual FFDM system verifying the methodology used. The application of this technique to clinical images from the small field system will allow the clinical performance of different FFDM systems to be simulated and directly compared using the same clinical image datasets.
NASA Astrophysics Data System (ADS)
Lim, H. S.
2017-12-01
Due to global warming, the sea ice in the Arctic Ocean is melting dramatically in summer, which is providing a new opportunity to exploit the Northern Sea Route (NSR) connecting Asia and Europe ship route. Recent increases in logistics transportation through NSR and resource development reveal the possible threats of marine pollution and marine transportation accidents without real-time navigation system. To develop a safe Voyage Environmental Information System (VEIS) for vessels operating, the Korea Institute of Ocean Science and Technology (KIOST) which is supported by the Ministry of Oceans and Fisheries, Korea has initiated the development of short-term and middle range prediction system for the sea ice concentration (SIC) and sea ice thickness (SIT) in NSR since 2014. The sea ice prediction system of VEIS consists of AMSR2 satellite composite images (a day), short-term (a week) prediction system, and middle range (a month) prediction system using a statistical method with re-analysis data (TOPAZ) and short-term predicted model data. In this study, the middle range prediction system for the SIC and SIT in NSR is calibrated with another middle range predicted atmospheric and oceanic data (NOAA CFSv2). The system predicts one month SIC and SIT on a daily basis, as validated with dynamic composite SIC data extracted from AMSR2 L2 satellite images.
Cargo Container Imaging with Gaseous Detectors
NASA Astrophysics Data System (ADS)
Forest, Tony
2006-10-01
The gas electron multiplier (GEM) , developed at CERN by Fabio Sauli, represents the latest innovation in micropattern gaseous detectors and has been utilized as a preamplification stage in applications ranging from fundamental physics experiments to medical imaging. Although cargo container inspection systems are currently in place using gamma-rays or X-rays, they are predominantly designed with a resolution to detect contraband. Current imaging systems also suffer from false alarms due to naturally radioactive cargo when radiation portal monitors are used for passive detection of nuclear materials. Detection of small shielded radioactive elements is even more problematic. Idaho State University has been developing a system to image cargo containers in order to detect small shielded radioactive cargo. The possible application of an imaging system with gas electron multiplication will be shown along with preliminary images using gaseous detectors instead of the scintillators currently in use.
Nickoloff, Edward Lee
2011-01-01
This article reviews the design and operation of both flat-panel detector (FPD) and image intensifier fluoroscopy systems. The different components of each imaging chain and their functions are explained and compared. FPD systems have multiple advantages such as a smaller size, extended dynamic range, no spatial distortion, and greater stability. However, FPD systems typically have the same spatial resolution for all fields of view (FOVs) and are prone to ghosting. Image intensifier systems have better spatial resolution with the use of smaller FOVs (magnification modes) and tend to be less expensive. However, the spatial resolution of image intensifier systems is limited by the television system to which they are coupled. Moreover, image intensifier systems are degraded by glare, vignetting, spatial distortions, and defocusing effects. FPD systems do not have these problems. Some recent innovations to fluoroscopy systems include automated filtration, pulsed fluoroscopy, automatic positioning, dose-area product meters, and improved automatic dose rate control programs. Operator-selectable features may affect both the patient radiation dose and image quality; these selectable features include dose level setting, the FOV employed, fluoroscopic pulse rates, geometric factors, display software settings, and methods to reduce the imaging time. © RSNA, 2011.
Luminescence imaging of water during proton-beam irradiation for range estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka
Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantomsmore » of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy.« less
The Lick Observatory image-dissector scanner.
NASA Technical Reports Server (NTRS)
Robinson, L. B.; Wampler, E. J.
1972-01-01
A scanner that uses an image dissector to scan the output screen of an image tube has proven to be a sensitive and linear detector for faint astronomical spectra. The image-tube phosphor screen acts as a short-term storage element and allows the system to approach the performance of an ideal multichannel photon counter. Pulses resulting from individual photons, emitted from the output phosphor and detected by the image dissector, trigger an amplifier-discriminator and are counted in a 24-bit, 4096-word circulating memory. Aspects of system performance are discussed, giving attention to linearity, dynamic range, sensitivity, stability, and scattered light properties.
FPGA Based High Speed Data Acquisition System for Electrical Impedance Tomography
Khan, S; Borsic, A; Manwaring, Preston; Hartov, Alexander; Halter, Ryan
2014-01-01
Electrical Impedance Tomography (EIT) systems are used to image tissue bio-impedance. EIT provides a number of features making it attractive for use as a medical imaging device including the ability to image fast physiological processes (>60 Hz), to meet a range of clinical imaging needs through varying electrode geometries and configurations, to impart only non-ionizing radiation to a patient, and to map the significant electrical property contrasts present between numerous benign and pathological tissues. To leverage these potential advantages for medical imaging, we developed a modular 32 channel data acquisition (DAQ) system using National Instruments’ PXI chassis, along with FPGA, ADC, Signal Generator and Timing and Synchronization modules. To achieve high frame rates, signal demodulation and spectral characteristics of higher order harmonics were computed using dedicated FFT-hardware built into the FPGA module. By offloading the computing onto FPGA, we were able to achieve a reduction in throughput required between the FPGA and PC by a factor of 32:1. A custom designed analog front end (AFE) was used to interface electrodes with our system. Our system is wideband, and capable of acquiring data for input signal frequencies ranging from 100 Hz to 12 MHz. The modular design of both the hardware and software will allow this system to be flexibly configured for the particular clinical application. PMID:24729790
Phase aided 3D imaging and modeling: dedicated systems and case studies
NASA Astrophysics Data System (ADS)
Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang
2014-05-01
Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from single 3D sensor to a kind of optical measurement network composed of multiple node 3D-sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both single sensor and multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies including the generation of high quality color model of movable cultural heritage and photo booth from body scanning are presented to demonstrate our approach.
A 3D photographic capsule endoscope system with full field of view
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng
2013-09-01
Current capsule endoscope uses one camera to capture the surface image in the intestine. It can only observe the abnormal point, but cannot know the exact information of this abnormal point. Using two cameras can generate 3D images, but the visual plane changes while capsule endoscope rotates. It causes that two cameras can't capture the images information completely. To solve this question, this research provides a new kind of capsule endoscope to capture 3D images, which is 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is increasing the viewing range up to 2.99 times respect to the two camera system. The system can accompany 3D monitor provides the exact information of symptom points, helping doctors diagnose the disease.
NASA Astrophysics Data System (ADS)
Tavakolian, Pantea; Sivagurunathan, Koneswaran; Mandelis, Andreas
2017-07-01
Photothermal diffusion-wave imaging is a promising technique for non-destructive evaluation and medical applications. Several diffusion-wave techniques have been developed to produce depth-resolved planar images of solids and to overcome imaging depth and image blurring limitations imposed by the physics of parabolic diffusion waves. Truncated-Correlation Photothermal Coherence Tomography (TC-PCT) is the most successful class of these methodologies to-date providing 3-D subsurface visualization with maximum depth penetration and high axial and lateral resolution. To extend the depth range and axial and lateral resolution, an in-depth analysis of TC-PCT, a novel imaging system with improved instrumentation, and an optimized reconstruction algorithm over the original TC-PCT technique is developed. Thermal waves produced by a laser chirped pulsed heat source in a finite thickness solid and the image reconstruction algorithm are investigated from the theoretical point of view. 3-D visualization of subsurface defects utilizing the new TC-PCT system is reported. The results demonstrate that this method is able to detect subsurface defects at the depth range of ˜4 mm in a steel sample, which exhibits dynamic range improvement by a factor of 2.6 compared to the original TC-PCT. This depth does not represent the upper limit of the enhanced TC-PCT. Lateral resolution in the steel sample was measured to be ˜31 μm.
NASA Astrophysics Data System (ADS)
Liu, Lingli; Zheng, Hairong; Williams, Logan; Zhang, Fuxing; Wang, Rui; Hertzberg, Jean; Shandas, Robin
2008-03-01
We have recently developed an ultrasound-based velocimetry technique, termed echo particle image velocimetry (Echo PIV), to measure multi-component velocity vectors and local shear rates in arteries and opaque fluid flows by identifying and tracking flow tracers (ultrasound contrast microbubbles) within these flow fields. The original system was implemented on images obtained from a commercial echocardiography scanner. Although promising, this system was limited in spatial resolution and measurable velocity range. In this work, we propose standard rules for characterizing Echo PIV performance and report on a custom-designed Echo PIV system with increased spatial resolution and measurable velocity range. Then we employed this system for initial measurements on tube flows, rotating flows and in vitro carotid artery and abdominal aortic aneurysm (AAA) models to acquire the local velocity and shear rate distributions in these flow fields. The experimental results verified the accuracy of this technique and indicated the promise of the custom Echo PIV system in capturing complex flow fields non-invasively.
Blind guidance system based on laser triangulation
NASA Astrophysics Data System (ADS)
Wu, Jih-Huah; Wang, Jinner-Der; Fang, Wei; Lee, Yun-Parn; Shan, Yi-Chia; Kao, Hai-Ko; Ma, Shih-Hsin; Jiang, Joe-Air
2012-05-01
We propose a new guidance system for the blind. An optical triangulation method is used in the system. The main components of the proposed system comprise of a notebook computer, a camera, and two laser modules. The track image of the light beam on the ground or on the object is captured by the camera and then the image is sent to the notebook computer for further processing and analysis. Using a developed signal-processing algorithm, our system can determine the object width and the distance between the object and the blind person through the calculation of the light line positions on the image. A series of feasibility tests of the developed blind guidance system were conducted. The experimental results show that the distance between the test object and the blind can be measured with a standard deviation of less than 8.5% within the range of 40 and 130 cm, while the test object width can be measured with a standard deviation of less than 4.5% within the range of 40 and 130 cm. The application potential of the designed system to the blind guidance can be expected.
Tunable-Bandwidth Filter System
NASA Technical Reports Server (NTRS)
Bailey, John W.
2004-01-01
A tunable-bandwidth filter system (TBFS), now undergoing development, is intended to be part of a remote sensing multispectral imaging system that will operate in the visible and near infrared spectral region (wavelengths from 400 to 900 nm). Attributes of the TBFS include rapid tunability of the pass band over a wide wavelength range and high transmission efficiency. The TBFS is based on a unique integration of two pairs of broadband Raman reflection holographic filters with two rotating spherical lenses. In experiments, a prototype of the TBFS, was shown to be capable of spectral sampling of images in the visible range over a 200 nm spectral range with a spectral resolution of 30 nm. The figure depicts the optical layout of a prototype of the TBFS as part of a laboratory multispectral imaging system for the spectral sampling of color test images in two orthogonal polarizations. Each pair of broadband Raman reflection holographic filters is mounted at an equatorial plane between two halves of a spherical lens. The two filters in each pair are characterized by steep spectral slopes (equivalently, narrow spectral edges), no ripple or side lobes in their pass bands, and a few nanometers of non-overlapping wavelength range between their pass bands. Each spherical lens and thus the filter pair within it is rotated in order to rapidly tune its pass band. The rotations of are effected by electronically controlled, programmable, high-precision rotation stages. The rotations are coordinated by electronic circuits operating under overall supervision of a personal computer in order to obtain the desired variation of the overall pass bands with time. Embedding the filters inside the spherical lenses increases the range of the hologram incidence angles, making it possible to continuously tune the pass and stop bands of the filters over a wider wavelength range. In addition, each spherical lens also serves as part of the imaging optics: The telephoto lens focuses incoming light to a field stop that is also a focal point of each spherical lens. A correcting lens in front of the field stop compensates for the spherical aberration of the spherical lenses. The front surface of each spherical lens collimates the light coming from the field stop. After the collimated light passes through the filter in the spherical lens, the rear surface of the lens focuses the light onto a charge-coupled-device image detector.
Tunable-Bandwidth Filter System
NASA Technical Reports Server (NTRS)
Aye, Tin; Yu, Kevin; Dimov, Fedor; Savant, Gajendra
2006-01-01
A tunable-bandwidth filter system (TBFS), now undergoing development, is intended to be part of a remote-sensing multispectral imaging system that will operate in the visible and near infrared spectral region (wavelengths from 400 to 900 nm). Attributes of the TBFS include rapid tunability of the pass band over a wide wavelength range and high transmission efficiency. The TBFS is based on a unique integration of two pairs of broadband Raman reflection holographic filters with two rotating spherical lenses. In experiments, a prototype of the TBFS was shown to be capable of spectral sampling of images in the visible range over a 200-nm spectral range with a spectral resolution of .30 nm. The figure depicts the optical layout of a prototype of the TBFS as part of a laboratory multispectral imaging system for the spectral sampling of color test images in two orthogonal polarizations. Each pair of broadband Raman reflection holographic filters is mounted at an equatorial plane between two halves of a spherical lens. The two filters in each pair are characterized by steep spectral slopes (equivalently, narrow spectral edges), no ripple or side lobes in their pass bands, and a few nanometers of non-overlapping wavelength range between their pass bands. Each spherical lens and thus the filter pair within it is rotated in order to rapidly tune its pass band. The rotations of the lenses are effected by electronically controlled, programmable, high-precision rotation stages. The rotations are coordinated by electronic circuits operating under overall supervision of a personal computer in order to obtain the desired variation of the overall pass bands with time. Embedding the filters inside the spherical lenses increases the range of the hologram incidence angles, making it possible to continuously tune the pass and stop bands of the filters over a wider wavelength range. In addition, each spherical lens also serves as part of the imaging optics: The telephoto lens focuses incoming light to a field stop that is also a focal point of each spherical lens. A correcting lens in front of the field stop compensates for the spherical aberration of the spherical lenses. The front surface of each spherical lens collimates the light coming from the field stop. After the collimated light passes through the filter in the spherical lens, the rear surface of the lens focuses the light onto a charge-coupled-device image detector.
High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.
Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre
2017-06-03
Spectral filter arrays imaging exhibits a strong similarity with color filter arrays. This permits us to embed this technology in practical vision systems with little adaptation of the existing solutions. In this communication, we define an imaging pipeline that permits high dynamic range (HDR)-spectral imaging, which is extended from color filter arrays. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation results on real data with objective metrics and visual examples. We demonstrate that we reduce noise, and, in particular we solve the problem of noise generated by the lack of energy balance. Data are provided to the community in an image database for further research.
NASA Astrophysics Data System (ADS)
Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik
2016-03-01
With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention as they provide the contents to display. The most widely used imaging methods include a depth camera, which measures time of flight for the depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures a difficult task. In order to resolve this issue, we propose an imaging system based upon low-coherence interferometry and off-axis digital holographic imaging. By using light source with coherence length of 200 micro, we achieved the depth resolution of 100 micro. In order to map the macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for the long-range scanning of the optical path length. Specifically, one prism was fixed in position, and the other prism was mounted on a translation stage and translated in parallel to the first prism. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images for reducing the specular reflections from the target surface. Using this newly developed system, we performed imaging targets with multiple different layers and demonstrated imaging targets hidden behind the scattering layers. The method was also applied to imaging targets located around the corner.
Tunable optical coherence tomography in the infrared range using visible photons
NASA Astrophysics Data System (ADS)
Paterova, Anna V.; Yang, Hongzhi; An, Chengwu; Kalashnikov, Dmitry A.; Krivitsky, Leonid A.
2018-04-01
Optical coherence tomography (OCT) is an appealing technique for bio-imaging, medicine, and material analysis. For many applications, OCT in mid- and far-infrared (IR) leads to significantly more accurate results. Reported mid-IR OCT systems require light sources and photodetectors which operate in mid-IR range. These devices are expensive and need cryogenic cooling. Here, we report a proof-of-concept demonstration of a wavelength tunable IR OCT technique with detection of only visible range photons. Our method is based on the nonlinear interference of frequency correlated photon pairs. The nonlinear crystal, introduced in the Michelson-type interferometer, generates photon pairs with one photon in the visible and another in the IR range. The intensity of detected visible photons depends on the phase and loss of IR photons, which interact with the sample under study. This enables us to characterize sample properties and perform imaging in the IR range by detecting visible photons. The technique possesses broad wavelength tunability and yields a fair axial and lateral resolution, which can be tailored to the specific application. The work contributes to the development of versatile 3D imaging and material characterization systems working in a broad range of IR wavelengths, which do not require the use of IR-range light sources and photodetectors.
Determining the 3-D structure and motion of objects using a scanning laser range sensor
NASA Technical Reports Server (NTRS)
Nandhakumar, N.; Smith, Philip W.
1993-01-01
In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the significant non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but, this leads to the motion-structure paradox - most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
MTF evaluation of in-line phase contrast imaging system
NASA Astrophysics Data System (ADS)
Sun, Xiaoran; Gao, Feng; Zhao, Huijuan; Zhang, Limin; Li, Jiao; Zhou, Zhongxing
2017-02-01
X-ray phase contrast imaging (XPCI) is a novel method that exploits the phase shift for the incident X-ray to form an image. Various XPCI methods have been proposed, among which, in-line phase contrast imaging (IL-PCI) is regarded as one of the most promising clinical methods. The contrast of the interface is enhanced due to the introduction of the boundary fringes in XPCI, thus it is generally used to evaluate the image quality of XPCI. But the contrast is a comprehensive index and it does not reflect the information of image quality in the frequency range. The modulation transfer function (MTF), which is the Fourier transform of the system point spread function, is recognized as the metric to characterize the spatial response of conventional X-ray imaging system. In this work, MTF is introduced into the image quality evaluation of the IL-PCI system. Numerous simulations based on Fresnel - Kirchhoff diffraction theory are performed with varying system settings and the corresponding MTFs were calculated for comparison. The results show that MTF can provide more comprehensive information of image quality comparing to contrast in IL-PCI.
Smart Stylet: The development and use of a bedside external ventricular drain image-guidance system
Patil, Vaibhav; Gupta, Rajiv; Estépar, Raúl San José; Lacson, Ronilda; Cheung, Arnold; Wong, Judith M.; Popp, A. John; Golby, Alexandra; Ogilvy, Christopher; Vosburgh, Kirby G.
2015-01-01
Background Placement accuracy of ventriculostomy catheters is reported in a wide and variable range. Development of an efficient image-guidance system may improve physician performance and patient safety. Objective We evaluate the prototype of Smart Stylet, a new electromagnetic image-guidance system for use during bedside ventriculostomy. Methods Accuracy of the Smart Stylet system was assessed. System operators were evaluated for their ability to successfully target the ipsilateral frontal horn in a phantom model. Results Target registration error across 15 intracranial targets ranged from 1.3 – 4.6 mm (mean 3.1 mm). Using Smart Stylet guidance, a test operator successfully passed a ventriculostomy catheter to a shifted ipsilateral frontal horn 20/20 (100%) times from the frontal approach in a skull phantom. Without Smart Stylet guidance, the operator was successful 4/10 (40 %) from the right frontal approach and 6/10 (60 %) from the left frontal approach. In a separate experiment, resident operators were successful 2/4 (50%) when targeting the shifted ipsilateral frontal horn with Smart Stylet guidance and 0/4 (0 %) without image-guidance using a skull phantom. Conclusions Smart Stylet may improve the ability to successfully target the ventricles during frontal ventriculostomy. PMID:25662506
Computed tomography automatic exposure control techniques in 18F-FDG oncology PET-CT scanning.
Iball, Gareth R; Tout, Deborah
2014-04-01
Computed tomography (CT) automatic exposure control (AEC) systems are now used in all modern PET-CT scanners. A collaborative study was undertaken to compare AEC techniques of the three major PET-CT manufacturers for fluorine-18 fluorodeoxyglucose half-body oncology imaging. An audit of 70 patients was performed for half-body CT scans taken on a GE Discovery 690, Philips Gemini TF and Siemens Biograph mCT (all 64-slice CT). Patient demographic and dose information was recorded and image noise was calculated as the SD of Hounsfield units in the liver. A direct comparison of the AEC systems was made by scanning a Rando phantom on all three systems for a range of AEC settings. The variation in dose and image quality with patient weight was significantly different for all three systems, with the GE system showing the largest variation in dose with weight and Philips the least. Image noise varied with patient weight in Philips and Siemens systems but was constant for all weights in GE. The z-axis mA profiles from the Rando phantom demonstrate that these differences are caused by the nature of the tube current modulation techniques applied. The mA profiles varied considerably according to the AEC settings used. CT AEC techniques from the three manufacturers yield significantly different tube current modulation patterns and hence deliver different doses and levels of image quality across a range of patient weights. Users should be aware of how their system works and of steps that could be taken to optimize imaging protocols.
Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging
NASA Astrophysics Data System (ADS)
Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas
2016-03-01
In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.
NASA Astrophysics Data System (ADS)
Shramenko, Mikhail V.; Chamorovskiy, Alexander; Lyu, Hong-Chou; Lobintsov, Andrei A.; Karnowski, Karol; Yakubovich, Sergei D.; Wojtkowski, Maciej
2015-03-01
Tunable semiconductor laser for 1025-1095 nm spectral range is developed based on the InGaAs semiconductor optical amplifier and a narrow band-pass acousto-optic tunable filter in a fiber ring cavity. Mode-hop-free sweeping with tuning speeds of up to 104 nm/s was demonstrated. Instantaneous linewidth is in the range of 0.06-0.15 nm, side-mode suppression is up to 50 dB and polarization extinction ratio exceeds 18 dB. Optical power in output single mode fiber reaches 20 mW. The laser was used in OCT system for imaging a contact lens immersed in a 0.5% intra-lipid solution. The cross-section image provided the imaging depth of more than 5mm.
Dual-energy digital mammography for calcification imaging: scatter and nonuniformity corrections.
Kappadath, S Cheenu; Shaw, Chris C
2005-11-01
Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DE calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 microm) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 microm size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 microm size range when the visibility criteria were lowered to barely visible. Calcifications smaller than approximately 250 microm were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.
Dual-energy digital mammography for calcification imaging: Scatter and nonuniformity corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kappadath, S. Cheenu; Shaw, Chris C.
Mammographic images of small calcifications, which are often the earliest signs of breast cancer, can be obscured by overlapping fibroglandular tissue. We have developed and implemented a dual-energy digital mammography (DEDM) technique for calcification imaging under full-field imaging conditions using a commercially available aSi:H/CsI:Tl flat-panel based digital mammography system. The low- and high-energy images were combined using a nonlinear mapping function to cancel the tissue structures and generate the dual-energy (DE) calcification images. The total entrance-skin exposure and mean-glandular dose from the low- and high-energy images were constrained so that they were similar to screening-examination levels. To evaluate the DEmore » calcification image, we designed a phantom using calcium carbonate crystals to simulate calcifications of various sizes (212-425 {mu}m) overlaid with breast-tissue-equivalent material 5 cm thick with a continuously varying glandular-tissue ratio from 0% to 100%. We report on the effects of scatter radiation and nonuniformity in x-ray intensity and detector response on the DE calcification images. The nonuniformity was corrected by normalizing the low- and high-energy images with full-field reference images. Correction of scatter in the low- and high-energy images significantly reduced the background signal in the DE calcification image. Under the current implementation of DEDM, utilizing the mammography system and dose level tested, calcifications in the 300-355 {mu}m size range were clearly visible in DE calcification images. Calcification threshold sizes decreased to the 250-280 {mu}m size range when the visibility criteria were lowered to barely visible. Calcifications smaller than {approx}250 {mu}m were usually not visible in most cases. The visibility of calcifications with our DEDM imaging technique was limited by quantum noise, not system noise.« less
Collaborative Point Paper on Border Surveillance Technology
2007-06-01
Systems PLC LORHIS (Long Range Hyperspectral Imaging System ) can be configured for either manned or unmanned aircraft to automatically detect and...Airships, and/or Aerostats, (RF, Electro-Optical, Infrared, Video) • Land- based Sensor Systems (Attended/Mobile and Unattended: e.g., CCD, Motion, Acoustic...electronic surveillance technologies for intrusion detection and warning. These ground- based systems are primarily short-range, up to around 500 meters
Hyperspectral imaging using the single-pixel Fourier transform technique
NASA Astrophysics Data System (ADS)
Jin, Senlin; Hui, Wangwei; Wang, Yunlong; Huang, Kaicheng; Shi, Qiushuai; Ying, Cuifeng; Liu, Dongqi; Ye, Qing; Zhou, Wenyuan; Tian, Jianguo
2017-03-01
Hyperspectral imaging technology is playing an increasingly important role in the fields of food analysis, medicine and biotechnology. To improve the speed of operation and increase the light throughput in a compact equipment structure, a Fourier transform hyperspectral imaging system based on a single-pixel technique is proposed in this study. Compared with current imaging spectrometry approaches, the proposed system has a wider spectral range (400-1100 nm), a better spectral resolution (1 nm) and requires fewer measurement data (a sample rate of 6.25%). The performance of this system was verified by its application to the non-destructive testing of potatoes.
Li, Yang; Ma, Jianguo; Martin, K Heath; Yu, Mingyue; Ma, Teng; Dayton, Paul A; Jiang, Xiaoning; Shung, K Kirk; Zhou, Qifa
2016-09-01
Superharmonic contrast-enhanced ultrasound imaging, also called acoustic angiography, has previously been used for the imaging of microvasculature. This approach excites microbubble contrast agents near their resonance frequency and receives echoes at nonoverlapping superharmonic bandwidths. No integrated system currently exists could fully support this application. To fulfill this need, an integrated dual-channel transmit/receive system for superharmonic imaging was designed, built, and characterized experimentally. The system was uniquely designed for superharmonic imaging and high-resolution B-mode imaging. A complete ultrasound system including a pulse generator, a data acquisition unit, and a signal processing unit were integrated into a single package. The system was controlled by a field-programmable gate array, on which multiple user-defined modes were implemented. A 6-, 35-MHz dual-frequency dual-element intravascular ultrasound transducer was designed and used for imaging. The system successfully obtained high-resolution B-mode images of coronary artery ex vivo with 45-dB dynamic range. The system was capable of acquiring in vitro superharmonic images of a vasa vasorum mimicking phantom with 30-dB contrast. It could detect a contrast agent filled tissue mimicking tube of 200 μm diameter. For the first time, high-resolution B-mode images and superharmonic images were obtained in an intravascular phantom, made possible by the dedicated integrated system proposed. The system greatly reduced the cost and complexity of the superharmonic imaging intended for preclinical study. Significant: The system showed promise for high-contrast intravascular microvascular imaging, which may have significant importance in assessment of the vasa vasorum associated with atherosclerotic plaques.
Design of the compact high-resolution imaging spectrometer (CHRIS), and future developments
NASA Astrophysics Data System (ADS)
Cutter, Mike; Lobb, Dan
2017-11-01
The CHRIS instrument was launched on ESA's PROBA platform in October 2001, and is providing hyperspectral images of selected ground areas at 17m ground sampling distance, in the spectral range 415nm to 1050nm. Platform agility allows image sets to be taken at multiple view angles in each overpass. The design of the instrument is briefly outlined, including design of optics, structures, detection and in-flight calibration system. Lessons learnt from construction and operation of the experimental system, and possible design directions for future hyperspectral systems, are discussed.
High-resolution CCD imaging alternatives
NASA Astrophysics Data System (ADS)
Brown, D. L.; Acker, D. E.
1992-08-01
High resolution CCD color cameras have recently stimulated the interest of a large number of potential end-users for a wide range of practical applications. Real-time High Definition Television (HDTV) systems are now being used or considered for use in applications ranging from entertainment program origination through digital image storage to medical and scientific research. HDTV generation of electronic images offers significant cost and time-saving advantages over the use of film in such applications. Further in still image systems electronic image capture is faster and more efficient than conventional image scanners. The CCD still camera can capture 3-dimensional objects into the computing environment directly without having to shoot a picture on film develop it and then scan the image into a computer. 2. EXTENDING CCD TECHNOLOGY BEYOND BROADCAST Most standard production CCD sensor chips are made for broadcast-compatible systems. One popular CCD and the basis for this discussion offers arrays of roughly 750 x 580 picture elements (pixels) or a total array of approximately 435 pixels (see Fig. 1). FOR. A has developed a technique to increase the number of available pixels for a given image compared to that produced by the standard CCD itself. Using an inter-lined CCD with an overall spatial structure several times larger than the photo-sensitive sensor areas each of the CCD sensors is shifted in two dimensions in order to fill in spatial gaps between adjacent sensors.
High-speed upper-airway imaging using full-range optical coherence tomography
NASA Astrophysics Data System (ADS)
Jing, Joseph; Zhang, Jun; Loy, Anthony Chin; Wong, Brian J. F.; Chen, Zhongping
2012-11-01
Obstruction in the upper airway can often cause reductions in breathing or gas exchange efficiency and lead to rest disorders such as sleep apnea. Imaging diagnosis of the obstruction region has been accomplished using computed tomography (CT) and magnetic resonance imaging (MRI). However CT requires the use of ionizing radiation, and MRI typically requires sedation of the patient to prevent motion artifacts. Long-range optical coherence tomography (OCT) has the potential to provide high-speed three-dimensional tomographic images with high resolution and without the use of ionizing radiation. In this paper, we present work on the development of a long-range OCT endoscopic probe with 1.2 mm OD and 20 mm working distance used in conjunction with a modified Fourier domain swept source OCT system to acquire structural and anatomical datasets of the human airway. Imaging from the bottom of the larynx to the end of the nasal cavity is completed within 40 s.
A device to measure the effects of strong magnetic fields on the image resolution of PET scanners
NASA Astrophysics Data System (ADS)
Burdette, D.; Albani, D.; Chesi, E.; Clinthorne, N. H.; Cochran, E.; Honscheid, K.; Huh, S. S.; Kagan, H.; Knopp, M.; Lacasta, C.; Mikuz, M.; Schmalbrock, P.; Studen, A.; Weilhammer, P.
2009-10-01
Very high resolution images can be achieved in small animal PET systems utilizing solid state silicon pad detectors. As these systems approach sub-millimeter resolutions, the range of the positron is becoming the dominant contribution to image blur. The size of the positron range effect depends on the initial positron energy and hence the radioactive tracer used. For higher energy positron emitters, such as Ga68 and Tc94m, which are gaining importance in small animal studies, the width of the annihilation point distribution dominates the spatial resolution. This positron range effect can be reduced by embedding the field of view of the PET scanner in a strong magnetic field. In order to confirm this effect experimentally, we developed a high resolution PET instrument based on silicon pad detectors that can operate in a 7 T magnetic field. In this paper, we describe the instrument and present initial results of a study of the effects of magnetic fields up to 7 T on PET image resolution for Na22 and Ga68 point sources.
Preclinical Whole-body Fluorescence Imaging: Review of Instruments, Methods and Applications
Leblond, Frederic; Davis, Scott C.; Valdés, Pablo A.; Pogue, Brain W.
2013-01-01
Fluorescence sampling of cellular function is widely used in all aspects of biology, allowing the visualization of cellular and sub-cellular biological processes with spatial resolutions in the range from nanometers up to centimeters. Imaging of fluorescence in vivo has become the most commonly used radiological tool in all pre-clinical work. In the last decade, full-body pre-clinical imaging systems have emerged with a wide range of utilities and niche application areas. The range of fluorescent probes that can be excited in the visible to near-infrared part of the electromagnetic spectrum continues to expand, with the most value for in vivo use being beyond the 630 nm wavelength, because the absorption of light sharply decreases. Whole-body in vivo fluorescence imaging has not yet reached a state of maturity that allows its routine use in the scope of large-scale pre-clinical studies. This is in part due to an incomplete understanding of what the actual fundamental capabilities and limitations of this imaging modality are. However, progress is continuously being made in research laboratories pushing the limits of the approach to consistently improve its performance in terms of spatial resolution, sensitivity and quantification. This paper reviews this imaging technology with a particular emphasis on its potential uses and limitations, the required instrumentation, and the possible imaging geometries and applications. A detailed account of the main commercially available systems is provided as well as some perspective relating to the future of the technology development. Although the vast majority of applications of in vivo small animal imaging are based on epi-illumination planar imaging, the future success of the method relies heavily on the design of novel imaging systems based on state-of-the-art optical technology used in conjunction with high spatial resolution structural modalities such as MRI, CT or ultra-sound. PMID:20031443
Athermal design and analysis of glass-plastic hybrid lens
NASA Astrophysics Data System (ADS)
Yang, Jian; Cen, Zhaofeng; Li, Xiaotong
2018-01-01
With the rapid development of security market, the glass-plastic hybrid lens has gradually become a choice for the special requirements like high imaging quality in a wide temperature range and low cost. The reduction of spherical aberration is achieved by using aspherical surface instead of increasing the number of lenses. Obviously, plastic aspherical lens plays a great role in the cost reduction. However, the hybrid lens has a priority issue, which is the large thermal coefficient of expansion of plastic, causing focus shift and seriously affecting the imaging quality, so the hybrid lens is highly sensitive to the change of temperature. To ensure the system operates normally in a wide temperature range, it is necessary to eliminate the influence of temperature on the hybrid lens system. A practical design method named the Athermal Material Map is summarized and verified by an athermal design example according to the design index. It includes the distribution of optical power and selection of glass or plastic. The design result shows that the optical system has excellent imaging quality at a wide temperature range from -20 ° to 70 °. The method of athermal design in this paper has generality which could apply to optical system with plastic aspherical surface.
A technology review of time-of-flight photon counting for advanced remote sensing
NASA Astrophysics Data System (ADS)
Lamb, Robert A.
2010-04-01
Time correlated single photon counting (TCSPC) has made tremendous progress during the past ten years enabling improved performance in precision time-of-flight (TOF) rangefinding and lidar. In this review the development and performance of several ranging systems is presented that use TCSPC for accurate ranging and range profiling over distances up to 17km. A range resolution of a few millimetres is routinely achieved over distances of several kilometres. These systems include single wavelength devices operating in the visible; multi-wavelength systems covering the visible and near infra-red; the use of electronic gating to reduce in-band solar background and, most recently, operation at high repetition rates without range aliasing- typically 10MHz over several kilometres. These systems operate at very low optical power (<100μW). The technique therefore has potential for eye-safe lidar monitoring of the environment and obvious military, security and surveillance sensing applications. The review will highlight the theoretical principles of photon counting and progress made in developing absolute ranging techniques that enable high repetition rate data acquisition that avoids range aliasing. Technology trends in TCSPC rangefinding are merging with those of quantum cryptography and its future application to revolutionary quantum imaging provides diverse and exciting research into secure covert sensing, ultra-low power active imaging and quantum rangefinding.
Cardiac motion correction based on partial angle reconstructed images in x-ray CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
2015-05-15
Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogrammore » with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of two conjugate PAR images. To evaluate the proposed algorithm, digital XCAT and physical dynamic cardiac phantom datasets are used. The XCAT phantom datasets were generated with heart rates of 70 and 100 bpm, respectively, by assuming a system rotation time of 300 ms. A physical dynamic cardiac phantom was scanned using a slowly rotating XCT system so that the effective heart rate will be 70 bpm for a system rotation speed of 300 ms. Results: In the XCAT phantom experiment, motion-compensated 3D images obtained from the proposed algorithm show coronary arteries with fewer motion artifacts for all phases. Moreover, object boundaries contaminated by motion are well restored. Even though object positions and boundary shapes are still somewhat different from the ground truth in some cases, the authors see that visibilities of coronary arteries are improved noticeably and motion artifacts are reduced considerably. The physical phantom study also shows that the visual quality of motion-compensated images is greatly improved. Conclusions: The authors propose a novel PAR image-based cardiac motion estimation and compensation algorithm. The algorithm requires an angular scan range of less than 360°. The excellent performance of the proposed algorithm is illustrated by using digital XCAT and physical dynamic cardiac phantom datasets.« less
Extended axial imaging range, widefield swept source optical coherence tomography angiography.
Liu, Gangjun; Yang, Jianlong; Wang, Jie; Li, Yan; Zang, Pengxiao; Jia, Yali; Huang, David
2017-11-01
We developed a high-speed, swept source OCT system for widefield OCT angiography (OCTA) imaging. The system has an extended axial imaging range of 6.6 mm. An electrical lens is used for fast, automatic focusing. The recently developed split-spectrum amplitude and phase-gradient angiography allow high-resolution OCTA imaging with only two B-scan repetitions. An improved post-processing algorithm effectively removed trigger jitter artifacts and reduced noise in the flow signal. We demonstrated high contrast 3 mm×3 mm OCTA image with 400×400 pixels acquired in 3 seconds and high-definition 8 mm×6 mm and 12 mm×6 mm OCTA images with 850×400 pixels obtained in 4 seconds. A widefield 8 mm×11 mm OCTA image is produced by montaging two 8 mm×6 mm scans. An ultra-widefield (with a maximum of 22 mm along both vertical and horizontal directions) capillary-resolution OCTA image is obtained by montaging six 12 mm×6 mm scans. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method of applying a shaped-function signal for increasing the dynamic range of light emitting diode (LED)-multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. The auxiliary light at a higher sensitivity-camera area is introduced to increase the A/D quantization levels that are within the linear response zone of ADC and improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image. And the auxiliary light is modulated by the constant intensity signal, which is easy to acquire the images under the active light irradiation. The least square method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. It has been proven by experiments that the gray-scale resolution and the accuracy of information of the images acquired by the proposed method were both significantly improved. The optimum method opens up avenues for the hyperspectral imaging of biological tissue.
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method of applying a shaped-function signal for increasing the dynamic range of light emitting diode (LED)-multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. The auxiliary light at a higher sensitivity-camera area is introduced to increase the A/D quantization levels that are within the linear response zone of ADC and improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image. And the auxiliary light is modulated by the constant intensity signal, which is easy to acquire the images under the active light irradiation. The least square method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. It has been proven by experiments that the gray-scale resolution and the accuracy of information of the images acquired by the proposed method were both significantly improved. The optimum method opens up avenues for the hyperspectral imaging of biological tissue.
Correction of a liquid lens for 3D imaging systems
NASA Astrophysics Data System (ADS)
Bower, Andrew J.; Bunch, Robert M.; Leisher, Paul O.; Li, Weixu; Christopher, Lauren A.
2012-06-01
3D imaging systems are currently being developed using liquid lens technology for use in medical devices as well as in consumer electronics. Liquid lenses operate on the principle of electrowetting to control the curvature of a buried surface, allowing for a voltage-controlled change in focal length. Imaging systems which utilize a liquid lens allow extraction of depth information from the object field through a controlled introduction of defocus into the system. The design of such a system must be carefully considered in order to simultaneously deliver good image quality and meet the depth of field requirements for image processing. In this work a corrective model has been designed for use with the Varioptic Arctic 316 liquid lens. The design is able to be optimized for depth of field while minimizing aberrations for a 3D imaging application. The modeled performance is compared to the measured performance of the corrected system over a large range of focal lengths.
Document Examination: Applications of Image Processing Systems.
Kopainsky, B
1989-12-01
Dealing with images is a familiar business for an expert in questioned documents: microscopic, photographic, infrared, and other optical techniques generate images containing the information he or she is looking for. A recent method for extracting most of this information is digital image processing, ranging from the simple contrast and contour enhancement to the advanced restoration of blurred texts. When combined with a sophisticated physical imaging system, an image pricessing system has proven to be a powerful and fast tool for routine non-destructive scanning of suspect documents. This article reviews frequent applications, comprising techniques to increase legibility, two-dimensional spectroscopy (ink discrimination, alterations, erased entries, etc.), comparison techniques (stamps, typescript letters, photo substitution), and densitometry. Computerized comparison of handwriting is not included. Copyright © 1989 Central Police University.
Active laser radar (lidar) for measurement of corresponding height and reflectance images
NASA Astrophysics Data System (ADS)
Froehlich, Christoph; Mettenleiter, M.; Haertl, F.
1997-08-01
For the survey and inspection of environmental objects, a non-tactile, robust and precise imaging of height and depth is the basis sensor technology. For visual inspection,surface classification, and documentation purposes, however, additional information concerning reflectance of measured objects is necessary. High-speed acquisition of both geometric and visual information is achieved by means of an active laser radar, supporting consistent 3D height and 2D reflectance images. The laser radar is an optical-wavelength system, and is comparable to devices built by ERIM, Odetics, and Perceptron, measuring the range between sensor and target surfaces as well as the reflectance of the target surface, which corresponds to the magnitude of the back scattered laser energy. In contrast to these range sensing devices, the laser radar under consideration is designed for high speed and precise operation in both indoor and outdoor environments, emitting a minimum of near-IR laser energy. It integrates a laser range measurement system and a mechanical deflection system for 3D environmental measurements. This paper reports on design details of the laser radar for surface inspection tasks. It outlines the performance requirements and introduces the measurement principle. The hardware design, including the main modules, such as the laser head, the high frequency unit, the laser beam deflection system, and the digital signal processing unit are discussed.the signal processing unit consists of dedicated signal processors for real-time sensor data preprocessing as well as a sensor computer for high-level image analysis and feature extraction. The paper focuses on performance data of the system, including noise, drift over time, precision, and accuracy with measurements. It discuses the influences of ambient light, surface material of the target, and ambient temperature for range accuracy and range precision. Furthermore, experimental results from inspection of buildings, monuments and industrial environments are presented. The paper concludes by summarizing results achieved in industrial environments and gives a short outlook to future work.
Utilizing the Southwest Ultraviolet Imaging System (SwUIS) on the International Space Station
NASA Astrophysics Data System (ADS)
Schindhelm, Eric; Stern, S. Alan; Ennico-Smith, Kimberly
2013-09-01
We present the Southwest Ultraviolet Imaging System (SwUIS), a compact, low-cost instrument designed for remote sensing observations from a manned platform in space. It has two chief configurations; a high spatial resolution mode with a 7-inch Maksutov-Cassegrain telescope, and a large field-of-view camera mode using a lens assembly. It can operate with either an intensified CCD or an electron multiplying CCD camera. Interchangeable filters and lenses enable broadband and narrowband imaging at UV/visible/near-infrared wavelengths, over a range of spatial resolution. SwUIS has flown previously on Space Shuttle flights STS-85 and STS-93, where it recorded multiple UV images of planets, comets, and vulcanoids. We describe the instrument and its capabilities in detail. The SWUIS's broad wavelength coverage and versatile range of hardware configurations make it an attractive option for use as a facility instrument for Earth science and astronomical imaging investigations aboard the International Space Station.
Design and development of a simple UV fluorescence multi-spectral imaging system
NASA Astrophysics Data System (ADS)
Tovar, Carlos; Coker, Zachary; Yakovlev, Vladislav V.
2018-02-01
Healthcare access in low-resource settings is compromised by the availability of affordable and accurate diagnostic equipment. The four primary poverty-related diseases - AIDS, pneumonia, malaria, and tuberculosis - account for approximately 400 million annual deaths worldwide as of 2016 estimates. Current diagnostic procedures for these diseases are prolonged and can become unreliable under various conditions. We present the development of a simple low-cost UV fluorescence multi-spectral imaging system geared towards low resource settings for a variety of biological and in-vitro applications. Fluorescence microscopy serves as a useful diagnostic indicator and imaging tool. The addition of a multi-spectral imaging modality allows for the detection of fluorophores within specific wavelength bands, as well as the distinction between fluorophores possessing overlapping spectra. The developed instrument has the potential for a very diverse range of diagnostic applications in basic biomedical science and biomedical diagnostics and imaging. Performance assessment of the microscope will be validated with a variety of samples ranging from organic compounds to biological samples.
Evaluation of a gamma camera system for the RITS-6 accelerator using the self-magnetic pinch diode
NASA Astrophysics Data System (ADS)
Webb, Timothy J.; Kiefer, Mark L.; Gignac, Raymond; Baker, Stuart A.
2015-08-01
The self-magnetic pinch (SMP) diode is an intense radiographic source fielded on the Radiographic Integrated Test Stand (RITS-6) accelerator at Sandia National Laboratories in Albuquerque, NM. The accelerator is an inductive voltage adder (IVA) that can operate from 2-10 MV with currents up to 160 kA (at 7 MV). The SMP diode consists of an annular cathode separated from a flat anode, holding the bremsstrahlung conversion target, by a vacuum gap. Until recently the primary imaging diagnostic utilized image plates (storage phosphors) which has generally low DQE at these photon energies along with other problems. The benefits of using image plates include a high-dynamic range, good spatial resolution, and ease of use. A scintillator-based X-ray imaging system or "gamma camera" has been fielded in front of RITS and the SMP diode which has been able to provide vastly superior images in terms of signal-to-noise with similar resolution and acceptable dynamic range.
Automatic Focusing for a 675 GHz Imaging Radar with Target Standoff Distances from 14 to 34 Meters
NASA Technical Reports Server (NTRS)
Tang, Adrian; Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Siegel, Peter H.
2013-01-01
This paper dicusses the issue of limited focal depth for high-resolution imaging radar operating over a wide range of standoff distances. We describe a technique for automatically focusing a THz imaging radar system using translational optics combined with range estimation based on a reduced chirp bandwidth setting. The demonstarted focusing algorithm estimates the correct focal depth for desired targets in the field of view at unknown standoffs and in the presence of clutter to provide good imagery at 14 to 30 meters of standoff.
Hausken, T; Li, X N; Goldman, B; Leotta, D; Ødegaard, S; Martin, R W
2001-07-01
To develop a non-invasive method for evaluating gastric emptying and duodenogastric reflux stroke volumes using three-dimensional (3D) guided digital color Doppler imaging. The technique involved color Doppler digital images of transpyloric flow in which the 3D position and orientation of the images were known by using a magnetic location system. In vitro, the system was found to slightly underestimate the reference flow (by average 8.8%). In vivo (five volunteers), stroke volume of gastric emptying episodes lasted on average only 0.69 s with a volume on average of 4.3 ml (range 1.1-7.4 ml), and duodenogastric reflux episodes on average 1.4 s with a volume of 8.3 ml (range 1.3-14.1 ml). With the appropriate instrument settings, orientation determined color Doppler can be used for stroke volume quantification of gastric emptying and duodenogastric reflux episodes.
Lettau, Michael; Bendszus, Martin; Hähnel, Stefan
2013-06-01
Our aim was to evaluate the in vitro visualization of different carotid artery stents on angiographic CT (ACT). Of particular interest was the influence of stent orientation to the angiography system by measurement of artificial lumen narrowing (ALN) caused by the stent material within the stented vessel segment to determine whether ACT can be used to detect restenosis within the stent. ACT appearances of 17 carotid artery stents of different designs and sizes (4.0 to 11.0 mm) were investigated in vitro. Stents were placed in different orientations to the angiography system. Standard algorithm image reconstruction and stent-optimized algorithm image reconstruction was performed. For each stent, ALN was calculated. With standard algorithm image reconstruction, ALN ranged from 19.0 to 43.6 %. With stent-optimized algorithm image reconstruction, ALN was significantly lower and ranged from 8.2 to 18.7 %. Stent struts could be visualized in all stents. Differences in ALN between the different stent orientations to the angiography system were not significant. ACT evaluation of vessel patency after stent placement is possible but is impaired by ALN. Stent orientation of the stents to the angiography system did not significantly influence ALN. Stent-optimized algorithm image reconstruction decreases ALN but further research is required to define the visibility of in-stent stenosis depending on image reconstruction.
NASA Astrophysics Data System (ADS)
Huang, Yongyang; Degenhardt, Karl R.; Astrof, Sophie; Zhou, Chao
2016-03-01
We have demonstrated the capability of spectral domain optical coherence tomography (SDOCT) system to image full development of mouse embryonic cardiovascular system. Monitoring morphological changes of mouse embryonic heart occurred in different embryonic stages helps identify structural or functional cardiac anomalies and understand how these anomalies lead to congenital heart diseases (CHD) present at birth. In this study, mouse embryo hearts ranging from E9.5 to E15.5 were prepared and imaged in vitro. A customized spectral domain OCT system was used for imaging, with a central wavelength of 1310nm, spectral bandwidth of ~100nm and imaging speed of 47kHz A-scans/s. Axial resolution of this system was 8.3µm in air, and transverse resolution was 6.2 µm with 5X objective. Key features of mouse embryonic cardiovascular development such as vasculature remodeling into circulatory system, separation of atria and ventricles and emergence of valves could be clearly seen in three-dimensional OCT images. Optical clearing was applied to overcome the penetration limit of OCT system. With high resolution, fast imaging speed, 3D imaging capability, OCT proves to be a promising biomedical imaging modality for developmental biology studies, rivaling histology and micro-CT.
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover a custom-developed thermal imaging system composed of a VIS-PAN camera and a LWIR-camera is used for aerial recordings in the thermal infrared range. Furthermore another custom-developed highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched to mosaics.
A LWIR hyperspectral imager using a Sagnac interferometer and cooled HgCdTe detector array
NASA Astrophysics Data System (ADS)
Lucey, Paul G.; Wood, Mark; Crites, Sarah T.; Akagi, Jason
2012-06-01
LWIR hyperspectral imaging has a wide range of civil and military applications with its ability to sense chemical compositions at standoff ranges. Most recent implementations of this technology use spectrographs employing varying degrees of cryogenic cooling to reduce sensor self-emission that can severely limit sensitivity. We have taken an interferometric approach that promises to reduce the need for cooling while preserving high resolution. Reduced cooling has multiple benefits including faster system readiness from a power off state, lower mass, and potentially lower cost owing to lower system complexity. We coupled an uncooled Sagnac interferometer with a 256x320 mercury cadmium telluride array with an 11 micron cutoff to produce a spatial interferometric LWIR hyperspectral imaging system operating from 7.5 to 11 microns. The sensor was tested in ground-ground applications, and from a small aircraft producing spectral imagery including detection of gas emission from high vapor pressure liquids.
NASA Astrophysics Data System (ADS)
Wang, Xuan-Yin; Du, Jia-Wei; Zhu, Shi-Qiang
2017-09-01
A bionic variable-focus lens with symmetrical layered structure was designed to mimic the crystalline lens. An optical imaging system based on this lens and with a symmetrical structure that mimics the human eye structure was proposed. The refractive index of the bionic variable-focus lens increases from outside to inside. The two PDMS lenses with a certain thickness were designed to improve the optical performance of the optical imaging system and minimise the gravity effect of liquid. The paper presents the overall structure of the optical imaging system and the detailed description of the bionic variable-focus lens. By pumping liquid in or out of the cavity, the surface curvatures of the rear PDMS lens were varied, resulting in a change in the focal length. The focal length range of the optical imaging system was 20.71-24.87 mm. The optical performance of the optical imaging system was evaluated by imaging experiments and analysed by ray tracing simulations. On the basis of test and simulation results, the optical performance of the system was quite satisfactory. Off-axis aberrations were well corrected, and the image quality was greatly improved.
Imaging based refractometer for hyperspectral refractive index detection
Baba, Justin S.; Boudreaux, Philip R.
2015-11-24
Refractometers for simultaneously measuring refractive index of a sample over a range of wavelengths of light include dispersive and focusing optical systems. An optical beam including the range of wavelengths is spectrally spread along a first axis and focused along a second axis so as to be incident to an interface between the sample and a prism at a range of angles of incidence including a critical angle for at least one wavelength. An imaging detector is situated to receive the spectrally spread and focused light from the interface and form an image corresponding to angle of incidence as a function of wavelength. One or more critical angles are identified and corresponding refractive indices are determined.
Acousto-optic tunable filter chromatic aberration analysis and reduction with auto-focus system
NASA Astrophysics Data System (ADS)
Wang, Yaoli; Chen, Yuanyuan
2018-07-01
An acousto-optic tunable filter (AOTF) displays optical band broadening and sidelobes as a result of the coupling between the acoustic wave and optical waves of different wavelengths. These features were analysed by wave-vector phase matching between the optical and acoustic waves. A crossed-line test board was imaged by an AOTF multi-spectral imaging system, showing image blurring in the direction of diffraction and image sharpness in the orthogonal direction produced by the greater bandwidth and sidelobes in the former direction. Applying the secondary-imaging principle and considering the wavelength-dependent refractive index, focal length varies over the broad wavelength range. An automatic focusing method is therefore proposed for use in AOTF multi-spectral imaging systems. A new method for image-sharpness evaluation, based on improved Structure Similarity Index Measurement (SSIM), is also proposed, based on the characteristics of the AOTF imaging system. Compared with the traditional gradient operator, as same as it, the new evaluation function realized the evaluation between different image quality, thus could achieve the automatic focusing for different multispectral images.
A new Schwarzschild optical system for two-dimensional EUV imaging of MRX plasmas
NASA Astrophysics Data System (ADS)
Bolgert, P.; Bitter, M.; Efthimion, P.; Hill, K. W.; Ji, H.; Myers, C. E.; Yamada, M.; Yoo, J.; Zweben, S.
2013-10-01
This poster describes the design and construction of a new Schwarzschild optical system for two-dimensional EUV imaging of plasmas. This optical system consists of two concentric spherical mirrors with radii R1 and R2, and is designed to operate with certain angles of incidence θ1 and θ2. The special feature of this system resides in the fact that all the rays passing through the system are tangential to a third concentric circle; it assures that the condition for Bragg reflection is simultaneously fulfilled at each point on the two reflecting surfaces if the spherical mirrors are replaced by spherical multi-layer structures. A prototype of this imaging system will be implemented in the Magnetic Reconnection Experiment (MRX) at PPPL to obtain two-dimensional EUV images of the plasma in the energy range from 18 to 62 eV; the relative intensity of the emitted radiation in this energy range was determined from survey measurements with a photodiode. It is thought that the radiation at these energies is due to Bremsstrahlung and line emission caused by suprathermal electrons. This research is supported by DoE Contract Number DE-AC02-09CH11466 and by the Center for Magnetic Self-Organization (CMSO).
Zhang, Qi; Yang, Xiong; Hu, Qinglei; Bai, Ke; Yin, Fangfang; Li, Ning; Gang, Yadong; Wang, Xiaojun; Zeng, Shaoqun
2017-01-01
To resolve fine structures of biological systems like neurons, it is required to realize microscopic imaging with sufficient spatial resolution in three dimensional systems. With regular optical imaging systems, high lateral resolution is accessible while high axial resolution is hard to achieve in a large volume. We introduce an imaging system for high 3D resolution fluorescence imaging of large volume tissues. Selective plane illumination was adopted to provide high axial resolution. A scientific CMOS working in sub-array mode kept the imaging area in the sample surface, which restrained the adverse effect of aberrations caused by inclined illumination. Plastic embedding and precise mechanical sectioning extended the axial range and eliminated distortion during the whole imaging process. The combination of these techniques enabled 3D high resolution imaging of large tissues. Fluorescent bead imaging showed resolutions of 0.59 μm, 0.47μm, and 0.59 μm in the x, y, and z directions, respectively. Data acquired from the volume sample of brain tissue demonstrated the applicability of this imaging system. Imaging of different depths showed uniform performance where details could be recognized in either the near-soma area or terminal area, and fine structures of neurons could be seen in both the xy and xz sections. PMID:29296503
Overview of TAMU-CC Unmanned Aircraft Systems Coastal Research in the Port Mansfield Area, June 2015
NASA Astrophysics Data System (ADS)
Starek, M. J.; Bridges, D. H.
2016-02-01
In June, 2015, the TAMU-CC Unmanned Aircraft Systems Program, with the support of the Lone Star UAS Center of Excellence and Innovation, conducted a week-long UAS exercise in the coastal region near Port Mansfield, Texas. The platform used was TAMU-CC's RS-16, a variant of the Arcturus T-16XL, that was equipped with a three-camera imaging system which acquired high-resolution images in the optical range of the electromagnetic spectrum and lower resolution images in the infrared and ultraviolet ranges of the spectrum. The RS-16 has a wingspan of 12.9 ft, a typical take-off weight of 70 lbs, and a typical cruising speed of 60 kt. A total of 9 flights were conducted over 7 days, with a total of 22.9 flight hours. Different areas of interest were mapped for different researchers investigating specific coastal phenomena. This poster will describe the overall operational aspects of the exercise. The aircraft and imaging system will be described in detail, as will the operational procedures and subsequent data reduction procedures. The process of selection of the coastal regions for investigation and the flight planning involved in mapping those regions will be discussed. A summary of the resulting image data will be presented.
Method of orthogonally splitting imaging pose measurement
NASA Astrophysics Data System (ADS)
Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong
2018-01-01
In order to meet the aviation's and machinery manufacturing's pose measurement need of high precision, fast speed and wide measurement range, and to resolve the contradiction between measurement range and resolution of vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses and dual linear CCD. Dual linear CCD respectively acquire one dimensional image coordinate data of the target point, and two data can restore the two dimensional image coordinates of the target point. According to the characteristics of imaging system, this paper establishes the nonlinear distortion model to correct distortion. Based on cross ratio invariability, polynomial equation is established and solved by the least square fitting method. After completing distortion correction, this paper establishes the measurement mathematical model of vision sensor, and determines intrinsic parameters to calibrate. An array of feature points for calibration is built by placing a planar target in any different positions for a few times. An terative optimization method is presented to solve the parameters of model. The experimental results show that the field angle is 52 °, the focus distance is 27.40 mm, image resolution is 5185×5117 pixels, displacement measurement error is less than 0.1mm, and rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy the pose measurement requirement of high precision, fast speed and wide measurement range.
OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization
NASA Astrophysics Data System (ADS)
Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian
2018-03-01
OIPAV (Ophthalmic Images Processing, Analysis and Visualization) is a cross-platform software which is specially oriented to ophthalmic images. It provides a wide range of functionalities including data I/O, image processing, interaction, ophthalmic diseases detection, data analysis and visualization to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color photo of fundus, etc. It enables users to easily access to different ophthalmic image data manufactured from different imaging devices, facilitate workflows of processing ophthalmic images and improve quantitative evaluations. In this paper, we will present the system design and functional modules of the platform and demonstrate various applications. With a satisfying function scalability and expandability, we believe that the software can be widely applied in ophthalmology field.
Development of a Raman chemical image detection algorithm for authenticating dry milk
USDA-ARS?s Scientific Manuscript database
This research developed a Raman chemical imaging method for detecting multiple adulterants in skim milk powder. Ammonium sulfate, dicyandiamide, melamine, and urea were mixed into the milk powder as chemical adulterants in the concentration range of 0.1–5.0%. A Raman imaging system using a 785-nm la...
Adaptive Optics For Imaging Bright Objects Next To Dim Ones
NASA Technical Reports Server (NTRS)
Shao, Michael; Yu, Jeffrey W.; Malbet, Fabien
1996-01-01
Adaptive optics used in imaging optical systems, according to proposal, to enhance high-dynamic-range images (images of bright objects next to dim objects). Designed to alter wavefronts to correct for effects of scattering of light from small bumps on imaging optics. Original intended application of concept in advanced camera installed on Hubble Space Telescope for imaging of such phenomena as large planets near stars other than Sun. Also applicable to other high-quality telescopes and cameras.
Silicon Nanoparticles as Hyperpolarized Magnetic Resonance Imaging Agents
Aptekar, Jacob W.; Cassidy, Maja C.; Johnson, Alexander C.; Barton, Robert A.; Lee, Menyoung; Ogier, Alexander C.; Vo, Chinh; Anahtar, Melis N.; Ren, Yin; Bhatia, Sangeeta N.; Ramanathan, Chandrasekhar; Cory, David G.; Hill, Alison L.; Mair, Ross W.; Rosen, Matthew S.; Walsworth, Ronald L.
2014-01-01
Magnetic resonance imaging of hyperpolarized nuclei provides high image contrast with little or no background signal. To date, in-vivo applications of pre-hyperpolarized materials have been limited by relatively short nuclear spin relaxation times. Here, we investigate silicon nanoparticles as a new type of hyperpolarized magnetic resonance imaging agent. Nuclear spin relaxation times for a variety of Si nanoparticles are found to be remarkably long, ranging from many minutes to hours at room temperature, allowing hyperpolarized nanoparticles to be transported, administered, and imaged on practical time scales. Additionally, we demonstrate that Si nanoparticles can be surface functionalized using techniques common to other biologically targeted nanoparticle systems. These results suggest that Si nanoparticles can be used as a targetable, hyperpolarized magnetic resonance imaging agent with a large range of potential applications. PMID:19950973
Silicon nanoparticles as hyperpolarized magnetic resonance imaging agents.
Aptekar, Jacob W; Cassidy, Maja C; Johnson, Alexander C; Barton, Robert A; Lee, Menyoung; Ogier, Alexander C; Vo, Chinh; Anahtar, Melis N; Ren, Yin; Bhatia, Sangeeta N; Ramanathan, Chandrasekhar; Cory, David G; Hill, Alison L; Mair, Ross W; Rosen, Matthew S; Walsworth, Ronald L; Marcus, Charles M
2009-12-22
Magnetic resonance imaging of hyperpolarized nuclei provides high image contrast with little or no background signal. To date, in vivo applications of prehyperpolarized materials have been limited by relatively short nuclear spin relaxation times. Here, we investigate silicon nanoparticles as a new type of hyperpolarized magnetic resonance imaging agent. Nuclear spin relaxation times for a variety of Si nanoparticles are found to be remarkably long, ranging from many minutes to hours at room temperature, allowing hyperpolarized nanoparticles to be transported, administered, and imaged on practical time scales. Additionally, we demonstrate that Si nanoparticles can be surface functionalized using techniques common to other biologically targeted nanoparticle systems. These results suggest that Si nanoparticles can be used as a targetable, hyperpolarized magnetic resonance imaging agent with a large range of potential applications.
Infrared Camera Diagnostic for Heat Flux Measurements on NSTX
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. Mastrovito; R. Maingi; H.W. Kugel
2003-03-25
An infrared imaging system has been installed on NSTX (National Spherical Torus Experiment) at the Princeton Plasma Physics Laboratory to measure the surface temperatures on the lower divertor and center stack. The imaging system is based on an Indigo Alpha 160 x 128 microbolometer camera with 12 bits/pixel operating in the 7-13 {micro}m range with a 30 Hz frame rate and a dynamic temperature range of 0-700 degrees C. From these data and knowledge of graphite thermal properties, the heat flux is derived with a classic one-dimensional conduction model. Preliminary results of heat flux scaling are reported.
Noise analysis for near field 3-D FM-CW radar imaging systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheen, David M.
2015-06-19
Near field radar imaging systems are used for several applications including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit the performance in several ways including reduction in system sensitivity and reduction of image dynamic range. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of thesemore » noise sources on a fast-chirping FM-CW system.« less
Contrast changes in fluoroscopic imaging systems and statistical variations of these changes
NASA Technical Reports Server (NTRS)
Bailey, N. A.
1973-01-01
Experimental studies have indicated that: (1) The response of digitized fluoroscopic imaging systems is linear systems is linear with contrast over a rather wide range of absorber and cavity thicknesses. (2) Contrast changes associated with the addition of aluminum, iodine containing contrast agents and air of thicknesses 1mm or less can be detected with a 95% confidence level. (3) The standard deviation associated with such determination using clinically available X-ray generators and video disc recording is less than 1 percent. A large flat screen X-ray image intensifier has been constructed and some preliminary results obtained. Sensitivity achieved makes dose reduction a factor often greater than previously reported for a system using a conventional X-ray image intensifier.
NASA Technical Reports Server (NTRS)
Erickson, W. K.; Hofman, L. B.; Donovan, W. E.
1984-01-01
Difficulties regarding the digital image analysis of remotely sensed imagery can arise in connection with the extensive calculations required. In the past, an expensive large to medium mainframe computer system was needed for performing these calculations. For image-processing applications smaller minicomputer-based systems are now used by many organizations. The costs for such systems are still in the range from $100K to $300K. Recently, as a result of new developments, the use of low-cost microcomputers for image processing and display systems appeared to have become feasible. These developments are related to the advent of the 16-bit microprocessor and the concept of the microcomputer workstation. Earlier 8-bit microcomputer-based image processing systems are briefly examined, and a computer workstation architecture is discussed. Attention is given to a microcomputer workstation developed by Stanford University, and the design and implementation of a workstation network.
Light microscopy applications in systems biology: opportunities and challenges
2013-01-01
Biological systems present multiple scales of complexity, ranging from molecules to entire populations. Light microscopy is one of the least invasive techniques used to access information from various biological scales in living cells. The combination of molecular biology and imaging provides a bottom-up tool for direct insight into how molecular processes work on a cellular scale. However, imaging can also be used as a top-down approach to study the behavior of a system without detailed prior knowledge about its underlying molecular mechanisms. In this review, we highlight the recent developments on microscopy-based systems analyses and discuss the complementary opportunities and different challenges with high-content screening and high-throughput imaging. Furthermore, we provide a comprehensive overview of the available platforms that can be used for image analysis, which enable community-driven efforts in the development of image-based systems biology. PMID:23578051
A real-time monitoring system for night glare protection
NASA Astrophysics Data System (ADS)
Ma, Jun; Ni, Xuxiang
2010-11-01
When capturing a dark scene with a high bright object, the monitoring camera will be saturated in some regions and the details will be lost in and near these saturated regions because of the glare vision. This work aims at developing a real-time night monitoring system. The system can decrease the influence of the glare vision and gain more details from the ordinary camera when exposing a high-contrast scene like a car with its headlight on during night. The system is made up of spatial light modulator (The liquid crystal on silicon: LCoS), image sensor (CCD), imaging lens and DSP. LCoS, a reflective liquid crystal, can modular the intensity of reflective light at every pixel as a digital device. Through modulation function of LCoS, CCD is exposed with sub-region. With the control of DSP, the light intensity is decreased to minimum in the glare regions, and the light intensity is negative feedback modulated based on PID theory in other regions. So that more details of the object will be imaging on CCD and the glare protection of monitoring system is achieved. In experiments, the feedback is controlled by the embedded system based on TI DM642. Experiments shows: this feedback modulation method not only reduces the glare vision to improve image quality, but also enhances the dynamic range of image. The high-quality and high dynamic range image is real-time captured at 30hz. The modulation depth of LCoS determines how strong the glare can be removed.
Dual-energy micro-CT with a dual-layer, dual-color, single-crystal scintillator.
Maier, Daniel Simon; Schock, Jonathan; Pfeiffer, Franz
2017-03-20
A wide range of X-ray imaging applications demand micrometer spatial resolution. In material science and biology especially, there is a great interest in material determination and material separation methods. Here we present a new detector design that allows the recording of a low- and a high-energy radiography image simultaneously with micrometer spatial resolution. The detector system is composed of a layered scintillator stack, two CCDs and an optical system to image the scintillator responses onto the CCDs. We used the detector system with a standard laboratory microfocus X-ray tube to prove the working principle of the system and derive important design characteristics. With the recorded and registered dual-energy data set, the material separation and determination could be shown at an X-ray tube peak energy of up to 160 keV with a spatial resolution of 12 μm. The detector design shows a great potential for further development and a wide range of possible applications.
Pulse-compression ghost imaging lidar via coherent detection.
Deng, Chenjin; Gong, Wenlin; Han, Shensheng
2016-11-14
Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with conventional pulsed GI lidar system, pulse-compression GI lidar, without decreasing the range resolution, is easy to obtain high single pulse energy with the use of a long pulse, and the mechanism of coherent detection can eliminate the influence of the stray light, which is helpful to improve the detection sensitivity and detection range.
Improved characterization of scenes with a combination of MMW radar and radiometer information
NASA Astrophysics Data System (ADS)
Dill, Stephan; Peichl, Markus; Schreiber, Eric; Anglberger, Harald
2017-05-01
For security related applications MMW radar and radiometer systems in remote sensing or stand-off configurations are well established techniques. The range of development stages extends from experimental to commercial systems on the civil and military market. Typical examples are systems for personnel screening at airports for concealed object detection under clothing, enhanced vision or landing aid for helicopter and vehicle based systems for suspicious object or IED detection along roads. Due to the physical principle of active (radar) and passive (radiometer) MMW measurement techniques the appearance of single objects and thus the complete scenario is rather different for radar and radiometer images. A reasonable combination of both measurement techniques could lead to enhanced object information. However, some technical requirements should be taken into account. The imaging geometry for both sensors should be nearly identical, the geometrical resolution and the wavelength should be similar and at best the imaging process should be carried out simultaneously. Therefore theoretical and experimental investigations on a suitable combination of MMW radar and radiometer information have been conducted. First experiments in 2016 have been done with an imaging linescanner based on a cylindrical imaging geometry [1]. It combines a horizontal line scan in azimuth with a linear motion in vertical direction for the second image dimension. The main drawback of the system is the limited number of pixel in vertical dimension at a certain distance. Nevertheless the near range imaging results where promising. Therefore the combination of radar and radiometer sensor was assembled on the DLR wide-field-of-view linescanner ABOSCA which is based on a spherical imaging geometry [2]. A comparison of both imaging systems is discussed. The investigations concentrate on rather basic scenarios with canonical targets like flat plates, spheres, corner reflectors and cylinders. First experimental measurement results with the ABOSCA linescanner are shown.
Passive millimeter wave simulation in blender
NASA Astrophysics Data System (ADS)
Murakowski, Maciej
Imaging in the millimeter wave (mmW) frequency range is being explored for applications where visible or infrared (IR) imaging fails, such as through atmospheric obscurants. However, mmW imaging is still in its infancy and imager systems are still bulky, expensive, and fragile, so experiments on imaging in real-world scenarios are difficult or impossible to perform. Therefore, a simulation system capable of predicting mmW phenomenology would be valuable in determining the requirements (e.g. resolution or noise floor) of an imaging system for a particular scenario and aid in the design of such an imager. Producing simulation software for this purpose is the objective of the work described in this thesis. The 3D software package Blender was modified to simulate the images produced by a passive mmW imager, based on a Geometrical Optics approach. Simulated imagery was validated against experimental data and the software was applied to novel imaging scenarios. Additionally, a database of material properties for use in the simulation was collected.
Fluorescence hyperspectral imaging (fHSI) using a spectrally resolved detector array
Luthman, Anna Siri; Dumitru, Sebastian; Quiros‐Gonzalez, Isabel; Joseph, James
2017-01-01
Abstract The ability to resolve multiple fluorescent emissions from different biological targets in video rate applications, such as endoscopy and intraoperative imaging, has traditionally been limited by the use of filter‐based imaging systems. Hyperspectral imaging (HSI) facilitates the detection of both spatial and spectral information in a single data acquisition, however, instrumentation for HSI is typically complex, bulky and expensive. We sought to overcome these limitations using a novel robust and low cost HSI camera based on a spectrally resolved detector array (SRDA). We integrated this HSI camera into a wide‐field reflectance‐based imaging system operating in the near‐infrared range to assess the suitability for in vivo imaging of exogenous fluorescent contrast agents. Using this fluorescence HSI (fHSI) system, we were able to accurately resolve the presence and concentration of at least 7 fluorescent dyes in solution. We also demonstrate high spectral unmixing precision, signal linearity with dye concentration and at depth in tissue mimicking phantoms, and delineate 4 fluorescent dyes in vivo. Our approach, including statistical background removal, could be directly generalised to broader spectral ranges, for example, to resolve tissue reflectance or autofluorescence and in future be tailored to video rate applications requiring snapshot HSI data acquisition. PMID:28485130
NASA Astrophysics Data System (ADS)
Zoratti, Paul K.; Gilbert, R. Kent; Majewski, Ronald; Ference, Jack
1995-12-01
Development of automotive collision warning systems has progressed rapidly over the past several years. A key enabling technology for these systems is millimeter-wave radar. This paper addresses a very critical millimeter-wave radar sensing issue for automotive radar, namely the scattering characteristics of common roadway objects such as vehicles, roadsigns, and bridge overpass structures. The data presented in this paper were collected on ERIM's Fine Resolution Radar Imaging Rotary Platform Facility and processed with ERIM's image processing tools. The value of this approach is that it provides system developers with a 2D radar image from which information about individual point scatterers `within a single target' can be extracted. This information on scattering characteristics will be utilized to refine threat assessment processing algorithms and automotive radar hardware configurations. (1) By evaluating the scattering characteristics identified in the radar image, radar signatures as a function of aspect angle for common roadway objects can be established. These signatures will aid in the refinement of threat assessment processing algorithms. (2) Utilizing ERIM's image manipulation tools, total RCS and RCS as a function of range and azimuth can be extracted from the radar image data. This RCS information will be essential in defining the operational envelope (e.g. dynamic range) within which any radar sensor hardware must be designed.
Galante, Angelo; Sinibaldi, Raffaele; Conti, Allegra; De Luca, Cinzia; Catallo, Nadia; Sebastiani, Piero; Pizzella, Vittorio; Romani, Gian Luca; Sotgiu, Antonello; Della Penna, Stefania
2015-01-01
In recent years, ultra-low field (ULF)-MRI is being given more and more attention, due to the possibility of integrating ULF-MRI and Magnetoencephalography (MEG) in the same device. Despite the signal-to-noise ratio (SNR) reduction, there are several advantages to operating at ULF, including increased tissue contrast, reduced cost and weight of the scanners, the potential to image patients that are not compatible with clinical scanners, and the opportunity to integrate different imaging modalities. The majority of ULF-MRI systems are based, until now, on magnetic field pulsed techniques for increasing SNR, using SQUID based detectors with Larmor frequencies in the kHz range. Although promising results were recently obtained with such systems, it is an open question whether similar SNR and reduced acquisition time can be achieved with simpler devices. In this work a room-temperature, MEG-compatible very-low field (VLF)-MRI device working in the range of several hundred kHz without sample pre-polarization is presented. This preserves many advantages of ULF-MRI, but for equivalent imaging conditions and SNR we achieve reduced imaging time based on preliminary results using phantoms and ex-vivo rabbits heads. PMID:26630172
Transperineal prostate biopsy with ECHO-MRI fusion. Biopsee system. Initial experience.
Romero-Selas, E; Cuadros, V; Montáns, J; Sánchez, E; López-Alcorocho, J M; Gómez-Sancha, F
2016-06-01
The aim of this study is to present our initial experience with the stereotactic echo-MRI fusion system for diagnosing prostate cancer. Between September 2014 and January 2015, we performed 50 prostate biopsies using the stereotactic echo-MRI fusion system. The 3-Tesla multiparameter MR images were superimposed using this image fusion system on 3D echo images obtained with the Biopsee system for the exact locating of areas suspected of prostate cancer. The lesions were classified using the Prostate Imaging Report and Date System. We assessed a total of 50 patients, with a mean age of 63 years (range, 45-79), a mean prostate-specific antigen level of 8 ng/mL (range, 1.9-20) and a mean prostate volume of 52mL (range, 12-118). Prostate cancer was diagnosed in 69% of the patients and intraepithelial neoplasia in 6%. The results of the biopsy were negative for 24% of the patients. The results of the biopsy and MRI were in agreement for 62% of the patients; however, 46% also had a tumour outside of the suspicious lesion. We diagnosed 46% anterior tumours and 33% apical tumours. One patient had a haematuria, another had a haematoma and a third had acute urine retention. Multiparametric prostatic MRI helps identify prostate lesions suggestive of cancer. The Biopsee echo-MRI fusion system provides for guided biopsy and increases the diagnostic performance, reducing the false negatives of classical biopsies and increasing the diagnosis of anterior tumours. Transperineal access minimises the risk of prostatic infection and sepsis. Copyright © 2015 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Oh, Gyong Jin; Kim, Lyang-June; Sheen, Sue-Ho; Koo, Gyou-Phyo; Jin, Sang-Hun; Yeo, Bo-Yeon; Lee, Jong-Ho
2009-05-01
This paper presents a real time implementation of Non Uniformity Correction (NUC). Two point correction and one point correction with shutter were carried out in an uncooled imaging system which will be applied to a missile application. To design a small, light weight and high speed imaging system for a missile system, SoPC (System On a Programmable Chip) which comprises of FPGA and soft core (Micro-blaze) was used. Real time NUC and generation of control signals are implemented using FPGA. Also, three different NUC tables were made to make the operating time shorter and to reduce the power consumption in a large range of environment temperature. The imaging system consists of optics and four electronics boards which are detector interface board, Analog to Digital converter board, Detector signal generation board and Power supply board. To evaluate the imaging system, NETD was measured. The NETD was less than 160mK in three different environment temperatures.
Comparison of Near-Infrared Imaging Camera Systems for Intracranial Tumor Detection.
Cho, Steve S; Zeh, Ryan; Pierce, John T; Salinas, Ryan; Singhal, Sunil; Lee, John Y K
2018-04-01
Distinguishing neoplasm from normal brain parenchyma intraoperatively is critical for the neurosurgeon. 5-Aminolevulinic acid (5-ALA) has been shown to improve gross total resection and progression-free survival but has limited availability in the USA. Near-infrared (NIR) fluorescence has advantages over visible light fluorescence with greater tissue penetration and reduced background fluorescence. In order to prepare for the increasing number of NIR fluorophores that may be used in molecular imaging trials, we chose to compare a state-of-the-art, neurosurgical microscope (System 1) to one of the commercially available NIR visualization platforms (System 2). Serial dilutions of indocyanine green (ICG) were imaged with both systems in the same environment. Each system's sensitivity and dynamic range for NIR fluorescence were documented and analyzed. In addition, brain tumors from six patients were imaged with both systems and analyzed. In vitro, System 2 demonstrated greater ICG sensitivity and detection range (System 1 1.5-251 μg/l versus System 2 0.99-503 μg/l). Similarly, in vivo, System 2 demonstrated signal-to-background ratio (SBR) of 2.6 ± 0.63 before dura opening, 5.0 ± 1.7 after dura opening, and 6.1 ± 1.9 after tumor exposure. In contrast, System 1 could not easily detect ICG fluorescence prior to dura opening with SBR of 1.2 ± 0.15. After the dura was reflected, SBR increased to 1.4 ± 0.19 and upon exposure of the tumor SBR increased to 1.8 ± 0.26. Dedicated NIR imaging platforms can outperform conventional microscopes in intraoperative NIR detection. Future microscopes with improved NIR detection capabilities could enhance the use of NIR fluorescence to detect neoplasm and improve patient outcome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jie; Wang, Xin, E-mail: wangx@tongji.edu.cn, E-mail: mubz@tongji.edu.cn; Zhan, Qi
This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system was tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system basedmore » on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.« less
NASA Astrophysics Data System (ADS)
Xu, Jie; Wang, Xin; Zhan, Qi; Huang, Shengling; Chen, Yifan; Mu, Baozhong
2016-07-01
This paper presents a novel lobster-eye imaging system for X-ray-backscattering inspection. The system was designed by modifying the Schmidt geometry into a treble-lens structure in order to reduce the resolution difference between the vertical and horizontal directions, as indicated by ray-tracing simulations. The lobster-eye X-ray imaging system is capable of operating over a wide range of photon energies up to 100 keV. In addition, the optics of the lobster-eye X-ray imaging system was tested to verify that they meet the requirements. X-ray-backscattering imaging experiments were performed in which T-shaped polymethyl-methacrylate objects were imaged by the lobster-eye X-ray imaging system based on both the double-lens and treble-lens Schmidt objectives. The results show similar resolution of the treble-lens Schmidt objective in both the vertical and horizontal directions. Moreover, imaging experiments were performed using a second treble-lens Schmidt objective with higher resolution. The results show that for a field of view of over 200 mm and with a 500 mm object distance, this lobster-eye X-ray imaging system based on a treble-lens Schmidt objective offers a spatial resolution of approximately 3 mm.
Evaluation of a novel collimator for molecular breast tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon
Here, this study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelatedmore » (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (–25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance. In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec -1 μCi -1 with the open energy window and 11.2 counts sec -1 μCi -1 with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which out-weighed the increase in image noise. Conclusion The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.« less
Evaluation of a novel collimator for molecular breast tomosynthesis.
Gilland, David R; Welch, Benjamin L; Lee, Seungjoon; Kross, Brian; Weisenberger, Andrew G
2017-11-01
This study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99m Tc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance. In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec -1 μCi -1 with the open energy window and 11.2 counts sec -1 μCi -1 with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which out-weighed the increase in image noise. The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging. © 2017 American Association of Physicists in Medicine.
Evaluation of a novel collimator for molecular breast tomosynthesis
Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon; ...
2017-09-06
Here, this study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelatedmore » (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (–25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance. In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec -1 μCi -1 with the open energy window and 11.2 counts sec -1 μCi -1 with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which out-weighed the increase in image noise. Conclusion The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, S; Komori, M; Toshito, T
Purpose: Since proton therapy has the ability to selectively deliver a dose to a target tumor, the dose distribution should be accurately measured. A precise and efficient method to evaluate the dose distribution is desired. We found that luminescence was emitted from water during proton irradiation and thought this phenomenon could be used for estimating the dose distribution. Methods: For this purpose, we placed water phantoms set on a table with a spot-scanning proton-therapy system, and luminescence images of these phantoms were measured with a high-sensitivity cooled charge coupled device (CCD) camera during proton-beam irradiation. We also conducted the imagingmore » of phantoms of pure-water, fluorescein solution and acrylic block. We made three dimensional images from the projection data. Results: The luminescence images of water phantoms during the proton-beam irradiations showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. The image of the pure-water phantom also showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had 14.5% shorter proton range than that of water; the proton range in the acrylic phantom was relatively matched with the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 sec. Three dimensional images were successfully obtained which have more quantitative information. Conclusion: Luminescence imaging during proton-beam irradiation has the potential to be a new method for range estimations in proton therapy.« less
The application of coded excitation technology in medical ultrasonic Doppler imaging
NASA Astrophysics Data System (ADS)
Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin
2008-03-01
Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. The application of coded excitation technology in medical ultrasonic Doppler imaging system has the potential of higher SNR and deeper penetration depth than conventional pulse-echo imaging system, it also improves the image quality, and enhances the sensitivity of feeble signal, furthermore, proper coded excitation is beneficial to received spectrum of Doppler signal. Firstly, this paper analyzes the application of coded excitation technology in medical ultrasonic Doppler imaging system abstractly, showing the advantage and bright future of coded excitation technology, then introduces the principle and the theory of coded excitation. Secondly, we compare some coded serials (including Chirp and fake Chirp signal, Barker codes, Golay's complementary serial, M-sequence, etc). Considering Mainlobe Width, Range Sidelobe Level, Signal-to-Noise Ratio and sensitivity of Doppler signal, we choose Barker codes as coded serial. At last, we design the coded excitation circuit. The result in B-mode imaging and Doppler flow measurement coincided with our expectation, which incarnated the advantage of application of coded excitation technology in Digital Medical Ultrasonic Doppler Endoscope Imaging System.
Pulsed photoacoustic flow imaging with a handheld system
NASA Astrophysics Data System (ADS)
van den Berg, Pim J.; Daoudi, Khalid; Steenbergen, Wiendelt
2016-02-01
Flow imaging is an important technique in a range of disease areas, but estimating low flow speeds, especially near the walls of blood vessels, remains challenging. Pulsed photoacoustic flow imaging can be an alternative since there is little signal contamination from background tissue with photoacoustic imaging. We propose flow imaging using a clinical photoacoustic system that is both handheld and portable. The system integrates a linear array with 7.5 MHz central frequency in combination with a high-repetition-rate diode laser to allow high-speed photoacoustic imaging-ideal for this application. This work shows the flow imaging performance of the system in vitro using microparticles. Both two-dimensional (2-D) flow images and quantitative flow velocities from 12 to 75 mm/s were obtained. In a transparent bulk medium, flow estimation showed standard errors of ˜7% the estimated speed; in the presence of tissue-realistic optical scattering, the error increased to 40% due to limited signal-to-noise ratio. In the future, photoacoustic flow imaging can potentially be performed in vivo using fluorophore-filled vesicles or with an improved setup on whole blood.
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
Human visual system (HVS) can visualize all the brightness levels of the scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices are smaller than the dynamic range of human eye. This implies low dynamic range (LDR) images captured by normal digital camera may lose image details. We propose an efficient approach to high dynamic (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literatures are unable to meet the efficiency and performance requirements in the application of mobile devices. In this paper, we selected Oriented Brief (ORB) detector to extract local image structures. The descriptor selected in multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations. ORB descriptor is the best candidate in our algorithm. Further, we perform an improved RANdom Sample Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method gets better performance.
UV-sensitive scientific CCD image sensors
NASA Astrophysics Data System (ADS)
Vishnevsky, Grigory I.; Kossov, Vladimir G.; Iblyaminova, A. F.; Lazovsky, Leonid Y.; Vydrevitch, Michail G.
1997-06-01
An investigation of probe laser irradiation interaction with substances containing in an environment has long since become a recognized technique for contamination detection and identification. For this purpose, a near and midrange-IR laser irradiation is traditionally used. However, as many works presented on last ecology monitoring conferences show, in addition to traditional systems, rapidly growing are systems with laser irradiation from near-UV range (250 - 500 nm). Use of CCD imagers is one of the prerequisites for this allowing the development of a multi-channel computer-based spectral research system. To identify and analyze contaminating impurities on an environment, such methods as laser fluorescence analysis, UV absorption and differential spectroscopy, Raman scattering are commonly used. These methods are used to identify a large number of impurities (petrol, toluene, Xylene isomers, SO2, acetone, methanol), to detect and identify food pathogens in real time, to measure a concentration of NH3, SO2 and NO in combustion outbursts, to detect oil products in a water, to analyze contaminations in ground waters, to define ozone distribution in the atmosphere profile, to monitor various chemical processes including radioactive materials manufacturing, heterogeneous catalytic reactions, polymers production etc. Multi-element image sensor with enhanced UV sensitivity, low optical non-uniformity, low intrinsic noise and high dynamic range is a key element of all above systems. Thus, so called Virtual Phase (VP) CCDs possessing all these features, seems promising for ecology monitoring spectral measuring systems. Presently, a family of VP CCDs with different architecture and number of pixels is developed and being manufactured. All CCDs from this family are supported with a precise slow-scan digital image acquisition system that can be used in various image processing systems in astronomy, biology, medicine, ecology etc. An image is displayed directly on a PC monitor through a software support.
Moore, C S; Wood, T J; Avery, G; Balcam, S; Needler, L; Beavis, A W; Saunderson, J R
2014-05-07
The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
NASA Astrophysics Data System (ADS)
Moore, C. S.; Wood, T. J.; Avery, G.; Balcam, S.; Needler, L.; Beavis, A. W.; Saunderson, J. R.
2014-05-01
The purpose of this study was to examine the use of three physical image quality metrics in the calibration of an automatic exposure control (AEC) device for chest radiography with a computed radiography (CR) imaging system. The metrics assessed were signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and mean effective noise equivalent quanta (eNEQm), all measured using a uniform chest phantom. Subsequent calibration curves were derived to ensure each metric was held constant across the tube voltage range. Each curve was assessed for its clinical appropriateness by generating computer simulated chest images with correct detector air kermas for each tube voltage, and grading these against reference images which were reconstructed at detector air kermas correct for the constant detector dose indicator (DDI) curve currently programmed into the AEC device. All simulated chest images contained clinically realistic projected anatomy and anatomical noise and were scored by experienced image evaluators. Constant DDI and CNR curves do not appear to provide optimized performance across the diagnostic energy range. Conversely, constant eNEQm and SNR do appear to provide optimized performance, with the latter being the preferred calibration metric given as it is easier to measure in practice. Medical physicists may use the SNR image quality metric described here when setting up and optimizing AEC devices for chest radiography CR systems with a degree of confidence that resulting clinical image quality will be adequate for the required clinical task. However, this must be done with close cooperation of expert image evaluators, to ensure appropriate levels of detector air kerma.
Hult, Johan; Richter, Mattias; Nygren, Jenny; Aldén, Marcus; Hultqvist, Anders; Christensen, Magnus; Johansson, Bengt
2002-08-20
High-repetition-rate laser-induced fluorescence measurements of fuel and OH concentrations in internal combustion engines are demonstrated. Series of as many as eight fluorescence images, with a temporal resolution ranging from 10 micros to 1 ms, are acquired within one engine cycle. A multiple-laser system in combination with a multiple-CCD camera is used for cycle-resolved imaging in spark-ignition, direct-injection stratified-charge, and homogeneous-charge compression-ignition engines. The recorded data reveal unique information on cycle-to-cycle variations in fuel transport and combustion. Moreover, the imaging system in combination with a scanning mirror is used to perform instantaneous three-dimensional fuel-concentration measurements.
Ruggeri, Marco; Uhlhorn, Stephen R.; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie
2012-01-01
Abstract: An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time. PMID:22808424
Ruggeri, Marco; Uhlhorn, Stephen R; De Freitas, Carolina; Ho, Arthur; Manns, Fabrice; Parel, Jean-Marie
2012-07-01
An optical switch was implemented in the reference arm of an extended depth SD-OCT system to sequentially acquire OCT images at different depths into the eye ranging from the cornea to the retina. A custom-made accommodation module was coupled with the delivery of the OCT system to provide controlled step stimuli of accommodation and disaccommodation that preserve ocular alignment. The changes in the lens shape were imaged and ocular distances were dynamically measured during accommodation and disaccommodation. The system is capable of dynamic in vivo imaging of the entire anterior segment and eye-length measurement during accommodation in real-time.
NIR DLP hyperspectral imaging system for medical applications
NASA Astrophysics Data System (ADS)
Wehner, Eleanor; Thapa, Abhas; Livingston, Edward; Zuzak, Karel
2011-03-01
DLP® hyperspectral reflectance imaging in the visible range has been previously shown to quantify hemoglobin oxygenation in subsurface tissues, 1 mm to 2 mm deep. Extending the spectral range into the near infrared reflects biochemical information from deeper subsurface tissues. Unlike any other illumination method, the digital micro-mirror device, DMD, chip is programmable, allowing the user to actively illuminate with precisely predetermined spectra of illumination with a minimum bandpass of approximately 10 nm. It is possible to construct active spectral-based illumination that includes but is not limited to containing sharp cutoffs to act as filters or forming complex spectra, varying the intensity of light at discrete wavelengths. We have characterized and tested a pure NIR, 760 nm to 1600 nm, DLP hyperspectral reflectance imaging system. In its simplest application, the NIR system can be used to quantify the percentage of water in a subject, enabling edema visualization. It can also be used to map vein structure in a patient in real time. During gall bladder surgery, this system could be invaluable in imaging bile through fatty tissue, aiding surgeons in locating the common bile duct in real time without injecting any contrast agents.
TRM4: Range performance model for electro-optical imaging systems
NASA Astrophysics Data System (ADS)
Keßler, Stefan; Gal, Raanan; Wittenstein, Wolfgang
2017-05-01
TRM4 is a commonly used model for assessing device and range performance of electro-optical imagers. The latest version, TRM4.v2, has been released by Fraunhofer IOSB of Germany in June 2016. While its predecessor, TRM3, was developed for thermal imagers, assuming blackbody targets and backgrounds, TRM4 extends the TRM approach to assess three imager categories: imagers that exploit emitted radiation (TRM4 category Thermal), reflected radiation (TRM4 category Visible/NIR/SWIR), and both emitted and reflected radiation (TRM4 category General). Performance assessment in TRM3 and TRM4 is based on the perception of standard four-bar test patterns, whether distorted by under-sampling or not. Spatial and sampling characteristics are taken into account by the Average Modulation at Optimum Phase (AMOP), which replaces the system MTF used in previous models. The Minimum Temperature Difference Perceived (MTDP) figure of merit was introduced in TRM3 for assessing the range performance of thermal imagers. In TRM4, this concept is generalized to the MDSP (Minimum Difference Signal Perceived), which can be applied to all imager categories. In this paper, we outline and discuss the TRM approach and pinpoint differences between TRM4 and TRM3. In addition, an overview of the TRM4 software and its functionality is given. Features newly introduced in TRM4, such as atmospheric turbulence, irradiation sources, and libraries are addressed. We conclude with an outlook on future work and the new module for intensified CCD cameras that is currently under development
Gravett, Matthew; Cepek, Jeremy; Fenster, Aaron
2017-11-01
The purpose of this study was to develop and validate an image-guided robotic needle delivery system for accurate and repeatable needle targeting procedures in mouse brains inside the 12 cm inner diameter gradient coil insert of a 9.4 T MR scanner. Many preclinical research techniques require the use of accurate needle deliveries to soft tissues, including brain tissue. Soft tissues are optimally visualized in MR images, which offer high-soft tissue contrast, as well as a range of unique imaging techniques, including functional, spectroscopy and thermal imaging, however, there are currently no solutions for delivering needles to small animal brains inside the bore of an ultra-high field MR scanner. This paper describes the mechatronic design, evaluation of MR compatibility, registration technique, mechanical calibration, the quantitative validation of the in-bore image-guided needle targeting accuracy and repeatability, and demonstrated the system's ability to deliver needles in situ. Our six degree-of-freedom, MR compatible, mechatronic system was designed to fit inside the bore of a 9.4 T MR scanner and is actuated using a combination of piezoelectric and hydraulic mechanisms. The MR compatibility and targeting accuracy of the needle delivery system are evaluated to ensure that the system is precisely calibrated to perform the needle targeting procedures. A semi-automated image registration is performed to link the robot coordinates to the MR coordinate system. Soft tissue targets can be accurately localized in MR images, followed by automatic alignment of the needle trajectory to the target. Intra-procedure visualization of the needle target location and the needle were confirmed through MR images after needle insertion. The effects of geometric distortions and signal noise were found to be below threshold that would have an impact on the accuracy of the system. The system was found to have negligible effect on the MR image signal noise and geometric distortion. The system was mechanically calibrated and the mean image-guided needle targeting and needle trajectory accuracies were quantified in an image-guided tissue mimicking phantom experiment to be 178 ± 54 μm and 0.27 ± 0.65°, respectively. An MR image-guided system for in-bore needle deliveries to soft tissue targets in small animal models has been developed. The results of the needle targeting accuracy experiments in phantoms indicate that this system has the potential to deliver needles to the smallest soft tissue structures relevant in preclinical studies, at a wide variety of needle trajectories. Future work in the form of a fully-automated needle driver with precise depth control would benefit this system in terms of its applicability to a wider range of animal models and organ targets. © 2017 American Association of Physicists in Medicine.
Walk through screening with multistatic mmW technology
NASA Astrophysics Data System (ADS)
Gumbmann, Frank; Ahmed, Sherif Sayed
2016-10-01
Active imaging systems for security screening at the airport or other checkpoints have proven to offer good results. Present systems require a specific position and posture,13 or a specific movement2 of the passenger in front of the imaging system. Walk Through Systems (WTS) which screen the passenger while passing the imaging system or a screening hallway would be more pleasant for the passenger and would result in a great improvement in the throughput. Furthermore the detection performance could be enhanced since possible threats are visible from different perspectives and could be tracked within different frames. The combination of all frames is equivalent to a full illumination of the passenger. This paper presents the concept of a WTS basing on a multistatic imaging system in the mmW range. The benefit is that the technology of existing portals can we reused and updated to a WTS. First results are demonstrated with an experimental system.
Design of an open-ended plenoptic camera for three-dimensional imaging of dusty plasmas
NASA Astrophysics Data System (ADS)
Sanpei, Akio; Tokunaga, Kazuya; Hayashi, Yasuaki
2017-08-01
Herein, the design of a plenoptic imaging system for three-dimensional reconstructions of dusty plasmas using an integral photography technique has been reported. This open-ended system is constructed with a multi-convex lens array and a typical reflex CMOS camera. We validated the design of the reconstruction system using known target particles. Additionally, the system has been applied to observations of fine particles floating in a horizontal, parallel-plate radio-frequency plasma. Furthermore, the system works well in the range of our dusty plasma experiment. We can identify the three-dimensional positions of dust particles from a single-exposure image obtained from one viewing port.
LWIR pupil imaging and prospects for background compensation
NASA Astrophysics Data System (ADS)
LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg
2015-08-01
A previous paper described LWIR Pupil Imaging with a sensitive, low-flux focal plane array, and behavior of this type of system for higher flux operations as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications that extend from generating scene motion in the laboratory for quantifying performance of "realtime, scene-based non-uniformity correction" approaches, to effectuating subtraction of bright backgrounds by alternating viewing aspect between a point source and adjacent, source-free backgrounds.
Technology transfer: Imaging tracker to robotic controller
NASA Technical Reports Server (NTRS)
Otaguro, M. S.; Kesler, L. O.; Land, Ken; Erwin, Harry; Rhoades, Don
1988-01-01
The transformation of an imaging tracker to a robotic controller is described. A multimode tracker was developed for fire and forget missile systems. The tracker locks on to target images within an acquisition window using multiple image tracking algorithms to provide guidance commands to missile control systems. This basic tracker technology is used with the addition of a ranging algorithm based on sizing a cooperative target to perform autonomous guidance and control of a platform for an Advanced Development Project on automation and robotics. A ranging tracker is required to provide the positioning necessary for robotic control. A simple functional demonstration of the feasibility of this approach was performed and described. More realistic demonstrations are under way at NASA-JSC. In particular, this modified tracker, or robotic controller, will be used to autonomously guide the Man Maneuvering Unit (MMU) to targets such as disabled astronauts or tools as part of the EVA Retriever efforts. It will also be used to control the orbiter's Remote Manipulator Systems (RMS) in autonomous approach and positioning demonstrations. These efforts will also be discussed.
CHRONIS: an animal chromosome image database.
Toyabe, Shin-Ichi; Akazawa, Kouhei; Fukushi, Daisuke; Fukui, Kiichi; Ushiki, Tatsuo
2005-01-01
We have constructed a database system named CHRONIS (CHROmosome and Nano-Information System) to collect images of animal chromosomes and related nanotechnological information. CHRONIS enables rapid sharing of information on chromosome research among cell biologists and researchers in other fields via the Internet. CHRONIS is also intended to serve as a liaison tool for researchers who work in different centers. The image database contains more than 3,000 color microscopic images, including karyotypic images obtained from more than 1,000 species of animals. Researchers can browse the contents of the database using a usual World Wide Web interface in the following URL: http://chromosome.med.niigata-u.ac.jp/chronis/servlet/chronisservlet. The system enables users to input new images into the database, to locate images of interest by keyword searches, and to display the images with detailed information. CHRONIS has a wide range of applications, such as searching for appropriate probes for fluorescent in situ hybridization, comparing various kinds of microscopic images of a single species, and finding researchers working in the same field of interest.
A Flexible Annular-Array Imaging Platform for Micro-Ultrasound
Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei
2013-01-01
Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element imaging systems and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at a higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were achieved by a novel field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and high-speed data transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests including hardware, algorithms, wire phantom, and tissue mimicking phantom measurements were conducted to demonstrate good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements were higher than 1.2 in the range of 3.8 to 8.7 mm imaging depth. The platform supported more than 25 images per second for real-time image acquisition. The depth-of-field had about 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923
Development and Applications of Laminar Optical Tomography for In Vivo Imaging
NASA Astrophysics Data System (ADS)
Burgess, Sean A.
Laminar optical tomography (LOT) is an optical imaging technique capable of making depth-resolved measurements of absorption and fluorescence contrast in scattering tissue. LOT was first demonstrated in 2004 by Hillman et al [1]. The technique combines a non-contact laser scanning geometry, similar to a low magnification confocal microscope, with the imaging principles of diffuse optical tomography (DOT). This thesis describes the development and application of a second generation LOT system, which acquires both fluorescence and multi-wavelength measurements simultaneously and is better suited for in vivo measurements. Chapter 1 begins by reviewing the interactions of light with tissue that form the foundation of optical imaging. A range of related optical imaging techniques and the basic principles of LOT imaging are then described. In Chapter 2, the development of the new LOT imaging system is described including the implementation of a series of interfaces to allow clinical imaging. System performance is then evaluated on a range of imaging phantoms. Chapter 3 describes two in vivo imaging applications explored using the second generation LOT system, first in a clinical setting where skin lesions were imaged, and then in a laboratory setting where LOT imaging was performed on exposed rat cortex. The final chapter provides a brief summary and describes future directions for LOT. LOT has the potential to find applications in medical diagnostics, surgical guidance, and in-situ monitoring owing to its sensitivity to absorption and fluorescence contrast as well as its ability to provide depth sensitive measures. Optical techniques can characterize blood volume and oxygenation, two important biological parameters, through measurements at different wavelengths. Fluorescence measurements, either from autofluorescence or fluorescent dyes, have shown promise for identifying and analyzing lesions in various epithelial tissues including skin [2, 3], colon [4], esophagus [5, 6], oral mucosa [7, 8], and cervix [9]. The desire to capture these types of measurements with LOT motivated much of the work presented here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, A; Ding, G
Purpose: The use of image-guided radiation therapy (IGRT) has become increasingly common, but the additional radiation exposure resulting from repeated image guidance procedures raises concerns. Although there are many studies reporting imaging dose from different image guidance devices, imaging dose for the CyberKnife Robotic Radiosurgery System is not available. This study provides estimated organ doses resulting from image guidance procedures on the CyberKnife system. Methods: Commercially available Monte Carlo software, PCXMC, was used to calculate average organ doses resulting from x-ray tubes used in the CyberKnife system. There are seven imaging protocols with kVp ranging from 60 – 120 kVmore » and 15 mAs for treatment sites in the Cranium, Head and Neck, Thorax, and Abdomen. The output of each image protocol was measured at treatment isocenter. For each site and protocol, Adult body sizes ranging from anorexic to extremely obese were simulated since organ dose depends on patient size. Doses for all organs within the imaging field-of-view of each site were calculated for a single image acquisition from both of the orthogonal x-ray tubes. Results: Average organ doses were <1.0 mGy for every treatment site and imaging protocol. For a given organ, dose increases as kV increases or body size decreases. Higher doses are typically reported for skeletal components, such as the skull, ribs, or clavicles, than for softtissue organs. Typical organ doses due to a single exposure are estimated as 0.23 mGy to the brain, 0.29 mGy to the heart, 0.08 mGy to the kidneys, etc., depending on the imaging protocol and site. Conclusion: The organ doses vary with treatment site, imaging protocol and patient size. Although the organ dose from a single image acquisition resulting from two orthogonal beams is generally insignificant, the sum of repeated image acquisitions (>100) could reach 10–20 cGy for a typical treatment fraction.« less
NASA Astrophysics Data System (ADS)
Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.
2006-10-01
In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step to realize automatic surveying and recognition. Traditional matching methods encounter some problems in the digital close-range stereo photogrammetry, because the change of gray-scale or texture is not obvious in the close-range stereo images. The main shortcoming of traditional matching methods is that geometric information of matching points is not fully used, which will lead to wrong matching results in regions with poor texture. To fully use the geometry and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with the traditional matching method, the new algorithm has three improvements on image matching. Firstly, shape factor, fuzzy maths and gray-scale projection are introduced into the design of synthetical matching measure. Secondly, the topology connecting relations of matching points in Delaunay triangulated network and epipolar-line are used to decide matching order and narrow the searching scope of conjugate point of the matching point. Lastly, the theory of parameter adjustment with constraint is introduced into least square image matching to carry out subpixel level matching under epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by digital close-range photogrammetric system. The experimental result shows that the algorithm has a higher matching speed and matching accuracy than pyramid image matching algorithm based on gray-scale correlation.
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm with fully convolutional networks (FCN) in binocular imaging system under various circumstance. Image segmentation is perfectly solved by semantic segmentation. FCN classifies the pixels, so as to achieve the level of image semantic segmentation. Different from the classical convolutional neural networks (CNN), FCN uses convolution layers instead of the fully connected layers. So it can accept image of arbitrary size. In this paper, we combine the convolutional neural network and scale invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images are captured with our calibrated binocular imaging system and several groups of test data are collected to verify this method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4 1.6 m, the distance error is less than 10mm.
Neural networks application to divergence-based passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1992-01-01
The purpose of this report is to summarize the state of knowledge and outline the planned work in divergence-based/neural networks approach to the problem of passive ranging derived from optical flow. Work in this and closely related areas is reviewed in order to provide the necessary background for further developments. New ideas about devising a monocular passive-ranging system are then introduced. It is shown that image-plan divergence is independent of image-plan location with respect to the focus of expansion and of camera maneuvers because it directly measures the object's expansion which, in turn, is related to the time-to-collision. Thus, a divergence-based method has the potential of providing a reliable range complementing other monocular passive-ranging methods which encounter difficulties in image areas close to the focus of expansion. Image-plan divergence can be thought of as some spatial/temporal pattern. A neural network realization was chosen for this task because neural networks have generally performed well in various other pattern recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.
Rezai, Ali R; Finelli, Daniel; Nyenhuis, John A; Hrdlicka, Greg; Tkach, Jean; Sharan, Ashwini; Rugieri, Paul; Stypulkowski, Paul H; Shellock, Frank G
2002-03-01
To assess magnetic resonance imaging (MRI)-related heating for a neurostimulation system (Activa Tremor Control System, Medtronic, Minneapolis, MN) used for chronic deep brain stimulation (DBS). Different configurations were evaluated for bilateral neurostimulators (Soletra Model 7426), extensions, and leads to assess worst-case and clinically relevant positioning scenarios. In vitro testing was performed using a 1.5-T/64-MHz MR system and a gel-filled phantom designed to approximate the head and upper torso of a human subject. MRI was conducted using the transmit/receive body and transmit/receive head radio frequency (RF) coils. Various levels of RF energy were applied with the transmit/receive body (whole-body averaged specific absorption rate (SAR); range, 0.98-3.90 W/kg) and transmit/receive head (whole-body averaged SAR; range, 0.07-0.24 W/kg) coils. A fluoroptic thermometry system was used to record temperatures at multiple locations before (1 minute) and during (15 minutes) MRI. Using the body RF coil, the highest temperature changes ranged from 2.5 degrees-25.3 degrees C. Using the head RF coil, the highest temperature changes ranged from 2.3 degrees-7.1 degrees C.Thus, these findings indicated that substantial heating occurs under certain conditions, while others produce relatively minor, physiologically inconsequential temperature increases. The temperature increases were dependent on the type of RF coil, level of SAR used, and how the lead wires were positioned. Notably, the use of clinically relevant positioning techniques for the neurostimulation system and low SARs commonly used for imaging the brain generated little heating. Based on this information, MR safety guidelines are provided. These observations are restricted to the tested neurostimulation system.
[Experience of Fusion image guided system in endonasal endoscopic surgery].
Wen, Jingying; Zhen, Hongtao; Shi, Lili; Cao, Pingping; Cui, Yonghua
2015-08-01
To review endonasal endoscopic surgeries aided by Fusion image guided system, and to explore the application value of Fusion image guided system in endonasal endoscopic surgeries. Retrospective research. Sixty cases of endonasal endoscopic surgeries aided by Fusion image guided system were analysed including chronic rhinosinusitis with polyp (n = 10), fungus sinusitis (n = 5), endoscopic optic nerve decompression (n = 16), inverted papilloma of the paranasal sinus (n = 9), ossifying fibroma of sphenoid bone (n = 1), malignance of the paranasal sinus (n = 9), cerebrospinal fluid leak (n = 5), hemangioma of orbital apex (n = 2) and orbital reconstruction (n = 3). Sixty cases of endonasal endoscopic surgeries completed successfully without any complications. Fusion image guided system can help to identify the ostium of paranasal sinus, lamina papyracea and skull base. Fused CT-CTA images, or fused MR-MRA images can help to localize the optic nerve or internal carotid arteiy . Fused CT-MR images can help to detect the range of the tumor. It spent (7.13 ± 1.358) minutes for image guided system to do preoperative preparation and the surgical navigation accuracy reached less than 1mm after proficient. There was no device localization problem because of block or head set loosed. Fusion image guided system make endonasal endoscopic surgery to be a true microinvasive and exact surgery. It spends less preoperative preparation time, has high surgical navigation accuracy, improves the surgical safety and reduces the surgical complications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lou, K; Rice University, Houston, TX; Sun, X
Purpose: To study the feasibility of clinical on-line proton beam range verification with PET imaging Methods: We simulated a 179.2-MeV proton beam with 5-mm diameter irradiating a PMMA phantom of human brain size, which was then imaged by a brain PET with 300*300*100-mm{sup 3} FOV and different system sensitivities and spatial resolutions. We calculated the mean and standard deviation of positron activity range (AR) from reconstructed PET images, with respect to different data acquisition times (from 5 sec to 300 sec with 5-sec step). We also developed a technique, “Smoothed Maximum Value (SMV)”, to improve AR measurement under a givenmore » dose. Furthermore, we simulated a human brain irradiated by a 110-MeV proton beam of 50-mm diameter with 0.3-Gy dose at Bragg peak and imaged by the above PET system with 40% system sensitivity at the center of FOV and 1.7-mm spatial resolution. Results: MC Simulations on the PMMA phantom showed that, regardless of PET system sensitivities and spatial resolutions, the accuracy and precision of AR were proportional to the reciprocal of the square root of image count if image smoothing was not applied. With image smoothing or SMV method, the accuracy and precision could be substantially improved. For a cylindrical PMMA phantom (200 mm diameter and 290 mm long), the accuracy and precision of AR measurement could reach 1.0 and 1.7 mm, with 100-sec data acquired by the brain PET. The study with a human brain showed it was feasible to achieve sub-millimeter accuracy and precision of AR measurement with acquisition time within 60 sec. Conclusion: This study established the relationship between count statistics and the accuracy and precision of activity-range verification. It showed the feasibility of clinical on-line BR verification with high-performance PET systems and improved AR measurement techniques. Cancer Prevention and Research Institute of Texas grant RP120326, NIH grant R21CA187717, The Cancer Center Support (Core) Grant CA016672 to MD Anderson Cancer Center.« less
Inverse Tone Mapping Based upon Retina Response
Huo, Yongqing; Yang, Fan; Brost, Vincent
2014-01-01
The development of high dynamic range (HDR) display arouses the research of inverse tone mapping methods, which expand dynamic range of the low dynamic range (LDR) image to match that of HDR monitor. This paper proposed a novel physiological approach, which could avoid artifacts occurred in most existing algorithms. Inspired by the property of the human visual system (HVS), this dynamic range expansion scheme performs with a low computational complexity and a limited number of parameters and obtains high-quality HDR results. Comparisons with three recent algorithms in the literature also show that the proposed method reveals more important image details and produces less contrast loss and distortion. PMID:24744678
Guo, Xiaohu; Dong, Liquan; Zhao, Yuejin; Jia, Wei; Kong, Lingqin; Wu, Yijian; Li, Bing
2015-04-01
Wavefront coding (WFC) technology is adopted in the space optical system to resolve the problem of defocus caused by temperature difference or vibration of satellite motion. According to the theory of WFC, we calculate and optimize the phase mask parameter of the cubic phase mask plate, which is used in an on-axis three-mirror Cassegrain (TMC) telescope system. The simulation analysis and the experimental results indicate that the defocused modulation transfer function curves and the corresponding blurred images have a perfect consistency in the range of 10 times the depth of focus (DOF) of the original TMC system. After digital image processing by a Wiener filter, the spatial resolution of the restored images is up to 57.14 line pairs/mm. The results demonstrate that the WFC technology in the TMC system has superior performance in extending the DOF and less sensitivity to defocus, which has great value in resolving the problem of defocus in the space optical system.
Lim, Byoung-Gyun; Woo, Jea-Choon; Lee, Hee-Young; Kim, Young-Soo
2008-01-01
Synthetic wideband waveforms (SWW) combine a stepped frequency CW waveform and a chirp signal waveform to achieve high range resolution without requiring a large bandwidth or the consequent very high sampling rate. If an efficient algorithm like the range-Doppler algorithm (RDA) is used to acquire the SAR images for synthetic wideband signals, errors occur due to approximations, so the images may not show the best possible result. This paper proposes a modified subpulse SAR processing algorithm for synthetic wideband signals which is based on RDA. An experiment with an automobile-based SAR system showed that the proposed algorithm is quite accurate with a considerable improvement in resolution and quality of the obtained SAR image. PMID:27873984
Development of high energy micro-tomography system at SPring-8
NASA Astrophysics Data System (ADS)
Uesugi, Kentaro; Hoshino, Masato
2017-09-01
A high energy X-ray micro-tomography system has been developed at BL20B2 in SPring-8. The available range of the energy is between 20keV and 113keV with a Si (511) double crystal monochromator. The system enables us to image large or heavy materials such as fossils and metals. The X-ray image detector consists of visible light conversion system and sCMOS camera. The effective pixel size is variable by changing a tandem lens between 6.5 μm/pixel and 25.5 μm/pixel discretely. The format of the camera is 2048 pixels x 2048 pixels. As a demonstration of the system, alkaline battery and a nodule from Bolivia were imaged. A detail of the structure of the battery and a female mold Trilobite were successfully imaged without breaking those fossils.
Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki
2017-11-01
Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose a concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), which is a device that converts spectra information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place the OPCF having the green spectral sensitivity onto the micro-lens array of the conventional light field camera. The OPCF allows us to acquire the green spectra information only at the center viewpoint with the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectra information (red and blue) at multiple viewpoints (sub-aperture images) but with low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at a high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms other previous methods.
An automated digital imaging system for environmental monitoring applications
Bogle, Rian; Velasco, Miguel; Vogel, John
2013-01-01
Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.
Ramanan, B; Holmes, W M; Sloan, W T; Phoenix, V R
2012-01-03
Quantifying nanoparticle (NP) transport inside saturated porous geological media is imperative for understanding their fate in a range of natural and engineered water systems. While most studies focus upon finer grained systems representative of soils and aquifers, very few examine coarse-grained systems representative of riverbeds and gravel based sustainable urban drainage systems. In this study, we investigated the potential of magnetic resonance imaging (MRI) to image transport behaviors of nanoparticles (NPs) through a saturated coarse-grained system. MRI successfully imaged the transport of superparamagnetic NPs, inside a porous column composed of quartz gravel using T(2)-weighted images. A calibration protocol was then used to convert T(2)-weighted images into spatially resolved quantitative concentration maps of NPs at different time intervals. Averaged concentration profiles of NPs clearly illustrates that transport of a positively charged amine-functionalized NP within the column was slower compared to that of a negatively charged carboxyl-functionalized NP, due to electrostatic attraction between positively charged NP and negatively charged quartz grains. Concentration profiles of NPs were then compared with those of a convection-dispersion model to estimate coefficients of dispersivity and retardation. For the amine functionalized NPs (which exhibited inhibited transport), a better model fit was obtained when permanent attachment (deposition) was incorporated into the model as opposed to nonpermanent attachment (retardation). This technology can be used to further explore transport processes of NPs inside coarse-grained porous media, either by using the wide range of commercially available (super)paramagnetically tagged NPs or by using custom-made tagged NPs.
Improving human object recognition performance using video enhancement techniques
NASA Astrophysics Data System (ADS)
Whitman, Lucy S.; Lewis, Colin; Oakley, John P.
2004-12-01
Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principle problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering) then high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low contrast conditions whilst retaining colour content. These systems produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range with some differences between the different enhancement systems.
Wood, T J; Avery, G; Balcam, S; Needler, L; Smith, A; Saunderson, J R; Beavis, A W
2015-01-01
Objective: The aim of this study was to investigate via simulation a proposed change to clinical practice for chest radiography. The validity of using a scatter rejection grid across the diagnostic energy range (60–125 kVp), in conjunction with appropriate tube current–time product (mAs) for imaging with a computed radiography (CR) system was investigated. Methods: A digitally reconstructed radiograph algorithm was used, which was capable of simulating CR chest radiographs with various tube voltages, receptor doses and scatter rejection methods. Four experienced image evaluators graded images with a grid (n = 80) at tube voltages across the diagnostic energy range and varying detector air kermas. These were scored against corresponding images reconstructed without a grid, as per current clinical protocol. Results: For all patients, diagnostic image quality improved with the use of a grid, without the need to increase tube mAs (and therefore patient dose), irrespective of the tube voltage used. Increasing tube mAs by an amount determined by the Bucky factor made little difference to image quality. Conclusion: A virtual clinical trial has been performed with simulated chest CR images. Results indicate that the use of a grid improves diagnostic image quality for average adults, without the need to increase tube mAs, even at low tube voltages. Advances in knowledge: Validated with images containing realistic anatomical noise, it is possible to improve image quality by utilizing grids for chest radiography with CR systems without increasing patient exposure. Increasing tube mAs by an amount determined by the Bucky factor is not justified. PMID:25571914
NASA Astrophysics Data System (ADS)
Siewerdsen, J. H.; Daly, M. J.; Bachar, G.; Moseley, D. J.; Bootsma, G.; Brock, K. K.; Ansell, S.; Wilson, G. A.; Chhabra, S.; Jaffray, D. A.; Irish, J. C.
2007-03-01
High-performance intraoperative imaging is essential to an ever-expanding scope of therapeutic procedures ranging from tumor surgery to interventional radiology. The need for precise visualization of bony and soft-tissue structures with minimal obstruction to the therapy setup presents challenges and opportunities in the development of novel imaging technologies specifically for image-guided procedures. Over the past ~5 years, a mobile C-arm has been modified in collaboration with Siemens Medical Solutions for 3D imaging. Based upon a Siemens PowerMobil, the device includes: a flat-panel detector (Varian PaxScan 4030CB); a motorized orbit; a system for geometric calibration; integration with real-time tracking and navigation (NDI Polaris); and a computer control system for multi-mode fluoroscopy, tomosynthesis, and cone-beam CT. Investigation of 3D imaging performance (noise-equivalent quanta), image quality (human observer studies), and image artifacts (scatter, truncation, and cone-beam artifacts) has driven the development of imaging techniques appropriate to a host of image-guided interventions. Multi-mode functionality presents a valuable spectrum of acquisition techniques: i.) fluoroscopy for real-time 2D guidance; ii.) limited-angle tomosynthesis for fast 3D imaging (e.g., ~10 sec acquisition of coronal slices containing the surgical target); and iii.) fully 3D cone-beam CT (e.g., ~30-60 sec acquisition providing bony and soft-tissue visualization across the field of view). Phantom and cadaver studies clearly indicate the potential for improved surgical performance - up to a factor of 2 increase in challenging surgical target excisions. The C-arm system is currently being deployed in patient protocols ranging from brachytherapy to chest, breast, spine, and head and neck surgery.
Intelligent imaging systems for automotive applications
NASA Astrophysics Data System (ADS)
Thompson, Chris; Huang, Yingping; Fu, Shan
2004-03-01
In common with many other application areas, visual signals are becoming an increasingly important information source for many automotive applications. For several years CCD cameras have been used as research tools for a range of automotive applications. Infrared cameras, RADAR and LIDAR are other types of imaging sensors that have also been widely investigated for use in cars. This paper will describe work in this field performed in C2VIP over the last decade - starting with Night Vision Systems and looking at various other Advanced Driver Assistance Systems. Emerging from this experience, we make the following observations which are crucial for "intelligent" imaging systems: 1. Careful arrangement of sensor array. 2. Dynamic-Self-Calibration. 3. Networking and processing. 4. Fusion with other imaging sensors, both at the image level and the feature level, provides much more flexibility and reliability in complex situations. We will discuss how these problems can be addressed and what are the outstanding issues.
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Lomheim, Terrence S.; Florio, Christopher J.; Harbold, Jeffrey M.; Muto, B. Michael; Schoolar, Richard B.; Wintz, Daniel T.; Keller, Robert A.
2011-10-01
In a previous paper in this series, we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) tool may be used to model space and airborne imaging systems operating in the visible to near-infrared (VISNIR). PICASSO is a systems-level tool, representative of a class of such tools used throughout the remote sensing community. It is capable of modeling systems over a wide range of fidelity, anywhere from conceptual design level (where it can serve as an integral part of the systems engineering process) to as-built hardware (where it can serve as part of the verification process). In the present paper, we extend the discussion of PICASSO to the modeling of Thermal Infrared (TIR) remote sensing systems, presenting the equations and methods necessary to modeling in that regime.
Dual Channel S-Band Frequency Modulated Continuous Wave Through-Wall Radar Imaging
Oh, Daegun; Kim, Sunwoo; Chong, Jong-Wha
2018-01-01
This article deals with the development of a dual channel S-Band frequency-modulated continuous wave (FMCW) system for a through-the-wall imaging (TWRI) system. Most existing TWRI systems using FMCW were developed for synthetic aperture radar (SAR) which has many drawbacks such as the need for several antenna elements and movement of the system. Our implemented TWRI system comprises a transmitting antenna and two receiving antennas, resulting in a significant reduction of the number of antenna elements. Moreover, a proposed algorithm for range-angle-Doppler 3D estimation based on a 3D shift invariant structure is utilized in our implemented dual channel S-band FMCW TWRI system. Indoor and outdoor experiments were conducted to image the scene beyond a wall for water targets and person targets, respectively. The experimental results demonstrate that high-quality imaging can be achieved under both experimental scenarios. PMID:29361777
A REMOTE SENSING AND GIS-ENABLED HIGHWAY ASSET MANAGEMENT SYSTEM PHASE 2
DOT National Transportation Integrated Search
2018-02-02
The objective of this project is to validate the use of commercial remote sensing and spatial information (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile light detection and ranging (LiDAR), image processing algorit...
Simultaneous three wavelength imaging with a scanning laser ophthalmoscope.
Reinholz, F; Ashman, R A; Eikelboom, R H
1999-11-01
Various imaging properties of scanning laser ophthalmoscopes (SLO) such as contrast or depth discrimination, are superior to those of the traditional photographic fundus camera. However, most SLO are monochromatic whereas photographic systems produce colour images, which inherently contain information over a broad wavelength range. An SLO system has been modified to allow simultaneous three channel imaging. Laser light sources in the visible and infrared spectrum were concurrently launched into the system. Using different wavelength triads, digital fundus images were acquired at high frame rates. Favourable wavelengths combinations were established and high contrast, true (red, green, blue) or false (red, green, infrared) colour images of the retina were recorded. The monochromatic frames which form the colour image exhibit improved distinctness of different retinal structures such as the nerve fibre layer, the blood vessels, and the choroid. A multi-channel SLO combines the advantageous imaging properties of a tunable, monochrome SLO with the benefits and convenience of colour ophthalmoscopy. The options to modify parameters such as wavelength, intensity, gain, beam profile, aperture sizes, independently for every channel assign a high degree of versatility to the system. Copyright 1999 Wiley-Liss, Inc.
On the relationships between higher and lower bit-depth system measurements
NASA Astrophysics Data System (ADS)
Burks, Stephen D.; Haefner, David P.; Doe, Joshua M.
2018-04-01
The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the amount of bits available, relative to a fixed dynamic range, will exhibit quantization artifacts. From a measurement standpoint, measurements are desired to be performed at the highest possible bit-depth available. In this correspondence, we described the relationship between higher and lower bit-depth measurements. The limits to which quantization alters the observed measurements will be presented. Specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines to how systems of lower bit-depth should be characterized and the corresponding experimental methods.
Modelling and restoration of ultrasonic phased-array B-scan images.
Ardouin, J P; Venetsanopoulos, A N
1985-10-01
A model is presented for the radio-frequency image produced by a B-scan (pulse-echo) ultrasound imaging system using a phased-array transducer. This type of scanner is widely used for real-time heart imaging. The model allows for dynamic focusing as well as an acoustic lens focusing the beam in the elevation plane. A result of the model is an expression to compute the space-variant point spread function (PSF) of the system. This is made possible by the use of a combination of Fresnel and Fraunhoffer approximations which are valid in the range of interest for practical applications. The PSF is used to design restoration filters in order to improve image resolution. The filters are then applied to experimental images of wires.
Three-dimensional ghost imaging lidar via sparsity constraint
NASA Astrophysics Data System (ADS)
Gong, Wenlin; Zhao, Chengqiang; Yu, Hong; Chen, Mingliang; Xu, Wendong; Han, Shensheng
2016-05-01
Three-dimensional (3D) remote imaging attracts increasing attentions in capturing a target’s characteristics. Although great progress for 3D remote imaging has been made with methods such as scanning imaging lidar and pulsed floodlight-illumination imaging lidar, either the detection range or application mode are limited by present methods. Ghost imaging via sparsity constraint (GISC), enables the reconstruction of a two-dimensional N-pixel image from much fewer than N measurements. By GISC technique and the depth information of targets captured with time-resolved measurements, we report a 3D GISC lidar system and experimentally show that a 3D scene at about 1.0 km range can be stably reconstructed with global measurements even below the Nyquist limit. Compared with existing 3D optical imaging methods, 3D GISC has the capability of both high efficiency in information extraction and high sensitivity in detection. This approach can be generalized in nonvisible wavebands and applied to other 3D imaging areas.
Fast, High-Resolution Terahertz Radar Imaging at 25 Meters
NASA Technical Reports Server (NTRS)
Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Talukder, Ashit; Panangadan, Anand V.; Peay, Chris S.; Siegel, Peter H.
2010-01-01
We report improvements in the scanning speed and standoff range of an ultra-wide bandwidth terahertz (THz) imaging radar for person-borne concealed object detection. Fast beam scanning of the single-transceiver radar is accomplished by rapidly deflecting a flat, light-weight subreflector in a confocal Gregorian optical geometry. With RF back-end improvements also implemented, the radar imaging rate has increased by a factor of about 30 compared to that achieved previously in a 4 m standoff prototype instrument. In addition, a new 100 cm diameter ellipsoidal aluminum reflector yields beam spot diameters of approximately 1 cm over a 50x50 cm field of view at a range of 25 m, although some aberrations are observed that probably arise from misaligned optics. Through-clothes images of a concealed threat at 25 m range, acquired in 5 seconds, are presented, and the impact of reduced signal-to-noise from an even faster frame rate is analyzed. These results inform the system requirements for eventually achieving sub-second or video-rate THz radar imaging.
Geometrical Calibration of the Photo-Spectral System and Digital Maps Retrieval
NASA Astrophysics Data System (ADS)
Bruchkouskaya, S.; Skachkova, A.; Katkovski, L.; Martinov, A.
2013-12-01
Imaging systems for remote sensing of the Earth are required to demonstrate high metric accuracy of the picture which can be provided through preliminary geometrical calibration of optical systems. Being defined as a result of the geometrical calibration, parameters of internal and external orientation of the cameras are needed while solving such problems of image processing, as orthotransformation, geometrical correction, geographical coordinate fixing, scale adjustment and image registration from various channels and cameras, creation of image mosaics of filmed territories, and determination of geometrical characteristics of objects in the images. The geometrical calibration also helps to eliminate image deformations arising due to manufacturing defects and errors in installation of camera elements and photo receiving matrices as well as those resulted from lens distortions. A Photo-Spectral System (PhSS), which is intended for registering reflected radiation spectra of underlying surfaces in a wavelength range from 350 nm to 1050 nm and recording images of high spatial resolution, has been developed at the A.N. Sevchenko Research Institute of Applied Physical Problems of the Belarusian State University. The PhSS has undergone flight tests over the territory of Belarus onboard the Antonov AN-2 aircraft with the aim to obtain visible range images of the underlying surface. Then we performed the geometrical calibration of the PhSS and carried out the correction of images obtained during the flight tests. Furthermore, we have plotted digital maps of the terrain using the stereo pairs of images acquired from the PhSS and evaluated the accuracy of the created maps. Having obtained the calibration parameters, we apply them for correction of the images from another identical PhSS device, which is located at the Russian Orbital Segment of the International Space Station (ROS ISS), aiming to retrieve digital maps of the terrain with higher accuracy.
The design and characterization of a digital optical breast cancer imaging system.
Flexman, Molly L; Li, Yang; Bur, Andres M; Fong, Christopher J; Masciotti, James M; Al Abdi, Rabah; Barbour, Randall L; Hielscher, Andreas H
2008-01-01
Optical imaging has the potential to play a major role in breast cancer screening and diagnosis due to its ability to image cancer characteristics such as angiogenesis and hypoxia. A promising approach to evaluate and quantify these characteristics is to perform dynamic imaging studies in which one monitors the hemodynamic response to an external stimulus, such as a valsalva maneuver. It has been shown that the response to such stimuli shows MARKED differences between cancerous and healthy tissues. The fast imaging rates and large dynamic range of digital devices makes them ideal for this type of imaging studies. Here we present a digital optical tomography system designed specifically for dynamic breast imaging. The instrument uses laser diodes at 4 different near-infrared wavelengths with 32 sources and 128 silicon photodiode detectors.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander
2017-05-01
The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.
Aguirre, Andres; Guo, Puyun; Gamelin, John; Yan, Shikui; Sanders, Mary M.; Brewer, Molly; Zhu, Quing
2009-01-01
Ovarian cancer has the highest mortality of all gynecologic cancers, with a five-year survival rate of only 30% or less. Current imaging techniques are limited in sensitivity and specificity in detecting early stage ovarian cancer prior to its widespread metastasis. New imaging techniques that can provide functional and molecular contrasts are needed to reduce the high mortality of this disease. One such promising technique is photoacoustic imaging. We develop a 1280-element coregistered 3-D ultrasound and photoacoustic imaging system based on a 1.75-D acoustic array. Volumetric images over a scan range of 80 deg in azimuth and 20 deg in elevation can be achieved in minutes. The system has been used to image normal porcine ovarian tissue. This is an important step toward better understanding of ovarian cancer optical properties obtained with photoacoustic techniques. To the best of our knowledge, such data are not available in the literature. We present characterization measurements of the system and compare coregistered ultrasound and photoacoustic images of ovarian tissue to histological images. The results show excellent coregistration of ultrasound and photoacoustic images. Strong optical absorption from vasculature, especially highly vascularized corpora lutea and low absorption from follicles, is demonstrated. PMID:19895116
Remote Sensing of Soils for Environmental Assessment and Management.
NASA Technical Reports Server (NTRS)
DeGloria, Stephen D.; Irons, James R.; West, Larry T.
2014-01-01
The next generation of imaging systems integrated with complex analytical methods will revolutionize the way we inventory and manage soil resources across a wide range of scientific disciplines and application domains. This special issue highlights those systems and methods for the direct benefit of environmental professionals and students who employ imaging and geospatial information for improved understanding, management, and monitoring of soil resources.
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
Modular wide spectrum lighting system for diagnosis, conservation, and restoration
NASA Astrophysics Data System (ADS)
Miccoli, Matteo; Melis, Marcello
2013-05-01
In the framework of imaging, lighting systems have always played a key role due to the primary importance of both the uniformity of the illumination and the richness of the emitted spectra. Multispectral imaging, i.e. imaging systems working inside and outside the visible wavelength range, are even more demanding and require to pay further attention to a number of parameters characterizing the lighting system. A critical issue for lighting systems, even in the visible light, is the shape of the emitted spectra and (only in the visible range) the Color Rendering Index. The color we perceive from a surface is our eyes' interpretation of the linear spectral combination of the illuminant spectrum and the surface spectral reflectance. If there is a lack of energy in a portion of the visible spectrum, that portion will turn into black to our eyes (and to whatever instrument) regardless the actual reflectance of the surface. In other words a lack in the exciting energy hides part of the spectral reflectance of the observed subject. Furthermore, the wider is the investigated spectrum, the fewer are the sources of light able to cover such a range. In this paper we show how we solved both the problems of the not uniformity of the light beam, independently on the incident angle, and of the selection of a light source with energy rich and continuous enough emitted spectrum.
NASA Astrophysics Data System (ADS)
Leader, Joseph K.; Chough, Denise; Clearfield, Ronald J.; Ganott, Marie A.; Hakim, Christiane; Hardesty, Lara; Shindel, Betty; Sumkin, Jules H.; Drescher, John M.; Maitz, Glenn S.; Gur, David
2005-04-01
Radiologists' performance reviewing and rating breast cancer screening mammography exams using a telemammography system was evaluated and compared with the actual clinical interpretations of the same interpretations. Mammography technologists from three remote imaging sites transmitted 245 exams to a central site (radiologists), which they (the technologists) believed needed additional procedures (termed "recall"). Current exam image data and non-image data (i.e., technologist's text message, technologist's graphic marks, patient's prior report, and Computer Aided Detection (CAD) results) were transmitted to the central site and displayed on three high-resolution, portrait monitors. Seven radiologists interpreted ("recall" or "no recall") the exams using the telemammography workstation in three separate multi-mode studies. The mean telemammography recall rates ranged from 72.3% to 82.5% while the actual clinical recall rates ranged from 38.4% to 42.3% across the three studies. Mean Kappa of agreement ranged from 0.102 to 0.213 and mean percent agreement ranged from 48.7% to 57.4% across the three studies. Eighty-seven percent of the disagreement interpretations occurred when the telemammography interpretation resulted in a recommendation to recall and the clinical interpretation resulted in a recommendation not to recall. The poor agreement between the telemammography and clinical interpretations may indicate a critical dependence on images from prior screening exams rather than any text based information. The technologists were sensitive, if not specific, to the mammography features and changes that may lead to recall. Using the telemammography system the radiologists were able to reduce the recommended recalls by the technologist by approximately 25 percent.
Evaluating Thin Compression Paddles for Mammographically Compatible Ultrasound
Booi, Rebecca C.; Krücker, Jochen F.; Goodsitt, Mitchell M.; O’Donnell, Matthew; Kapur, Ajay; LeCarpentier, Gerald L.; Roubidoux, Marilyn A.; Fowlkes, J. Brian; Carson, Paul L.
2007-01-01
We are developing a combined digital mammography/3D ultrasound system to improve detection and/or characterization of breast lesions. Ultrasound scanning through a mammographic paddle could significantly reduce signal level, degrade beam focusing, and create reverberations. Thus, appropriate paddle choice is essential for accurate sonographic lesion detection and assessment with this system. In this study, we characterized ultrasound image quality through paddles of varying materials (lexan, polyurethane, TPX, mylar) and thicknesses (0.25–2.5 mm). Analytical experiments focused on lexan and TPX, which preliminary results demonstrated were most competitive. Spatial and contrast resolution, sidelobe and range lobe levels, contrast and signal strength were compared with no-paddle images. When the beamforming of the system was corrected to account for imaging through the paddle, the TPX 2.5 mm paddle performed the best. Test objects imaged through this paddle demonstrated ≤ 15% reduction in spatial resolution, ≤ 7.5 dB signal loss, ≤ 3 dB contrast loss, and range lobe levels ≥ 35 dB below signal maximum over 4 cm. TPX paddles < 2.5 mm could also be used with this system, depending on imaging goals. In 10 human subjects with cysts, small CNR losses were observed but were determined to be statistically insignificant. Radiologists concluded that 75% of cysts in through-paddle scans were at least as detectable as in their corresponding direct-contact scans. (Email: rbooi@umich.edu) PMID:17280765
NASA Astrophysics Data System (ADS)
Quirin, Sean Albert
The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF which was engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single-molecules and micro-tubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for application to quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical-efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive-ranging, extended Depth-of-Field imaging and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs---a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, where only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically-efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing using a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated and the mechanism for additional information transfer is identified.
Wavefront Derived Refraction and Full Eye Biometry in Pseudophakic Eyes
Mao, Xinjie; Banta, James T.; Ke, Bilian; Jiang, Hong; He, Jichang; Liu, Che; Wang, Jianhua
2016-01-01
Purpose To assess wavefront derived refraction and full eye biometry including ciliary muscle dimension and full eye axial geometry in pseudophakic eyes using spectral domain OCT equipped with a Shack-Hartmann wavefront sensor. Methods Twenty-eight adult subjects (32 pseudophakic eyes) having recently undergone cataract surgery were enrolled in this study. A custom system combining two optical coherence tomography systems with a Shack-Hartmann wavefront sensor was constructed to image and monitor changes in whole eye biometry, the ciliary muscle and ocular aberration in the pseudophakic eye. A Badal optical channel and a visual target aligning with the wavefront sensor were incorporated into the system for measuring the wavefront-derived refraction. The imaging acquisition was performed twice. The coefficients of repeatability (CoR) and intraclass correlation coefficient (ICC) were calculated. Results Images were acquired and processed successfully in all patients. No significant difference was detected between repeated measurements of ciliary muscle dimension, full-eye biometry or defocus aberration. The CoR of full-eye biometry ranged from 0.36% to 3.04% and the ICC ranged from 0.981 to 0.999. The CoR for ciliary muscle dimensions ranged from 12.2% to 41.6% and the ICC ranged from 0.767 to 0.919. The defocus aberrations of the two measurements were 0.443 ± 0.534 D and 0.447 ± 0.586 D and the ICC was 0.951. Conclusions The combined system is capable of measuring full eye biometry and refraction with good repeatability. The system is suitable for future investigation of pseudoaccommodation in the pseudophakic eye. PMID:27010674
Wavefront Derived Refraction and Full Eye Biometry in Pseudophakic Eyes.
Mao, Xinjie; Banta, James T; Ke, Bilian; Jiang, Hong; He, Jichang; Liu, Che; Wang, Jianhua
2016-01-01
To assess wavefront derived refraction and full eye biometry including ciliary muscle dimension and full eye axial geometry in pseudophakic eyes using spectral domain OCT equipped with a Shack-Hartmann wavefront sensor. Twenty-eight adult subjects (32 pseudophakic eyes) having recently undergone cataract surgery were enrolled in this study. A custom system combining two optical coherence tomography systems with a Shack-Hartmann wavefront sensor was constructed to image and monitor changes in whole eye biometry, the ciliary muscle and ocular aberration in the pseudophakic eye. A Badal optical channel and a visual target aligning with the wavefront sensor were incorporated into the system for measuring the wavefront-derived refraction. The imaging acquisition was performed twice. The coefficients of repeatability (CoR) and intraclass correlation coefficient (ICC) were calculated. Images were acquired and processed successfully in all patients. No significant difference was detected between repeated measurements of ciliary muscle dimension, full-eye biometry or defocus aberration. The CoR of full-eye biometry ranged from 0.36% to 3.04% and the ICC ranged from 0.981 to 0.999. The CoR for ciliary muscle dimensions ranged from 12.2% to 41.6% and the ICC ranged from 0.767 to 0.919. The defocus aberrations of the two measurements were 0.443 ± 0.534 D and 0.447 ± 0.586 D and the ICC was 0.951. The combined system is capable of measuring full eye biometry and refraction with good repeatability. The system is suitable for future investigation of pseudoaccommodation in the pseudophakic eye.
A design of optical modulation system with pixel-level modulation accuracy
NASA Astrophysics Data System (ADS)
Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu
2018-01-01
Vision measurement has been widely used in the field of dimensional measurement and surface metrology. However, traditional methods of vision measurement have many limits such as low dynamic range and poor reconfigurability. The optical modulation system before image formation has the advantage of high dynamic range, high accuracy and more flexibility, and the modulation accuracy is the key parameter which determines the accuracy and effectiveness of optical modulation system. In this paper, an optical modulation system with pixel level accuracy is designed and built based on multi-points reflective imaging theory and digital micromirror device (DMD). The system consisted of digital micromirror device, CCD camera and lens. Firstly we achieved accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels by moire fringe and an image processing of sampling and interpolation. Then we built three coordinate systems and calculated the mathematic relationship between the coordinate of digital micro-mirror and CCD pixels using a checkerboard pattern. A verification experiment proves that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of system meets the requirements of modulation. Furthermore, the high reflecting edge of a metal circular piece can be detected using the system, which proves the effectiveness of the optical modulation system.
Optics design for J-TEXT ECE imaging with field curvature adjustment lens.
Zhu, Y; Zhao, Z; Liu, W D; Xie, J; Hu, X; Muscatello, C M; Domier, C W; Luhmann, N C; Chen, M; Ren, X; Tobias, B J; Zhuang, G; Yang, Z
2014-11-01
Significant progress has been made in the imaging and visualization of magnetohydrodynamic and microturbulence phenomena in magnetic fusion plasmas. Of particular importance has been microwave electron cyclotron emission imaging (ECEI) for imaging Te fluctuations. Key to the success of ECEI is a large Gaussian optics system constituting a major portion of the focusing of the microwave radiation from the plasma to the detector array. Both the spatial resolution and observation range are dependent upon the imaging optics system performance. In particular, it is critical that the field curvature on the image plane is reduced to decrease crosstalk between vertical channels. The receiver optics systems for two ECEI on the J-TEXT device have been designed to ameliorate these problems and provide good performance with additional field curvature adjustment lenses with a meniscus shape to correct the aberrations from several spherical surfaces.
An embedded processor for real-time atmoshperic compensation
NASA Astrophysics Data System (ADS)
Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.
2009-05-01
Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a light-weight, lower-power image processing system that improves the quality of long-range imagery in real-time, and uses modular video I/O to provide a flexible interface to most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form-factor that is compact, low-power, and field-deployable.
Xie, Xiaoliang Sunney; Freudiger, Christian; Min, Wei
2016-03-15
A microscopy imaging system is disclosed that includes a light source system, a spectral shaper, a modulator system, an optics system, an optical detector and a processor. The light source system is for providing a first train of pulses and a second train of pulses. The spectral shaper is for spectrally modifying an optical property of at least some frequency components of the broadband range of frequency components such that the broadband range of frequency components is shaped producing a shaped first train of pulses to specifically probe a spectral feature of interest from a sample, and to reduce information from features that are not of interest from the sample. The modulator system is for modulating a property of at least one of the shaped first train of pulses and the second train of pulses at a modulation frequency. The optical detector is for detecting an integrated intensity of substantially all optical frequency components of a train of pulses of interest transmitted or reflected through the common focal volume. The processor is for detecting a modulation at the modulation frequency of the integrated intensity of substantially all of the optical frequency components of the train of pulses of interest due to the non-linear interaction of the shaped first train of pulses with the second train of pulses as modulated in the common focal volume, and for providing an output signal for a pixel of an image for the microscopy imaging system.
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
NASA Technical Reports Server (NTRS)
Friedman, J. D.; Frank, D. G.; Preble, D.; Painter, J. E.
1973-01-01
A combination of infrared images depicting areas of thermal emission and ground calibration points have proved to be particularly useful in plotting time-dependent changes in surface temperatures and radiance and in delimiting areas of predominantly convective heat flow to the earth's surface in the Cascade Range and on Surtsey Volcano, Iceland. In an integrated experiment group using ERTS-1 multispectral scanner (MSS) and aircraft infrared imaging systems in conjunction with multiple thermistor arrays, volcano surface temperatures are relayed daily to Washington via data communication platform (DCP) transmitters and ERTS-1. ERTS-1 MSS imagery has revealed curvilinear structures at Lassen, the full extent of which have not been previously mapped. Interestingly, the major surface thermal manifestations at Lassen are aligned along these structures, particularly in the Warner Valley.
The development of a specialized processor for a space-based multispectral earth imager
NASA Astrophysics Data System (ADS)
Khedr, Mostafa E.
2008-10-01
This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite MultiSpectral earth imager. This computer system is intended for satellites with resolution in the range of one meter with 12-bit precession. The design is based mostly on general off-the-shelf components such as (FPGAs) plus custom designed software for interfacing with PC and test equipment. The designed system was successfully manufactured and now fully functioning in orbit.
Analysis of background irradiation in thermal IR hyper-spectral imaging systems
NASA Astrophysics Data System (ADS)
Xu, Weiming; Yuan, Liyin; Lin, Ying; He, Zhiping; Shu, Rong; Wang, Jianyu
2010-04-01
Our group designed a thermal IR hyper-spectral imaging system in this paper mounted in a vacuum encapsulated cavity with temperature controlling equipments. The spectral resolution is 80 nm; the spatial resolution is 1.0 mrad; the spectral channels are 32. By comparing and verifying the theoretical simulated calculation and experimental results for this system, we obtained the precise relationship between the temperature and background irradiation of optical and mechanical structures, and found the most significant components in the optic path for improving imaging quality that should be traded especially, also we had a conclusion that it should cool the imaging optics and structures to about 100K if we need utilize the full dynamic range and capture high quality of imagery.
A Review of Aeromagnetic Anomalies in the Sawatch Range, Central Colorado
Bankey, Viki
2010-01-01
This report contains digital data and image files of aeromagnetic anomalies in the Sawatch Range of central Colorado. The primary product is a data layer of polygons with linked data records that summarize previous interpretations of aeromagnetic anomalies in this region. None of these data files and images are new; rather, they are presented in updated formats that are intended to be used as input to geographic information systems, standard graphics software, or map-plotting packages.
Satellite on-board real-time SAR processor prototype
NASA Astrophysics Data System (ADS)
Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François
2017-11-01
A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically applying dedicated Fourier transformations. This, however, can also be performed optically in real-time. Originally the first SAR images were optically processed. The optical Fourier processor architecture provides inherent parallel computing capabilities allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel yielding real-time performances, i.e. without resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.
Digital radiographic imaging: is the dental practice ready?
Parks, Edwin T
2008-04-01
Digital radiographic imaging is slowly, but surely, replacing film-based imaging. It has many advantages over traditional imaging, but the technology also has some drawbacks. The author presents an overview of the types of digital image receptors available, image enhancement software and the range of costs for the new technology. PRACTICE IMPLICATIONS. The expenses associated with converting to digital radiographic imaging are considerable. The purpose of this article is to provide the clinician with an overview of digital radiographic imaging technology so that he or she can be an informed consumer when evaluating the numerous digital systems in the marketplace.
Structural and Functional Biomedical Imaging Using Polarization-Based Optical Coherence Tomography
NASA Astrophysics Data System (ADS)
Black, Adam J.
Biomedical imaging has had an enormous impact in medicine and research. There are numerous imaging modalities covering a large range of spatial and temporal scales, penetration depths, along with indicators for function and disease. As these imaging technologies mature, the quality of the images they produce increases to resolve finer details with greater contrast at higher speeds which aids in a faster, more accurate diagnosis in the clinic. In this dissertation, polarization-based optical coherence tomography (OCT) systems are used and developed to image biological structure and function with greater speeds, signal-to-noise (SNR) and stability. OCT can image with spatial and temporal resolutions in the micro range. When imaging any sample, feedback is very important to verify the fidelity and desired location on the sample being imaged. To increase frame rates for display as well as data throughput, field-programmable gate arrays (FPGAs) were used with custom algorithms to realize real-time display and streaming output for continuous acquisition of large datasets of swept-source OCT systems. For spectral domain (SD) OCT systems, significant increases in signal-to-noise ratios were achieved from a custom balanced detection (BD) OCT system. The BD system doubled measured signals while reducing common term. For functional imaging, a real-time directed scanner was introduced to visualize the 3D image of a sample to identify regions of interest prior to recording. Elucidating the characteristics of functional OCT signals with the aid of simulations, novel processing methods were also developed to stabilize samples being imaged and identify possible origins of functional signals being measured. Polarization-sensitive OCT was used to image cardiac tissue before and after clearing to identify the regions of vascular perfusion from a coronary artery. The resulting 3D image provides a visualization of the perfusion boundaries for the tissue that would be damaged from a myocardial infarction to possibly identity features that lead to fatal cardiac arrhythmias. 3D functional imaging was used to measure functional retinal activity from a light stimulus. In some cases, single trial responses were possible; measured at the outer segment of the photoreceptor layer. The morphology and time-course of these signals are similar to the intrinsic optical signals reported from phototransduction. Assessing function in the retina could aid in early detection of degenerative diseases of the retina, such as glaucoma and macular degeneration.
Imaging of laboratory magnetospheric plasmas using coherence imaging technique
NASA Astrophysics Data System (ADS)
Nishiura, Masaki; Takahashi, Noriki; Yoshida, Zensho; Nakamura, Kaori; Kawazura, Yohei; Kenmochi, Naoki; Nakatsuka, Masataka; Sugata, Tetsuya; Katsura, Shotaro; Howard, John
2017-10-01
The ring trap 1 (RT-1) device creates a laboratory magnetosphere for the studies on plasma physics and advanced nuclear fusion. A levitated superconducting coil produces magnetic dipole fields that realize a high beta plasma confinement that is motivated by self-organized plasmas in planetary magnetospheres. The electron cyclotron resonance heating (ECRH) with 8.2 GHz and 50 kW produces the plasmas with hot electrons in a few ten keV range. The electrons contribute to the local electron beta that exceeded 1 in RT-1. For the ion heating, ion cyclotron range of frequencies (ICRF) heating with 2-4 MHz and 10 kW has been performed in RT-1. The radial profile of ion temperature by a spectroscopic measurement indicates the signature of ion heating. In the holistic point of view, a coherence imaging system has been implemented for imaging the entire ion dynamics in the laboratory magnetosphere. The diagnostic system and obtained results will be presented.
Dedicated low-field MRI in mice
NASA Astrophysics Data System (ADS)
Choquet, P.; Breton, E.; Goetz, C.; Marin, C.; Constantinesco, A.
2009-09-01
The rationale of this work is to point out the relevance of in vivo MR images of mice obtained using a dedicated low-field system. For this purpose a small 0.1 T water-cooled electro-magnet and solenoidal radio frequency (RF) transmit-receive coils were used. All MR images were acquired in three-dimensional (3D) mode. An isolation cell was designed allowing easy placement of the RF coils and simple delivery of gaseous anesthesia as well as warming of the animal. Images with and without contrast agent were obtained in total acquisition times on the order of half an hour to four hours on normal mice as well as on animals bearing tumors. Typical in plane pixel dimensions range from 200 × 200 to 500 × 500 µm2 with slice thicknesses ranging between 0.65 and 1.50 mm. This work shows that, besides light installation and low cost, dedicated low-field MR systems are suitable for small rodents imaging, opening this technique even to small research units.
NASA Astrophysics Data System (ADS)
Trott, Wayne M.; Knudson, Marcus D.; Chhabildas, Lalit C.; Asay, James R.
2000-04-01
Relatively straightforward changes in the design of a conventional optically recording velocity interferometer system (ORVIS) can be used to produce a line-imaging instrument that allows adjustment of spatial resolution over a wide range. As a result, line-imaging ORVIS can be tailored to various specific applications involving dynamic deformation of heterogeneous materials as required by their characteristic length scales (ranging from a few μm for ferroelectric ceramics to a few mm for concrete). A line-imaging system has been successfully interfaced to a compressed gas gun driver and fielded on numerous tests in combination with simultaneous dual delay-leg, "push-pull" VISAR measurements. These tests include shock loading of glass-reinforced polyester composites, foam reverberation experiments (measurements at the free surface of a thin aluminum plate impacted by foam), and measurements of dispersive velocity in a shock-loaded explosive simulant (sugar). Results are presented that illustrate the capability for recording detailed spatially resolved material response.
NASA Astrophysics Data System (ADS)
Murakoshi, Dai; Hirota, Kazuhiro; Ishii, Hiroyasu; Hashimoto, Atsushi; Ebata, Tetsurou; Irisawa, Kaku; Wada, Takatsugu; Hayakawa, Toshiro; Itoh, Kenji; Ishihara, Miya
2018-02-01
Photoacoustic (PA) imaging technology is expected to be applied to clinical assessment for peripheral vascularity. We started a clinical evaluation with the prototype PA imaging system we recently developed. Prototype PA imaging system was composed with in-house Q-switched Alexandrite laser system which emits short-pulsed laser with 750 nm wavelength, handheld ultrasound transducer where illumination optics were integrated and signal processing for PA image reconstruction implemented in the clinical ultrasound (US) system. For the purpose of quantitative assessment of PA images, an image analyzing function has been developed and applied to clinical PA images. In this analyzing function, vascularity derived from PA signal intensity ranged for prescribed threshold was defined as a numerical index of vessel fulfillment and calculated for the prescribed region of interest (ROI). Skin surface was automatically detected by utilizing B-mode image acquired simultaneously with PA image. Skinsurface position is utilized to place the ROI objectively while avoiding unwanted signals such as artifacts which were imposed due to melanin pigment in the epidermal layer which absorbs laser emission and generates strong PA signals. Multiple images were available to support the scanned image set for 3D viewing. PA images for several fingers of patients with systemic sclerosis (SSc) were quantitatively assessed. Since the artifact region is trimmed off in PA images, the visibility of vessels with rather low PA signal intensity on the 3D projection image was enhanced and the reliability of the quantitative analysis was improved.
Fiji: an open-source platform for biological-image analysis.
Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert
2012-06-28
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
Enhancing swimming pool safety by the use of range-imaging cameras
NASA Astrophysics Data System (ADS)
Geerardyn, D.; Boulanger, S.; Kuijk, M.
2015-05-01
Drowning is the cause of death of 372.000 people, each year worldwide, according to the report of November 2014 of the World Health Organization.1 Currently, most swimming pools only use lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range imaging cameras mounted on the ceiling of the swimming pool, allowing to distinguish swimmers at the surface from drowning people underwater, while keeping the large field-of-view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not a flat, but mostly rippled surface, and that the water is transparent for visible light, but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light of 785 nm. The measured distances between the camera and the object are influenced through the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of the reflections of a partially-reflecting surface. The combination of a post image-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve the current commercial cameras. As a result, we can conclude that low-cost range imagers can increase swimming pool safety, by inserting a post-processing filter and the use of another light source.
A contribution to laser range imaging technology
NASA Technical Reports Server (NTRS)
Defigueiredo, Rui J. P.; Denney, Bradley S.
1991-01-01
The goal of the project was to develop a methodology for fusion of a Laser Range Imaging Device (LRID) and camera data. Our initial work in the project led to the conclusion that none of the LRID's that were available were sufficiently adequate for this purpose. Thus we spent the time and effort on the development of the new LRID with several novel features which elicit the desired fusion objectives. In what follows, we describe the device developed and built under contract. The Laser Range Imaging Device (LRID) is an instrument which scans a scene using a laser and returns range and reflection intensity data. Such a system would be extremely useful in scene analysis in industry and space applications. The LRID will be eventually implemented on board a mobile robot. The current system has several advantages over some commercially available systems. One improvement is the use of X-Y galvonometer scanning mirrors instead of polygonal mirrors present in some systems. The advantage of the X-Y scanning mirrors is that the mirror system can be programmed to provide adjustable scanning regions. For each mirror there are two controls accessible by the computer. The first is the mirror position and the second is a zoom factor which modifies the amplitude of the position of the parameter. Another advantage of the LRID is the use of a visible low power laser. Some of the commercial systems use a higher intensity invisible laser which causes safety concerns. By using a low power visible laser, not only can one see the beam and avoid direct eye contact, but also the lower intensity reduces the risk of damage to the eye, and no protective eyeware is required.
Venus - Comparison of Venera and Magellan Resolutions
1996-09-26
These radar images show an identical area on Venus (centered at 110 degrees longitude and 64 degrees north latitude) as imaged by the U.S. NASA Magellan spacecraft in 1991 (left) and the U.S.S.R. Venera 15/16 spacecraft in the early 1980's (right). Illumination is from the left (or west) in the Magellan image (left) and from the right (or east) in the Venera image (right). Differences in apparent shading in the images are due to differences in the two radar imaging systems. Prior to Magellan, the Venera 15/16 data was the best available for scientists studying Venus. Much greater detail is visible in the Magellan image owing to the greater resolution of the Magellan radar system. In the area seen here, approximately 200 small volcanoes, ranging in diameter from 2 to 12 kilometers (1.2 to 7.4 miles) can be identified. These volcanoes were first identified as small hills in Venera 15/16 images and were predicted to be shield-type volcanoes constructed mainly from eruptions of fluid lava flows similar to those that produce the Hawaiian Islands and sea floor volcanoes - a prediction that was confirmed by Magellan. These small shield-type volcanoes are the most abundant geologic feature on the surface of Venus, believed to number in the hundreds of thousands, perhaps millions, and are important evidence in understanding the geologic evolution of the planet. The only other planet in our Solar System with this large number of volcanoes is Earth. Clearly visible in the Magellan image are details of volcano morphology, such as variation in slope, the occurrence and size range of summit craters, and geologic age relationships between adjacent volcanoes, as well as additional volcanoes that were not identifiable in the Venera image. http://photojournal.jpl.nasa.gov/catalog/PIA00465
Zhao, C; Vassiljev, N; Konstantinidis, A C; Speller, R D; Kanicki, J
2017-03-07
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm -1 ) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
NASA Astrophysics Data System (ADS)
Zhao, C.; Vassiljev, N.; Konstantinidis, A. C.; Speller, R. D.; Kanicki, J.
2017-03-01
High-resolution, low-noise x-ray detectors based on the complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) technology have been developed and proposed for digital breast tomosynthesis (DBT). In this study, we evaluated the three-dimensional (3D) imaging performance of a 50 µm pixel pitch CMOS APS x-ray detector named DynAMITe (Dynamic Range Adjustable for Medical Imaging Technology). The two-dimensional (2D) angle-dependent modulation transfer function (MTF), normalized noise power spectrum (NNPS), and detective quantum efficiency (DQE) were experimentally characterized and modeled using the cascaded system analysis at oblique incident angles up to 30°. The cascaded system model was extended to the 3D spatial frequency space in combination with the filtered back-projection (FBP) reconstruction method to calculate the 3D and in-plane MTF, NNPS and DQE parameters. The results demonstrate that the beam obliquity blurs the 2D MTF and DQE in the high spatial frequency range. However, this effect can be eliminated after FBP image reconstruction. In addition, impacts of the image acquisition geometry and detector parameters were evaluated using the 3D cascaded system analysis for DBT. The result shows that a wider projection angle range (e.g. ±30°) improves the low spatial frequency (below 5 mm-1) performance of the CMOS APS detector. In addition, to maintain a high spatial resolution for DBT, a focal spot size of smaller than 0.3 mm should be used. Theoretical analysis suggests that a pixelated scintillator in combination with the 50 µm pixel pitch CMOS APS detector could further improve the 3D image resolution. Finally, the 3D imaging performance of the CMOS APS and an indirect amorphous silicon (a-Si:H) thin-film transistor (TFT) passive pixel sensor (PPS) detector was simulated and compared.
Evaluation of Sun Glint Correction Algorithms for High-Spatial Resolution Hyperspectral Imagery
2012-09-01
ACRONYMS AND ABBREVIATIONS AISA Airborne Imaging Spectrometer for Applications AVIRIS Airborne Visible/Infrared Imaging Spectrometer BIL Band...sensor bracket mount combining Airborne Imaging Spectrometer for Applications ( AISA ) Eagle and Hawk sensors into a single imaging system (SpecTIR 2011...The AISA Eagle is a VNIR sensor with a wavelength range of approximately 400–970 nm and the AISA Hawk sensor is a SWIR sensor with a wavelength
Space Shuttle Columbia views the world with imaging radar: The SIR-A experiment
NASA Technical Reports Server (NTRS)
Ford, J. P.; Cimino, J. B.; Elachi, C.
1983-01-01
Images acquired by the Shuttle Imaging Radar (SIR-A) in November 1981, demonstrate the capability of this microwave remote sensor system to perceive and map a wide range of different surface features around the Earth. A selection of 60 scenes displays this capability with respect to Earth resources - geology, hydrology, agriculture, forest cover, ocean surface features, and prominent man-made structures. The combined area covered by the scenes presented amounts to about 3% of the total acquired. Most of the SIR-A images are accompanied by a LANDSAT multispectral scanner (MSS) or SEASAT synthetic-aperture radar (SAR) image of the same scene for comparison. Differences between the SIR-A image and its companion LANDSAT or SEASAT image at each scene are related to the characteristics of the respective imaging systems, and to seasonal or other changes that occurred in the time interval between acquisition of the images.
Small convolution kernels for high-fidelity image restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1991-01-01
An algorithm is developed for computing the mean-square-optimal values for small, image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity, that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
Objective for EUV microscopy, EUV lithography, and x-ray imaging
Bitter, Manfred; Hill, Kenneth W.; Efthimion, Philip
2016-05-03
Disclosed is an imaging apparatus for EUV spectroscopy, EUV microscopy, EUV lithography, and x-ray imaging. This new imaging apparatus could, in particular, make significant contributions to EUV lithography at wavelengths in the range from 10 to 15 nm, which is presently being developed for the manufacturing of the next-generation integrated circuits. The disclosure provides a novel adjustable imaging apparatus that allows for the production of stigmatic images in x-ray imaging, EUV imaging, and EUVL. The imaging apparatus of the present invention incorporates additional properties compared to previously described objectives. The use of a pair of spherical reflectors containing a concave and convex arrangement has been applied to a EUV imaging system to allow for the image and optics to all be placed on the same side of a vacuum chamber. Additionally, the two spherical reflector segments previously described have been replaced by two full spheres or, more precisely, two spherical annuli, so that the total photon throughput is largely increased. Finally, the range of permissible Bragg angles and possible magnifications of the objective has been largely increased.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Sieno, Laura, E-mail: laura.disieno@polimi.it; Dalla Mora, Alberto; Contini, Davide
2016-03-15
We present a system for non-contact time-resolved diffuse reflectance imaging, based on small source-detector distance and high dynamic range measurements utilizing a fast-gated single-photon avalanche diode. The system is suitable for imaging of diffusive media without any contact with the sample and with a spatial resolution of about 1 cm at 1 cm depth. In order to objectively assess its performances, we adopted two standardized protocols developed for time-domain brain imagers. The related tests included the recording of the instrument response function of the setup and the responsivity of its detection system. Moreover, by using liquid turbid phantoms with absorbingmore » inclusions, depth-dependent contrast and contrast-to-noise ratio as well as lateral spatial resolution were measured. To illustrate the potentialities of the novel approach, the characteristics of the non-contact system are discussed and compared to those of a fiber-based brain imager.« less
Multibeam synthetic aperture radar for global oceanography
NASA Technical Reports Server (NTRS)
Jain, A.
1979-01-01
A single-frequency multibeam synthetic aperture radar concept for large swath imaging desired for global oceanography is evaluated. Each beam iilluminates a separate range and azimuth interval, and images for different beams may be separated on the basis of the Doppler spectrum of the beams or their spatial azimuth separation in the image plane of the radar processor. The azimuth resolution of the radar system is selected so that the Doppler spectrum of each beam does not interfere with the Doppler foldover due to the finite pulse repetition frequency of the radar system.
Image synthesis for SAR system, calibration and processor design
NASA Technical Reports Server (NTRS)
Holtzman, J. C.; Abbott, J. L.; Kaupp, V. H.; Frost, V. S.
1978-01-01
The Point Scattering Method of simulating radar imagery rigorously models all aspects of the imaging radar phenomena. Its computational algorithms operate on a symbolic representation of the terrain test site to calculate such parameters as range, angle of incidence, resolution cell size, etc. Empirical backscatter data and elevation data are utilized to model the terrain. Additionally, the important geometrical/propagation effects such as shadow, foreshortening, layover, and local angle of incidence are rigorously treated. Applications of radar image simulation to a proposed calibrated SAR system are highlighted: soil moisture detection and vegetation discrimination.
Long-wavelength optical coherence tomography at 1.7 µm for enhanced imaging depth
Sharma, Utkarsh; Chang, Ernest W.; Yun, Seok H.
2009-01-01
Multiple scattering in a sample presents a significant limitation to achieve meaningful structural information at deeper penetration depths in optical coherence tomography (OCT). Previous studies suggest that the spectral region around 1.7 µm may exhibit reduced scattering coefficients in biological tissues compared to the widely used wavelengths around 1.3 µm. To investigate this long-wavelength region, we developed a wavelength-swept laser at 1.7 µm wavelength and conducted OCT or optical frequency domain imaging (OFDI) for the first time in this spectral range. The constructed laser is capable of providing a wide tuning range from 1.59 to 1.75 µm over 160 nm. When the laser was operated with a reduced tuning range over 95 nm at a repetition rate of 10.9 kHz and an average output power of 12.3 mW, the OFDI imaging system exhibited a sensitivity of about 100 dB and axial and lateral resolution of 24 µm and 14 µm, respectively. We imaged several phantom and biological samples using 1.3 µm and 1.7 µm OFDI systems and found that the depth-dependent signal decay rate is substantially lower at 1.7 µm wavelength in most, if not all samples. Our results suggest that this imaging window may offer an advantage over shorter wavelengths by increasing the penetration depths as well as enhancing image contrast at deeper penetration depths where otherwise multiple scattered photons dominate over ballistic photons. PMID:19030057
NASA Astrophysics Data System (ADS)
Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung
2016-09-01
Recently, significant effort has been spent on the development of photons counting detector (PCD) based on a CdTe for applications in X-ray imaging system. The motivation of developing PCDs is higher image quality. Especially, the K-edge subtraction (KES) imaging technique using a PCD is able to improve image quality and useful for increasing the contrast resolution of a target material by utilizing contrast agent. Based on above-mentioned technique, we presented an idea for an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on the PCDs can be realized by using different subtraction energy width of the energy window. In this study, the effects of the KELS imaging technique and subtraction energy width of the energy window was investigated with respect to the contrast, standard deviation, and CNR with a Monte Carlo simulation. We simulated the PCD X-ray imaging system based on a CdTe and polymethylmethacrylate (PMMA) phantom which consists of the various iodine contrast agents. To acquired KELS images, images of the phantom using above and below the iodine contrast agent K-edge absorption energy (33.2 keV) have been acquired at different energy range. According to the results, the contrast and standard deviation were decreased, when subtraction energy width of the energy window is increased. Also, the CNR using a KELS imaging technique is higher than that of the images acquired by using whole energy range. Especially, the maximum differences of CNR between whole energy range and KELS images using a 1, 2, and 3 mm diameter iodine contrast agent were acquired 11.33, 8.73, and 8.29 times, respectively. Additionally, the optimum subtraction energy width of the energy window can be acquired at 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agent, respectively. In conclusion, we successfully established an improved KELS imaging technique and optimized subtraction energy width of the energy window, and based on our results, we recommend using this technique for high image quality.
NASA Technical Reports Server (NTRS)
Chen, CHIEN-C.; Hui, Elliot; Okamoto, Garret
1992-01-01
Spatial acquisition using the sun-lit Earth as a beacon source provides several advantages over active beacon-based systems for deep-space optical communication systems. However, since the angular extend of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large Earth albedo fluctuation. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of the algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.
Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2018-02-19
Light-field imaging is a crucial and straightforward way of measuring and analyzing surrounding light worlds. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototyped camera has been constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the functioned imaging sensors. The imaging micro-system in conjunction with the electric-optical microstructure can be used to perform polarization and light-field imaging, simultaneously. Compared with conventional plenoptic cameras using liquid-crystal microlens array, the polarization-independent light-field images with a high image quality can be obtained in the arbitrary polarization state selected. We experimentally demonstrate characters including a relatively wide operation range in the manipulation of incident beams and the multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the obvious features of the TN-LCMLA, such as very low power consumption, providing multiple imaging modes mentioned, simple and low-cost manufacturing, the imaging micro-system integrated with this kind of liquid-crystal microstructure driven electrically presents the potential capability of directly observing a 3D object in typical scattering media.
NASA Astrophysics Data System (ADS)
Polito, C.; Pani, R.; Trigila, C.; Cinti, M. N.; Fabbri, A.; Frantellizzi, V.; De Vincentis, G.; Pellegrini, R.; Pani, R.
2017-01-01
The growing interest for new scintillation crystals with outstanding imaging performances (i.e. resolution and efficiency) has suggested the study of recently discovered scintillators named CRY018 and CRY019. The crystals under investigation are monolithic and have shown enhanced characteristics both for gamma ray spectrometry and for Nuclear Medicine imaging applications such as the dual isotope imaging. Moreover, the non-hygroscopic nature and the absence of afterglow make these scintillators even more attractive for the potential improvement in a wide range of applications. These scintillation crystals show a high energy resolution in the energy range involved in Nuclear Medicine, allowing the discrimination between very close energy values. Moreover, in order to prove their suitability of being powerful imaging systems, the imaging performances like the position linearity and the intrinsic spatial resolution have been evaluated obtaining satisfactory results thanks to the implementation of an optimized algorithm for the images reconstruction.
Acousto-optic RF signal acquisition system
NASA Astrophysics Data System (ADS)
Bloxham, Laurence H.
1990-09-01
This paper describes the architecture and performance of a prototype Acousto-Optic RF Signal Acquisition System designed to intercept, automatically identify, and track communication signals in the VHF band. The system covers 28.0 to 92.0 MHz with five manually selectable, dual conversion; 12.8 MHZ bandwidth front ends. An acousto-optic spectrum analyzer (AOSA) implemented using a tellurium dioxide (Te02) Bragg cell is used to channelize the 12.8 MHz pass band into 512 25 KHz channels. Polarization switching is used to suppress optical noise. Excellent isolation and dynamic range are achieved by using a linear array of 512 custom 40/50 micron fiber optic cables to collect the light at the focal plane of the AOSA and route the light to individual photodetectors. The photodetectors are operated in the photovoltaic mode to compress the greater than 60 dB input optical dynamic range into an easily processed electrical signal. The 512 signals are multiplexed and processed as a line in a video image by a customized digital image processing system. The image processor simultaneously analyzes the channelized signal data and produces a classical waterfall display.
Griffiths, J A; Chen, D; Turchetta, R; Royle, G J
2011-03-01
An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 10(3) to 1.6 at a gain of 3.93 × 10(1). The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 10(3), dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.
Characterization study of an intensified complementary metal-oxide-semiconductor active pixel sensor
NASA Astrophysics Data System (ADS)
Griffiths, J. A.; Chen, D.; Turchetta, R.; Royle, G. J.
2011-03-01
An intensified CMOS active pixel sensor (APS) has been constructed for operation in low-light-level applications: a high-gain, fast-light decay image intensifier has been coupled via a fiber optic stud to a prototype "VANILLA" APS, developed by the UK based MI3 consortium. The sensor is capable of high frame rates and sparse readout. This paper presents a study of the performance parameters of the intensified VANILLA APS system over a range of image intensifier gain levels when uniformly illuminated with 520 nm green light. Mean-variance analysis shows the APS saturating around 3050 Digital Units (DU), with the maximum variance increasing with increasing image intensifier gain. The system's quantum efficiency varies in an exponential manner from 260 at an intensifier gain of 7.45 × 103 to 1.6 at a gain of 3.93 × 101. The usable dynamic range of the system is 60 dB for intensifier gains below 1.8 × 103, dropping to around 40 dB at high gains. The conclusion is that the system shows suitability for the desired application.
Hawkins, Liam J; Storey, Kenneth B
2017-01-01
Common Western-blot imaging systems have previously been adapted to measure signals from luminescent microplate assays. This can be a cost saving measure as Western-blot imaging systems are common laboratory equipment and could substitute a dedicated luminometer if one is not otherwise available. One previously unrecognized limitation is that the signals captured by the cameras in these systems are not equal for all wells. Signals are dependent on the angle of incidence to the camera, and thus the location of the well on the microplate. Here we show that: •The position of a well on a microplate significantly affects the signal captured by a common Western-blot imaging system from a luminescent assay.•The effect of well position can easily be corrected for.•This method can be applied to commercially available luminescent assays, allowing for high-throughput quantification of a wide range of biological processes and biochemical reactions.
Active imaging system performance model for target acquisition
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.
2007-04-01
The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.
Design and Construction of Detector and Data Acquisition Elements for Proton Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermi Research Alliance; Northern Illinois University
2015-07-15
Proton computed tomography (pCT) offers an alternative to x-ray imaging with potential for three-dimensional imaging, reduced radiation exposure, and in-situ imaging. Northern Illinois University (NIU) is developing a second-generation proton computed tomography system with a goal of demonstrating the feasibility of three-dimensional imaging within clinically realistic imaging times. The second-generation pCT system is comprised of a tracking system, a calorimeter, data acquisition, a computing farm, and software algorithms. The proton beam encounters the upstream tracking detectors, the patient or phantom, the downstream tracking detectors, and a calorimeter. The schematic layout of the PCT system is shown. The data acquisition sendsmore » the proton scattering information to an offline computing farm. Major innovations of the second generation pCT project involve an increased data acquisition rate ( MHz range) and development of three-dimensional imaging algorithms. The Fermilab Particle Physics Division and Northern Illinois Center for Accelerator and Detector Development at Northern Illinois University worked together to design and construct the tracking detectors, calorimeter, readout electronics and detector mounting system.« less
Reconfigurable metasurface aperture for security screening and microwave imaging
NASA Astrophysics Data System (ADS)
Sleasman, Timothy; Imani, Mohammadreza F.; Boyarsky, Michael; Pulido-Mancera, Laura; Reynolds, Matthew S.; Smith, David R.
2017-05-01
Microwave imaging systems have seen growing interest in recent decades for applications ranging from security screening to space/earth observation. However, hardware architectures commonly used for this purpose have not seen drastic changes. With the advent of metamaterials a wealth of opportunities have emerged for honing metasurface apertures for microwave imaging systems. Recent thrusts have introduced dynamic reconfigurability directly into the aperture layer, providing powerful capabilities from a physical layer with considerable simplicity. The waveforms generated from such dynamic metasurfaces make them suitable for application in synthetic aperture radar (SAR) and, more generally, computational imaging. In this paper, we investigate a dynamic metasurface aperture capable of performing microwave imaging in the K-band (17.5-26.5 GHz). The proposed aperture is planar and promises an inexpensive fabrication process via printed circuit board techniques. These traits are further augmented by the tunability of dynamic metasurfaces, which provides the dexterity necessary to generate field patterns ranging from a sequence of steered beams to a series of uncorrelated radiation patterns. Imaging is experimentally demonstrated with a voltage-tunable metasurface aperture. We also demonstrate the aperture's utility in real-time measurements and perform volumetric SAR imaging. The capabilities of a prototype are detailed and the future prospects of general dynamic metasurface apertures are discussed.
Quantum cascade lasers (QCL) for active hyperspectral imaging
NASA Astrophysics Data System (ADS)
Yang, Quankui; Fuchs, Frank; Wagner, Joachim
2014-04-01
There is an increasing demand for wavelength agile laser sources covering the mid-infrared (MIR, 3.5-12 µm) wavelength range, among others in active imaging. The MIR range comprises a particularly interesting part of the electromagnetic spectrum for active hyperspectral imaging applications, due to the fact that the characteristic `fingerprint' absorption spectra of many chemical compounds lie in that range. Conventional semiconductor diode laser technology runs out of steam at such long wavelengths. For many applications, MIR coherent light sources based on solid state lasers in combination with optical parametric oscillators are too complex and thus bulky and expensive. In contrast, quantum cascade lasers (QCLs) constitute a class of very compact and robust semiconductor-based lasers, which are able to cover the mentioned wavelength range using the same semiconductor material system. In this tutorial, a brief review will be given on the state-of-the-art of QCL technology. Special emphasis will be addressed on QCL variants with well-defined spectral properties and spectral tunability. As an example for the use of wavelength agile QCL for active hyperspectral imaging, stand-off detection of explosives based on imaging backscattering laser spectroscopy will be discussed.
Detecting personnel around UGVs using stereo vision
NASA Astrophysics Data System (ADS)
Bajracharya, Max; Moghaddam, Baback; Howard, Andrew; Matthies, Larry H.
2008-04-01
Detecting people around unmanned ground vehicles (UGVs) to facilitate safe operation of UGVs is one of the highest priority issues in the development of perception technology for autonomous navigation. Research to date has not achieved the detection ranges or reliability needed in deployed systems to detect upright pedestrians in flat, relatively uncluttered terrain, let alone in more complex environments and with people in postures that are more difficult to detect. Range data is essential to solve this problem. Combining range data with high resolution imagery may enable higher performance than range data alone because image appearance can complement shape information in range data and because cameras may offer higher angular resolution than typical range sensors. This makes stereo vision a promising approach for several reasons: image resolution is high and will continue to increase, the physical size and power dissipation of the cameras and computers will continue to decrease, and stereo cameras provide range data and imagery that are automatically spatially and temporally registered. We describe a stereo vision-based pedestrian detection system, focusing on recent improvements to a shape-based classifier applied to the range data, and present frame-level performance results that show great promise for the overall approach.
Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System
NASA Astrophysics Data System (ADS)
Stebner, K.; Wieden, A.
2014-03-01
Dynamic camera systems with moving parts are difficult to handle in photogrammetric workflow, because it is not ensured that the dynamics are constant over the recording period. Minimum changes of the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical viewing and a tumbling oblique camera was used for this investigation. Focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique data set ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.
A novel and compact spectral imaging system based on two curved prisms
NASA Astrophysics Data System (ADS)
Nie, Yunfeng; Bin, Xiangli; Zhou, Jinsong; Li, Yang
2013-09-01
As a novel detection approach which simultaneously acquires two-dimensional visual picture and one-dimensional spectral information, spectral imaging offers promising applications on biomedical imaging, conservation and identification of artworks, surveillance of food safety, and so forth. A novel moderate-resolution spectral imaging system consisting of merely two optical elements is illustrated in this paper. It can realize the function of a relay imaging system as well as a 10nm spectral resolution spectroscopy. Compared to conventional prismatic imaging spectrometers, this design is compact and concise with only two special curved prisms by utilizing two reflective surfaces. In contrast to spectral imagers based on diffractive grating, the usage of compound-prism possesses characteristics of higher energy utilization and wider free spectral range. The seidel aberration theory and dispersive principle of this special prism are analyzed at first. According to the results, the optical system of this design is simulated, and the performance evaluation including spot diagram, MTF and distortion, is presented. In the end, considering the difficulty and particularity of manufacture and alignment, an available method for fabrication and measurement is proposed.
Aguiar Santos, Susana; Robens, Anne; Boehm, Anna; Leonhardt, Steffen; Teichmann, Daniel
2016-01-01
A new prototype of a multi-frequency electrical impedance tomography system is presented. The system uses a field-programmable gate array as a main controller and is configured to measure at different frequencies simultaneously through a composite waveform. Both real and imaginary components of the data are computed for each frequency and sent to the personal computer over an ethernet connection, where both time-difference imaging and frequency-difference imaging are reconstructed and visualized. The system has been tested for both time-difference and frequency-difference imaging for diverse sets of frequency pairs in a resistive/capacitive test unit and in self-experiments. To our knowledge, this is the first work that shows preliminary frequency-difference images of in-vivo experiments. Results of time-difference imaging were compared with simulation results and shown that the new prototype performs well at all frequencies in the tested range of 60 kHz–960 kHz. For frequency-difference images, further development of algorithms and an improved normalization process is required to correctly reconstruct and interpreted the resulting images. PMID:27463715
Study on the high-frequency laser measurement of slot surface difference
NASA Astrophysics Data System (ADS)
Bing, Jia; Lv, Qiongying; Cao, Guohua
2017-10-01
In view of the measurement of the slot surface difference in the large-scale mechanical assembly process, Based on high frequency laser scanning technology and laser detection imaging principle, This paragraph designs a double galvanometer pulse laser scanning system. Laser probe scanning system architecture consists of three parts: laser ranging part, mechanical scanning part, data acquisition and processing part. The part of laser range uses high-frequency laser range finder to measure the distance information of the target shape and get a lot of point cloud data. Mechanical scanning part includes high-speed rotary table, high-speed transit and related structure design, in order to realize the whole system should be carried out in accordance with the design of scanning path on the target three-dimensional laser scanning. Data processing part mainly by FPGA hardware with LAbVIEW software to design a core, to process the point cloud data collected by the laser range finder at the high-speed and fitting calculation of point cloud data, to establish a three-dimensional model of the target, so laser scanning imaging is realized.
Development of real-time extensometer based on image processing
NASA Astrophysics Data System (ADS)
Adinanta, H.; Puranto, P.; Suryadi
2017-04-01
An extensometer system was developed by using high definition web camera as main sensor to track object position. The developed system applied digital image processing techniques. The image processing was used to measure the change of object position. The position measurement was done in real-time so that the system can directly showed the actual position in both x and y-axis. In this research, the relation between pixel and object position changes had been characterized. The system was tested by moving the target in a range of 20 cm in interval of 1 mm. To verify the long run performance, the stability and linearity of continuous measurements on both x and y-axis, this measurement had been conducted for 83 hours. The results show that this image processing-based extensometer had both good stability and linearity.
Low SWaP multispectral sensors using dichroic filter arrays
NASA Astrophysics Data System (ADS)
Dougherty, John; Varghese, Ron
2015-06-01
The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters1 into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4 band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches - including their passivity, spectral range, customization options, and scalable production.
Gold nanoparticle contrast agents in advanced X-ray imaging technologies.
Ahn, Sungsook; Jung, Sung Yong; Lee, Sang Joon
2013-05-17
Recently, there has been significant progress in the field of soft- and hard-X-ray imaging for a wide range of applications, both technically and scientifically, via developments in sources, optics and imaging methodologies. While one community is pursuing extensive applications of available X-ray tools, others are investigating improvements in techniques, including new optics, higher spatial resolutions and brighter compact sources. For increased image quality and more exquisite investigation on characteristic biological phenomena, contrast agents have been employed extensively in imaging technologies. Heavy metal nanoparticles are excellent absorbers of X-rays and can offer excellent improvements in medical diagnosis and X-ray imaging. In this context, the role of gold (Au) is important for advanced X-ray imaging applications. Au has a long-history in a wide range of medical applications and exhibits characteristic interactions with X-rays. Therefore, Au can offer a particular advantage as a tracer and a contrast enhancer in X-ray imaging technologies by sensing the variation in X-ray attenuation in a given sample volume. This review summarizes basic understanding on X-ray imaging from device set-up to technologies. Then this review covers recent studies in the development of X-ray imaging techniques utilizing gold nanoparticles (AuNPs) and their relevant applications, including two- and three-dimensional biological imaging, dynamical processes in a living system, single cell-based imaging and quantitative analysis of circulatory systems and so on. In addition to conventional medical applications, various novel research areas have been developed and are expected to be further developed through AuNP-based X-ray imaging technologies.
Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2018-04-01
ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate Target Acquisition (TA) ranges, the analytical TRM4 model and the image-based Triangle Orientation Discrimination model (TOD). In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise and digital signal processing. Resulting images might be presented to observers or could be further processed for automatic image quality calculations. For convenience OSIS incorporates camera descriptions and intermediate results provided by TRM4. For input OSIS uses pristine imagery tied with meta information about scene content, its physical dimensions, and gray level interpretation. These images represent planar targets placed at specified distances to the imager. Furthermore, OSIS is extended by a plugin functionality that enables integration of advanced digital signal processing techniques in ECOMOS such as compression, local contrast enhancement, digital turbulence mitiga- tion, to name but a few. By means of this image-based approach image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
Automated generation of image products for Mars Exploration Rover Mission tactical operations
NASA Technical Reports Server (NTRS)
Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen
2005-01-01
This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product generating systems will also be provided with emphasis on interdependencies, including those for a) geometric rectificationn of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.
NASA Astrophysics Data System (ADS)
Green, John R.; Robinson, Timothy
2015-05-01
There is a growing interest in developing helmet-mounted digital imaging systems (HMDIS) for integration into military aircraft cockpits. This interest stems from the multiple advantages of digital vs. analog imaging such as image fusion from multiple sensors, data processing to enhance the image contrast, superposition of non-imaging data over the image, and sending images to remote location for analysis. There are several properties an HMDIS must have in order to aid the pilot during night operations. In addition to the resolution, image refresh rate, dynamic range, and sensor uniformity over the entire Focal Plane Array (FPA); the imaging system must have the sensitivity to detect the limited night light available filtered through cockpit transparencies. Digital sensor sensitivity is generally measured monochromatically using a laser with a wavelength near the peak detector quantum efficiency, and is generally reported as either the Noise Equivalent Power (NEP) or Noise Equivalent Irradiance (NEI). This paper proposes a test system that measures NEI of Short-Wave Infrared (SWIR) digital imaging systems using a broadband source that simulates the night spectrum. This method has a few advantages over a monochromatic method. Namely, the test conditions provide spectrum closer to what is experienced by the end-user, and the resulting NEI may be compared directly to modeled night glow irradiance calculation. This comparison may be used to assess the Technology Readiness Level of the imaging system for the application. The test system is being developed under a Cooperative Research and Development Agreement (CRADA) with the Air Force Research Laboratory.
NASA Astrophysics Data System (ADS)
Dance, David R.; McVey, Graham; Sandborg, Michael P.; Persliden, Jan; Carlsson, Gudrun A.
1999-05-01
A Monte Carlo program has been developed to model X-ray imaging systems. It incorporates an adult voxel phantom and includes anti-scatter grid, radiographic screen and film. The program can calculate contrast and noise for a series of anatomical details. The use of measured H and D curves allows the absolute calculation of the patient entrance air kerma for a given film optical density (or vice versa). Effective dose can also be estimated. In an initial validation, the program was used to predict the optical density for exposures with plastic slabs of various thicknesses. The agreement between measurement and calculation was on average within 5%. In a second validation, a comparison was made between computer simulations and measurements for chest and lumbar spine patient radiographs. The predictions of entrance air kerma mostly fell within the range of measured values (e.g. chest PA calculated 0.15 mGy, measured 0.12 - 0.17 mGy). Good agreement was also obtained for the calculated and measured contrasts for selected anatomical details and acceptable agreement for dynamic range. It is concluded that the program provides a realistic model of the patient and imaging system. It can thus form the basis of a detailed study and optimization of X-ray imaging systems.
Mieog, J Sven D; Troyan, Susan L; Hutteman, Merlijn; Donohoe, Kevin J; van der Vorst, Joost R; Stockdale, Alan; Liefers, Gerrit-Jan; Choi, Hak Soo; Gibbs-Strauss, Summer L; Putter, Hein; Gioux, Sylvain; Kuppen, Peter J K; Ashitate, Yoshitomo; Löwik, Clemens W G M; Smit, Vincent T H B M; Oketokoun, Rafiou; Ngo, Long H; van de Velde, Cornelis J H; Frangioni, John V; Vahrmeijer, Alexander L
2011-09-01
Near-infrared (NIR) fluorescent sentinel lymph node (SLN) mapping in breast cancer requires optimized imaging systems and lymphatic tracers. A small, portable version of the FLARE imaging system, termed Mini-FLARE, was developed for capturing color video and two semi-independent channels of NIR fluorescence (700 and 800 nm) in real time. Initial optimization of lymphatic tracer dose was performed using 35-kg Yorkshire pigs and a 6-patient pilot clinical trial. More refined optimization was performed in 24 consecutive breast cancer patients. All patients received the standard of care using (99m)Technetium-nanocolloid and patent blue. In addition, 1.6 ml of indocyanine green adsorbed to human serum albumin (ICG:HSA) was injected directly after patent blue at the same location. Patients were allocated to 1 of 8 escalating ICG:HSA concentration groups from 50 to 1000 μM. The Mini-FLARE system was positioned easily in the operating room and could be used up to 13 in. from the patient. Mini-FLARE enabled visualization of lymphatic channels and SLNs in all patients. A total of 35 SLNs (mean = 1.45, range 1-3) were detected: 35 radioactive (100%), 30 blue (86%), and 35 NIR fluorescent (100%). Contrast agent quenching at the injection site and dilution within lymphatic channels were major contributors to signal strength of the SLN. Optimal injection dose of ICG:HSA ranged between 400 and 800 μM. No adverse reactions were observed. We describe the clinical translation of a new NIR fluorescence imaging system and define the optimal ICG:HSA dose range for SLN mapping in breast cancer.
NASA Astrophysics Data System (ADS)
Lewis, Keith
2014-10-01
Biological systems exploiting light have benefitted from thousands of years of genetic evolution and can provide insight to support the development of new approaches for imaging, image processing and communication. For example, biological vision systems can provide significant diversity, yet are able to function with only a minimal degree of neural processing. Examples will be described underlying the processes used to support the development of new concepts for photonic systems, ranging from uncooled bolometers and tunable filters, to asymmetric free-space optical communication systems and new forms of camera capable of simultaneously providing spectral and polarimetric diversity.
Image registration for a UV-Visible dual-band imaging system
NASA Astrophysics Data System (ADS)
Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua
2018-06-01
The detection of corona discharge is an effective way for early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spot at all-weather condition. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithm details of UV image preprocessing, affine transformation model establishment and relevant experiments for verification of their feasibility. The denoising algorithm was based on a correlation operation between raw UV images, a continuous mask and the transformation model was established by using corner feature and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of affine transformation model. It has proved the average position displacement error between corona discharge and equipment fault at different distances in a 2.5m-20 m range are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which are precise enough for most industrial applications. The resultant protocol is not only expected to improve the efficiency and accuracy of such imaging system for locating corona discharge spot, but also supposed to provide a more generalized reference for the calibration of various dual-band imaging systems in practice.
Thermographic imaging for high-temperature composite materials: A defect detection study
NASA Technical Reports Server (NTRS)
Roth, Don J.; Bodis, James R.; Bishop, Chip
1995-01-01
The ability of a thermographic imaging technique for detecting flat-bottom hole defects of various diameters and depths was evaluated in four composite systems (two types of ceramic matrix composites, one metal matrix composite, and one polymer matrix composite) of interest as high-temperature structural materials. The holes ranged from 1 to 13 mm in diameter and 0.1 to 2.5 mm in depth in samples approximately 2-3 mm thick. The thermographic imaging system utilized a scanning mirror optical system and infrared (IR) focusing lens in conjunction with a mercury cadmium telluride infrared detector element to obtain high resolution infrared images. High intensity flash lamps located on the same side as the infrared camera were used to heat the samples. After heating, up to 30 images were sequentially acquired at 70-150 msec intervals. Limits of detectability based on depth and diameter of the flat-bottom holes were defined for each composite material. Ultrasonic and radiographic images of the samples were obtained and compared with the thermographic images.
A Computational Observer For Performing Contrast-Detail Analysis Of Ultrasound Images
NASA Astrophysics Data System (ADS)
Lopez, H.; Loew, M. H.
1988-06-01
Contrast-Detail (C/D) analysis allows the quantitative determination of an imaging system's ability to display a range of varying-size targets as a function of contrast. Using this technique, a contrast-detail plot is obtained which can, in theory, be used to compare image quality from one imaging system to another. The C/D plot, however, is usually obtained by using data from human observer readings. We have shown earlier(7) that the performance of human observers in the task of threshold detection of simulated lesions embedded in random ultrasound noise is highly inaccurate and non-reproducible for untrained observers. We present an objective, computational method for the determination of the C/D curve for ultrasound images. This method utilizes digital images of the C/D phantom developed at CDRH, and lesion-detection algorithms that simulate the Bayesian approach using the likelihood function for an ideal observer. We present the results of this method, and discuss the relationship to the human observer and to the comparability of image quality between systems.
Realization of a single image haze removal system based on DaVinci DM6467T processor
NASA Astrophysics Data System (ADS)
Liu, Zhuang
2014-10-01
Video monitoring system (VMS) has been extensively applied in domains of target recognition, traffic management, remote sensing, auto navigation and national defence. However the VMS has a strong dependence on the weather, for instance, in foggy weather, the quality of images received by the VMS are distinct degraded and the effective range of VMS is also decreased. All in all, the VMS performs terribly in bad weather. Thus the research of fog degraded images enhancement has very high theoretical and practical application value. A design scheme of a fog degraded images enhancement system based on the TI DaVinci processor is presented in this paper. The main function of the referred system is to extract and digital cameras capture images and execute image enhancement processing to obtain a clear image. The processor used in this system is the dual core TI DaVinci DM6467T - ARM@500MHz+DSP@1GH. A MontaVista Linux operating system is running on the ARM subsystem which handles I/O and application processing. The DSP handles signal processing and the results are available to the ARM subsystem in shared memory.The system benefits from the DaVinci processor so that, with lower power cost and smaller volume, it provides the equivalent image processing capability of a X86 computer. The outcome shows that the system in this paper can process images at 25 frames per second on D1 resolution.
NASA Astrophysics Data System (ADS)
Clarkson, A.; Hamilton, D. J.; Hoek, M.; Ireland, D. G.; Johnstone, J. R.; Kaiser, R.; Keri, T.; Lumsden, S.; Mahon, D. F.; McKinnon, B.; Murray, M.; Nutbeam-Tuffs, S.; Shearer, C.; Staines, C.; Yang, G.; Zimmerman, C.
2014-05-01
Cosmic-ray muons are highly penetrative charged particles that are observed at the sea level with a flux of approximately one per square centimetre per minute. They interact with matter primarily through Coulomb scattering, which is exploited in the field of muon tomography to image shielded objects in a wide range of applications. In this paper, simulation studies are presented that assess the feasibility of a scintillating-fibre tracker system for use in the identification and characterisation of nuclear materials stored within industrial legacy waste containers. A system consisting of a pair of tracking modules above and a pair below the volume to be assayed is simulated within the GEANT4 framework using a range of potential fibre pitches and module separations. Each module comprises two orthogonal planes of fibres that allow the reconstruction of the initial and Coulomb-scattered muon trajectories. A likelihood-based image reconstruction algorithm has been developed that allows the container content to be determined with respect to the scattering density λ, a parameter which is related to the atomic number Z of the scattering material. Images reconstructed from this simulation are presented for a range of anticipated scenarios that highlight the expected image resolution and the potential of this system for the identification of high-Z materials within a shielded, concrete-filled container. First results from a constructed prototype system are presented in comparison with those from a detailed simulation. Excellent agreement between experimental data and simulation is observed showing clear discrimination between the different materials assayed throughout.
Kalpathy-Cramer, Jayashree; Campbell, J Peter; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D; Hutcheson, Kelly; Shapiro, Michael J; Repka, Michael X; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E; Chan, R V Paul; Chiang, Michael F
2016-11-01
To determine expert agreement on relative retinopathy of prematurity (ROP) disease severity and whether computer-based image analysis can model relative disease severity, and to propose consideration of a more continuous severity score for ROP. We developed 2 databases of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP (i-ROP) cohort study and recruited expert physician, nonexpert physician, and nonphysician graders to classify and perform pairwise comparisons on both databases. Six participating expert ROP clinician-scientists, each with a minimum of 10 years of clinical ROP experience and 5 ROP publications, and 5 image graders (3 physicians and 2 nonphysician graders) who analyzed images that were obtained during routine ROP screening in neonatal intensive care units. Images in both databases were ranked by average disease classification (classification ranking), by pairwise comparison using the Elo rating method (comparison ranking), and by correlation with the i-ROP computer-based image analysis system. Interexpert agreement (weighted κ statistic) compared with the correlation coefficient (CC) between experts on pairwise comparisons and correlation between expert rankings and computer-based image analysis modeling. There was variable interexpert agreement on diagnostic classification of disease (plus, preplus, or normal) among the 6 experts (mean weighted κ, 0.27; range, 0.06-0.63), but good correlation between experts on comparison ranking of disease severity (mean CC, 0.84; range, 0.74-0.93) on the set of 34 images. Comparison ranking provided a severity ranking that was in good agreement with ranking obtained by classification ranking (CC, 0.92). Comparison ranking on the larger dataset by both expert and nonexpert graders demonstrated good correlation (mean CC, 0.97; range, 0.95-0.98). The i-ROP system was able to model this continuous severity with good correlation (CC, 0.86). Experts diagnose plus disease on a continuum, with poor absolute agreement on classification but good relative agreement on disease severity. These results suggest that the use of pairwise rankings and a continuous severity score, such as that provided by the i-ROP system, may improve agreement on disease severity in the future. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Biomedical terahertz imaging with a quantum cascade laser
NASA Astrophysics Data System (ADS)
Kim, Seongsin M.; Hatami, Fariba; Harris, James S.; Kurian, Allison W.; Ford, James; King, Douglas; Scalari, Giacomo; Giovannini, Marcella; Hoyler, Nicolas; Faist, Jerome; Harris, Geoff
2006-04-01
We present biomedical imaging using a single frequency terahertz imaging system based on a low threshold quantum cascade laser emitting at 3.7THz (λ=81μm). With a peak output power of 4mW, coherent terahertz radiation and detection provide a relatively large dynamic range and high spatial resolution. We study image contrast based on water/fat content ratios in different tissues. Terahertz transmission imaging demonstrates a distinct anatomy in a rat brain slice. We also demonstrate malignant tissue contrast in an image of a mouse liver with developed tumors, indicating potential use of terahertz imaging for probing cancerous tissues.
Snapshot hyperspectral fovea vision system (HyperVideo)
NASA Astrophysics Data System (ADS)
Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.
2012-06-01
The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared counter measure (IRCM) threat characterization, and ground based remote sensing.
Optimization of radar imaging system parameters for geological analysis
NASA Technical Reports Server (NTRS)
Waite, W. P.; Macdonald, H. C.; Kaupp, V. H.
1981-01-01
The use of radar image simulation to model terrain variation and determine optimum sensor parameters for geological analysis is described. Optimum incidence angle is determined by the simulation, which evaluates separately the discrimination of surface features possible due to terrain geometry and that due to terrain scattering. Depending on the relative relief, slope, and scattering cross section, optimum incidence angle may vary from 20 to 80 degrees. Large incident angle imagery (more than 60 deg) is best for the widest range of geological applications, but in many cases these large angles cannot be achieved by satellite systems. Low relief regions require low incidence angles (less than 30 deg), so a satellite system serving a broad range of applications should have at least two selectable angles of incidence.
Digitally switchable multi-focal lens using freeform optics.
Wang, Xuan; Qin, Yi; Hua, Hong; Lee, Yun-Han; Wu, Shin-Tson
2018-04-16
Optical technologies offering electrically tunable optical power have found a broad range of applications, from head-mounted displays for virtual and augmented reality applications to microscopy. In this paper, we present a novel design and prototype of a digitally switchable multi-focal lens (MFL) that offers the capability of rapidly switching the optical power of the system among multiple foci. It consists of a freeform singlet and a customized programmable optical shutter array (POSA). Time-multiplexed multiple foci can be obtained by electrically controlling the POSA to switch the light path through different segments of the freeform singlet rapidly. While this method can be applied to a broad range of imaging and display systems, we experimentally demonstrate a proof-of-concept prototype for a multi-foci imaging system.
Comparing methods for analysis of biomedical hyperspectral image data
NASA Astrophysics Data System (ADS)
Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.
2017-02-01
Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
Small image laser range finder for planetary rover
NASA Technical Reports Server (NTRS)
Wakabayashi, Yasufumi; Honda, Masahisa; Adachi, Tadashi; Iijima, Takahiko
1994-01-01
A variety of technical subjects need to be solved before planetary rover navigation could be a part of future missions. The sensors which will perceive terrain environment around the rover will require critical development efforts. The image laser range finder (ILRF) discussed here is one of the candidate sensors because of its advantage in providing range data required for its navigation. The authors developed a new compact-sized ILRF which is a quarter of the size of conventional ones. Instead of the current two directional scanning system which is comprised of nodding and polygon mirrors, the new ILRF is equipped with the new concept of a direct polygon mirror driving system, which successfully made its size compact to accommodate the design requirements. The paper reports on the design concept and preliminary technical specifications established in the current development phase.
Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.
Barter, James D; Thompson, Harold R; Richardson, Christine L
2003-03-20
A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
An FPGA-based heterogeneous image fusion system design method
NASA Astrophysics Data System (ADS)
Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong
2011-08-01
Taking the advantages of FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and the visible light CCD camera and infrared thermal imager are used as the image-capturing device in order to obtain dualchannel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL language and the synchronous design method are utilized to perform a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. The contrast experiments of various fusion algorithms show that, preferably image quality of the heterogeneous image fusion can be obtained on top of the proposed system. The applied range of the different fusion algorithms is also discussed.
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi- MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier supported analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties
Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Joshua Pfefer, T.
2016-01-01
Abstract. Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison. PMID:26886681
Heywood, Charles E.; Galloway, Devin L.; Stork, Sylvia V.
2002-01-01
Six synthetic aperture radar (SAR) images were processed to form five unwrapped interferometric (InSAR) images of the greater metropolitan area in the Albuquerque Basin. Most interference patterns in the images were caused by range displacements resulting from changes in land-surface elevation. Loci of land- surface elevation changes correlate with changes in aquifer-system water levels and largely result from the elastic response of the aquifer-system skeletal material to changes in pore-fluid pressure. The magnitude of the observed land-surface subsidence and rebound suggests that aquifer-system deformation resulting from ground-water withdrawals in the Albuquerque area has probably remained in the elastic (recoverable) range from July 1993 through September 1999. Evidence of inelastic (permanent) land subsidence in the Rio Rancho area exists, but its relation to compaction of the aquifer system is inconclusive because of insufficient water-level data. Patterns of elastic deformation in both Albuquerque and Rio Rancho suggest that intrabasin faults impede ground- water-pressure diffusion at seasonal time scales and that these faults are probably important in controlling patterns of regional ground-water flow.
NASA Astrophysics Data System (ADS)
Ishihara, Miya; Tsujita, Kazuhiro; Horiguchi, Akio; Irisawa, Kaku; Komatsu, Tomohiro; Ayaori, Makoto; Hirasawa, Takeshi; Kasamatsu, Tadashi; Hirota, Kazuhiro; Tsuda, Hitoshi; Ikewaki, Katsunori; Asano, Tomohiko
2015-03-01
Purpose: Photoacoustic imaging (PAI) enables one to visualize the distribution of hemoglobin and acquire a map of microvessels without using contrast agents. The purpose of our study is to develop a clinically applicable PAI system integrated with a clinical ultrasound (US) array system with handheld PAI probes providing coregistered PAI and US images. Clinical research trials were performed to evaluate the performance and feasibility of clinical value. Materials and Methods: We developed two types of handheld PAI probes: a linear PAI probe combining a conventional linear-array US probe with optical illumination and a transrectal ultrasonography (TRUS)-type PAI probe. We performed experiments with Japanese white rabbits and conducted clinical research trials of urology and vascular medicine with the approval of the medical human ethics committee of the National Defense Medical College. Results: We successfully acquired high-dynamic-range images of the vascular network ranging from capillaries to landmark arteries and identified the femoral vein, deep femoral vein, and great saphenous vein of rabbits. These major vessels in the rabbits groin are surrounded with microvessels connected to each other. Periprostatic microvessels were monitored during radical prostatectomy for localized prostate cancer and they were colocalized with nerve fibers, and their distribution was consistent with the corresponding PAI. The TRUS-type PAI probe clearly demonstrated the location and extent of the neurovascular bundle (NVB) better than does TRUS alone. Conclusions: The system, which can obtain a PAI, a US image, and a merged image, was innovatively designed so that medical doctors can easily find the location without any prior knowledge or extended skills to analyze the obtained images. Our pilot feasibility study confirms that PAI could be an imaging modality useful in the screening study and diagnostic biopsy.
Imaging and characterizing root systems using electrical impedance tomography
NASA Astrophysics Data System (ADS)
Kemna, A.; Weigand, M.; Kelter, M.; Pfeifer, J.; Zimmermann, E.; Walter, A.
2011-12-01
Root architecture, growth, and activity play an essential role regarding the nutrient uptake of roots in soils. While in recent years advances could be achieved concerning the modeling of root systems, measurement methods capable of imaging, characterizing, and monitoring root structure and dynamics in a non-destructive manner are still lacking, in particular at the field scale. We here propose electrical impedance tomography (EIT) for the imaging of root systems. The approach takes advantage of the low-frequency capacitive electrical properties of the soil-root interface and the root tissue. These properties are based on the induced migration of ions in an externally applied electric field and give rise to characteristic impedance spectra which can be measured by means of electrical impedance spectroscopy. The latter technique was already successfully applied in the 10 Hz to 1 MHz range by Ozier-Lafontaine and Bajazet (2005) to monitor root growth of tomato. We here apply the method in the 1 mHz to 45 kHz range, requiring four-electrode measurements, and demonstrate its implementation and potential in an imaging framework. Images of real and imaginary components of complex electrical conductivity are computed using a finite-element based inversion algorithm with smoothness-constraint regularization. Results from laboratory measurements on rhizotrons with different root systems (barley, rape) show that images of imaginary conductivity delineate the spatial extent of the root system under investigation, while images of real conductivity show a less clear response. As confirmed by numerical simulations, the latter could be explained by the partly compensating electrical conduction properties of epidermis (resistive) and inner root cells (conductive), indicating the limitations of conventional electrical resistivity tomography. The captured spectral behavior exhibits two distinct relaxation processes with Cole-Cole type signatures, which we interpret as the responses of the soil-root interface (phase peak in the range of 10 Hz) and the root tissue (phase peak above 10 kHz). Importantly, our measurements prove an almost linear relationship between root mass and the electrical polarizability associated with the low-frequency relaxation, suggesting the potential of the method to quantify root structural parameters. In future studies we will in particular investigate a hypothesized relationship between time constant and effective root radius. Based on our results, we believe that spectral EIT, by combining the spatial resolution benefits of a tomographic method with the diagnostic capability of spectroscopy, can be developed into a valuable tool for imaging, characterizing, and monitoring root systems both at laboratory and field scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherical symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailedmore » investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom, and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as a MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.« less
Sierra Madre Oriental in Coahuila, Mexico
NASA Technical Reports Server (NTRS)
2002-01-01
This desolate landscape is part of the Sierra Madre Oriental mountain range, on the border between the Coahuila and Nuevo Leon provinces of Mexico. This image was acquired by Landsat 7's Enhanced Thematic Mapper plus (ETM+) sensor on November 28, 1999. This is a false-color composite image made using shortwave infrared, infrared, and green wavelengths. The image has also been sharpened using the sensor's panchromatic band. Image provided by the USGS EROS Data Center Satellite Systems Branch
Biomagnetic Imaging Applications using NV Centers in Diamond
NASA Astrophysics Data System (ADS)
Glenn, David; Lesage, David; Connolly, Colin; Walsworth, Ronald
2015-05-01
We present new measurements of magnetic fields produced by a range of biological specimens using a wide-field magnetic imaging system based on NV centers in diamond. In particular, we show (i) the first magnetic images of a previously unstudied strain of magnetotactic bacteria, and (ii) a general platform for magnetic imaging of immunomagnetically labeled cells, which provides a useful alternative to traditional immunofluorescence techniques in the presence of strong autofluorescence and/or optically scattering media.
Cho, Hyo-Min; Ding, Huanjun; Barber, William C; Iwanczyk, Jan S; Molloi, Sabee
2015-07-01
To investigate the feasibility of detecting breast microcalcification (μCa) with a dedicated breast computed tomography (CT) system based on energy-resolved photon-counting silicon (Si) strip detectors. The proposed photon-counting breast CT system and a bench-top prototype photon-counting breast CT system were simulated using a simulation package written in matlab to determine the smallest detectable μCa. A 14 cm diameter cylindrical phantom made of breast tissue with 20% glandularity was used to simulate an average-sized breast. Five different size groups of calcium carbonate grains, from 100 to 180 μm in diameter, were simulated inside of the cylindrical phantom. The images were acquired with a mean glandular dose (MGD) in the range of 0.7-8 mGy. A total of 400 images was used to perform a reader study. Another simulation study was performed using a 1.6 cm diameter cylindrical phantom to validate the experimental results from a bench-top prototype breast CT system. In the experimental study, a bench-top prototype CT system was constructed using a tungsten anode x-ray source and a single line 256-pixels Si strip photon-counting detector with a pixel pitch of 100 μm. Calcium carbonate grains, with diameter in the range of 105-215 μm, were embedded in a cylindrical plastic resin phantom to simulate μCas. The physical phantoms were imaged at 65 kVp with an entrance exposure in the range of 0.6-8 mGy. A total of 500 images was used to perform another reader study. The images were displayed in random order to three blinded observers, who were asked to give a 4-point confidence rating on each image regarding the presence of μCa. The μCa detectability for each image was evaluated by using the average area under the receiver operating characteristic curve (AUC) across the readers. The simulation results using a 14 cm diameter breast phantom showed that the proposed photon-counting breast CT system can achieve high detection accuracy with an average AUC greater than 0.89 ± 0.07 for μCas larger than 120 μm in diameter at a MGD of 3 mGy. The experimental results using a 1.6 cm diameter breast phantom showed that the prototype system can achieve an average AUC greater than 0.98 ± 0.01 for μCas larger than 140 μm in diameter using an entrance exposure of 1.2 mGy. The proposed photon-counting breast CT system based on a Si strip detector can potentially offer superior image quality to detect μCa with a lower dose level than a standard two-view mammography.
NEMA NU-4 performance evaluation of PETbox4, a high sensitivity dedicated PET preclinical tomograph
NASA Astrophysics Data System (ADS)
Gu, Z.; Taschereau, R.; Vu, N. T.; Wang, H.; Prout, D. L.; Silverman, R. W.; Bai, B.; Stout, D. B.; Phelps, M. E.; Chatziioannou, A. F.
2013-06-01
PETbox4 is a new, fully tomographic bench top PET scanner dedicated to high sensitivity and high resolution imaging of mice. This manuscript characterizes the performance of the prototype system using the National Electrical Manufacturers Association NU 4-2008 standards, including studies of sensitivity, spatial resolution, energy resolution, scatter fraction, count-rate performance and image quality. The PETbox4 performance is also compared with the performance of PETbox, a previous generation limited angle tomography system. PETbox4 consists of four opposing flat-panel type detectors arranged in a box-like geometry. Each panel is made by a 24 × 50 pixelated array of 1.82 × 1.82 × 7 mm bismuth germanate scintillation crystals with a crystal pitch of 1.90 mm. Each of these scintillation arrays is coupled to two Hamamatsu H8500 photomultiplier tubes via a glass light guide. Volumetric images for a 45 × 45 × 95 mm field of view (FOV) are reconstructed with a maximum likelihood expectation maximization algorithm incorporating a system model based on a parameterized detector response. With an energy window of 150-650 keV, the peak absolute sensitivity is approximately 18% at the center of FOV. The measured crystal energy resolution ranges from 13.5% to 48.3% full width at half maximum (FWHM), with a mean of 18.0%. The intrinsic detector spatial resolution is 1.5 mm FWHM in both transverse and axial directions. The reconstructed image spatial resolution for different locations in the FOV ranges from 1.32 to 1.93 mm, with an average of 1.46 mm. The peak noise equivalent count rate for the mouse-sized phantom is 35 kcps for a total activity of 1.5 MBq (40 µCi) and the scatter fraction is 28%. The standard deviation in the uniform region of the image quality phantom is 5.7%. The recovery coefficients range from 0.10 to 0.93. In comparison to the first generation two panel PETbox system, PETbox4 achieves substantial improvements on sensitivity and spatial resolution. The overall performance demonstrates that the PETbox4 scanner is suitable for producing high quality images for molecular imaging based biomedical research.
Design of optical system for binocular fundus camera.
Wu, Jun; Lou, Shiliang; Xiao, Zhitao; Geng, Lei; Zhang, Fang; Wang, Wen; Liu, Mengjia
2017-12-01
A non-mydriasis optical system for binocular fundus camera has been designed in this paper. It can capture two images of the same fundus retinal region from different angles at the same time, and can be used to achieve three-dimensional reconstruction of fundus. It is composed of imaging system and illumination system. In imaging system, Gullstrand Le Grand eye model is used to simulate normal human eye, and Schematic eye model is used to test the influence of ametropia in human eye on imaging quality. Annular aperture and black dot board are added into illumination system, so that the illumination system can eliminate stray light produced by corneal-reflected light and omentoscopic lens. Simulation results show that MTF of each visual field at the cut-off frequency of 90lp/mm is greater than 0.2, system distortion value is -2.7%, field curvature is less than 0.1 mm, radius of Airy disc is 3.25um. This system has a strong ability of chromatic aberration correction and focusing, and can image clearly for human fundus in which the range of diopters is from -10 D to +6 D(1 D = 1 m -1 ).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang
The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its superiority of nonscanning, large field of view, high sensitivity, and high precision. However, how to achieve higher-efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging continues to be a problem. In this study, a novel LADAR system using four linear mode APDs (LmAPDs) is developed for high-efficient detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying the virtual instrumentation technique. The control system aimsmore » to achieve four functions: synchronization of laser emission and rotating platform, multi-channel synchronous data acquisition, real-time Ethernet upper monitoring, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of achieving real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system utilized only 4 LmAPDs to achieve 256-pixel-per-frame detection with by employing 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.« less
Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications
NASA Astrophysics Data System (ADS)
Budzan, Sebastian; Kasprzyk, Jerzy
2016-02-01
The problem of obstacle detection and recognition or, generally, scene mapping is one of the most investigated problems in computer vision, especially in mobile applications. In this paper a fused optical system using depth information with color images gathered from the Microsoft Kinect sensor and 3D laser range scanner data is proposed for obstacle detection and ground estimation in real-time mobile systems. The algorithm consists of feature extraction in the laser range images, processing of the depth information from the Kinect sensor, fusion of the sensor information, and classification of the data into two separate categories: road and obstacle. Exemplary results are presented and it is shown that fusion of information gathered from different sources increases the effectiveness of the obstacle detection in different scenarios, and it can be used successfully for road surface mapping.