Range determination for scannerless imaging
Muguira, Maritza Rosa; Sackos, John Theodore; Bradley, Bart Davis; Nellums, Robert
2000-01-01
A new method of operating a scannerless range imaging system (e.g., a scannerless laser radar) has been developed. This method is designed to compensate for nonlinear effects that appear in many real-world components. The system operates by determining the phase shift of the laser modulation, a physical quantity directly related to the path length between the laser source and the detector, for each pixel of an image.
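As an illustration of the phase-to-range relation this abstract relies on (a sketch of the standard AM-CW conversion, not code from the patent; the 10 MHz frequency below is a hypothetical example):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phase_rad, mod_freq_hz):
    """Range implied by a measured modulation phase shift (radians).

    The light travels out and back, so a full 2*pi of phase corresponds
    to half a modulation wavelength of one-way range:
        range = c * phase / (4 * pi * f_mod)
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """One-way range at which the measured phase wraps past 2*pi."""
    return C / (2.0 * mod_freq_hz)
```

At 10 MHz modulation the unambiguous range is about 15 m; real systems choose the modulation frequency to trade range precision against this wrap-around distance.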
Scannerless loss modulated flash color range imaging
Sandusky, John V [Albuquerque, NM]; Pitts, Todd Alan [Rio Rancho, NM]
2008-09-02
Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images is combined to produce a 3D image of the target.
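A minimal single-pixel sketch of how a modulated frame and an unmodulated (dc) frame can be combined into range, under an idealized sinusoidal demodulation model (this model and all parameter names are my assumptions; the patent works with several wavelengths and does not publish this exact formula):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_image_pair(i_mod, i_dc, mod_freq_hz, mod_depth=1.0):
    """Recover range for one pixel from a modulated and a dc intensity sample.

    Assumes the simplified model
        i_mod = (i_dc / 2) * (1 + mod_depth * cos(phi)),
    where phi is the round-trip modulation phase. Real systems use several
    phase-stepped or multi-wavelength images to resolve sign ambiguity.
    """
    cos_phi = (2.0 * i_mod / i_dc - 1.0) / mod_depth
    cos_phi = max(-1.0, min(1.0, cos_phi))  # clamp against noise
    phi = math.acos(cos_phi)                # two-quadrant ambiguity remains
    return C * phi / (4.0 * math.pi * mod_freq_hz)
```

For example, a pixel reading 75 counts modulated against 100 counts dc implies phi = pi/3, or about 2.5 m at a 10 MHz modulation frequency.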
Scannerless loss modulated flash color range imaging
Sandusky, John V [Albuquerque, NM]; Pitts, Todd Alan [Rio Rancho, NM]
2009-02-24
Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images is combined to produce a 3D image of the target.
Scannerless laser range imaging using loss modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandusky, John V
2011-08-09
A scannerless 3-D imaging apparatus is disclosed which utilizes an amplitude modulated cw light source to illuminate a field of view containing a target of interest. Backscattered light from the target is passed through one or more loss modulators which are modulated at the same frequency as the light source, but with a phase delay δ which can be fixed or variable. The backscattered light is demodulated by the loss modulator and detected with a CCD, CMOS or focal plane array (FPA) detector to construct a 3-D image of the target. The scannerless 3-D imaging apparatus, which can operate in the eye-safe wavelength region 1.4-1.7 μm and which can be constructed as a flash LADAR, has applications for vehicle collision avoidance, autonomous rendezvous and docking, robotic vision, industrial inspection and measurement, 3-D cameras, and facial recognition.
A low-power CMOS trans-impedance amplifier for FM/cw ladar imaging system
NASA Astrophysics Data System (ADS)
Hu, Kai; Zhao, Yi-qiang; Sheng, Yun; Zhao, Hong-liang; Yu, Hai-xia
2013-09-01
A scannerless ladar imaging system based on a unique frequency modulation/continuous wave (FM/cw) technique is able to capture the entire target environment, using a focal plane array to construct a 3D picture of the target. This paper presents a low-power trans-impedance amplifier (TIA), designed and implemented in 0.18 μm CMOS technology, which is used in an FM/cw imaging ladar with a 64×64 metal-semiconductor-metal (MSM) self-mixing detector array. The input stage of the operational amplifier (op amp) in the TIA is realized with a folded-cascode structure to achieve large open-loop gain and low offset. The simulation and test results of the TIA with MSM detectors indicate that the single-ended trans-impedance gain exceeds 100 kΩ, and the -3 dB bandwidth of the op amp exceeds 60 MHz. The input common-mode voltage ranges from 0.2 V to 1.5 V, and the power dissipation is reduced to 1.8 mW with a supply voltage of 3.3 V. The performance test results show that the TIA is a candidate for the preamplifier of the read-out integrated circuit (ROIC) in the FM/cw scannerless ladar imaging system.
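Two back-of-envelope unit checks on the reported figures (illustrative only; the capacitance value below is an assumption, not from the paper): a 100 kΩ transimpedance turns 1 μA of photocurrent into 100 mV, and a first-order pole formed by the feedback resistance and input capacitance bounds the achievable bandwidth:

```python
import math

def tia_output_mv(photocurrent_ua, transimpedance_kohm=100.0):
    """Ideal TIA output swing: V = I * R_f (uA * kOhm multiplies to mV)."""
    return photocurrent_ua * transimpedance_kohm

def rc_limited_bandwidth_mhz(r_kohm, c_pf):
    """First-order -3 dB bandwidth of an R/C pole, f = 1/(2*pi*R*C), in MHz."""
    return 1e3 / (2.0 * math.pi * r_kohm * c_pf)
```

With a hypothetical 0.1 pF of effective capacitance across the 100 kΩ feedback, the pole sits near 16 MHz, which is why the op amp itself must be much faster than the target signal band.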
Single-Photon Detectors for Time-of-Flight Range Imaging
NASA Astrophysics Data System (ADS)
Stoppa, David; Simoni, Andrea
We live in a three-dimensional (3D) world, and thanks to the stereoscopic vision provided by our two eyes, in combination with the powerful neural network of the brain, we are able to perceive the distance of objects. Nevertheless, despite the huge market volume of digital cameras, solid-state image sensors can capture only a two-dimensional (2D) projection of the scene under observation, losing a variable of paramount importance, i.e., the scene depth. In contrast, 3D vision tools could offer remarkable possibilities for improvement in many areas thanks to the increased accuracy and reliability of the models representing the environment. Among the great variety of distance measuring techniques and detection systems available, this chapter treats only the emerging niche of solid-state, scannerless systems based on the TOF principle and using detectors with SPAD-based pixels. The chapter is organized into three main parts. First, TOF systems and measuring techniques are described. In the second part, the most meaningful sensor architectures for scannerless TOF distance measurement are analyzed, focusing on the circuit building blocks required by time-resolved image sensors. Finally, a performance summary is provided and a perspective on near-future developments of SPAD-TOF sensors is given.
Hybrid MRI-Ultrasound acquisitions, and scannerless real-time imaging.
Preiswerk, Frank; Toews, Matthew; Cheng, Cheng-Chieh; Chiou, Jr-Yuan George; Mei, Chang-Sheng; Schaefer, Lena F; Hoge, W Scott; Schwartz, Benjamin M; Panych, Lawrence P; Madore, Bruno
2017-09-01
To combine MRI, ultrasound, and computer science methodologies toward generating MRI contrast at the high frame rates of ultrasound, inside and even outside the MRI bore. A small transducer, held onto the abdomen with an adhesive bandage, collected ultrasound signals during MRI. Based on these ultrasound signals and their correlations with MRI, a machine-learning algorithm created synthetic MR images at frame rates up to 100 per second. In one particular implementation, volunteers were taken out of the MRI bore with the ultrasound sensor still in place, and MR images were generated on the basis of the ultrasound signal and the learned correlations alone, in a "scannerless" manner. Hybrid ultrasound-MRI data were acquired in eight separate imaging sessions. Locations of liver features in synthetic images were compared with those from acquired images: the mean error was 1.0 pixel (2.1 mm), with best case 0.4 and worst case 4.1 pixels (in the presence of heavy coughing). For results from outside the bore, qualitative validation involved optically tracked ultrasound imaging with/without coughing. The proposed setup can generate an accurate stream of high-speed MR images, up to 100 frames per second, inside or even outside the MR bore. Magn Reson Med 78:897-908, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
A novel optical scanner for laser radar
NASA Astrophysics Data System (ADS)
Yao, Shunyu; Peng, Renjun; Gao, Jianshuang
2013-09-01
Laser radar is ideally suited for object recognition, detection, target tracking, and obstacle avoidance because of its high angular and range resolution. In recent years, scannerless ladar has developed rapidly. In contrast with traditional scanned ladar, scannerless ladar is small and compact and offers a high frame rate, wide field of view, and high reliability. However, scannerless ladar is still at the laboratory stage, and its performance cannot yet meet the demands of practical applications; hence, traditional scanned laser radar remains the mainstream choice. In a scanned ladar system, the optical scanner is the key component, deflecting the laser beam toward the target. We investigated a novel scanner based on the light-guiding property of optical fibers. The fiber bundles are arranged in a special structure connected to a motor. When the motor runs, the laser passes through the fibers on the incident plane, and the laser spot on the output plane sweeps along a straight line at constant speed. The beam direction is then deflected by the transmitting optics, achieving a linear sweep of the target. A laser radar scheme with high speed and a large field of view can thus be realized. Section 1 briefly reviews existing scanners. Section 2 describes the structure of the optical scanner and discusses its practical use in the transmitting and receiving optical paths. Section 3 calculates some characteristics of the scanner. Section 4 reports the simulation and experiments with our prototype.
All-digital full waveform recording photon counting flash lidar
NASA Astrophysics Data System (ADS)
Grund, Christian J.; Harwit, Alex
2010-08-01
Current-generation analog and photon counting flash lidar approaches suffer from limitations in waveform depth, dynamic range, sensitivity, false alarm rates, optical acceptance angle (f/#), optical and electronic cross talk, and pixel density. To address these issues, Ball Aerospace is developing a new approach to flash lidar that employs direct coupling of a photocathode and microchannel plate front end to a high-speed, pipelined, all-digital Read Out Integrated Circuit (ROIC) to achieve photon-counting temporal waveform capture in each pixel on each laser return pulse. A unique characteristic is the absence of performance-limiting analog or mixed-signal components. When implemented in 65 nm CMOS technology, the Ball Intensified Imaging Photon Counting (I2PC) flash lidar FPA technology can record up to 300 photon arrivals in each pixel with 100 ps resolution on each photon return, with up to 6000 range bins in each pixel. The architecture supports near-100% fill factor, fast optical system designs (f/# < 1), and array sizes up to 3000×3000 pixels. Compared to existing technologies, >60 dB ultimate dynamic range improvement and >10^4 reductions in false alarm rates are anticipated, while achieving single-photon range precision better than 1 cm. I2PC significantly extends long-range and low-power hard-target imaging capabilities useful for autonomous hazard avoidance (ALHAT), navigation, imaging vibrometry, and inspection applications, and enables scannerless 3D imaging for distributed-target applications such as range-resolved atmospheric remote sensing, vegetation canopies, and camouflage penetration from terrestrial, airborne, GEO, and LEO platforms. We discuss the I2PC architecture, development status, anticipated performance advantages, and limitations.
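The depth-of-window arithmetic behind the quoted figures can be checked directly (a sketch, not Ball's code): 6000 bins of 100 ps each span roughly a 90 m range window at about 1.5 cm per bin, with sub-centimeter precision then coming from centroiding across the recorded waveform:

```python
C = 299_792_458.0  # speed of light, m/s

def range_window_m(n_bins, bin_width_s):
    """Range depth covered by the recorder: round-trip time window
    n_bins * bin_width, halved because light travels out and back."""
    return n_bins * bin_width_s * C / 2.0

def range_bin_cm(bin_width_s):
    """Single-bin range quantization (before any centroiding gains)."""
    return bin_width_s * C / 2.0 * 100.0
```

range_window_m(6000, 100e-12) comes to just under 90 m, consistent with long-range hard-target imaging from a single return.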
Active 3D camera design for target capture on Mars orbit
NASA Astrophysics Data System (ADS)
Cottin, Pierre; Babin, François; Cantin, Daniel; Deslauriers, Adam; Sylvestre, Bruno
2010-04-01
During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously captured by an orbiting satellite. We present the concept and design of an active 3D camera supporting the orbiter navigation system during the rendezvous and capture phase. The camera aims at providing the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field of view without moving parts (scannerless). The concept exploits the sensitivity and gating capability of a gated intensified camera, supported by a pulsed source based on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The ranging capability is obtained by adequately controlling the timing between the acquisition of 2D images and the emission of the light pulses. Three acquisition modes are identified to accommodate the different levels of ranging and bearing accuracy and the 3D data refresh rate. To produce a single 3D image, each mode requires a different number of images to be processed. These modes can be applied to the different approach phases. The entire concept of operation of this camera is detailed, with an emphasis on the extreme lighting conditions. Its use for other space missions and terrestrial applications is also highlighted. The design is implemented in a prototype with shorter ranging capabilities for concept validation. Preliminary results obtained with this prototype are also presented. This work is financed by the Canadian Space Agency.
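The timing control described here reduces to converting a candidate range into a round-trip gate delay; a sketch over the stated 2 m to 5 km envelope (my illustration, not the flight design):

```python
C = 299_792_458.0  # speed of light, m/s

def gate_delay_ns(range_m):
    """Round-trip delay at which to open the intensifier gate for a
    target at range_m."""
    return 2.0 * range_m / C * 1e9

# Sweep representative points of the 2 m .. 5 km capture envelope.
delays = {r: gate_delay_ns(r) for r in (2.0, 100.0, 5000.0)}
```

The results run from about 13 ns at 2 m to about 33 μs at 5 km, which matches the abstract's nanosecond-to-microsecond pulse-duration range.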
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steele, B.J.
1996-12-31
There are many technologies emerging from this decade that can be used to help the law enforcement community protect the public, as well as public and private facilities, against ever-increasing threats to this country and its resources. These technologies include sensors, closed circuit television (CCTV), access control, contraband detection, communications, control and display, barriers, and various component and system modeling techniques. This paper will introduce some of the various technologies that have been examined for the Department of Energy that could be applied to various law enforcement applications. They include: (1) scannerless laser radar; (2) next generation security systems; (3) response force video information helmet system; (4) access delay technologies; (5) rapidly deployable intrusion detection systems; and (6) cost risk benefit analysis.
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused dataset, called a "textured range image", provides conservators and historians with more reliable information about the investigated object than using both datasets separately. A simple example of the fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. First, we describe the process of data acquisition, then the processing of both datasets into a proper format for the subsequent fusion, and finally the fusion process itself. The process of fusion can be divided into two main parts: transformation and remapping. In the transformation part, the two images are related by matching similar features detected in both images with a proper detector, which results in a transformation matrix enabling transformation of the range image onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The process of image fusion is validated by comparing similar features extracted from both datasets.
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Recently, digital cameras have been advancing rapidly. However, the captured image differs from the scene perceived when the same scenery is viewed with the naked eye. Images of scenes with a wide dynamic range show blown-out highlights and crushed blacks, problems that hardly arise in human vision; this is a contributory cause of the difference between the captured image and the perceived scene. Blown-out highlights and crushed blacks are caused by the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS sensor, and the human visual system: the dynamic range of the captured image is narrower than that of the perceived scene. In order to solve this problem, we propose an automatic method to decide an effective exposure range from the superposition of edges. We integrate multi-step exposure images using this method. In addition, we try to erase pseudo-edges using a process that blends exposure values. As a result, we obtain a pseudo wide dynamic range image automatically.
High dynamic range image acquisition based on multiplex cameras
NASA Astrophysics Data System (ADS)
Zeng, Hairui; Sun, Huayan; Zhang, Tinghua
2018-03-01
High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and better reflecting the real environment, light, and color. Current methods that synthesize a high dynamic range image from differently exposed image sequences cannot adapt to dynamic scenes: they fail to handle moving targets, producing ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplexed camera system is proposed. First, differently exposed image sequences are captured with a camera array; the deviation between images is estimated with a derivative optical flow method based on color gradients, and the images are aligned. Then, a high dynamic range fusion weighting function is established by combining the inverse camera response function with the inter-image deviation and applied to generate a high dynamic range image. Experiments show that the proposed method effectively obtains high dynamic range images in dynamic scenes and achieves good results.
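A toy per-pixel version of exposure-stack fusion illustrates the weighting idea, assuming (unlike the paper) a linear camera response and pre-aligned images; the weight shape and thresholds are my choices:

```python
def hat_weight(v, low=0.05, high=0.95):
    """Triangular confidence weight on a [0, 1] pixel value: trust
    mid-range readings, distrust near-under/over-exposed ones."""
    if v <= low or v >= high:
        return 0.0
    mid = (low + high) / 2.0
    return 1.0 - abs(v - mid) / (mid - low)

def fuse_pixel(values, exposures):
    """Weighted radiance estimate for one pixel across an exposure stack.
    With a linear response, radiance ~ value / exposure_time."""
    num = sum(hat_weight(v) * v / t for v, t in zip(values, exposures))
    den = sum(hat_weight(v) for v in values)
    return num / den if den > 0 else 0.0
```

Two consistent readings of the same radiance at different exposures (0.2 at 1x, 0.4 at 2x) fuse back to the common radiance 0.2, with the longer, better-exposed reading weighted more heavily.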
Characteristics of different frequency ranges in scanning electron microscope images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sim, K. S., E-mail: kssim@mmu.edu.my; Nia, M. E.; Tan, T. L.
2015-07-22
We demonstrate a new approach to characterizing the frequency ranges in general scanning electron microscope (SEM) images. First, pure frequency images are generated from low frequency to high frequency, and then the magnification of each type of frequency image is implemented. By comparing the edge percentage of an SEM image to the self-generated frequency images, we can define the frequency ranges of the SEM image. Characterizing the frequency ranges of SEM images benefits their further processing and analysis, such as noise filtering and contrast enhancement.
Target recognition of log-polar ladar range images using moment invariants
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong
2017-01-01
The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on the recognition result, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation, and scaling invariance of the combined moments is lost; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes in the field of view; (iii) as the object position changes from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that it is better to divide the field of view into a recognition area and a searching area in real applications.
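The log-polar sampling pattern discussed here can be generated in a few lines: ring radii grow geometrically while wedge angles stay uniform, so resolution is densest at the center of the field of view (illustrative parameters, not the paper's):

```python
import math

def log_polar_sample_points(n_rings, n_wedges, r_min, r_max, cx=0.0, cy=0.0):
    """(x, y) sample grid of a log-polar pattern centred at (cx, cy)."""
    pts = []
    ratio = (r_max / r_min) ** (1.0 / (n_rings - 1))  # geometric ring spacing
    for i in range(n_rings):
        r = r_min * ratio ** i
        for j in range(n_wedges):
            th = 2.0 * math.pi * j / n_wedges
            pts.append((cx + r * math.cos(th), cy + r * math.sin(th)))
    return pts
```

With 3 rings between radii 1 and 4, the rings land at 1, 2, and 4: the spacing between samples doubles with each ring, which is exactly why recognition degrades as a target moves toward the sparsely sampled edge.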
A detail enhancement and dynamic range adjustment algorithm for high dynamic range images
NASA Astrophysics Data System (ADS)
Xu, Bo; Wang, Huachuang; Liang, Mingtao; Yu, Cong; Hu, Jinlong; Cheng, Hua
2014-08-01
Although high dynamic range (HDR) images contain large amounts of information, they have weak texture and low contrast. What's more, these images are difficult to reproduce on low dynamic range display mediums. If much more information is to be acquired when these images are displayed on PCs, some specific transforms are needed, such as compressing the dynamic range, enhancing the portions of little difference in original contrast, and highlighting the texture details while keeping the parts of large contrast. To this end, a multi-scale guided filter enhancement algorithm, derived from the single-scale guided filter through analysis of a non-physical model, is proposed in this paper. First, the algorithm decomposes the original HDR images into a base image and detail images of different scales, and then it adaptively selects a transform function which acts on the enhanced detail images and original images. Comparing the treatment effects on HDR images and low dynamic range (LDR) images of different scene features shows that this algorithm, while maintaining the hierarchy and texture details of images, not only improves the contrast and enhances the details of images but also adjusts the dynamic range well. Thus, it is well suited for human observation or analytical processing by machines.
Effects of Resolution, Range, and Image Contrast on Target Acquisition Performance.
Hollands, Justin G; Terhaar, Phil; Pavlovic, Nada J
2018-05-01
We sought to determine the joint influence of resolution, target range, and image contrast on the detection and identification of targets in simulated naturalistic scenes. Resolution requirements for target acquisition have previously been developed from threshold values obtained with imaging systems in which target range was fixed and image characteristics were determined by the system. Subsequent work has examined the influence of factors such as target range and image contrast on target acquisition. We varied the resolution and contrast of static images in two experiments. Participants (soldiers) decided whether a human target was located in the scene (detection task) or whether a target was friendly or hostile (identification task). Target range was also varied (50-400 m). In Experiment 1, 30 participants saw color images with a single target exemplar. In Experiment 2, another 30 participants saw monochrome images containing different target exemplars. The effects of target range and image contrast were qualitatively different above and below 6 pixels per meter of target for both tasks in both experiments. Target detection and identification performance were a joint function of image resolution, range, and contrast for both color and monochrome images. The beneficial effects of increasing resolution on target acquisition performance are greater for closer (larger) targets.
Efficient generation of discontinuity-preserving adaptive triangulations from range images.
Garcia, Miguel Angel; Sappa, Angel Domingo
2004-10-01
This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image, in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two-and-one-half-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the triangular mesh is iteratively modified by applying an efficient edge-flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
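A 1-D toy of the curvature-biased sampling stage (the paper's noniterative selection runs on the full 2-D range image; this sketch only shows the bias toward high-curvature locations, with function names of my own):

```python
def curvature_proxy(row):
    """Absolute second difference as a cheap curvature measure along a
    single range-image scanline."""
    return [abs(row[i - 1] - 2.0 * row[i] + row[i + 1])
            for i in range(1, len(row) - 1)]

def adaptive_sample(row, n_points):
    """Pick n_points column indices, biased toward high-curvature columns,
    so samples concentrate at creases and jumps."""
    scores = curvature_proxy(row)
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(i + 1 for i in ranked[:n_points])  # +1: proxy skips col 0
```

On a scanline with a single crease, the first sample lands exactly on the crease, which is the discontinuity-preserving behaviour the mesh stage depends on.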
Fast range estimation based on active range-gated imaging for coastal surveillance
NASA Astrophysics Data System (ADS)
Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang
2012-11-01
Coastal surveillance is very important because it is useful for search and rescue, detection of illegal immigration, harbor security, and so on. Furthermore, range estimation is critical for precisely detecting the target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night with no moonlight. Generally, before the target can be detected, it is necessary to sweep the delay time until the target is captured. There are two operating modes for a range-gated imaging sensor: a passive imaging mode and a gate-viewing mode. First, the sensor runs in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, we obtain the coarse range of the target from the imaging geometry/projective transform. Then the sensor switches to gate-viewing mode: applying microsecond laser pulses and a matching sensor gate width, we obtain the range of targets from at least two consecutive images with trapezoid-shaped range-intensity profiles. Based on the first step, we can calculate the rough range and quickly set the delay time at which the target is detected. This technique overcomes the depth-resolution limitation of 3D active imaging and enables super-resolution depth mapping with a reduction in imaging data processing. With these two steps, we can quickly obtain the distance between the object and the sensor.
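The gate-viewing step can be sketched for a single pixel under a simplified triangular range-intensity profile (the paper's profiles are trapezoidal and its calibration differs; the ratio-of-intensities idea is the same, and all names here are mine):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_gate_pair(i1, i2, gate_start_delay_s, gate_width_s):
    """Estimate range from two consecutive gated images whose
    range-intensity profiles overlap across one gate slice."""
    near_edge = C * gate_start_delay_s / 2.0  # range of the slice's near edge
    depth = C * gate_width_s / 2.0            # depth covered by one gate
    frac = i2 / (i1 + i2)  # 0 -> near edge of the slice, 1 -> far edge
    return near_edge + frac * depth
```

With a 1 μs gate delay and a 100 ns gate, equal intensities in the two images place the target mid-slice, at about 157.4 m: depth is resolved far more finely than the ~15 m gate depth itself, which is the super-resolution effect the abstract mentions.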
Target recognition for ladar range image using slice image
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Wang, Liang
2015-12-01
A shape descriptor and a complete shape-based recognition system using slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by the three-dimensional Hough transform and a corresponding mathematical transformation. The system consists of two processes: model library construction and recognition. In the model library construction process, a series of range images is obtained by sampling the model object at preset attitude angles. All the range images are then converted into slice images. The number of slice images is reduced by clustering analysis and finding a representative, to reduce the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and the recognition result depends on this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and a comparison between the slice-image representation and the moment-invariants representation is performed. The experimental results show that, both without noise and with ladar noise, the system has a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-01-01
With their significant features, the applications of complementary metal-oxide semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, and active/passive range finders. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and light sources such as lasers or LEDs, and their implementation cost is quite low. Image processing software that adjusts the exposure time (ET) of the CMOS image sensor to enhance the performance of the triangulation-based range finders was also developed. An extensive series of experiments was conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active and passive range finders are better than 0.6% and 0.25% within measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applying the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrate that our range finders are well suited for distance measurements in this field. PMID:27879789
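The triangulation relation behind both range finders, and the familiar quadratic growth of range error with distance, can be written down directly (symbol names and example numbers are mine, not the paper's):

```python
def triangulation_range_m(baseline_m, focal_px, disparity_px):
    """Classic triangulation: range = f * b / d, with focal length and
    disparity both expressed in pixels."""
    return focal_px * baseline_m / disparity_px

def range_resolution_m(range_m, baseline_m, focal_px, disparity_error_px=1.0):
    """Range uncertainty for a given disparity error: dR = R^2 * dd / (f * b),
    i.e. it grows with the square of the range."""
    return range_m ** 2 * disparity_error_px / (focal_px * baseline_m)
```

With a 10 cm baseline and a 1000-pixel focal length, a 20-pixel disparity maps to 5 m, and a one-pixel disparity error there costs 0.25 m: this is why the passive finder's percentage resolution depends so strongly on the working range.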
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
Heterodyne range imaging as an alternative to photogrammetry
NASA Astrophysics Data System (ADS)
Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard
2007-01-01
Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high-precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing both photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.
Vision based obstacle detection and grouping for helicopter guidance
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Chatterji, Gano
1993-01-01
Electro-optical sensors can be used to compute range to objects in the flight path of a helicopter. The computation is based on the optical flow/motion at different points in the image. The motion algorithms provide a sparse set of ranges to discrete features in the image sequence as a function of azimuth and elevation. For obstacle avoidance guidance and display purposes, this discrete set of ranges, numbering from a few hundred to several thousand, needs to be grouped into sets which correspond to objects in the real world. This paper presents a new method for object segmentation based on clustering the sparse range information provided by motion algorithms together with the spatial relations provided by the static image. The range values are initially grouped into clusters based on depth. Subsequently, the clusters are modified by using the K-means algorithm in the inertial horizontal plane and the minimum spanning tree algorithm in the image plane. The object grouping allows interpolation within a group and enables the creation of dense range maps. Researchers in robotics have used densely scanned sequences of laser range images to build three-dimensional representations of the outside world; thus, modeling techniques developed for dense range images can be extended to sparse range images. The paper presents object segmentation results for a sequence of flight images.
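The initial depth-grouping step described above can be sketched as follows. This is a one-dimensional illustration under assumed units; the `gap` threshold is our own choice, and the paper's K-means and minimum-spanning-tree refinements are omitted:

```python
import numpy as np

def cluster_by_depth(ranges, gap=5.0):
    """Group range samples into clusters wherever the depth jump
    between consecutive sorted samples exceeds `gap` (same units
    as the ranges). Returns one cluster label per input sample."""
    ranges = np.asarray(ranges, dtype=float)
    order = np.argsort(ranges)
    sorted_r = ranges[order]
    labels = np.zeros(len(sorted_r), dtype=int)
    for i in range(1, len(sorted_r)):
        # start a new cluster when the depth gap is too large
        labels[i] = labels[i - 1] + (sorted_r[i] - sorted_r[i - 1] > gap)
    out = np.empty(len(ranges), dtype=int)
    out[order] = labels          # map labels back to input order
    return out
```

In the paper these depth clusters would then be refined with K-means in the horizontal plane and a minimum spanning tree in the image plane.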
Use of laser range finders and range image analysis in automated assembly tasks
NASA Technical Reports Server (NTRS)
Alvertos, Nicolas; Dcunha, Ivan
1990-01-01
This work proposes to study the effect of filtering processes on range images and to evaluate the performance of two different laser range mappers. Median filtering is utilized to remove noise from the range images. First- and second-order derivatives are then utilized to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients which describe 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm is developed to compare the performance of the two laser range mappers based upon the range depth information of surfaces generated by each of the mappers. Furthermore, an approach based on 2-D analytic geometry is also proposed which serves as a basis for the recognition of regular 3-D geometric objects.
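The median filtering step used here to denoise range images can be illustrated with a minimal 3x3 implementation; the example values are ours, chosen to mimic an isolated dropout in a laser range map:

```python
import numpy as np

def median3x3(img):
    """Apply a 3x3 median filter (edges handled by reflection).
    Median filtering removes isolated spikes while preserving
    depth edges better than linear smoothing."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="reflect")
    # stack the nine shifted views of the image, one per window position
    stacked = np.stack([padded[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)])
    return np.median(stacked, axis=0)

depth = np.full((5, 5), 2.0)   # flat surface at 2 m
depth[2, 2] = 40.0             # spurious return (noise spike)
smoothed = median3x3(depth)    # spike replaced by neighborhood median
```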
Wu, Jih-Huah; Pen, Cheng-Chung; Jiang, Joe-Air
2008-03-13
With their significant features, the applications of complementary metal-oxide-semiconductor (CMOS) image sensors cover a very extensive range, from industrial automation to traffic applications such as aiming systems, blind guidance, active/passive range finders, etc. In this paper, CMOS image sensor-based active and passive range finders are presented. The measurement scheme of the proposed active/passive range finders is based on a simple triangulation method. The designed range finders chiefly consist of a CMOS image sensor and some light sources such as lasers or LEDs. The implementation cost of our range finders is quite low. Image processing software to adjust the exposure time (ET) of the CMOS image sensor to enhance the performance of triangulation-based range finders was also developed. An extensive series of experiments were conducted to evaluate the performance of the designed range finders. From the experimental results, the distance measurement resolutions achieved by the active range finder and the passive range finder can be better than 0.6% and 0.25% within the measurement ranges of 1 to 8 m and 5 to 45 m, respectively. Feasibility tests on applications of the developed CMOS image sensor-based range finders to the automotive field were also conducted. The experimental results demonstrated that our range finders are well-suited for distance measurements in this field.
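The simple triangulation scheme mentioned above reduces to R = f·B/d, where B is the baseline between source and sensor, f the focal length, and d the spot's displacement on the sensor. A small sketch with illustrative parameter values (not the paper's actual hardware figures):

```python
def triangulation_range(baseline_m, focal_mm, pixel_pitch_um, disparity_px):
    """Triangulation range finding: a laser spot imaged at
    `disparity_px` pixels off the optical axis lies at
    R = f * B / d, with d the displacement on the sensor."""
    d_mm = disparity_px * pixel_pitch_um * 1e-3   # spot displacement, mm
    return (focal_mm * baseline_m) / d_mm          # range, metres
```

Note that for a fixed pixel size the disparity shrinks as 1/R, which is why the resolution figures quoted in the abstract degrade with distance.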
Video-rate or high-precision: a flexible range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.
2008-02-01
A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach, with more than four samples per beat cycle, provides better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
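The per-pixel phase extraction behind such heterodyne ranging can be sketched as a single-bin Fourier estimate over the N samples captured per beat cycle. This is a generic phase estimator, not the authors' implementation; using more samples per cycle suppresses harmonic-induced nonlinearity, as the abstract notes:

```python
import numpy as np

def beat_phase(samples):
    """Estimate the phase of a sampled beat signal from N equally
    spaced samples covering one beat cycle. The phase maps linearly
    to range for a given modulation frequency."""
    n = len(samples)
    k = np.arange(n)
    # correlate against one cycle of cosine and sine (DFT bin 1)
    re = np.sum(samples * np.cos(2 * np.pi * k / n))
    im = np.sum(samples * np.sin(2 * np.pi * k / n))
    return np.arctan2(im, re)
```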
High dynamic range coding imaging system
NASA Astrophysics Data System (ADS)
Wu, Renfan; Huang, Yifan; Hou, Guangqi
2014-10-01
We present a high dynamic range (HDR) imaging system design scheme based on a coded aperture technique. This scheme yields HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns, then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. Existing algorithms are then used to fuse those LDR images into an HDR image for display. We build an optical simulation model and generate simulation images to verify the novel system.
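The fusion of differently exposed LDR images into one HDR radiance map can be illustrated with a standard weighted-average merge. This is a common generic scheme (hat weighting of mid-range pixels), not the paper's coded-aperture pipeline:

```python
import numpy as np

def merge_exposures(images, times):
    """Merge 8-bit LDR exposures into a radiance map: each pixel's
    radiance estimate is a weighted average of pixel/exposure_time,
    weighting well-exposed (mid-range) pixels most."""
    acc = np.zeros(images[0].shape, dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        x = img.astype(float)
        # hat weight: 1 at mid-gray, 0 at the clipped extremes 0/255
        w = 1.0 - np.abs(x / 255.0 - 0.5) * 2.0
        acc += w * x / t
        wsum += w
    return acc / np.maximum(wsum, 1e-9)
```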
Image interpolation used in three-dimensional range data compression.
Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian
2016-05-20
Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
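The resolution-reduction and recovery step described above can be sketched with separable linear interpolation. This simplification skips the paper's virtual fringe-projection encoding and shows only the downsample/upsample structure:

```python
import numpy as np

def downsample2(img):
    """Halve resolution by keeping every other sample (the paper's
    interpolation-based reduction is more elaborate; this is a sketch)."""
    return img[::2, ::2]

def upsample2_linear(small, shape):
    """Scale a low-resolution image back up to `shape` with
    separable 1-D linear interpolation (rows, then columns)."""
    rows = np.linspace(0, small.shape[0] - 1, shape[0])
    cols = np.linspace(0, small.shape[1] - 1, shape[1])
    # interpolate along rows for each column
    tmp = np.array([np.interp(rows, np.arange(small.shape[0]), small[:, j])
                    for j in range(small.shape[1])]).T
    # then along columns for each row
    return np.array([np.interp(cols, np.arange(small.shape[1]), tmp[i])
                     for i in range(tmp.shape[0])])
```

The compression gain comes from storing only the quarter-size image; the recovery error is the interpolation residual the paper evaluates.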
Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry
NASA Technical Reports Server (NTRS)
Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)
2016-01-01
A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine the first and second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
Efficient geometric rectification techniques for spectral analysis algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Pang, S. S.; Curlander, J. C.
1992-01-01
The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented along iso-range and iso-Doppler lines, a curved grid format. This phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid together to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as the high squint angle and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.
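The structure of one of the two 1-D resampling passes can be sketched as follows; the mapping from output columns to curved-grid source positions is here an arbitrary placeholder (the real mapping comes from the fan-shape/Doppler geometry, which this sketch does not reproduce):

```python
import numpy as np

def resample_rows(img, src_coords):
    """One 1-D rectification pass: for each row, interpolate the
    input at the (generally non-integer) source column positions
    in `src_coords[row]`, mapping the curved grid to a straight one.
    A second, analogous pass over columns completes rectification."""
    cols = np.arange(img.shape[1])
    return np.array([np.interp(src_coords[i], cols, img[i])
                     for i in range(img.shape[0])])
```

Factoring the 2-D warp into two 1-D passes is what keeps the rectification cheap enough for near real-time throughput.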
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with enhanced depth range of a 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images with various positions to the micro lens array. Based on resolution priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of a traditional augmented reality display.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, E; Chen, H; Polf, J
2016-06-15
Purpose: To report on the initial development of a clinical 3-dimensional (3D) prompt gamma (PG) imaging system for proton radiotherapy range verification. Methods: The new imaging system under development consists of a prototype Compton camera (CC) to measure PG emission during proton beam irradiation and software to reconstruct, display, and analyze 3D images of the PG emission. For initial tests of the system, PGs were measured with the prototype CC during a 200 cGy dose delivery with clinical proton pencil beams (ranging from 100 MeV to 200 MeV) to a water phantom. Measurements were also carried out with the CC placed 15 cm from the phantom for a full-range 150 MeV pencil beam and with its range shifted by 2 mm. Reconstructed images of the PG emission were displayed by the clinical PG imaging software and compared to the dose distributions of the proton beams calculated by a commercial treatment planning system. Results: Measurements made with the new PG imaging system showed that a 3D image could be reconstructed from PGs measured during the delivery of 200 cGy of dose, and that shifts in the Bragg peak range of as little as 2 mm could be detected. Conclusion: Initial tests of a new PG imaging system show its potential to provide 3D imaging and range verification for proton radiotherapy. Based on these results, we have begun work to improve the system with the goal that images can be produced from delivery of as little as 20 cGy, so that the system could be used for in-vivo proton beam range verification on a daily basis.
Hdr Imaging for Feature Detection on Detailed Architectural Scenes
NASA Astrophysics Data System (ADS)
Kontogianni, G.; Stathopoulou, E. K.; Georgopoulos, A.; Doulamis, A.
2015-02-01
3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which demand 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture, and in this way increase the amount of detail contained in the image. Experimental results of this study support this assumption, as state-of-the-art feature detectors are examined on both standard dynamic range and HDR images.
Luminescence imaging of water during carbon-ion irradiation for range estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Komori, Masataka; Koyama, Shuji
Purpose: The authors previously reported successful luminescence imaging of water during proton irradiation and its application to range estimation. However, since the feasibility of this approach for carbon-ion irradiation remained unclear, the authors conducted luminescence imaging during carbon-ion irradiation and estimated the ranges. Methods: The authors placed a pure-water phantom on the patient couch of a carbon-ion therapy system and measured the luminescence images with a high-sensitivity, cooled charge-coupled device camera during carbon-ion irradiation. The authors also carried out imaging of three types of phantoms (tap water, an acrylic block, and a plastic scintillator) and compared their intensities and distributions with those of a phantom containing pure water. Results: The luminescence images of pure-water phantoms during carbon-ion irradiation showed clear Bragg peaks, and the carbon-ion ranges measured from the images were almost the same as those obtained by simulation. The image of the tap-water phantom showed almost the same distribution as that of the pure-water phantom. The acrylic block phantom's luminescence image produced seven times higher luminescence and a 13% shorter range than those of the water phantoms; the range with the acrylic phantom generally matched the calculated value. The plastic scintillator produced ∼15 000 times more light than water. Conclusions: Luminescence imaging during carbon-ion irradiation of water is not only possible but also a promising method for range estimation in carbon-ion therapy.
Near-field three-dimensional radar imaging techniques and applications.
Sheen, David; McMakin, Douglas; Hall, Thomas
2010-07-01
Three-dimensional radio frequency imaging techniques have been developed for a variety of near-field applications, including radar cross-section imaging, concealed weapon detection, ground penetrating radar imaging, through-barrier imaging, and nondestructive evaluation. These methods employ active radar transceivers operating over a wide span of frequencies, from less than 100 MHz to in excess of 350 GHz, with the frequency range customized for each application. Computational wavefront reconstruction imaging techniques have been developed that optimize the resolution and illumination quality of the images. In this paper, rectilinear and cylindrical three-dimensional imaging techniques are described along with several application results.
System and Method for Scan Range Gating
NASA Technical Reports Server (NTRS)
Lindemann, Scott (Inventor); Zuk, David M. (Inventor)
2017-01-01
A system for scanning light to define a range gated signal includes a pulsed coherent light source that directs light into the atmosphere, a light gathering instrument that receives the light modified by atmospheric backscatter and transfers the light onto an image plane, a scanner that scans collimated light from the image plane to form a range gated signal from the light modified by atmospheric backscatter, a control circuit that coordinates timing of a scan rate of the scanner and a pulse rate of the pulsed coherent light source so that the range gated signal is formed according to a desired range gate, an optical device onto which an image of the range gated signal is scanned, and an interferometer to which the image of the range gated signal is directed by the optical device. The interferometer is configured to modify the image according to a desired analysis.
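The timing coordination at the heart of such range gating follows directly from the round-trip time of light: a gate opened at delay t = 2R/c after the pulse admits only backscatter from range R. A small sketch (the helper names are ours, not from the patent):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_ns(range_m):
    """Delay between pulse emission and gate opening for a
    range gate starting at `range_m`: t = 2R/c (round trip)."""
    return 2.0 * range_m / C * 1e9

def gate_width_ns(depth_m):
    """Gate duration needed to span a range-gate depth of `depth_m`."""
    return 2.0 * depth_m / C * 1e9
```

The control circuit described above is, in effect, enforcing this relation between the pulse clock and the scanner so each scan line samples the desired gate.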
NASA Astrophysics Data System (ADS)
Alzeyadi, Ahmed; Yu, Tzuyang
2018-03-01
Nondestructive evaluation (NDE) is an indispensable approach for the sustainability of critical civil infrastructure systems such as bridges and buildings. Recently, microwave/radar sensors have been widely used for assessing the condition of concrete structures. Among existing imaging techniques in microwave/radar sensors, synthetic aperture radar (SAR) imaging enables researchers to conduct surface and subsurface inspection of concrete structures in the range-cross-range representation of SAR images. The objective of this paper is to investigate the range effect of concrete specimens in SAR images at various ranges (15 cm, 50 cm, 75 cm, 100 cm, and 200 cm). One concrete panel specimen (water-to-cement ratio = 0.45) of 30-cm-by-30-cm-by-5-cm was manufactured and scanned by a 10 GHz SAR imaging radar sensor inside an anechoic chamber. Scatterers in SAR images representing two corners of the concrete panel were used to estimate the width of the panel. It was found that the range-dependent pattern of corner scatterers can be used to predict the width of concrete panels. Also, the maximum SAR amplitude decreases when the range increases. An empirical model was also proposed for width estimation of concrete panels.
Local dynamic range compensation for scanning electron microscope imaging system.
Sim, K S; Huang, Y H
2015-01-01
This paper presents an extension of earlier work, introducing modified dynamic range histogram modification (MDRHM). The technique is used to enhance the scanning electron microscope (SEM) imaging system. Compared with conventional histogram modification compensators, this technique performs histogram profiling by extending the dynamic range of each tile of an image to the full 0-255 range while retaining its histogram shape. The proposed technique yields better image compensation than conventional methods. © Wiley Periodicals, Inc.
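The core per-tile operation can be illustrated with a linear stretch that maps each tile's minimum and maximum onto 0-255 while leaving the histogram's shape unchanged; the published MDRHM method involves additional histogram profiling not reproduced here:

```python
import numpy as np

def stretch_tile(tile, lo=0.0, hi=255.0):
    """Linearly map a tile's min..max onto [lo, hi]. A linear map
    rescales the histogram without reordering or merging bins,
    so the histogram shape is preserved."""
    t = tile.astype(float)
    tmin, tmax = t.min(), t.max()
    if tmax == tmin:
        return np.full_like(t, lo)   # flat tile: nothing to stretch
    return (t - tmin) / (tmax - tmin) * (hi - lo) + lo
```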
Influence of range-gated intensifiers on underwater imaging system SNR
NASA Astrophysics Data System (ADS)
Wang, Xia; Hu, Ling; Zhi, Qiang; Chen, Zhen-yue; Jin, Wei-qi
2013-08-01
Range-gated technology has been an active research field in recent years because it effectively eliminates backscattering. As a result, it can enhance the contrast between a target and its background and extend the working distance of the imaging system. An underwater imaging system must be able to image in low-light-level conditions as well as to eliminate the backscattering effect, which means the receiver needs a high-speed external trigger function, high resolution, high sensitivity, low noise, and a wide gain dynamic range. For an intensifier, the noise characteristics directly limit the observation quality and range of the imaging system. Background noise may decrease image contrast and sharpness, and can even bury the signal, making it impossible to recognize the target; it is therefore important to investigate the noise characteristics of intensifiers. SNR is an important parameter reflecting the noise features of a system. Using an underwater laser range-gated imaging prediction model and linear SNR system theory, the gated imaging noise performance of commercially available super second-generation and generation-III intensifiers was analyzed theoretically. Based on the active laser underwater range-gated imaging model, the effect of gated intensifiers on the system and the relationship between system SNR and MTF were studied. Through theoretical and simulation analysis of image intensifier background noise and SNR, the different influences of super second-generation and generation-III ICCDs on system SNR were obtained. A range-gated system SNR formula was put forward, and the effects of the two kinds of ICCD on the system were compared in detail through MATLAB simulation.
All this work lays a theoretical foundation for further eliminating the backscattering effect, improving image SNR, and designing and manufacturing higher-performance underwater range-gated imaging systems.
Research on range-gated laser active imaging seeker
NASA Astrophysics Data System (ADS)
You, Mu; Wang, PengHui; Tan, DongJie
2013-09-01
Compared with other imaging methods such as millimeter-wave, infrared, and visible-light imaging, laser imaging provides both a 2-D array of reflected intensity data and a 2-D array of range data, the latter being the most important data for autonomous target acquisition. In terms of application, it can be widely used in military fields such as radar, guidance, and fuzing. In this paper, we present a laser active imaging seeker system based on range-gated laser transmitter and sensor technology. The seeker system consists of two important parts. One is the laser imaging system, which uses a negative lens to diverge the light from a pulsed laser to flood-illuminate a target; return light is collected by a camera lens, and each laser pulse triggers the camera delay and shutter. The other is a stabilization gimbal, designed as a structure rotatable in both azimuth and elevation. The laser imaging system consists of a transmitter and a receiver. The transmitter is based on diode-pumped solid-state lasers that are passively Q-switched at a 532 nm wavelength. A visible wavelength was chosen because the receiver uses a Gen III image intensifier tube with a spectral sensitivity limited to wavelengths below 900 nm. The receiver couples the image intensifier tube's microchannel plate into a high-sensitivity charge-coupled device camera. Images have been taken at ranges over one kilometer and can be taken at much longer ranges in better weather. The image frame frequency can be changed according to guidance requirements, with a modifiable range gate. The instantaneous field of view of the system was found to be 2×2 deg. Since completion of system integration, the seeker system has gone through a series of tests both in the lab and in the outdoor field.
Two different kinds of buildings, located at ranges from 200 m up to 1000 m, were chosen as targets. To simulate the dynamic change of range between missile and target, the seeker system was placed on a truck driven along a road at a specified speed. The test results show good image quality and good performance of the seeker system.
Method of passive ranging from infrared image sequence based on equivalent area
NASA Astrophysics Data System (ADS)
Yang, Weiping; Shen, Zhenkang
2007-11-01
The range between a missile and its target is important not only to the missile control component but also to automatic target recognition, so studying the technique of passive ranging from infrared images has important theoretical and practical significance. Here we attempt to obtain the range between a guided missile and a target, to aid target identification or evasion. Estimating the distance between missile and target is currently an active and difficult research topic. As is well known, an infrared imaging detector cannot measure range by itself, which restricts the functions of a guidance information processing system based on infrared images. To break through this technical puzzle, we investigated the principle of infrared imaging and, after analyzing the geometric imaging relationship between the guided missile and the target, proposed a method of passive ranging based on equivalent area and provided mathematical analytic formulas. Validation experiments demonstrate that the presented method is effective; the relative error can be as low as 10% in some circumstances.
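The geometric idea behind area-based passive ranging can be sketched with the pinhole camera model: the image's linear scale is f/R, so a target of known physical area A shrinks on the sensor as 1/R². This is the underlying scaling law only; the paper's "equivalent area" formulation and its analytic formulas are not reproduced, and the parameter values are illustrative:

```python
import math

def range_from_area(focal_m, target_area_m2, image_area_m2):
    """Pinhole model: image linear magnification is f/R, hence
    A_img = A_tgt * (f/R)**2  =>  R = f * sqrt(A_tgt / A_img)."""
    return focal_m * math.sqrt(target_area_m2 / image_area_m2)
```

The known (or assumed) physical area of the target is what makes ranging possible from a single passive sensor.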
Robust image registration for multiple exposure high dynamic range image synthesis
NASA Astrophysics Data System (ADS)
Yao, Susu
2011-03-01
Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images that are captured with different exposure times. Illumination change and photometric distortion between two images can result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of changes in image brightness, and to use phase cross-correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapped regions due to photometric distortion, evolutionary programming is applied to search for accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for under- and over-exposed image registration. It has been applied to align LDR images for synthesizing high-quality HDR images.
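The Fourier-domain phase cross-correlation step can be sketched as follows for integer shifts. This is the standard phase-only correlation technique; the paper's phase-congruency front end and sub-pixel evolutionary search are omitted:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking image `a` to image `b`
    from the normalized cross-power spectrum (phase-only correlation),
    which discards magnitude and is therefore insensitive to global
    brightness differences between exposures."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.abs(F) + 1e-12            # keep phase, discard magnitude
    corr = np.fft.ifft2(F).real       # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image wrap around to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```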
Flash trajectory imaging of target 3D motion
NASA Astrophysics Data System (ADS)
Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang
2011-03-01
We present a flash trajectory imaging technique which can directly obtain a target's trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that one can directly obtain the moving trajectory. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments which successfully obtained the trajectory of a falling shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give the motion parameters of moving targets.
Unsynchronized scanning with a low-cost laser range finder for real-time range imaging
NASA Astrophysics Data System (ADS)
Hatipoglu, Isa; Nakhmani, Arie
2017-06-01
Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage, agriculture, forestry, reverse engineering. One of the most popular range-measuring technologies is the laser scanner, due to its several advantages: long range, high precision, real-time measurement capabilities, and no dependence on lighting conditions. However, laser scanners are very costly, and their high cost prevents widespread use. Thanks to the latest developments in technology, low-cost, reliable, fast, and lightweight 1D laser range finders (LRFs) are now available. A low-cost 1D LRF with a scanning mechanism, which steers the laser beam to cover additional dimensions, makes it possible to capture a depth map. In this work, we present unsynchronized scanning with a low-cost LRF to decrease the scanning period and reduce the vibrations caused by the stop-scan motion of synchronized scanning. Moreover, we developed an algorithm for the alignment of unsynchronized raw data and propose a range image post-processing framework. The proposed technique yields a range imaging system for a fraction of the price of its counterparts. The results show that the proposed method can fulfill the need for low-cost laser scanning for range imaging of static environments; the most significant limitation of the method is the scanning period, which is about 2 minutes for 55,000 range points (a 250x220 image). In contrast, scanning the same image takes around 4 minutes with synchronized scanning. Once faster, longer-range, and narrower-beam LRFs are available, the methods proposed in this work can produce better results.
NASA Astrophysics Data System (ADS)
Turpin, Terry M.; Lafuse, James L.
1993-02-01
ImSyn™ is an image synthesis technology, developed and patented by Essex Corporation. ImSyn™ can provide compact, low-cost, and low-power solutions to some of the most difficult image synthesis problems existing today. The inherent simplicity of ImSyn™ enables the manufacture of low-cost and reliable photonic systems for imaging applications ranging from airborne reconnaissance to doctor's-office ultrasound. The initial application of ImSyn™ technology has been to SAR processing; however, it has a wide range of applications, such as: image correlation, image compression, acoustic imaging, x-ray tomography (CAT, PET, SPECT), magnetic resonance imaging (MRI), microscopy, and range-Doppler mapping (extended TDOA/FDOA). This paper describes ImSyn™ in terms of synthetic aperture microscopy and then shows how the technology can be extended to ultrasound and synthetic aperture radar. The synthetic aperture microscope (SAM) enables high-resolution three-dimensional microscopy with greater dynamic range than real-aperture microscopes. SAM produces complex image data, enabling the use of coherent image processing techniques. Most importantly, SAM produces the image data in a form that is easily manipulated by a digital image processing workstation.
MEMS FPI-based smartphone hyperspectral imager
NASA Astrophysics Data System (ADS)
Rissanen, Anna; Saari, Heikki; Rainio, Kari; Stuns, Ingmar; Viherkanto, Kai; Holmlund, Christer; Näkki, Ismo; Ojanen, Harri
2016-05-01
This paper demonstrates a mobile phone-compatible hyperspectral imager based on a tunable MEMS Fabry-Perot interferometer (FPI). The realized iPhone 5s hyperspectral imager (HSI) demonstrator utilizes a visible-range MEMS FPI tunable filter whose Bragg reflectors consist of atomic-layer-deposited (ALD) Al2O3/TiO2 thin films. Characterization results for the mobile phone hyperspectral imager, utilizing a MEMS FPI chip optimized for 500 nm, are presented; the operating range is λ = 450 - 550 nm with FWHM between 8 - 15 nm. A configuration of two cascaded FPIs (λ = 500 nm and λ = 650 nm) combined with an RGB colour camera is also presented. With this tandem configuration, the overall wavelength tuning range of MEMS hyperspectral imagers can be extended beyond that of a single FPI chip. Potential applications of mobile hyperspectral imagers in the vis-NIR range include authentication, counterfeit detection, and health/wellness and food sensing.
Luminescence imaging of water during uniform-field irradiation by spot scanning proton beams
NASA Astrophysics Data System (ADS)
Komori, Masataka; Sekihara, Eri; Yabe, Takuya; Horita, Ryo; Toshito, Toshiyuki; Yamamoto, Seiichi
2018-06-01
Luminescence has been observed during pencil-beam proton irradiation of a water phantom, and the beam range could be estimated from the luminescence images. However, it was not yet clear whether luminescence imaging can be applied to uniform fields produced by spot-scanning proton-beam irradiation. For this purpose, imaging was conducted for uniform fields with spread-out Bragg peaks (SOBPs) produced by spot-scanning proton beams. We designed six types of uniform fields with different ranges, SOBP widths, and irradiation field sizes. The designed fields were delivered to a water phantom, and a cooled charge-coupled device camera was used to measure the luminescence image during irradiation. We estimated the ranges, field widths, and luminescence intensities from the luminescence images and compared them with the dose distribution calculated by a treatment planning system. For all types of uniform fields, we obtained clear luminescence images showing the SOBPs. The ranges and field widths evaluated from the luminescence were consistent with those of the calculated dose distribution to within ‑4 mm and ‑11 mm, respectively. Luminescence intensities were almost proportional to the SOBP widths perpendicular to the beam direction. Luminescence imaging can thus be applied to uniform fields produced by spot-scanning proton-beam irradiation, and the ranges and widths of uniform fields with SOBPs can be estimated from the images. Luminescence imaging is promising for range and field-width estimation in proton therapy.
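Estimating a range from a luminescence depth profile can be sketched as locating the distal falloff of the measured intensity. This is a simplified illustration, not the authors' analysis pipeline; the profile values, the 50% falloff level, and the function name are assumptions:

```python
def estimate_range(depths, intensities, level=0.5):
    """Estimate beam range as the distal depth where the profile falls to
    `level` times its maximum, with linear interpolation between samples."""
    thresh = level * max(intensities)
    # walk from the deep end toward the peak to find the distal crossing
    for i in range(len(intensities) - 1, 0, -1):
        if intensities[i] < thresh <= intensities[i - 1]:
            d0, d1 = depths[i - 1], depths[i]
            y0, y1 = intensities[i - 1], intensities[i]
            return d0 + (d1 - d0) * (y0 - thresh) / (y0 - y1)
    return depths[-1]

# Synthetic profile (mm, arbitrary intensity): peak near 30 mm,
# sharp distal falloff just beyond it.
depths = [0, 10, 20, 28, 30, 32, 34]
inten  = [40, 45, 55, 90, 100, 40, 5]
r = estimate_range(depths, inten)
```

The same distal-edge idea generalizes to field-width estimation by applying it to lateral profiles.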
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
Image Alignment for Multiple Camera High Dynamic Range Microscopy
Eastwood, Brian S.; Childs, Elisabeth C.
2012-01-01
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028
Real-time high dynamic range laser scanning microscopy
NASA Astrophysics Data System (ADS)
Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.
2016-04-01
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
Plenoptic Imager for Automated Surface Navigation
NASA Technical Reports Server (NTRS)
Zollar, Byron; Milder, Andrew; Mayo, Michael
2010-01-01
An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
Histogram Matching Extends Acceptable Signal Strength Range on Optical Coherence Tomography Images
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Sigal, Ian A.; Kagemann, Larry; Schuman, Joel S.
2015-01-01
Purpose. We minimized the influence of image quality variability, as measured by signal strength (SS), on optical coherence tomography (OCT) thickness measurements using the histogram matching (HM) method. Methods. We scanned 12 eyes from 12 healthy subjects with the Cirrus HD-OCT device to obtain a series of OCT images with a wide range of SS (maximal range, 1–10) at the same visit. For each eye, the histogram of an image with the highest SS (best image quality) was set as the reference. We applied HM to the images with lower SS by shaping the input histogram into the reference histogram. Retinal nerve fiber layer (RNFL) thickness was automatically measured before and after HM processing (defined as original and HM measurements), and compared to the device output (device measurements). Nonlinear mixed effects models were used to analyze the relationship between RNFL thickness and SS. In addition, the lowest tolerable SSs, which gave the RNFL thickness within the variability margin of manufacturer recommended SS range (6–10), were determined for device, original, and HM measurements. Results. The HM measurements showed less variability across a wide range of image quality than the original and device measurements (slope = 1.17 vs. 4.89 and 1.72 μm/SS, respectively). The lowest tolerable SS was successfully reduced to 4.5 after HM processing. Conclusions. The HM method successfully extended the acceptable SS range on OCT images. This would qualify more OCT images with low SS for clinical assessment, broadening the OCT application to a wider range of subjects. PMID:26066749
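The HM step itself is the classic cumulative-distribution matching rule: map each source gray level to the first reference level whose CDF reaches the source CDF. A minimal pure-Python sketch (the function name and integer-level representation are illustrative assumptions, not the study's implementation):

```python
def histogram_match(source, reference, levels=256):
    """Remap source pixel values so their histogram matches the reference's."""
    def cdf(pixels):
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        total, run, out = len(pixels), 0, []
        for h in hist:
            run += h
            out.append(run / total)
        return out

    cs, cr = cdf(source), cdf(reference)
    # for each source level, take the first reference level whose CDF
    # reaches the source CDF (both CDFs are nondecreasing, so j only grows)
    lut, j = [], 0
    for s in range(levels):
        while j < levels - 1 and cr[j] < cs[s]:
            j += 1
        lut.append(j)
    return [lut[p] for p in source]
```

In the study's setting, `reference` would be the highest-SS image of the eye and `source` a lower-SS scan of the same eye.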
Fourier Plane Image Combination by Feathering
NASA Astrophysics Data System (ADS)
Cotton, W. D.
2017-09-01
Astronomical objects frequently exhibit structure over a wide range of scales whereas many telescopes, especially interferometer arrays, only sample a limited range of spatial scales. To properly image these objects, images from a set of instruments covering the range of scales may be needed. These images then must be combined in a manner to recover all spatial scales. This paper describes the feathering technique for image combination in the Fourier transform plane. Implementations in several packages are discussed and example combinations of single dish and interferometric observations of both simulated and celestial radio emission are given.
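The feathering idea can be sketched in one dimension: transform both images, weight low spatial frequencies toward the single-dish data and high frequencies toward the interferometer data, then transform back. The Gaussian taper and cutoff below are illustrative assumptions, not the weighting used by any particular package:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for a short sketch)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning real parts (inputs here are real signals)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

def feather(low_res, high_res, cutoff):
    """Blend low spatial frequencies from the single-dish image with high
    spatial frequencies from the interferometer image via a Gaussian taper."""
    L, H = dft(low_res), dft(high_res)
    N = len(L)
    out = []
    for k in range(N):
        f = min(k, N - k)                 # symmetric frequency index
        w = math.exp(-(f / cutoff) ** 2)  # weight toward the low-res data
        out.append(w * L[k] + (1 - w) * H[k])
    return idft(out)
```

Real implementations also scale the single-dish data by the flux-calibration factor and work on 2-D gridded visibilities, which this sketch omits.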
110 °C range athermalization of wavefront coding infrared imaging systems
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Chang, Zheng; Liu, Haizheng; Zhao, Yaohong
2017-09-01
110 °C range athermalization is significant but difficult for designing infrared imaging systems. Our wavefront coding athermalized infrared imaging system adopts an optical phase mask with smaller manufacturing errors and a decoding method based on a shrinkage function. Qualitative experiments show that our wavefront coding athermalized infrared imaging system has three prominent merits: (1) it works well over a temperature range of 110 °C; (2) it extends the focal depth up to 15.2 times; (3) it achieves decoded images approximating the corresponding in-focus infrared images, with mean structural similarity index (MSSIM) values greater than 0.85.
NASA Astrophysics Data System (ADS)
Tsuji, Hidenobu; Imaki, Masaharu; Kotake, Nobuki; Hirai, Akihito; Nakaji, Masaharu; Kameyama, Shumpei
2017-03-01
We demonstrate a range imaging pulsed laser sensor with two-dimensional scanning of the transmitted beam and a scanless receiver using a high-aspect avalanche photodiode (APD) array at the eye-safe wavelength. The system achieves a high frame rate and long-range imaging with a relatively simple sensor configuration. We developed a high-aspect APD array for the wavelength of 1.5 μm, a receiver integrated circuit, and a range and intensity detector. By combining these devices, we realized range imaging with 160×120 pixels at a frame rate of 8 Hz at a distance of about 50 m.
Selection and Presentation of Imaging Figures in the Medical Literature
Siontis, George C. M.; Patsopoulos, Nikolaos A.; Vlahos, Antonios P.; Ioannidis, John P. A.
2010-01-01
Background Images are important for conveying information, but there is no empirical evidence on whether imaging figures are properly selected and presented in the published medical literature. We therefore evaluated the selection and presentation of radiological imaging figures in major medical journals. Methodology/Principal Findings We analyzed articles published in 2005 in 12 major general and specialty medical journals that had radiological imaging figures. For each figure, we recorded information on selection, study population, provision of quantitative measurements, color scales and contrast use. Overall, 417 images from 212 articles were analyzed. Any comment/hint on image selection was made in 44 (11%) images (range 0–50% across the 12 journals) and another 37 (9%) (range 0–60%) showed both a normal and abnormal appearance. In 108 images (26%) (range 0–43%) it was unclear whether the image came from the presented study population. Eighty-three images (20%) (range 0–60%) had any quantitative or ordered categorical value on a measure of interest. Information on the distribution of the measure of interest in the study population was given in 59 cases. For 43 images (range 0–40%), a quantitative measurement was provided for the depicted case and the distribution of values in the study population was also available; in those 43 cases, extreme cases were not over-represented relative to average cases (p = 0.37). Significance The selection and presentation of images in the medical literature is often insufficiently documented; quantitative data are sparse and difficult to place in context. PMID:20526360
Gloss discrimination and eye movements
NASA Astrophysics Data System (ADS)
Phillips, Jonathan B.; Ferwerda, James A.; Nunziata, Ann
2010-02-01
Human observers are able to make fine discriminations of surface gloss. What cues are they using to perform this task? In previous studies, we identified two reflection-related cues, the contrast of the reflected image (c, contrast gloss) and the sharpness of the reflected image (d, distinctness-of-image gloss), but these were for objects rendered in standard dynamic range (SDR) images with compressed highlights. In ongoing work, we are studying the effects of image dynamic range on perceived gloss, comparing high dynamic range (HDR) images with accurate reflections and SDR images with compressed reflections. In this paper, we first present the basic findings of this gloss discrimination study and then present an analysis of eye movement recordings that show where observers were looking during the gloss discrimination task. The results indicate that: 1) image dynamic range has a significant influence on perceived gloss, with surfaces presented in HDR images being seen as glossier and more discriminable than their SDR counterparts; 2) observers look at both light source highlights and environmental interreflections when judging gloss; and 3) both of these results are modulated by surface geometry and scene illumination.
Geometrical calibration of an AOTF hyper-spectral imaging system
NASA Astrophysics Data System (ADS)
Špiclin, Žiga; Katrašnik, Jaka; Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2010-02-01
Optical aberrations present an important problem in optical measurements. Geometrical calibration of an imaging system is therefore of the utmost importance for achieving accurate optical measurements. In hyper-spectral imaging systems, the problem of optical aberrations is even more pronounced because optical aberrations are wavelength dependent. Geometrical calibration must therefore be performed over the entire spectral range of the hyper-spectral imaging system, which is usually far greater than that of the visible light spectrum. This problem is especially adverse in AOTF (Acousto-Optic Tunable Filter) hyper-spectral imaging systems, as the diffraction of light in AOTF filters depends on both wavelength and angle of incidence. Geometrical calibration of the hyper-spectral imaging system was performed using a stable caliber of known dimensions, which was imaged at different wavelengths over the entire spectral range. The acquired images were then automatically registered to the caliber model by both parametric and nonparametric transformations based on B-splines, minimizing the normalized correlation coefficient. The calibration method was tested on an AOTF hyper-spectral imaging system in the near infrared spectral range. The results indicated substantial wavelength-dependent optical aberration, especially pronounced toward the infrared part of the spectrum. The calibration method was able to accurately characterize the aberrations and produce transformations for efficient sub-pixel geometrical calibration over the entire spectral range, finally yielding better spatial resolution of the hyper-spectral imaging system.
Real-time high dynamic range laser scanning microscopy
Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.
2016-01-01
In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging. PMID:27032979
Volumetric segmentation of range images for printed circuit board inspection
NASA Astrophysics Data System (ADS)
Van Dop, Erik R.; Regtien, Paul P. L.
1996-10-01
Conventional computer vision approaches towards object recognition and pose estimation employ 2D grey-value or color imaging. As a consequence these images contain information about projections of a 3D scene only. The subsequent image processing will then be difficult, because the object coordinates are represented with just image coordinates. Only complicated low-level vision modules like depth from stereo or depth from shading can recover some of the surface geometry of the scene. Recent advances in fast range imaging have however paved the way towards 3D computer vision, since range data of the scene can now be obtained with sufficient accuracy and speed for object recognition and pose estimation purposes. This article proposes the coded-light range-imaging method together with superquadric segmentation to approach this task. Superquadric segments are volumetric primitives that describe global object properties with 5 parameters, which provide the main features for object recognition. In addition, the principal axes of a superquadric segment determine the pose of an object in the scene. The volumetric segmentation of a range image can be used to detect missing, false or badly placed components on assembled printed circuit boards. Furthermore, this approach will be useful to recognize and extract valuable or toxic electronic components on printed circuit board scrap that currently burdens the environment during electronic waste processing. Results on synthetic range images with errors constructed according to a verified noise model illustrate the capabilities of this approach.
Luminescence imaging of water during proton-beam irradiation for range estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka
Purpose: Proton therapy has the ability to selectively deliver a dose to the target tumor, so the dose distribution should be accurately measured by a precise and efficient method. The authors found that luminescence was emitted from water during proton irradiation and conjectured that this phenomenon could be used for estimating the dose distribution. Methods: To achieve more accurate dose distribution, the authors set water phantoms on a table with a spot scanning proton therapy system and measured the luminescence images of these phantoms with a high-sensitivity, cooled charge coupled device camera during proton-beam irradiation. The authors imaged the phantoms of pure water, fluorescein solution, and an acrylic block. Results: The luminescence images of water phantoms taken during proton-beam irradiation showed clear Bragg peaks, and the measured proton ranges from the images were almost the same as those obtained with an ionization chamber. Furthermore, the image of the pure-water phantom showed almost the same distribution as the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had ∼3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; the proton range in the acrylic phantom generally matched the calculated value. The luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s. Conclusions: Luminescence imaging during proton-beam irradiation is promising as an effective method for range estimation in proton therapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torun, H.; Torello, D.; Degertekin, F. L.
2011-08-15
The authors describe a method of actuation for atomic force microscope (AFM) probes to improve imaging speed and displacement range simultaneously. Unlike conventional piezoelectric tube actuation, the proposed method involves a lever and fulcrum "seesaw"-like actuation mechanism that uses a small, fast piezoelectric transducer. The lever arm of the seesaw mechanism increases the apparent displacement range by an adjustable gain factor, overcoming the standard tradeoff between imaging speed and displacement range. Experimental characterization of a cantilever holder implementing the method is provided together with comparative line scans obtained with contact mode imaging. An imaging bandwidth of 30 kHz in air with the current setup was demonstrated.
Determining the 3-D structure and motion of objects using a scanning laser range sensor
NASA Technical Reports Server (NTRS)
Nandhakumar, N.; Smith, Philip W.
1993-01-01
In order for the EVAHR robot to autonomously track and grasp objects, its vision system must be able to determine the 3-D structure and motion of an object from a sequence of sensory images. This task is accomplished by the use of a laser radar range sensor which provides dense range maps of the scene. Unfortunately, the currently available laser radar range cameras use a sequential scanning approach which complicates image analysis. Although many algorithms have been developed for recognizing objects from range images, none are suited for use with single beam, scanning, time-of-flight sensors because all previous algorithms assume instantaneous acquisition of the entire image. This assumption is invalid since the EVAHR robot is equipped with a sequential scanning laser range sensor. If an object is moving while being imaged by the device, the apparent structure of the object can be significantly distorted due to the non-zero delay time between sampling each image pixel. If an estimate of the motion of the object can be determined, this distortion can be eliminated; but this leads to the motion-structure paradox: most existing algorithms for 3-D motion estimation use the structure of objects to parameterize their motions. The goal of this research is to design a rigid-body motion recovery technique which overcomes this limitation. The method being developed is an iterative, linear, feature-based approach which uses the non-zero image acquisition time constraint to accurately recover the motion parameters from the distorted structure of the 3-D range maps. Once the motion parameters are determined, the structural distortion in the range images is corrected.
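Once the motion is known, the distortion-correction step amounts to shifting each sampled point back by the motion accrued since the start of the frame. A constant-velocity, translation-only sketch (the research itself recovers full rigid-body motion iteratively; `undistort` is a hypothetical name illustrating only the correction):

```python
def undistort(points, timestamps, velocity):
    """Remove scanning distortion from a sequentially acquired range image:
    shift each 3-D point back by the object motion accrued between the start
    of the frame and that pixel's sample time (constant-velocity model)."""
    t0 = timestamps[0]
    return [tuple(p[i] - velocity[i] * (t - t0) for i in range(3))
            for p, t in zip(points, timestamps)]

# An object moving at 1 m/s along x: a point sampled 0.5 s into the frame
# appears displaced 0.5 m from its start-of-frame position.
pts = undistort([(0.0, 0.0, 0.0), (2.5, 0.0, 0.0)],
                [0.0, 0.5],
                (1.0, 0.0, 0.0))
```

With rotation included, the per-pixel shift becomes a time-dependent rigid transform rather than a simple subtraction.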
Kapke, Jonathan T; Epperla, Narendranath; Shah, Namrata; Richardson, Kristin; Carrum, George; Hari, Parameswaran N; Pingali, Sai R; Hamadani, Mehdi; Karmali, Reem; Fenske, Timothy S
2017-07-01
Patients with relapsed and refractory classical Hodgkin lymphoma (cHL) are often treated with autologous hematopoietic cell transplantation (auto-HCT). After auto-HCT, most transplant centers implement routine surveillance imaging to monitor for disease relapse; however, there is limited evidence to support this practice. In this multicenter, retrospective study, we identified cHL patients (n = 128) who received auto-HCT, achieved complete remission (CR) after transplantation, and then were followed with routine surveillance imaging. Of these, 29 (23%) relapsed after day 100 after auto-HCT. Relapse was detected clinically in 14 patients and with routine surveillance imaging in 15 patients. When clinically detected relapse was compared with radiographically detected relapse, the median overall survival (2084 days [range, 225-4161] vs. 2737 days [range, 172-2750]; P = .51), the median time to relapse (247 days [range, 141-3974] vs. 814 days [range, 96-1682]; P = .30) and the median postrelapse survival (674 days [range, 13-1883] vs. 1146 days [range, 4-2548]; P = .52) were not statistically different. In patients who never relapsed after auto-HCT, a median of 4 (range, 1-25) surveillance imaging studies were performed over a median follow-up period of 3.5 years. A minority of patients with cHL who achieve CR after auto-HCT will ultimately relapse. Surveillance imaging detected approximately half of relapses; however, outcomes were similar for those whose relapse was detected using routine surveillance imaging versus detected clinically in between surveillance imaging studies. There appears to be limited utility for routine surveillance imaging in cHL patients who achieve CR after auto-HCT. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Kweon, In SO; Hebert, Martial; Kanade, Takeo
1989-01-01
A three-dimensional perception system for building a geometrical description of rugged terrain environments from range image data is presented with reference to the exploration of the rugged terrain of Mars. An intermediate representation consisting of an elevation map that includes an explicit representation of uncertainty and labeling of the occluded regions is proposed. The locus method used to convert range image to an elevation map is introduced, along with an uncertainty model based on this algorithm. Both the elevation map and the locus method are the basis of a terrain matching algorithm which does not assume any correspondences between range images. The two-stage algorithm consists of a feature-based matching algorithm to compute an initial transform and an iconic terrain matching algorithm to merge multiple range images into a uniform representation. Terrain modeling results on real range images of rugged terrain are presented. The algorithms considered are a fundamental part of the perception system for the Ambler, a legged locomotor.
Range side lobe inversion for chirp-encoded dual-band tissue harmonic imaging.
Shen, Che-Chou; Peng, Jun-Kai; Wu, Chi
2014-02-01
Dual-band (DB) harmonic imaging is performed by transmitting and receiving at both the fundamental band (f0) and the second-harmonic band (2f0). In our previous work, a particular chirp excitation was developed to increase the signal-to-noise ratio in DB harmonic imaging. However, spectral overlap between the second-order DB harmonic signals results in range side lobes in the pulse compression. In this study, a novel range side lobe inversion (RSI) method is developed to alleviate the level of range side lobes arising from spectral overlap. The method is implemented by firing an auxiliary chirp to change the polarity of the range side lobes so that they can be suppressed in the combination of the original chirp and the auxiliary chirp. Hydrophone measurements show that the RSI method reduces the range side lobe level (RSLL) and thus increases the quality of pulse compression in DB harmonic imaging. With a signal bandwidth of 60%, the RSLL decreases from -23 dB to -36 dB and the corresponding compression quality improves from 78% to 94%. B-mode images also indicate that the magnitude of the range side lobes is suppressed by 7 dB when the RSI method is applied.
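The cancellation principle behind the RSI method can be illustrated numerically: if the auxiliary firing inverts the polarity of the range side lobes while leaving the main lobe unchanged, averaging the two compressed outputs suppresses the side lobes while preserving the main lobe. The waveforms below are synthetic toy values, not actual chirp-compression outputs:

```python
def combine(firing_a, firing_b):
    """Average two compressed outputs whose range side lobes have opposite
    polarity: the main lobe adds coherently while the side lobes cancel."""
    return [(a + b) / 2 for a, b in zip(firing_a, firing_b)]

# Hypothetical compressed envelopes: identical main lobe (center sample),
# side lobes of opposite sign between the two firings.
a = [0.0, 0.2, 1.0, 0.2, 0.0]
b = [0.0, -0.2, 1.0, -0.2, 0.0]
c = combine(a, b)
```

In practice the two firings cost one extra transmit event per line, the same trade-off made by pulse-inversion harmonic imaging.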
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success in solving this problem using an image sequence from a single moving camera. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
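The recursive range-refinement idea can be sketched with a one-dimensional Kalman filter; the paper's extended Kalman filter additionally fuses camera motion with image-plane feature locations, so the scalar model below is a deliberately simplified assumption:

```python
def kalman_range(measurements, meas_var, init, init_var, process_var=0.0):
    """Recursively fuse noisy range measurements of a feature.

    A 1-D Kalman filter sketch of recursive range refinement: each new
    measurement shrinks the estimate's variance, so the range estimate
    improves as more frames arrive."""
    x, p = init, init_var
    estimates = []
    for z in measurements:
        p += process_var            # predict (static feature: no motion term)
        k = p / (p + meas_var)      # Kalman gain
        x += k * (z - x)            # update with the innovation
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Three noisy-free observations of a feature 10 m away, starting from a
# deliberately uninformative prior.
est = kalman_range([10.0, 10.0, 10.0], meas_var=1.0, init=0.0, init_var=1e6)
```

The same predict/update structure carries over to the vector EKF case, where the state includes inverse range and the prediction step applies the known camera motion.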
Image dynamic range test and evaluation of Gaofen-2 dual cameras
NASA Astrophysics Data System (ADS)
Zhang, Zhenhua; Gan, Fuping; Wei, Dandan
2015-12-01
In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, application, and the development of future satellites, this article evaluates the dynamic range by calculating statistics (maximum, minimum, average, and standard deviation) of four images obtained at the same time by the Gaofen-2 dual cameras over the Beijing area. These four statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate each camera's dynamic-range consistency, and for each latitudinal overlap to evaluate the dynamic-range consistency between PMS1 and PMS2. The results suggest that the images obtained by PMS1 and PMS2 span a wide dynamic range of DN values and contain rich information about ground objects. In general, the dynamic range is closely consistent between images from a single camera, with only small differences, and likewise between the dual cameras; the consistency within a single camera is better than that between the two cameras.
Fast exposure time decision in multi-exposure HDR imaging
NASA Astrophysics Data System (ADS)
Piao, Yongjie; Jin, Guang
2012-10-01
Currently available imaging and display systems suffer from insufficient dynamic range and cannot capture all the information in a high dynamic range (HDR) scene. The number of low dynamic range (LDR) image samples and the speed of the exposure-time decision strongly affect the real-time performance of such a system. In order to realize a real-time HDR video acquisition system, this paper proposes a fast and robust method, based on the system response function, for selecting exposure times in under- and over-exposed areas. The method exploits the monotonicity of the imaging system: the exposure time is first adjusted to an initial value that brings the median value of the image to the middle of the system's output range, and then adjusted so that the pixel values on the two sides of the histogram reach the middle of the output range. Three low dynamic range images are thus acquired. Experiments show that the proposed method for adjusting the initial exposure time converges in two iterations, which is faster and more stable than the average-gray control method. For exposure-time adjustment in under- and over-exposed areas, the proposed method uses the dynamic range of the system more efficiently than a fixed-exposure-time method.
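The initial exposure decision relies on monotonicity: because the captured median increases with exposure time, a bracketing search converges on the exposure whose median lands at mid-range. This sketch assumes a hypothetical `capture(t)` callback returning the frame median at exposure `t`; the paper's system-response-based update, which reportedly converges in two iterations, is not reproduced here:

```python
def initial_exposure(capture, t_start, mid, tol=1, max_iter=20):
    """Bracket-and-bisect the exposure time until the median pixel value of
    the captured frame is within `tol` of the middle of the output range."""
    lo, hi = 0.0, t_start
    # grow the bracket until the median reaches or exceeds the target
    while capture(hi) < mid:
        hi *= 2
    for _ in range(max_iter):
        t = (lo + hi) / 2
        m = capture(t)
        if abs(m - mid) <= tol:
            return t
        if m < mid:
            lo = t
        else:
            hi = t
    return (lo + hi) / 2

# Toy monotone response: median grows linearly, saturating at 255.
resp = lambda t: min(255, 100 * t)
t = initial_exposure(resp, t_start=1.0, mid=128)
```

The side-histogram exposures for the under- and over-exposed areas can be found with the same search, targeting the histogram tails instead of the median.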
3D super resolution range-gated imaging for canopy reconstruction and measurement
NASA Astrophysics Data System (ADS)
Huang, Hantao; Wang, Xinwei; Sun, Liang; Lei, Pingshun; Fan, Songtao; Zhou, Yan
2018-01-01
In this paper, we propose a method of canopy reconstruction and measurement based on 3D super-resolution range-gated imaging. In this method, high-resolution 2D intensity images are captured by active gated imaging, and 3D images of the canopy are simultaneously reconstructed by a triangular range-intensity correlation algorithm. A range-gated laser imaging system (RGLIS) was built around an 808 nm diode laser and a gated intensified charge-coupled device (ICCD) camera with 1392 × 1040 pixels. Proof-of-concept experiments were performed on potted plants located 75 m away and trees located 165 m away. The experiments show that the system can acquire more than 1 million points per frame, with a 3D spatial resolution of about 0.3 mm at a distance of 75 m and a distance accuracy of about 10 cm. This research is beneficial for high-speed acquisition of canopy structure and non-destructive canopy measurement.
High Dynamic Range Imaging Using Multiple Exposures
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei
2017-06-01
It is challenging to capture a high dynamic range (HDR) scene using a low dynamic range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by a weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for displaying the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method; by comparison, it outperforms other methods in terms of imaging quality.
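The weighted radiance-map summation can be sketched as the standard log-domain merge (a generic sketch of multi-exposure HDR reconstruction, not the paper's exact polynomial CRF recovery; the hat-shaped weight and `inv_crf` interface are our assumptions):

```python
import math

def merge_radiance(exposures, times, inv_crf,
                   weight=lambda z: 1 - abs(z / 255 - 0.5) * 2):
    """Per-pixel weighted merge of radiance maps in the log domain.
    `inv_crf` maps a pixel value to relative (radiance * exposure time);
    well-exposed mid-range pixels get the largest weight."""
    out = []
    for i in range(len(exposures[0])):
        num = den = 0.0
        for img, t in zip(exposures, times):
            z = img[i]
            w = weight(z)
            num += w * (math.log(inv_crf(z)) - math.log(t))
            den += w
        out.append(math.exp(num / den) if den > 0 else 0.0)
    return out
```

Each exposure votes for the scene radiance it implies, and the weights down-rank clipped or noisy pixels.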
NASA Astrophysics Data System (ADS)
Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong
2018-01-01
An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected for the time-frequency analysis after radial motion compensation: the range bin with the maximum mean amplitude among those whose echoes are confirmed to be contributed by a dominant scatterer. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatterer is key to the proposed algorithm and is therefore described in detail. When the first range bin satisfying the judgment criterion is found, the sequence of frequencies with the largest amplitude in the time-frequency spectrum at each moment for this range bin is used to calculate the length and the center moment of the optimal imaging time window. Experiments with simulated and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image-contrast-based algorithm (ICBA) are provided: the proposed algorithm achieves similar image contrast and lower entropy than the ICBA.
Toward 1-mm depth precision with a solid state full-field range imaging system
NASA Astrophysics Data System (ADS)
Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.
2006-02-01
Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. The system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine the phase, and hence the distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to yield a range precision on the order of 1 mm; these primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
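The per-pixel phase recovery from the beat-signal image sequence can be sketched as follows (a minimal sketch of the indirect time-of-flight principle, not the authors' implementation; the sample layout and DFT-bin phase estimator are our assumptions):

```python
import cmath
import math

C = 299792458.0  # speed of light (m/s)

def pixel_range(samples, f_mod=10e6):
    """`samples` holds one pixel's intensities over one period of the
    low-frequency heterodyne beat. The first DFT bin isolates the beat
    component; its phase equals the modulation phase shift, and the
    round trip gives distance = c * phase / (4 * pi * f_mod)."""
    n = len(samples)
    b = sum(s * cmath.exp(-2j * math.pi * k / n) for k, s in enumerate(samples))
    phase = math.atan2(b.imag, b.real) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```

At 10 MHz modulation the unambiguous range is c / (2 f) = 15 m, which is why higher operating frequencies improve precision but shorten the ambiguity interval.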
Potsaid, Benjamin; Baumann, Bernhard; Huang, David; Barry, Scott; Cable, Alex E.; Schuman, Joel S.; Duker, Jay S.; Fujimoto, James G.
2011-01-01
We demonstrate ultrahigh speed swept source/Fourier domain ophthalmic OCT imaging using a short cavity swept laser at 100,000–400,000 axial scan rates. Several design configurations illustrate tradeoffs in imaging speed, sensitivity, axial resolution, and imaging depth. Variable rate A/D optical clocking is used to acquire linear-in-k OCT fringe data at 100kHz axial scan rate with 5.3um axial resolution in tissue. Fixed rate sampling at 1 GSPS achieves a 7.5mm imaging range in tissue with 6.0um axial resolution at 100kHz axial scan rate. A 200kHz axial scan rate with 5.3um axial resolution over 4mm imaging range is achieved by buffering the laser sweep. Dual spot OCT using two parallel interferometers achieves 400kHz axial scan rate, almost 2X faster than previous 1050nm ophthalmic results and 20X faster than current commercial instruments. Superior sensitivity roll-off performance is shown. Imaging is demonstrated in the human retina and anterior segment. Wide field 12×12mm data sets include the macula and optic nerve head. Small area, high density imaging shows individual cone photoreceptors. The 7.5mm imaging range configuration can show the cornea, iris, and anterior lens in a single image. These improvements in imaging speed and depth range provide important advantages for ophthalmic imaging. The ability to rapidly acquire 3D-OCT data over a wide field of view promises to simplify examination protocols. The ability to image fine structures can provide detailed information on focal pathologies. The large imaging range and improved image penetration at 1050nm wavelengths promises to improve performance for instrumentation which images both the retina and anterior eye. These advantages suggest that swept source OCT at 1050nm wavelengths will play an important role in future ophthalmic instrumentation. PMID:20940894
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, S; Komori, M; Toshito, T
Purpose: Since proton therapy has the ability to selectively deliver a dose to a target tumor, the dose distribution should be accurately measured, and a precise and efficient evaluation method is desired. We found that luminescence is emitted from water during proton irradiation and reasoned that this phenomenon could be used to estimate the dose distribution. Methods: Water phantoms were set on a table in a spot-scanning proton-therapy system, and luminescence images of the phantoms were measured with a high-sensitivity cooled charge-coupled device (CCD) camera during proton-beam irradiation. We also imaged phantoms of pure water, fluorescein solution, and an acrylic block, and reconstructed three-dimensional images from the projection data. Results: The luminescence images of the water phantoms during proton-beam irradiation showed clear Bragg peaks, and the proton ranges measured from the images were almost the same as those obtained with an ionization chamber. The image of the pure-water phantom showed almost the same distribution as that of the tap-water phantom, indicating that the luminescence image was not related to impurities in the water. The luminescence image of the fluorescein solution had roughly 3 times higher intensity than water, with the same proton range as that of water. The luminescence image of the acrylic phantom had a 14.5% shorter proton range than that of water; this range was in reasonable agreement with the calculated value. Luminescence images of the tap-water phantom during proton irradiation could be obtained in less than 2 s, and three-dimensional images containing more quantitative information were successfully obtained. Conclusion: Luminescence imaging during proton-beam irradiation has the potential to be a new method for range estimation in proton therapy.
SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range-Finding Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platt, M; Platt, M; Lamba, M
2016-06-15
Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actualmore » shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.« less
Study on super-resolution three-dimensional range-gated imaging technology
NASA Astrophysics Data System (ADS)
Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao
2018-04-01
Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on the principle of the intensity-correlation method, this paper carries out theoretical analysis and experimental research. The experimental system uses a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of the imaging depth and distance to realize different working modes. A small-imaging-depth experiment was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of the 3D point cloud by the triangle method is analyzed, and a 15 m depth slice of the target's 3D point cloud is obtained from two frames of images, with a distance precision better than 0.5 m. The influences of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy are analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
Multi-exposure high dynamic range image synthesis with camera shake correction
NASA Astrophysics Data System (ADS)
Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over- or under-exposed; as a result, when processing the image, for example for crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is otherwise limited. Inevitably, camera shake produces ghosting, which blurs the synthesized image to some extent, yet existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene, and these assumptions limit their application. Widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. To rapidly obtain a high-quality HDR image without ghosting, we devise an efficient low dynamic range (LDR) image capturing approach and propose a registration method based on Oriented FAST and Rotated BRIEF (ORB) features and histogram equalization, which eliminates the illumination differences between the LDR images. Fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and produces ghost-free HDR images by registering and fusing four multi-exposure images.
Doppler imaging with dual-detection full-range frequency domain optical coherence tomography
Meemon, Panomsak; Lee, Kye-Sung; Rolland, Jannick P.
2010-01-01
Most full-range techniques for Frequency Domain Optical Coherence Tomography (FD-OCT) reported to date utilize the phase relation between consecutive axial lines to reconstruct a complex interference signal, and hence may exhibit degradation in mirror-image suppression performance, detectable velocity dynamic range, or both, when monitoring a moving sample such as flow activity. We previously reported a technique for mirror-image removal by simultaneous detection of the quadrature components of a complex spectral interference, called Dual-Detection Frequency Domain OCT (DD-FD-OCT) [Opt. Lett. 35, 1058-1060 (2010)]. The technique enables full-range imaging without any loss of acquisition speed and is intrinsically less sensitive to phase errors generated by involuntary movements of the subject. In this paper, we demonstrate the application of DD-FD-OCT to phase-resolved Doppler imaging, without the degradation in mirror-image suppression performance or detectable velocity dynamic range observed in other full-range Doppler methods. To accommodate Doppler imaging, we have developed a fiber-based DD-FD-OCT that uses the source power more efficiently than the previous free-space DD-FD-OCT. In addition, the velocity sensitivity of phase-resolved DD-FD-OCT was investigated, and the relation between the measured Doppler phase shift and the set flow velocity of a flow phantom was verified. Finally, we demonstrate Doppler imaging using DD-FD-OCT in a biological sample. PMID:21258488
Linear prediction data extrapolation superresolution radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing
1993-05-01
The range resolution and cross-range resolution of range-Doppler imaging radars are determined by the effective bandwidth of the transmitted signal and by the angle through which the object rotates relative to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, the linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated with the aim of surpassing the limits of conventional FFT range-Doppler processing and improving the resolution of range-Doppler imaging radar. The LPDEDFT method, which is conceptually simple, consists of extrapolating the observed data beyond the observation windows by means of linear prediction and then performing a conventional IDFT of the extrapolated data. Live data from a metalized scale-model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and from a flying Boeing 727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, LPDEDFT yields either higher resolution for the same effective transmitted bandwidth and total rotation angle of the object, or images of equal quality from a smaller bandwidth and total angle.
Device for imaging scenes with very large ranges of intensity
Deason, Vance Albert [Idaho Falls, ID
2011-11-15
A device for imaging scenes with a very large range of intensity having a pair of polarizers, a primary lens, an attenuating mask, and an imaging device optically connected along an optical axis. Preferably, a secondary lens, positioned between the attenuating mask and the imaging device is used to focus light on the imaging device. The angle between the first polarization direction and the second polarization direction is adjustable.
Results of ACTIM: an EDA study on spectral laser imaging
NASA Astrophysics Data System (ADS)
Hamoir, Dominique; Hespel, Laurent; Déliot, Philippe; Boucher, Yannick; Steinvall, Ove; Ahlberg, Jörgen; Larsson, Hakan; Letalick, Dietmar; Lutzmann, Peter; Repasi, Endre; Ritt, Gunnar
2011-11-01
The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising military applications, system analyses, a roadmap and recommendations. Passive multi- and hyper-spectral imaging allows discriminating between materials. But the measured radiance in the sensor is difficult to relate to spectral reflectance due to the dependence on e.g. solar angle, clouds, shadows... In turn, active spectral imaging offers a complete control of the illumination, thus eliminating these effects. In addition it allows observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage, camouflage...) or retrieving polarization information. When 3D, it is suited to producing numerical terrain models and to performing geometry-based identification. Hence fusing the knowledge of ladar and passive spectral imaging will result in new capabilities. We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long range observation for identification, (2) mid-range mapping for reconnaissance, (3) shorter range perception for threat detection. We present the system analyses that have been performed for confirming the interests, limitations and requirements of spectral active imaging in these three prioritized applications.
Ladar range image denoising by a nonlocal probability statistics algorithm
NASA Astrophysics Data System (ADS)
Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi
2013-01-01
Based on the characteristics of coherent-ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), whereas NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. The pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimate of the current pixel. Simulated coherent-ladar range images with different carrier-to-noise ratios and a real 8-gray-scale coherent-ladar range image are denoised by this algorithm, and the results are compared with those of the median filter, the multi-template order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in coherent-ladar range images are effectively suppressed by NLPS.
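The group-then-mode idea can be sketched as follows (an illustrative sketch, not the paper's implementation; the block distance, group size, and tie handling are our assumptions):

```python
from collections import Counter

def block_distance(a, b):
    """Sum of squared differences between two equally sized blocks."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nlps_estimate(ref, candidates, k=2):
    """Group the k candidate blocks most similar to `ref` (block matching),
    then return the gray level with maximum probability in the group's
    empirical marginal PDF -- the mode -- instead of NLM's weighted mean."""
    group = sorted(candidates, key=lambda b: block_distance(ref, b))[:k]
    values = list(ref) + [v for b in group for v in b]
    return Counter(values).most_common(1)[0][0]
```

Taking the mode rather than a weighted mean is what lets the estimator reject isolated range-abnormality outliers instead of averaging them in.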
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
UTOFIA: an underwater time-of-flight image acquisition system
NASA Astrophysics Data System (ADS)
Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris
2017-10-01
In this article, the development of a newly designed time-of-flight (ToF) image sensor for underwater applications is described. The sensor is developed as part of the UTOFIA (underwater time-of-flight image acquisition) project, funded by the EU within the Horizon 2020 framework. The project aims to develop a camera based on range gating that extends the visible range by a factor of 2 to 3 compared with conventional cameras and delivers real-time range information as a 3D video stream. The principle of underwater range gating and the concept of the image sensor are presented. Based on measurements on a test image sensor, the pixel structure that best suits the requirements was selected. An extensive underwater characterization demonstrates the capability of distance measurement in turbid environments.
A tone mapping operator based on neural and psychophysical models of visual perception
NASA Astrophysics Data System (ADS)
Cyriac, Praveen; Bertalmio, Marcelo; Kane, David; Vazquez-Corral, Javier
2015-03-01
High dynamic range imaging techniques involve capturing and storing real-world radiance values that span many orders of magnitude. However, common display devices can usually reproduce intensity ranges of only two to three orders of magnitude. Therefore, to display a high dynamic range image on a low dynamic range screen, the dynamic range of the image must be compressed without losing details or introducing artefacts; this process is called tone mapping. A good tone mapping operator must produce a low dynamic range image that matches as closely as possible the perception of the real-world scene. We propose a two-stage tone mapping approach, in which the first stage is a global method for range compression based on the gamma curve that best equalizes the lightness histogram, and the second stage performs local contrast enhancement and color induction using neural activity models of the visual cortex.
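The first (global) stage can be sketched as a search for the gamma whose curve best flattens the lightness histogram (an illustrative sketch under our own assumptions: the candidate-gamma grid and the uniform-percentile flatness score are not the paper's):

```python
def best_gamma(lum, gammas=tuple(g / 10 for g in range(1, 31))):
    """Choose the gamma whose tone curve best equalizes the lightness
    histogram, scored by how close the mapped sorted values come to a
    perfectly uniform ramp. `lum` holds luminances normalized to [0, 1]."""
    lum = sorted(lum)
    n = len(lum)
    ideal = [(i + 0.5) / n for i in range(n)]  # percentiles of a flat histogram

    def flatness(g):
        return sum(abs(v ** g - u) for v, u in zip(lum, ideal))

    return min(gammas, key=flatness)
```

A dark-skewed image (luminances clustered near zero) is thus assigned a gamma below one, brightening it toward an equalized distribution before the local stage runs.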
Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera
NASA Astrophysics Data System (ADS)
Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.
2007-09-01
We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements by heterodyning intensity-modulated illumination with a gain modulation intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m has been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
County-Level Population Economic Status and Medicare Imaging Resource Consumption.
Rosenkrantz, Andrew B; Hughes, Danny R; Prabhakar, Anand M; Duszak, Richard
2017-06-01
The aim of this study was to assess relationships between county-level variation in Medicare beneficiary imaging resource consumption and measures of population economic status. The 2013 CMS Geographic Variation Public Use File was used to identify county-level per capita Medicare fee-for-service imaging utilization and nationally standardized costs to the Medicare program. The County Health Rankings public data set was used to identify county-level measures of population economic status. Regional variation was assessed, and multivariate regressions were performed. Imaging events per 1,000 Medicare beneficiaries varied 1.8-fold (range, 2,723-4,843) at the state level and 5.3-fold (range, 1,228-6,455) at the county level. Per capita nationally standardized imaging costs to Medicare varied 4.2-fold (range, $84-$353) at the state level and 14.1-fold (range, $33-$471) at the county level. Within individual states, county-level utilization varied on average 2.0-fold (range, 1.1- to 3.1-fold), and costs varied 2.8-fold (range, 1.1- to 6.4-fold). For both large urban populations and small rural states, Medicare imaging resource consumption was heterogeneously variable at the county level. Adjusting for county-level gender, ethnicity, rural status, and population density, countywide unemployment rates showed strong independent positive associations with Medicare imaging events (β = 26.96) and costs (β = 4.37), whereas uninsured rates showed strong independent positive associations with Medicare imaging costs (β = 2.68). Medicare imaging utilization and costs both vary far more at the county than at the state level. Unfavorable measures of county-level population economic status in the non-Medicare population are independently associated with greater Medicare imaging resource consumption. 
Future efforts to optimize Medicare imaging use should consider the influence of local indigenous socioeconomic factors outside the scope of traditional beneficiary-focused policy initiatives.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
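The background-subtraction and stereometric steps above can be sketched as follows (a simplified 1-D sketch; the threshold, centroid estimator, and parameter values are our illustrative assumptions, not the patent's):

```python
def spot_centroid(before, after, thresh=10):
    """Isolate the laser spot by differencing a frame taken before illumination
    from one taken with the laser on, keeping only pixels that brightened,
    then return the spot's centroid column."""
    cols = [x for x, (b, a) in enumerate(zip(before, after)) if a - b > thresh]
    return sum(cols) / len(cols)

def stereo_range(x_left, x_right, focal_px, baseline_m):
    """Triangulation: range Z = f * B / d, where d is the horizontal disparity
    of the spot between the left and right images (f in pixels, B in metres)."""
    return focal_px * baseline_m / (x_left - x_right)
```

Differencing removes everything common to both frames, so only the laser spot survives to drive the disparity measurement.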
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests were performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
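The √N gain from averaging uncorrelated spot profiles is the standard result for independent noise, and can be checked with a small Monte-Carlo sketch (our own illustration, not the paper's experiment; speckle perturbations are modeled as zero-mean Gaussian errors on the spot position):

```python
import random

def averaged_range_std(n_profiles, n_trials=2000, noise=1.0, seed=7):
    """Each trial averages n_profiles uncorrelated speckle-perturbed range
    estimates; we return the spread of the averaged estimate across trials."""
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, noise) for _ in range(n_profiles)) / n_profiles
             for _ in range(n_trials)]
    mu = sum(means) / n_trials
    return (sum((m - mu) ** 2 for m in means) / n_trials) ** 0.5
```

Averaging 9 uncorrelated profiles cuts the spread by about a factor of 3, matching the √N prediction.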
Model-based restoration using light vein for range-gated imaging systems.
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen
2016-09-10
The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, and platform vibrations. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method designed specifically for range-gated imaging systems. The degradation process is divided into a static part and a dynamic part. For the static part, we establish a physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light-vein feature extraction method is presented to estimate the blur parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method effectively suppresses ringing artifacts and achieves better performance for range-gated imaging systems.
2014-01-01
Background: Subcutaneous vein localization is usually performed manually by medical staff to find a suitable vein in which to insert a catheter for medication delivery or blood sampling. The rule of thumb is to find a vein large and straight enough for the medication to flow through the selected blood vessel without obstruction. The problem of difficult peripheral venous access arises when a patient's veins are not visible for reasons such as dark skin tone, presence of hair, high body fat, or dehydration. Methods: To enhance the visibility of veins, near-infrared imaging systems are used to assist medical staff in the vein localization process. Optimal illumination is crucial to obtain better image contrast and quality, taking into consideration the limited power and space of portable imaging systems. In this work, a hyperspectral image quality assessment is performed to find the optimum illumination range for a venous imaging system. A database of hyperspectral images from 80 subjects was created, and the subjects were divided into four classes on the basis of their skin tone. The results of the hyperspectral image analyses are presented as a function of patient skin tone. For each patient, four mean images were constructed by averaging over 50 nm spectral spans within the near-infrared range, i.e., 750–950 nm. Statistical quality measures were used to analyse these images. Conclusion: The wavelength range of 800 to 850 nm serves as the optimum illumination range for the best near-infrared venous image quality for every skin tone. PMID:25087016
SU-F-T-42: MRI and TRUS Image Fusion as a Mode of Generating More Accurate Prostate Contours
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petronek, M; Purysko, A; Balik, S
Purpose: Transrectal ultrasound (TRUS) imaging is utilized intra-operatively for LDR permanent prostate seed implant treatment planning. Prostate contouring with TRUS can be challenging at the apex and base. This study attempts to improve the accuracy of prostate contouring with MRI-TRUS fusion to prevent over- or under-estimation of the prostate volume. Methods: 14 patients with a previous MRI-guided prostate biopsy who underwent an LDR permanent prostate seed implant were selected. The prostate was contoured on the MRI images (1 mm slice thickness) by a radiologist. The prostate was also contoured on TRUS images (5 mm slice thickness) during the LDR procedure by a urologist. MRI and TRUS images were rigidly fused manually, and the prostate contours from MRI and TRUS were compared using the Dice similarity coefficient, percentage volume difference, and length, height and width differences. Results: The prostate volume was overestimated by 8 ± 18% (range: 34% to −25%) in TRUS images compared to MRI. The mean Dice coefficient was 0.77 ± 0.09 (range: 0.53 to 0.88). The mean difference (TRUS-MRI) in the prostate width was 0 ± 4 mm (range: −11 to 5 mm), height was −3 ± 6 mm (range: −13 to 6 mm), and length was 6 ± 6 mm (range: −10 to 16 mm). The prostate was overestimated with TRUS imaging at the base for 6 cases (mean: 8 ± 4 mm, range: 5 to 14 mm), at the apex for 6 cases (mean: 11 ± 3 mm, range: 5 to 15 mm), and 1 case was underestimated at both base and apex by 4 mm. Conclusion: Use of intra-operative TRUS and MRI image fusion can help to improve the accuracy of prostate contouring by accurately accounting for prostate over- or under-estimation, especially at the base and apex. The mean amount of discrepancy is within a range that is significant for LDR sources.
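The two contour-comparison metrics used here, the Dice similarity coefficient and the percentage volume difference, are straightforward to compute from binary masks; the toy "prostate" masks below are illustrative:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def percent_volume_diff(test, ref):
    """Signed volume difference of `test` relative to `ref`, in percent."""
    return 100.0 * (test.sum() - ref.sum()) / ref.sum()

# toy masks: TRUS contour slightly larger than the MRI contour along one axis
mri = np.zeros((20, 20, 20), dtype=bool);  mri[5:15, 5:15, 5:15] = True
trus = np.zeros((20, 20, 20), dtype=bool); trus[5:16, 5:15, 5:15] = True

d = dice(trus, mri)
v = percent_volume_diff(trus, mri)   # positive: TRUS overestimates the volume
```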
NASA Astrophysics Data System (ADS)
Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur
2009-05-01
Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes under non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which addresses this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors
NASA Technical Reports Server (NTRS)
Matthies, Larry; Grandjean, Pierrick
1993-01-01
Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.
Wen, Xuejiao; Qiu, Xiaolan; Han, Bing; Ding, Chibiao; Lei, Bin; Chen, Qi
2018-05-07
Range ambiguity is one of the factors that affect SAR image quality. Alternately transmitting up- and down-chirp modulation pulses is one method used to suppress range ambiguity. However, the defocused range-ambiguous signal can still exhibit stronger backscattering intensity than the mainlobe imaging area in some cases, which has a severe impact on visual effects and subsequent applications. In this paper, a novel hybrid range ambiguity suppression method for up- and down-chirp modulation is proposed. The method obtains an image of the ambiguity area and reduces the ambiguous signal power appropriately by applying pulse compression with the contrary modulation rate and a CFAR detection method. The effectiveness and correctness of the approach are demonstrated by processing archive images acquired by the Chinese Gaofen-3 SAR sensor in full-polarization mode.
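The core of the up/down-chirp idea, that an echo compresses sharply only against a reference with the matching modulation rate while the contrary rate leaves the energy defocused, can be sketched as follows (pulse parameters are illustrative assumptions):

```python
import numpy as np

fs = 1e6          # sample rate, Hz
T = 1e-3          # pulse length, s
B = 1e5           # bandwidth, Hz
t = np.arange(int(fs * T)) / fs
k = B / T         # chirp rate

up = np.exp(1j * np.pi * k * t ** 2)       # up-chirp pulse
down = np.exp(-1j * np.pi * k * t ** 2)    # down-chirp pulse

def compress(echo, ref):
    """Pulse compression: cross-correlate echo with a reference via FFT."""
    n = 2 * len(echo)
    return np.abs(np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(ref, n))))

matched = compress(up, up)      # same modulation rate: sharp peak
contrary = compress(up, down)   # contrary rate: energy smeared across range
peak_ratio = matched.max() / contrary.max()
```

The large peak ratio is what makes alternating chirps separate the mainlobe return from the ambiguous one; the paper's contribution is then imaging and CFAR-detecting the residual smeared energy.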
High dynamic range CMOS (HDRC) imagers for safety systems
NASA Astrophysics Data System (ADS)
Strobel, Markus; Döttling, Dietmar
2013-04-01
The first part of this paper describes the high dynamic range CMOS (HDRC®) imager - a special type of CMOS image sensor with logarithmic response. The powerful property of a high dynamic range (HDR) image acquisition is detailed by mathematical definition and measurement of the optoelectronic conversion function (OECF) of two different HDRC imagers. Specific sensor parameters will be discussed including the pixel design for the global shutter readout. The second part will give an outline on the applications and requirements of cameras for industrial safety. Equipped with HDRC global shutter sensors SafetyEYE® is a high-performance stereo camera system for safe three-dimensional zone monitoring enabling new and more flexible solutions compared to existing safety guards.
GAO, L.; HAGEN, N.; TKACZYK, T.S.
2012-01-01
Summary We implement a filterless illumination scheme on a hyperspectral fluorescence microscope to achieve full-range spectral imaging. The microscope employs polarisation filtering, spatial filtering and spectral unmixing filtering to replace the role of traditional filters. Quantitative comparisons between full-spectrum and filter-based microscopy are provided in the context of signal dynamic range and accuracy of the measured fluorophores' emission spectra. To show potential applications, a five-colour cell immunofluorescence imaging experiment is theoretically simulated. Simulation results indicate that the proposed full-spectrum imaging technique may yield a threefold improvement in signal dynamic range over what can be achieved with filter-based imaging. PMID:22356127
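The spectral unmixing filtering step can be sketched as a linear least-squares solve against known fluorophore emission spectra. The endmember shapes and abundances below are illustrative assumptions, not the microscope's actual fluorophores:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical emission spectra of three fluorophores over 40 spectral bins
bins = np.arange(40)
def peak(center, width):
    s = np.exp(-((bins - center) / width) ** 2)
    return s / s.sum()
endmembers = np.stack([peak(8, 4), peak(20, 5), peak(31, 4)], axis=1)  # (40, 3)

true_abundance = np.array([0.5, 0.3, 0.2])
measured = endmembers @ true_abundance + 1e-4 * rng.standard_normal(40)

# unmixing "filter": solve for fluorophore abundances in the least-squares sense
est, *_ = np.linalg.lstsq(endmembers, measured, rcond=None)
```

With well-separated emission peaks the abundances are recovered accurately, which is why unmixing can stand in for physical emission filters in the full-spectrum scheme.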
Test technology on divergence angle of laser range finder based on CCD imaging fusion
NASA Astrophysics Data System (ADS)
Shi, Sheng-bing; Chen, Zhen-xing; Lv, Yao
2016-09-01
Laser range finders are fitted to many kinds of weapon platforms, such as tanks, ships and aircraft, and are an important component of fire control systems. The divergence angle is a key performance parameter that embodies the horizontal resolving power of a laser range finder, and it is a mandatory item in appraisal tests. In this paper, aiming at high-accuracy measurement of the divergence angle of a laser range finder, a divergence angle test system based on CCD imaging is designed. The divergence angle is obtained by fusing images acquired under different attenuations, which solves the problem of CCD characteristics influencing the divergence angle measurement.
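One common way to measure divergence from a CCD frame is to focus the beam with a collimator and take a second-moment (D4σ) width of the focal spot; the divergence is then the spot width divided by the focal length. The sketch below assumes this geometry with illustrative sensor parameters and does not reproduce the paper's fusion of differently attenuated frames:

```python
import numpy as np

# synthetic CCD frame: Gaussian focal spot of a collimated beam behind a lens
pixel_pitch = 5e-6      # m, assumed pixel size
focal_length = 0.5      # m, assumed collimator focal length
n = 256
y, x = np.mgrid[0:n, 0:n]
sigma_px = 8.0
spot = np.exp(-((x - 130) ** 2 + (y - 120) ** 2) / (2 * sigma_px ** 2))

def d4sigma_width(img, pitch):
    """Second-moment (D4-sigma) beam width along x, in metres."""
    total = img.sum()
    xs = np.arange(img.shape[1]) * pitch
    marg = img.sum(axis=0)                      # marginal intensity along x
    cx = (xs * marg).sum() / total              # centroid
    var = (((xs - cx) ** 2) * marg).sum() / total
    return 4.0 * np.sqrt(var)

width = d4sigma_width(spot, pixel_pitch)   # = 4 * sigma for a Gaussian spot
divergence = width / focal_length          # full-angle divergence, rad
```

Fusing frames taken at several attenuations, as the paper proposes, extends this measurement beyond the CCD's limited linear dynamic range.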
Generation of high-dynamic range image from digital photo
NASA Astrophysics Data System (ADS)
Wang, Ying; Potemin, Igor S.; Zhdanov, Dmitry D.; Wang, Xu-yang; Cheng, Han
2016-10-01
A number of modern applications, such as medical imaging, remote sensing satellite imaging, and virtual prototyping, use high dynamic range images (HDRI). Generally, to obtain an HDRI from an ordinary digital image, the camera must be calibrated. The article proposes a camera calibration method that uses the clear sky as the standard light source, taking the sky luminance from the CIE sky model for the corresponding geographical coordinates and time. The article considers base algorithms for recovering real luminance values from an ordinary digital image and the corresponding programmed implementation of the algorithms. Moreover, examples of HDRIs reconstructed from ordinary images illustrate the article.
High speed three-dimensional laser scanner with real time processing
NASA Technical Reports Server (NTRS)
Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)
2008-01-01
A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter, coupled to the imaging sensor, at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line. The imaging sensor images the laser reflections from the detail to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame and for computing the range to the detail using height, depression angle and/or offset. The computer displays the range to the area and the detail thereon covered by the image frame.
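The range computation from height, depression angle and offset amounts to simple triangulation. The sketch below is a minimal illustration under an assumed geometry (emitter height and angle are hypothetical values), not the patented processing pipeline:

```python
import math

height = 0.50                       # assumed emitter height above the area, m
depression = math.radians(30.0)     # assumed laser depression angle

def detail_height(line_shift):
    """Height of a detail from the observed shift of the laser line.
    A surface raised by z pulls the line toward the emitter by z / tan(theta),
    so z = shift * tan(theta)."""
    return line_shift * math.tan(depression)

# where the line strikes a flat surface, from height and depression angle
strike_distance = height / math.tan(depression)
z = detail_height(0.010)            # a 10 mm observed line shift
```

In the patent, the pipeline computer evaluates this kind of relation per image frame as the traverse sweeps the laser line across the area.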
Color transfer between high-dynamic-range images
NASA Astrophysics Data System (ADS)
Hristova, Hristina; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi
2015-09-01
Color transfer methods alter the look of a source image with regard to a reference image. So far, the proposed color transfer methods have been limited to low-dynamic-range (LDR) images. Unlike LDR images, which are display-dependent, high-dynamic-range (HDR) images contain real physical values of world luminance and are able to capture high luminance variations and the finest details of real-world scenes. Therefore, there exists a strong discrepancy between the two types of images. In this paper, we bridge the gap between the color transfer domain and HDR imagery by introducing HDR extensions to LDR color transfer methods. We tackle the main issues of applying a color transfer between two HDR images. First, to address the nature of light and color distributions in the context of HDR imagery, we carry out modifications of traditional color spaces. Furthermore, we ensure high precision in the quantization of the dynamic range for histogram computations. As image clustering (based on light and colors) proved to be an important aspect of color transfer, we analyze it and adapt it to the HDR domain. Our framework has been applied to several state-of-the-art color transfer methods. Qualitative experiments have shown that results obtained with the proposed adaptation approach exhibit fewer artifacts and are visually more pleasing than results obtained by straightforwardly applying existing color transfer methods to HDR images.
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing sufficiently high imaging spatial resolution and pixel counts. Engineering the CAOS camera platform with, for example, the Texas Instruments (TI) Digital Micromirror Device (DMD) ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K
2016-11-01
Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to further expand in vivo OCT applications. This paper describes a solution in which we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, combined with a wide-angle camera lens in the sample arm to provide a FOV of ~20 × 20 cm². The akinetic swept source operates at a 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure for the programmable akinetic light source so that the sensitivity of the OCT system over the ~10 cm ranging distance is substantially improved for imaging of large-volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm²). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method of realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, enabling the camera pixels always to have reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm that effectively modulates different light intensities to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and apply HDRI to different objects.
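The per-pixel coded exposure idea can be sketched as follows: attenuate each pixel until it is unsaturated, then divide the captured value by its known exposure code to recover radiance. The halving schedule and sensor numbers below are illustrative assumptions, not the authors' adaptive algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
full_well = 255.0
radiance = rng.uniform(0.1, 400.0, size=(16, 16))   # scene radiance, arbitrary units

# per-pixel exposure codes (fraction of the frame the DMD routes light to a pixel)
codes = np.ones_like(radiance)
for _ in range(8):                       # iteratively halve any saturated pixels
    captured = np.minimum(radiance * codes, full_well)
    saturated = captured >= full_well
    if not saturated.any():
        break
    codes[saturated] *= 0.5

# radiometric reconstruction: divide out each pixel's known exposure code
recovered = captured / codes
```

Because each pixel ends up within the sensor's range and its code is known, the reconstructed map spans a dynamic range well beyond the sensor's native one.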
Performance of PHOTONIS' low light level CMOS imaging sensor for long range observation
NASA Astrophysics Data System (ADS)
Bourree, Loig E.
2014-05-01
Identification of potential threats in low-light conditions through imaging is commonly achieved in closed-circuit television (CCTV) and surveillance cameras by combining the extended near infrared (NIR) response (800-1000 nm wavelengths) of the imaging sensor with NIR LED or laser illuminators. Consequently, camera systems typically used for long-range observation often require high-power lasers in order to generate sufficient photons on targets to acquire detailed images at night. While these systems may adequately identify targets at long range, the NIR illumination needed to achieve such functionality can easily be detected and therefore may not be suitable for covert applications. In order to reduce dependency on supplemental illumination in low-light conditions, the frame rate of the imaging sensor may be reduced to increase the photon integration time and thus improve the signal-to-noise ratio of the image. However, this may hinder the camera's ability to image moving objects with high fidelity. In order to address these particular drawbacks, PHOTONIS has developed a CMOS imaging sensor (CIS) with a pixel architecture and geometry designed specifically to overcome these issues in low-light level imaging. By combining this CIS with field programmable gate array (FPGA)-based image processing electronics, PHOTONIS has achieved low-read-noise imaging with enhanced signal-to-noise ratio at quarter-moon illumination, all at standard video frame rates. The performance of this CIS is discussed herein and compared to other commercially available CMOS and CCD sensors for long-range observation applications.
Buried Object Detection Method Using Optimum Frequency Range in Extremely Shallow Underground
NASA Astrophysics Data System (ADS)
Sugimoto, Tsuneyoshi; Abe, Touma
2011-07-01
We propose a new detection method for buried objects using the optimum frequency range of the corresponding vibration velocity response. Flat speakers and a scanning laser Doppler vibrometer (SLDV) are used for noncontact acoustic imaging of the extremely shallow underground. The exploration depth depends on the sound pressure, but it is usually less than 10 cm. Styrofoam, wood (silver fir) and acrylic boards of the same size, styrofoam boards of different sizes, a hollow toy duck, a hollow plastic container, a plastic container filled with sand, a hollow steel can and an unglazed pot are used as buried objects, buried in sand at a depth of about 2 cm. The imaging procedure for buried objects using the optimum frequency range is as follows. First, the standardized difference from the average vibration velocity is calculated for all scan points. Next, using this result, underground images are made with a constant frequency width to search for the frequency response range of the buried object. After choosing an approximate frequency response range, the difference between the average vibration velocity over all points and that over several points showing a clear response is calculated as a final confirmation of the optimum frequency range. Using this optimum frequency range, we can obtain the clearest image of the buried object. The experimental results confirm the effectiveness of the proposed method; in particular, a clear image of the buried object was obtained even when the SLDV image was unclear.
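The first step of the procedure, the standardized difference from the average vibration velocity, is essentially a per-frequency z-score across scan points; points above a buried object then stand out in its resonant band. The synthetic spectra, band, and frequency axis below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_points, n_freqs = 100, 200
freqs = np.linspace(50, 1050, n_freqs)   # Hz, illustrative sweep
# vibration velocity spectra for every scan point (noise floor everywhere)
v = rng.normal(1.0, 0.05, size=(n_points, n_freqs))
# scan points above a buried object resonate in a narrow band
buried = np.arange(10)
band = (freqs > 300) & (freqs < 360)
v[np.ix_(buried, np.where(band)[0])] += 0.8

# standardized difference from the average vibration velocity, per point/frequency
z = (v - v.mean(axis=0)) / v.std(axis=0)

# search: the frequency where the buried points stand out most strongly
score = z[buried].mean(axis=0)
best = freqs[np.argmax(score)]
```

Imaging `z` over the scan grid within the selected band is what produces the "clearest image" of the buried object that the abstract describes.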
A research on radiation calibration of high dynamic range based on the dual channel CMOS
NASA Astrophysics Data System (ADS)
Ma, Kai; Shi, Zhan; Pan, Xiaodong; Wang, Yongsheng; Wang, Jianghua
2017-10-01
A dual-channel complementary metal-oxide semiconductor (CMOS) sensor can produce a high dynamic range (HDR) image by extending the gray-level range through fusion of the high-gain and low-gain channel images of the same frame. In dual-channel image fusion, the radiation response coefficients of each pixel in both channels are used to calculate that pixel's gray level in the HDR image. Because these response coefficients play a crucial role in the fusion, an effective method of acquiring them is needed. This article studies radiometric calibration for high dynamic range imaging based on the dual-channel CMOS sensor and designs an experiment to calibrate the radiation response coefficients of the sensor used. Finally, the calibrated response parameters are applied in the dual-channel CMOS sensor, verifying the correctness and feasibility of the method presented in this paper.
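A minimal sketch of the calibration-and-fusion idea, assuming a linear radiometric response per channel; the gain and offset values, bit depth, and flat-field sweep are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
full_scale = 4095.0                    # 12-bit readout (assumed)
gain_hi, offset_hi = 16.0, 100.0       # hypothetical response coefficients
gain_lo, offset_lo = 1.0, 80.0

radiance = rng.uniform(0.0, 3000.0, size=(32, 32))
hi = np.minimum(gain_hi * radiance + offset_hi, full_scale)   # saturates early
lo = np.minimum(gain_lo * radiance + offset_lo, full_scale)

# calibration: fit gain/offset per channel from an unsaturated flat-field sweep
levels = np.linspace(0.0, 200.0, 20)
resp_hi = gain_hi * levels + offset_hi
fit_gain, fit_offset = np.polyfit(levels, resp_hi, 1)

# fusion: use the high-gain channel where unsaturated, else the low-gain channel,
# mapping both back to radiance with their calibrated coefficients
use_hi = hi < full_scale
hdr = np.where(use_hi, (hi - offset_hi) / gain_hi, (lo - offset_lo) / gain_lo)
```

With accurate coefficients the two channels map onto a single consistent radiance scale, which is exactly why the calibration step is critical to the fusion.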
A study of the effects of strong magnetic fields on the image resolution of PET scanners
NASA Astrophysics Data System (ADS)
Burdette, Don J.
Very high resolution images can be achieved in small animal PET systems utilizing solid state silicon pad detectors. In such systems, using detectors with sub-millimeter intrinsic resolutions, the range of the positron is the largest contribution to the image blur. The size of the positron range effect depends on the initial positron energy and hence the radioactive tracer used. For higher energy positron emitters, such as 68Ga and 94mTc, the variation of the annihilation point dominates the spatial resolution. In this study, two techniques are investigated to improve the image resolution of PET scanners limited by the range of the positron. First, the positron range can be reduced by embedding the PET field of view in a strong magnetic field. We have developed a silicon pad detector based PET instrument that can operate in strong magnetic fields with an image resolution of 0.7 mm FWHM to study this effect. Second, iterative reconstruction methods can be used to statistically correct for the range of the positron. Both strong magnetic fields and iterative reconstruction algorithms that statistically account for the positron range distribution are investigated in this work.
Image dissector camera system study
NASA Technical Reports Server (NTRS)
Howell, L.
1984-01-01
Various aspects of a rendezvous and docking system using an image dissector detector, as compared to a GaAs detector, were discussed. Investigation into a gimbaled scanning system is also covered, and the measured video response curves from the image dissector camera are presented. Rendezvous will occur at ranges greater than 100 meters. The maximum range considered was 1000 meters. During docking, the range, range-rate, angle, and angle-rate to each reflector on the satellite must be measured. Docking range will be from 3 to 100 meters. The system consists of a CW laser diode transmitter and an image dissector receiver. The transmitter beam is amplitude modulated with three sine wave tones for ranging. The beam is coaxially combined with the receiver beam. Mechanical deflection of the transmitter beam, ±10 degrees in both X and Y, can be accomplished before or after it is combined with the receiver beam. The receiver will have a field-of-view (FOV) of 20 degrees and an instantaneous field-of-view (IFOV) of two milliradians (mrad) and will be electronically scanned in the image dissector. The increase in performance obtained from the GaAs photocathode is not needed to meet the present performance requirements.
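Tone-modulated ranging of this kind recovers range from the measured phase of an AM tone, with each tone unambiguous only over c/(2f); a coarser tone resolves the integer cycle count of a finer one. A minimal sketch (the tone frequency and range are illustrative, not the system's actual values):

```python
import math

C = 299_792_458.0          # speed of light, m/s

def range_from_phase(phase_rad, f_mod):
    """Round-trip AM-tone ranging: R = c * phi / (4 * pi * f)."""
    return C * phase_rad / (4.0 * math.pi * f_mod)

def ambiguity_interval(f_mod):
    """Unambiguous range of a single tone: c / (2 f)."""
    return C / (2.0 * f_mod)

f_fine = 10e6                                       # fine ranging tone, Hz
true_range = 42.0                                   # m (beyond one ambiguity interval)
phase = 4.0 * math.pi * f_fine * true_range / C     # wraps since range > c/(2f)
wrapped = math.fmod(phase, 2.0 * math.pi)
cycles = round((phase - wrapped) / (2.0 * math.pi)) # resolved by the coarser tones
recovered = range_from_phase(wrapped + cycles * 2.0 * math.pi, f_fine)
```

This is the same phase-to-range relation the scannerless range imagers in the records above evaluate, there done in parallel for every pixel.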
NASA Astrophysics Data System (ADS)
Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.
2006-10-01
In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step in realizing automatic surveying and recognition. Traditional matching methods encounter problems in digital close-range stereo photogrammetry because changes in gray scale or texture are not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm makes three improvements to image matching. Firstly, shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthesized matching measure. Secondly, the topological connection relations of matching points in a Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of each matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has a higher matching speed and matching accuracy than a pyramid image matching algorithm based on gray-scale correlation.
Multibeam single frequency synthetic aperture radar processor for imaging separate range swaths
NASA Technical Reports Server (NTRS)
Jain, A. (Inventor)
1982-01-01
A single-frequency multibeam synthetic aperture radar for large-swath imaging is disclosed. Each beam illuminates a separate "footprint" (i.e., range and azimuth interval). The distinct azimuth intervals for the separate beams produce a distinct Doppler frequency spectrum for each beam. After range correlation of the raw data, an optical processor develops image data for the different beams by spatially separating the beams, placing each beam of different Doppler frequency spectrum in a different location in the frequency plane as well as the imaging plane of the optical processor. Selection of a beam for imaging may be made in the frequency plane by adjusting the position of an aperture, or in the image plane by adjusting the position of a slit. The raw data may also be processed in digital form in an analogous manner.
Three-dimensional near-field MIMO array imaging using range migration techniques.
Zhuge, Xiaodong; Yarovoy, Alexander G
2012-06-01
This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.
Beam Width Robustness of a 670 GHz Imaging Radar
NASA Technical Reports Server (NTRS)
Cooper, K. B.; Llombart, N.; Dengler, R. J.; Siegel, P. H.
2009-01-01
Detection of a replica bomb belt concealed on a mannequin at 4 m standoff range is achieved using a 670 GHz imaging radar. At a somewhat larger standoff range of 4.6 m, the radar's beam width increases substantially, but the through-shirt image quality remains good. This suggests that a relatively modest increase in aperture size over the current design will be sufficient to detect person-borne concealed weapons at ranges exceeding 25 meters.
Chen, Ting; Zhang, Miao; Jabbour, Salma; Wang, Hesheng; Barbee, David; Das, Indra J; Yue, Ning
2018-04-10
Through-plane motion introduces uncertainty in three-dimensional (3D) motion monitoring when using single-slice on-board imaging (OBI) modalities such as cine MRI. We propose a principal component analysis (PCA)-based framework to determine the optimal imaging plane to minimize the through-plane motion for single-slice imaging-based motion monitoring. Four-dimensional computed tomography (4DCT) images of eight thoracic cancer patients were retrospectively analyzed. The target volumes were manually delineated at different respiratory phases of 4DCT. We performed automated image registration to establish the 4D respiratory target motion trajectories for all patients. PCA was conducted using the motion information to define the three principal components of the respiratory motion trajectories. Two imaging planes were determined perpendicular to the second and third principal component, respectively, to avoid imaging with the primary principal component of the through-plane motion. Single-slice images were reconstructed from 4DCT in the PCA-derived orthogonal imaging planes and were compared against the traditional AP/Lateral image pairs on through-plane motion, residual error in motion monitoring, absolute motion amplitude error and the similarity between target segmentations at different phases. We evaluated the significance of the proposed motion monitoring improvement using paired t test analysis. The PCA-determined imaging planes had overall less through-plane motion compared against the AP/Lateral image pairs. For all patients, the average through-plane motion was 3.6 mm (range: 1.6-5.6 mm) for the AP view and 1.7 mm (range: 0.6-2.7 mm) for the Lateral view. With PCA optimization, the average through-plane motion was 2.5 mm (range: 1.3-3.9 mm) and 0.6 mm (range: 0.2-1.5 mm) for the two imaging planes, respectively. 
The absolute residual error of the reconstructed max-exhale-to-inhale motion averaged 0.7 mm (range: 0.4-1.3 mm, 95% CI: 0.4-1.1 mm) using optimized imaging planes, averaged 0.5 mm (range: 0.3-1.0 mm, 95% CI: 0.2-0.8 mm) using an imaging plane perpendicular to the minimal motion component only and averaged 1.3 mm (range: 0.4-2.8 mm, 95% CI: 0.4-2.3 mm) in AP/Lateral orthogonal image pairs. The root-mean-square error of reconstructed displacement was 0.8 mm for optimized imaging planes, 0.6 mm for imaging plane perpendicular to the minimal motion component only, and 1.6 mm for AP/Lateral orthogonal image pairs. When using the optimized imaging planes for motion monitoring, there was no significant absolute amplitude error of the reconstructed motion (P = 0.0988), while AP/Lateral images had significant error (P = 0.0097) with a paired t test. The average surface distance (ASD) between overlaid two-dimensional (2D) tumor segmentation at end-of-inhale and end-of-exhale for all eight patients was 0.6 ± 0.2 mm in optimized imaging planes and 1.4 ± 0.8 mm in AP/Lateral images. The Dice similarity coefficient (DSC) between overlaid 2D tumor segmentation at end-of-inhale and end-of-exhale for all eight patients was 0.96 ± 0.03 in optimized imaging planes and 0.89 ± 0.05 in AP/Lateral images. Both ASD (P = 0.034) and DSC (P = 0.022) were significantly improved in the optimized imaging planes. Motion monitoring using imaging planes determined by the proposed PCA-based framework had significantly improved performance. Single-slice image-based motion tracking can be used for clinical implementations such as MR image-guided radiation therapy (MR-IGRT). © 2018 American Association of Physicists in Medicine.
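The plane-selection idea can be sketched with a PCA of a synthetic 3D trajectory: imaging planes whose normals are the second and third principal components see far less through-plane motion than an AP view when one motion direction dominates. The trajectory amplitudes and noise below are illustrative, not patient data:

```python
import numpy as np

rng = np.random.default_rng(5)
# respiratory target trajectory over 10 phases: dominant SI, smaller AP/LR motion
n_phases = 10
t = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
traj = np.stack([0.5 * np.sin(t),          # LR (x), mm
                 2.0 * np.sin(t),          # AP (y), mm
                 6.0 * np.sin(t)], axis=1) # SI (z), mm
traj += 0.05 * rng.standard_normal(traj.shape)

# PCA of the trajectory: principal components of the respiratory motion
centered = traj - traj.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
p1, p2, p3 = vt   # rows: principal directions, in descending variance order

def through_plane(traj, normal):
    """Peak-to-peak motion along an imaging plane's normal."""
    proj = (traj - traj.mean(axis=0)) @ normal
    return proj.max() - proj.min()

# the two optimized planes are perpendicular to the 2nd and 3rd components,
# so neither plane's normal contains the dominant (1st) motion component
tp_opt = max(through_plane(traj, p2), through_plane(traj, p3))
tp_ap = through_plane(traj, np.array([0.0, 1.0, 0.0]))   # AP view: normal ~ AP axis
```

Because the dominant motion component lies in-plane for both optimized views, their through-plane motion reduces to the residual components, mirroring the reduction the study reports against AP/Lateral pairs.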
Chahl, J S
2014-01-20
This paper describes an application for arrays of narrow-field-of-view sensors with parallel optical axes. These devices exhibit some complementary characteristics with respect to conventional perspective projection or angular projection imaging devices. Conventional imaging devices measure rotational egomotion directly by measuring the angular velocity of the projected image. Translational egomotion cannot be measured directly by these devices because the induced image motion depends on the unknown range of the viewed object. On the other hand, a known translational motion generates image velocities which can be used to recover the ranges of objects and hence the three-dimensional (3D) structure of the environment. A new method is presented for computing egomotion and range using the properties of linear arrays of independent narrow-field-of-view optical sensors. An approximate parallel projection can be used to measure translational egomotion in terms of the velocity of the image. On the other hand, a known rotational motion of the paraxial sensor array generates image velocities, which can be used to recover the 3D structure of the environment. Results of tests of an experimental array confirm these properties.
Segmentation, modeling and classification of the compact objects in a pile
NASA Technical Reports Server (NTRS)
Gupta, Alok; Funka-Lea, Gareth; Wohn, Kwangyoen
1990-01-01
The problem of interpreting dense range images obtained from the scene of a heap of man-made objects is discussed. A range image interpretation system consisting of segmentation, modeling, verification, and classification procedures is described. First, the range image is segmented into regions and reasoning is done about the physical support of these regions. Second, for each region several possible three-dimensional interpretations are made based on various scenarios of the objects physical support. Finally each interpretation is tested against the data for its consistency. The superquadric model is selected as the three-dimensional shape descriptor, plus tapering deformations along the major axis. Experimental results obtained from some complex range images of mail pieces are reported to demonstrate the soundness and the robustness of our approach.
Target recognition of ladar range images using even-order Zernike moments.
Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi
2012-11-01
Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target in a range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target in a range image. It is found that both the rotation invariance and the classification performance of the even-order ZMs are better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
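The rotation-invariance property exploited here is that rotating the image changes only the phase of a Zernike moment, leaving its magnitude unchanged. A generic textbook implementation (not the paper's code; normalization conventions vary) that checks this on a 90-degree rotation:

```python
from math import factorial
import numpy as np

def zernike_moment(img, n, m):
    """Zernike moment Z_{n,m} of a square image mapped onto the unit disk.
    Requires n >= |m| and n - |m| even."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    # Radial polynomial R_n^{|m|}(rho)
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    V = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img * V * mask)

# Rotation changes only the phase of Z_{n,m}; the magnitude is invariant.
img = np.zeros((64, 64))
img[10:30, 5:20] = 1.0          # an off-center, asymmetric patch
a = abs(zernike_moment(img, 2, 2))
b = abs(zernike_moment(np.rot90(img), 2, 2))
```

A 90-degree rotation maps the pixel grid exactly onto itself, so the two magnitudes agree to numerical precision.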
Research on the range side lobe suppression method for modulated stepped frequency radar signals
NASA Astrophysics Data System (ADS)
Liu, Yinkai; Shan, Tao; Feng, Yuan
2018-05-01
The magnitude of time-domain range sidelobe of modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobe in modulated stepped frequency radar imaging is analyzed first in real environment. Then, the chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to the minimum sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on CPSO algorithm can effectively reduce the sidelobe peak value of one-dimensional range images, which outperforms the common sidelobe suppression methods and avoids the coverage of weak scattering points by strong scattering points due to the high sidelobes.
NASA Astrophysics Data System (ADS)
Hasegawa, Hideyuki
2017-07-01
The range spatial resolution is an important factor determining image quality in ultrasonic imaging. It depends on the ultrasonic pulse length, which is determined by the mechanical response of the piezoelectric element in an ultrasonic probe. To improve the range spatial resolution without replacing the transducer element, methods based on maximum likelihood (ML) estimation and multiple signal classification (MUSIC) are proposed in the present study. The proposed methods were applied to echo signals received by individual transducer elements in an ultrasonic probe. Basic experimental results showed that the axial width at half-maximum of the echo from a string phantom was improved from 0.21 mm (conventional method) to 0.086 mm (ML) and 0.094 mm (MUSIC).
Wide-Field Imaging Using Nitrogen Vacancies
NASA Technical Reports Server (NTRS)
Englund, Dirk Robert (Inventor); Trusheim, Matthew Edwin (Inventor)
2017-01-01
Nitrogen vacancies (NVs) in bulk diamonds and nanodiamonds can be used to sense temperature, pressure, electromagnetic fields, and pH. Unfortunately, conventional sensing techniques use gated detection and confocal imaging, limiting the measurement sensitivity and precluding wide-field imaging. Conversely, the present sensing techniques do not require gated detection or confocal imaging and can therefore be used to image temperature, pressure, electromagnetic fields, and pH over wide fields of view. In some cases, wide-field imaging supports spatial localization of the NVs to precisions at or below the diffraction limit. Moreover, the measurements can span an extremely wide dynamic range at very high sensitivity.
Robust mosaics of close-range high-resolution images
NASA Astrophysics Data System (ADS)
Song, Ran; Szymanski, John E.
2008-03-01
This paper presents a robust algorithm, relying only on the information contained within the captured images, for constructing massive composite mosaics from close-range, high-resolution originals such as those obtained when imaging architectural and heritage structures. We first apply the Harris algorithm to extract a selection of corners and then employ both the intensity correlation and the spatial correlation between corresponding corners to match them. Next, we estimate the eight-parameter projective transformation matrix using a genetic algorithm. Lastly, image fusion using a weighted blending function together with intensity compensation produces an effective, seamless mosaic image.
Yoshida, Ken; Yamazaki, Hideya; Takenaka, Tadashi; Kotsuma, Tadayuki; Yoshida, Mineo; Furuya, Seiichi; Tanaka, Eiichi; Uegaki, Tadaaki; Kuriyama, Keiko; Matsumoto, Hisanobu; Yamada, Shigetoshi; Ban, Chiaki
2010-07-01
To investigate the feasibility of our novel image-based high-dose-rate interstitial brachytherapy (HDR-ISBT) for uterine cervical cancer, we evaluated the dose-volume histogram (DVH) according to the recommendations of the Gynecological GEC-ESTRO Working Group for image-based intracavitary brachytherapy (ICBT). Between June 2005 and June 2007, 18 previously untreated cervical cancer patients were enrolled. We implanted magnetic resonance imaging (MRI)-available plastic applicators by our unique ambulatory technique. Total treatment doses were 30-36 Gy (6 Gy per fraction) combined with external beam radiotherapy (EBRT). Treatment plans were created based on planning computed tomography with MRI as a reference. DVHs of the high-risk clinical target volume (HR CTV), intermediate-risk CTV (IR CTV), and the bladder and rectum were calculated. Dose values were biologically normalized to equivalent doses in 2-Gy fractions (EQD(2)). The median D90 (HR CTV) and D90 (IR CTV) per fraction were 6.8 Gy (range, 5.5-7.5) and 5.4 Gy (range, 4.2-6.3), respectively. The median V100 (HR CTV) and V100 (IR CTV) were 98.4% (range, 83-100) and 81.8% (range, 64-93.8), respectively. When the dose of EBRT was added, the median D90 and D100 of HR CTV were 80.6 Gy (range, 65.5-96.6) and 62.4 Gy (range, 49-83.2). The D(2cc) of the bladder was 62 Gy (range, 51.4-89) and of the rectum was 65.9 Gy (range, 48.9-76). Although the targets were advanced and difficult to treat effectively by ICBT, MRI-aided image-based ISBT showed favorable results for CTV and organs at risk compared with previously reported image-based ICBT results. (c) 2010 Elsevier Inc. All rights reserved.
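The biological normalization mentioned above uses the linear-quadratic model: EQD(2) = D(d + α/β)/(2 + α/β), where D is total dose and d the dose per fraction. A small sketch (the α/β values below are the conventional assumptions for tumor and late-responding tissue, not figures taken from this paper):

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2-Gy fractions under the linear-quadratic model."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Five HDR fractions of 6 Gy (30 Gy physical), tumor alpha/beta assumed 10 Gy
tumor = eqd2(5 * 6.0, 6.0, 10.0)   # 30 * 16 / 12 = 40 Gy EQD2
# Same physical dose, late-responding normal tissue, alpha/beta assumed 3 Gy
oar = eqd2(5 * 6.0, 6.0, 3.0)      # 30 * 9 / 5 = 54 Gy EQD2
```

The spread between the two results shows why large fraction sizes penalize late-responding organs at risk more than tumor, which is the reason DVH dose points are normalized before comparison.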
Mid-infrared hyperspectral imaging for the detection of explosive compounds
NASA Astrophysics Data System (ADS)
Ruxton, K.; Robertson, G.; Miller, W.; Malcolm, G. P. A.; Maker, G. T.
2012-10-01
Active hyperspectral imaging is a valuable tool in a wide range of applications. A developing market is the detection and identification of energetic compounds through analysis of the resulting absorption spectrum. This work presents a selection of results from a prototype mid-infrared (MWIR) hyperspectral imaging instrument that has successfully been used for compound detection at a range of standoff distances. Active hyperspectral imaging utilises a broadly tunable laser source to illuminate the scene with light over a range of wavelengths. While there are a number of illumination methods, this work illuminates the scene by raster scanning the laser beam using a pair of galvanometric mirrors. The resulting backscattered light from the scene is collected by the same mirrors and directed and focussed onto a suitable single-point detector, where the image is constructed pixel by pixel. The imaging instrument that was developed in this work is based around a MWIR optical parametric oscillator (OPO) source with broad tunability, operating at 2.6 μm to 3.7 μm. Due to material handling procedures associated with explosive compounds, experimental work was undertaken initially using simulant compounds. A second set of compounds that was tested alongside the simulant compounds is a range of confusion compounds. By having the broad wavelength tunability of the OPO, extended absorption spectra of the compounds could be obtained to aid in compound identification. The prototype imager instrument has successfully been used to record the absorption spectra for a range of compounds from the simulant and confusion sets and current work is now investigating actual explosive compounds. The authors see a very promising outlook for the MWIR hyperspectral imager. From an applications point of view this format of imaging instrument could be used for a range of standoff, improvised explosive device (IED) detection applications and potential incident scene forensic investigation.
Exploitation of Microdoppler and Multiple Scattering Phenomena for Radar Target Recognition
2006-08-24
is tested with measurement data. The resulting GPR images demonstrate the effectiveness of the proposed algorithm. INTRODUCTION: Subsurface imaging to ... utilizes the fast Fourier transform (FFT) to expedite GPR imaging. Recently, we reported a fast and effective SAR-based subsurface imaging technique that can provide good resolutions in both the range and cross-range domains [11]. Our algorithm differs from Witten's [9] and Hansen's
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foroudi, Farshad, E-mail: farshad.foroudi@petermac.org; Pham, Daniel; Bressel, Mathias
2013-05-01
Purpose: The use of image guidance protocols using soft tissue anatomy identification before treatment can reduce interfractional variation. This makes intrafraction clinical target volume (CTV) to planning target volume (PTV) changes more important, including those resulting from intrafraction bladder filling and motion. The purpose of this study was to investigate the required intrafraction margins for soft tissue image guidance from pretreatment and posttreatment volumetric imaging. Methods and Materials: Fifty patients with muscle-invasive bladder cancer (T2-T4) underwent an adaptive radiation therapy protocol using daily pretreatment cone beam computed tomography (CBCT) with weekly posttreatment CBCT. A total of 235 pairs of pretreatment and posttreatment CBCT images were retrospectively contoured by a single radiation oncologist (CBCT-CTV). The maximum bladder displacement was measured according to the patient's bony pelvis movement during treatment, intrafraction bladder filling, and bladder centroid motion. Results: The mean time between pretreatment and posttreatment CBCT was 13 minutes, 52 seconds (range, 7 min 52 sec to 30 min 56 sec). Taking into account patient motion, bladder centroid motion, and bladder filling, the required margins to cover intrafraction changes from pretreatment to posttreatment in the superior, inferior, right, left, anterior, and posterior directions were 1.25 cm (range, 1.19-1.50 cm), 0.67 cm (range, 0.58-1.12 cm), 0.74 cm (range, 0.59-0.94 cm), 0.73 cm (range, 0.51-1.00 cm), 1.20 cm (range, 0.85-1.32 cm), and 0.86 cm (range, 0.73-0.99 cm), respectively. Small bladders on pretreatment imaging had relatively the largest increase in pretreatment to posttreatment volume. Conclusion: Intrafraction motion of the bladder based on pretreatment and posttreatment bladder imaging can be significant, particularly in the anterior and superior directions.
Patient motion, bladder centroid motion, and bladder filling all contribute to changes between pretreatment and posttreatment imaging. Asymmetric expansion of CTV to PTV should be considered. Care is required in using image-guided radiation therapy protocols that reduce CTV to PTV margins based only on daily pretreatment soft tissue position.
Full range line-field parallel swept source imaging utilizing digital refocusing
NASA Astrophysics Data System (ADS)
Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.
2015-12-01
We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
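The core operation in such Fourier-domain refocusing is multiplying the spectrum of the complex-valued field by a defocus-correcting quadratic phase. A minimal angular-spectrum sketch (generic Fresnel form; the LPSI paper's geometry-derived phase term and parameters are not reproduced here, and the pixel pitch, wavelength, and defocus distance below are illustrative):

```python
import numpy as np

def defocus_phase(shape, pitch, wavelength, z):
    """Quadratic (Fresnel) phase factor for a defocus distance z."""
    fy = np.fft.fftfreq(shape[0], pitch)
    fx = np.fft.fftfreq(shape[1], pitch)
    FX, FY = np.meshgrid(fx, fy)
    return np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))

def refocus(field, pitch, wavelength, z):
    """Undo a defocus of distance z by applying the conjugate phase."""
    H = defocus_phase(field.shape, pitch, wavelength, z)
    return np.fft.ifft2(np.fft.fft2(field) * np.conj(H))

# Simulate a defocused point scatterer, then digitally refocus it
focused = np.zeros((128, 128), complex)
focused[64, 64] = 1.0
H = defocus_phase(focused.shape, 10e-6, 1e-6, 2e-3)
blurred = np.fft.ifft2(np.fft.fft2(focused) * H)
sharp = refocus(blurred, 10e-6, 1e-6, 2e-3)
```

Because the phase factor has unit magnitude, the correction is exact on complex interferometric data, which is why the method recovers the full in-focus depth range rather than merely sharpening intensity images.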
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in lighter levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
Poddar, Raju; Cortés, Dennis E.; Werner, John S.; Mannis, Mark J.
2013-01-01
A high-speed (100 kHz A-scans/s) complex conjugate resolved 1 μm swept source optical coherence tomography (SS-OCT) system using coherence revival of the light source is suitable for dense three-dimensional (3-D) imaging of the anterior segment. The short acquisition time helps to minimize the influence of motion artifacts. The extended depth range of the SS-OCT system allows topographic analysis of clinically relevant images of the entire depth of the anterior segment of the eye. Patients with the type 1 Boston Keratoprosthesis (KPro) require evaluation of the full anterior segment depth. Current commercially available OCT systems are not suitable for this application due to limited acquisition speed, resolution, and axial imaging range. Moreover, most commonly used research grade and some clinical OCT systems implement a commercially available SS (Axsun) that offers only 3.7 mm imaging range (in air) in its standard configuration. We describe implementation of a common swept laser with built-in k-clock to allow phase stable imaging in both low range and high range, 3.7 and 11.5 mm in air, respectively, without the need to build an external MZI k-clock. As a result, 3-D morphology of the KPro position with respect to the surrounding tissue could be investigated in vivo both at high resolution and with large depth range to achieve noninvasive and precise evaluation of success of the surgical procedure. PMID:23912759
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by a factor of 2.41. We built a prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.
Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging
Guo, Qijia; Wang, Jie; Chang, Tianying
2017-01-01
The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics of the approaches, especially the anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target.
Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
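The range computation described (difference image, spot centroid, disparity from the infinite-range reference, triangulation) can be sketched in a few lines; the geometry constants, threshold, and function name below are illustrative, not taken from the patent:

```python
import numpy as np

def spot_range(ambient, lit, ref_x, baseline_m, focal_px):
    """Range from the pixel disparity of a laser spot relative to its
    infinite-range reference coordinate."""
    diff = lit.astype(float) - ambient.astype(float)   # remove common pixels
    diff[diff < 0.5 * diff.max()] = 0.0                # keep only the bright spot
    ys, xs = np.nonzero(diff)
    w = diff[ys, xs]
    cx = (xs * w).sum() / w.sum()                      # intensity-weighted centroid
    disparity = abs(cx - ref_x)                        # pixels of parallax
    return baseline_m * focal_px / disparity           # triangulated range (m)

# Synthetic frames: a 3x3 spot centered at x = 120, reference at x = 100
ambient = np.zeros((240, 320))
lit = ambient.copy()
lit[100:103, 119:122] = 200.0
r = spot_range(ambient, lit, ref_x=100.0, baseline_m=0.1, focal_px=500.0)
```

With a 0.1 m baseline, a 500-pixel focal length, and 20 pixels of disparity, the triangulated range is 2.5 m; closer targets produce larger disparities.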
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
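The partition step hinges on the gray levels where adjacent weighted Gaussian components intersect, i.e. where w1·N(x; μ1, σ1) = w2·N(x; μ2, σ2). Taking logarithms reduces this to a quadratic in x. A sketch of that one step (generic math, not the authors' code):

```python
import numpy as np

def gaussian_intersections(w1, mu1, s1, w2, mu2, s2):
    """Gray levels where two weighted Gaussian pdfs are equal.
    Derived from log(w1*N1) = log(w2*N2): A x^2 + B x + C = 0."""
    A = 1.0 / s2 ** 2 - 1.0 / s1 ** 2
    B = 2.0 * (mu1 / s1 ** 2 - mu2 / s2 ** 2)
    C = (mu2 ** 2 / s2 ** 2 - mu1 ** 2 / s1 ** 2
         - 2.0 * np.log((w2 * s1) / (w1 * s2)))
    if abs(A) < 1e-12:          # equal variances: the quadratic degenerates
        return [-C / B]         # single intersection point
    return sorted(np.roots([A, B, C]).real)

# Equal weights and variances intersect midway between the means
pts = gaussian_intersections(0.5, 50.0, 20.0, 0.5, 150.0, 20.0)
```

Each interval between consecutive intersection points is then mapped to an output interval using the dominant component's CDF, which is the mapping the abstract describes.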
A fast and automatic fusion algorithm for unregistered multi-exposure image sequence
NASA Astrophysics Data System (ADS)
Liu, Yan; Yu, Feihong
2014-09-01
The human visual system (HVS) can perceive all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than that of the human eye, so low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, making it suitable for implementation on mobile devices. The image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile-device applications. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures: the descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and ORB is the best candidate for our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of the images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature-matching methods show that our method achieves better performance.
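The match-pruning step can be illustrated with a minimal RANSAC for a pure-translation model (the paper uses ORB correspondences and a refined RANSAC variant; this numpy-only sketch is generic, with illustrative names and data):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation between matched keypoints while
    rejecting outlier matches by consensus."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one match
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - src - t, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)  # refit on the consensus set
    return t, best

rng = np.random.default_rng(1)
src = rng.uniform(0, 500, size=(50, 2))
dst = src + np.array([5.0, -3.0])                 # true shift between exposures
dst[:10] += rng.uniform(50, 100, size=(10, 2))    # 10 incorrect matches
t, inliers = ransac_translation(src, dst)
```

The corrupted correspondences never enter the final fit, which is why RANSAC-style rejection is robust to the mismatches that ORB inevitably produces across exposures.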
NASA Astrophysics Data System (ADS)
Jaillon, Franck; Makita, Shuichi; Yasuno, Yoshiaki
2012-03-01
Ability of a new version of one-micrometer dual-beam optical coherence angiography (OCA) based on Doppler optical coherence tomography (OCT), is demonstrated for choroidal vasculature imaging. A particular feature of this system is the adjustable time delay between two probe beams. This allows changing the measurable velocity range of moving constituents such as blood without alteration of the scanning protocol. Since choroidal vasculature is made of vessels having blood flows with different velocities, this technique provides a way of discriminating vessels according to the velocity range of their inner flow. An example of choroid imaging of a normal emmetropic eye is here given. It is shown that combining images acquired with different velocity ranges provides an enhanced vasculature representation. This method may be then useful for pathological choroid characterization.
The Multidimensional Integrated Intelligent Imaging project (MI-3)
NASA Astrophysics Data System (ADS)
Allinson, N.; Anaxagoras, T.; Aveyard, J.; Arvanitis, C.; Bates, R.; Blue, A.; Bohndiek, S.; Cabello, J.; Chen, L.; Chen, S.; Clark, A.; Clayton, C.; Cook, E.; Cossins, A.; Crooks, J.; El-Gomati, M.; Evans, P. M.; Faruqi, W.; French, M.; Gow, J.; Greenshaw, T.; Greig, T.; Guerrini, N.; Harris, E. J.; Henderson, R.; Holland, A.; Jeyasundra, G.; Karadaglic, D.; Konstantinidis, A.; Liang, H. X.; Maini, K. M. S.; McMullen, G.; Olivo, A.; O'Shea, V.; Osmond, J.; Ott, R. J.; Prydderch, M.; Qiang, L.; Riley, G.; Royle, G.; Segneri, G.; Speller, R.; Symonds-Tayler, J. R. N.; Triger, S.; Turchetta, R.; Venanzi, C.; Wells, K.; Zha, X.; Zin, H.
2009-06-01
MI-3 is a consortium of 11 universities and research laboratories whose mission is to develop complementary metal-oxide semiconductor (CMOS) active pixel sensors (APS) and to apply these sensors to a range of imaging challenges. A range of sensors has been developed: On-Pixel Intelligent CMOS (OPIC)—designed for in-pixel intelligence; FPN—designed to develop novel techniques for reducing fixed pattern noise; HDR—designed to develop novel techniques for increasing dynamic range; Vanilla/PEAPS—with digital and analogue modes and regions of interest, which has also been back-thinned; Large Area Sensor (LAS)—a novel, stitched LAS; and eLeNA—which develops a range of low noise pixels. Applications being developed include autoradiography, a gamma camera system, radiotherapy verification, tissue diffraction imaging, X-ray phase-contrast imaging, DNA sequencing and electron microscopy.
NASA Astrophysics Data System (ADS)
Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.
2014-03-01
We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near infrared wavelengths for mapping the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements including a melanocytic nevus, and venous occlusion conditions were investigated and compared with other ratiometric spectral imaging approaches. Access to the broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.
Range image registration based on hash map and moth-flame optimization
NASA Astrophysics Data System (ADS)
Zou, Li; Ge, Baozhen; Chen, Lei
2018-03-01
Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.
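The hash-map idea amounts to memoizing the registration cost function on quantized transform parameters, so the EA never re-scores a pose it has already evaluated and wastes no time over-exploiting one region. A sketch of that caching layer (class name, quantization step, and toy cost are illustrative, not the paper's implementation):

```python
import numpy as np

class CachedCost:
    """Memoize an expensive registration cost on quantized parameters."""
    def __init__(self, cost_fn, step=1e-3):
        self.cost_fn, self.step = cost_fn, step
        self.cache = {}            # the hash map: quantized params -> cost
        self.evaluations = 0       # counts true (uncached) evaluations
    def __call__(self, params):
        key = tuple(np.round(np.asarray(params) / self.step).astype(int))
        if key not in self.cache:
            self.evaluations += 1
            self.cache[key] = self.cost_fn(params)
        return self.cache[key]

# Toy cost: squared distance to a 'true' 6-DoF pose
true_pose = np.array([0.1, -0.2, 0.05, 1.0, 2.0, 3.0])
cost = CachedCost(lambda p: float(np.sum((np.asarray(p) - true_pose) ** 2)))
a = cost([0.1, -0.2, 0.05, 1.0, 2.0, 3.0])
b = cost([0.1, -0.2, 0.05, 1.0, 2.0, 3.0])   # served from the hash map
```

Since a registration cost evaluation touches every point in the cloud, each cache hit saves a full pass over the data, which is where the reported speedup comes from.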
Range Image Flow using High-Order Polynomial Expansion
2013-09-01
included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more ... Journal of Computer Vision, vol. 92, no. 1, pp. 1-31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library
NASA Astrophysics Data System (ADS)
Rangarajan, Swathi; Chou, Li-Dek; Coughlan, Carolyn; Sharma, Giriraj; Wong, Brian J. F.; Ramalingam, Tirunelveli S.
2016-02-01
Fourier domain optical coherence tomography (FD-OCT) is a noninvasive imaging modality that has previously been used to image the human larynx. However, differences in anatomical geometry and the short imaging range of conventional OCT limit its application in a clinical setting. To address this issue, we have developed a gradient-index (GRIN) lens rod-based hand-held probe in conjunction with a long-imaging-range 200 kHz vertical-cavity surface-emitting laser (VCSEL) swept-source optical coherence tomography (SS-OCT) system for high-speed real-time imaging of the human larynx in an office setting. This hand-held probe is designed to have a long and dynamically tunable working distance to accommodate differences in the anatomical geometry of human test subjects. A nominal working distance (~6 cm) of the probe is selected to give a lateral resolution <100 μm within a depth of focus of 6.4 mm, which covers more than half of the 12 mm imaging range of the VCSEL laser. The maximum lateral scanning range of the probe at a 6 cm working distance is approximately 8.4 mm, and imaging an area of 8.5 mm by 8.5 mm is accomplished within a second. Using the above system, we demonstrate real-time cross-sectional OCT imaging of the larynx during phonation in vivo in humans and ex vivo in pig vocal folds.
Range data description based on multiple characteristics
NASA Technical Reports Server (NTRS)
Al-Hujazi, Ezzet; Sood, Arun
1988-01-01
An algorithm for describing range images based on mean curvature (H) and Gaussian curvature (K) is presented. Range images are unique in that they directly approximate the physical surfaces of a real-world 3-D scene. The curvature parameters are derived from the fundamental theorems of differential geometry and provide viewpoint-invariant pixel labels that can be used to characterize the scene. The signs of H and K can be used to classify each pixel into one of eight possible surface types. Because these parameters are sensitive to noise, the resulting HK-sign map does not directly identify surfaces in the range images and must be further processed. A region-growing algorithm based on modeling the scene points with a Markov random field (MRF) of variable neighborhood size and edge models is suggested. This approach allows the integration of information from multiple characteristics in an efficient way. The performance of the proposed algorithm on a number of synthetic and real range images is discussed.
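The eight-way classification above is a small lookup on the signs of H and K. A sketch, using the conventional surface-type names (the paper's exact naming may differ); note that (H = 0, K > 0) cannot occur because K ≤ H²:

```python
def hk_label(H, K, eps=1e-6):
    """Classify a pixel by the signs of mean curvature H and Gaussian curvature K."""
    sh = 0 if abs(H) < eps else (1 if H > 0 else -1)
    sk = 0 if abs(K) < eps else (1 if K > 0 else -1)
    table = {
        (-1,  1): "peak",
        (-1,  0): "ridge",
        (-1, -1): "saddle ridge",
        ( 0,  0): "flat",
        ( 0, -1): "minimal surface",
        ( 1,  1): "pit",
        ( 1,  0): "valley",
        ( 1, -1): "saddle valley",
    }
    return table.get((sh, sk), "impossible")  # (0, +1) is geometrically impossible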
Lower-upper-threshold correlation for underwater range-gated imaging self-adaptive enhancement.
Sun, Liang; Wang, Xinwei; Liu, Xiaoquan; Ren, Pengdao; Lei, Pingshun; He, Jun; Fan, Songtao; Zhou, Yan; Liu, Yuliang
2016-10-10
In underwater range-gated imaging (URGI), enhancement of low-brightness and low-contrast images is critical for human observation. Traditional histogram equalization over-enhances such images, causing details to be lost. To compress over-enhancement, a lower-upper-threshold correlation method is proposed for underwater range-gated imaging self-adaptive enhancement based on double-plateau histogram equalization. The lower threshold determines image details and compresses over-enhancement. It is correlated with the upper threshold. First, the upper threshold is updated by searching for the local maximum in real time, and then the lower threshold is calculated from the upper threshold and the number of nonzero units selected from a filtered histogram. With this method, the backgrounds of underwater images are constrained while details are enhanced. Finally, proof-of-concept experiments are performed. Peak signal-to-noise ratio, variance, contrast, and human visual properties are used to evaluate the objective quality of the global and region-of-interest images. The evaluation results demonstrate that the proposed method adaptively selects the proper upper and lower thresholds under different conditions. The proposed method contributes to URGI with effective image enhancement for human eyes.
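Double-plateau histogram equalization, the base method the abstract builds on, clips histogram bins above the upper threshold (limiting background over-enhancement) and raises nonzero bins below the lower threshold (preserving sparse detail levels), then equalizes via the clipped CDF. A minimal sketch with manually chosen thresholds; the paper's contribution is selecting them adaptively:

```python
def double_plateau_equalize(hist, lower, upper, levels=256):
    """Build a grey-level lookup table from a clipped histogram.

    hist: list of bin counts per input grey level.
    lower/upper: plateau thresholds; bins are clipped into [lower, upper]
    (empty bins stay empty).
    """
    clipped = [min(max(h, lower), upper) if h > 0 else 0 for h in hist]
    total = sum(clipped)
    cdf, acc = [], 0
    for c in clipped:
        acc += c
        cdf.append(acc)
    # map each input grey level to an output level via the normalized CDF
    return [round((levels - 1) * c / total) for c in cdf]

# a histogram dominated by a dark background bin
lut = double_plateau_equalize([1000, 2, 0, 5], lower=1, upper=50)
```

Because the dominant bin is clipped to the upper plateau, it no longer consumes nearly the whole output range, which is exactly the over-enhancement compression the abstract describes.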
Target recognition of ladar range images using slice image: comparison of four improved algorithms
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang
2017-07-01
Compared with traditional 3-D shape data, ladar range images possess properties of strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor to resolve this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. In order to improve resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in these four improved algorithms. In order to improve rotation invariance of the slice image, three new feature descriptors (feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied to the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in terms of the three invariances, recognition rate, and execution time. The final experiment results show that the improvements for these four algorithms reach the desired effect, that the three invariances of feature descriptors are not directly related to the final recognition performance of the recognition systems, and that these four improved recognition systems have different performances under different conditions.
Use of the variable gain settings on SPOT
Chavez, P.S.
1989-01-01
Often the brightness or digital number (DN) range of satellite image data is less than optimal and uses only a portion of the available values (0 to 255) because the range of reflectance values is small. Most imaging systems have been designed with only two gain settings, normal and high. The SPOT High Resolution Visible (HRV) imaging system has the capability to collect image data using one of eight different gain settings. With the proper procedure this allows the brightness or reflectance resolution, which is directly related to the range of DN values recorded, to be optimized for any given site as compared to using a single set of gain settings everywhere. -from Author
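The point of the abstract is that the gain should be chosen so the recorded DN range fills the available 0-255 quantization without saturating. A toy selection rule over discrete gain settings (the gain values and linear radiance model here are illustrative, not SPOT HRV's actual calibration):

```python
def best_gain(radiance_max, gains, full_scale=255):
    """Pick the largest gain that keeps the brightest expected radiance
    unsaturated, maximizing use of the DN range for a given site."""
    usable = [g for g in gains if g * radiance_max <= full_scale]
    return max(usable) if usable else min(gains)

# eight hypothetical gain settings, scene peak radiance of 50 units
chosen = best_gain(50, [1, 2, 3, 4, 5, 6, 7, 8])
```

With only "normal" and "high" settings the usable set is often far from optimal, which is the advantage of HRV's eight settings that the abstract highlights.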
Processing techniques for digital sonar images from GLORIA.
Chavez, P.S.
1986-01-01
Image processing techniques have been developed to handle data from one of the newest members of the remote sensing family of digital imaging systems. This paper discusses software to process data collected by the GLORIA (Geological Long Range Inclined Asdic) sonar imaging system, designed and built by the Institute of Oceanographic Sciences (IOS) in England, to correct for both geometric and radiometric distortions that exist in the original 'raw' data. Preprocessing algorithms that are GLORIA-specific include corrections for slant-range geometry, water column offset, aspect ratio distortion, changes in the ship's velocity, speckle noise, and shading problems caused by the power drop-off which occurs as a function of range.-from Author
An Accurate Co-registration Method for Airborne Repeat-pass InSAR
NASA Astrophysics Data System (ADS)
Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.
2017-10-01
Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR avoids the problems of long revisit times and low-resolution images. Because it can obtain abundant information flexibly, accurately, and quickly, airborne repeat-pass InSAR is valuable for deformation monitoring of shallow ground. To obtain precise ground elevation information and interferometric coherence for deformation monitoring from master and slave images, accurate co-registration must be ensured. Because of side-looking geometry, repeated observation paths, and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes. The mismatch is most pronounced in the long-slant-range parts of the master and slave images. To resolve the differing pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental results show that the proposed method achieves superior co-registration accuracy.
NASA Astrophysics Data System (ADS)
Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan
2014-03-01
We previously proposed a novel image-based quality assessment technique [1] to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images only based on a designated quality and disregard the other qualities. A range slider on top of the images was used for observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation and under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders agreed completely with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off-the-shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
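The slides describe the generic mechanism for reading the vk4 binary: seek to an offset table, read little-endian offsets, then read the height image as a flat block of fixed-width integers. The sketch below shows only that mechanism; the header layout, field order, and offsets are purely illustrative, since the real vk4 offsets are specific to Keyence's format and are not reproduced in the slides' text:

```python
import struct

def read_height_image(buf, offset_table_pos, width, height):
    """Read a width x height grid of uint32 height values from a binary buffer.

    Hypothetical layout: a single little-endian uint32 at `offset_table_pos`
    points at the start of the height data. A real vk4 reader would walk a
    larger offset table with separate entries for optical, laser, and height
    images.
    """
    (data_pos,) = struct.unpack_from("<I", buf, offset_table_pos)
    count = width * height
    values = struct.unpack_from("<%dI" % count, buf, data_pos)
    return [list(values[r * width:(r + 1) * width]) for r in range(height)]
```

The same pattern (unpack an offset, then unpack a typed block at that offset) covers the color and intensity images as well, just with different element sizes.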
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling.
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-07-14
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath.
Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling
Yu, Ze; Lin, Peng; Xiao, Peng; Kang, Lihong; Li, Chunsheng
2016-01-01
Compared with low-Earth orbit synthetic aperture radar (SAR), a geosynchronous (GEO) SAR can have a shorter revisit period and vaster coverage. However, relative motion between this SAR and targets is more complicated, which makes range cell migration (RCM) spatially variant along both range and azimuth. As a result, efficient and precise imaging becomes difficult. This paper analyzes and models spatial variance for GEO SAR in the time and frequency domains. A novel algorithm for GEO SAR imaging with a resolution of 2 m in both the ground cross-range and range directions is proposed, which is composed of five steps. The first is to eliminate linear azimuth variance through the first azimuth time scaling. The second is to achieve RCM correction and range compression. The third is to correct residual azimuth variance by the second azimuth time-frequency scaling. The fourth and final steps are to accomplish azimuth focusing and correct geometric distortion. The most important innovation of this algorithm is implementation of the time-frequency scaling to correct high-order azimuth variance. As demonstrated by simulation results, this algorithm can accomplish GEO SAR imaging with good and uniform imaging quality over the entire swath. PMID:27428974
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.
2011-03-01
We compare true 8 and 14 bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin is artificially bit-reduced during post-processing. In agreement with the results reported previously, we observe that in our system the real-world 8-bit image shows more artifacts than the image acquired by numerically truncating to 8 bits from the raw 14-bit image data, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is a reduced imaging dynamic range, which can manifest itself as an increase in image artifacts due to strong Fresnel reflection.
A new compact, cost-efficient concept for underwater range-gated imaging: the UTOFIA project
NASA Astrophysics Data System (ADS)
Mariani, Patrizio; Quincoces, Iñaki; Galparsoro, Ibon; Bald, Juan; Gabiña, Gorka; Visser, Andy; Jónasdóttir, Sigrun; Haugholt, Karl Henrik; Thorstensen, Jostein; Risholm, Petter; Thielemann, Jens
2017-04-01
Underwater Time Of Flight Image Acquisition system (UTOFIA) is a recently launched H2020 project (H2020 - 633098) to develop a compact and cost-effective underwater imaging system especially suited for observations in turbid environments. The UTOFIA project targets technology that can overcome the limitations created by scattering by introducing cost-efficient range-gated imaging for underwater applications. This technology relies on an image acquisition principle that can extend the imaging range of the camera 2-3 times with respect to other cameras. Moreover, the system simultaneously captures 3D information about the observed objects. Today range-gated imaging is not widely used, as it relies on specialised optical components making systems large and costly. Recent technology developments have made possible a significant (2-3 times) reduction in size, complexity and cost of underwater imaging systems, whilst addressing the scattering issues at the same time. By acquiring simultaneous 3D data, the system makes it possible to accurately measure the absolute size of marine life and its spatial relationship to its habitat, enhancing the precision of fish stock monitoring and ecology assessment, hence supporting proper management of marine resources. Additionally, the larger observed volume and the improved image quality make the system suitable for cost-effective underwater surveillance operations in e.g. fish farms and underwater infrastructures. The system can be integrated into existing ocean observatories for real-time acquisition and can greatly advance present efforts in developing species recognition algorithms, given the additional features provided, the improved image quality and the independent laser-based illumination source. First applications of the most recent prototype of the imaging system are presented, including inspection of underwater infrastructures and observations of marine life under different environmental conditions.
Ando, Koki; Yamaguchi, Mitsutaka; Yamamoto, Seiichi; Toshito, Toshiyuki; Kawachi, Naoki
2017-06-21
Imaging of the secondary electron bremsstrahlung x-rays emitted during proton irradiation is a possible method for measuring the proton beam distribution in a phantom. However, it is not clear whether the method can be used for range estimation of protons. For this purpose, we developed a low-energy x-ray camera and conducted imaging of the bremsstrahlung x-rays produced during irradiation of proton beams. We used a 20 mm × 20 mm × 1 mm finely grooved GAGG scintillator that was optically coupled to a one-inch square high quantum efficiency (HQE)-type position-sensitive photomultiplier tube to form an imaging detector. The imaging detector was encased in a 2 cm-thick tungsten container, and a pinhole collimator was attached to its camera head. After the performance of the camera was evaluated, secondary electron bremsstrahlung x-ray imaging was conducted during irradiation of the proton beams for three different proton energies, and the results were compared with Monte Carlo simulation as well as calculated values. The system spatial resolution and sensitivity of the developed x-ray camera with a 1.5 mm-diameter pinhole collimator were estimated to be 32 mm FWHM and 5.2 × 10⁻⁷ for ~35 keV x-ray photons at 100 cm from the collimator surface, respectively. We could image the proton beam tracks by measuring the secondary electron bremsstrahlung x-rays during irradiation of the proton beams, and the ranges for different proton energies could be estimated from the images. The measured ranges from the images matched the Monte Carlo simulation well, and were slightly smaller than the calculated values. We confirmed that imaging of the secondary electron bremsstrahlung x-rays emitted during proton irradiation with the developed x-ray camera has the potential to be a new tool for proton range estimation.
Imaging plates calibration to X-rays
NASA Astrophysics Data System (ADS)
Curcio, A.; Andreoli, P.; Cipriani, M.; Claps, G.; Consoli, F.; Cristofari, G.; De Angelis, R.; Giulietti, D.; Ingenito, F.; Pacella, D.
2016-05-01
The growing interest in Imaging Plates, due to their high sensitivity range and versatility, has led, in recent years, to detailed characterizations of their response function in different energy ranges and for different kinds of radiation/particles. A calibration of the Imaging Plates BAS-MS, BAS-SR, and BAS-TR has been performed at the ENEA-Frascati labs by exploiting the X-ray fluorescence of different targets (Ca, Cu, Pb, Mo, I, Ta) and the radioactivity of a BaCs source, in order to cover the X-ray range from a few keV to 80 keV.
High-dynamic-range imaging for cloud segmentation
NASA Astrophysics Data System (ADS)
Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan
2018-04-01
Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg, an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first to use HDR radiance maps for cloud segmentation and achieves very good results.
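A generic multi-exposure fusion rule, in the spirit of the HDR step described above: each exposure contributes to a pixel in proportion to how well exposed it is there (close to mid-range, neither clipped dark nor saturated). This is a textbook fusion sketch, not HDRCloudSeg itself:

```python
def fuse_exposures(images, levels=255.0):
    """Fuse a stack of equally-sized greyscale exposures (lists of rows).

    Weight peaks at mid-grey and falls to ~0 at the clipping extremes, so
    overexposed circumsolar pixels and underexposed horizon pixels are
    dominated by whichever exposure captured them properly.
    """
    h, w = len(images[0]), len(images[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for img in images:
                v = img[y][x] / levels
                weight = 1e-6 + v * (1.0 - v)  # well-exposedness weight
                num += weight * v
                den += weight
            out[y][x] = num / den * levels
    return out

# one underexposed, one well-exposed, one overexposed sample of the same pixel
fused = fuse_exposures([[[10.0]], [[128.0]], [[250.0]]])
```

The mid-exposed sample carries most of the weight, so the fused value stays near the well-exposed reading rather than averaging in the clipped ones.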
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
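The processing pipeline in the abstract (difference against the pre-illumination frame to isolate the laser spot, then measure the spot's disparity from a reference point) can be sketched as follows. The thresholds and the triangulation constant are stand-ins; a real system calibrates them from the laser/camera offset:

```python
def spot_centroid(before, after, threshold=30):
    """Isolate the laser spot by frame differencing and return its centroid.

    before/after: greyscale frames as lists of rows, captured without and
    with laser illumination. Pixels whose brightness rose by more than
    `threshold` are attributed to the spot.
    """
    pts = [(x, y)
           for y, row in enumerate(after)
           for x, v in enumerate(row)
           if v - before[y][x] > threshold]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def range_from_disparity(centroid, reference, k=1000.0):
    """Toy triangulation: range inversely proportional to spot-reference disparity."""
    d = ((centroid[0] - reference[0]) ** 2
         + (centroid[1] - reference[1]) ** 2) ** 0.5
    return k / d

before = [[0] * 5 for _ in range(5)]
after = [row[:] for row in before]
after[2][3] = after[2][4] = 200   # bright laser spot appears in the lit frame
c = spot_centroid(before, after)
```

Common pixels cancel in the difference, which is exactly how the patent's description eliminates the background before locating the spot.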
Zhou, Rui; Sun, Jinping; Hu, Yuxin; Qi, Yaolong
2018-01-31
Synthetic aperture radar (SAR) equipped on the hypersonic air vehicle in near space has many advantages over the conventional airborne SAR. However, its high-speed maneuvering characteristics with curved trajectory result in serious range migration, and exacerbate the contradiction between the high resolution and wide swath. To solve this problem, this paper establishes the imaging geometrical model matched with the flight trajectory of the hypersonic platform and the multichannel azimuth sampling model based on the displaced phase center antenna (DPCA) technology. Furthermore, based on the multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using discrete Fourier transform is proposed to obtain the azimuth uniform sampling data. Due to the high complexity of the slant range model, it is difficult to deduce the processing algorithm for SAR imaging. Thus, an approximate range model is derived based on the minimax criterion, and the optimal second-order approximate coefficients of cosine function are obtained using the two-population coevolutionary algorithm. On this basis, aiming at the problem that the traditional Omega-K algorithm cannot compensate the residual phase with the difficulty of Stolt mapping along the range frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging, and presents a method of range division to achieve wide swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm.
Zhou, Rui; Hu, Yuxin; Qi, Yaolong
2018-01-01
Synthetic aperture radar (SAR) equipped on the hypersonic air vehicle in near space has many advantages over the conventional airborne SAR. However, its high-speed maneuvering characteristics with curved trajectory result in serious range migration, and exacerbate the contradiction between the high resolution and wide swath. To solve this problem, this paper establishes the imaging geometrical model matched with the flight trajectory of the hypersonic platform and the multichannel azimuth sampling model based on the displaced phase center antenna (DPCA) technology. Furthermore, based on the multichannel signal reconstruction theory, a more efficient spectrum reconstruction model using discrete Fourier transform is proposed to obtain the azimuth uniform sampling data. Due to the high complexity of the slant range model, it is difficult to deduce the processing algorithm for SAR imaging. Thus, an approximate range model is derived based on the minimax criterion, and the optimal second-order approximate coefficients of cosine function are obtained using the two-population coevolutionary algorithm. On this basis, aiming at the problem that the traditional Omega-K algorithm cannot compensate the residual phase with the difficulty of Stolt mapping along the range frequency axis, this paper proposes an Exact Transfer Function (ETF) algorithm for SAR imaging, and presents a method of range division to achieve wide swath imaging. Simulation results verify the effectiveness of the ETF imaging algorithm. PMID:29385059
High resolution axicon-based endoscopic FD OCT imaging with a large depth range
NASA Astrophysics Data System (ADS)
Lee, Kye-Sung; Hurley, William; Deegan, John; Dean, Scott; Rolland, Jannick P.
2010-02-01
Endoscopic imaging in tubular structures, such as the tracheobronchial tree, could benefit from imaging optics with an extended depth of focus (DOF). Such optics could accommodate varying sizes of tubular structures across patients and along the tree within a single patient. In this paper, we demonstrate an extended DOF without sacrificing resolution, showing rotational images in biological tubular samples with 2.5 μm axial resolution, 10 μm lateral resolution, and >4 mm depth range using a custom-designed probe.
NASA Astrophysics Data System (ADS)
Chen, Hao; Zhang, Xinggan; Bai, Yechao; Tang, Lan
2017-01-01
In inverse synthetic aperture radar (ISAR) imaging, migration through resolution cells (MTRCs) occurs when the rotation angle of the moving target is large, degrading image resolution. To solve this problem, an ISAR imaging method based on segmented preprocessing is proposed. In this method, the echoes of a large rotating target are divided into several small segments, and every segment can generate a low-resolution image without MTRCs. Then, each low-resolution image is rotated back to the original position. After image registration and phase compensation, a high-resolution image can be obtained. Simulation and real experiments show that the proposed algorithm can deal with radar systems with different range and cross-range resolutions and significantly compensate for the MTRCs.
Improved proton CT imaging using a bismuth germanium oxide scintillator.
Tanaka, Sodai; Nishio, Teiji; Tsuneda, Masato; Matsushita, Keiichiro; Kabuki, Shigeto; Uesaka, Mitsuru
2018-02-02
Range uncertainty is among the most formidable challenges associated with the treatment planning of proton therapy. Proton imaging, which includes proton radiography and proton computed tomography (pCT), is a useful verification tool. We have developed a pCT detection system that uses a thick bismuth germanium oxide (BGO) scintillator and a CCD camera. The current method is based on a previous detection system that used a plastic scintillator, and implements improved image processing techniques. In the new system, the scintillation light intensity is integrated along the proton beam path by the BGO scintillator, and acquired as a two-dimensional distribution with the CCD camera. The range of a penetrating proton is derived from the integrated light intensity using a light-to-range conversion table, and a pCT image can be reconstructed. The proton range in the BGO scintillator is shorter than in the plastic scintillator, so errors due to extended proton ranges can be reduced. To demonstrate the feasibility of the pCT system, an experiment was performed using a 70 MeV proton beam created by the AVF930 cyclotron at the National Institute of Radiological Sciences. The accuracy of the light-to-range conversion table, which is susceptible to errors due to its spatial dependence, was investigated, and the errors in the acquired pixel values were less than 0.5 mm. Images of various materials were acquired, and the pixel-value errors were within 3.1%, which represents an improvement over previous results. We also obtained a pCT image of an edible chicken piece, the first of its kind for a biological material, and internal structures approximately one millimeter in size were clearly observed. This pCT imaging system is fast and simple, and based on these findings, we anticipate that we can acquire 200 MeV pCT images using the BGO scintillator system.
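The light-to-range conversion table at the heart of the pCT system maps integrated scintillation light to residual proton range. Table lookup with linear interpolation, sketched below, is the generic mechanism; the actual table values are measured per detector and are not reproduced here:

```python
from bisect import bisect_left

def light_to_range(light, table):
    """Interpolate residual proton range from integrated light intensity.

    table: list of (light_intensity, range_mm) pairs sorted by intensity,
    e.g. a measured light-to-range conversion table. Values outside the
    table are clamped to its endpoints.
    """
    xs = [t[0] for t in table]
    i = bisect_left(xs, light)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (light - x0) / (x1 - x0)

# hypothetical three-point table: (integrated light, residual range in mm)
demo_table = [(0.0, 0.0), (100.0, 20.0), (200.0, 35.0)]
```

Applying this per CCD pixel converts the two-dimensional light-intensity image into the range image from which the pCT slice is reconstructed.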
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, P; Schreibmann, E; Fox, T
2014-06-15
Purpose: Severe CT artifacts can impair our ability to accurately calculate proton range, thereby resulting in a clinically unacceptable treatment plan. In this work, we investigated a novel CT artifact correction method based on a coregistered MRI and investigated its ability to estimate CT HU and proton range in the presence of severe CT artifacts. Methods: The proposed method corrects corrupted CT data using a coregistered MRI to guide the mapping of CT values from a nearby artifact-free region. First, patient MRI and CT images were registered using 3D deformable image registration software based on B-splines and mutual information. The CT slice with severe artifacts was selected as well as a nearby slice free of artifacts (e.g. 1 cm away from the artifact). The two sets of paired MRI and CT images at different slice locations were further registered by applying 2D deformable image registration. Based on the artifact-free paired MRI and CT images, a comprehensive geospatial analysis was performed to predict the correct CT HU of the CT image with severe artifacts. For a proof of concept, a known artifact was introduced that changed the ground truth CT HU value by up to 30% and caused up to 5 cm of error in proton range. The ability of the proposed method to recover the ground truth was quantified using a selected head and neck case. Results: A significant improvement in image quality was observed visually. Our proof-of-concept study showed that 90% of the area that had 30% errors in CT HU was corrected to within 3% of its ground truth value. Furthermore, the maximum proton range error of up to 5 cm was reduced to a 4 mm error. Conclusion: The MRI-based CT artifact correction method can improve CT image quality and proton range calculation for patients with severe CT artifacts.
Improved proton CT imaging using a bismuth germanium oxide scintillator
NASA Astrophysics Data System (ADS)
Tanaka, Sodai; Nishio, Teiji; Tsuneda, Masato; Matsushita, Keiichiro; Kabuki, Shigeto; Uesaka, Mitsuru
2018-02-01
Range uncertainty is among the most formidable challenges associated with the treatment planning of proton therapy. Proton imaging, which includes proton radiography and proton computed tomography (pCT), is a useful verification tool. We have developed a pCT detection system that uses a thick bismuth germanium oxide (BGO) scintillator and a CCD camera. The current method is based on a previous detection system that used a plastic scintillator, and implements improved image processing techniques. In the new system, the scintillation light intensity is integrated along the proton beam path by the BGO scintillator, and acquired as a two-dimensional distribution with the CCD camera. The range of a penetrating proton is derived from the integrated light intensity using a light-to-range conversion table, and a pCT image can be reconstructed. The proton range in the BGO scintillator is shorter than in the plastic scintillator, so errors due to extended proton ranges can be reduced. To demonstrate the feasibility of the pCT system, an experiment was performed using a 70 MeV proton beam created by the AVF930 cyclotron at the National Institute of Radiological Sciences. The accuracy of the light-to-range conversion table, which is susceptible to errors due to its spatial dependence, was investigated, and the errors in the acquired pixel values were less than 0.5 mm. Images of various materials were acquired, and the pixel-value errors were within 3.1%, which represents an improvement over previous results. We also obtained a pCT image of an edible chicken piece, the first of its kind for a biological material, and internal structures approximately one millimeter in size were clearly observed. This pCT imaging system is fast and simple, and based on these findings, we anticipate that we can acquire 200 MeV pCT images using the BGO scintillator system.
Long Range View of Melas Chasma
2002-12-07
This image is a mosaic of day and night infrared images of Melas Chasma taken by NASA's Mars Odyssey spacecraft. The daytime temperatures range from approximately -35 degrees Celsius (-31 degrees Fahrenheit) to -5 degrees Celsius (23 degrees Fahrenheit).
Electrically optofluidic zoom system with a large zoom range and high-resolution image.
Li, Lei; Yuan, Rong-Ying; Wang, Jin-Hui; Wang, Qiong-Hua
2017-09-18
We report an electrically controlled optofluidic zoom system that achieves a large continuous zoom change and high-resolution imaging. The zoom system consists of an optofluidic zoom objective and a switchable light path controlled by two liquid optical shutters. The proposed system achieves a large tunable focal length range from 36 mm to 92 mm, and within this tuning range it can correct aberrations dynamically, so the image resolution remains high. Owing to the large zoom range, the proposed imaging system incorporates both a camera configuration and a telescope configuration in one system. In addition, the whole system is electrically controlled by three electrowetting liquid lenses and two liquid optical shutters; therefore, the proposed system is very compact and free of mechanical moving parts. The proposed zoom system has the potential to take the place of conventional zoom systems.
A Framework of Hyperspectral Image Compression using Neural Networks
Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...
2015-01-01
Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about the underlying objects in an image by sampling a large range of the electromagnetic spectrum for each pixel. However, since the same scene is imaged multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.
Yap, Timothy E; Archer, Timothy J; Gobbe, Marine; Reinstein, Dan Z
2016-02-01
To compare corneal thickness measurements between three imaging systems. In this retrospective study of 81 virgin and 58 post-laser refractive surgery corneas, central and minimum corneal thickness were measured using optical coherence tomography (OCT), very high-frequency digital ultrasound (VHF digital ultrasound), and a Scheimpflug imaging system. Agreement between methods was analyzed using mean differences (bias) (OCT - VHF digital ultrasound, OCT - Scheimpflug, VHF digital ultrasound - Scheimpflug) and Bland-Altman analysis with 95% limits of agreement (LoA). Virgin cornea mean central corneal thickness was 508.3 ± 33.2 µm (range: 434 to 588 µm) for OCT, 512.7 ± 32.2 µm (range: 440 to 587 µm) for VHF digital ultrasound, and 530.2 ± 32.6 µm (range: 463 to 612 µm) for Scheimpflug imaging. OCT and VHF digital ultrasound showed the closest agreement with a bias of -4.37 µm, 95% LoA ±12.6 µm. Least agreement was between OCT and Scheimpflug imaging with a bias of -21.9 µm, 95% LoA ±20.7 µm. Bias between VHF digital ultrasound and Scheimpflug imaging was -17.5 µm, 95% LoA ±19.0 µm. In post-laser refractive surgery corneas, mean central corneal thickness was 417.9 ± 47.1 µm (range: 342 to 557 µm) for OCT, 426.3 ± 47.1 µm (range: 363 to 563 µm) for VHF digital ultrasound, and 437.0 ± 48.5 µm (range: 359 to 571 µm) for Scheimpflug imaging. Closest agreement was between OCT and VHF digital ultrasound with a bias of -8.45 µm, 95% LoA ±13.2 µm. Least agreement was between OCT and Scheimpflug imaging with a bias of -19.2 µm, 95% LoA ±19.2 µm. Bias between VHF digital ultrasound and Scheimpflug imaging was -10.7 µm, 95% LoA ±20.0 µm. No relationship was observed between difference in central corneal thickness measurements and mean central corneal thickness. Results were similar for minimum corneal thickness. Central and minimum corneal thickness was measured thinnest by OCT and thickest by Scheimpflug imaging in both groups. 
A clinically significant bias existed between Scheimpflug imaging and the other two modalities. Copyright 2016, SLACK Incorporated.
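The bias and 95% limits of agreement reported above follow the standard Bland-Altman computation, which can be sketched as follows. The sample readings are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two
    measurement methods: LoA = 1.96 * SD of the paired differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, loa

# Illustrative central pachymetry readings (um) from two hypothetical devices
oct_um = [508, 512, 498, 520, 505]
vhf_um = [512, 516, 503, 524, 510]
bias, loa = bland_altman(oct_um, vhf_um)
```

A negative bias here means the first method reads thinner on average, matching the pattern in the abstract where OCT measured thinnest.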
Rosman, David A; Duszak, Richard; Wang, Wenyi; Hughes, Danny R; Rosenkrantz, Andrew B
2018-02-01
The objective of our study was to use a new modality and body region categorization system to assess changing utilization of noninvasive diagnostic imaging in the Medicare fee-for-service population over a recent 20-year period (1994-2013). All Medicare Part B Physician Fee Schedule services billed between 1994 and 2013 were identified using Physician/Supplier Procedure Summary master files. Billed codes for diagnostic imaging were classified using the Neiman Imaging Types of Service (NITOS) coding system by both modality and body region. Utilization rates per 1000 beneficiaries were calculated for families of services. Among all diagnostic imaging modalities, growth was greatest for MRI (+312%) and CT (+151%) and was lower for ultrasound, nuclear medicine, and radiography and fluoroscopy (range, +1% to +31%). Among body regions, service growth was greatest for brain (+126%) and spine (+74%) imaging; showed milder growth (range, +18% to +67%) for imaging of the head and neck, breast, abdomen and pelvis, and extremity; and showed slight declines (range, -2% to -7%) for cardiac and chest imaging overall. The following specific imaging service families showed massive (> +100%) growth: cardiac CT, cardiac MRI, and breast MRI. NITOS categorization permits identification of temporal shifts in noninvasive diagnostic imaging by specific modality- and region-focused families, providing a granular understanding and reproducible analysis of global changes in imaging overall. Service family-level perspectives may help inform ongoing policy efforts to optimize imaging utilization and appropriateness.
Sensor Management for Tactical Surveillance Operations
2007-11-01
active and passive sonar for submarine and torpedo detection, and mine avoidance [range, bearing]; range 1.8 km to 55 km; active or passive. AN/SLQ-501...finding (DF) unit [bearing, classification]; maximum range 1100 km; passive. Cameras (daylight/night-vision, video and still): record optical and infrared still images or motion video of events for near-real-time assessment or long-term analysis and archiving; range is limited by the image resolution
Thermal Texture Generation and 3D Model Reconstruction Using SfM and GAN
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Mizginov, V. A.
2018-05-01
Realistic 3D models with textures representing the thermal emission of an object are widely used in fields such as dynamic scene analysis, autonomous driving, and video surveillance. Structure from Motion (SfM) methods provide a robust approach for the generation of textured 3D models in the visible range. Still, automatic generation of 3D models from infrared imagery is challenging due to the absence of feature points and low sensor resolution. Recent advances in Generative Adversarial Networks (GAN) have shown that they can perform complex image-to-image transformations such as day-to-night conversion and generation of imagery in a different spectral range. In this paper, we propose a novel method for generating realistic 3D models with thermal textures using the SfM pipeline and a GAN. The proposed method uses visible-range images as input. The images are processed in two ways. First, they are used for point matching and dense point cloud generation. Second, the images are fed into a GAN that performs the transformation from the visible range to the thermal range. We evaluate the proposed method using real infrared imagery captured with a FLIR ONE PRO camera. We generated a dataset with 2000 pairs of real images captured in the thermal and visible ranges. The dataset is used to train the GAN and to generate 3D models using SfM. Evaluation of the generated 3D models and infrared textures showed that they are similar to the ground-truth model in both thermal emissivity and geometrical shape.
Tie Points Extraction for SAR Images Based on Differential Constraints
NASA Astrophysics Data System (ADS)
Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.
2018-04-01
Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then the corresponding pyramid layers are matched from top to bottom. In this process, similarity is measured by the normalized cross correlation (NCC) algorithm, computed over a rectangular window with its long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which imposes strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio, and accuracy of the proposed method.
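The NCC similarity measure used in the matching step can be sketched as below. The image, template size, and window placement are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def ncc(window, patch):
    """Normalized cross correlation between a template window and a
    candidate patch of the same shape (both 2D arrays)."""
    w = window - window.mean()
    p = patch - patch.mean()
    denom = np.sqrt((w * w).sum() * (p * p).sum())
    return float((w * p).sum() / denom) if denom > 0 else 0.0

# A rectangular window with its long side along azimuth (rows), as described
rng = np.random.default_rng(0)
img = rng.random((64, 32))                 # rows = azimuth, cols = range
tpl = img[10:42, 8:20]                     # 32 (azimuth) x 12 (range) template
score_same = ncc(tpl, img[10:42, 8:20])    # matching location
score_off = ncc(tpl, img[0:32, 0:12])      # shifted, non-matching location
```

At the true location the score is 1.0 for identical content; elsewhere, on uncorrelated data, it falls toward zero, which is what makes thresholding candidate matches practical.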
Uncomfortable images in art and nature.
Fernandez, Dominic; Wilkins, Arnold J
2008-01-01
The ratings of discomfort from a wide variety of images can be predicted from the energy at different spatial scales in the image, as measured by the Fourier amplitude spectrum of the luminance. Whereas comfortable images show the regression of Fourier amplitude against spatial frequency common in natural scenes, uncomfortable images show a regression with disproportionately greater amplitude at spatial frequencies within two octaves of 3 cycles deg(-1). In six studies, the amplitude in this spatial frequency range relative to that elsewhere in the spectrum explains variance in judgments of discomfort from art, from images constructed from filtered noise, and from art in which the phase or amplitude spectra have been altered. Striped patterns with spatial frequency within the above range are known to be uncomfortable and capable of provoking headaches and seizures in susceptible persons. The present findings show for the first time that, even in more complex images, the energy in this spatial-frequency range is associated with aversion. We propose a simple measurement that can predict aversion to those works of art that have reached the national media because of negative public reaction.
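A minimal sketch of the kind of measurement described, the fraction of Fourier amplitude falling within about two octaves of 3 cycles/deg, assuming a square image and a known pixels-per-degree scale (the exact band definition and normalization in the paper may differ):

```python
import numpy as np

def band_amplitude_ratio(image, px_per_deg, f0=3.0, octaves=2.0):
    """Fraction of total Fourier amplitude within +/- `octaves` of f0
    cycles/deg; a rough stand-in for the discomfort predictor above."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    n = image.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / px_per_deg))  # cycles/deg
    fx, fy = np.meshgrid(f, f)
    radial = np.hypot(fx, fy)
    band = (radial >= f0 / 2**octaves) & (radial <= f0 * 2**octaves)
    return amp[band].sum() / amp.sum()

# A 3 cycles/deg grating concentrates essentially all amplitude in the band
n, ppd = 128, 32                      # 128 px at 32 px/deg = 4 deg field
x = np.arange(n) / ppd                # position in degrees
grating = np.sin(2 * np.pi * 3.0 * x)[None, :] * np.ones((n, 1))
ratio = band_amplitude_ratio(grating, ppd)
```

Striped patterns like this grating sit squarely in the aversive band, consistent with the finding that such patterns can provoke discomfort.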
NASA Astrophysics Data System (ADS)
Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.
2012-01-01
Frequent monitoring of the gingival sulcus provides valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a 3D, high-resolution, high-speed imaging modality, is able to provide information on pocket depth, gum contour, gum texture, and gum recession simultaneously. A handheld forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing using a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and gum area was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.
Ladar imaging detection of salient map based on PWVD and Rényi entropy
NASA Astrophysics Data System (ADS)
Xu, Yuannan; Zhao, Yuan; Deng, Rong; Dong, Yanbing
2013-10-01
Spatial-frequency information of a given image can be extracted by associating the grey-level spatial data with one of the well-known spatial/spatial-frequency distributions. The Wigner-Ville distribution (WVD) has the useful property that images can be represented jointly in the spatial and spatial-frequency domains. For ladar intensity and range images, the statistical properties of the Rényi entropy are studied through the pseudo Wigner-Ville distribution (PWVD) using one- or two-dimensional windows. We also analyze how the statistical properties of the Rényi entropy change in ladar intensity and range images when man-made objects appear. On this foundation, a novel method for generating a saliency map based on the PWVD and Rényi entropy is proposed. Target detection is then completed by segmenting the saliency map with a simple, convenient threshold method. For ladar intensity and range images, experimental results show the proposed method can effectively detect military vehicles against complex terrestrial backgrounds with a low false alarm rate.
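The Rényi entropy underlying the saliency measure can be sketched for a discrete distribution. The order-3 choice (common for Wigner-type distributions) and the example distributions are illustrative assumptions:

```python
import numpy as np

def renyi_entropy(p, alpha=3.0):
    """Renyi entropy of order alpha for a discrete distribution p (in bits).
    Order 3 is a common choice for Wigner-type time-frequency distributions."""
    p = np.asarray(p, float)
    p = p[p > 0]
    p = p / p.sum()
    if alpha == 1.0:
        return float(-(p * np.log2(p)).sum())  # Shannon limit
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

# Entropy drops when the distribution becomes concentrated, e.g. when a
# man-made object dominates a local spatial/spatial-frequency window
uniform = np.full(16, 1 / 16)
peaked = np.array([0.85] + [0.01] * 15)
h_uniform = renyi_entropy(uniform)   # log2(16) = 4 bits
h_peaked = renyi_entropy(peaked)     # much smaller
```

That entropy drop is the cue exploited here: windows whose PWVD entropy deviates from the background statistics are flagged as salient.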
Optical coherence tomography imaging based on non-harmonic analysis
NASA Astrophysics Data System (ADS)
Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya
2009-11-01
A new processing technique called Non-Harmonic Analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on the FFT, whose output depends on the window function and frame length. Axial resolution is inversely proportional to the frame length of the FFT, which is limited by the sweep range of the swept source in SS-OCT or by the pixel count of the CCD in SD-OCT. However, the NHA process is intrinsically free from these trade-offs: NHA can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging, and compared with OCT images based on the FFT. To validate the benefit of NHA in OCT, we carried out NHA-based OCT imaging on three different samples: onion skin, human skin, and pig eye. The results show that the NHA process can achieve a practical image resolution equivalent to that of a 100 nm swept range while using less than half that wavelength range.
NASA Astrophysics Data System (ADS)
Coughlan, Carolyn A.; Chou, Li-Dek; Jing, Joseph C.; Chen, Jason J.; Rangarajan, Swathi; Chang, Theodore H.; Sharma, Giriraj K.; Cho, Kyoungrai; Lee, Donghoon; Goddard, Julie A.; Chen, Zhongping; Wong, Brian J. F.
2016-03-01
Diagnosis and treatment of vocal fold lesions has been a long-evolving science for the otolaryngologist. Contemporary practice requires biopsy of a glottal lesion in the operating room under general anesthesia for diagnosis. Current in-office technology is limited to visualizing the surface of the vocal folds with fiber-optic or rigid endoscopy and using stroboscopic or high-speed video to infer information about submucosal processes. Previous efforts using optical coherence tomography (OCT) have been limited by small working distances and imaging ranges. Here we report the first full field, high-speed, and long-range OCT images of awake patients’ vocal folds as well as cross-sectional video and Doppler analysis of their vocal fold motions during phonation. These vertical-cavity surface-emitting laser source (VCSEL) OCT images offer depth resolved, high-resolution, high-speed, and panoramic images of both the true and false vocal folds. This technology has the potential to revolutionize in-office imaging of the larynx.
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable to acquire illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
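A minimal sketch of why a logarithmic response yields near illumination-invariant images: a global radiance scaling becomes a constant additive offset in the sensor output. The response model and the constants a, b are assumptions for illustration, not the evaluated sensor's actual characteristics:

```python
import numpy as np

# Idealized logarithmic sensor: intensity I = a*log10(E) + b for radiance E.
# a, b are assumed sensor constants.
a, b = 20.0, 5.0
radiance = np.array([1e-2, 1e0, 1e2, 1e4])     # high dynamic range scene
response = a * np.log10(radiance) + b
brighter = a * np.log10(10.0 * radiance) + b   # same scene, 10x brighter
offset = brighter - response                   # constant: a * log10(10) = a
```

Because the illumination change appears only as a uniform offset, local image structure (gradients, feature descriptors) is largely preserved across lighting conditions, which is what the feature-correspondence evaluation above probes.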
Coherent X-ray imaging across length scales
NASA Astrophysics Data System (ADS)
Munro, P. R. T.
2017-04-01
Contemporary X-ray imaging techniques span a uniquely wide range of spatial resolutions, covering five orders of magnitude. The evolution of X-ray sources, from the earliest laboratory sources through to highly brilliant and coherent free-electron lasers, has been key to the development of these imaging techniques. This review surveys the predominant coherent X-ray imaging techniques with fields of view ranging from that of entire biological organs, down to that of biomolecules. We introduce the fundamental principles necessary to understand the image formation for each technique as well as briefly reviewing coherent X-ray source development. We present example images acquired using a selection of techniques, by leaders in the field.
Non-Cooperative Target Imaging and Parameter Estimation with Narrowband Radar Echoes.
Yeh, Chun-mao; Zhou, Wei; Lu, Yao-bing; Yang, Jian
2016-01-20
This study focuses on the rotating target imaging and parameter estimation with narrowband radar echoes, which is essential for radar target recognition. First, a two-dimensional (2D) imaging model with narrowband echoes is established in this paper, and two images of the target are formed on the velocity-acceleration plane at two neighboring coherent processing intervals (CPIs). Then, the rotating velocity (RV) is proposed to be estimated by utilizing the relationship between the positions of the scattering centers among two images. Finally, the target image is rescaled to the range-cross-range plane with the estimated rotational parameter. The validity of the proposed approach is confirmed using numerical simulations.
Scene segmentation of natural images using texture measures and back-propagation
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Phatak, Anil; Chatterji, Gano
1993-01-01
Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care, since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features that result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feedforward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network results in a superior segmentation compared with a single scalar feature. It is also shown that scalar features which are not useful individually result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
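The spatial gray-level dependence (co-occurrence) matrix and two common scalar features derived from it can be sketched as follows. The offset, quantization level count, and feature choice are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Spatial gray-level dependence (co-occurrence) matrix for one offset,
    normalized to a joint probability distribution."""
    q = (gray * levels / (gray.max() + 1e-9)).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Two scalar features often used for segmentation: energy and contrast."""
    i, j = np.indices(p.shape)
    return {"energy": float((p ** 2).sum()),
            "contrast": float(((i - j) ** 2 * p).sum())}

rng = np.random.default_rng(1)
smooth = np.ones((32, 32)) * 0.5       # homogeneous region
noisy = rng.random((32, 32))           # textured region
f_smooth = texture_features(glcm(smooth))
f_noisy = texture_features(glcm(noisy))
```

Homogeneous regions yield high energy and zero contrast, while textured regions do the opposite; thresholding histograms of such scalars per region is the segmentation cue the paragraph describes.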
Elschot, Mattijs; Nijsen, Johannes F W; Lam, Marnix G E H; Smits, Maarten L J; Prince, Jip F; Viergever, Max A; van den Bosch, Maurice A A J; Zonnenberg, Bernard A; de Jong, Hugo W A M
2014-10-01
Radiation pneumonitis is a rare but serious complication of radioembolic therapy of liver tumours. Estimation of the mean absorbed dose to the lungs based on pretreatment diagnostic (99m)Tc-macroaggregated albumin ((99m)Tc-MAA) imaging should prevent this, with administered activities adjusted accordingly. The accuracy of (99m)Tc-MAA-based lung absorbed dose estimates was evaluated and compared to absorbed dose estimates based on pretreatment diagnostic (166)Ho-microsphere imaging and to the actual lung absorbed doses after (166)Ho radioembolization. This prospective clinical study included 14 patients with chemorefractory, unresectable liver metastases treated with (166)Ho radioembolization. (99m)Tc-MAA-based and (166)Ho-microsphere-based estimation of lung absorbed doses was performed on pretreatment diagnostic planar scintigraphic and SPECT/CT images. The clinical analysis was preceded by an anthropomorphic torso phantom study with simulated lung shunt fractions of 0 to 30 % to determine the accuracy of the image-based lung absorbed dose estimates after (166)Ho radioembolization. In the phantom study, (166)Ho SPECT/CT-based lung absorbed dose estimates were more accurate (absolute error range 0.1 to -4.4 Gy) than (166)Ho planar scintigraphy-based lung absorbed dose estimates (absolute error range 9.5 to 12.1 Gy). Clinically, the actual median lung absorbed dose was 0.02 Gy (range 0.0 to 0.7 Gy) based on posttreatment (166)Ho-microsphere SPECT/CT imaging. Lung absorbed doses estimated on the basis of pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging (median 0.02 Gy, range 0.0 to 0.4 Gy) were significantly better predictors of the actual lung absorbed doses than doses estimated on the basis of (166)Ho-microsphere planar scintigraphy (median 10.4 Gy, range 4.0 to 17.3 Gy; p < 0.001), (99m)Tc-MAA SPECT/CT imaging (median 2.5 Gy, range 1.2 to 12.3 Gy; p < 0.001), and (99m)Tc-MAA planar scintigraphy (median 5.5 Gy, range 2.3 to 18.2 Gy; p < 0.001). 
In clinical practice, lung absorbed doses are significantly overestimated by pretreatment diagnostic (99m)Tc-MAA imaging. Pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging accurately predicts lung absorbed doses after (166)Ho radioembolization.
Forward and backward tone mapping of high dynamic range images based on subband architecture
NASA Astrophysics Data System (ADS)
Bouzidi, Ines; Ouled Zaid, Azza
2015-01-01
This paper presents a novel High Dynamic Range (HDR) tone mapping (TM) system based on a sub-band architecture. Standard wavelet filters of the Daubechies, Symlet, Coiflet, and biorthogonal families were used to evaluate the proposed system's performance in terms of Low Dynamic Range (LDR) image quality and reconstructed HDR image fidelity. During the TM stage, the HDR image is first decomposed into sub-bands using a symmetrical analysis-synthesis filter bank, and the transform coefficients are then rescaled using a predefined gain map. The inverse tone mapping (iTM) stage is straightforward: the LDR image passes through the same sub-band architecture, but instead of reducing the dynamic range, the LDR content is boosted to an HDR representation. Moreover, our TM scheme includes an optimization module that selects the gain map components minimizing the reconstruction error, resulting in high-fidelity HDR content. Comparisons with recent state-of-the-art methods have shown that our method provides better results in terms of visual quality and HDR reconstruction fidelity under both objective and subjective evaluations.
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot capture all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in small, fixed increments. Next, grey-consistency fusion and logarithmic demodulation are applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise-suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the iteration process, the reconstruction result obtained at one x-ray energy is used as the initial condition for the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide higher reconstruction quality than the fusion reconstruction method.
Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita
2017-11-27
We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automatize the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts in two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which proof that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
Penrose high-dynamic-range imaging
NASA Astrophysics Data System (ADS)
Li, Jia; Bai, Chenyan; Lin, Zhouchen; Yu, Jian
2016-05-01
High-dynamic-range (HDR) imaging is becoming increasingly popular and widespread. The most common multishot HDR approach, based on multiple low-dynamic-range images captured with different exposures, has difficulties in handling camera and object movements. The spatially varying exposures (SVE) technology provides a solution to overcome this limitation by obtaining multiple exposures of the scene in only one shot but suffers from a loss in spatial resolution of the captured image. While aperiodic assignment of exposures has been shown to be advantageous during reconstruction in alleviating resolution loss, almost all the existing imaging sensors use the square pixel layout, which is a periodic tiling of square pixels. We propose the Penrose pixel layout, using pixels in aperiodic rhombus Penrose tiling, for HDR imaging. With the SVE technology, Penrose pixel layout has both exposure and pixel aperiodicities. To investigate its performance, we have to reconstruct HDR images in square pixel layout from Penrose raw images with SVE. Since the two pixel layouts are different, the traditional HDR reconstruction methods are not applicable. We develop a reconstruction method for Penrose pixel layout using a Gaussian mixture model for regularization. Both quantitative and qualitative results show the superiority of Penrose pixel layout over square pixel layout.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soufi, M; Arimura, H; Toyofuku, F
Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was first extracted based on a template matching technique applied to amplitude (grayscale) images. The extracted ROI was preprocessed to reduce temporal and spatial noise using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e., the shape index and curvedness, were computed for localizing the anatomical feature points. The proposed framework was trained by optimizing shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared ray-based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g., apex of the nose) and concave regions (e.g., nasofacial sulcus). Results: The proposed framework localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images.
The proposed framework might be useful for tasks involving feature-based image registration in range-image guided radiation therapy.
Cash, David M; Sinha, Tuhin K; Chapman, William C; Terawaki, Hiromi; Dawant, Benoit M; Galloway, Robert L; Miga, Michael I
2003-07-01
As image guided surgical procedures become increasingly diverse, there will be more scenarios where point-based fiducials cannot be accurately localized for registration and rigid body assumptions no longer hold. As a result, procedures will rely more frequently on anatomical surfaces for the basis of image alignment and will require intraoperative geometric data to measure and compensate for tissue deformation in the organ. In this paper we outline methods for which a laser range scanner may be used to accomplish these tasks intraoperatively. A laser range scanner based on the optical principle of triangulation acquires a dense set of three-dimensional point data in a very rapid, noncontact fashion. Phantom studies were performed to test the ability to link range scan data with traditional modes of image-guided surgery data through localization, registration, and tracking in physical space. The experiments demonstrate that the scanner is capable of localizing point-based fiducials to within 0.2 mm and capable of achieving point and surface based registrations with target registration error of less than 2.0 mm. Tracking points in physical space with the range scanning system yields an error of 1.4 +/- 0.8 mm. Surface deformation studies were performed with the range scanner in order to determine if this device was capable of acquiring enough information for compensation algorithms. In the surface deformation studies, the range scanner was able to detect changes in surface shape due to deformation comparable to those detected by tomographic image studies. Use of the range scanner has been approved for clinical trials, and an initial intraoperative range scan experiment is presented. In all of these studies, the primary source of error in range scan data is deterministically related to the position and orientation of the surface within the scanner's field of view. 
However, this systematic error can be corrected, allowing the range scanner to provide a rapid, robust method of acquiring anatomical surfaces intraoperatively.
Automatic segmentation of cortical vessels in pre- and post-tumor resection laser range scan images
NASA Astrophysics Data System (ADS)
Ding, Siyi; Miga, Michael I.; Thompson, Reid C.; Garg, Ishita; Dawant, Benoit M.
2009-02-01
Measurement of intra-operative cortical brain movement is necessary to drive mechanical models developed to predict sub-cortical shift. At our institution, this is done with a tracked laser range scanner. This device acquires both 3D range data and 2D photographic images. 3D cortical brain movement can be estimated if 2D photographic images acquired over time can be registered. Previously, we developed a method that permits this registration using vessels visible in the images, but vessel segmentation required the localization of starting and ending points for each vessel segment. Here, we propose a method that automates the segmentation process further. This method involves several steps: (1) correction of lighting artifacts, (2) vessel enhancement, and (3) vessel centerline extraction. Results obtained on 5 images acquired in the operating room suggest that our method is robust and able to segment vessels reliably.
High-speed upper-airway imaging using full-range optical coherence tomography
NASA Astrophysics Data System (ADS)
Jing, Joseph; Zhang, Jun; Loy, Anthony Chin; Wong, Brian J. F.; Chen, Zhongping
2012-11-01
Obstruction in the upper airway can often cause reductions in breathing or gas exchange efficiency and lead to sleep disorders such as sleep apnea. Imaging diagnosis of the obstruction region has been accomplished using computed tomography (CT) and magnetic resonance imaging (MRI). However, CT requires the use of ionizing radiation, and MRI typically requires sedation of the patient to prevent motion artifacts. Long-range optical coherence tomography (OCT) has the potential to provide high-speed three-dimensional tomographic images with high resolution and without the use of ionizing radiation. In this paper, we present work on the development of a long-range OCT endoscopic probe with 1.2 mm OD and 20 mm working distance used in conjunction with a modified Fourier domain swept source OCT system to acquire structural and anatomical datasets of the human airway. Imaging from the bottom of the larynx to the end of the nasal cavity is completed within 40 s.
Cyclops: single-pixel imaging lidar system based on compressive sensing
NASA Astrophysics Data System (ADS)
Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.
2017-11-01
Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in the support of autonomous guidance, navigation and control operations, as they can provide very accurate, wide range, high-resolution distance measurements as required for the exploration missions. Imaging LIDARs can be used at critical stages of these exploration missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Although these devices have long been commercially available and used in diverse metrology and ranging applications, their size, mass and power consumption are still far from being suitable and attractive for space exploratory missions. Here, we describe a compact Single-Pixel Imaging LIDAR System that is based on a compressive sensing technique. The application of the compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster scanning and array based counterparts in terms of noise performance, and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability are expected in contrast with systems using large format photodetector arrays. Furthermore, the event of failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation.
Range Finder: outputs the average distance between the system and the area of the target under illumination; Attitude Meter: provides the slope of the target surface based on distance measurements in three areas of the target; 3D Imager: produces 3D ranged images of the target surface. The implemented prototype demonstrated a frame rate of 30 mHz for 16×16-pixel images, a transversal (xy) resolution of 2 cm at 10 m for images with 64×64 pixels, and the range (z) resolution proved to be better than 1 cm. The experimental results obtained for the "3D imaging" mode of operation demonstrated that it was possible to reconstruct spherical smooth surfaces. The proposed solution demonstrates great potential for miniaturization; for increasing spatial resolution without using large-format detector arrays; for eliminating the need for scanning mechanisms; and for implementing simple and robust configurations.
Wang, Jianji; Zheng, Nanning
2013-09-01
Fractal image compression (FIC) is an image coding technology based on the local similarity of image structure. It is widely used in many fields such as image retrieval, image denoising, image authentication, and encryption. FIC, however, suffers from the high computational complexity in encoding. Although many schemes are published to speed up encoding, they do not easily satisfy the encoding time or the reconstructed image quality requirements. In this paper, a new FIC scheme is proposed based on the fact that the affine similarity between two blocks in FIC is equivalent to the absolute value of Pearson's correlation coefficient (APCC) between them. First, all blocks in the range and domain pools are chosen and classified using an APCC-based block classification method to increase the matching probability. Second, by sorting the domain blocks with respect to APCCs between these domain blocks and a preset block in each class, the matching domain block for a range block can be searched in the selected domain set in which these APCCs are closer to APCC between the range block and the preset block. Experimental results show that the proposed scheme can significantly speed up the encoding process in FIC while preserving the reconstructed image quality well.
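The APCC equivalence underlying the scheme is easy to demonstrate numerically. The snippet below is a minimal sketch, not the authors' implementation: an `apcc` helper (a name chosen here for illustration) scores block pairs, and a brute-force search shows that a domain block that is an exact affine transform of the range block attains APCC = 1 and is therefore selected. Block size, the affine coefficients, and the pool size are arbitrary assumptions.

```python
import numpy as np

def apcc(a, b):
    """Absolute value of Pearson's correlation coefficient between two
    equally sized image blocks (flattened to vectors)."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    # A flat (zero-variance) block has undefined correlation; score it 0.
    return 0.0 if denom == 0 else abs(float(a @ b) / denom)

# Toy search: for one range block, pick the domain block with highest APCC.
rng = np.random.default_rng(0)
range_block = rng.random((8, 8))
domain_blocks = [rng.random((8, 8)) for _ in range(50)]
domain_blocks.append(2.5 * range_block + 1.0)   # affine copy: s*R + o

best = max(range(len(domain_blocks)),
           key=lambda i: apcc(range_block, domain_blocks[i]))
# The affinely similar block scores APCC = 1 and wins the search (index 50).
```

In a full encoder, sorting each class by APCC against a preset block, as the abstract describes, replaces this brute-force scan with a much narrower neighborhood search.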
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Park, Jinhyoung; Li, Xiang; Zhou, Qifa; Shung, K. Kirk
2013-01-01
The application of chirp coded excitation to pulse inversion tissue harmonic imaging can increase the signal to noise ratio. On the other hand, the elevation of range side lobe level, caused by leakages of the fundamental signal, has been problematic in mechanical scanners, which are still the most prevalent in high frequency intravascular ultrasound imaging. Fundamental chirp coded excitation imaging can achieve range side lobe levels lower than –60 dB with a Hanning window, but it yields higher side lobe levels than pulse inversion chirp coded tissue harmonic imaging (PI-CTHI). Therefore, in this paper a combined pulse inversion chirp coded tissue harmonic and fundamental imaging mode (CPI-CTHI) is proposed to retain the advantages of both chirp coded harmonic and fundamental imaging modes by demonstrating 20–60 MHz phantom and ex vivo results. A simulation study shows that the range side lobe level of CPI-CTHI is 16 dB lower than that of PI-CTHI, assuming that the transducer translates incident positions by 50 μm when the two beamlines of a pulse inversion pair are acquired. CPI-CTHI is implemented for a prototype intravascular ultrasound scanner capable of combined data acquisition in real-time. A wire phantom study shows that CPI-CTHI has a 12 dB lower range side lobe level and a 7 dB higher echo signal to noise ratio than PI-CTHI, while the lateral resolution is 50 μm finer and the side lobe level 3 dB lower than in fundamental chirp coded excitation imaging. Ex vivo scanning of a rabbit trachea demonstrates that CPI-CTHI is capable of visualizing blood vessels as small as 200 μm in diameter with 6 dB better tissue contrast than either PI-CTHI or fundamental chirp coded excitation imaging. These results clearly indicate that CPI-CTHI may enhance tissue contrast with a lower range side lobe level than PI-CTHI. PMID:22871273
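The fundamental-cancellation principle behind pulse inversion can be seen with a toy memoryless nonlinearity. This is a sketch only: the quadratic "tissue" model and its coefficients are illustrative assumptions, not the paper's transducer or propagation model.

```python
import numpy as np

# A memoryless quadratic response y = a1*x + a2*x**2 stands in for tissue
# nonlinearity. Exciting it with a pulse and its inverse, then summing the
# two echoes, cancels the linear (fundamental) term and doubles the
# quadratic (second-harmonic) term.
t = np.linspace(0.0, 1e-6, 500)
f0 = 20e6                                   # 20 MHz fundamental tone
x = np.sin(2 * np.pi * f0 * t)

a1, a2 = 1.0, 0.1                           # illustrative coefficients
echo_pos = a1 * x + a2 * x**2               # response to the +pulse
echo_neg = a1 * (-x) + a2 * (-x)**2         # response to the -pulse

pi_sum = echo_pos + echo_neg                # equals 2*a2*x**2: fundamental gone
```

With chirp coding, `x` would be a frequency-swept pulse and the summed echoes would then be compressed with a matched filter; the cancellation step itself is the same.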
Laser range profiling for small target recognition
NASA Astrophysics Data System (ADS)
Steinvall, Ove; Tulldahl, Michael
2016-05-01
The detection and classification of small surface and airborne targets at long ranges is a growing need for naval security. Identification of small targets at long range, or even at closer range, is limited for imaging sensors by the demand for very high transverse resolution. There is therefore motivation to consider 1D laser techniques for target identification. These include vibrometry and laser range profiling. Vibrometry can give good results but is sensitive to whether suitable vibrating parts of the target are in the field of view. Laser range profiling is attractive because the maximum range can be substantial, especially for a small laser beam width. A range profiler can also be used in a scanning mode to detect targets within a certain sector. The same laser can also be used for active imaging when the target comes closer and is angularly resolved. The present paper shows both experimental and simulated results for laser range profiling of small boats out to 6-7 km range and a UAV mockup at close range (1.3 km). We obtained good results with the profiling system both for target detection and recognition. Comparison of experimental and simulated range waveforms based on CAD models of the target supports the idea of using a profiling system as a first recognition sensor, thus narrowing the search space for automatic target recognition based on imaging at close ranges. The naval experiments took place in the Baltic Sea with many other active and passive EO sensors alongside the profiling system. A discussion of data fusion between laser profiling and imaging systems is given. The UAV experiments were made from the rooftop laboratory at FOI.
Towards a robust HDR imaging system
NASA Astrophysics Data System (ADS)
Long, Xin; Zeng, Xiangrong; Huangpeng, Qizi; Zhou, Jinglun; Feng, Jing
2016-07-01
High dynamic range (HDR) images can show more details and luminance information on a general display device than low dynamic range (LDR) images. We present a robust HDR imaging system that can deal with blurry LDR images, overcoming the limitations of most existing HDR methods. Experiments on real images show the effectiveness and competitiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Mir, J. A.; Plackett, R.; Shipsey, I.; dos Santos, J. M. F.
2018-01-01
The paper "Using the Medipix3 detector for direct electron imaging in the range 60keV to 200keV in electron microscopy" by J.A. Mir, R. Plackett, I. Shipsey and J.M.F. dos Santos has been retracted following the authors' request on the basis of the existence of a disagreement about the ownership of the data, to prevent conflict between collaborators.
Westbury, Chris F.; Shaoul, Cyrus; Hollis, Geoff; Smithson, Lisa; Briesemeister, Benny B.; Hofmann, Markus J.; Jacobs, Arthur M.
2013-01-01
Many studies have shown that behavioral measures are affected by manipulating the imageability of words. Though imageability is usually measured by human judgment, little is known about what factors underlie those judgments. We demonstrate that imageability judgments can be largely or entirely accounted for by two computable measures that have previously been associated with imageability, the size and density of a word's context and the emotional associations of the word. We outline an algorithmic method for predicting imageability judgments using co-occurrence distances in a large corpus. Our computed judgments account for 58% of the variance in a set of nearly two thousand imageability judgments, for words that span the entire range of imageability. The two factors account for 43% of the variance in lexical decision reaction times (LDRTs) that is attributable to imageability in a large database of 3697 LDRTs spanning the range of imageability. We document variances in the distribution of our measures across the range of imageability that suggest that they will account for more variance at the extremes, from which most imageability-manipulating stimulus sets are drawn. The two predictors account for 100% of the variance that is attributable to imageability in newly-collected LDRTs using a previously-published stimulus set of 100 items. We argue that our model of imageability is neurobiologically plausible by showing it is consistent with brain imaging data. The evidence we present suggests that behavioral effects in the lexical decision task that are usually attributed to the abstract/concrete distinction between words can be wholly explained by objective characteristics of the word that are not directly related to the semantic distinction. We provide computed imageability estimates for over 29,000 words. PMID:24421777
Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori
2018-01-12
To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke-. Readout noise under the highest pixel gain condition is 1 e- with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
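The two-signal merge described above can be sketched as a per-pixel selection between gain channels. The function below is a hypothetical illustration, not the sensor's on-chip linearization: it keeps the low-noise high-gain sample until that channel nears saturation, then substitutes the low-gain sample scaled by the gain ratio, yielding one linear signal over the extended range.

```python
import numpy as np

def merge_dual_gain(high, low, gain_ratio, sat_level):
    """Combine high-gain and low-gain readouts of one exposure into a single
    linear signal. Pixels where the high-gain channel has reached sat_level
    take the low-gain value rescaled by gain_ratio; all others keep the
    low-noise high-gain value. Illustrative helper, not the chip's circuit."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    use_low = high >= sat_level
    return np.where(use_low, low * gain_ratio, high)

# Example: assumed gain ratio of 16, high-gain channel clipping at 4095 codes.
high = np.array([100.0, 2000.0, 4095.0, 4095.0])
low  = np.array([  6.25, 125.0,  500.0, 3000.0])
merged = merge_dual_gain(high, low, gain_ratio=16, sat_level=4095)
# -> [100., 2000., 8000., 48000.]  (linear across both channels)
```

In practice the crossover also needs per-pixel gain calibration and offset matching so the two segments join without a visible seam.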
Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori
2018-01-01
To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach. PMID:29329210
Method and apparatus for reflection mode imaging
NASA Technical Reports Server (NTRS)
Heyser, Richard C. (Inventor); Rooney, James A. (Inventor)
1989-01-01
A volume is scanned with a raster scan about a center of rotation using a transmitter/receiver at a selected range while gating a range window on the receiver with a selected range differential. The received signals are then demodulated to obtain signals representative of a property within the volume being scanned such as the density of a tumor. The range is varied until the entire volume has been scanned at all ranges to be displayed. An imaging display is synchronously scanned together with the raster scan to display variations of the property on the display. A second transmitter/receiver with associated equipment may be offset from the first and variations displayed from each of the transmitter/receivers on its separate display. The displays may then be combined stereoscopically to provide a three-dimensional image representative of variations of the property.
SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, J; University of Sydney, Sydney, NSW; Sykes, J
Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings; and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had a larger range, with 0.8°, 1.2° and 1.9° for 1σ in XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use, within a normal range of acquisition settings.
Neural networks application to divergence-based passive ranging
NASA Technical Reports Server (NTRS)
Barniv, Yair
1992-01-01
The purpose of this report is to summarize the state of knowledge and outline the planned work in a divergence-based/neural networks approach to the problem of passive ranging derived from optical flow. Work in this and closely related areas is reviewed in order to provide the necessary background for further developments. New ideas about devising a monocular passive-ranging system are then introduced. It is shown that image-plane divergence is independent of image-plane location with respect to the focus of expansion and of camera maneuvers because it directly measures the object's expansion which, in turn, is related to the time-to-collision. Thus, a divergence-based method has the potential of providing reliable range estimates, complementing other monocular passive-ranging methods which encounter difficulties in image areas close to the focus of expansion. Image-plane divergence can be thought of as a spatial/temporal pattern. A neural network realization was chosen for this task because neural networks have generally performed well in various other pattern recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.
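The divergence/time-to-collision relation the report builds on can be checked numerically. The sketch below assumes pure camera translation toward a frontoparallel surface, for which the flow field is u = kx, v = ky about the focus of expansion and div(flow) = 2k = 2/τ; the helper name and the synthetic flow are illustrative assumptions, not the report's neural-network estimator.

```python
import numpy as np

def time_to_contact(u, v, dx=1.0):
    """Estimate time-to-collision (in frames) from a dense optical-flow
    field (u, v), using div(flow) = 2/tau, which holds for pure translation
    toward a frontoparallel surface."""
    du_dx = np.gradient(u, dx, axis=1)      # ∂u/∂x
    dv_dy = np.gradient(v, dx, axis=0)      # ∂v/∂y
    div = float(np.mean(du_dx + dv_dy))     # average divergence over the patch
    return 2.0 / div

# Synthetic expanding flow about a focus of expansion at the image centre:
# u = k*x, v = k*y gives divergence 2k everywhere, so tau = 1/k.
k = 0.05                                    # expansion rate per frame
y, x = np.mgrid[-10:11, -10:11].astype(float)
tau = time_to_contact(k * x, k * y)         # ≈ 20.0 frames
```

Note the estimate uses only spatial derivatives of the flow, which is why, as the abstract states, it is insensitive to image-plane location relative to the focus of expansion.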
SAD5 Stereo Correlation Line-Striping in an FPGA
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Morfopoulos, Arin C.
2011-01-01
High precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly in the new algorithm with image size, and BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 the image size. In the new algorithm, BRAM usage scales linearly with disparity search range and scales again linearly with line width. So a doubling of image size, say from 640 to 1,280, would in the previous design mean an effective 4× increase in BRAM usage: 2× for line width and 2× again for disparity search range. The minimum strip size is twice the search range, and will produce an output strip width equal to the disparity search range. So assuming a disparity search range of 1/10 image width, 10 sequential runs of the minimum strip size would produce a full output image. This approach allowed the innovators to fit 1280×960 SAD5 stereo disparity in less than 80 BRAMs and 52k Slices on a Virtex 5LX330T, 25% and 24% of resources, respectively. Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight-qualified version of the Virtex 5: this could produce stereo results even for very large image sizes 3 orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, and using much less BRAM.
This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms in addition to SAD5 to be run on the same FPGA.
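The striping arithmetic in the passage above reduces to a few rules of thumb, sketched here as plain integer arithmetic; the function name and the exact rounding are assumptions for illustration, not the flight build's parameters.

```python
def strip_plan(image_width):
    """Line-striping arithmetic: disparity search range is 1/10 of image
    width (rule of thumb), the minimum strip is twice the search range, each
    strip yields search-range columns of output, so ten sequential strips
    cover the full image regardless of width."""
    disparity_range = image_width // 10   # search range: 1/10 image width
    strip_width = 2 * disparity_range     # minimum strip: twice the range
    output_width = disparity_range        # columns produced per strip
    n_strips = image_width // output_width
    return disparity_range, strip_width, n_strips

# Doubling image width doubles the per-strip BRAM (linear in strip width and
# in search range) instead of quadrupling whole-image BRAM:
assert strip_plan(640)  == (64, 128, 10)
assert strip_plan(1280) == (128, 256, 10)
```

This is the latency-for-BRAM trade: ten sequential passes over narrow strips replace one pass whose line buffers would have to span the full image width.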
Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.
Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou
2017-05-10
Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.
Comparing methods for analysis of biomedical hyperspectral image data
NASA Astrophysics Data System (ADS)
Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.
2017-02-01
Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
Improved real-time imaging spectrometer
NASA Technical Reports Server (NTRS)
Lambert, James L. (Inventor); Chao, Tien-Hsin (Inventor); Yu, Jeffrey W. (Inventor); Cheng, Li-Jen (Inventor)
1993-01-01
An improved AOTF-based imaging spectrometer that offers several advantages over prior art AOTF imaging spectrometers is presented. The ability to electronically set the bandpass wavelength provides observational flexibility. Various improvements in optical architecture provide simplified magnification variability, improved image resolution and light throughput efficiency and reduced sensitivity to ambient light. Two embodiments of the invention are: (1) operation in the visible/near-infrared domain of wavelength range 0.48 to 0.76 microns; and (2) infrared configuration which operates in the wavelength range of 1.2 to 2.5 microns.
Single-frequency 3D synthetic aperture imaging with dynamic metasurface antennas.
Boyarsky, Michael; Sleasman, Timothy; Pulido-Mancera, Laura; Diebold, Aaron V; Imani, Mohammadreza F; Smith, David R
2018-05-20
Through aperture synthesis, an electrically small antenna can be used to form a high-resolution imaging system capable of reconstructing three-dimensional (3D) scenes. However, the large spectral bandwidth typically required in synthetic aperture radar systems to resolve objects in range often requires costly and complex RF components. We present here an alternative approach based on a hybrid imaging system that combines a dynamically reconfigurable aperture with synthetic aperture techniques, demonstrating the capability to resolve objects in three dimensions (3D), with measurements taken at a single frequency. At the core of our imaging system are two metasurface apertures, both of which consist of a linear array of metamaterial irises that couple to a common waveguide feed. Each metamaterial iris has integrated within it a diode that can be biased so as to switch the element on (radiating) or off (non-radiating), such that the metasurface antenna can produce distinct radiation profiles corresponding to different on/off patterns of the metamaterial element array. The electrically large size of the metasurface apertures enables resolution in range and one cross-range dimension, while aperture synthesis provides resolution in the other cross-range dimension. The demonstrated imaging capabilities of this system represent a step forward in the development of low-cost, high-performance 3D microwave imaging systems.
Two-color temporal focusing multiphoton excitation imaging with tunable-wavelength excitation
NASA Astrophysics Data System (ADS)
Lien, Chi-Hsiang; Abrigo, Gerald; Chen, Pei-Hsuan; Chien, Fan-Ching
2017-02-01
Wavelength tunable temporal focusing multiphoton excitation microscopy (TFMPEM) is conducted to visualize optical sectioning images of multiple fluorophore-labeled specimens through the optimal two-photon excitation (TPE) of each type of fluorophore. The tunable range of the excitation wavelength was determined by the groove density of the grating, the diffraction angle, the focal length of the lenses, and the shifting distance of the first lens in the beam expander. Based on a consideration of the trade-off between the tunable-wavelength range and the axial resolution of temporal focusing multiphoton excitation imaging, the presented system demonstrated a tunable-wavelength range from 770 to 920 nm using a diffraction grating with a groove density of 830 lines/mm. TPE fluorescence imaging examination of a fluorescent thin film indicated that the width of the axially confined excitation was 3.0±0.7 μm and the shifting distance of the temporal focal plane was less than 0.95 μm within the presented wavelength tunable range. Fast switching between excitation wavelengths and three-dimensionally rendered imaging of HeLa cell mitochondria and cytoskeletons and mouse muscle fibers were demonstrated. Significantly, the proposed system can improve the quality of two-color TFMPEM images through different excitation wavelengths to obtain higher-quality fluorescent signals in multiple-fluorophore measurements.
High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging
Persoons, Tim; O’Donovan, Tadhg S.
2011-01-01
The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as the reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods. PMID:22346564
Proceedings of the NASA Workshop on Registration and Rectification
NASA Technical Reports Server (NTRS)
Bryant, N. A. (Editor)
1982-01-01
Issues associated with the registration and rectification of remotely sensed data are discussed. Near- and long-range applications research tasks and some medium-range technology augmentation research areas are recommended. Image sharpness, feature extraction, inter-image mapping, error analysis, and verification methods are addressed.
Thermal infrared panoramic imaging sensor
NASA Astrophysics Data System (ADS)
Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey
2006-05-01
Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside the protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by the camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as those required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC), combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets.
The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together in a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, optical design being expanded into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.
Prediction of Viking lander camera image quality
NASA Technical Reports Server (NTRS)
Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.
1976-01-01
Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.
NASA Astrophysics Data System (ADS)
Shramenko, Mikhail V.; Chamorovskiy, Alexander; Lyu, Hong-Chou; Lobintsov, Andrei A.; Karnowski, Karol; Yakubovich, Sergei D.; Wojtkowski, Maciej
2015-03-01
A tunable semiconductor laser for the 1025-1095 nm spectral range was developed, based on an InGaAs semiconductor optical amplifier and a narrow band-pass acousto-optic tunable filter in a fiber ring cavity. Mode-hop-free sweeping with tuning speeds of up to 10^4 nm/s was demonstrated. The instantaneous linewidth is in the range of 0.06-0.15 nm, side-mode suppression is up to 50 dB, and the polarization extinction ratio exceeds 18 dB. Optical power in the output single-mode fiber reaches 20 mW. The laser was used in an OCT system for imaging a contact lens immersed in a 0.5% intralipid solution. The cross-section image demonstrated an imaging depth of more than 5 mm.
Acoustic superlens using Helmholtz-resonator-based metamaterials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xishan; Yin, Jing; Yu, Gaokun, E-mail: gkyu@ouc.edu.cn
2015-11-09
An acoustic superlens provides a way to overcome the diffraction limit with respect to the wavelength of the bulk wave in air. However, the operating frequency range of subwavelength imaging is quite narrow. Here, an acoustic superlens is designed using Helmholtz-resonator-based metamaterials to broaden the bandwidth of super-resolution. An experiment is carried out to verify subwavelength imaging of double slits, the images of which can be well resolved in the frequency range from 570 to 650 Hz. Different from previous works based on the Fabry-Pérot resonance, the corresponding mechanism of subwavelength imaging is the Fano resonance, and the strong coupling between the neighbouring Helmholtz resonators, separated at a subwavelength interval, leads to the enhanced sound transmission over a relatively wide frequency range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Peter C.; Schreibmann, Eduard; Roper, Justin
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
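The HU-mapping step can be illustrated with a toy sketch. The paper performs a comprehensive analysis on paired MRI/HU intensities; here a simple nearest-intensity lookup stands in for that analysis, and the function and variable names are hypothetical.

```python
import numpy as np

def correct_hu(mri_corrupt, mri_clean, ct_clean):
    """Predict HU for a corrupted CT slice from its coregistered MRI.

    Paired (MRI intensity -> HU) samples are taken from a nearby
    artifact-free slice; each corrupted pixel is assigned the HU of the
    clean-slice sample with the nearest MRI intensity.  This lookup is a
    stand-in for the paper's more comprehensive prediction analysis.
    """
    order = np.argsort(mri_clean.ravel())
    m_sorted = mri_clean.ravel()[order]          # sorted MRI intensities
    hu_sorted = ct_clean.ravel()[order]          # matching HU values
    idx = np.searchsorted(m_sorted, mri_corrupt.ravel())
    idx = idx.clip(0, m_sorted.size - 1)
    return hu_sorted[idx].reshape(mri_corrupt.shape)
```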
Resolving hot spot microstructure using x-ray penumbral imaging (invited)
NASA Astrophysics Data System (ADS)
Bachmann, B.; Hilsabeck, T.; Field, J.; Masters, N.; Reed, C.; Pardini, T.; Rygg, J. R.; Alexander, N.; Benedetti, L. R.; Döppner, T.; Forsman, A.; Izumi, N.; LePape, S.; Ma, T.; MacPhee, A. G.; Nagel, S.; Patel, P.; Spears, B.; Landen, O. L.
2016-11-01
We have developed and fielded x-ray penumbral imaging on the National Ignition Facility in order to enable sub-10 μm resolution imaging of stagnated plasma cores (hot spots) of spherically shock compressed spheres and shell implosion targets. By utilizing circular tungsten and tantalum apertures with diameters ranging from 20 μm to 2 mm, in combination with image plate and gated x-ray detectors as well as imaging magnifications ranging from 4 to 64, we have demonstrated high-resolution imaging of hot spot plasmas at x-ray energies above 5 keV. Here we give an overview of the experimental design criteria involved and demonstrate the most relevant influences on the reconstruction of x-ray penumbral images, as well as mitigation strategies of image degrading effects like over-exposed pixels, artifacts, and photon limited source emission. We describe experimental results showing the advantages of x-ray penumbral imaging over conventional Fraunhofer and photon limited pinhole imaging and showcase how internal hot spot microstructures can be resolved.
Nankivil, Derek; Waterman, Gar; LaRocca, Francesco; Keller, Brenton; Kuo, Anthony N.; Izatt, Joseph A.
2015-01-01
We describe the first handheld, swept source optical coherence tomography (SSOCT) system capable of imaging both the anterior and posterior segments of the eye in rapid succession. A single 2D microelectromechanical systems (MEMS) scanner was utilized for both imaging modes, and the optical paths for each imaging mode were optimized for their respective application using a combination of commercial and custom optics. The system has a working distance of 26.1 mm and a measured axial resolution of 8 μm (in air). In posterior segment mode, the design has a lateral resolution of 9 μm, 7.4 mm imaging depth range (in air), 4.9 mm 6 dB fall-off range (in air), and peak sensitivity of 103 dB over a 22° field of view (FOV). In anterior segment mode, the design has a lateral resolution of 24 μm, imaging depth range of 7.4 mm (in air), 6 dB fall-off range of 4.5 mm (in air), depth-of-focus of 3.6 mm, and a peak sensitivity of 99 dB over a 17.5 mm FOV. In addition, the probe includes a wide-field iris imaging system to simplify alignment. A fold mirror assembly actuated by a bi-stable rotary solenoid was used to switch between anterior and posterior segment imaging modes, and a miniature motorized translation stage was used to adjust the objective lens position to correct for patient refraction between −12.6 and +9.9 D. The entire probe weighs less than 630 g with a form factor of 20.3 x 9.5 x 8.8 cm. Healthy volunteers were imaged to illustrate imaging performance. PMID:26601014
In vivo optical imaging and dynamic contrast methods for biomedical research
Hillman, Elizabeth M. C.; Amoozegar, Cyrus B.; Wang, Tracy; McCaslin, Addason F. H.; Bouchard, Matthew B.; Mansfield, James; Levenson, Richard M.
2011-01-01
This paper provides an overview of optical imaging methods commonly applied to basic research applications. Optical imaging is well suited for non-clinical use, since it can exploit an enormous range of endogenous and exogenous forms of contrast that provide information about the structure and function of tissues ranging from single cells to entire organisms. An additional benefit of optical imaging that is often under-exploited is its ability to acquire data at high speeds; a feature that enables it to not only observe static distributions of contrast, but to probe and characterize dynamic events related to physiology, disease progression and acute interventions in real time. The benefits and limitations of in vivo optical imaging for biomedical research applications are described, followed by a perspective on future applications of optical imaging for basic research centred on a recently introduced real-time imaging technique called dynamic contrast-enhanced small animal molecular imaging (DyCE). PMID:22006910
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, that a forested land cover gradually grows smoother, and that an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high-elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
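The fractal dimension underlying this kind of analysis is commonly estimated by box counting; the minimal sketch below illustrates the idea on a binary image. The study itself analyzed continuous NDVI surfaces across pixel sizes, so this estimator and its parameters are illustrative only.

```python
import numpy as np

def box_count_dimension(binary_img, sizes=(1, 2, 4, 8)):
    """Box-counting estimate of fractal dimension.

    Count the number of occupied boxes N(s) at several box sizes s and
    fit log N(s) against log(1/s); the slope is the fractal dimension.
    """
    img = np.asarray(binary_img, bool)
    counts = []
    for s in sizes:
        ny, nx = img.shape[0] // s, img.shape[1] // s
        # tile the image into s-by-s boxes and mark each box occupied
        # if any pixel inside it is set
        blocks = img[:ny * s, :nx * s].reshape(ny, s, nx, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                          np.log(counts), 1)
    return slope
```

For a completely filled region the estimate approaches 2, the dimension of a plane, which is a convenient sanity check.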
Color sensitivity of the multi-exposure HDR imaging process
NASA Astrophysics Data System (ADS)
Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.
2013-04-01
Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings and image stitching are applied, both of which have an influence on the color balance in the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
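The per-pixel irradiance recovery can be sketched as a weighted average over exposures. This minimal example assumes an already-linearized camera response, so it omits the response-curve estimation step the abstract mentions; the function name and hat-weighting choice are illustrative, not a specific implementation from the paper.

```python
import numpy as np

def recover_irradiance(ldr_stack, exposures):
    """Merge a multi-exposure 8-bit LDR stack into an HDR irradiance map.

    Assumes a linearized camera response, so a pixel value z relates to
    irradiance E by z ~ E * dt.  A triangular ("hat") weight de-emphasizes
    under- and over-exposed pixels, the usual choice in multi-exposure
    HDR merging.
    """
    zs = np.stack([im.astype(float) for im in ldr_stack])  # (n, ny, nx)
    dts = np.asarray(exposures, float)[:, None, None]
    w = 1.0 - np.abs(zs / 255.0 - 0.5) * 2.0   # hat weight, 0 at extremes
    w = np.clip(w, 1e-4, None)                 # avoid division by zero
    # per-exposure irradiance estimates z/dt, combined by weighted average
    return (w * zs / dts).sum(axis=0) / w.sum(axis=0)
```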
Jarvis, Sam; Danza, Rosanna; Moriarty, Philip
2012-01-01
Summary Background: Noncontact atomic force microscopy (NC-AFM) now regularly produces atomic-resolution images on a wide range of surfaces, and has demonstrated the capability for atomic manipulation solely using chemical forces. Nonetheless, the role of the tip apex in both imaging and manipulation remains poorly understood and is an active area of research both experimentally and theoretically. Recent work employing specially functionalised tips has provided additional impetus to elucidating the role of the tip apex in the observed contrast. Results: We present an analysis of the influence of the tip apex during imaging of the Si(100) substrate in ultra-high vacuum (UHV) at 5 K using a qPlus sensor for noncontact atomic force microscopy (NC-AFM). Data demonstrating stable imaging with a range of tip apexes, each with a characteristic imaging signature, have been acquired. By imaging at close to zero applied bias we eliminate the influence of tunnel current on the force between tip and surface, and also the tunnel-current-induced excitation of silicon dimers, which is a key issue in scanning probe studies of Si(100). Conclusion: A wide range of novel imaging mechanisms are demonstrated on the Si(100) surface, which can only be explained by variations in the precise structural configuration at the apex of the tip. Such images provide a valuable resource for theoreticians working on the development of realistic tip structures for NC-AFM simulations. Force spectroscopy measurements show that the tip termination critically affects both the short-range force and dissipated energy. PMID:22428093
Influence of long-range Coulomb interaction in velocity map imaging.
Barillot, T; Brédy, R; Celep, G; Cohen, S; Compagnon, I; Concina, B; Constant, E; Danakas, S; Kalaitzis, P; Karras, G; Lépine, F; Loriot, V; Marciniak, A; Predelus-Renois, G; Schindler, B; Bordas, C
2017-07-07
The standard velocity-map imaging (VMI) analysis relies on the simple approximation that the residual Coulomb field experienced by the photoelectron ejected from a neutral or ion system may be neglected. Under this almost universal approximation, the photoelectrons follow ballistic (parabolic) trajectories in the externally applied electric field, and the recorded image may be considered as a 2D projection of the initial photoelectron velocity distribution. There are, however, several circumstances where this approximation is not justified and the influence of long-range forces must absolutely be taken into account for the interpretation and analysis of the recorded images. The aim of this paper is to illustrate this influence by discussing two different situations involving isolated atoms or molecules where the analysis of experimental images cannot be performed without considering long-range Coulomb interactions. The first situation occurs when slow (meV) photoelectrons are photoionized from a neutral system and strongly interact with the attractive Coulomb potential of the residual ion. The result of this interaction is the formation of a more complex structure in the image, as well as the appearance of an intense glory at the center of the image. The second situation, observed also at low energy, occurs in the photodetachment from a multiply charged anion and it is characterized by the presence of a long-range repulsive potential. Then, while the standard VMI approximation is still valid, the very specific features exhibited by the recorded images can be explained only by taking into consideration tunnel detachment through the repulsive Coulomb barrier.
Yang, Wei; Chen, Jie; Zeng, Hong Cheng; Wang, Peng Bo; Liu, Wei
2016-01-01
Based on the terrain observation by progressive scans (TOPS) mode, an efficient full-aperture image formation algorithm for focusing wide-swath spaceborne TOPS data is proposed. First, to overcome the Doppler frequency spectrum aliasing caused by azimuth antenna steering, the range-independent derotation operation is adopted, and the signal properties after derotation are derived in detail. Then, the azimuth deramp operation is performed to resolve image folding in azimuth. The traditional deramp function introduces a time shift, resulting in the appearance of ghost targets and reduced azimuth resolution at the scene edge, especially in the wide-swath coverage case. To avoid this, a novel solution is provided using a modified range-dependent deramp function combined with the chirp-z transform. Moreover, range scaling and azimuth scaling are performed to provide the same azimuth and range sampling interval for all sub-swaths, instead of the interpolation operation for the sub-swath image mosaic. Simulation results are provided to validate the proposed algorithm. PMID:27941706
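The deramp operation itself is a multiplication by a conjugate chirp that removes a quadratic phase, compressing the azimuth spectrum. The sketch below shows only this basic step; the signal model and Doppler rate `ka` are illustrative, and the paper's modified range-dependent deramp and chirp-z transform are not reproduced.

```python
import numpy as np

def deramp(signal, t, ka):
    """Basic azimuth deramping: multiply by the conjugate chirp
    exp(-j*pi*ka*t^2) to cancel the Doppler rate ka, so a chirped
    return collapses to a tone at its Doppler centroid."""
    return signal * np.exp(-1j * np.pi * ka * t**2)
```

After deramping, a target that swept through a wide Doppler band occupies a single frequency bin, which is what resolves the spectrum aliasing that azimuth antenna steering would otherwise cause.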
Passive range estimation for rotorcraft low-altitude flight
NASA Technical Reports Server (NTRS)
Sridhar, B.; Suorsa, R.; Hussien, B.
1991-01-01
The automation of rotorcraft low-altitude flight presents challenging problems in control, computer vision and image understanding. A critical element in this problem is the ability to detect and locate obstacles, using on-board sensors, and modify the nominal trajectory. This requirement is also necessary for the safe landing of an autonomous lander on Mars. This paper examines some of the issues in the location of objects using a sequence of images from a passive sensor, and describes a Kalman filter approach to estimate the range to obstacles. The Kalman filter is also used to track features in the images leading to a significant reduction of search effort in the feature extraction step of the algorithm. The method can compute range for both straight line and curvilinear motion of the sensor. A laboratory experiment was designed to acquire a sequence of images along with sensor motion parameters under conditions similar to helicopter flight. Range estimation results using this imagery are presented.
Evaluation of image quality in terahertz pulsed imaging using test objects.
Fitzgerald, A J; Berry, E; Miles, R E; Zinovev, N N; Smith, M A; Chamberlain, J M
2002-11-07
As with other imaging modalities, the performance of terahertz (THz) imaging systems is limited by factors of spatial resolution, contrast and noise. The purpose of this paper is to introduce test objects and image analysis methods to evaluate and compare THz image quality in a quantitative and objective way, so that alternative terahertz imaging system configurations and acquisition techniques can be compared, and the range of image parameters can be assessed. Two test objects were designed and manufactured, one to determine the modulation transfer functions (MTF) and the other to derive image signal-to-noise ratio (SNR) at a range of contrasts. As expected, the higher THz frequencies had larger MTFs, and better spatial resolution as determined by the spatial frequency at which the MTF dropped below the 20% threshold. Image SNR was compared for time-domain and frequency-domain image parameters, and time-delay-based images consistently demonstrated higher SNR than intensity-based parameters such as relative transmittance, because the latter are more strongly affected by the sources of noise in the THz system such as laser fluctuations and detector shot noise.
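An MTF-based resolution figure like the 20% threshold used here can be computed from an edge-spread measurement on a test object. The sketch below uses the standard ESF-to-LSF-to-MTF pipeline; the paper's actual test objects and processing may differ, and all names are illustrative.

```python
import numpy as np

def mtf_from_edge(edge_profile, dx=1.0):
    """Estimate the MTF from an edge-spread function (ESF).

    Differentiate the ESF to obtain the line-spread function (LSF),
    then take the magnitude of its Fourier transform, normalized to 1
    at zero spatial frequency.
    """
    lsf = np.gradient(np.asarray(edge_profile, float), dx)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(edge_profile), d=dx)
    return freqs, mtf

def limiting_frequency(freqs, mtf, threshold=0.2):
    """Spatial frequency at which the MTF first drops below `threshold`,
    the resolution criterion used in the paper."""
    below = np.nonzero(mtf < threshold)[0]
    return freqs[below[0]] if below.size else freqs[-1]
```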
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Wesley; Sattarivand, Mike
Objective: To optimize dual-energy parameters of the ExacTrac stereoscopic x-ray imaging system for lung SBRT patients. Methods: Simulated spectra and a lung phantom were used to optimize filter material, thickness, kVps, and weighting factors to obtain bone-subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number (Z) range [3–83] based on a metric defined to separate the spectra of high and low energies. Both energies used the same filter due to time constraints of image acquisition in lung SBRT imaging. A lung phantom containing bone, soft tissue, and a tumor-mimicking material was imaged with filter thicknesses in the range [0–1] mm and kVp in the range [60–140]. A cost function based on the contrast-to-noise ratio of bone, soft tissue, and tumor, as well as image noise content, was defined to optimize filter thickness and kVp. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom were acquired and evaluated for bone subtraction. Imaging dose was measured with the dual-energy technique using tin filtering. Results: Tin was the material of choice, providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-only image in the lung phantom was obtained using 0.3 mm tin and a [140, 80] kVp pair. Dual-energy images of the Rando phantom had noticeable bone elimination when compared to no filtration. Dose was lower with tin filtering compared to no filtration. Conclusions: Dual-energy soft-tissue imaging is feasible using the ExacTrac stereoscopic imaging system utilizing a single tin filter for both high and low energies and optimized acquisition parameters.
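Weighted log subtraction is the standard bone-cancellation step behind dual-energy imaging, sketched minimally below. The weighting factor `w` would in practice come from an optimization like the contrast-to-noise cost function described above; the value 0.5 used here is only a placeholder, and the function name is hypothetical.

```python
import numpy as np

def dual_energy_soft_tissue(high_kvp_img, low_kvp_img, w=0.5):
    """Weighted log subtraction for dual-energy imaging (a sketch).

    In log space, each image is a line integral of attenuation.  For a
    material whose high-to-low-energy attenuation ratio equals w, the
    subtraction cancels that material (e.g. bone), leaving soft tissue.
    """
    hi = np.log(np.asarray(high_kvp_img, float).clip(min=1e-6))
    lo = np.log(np.asarray(low_kvp_img, float).clip(min=1e-6))
    return hi - w * lo   # bone signal cancels for the right choice of w
```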
High dynamic range imaging by pupil single-mode filtering and remapping
NASA Astrophysics Data System (ADS)
Perrin, G.; Lacour, S.; Woillez, J.; Thiébaut, É.
2006-12-01
Because of atmospheric turbulence, obtaining high angular resolution images with a high dynamic range is difficult even in the near-infrared domain of wavelengths. We propose a novel technique to overcome this issue. The fundamental idea is to apply techniques developed for long-baseline interferometry to the case of a single-aperture telescope. The pupil of the telescope is broken down into coherent subapertures, each feeding a single-mode fibre. A remapping of the exit pupil allows all subapertures to interfere non-redundantly. A diffraction-limited image with very high dynamic range is reconstructed from the fringe pattern analysis with aperture synthesis techniques, free of speckle noise. The performance of the technique is demonstrated with simulations in the visible range with an 8-m telescope. Raw dynamic ranges of 1:10^6 can be obtained in only a few tens of seconds of integration time for bright objects.
Biologically relevant photoacoustic imaging phantoms with tunable optical and acoustic properties
Vogt, William C.; Jia, Congxian; Wear, Keith A.; Garra, Brian S.; Joshua Pfefer, T.
2016-01-01
Abstract. Established medical imaging technologies such as magnetic resonance imaging and computed tomography rely on well-validated tissue-simulating phantoms for standardized testing of device image quality. The availability of high-quality phantoms for optical-acoustic diagnostics such as photoacoustic tomography (PAT) will facilitate standardization and clinical translation of these emerging approaches. Materials used in prior PAT phantoms do not provide a suitable combination of long-term stability and realistic acoustic and optical properties. Therefore, we have investigated the use of custom polyvinyl chloride plastisol (PVCP) formulations for imaging phantoms and identified a dual-plasticizer approach that provides biologically relevant ranges of relevant properties. Speed of sound and acoustic attenuation were determined over a frequency range of 4 to 9 MHz and optical absorption and scattering over a wavelength range of 400 to 1100 nm. We present characterization of several PVCP formulations, including one designed to mimic breast tissue. This material is used to construct a phantom comprised of an array of cylindrical, hemoglobin-filled inclusions for evaluation of penetration depth. Measurements with a custom near-infrared PAT imager provide quantitative and qualitative comparisons of phantom and tissue images. Results indicate that our PVCP material is uniquely suitable for PAT system image quality evaluation and may provide a practical tool for device validation and intercomparison. PMID:26886681
Longmire, Michelle R.; Ogawa, Mikako; Choyke, Peter L.
2012-01-01
In recent years, numerous in vivo molecular imaging probes have been developed. As a consequence, much has been published on the design and synthesis of molecular imaging probes focusing on each modality, each type of material, or each target disease. More recently, second generation molecular imaging probes with unique, multi-functional, or multiplexed characteristics have been designed. This critical review focuses on (i) molecular imaging using combinations of modalities and signals that employ the full range of the electromagnetic spectra, (ii) optimized chemical design of molecular imaging probes for in vivo kinetics based on biology and physiology across a range of physical sizes, (iii) practical examples of second generation molecular imaging probes designed to extract complementary data from targets using multiple modalities, color, and comprehensive signals (277 references). PMID:21607237
NASA Astrophysics Data System (ADS)
Golnik, C.; Bemmerer, D.; Enghardt, W.; Fiedler, F.; Hueso-González, F.; Pausch, G.; Römer, K.; Rohling, H.; Schöne, S.; Wagner, L.; Kormoll, T.
2016-06-01
The finite range of a proton beam in tissue opens new vistas for the delivery of a highly conformal dose distribution in radiotherapy. However, the actual particle range, and therefore the accurate dose deposition, is sensitive to the tissue composition in the proton path. Range uncertainties, resulting from limited knowledge of this tissue composition or positioning errors, are accounted for in the form of safety margins. Thus, the unverified particle range constrains the principle benefit of proton therapy. Detecting prompt γ-rays, a side product of proton-tissue interaction, aims at an on-line and non-invasive monitoring of the particle range, and therefore towards exploiting the potential of proton therapy. Compton imaging of the spatial prompt γ-ray emission is a promising measurement approach. Prompt γ-rays exhibit emission energies of several MeV. Hence, common radioactive sources cannot provide the energy range a prompt γ-ray imaging device must be designed for. In this work a benchmark measurement-setup for the production of a localized, monoenergetic 4.44 MeV γ-ray source is introduced. At the Tandetron accelerator at the HZDR, the proton-capture resonance reaction 15N(p,α γ4.439)12C is utilized. This reaction provides the same nuclear de-excitation (and γ-ray emission) occurrent as an intense prompt γ-ray line in proton therapy. The emission yield is quantitatively described. A two-stage Compton imaging device, dedicated for prompt γ-ray imaging, is tested at the setup exemplarily. Besides successful imaging tests, the detection efficiency of the prototype at 4.44 MeV is derived from the measured data. Combining this efficiency with the emission yield for prompt γ-rays, the number of valid Compton events, induced by γ-rays in the energy region around 4.44 MeV, is estimated for the prototype being implemented in a therapeutic treatment scenario. 
As a consequence, the detection efficiency turns out to be a key parameter for prompt γ-ray Compton imaging, limiting the applicability of the prototype in its current realization.
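The final estimate in this abstract is a simple product of beam, yield, geometry, and efficiency factors. A minimal sketch of that bookkeeping, with purely illustrative numbers (none of the values below are taken from the paper):

```python
# Hedged sketch: estimating valid Compton events for a prompt gamma-ray
# imaging prototype. All numbers are illustrative assumptions.

def expected_compton_events(n_protons, gammas_per_proton,
                            solid_angle_fraction, detection_efficiency):
    """Expected number of valid Compton events for a treatment scenario."""
    return (n_protons * gammas_per_proton
            * solid_angle_fraction * detection_efficiency)

# Illustrative values: 1e10 protons per spot, 0.01 prompt gammas (~4.4 MeV)
# per proton, 1% geometric solid-angle coverage, 1e-4 valid-event efficiency.
events = expected_compton_events(1e10, 0.01, 0.01, 1e-4)
print(f"Expected valid Compton events: {events:.0f}")
```

The point of the exercise is the one the abstract makes: with realistic yields and geometry, the detection efficiency is the factor that dominates the usable event count.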
NASA Astrophysics Data System (ADS)
Ai, Lingyu; Kim, Eun-Soo
2018-03-01
We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images only by using a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with the P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially-filtered EIAs (SF-EIAs) are extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.
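Convolution with a periodic δ-function array reduces to summing copies of the image shifted by multiples of the spatial period, which is how the depth-selective filtering described above can be sketched. A minimal NumPy illustration (the 3 × 3 array size follows the abstract; the period and test image are hypothetical):

```python
import numpy as np

def pdfa_filter(eia, period, reps=3):
    """Convolve an elemental image array with a (reps x reps) periodic
    delta-function array of the given spatial period. Convolution with
    deltas reduces to summing shifted copies of the image (circular
    shifts are used here for simplicity)."""
    out = np.zeros_like(eia, dtype=float)
    offset = reps // 2
    for i in range(reps):
        for j in range(reps):
            dy = (i - offset) * period
            dx = (j - offset) * period
            out += np.roll(np.roll(eia, dy, axis=0), dx, axis=1)
    return out
```

Choosing `period` to match an object's depth-dependent elemental-image pitch makes the shifted copies add coherently for that object, which is the refocusing effect the abstract describes.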
Crowdsourcing-based evaluation of privacy in HDR images
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Nemoto, Hiromi; Skodras, Athanassios; Ebrahimi, Touradj
2014-05-01
The ability of High Dynamic Range imaging (HDRi) to capture details in high-contrast environments, making both dark and bright regions clearly visible, has a strong implication on privacy. However, the extent to which HDRi affects privacy when it is used instead of typical Standard Dynamic Range imaging (SDRi) is not yet clear. In this paper, we investigate the effect of HDRi on privacy via crowdsourcing evaluation using the Microworkers platform. Due to the lack of a standard HDRi privacy evaluation dataset, we have created such a dataset containing people of varying gender, race, and age, shot indoors and outdoors under a large range of lighting conditions. We evaluate the tone-mapped versions of these images, obtained by several representative tone-mapping algorithms, using a subjective privacy evaluation methodology. Evaluation was performed using a crowdsourcing-based framework, because it is a popular and effective alternative to traditional lab-based assessment. The results of the experiments demonstrate a significant loss of privacy when even tone-mapped versions of HDR images are used compared to typical SDR images shot with a standard exposure.
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for 2-D imaging via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. The imaging formulation for our method is developed through Fourier integral processing, and the parameters of the antenna array, including the cross-range resolution, required size, and sampling interval, are also examined. Unlike the spatially sequential procedure of inverse synthetic aperture radar (ISAR) imaging, which samples the scattered echoes during multiple snapshot illuminations, the proposed method uses a spatially parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter banks can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided to test the proposed method.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion, and intensity fluctuations that can severely degrade the image quality of electro-optic and thermal imaging systems and, in the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation of biometric sensor systems.
Quantum cascade lasers (QCL) for active hyperspectral imaging
NASA Astrophysics Data System (ADS)
Yang, Quankui; Fuchs, Frank; Wagner, Joachim
2014-04-01
There is an increasing demand for wavelength-agile laser sources covering the mid-infrared (MIR, 3.5-12 µm) wavelength range, among others in active imaging. The MIR range comprises a particularly interesting part of the electromagnetic spectrum for active hyperspectral imaging applications, because the characteristic `fingerprint' absorption spectra of many chemical compounds lie in that range. Conventional semiconductor diode laser technology runs out of steam at such long wavelengths. For many applications, MIR coherent light sources based on solid-state lasers in combination with optical parametric oscillators are too complex and thus bulky and expensive. In contrast, quantum cascade lasers (QCLs) constitute a class of very compact and robust semiconductor-based lasers that are able to cover the mentioned wavelength range using the same semiconductor material system. In this tutorial, a brief review will be given of the state of the art of QCL technology. Special emphasis will be placed on QCL variants with well-defined spectral properties and spectral tunability. As an example of the use of wavelength-agile QCLs for active hyperspectral imaging, stand-off detection of explosives based on imaging backscattering laser spectroscopy will be discussed.
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
Pulsed holographic system for imaging through spatially extended scattering media
NASA Astrophysics Data System (ADS)
Kanaev, A. V.; Judd, K. P.; Lebow, P.; Watnik, A. T.; Novak, K. M.; Lindle, J. R.
2017-10-01
Imaging through scattering media is a highly sought capability for military, industrial, and medical applications. Unfortunately, nearly all recent progress has been achieved in microscopic light propagation and/or light propagation through thin or weak scatterers, which is mostly pertinent to the medical research field. Sensing at long ranges through extended scattering media, for example turbid water or dense fog, still represents a significant challenge, and the best results are demonstrated using conventional approaches of time- or range-gating. The imaging range of such systems is constrained by their ability to distinguish the few ballistic photons that reach the detector from the background, scattered, and ambient photons, as well as from detector noise. Holography can potentially enhance time-gating by taking advantage of extra signal filtering based on the coherence properties of the ballistic photons, as well as by employing coherent addition of multiple frames. In a holographic imaging scheme, ballistic photons of the imaging pulse are reflected from a target and interfered with the reference pulse at the detector, creating a hologram. Related approaches were demonstrated previously in one-way imaging through thin biological samples and other microscopic-scale scatterers. In this work, we investigate the performance of holographic imaging systems under conditions of extreme scattering (less than one signal photon per pixel), demonstrate the advantages of coherent addition of images recovered from holograms, and discuss the dependence of image quality on the ratio of the signal and reference beam powers.
A 3D camera for improved facial recognition
NASA Astrophysics Data System (ADS)
Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim
2004-12-01
We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is capable of locating the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of better than 1 mm at 1 m. The data can be recorded as a set of two images, and is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows the face to be viewed from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
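The range-from-parallax step described above follows the standard triangulation relation Z = f·B/d. A minimal sketch under assumed camera parameters (the focal length, baseline, and disparity values are illustrative, not from the paper):

```python
def range_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulated range to a projected spot: Z = f * B / d,
    with f in pixels, baseline B in meters, and disparity d in pixels."""
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: f = 2000 px, B = 0.1 m, d = 200 px -> Z = 1.0 m.
z = range_from_disparity(2000.0, 0.1, 200.0)
```

With these assumed numbers, a 0.1-pixel error in spot localization changes the range by about Z·(0.1/200) = 0.5 mm at 1 m, consistent in order of magnitude with the sub-millimeter accuracy the abstract claims from sub-pixel spot location.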
Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.
Inchang Choi; Seung-Hwan Baek; Kim, Min H
2017-11-01
For extending the dynamic range of video, it is common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to the fast and complex motion found in natural scenes. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems, deinterlacing and denoising, that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also apply multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with state-of-the-art high-dynamic-range video methods.
Ren, Jiayin; Zhou, Zhongwei; Li, Peng; Tang, Wei; Guo, Jixiang; Wang, Hu; Tian, Weidong
2016-09-01
This study aimed to evaluate an innovative workflow for maxillofacial fracture surgery planning and surgical splint design. Maxillofacial multislice computerized tomography (MSCT) data and dental cone beam computerized tomography (CBCT) data were both obtained from 40 normal adults and 58 adults who had suffered fractures. Each part of the CBCT dentition image was registered into the MSCT image by use of the iterative closest point algorithm. The volumes of the virtual splints designed from the registered MSCT images and from the MSCT images of the same subject were compared. Eighteen patients (group 1) were operated on without any splint. Twenty-one patients (group 2) and 19 patients (group 3) used splints designed from the MSCT images and the registered MSCT images, respectively. The mean errors between the 2 models ranged from 0.53 to 0.92 mm and the RMS errors from 0.38 to 0.69 mm in fracture patients; in normal adults, the mean errors ranged from 0.47 to 0.85 mm and the RMS errors from 0.33 to 0.71 mm. 72.22% of patients in group 1 recovered occlusion, while 85.71% of patients in group 2 and 94.73% of patients in group 3 reconstructed occlusion. The volume of the MSCT-based splints differed statistically significantly from that of the registered-MSCT-based splints in fracture patients, in normal adults, and in the two groups combined (P <0.05). The occlusion recovery rate of group 3 was better than that of groups 1 and 2. The approach of integrating CBCT images into MSCT images for splint design was feasible.
The volume of the splints designed from the MSCT images tended to be smaller than that of the splints designed from the integrated MSCT images. The patients operated on with splints tended to regain occlusion, and those whose splints were designed from the registered MSCT images tended to achieve the best occlusal recovery.
AmeriFlux US-Prr Poker Flat Research Range Black Spruce Forest
Suzuki, Rikie [Japan Agency for Marine-Earth Science and Technology
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-Prr Poker Flat Research Range Black Spruce Forest. Site Description - This site is located in a black spruce forest within the property of the Poker Flat Research Range, University of Alaska Fairbanks. Time-lapse images of the canopy are acquired at the same time to relate the flux data to satellite images.
Three-dimensional Radar Imaging of a Building
2012-12-01
spotlight configuration and H-V (cross) polarization as seen from two different aspect angles. The feature colors correspond to their brightness... cross-ranges but at different heights. This effect may create significant confusion in image interpretation and result in missed target detections... over a range of azimuth angles (centered at 0°) and elevation angles (centered at 0), creating cross-range and height resolution, while
Plastic fiber scintillator response to fast neutrons
NASA Astrophysics Data System (ADS)
Danly, C. R.; Sjue, S.; Wilde, C. H.; Merrill, F. E.; Haight, R. C.
2014-11-01
The Neutron Imaging System at NIF uses an array of plastic scintillator fibers in conjunction with a time-gated imaging system to form an image of the neutron emission from the imploded capsule. By gating on neutrons that have scattered from the 14.1 MeV DT energy to lower energy ranges, an image of the dense, cold fuel around the hotspot is also obtained. An unmoderated spallation neutron beamline at the Weapons Neutron Research facility at Los Alamos was used in conjunction with a time-gated imaging system to measure the yield of a scintillating fiber array over several energy bands ranging from 1 to 15 MeV. The results and comparison to simulation are presented.
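Energy gating on a spallation beamline relies on converting each neutron's time of flight into kinetic energy. A minimal relativistic sketch (the 20 m flight path used in the example is an assumption for illustration, not a figure from the paper):

```python
import math

M_N_MEV = 939.565   # neutron rest-mass energy [MeV]
C = 299_792_458.0   # speed of light [m/s]

def neutron_energy_mev(flight_path_m, tof_s):
    """Relativistic kinetic energy of a neutron from its time of flight:
    E = m_n c^2 (gamma - 1), with beta = L / (t c)."""
    beta = flight_path_m / (tof_s * C)
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return M_N_MEV * (gamma - 1.0)

def tof_for_energy(flight_path_m, e_mev):
    """Inverse relation: time of flight for a given kinetic energy,
    useful for setting the camera gate window of an energy band."""
    gamma = 1.0 + e_mev / M_N_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return flight_path_m / (beta * C)
```

A 14.1 MeV neutron travels at roughly 0.17 c, so lower-energy bands arrive later and can be selected by delaying the gate, which is the time-gated selection the abstract describes.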
High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.
Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre
2017-06-03
Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits us to embed the technology in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline that permits high dynamic range (HDR) spectral imaging, extended from color filter arrays. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation results on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. The data are provided to the community in an image database for further research.
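The HDR part of such a pipeline ultimately merges differently exposed frames of each (de-mosaiced) channel into one radiance estimate. A minimal sketch of a generic weighted HDR merge, not the paper's specific pipeline (the clipping thresholds and linear-response assumption are illustrative):

```python
import numpy as np

def hdr_merge(exposures, times, saturation=0.95, floor=0.05):
    """Merge differently exposed frames of one (de-mosaiced) spectral
    channel into a radiance map. Each pixel is a weighted average of
    exposure-normalized values; clipped or near-black pixels are
    down-weighted. Assumes a linear sensor response in [0, 1]."""
    acc = np.zeros_like(exposures[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(exposures, times):
        w = np.where((img > floor) & (img < saturation), 1.0, 1e-6)
        acc += w * img / t        # normalize by exposure time
        wsum += w
    return acc / wsum
```

For a static scene, pixel values should scale with exposure time, so both frames of a well-exposed pixel vote for the same radiance; only the clipped frame is suppressed.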
Molina, David; Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M
2017-01-01
Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration, including co-occurrence matrix (CM) features and run-length matrix (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Textural measures of three-dimensional brain tumor images are robust neither under dynamic range nor under matrix size changes. Standards should be harmonized to use textural features as imaging biomarkers in radiomic-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in the textural feature calculation of oncological images.
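The robustness criterion used here, a coefficient of variation under 10% across acquisition configurations, is straightforward to compute. A minimal sketch (the sample values are invented for illustration):

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) of a textural measure computed across acquisition
    configurations (e.g., different dynamic ranges or matrix sizes)."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

def is_robust(values, threshold_pct=10.0):
    """Robust if the measure varies by less than the threshold
    (10% is the criterion quoted in the abstract)."""
    return coefficient_of_variation(values) < threshold_pct

# Hypothetical entropy values across four resolution settings:
# near-constant, so this measure would count as robust.
print(is_robust([1.00, 1.01, 0.99, 1.02]))
```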
Multi-beam range imager for autonomous operations
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.
1993-01-01
For space operations from Space Station Freedom, a real-time range imager will be very valuable for refuelling, docking, and space exploration operations. For these applications, as well as many other robotics and remote ranging applications, a small, portable, power-efficient, robust range imager capable of ranging over a few tens of km with 10 cm accuracy is needed. The system developed is based on a well-known pseudo-random modulation technique applied to a laser transmitter, combined with a novel range resolution enhancement technique. In this technique, the transmitter is modulated at a relatively low frequency, of the order of a few MHz, to enhance the signal-to-noise ratio and to ease the stringent systems engineering requirements while accomplishing very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high-precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof-of-concept experiment.
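The core of a PN lidar is cross-correlating the received echo against the transmitted pseudo-random code; the correlation peak gives the round-trip delay, and the chip rate sets the raw range resolution. A minimal sketch with an invented random code and parameters (a real system would use an m-sequence and the resolution-enhancement technique described above):

```python
import numpy as np

rng = np.random.default_rng(0)

def pn_sequence(n):
    """Illustrative +/-1 pseudo-random code (a real system would use an
    m-sequence from a linear-feedback shift register)."""
    return rng.choice([-1.0, 1.0], size=n)

def estimate_delay(code, echo):
    """Circular cross-correlation via FFT; the peak lag is the
    round-trip delay in chip periods."""
    corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
    return int(np.argmax(corr))

code = pn_sequence(1024)
true_delay = 137                            # chips (illustrative)
echo = np.roll(code, true_delay) + 0.5 * rng.normal(size=code.size)
delay = estimate_delay(code, echo)
range_m = delay * (3e8 / (2 * 5e6))         # 5 MHz chip rate -> 30 m/chip
```

The correlation gain of the 1024-chip code pulls the peak well above the noise even at low per-chip SNR, which is why a low modulation frequency can still yield a usable detection; the fine (10 cm) resolution then comes from the clock-based enhancement the abstract describes, not from the chip period.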
Optimization of a dual mode Rowland mount spectrometer used in the 120-950 nm wavelength range
NASA Astrophysics Data System (ADS)
McDowell, M. W.; Bouwer, H. K.
In a recent article, several configurations were described whereby a Rowland mount spectrometer could be modified to cover a wavelength range of 120-950 nm. In one of these configurations, large additional image aberration is introduced which severely limits the spectral resolving power. In the present article, the theoretical imaging properties of this configuration are considered and a simple method is proposed to reduce this aberration. The optimized system possesses an image quality similar to the conventional Rowland mount with the image surface slightly displaced from the Rowland circle but concentric to it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C.
Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems over a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting, at each spatial frequency, a quadratic to the NPS as a function of E. A quantum noise correction factor dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to differ from the beam quality of the image. The method was validated by adapting the ASEh flat field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy.
This is due to the dominance of secondary quantum noise in CR. The use of the quantum noise correction factor reduced the difference between the model and the measured NPS to generally within 4%. The quantum noise correction improved the conversion of ASEh images to CRc images but made no difference for the conversion to CSI images. Conclusions: A practical method for estimating the NPS at any dose and over a range of beam qualities for mammography has been demonstrated. The noise model was incorporated into a methodology for converting an image to appear as if acquired on a different detector. The method can now be extended to work for a wide range of beam qualities and can be applied to the conversion of mammograms.
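The noise decomposition described in the Methods, fitting a quadratic of the NPS against E at each spatial frequency, separates the three sources because electronic noise is independent of E, quantum noise scales linearly, and structure noise scales quadratically. A minimal sketch of that fit (synthetic data, not the paper's measurements):

```python
import numpy as np

def fit_noise_components(E, nps):
    """Fit NPS(E) = a + b*E + c*E**2 at each spatial frequency:
    a -> electronic noise, b*E -> quantum noise, c*E**2 -> structure noise.
    E: exposures, shape (n_exposures,); nps: shape (n_exposures, n_freq)."""
    coeffs = np.polyfit(E, nps, deg=2)   # shape (3, n_freq): [c, b, a]
    c, b, a = coeffs
    return a, b, c
```

With the components in hand, the model can predict the NPS at any E, which is what lets one system's images be degraded to match another's noise; the beam-quality correction factor in the paper then scales the quantum (linear) term.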
Validation of a low dose simulation technique for computed tomography images.
Muenzel, Daniela; Koehler, Thomas; Brown, Kevin; Zabić, Stanislav; Fingerle, Alexander A; Waldt, Simone; Bendik, Edgar; Zahel, Tina; Schneider, Armin; Dobritz, Martin; Rummeny, Ernst J; Noël, Peter B
2014-01-01
Evaluation of a new software tool for the generation of simulated low-dose computed tomography (CT) images from an original higher-dose scan. Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisitions at lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. The image characteristics of the simulated low-dose scans were similar to the originals. The mean overall discrepancies of image noise and CT values were -1.2% (range -9% to 3.2%) and -0.2% (range -8.2% to 3.2%), respectively (p>0.05). Confidence intervals of the discrepancies ranged between 0.9-10.2 HU (noise) and 1.9-13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. The simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities with regard to staff education, protocol optimization, and the introduction of new techniques.
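A much-simplified, image-domain version of low-dose simulation can illustrate the underlying idea: quantum noise variance scales inversely with mAs, so noise is injected to make a high-dose image match a target lower dose. This is only a sketch; the validated tool in the abstract operates on raw projection data, which captures streaks and photon starvation that this simplification ignores:

```python
import numpy as np

def simulate_low_dose(image_hu, sigma_ref, mas_orig, mas_target, seed=None):
    """Image-domain sketch of low-dose simulation. Quantum noise variance
    scales as 1/mAs, so going from mas_orig to mas_target requires adding
    zero-mean Gaussian noise with std sigma_ref*sqrt(mas_orig/mas_target - 1),
    where sigma_ref is the noise std of the original image."""
    rng = np.random.default_rng(seed)
    sigma_add = sigma_ref * np.sqrt(mas_orig / mas_target - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)
```

Because independent variances add, the simulated image ends up with total noise sigma_ref·sqrt(mas_orig/mas_target), the level expected at the lower tube current.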
NASA Astrophysics Data System (ADS)
Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung
2017-03-01
Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) under variations in key imaging parameters, quality metrics were computed using a LUNGMAN phantom that included a ground-glass opacity (GGO) tumor. Reconstructed images were acquired from a total of 41 projection images over an angular range of +/-20°. We evaluated the contrast-to-noise ratio (CNR) and the artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter, and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections. In this study, the proper values of the relaxation parameter for the zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of the ASF were obtained in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, the BP initial guess for the ART method could provide better image quality than the ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range of the CDT system.
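ART itself is the classic Kaczmarz row-action update with a relaxation parameter, the quantity tuned in this study. A minimal dense-matrix sketch (the tiny system below is illustrative; a CDT problem would use a sparse projection matrix with thousands of rays):

```python
import numpy as np

def art(A, b, n_iters=20, relax=0.4, x0=None):
    """Algebraic reconstruction technique (relaxed Kaczmarz).
    Each row i of A is one ray measurement b[i]; the update projects
    the current estimate toward that ray's hyperplane, scaled by
    `relax`. x0 allows comparing a zero-image initial guess (None)
    with a back-projection initial guess (e.g., A.T @ b, scaled)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

Small relaxation values damp the inconsistency between rays (noise, limited angles) at the cost of slower convergence, which matches the abstract's finding that the best relaxation value depends on the initial guess.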
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachmann, B., E-mail: bachmann2@llnl.gov; Field, J.; Masters, N.
We have developed and fielded x-ray penumbral imaging on the National Ignition Facility in order to enable sub-10 μm resolution imaging of stagnated plasma cores (hot spots) of spherically shock compressed spheres and shell implosion targets. By utilizing circular tungsten and tantalum apertures with diameters ranging from 20 μm to 2 mm, in combination with image plate and gated x-ray detectors as well as imaging magnifications ranging from 4 to 64, we have demonstrated high-resolution imaging of hot spot plasmas at x-ray energies above 5 keV. Here we give an overview of the experimental design criteria involved and demonstrate the most relevant influences on the reconstruction of x-ray penumbral images, as well as mitigation strategies for image-degrading effects such as over-exposed pixels, artifacts, and photon-limited source emission. We describe experimental results showing the advantages of x-ray penumbral imaging over conventional Fraunhofer and photon-limited pinhole imaging and showcase how internal hot spot microstructures can be resolved.
Neural-net-based image matching
NASA Astrophysics Data System (ADS)
Jerebko, Anna K.; Barabanov, Nikita E.; Luciv, Vadim R.; Allinson, Nigel M.
2000-04-01
The paper describes a neural-based method for matching spatially distorted image sets. The matching of partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image co-ordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, in different seasons, and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.
Three-dimensional ghost imaging lidar via sparsity constraint
NASA Astrophysics Data System (ADS)
Gong, Wenlin; Zhao, Chengqiang; Yu, Hong; Chen, Mingliang; Xu, Wendong; Han, Shensheng
2016-05-01
Three-dimensional (3D) remote imaging has attracted increasing attention for capturing a target's characteristics. Although great progress in 3D remote imaging has been made with methods such as scanning imaging lidar and pulsed floodlight-illumination imaging lidar, either the detection range or the application mode of present methods is limited. Ghost imaging via sparsity constraint (GISC) enables the reconstruction of a two-dimensional N-pixel image from far fewer than N measurements. By combining the GISC technique with the depth information of targets captured through time-resolved measurements, we report a 3D GISC lidar system and experimentally show that a 3D scene at about 1.0 km range can be stably reconstructed with global measurements even below the Nyquist limit. Compared with existing 3D optical imaging methods, 3D GISC offers both high efficiency in information extraction and high sensitivity in detection. This approach can be generalized to nonvisible wavebands and applied to other 3D imaging areas.
Kozachuk, Madalena S; Sham, Tsun-Kong; Martin, Ronald R; Nelson, Andrew J; Coulthard, Ian; McElhone, John P
2018-06-22
A daguerreotype image, the first commercialized photographic process, is composed of silver-mercury, and often silver-mercury-gold amalgam particles on the surface of a silver-coated copper plate. Specular and diffuse reflectance of light from these image particles produces the range of gray tones that typify these 19 th century images. By mapping the mercury distribution with rapid-scanning, synchrotron-based micro-X-ray fluorescence (μ-XRF) imaging, full portraits, which to the naked eye are obscured entirely by extensive corrosion, can be retrieved in a non-invasive, non-contact, and non-destructive manner. This work furthers the chemical understanding regarding the production of these images and suggests that mercury is retained in the image particles despite surface degradation. Most importantly, μ-XRF imaging provides curators with an image recovery method for degraded daguerreotypes, even if the artifact's condition is beyond traditional conservation treatments.
Quantum efficiency and dark current evaluation of a backside illuminated CMOS image sensor
NASA Astrophysics Data System (ADS)
Vereecke, Bart; Cavaco, Celso; De Munck, Koen; Haspeslagh, Luc; Minoglou, Kyriaki; Moore, George; Sabuncuoglu, Deniz; Tack, Klaas; Wu, Bob; Osman, Haris
2015-04-01
We report on the development and characterization of monolithic backside illuminated (BSI) imagers at imec. Different surface passivation, anti-reflective coatings (ARCs), and anneal conditions were implemented and their effects on dark current (DC) and quantum efficiency (QE) are analyzed. Two different single-layer ARC materials were developed, for visible light and near-UV applications, respectively. QE above 75% over the entire visible spectrum range from 400 to 700 nm is measured. In the spectral range from 260 to 400 nm wavelength, QE values above 50% over the entire range are achieved. A new technique, high-pressure hydrogen anneal at 20 atm, was applied to photodiodes, and an improvement in DC of 30% was observed for the BSI imager with HfO2 as ARC as well as for the front side imager. The entire BSI process was developed on 200 mm wafers and evaluated on test diode structures. The know-how is then transferred to real imager sensor arrays.
Mattioli Della Rocca, Francescopaolo
2018-01-01
This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with a 96 × 40 array of 8.25 µm pitch, 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
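The short/mid/long back-to-back exposures described above can be combined per pixel with a standard multi-exposure HDR rule. The sketch below assumes a simple "longest unsaturated exposure wins" selection; the chip's actual in-pixel summation and combination logic is not detailed in the abstract, so this is a generic illustration only:

```python
def hdr_photon_flux(counts, exposures, full_scale):
    """Combine multi-exposure photon counts into one flux estimate by taking
    the longest exposure that has not saturated (generic multi-exposure HDR
    rule; not the chip's actual on-pixel logic)."""
    # The longest valid exposure gives the best shot-noise-limited estimate.
    for c, t in sorted(zip(counts, exposures), key=lambda p: -p[1]):
        if c < full_scale:
            return c / t
    # All exposures clipped: report the lower bound from the shortest one.
    c, t = min(zip(counts, exposures), key=lambda p: p[1])
    return c / t
```

For a pixel that saturates only in the long exposure, the mid exposure is used instead, which is what extends the composite dynamic range past any single exposure.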
Fast, High-Resolution Terahertz Radar Imaging at 25 Meters
NASA Technical Reports Server (NTRS)
Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Talukder, Ashit; Panangadan, Anand V.; Peay, Chris S.; Siegel, Peter H.
2010-01-01
We report improvements in the scanning speed and standoff range of an ultra-wide bandwidth terahertz (THz) imaging radar for person-borne concealed object detection. Fast beam scanning of the single-transceiver radar is accomplished by rapidly deflecting a flat, light-weight subreflector in a confocal Gregorian optical geometry. With RF back-end improvements also implemented, the radar imaging rate has increased by a factor of about 30 compared to that achieved previously in a 4 m standoff prototype instrument. In addition, a new 100 cm diameter ellipsoidal aluminum reflector yields beam spot diameters of approximately 1 cm over a 50x50 cm field of view at a range of 25 m, although some aberrations are observed that probably arise from misaligned optics. Through-clothes images of a concealed threat at 25 m range, acquired in 5 seconds, are presented, and the impact of reduced signal-to-noise from an even faster frame rate is analyzed. These results inform the system requirements for eventually achieving sub-second or video-rate THz radar imaging.
NASA Astrophysics Data System (ADS)
Marchand, Paul J.; Szlag, Daniel; Bouwens, Arno; Lasser, Theo
2018-03-01
Visible light optical coherence tomography has attracted great interest in recent years for spectroscopic and high-resolution retinal and cerebral imaging. Here, we present an extended-focus optical coherence microscopy system operating from the visible to the near-infrared wavelength range for high axial and lateral resolution imaging of cortical structures in vivo. The system exploits an ultrabroad illumination spectrum centered in the visible wavelength range (λc = 650 nm, Δλ ˜ 250 nm) offering a submicron axial resolution (˜0.85 μm in water) and an extended-focus configuration providing a high lateral resolution of ˜1.4 μm maintained over ˜150 μm in depth in water. The system's axial and lateral resolution are first characterized using phantoms, and its imaging performance is then demonstrated by imaging the vasculature, myelinated axons, and neuronal cells in the first layers of the somatosensory cortex of mice in vivo.
SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications
NASA Astrophysics Data System (ADS)
Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.
2005-08-01
A scientific camera system having high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4 Mega-pixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.
An automated and universal method for measuring mean grain size from a digital image of sediment
Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.
2010-01-01
Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
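As a rough illustration of calibration-free, spectral scale estimation, the 1-D sketch below returns the wavelength (in pixels) of the strongest Fourier component of an intensity profile. This is a heavily simplified stand-in (1-D, brute-force DFT) for the paper's 2-D Fourier method, shown only to make the idea of reading a characteristic scale directly from the spectrum concrete:

```python
import cmath

def dominant_scale(signal):
    """Return the wavelength (in pixels) of the strongest spectral component
    of a 1-D intensity profile. A simplified, 1-D stand-in for
    calibration-free Fourier grain-size estimation."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]          # remove the DC term
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):          # brute-force DFT bins 1 .. n/2
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    return n / best_k                       # wavelength of the strongest bin
```

A periodic texture with period 8 pixels yields a scale estimate of 8, with no calibration constant involved.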
Silicon Nanoparticles as Hyperpolarized Magnetic Resonance Imaging Agents
Aptekar, Jacob W.; Cassidy, Maja C.; Johnson, Alexander C.; Barton, Robert A.; Lee, Menyoung; Ogier, Alexander C.; Vo, Chinh; Anahtar, Melis N.; Ren, Yin; Bhatia, Sangeeta N.; Ramanathan, Chandrasekhar; Cory, David G.; Hill, Alison L.; Mair, Ross W.; Rosen, Matthew S.; Walsworth, Ronald L.
2014-01-01
Magnetic resonance imaging of hyperpolarized nuclei provides high image contrast with little or no background signal. To date, in-vivo applications of pre-hyperpolarized materials have been limited by relatively short nuclear spin relaxation times. Here, we investigate silicon nanoparticles as a new type of hyperpolarized magnetic resonance imaging agent. Nuclear spin relaxation times for a variety of Si nanoparticles are found to be remarkably long, ranging from many minutes to hours at room temperature, allowing hyperpolarized nanoparticles to be transported, administered, and imaged on practical time scales. Additionally, we demonstrate that Si nanoparticles can be surface functionalized using techniques common to other biologically targeted nanoparticle systems. These results suggest that Si nanoparticles can be used as a targetable, hyperpolarized magnetic resonance imaging agent with a large range of potential applications. PMID:19950973
Silicon nanoparticles as hyperpolarized magnetic resonance imaging agents.
Aptekar, Jacob W; Cassidy, Maja C; Johnson, Alexander C; Barton, Robert A; Lee, Menyoung; Ogier, Alexander C; Vo, Chinh; Anahtar, Melis N; Ren, Yin; Bhatia, Sangeeta N; Ramanathan, Chandrasekhar; Cory, David G; Hill, Alison L; Mair, Ross W; Rosen, Matthew S; Walsworth, Ronald L; Marcus, Charles M
2009-12-22
Magnetic resonance imaging of hyperpolarized nuclei provides high image contrast with little or no background signal. To date, in vivo applications of prehyperpolarized materials have been limited by relatively short nuclear spin relaxation times. Here, we investigate silicon nanoparticles as a new type of hyperpolarized magnetic resonance imaging agent. Nuclear spin relaxation times for a variety of Si nanoparticles are found to be remarkably long, ranging from many minutes to hours at room temperature, allowing hyperpolarized nanoparticles to be transported, administered, and imaged on practical time scales. Additionally, we demonstrate that Si nanoparticles can be surface functionalized using techniques common to other biologically targeted nanoparticle systems. These results suggest that Si nanoparticles can be used as a targetable, hyperpolarized magnetic resonance imaging agent with a large range of potential applications.
Tracking moving radar targets with parallel, velocity-tuned filters
Bickel, Douglas L.; Harmony, David W.; Bielek, Timothy P.; Hollowell, Jeff A.; Murray, Margaret S.; Martinez, Ana
2013-04-30
Radar data associated with radar illumination of a movable target is processed to monitor motion of the target. A plurality of filter operations are performed in parallel on the radar data so that each filter operation produces target image information. The filter operations are defined to have respectively corresponding velocity ranges that differ from one another. The target image information produced by one of the filter operations represents the target more accurately than the target image information produced by the remainder of the filter operations when a current velocity of the target is within the velocity range associated with the one filter operation. In response to the current velocity of the target being within the velocity range associated with the one filter operation, motion of the target is tracked based on the target image information produced by the one filter operation.
An HDR imaging method with DTDI technology for push-broom cameras
NASA Astrophysics Data System (ADS)
Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin
2018-03-01
Conventionally, high dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, it is hard for this technique to be applied to push-broom remote sensing cameras. For the sake of HDR imaging in push-broom remote sensing applications, the present paper proposes an innovative method which can generate HDR images without redundant image sensors or optical components. Specifically, this paper adopts an area array CMOS (complementary metal oxide semiconductor) sensor with the digital domain time-delay-integration (DTDI) technology for imaging, instead of adopting more than one row of image sensors, thereby taking more than one picture with different exposures. A new HDR image can then be achieved by fusing the two original images with a simple algorithm. In the conducted experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is proved to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
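The abstract does not spell out its "simple algorithm" for fusing the two exposures, but a minimal per-pixel rule of the kind commonly used is sketched below: keep the long-exposure value where it is unsaturated, otherwise substitute the short-exposure value scaled by the exposure ratio. Names and the fusion rule are assumptions for illustration:

```python
def fuse_hdr(short_img, long_img, exposure_ratio, full_scale):
    """Per-pixel fusion of a short- and a long-exposure image (lists of rows):
    use the long exposure where it is below saturation, else the scaled
    short exposure. A minimal sketch, not the paper's exact algorithm."""
    return [
        [long_px if long_px < full_scale else short_px * exposure_ratio
         for short_px, long_px in zip(short_row, long_row)]
        for short_row, long_row in zip(short_img, long_img)
    ]
```

Note that the reported 26.02 dB gain is exactly what an exposure ratio of 20 would give, since 20·log10(20) ≈ 26.02 dB.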
In vivo imaging of the rodent eye with swept source/Fourier domain OCT
Liu, Jonathan J.; Grulkowski, Ireneusz; Kraus, Martin F.; Potsaid, Benjamin; Lu, Chen D.; Baumann, Bernhard; Duker, Jay S.; Hornegger, Joachim; Fujimoto, James G.
2013-01-01
Swept source/Fourier domain OCT is demonstrated for in vivo imaging of the rodent eye. Using commercial swept laser technology, we developed a prototype OCT imaging system for small animal ocular imaging operating in the 1050 nm wavelength range at an axial scan rate of 100 kHz with ~6 µm axial resolution. The high imaging speed enables volumetric imaging with high axial scan densities, measuring high flow velocities in vessels, and repeated volumetric imaging over time. The 1050 nm wavelength light provides increased penetration into tissue compared to standard commercial OCT systems at 850 nm. The long imaging range enables multiple operating modes for imaging the retina, posterior eye, as well as anterior eye and full eye length. A registration algorithm using orthogonally scanned OCT volumetric data sets which can correct motion on a per A-scan basis is applied to compensate motion and merge motion corrected volumetric data for enhanced OCT image quality. Ultrahigh speed swept source OCT is a promising technique for imaging the rodent eye, providing comprehensive information on the cornea, anterior segment, lens, vitreous, posterior segment, retina and choroid. PMID:23412778
Objective for EUV microscopy, EUV lithography, and x-ray imaging
Bitter, Manfred; Hill, Kenneth W.; Efthimion, Philip
2016-05-03
Disclosed is an imaging apparatus for EUV spectroscopy, EUV microscopy, EUV lithography, and x-ray imaging. This new imaging apparatus could, in particular, make significant contributions to EUV lithography at wavelengths in the range from 10 to 15 nm, which is presently being developed for the manufacturing of the next-generation integrated circuits. The disclosure provides a novel adjustable imaging apparatus that allows for the production of stigmatic images in x-ray imaging, EUV imaging, and EUVL. The imaging apparatus of the present invention incorporates additional properties compared to previously described objectives. The use of a pair of spherical reflectors containing a concave and convex arrangement has been applied to an EUV imaging system to allow for the image and optics to all be placed on the same side of a vacuum chamber. Additionally, the two spherical reflector segments previously described have been replaced by two full spheres or, more precisely, two spherical annuli, so that the total photon throughput is largely increased. Finally, the range of permissible Bragg angles and possible magnifications of the objective has been largely increased.
High-dynamic-range scene compression in humans
NASA Astrophysics Data System (ADS)
McCann, John J.
2006-02-01
Single-pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, the visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model scene-dependent appearances, the problem remains that an analysis of the scene is needed to calculate the scene-dependent strengths of each of the filters for each frequency.
Automatic detection of blurred images in UAV image sets
NASA Astrophysics Data System (ADS)
Sieberth, Till; Wackrow, Rene; Chandler, Jim H.
2016-12-01
Unmanned aerial vehicles (UAV) have become an interesting and active research topic for photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometrical resolution, due to the low flight altitudes combined with a high resolution camera. UAV image flights are also cost effective and have become attractive for many applications including change detection in small scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process, which is based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images fast and reliably to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing.
Creating internally a comparable image makes the method independent of additional images. However, the calculated blur value named SIEDS (saturation image edge difference standard-deviation) on its own does not provide an absolute number to judge if an image is blurred or not. To achieve a reliable judgement of image sharpness the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method was tested using a range of different UAV datasets. Two datasets will be presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values are optically correct, making the algorithm applicable for UAV datasets. Additionally, a close range dataset was processed to determine whether the method is also useful for close range applications. The results show that the method is also reliable for close range images, which significantly extends the field of application for the algorithm.
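The "create a comparison image internally" idea above can be illustrated with a re-blur test: blur the input again and compare edge energy before and after, since a sharp image loses much of its gradient energy under blurring while an already-blurred image changes little. This 1-D sketch with a 3-tap mean filter is only in the spirit of the method; the actual SIEDS measure uses saturation-channel edge images and a standard deviation, which are not reproduced here:

```python
def resharpen_ratio(row):
    """Ratio of gradient energy after re-blurring to before. Values near 1.0
    suggest the input was already blurred; low values suggest it was sharp.
    1-D sketch, not the paper's SIEDS measure."""
    n = len(row)
    # 3-tap mean filter with edge clamping as the internal comparison image.
    blurred = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3.0
               for i in range(n)]
    energy = lambda s: sum((s[i + 1] - s[i]) ** 2 for i in range(len(s) - 1))
    e0 = energy(row)
    return energy(blurred) / e0 if e0 else 1.0
```

As in the paper, the raw value is only meaningful relative to other images: one would rank the ratios across a dataset rather than apply a fixed absolute threshold.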
Characterization of modulated time-of-flight range image sensors
NASA Astrophysics Data System (ADS)
Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.
2009-01-01
A number of full field image sensors have been developed that are capable of simultaneously measuring intensity and distance (range) for every pixel in a given scene using an indirect time-of-flight measurement technique. A light source is intensity modulated at a frequency between 10 and 100 MHz, and an image sensor is modulated at the same frequency, synchronously sampling light reflected from objects in the scene (homodyne detection). The time of flight is manifested as a phase shift in the illumination modulation envelope, which can be determined from the sampled data simultaneously for each pixel in the scene. This paper presents a method of characterizing the high frequency modulation response of these image sensors, using a pico-second laser pulser. The characterization results allow the optimal operating parameters, such as the modulation frequency, to be identified in order to maximize the range measurement precision for a given sensor. A number of potential sources of error exist when using these sensors, including deficiencies in the modulation waveform shape, duty cycle, or phase, resulting in contamination of the resultant range data. From the characterization data these parameters can be identified and compensated for by modifying the sensor hardware or through post processing of the acquired range measurements.
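The phase-to-range conversion underlying these sensors (and the scannerless systems in the records above) can be sketched with the standard four-bucket homodyne algorithm; the sampling convention and variable names below are illustrative, not taken from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_samples(a0, a1, a2, a3, mod_freq_hz):
    """Four-bucket indirect-ToF range estimate: a0..a3 are pixel samples with
    the sensor modulation offset 0, 90, 180 and 270 degrees relative to the
    illumination (illustrative convention)."""
    phase = math.atan2(a1 - a3, a0 - a2)   # phase shift of modulation envelope
    if phase < 0:
        phase += 2 * math.pi               # fold into [0, 2*pi)
    # Round trip: one modulation period corresponds to c / (2 f) of range.
    return phase / (2 * math.pi) * C / (2 * mod_freq_hz)
```

At 30 MHz the unambiguous range c/(2f) is about 5 m; raising the modulation frequency improves range precision at the cost of a shorter unambiguous interval, which is why characterizing the modulation response matters for choosing operating parameters.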
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
Habitable Exoplanet Imager Optical Telescope Concept Design
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2017-01-01
Habitable Exoplanet Imaging Mission (HabEx) is a concept for a mission to directly image and characterize planetary systems around Sun-like stars. In addition to the search for life on Earth-like exoplanets, HabEx will enable a broad range of general astrophysics science enabled by 100 to 2500 nm spectral range and 3 x 3 arc-minute FOV. HabEx is one of four mission concepts currently being studied for the 2020 Astrophysics Decadal Survey.
Habitable Exoplanet Imager: Optical Telescope Structural Design and Performance Prediction
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2017-01-01
Habitable Exoplanet Imaging Mission (HabEx) is a concept for a mission to directly image and characterize planetary systems around Sun-like stars. In addition to the search for life on Earth-like exoplanets, HabEx will enable a broad range of general astrophysics science enabled by 100 to 2500 nm spectral range and 3 x 3 arc-minute FOV. HabEx is one of four mission concepts currently being studied for the 2020 Astrophysics Decadal Survey.
Dynamic granularity of imaging systems
Geissel, Matthias; Smith, Ian C.; Shores, Jonathon E.; ...
2015-11-04
Imaging systems that include a specific source, imaging concept, geometry, and detector have unique properties such as signal-to-noise ratio, dynamic range, spatial resolution, distortions, and contrast. Some of these properties are inherently connected, particularly dynamic range and spatial resolution. It must be emphasized that spatial resolution is not a single number but must be seen in the context of dynamic range and consequently is better described by a function or distribution. We introduce the “dynamic granularity” G_dyn as a standardized, objective relation between a detector’s spatial resolution (granularity) and dynamic range for complex imaging systems in a given environment rather than the widely found characterization of detectors such as cameras or films by themselves. We found that this relation can partly be explained through consideration of the signal’s photon statistics, background noise, and detector sensitivity, but a comprehensive description including some unpredictable data such as dust, damages, or an unknown spectral distribution will ultimately have to be based on measurements. Measured dynamic granularities can be objectively used to assess the limits of an imaging system’s performance including all contributing noise sources and to qualify the influence of alternative components within an imaging system. Our article explains the construction criteria to formulate a dynamic granularity and compares measured dynamic granularities for different detectors used in the X-ray backlighting scheme employed at Sandia’s Z-Backlighter facility.
Atmospheric Science Data Center
2014-05-15
article title: Front Range of the Rockies View ... north and east. Denver is situated just east of the Front Range of the Rocky Mountains, located in the lower right of the images. The ... of erosion. Scattered cumulus clouds floating above the mountain peaks are visible in these images, and stand out most dramatically in ...
NASA Astrophysics Data System (ADS)
Karunamuni, R.; Maidment, A. D. A.
2014-08-01
Contrast-enhanced (CE) dual-energy (DE) x-ray breast imaging uses a low- and high-energy x-ray spectral pair to eliminate soft-tissue signal variation and thereby increase the detectability of exogenous imaging agents. Currently, CEDE breast imaging is performed with iodinated contrast agents. These compounds are limited by several deficiencies, including rapid clearance and poor tumor targeting ability. The purpose of this work is to identify novel contrast materials whose contrast-to-noise ratio (CNR) is comparable or superior to that of iodine in the mammographic energy range. A monoenergetic DE subtraction framework was developed to calculate the DE signal intensity resulting from the logarithmic subtraction of the low- and high-energy signal intensities. A weighting factor is calculated to remove the dependence of the DE signal on the glandularity of the breast tissue. Using the DE signal intensity and weighting factor, the CNR for materials with atomic numbers (Z) ranging from 1 to 79 are computed for energy pairs between 10 and 50 keV. A group of materials with atomic numbers ranging from 42 to 63 were identified to exhibit the highest levels of CNR in the mammographic energy range. Several of these materials have been formulated as nanoparticles for various applications but none, apart from iodine, have been investigated as CEDE breast imaging agents. Within this group of materials, the necessary dose fraction to the LE image decreases as the atomic number increases. By reducing the dose to the LE image, the DE subtraction technique will not provide an anatomical image of sufficient quality to accompany the contrast information. Therefore, materials with Z from 42 to 52 provide nearly optimal values of CNR with energy pairs and dose fractions that provide good anatomical images. This work is intended to inspire further research into new materials for optimized CEDE breast functional imaging.
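The weighted logarithmic subtraction at the core of this framework can be sketched directly. The weight that removes the glandularity dependence follows from the Beer-Lambert law: a swap of adipose for glandular tissue must shift ln(I_HE) and w·ln(I_LE) equally. The symbols below are illustrative linear attenuation coefficients, not the paper's notation:

```python
import math

def de_signal(i_low, i_high, w):
    """Logarithmic dual-energy subtraction: S = ln(I_HE) - w * ln(I_LE)."""
    return math.log(i_high) - w * math.log(i_low)

def tissue_cancelling_weight(mu_g_le, mu_g_he, mu_a_le, mu_a_he):
    """Weighting factor that removes the glandularity dependence of S.
    mu_g_* / mu_a_* are glandular / adipose attenuation coefficients at the
    low (LE) and high (HE) energies (illustrative symbols)."""
    return (mu_g_he - mu_a_he) / (mu_g_le - mu_a_le)
```

With this weight, a purely adipose path and a purely glandular path of the same thickness give the same DE signal, so only the contrast material remains visible.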
Karunamuni, R; Maidment, A D A
2014-08-07
Contrast-enhanced (CE) dual-energy (DE) x-ray breast imaging uses a low- and high-energy x-ray spectral pair to eliminate soft-tissue signal variation and thereby increase the detectability of exogenous imaging agents. Currently, CEDE breast imaging is performed with iodinated contrast agents. These compounds are limited by several deficiencies, including rapid clearance and poor tumor targeting ability. The purpose of this work is to identify novel contrast materials whose contrast-to-noise ratio (CNR) is comparable or superior to that of iodine in the mammographic energy range. A monoenergetic DE subtraction framework was developed to calculate the DE signal intensity resulting from the logarithmic subtraction of the low- and high-energy signal intensities. A weighting factor is calculated to remove the dependence of the DE signal on the glandularity of the breast tissue. Using the DE signal intensity and weighting factor, the CNR for materials with atomic numbers (Z) ranging from 1 to 79 are computed for energy pairs between 10 and 50 keV. A group of materials with atomic numbers ranging from 42 to 63 were identified to exhibit the highest levels of CNR in the mammographic energy range. Several of these materials have been formulated as nanoparticles for various applications but none, apart from iodine, have been investigated as CEDE breast imaging agents. Within this group of materials, the necessary dose fraction to the LE image decreases as the atomic number increases. By reducing the dose to the LE image, the DE subtraction technique will not provide an anatomical image of sufficient quality to accompany the contrast information. Therefore, materials with Z from 42 to 52 provide nearly optimal values of CNR with energy pairs and dose fractions that provide good anatomical images. This work is intended to inspire further research into new materials for optimized CEDE breast functional imaging.
Tone mapping infrared images using conditional filtering-based multi-scale retinex
NASA Astrophysics Data System (ADS)
Luo, Haibo; Xu, Lingyun; Hui, Bin; Chang, Zheng
2015-10-01
Tone mapping can be used to compress the dynamic range of image data so that it fits within the range of the reproduction medium and human vision. The original infrared images captured with infrared focal plane arrays (IFPA) are high-dynamic-range images, so tone mapping is an important component of infrared imaging systems and has become an active topic in recent years. In this paper, we present a tone mapping framework using multi-scale retinex. First, a Conditional Gaussian Filter (CGF) is designed to suppress the "halo" effect. Second, the original infrared image is decomposed into a set of images representing the mean of the image at different spatial resolutions by applying the CGF at different scales, and a set of images representing the multi-scale details of the original image is then produced by dividing the original image pointwise by the decomposed images. Third, the final detail image is reconstructed as a weighted sum of the multi-scale detail images. Finally, histogram scaling and clipping are applied to remove outliers and scale the detail image; 0.1% of the pixels are clipped at each extremity of the histogram. Experimental results show that the proposed algorithm efficiently increases local contrast while preventing the "halo" effect and provides a good visual rendition.
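The decompose/divide/recombine pipeline above can be sketched with a plain Gaussian blur standing in for the paper's Conditional Gaussian Filter (the halo-suppressing conditioning is the authors' contribution and is omitted here); the 0.1% histogram clipping follows the abstract, while the scales, weights, and log-domain detail combination are illustrative assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian blur via FFT with periodic boundaries (plain NumPy).
    The Fourier transform of a pixel-domain Gaussian of std `sigma`
    is exp(-2 * pi^2 * sigma^2 * f^2)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2.0 * np.pi**2 * sigma**2 * (fy**2 + fx**2))
    return np.fft.ifft2(np.fft.fft2(img) * H).real

def multiscale_retinex_tonemap(img, sigmas=(4, 16, 64),
                               weights=(1/3, 1/3, 1/3), clip=0.001):
    """Multi-scale retinex tone mapping: divide by per-scale local means,
    combine detail images, then clip 0.1% of pixels at each histogram end."""
    img = img.astype(np.float64) + 1e-6        # avoid division by zero
    details = [img / (gaussian_blur(img, s) + 1e-6) for s in sigmas]
    detail = sum(w * np.log(d) for w, d in zip(weights, details))
    lo, hi = np.quantile(detail, [clip, 1.0 - clip])
    return np.clip((detail - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```

Applied to a 14-bit IFPA frame, this maps the scene into a unit-range display image while the per-scale division preserves local detail.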
Uji, Akihito; Abdelfattah, Nizar Saleh; Boyer, David S.; Balasubramanian, Siva; Lei, Jianqin; Sadda, SriniVas R.
2017-01-01
Purpose To investigate the level of inaccuracy of retinal thickness measurements in tilted and axially stretched optical coherence tomography (OCT) images. Methods A consecutive series of 50 eyes of 50 patients with age-related macular degeneration was included in this study, and Cirrus HD-OCT images through the foveal center were used for the analysis. The foveal thickness was measured in three ways: (1) parallel to the orientation of the A-scan (Tx), (2) perpendicular to the retinal pigment epithelium (RPE) surface in the instrument-displayed aspect ratio image (Ty), and (3) perpendicular to the RPE surface in a native aspect ratio image (Tz). Mathematical modeling was performed to estimate the measurement error. Results The measurement error was larger in tilted images with a greater angle of tilt. In the simulation, with axial stretching by a factor of 2, the Ty/Tz ratio was > 1.05 at tilt angles of 13° to 18° and 72° to 77°, > 1.10 at tilt angles of 19° to 31° and 59° to 71°, and > 1.20 at angles of 32° to 58°. Notably, with even more axial stretching, the Ty/Tz ratio is larger still. The Tx/Tz ratio was smaller than the Ty/Tz ratio at angles ranging from 0° to 54°. The actual patient data showed good agreement with the simulation. The Ty/Tz ratio was greater than 1.05 (5% error) at angles of 13° to 18° and 72° to 77°, greater than 1.10 (10% error) at angles of 19° to 31° and 59° to 71°, and greater than 1.20 (20% error) at angles of 32° to 58° in images axially stretched by a factor of 2 (b/a = 2), which is typical of most OCT instrument displays. Conclusions Retinal thickness measurements obtained perpendicular to the RPE surface were overestimated in tilted and axially stretched OCT images. Translational Relevance If accurate measurements are to be obtained, images with a native aspect ratio, as in microscopy, must be used. PMID:28299239
Image registration reveals central lens thickness minimally increases during accommodation
Schachar, Ronald A; Mani, Majid; Schachar, Ira H
2017-01-01
Purpose To evaluate anterior chamber depth, central crystalline lens thickness and lens curvature during accommodation. Setting California Retina Associates, El Centro, CA, USA. Design Healthy volunteer, prospective, clinical research swept-source optical coherence biometric image registration study of accommodation. Methods Ten subjects (4 females and 6 males) with an average age of 22.5 years (range: 20–26 years) participated in the study. A 45° beam splitter attached to a Zeiss IOLMaster 700 (Carl Zeiss Meditec Inc., Jena, Germany) biometer enabled simultaneous imaging of the cornea, anterior chamber, entire central crystalline lens and fovea in the dilated right eyes of subjects before, and during, focus on a target 11 cm from the cornea. Images with superimposable foveal images, obtained before and during accommodation, that met all of the predetermined alignment criteria were selected for comparison. This registration requirement ensured that changes in anterior chamber depth and central lens thickness could be accurately and reliably measured. The lens radii of curvature were measured with a pixel-stick circle. Results Images from only 3 of 10 subjects met the predetermined criteria for registration. Mean anterior chamber depth decreased, −67 μm (range: −0.40 to −110 μm), and mean central lens thickness increased, 117 μm (range: 100–130 μm). The lens surfaces steepened, anterior more than posterior, while the lens itself did not move or shift position, as evidenced by the lack of movement of the lens nucleus, during 7.8 diopters of accommodation (range: 6.6–9.7 diopters). Conclusion Image registration, with stable invariant references for image correspondence, reveals that during accommodation a large increase in lens surface curvature is associated with only a small increase in central lens thickness and no change in lens position. PMID:28979092
New inverse synthetic aperture radar algorithm for translational motion compensation
NASA Astrophysics Data System (ADS)
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected, and considerable effort has been expended to develop algorithms that form high-resolution images from these data. One important goal of workers in this field is to develop software that does the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Developing highly focused imaging software has been challenging, mainly because the imaging depends on, and is affected by, the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must first be accurately estimated from the data and compensated for before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion-compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel-processing transputer array. Results indicate that the burst derivative measure gives a significant improvement in processing speed over the traditional entropy measure now employed.
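The entropy measure mentioned above, the baseline against which the burst derivative is compared, can be sketched as a focus metric driving a brute-force slant-range alignment. The profile format and integer shift search below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized intensity distribution:
    lower entropy = energy concentrated in fewer cells = better focused."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def coarse_range_compensation(profiles, metric=image_entropy,
                              shifts=range(-8, 9)):
    """Align each range profile (burst) to the integer slant-range offset
    that minimizes the focus metric of the running coherent sum."""
    ref = profiles[0].copy()
    aligned = [ref.copy()]
    for prof in profiles[1:]:
        best = min(shifts, key=lambda s: metric(ref + np.roll(prof, s)))
        shifted = np.roll(prof, best)
        ref = ref + shifted
        aligned.append(shifted)
    return np.stack(aligned)
```

Evaluating the metric once per candidate shift per burst is what makes the entropy approach expensive; a cheaper focus measure such as the burst derivative reduces exactly this cost.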
Evaluation of a novel collimator for molecular breast tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon
Here, this study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (–25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance.
In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. Conclusion The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.
Evaluation of a novel collimator for molecular breast tomosynthesis.
Gilland, David R; Welch, Benjamin L; Lee, Seungjoon; Kross, Brian; Weisenberger, Andrew G
2017-11-01
This study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (-25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance.
In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging. © 2017 American Association of Physicists in Medicine.
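The contrast and CNR figures of merit used in the phantom comparison above can be computed from lesion and background regions of interest as follows. These are the common ROI-based definitions; the exact ROIs and noise estimate the authors used may differ:

```python
import numpy as np

def contrast_and_cnr(img, lesion_mask, bg_mask):
    """ROI-based lesion contrast and contrast-to-noise ratio:
    contrast = (L - B) / B, CNR = (L - B) / sigma_B,
    with L, B the mean counts in the lesion and background ROIs and
    sigma_B the background standard deviation (the noise estimate)."""
    lesion = img[lesion_mask].mean()
    bg = img[bg_mask].mean()
    noise = img[bg_mask].std(ddof=1)
    contrast = (lesion - bg) / bg
    cnr = (lesion - bg) / noise
    return contrast, cnr
```

A reconstruction that boosts lesion contrast more than it amplifies background noise (as reported for MBT versus planar imaging) raises CNR even though the noise itself increases.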
Evaluation of a novel collimator for molecular breast tomosynthesis
Gilland, David R.; Welch, Benjamin L.; Lee, Seungjoon; ...
2017-09-06
Here, this study investigated a novel gamma camera for molecular breast tomosynthesis (MBT), which is a nuclear breast imaging method that uses limited angle tomography. The camera is equipped with a variable angle, slant-hole (VASH) collimator that allows the camera to remain close to the breast throughout the acquisition. The goal of this study was to evaluate the spatial resolution and count sensitivity of this camera and to compare contrast and contrast-to-noise ratio (CNR) with conventional planar imaging using an experimental breast phantom. Methods The VASH collimator mounts to a commercial gamma camera for breast imaging that uses a pixelated (3.2 mm), 15 × 20 cm NaI crystal. Spatial resolution was measured in planar images over a range of distances from the collimator (30-100 mm) and a range of slant angles (–25° to 25°) using 99mTc line sources. Spatial resolution was also measured in reconstructed MBT images including in the depth dimension. The images were reconstructed from data acquired over the -25° to 25° angular range using an iterative algorithm adapted to the slant-hole geometry. Sensitivity was measured over the range of slant angles using a disk source. Measured spatial resolution and sensitivity were compared to theoretical values. Contrast and CNR were measured using a breast phantom containing spherical lesions (6.2 mm and 7.8 mm diameter) and positioned over a range of depths in the phantom. The MBT and planar methods had equal scan time, and the count density in the breast phantom data was similar to that in clinical nuclear breast imaging. The MBT method used an iterative reconstruction algorithm combined with a postreconstruction Metz filter. Results The measured spatial resolution in planar images agreed well with theoretical calculations over the range of distances and slant angles. The measured FWHM was 9.7 mm at 50 mm distance.
In reconstructed MBT images, the spatial resolution in the depth dimension was approximately 2.2 mm greater than the other two dimensions due to the limited angle data. The measured count sensitivity agreed closely with theory over all slant angles when using a wide energy window. At 0° slant angle, measured sensitivity was 19.7 counts sec⁻¹ μCi⁻¹ with the open energy window and 11.2 counts sec⁻¹ μCi⁻¹ with a 20% wide photopeak window (126 to 154 keV). The measured CNR in the MBT images was significantly greater than in the planar images for all but the lowest CNR cases where the lesion detectability was extremely low for both MBT and planar. The 7.8 mm lesion at 37 mm depth was marginally detectable in the planar image but easily visible in the MBT image. The improved CNR with MBT was due to a large improvement in contrast, which outweighed the increase in image noise. Conclusion The spatial resolution and count sensitivity measurements with the prototype MBT system matched theoretical calculations, and the measured CNR in breast phantom images was generally greater with the MBT system compared to conventional planar imaging. These results demonstrate the potential of the proposed MBT system to improve lesion detection in nuclear breast imaging.
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera imaging system, together with the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables (the dark image, the base correction image, and the reference level) on the quality of the correction, as well as the range of application of the correction, using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by using a nonzero dark image, by using as the base correction image an image whose mean digital level lies in the linear response range of the camera, and by taking the mean digital level of the image as the reference digital level. After the optimized algorithm is applied, the response of the CCD color camera imaging system to the uniform radiance field shows a high level of spatial uniformity; the method also achieves high-quality spatial nonuniformity correction of images captured under different exposure conditions.
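A minimal sketch of the linear correction described above, assuming the standard dark-subtraction and gain-normalization form with the mean digital level of the base (uniform-field) image as the reference level; the exact optimized variant is the paper's contribution:

```python
import numpy as np

def flatfield_correct(raw, dark, base):
    """Linear spatial nonuniformity correction.
    raw  : image to be corrected
    dark : dark image (nonzero, per the paper's finding)
    base : base correction image of a uniform radiance field,
           with mean digital level in the camera's linear range.
    The per-pixel gain map is (base - dark); the reference level is the
    mean of that gain map, so a uniform scene maps to a uniform output."""
    gain = base.astype(np.float64) - dark
    ref = gain.mean()
    return (raw.astype(np.float64) - dark) * ref / gain
```

For a simulated sensor with per-pixel gain and dark offset, the corrected response to any uniform scene is spatially flat, which is the figure of merit the paper evaluates.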
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range camera. A weighted-sum-based image fusion (IF) algorithm is proposed to express an HDR scene as a single high-quality image. The method comprises three parts. First, two image features, gradients and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, with the source image serving as the guidance image; this reduces noise in the initial weight maps and preserves texture consistent with the original images. Finally, the fused image is constructed as a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight-map refinement, which provide accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
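The three-part pipeline can be sketched compactly, with the guided filter implemented via box filters in its standard form. The feature definitions (gradient magnitude, Gaussian well-exposedness around mid-grey) follow the abstract, but the parameters and the exact feature combination are illustrative assumptions:

```python
import numpy as np

def box(img, r):
    """Mean filter of radius r via 2D cumulative sums (edge-padded)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving smoothing of p with I as the guidance image."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

def fuse_exposures(stack):
    """Weight = gradient magnitude x well-exposedness (Gaussian around 0.5),
    refined by guided filtering, then a normalized weighted sum."""
    weights = []
    for img in stack:
        gy, gx = np.gradient(img)
        w = np.hypot(gx, gy) * np.exp(-((img - 0.5) ** 2) / (2 * 0.2**2)) + 1e-8
        weights.append(guided_filter(img, w))
    W = np.clip(np.stack(weights), 1e-8, None)
    W /= W.sum(axis=0)
    return (W * np.stack(stack)).sum(axis=0)
```

Because the refined weights are normalized per pixel, the fused result is a convex combination of the exposures, so no out-of-range values are created.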
NASA Astrophysics Data System (ADS)
Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin
2013-11-01
The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.
Imaging based refractometer for hyperspectral refractive index detection
Baba, Justin S.; Boudreaux, Philip R.
2015-11-24
Refractometers for simultaneously measuring refractive index of a sample over a range of wavelengths of light include dispersive and focusing optical systems. An optical beam including the range of wavelengths is spectrally spread along a first axis and focused along a second axis so as to be incident to an interface between the sample and a prism at a range of angles of incidence including a critical angle for at least one wavelength. An imaging detector is situated to receive the spectrally spread and focused light from the interface and form an image corresponding to angle of incidence as a function of wavelength. One or more critical angles are identified and corresponding refractive indices are determined.
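The refractive index recovery underlying such a critical-angle refractometer follows Snell's law at the prism/sample interface; with the spectrum spread along one detector axis, locating the critical angle in each wavelength column yields n(λ) directly. The prism index below is a hypothetical dense-flint-like value, not a parameter from the patent:

```python
import numpy as np

# Hypothetical prism index at the working wavelength (SF11-like flint glass).
N_PRISM = 1.7847

def sample_index(theta_c_deg, n_prism=N_PRISM):
    """Total internal reflection at the prism/sample interface begins at the
    critical angle theta_c, where n_sample = n_prism * sin(theta_c)."""
    return n_prism * np.sin(np.radians(theta_c_deg))
```

Passing an array of per-column critical angles returns the sample's dispersion curve in one call.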
Digital enhancement of multispectral MSS data for maximum image visibility
NASA Technical Reports Server (NTRS)
Algazi, V. R.
1973-01-01
A systematic approach to the enhancement of images has been developed. This approach exploits two principal features involved in the observation of images: the properties of human vision and the statistics of the images being observed. The rationale of the enhancement procedure is as follows: in the observation of some features of interest in an image, the range of objective luminance-chrominance values being displayed is generally limited and does not use the whole perceptual range of vision of the observer. The purpose of the enhancement technique is to expand and distort in a systematic way the grey scale values of each of the multispectral bands making up a color composite, to enhance the average visibility of the features being observed.
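The grey-scale expansion described above can be sketched as a per-band percentile stretch with an optional gamma "distortion" step. The percentile limits and gamma are illustrative stand-ins, not the paper's perceptually derived mapping:

```python
import numpy as np

def stretch_band(band, lo_pct=2, hi_pct=98, gamma=1.0):
    """Expand the limited luminance range of one multispectral band onto the
    full 8-bit display grey scale; gamma allows a systematic nonlinear
    redistribution of the grey levels."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    x = np.clip((band.astype(np.float64) - lo) / (hi - lo), 0.0, 1.0)
    return (255.0 * x**gamma).astype(np.uint8)
```

Applying this independently to each band before forming a color composite is the simplest version of the per-band enhancement the abstract describes.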
Variability in imaging utilization in U.S. pediatric hospitals.
Arnold, Ryan W; Graham, Dionne A; Melvin, Patrice R; Taylor, George A
2011-07-01
Use of medical imaging is under scrutiny because of rising costs and radiation exposure. We compare imaging utilization and costs across pediatric hospitals to determine their variability and potential determinants. Data were extracted from the Pediatric Health Information System (PHIS) database for all inpatient encounters from 40 U.S. children's hospitals. Imaging utilization and costs were compared by insurance type, geographical region, hospital size, severity of illness, length of stay and type of imaging, all among specific diagnoses. The hospital with the highest utilization performed more than twice as many imaging studies per patient as the hospital with the lowest utilization. Similarly, imaging costs ranged from $154 to $671/patient. The median imaging utilization rate was 1.7 exams/patient on the ward and increased significantly in the PICU (11.8 exams/patient) and in the NICU (17.7 exams/patient; P < 0.001). Considerable variability in imaging utilization persisted despite adjustment for case mix index (CMI; range in variation 16.6-25%). We found a significant correlation between imaging utilization and both CMI and length of stay (P < 0.0001). However, only 36% of the variation in imaging utilization could be explained by CMI. Diagnostic imaging utilization and costs vary widely in pediatric hospitals.
Is there a preference for linearity when viewing natural images?
NASA Astrophysics Data System (ADS)
Kane, David; Bertalmío, Marcelo
2015-01-01
The system gamma of the imaging pipeline, defined as the product of the encoding and decoding gammas, is typically greater than one and is stronger for images viewed against a dark background (e.g. cinema) than for those viewed in lighter conditions (e.g. office displays) [1-3]. However, for high dynamic range (HDR) images reproduced on a low dynamic range (LDR) monitor, subjects often prefer a system gamma of less than one [4], presumably reflecting the greater need for histogram equalization in HDR images. In this study we ask subjects to rate the perceived quality of images presented on an LDR monitor using various levels of system gamma. We reveal that the optimal system gamma is below one for images with an HDR and approaches or exceeds one for images with an LDR. Additionally, the highest quality scores occur for images where a system gamma of one is optimal, suggesting a preference for linearity (where possible). We find that subjective image quality scores can be predicted by computing the degree of histogram equalization of the lightness distribution. Accordingly, an optimal, image-dependent system gamma can be computed that maximizes perceived image quality.
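The quality predictor above, the degree of histogram equalization of the lightness distribution, can be sketched as the normalized entropy of the lightness histogram, with the optimal system gamma found by a grid search. The bin count, gamma grid, and the use of entropy as the equalization measure are illustrative assumptions:

```python
import numpy as np

def equalization_degree(lightness, bins=64):
    """Normalized Shannon entropy of the lightness histogram on [0, 1]:
    1.0 corresponds to a perfectly flat (fully equalized) distribution."""
    hist, _ = np.histogram(lightness, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(bins))

def best_system_gamma(img, gammas=np.linspace(0.4, 2.5, 43)):
    """Pick the display gamma maximizing the equalization degree,
    an illustrative proxy for the paper's quality predictor."""
    return float(max(gammas, key=lambda g: equalization_degree(img**g)))
```

For an already-equalized (uniform) lightness distribution the search returns a system gamma near one, matching the reported preference for linearity where possible.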
Classification of yeast cells from image features to evaluate pathogen conditions
NASA Astrophysics Data System (ADS)
van der Putten, Peter; Bertens, Laura; Liu, Jinshuo; Hagen, Ferry; Boekhout, Teun; Verbeek, Fons J.
2007-01-01
Morphometrics from images, i.e. image analysis, may reveal differences between classes of objects present in the images. We have performed an image-features-based classification for the pathogenic yeast Cryptococcus neoformans. Building and analyzing image collections from the yeast under different environmental or genetic conditions may help to diagnose a new "unseen" situation; diagnosis here means that the relevant information in the image collection can be retrieved each time a new "sample" is presented. The basidiomycetous yeast Cryptococcus neoformans can cause infections such as meningitis or pneumonia. The presence of an extra-cellular capsule is known to be related to virulence. This paper reports on an approach to developing classifiers for detecting potentially more or less virulent cells in a sample, i.e. an image, using a range of features derived from the shape or density distribution. The classifier can then be used to automate screening and annotation of existing image collections. In addition, we present our methods for creating samples, collecting images, preprocessing images, identifying yeast cells, and extracting features from the images. We compare various expertise-based and fully automated methods of feature selection, benchmark a range of classification algorithms, and illustrate their successful application to this domain.
NASA Astrophysics Data System (ADS)
Bender, Edward J.; Wood, Michael V.; Hart, Steve; Heim, Gerald B.; Torgerson, John A.
2004-10-01
Image intensifiers (I2) have gained wide acceptance throughout the Army as the premier nighttime mobility sensor for the individual soldier, with over 200,000 fielded systems. There is increasing need, however, for such a sensor with a video output, so that it can be utilized in remote vehicle platforms, and/or can be electronically fused with other sensors. The image-intensified television (I2TV), typically consisting of an image intensifier tube coupled via fiber optic to a solid-state imaging array, has been the primary solution to this need. I2TV platforms in vehicles, however, can generate high internal heat loads and must operate in high-temperature environments. Intensifier tube dark current, called "Equivalent Background Input" or "EBI", is not a significant factor at room temperature, but can seriously degrade image contrast and intra-scene dynamic range at such high temperatures. Cooling of the intensifier's photocathode is the only practical solution to this problem. The US Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate (NVESD) and Ball Aerospace have collaborated in the reported effort to more rigorously characterize intensifier EBI versus temperature. NVESD performed non-imaging EBI measurements of Generation 2 and 3 tube modules over a large range of ambient temperature, while Ball performed an imaging evaluation of Generation 3 I2TVs over a similar temperature range. The findings and conclusions of this effort are presented.
Giewekemeyer, Klaus; Philipp, Hugh T.; Wilke, Robin N.; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W.; Shanks, Katherine S.; Zozulya, Alexey V.; Salditt, Tim; Gruner, Sol M.; Mancuso, Adrian P.
2014-01-01
Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single photon detection, detecting fluxes exceeding 1 × 10⁸ 8-keV photons pixel⁻¹ s⁻¹, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10¹⁰ photons µm⁻² s⁻¹ within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with a very modest attenuation, while ‘still’ images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described. PMID:25178008
Giewekemeyer, Klaus; Philipp, Hugh T; Wilke, Robin N; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W; Shanks, Katherine S; Zozulya, Alexey V; Salditt, Tim; Gruner, Sol M; Mancuso, Adrian P
2014-09-01
Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single photon detection, detecting fluxes exceeding 1 × 10(8) 8-keV photons pixel(-1) s(-1), and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10(10) photons µm(-2) s(-1) within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with a very modest attenuation, while `still' images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described.
NASA Astrophysics Data System (ADS)
Yin, Biwei; Liang, Chia-Pin; Vuong, Barry; Tearney, Guillermo J.
2017-02-01
Conventional OCT images, obtained using a focused Gaussian beam, have a lateral resolution of approximately 30 μm and a depth of focus (DOF) of 2-3 mm, defined as the confocal parameter (twice the Gaussian beam Rayleigh range). Improvement of lateral resolution without sacrificing imaging range requires techniques that can extend the DOF. Previously, we described a self-imaging wavefront division optical system that provided an estimated one order of magnitude DOF extension. In this study, we further investigate the properties of the coaxially focused multi-mode (CAFM) beam created by this self-imaging wavefront division optical system and demonstrate its feasibility for real-time biological tissue imaging. Gaussian beam and CAFM beam fiber optic probes with similar numerical apertures (objective NA ≈ 0.5) were fabricated, providing lateral resolutions of approximately 2 μm. Rigorous lateral resolution characterization over depth was performed for both probes. The CAFM beam probe was found to provide a DOF approximately one order of magnitude greater than that of the Gaussian beam probe. By incorporating the CAFM beam fiber optic probe into a μOCT system with 1.5 μm axial resolution, we were able to acquire cross-sectional images of swine small intestine ex vivo, enabling the visualization of subcellular structures and providing high-quality OCT images over more than a 300 μm depth range.
Phase calibration target for quantitative phase imaging with ptychography.
Godden, T M; Muñiz-Piniella, A; Claverley, J D; Yacoot, A; Humphry, M J
2016-04-04
Quantitative phase imaging (QPI) utilizes refractive index and thickness variations that lead to optical phase shifts. This gives contrast to images of transparent objects. In quantitative biology, phase images are used to accurately segment cells and calculate properties such as dry mass, volume and proliferation rate. The fidelity of the measured phase shifts is of critical importance in this field. However, to date, there has been no standardized method for characterizing the performance of phase imaging systems. Consequently, there is an increasing need for protocols to test the performance of phase imaging systems using well-defined phase calibration and resolution targets. In this work, we present a candidate standardized phase resolution target and a measurement protocol for determining the transfer of spatial frequencies and the sensitivity of a phase imaging system. The target has been carefully designed to contain well-defined depth variations over a broadband range of spatial frequencies. In order to demonstrate the utility of the target, we measure quantitative phase images on a ptychographic microscope and compare the measured optical phase shifts with Atomic Force Microscopy (AFM) topography maps and surface profile measurements from coherence scanning interferometry. The results show that ptychography has fully quantitative nanometer sensitivity in optical path differences over a broadband range of spatial frequencies, for feature sizes ranging from micrometers to hundreds of micrometers.
Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation
NASA Astrophysics Data System (ADS)
Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas
2013-03-01
High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
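The selective blending the abstract describes reduces, per pixel, to a weighted combination of the HDR and non-HDR renderings under a user-controlled weighting surface. A minimal sketch follows; the Gaussian "touch" weighting is an assumption for illustration, not the authors' exact formulation:

```python
import numpy as np

def touch_hdr_blend(non_hdr, hdr, weight):
    """Blend HDR and non-HDR renderings of the same image.

    weight is a per-pixel surface in [0, 1]: 1 selects the HDR
    rendering, 0 keeps the original non-HDR pixel.
    """
    if hdr.ndim == 3:                      # broadcast weight over color channels
        weight = weight[..., np.newaxis]
    return weight * hdr + (1.0 - weight) * non_hdr

def gaussian_touch(shape, center, sigma):
    """Hypothetical weighting surface from a single touch point:
    a Gaussian bump centered where the user tapped."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - center[0])**2 + (xx - center[1])**2) / (2 * sigma**2))
```

A sequence of touches would simply accumulate several such bumps (clipped to 1) into one weighting surface before the final blend.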
Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel
2015-01-01
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured and calculated intensities and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we intercept the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch that range onto the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect than existing techniques.
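The histogram step described above, intercepting the values at both ends of the histogram and stretching the surviving gray-level range onto the display's dynamic range, can be sketched as follows. The 1% clip fraction is an assumed parameter, not a value from the paper:

```python
import numpy as np

def intercept_and_stretch(image, clip_frac=0.01, out_max=255):
    """Clip the extreme tails of the gray-level histogram, then
    linearly stretch the surviving range onto [0, out_max]."""
    lo = np.percentile(image, 100 * clip_frac)        # lower intercept
    hi = np.percentile(image, 100 * (1 - clip_frac))  # upper intercept
    clipped = np.clip(image, lo, hi)
    if hi == lo:                                      # flat image: nothing to stretch
        return np.zeros_like(image, dtype=np.uint8)
    stretched = (clipped - lo) / (hi - lo) * out_max
    return stretched.astype(np.uint8)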
Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing
2014-06-01
At a price of nearly one tenth that of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.
Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics.
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao
2016-11-25
For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we examine the feasibility and practicability of a well-known "extended DoF" (EDoF) technique, or "wavefront coding," by building a real-time long-range iris recognition system and performing large-scale iris recognition. The key to the success of long-range iris recognition is a long DoF and image quality invariance across object distances, requirements strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With a database of 512 iris images from 32 Asian subjects, a 400-mm focal length, and F/6.3 optics over a 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, based on 3328 iris images in total, the EDoF design achieves a result 3.71 times better than the original system without a loss of recognition accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, Ehsan, E-mail: samei@duke.edu; Lin, Yuan; Choudhury, Kingshuk R.
Purpose: The authors previously proposed an image-based technique [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] to assess the perceptual quality of clinical chest radiographs. In this study, an observer study was designed and conducted to validate the output of the program against rankings by expert radiologists and to establish the ranges of the output values that reflect acceptable image appearance, so the program output can be used for image quality optimization and tracking. Methods: Using an IRB-approved protocol, 2500 clinical chest radiographs (PA/AP) were collected from our clinical operation. The images were processed through our perceptual quality assessment program to measure their appearance in terms of ten metrics of perceptual image quality: lung gray level, lung detail, lung noise, rib–lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm–lung contrast, and subdiaphragm area. From the results, for each targeted appearance attribute/metric, 18 images were selected such that the images presented a relatively constant appearance with respect to all metrics except the targeted one. The images were then incorporated into a graphical user interface, which displayed them in three panels of six in a random order. Using a DICOM-calibrated diagnostic display workstation and under low ambient lighting conditions, each of five participating attending chest radiologists was tasked to spatially order the images based only on the targeted appearance attribute, regardless of the other qualities. Once the images were ordered, the observer also indicated the range of image appearances that he/she considered clinically acceptable. The observer data were analyzed in terms of the correlations between the observer and algorithmic rankings and interobserver variability. An observer-averaged acceptable image appearance was also statistically derived for each quality attribute based on the collected individual acceptable ranges.
Results: The observer study indicated that, for each image quality attribute, the averaged observer ranking strongly correlated with the algorithmic ranking (linear correlation coefficient R > 0.92), with the highest correlation (R = 1) for lung gray level and the lowest (R = 0.92) for mediastinum noise. There was strong concordance between the observers in terms of their rankings (Kendall's tau agreement > 0.84). The observers also generally indicated similar tolerance and preference levels in terms of acceptable ranges: 85% of the values were close to the overall tolerance or preference levels, and the differences were smaller than 0.15. Conclusions: The observer study indicates that the previously proposed technique provides a robust reflection of the perceptual image quality of clinical images. The results established the range of algorithmic outputs for each metric that can be used to quantitatively assess and qualify the appearance quality of clinical chest radiographs.
Sampson, Danuta M; Gong, Peijun; An, Di; Menghini, Moreno; Hansen, Alex; Mackey, David A; Sampson, David D; Chen, Fred K
2017-06-01
To evaluate the impact of image magnification correction on superficial retinal vessel density (SRVD) and foveal avascular zone area (FAZA) measurements using optical coherence tomography angiography (OCTA), participants with healthy retinas were recruited for ocular biometry, refraction, and RTVue XR Avanti OCTA imaging with the 3 × 3-mm protocol. The foveal and parafoveal SRVD and FAZA were quantified with custom software before and after correction for magnification error using the Littman and the modified Bennett formulae. Relative changes between corrected and uncorrected SRVD and FAZA were calculated. Forty subjects were enrolled; the median (range) age of the participants was 30 (18-74) years. The mean (range) spherical equivalent refractive error was -1.65 (-8.00 to +4.88) diopters and the mean (range) axial length was 24.42 (21.27-28.85) mm. Images from 13 eyes were excluded due to poor image quality, leaving 67 eyes for analysis. Relative changes in foveal SRVD, parafoveal SRVD, and FAZA after correction ranged from -20% to +10%, -3% to +2%, and -20% to +51%, respectively. Image size correction in measurements of foveal SRVD and FAZA was greater than 5% in 51% and 74% of eyes, respectively. In contrast, 100% of eyes had less than 5% correction in measurements of parafoveal SRVD. Ocular biometry should be performed with OCTA to correct image magnification error induced by axial length variation. We advise caution when interpreting interocular and interindividual comparisons of SRVD and FAZA derived from OCTA without image size correction.
Development of a High-Throughput Microwave Imaging System for Concealed Weapons Detection
2016-07-15
Index Terms: microwave imaging, multistatic radar, Fast Fourier Transform (FFT). Near-field microwave imaging is a non-ionizing … configuration, but its computational demands are extreme. Fast Fourier Transform (FFT) imaging has long been used to efficiently construct images sampled with …
2005-09-06
This Tempel 1 image was built up by scaling images from NASA's Deep Impact to 5 meters/pixel and aligning them to fixed points. Each image taken at closer range replaced the equivalent locations observed from a greater distance.
Tunable filters for multispectral imaging of aeronomical features
NASA Astrophysics Data System (ADS)
Goenka, C.; Semeter, J. L.; Noto, J.; Dahlgren, H.; Marshall, R.; Baumgardner, J.; Riccobono, J.; Migliozzi, M.
2013-10-01
Multispectral imaging of optical emissions in the Earth's upper atmosphere unravels vital information about dynamic phenomena in the Earth-space environment. Wavelength tunable filters allow us to accomplish this without using filter wheels or multiple imaging setups, but with identifiable caveats and trade-offs. We evaluate one such filter, a liquid crystal Fabry-Perot etalon, as a potential candidate for the next generation of imagers for aeronomy. The tunability of such a filter can be exploited in imaging features such as the 6300-6364 Å oxygen emission doublet, or studying the rotational temperature of N2+ in the 4200-4300 Å range, observations which typically require multiple instruments. We further discuss the use of this filter in an optical instrument, called the Liquid Crystal Hyperspectral Imager (LiCHI), which will be developed to make simultaneous measurements in various wavelength ranges.
2015-10-29
In addition to transmitting new high-resolution images and other data on the familiar close-approach hemispheres of Pluto and Charon, NASA's New Horizons spacecraft is also returning images -- such as this one -- to improve maps of other regions. This image was taken by the New Horizons Long Range Reconnaissance Imager (LORRI) on the morning of July 13, 2015, from a range of 1.03 million miles (1.7 million kilometers) and has a resolution of 5.1 miles (8.3 kilometers) per pixel. It provides fascinating new details to help the science team map the informally named Krun Macula (the prominent dark spot at the bottom of the image) and the complex terrain east and northeast of Pluto's "heart" (Tombaugh Regio). Pluto's north pole is on the planet's disk at the 12 o'clock position of this image. http://photojournal.jpl.nasa.gov/catalog/PIA20037
NASA Astrophysics Data System (ADS)
Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang
2018-05-01
Infrared thermal images can reflect the thermal-radiation distribution of a particular scene. However, the contrast of infrared images is usually low, so it is generally necessary to enhance their contrast in advance to facilitate subsequent recognition and analysis. Based on adaptive double-plateau histogram equalization, this paper presents an improved contrast enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback to adjust the upper and lower plateau thresholds. Experiments on actual infrared images show that, compared to three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast enhancement for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
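The double-plateau idea, clipping the histogram from above so that large uniform backgrounds cannot dominate the mapping, and from below so that weak details are not lost, can be sketched as follows. The fixed thresholds here stand in for the paper's adaptive, feedback-controlled ones:

```python
import numpy as np

def double_plateau_equalize(image, t_up, t_down, levels=256):
    """Histogram equalization with upper and lower plateau thresholds.

    Bin counts above t_up are clipped down to t_up; nonzero counts
    below t_down are raised to t_down. The modified histogram then
    drives the usual cumulative mapping onto [0, levels-1].
    """
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    hist = hist.astype(np.float64)
    hist[hist > t_up] = t_up                       # upper plateau: cap dominant bins
    hist[(hist > 0) & (hist < t_down)] = t_down    # lower plateau: boost weak bins
    cdf = np.cumsum(hist)
    cdf = cdf / cdf[-1]                            # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[image.astype(np.uint8)]
```

In the paper's adaptive scheme, t_up and t_down would be re-derived from the feedback metric rather than supplied by the caller.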
NASA Astrophysics Data System (ADS)
Polito, C.; Pani, R.; Trigila, C.; Cinti, M. N.; Fabbri, A.; Frantellizzi, V.; De Vincentis, G.; Pellegrini, R.; Pani, R.
2017-01-01
The growing interest in new scintillation crystals with outstanding imaging performance (i.e., resolution and efficiency) has motivated the study of the recently discovered scintillators CRY018 and CRY019. The crystals under investigation are monolithic and have shown enhanced characteristics both for gamma-ray spectrometry and for Nuclear Medicine imaging applications such as dual-isotope imaging. Moreover, their non-hygroscopic nature and the absence of afterglow make these scintillators even more attractive for potential improvements in a wide range of applications. These scintillation crystals show high energy resolution in the energy range involved in Nuclear Medicine, allowing discrimination between very close energy values. Moreover, to prove their suitability as imaging systems, imaging performance metrics such as position linearity and intrinsic spatial resolution have been evaluated, with satisfactory results obtained thanks to the implementation of an optimized image reconstruction algorithm.
Shkirkova, Kristina; Akam, Eftitan Y; Huang, Josephine; Sheth, Sunil A; Nour, May; Liang, Conrad W; McManus, Michael; Trinh, Van; Duckwiler, Gary; Tarpley, Jason; Vinuela, Fernando; Saver, Jeffrey L
2017-12-01
Background Rapid dissemination and coordination of clinical and imaging data among multidisciplinary team members are essential for optimal acute stroke care. Aim To characterize the feasibility and utility of the Synapse Emergency Room mobile (Synapse ERm) informatics system. Methods We implemented the Synapse ERm system for integration of clinical data, computerized tomography, magnetic resonance, and catheter angiographic imaging, and real-time stroke team communications, in consecutive acute neurovascular patients at a Comprehensive Stroke Center. Results From May 2014 to October 2014, the Synapse ERm application was used by 33 stroke team members in 84 Code Stroke alerts. Patient age was 69.6 (±17.1) years, with 41.5% female. Final diagnosis was: ischemic stroke 64.6%, transient ischemic attack 7.3%, intracerebral hemorrhage 6.1%, and cerebrovascular mimic 22.0%. Each patient's Synapse ERm record was viewed a median of 10 times (interquartile range 6-18) by a median of 3 (interquartile range 2-4) team members. The most used feature was computerized tomography, magnetic resonance, and catheter angiography image display. In-app "tweet team" communications were sent by a median of 1 (interquartile range 0-1, range 0-13) user per case and viewed by a median of 1 (interquartile range 0-3, range 0-44) team member. Use of the system was associated with rapid treatment times, faster than national guidelines, including median door-to-needle of 51.0 min (interquartile range 40.5-69.5) and median door-to-groin of 94.5 min (interquartile range 85.5-121.3). In user surveys, the mobile information platform was judged easy to employ in 91% (95% confidence interval 65%-99%) of uses and of added help in stroke management in 50% (95% confidence interval 22%-78%). Conclusion The Synapse ERm mobile platform for stroke team distribution and integration of clinical and imaging data was feasible to implement, showed high ease of use, and had moderate perceived added utility in therapeutic management.
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as a large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on context: environmental factors, display parameters, and the image content itself. Based on the results of the subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple lossy compression implementation demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
Time-resolved multispectral imaging of combustion reactions
NASA Astrophysics Data System (ADS)
Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Frédérick
2015-10-01
Thermal infrared imaging is a field of science that evolves rapidly. For years, scientists have used the simplest tool: thermal broadband cameras. These allow target characterization in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, to characterize injection and ignition in a combustion chamber, or to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e. over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge of spectral emissivity. This information is not directly available from broadband images; however, spectral information can be obtained using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from the information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.
Time-resolved multispectral imaging of combustion reaction
NASA Astrophysics Data System (ADS)
Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Fréderick
2015-05-01
Thermal infrared imaging is a field of science that evolves rapidly. For years, scientists have used the simplest tool: thermal broadband cameras. These allow target characterization in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, to characterize injection and ignition in a combustion chamber, or to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e. over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge of spectral emissivity. This information is not directly available from broadband images; however, spectral information can be obtained using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from the information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.
Radiation dose and magnification in pelvic X-ray: EOS™ imaging system versus plain radiographs.
Chiron, P; Demoulin, L; Wytrykowski, K; Cavaignac, E; Reina, N; Murgier, J
2017-12-01
In plain pelvic X-ray, magnification makes measurement unreliable. The EOS™ (EOS Imaging, Paris, France) imaging system is reputed to reproduce patient anatomy exactly, with a lower radiation dose. This, however, has not been assessed according to patient weight, although both magnification and irradiation are known to vary with weight. We therefore conducted a prospective comparative study to compare: (1) image magnification and (2) radiation dose between the EOS imaging system and plain X-ray. The hypothesis was that the EOS imaging system reproduces patient anatomy exactly, regardless of weight, unlike plain X-ray. A single-center comparative study of plain pelvic X-ray and 2D EOS radiography was performed in 183 patients: 186 arthroplasties; 104 male, 81 female; mean age 61.3 ± 13.7 years (range, 24-87 years). Magnification and radiation dose (dose-area product [DAP]) were compared between the two systems in 186 hips in patients with a mean body-mass index (BMI) of 27.1 ± 5.3 kg/m² (range, 17.6-42.3 kg/m²), including 7 with morbid obesity. Mean magnification was zero using the EOS system, regardless of patient weight, compared to 1.15 ± 0.05 (range, 1-1.32) on plain X-ray (P < 10⁻⁵). In patients with BMI < 25, mean magnification on plain X-ray was 1.15 ± 0.05 (range, 1-1.25) and, in patients with morbid obesity, 1.22 ± 0.06 (range, 1.18-1.32). The mean radiation dose was 8.19 ± 2.63 dGy/cm² (range, 1.77-14.24) with the EOS system, versus 19.38 ± 12.37 dGy/cm² (range, 4.77-81.75) with plain X-ray (P < 10⁻⁴). For BMI > 40, the mean radiation dose was 9.36 ± 2.57 dGy/cm² (range, 7.4-14.2) with the EOS system, versus 44.76 ± 22.21 (range, 25.2-81.7) with plain X-ray. Radiation dose increased by 0.20 dGy with each extra BMI point for the EOS system, versus 0.74 dGy for plain X-ray. Magnification did not vary with patient weight using the EOS system, unlike plain X-ray, and the radiation dose was 2.5-fold lower. Level of evidence: 3, prospective case-control study. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dinwiddie, Ralph Barton; Parris, Larkin S.; Lindal, John M.
This paper explores the temperature range extension of long-wavelength infrared (LWIR) cameras by placing an aperture in front of the lens. An aperture smaller than the lens will reduce the radiance reaching the sensor, allowing the camera to image targets much hotter than typically allowable. These higher temperatures were accurately determined after developing a correction factor applied to the built-in temperature calibration. The relationship between aperture diameter and temperature range is linear. The effect of pre-lens apertures on image uniformity is a form of anti-vignetting, meaning the corners appear brighter (hotter) than the rest of the image. An example of using this technique to measure the temperatures of high-melting-point polymers during 3D printing provides valuable information about the time required for the weld-line temperature to fall below the glass transition temperature.
Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid
2016-06-13
Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with a CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light-staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems.
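The dynamic-range figures quoted here (51.3 dB for the CMOS sensor alone, 82.06 dB for the combined CAOS-CMOS camera) follow the usual definition of optical dynamic range in decibels: 20·log10 of the ratio of the largest to the smallest detectable signal. A quick sketch of that conversion (the example ratios below are back-calculated for illustration, not taken from the paper):

```python
import math

def dynamic_range_db(max_signal, min_signal):
    """Optical dynamic range in dB: 20 * log10(max/min)."""
    return 20.0 * math.log10(max_signal / min_signal)
```

A max/min ratio of roughly 368:1 corresponds to about 51.3 dB, while roughly 12,700:1 corresponds to about 82 dB, which conveys how much wider a light-contrast span the hybrid camera can cover.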
3D range-gated super-resolution imaging based on stereo matching for moving platforms and targets
NASA Astrophysics Data System (ADS)
Sun, Liang; Wang, Xinwei; Zhou, Yan
2017-11-01
3D range-gated super-resolution imaging is a novel 3D reconstruction technique for target detection and recognition with good real-time performance. However, for moving targets or moving platforms such as airborne and shipborne systems, remotely operated vehicles, and autonomous vehicles, 3D reconstruction suffers large errors or fails. To overcome this drawback, we propose a stereo matching method for the 3D range-gated super-resolution reconstruction algorithm. In the experiment, the target was a 38 cm tall Mario doll located 34 m away, and we obtained two successive frame images of it. To confirm that our method is effective, we transformed the original images with translation, rotation, scaling, and perspective changes, respectively. The experimental results show that our method achieves good 3D reconstruction for moving targets or platforms.
NASA Technical Reports Server (NTRS)
Lee, T-H.; Burnside, W. D.
1992-01-01
Inverse Synthetic Aperture Radar (ISAR) images of a 32 in long, 19 in wide model aircraft are documented. Both backscattered and bistatic scattered fields of this model aircraft were measured in the OSU-ESL compact range to obtain these images. The scattered fields of the target were measured at frequencies from 2 to 18 GHz in 10 MHz increments and over a full 360 deg of azimuth rotation in 0.2 deg steps. For the bistatic scattering measurement, the compact range was used as the transmitting antenna, while a broadband AEL double-ridge horn was used as the receiving antenna. Bistatic angles of 90 deg and 135 deg were measured. Due to the size of the chamber and target, the receiving antenna was in the near field of the target; nevertheless, the image processing algorithm remained valid for this case.
Gamut mapping in a high-dynamic-range color space
NASA Astrophysics Data System (ADS)
Preiss, Jens; Fairchild, Mark D.; Ferwerda, James A.; Urban, Philipp
2014-01-01
In this paper, we present a novel approach of tone mapping as gamut mapping in a high-dynamic-range (HDR) color space. High- and low-dynamic-range (LDR) images as well as device gamut boundaries can simultaneously be represented within such a color space. This enables a unified transformation of the HDR image into the gamut of an output device (in this paper called HDR gamut mapping). An additional aim of this paper is to investigate the suitability of a specific HDR color space to serve as a working color space for the proposed HDR gamut mapping. For the HDR gamut mapping, we use a recent approach that iteratively minimizes an image-difference metric subject to in-gamut images. A psychophysical experiment on an HDR display shows that the standard reproduction workflow of two subsequent transformations - tone mapping and then gamut mapping - may be improved by HDR gamut mapping.
Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images
NASA Astrophysics Data System (ADS)
Liang, Y.; Sheng, Y. H.
2018-04-01
To solve existing problems in modeling building facades merely with point features based on close-range images, a new method for modeling building facades under line feature constraints is proposed in this paper. Firstly, camera parameters and sparse spatial point cloud data were recovered using structure from motion (SfM), and 3D dense point clouds were generated with multi-view stereo (MVS). Secondly, line features were detected based on gradient direction, the detected line features were fit considering directions and lengths, and line features were then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of a building was triangulated from the point cloud and line features. The experiment shows that this method can effectively reconstruct the geometric facade of buildings by combining the point and line features of the close-range image sequence, especially in restoring the contour information of building facades.
A number of existing and new remote sensing data provide images of areas ranging from small communities to continents. These images provide views on a wide range of physical features in the landscape, including vegetation, road infrastructure, urban areas, geology, soils, and wa...
Sierra Nevada Ecosystem Project
C. I. Millar
1996-01-01
Sierra Nevada Ecosystems. The Sierra Nevada evokes images particular to each individual's experience of the range. These images take on the quality of immutability, and we expect to find the range basically unchanged from one year to the next. The Sierra Nevada, however, including its rocky foundations and the plants and animals that inhabit it, changes...
Method and apparatus of high dynamic range image sensor with individual pixel reset
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)
2001-01-01
A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
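The row/column coincidence logic described in the abstract can be sketched as follows; the array size and timing values are illustrative, not from the patent:

```python
import numpy as np

# Hypothetical 4x4 sensor: a pixel is reset only when BOTH its row and
# column reset control signals are asserted, so integration time can be
# shortened on a per-pixel basis.
row_reset = np.array([1, 0, 1, 0], dtype=bool)
col_reset = np.array([1, 1, 0, 0], dtype=bool)

# The outer AND reproduces the coincidence condition: the logical reset
# transistor activates only where row AND column signals coincide.
reset_mask = np.outer(row_reset, col_reset)

# Pixels reset late in the frame integrate for a shorter time, which keeps
# bright regions from saturating and extends dynamic range.
t_frame, t_late_reset = 10.0, 8.0   # ms (illustrative values)
t_int = np.where(reset_mask, t_frame - t_late_reset, t_frame)
print(t_int)
```

Only pixels whose row and column lines were both asserted get the short 2 ms integration; every other pixel integrates for the full 10 ms frame.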
Evaluating carotenoid changes in tomatoes during postharvest ripening using Raman chemical imaging.
USDA-ARS?s Scientific Manuscript database
During the postharvest ripening of tomato fruits, the presence of lycopene increases. In this study, Raman chemical images were acquired of tomato samples spanning a range of fruit maturity stages, and were evaluated for the presence and di...
Reflective afocal broadband adaptive optics scanning ophthalmoscope
Dubra, Alfredo; Sulai, Yusufu
2011-01-01
A broadband adaptive optics scanning ophthalmoscope (BAOSO) consisting of four afocal telescopes, formed by pairs of off-axis spherical mirrors in a non-planar arrangement, is presented. The non-planar folding of the telescopes is used to simultaneously reduce pupil and image plane astigmatism. The former improves the adaptive optics performance by reducing the root-mean-square (RMS) of the wavefront and the beam wandering due to optical scanning. The latter provides diffraction limited performance over a 3 diopter (D) vergence range. This vergence range allows for the use of any broadband light source(s) in the 450-850 nm wavelength range to simultaneously image any combination of retinal layers. Imaging modalities that could benefit from such a large vergence range are optical coherence tomography (OCT), multi- and hyper-spectral imaging, single- and multi-photon fluorescence. The benefits of the non-planar telescopes in the BAOSO are illustrated by resolving the human foveal photoreceptor mosaic in reflectance using two different superluminescent diodes with 680 and 796 nm peak wavelengths, reaching the eye with a vergence of 0.76 D relative to each other. PMID:21698035
Real-time, continuous-wave terahertz imaging using a microbolometer focal-plane array
NASA Technical Reports Server (NTRS)
Hu, Qing (Inventor); Min Lee, Alan W. (Inventor)
2010-01-01
The present invention generally provides a terahertz (THz) imaging system that includes a source for generating radiation (e.g., a quantum cascade laser) having one or more frequencies in a range of about 0.1 THz to about 10 THz, and a two-dimensional detector array comprising a plurality of radiation detecting elements that are capable of detecting radiation in that frequency range. An optical system directs radiation from the source to an object to be imaged. The detector array detects at least a portion of the radiation transmitted through the object (or reflected by the object) so as to form a THz image of that object.
Modeling a color-rendering operator for high dynamic range images using a cone-response function
NASA Astrophysics Data System (ADS)
Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju
2015-09-01
Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method compensates for the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
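As context for what a tone-mapping operator does, a standard Reinhard-style global operator is sketched below; this is the common baseline such work builds on, not the cone-response model proposed in the paper:

```python
import numpy as np

def simple_global_tonemap(lum):
    """Reinhard-style global operator L/(1+L): compresses HDR scene
    luminance into [0, 1) for an LDR display.  Dim values pass through
    nearly unchanged; very bright values are compressed toward 1."""
    lw = np.asarray(lum, dtype=float)
    return lw / (1.0 + lw)
```

For example, luminances of 0.1, 1.0, and 10000 map to roughly 0.09, 0.5, and 0.9999, so a five-decade scene fits into the display's unit range.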
High dynamic range hyperspectral imaging for camouflage performance test and evaluation
NASA Astrophysics Data System (ADS)
Pearce, D.; Feenan, J.
2016-10-01
This paper demonstrates the use of high dynamic range processing applied to the specific technique of hyperspectral imaging with linescan spectrometers. The technique provides an improvement in signal to noise for reflectance estimation. This is demonstrated for field measurements of rural imagery collected from a ground-based linescan spectrometer. Once fully developed, the specific application is expected to improve colour estimation approaches and consequently the test and evaluation accuracy of camouflage performance tests. Data are presented on both field and laboratory experiments that have been used to evaluate the improvements granted by the adoption of high dynamic range data acquisition in the field of hyperspectral imaging. High dynamic range imaging is well suited to the hyperspectral domain due to the large variation in solar irradiance across the visible and short wave infra-red (SWIR) spectrum, coupled with the wavelength dependence of the nominal silicon detector response. Under field measurement conditions it is generally impractical to provide artificial illumination; consequently, an adaptation of the hyperspectral imaging and reflectance estimation process has been developed to accommodate the solar spectrum. This is shown to improve the signal to noise ratio for the reflectance estimation process of scene materials in the 400-500 nm and 700-900 nm regions.
Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data
NASA Astrophysics Data System (ADS)
Seitel, Alexander; dos Santos, Thiago R.; Mersmann, Sven; Penne, Jochen; Groch, Anja; Yung, Kwong; Tetzlaff, Ralf; Meinzer, Hans-Peter; Maier-Hein, Lena
2011-03-01
Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach to achieve this is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that the ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaptation of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing using corresponding high resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error-reduction of up to 36% compared to the error of the original ToF surfaces.
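The paper's filter adapts its parameters to the ToF camera's noise characteristics; a minimal fixed-parameter bilateral filter, which such an adaptive version generalizes, can be sketched as:

```python
import numpy as np

def bilateral_filter(depth, sigma_s=2.0, sigma_r=0.05, radius=3):
    """Brute-force bilateral filter for a 2D range image.
    sigma_s: spatial std-dev in pixels; sigma_r: range std-dev in the
    same units as depth.  Smooths noise while preserving depth edges,
    because pixels across an edge get a near-zero range weight."""
    depth = np.asarray(depth, dtype=float)
    h, w = depth.shape
    out = np.empty_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(depth, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(window - depth[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * rangew
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

A step edge whose height is large relative to sigma_r passes through essentially untouched, while small-amplitude noise on either side of the edge is averaged away; the adaptive variant in the paper would choose sigma_r from the camera's measured noise level instead of fixing it.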
Kilovoltage energy imaging with a radiotherapy linac with a continuously variable energy range.
Roberts, D A; Hansen, V N; Thompson, M G; Poludniowski, G; Niven, A; Seco, J; Evans, P M
2012-03-01
In this paper, the effect on image quality of significantly reducing the primary electron energy of a radiotherapy accelerator is investigated using a novel waveguide test piece. The waveguide contains a novel variable coupling device (rotovane), allowing for a wide continuously variable energy range of between 1.4 and 9 MeV suitable for both imaging and therapy. Imaging at linac accelerating potentials close to 1 MV was investigated experimentally and via Monte Carlo simulations. An imaging beam line was designed, and planar and cone beam computed tomography images were obtained to enable qualitative and quantitative comparisons with kilovoltage and megavoltage imaging systems. The imaging beam had an electron energy of 1.4 MeV, which was incident on a water cooled electron window consisting of stainless steel, a 5 mm carbon electron absorber and 2.5 mm aluminium filtration. Images were acquired with an amorphous silicon detector sensitive to diagnostic x-ray energies. The x-ray beam had an average energy of 220 keV and half value layer of 5.9 mm of copper. Cone beam CT images with the same contrast to noise ratio as a gantry mounted kilovoltage imaging system were obtained with doses as low as 2 cGy. This dose is equivalent to a single 6 MV portal image. While 12 times higher than a 100 kVp CBCT system (Elekta XVI), this dose is 140 times lower than a 6 MV cone beam imaging system and 6 times lower than previously published LowZ imaging beams operating at higher (4-5 MeV) energies. The novel coupling device provides for a wide range of electron energies that are suitable for kilovoltage quality imaging and therapy. The imaging system provides high contrast images from the therapy portal at low dose, approaching that of gantry mounted kilovoltage x-ray systems. Additionally, the system provides low dose imaging directly from the therapy portal, potentially allowing for target tracking during radiotherapy treatment. 
There is scope with such a tuneable system for further energy reduction and subsequent improvement in image quality.
Validation of a Low Dose Simulation Technique for Computed Tomography Images
Muenzel, Daniela; Koehler, Thomas; Brown, Kevin; Žabić, Stanislav; Fingerle, Alexander A.; Waldt, Simone; Bendik, Edgar; Zahel, Tina; Schneider, Armin; Dobritz, Martin; Rummeny, Ernst J.; Noël, Peter B.
2014-01-01
Purpose: Evaluation of a new software tool for generation of simulated low-dose computed tomography (CT) images from an original higher dose scan. Materials and Methods: Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisition with a lower dose (simulated 10–80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. Results: Image characteristics of simulated low-dose scans were similar to the originals. Mean overall discrepancy of image noise and CT values was −1.2% (range −9% to 3.2%) and −0.2% (range −8.2% to 3.2%), respectively, p>0.05. Confidence intervals of discrepancies ranged between 0.9–10.2 HU (noise) and 1.9–13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. Conclusion: Simulated low-dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of visual appearance. An authentic low-dose simulation opens up opportunities with regard to staff education, protocol optimization and introduction of new techniques. PMID:25247422
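Published low-dose simulation tools such as the one validated here typically operate on raw projection data; as a rough image-domain stand-in (a simplification for illustration, not the validated algorithm), one can add Gaussian noise whose variance follows the approximately 1/mAs scaling of quantum noise:

```python
import numpy as np

def simulate_low_dose(img_hu, noise_std_orig, mas_orig, mas_sim, rng=None):
    """Image-domain approximation of low-dose CT simulation.
    Quantum noise variance scales roughly as 1/mAs, so to go from
    mas_orig down to mas_sim we add independent noise with standard
    deviation noise_std_orig * sqrt(mas_orig/mas_sim - 1)."""
    rng = np.random.default_rng(rng)
    extra_std = noise_std_orig * np.sqrt(mas_orig / mas_sim - 1.0)
    return img_hu + rng.normal(0.0, extra_std, size=np.shape(img_hu))
```

For example, simulating 20 mAs from a 100 mAs scan with 10 HU of original noise adds noise of std 10·sqrt(4) = 20 HU, for a total of sqrt(10² + 20²) ≈ 22.4 HU, matching the 1/mAs variance law.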
Automated Registration of Multimodal Optic Disc Images: Clinical Assessment of Alignment Accuracy.
Ng, Wai Siene; Legg, Phil; Avadhanam, Venkat; Aye, Kyaw; Evans, Steffan H P; North, Rachel V; Marshall, Andrew D; Rosin, Paul; Morgan, James E
2016-04-01
To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by 2 different modalities: fundus photography and scanning laser tomography. Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques, Regional Mutual Information (RMI), rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI), were used to align these image pairs. Alignment of each composite picture was assessed on a 5-point grading scale: "Fail" (no alignment of vessels with no vessel contact), "Weak" (vessels have slight contact), "Good" (vessels with <50% contact), "Very Good" (vessels with >50% contact), and "Excellent" (complete alignment). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternate 5×5-pixel blocks. These were graded independently by 3 clinically experienced observers. A total of 810 image pairs were assessed. All 3 registration techniques achieved a score of "Good" or better in >95% of the image sets. NRFNMI had the highest percentage of "Excellent" (mean: 99.6%; range, 95.2% to 99.6%), followed by RMI (mean: 81.6%; range, 86.3% to 78.5%) and FNMI (mean: 73.1%; range, 85.2% to 54.4%). Automated registration of optic disc images by different modalities is a feasible option for clinical application. All 3 methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.
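All three compared techniques build on mutual information as the similarity measure between the two modalities; a minimal discrete estimate of mutual information between two images (a generic sketch, not any of the three specific algorithms) looks like:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images of equal
    size: MI = sum p(a,b) * log(p(a,b) / (p(a) p(b))), in nats.
    High MI means the intensities of one image predict the other,
    which is what a multimodal registration optimizer maximizes."""
    hist, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of img_b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration routine would evaluate this score over candidate transforms of one image and keep the transform that maximizes it; the regional and feature-neighbourhood variants in the study extend this global score with local spatial context.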
Infrared imaging of subcutaneous veins.
Zharov, Vladimir P; Ferguson, Scott; Eidt, John F; Howard, Paul C; Fink, Louis M; Waner, Milton
2004-01-01
Imaging of subcutaneous veins is important in many applications, such as gaining venous access and vascular surgery. Despite a long history of medical infrared (IR) photography and imaging, this technique is not widely used for this purpose. Here we revisited and explored the capability of near-IR imaging to visualize subcutaneous structures, with a focus on diagnostics of superficial veins. An IR device comprising a head-mounted IR LED array (880 nm), a small conventional CCD camera (Toshiba Ik-mui, Tokyo, Japan), virtual-reality optics, polarizers, filters, and diffusers was used in vivo to obtain images of different subcutaneous structures. The same device was used to estimate the IR image quality as a function of wavelength produced by a tunable xenon lamp-based monochrometer in the range of 500-1,000 nm and continuous-wave Nd:YAG (1.06 microm) and diode (805 nm) lasers. The various modes of optical illumination were compared in vivo. Contrast of the IR images in the reflectance mode was measured in the near-IR spectral range of 650-1,060 nm. Using the LED array, various IR images were obtained in vivo, including images of vein structure in a pigmented, fatty forearm, varicose leg veins, and vascular lesions of the tongue. Imaging in the near-IR range (880-930 nm) provides relatively good contrast of subcutaneous veins, underscoring its value for diagnosis. This technique has the potential for the diagnosis of varicose veins with a diameter of 0.5-2 mm at a depth of 1-3 mm, guidance of venous access, podiatry, phlebotomy, injection sclerotherapy, and control of laser interstitial therapy. Copyright 2004 Wiley-Liss, Inc.
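The abstract reports contrast of vein images measured across the near-IR range without specifying its exact metric; Michelson contrast between a vein region and the surrounding skin is one standard choice (an assumption here, not confirmed by the paper):

```python
import numpy as np

def michelson_contrast(vein_roi, background_roi):
    """Michelson contrast |Ib - Iv| / (Ib + Iv) between the mean
    intensity of a vein region and the surrounding skin background;
    a common way to compare vein visibility across illumination
    wavelengths (metric choice is an assumption, not from the paper)."""
    iv = float(np.mean(vein_roi))
    ib = float(np.mean(background_roi))
    return abs(ib - iv) / (ib + iv)
```

A dark vein at mean reflectance 0.2 against skin at 0.6 gives a contrast of 0.5; sweeping the illumination wavelength and recomputing this value is how one would identify the band (here, roughly 880-930 nm) with the best vein visibility.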
NASA Astrophysics Data System (ADS)
Woo, Sungsoo; Kang, Sungsam; Yoon, Changhyeong; Choi, Wonshik
2016-03-01
With the advancement of 3D display technology, 3D imaging of macroscopic objects has drawn much attention as it provides the content to display. The most widely used imaging methods include depth cameras, which measure time of flight for depth discrimination, and various structured illumination techniques. However, these existing methods have poor depth resolution, which makes imaging complicated structures a difficult task. To resolve this issue, we propose an imaging system based upon low-coherence interferometry and off-axis digital holographic imaging. By using a light source with a coherence length of 200 μm, we achieved a depth resolution of 100 μm. In order to map macroscopic objects with this high axial resolution, we installed a pair of prisms in the reference beam path for long-range scanning of the optical path length. Specifically, one prism was fixed in position, and the other was mounted on a translation stage and translated parallel to the first prism. Due to the multiple internal reflections between the two prisms, the overall path length was elongated by a factor of 50. In this way, we could cover a depth range of more than 1 meter. In addition, we employed multiple speckle illuminations and incoherent averaging of the acquired holographic images to reduce specular reflections from the target surface. Using this newly developed system, we imaged targets with multiple different layers and demonstrated imaging of targets hidden behind scattering layers. The method was also applied to imaging targets located around a corner.
NASA Astrophysics Data System (ADS)
Conard, S. J.; Weaver, H. A.; Núñez, J. I.; Taylor, H. W.; Hayes, J. R.; Cheng, A. F.; Rodgers, D. J.
2017-09-01
The Long-Range Reconnaissance Imager (LORRI) is a high-resolution imaging instrument on the New Horizons spacecraft. LORRI collected over 5000 images during the approach and fly-by of the Pluto system in 2015, including the highest resolution images of Pluto and Charon and the four much smaller satellites (Styx, Nix, Kerberos, and Hydra) near the time of closest approach on 14 July 2015. LORRI is a narrow field of view (0.29°), Ritchey-Chrétien telescope with a 20.8 cm diameter primary mirror and a three-lens field flattener. The telescope has an effective focal length of 262 cm. The focal plane unit consists of a 1024 × 1024 pixel charge-coupled device (CCD) detector operating in frame transfer mode. LORRI provides panchromatic imaging over a bandpass that extends approximately from 350 nm to 850 nm. The instrument operates in an extreme thermal environment, viewing space from within the warm spacecraft. For this reason, LORRI has a silicon carbide optical system with passive thermal control, designed to maintain focus without adjustment over a wide temperature range from -100 °C to +50 °C. LORRI operated flawlessly throughout the encounter period, providing both science and navigation imaging of the Pluto system. We describe the preparations for the Pluto system encounter, including pre-encounter rehearsals, calibrations, and navigation imaging. In addition, we describe LORRI operations during the encounter, and the resulting imaging performance. Finally, we also briefly describe the post-Pluto encounter imaging of other Kuiper belt objects and the plans for the upcoming encounter with KBO 2014 MU69.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros
Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases initially deemed non-visualizable on conventional US imaging.
Thermal and range fusion for a planetary rover
NASA Technical Reports Server (NTRS)
Caillas, Claude
1992-01-01
This paper describes how fusion between thermal and range imaging allows us to discriminate different types of materials in outdoor scenes. First, we analyze how pure vision segmentation algorithms applied to thermal images allow discriminating materials such as rock and sand. Second, we show how combining thermal and range information allows us to better discriminate rocks from sand. Third, as an application, we examine how an autonomous legged robot can use these techniques to explore other planets.
NASA Astrophysics Data System (ADS)
Göhler, Benjamin; Lutzmann, Peter
2017-10-01
Primarily, a laser gated-viewing (GV) system provides range-gated 2D images without any range resolution within the range gate. By combining two GV images with slightly different gate positions, 3D information within a part of the range gate can be obtained. The depth resolution is higher (super-resolution) than the minimal gate shift step size in a tomographic sequence of the scene. For a state-of-the-art system with a typical frame rate of 20 Hz, the time difference between the two required GV images is 50 ms which may be too long in a dynamic scenario with moving objects. Therefore, we have applied this approach to the reset and signal level images of a new short-wave infrared (SWIR) GV camera whose read-out integrated circuit supports correlated double sampling (CDS) actually intended for the reduction of kTC noise (reset noise). These images are extracted from only one single laser pulse with a marginal time difference in between. The SWIR GV camera consists of 640 x 512 avalanche photodiodes based on mercury cadmium telluride with a pixel pitch of 15 μm. A Q-switched, flash lamp pumped solid-state laser with 1.57 μm wavelength (OPO), 52 mJ pulse energy after beam shaping, 7 ns pulse length and 20 Hz pulse repetition frequency is used for flash illumination. In this paper, the experimental set-up is described and the operating principle of CDS is explained. The method of deriving super-resolution depth information from a GV system by using CDS is introduced and optimized. Further, the range accuracy is estimated from measured image data.
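The two-image range recovery exploits how the gate profile weights returns from different depths. A hypothetical linear (triangular-gate) ratio model is sketched below; the function, its linearity assumption, and the parameters are illustrative, not the paper's calibration:

```python
import numpy as np

def depth_from_two_gates(i1, i2, z_gate_start, gate_shift):
    """Hypothetical intensity-ratio range estimate for gated viewing:
    given two gated images i1, i2 whose gates are shifted by a depth
    equivalent of gate_shift, assume the normalized ratio i2/(i1+i2)
    varies linearly with depth inside the gate overlap region."""
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    ratio = i2 / np.maximum(i1 + i2, 1e-12)  # guard against zero signal
    return z_gate_start + ratio * gate_shift
```

A pixel returning equal intensity in both gates sits halfway through the shift; one returning signal only in the first gate sits at the gate start. The CDS trick in the paper supplies the two required images from a single laser pulse, so this ratio can be evaluated without the 50 ms inter-frame delay.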
NASA Astrophysics Data System (ADS)
Patch, S. K.; Kireeff Covo, M.; Jackson, A.; Qadadha, Y. M.; Campbell, K. S.; Albright, R. A.; Bloemhard, P.; Donoghue, A. P.; Siero, C. R.; Gimpel, T. L.; Small, S. M.; Ninemire, B. F.; Johnson, M. B.; Phair, L.
2016-08-01
The potential of particle therapy due to focused dose deposition in the Bragg peak has not yet been fully realized due to inaccuracies in range verification. The purpose of this work was to correlate the Bragg peak location with target structure, by overlaying the location of the Bragg peak onto a standard ultrasound image. Pulsed delivery of 50 MeV protons was accomplished by a fast chopper installed between the ion source and the cyclotron inflector. The chopper limited the train of bunches so that 2 Gy were delivered in 2 μs. The ion pulse generated thermoacoustic pulses that were detected by a cardiac ultrasound array, which also produced a grayscale ultrasound image. A filtered backprojection algorithm focused the received signal to the Bragg peak location with perfect co-registration to the ultrasound images. Data was collected in a room temperature water bath and gelatin phantom with a cavity designed to mimic the intestine, in which gas pockets can displace the Bragg peak. Phantom experiments performed with the cavity both empty and filled with olive oil confirmed that displacement of the Bragg peak due to anatomical change could be detected. Thermoacoustic range measurements in the waterbath agreed with Monte Carlo simulation within 1.2 mm. In the phantom, thermoacoustic range estimates and first-order range estimates from CT images agreed to within 1.5 mm.
Pérez-Beteta, Julián; Martínez-González, Alicia; Martino, Juan; Velasquez, Carlos; Arana, Estanislao; Pérez-García, Víctor M.
2017-01-01
Purpose: Textural measures have been widely explored as imaging biomarkers in cancer. However, their robustness under dynamic range and spatial resolution changes in brain 3D magnetic resonance images (MRI) has not been assessed. The aim of this work was to study potential variations of textural measures due to changes in MRI protocols. Materials and Methods: Twenty patients harboring glioblastoma with pretreatment 3D T1-weighted MRIs were included in the study. Four different spatial resolution combinations and three dynamic ranges were studied for each patient. Sixteen three-dimensional textural heterogeneity measures were computed for each patient and configuration, including co-occurrence matrix (CM) features and run-length matrix (RLM) features. The coefficient of variation was used to assess the robustness of the measures in two series of experiments corresponding to (i) changing the dynamic range and (ii) changing the matrix size. Results: No textural measures were robust under dynamic range changes. Entropy was the only textural feature robust under spatial resolution changes (coefficient of variation under 10% in all cases). Conclusion: Textural measures of three-dimensional brain tumor images are robust neither under dynamic range nor under matrix size changes. Standards should be harmonized to use textural features as imaging biomarkers in radiomics-based studies. The implications of this work go beyond the specific tumor type studied here and pose the need for standardization in textural feature calculation of oncological images. PMID:28586353
NASA Astrophysics Data System (ADS)
Jing, Joseph C.; Chou, Lidek; Su, Erica; Wong, Brian J. F.; Chen, Zhongping
2016-12-01
The upper airway is a complex tissue structure that is prone to collapse. Current methods for studying airway obstruction are inadequate in safety, cost, or availability, such as CT or MRI, or only provide localized qualitative information such as flexible endoscopy. Long range optical coherence tomography (OCT) has been used to visualize the human airway in vivo, however the limited imaging range has prevented full delineation of the various shapes and sizes of the lumen. We present a new long range OCT system that integrates high speed imaging with a real-time position tracker to allow for the acquisition of an accurate 3D anatomical structure in vivo. The new system can achieve an imaging range of 30 mm at a frame rate of 200 Hz. The system is capable of generating a rapid and complete visualization and quantification of the airway, which can then be used in computational simulations to determine obstruction sites.
Prospective feasibility analysis of a novel off-line approach for MR-guided radiotherapy.
Bostel, Tilman; Pfaffenberger, Asja; Delorme, Stefan; Dreher, Constantin; Echner, Gernot; Haering, Peter; Lang, Clemens; Splinter, Mona; Laun, Frederik; Müller, Marco; Jäkel, Oliver; Debus, Jürgen; Huber, Peter E; Sterzing, Florian; Nicolay, Nils H
2018-05-01
The present work aimed to analyze the feasibility of a shuttle-based MRI-guided radiation therapy (MRgRT) in the treatment of pelvic malignancies. 20 patients with pelvic malignancies were included in this prospective feasibility analysis. Patients underwent daily MRI in treatment position prior to radiotherapy at the German Cancer Research Center. Positional inaccuracies, time and patient compliance were assessed for the application of off-line MRgRT. In 78% of applied radiation fractions, MR imaging for position verification could be performed without problems. Additionally, treatment-related side effects and reduced patient compliance were only responsible for omission of MRI in 9% of radiation fractions. The study workflow took a median time of 61 min (range 47-99 min); duration for radiotherapy alone was 13 min (range 7-26 min). Patient positioning, MR imaging and CT imaging including patient repositioning and the shuttle transfer required median times of 10 min (range 7-14 min), 26 min (range 15-60 min), 5 min (range 3-8 min) and 8 min (range 2-36 min), respectively. To assess feasibility of shuttle-based MRgRT, the reference point coordinates for the x, y and z axis were determined for the MR images and CT obtained prior to the first treatment fraction and correlated with the coordinates of the planning CT. In our dataset, the median positional difference between MR imaging and CT-based imaging based on fiducial matching between MR and CT imaging was equal to or less than 2 mm in all spatial directions. The limited space in the MR scanner influenced patient selection, as the bore of the scanner had to accommodate the immobilization device and the constructed stereotactic frame. Therefore, obese, extremely muscular or very tall patients could not be included in this trial in addition to patients for whom exposure to MRI was generally judged inappropriate. 
This trial demonstrated for the first time the feasibility and patient compliance of a shuttle-based off-line approach to MRgRT of pelvic malignancies.
Swap intensified WDR CMOS module for I2/LWIR fusion
NASA Astrophysics Data System (ADS)
Ni, Yang; Noguier, Vincent
2015-05-01
The combination of a high-resolution visible-near-infrared low-light sensor and a moderate-resolution uncooled thermal sensor provides an efficient way to perform multi-task night vision. Tremendous progress has been made on uncooled thermal sensors (a-Si, VOx, etc.): it is now possible to build a miniature uncooled thermal camera module in a tiny 1 cm3 cube with <1 W power consumption. Silicon-based solid-state low-light CCD/CMOS sensors have also shown constant progress in terms of readout noise, dark current, resolution and frame rate. In contrast to thermal sensing, which is intrinsically day-and-night operational, silicon-based solid-state sensors are not yet capable of the night-vision performance required by defense and critical surveillance applications. Readout noise and dark current are two major obstacles. The low dynamic range of silicon sensors in high-sensitivity mode is also an important limiting factor, leading to recognition failure due to local or global saturation and blooming. In this context, the image intensifier based solution is still attractive for the following reasons: 1) high gain and ultra-low dark current; 2) wide dynamic range and 3) ultra-low power consumption. With the high electron gain and ultra-low dark current of an image intensifier, the only requirements on the silicon image pickup device are resolution, dynamic range and power consumption. In this paper, we present a SWAP intensified Wide Dynamic Range CMOS module for night vision applications, especially for I2/LWIR fusion. This module is based on a dedicated CMOS image sensor using a solar-cell mode photodiode logarithmic pixel design which covers a huge dynamic range (>140 dB) without saturation or blooming. The ultra-wide dynamic range image from this new generation logarithmic sensor can be used directly without any image processing and provides instant light accommodation. The complete module is only slightly bigger than a simple ANVIS-format I2 tube with <500 mW power consumption.
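The logarithmic pixel response that gives the module its >140 dB range can be sketched with the standard solar-cell-mode diode relation. The sketch below is a generic model under stated assumptions, not the actual sensor design; the thermal voltage, ideality factor and saturation current are illustrative values.

```python
import math

# Generic solar-cell-mode logarithmic pixel model (illustrative, not the
# sensor described above): V = n * Vt * ln(1 + Iph / Is).
VT = 0.026      # thermal voltage at room temperature, volts
N_IDEALITY = 1.0
I_SAT = 1e-15   # diode saturation current, amps (assumed value)

def pixel_voltage(photocurrent_amps):
    """Logarithmic pixel output voltage for a given photocurrent."""
    return N_IDEALITY * VT * math.log(1.0 + photocurrent_amps / I_SAT)

# A seven-decade (140 dB) spread in photocurrent is compressed into a
# sub-volt output swing, which is why the pixel neither saturates nor blooms:
v_low = pixel_voltage(1e-14)   # starlight-level photocurrent
v_high = pixel_voltage(1e-7)   # direct-illumination photocurrent
```

The compression is the point: the voltage difference between the darkest and brightest scene content stays small even as the optical input spans many decades.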
Han, Jun; Won, Seok-Hyung; Kim, Jung-Taek; Hahn, Myung-Hoon
2018-01-01
Purpose Femoroacetabular impingement (FAI) is considered an important cause of early degenerative arthritis development. Although three-dimensional (3D) imaging such as computed tomography (CT) and magnetic resonance imaging are considered precise imaging modalities for 3D morphology of FAI, they are associated with several limitations when used in out-patient clinics. The paucity of FAI morphologic data in Koreans makes it difficult to select the most effective radiographical method when screening for general orthopedic problems. We postulate that there might be an individual variation in the distribution of cam deformity in the asymptomatic Korean population. Materials and Methods From January 2011 to December 2015, CT images of the hips of 100 subjects without any history of hip joint ailments were evaluated. A computer program which generates 3D models from CT scans was used to provide sectional images which cross the central axis of the femoral head and neck. Alpha angles were measured in each sectional image. Alpha angles above 55° were regarded as cam deformity. Results The mean alpha angle was 43.5°, range 34.7–56.1° (3 o'clock); 51.24°, range 39.5–58.8° (2 o'clock); 52.45°, range 43.3–65.5° (1 o'clock); 44.09°, range 36.8–49.8° (12 o'clock); 40.71°, range 33.5–45.8° (11 o'clock); and 39.21°, range 34.1–44.6° (10 o'clock). The alpha angle at the 1 and 2 o'clock positions was significantly larger than at other locations (P<0.01). The prevalence of cam deformity was 18.0% and 19.0% at 1 and 2 o'clock, respectively. Conclusion Cam deformity of FAI was observed in 31% of asymptomatic hips. The most common region of cam deformity was the antero-superior area of the femoral head-neck junction (1 and 2 o'clock). PMID:29564291
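The alpha-angle measurement behind these results reduces to a plane-geometry computation at the femoral head centre. The sketch below is an illustrative implementation of that geometry, not the authors' software; the 55° cam threshold follows the study's definition, and all point coordinates are hypothetical inputs.

```python
import math

# Illustrative alpha-angle computation: the angle at the femoral head centre
# between the head-neck axis and the first contour point where bone extends
# beyond the best-fit head circle. Inputs are 2D points from one sectional image.
def alpha_angle(head_center, neck_axis_point, off_circle_point):
    """Angle in degrees between the two rays leaving the head centre."""
    def unit(p):
        dx, dy = p[0] - head_center[0], p[1] - head_center[1]
        norm = math.hypot(dx, dy)
        return (dx / norm, dy / norm)
    u, v = unit(neck_axis_point), unit(off_circle_point)
    dot = max(-1.0, min(1.0, u[0] * v[0] + u[1] * v[1]))
    return math.degrees(math.acos(dot))

def is_cam_deformity(angle_deg, threshold=55.0):
    # The study above regarded alpha angles above 55 degrees as cam deformity.
    return angle_deg > threshold
```

With the neck axis along one coordinate axis and the off-circle point along the other, the function returns 90°, which is a quick sanity check of the geometry.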
Direct-conversion flat-panel imager with avalanche gain: Feasibility investigation for HARP-AMFPI
Wronski, M. M.; Rowlands, J. A.
2008-01-01
The authors are investigating the concept of a direct-conversion flat-panel imager with avalanche gain for low-dose x-ray imaging. It consists of an amorphous selenium (a-Se) photoconductor partitioned into a thick drift region for x-ray-to-charge conversion and a relatively thin region called high-gain avalanche rushing photoconductor (HARP) in which the charge undergoes avalanche multiplication. An active matrix of thin film transistors is used to read out the electronic image. The authors call the proposed imager HARP active matrix flat panel imager (HARP-AMFPI). The key advantages of HARP-AMFPI are its high spatial resolution, owing to the direct-conversion a-Se layer, and its programmable avalanche gain, which can be enabled during low dose fluoroscopy to overcome electronic noise and disabled during high dose radiography to prevent saturation of the detector elements. This article investigates key design considerations for HARP-AMFPI. The effects of electronic noise on the imaging performance of HARP-AMFPI were modeled theoretically and system parameters were optimized for radiography and fluoroscopy. The following imager properties were determined as a function of avalanche gain: (1) the spatial frequency dependent detective quantum efficiency; (2) fill factor; (3) dynamic range and linearity; and (4) gain nonuniformities resulting from electric field strength nonuniformities. The authors' results showed that avalanche gains of 5 and 20 enable x-ray quantum noise limited performance throughout the entire exposure range in radiography and fluoroscopy, respectively. It was shown that HARP-AMFPI can provide the required gain while maintaining a 100% effective fill factor and a piecewise dynamic range over five orders of magnitude (10−7–10−2 R∕frame).
The authors have also shown that imaging performance is not significantly affected by the following: electric field strength nonuniformities, avalanche noise for x-ray energies above 1 keV and direct interaction of x rays in the gain region. Thus, HARP-AMFPI is a promising flat-panel imager structure that enables high-resolution fully quantum noise limited x-ray imaging over a wide exposure range. PMID:19175080
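The gain argument above can be illustrated with a toy input-referred noise model: avalanche gain multiplies the quantum signal and its noise, while additive electronic noise is fixed, so the input-referred electronic noise shrinks with gain. This is a deliberate simplification (single additive electronic noise term, Poisson quantum noise), not the authors' full cascaded-systems analysis; the electronic-noise value is invented.

```python
# Toy noise model: fraction of total input-referred noise variance that is
# quantum (Poisson) noise, for a given avalanche gain g. sigma_e is the
# electronic noise expressed in x-ray-quantum equivalents at unity gain
# (an assumed value for illustration).
def quantum_noise_fraction(n_quanta, gain, sigma_e=30.0):
    """Return quantum-noise share of input-referred noise variance."""
    quantum_var = n_quanta                  # Poisson variance of detected quanta
    electronic_var = (sigma_e / gain) ** 2  # electronic noise referred to input
    return quantum_var / (quantum_var + electronic_var)

# At fluoroscopic doses (few quanta per pixel per frame), enabling gain moves
# the detector from electronics-limited to x-ray quantum noise limited:
low_dose_with_gain = quantum_noise_fraction(n_quanta=50, gain=20)
low_dose_no_gain = quantum_noise_fraction(n_quanta=50, gain=1)
```

The same model also shows why gain can be disabled at radiographic doses: with many quanta per pixel, the quantum term dominates even at unity gain.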
Direct-conversion flat-panel imager with avalanche gain: Feasibility investigation for HARP-AMFPI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wronski, M. M.; Rowlands, J. A.
2008-12-15
The authors are investigating the concept of a direct-conversion flat-panel imager with avalanche gain for low-dose x-ray imaging. It consists of an amorphous selenium (a-Se) photoconductor partitioned into a thick drift region for x-ray-to-charge conversion and a relatively thin region called high-gain avalanche rushing photoconductor (HARP) in which the charge undergoes avalanche multiplication. An active matrix of thin film transistors is used to read out the electronic image. The authors call the proposed imager HARP active matrix flat panel imager (HARP-AMFPI). The key advantages of HARP-AMFPI are its high spatial resolution, owing to the direct-conversion a-Se layer, and its programmable avalanche gain, which can be enabled during low dose fluoroscopy to overcome electronic noise and disabled during high dose radiography to prevent saturation of the detector elements. This article investigates key design considerations for HARP-AMFPI. The effects of electronic noise on the imaging performance of HARP-AMFPI were modeled theoretically and system parameters were optimized for radiography and fluoroscopy. The following imager properties were determined as a function of avalanche gain: (1) the spatial frequency dependent detective quantum efficiency; (2) fill factor; (3) dynamic range and linearity; and (4) gain nonuniformities resulting from electric field strength nonuniformities. The authors' results showed that avalanche gains of 5 and 20 enable x-ray quantum noise limited performance throughout the entire exposure range in radiography and fluoroscopy, respectively. It was shown that HARP-AMFPI can provide the required gain while maintaining a 100% effective fill factor and a piecewise dynamic range over five orders of magnitude (10−7–10−2 R/frame).
The authors have also shown that imaging performance is not significantly affected by the following: electric field strength nonuniformities, avalanche noise for x-ray energies above 1 keV and direct interaction of x rays in the gain region. Thus, HARP-AMFPI is a promising flat-panel imager structure that enables high-resolution fully quantum noise limited x-ray imaging over a wide exposure range.
Chen, Liang; Carlton Jones, Anoma Lalani; Mair, Grant; Patel, Rajiv; Gontsarova, Anastasia; Ganesalingam, Jeban; Math, Nikhil; Dawson, Angela; Aweid, Basaam; Cohen, David; Mehta, Amrish; Wardlaw, Joanna; Rueckert, Daniel; Bentley, Paul
2018-05-15
Purpose To validate a random forest method for segmenting cerebral white matter lesions (WMLs) on computed tomographic (CT) images in a multicenter cohort of patients with acute ischemic stroke, by comparison with fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images and expert consensus. Materials and Methods A retrospective sample of 1082 acute ischemic stroke cases was obtained that was composed of unselected patients who were treated with thrombolysis or who were undergoing contemporaneous MR imaging and CT, and a subset of International Stroke Thrombolysis-3 trial participants. Automated delineations of WML on images were validated relative to experts' manual tracings on CT images and co-registered FLAIR MR imaging, and ratings were performed by using two conventional ordinal scales. Analyses included correlations between CT and MR imaging volumes, and agreements between automated and expert ratings. Results Automated WML volumes correlated strongly with expert-delineated WML volumes at MR imaging and CT (r² = 0.85 and 0.71, respectively; P < .001). Spatial similarity of automated maps, relative to WML MR imaging, was not significantly different from that of expert WML tracings on CT images. Individual expert WML volumes at CT correlated well with each other (r² = 0.85), but varied widely (range, 91% of mean estimate; median estimate, 11 mL; range of estimated ranges, 0.2-68 mL). Agreements (κ) between automated ratings and consensus ratings were 0.60 (Wahlund system) and 0.64 (van Swieten system) compared with agreements between individual pairs of experts of 0.51 and 0.67, respectively, for the two rating systems (P < .01 for Wahlund system comparison of agreements). Accuracy was unaffected by established infarction, acute ischemic changes, or atrophy (P > .05). Automated preprocessing failure rate was 4%; rating errors occurred in a further 4%. Total automated processing time averaged 109 seconds (range, 79-140 seconds).
Conclusion An automated method for quantifying CT cerebral white matter lesions achieves a similar accuracy to experts in unselected and multicenter cohorts. © RSNA, 2018 Online supplemental material is available for this article.
The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation.
Ritschl, Ludwig; Kuntz, Jan; Fleischmann, Christof; Kachelrieß, Marc
2016-05-01
In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. The proposed trajectory leads to reconstructions without limited angle artifacts. Compared to the limited angle reconstructions of 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited angle scan. 
The method proposed here enables 3D imaging using C-arms with less than 180° of rotation range, adding full 3D functionality to a C-arm device while retaining both the handling comfort and the usability of 2D imaging. This method has clear potential for clinical use, especially to meet the increasing demand for intraoperative 3D imaging.
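The angular bookkeeping behind the rotate-plus-shift idea can be sketched in a few lines. This treats each lateral shift as contributing half the fan angle of parallel-beam-equivalent coverage, as described above, and takes 180° of parallel-equivalent coverage as the completeness condition; both are simplifying assumptions for illustration, not the paper's full sampling analysis.

```python
# Simplified completeness bookkeeping for the rotate-plus-shift trajectory
# (an interpretation for illustration): each lateral shift of the divergent
# fan adds half the fan angle of parallel-beam-equivalent angular coverage.
def parallel_coverage_deg(rotation_deg, fan_deg, n_shifts=2):
    """Parallel-beam-equivalent angular coverage of the trajectory."""
    return rotation_deg + n_shifts * (fan_deg / 2.0)

def is_complete(rotation_deg, fan_deg, n_shifts=2):
    """Complete sampling: at least 180 deg of parallel-equivalent coverage."""
    return parallel_coverage_deg(rotation_deg, fan_deg, n_shifts) >= 180.0

# A C-arm with a 20 deg fan limited to 160 deg (= 180 deg minus fan angle) of
# mechanical rotation is limited-angle on its own, but complete with both shifts:
limited_only = is_complete(160.0, 20.0, n_shifts=0)
with_shifts = is_complete(160.0, 20.0, n_shifts=2)
```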
The rotate-plus-shift C-arm trajectory. Part I. Complete data with less than 180° rotation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritschl, Ludwig; Fleischmann, Christof; Kuntz, Jan, E-mail: j.kuntz@dkfz.de
Purpose: In the last decade, C-arm-based cone-beam CT became a widely used modality for intraoperative imaging. Typically a C-arm CT scan is performed using a circular or elliptical trajectory around a region of interest. Therefore, an angular range of at least 180° plus fan angle must be covered to ensure a completely sampled data set. However, mobile C-arms designed with a focus on classical 2D applications like fluoroscopy may be limited to a mechanical rotation range of less than 180° to improve handling and usability. The method proposed in this paper allows for the acquisition of a fully sampled data set with a system limited to a mechanical rotation range of at least 180° minus fan angle using a new trajectory design. This enables CT like 3D imaging with a wide range of C-arm devices which are mainly designed for 2D imaging. Methods: The proposed trajectory extends the mechanical rotation range of the C-arm system with two additional linear shifts. Due to the divergent character of the fan-beam geometry, these two shifts lead to an additional angular range of half of the fan angle. Combining one shift at the beginning of the scan followed by a rotation and a second shift, the resulting rotate-plus-shift trajectory enables the acquisition of a completely sampled data set using only 180° minus fan angle of rotation. The shifts can be performed using, e.g., the two orthogonal positioning axes of a fully motorized C-arm system. The trajectory was evaluated in phantom and cadaver examinations using two prototype C-arm systems. Results: The proposed trajectory leads to reconstructions without limited angle artifacts. Compared to the limited angle reconstructions of 180° minus fan angle, image quality increased dramatically. Details in the rotate-plus-shift reconstructions were clearly depicted, whereas they are dominated by artifacts in the limited angle scan.
Conclusions: The method proposed here enables 3D imaging using C-arms with less than 180° of rotation range, adding full 3D functionality to a C-arm device while retaining both the handling comfort and the usability of 2D imaging. This method has clear potential for clinical use, especially to meet the increasing demand for intraoperative 3D imaging.
Limited-angle tomography for analyzer-based phase-contrast X-ray imaging
Majidi, Keivan; Wernick, Miles N; Li, Jun; Muehleman, Carol; Brankov, Jovan G
2014-01-01
Multiple-Image Radiography (MIR) is an analyzer-based imaging (ABI) phase-contrast X-ray method, which is emerging as a potential alternative to conventional radiography. MIR simultaneously generates three planar parametric images containing information about scattering, refraction and attenuation properties of the object. The MIR planar images are linear tomographic projections of the corresponding object properties, which allows reconstruction of volumetric images using computed tomography (CT) methods. However, when acquiring a full range of linear projections around the tissue of interest is not feasible or the scanning time is limited, limited-angle tomography techniques can be used to reconstruct these volumetric images near the central plane, which is the plane that contains the pivot point of the tomographic movement. In this work, we use computer simulations to explore the applicability of limited-angle tomography to MIR. We also investigate the accuracy of reconstructions as a function of the number of tomographic angles for a fixed total radiation exposure. We use this function to find an optimal range of angles over which data should be acquired for limited-angle tomography MIR (LAT-MIR). Next, we apply the LAT-MIR technique to experimentally acquired MIR projections obtained in a cadaveric human thumb study. We compare the reconstructed slices near the central plane to the same slices reconstructed by CT-MIR using the full angular view around the object. Finally, we perform a task-based evaluation of LAT-MIR performance for different numbers of angular views, and use template matching to detect cartilage in the refraction image near the central plane. We use the signal-to-noise ratio of this test as the detectability metric to investigate an optimum range of tomographic angles for detecting soft tissues in LAT-MIR.
Both results show that there is an optimum range of angular view for data acquisition where LAT-MIR yields the best performance, comparable to CT-MIR only if one considers volumetric images near the central plane and not the whole volume. PMID:24898008
Limited-angle tomography for analyzer-based phase-contrast x-ray imaging
NASA Astrophysics Data System (ADS)
Majidi, Keivan; Wernick, Miles N.; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-07-01
Multiple-image radiography (MIR) is an analyzer-based phase-contrast x-ray imaging method, which is emerging as a potential alternative to conventional radiography. MIR simultaneously generates three planar parametric images containing information about scattering, refraction and attenuation properties of the object. The MIR planar images are linear tomographic projections of the corresponding object properties, which allows reconstruction of volumetric images using computed tomography (CT) methods. However, when acquiring a full range of linear projections around the tissue of interest is not feasible or the scanning time is limited, limited-angle tomography techniques can be used to reconstruct these volumetric images near the central plane, which is the plane that contains the pivot point of the tomographic movement. In this work, we use computer simulations to explore the applicability of limited-angle tomography to MIR. We also investigate the accuracy of reconstructions as a function of the number of tomographic angles for a fixed total radiation exposure. We use this function to find an optimal range of angles over which data should be acquired for limited-angle tomography MIR (LAT-MIR). Next, we apply the LAT-MIR technique to experimentally acquired MIR projections obtained in a cadaveric human thumb study. We compare the reconstructed slices near the central plane to the same slices reconstructed by CT-MIR using the full angular view around the object. Finally, we perform a task-based evaluation of LAT-MIR performance for different numbers of angular views, and use template matching to detect cartilage in the refraction image near the central plane. We use the signal-to-noise ratio of this test as the detectability metric to investigate an optimum range of tomographic angles for detecting soft tissues in LAT-MIR.
Both results show that there is an optimum range of angular view for data acquisition where LAT-MIR yields the best performance, comparable to CT-MIR only if one considers volumetric images near the central plane and not the whole volume.
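The template-matching detectability test used above can be illustrated with a small synthetic example: correlate a known template with a noisy image and score detectability as the correlation peak height relative to the background correlation statistics. This is a generic stand-in for the paper's metric; the template shape, image size and noise level below are all invented.

```python
import numpy as np

# Illustrative template-matching detectability score: zero-mean cross-
# correlation of a template over an image, with SNR defined as the peak
# relative to the mean and spread of the remaining correlation values.
def detectability_snr(image, template):
    """Peak-to-background SNR of the zero-mean cross-correlation map."""
    t = template - template.mean()
    th, tw = t.shape
    ih, iw = image.shape
    corr = np.empty((ih - th + 1, iw - tw + 1))
    for i in range(corr.shape[0]):
        for j in range(corr.shape[1]):
            patch = image[i:i + th, j:j + tw]
            corr[i, j] = ((patch - patch.mean()) * t).sum()
    background = corr[corr < corr.max()]
    return (corr.max() - background.mean()) / background.std()

rng = np.random.default_rng(0)
template = np.zeros((5, 5))
template[1:4, 1:4] = 1.0                       # a small square target
scene = rng.normal(0.0, 0.1, (32, 32))
scene[10:15, 10:15] += template                # embed the target in noise
snr_present = detectability_snr(scene, template)
snr_absent = detectability_snr(rng.normal(0.0, 0.1, (32, 32)), template)
```

A target actually present in the image produces a markedly higher score than pure noise, which is the basis of using this SNR as a detectability metric.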
Kuzmak, P. M.; Dayhoff, R. E.
1992-01-01
There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner. PMID:1482906
Kuzmak, P M; Dayhoff, R E
1992-01-01
There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner.
Venkatesh, Pradeep; Sharma, Reetika; Vashist, Nagender; Vohra, Rajpal; Garg, Satpal
2015-10-01
Red-free light allows better detection of vascular lesions as this wavelength is absorbed by hemoglobin; however, the current gold standard for the detection and grading of diabetic retinopathy remains 7-field color fundus photography. The goal of this study was to compare the ability of 7-field fundus photography using red-free light to detect retinopathy lesions with corresponding images captured using standard 7-field color photography. Non-stereoscopic standard 7-field 30° digital color fundus photography and 7-field 30° digital red-free fundus photography were performed in 200 eyes of 103 patients with various grades of diabetic retinopathy ranging from mild to moderate non-proliferative diabetic retinopathy to proliferative diabetic retinopathy. The color images (n = 1,400) were studied together with the corresponding red-free images (n = 1,400) by one retina consultant (PV) and two senior residents training in retina. The various retinal lesions [microaneurysms, hemorrhages, hard exudates, soft exudates, intra-retinal microvascular anomalies (IRMA), neovascularization of the retina elsewhere (NVE), and neovascularization of the disc (NVD)] detected by all three observers in each of the photographs were noted, followed by determination of agreement scores using κ values (range 0-1). The kappa coefficient was categorized as poor (≤0), slight (0.01-0.20), fair (0.21-0.40), moderate (0.41-0.60), substantial (0.61-0.80), and almost perfect (0.81-1). The number of lesions detected on red-free images alone was higher for all observers and all abnormalities except hard exudates. Detection of IRMA was especially higher for all observers with red-free images.
Between image pairs, there was substantial agreement for detection of hard exudates (average κ = 0.62, range 0.60-0.65) and moderate agreement for detection of hemorrhages (average κ = 0.52, range 0.45-0.58), soft exudates (average κ = 0.51, range 0.42-0.61), NVE (average κ = 0.47, range 0.39-0.53), and NVD (average κ = 0.51, range 0.45-0.54). Fair agreement was noted for detection of microaneurysms (average κ = 0.29, range 0.20-0.39) and IRMA (average κ = 0.23, range 0.23-0.24). Inter-observer agreement with color images was substantial for hemorrhages (average κ = 0.72), soft exudates (average κ = 0.65), and NVD (average κ = 0.65); moderate for microaneurysms (average κ = 0.42), NVE (average κ = 0.44), and hard exudates (average κ = 0.59); and fair for IRMA (average κ = 0.21). Inter-observer agreement with red-free images was substantial for hard exudates (average κ = 0.63) and moderate for detection of hemorrhages (average κ = 0.56), soft exudates (average κ = 0.60), IRMA (average κ = 0.50), NVE (average κ = 0.44), and NVD (average κ = 0.45). Digital red-free photography has a higher level of detection ability for all retinal lesions of diabetic retinopathy. More advanced grades of retinopathy are likely to be detected earlier with red-free imaging because of its better ability to detect IRMA, NVE, and NVD. Red-free monochromatic imaging of the retina is a more effective and less costly alternative for detection of vision-threatening diabetic retinopathy.
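The agreement scores reported above are Cohen's kappa values. The standard computation for two raters over the same items can be sketched as follows; the example rating lists are invented.

```python
# Cohen's kappa: chance-corrected agreement between two raters who each
# assign one category per item (here, lesion present = 1 / absent = 0).
def cohens_kappa(ratings_a, ratings_b):
    """Return kappa in (-inf, 1]; 1 is perfect agreement, ~0 is chance level."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    # Expected chance agreement from each rater's marginal category frequencies.
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1.0 - expected)
```

Identical rating lists give κ = 1, while two raters whose agreements occur exactly as often as chance predicts give κ = 0, which is why the study's κ bands (slight, fair, moderate, substantial) start just above zero.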
Multispectral Imaging in Cultural Heritage Conservation
NASA Astrophysics Data System (ADS)
Del Pozo, S.; Rodríguez-Gonzálvez, P.; Sánchez-Aparicio, L. J.; Muñoz-Nieto, A.; Hernández-López, D.; Felipe-García, B.; González-Aguilera, D.
2017-08-01
This paper sums up the main contributions of the thesis entitled "Multispectral imaging for the analysis of materials and pathologies in civil engineering, constructions and natural spaces", awarded by CIPA-ICOMOS for its connection with the preservation of Cultural Heritage. The thesis is framed within close-range remote sensing approaches based on the fusion of sensors operating in the optical domain (visible to shortwave infrared spectrum). In the field of heritage preservation, multispectral imaging is a suitable technique due to its non-destructive nature and its versatility. It combines imaging and spectroscopy to analyse materials and land covers and enables the use of a variety of different geomatic sensors for this purpose. These sensors collect both spatial and spectral information for a given scenario and a specific spectral range, so that their smallest storage units record the spectral properties of the radiation reflected by the surface of interest. The main goal of this research work is to characterise different construction materials as well as the main pathologies of Cultural Heritage elements by combining active and passive sensors recording data in different spectral ranges. Conclusions about the suitability of each type of sensor and spectral range are drawn in relation to each particular case study and damage. It should be emphasised that the results are not limited to images, since 3D intensity data from laser scanners can be integrated with 2D data from passive sensors, obtaining high quality products due to the added value that metric information brings to multispectral images.
NASA Astrophysics Data System (ADS)
Hennessy, Ricky; Koo, Chiwan; Ton, Phuc; Han, Arum; Righetti, Raffaella; Maitland, Kristen C.
2011-03-01
Ultrasound poroelastography can quantify structural and mechanical properties of tissues such as stiffness, compressibility, and fluid flow rate. This novel ultrasound technique is being explored to detect tissue changes associated with lymphatic disease. We have constructed a macroscopic fluorescence imaging system to validate ultrasonic fluid flow measurements and to provide high resolution imaging of microfluidic phantoms. The optical imaging system is composed of a white light source, excitation and emission filters, and a camera with a zoom lens. The field of view can be adjusted from 100 mm x 75 mm to 10 mm x 7.5 mm. The microfluidic device is made of polydimethylsiloxane (PDMS) and has 9 channels, each 40 μm deep with widths ranging from 30 μm to 200 μm. A syringe pump was used to propel water containing 15 μm diameter fluorescent microspheres through the microchannels, with flow rates ranging from 0.5 μl/min to 10 μl/min. Video was captured at a rate of 25 frames/sec. The velocity of the microspheres in the microchannels was calculated using an algorithm that tracked the movement of the fluorescent microspheres. The imaging system was able to measure particle velocities ranging from 0.2 mm/sec to 10 mm/sec. The range of flow velocities of interest in lymph vessels is between 1 mm/sec and 10 mm/sec; therefore our imaging system is sufficient to measure particle velocity in phantoms modeling lymphatic flow.
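The particle-tracking velocity measurement described above can be sketched as intensity-weighted centroid tracking across frames. This is a minimal stand-in for the authors' algorithm, assuming one bright particle per frame; the pixel size and synthetic frames below are invented.

```python
import numpy as np

# Minimal centroid-tracking velocimetry sketch: locate the intensity-weighted
# centroid in each frame, then convert the mean per-frame displacement to a
# physical speed using the (assumed) pixel size and frame rate.
def particle_velocity(frames, pixel_size_mm, frame_rate_hz):
    """Mean speed (mm/s) of the intensity-weighted centroid across frames."""
    centroids = []
    for frame in frames:
        total = frame.sum()
        ys, xs = np.indices(frame.shape)
        centroids.append(((ys * frame).sum() / total,
                          (xs * frame).sum() / total))
    steps = [np.hypot(y2 - y1, x2 - x1)
             for (y1, x1), (y2, x2) in zip(centroids, centroids[1:])]
    return float(np.mean(steps)) * pixel_size_mm * frame_rate_hz

# Synthetic check: a particle moving 2 pixels/frame at 25 frames/s with
# 0.02 mm pixels travels 2 * 0.02 * 25 = 1 mm/s, inside the lymphatic range.
frames = []
for step in range(4):
    f = np.zeros((16, 16))
    f[8, 2 + 2 * step] = 1.0
    frames.append(f)
speed = particle_velocity(frames, pixel_size_mm=0.02, frame_rate_hz=25.0)
```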
Objective quality assessment of tone-mapped images.
Yeganeh, Hojatollah; Wang, Zhou
2013-02-01
Tone-mapping operators (TMOs) that convert high dynamic range (HDR) to low dynamic range (LDR) images provide practically useful tools for the visualization of HDR images on standard LDR displays. Different TMOs create different tone-mapped images, and a natural question is which one has the best quality. Without an appropriate quality measure, different TMOs cannot be compared, and further improvement is directionless. Subjective rating may be a reliable evaluation method, but it is expensive and time consuming, and more importantly, is difficult to embed into optimization frameworks. Here we propose an objective quality assessment algorithm for tone-mapped images by combining: 1) a multiscale signal fidelity measure on the basis of a modified structural similarity index and 2) a naturalness measure on the basis of intensity statistics of natural images. Validations using independent subject-rated image databases show good correlations between subjective ranking score and the proposed tone-mapped image quality index (TMQI). Furthermore, we demonstrate the extended applications of TMQI using two examples: parameter tuning for TMOs and adaptive fusion of multiple tone-mapped images.
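The TMQI pooling step combines the two component measures with a weighted power function. The constants below follow the published formulation as best recalled and should be treated as approximate; the fidelity and naturalness scores are assumed inputs in [0, 1] computed by the two component measures described above.

```python
# TMQI-style pooling of structural fidelity S and naturalness N into one
# quality score: Q = a * S**alpha + (1 - a) * N**beta. The constants here
# (a ~ 0.8012, alpha ~ 0.3046, beta ~ 0.7088) are recalled from the published
# paper and are approximate.
def tmqi_pool(structural_fidelity, naturalness,
              a=0.8012, alpha=0.3046, beta=0.7088):
    """Overall quality in [0, 1] from component scores S and N in [0, 1]."""
    return a * structural_fidelity ** alpha + (1.0 - a) * naturalness ** beta

# Both components at their maximum give a perfect score of 1; degrading
# either component lowers the pooled score, which is what lets the index
# rank competing tone-mapped renderings of the same HDR image.
best = tmqi_pool(1.0, 1.0)
degraded = tmqi_pool(0.5, 0.5)
```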
NASA Astrophysics Data System (ADS)
Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin
2013-12-01
Spectral feature fitting (SFF) is a commonly used strategy for hyperspectral imagery analysis to discriminate ground targets. Compared to other image analysis techniques, SFF does not secure higher accuracy in extracting image information in all circumstances. Multi-range spectral feature fitting (MRSFF) from ENVI software allows the user to focus on spectral features of interest to yield better performance. Thus spectral wavelength ranges and their corresponding weights must be determined. The purpose of this article is to demonstrate the performance of MRSFF in oilseed rape planting area extraction. A practical method for defining the weighted values, the variance coefficient weight method, was proposed to set up the criterion. Oilseed rape field canopy spectra from the whole growth stage were collected prior to investigating its phenological varieties; oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples to be used in analyzing the oilseed rape field. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated corresponding to field-measured spectra from the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the spectral profile's entirety. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). The MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.
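The per-range fitting-and-weighting idea can be sketched as follows. This is a hedged reconstruction, not ENVI's implementation: each wavelength range gets an affine least-squares fit of the reference spectrum to the pixel spectrum, and the per-range RMSEs are pooled with caller-supplied weights standing in for the variance-coefficient weights described above. The example spectra are synthetic.

```python
import numpy as np

# Sketch of multi-range spectral fitting: per-range affine least-squares fit
# of a reference (endmember) spectrum to a pixel spectrum, with per-range
# RMSEs pooled by user-supplied weights. Lower scores mean a better match.
def mrsff_rmse(pixel_spectrum, reference_spectrum, ranges, weights):
    """Weighted RMSE of per-range linear fits between the two spectra."""
    rmses = []
    for start, stop in ranges:
        y = pixel_spectrum[start:stop]
        x = reference_spectrum[start:stop]
        design = np.column_stack([x, np.ones_like(x)])  # scale + offset fit
        coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
        residual = y - design @ coeffs
        rmses.append(np.sqrt(np.mean(residual ** 2)))
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, rmses) / weights.sum())

# A pixel spectrum that is an exact affine rescaling of the reference fits
# with essentially zero error in every range:
ref = np.linspace(0.1, 0.9, 40)
pix = 2.0 * ref + 0.05
score = mrsff_rmse(pix, ref, ranges=[(0, 20), (20, 40)], weights=[0.6, 0.4])
```

Splitting the spectrum into ranges with separate weights is what lets the user emphasize diagnostic absorption features without letting flat, uninformative parts of the spectrum dominate the fit.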
NASA Astrophysics Data System (ADS)
Lynam, Jeff R.
2001-09-01
A more highly integrated, electro-optical sensor suite using Laser Illuminated Viewing and Ranging (LIVAR) techniques is being developed under the Army Advanced Concept Technology-II (ACT-II) program for enhanced manportable target surveillance and identification. The ManPortable LIVAR system currently in development employs a wide array of sensor technologies that provide the foot-bound soldier and UGV significant advantages and capabilities in lightweight, fieldable target location, ranging, and imaging systems. The unit incorporates a wide field-of-view, 5° x 3°, uncooled LWIR passive sensor for primary target location. Laser range finding and active illumination are done with a triggered, flash-lamp-pumped, eyesafe micro-laser operating in the 1.5 micron region, used in conjunction with a range-gated, electron-bombarded CCD digital camera to image the target objective in a narrower, 0.3°, field of view. Target range is acquired using the integrated LRF, and a target position is calculated using data from other onboard devices providing GPS coordinates, tilt, bank, and corrected magnetic azimuth. Range gate timing and coordinated receiver optics focus control allow target imaging operations to be optimized. The onboard control electronics provide power-efficient system operation for extended field use from the internal, rechargeable battery packs. Image data storage, transmission, and processing capabilities are also being incorporated to provide the best all-around support for the electronic battlefield in this type of system. The paper will describe flash laser illumination technology, EBCCD camera technology with a flash laser detection system, and image resolution improvement through frame averaging.
Mackenzie, Alistair; Dance, David R; Workman, Adam; Yip, Mary; Wells, Kevin; Young, Kenneth C
2012-05-01
Undertaking observer studies to compare imaging technology using clinical radiological images is challenging due to patient variability. To achieve a significant result, a large number of patients would be required to compare cancer detection rates for different image detectors and systems. The aim of this work was to create a methodology where only one set of images is collected on one particular imaging system. These images are then converted to appear as if they had been acquired on a different detector and x-ray system. Therefore, the effect of a wide range of digital detectors on cancer detection or diagnosis can be examined without the need for multiple patient exposures. Three detectors and x-ray systems [Hologic Selenia (ASE), GE Essential (CSI), Carestream CR (CR)] were characterized in terms of signal transfer properties, noise power spectra (NPS), modulation transfer function, and grid properties. The contributions of the three noise sources (electronic, quantum, and structure noise) to the NPS were calculated by fitting a quadratic polynomial at each spatial frequency of the NPS against air kerma. A methodology was developed to degrade the images to have the characteristics of a different (target) imaging system. The simulated images were created by first linearizing the original images such that the pixel values were equivalent to the air kerma incident at the detector. The linearized image was then blurred to match the sharpness characteristics of the target detector. Noise was then added to the blurred image to correct for differences between the detectors and any required change in dose. The electronic, quantum, and structure noise were added appropriate to the air kerma selected for the simulated image and thus ensuring that the noise in the simulated image had the same magnitude and correlation as the target image. A correction was also made for differences in primary grid transmission, scatter, and veiling glare. 
The method was validated by acquiring images of a CDMAM contrast detail test object (Artinis, The Netherlands) at five different doses for the three systems. The ASE CDMAM images were then converted to appear with the imaging characteristics of target CR and CSI detectors. The measured threshold gold thicknesses of the simulated and target CDMAM images were closely matched at normal dose level and the average differences across the range of detail diameters were -4% and 0% for the CR and CSI systems, respectively. The conversion was successful for images acquired over a wide dose range. The average difference between simulated and target images for a given dose was a maximum of 11%. The validation shows that the image quality of a digital mammography image obtained with a particular system can be degraded, in terms of noise magnitude and color, sharpness, and contrast to account for differences in the detector and antiscatter grid. Potentially, this is a powerful tool for observer studies, as a range of image qualities can be examined by modifying an image set obtained at a single (better) image quality thus removing the patient variability when comparing systems.
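The three-step conversion pipeline described above (linearize to air kerma, blur to the target sharpness, top up the noise) can be sketched roughly as follows; the simple gain/offset linearization and a single frequency-domain MTF-ratio filter are simplifying assumptions, not the paper's measured detector characteristics:

```python
import numpy as np

def simulate_target_detector(img, gain, offset, mtf_ratio, extra_noise_std, rng=None):
    """Sketch of converting an image to appear as if acquired on a different
    detector. `mtf_ratio` is a 2D array (same shape as `img`) holding the
    target/source MTF ratio at each spatial frequency; `extra_noise_std` is
    the standard deviation of the noise that must be added so the simulated
    image reaches the target noise level. All parameter values here are
    illustrative assumptions."""
    rng = np.random.default_rng(0) if rng is None else rng
    kerma = (img - offset) / gain                                     # 1) linearize
    blurred = np.real(np.fft.ifft2(np.fft.fft2(kerma) * mtf_ratio))   # 2) match sharpness
    noisy = blurred + rng.normal(0.0, extra_noise_std, img.shape)     # 3) add noise
    return noisy
```

In the real method the added noise is shaped per frequency into electronic, quantum, and structure components rather than being white, and grid/scatter corrections are applied as well.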
Magellan radar image of Danu Montes in Lakshmi Region of Venus
NASA Technical Reports Server (NTRS)
1990-01-01
This Magellan radar mosaic image is of part of the Danu Montes in the Lakshmi Region of Venus. The area in the image is located at 329.6 degrees east longitude and 58.75 degrees north latitude. This image shows an area 40 kilometers (km) (19.6 miles) wide and 60 km (39.2 miles) long. Danu Montes is a mountain belt located at the southern edge of the Ishtar Terra highland region. It rises one to three kilometers above a flat plain to the north known as Lakshmi Planum. On the basis of Pioneer Venus, Arecibo and Venera data, Danu Montes and the other mountain belts surrounding Lakshmi Planum have been interpreted to be orogenic belts marking the focus of compressional deformation, much like the Appalachian and Andes ranges on Earth. In the upper right part of this image, relatively bright, smooth-textured plains of Lakshmi Planum are seen to embay the heavily deformed mountain range to the south. In the mountain range south of these plains the geology is dominated by abundant faults.
A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios
Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés
2011-01-01
Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method.
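The control scheme described above can be sketched as a single PID step driving the exposure time toward a target image statistic such as entropy or saturation level; the scalar form and the gains are illustrative assumptions, not the controller tuning from the paper:

```python
def pid_exposure_step(target, measured, state, kp=0.5, ki=0.1, kd=0.05):
    """One iteration of a PID loop nudging sensor exposure time toward a
    target image statistic. `state` carries (integral, previous_error)
    between frames; gains are illustrative."""
    integral, prev_error = state
    error = target - measured
    integral += error
    derivative = error - prev_error
    adjustment = kp * error + ki * integral + kd * derivative
    return adjustment, (integral, error)
```

Each frame, the returned adjustment is added to the exposure time and the loop repeats, which is how the method can settle on a new scene within a handful of frames.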
In vivo verification of particle therapy: how Compton camera configurations affect 3D image quality
NASA Astrophysics Data System (ADS)
Mackin, D.; Draeger, E.; Peterson, S.; Polf, J.; Beddar, S.
2017-05-01
The steep dose gradients enabled by the Bragg peaks of particle therapy beams are a double-edged sword. They enable highly conformal dose distributions, but even small deviations from the planned beam range can cause overdosing of healthy tissue or underdosing of the tumour. To reduce this risk, particle therapy treatment plans include margins large enough to account for all the sources of range uncertainty, which include patient setup errors, patient anatomy changes, and CT number to stopping power ratios. Any system that could verify the beam range in vivo would allow reduced margins and more conformal dose distributions. Toward our goal of developing such a system based on Compton camera (CC) imaging, we studied how three configurations (single camera, parallel opposed, and orthogonal) affect the quality of the 3D images. We found that single CC and parallel opposed configurations produced superior images in 2D. The increase in parallax produced by an orthogonal CC configuration was shown to be beneficial in producing artefact-free 3D images.
The long range voice coil atomic force microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnard, H.; Randall, C.; Bridges, D.
2012-02-15
Most current atomic force microscopes (AFMs) use piezoelectric ceramics for scan actuation. Piezoelectric ceramics provide precision motion with fast response to applied voltage potential. A drawback to piezoelectric ceramics is their inherently limited range. For many samples this is a nonissue, as imaging the nanoscale details is the goal. However, a key advantage of AFM over other microscopy techniques is its ability to image biological samples in aqueous buffer. Many biological specimens have topography for which the range of piezoactuated stages is limiting, a notable example of which is bone. In this article, we present the use of voice coils in scan actuation for an actuation range in the Z-axis an order of magnitude larger than any AFM commercially available today. The increased scan size will allow for imaging an important new variety of samples, including bone fractures.
Adaptive DOF for plenoptic cameras
NASA Astrophysics Data System (ADS)
Oberdörster, Alexander; Lensch, Hendrik P. A.
2013-03-01
Plenoptic cameras promise arbitrary re-focusing through a scene after capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique of recording light fields with an adaptive depth of focus. Between multiple exposures (or multiple recordings of the light field) the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus is chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.
HDR imaging and color constancy: two sides of the same coin?
NASA Astrophysics Data System (ADS)
McCann, John J.
2011-01-01
At first, we think that High Dynamic Range (HDR) imaging is a technique for improved recordings of scene radiances. Many of us think that human color constancy is a variation of a camera's automatic white balance algorithm. However, on closer inspection, glare limits the range of light we can detect in cameras and on retinas. All scene regions below middle gray are influenced, more or less, by the glare from the bright scene segments. Instead of accurate radiance reproduction, HDR imaging works well because it preserves the details in the scene's spatial contrast. Similarly, on closer inspection, human color constancy depends on spatial comparisons that synthesize appearances from all the scene segments. Can spatial image processing play similarly principal roles in both HDR imaging and color constancy?
New segmentation-based tone mapping algorithm for high dynamic range image
NASA Astrophysics Data System (ADS)
Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong
2017-07-01
The traditional tone mapping algorithm for the display of high dynamic range (HDR) images has the drawback of losing the impression of brightness, contrast, and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. First, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray level of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to get the final result. The experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.
Kim, Young-sun; Kim, Byoung-Gie; Rhim, Hyunchul; Bae, Duk-Soo; Lee, Jeong-Won; Kim, Tae-Joong; Choi, Chel Hun; Lee, Yoo-Young; Lim, Hyo Keun
2014-11-01
To determine whether semiquantitative perfusion magnetic resonance (MR) imaging parameters are associated with therapeutic effectiveness of MR imaging-guided high-intensity focused ultrasound (HIFU) ablation of uterine fibroids and which semiquantitative perfusion parameters are significant with regard to treatment efficiency. This study was approved by the institutional review board, and informed consent was obtained from all subjects. Seventy-seven women (mean age, 43.3 years) with 119 fibroids (mean diameter, 7.5 cm) treated with MR imaging-guided HIFU ablation were analyzed. The correlation between semiquantitative perfusion MR parameters (peak enhancement, relative peak enhancement, time to peak, wash-in rate, washout rate) and heating and ablation efficiencies (lethal thermal dose volume based on MR thermometry and nonperfused volume based on immediate contrast-enhanced imaging, divided by intended treatment volume) were evaluated by using a linear mixed model on a per-fibroid basis. The specific value of the significant parameter that had a substantial effect on treatment efficiency was determined. The mean peak enhancement, relative peak enhancement, time to peak, wash-in rate, and washout rate of the fibroids were 1293.1 ± 472.8 (range, 570.2-2477.8), 171.4% ± 57.2 (range, 0.6%-370.2%), 137.2 seconds ± 119.8 (range, 20.0-300.0 seconds), 79.5 per second ± 48.2 (range, 12.5-236.7 per second), and 11.4 per second ± 10.1 (range, 0-39.3 per second), respectively. Relative peak enhancement was found to be independently significant for both heating and ablation efficiencies (B = -0.002, P < .001 and B = -0.003, P = .050, respectively). The washout rate was significantly associated with ablation efficiency (B = -0.018, P = .043). Both efficiencies showed the most abrupt transitions at 220% of relative peak enhancement.
Relative peak enhancement at semiquantitative perfusion MR imaging was significantly associated with treatment efficiency of MR imaging-guided HIFU ablation of uterine fibroids, and a value of 220% or less is suggested as a screening guideline for more efficient treatment.
Advanced studies of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Ling, Hao
1994-01-01
In radar signature applications it is often desirable to generate the range profiles and inverse synthetic aperture radar (ISAR) images of a target. They can be used either as identification tools to distinguish and classify the target from a collection of possible targets, or as diagnostic/design tools to pinpoint the key scattering centers on the target. The simulation of synthetic range profiles and ISAR images is usually a time intensive task and computation time is of prime importance. Our research has been focused on the development of fast simulation algorithms for range profiles and ISAR images using the shooting and bouncing ray (SBR) method, a high frequency electromagnetic simulation technique for predicting the radar returns from realistic aerospace vehicles and the scattering by complex media.
Lim, Byoung-Gyun; Woo, Jea-Choon; Lee, Hee-Young; Kim, Young-Soo
2008-01-01
Synthetic wideband waveforms (SWW) combine a stepped frequency CW waveform and a chirp signal waveform to achieve high range resolution without requiring a large bandwidth or the consequent very high sampling rate. If an efficient algorithm like the range-Doppler algorithm (RDA) is used to acquire the SAR images for synthetic wideband signals, errors occur due to approximations, so the images may not show the best possible result. This paper proposes a modified subpulse SAR processing algorithm for synthetic wideband signals which is based on RDA. An experiment with an automobile-based SAR system showed that the proposed algorithm is quite accurate with a considerable improvement in resolution and quality of the obtained SAR image.
NASA Astrophysics Data System (ADS)
Purnamasari, L.; Iskandar, H. H. B.; Makes, B. N.
2017-08-01
In digitized radiography techniques, adjusting image enhancement can improve subjective image quality by optimizing brightness and contrast for diagnostic needs. The aim was to determine the value range of image enhancement (brightness and contrast) for chronic apical abscess and apical granuloma interpretation. 30 periapical radiographs diagnosed as chronic apical abscess and 30 diagnosed as apical granuloma were adjusted by changing brightness and contrast values. The value range of brightness and contrast adjustment that can be tolerated in radiographic interpretation of chronic apical abscess and apical granuloma spans from -10 to +10. Brightness and contrast adjustments on digital radiographs do not affect the radiographic interpretation of chronic apical abscess and apical granuloma if conducted within this value range.
Range image segmentation using Zernike moment-based generalized edge detector
NASA Technical Reports Server (NTRS)
Ghosal, S.; Mehrotra, R.
1992-01-01
The authors propose a novel Zernike moment-based generalized step edge detection method that can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all the parameters of this edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.
Clustering approaches to feature change detection
NASA Astrophysics Data System (ADS)
G-Michael, Tesfaye; Gunzburger, Max; Peterson, Janet
2018-05-01
The automated detection of changes occurring between multi-temporal images is of significant importance in a wide range of medical, environmental, safety, as well as many other settings. The usage of k-means clustering is explored as a means for detecting objects added to a scene. The silhouette score for the clustering is used to define the optimal number of clusters that should be used. For simple images having a limited number of colors, new objects can be detected by examining the change between the optimal number of clusters for the original and modified images. For more complex images, new objects may need to be identified by examining the relative areas covered by corresponding clusters in the original and modified images. Which method is preferable depends on the composition and range of colors present in the images. In addition to describing the clustering and change detection methodology of our proposed approach, we provide some simple illustrations of its application.
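A minimal sketch of the clustering step described above, assuming a small pure-NumPy k-means and the textbook silhouette definition in place of a library implementation; comparing the optimal cluster count between the original and modified images then flags an added object:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's algorithm with a deterministic spread-out init
    (an illustrative stand-in for a library k-means)."""
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def silhouette(X, labels):
    """Mean silhouette coefficient, computed directly from its definition:
    s_i = (b_i - a_i) / max(a_i, b_i)."""
    d = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    scores = []
    for i, li in enumerate(labels):
        same = (labels == li) & (np.arange(len(X)) != i)
        a = d[i, same].mean() if same.any() else 0.0
        b = min(d[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def optimal_k(X, k_range=(2, 3, 4)):
    """Number of clusters maximizing the silhouette score; a change in this
    count between original and modified images suggests an added object."""
    return max(k_range, key=lambda k: silhouette(X, kmeans(X, k)))
```

For real images, `X` would be the array of pixel color vectors (possibly subsampled); the candidate `k_range` is an assumption of this sketch.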
Nanohole-array-based device for 2D snapshot multispectral imaging
Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.
2013-01-01
We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems.
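The least-squares unmixing step described above can be sketched as follows; the response-matrix setup is an illustrative assumption, since the real matrix would come from calibrating each NHA block's transmission in each spectral band:

```python
import numpy as np

def unmix(measured, band_responses):
    """Least-squares estimate of per-band intensities from mixed
    transmission measurements. `band_responses` is an
    (n_measurements, n_bands) matrix of known per-band transmission
    responses (values would come from calibration; illustrative here)."""
    coeffs, *_ = np.linalg.lstsq(band_responses, measured, rcond=None)
    return coeffs
```

With at least as many independent NHA measurements as bands, the four-band intensities at each pixel are recovered by solving this overdetermined system.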
High-resolution three-dimensional imaging radar
NASA Technical Reports Server (NTRS)
Cooper, Ken B. (Inventor); Chattopadhyay, Goutam (Inventor); Siegel, Peter H. (Inventor); Dengler, Robert J. (Inventor); Schlecht, Erich T. (Inventor); Mehdi, Imran (Inventor); Skalare, Anders J. (Inventor)
2010-01-01
A three-dimensional imaging radar operating at high frequency e.g., 670 GHz, is disclosed. The active target illumination inherent in radar solves the problem of low signal power and narrow-band detection by using submillimeter heterodyne mixer receivers. A submillimeter imaging radar may use low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform. Three-dimensional images are generated through range information derived for each pixel scanned over a target. A peak finding algorithm may be used in processing for each pixel to differentiate material layers of the target. Improved focusing is achieved through a compensation signal sampled from a point source calibration target and applied to received signals from active targets prior to FFT-based range compression to extract and display high-resolution target images. Such an imaging radar has particular application in detecting concealed weapons or contraband.
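The per-pixel range extraction behind an FMCW radar of this kind can be sketched from first principles; the parameter values used below are illustrative, not the instrument's actual chirp settings:

```python
def fmcw_range(beat_freq_hz, chirp_bandwidth_hz, chirp_time_s, c=3.0e8):
    """Range from the beat frequency of an FMCW chirp: an echo from range R
    returns after 2R/c, offset from the transmit chirp by
    f_b = (B/T) * (2R/c), so R = c * f_b * T / (2 * B)."""
    return c * beat_freq_hz * chirp_time_s / (2.0 * chirp_bandwidth_hz)
```

In practice the beat spectrum is obtained by FFT-based range compression of the dechirped signal, and a peak finder picks out material layers at each pixel.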
Design of CMOS imaging system based on FPGA
NASA Astrophysics Data System (ADS)
Hu, Bo; Chen, Xiaolai
2017-10-01
In order to meet the needs of engineering applications for a high dynamic range CMOS camera under rolling shutter mode, a complete imaging system is designed based on the CMOS imaging sensor NSC1105. The paper adopts CMOS+ADC+FPGA+Camera Link as the processing architecture and introduces the design and implementation of the hardware system. The camera software system, which consists of a CMOS timing drive module, an image acquisition module and a transmission control module, is designed in Verilog and runs on a Xilinx FPGA. The ISE 14.6 simulator ISim is used for signal simulation. The imaging experimental results show that the system exhibits a 1280*1024 pixel resolution, has a frame rate of 25 fps and a dynamic range of more than 120 dB. The imaging quality of the system satisfies the requirements of the index.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the location of each point in three-dimensional space can then be calculated from the disparity of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
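Once corresponding points are registered, the ranging step reduces to the standard rectified-stereo relation; the function below is a minimal sketch, and the camera parameters in the test are hypothetical:

```python
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Depth for rectified stereo cameras: Z = f * B / d, where d is the
    pixel disparity between the matched points, f the focal length in
    pixels, and B the camera baseline in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_length_px * baseline_m / disparity_px
```

Range resolution degrades quadratically with distance, since a one-pixel disparity error maps to an ever larger depth error as d shrinks.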
Cha, Dong Ik; Lee, Min Woo; Song, Kyoung Doo; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-06-01
To compare the accuracy and required time for image fusion of real-time ultrasound (US) with pre-procedural magnetic resonance (MR) images between positioning auto-registration and manual registration for percutaneous radiofrequency ablation or biopsy of hepatic lesions. This prospective study was approved by the institutional review board, and all patients gave written informed consent. Twenty-two patients (male/female, n = 18/n = 4; age, 61.0 ± 7.7 years) who were referred for planning US to assess the feasibility of radiofrequency ablation (n = 21) or biopsy (n = 1) for focal hepatic lesions were included. One experienced radiologist performed the two types of image fusion methods in each patient. The performance of auto-registration and manual registration was evaluated. The accuracy of the two methods, based on measuring registration error, and the time required for image fusion for both methods were recorded using in-house software and respectively compared using the Wilcoxon signed rank test. Image fusion was successful in all patients. The registration error was not significantly different between the two methods (auto-registration: median, 3.75 mm; range, 1.0-15.8 mm vs. manual registration: median, 2.95 mm; range, 1.2-12.5 mm, p = 0.242). The time required for image fusion was significantly shorter with auto-registration than with manual registration (median, 28.5 s; range, 18-47 s, vs. median, 36.5 s; range, 14-105 s, p = 0.026). Positioning auto-registration showed promising results compared with manual registration, with similar accuracy and even shorter registration time.
Shoji, Sunao; Hiraiwa, Shinichiro; Endo, Jun; Hashida, Kazunobu; Tomonaga, Tetsuro; Nakano, Mayura; Sugiyama, Tomoko; Tajiri, Takuma; Terachi, Toshiro; Uchida, Toyoaki
2015-02-01
To report our early experience with manually controlled targeted biopsy with real-time multiparametric magnetic resonance imaging and transrectal ultrasound fusion images for the diagnosis of prostate cancer. A total of 20 consecutive patients suspicious of prostate cancer at the multiparametric magnetic resonance imaging scan were recruited prospectively. Targeted biopsies were carried out for each cancer-suspicious lesion, and 12 systematic biopsies using the BioJet system. Pathological findings of targeted and systematic biopsies were analyzed. The median age of the patients was 70 years (range 52-83 years). The median preoperative prostate-specific antigen value was 7.4 ng/mL (range 3.54-19.9 ng/mL). Median preoperative prostate volume was 38 mL (range 24-68 mL). Cancer was detected in 14 cases (70%). The median Gleason score was 6.5 (range 6-8). Cancer detection rates of the systematic and targeted biopsy cores were 6.7% and 31.8%, respectively (P < 0.0001). In six patients who underwent radical prostatectomy, the geographic locations and pathological grades of clinically significant cancers and index lesions corresponded to the pathological results of the targeted biopsies. Prostate cancers detected by targeted biopsies with manually controlled targeted biopsy using real-time multiparametric magnetic resonance imaging and transrectal ultrasound fusion imaging have significantly higher grades and longer lengths compared with those detected by systematic biopsies. Further studies and comparison with the pathological findings of whole-gland specimens have the potential to determine the role of this biopsy methodology in patients selected for focal therapy and those under active surveillance. © 2014 The Japanese Urological Association.
SU-C-207A-03: Development of Proton CT Imaging System Using Thick Scintillator and CCD Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, S; Uesaka, M; Nishio, T
2016-06-15
Purpose: In the treatment planning of proton therapy, Water Equivalent Length (WEL), which is the parameter for the calculation of dose and the range of protons, is derived by X-ray CT (xCT) imaging and xCT-WEL conversion. However, an error of about a few percent in the accuracy of proton range calculation through this conversion has been reported. The purpose of this study is to construct a proton CT (pCT) imaging system for evaluation of this error. Methods: The pCT imaging system was constructed with a thick scintillator and a cooled CCD camera, which acquires the two-dimensional image of the integrated value of the scintillation light toward the beam direction. The pCT image is reconstructed by the FBP method using a correction between the light intensity and residual range of the proton beam. An experiment for demonstration of this system was performed with a 70-MeV proton beam provided by the NIRS cyclotron. The pCT images of several objects reconstructed from the experimental data were evaluated quantitatively. Results: Three-dimensional pCT images of several objects were reconstructed experimentally. A fine structure of approximately 1 mm was clearly observed. The position resolution of the pCT image was almost the same as that of the xCT image, and the error of the pCT pixel value was up to 4%. The deterioration of image quality was caused mainly by the effect of multiple Coulomb scattering. Conclusion: We designed and constructed the pCT imaging system using a thick scintillator and a CCD camera, and the system was evaluated in an experiment using a 70-MeV proton beam. Three-dimensional pCT images of several objects were acquired by the system. This work was supported by JST SENTAN Grant Number 13A1101 and JSPS KAKENHI Grant Number 15H04912.
See around the corner using active imaging
NASA Astrophysics Data System (ADS)
Steinvall, Ove; Elmqvist, Magnus; Larsson, Håkan
2011-11-01
This paper investigates the prospects of "seeing around the corner" using active imaging. A monostatic active imaging system offers interesting capabilities in the presence of glossy reflecting objects. Examples of such surfaces are windows in buildings and cars, calm water, signs and vehicle surfaces. During daylight it might well be possible to use mirrorlike reflection by the naked eye or a CCD camera for non-line of sight imaging. However the advantage with active imaging is that one controls the illumination. This will not only allow for low light and night utilization but also for use in cases where the sun or other interfering lights limit the non-line of sight imaging possibility. The range resolution obtained by time gating will reduce disturbing direct reflections and allow simultaneous view in several directions using range discrimination. Measurements and theoretical considerations in this report support the idea of using laser to "see around the corner". Examples of images and reflectivity measurements will be presented together with examples of potential system applications.
High dynamic range algorithm based on HSI color space
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming
2014-10-01
This paper presents a high dynamic range algorithm based on the HSI color space. The first problem is to keep the hue and saturation of the original image and conform to human visual perception; to do so, the input image is converted to the HSI color space, which includes an intensity dimension. The second problem is speed: an integral image is used to compute the average intensity around every pixel at a given scale as the local intensity component of the image, and the detail intensity component is derived from it. The third problem is adjusting the overall image intensity: an S-shaped curve is derived from the original image information and used to adjust the local intensity component. The fourth problem is enhancing detail: the detail intensity component is adjusted according to a curve designed in advance. The weighted sum of the adjusted local and detail intensity components gives the final intensity, and converting this intensity together with the other two dimensions back to the output color space yields the final processed image.
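The integral-image step for the local intensity component can be sketched as follows; a single-channel toy implementation (names and the half-open window convention are assumptions, not the authors' code):

```python
def integral_image(img):
    """Summed-area table: S[y][x] = sum of img[0..y-1][0..x-1],
    so any rectangular sum can be read out with four lookups."""
    h, w = len(img), len(img[0])
    S = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0.0
        for x in range(w):
            row += img[y][x]
            S[y + 1][x + 1] = S[y][x + 1] + row
    return S

def local_mean(S, x0, y0, x1, y1):
    """Mean intensity over the half-open window [y0:y1, x0:x1] in O(1),
    i.e. the local intensity component at one scale."""
    area = (y1 - y0) * (x1 - x0)
    total = S[y1][x1] - S[y0][x1] - S[y1][x0] + S[y0][x0]
    return total / area
```

The detail component would then be the pixel intensity minus this local mean, computed with the same table.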
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †
Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
2018-01-01
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434
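The time-of-flight principle underlying the SPAD LIDAR reduces to halving the round-trip travel time of the detected photons; a minimal sketch (not the sensor's actual processing chain):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_s):
    """Time-of-flight range: the laser pulse travels to the target
    and back, so the one-way distance is c * t / 2."""
    return C * round_trip_s / 2.0
```

At the single-photon level, a SPAD array would histogram many such arrival times per pixel and take the peak as the round-trip time.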
Automatic Focusing for a 675 GHz Imaging Radar with Target Standoff Distances from 14 to 34 Meters
NASA Technical Reports Server (NTRS)
Tang, Adrian; Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria; Siegel, Peter H.
2013-01-01
This paper discusses the issue of limited focal depth for high-resolution imaging radar operating over a wide range of standoff distances. We describe a technique for automatically focusing a THz imaging radar system using translational optics combined with range estimation based on a reduced chirp bandwidth setting. The demonstrated focusing algorithm estimates the correct focal depth for desired targets in the field of view at unknown standoffs and in the presence of clutter, providing good imagery at 14 to 30 meters of standoff.
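Range estimation from a chirped radar is commonly done via the beat frequency of the dechirped (FMCW) return; a hedged sketch of that standard relation (the abstract does not specify the paper's exact estimator):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_hz, chirp_s, bandwidth_hz):
    """FMCW range from beat frequency: R = c * f_b / (2 * S),
    where S = B / T is the chirp slope (Hz/s). A narrower bandwidth B
    lowers the slope, trading range resolution for an unambiguous
    coarse range estimate, as used for focusing."""
    slope = bandwidth_hz / chirp_s
    return C * beat_hz / (2.0 * slope)
```

With the coarse range in hand, translational optics can be driven to the matching focal depth.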
Rotation covariant image processing for biomedical applications.
Skibbe, Henrik; Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications, ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of 3D data modalities stemming from the medical and biological sciences.
Characterization of controlled bone defects using 2D and 3D ultrasound imaging techniques.
Parmar, Biren J; Longsine, Whitney; Sabonghy, Eric P; Han, Arum; Tasciotti, Ennio; Weiner, Bradley K; Ferrari, Mauro; Righetti, Raffaella
2010-08-21
Ultrasound is emerging as an attractive alternative modality to standard x-ray and CT methods for bone assessment applications. As of today, however, there is a lack of systematic studies that investigate the performance of diagnostic ultrasound techniques in bone imaging applications. This study aims at understanding the performance limitations of new ultrasound techniques for imaging bones in controlled experiments in vitro. Experiments are performed on samples of mammalian and non-mammalian bones with controlled defects ranging in size from 400 μm to 5 mm. Ultrasound findings are statistically compared with those obtained from the same samples using standard x-ray imaging modalities and optical microscopy. The results of this study demonstrate that it is feasible to use diagnostic ultrasound imaging techniques to assess sub-millimeter bone defects in real time and with high accuracy and precision. These results also demonstrate that ultrasound imaging techniques perform better than x-ray and optical imaging methods in the assessment of a wide range of controlled defects in both mammalian and non-mammalian bones. In the future, ultrasound imaging techniques might provide a cost-effective, real-time, safe and portable diagnostic tool for bone imaging applications.
Genetics algorithm optimization of DWT-DCT based image Watermarking
NASA Astrophysics Data System (ADS)
Budiman, Gelar; Novamizanti, Ledya; Iwut, Iwan
2017-01-01
Data hiding in image content is essential for establishing image ownership. The two-dimensional discrete wavelet transform (DWT) and discrete cosine transform (DCT) are proposed as the transform methods in this paper. First, the host image in RGB color space is converted to a selected color space, and the layer in which the watermark is embedded can also be selected. Next, a 2D DWT transforms the selected layer into four subbands, of which one is selected; a block-based 2D DCT then transforms the selected subband. A binary watermark is embedded in the AC coefficients of each block after zigzag ordering and range-based coefficient selection. A delta parameter replacing coefficients in each range represents the embedded bit: +delta represents bit "1" and -delta represents bit "0". The parameters optimized by a genetic algorithm (GA) are the selected color space, layer, DWT subband, block size, embedding range, and delta. Simulation results show that the GA can determine parameters that achieve optimum imperceptibility and robustness whether or not the watermarked image is attacked. DWT combined with DCT-based image watermarking optimized by GA improves watermarking performance. Under five attacks (JPEG 50%, resize 50%, histogram equalization, salt-and-pepper noise, and additive noise with variance 0.01), the proposed method achieves perfect watermark recovery with BER = 0, and the watermarked image quality by PSNR is about 5 dB higher than that of the previous method.
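The sign-based delta embedding described in the abstract can be sketched as follows (coefficient positions and helper names are illustrative assumptions; the DWT/DCT transform stages are omitted):

```python
def embed_bits(coeffs, positions, bits, delta):
    """Sign-based embedding on transform coefficients: each selected
    AC coefficient is replaced by +delta (bit 1) or -delta (bit 0),
    as the abstract describes."""
    out = list(coeffs)
    for p, b in zip(positions, bits):
        out[p] = delta if b else -delta
    return out

def extract_bits(coeffs, positions):
    """Recover the watermark bits from the signs of the
    (possibly attacked) coefficients."""
    return [1 if coeffs[p] > 0 else 0 for p in positions]
```

Sign-based extraction survives any attack that does not flip the coefficient's sign, which is why larger delta buys robustness at the cost of imperceptibility; the GA balances this trade-off.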
Method of orthogonally splitting imaging pose measurement
NASA Astrophysics Data System (ADS)
Zhao, Na; Sun, Changku; Wang, Peng; Yang, Qian; Liu, Xintong
2018-01-01
In order to meet aviation's and machinery manufacturing's need for pose measurement with high precision, fast speed, and a wide measurement range, and to resolve the contradiction between the measurement range and resolution of a vision sensor, this paper proposes an orthogonally splitting imaging pose measurement method. This paper designs and realizes an orthogonally splitting imaging vision sensor and establishes a pose measurement system. The vision sensor consists of one imaging lens, a beam splitter prism, cylindrical lenses, and dual linear CCDs. The dual linear CCDs each acquire one-dimensional image coordinates of the target point, and the two sets of data restore the two-dimensional image coordinates of the target point. According to the characteristics of the imaging system, this paper establishes a nonlinear distortion model to correct distortion. Based on cross-ratio invariability, a polynomial equation is established and solved by least-squares fitting. After completing distortion correction, this paper establishes the measurement mathematical model of the vision sensor and determines the intrinsic parameters to calibrate. An array of feature points for calibration is built by placing a planar target in different positions several times. An iterative optimization method is presented to solve the parameters of the model. The experimental results show that the field angle is 52°, the focal distance is 27.40 mm, image resolution is 5185×5117 pixels, displacement measurement error is less than 0.1 mm, and rotation angle measurement error is less than 0.15°. The method of orthogonally splitting imaging pose measurement can satisfy pose measurement requirements of high precision, fast speed, and wide measurement range.
Truby, Helen; Paxton, Susan J
2008-03-01
To test the reliability of the Children's Body Image Scale (CBIS) and assess its usefulness in the context of new body size charts for children. Participants were 281 primary schoolchildren with 50% being retested after 3 weeks. The CBIS figure scale was compared with a range of international body mass index (BMI) reference standards. Children had a high degree of body image dissatisfaction. The test-retest reliability of the CBIS was supported. The CBIS is a useful tool for assessing body image in children with sound scale properties. It can also be used to identify the body size of children, which lies outside the healthy weight range of BMI.
An Integrated Tone Mapping for High Dynamic Range Image Visualization
NASA Astrophysics Data System (ADS)
Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun
2018-01-01
There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local details, whereas empirical operators maximize the local detail information of an HDR image but yield less realistic results. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework that can convert between empirical and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the human eye's visual attention mechanism in natural scenes. Objective evaluation results demonstrate the effectiveness of the proposed solution.
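For context, a classic global perceptual-style operator, Reinhard's L/(1+L), can be sketched in a few lines (a standard textbook operator used here for illustration; it is not the paper's integrated framework):

```python
def reinhard_tonemap(luminance):
    """Classic global operator L / (1 + L): compresses an unbounded
    luminance range into [0, 1) while preserving relative order.
    Perceptual-style: realistic overall appearance, but local detail
    in bright regions is flattened."""
    return [l / (1.0 + l) for l in luminance]
```

An empirical operator would instead boost local contrast, which is exactly the trade-off the integrated framework tries to bridge.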
NASA Technical Reports Server (NTRS)
Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.
1999-01-01
Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and William's index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.
NASA Astrophysics Data System (ADS)
Yang, Victor X. D.; Gordon, Maggie L.; Tang, Shou-Jiang; Marcon, Norman E.; Gardiner, Geoffrey; Qi, Bing; Bisland, Stuart; Seng-Yue, Emily; Lo, Stewart; Pekar, Julius; Wilson, Brian C.; Vitkin, I. Alex
2003-09-01
We previously described a fiber-based Doppler optical coherence tomography system [1] capable of imaging embryo cardiac blood flow at 4-16 frames per second with wide velocity dynamic range [2]. Coupling this system to a linear-scanning fiber-optic catheter design that minimizes friction and vibration, we report here the initial results of in vivo endoscopic Doppler optical coherence tomography (EDOCT) imaging in normal rat and human esophagus. Microvascular flow in blood vessels less than 100 µm in diameter was detected using a combination of color Doppler and velocity-variance imaging modes during clinical endoscopy with a mobile EDOCT system.
Costa, Ana C S; Dibai Filho, Almir V; Packer, Amanda C; Rodrigues-Bigaton, Delaine
2013-01-01
Infrared thermography is an aid tool that can be used to evaluate several pathologies given its efficiency in analyzing the distribution of skin surface temperature. To propose two forms of infrared image analysis of the masticatory and upper trapezius muscles, and to determine the intra and inter-rater reliability of both forms of analysis. Infrared images of masticatory and upper trapezius muscles of 64 female volunteers with and without temporomandibular disorder (TMD) were collected. Two raters performed the infrared image analysis, which occurred in two ways: temperature measurement of the muscle length and in central portion of the muscle. The Intraclass Correlation Coefficient (ICC) was used to determine the intra and inter-rater reliability. The ICC showed excellent intra and inter-rater values for both measurements: temperature measurement of the muscle length (TMD group, intra-rater, ICC ranged from 0.996 to 0.999, inter-rater, ICC ranged from 0.992 to 0.999; control group, intra-rater, ICC ranged from 0.993 to 0.998, inter-rater, ICC ranged from 0.990 to 0.998), and temperature measurement of the central portion of the muscle (TMD group, intra-rater, ICC ranged from 0.981 to 0.998, inter-rater, ICC ranged from 0.971 to 0.998; control group, intra-rater, ICC ranged from 0.887 to 0.996, inter-rater, ICC ranged from 0.852 to 0.996). The results indicated that temperature measurements of the masticatory and upper trapezius muscles carried out by the analysis of the muscle length and central portion yielded excellent intra and inter-rater reliability.
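The intraclass correlation coefficient used above can be illustrated with the simplest one-way random-effects form (the study likely used a two-way model; this sketch is for illustration of the statistic only):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1):
    (MSB - MSW) / (MSB + (k-1)*MSW), where MSB and MSW are the
    between- and within-subject mean squares. `ratings` is a list of
    per-subject lists, each holding k repeated measurements.
    Published ICC variants (two-way, consistency vs agreement) differ."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Values near 1, like the 0.85-0.99 range reported above, mean that between-subject variation dwarfs measurement disagreement.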
International study on inter-reader variability for circulating tumor cells in breast cancer.
Ignatiadis, Michail; Riethdorf, Sabine; Bidard, François-Clement; Vaucher, Isabelle; Khazour, Mustapha; Rothé, Françoise; Metallo, Jessica; Rouas, Ghizlane; Payne, Rachel E; Coombes, Raoul; Teufel, Ingrid; Andergassen, Ulrich; Apostolaki, Stella; Politaki, Eleni; Mavroudis, Dimitris; Bessi, Silvia; Pestrin, Marta; Di Leo, Angelo; Campion, Michael; Reinholz, Monica; Perez, Edith; Piccart, Martine; Borgen, Elin; Naume, Bjorn; Jimenez, Jose; Aura, Claudia; Zorzino, Laura; Cassatella, Maria; Sandri, Maria; Mostert, Bianca; Sleijfer, Stefan; Kraan, Jaco; Janni, Wolfgang; Fehm, Tanja; Rack, Brigitte; Terstappen, Leon; Repollet, Madeline; Pierga, Jean-Yves; Miller, Craig; Sotiriou, Christos; Michiels, Stefan; Pantel, Klaus
2014-04-23
Circulating tumor cells (CTCs) have been studied in breast cancer with the CellSearch® system. Given the low CTC counts in non-metastatic breast cancer, it is important to evaluate the inter-reader agreement. CellSearch® images (N = 272) of either CTCs or white blood cells or artifacts from 109 non-metastatic (M0) and 22 metastatic (M1) breast cancer patients from reported studies were sent to 22 readers from 15 academic laboratories and 8 readers from two Veridex laboratories. Each image was scored as No CTC vs CTC HER2- vs CTC HER2+. The 8 Veridex readers were summarized to a Veridex Consensus (VC) to compare each academic reader using % agreement and kappa (κ) statistics. Agreement was compared according to disease stage and CTC counts using the Wilcoxon signed rank test. For CTC definition (No CTC vs CTC), the median agreement between academic readers and VC was 92% (range 69 to 97%) with a median κ of 0.83 (range 0.37 to 0.93). Lower agreement was observed in images from M0 (median 91%, range 70 to 96%) compared to M1 (median 98%, range 64 to 100%) patients (P < 0.001) and from M0 and <3CTCs (median 87%, range 66 to 95%) compared to M0 and ≥3CTCs samples (median 95%, range 77 to 99%), (P < 0.001). For CTC HER2 expression (HER2- vs HER2+), the median agreement was 87% (range 51 to 95%) with a median κ of 0.74 (range 0.25 to 0.90). The inter-reader agreement for CTC definition was high. Reduced agreement was observed in M0 patients with low CTC counts. Continuous training and independent image review are required.
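The kappa statistic used to summarize inter-reader agreement can be sketched for a pair of readers (standard Cohen's kappa; the study's exact computation against the Veridex consensus may differ):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two readers' categorical labels:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from each reader's marginal frequencies."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    cats = set(a) | set(b)
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)
```

Unlike raw percent agreement, kappa discounts the agreement expected by chance, which is why it drops sharply in the low-prevalence M0 samples even when percent agreement stays high.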
NASA Astrophysics Data System (ADS)
Tavakolian, Pantea; Sivagurunathan, Koneswaran; Mandelis, Andreas
2017-07-01
Photothermal diffusion-wave imaging is a promising technique for non-destructive evaluation and medical applications. Several diffusion-wave techniques have been developed to produce depth-resolved planar images of solids and to overcome the imaging depth and image blurring limitations imposed by the physics of parabolic diffusion waves. Truncated-Correlation Photothermal Coherence Tomography (TC-PCT) is the most successful class of these methodologies to date, providing 3-D subsurface visualization with maximum depth penetration and high axial and lateral resolution. To extend the depth range and the axial and lateral resolution, an in-depth analysis of TC-PCT, a novel imaging system with improved instrumentation, and a reconstruction algorithm optimized over the original TC-PCT technique are developed. Thermal waves produced by a chirped-pulse laser heat source in a finite-thickness solid and the image reconstruction algorithm are investigated from the theoretical point of view. 3-D visualization of subsurface defects utilizing the new TC-PCT system is reported. The results demonstrate that this method is able to detect subsurface defects at a depth range of ~4 mm in a steel sample, which exhibits a dynamic range improvement by a factor of 2.6 compared to the original TC-PCT. This depth does not represent the upper limit of the enhanced TC-PCT. Lateral resolution in the steel sample was measured to be ~31 μm.
Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao
2016-01-01
For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we want to examine the feasibility and practicability of a well-known “extended DoF” (EDoF) technique, or “wavefront coding,” by building real-time long-range iris recognition and performing large-scale iris recognition. The key to the success of long-range iris recognition includes long DoF and image quality invariance toward various object distance, which is strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy. PMID:27897976
NASA Astrophysics Data System (ADS)
Staple, Bevan; Earhart, R. P.; Slaymaker, Philip A.; Drouillard, Thomas F., II; Mahony, Thomas
2005-05-01
3D imaging LADARs have emerged as the key technology for producing high-resolution imagery of targets in 3 dimensions (X and Y spatial, and Z in the range/depth dimension). Ball Aerospace & Technologies Corp. continues to make significant investments in this technology to enable critical NASA, Department of Defense, and national security missions. As a consequence of rapid technology developments, two issues have emerged that need resolution. First, the terminology used to rate LADAR performance (e.g., range resolution) is inconsistently defined and improperly used, and thus has become misleading. Second, the terminology does not include a metric of the system's ability to resolve the 3D depth features of targets. These two issues create confusion when translating customer requirements into hardware. This paper presents a candidate framework for addressing these issues. To address the consistency issue, the framework utilizes only those terminologies proposed and tested by leading LADAR research and standards institutions. We also provide suggestions for strengthening these definitions by linking them to the well-known Rayleigh criterion extended into the range dimension. To address the inadequate 3D image quality metrics, the framework introduces the concept of a Range/Depth Modulation Transfer Function (RMTF). The RMTF measures the impact of the spatial frequencies of a 3D target on its measured modulation in range/depth. It is determined using a new Range-Based Slanted Knife-Edge test. We present simulated results for two LADAR pulse detection techniques and compare them to a baseline centroid technique. Consistency in terminology plus a 3D image quality metric enables improved system standardization.
Social smile reproducibility using 3-D stereophotogrammetry and reverse engineering technology.
Dindaroğlu, Furkan; Duran, Gökhan Serhat; Görgülü, Serkan; Yetkiner, Enver
2016-05-01
To assess the range of social smile reproducibility using 3-D stereophotogrammetry and reverse engineering technology. Social smile images of white adolescents (N = 15, mean age = 15.4 ±1.5 years; range = 14-17 years) were obtained using 3dMDFlex (3dMD, Atlanta, Ga). Each participant was asked to produce 16 social smiles at 3-minute intervals. All images were obtained in natural head position. Alignment of images, segmentation of smile area, and 3-D deviation analysis were carried out using Geomagic Control software (3D Systems Inc, Cary, NC). A single image was taken as a reference, and the remaining 15 images were compared with the reference image to evaluate positive and negative deviations. The differences between the mean deviation limits of participants with the highest and the lowest deviations and the total mean deviations were evaluated using Bland-Altman Plots. Minimum and maximum deviations of a single image from the reference image were 0.34 and 2.69 mm, respectively. Lowest deviation between two images was within 0.5 mm and 1.54 mm among all participants (mean, 0.96 ± 0.21 mm), and the highest deviation was between 0.41 mm and 2.69 mm (mean, 1.53 ± 0.46 mm). For a single patient, when all alignments were considered together, the mean deviation was between 0.32 ± 0.10 mm and 0.59 ± 0.24 mm. Mean deviation for one image was between 0.14 and 1.21 mm. The range of reproducibility of the social smile presented individual variability, but this variation was not clinically significant or detectable under routine clinical observation.
Reliability of a novel thermal imaging system for temperature assessment of healthy feet.
Petrova, N L; Whittam, A; MacDonald, A; Ainarkar, S; Donaldson, A N; Bevans, J; Allen, J; Plassmann, P; Kluwe, B; Ring, F; Rogers, L; Simpson, R; Machin, G; Edmonds, M E
2018-01-01
Thermal imaging is a useful modality for identifying preulcerative lesions ("hot spots") in diabetic foot patients. Despite its recognised potential, at present there is no readily available instrument for routine podiatric assessment of patients at risk. To address this need, a novel thermal imaging system was recently developed. This paper reports the reliability of this device for temperature assessment of healthy feet. Plantar skin foot temperatures were measured with the novel thermal imaging device (Diabetic Foot Ulcer Prevention System (DFUPS), constructed by Photometrix Imaging Ltd) and also with a hand-held infrared spot thermometer (Thermofocus® 01500A3, Tecnimed, Italy) after 20 min of barefoot resting with legs supported and extended in 105 subjects (52 males and 53 females; age range 18 to 69 years) as part of a multicentre clinical trial. The temperature differences between the right and left foot at five regions of interest (ROIs), including the 1st and 4th toes and the 1st, 3rd and 5th metatarsal heads, were calculated. The intra-instrument agreement (three repeated measures) and the inter-instrument agreement (hand-held thermometer and thermal imaging device) were quantified using intra-class correlation coefficients (ICCs) and 95% confidence intervals (CIs). Both devices showed almost perfect agreement in replication by instrument. The intra-instrument ICCs for the thermal imaging device at all five ROIs ranged from 0.95 to 0.97, and the intra-instrument ICCs for the hand-held thermometer ranged from 0.94 to 0.97. There was substantial to perfect inter-instrument agreement between the hand-held thermometer and the thermal imaging device, with ICCs at all five ROIs ranging between 0.94 and 0.97. This study reports the performance of a novel thermal imaging device in the assessment of foot temperatures in healthy volunteers in comparison with a hand-held infrared thermometer.
The newly developed thermal imaging device showed very good agreement in repeated temperature assessments at defined ROIs as well as substantial to perfect agreement in temperature assessment with the hand-held infrared thermometer. In addition to the reported non-inferior performance in temperature assessment, the thermal imaging device holds the potential to provide an instantaneous thermal image of all sites of the feet (plantar, dorsal, lateral and medial views). Diabetic Foot Ulcer Prevention System NCT02317835, registered December 10, 2014.
Sutton, Elizabeth J; Watson, Elizabeth J; Gibbons, Girard; Goldman, Debra A; Moskowitz, Chaya S; Jochelson, Maxine S; Dershaw, D David; Morris, Elizabeth A
2015-11-01
To assess the incidence of benign and malignant internal mammary lymph nodes (IMLNs) at magnetic resonance (MR) imaging among women with a history of treated breast cancer and silicone implant reconstruction. The institutional review board approved this HIPAA-compliant retrospective study and waived informed consent. Women were identified who (a) had breast cancer, (b) underwent silicone implant oncoplastic surgery, and (c) underwent postoperative implant-protocol MR imaging with or without positron emission tomography (PET)/computed tomography (CT) between 2000 and 2013. The largest IMLNs were measured. A benign IMLN was pathologically proven or defined as showing 1 year of imaging stability and/or no clinical evidence of disease. Malignant IMLNs were pathologically proven. Incidence of IMLN and positive predictive value (PPV) were calculated on a per-patient level by using proportions and exact 95% confidence intervals (CIs). The Wilcoxon rank sum test was used to assess the difference in axis size. In total, 923 women with breast cancer and silicone implants were included (median age, 46 years; range, 22-89 years). The median time between reconstructive surgery and first MR imaging examination was 49 months (range, 5-513 months). Of the 923 women, 347 (37.6%) had IMLNs at MR imaging. Median short- and long-axis measurements were 0.40 cm (range, 0.20-1.70 cm) and 0.70 cm (range, 0.30-1.90 cm), respectively. Two hundred seven of 923 patients (22.4%) had adequate follow-up; only one of the 207 IMLNs was malignant, with a PPV of 0.005 (95% CI: 0.000, 0.027). Fifty-eight of 923 patients (6.3%) had undergone PET/CT; of these, 39 (67.2%) had IMLN at MR imaging. Twelve of the 58 patients (20.7%) with adequate follow-up had fluorine 18 fluorodeoxyglucose-avid IMLN, with a median standardized uptake value of 2.30 (range, 1.20-6.10). Only one of the 12 of the fluorodeoxyglucose-avid IMLNs was malignant, with a PPV of 0.083 (95% CI: 0.002, 0.385). 
IMLNs identified at implant-protocol breast MR imaging after oncoplastic surgery for breast cancer are overwhelmingly more likely to be benign than malignant. Imaging follow-up instead of immediate metastatic work-up may be warranted. © RSNA, 2015
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras, should enable remote sensing scientists to generate consistent, high quality, and low cost image data sets. Radiometric optimization, image fidelity, image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is in part, a dearth of relevant, contemporary literature, on the utilization of consumer grade DSLR cameras for remote sensing, and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (Exposure Value), WB (White Balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne collection platform, due to the large number of images per collection, and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed image exposure values as preferable for change detection and noise minimization fidelity. The makeup of the scene, the sensor, and aerial platform, influence the selection of the aperture and shutter speed which along with other variables, allow for estimation of the apparent image motion (AIM) motion blur in the resulting images. The importance of the image edges in the image application, will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO. 
The single most important camera capture variable is exposure bias (EV); a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occur around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
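The exposure and motion-blur relationships discussed above can be sketched numerically. This is an illustrative sketch only: the function names, the standard photographic EV formula at ISO 100, and the nadir-pointing ground-sample-distance model for AIM blur are assumptions for demonstration, not quantities taken from the study.

```python
import math

def exposure_value(f_stop: float, shutter_s: float) -> float:
    """Standard photographic exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_stop ** 2 / shutter_s)

def motion_blur_pixels(speed_m_s: float, shutter_s: float,
                       altitude_m: float, focal_mm: float,
                       pixel_pitch_um: float) -> float:
    """Apparent image motion (AIM) blur in pixels for a nadir-pointing camera:
    ground sample distance GSD = altitude * pixel_pitch / focal_length, and
    blur = platform displacement during the exposure / GSD (an assumed,
    simplified model)."""
    gsd_m = altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
    return speed_m_s * shutter_s / gsd_m

# e.g. a 20 m/s platform at 100 m altitude, 50 mm lens, 5 um pixels,
# 1/1000 s shutter: GSD = 1 cm, so 2 cm of travel smears across 2 pixels
blur = motion_blur_pixels(20.0, 1 / 1000, 100.0, 50.0, 5.0)
```

A practitioner could use such a helper to pick the fastest shutter speed that keeps blur below one pixel before trading off f-stop and ISO.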
NASA Astrophysics Data System (ADS)
Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung
2016-09-01
Recently, significant effort has been devoted to the development of photon counting detectors (PCDs) based on CdTe for applications in X-ray imaging systems. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD can improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Based on the above-mentioned technique, we present an idea for an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on PCDs can be realized by using different subtraction energy widths of the energy window. In this study, the effects of the KELS imaging technique and the subtraction energy width of the energy window were investigated with respect to the contrast, standard deviation, and CNR using a Monte Carlo simulation. We simulated a CdTe-based PCD X-ray imaging system and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire KELS images, images of the phantom above and below the iodine K-edge absorption energy (33.2 keV) were acquired over different energy ranges. According to the results, the contrast and standard deviation decreased as the subtraction energy width of the energy window was increased. Also, the CNR of the KELS images is higher than that of images acquired using the whole energy range. In particular, the maximum differences in CNR between the whole-energy-range and KELS images using 1, 2, and 3 mm diameter iodine contrast agents were 11.33, 8.73, and 8.29 times, respectively. Additionally, the optimum subtraction energy widths of the energy window were found to be 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agents, respectively. 
In conclusion, we successfully established an improved KELS imaging technique and optimized the subtraction energy width of the energy window; based on our results, we recommend this technique for high image quality.
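The core K-edge log-subtraction operation and the CNR figure of merit can be sketched as follows. This is a minimal illustration under assumed conventions (natural-log subtraction of below-edge and above-edge energy-window images; CNR as signal-background difference over background standard deviation); the function names are not from the paper.

```python
import numpy as np

def k_edge_log_subtraction(img_below, img_above):
    """Pixel-wise log subtraction of images acquired in energy windows just
    below and just above the iodine K-edge (33.2 keV). Background tissue,
    whose attenuation varies smoothly across the edge, largely cancels;
    the iodine signal, whose attenuation jumps at the edge, remains."""
    return np.log(img_below) - np.log(img_above)

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| over the
    background standard deviation (one common convention)."""
    sig = image[signal_mask].mean()
    bg = image[background_mask].mean()
    return abs(sig - bg) / image[background_mask].std()
```

On a toy phantom where iodine halves the above-edge transmission, the subtracted image is ~ln 2 in the iodine region and ~0 elsewhere.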
Automatic image registration performance for two different CBCT systems; variation with imaging dose
NASA Astrophysics Data System (ADS)
Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.
2014-03-01
The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
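The failure criterion above (maximum target registration error on the surface of a 30 mm sphere) can be evaluated from a residual rigid transform. A hedged sketch, assuming a residual rotation about one axis plus a translation, with sampled surface points; the function name and sampling scheme are illustrative, not the study's implementation.

```python
import numpy as np

def max_tre_on_sphere(rotation_deg, translation_mm, radius_mm=30.0, n=2000):
    """Maximum target registration error (TRE) on the surface of a sphere:
    apply a residual rotation about z and a residual translation to points
    sampled uniformly on the sphere, and return the largest displacement."""
    rng = np.random.default_rng(0)          # seeded for reproducibility
    pts = rng.normal(size=(n, 3))
    pts *= radius_mm / np.linalg.norm(pts, axis=1, keepdims=True)
    a = np.deg2rad(rotation_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    moved = pts @ rot.T + np.asarray(translation_mm, dtype=float)
    return np.linalg.norm(moved - pts, axis=1).max()
```

For a pure translation the TRE equals the translation magnitude everywhere; for a pure rotation it peaks at the equator at 2 r sin(angle/2).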
The superiority of L3-CCDs in the high-flux and wide dynamic range regimes
NASA Astrophysics Data System (ADS)
Butler, Raymond F.; Sheehan, Brendan J.
2008-02-01
Low Light Level CCD (L3-CCD) cameras have received much attention for high cadence astronomical imaging applications. Efforts to date have concentrated on exploiting them for two scenarios: post-exposure image sharpening and ``lucky imaging'', and rapid variability in astrophysically interesting sources. We demonstrate their marked superiority in a third distinct scenario: observing in the high-flux and wide dynamic range regimes. We realized that the unique features of L3-CCDs would make them ideal for maximizing signal-to-noise in observations of bright objects (whether variable or not), and for high dynamic range scenarios such as faint targets embedded in a crowded field of bright objects. Conventional CCDs have drawbacks in such regimes, due to a poor duty cycle: the combination of short exposure times (for time-series sampling or to avoid saturation) and extended readout times (for minimizing readout noise). For different telescope sizes, we use detailed models to show that a range of conventional imaging systems are photometrically outperformed across a wide range of object brightness, once the operational parameters of the L3-CCD are carefully set. The cross-over fluxes, above which the L3-CCD is operationally superior, are surprisingly faint, even for modest telescope apertures. We also show that the use of L3-CCDs is the optimum strategy for minimizing atmospheric scintillation noise in photometric observations employing a given telescope aperture. This is particularly significant, since scintillation can be the largest source of error in time-series photometry. These results should prompt a new direction in developing imaging instrumentation solutions for observatories.
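The duty-cycle argument can be captured in a toy SNR model. This is a simplified sketch, not the authors' detailed model: it assumes photons arriving during a conventional CCD's readout are lost, that EM gain renders L3-CCD read noise negligible, and that the multiplication process contributes the usual excess-noise factor F² = 2.

```python
import math

def snr_conventional(flux_e_s, cadence_s, t_read_s, read_noise_e):
    """SNR per cadence for a conventional CCD: photons arriving during the
    extended readout are lost (poor duty cycle), and read noise adds in
    quadrature with shot noise."""
    s = flux_e_s * max(cadence_s - t_read_s, 0.0)
    return s / math.sqrt(s + read_noise_e ** 2)

def snr_l3ccd(flux_e_s, cadence_s):
    """SNR per cadence for an L3-CCD: negligible effective read noise, at
    the price of excess-noise factor F^2 = 2 doubling the shot-noise
    variance, so SNR = sqrt(S / 2)."""
    s = flux_e_s * cadence_s
    return s / math.sqrt(2.0 * s)
```

With a half-second readout in a one-second cadence, the L3-CCD wins even at moderate fluxes; only when readout overhead is negligible does the conventional CCD's lack of excess noise pay off.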
CMOS Active-Pixel Image Sensor With Intensity-Driven Readout
NASA Technical Reports Server (NTRS)
Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina
1996-01-01
Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.
Bioorthogonal chemistry in bioluminescence imaging.
Godinat, Aurélien; Bazhin, Arkadiy A; Goun, Elena A
2018-05-18
Bioorthogonal chemistry has developed significantly over the past few decades, to the particular benefit of molecular imaging. Bioluminescence imaging (BLI), along with other imaging modalities, has benefitted substantially from this chemistry. Here, we review bioorthogonal reactions that have been used to significantly broaden the application range of BLI. Copyright © 2018. Published by Elsevier Ltd.
High dynamic spectroscopy using a digital micromirror device and periodic shadowing.
Kristensson, Elias; Ehn, Andreas; Berrocal, Edouard
2017-01-09
We present an optical solution called DMD-PS to boost the dynamic range of 2D imaging spectroscopic measurements up to 22 bits by incorporating a digital micromirror device (DMD) prior to detection in combination with the periodic shadowing (PS) approach. In contrast to high dynamic range (HDR), where the dynamic range is increased by recording several images at different exposure times, the current approach has the potential of improving the dynamic range from a single exposure and without saturation of the CCD sensor. In the procedure, the spectrum is imaged onto the DMD that selectively reduces the reflection from the intense spectral lines, allowing the signal from the weaker lines to be increased by a factor of 28 via longer exposure times, higher camera gains or increased laser power. This manipulation of the spectrum can either be based on a priori knowledge of the spectrum or by first performing a calibration measurement to sense the intensity distribution. The resulting benefits in detection sensitivity come, however, at the cost of strong generation of interfering stray light. To solve this issue the Periodic Shadowing technique, which is based on spatial light modulation, is also employed. In this proof-of-concept article we describe the full methodology of DMD-PS and demonstrate - using the calibration-based concept - an improvement in dynamic range by a factor of ~100 over conventional imaging spectroscopy. The dynamic range of the presented approach will directly benefit from future technological development of DMDs and camera sensors.
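The single-exposure recovery step of DMD-PS can be sketched as a per-channel multiply and divide. This is an illustrative toy (function names and the "attenuate lines above a threshold by 28x" rule are assumptions standing in for the calibration-based concept), not the authors' implementation.

```python
import numpy as np

def apply_dmd_attenuation(spectrum, attenuation):
    """Spectrum as recorded by the camera after the DMD selectively reduces
    the reflection of the intense spectral lines (attenuation <= 1 per
    spectral channel), keeping the peak below sensor saturation."""
    return spectrum * attenuation

def recover_spectrum(recorded, attenuation):
    """Undo the known, calibrated per-channel attenuation to recover the
    true relative line intensities from a single exposure."""
    return recorded / attenuation

true = np.array([1.0e4, 50.0, 3.0])       # one intense line, two weak lines
att = np.where(true > 100, 1 / 28, 1.0)   # knock the bright line down 28x
seen = apply_dmd_attenuation(true, att)   # peak now ~357: fits the sensor
```

Because the bright line no longer dominates the sensor's full-well capacity, exposure time, gain, or laser power can be raised to lift the weak lines, and the division restores the true ratios.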
NASA Astrophysics Data System (ADS)
Sanchez-Lavega, A.; Hueso, R.; Perez-Hoyos, S.; Iñurrigarro, P.; Mendikoa, I.; Rojas, J. F.
2016-12-01
We present the results of a long-term imaging campaign, between September 2015 and August 2016, of Jupiter's cloud morphology and zonal winds in the 0.38 - 1.7 μm wavelength spectral range. We use the PlanetCam lucky imaging camera at the 2.2 m telescope at Calar Alto Observatory in Spain and, for the optical range, the contributions of a network of observers to the Planetary Virtual Observatory Laboratory database (PVOL-IOPW at http://pvol.ehu.eus). We have complemented the study with Hubble Space Telescope WFC3 camera images taken in the 0.275 - 0.89 μm wavelength spectral range during the OPAL program on 9 February 2016. The PlanetCam images have been calibrated in radiance using spectrophotometric standard stars, providing absolute reflectivity across the disk in a large series of broadband and narrowband filters sensitive to the altitude distribution and size of aerosols above the ammonia cloud level and to the spectral dependence of the chromophore coloring agents. The cloud morphology evolution has been studied with a horizontal resolution ranging from 150 to 1000 km. Zonal wind profiles have been retrieved over the whole observing period from tracking cloud motions spanning the latitude range from -80° to +77°. Combining all these results, we characterize the 3D dynamical state and the cloud and haze distribution in Jupiter's atmosphere in the altitude range between 10 mbar and 1.5 bar before and during Juno's initial exploration.
Cannata, Jonathan M; Ritter, Timothy A; Chen, Wo-Hsing; Silverman, Ronald H; Shung, K Kirk
2003-11-01
This paper discusses the design, fabrication, and testing of sensitive broadband lithium niobate (LiNbO3) single-element ultrasonic transducers in the 20-80 MHz frequency range. Transducers of varying dimensions were built for an f# range of 2.0-3.1. The desired focal depths were achieved by either casting an acoustic lens on the transducer face or press-focusing the piezoelectric into a spherical curvature. For designs that required electrical impedance matching, a low impedance transmission line coaxial cable was used. All transducers were tested in a pulse-echo arrangement, whereby the center frequency, bandwidth, insertion loss, and focal depth were measured. Several transducers were fabricated with center frequencies in the 20-80 MHz range with the measured -6 dB bandwidths and two-way insertion loss values ranging from 57 to 74% and 9.6 to 21.3 dB, respectively. Both transducer focusing techniques proved successful in producing highly sensitive, high-frequency, single-element, ultrasonic-imaging transducers. In vivo and in vitro ultrasonic backscatter microscope (UBM) images of human eyes were obtained with the 50 MHz transducers. The high sensitivity of these devices could possibly allow for an increase in depth of penetration, higher image signal-to-noise ratio (SNR), and improved image contrast at high frequencies when compared to previously reported results.
Magneto-acoustic imaging by continuous-wave excitation.
Shunqi, Zhang; Zhou, Xiaoqing; Tao, Yin; Zhipeng, Liu
2017-04-01
The electrical characteristics of tissue yield valuable information for early diagnosis of pathological changes. Magneto-acoustic imaging is a functional approach for imaging of electrical conductivity. This study proposes a continuous-wave magneto-acoustic imaging method. A kHz-range continuous signal with an amplitude range of several volts is used to excite the magneto-acoustic signal and improve the signal-to-noise ratio. The magneto-acoustic signal amplitude and phase are measured to locate the acoustic source via lock-in technology. An optimisation algorithm incorporating nonlinear equations is used to reconstruct the magneto-acoustic source distribution based on the measured amplitude and phase at various frequencies. Validation simulations and experiments were performed in pork samples. The experimental and simulation results agreed well. Even with the excitation current reduced to 10 mA, the acoustic signal magnitude reached the order of 10⁻⁷ Pa. Experimental reconstruction of the pork tissue showed that the image resolution reached mm levels when the excitation signal was in the kHz range. The signal-to-noise ratio of the detected magneto-acoustic signal was improved by more than 25 dB at 5 kHz when compared to classical 1 MHz pulse excitation. The results reported here will aid further research into magneto-acoustic generation mechanisms and internal tissue conductivity imaging.
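The lock-in measurement of amplitude and phase described above can be sketched digitally: mix the sampled waveform with quadrature references at the excitation frequency and average. This is a generic lock-in sketch (function name and the 5 kHz / 100 kHz demo values are illustrative), not the study's instrumentation.

```python
import numpy as np

def lock_in(signal, ref_freq_hz, fs_hz):
    """Digital lock-in detection: multiply by in-phase and quadrature
    references, average over an integer number of periods, and return the
    (amplitude, phase) of the component locked to the excitation while
    rejecting uncorrelated noise and any DC offset."""
    t = np.arange(len(signal)) / fs_hz
    i = np.mean(signal * np.cos(2 * np.pi * ref_freq_hz * t))
    q = np.mean(signal * np.sin(2 * np.pi * ref_freq_hz * t))
    return 2 * np.hypot(i, q), np.arctan2(-q, i)

# 5 kHz excitation sampled at 100 kHz for exactly 50 periods
fs, f0 = 100_000, 5_000
t = np.arange(1000) / fs
sig = 2.0 + 0.5 * np.cos(2 * np.pi * f0 * t + 0.3)  # DC offset + weak tone
amp, ph = lock_in(sig, f0, fs)
```

The averaging window should span an integer number of excitation periods so that the DC offset and the double-frequency mixing terms cancel exactly.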
Exposure Range For Cine Radiographic Procedures
NASA Astrophysics Data System (ADS)
Moore, Robert J.
1980-08-01
Based on the author's experience, state-of-the-art cine radiographic equipment of the type used in modern cardiovascular laboratories for selective coronary arteriography must perform at well-defined levels to produce cine images with acceptable quantum mottle, contrast, and detail, as judged by consensus of a cross section of American cardiologists/radiologists experienced in viewing such images. Accordingly, a "standard" undertable state-of-the-art cine radiographic imaging system is postulated to answer the question of what patient exposure range is necessary to obtain cine images of acceptable quality. It is shown that such a standard system would be expected to produce a tabletop exposure of about 25 milliRoentgens per frame for the "standard" adult patient, plus-or-minus 33% for acceptable variation of system parameters. This means that for cine radiography at 60 frames per second (30 frames per second) the exposure rate range based on this model is 60 to 120 Roentgens per minute (30 to 60 Roentgens per minute). The author contends that studies at exposure levels below these will yield cine images of questionable diagnostic value; studies at exposure levels above these may yield cine images of excellent visual quality but having little additional diagnostic value, at the expense of added patient/personnel radiation exposure and added x-ray tube heat loading.
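The arithmetic behind the quoted range can be checked directly: 25 mR/frame at 60 frames/s is 90 R/min, and the ±33% system-parameter tolerance brackets this at roughly 60 to 120 R/min. A small worked computation (function name is illustrative):

```python
def cine_exposure_rate_R_per_min(mR_per_frame: float,
                                 frames_per_s: float) -> float:
    """Tabletop exposure rate implied by a per-frame exposure:
    mR/frame * frames/s * 60 s/min, converted from mR to R."""
    return mR_per_frame * frames_per_s * 60 / 1000

nominal = cine_exposure_rate_R_per_min(25, 60)      # 90 R/min at 60 fps
low = nominal * (1 - 0.33)                          # ~60 R/min
high = nominal * (1 + 0.33)                         # ~120 R/min
```

Halving the frame rate to 30 fps halves every figure, giving the parenthetical 30 to 60 R/min range.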
Cardiac motion correction based on partial angle reconstructed images in x-ray CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
2015-05-15
Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate for remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. 
The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of two conjugate PAR images. To evaluate the proposed algorithm, digital XCAT and physical dynamic cardiac phantom datasets are used. The XCAT phantom datasets were generated with heart rates of 70 and 100 bpm, respectively, by assuming a system rotation time of 300 ms. A physical dynamic cardiac phantom was scanned using a slowly rotating XCT system so that the effective heart rate would be 70 bpm for a system rotation speed of 300 ms. Results: In the XCAT phantom experiment, motion-compensated 3D images obtained from the proposed algorithm show coronary arteries with fewer motion artifacts for all phases. Moreover, object boundaries contaminated by motion are well restored. Even though object positions and boundary shapes are still somewhat different from the ground truth in some cases, the authors see that visibilities of coronary arteries are improved noticeably and motion artifacts are reduced considerably. The physical phantom study also shows that the visual quality of motion-compensated images is greatly improved. Conclusions: The authors propose a novel PAR image-based cardiac motion estimation and compensation algorithm. The algorithm requires an angular scan range of less than 360°. The excellent performance of the proposed algorithm is illustrated by using digital XCAT and physical dynamic cardiac phantom datasets.
Planetary Hyperspectral Imager (PHI)
NASA Technical Reports Server (NTRS)
Silvergate, Peter
1996-01-01
A hyperspectral imaging spectrometer was breadboarded. Key innovations were use of a sapphire prism and single InSb focal plane to cover the entire spectral range, and a novel slit optic and relay optics to reduce thermal background. Operation over a spectral range of 450 - 4950 nm (approximately 3.5 spectral octaves) was demonstrated. Thermal background reduction by a factor of 8 - 10 was also demonstrated.
2015-07-06
This color version of NASA's New Horizons Long Range Reconnaissance Imager (LORRI) picture of Pluto taken July 3, 2015, was created by adding color data from the Ralph instrument gathered earlier in the mission. The LORRI image was taken from a range of 7.8 million miles (12.5 million km), with a central longitude of 19°. http://photojournal.jpl.nasa.gov/catalog/PIA19699
Interactive data-processing system for metallurgy
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1978-01-01
Equipment indicates that system can rapidly and accurately process metallurgical and materials-processing data for wide range of applications. Advantages include increase in contrast between areas on image, ability to analyze images via operator-written programs, and space available for storing images.
Analog signal processing for optical coherence imaging systems
NASA Astrophysics Data System (ADS)
Xu, Wei
Optical coherence tomography (OCT) and optical coherence microscopy (OCM) are non-invasive optical coherence imaging techniques, which enable micron-scale resolution, depth resolved imaging capability. Both OCT and OCM are based on Michelson interferometer theory. They are widely used in ophthalmology, gastroenterology and dermatology, because of their high resolution, safety and low cost. OCT creates cross sectional images whereas OCM obtains en face images. In this dissertation, the design and development of three increasingly complicated analog signal processing (ASP) solutions for optical coherence imaging are presented. The first ASP solution was implemented for a time domain OCT system with a Rapid Scanning Optical Delay line (RSOD)-based optical signal modulation and logarithmic amplifier (Log amp) based demodulation. This OCT system can acquire up to 1600 A-scans per second. The measured dynamic range is 106 dB at 200 A-scans per second. This OCT signal processing electronics includes an off-the-shelf filter box with a Log amp circuit implemented on a PCB board. The second ASP solution was developed for an OCM system with synchronized modulation and demodulation and compensation for interferometer phase drift. This OCM acquired micron-scale resolution, high dynamic range images at acquisition speeds up to 45,000 pixels/second. This OCM ASP solution is fully custom designed on a perforated circuit board. The third ASP solution was implemented on a single 2.2 mm x 2.2 mm complementary metal oxide semiconductor (CMOS) chip. This design is expandable to a multiple channel OCT system. A single on-chip CMOS photodetector and ASP channel was used for coherent demodulation in a time domain OCT system. Cross-sectional images were acquired with a dynamic range of 76 dB (limited by photodetector responsivity). When incorporated with a bump-bonded InGaAs photodiode with higher responsivity, the expected dynamic range is close to 100 dB.
Optical edge effects create conjunctival indentation thickness artefacts.
Sorbara, Luigina; Simpson, Trefford L; Maram, Jyotsna; Song, Eun Sun; Bizheva, Kostadinka; Hutchings, Natalie
2015-05-01
Conjunctival compression observed in ultrahigh resolution optical coherence tomography (UHR-OCT) images of contact lens edges could be actual tissue alteration, may be an optical artefact arising from the difference between the refractive indexes of the lens material and the conjunctival tissue, or could be a combination of the two. The purpose of this study is to image the artefact with contact lenses on a non-biological (non-indentable) medium and to determine the origins of the observed conjunctival compression. Two-dimensional cross-sectional images of the edges of a selection of marketed silicone hydrogel and hydrogel lenses (refractive index ranging from 1.40 to 1.43) were acquired with a research grade UHR-OCT system. The lenses were placed on three continuous surfaces, a glass sphere (refractive index n = 1.52), a rigid contact lens (n = 1.376) and the cornea of a healthy human subject (average n = 1.376). The displacement observed was analysed using ImageJ. The observed optical displacement ranged between 5.39(0.06) μm with Acuvue Advance and 11.99(0.18) μm with Air Optix Night & Day when the lens was imaged on the glass reference sphere. Similarly, on a rigid contact lens displacement ranged between 5.51(0.03) and 9.72(0.12) μm. Displacement was also observed when the lenses were imaged on the human conjunctiva and ranged from 6.49(0.80) μm for the 1-day Acuvue Moist to 17.4(0.22) μm for the Pure Vision contact lens. An optical displacement artefact was observed when imaging a contact lens on two rigid continuous surfaces with UHR-OCT where compression or indentation of the surface could not have been a factor. Contact lenses imaged in situ also exhibited displacement at the intersection of the contact lens edge and the conjunctiva, likely a manifestation of both the artefact and compression of the conjunctiva. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.
Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde
2010-05-01
The purpose of this study was to compare and validate three methods to simulate radiographic image detectors with the Monte Carlo software MCNP/MCNPX in a time efficient way. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. Next to the radiography tally two alternative stochastic detector models were developed: A perfect energy integrating detector and a detector based on the energy absorbed in the detector material. Validation of three image detector models was performed by comparing calculated scatter-to-primary ratios (SPRs) with the published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy integrating detectors and the blur-free absorbed energy detector model were, on the average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy integrating detectors were, on the average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed energy detector model, the calculated SPRs were, on the average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy integrating detector model and the blur-free energy absorbing detector model can be used to simulate image detectors, whereas for conventional x-ray imaging using higher energies, the blur-free energy absorbing detector model is the most appropriate image detector model. The radiography tally overestimates the scattered part and should therefore not be used to simulate radiographic image detectors.
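The difference between the detector models compared above comes down to how each detected photon is scored. A toy illustration (not the MCNP/MCNPX tallies themselves, and the photon-counting variant is added for contrast): a perfect energy-integrating detector weights each photon by its energy, which changes the scatter-to-primary ratio whenever scattered photons carry lower energies than primaries.

```python
import numpy as np

def spr(primary_energies_keV, scatter_energies_keV, energy_integrating=True):
    """Scatter-to-primary ratio under two detector scoring models:
    - energy-integrating: each photon contributes its deposited energy;
    - counting: each photon contributes 1, regardless of energy."""
    primary = np.asarray(primary_energies_keV, dtype=float)
    scatter = np.asarray(scatter_energies_keV, dtype=float)
    if energy_integrating:
        return scatter.sum() / primary.sum()
    return scatter.size / primary.size
```

Because Compton-scattered photons are degraded in energy, the energy-integrating SPR is systematically lower than the photon-counting SPR for the same photon lists, which is one reason the choice of detector model matters when validating against measured SPR values.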
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamichhane, N; Johnson, P; Chinea, F
Purpose: To evaluate the correlation between image features and the accuracy of manually drawn target contours on synthetic PET images Methods: A digital PET phantom was used in combination with Monte Carlo simulation to create a set of 26 simulated PET images featuring a variety of tumor shapes and activity heterogeneity. These tumor volumes were used as a gold standard in comparisons with manual contours delineated by 10 radiation oncologists on the simulated PET images. Metrics used to evaluate segmentation accuracy included the dice coefficient, false positive dice, false negative dice, symmetric mean absolute surface distance, and absolute volumetric difference. Image features extracted from the simulated tumors consisted of volume, shape complexity, mean curvature, and intensity contrast along with five texture features derived from the gray-level neighborhood difference matrices including contrast, coarseness, busyness, strength, and complexity. Correlation between these features and contouring accuracy was examined. Results: Contour accuracy was reasonably well correlated with a variety of image features. Dice coefficient ranged from 0.7 to 0.90 and was correlated closely with contrast (r=0.43, p=0.02) and complexity (r=0.5, p<0.001). False negative dice ranged from 0.10 to 0.50 and was correlated closely with contrast (r=0.68, p<0.001) and complexity (r=0.66, p<0.001). Absolute volumetric difference ranged from 0.0002 to 0.67 and was correlated closely with coarseness (r=0.46, p=0.02) and complexity (r=0.49, p=0.008). Symmetric mean absolute difference ranged from 0.02 to 1 and was correlated closely with mean curvature (r=0.57, p=0.02) and contrast (r=0.6, p=0.001). Conclusion: The long term goal of this study is to assess whether contouring variability can be reduced by providing feedback to the practitioner based on image feature analysis. 
The results are encouraging and will be used to develop a statistical model which will enable a prediction of contour accuracy based purely on image feature analysis.
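The overlap metrics quoted above have simple set-based definitions. A sketch of two of them on binary masks; the false-negative-dice formula shown is one common convention (2·|truth \ contour| over the summed mask sizes), assumed rather than taken from the paper.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def false_negative_dice(truth, contour):
    """False negative dice (one common definition): the Dice-normalized
    fraction of ground-truth voxels the contour missed."""
    truth, contour = truth.astype(bool), contour.astype(bool)
    missed = np.logical_and(truth, ~contour).sum()
    return 2.0 * missed / (truth.sum() + contour.sum())
```

On a contour that captures exactly half of a square ground-truth region, both metrics evaluate to 2/3, illustrating how under-contouring drives Dice down and false negative dice up together.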
Holm, Sven; Russell, Greg; Nourrit, Vincent; McLoughlin, Niall
2017-01-01
A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras. This results in a range of different image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying image sizes and resolutions and four comorbidity subgroups: collectively defined as the diabetic retinopathy, hypertension, age-related macular degeneration, and Glaucoma image set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide some baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
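The accuracy figures quoted for the baseline vessel extractors are plain pixel-wise accuracies against the manual segmentation. A minimal sketch of that scoring step (the function name is illustrative, and this assumes accuracy = (TP + TN) / all pixels, the usual convention for such benchmarks):

```python
import numpy as np

def segmentation_accuracy(predicted, manual):
    """Pixel-wise accuracy of a vessel segmentation against the manually
    segmented vasculature: fraction of pixels where the two binary masks
    agree, counting both vessel and background pixels."""
    predicted, manual = predicted.astype(bool), manual.astype(bool)
    return float(np.mean(predicted == manual))
```

Note that because retinal vessels occupy a small fraction of the image, a high accuracy can coexist with poor vessel recall, which is why databases like this also support metrics computed on the vessel class alone.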
A CMOS-based large-area high-resolution imaging system for high-energy x-ray applications
NASA Astrophysics Data System (ADS)
Rodricks, Brian; Fowler, Boyd; Liu, Chiao; Lowes, John; Haeffner, Dean; Lienert, Ulrich; Almer, John
2008-08-01
CCDs have been the primary sensor in imaging systems for x-ray diffraction and imaging applications in recent years. CCDs have met the fundamental requirements of low noise, high-sensitivity, high dynamic range and spatial resolution necessary for these scientific applications. State-of-the-art CMOS image sensor (CIS) technology has experienced dramatic improvements recently and their performance is rivaling or surpassing that of most CCDs. The advancement of CIS technology is at an ever-accelerating pace and is driven by the multi-billion dollar consumer market. There are several advantages of CIS over traditional CCDs and other solid-state imaging devices; they include low power, high-speed operation, system-on-chip integration and lower manufacturing costs. The combination of superior imaging performance and system advantages makes CIS a good candidate for high-sensitivity imaging system development. This paper will describe a 1344 x 1212 CIS imaging system with a 19.5 μm pitch optimized for x-ray scattering studies at high energies. Fundamental metrics of linearity, dynamic range, spatial resolution, conversion gain, and sensitivity are estimated. The Detective Quantum Efficiency (DQE) is also estimated. Representative x-ray diffraction images are presented. Diffraction images are compared against a CCD-based imaging system.
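Conversion gain, one of the metrics listed, is commonly estimated from a photon-transfer curve. A hedged sketch of that standard method (not necessarily the procedure used in the paper): under shot-noise-limited illumination, signal variance in DN² grows linearly with mean signal in DN, and the slope is the reciprocal of the gain in e⁻/DN.

```python
import numpy as np

def conversion_gain_e_per_dn(mean_signals_dn, variances_dn2):
    """Photon-transfer estimate of conversion gain K (e-/DN): fit
    variance-vs-mean with a line; the slope equals 1/K because Poisson
    variance in electrons (= K * mean_DN * K... in DN^2: mean_DN / K)
    scales the plot, so K = 1 / slope."""
    slope = np.polyfit(np.asarray(mean_signals_dn, dtype=float),
                       np.asarray(variances_dn2, dtype=float), 1)[0]
    return 1.0 / slope

# synthetic flat-field series for a sensor with K = 2 e-/DN:
# var_DN = mean_DN / K, so the fitted slope is 0.5
means = np.array([100.0, 200.0, 400.0, 800.0])
variances = means / 2.0
gain = conversion_gain_e_per_dn(means, variances)
```

In practice the flat-field pairs are differenced first to remove fixed-pattern noise, and the read-noise floor appears as the fit's intercept.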
Multirate and event-driven Kalman filters for helicopter flight
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Smith, Phillip; Suorsa, Raymond E.; Hussien, Bassam
1993-01-01
A vision-based obstacle detection system that provides information about objects as a function of azimuth and elevation is discussed. The range map is computed using a sequence of images from a passive sensor, and an extended Kalman filter is used to estimate range to obstacles. The magnitude of the optical flow that provides measurements for each Kalman filter varies significantly over the image depending on the helicopter motion and object location. In a standard Kalman filter, the measurement update takes place at fixed intervals. It may be necessary to use a different measurement update rate in different parts of the image in order to maintain the same signal to noise ratio in the optical flow calculations. A range estimation scheme that accepts the measurement only under certain conditions is presented. The estimation results from the standard Kalman filter are compared with results from a multirate Kalman filter and an event-driven Kalman filter for a sequence of helicopter flight images.
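The event-driven idea, accepting a measurement only when the optical-flow signal is strong enough, can be shown with a scalar filter. This is a deliberately simplified sketch (a 1-D range state with known closing speed, illustrative names and noise values), not the extended Kalman filter of the paper.

```python
class EventDrivenRangeFilter:
    """Scalar Kalman filter for range-to-obstacle in which the measurement
    update runs only when the optical-flow magnitude exceeds a threshold
    (i.e., when the flow supports an acceptable signal-to-noise ratio);
    otherwise only the time update (prediction) is performed."""

    def __init__(self, x0, p0, q, r, flow_threshold):
        self.x, self.p = x0, p0          # range estimate and its variance
        self.q, self.r = q, r            # process / measurement noise vars
        self.flow_threshold = flow_threshold

    def step(self, closing_speed, dt, measurement=None, flow_mag=0.0):
        # time update: range shrinks at the known closing speed
        self.x -= closing_speed * dt
        self.p += self.q
        # measurement update: only on a sufficiently strong flow event
        if measurement is not None and flow_mag >= self.flow_threshold:
            k = self.p / (self.p + self.r)
            self.x += k * (measurement - self.x)
            self.p *= (1.0 - k)
        return self.x
```

Pixels with weak flow simply coast on the prediction, which mimics using different effective update rates in different parts of the image.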
Radiometric infrared focal plane array imaging system for thermographic applications
NASA Technical Reports Server (NTRS)
Esposito, B. J.; Mccafferty, N.; Brown, R.; Tower, J. R.; Kosonocky, W. F.
1992-01-01
This document describes research performed under the Radiometric Infrared Focal Plane Array Imaging System for Thermographic Applications contract. This research investigated the feasibility of using platinum silicide (PtSi) Schottky-barrier infrared focal plane arrays (IR FPAs) for NASA Langley's specific radiometric thermal imaging requirements. The initial goal of this design was to develop a high spatial resolution radiometer with an NETD of 1 percent of the temperature reading over the range of 0 to 250 C. The proposed camera design developed during this study and described in this report provides: (1) high spatial resolution (full-TV resolution); (2) high thermal dynamic range (0 to 250 C); (3) the ability to image rapid, large thermal transients utilizing electronic exposure control (commandable dynamic range of 2,500,000:1 with exposure control latency of 33 ms); (4) high uniformity (0.5 percent nonuniformity after correction); and (5) high thermal resolution (0.1 C at 25 C background and 0.5 C at 250 C background).
Radiometric infrared focal plane array imaging system for thermographic applications
NASA Astrophysics Data System (ADS)
Esposito, B. J.; McCafferty, N.; Brown, R.; Tower, J. R.; Kosonocky, W. F.
1992-11-01
This document describes research performed under the Radiometric Infrared Focal Plane Array Imaging System for Thermographic Applications contract. This research investigated the feasibility of using platinum silicide (PtSi) Schottky-barrier infrared focal plane arrays (IR FPAs) for NASA Langley's specific radiometric thermal imaging requirements. The initial goal of this design was to develop a high spatial resolution radiometer with an NETD of 1 percent of the temperature reading over the range of 0 to 250 C. The proposed camera design developed during this study and described in this report provides: (1) high spatial resolution (full-TV resolution); (2) high thermal dynamic range (0 to 250 C); (3) the ability to image rapid, large thermal transients utilizing electronic exposure control (commandable dynamic range of 2,500,000:1 with exposure control latency of 33 ms); (4) high uniformity (0.5 percent nonuniformity after correction); and (5) high thermal resolution (0.1 C at 25 C background and 0.5 C at 250 C background).
Macro-SICM: A Scanning Ion Conductance Microscope for Large-Range Imaging.
Schierbaum, Nicolas; Hack, Martin; Betz, Oliver; Schäffer, Tilman E
2018-04-17
The scanning ion conductance microscope (SICM) is a versatile, high-resolution imaging technique that uses an electrolyte-filled nanopipet as a probe. Its noncontact imaging principle makes the SICM uniquely suited for the investigation of soft and delicate surface structures in a liquid environment. The SICM has found an ever-increasing number of applications in chemistry, physics, and biology. However, a drawback of conventional SICMs is their relatively small scan range (typically 100 μm × 100 μm in the lateral and 10 μm in the vertical direction). We have developed a Macro-SICM with an exceedingly large scan range of 25 mm × 25 mm in the lateral and 0.25 mm in the vertical direction. We demonstrate the high versatility of the Macro-SICM by imaging at different length scales: from centimeters (fingerprint, coin) to millimeters (bovine tongue tissue, insect wing) to micrometers (cellular extensions). We applied the Macro-SICM to the study of collective cell migration in epithelial wound healing.
Using turbulence scintillation to assist object ranging from a single camera viewpoint.
Wu, Chensheng; Ko, Jonathan; Coffaro, Joseph; Paulson, Daniel A; Rzasa, John R; Andrews, Larry C; Phillips, Ronald L; Crabbs, Robert; Davis, Christopher C
2018-03-20
Image distortions caused by atmospheric turbulence are often treated as unwanted noise or errors in many image processing studies. Our study, however, shows that in certain scenarios the turbulence distortion can be very helpful in enhancing image processing results. This paper describes a novel approach that uses the scintillation traits recorded in a video clip to perform object ranging with reasonable accuracy from a single camera viewpoint. Conventionally, a single camera would be confused by the perspective viewing problem, where a large object far away looks the same as a small object close by. When the atmospheric turbulence phenomenon is considered, the edge or texture pixels of an object tend to scintillate and vary more with increased distance. This turbulence-induced signature can be quantitatively analyzed to achieve object ranging with reasonable accuracy. Although turbulence inevitably causes random blurring and deformation of imaging results, it also offers convenient solutions to some remote sensing and machine vision problems which would otherwise be difficult.
Real-time image processing of TOF range images using a reconfigurable processor system
NASA Astrophysics Data System (ADS)
Hussmann, S.; Knoll, F.; Edeler, T.
2011-07-01
In recent years, time-of-flight (TOF) sensors have had a significant impact on machine vision research. In comparison to stereo vision systems and laser range scanners, they combine the advantages of active sensors, providing accurate distance measurements, and of camera-based systems, recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics, and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper, a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
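The 4-phase-shift algorithm mentioned above recovers the modulation phase from four correlation samples per pixel via an arctangent; a software reference form is sketched below. Sign conventions and sample pairing differ between sensors, so the pairing here is one common choice, not the paper's specific hardware variant:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def tof_phase_range(a0, a90, a180, a270, f_mod):
    """Reference (software) form of the 4-phase-shift algorithm.

    a0..a270 are the four correlation samples of one TOF pixel taken at
    0°, 90°, 180°, and 270° of modulation phase.  The arctangent of the
    two sample differences is the time-critical step that the paper
    implements in reconfigurable hardware.
    """
    phase = math.atan2(a270 - a90, a0 - a180) % (2.0 * math.pi)
    # Round-trip phase -> one-way distance at modulation frequency f_mod.
    return C * phase / (4.0 * math.pi * f_mod)
```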
NASA Technical Reports Server (NTRS)
Grinstead, Jay H.; Wilder, Michael C.; Reda, Daniel C.; Cruden, Brett A.; Bogdanoff, David W.
2010-01-01
The Electric Arc Shock Tube (EAST) facility and the Hypervelocity Free Flight Aerodynamic Facility (HFFAF, an aeroballistic range) at NASA Ames support basic research in aerothermodynamic phenomena of atmospheric entry, specifically shock layer radiation spectroscopy, convective and radiative heat transfer, and transition to turbulence. Innovative optical instrumentation has been developed and implemented to meet the challenges posed by obtaining such data in these impulse facilities. Spatially and spectrally resolved measurements of absolute radiance of a travelling shock wave in EAST are acquired using multiplexed, time-gated imaging spectrographs. Nearly complete spectral coverage from the vacuum ultraviolet to the near infrared is possible in a single experiment. Time-gated thermal imaging of ballistic range models in flight enables quantitative, global measurements of surface temperature. These images can be interpreted to determine convective heat transfer rates and reveal transition to turbulence due to isolated and distributed surface roughness at hypersonic velocities. The focus of this paper is a detailed description of the optical instrumentation currently in use in the EAST and HFFAF.
Moving target detection in flash mode against stroboscopic mode by active range-gated laser imaging
NASA Astrophysics Data System (ADS)
Zhang, Xuanyu; Wang, Xinwei; Sun, Liang; Fan, Songtao; Lei, Pingshun; Zhou, Yan; Liu, Yuliang
2018-01-01
Moving target detection is important for target tracking and remote surveillance applications of active range-gated laser imaging. This technique has two operation modes, distinguished by the number of laser pulses per frame: stroboscopic mode, which accumulates multiple laser pulses per frame, and flash mode, which uses a single laser pulse per frame. In this paper, we have established a range-gated laser imaging system in which two types of lasers with different frequencies were chosen for the two modes. An electric fan and a horizontal sliding track were selected as the moving targets to compare motion blurring between the two modes. The system working in flash mode shows better performance against motion blurring than stroboscopic mode. Furthermore, based on experiments and theoretical analysis, we show that images acquired in stroboscopic mode have a higher signal-to-noise ratio than those acquired in flash mode in both indoor and underwater environments.
The effect of vegetation type, microrelief, and incidence angle on radar backscatter
NASA Technical Reports Server (NTRS)
Owe, M.; Oneill, P. E.; Jackson, T. J.; Schmugge, T. J.
1985-01-01
The NASA/JPL Synthetic Aperture Radar (SAR) was flown over a 20 x 110 km test site in the Texas High Plains regions north of Lubbock during February/March 1984. The effect of incidence angle was investigated by comparing the pixel values of the calibrated and uncalibrated images. Ten-pixel-wide transects along the entire azimuth were averaged in each of the two scenes, and plotted against the calculated incidence angle of the center of each range increment. It is evident from the graphs that both the magnitudes and patterns exhibited by the corresponding transect means of the two images are highly dissimilar. For each of the cross-poles, the uncalibrated image displayed very distinct and systematic positive trends through the entire range of incidence angles. The two like-poles, however, exhibited relatively constant returns. In the calibrated image, the cross-poles exhibited a constant return, while the like-poles demonstrated a strong negative trend across the range of look-angles, as might be expected.
A CMOS image sensor with programmable pixel-level analog processing.
Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea
2005-11-01
A prototype of a 34 x 34 pixel image sensor, implementing real-time analog image processing, is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference among neighbor pixels. The analog operations are performed over a kernel of 3 x 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 μm with a fill-factor of 20%. The chip was fabricated in a 0.35 μm CMOS technology, and its power consumption is 6 mW with a 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm² and a frame rate of 30 frames/s. The measured fixed pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.
Synchronous Phase-Resolving Flash Range Imaging
NASA Technical Reports Server (NTRS)
Pain, Bedabrata; Hancock, Bruce
2007-01-01
An apparatus, now undergoing development, for range imaging based on measurement of the round-trip phase delay of a pulsed laser beam is described. The apparatus would operate in a staring mode. A pulsed laser would illuminate a target. Laser light reflected from the target would be imaged on a very-large-scale integrated (VLSI) circuit image detector, each pixel of which would contain a photodetector and a phase-measuring circuit. The round-trip travel time for the reflected laser light incident on each pixel, and thus the distance to the portion of the target imaged in that pixel, would be measured in terms of the phase difference between (1) the photodetector output pulse and (2) a local-oscillator signal that would have a frequency between 10 and 20 MHz and that would be synchronized with the laser-pulse-triggering signal.
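The phase-to-distance relationship stated above can be written out directly. This is a minimal sketch of the quoted geometry only; it does not handle phase wrapping beyond one modulation period:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def range_from_phase(delta_phi, f_lo):
    """One-way target distance from the measured round-trip phase delay.

    delta_phi: phase difference (rad) between the photodetector output
               pulse and the local-oscillator signal
    f_lo:      local-oscillator frequency (Hz), 10-20 MHz in the paper
    """
    t_round = delta_phi / (2.0 * math.pi * f_lo)  # round-trip time (s)
    return C * t_round / 2.0                      # one-way distance (m)

def ambiguity_range(f_lo):
    """Maximum unambiguous one-way range for a given LO frequency."""
    return C / (2.0 * f_lo)
```

At 15 MHz the unambiguous range is about 10 m, which illustrates why such systems pick the oscillator frequency to match the expected target distances.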
Extended depth of focus adaptive optics spectral domain optical coherence tomography
Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki
2012-01-01
We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA. PMID:23082278
Extended depth of focus adaptive optics spectral domain optical coherence tomography.
Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Yasuno, Yoshiaki
2012-10-01
We present an adaptive optics spectral domain optical coherence tomography (AO-SDOCT) with a long focal range by active phase modulation of the pupil. A long focal range is achieved by introducing AO-controlled third-order spherical aberration (SA). The property of SA and its effects on focal range are investigated in detail using the Huygens-Fresnel principle, beam profile measurement and OCT imaging of a phantom. The results indicate that the focal range is extended by applying SA, and the direction of extension can be controlled by the sign of applied SA. Finally, we demonstrated in vivo human retinal imaging by altering the applied SA.
Inverse Tone Mapping Based upon Retina Response
Huo, Yongqing; Yang, Fan; Brost, Vincent
2014-01-01
The development of high dynamic range (HDR) displays has spurred research on inverse tone mapping methods, which expand the dynamic range of a low dynamic range (LDR) image to match that of an HDR monitor. This paper proposes a novel physiological approach, which avoids the artifacts that occur in most existing algorithms. Inspired by a property of the human visual system (HVS), this dynamic range expansion scheme performs with low computational complexity and a limited number of parameters, and obtains high-quality HDR results. Comparisons with three recent algorithms in the literature also show that the proposed method reveals more important image details and produces less contrast loss and distortion. PMID:24744678
Fusion of MODIS and Landsat-8 Surface Temperature Images: A New Approach
Hazaymeh, Khaled; Hassan, Quazi K.
2015-01-01
Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing temporal resolution of Landsat-8 land surface temperature (LST) images by fusing LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS); and implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consisted of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) utilizing the above mentioned relationship as a function of a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. It revealed that strong linear relationships existed between the two consecutive MODIS LST images (i.e., r2 values, slopes, and intercepts were in the ranges 0.93–0.94, 0.94–0.99, and 2.97–20.07, respectively). We evaluated the synthetic LST images qualitatively and found high visual agreements with the actual Landsat-8 LST images. In addition, we conducted quantitative evaluations of these synthetic images; and found strong agreements with the actual Landsat-8 LST images. For example, r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84–0.90, 0.061–0.080, and 0.003–0.004, respectively. PMID:25730279
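The two STI-FM components above reduce to an ordinary least-squares fit between the coarse images, followed by applying the fitted line to the fine image. A minimal per-scene sketch follows; the published method's exact fitting scheme (e.g., any windowed or per-class variant) may differ:

```python
import numpy as np

def sti_fm_predict(modis_t1, modis_t2, landsat_t1):
    """Predict a synthetic Landsat-8 LST image at time 2 (STI-FM sketch).

    Step (i): scene-wide least-squares fit of
              modis_t2 ≈ slope * modis_t1 + intercept.
    Step (ii): apply the same linear relationship to the fine-resolution
               Landsat-8 LST image acquired at time 1.
    """
    slope, intercept = np.polyfit(modis_t1.ravel(), modis_t2.ravel(), 1)
    return slope * landsat_t1 + intercept
```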
Fusion of MODIS and landsat-8 surface temperature images: a new approach.
Hazaymeh, Khaled; Hassan, Quazi K
2015-01-01
Here, our objective was to develop a spatio-temporal image fusion model (STI-FM) for enhancing temporal resolution of Landsat-8 land surface temperature (LST) images by fusing LST images acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS); and implement the developed algorithm over a heterogeneous semi-arid study area in Jordan, Middle East. The STI-FM technique consisted of two major components: (i) establishing a linear relationship between two consecutive MODIS 8-day composite LST images acquired at time 1 and time 2; and (ii) utilizing the above mentioned relationship as a function of a Landsat-8 LST image acquired at time 1 in order to predict a synthetic Landsat-8 LST image at time 2. It revealed that strong linear relationships existed between the two consecutive MODIS LST images (i.e., r2 values, slopes, and intercepts were in the ranges 0.93-0.94, 0.94-0.99, and 2.97-20.07, respectively). We evaluated the synthetic LST images qualitatively and found high visual agreements with the actual Landsat-8 LST images. In addition, we conducted quantitative evaluations of these synthetic images; and found strong agreements with the actual Landsat-8 LST images. For example, r2, root mean square error (RMSE), and absolute average difference (AAD) values were in the ranges 0.84-0.90, 0.061-0.080, and 0.003-0.004, respectively.
Multiple energy synchrotron biomedical imaging system
NASA Astrophysics Data System (ADS)
Bassey, B.; Martinson, M.; Samadi, N.; Belev, G.; Karanfil, C.; Qi, P.; Chapman, D.
2016-12-01
A multiple energy imaging system that can extract multiple endogenous or induced contrast materials as well as water and bone images would be ideal for imaging of biological subjects. The continuous spectrum available from synchrotron light facilities provides a nearly perfect source for multiple energy x-ray imaging. A novel multiple energy x-ray imaging system, which prepares a horizontally focused polychromatic x-ray beam, has been developed at the BioMedical Imaging and Therapy bend magnet beamline at the Canadian Light Source. The imaging system is made up of a cylindrically bent Laue single silicon (5,1,1) crystal monochromator, scanning and positioning stages for the subjects, a flat panel (area) detector, and a data acquisition and control system. Depending on the crystal's bent radius, reflection type, and the horizontal beam width of the filtered synchrotron radiation (20-50 keV) used, the size and spectral energy range of the prepared focused beam varied. For example, with a bent radius of 95 cm, a (1,1,1) type reflection, and a 50 mm wide beam, a 0.5 mm wide focused beam with a spectral energy range of 27 keV-43 keV was obtained. This spectral energy range covers the K-edges of iodine (33.17 keV), xenon (34.56 keV), cesium (35.99 keV), and barium (37.44 keV); some of these elements are used as biomedical and clinical contrast agents. Using the developed imaging system, a test subject composed of iodine, xenon, cesium, and barium, along with water and bone, was imaged and the projected concentrations of these materials were successfully extracted. The estimated dose rate to test subjects imaged at a ring current of 200 mA is 8.7 mGy/s, corresponding to a cumulative dose of 1.3 Gy and a dose of 26.1 mGy per image. Potential biomedical applications of the imaging system will include projection imaging that requires any of the extracted elements as a contrast agent and multi-contrast K-edge imaging.
Automatic dynamic range adjustment for ultrasound B-mode imaging.
Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo
2015-02-01
In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the signal to be displayed, and it is one of the most essential parameters determining image quality. Typically, DR is given a fixed value and adjusted manually by operators, which leads to low clinical productivity and high user dependency. Furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image based on that of a reference image, can be applied to determine the DR value; however, it can lead to an over-contrasted image. In this paper, a new Automatic Dynamic Range Adjustment (ADRA) method is presented that adaptively adjusts the DR value by making input images similar to a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared to the fixed-DR case. In addition, the proposed ADRA method was shown to outperform the histogram matching method on in vivo liver and kidney data. When using 3D abdominal data with 70 frames, while the CC value from the ADRA method increased only slightly (i.e., by 0.6%), the proposed method showed improved image quality in the c-plane compared to its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user.
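The distance-ratio idea described in the abstract can be sketched as follows. This is a speculative reading of the one-sentence description only: the published ADRA formula may differ, and the use of the plain mean of log-compressed data and the window placement around the input's log average are assumptions made here for illustration:

```python
import numpy as np

def adra_display_window(ref_db, in_db):
    """Choose a display window for an input B-mode image (ADRA-style sketch).

    Both images are assumed to be log-compressed (dB).  The reference
    image's distance ratios between its log average and its extreme
    values are reproduced when placing the display window around the
    input image's log average.
    """
    la_ref, la_in = ref_db.mean(), in_db.mean()
    span_ref = ref_db.max() - ref_db.min()
    r_hi = (ref_db.max() - la_ref) / span_ref  # ratio above the log average
    r_lo = (la_ref - ref_db.min()) / span_ref  # ratio below the log average
    span_in = in_db.max() - in_db.min()
    hi = la_in + r_hi * span_in
    lo = la_in - r_lo * span_in
    return lo, hi  # display window; dynamic range = hi - lo
```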
Hierarchical tone mapping for high dynamic range image visualization
NASA Astrophysics Data System (ADS)
Qiu, Guoping; Duan, Jiang
2005-07-01
In this paper, we present a computationally efficient, practically easy-to-use tone mapping technique for the visualization of high dynamic range (HDR) images on low dynamic range (LDR) reproduction devices. The new method, termed the hierarchical nonlinear linear (HNL) tone-mapping operator, maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to their allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that our new operator can be used for the effective enhancement of ordinary images.
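The two hierarchical steps can be prototyped directly: allocate LDR levels per intensity interval from pixel density, then interpolate linearly within each interval. The density exponent `p` below stands in for the paper's single-parameter allocation formula, which is not given in the abstract; the interval count and defaults are likewise assumptions:

```python
import numpy as np

def hnl_tone_map(hdr, n_intervals=64, out_levels=256, p=0.5):
    """Hierarchical nonlinear/linear (HNL) tone mapping sketch.

    Step 1 (nonlinear): allocate output display levels to HDR intensity
    intervals in proportion to (pixel count) ** p.
    Step 2 (linear): map each interval linearly onto its allocated levels.
    Assumes a non-constant input image.
    """
    hdr = np.asarray(hdr, dtype=np.float64)
    edges = np.linspace(hdr.min(), hdr.max() + 1e-12, n_intervals + 1)
    counts, _ = np.histogram(hdr, bins=edges)
    weights = counts.astype(np.float64) ** p
    weights /= weights.sum()
    # Cumulative allocation of LDR levels to interval boundaries.
    alloc = np.concatenate(([0.0], np.cumsum(weights))) * (out_levels - 1)
    # Piecewise-linear map from HDR interval edges to allocated LDR levels.
    ldr = np.interp(hdr, edges, alloc)
    return np.round(ldr).astype(np.uint8)
```

Dense intensity intervals receive more display levels (the nonlinear step), while within each interval the mapping stays linear, which is what keeps local contrast ordering intact.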
Tunable optical coherence tomography in the infrared range using visible photons
NASA Astrophysics Data System (ADS)
Paterova, Anna V.; Yang, Hongzhi; An, Chengwu; Kalashnikov, Dmitry A.; Krivitsky, Leonid A.
2018-04-01
Optical coherence tomography (OCT) is an appealing technique for bio-imaging, medicine, and material analysis. For many applications, OCT in mid- and far-infrared (IR) leads to significantly more accurate results. Reported mid-IR OCT systems require light sources and photodetectors which operate in mid-IR range. These devices are expensive and need cryogenic cooling. Here, we report a proof-of-concept demonstration of a wavelength tunable IR OCT technique with detection of only visible range photons. Our method is based on the nonlinear interference of frequency correlated photon pairs. The nonlinear crystal, introduced in the Michelson-type interferometer, generates photon pairs with one photon in the visible and another in the IR range. The intensity of detected visible photons depends on the phase and loss of IR photons, which interact with the sample under study. This enables us to characterize sample properties and perform imaging in the IR range by detecting visible photons. The technique possesses broad wavelength tunability and yields a fair axial and lateral resolution, which can be tailored to the specific application. The work contributes to the development of versatile 3D imaging and material characterization systems working in a broad range of IR wavelengths, which do not require the use of IR-range light sources and photodetectors.
Proton Range Uncertainty Due to Bone Cement Injected Into the Vertebra in Radiation Therapy Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Young Kyung; Hwang, Ui-Jung; Shin, Dongho, E-mail: dongho@ncc.re.kr
2011-10-01
We wanted to evaluate the influence of bone cement on the proton range and to derive a conversion factor predicting the range shift by correcting distorted computed tomography (CT) data, as a reference to determine whether the correction is needed. Two CT datasets were obtained with and without a bone cement disk placed in a water phantom. Treatment planning was performed on a set of uncorrected CT images with the bone cement disk, and the verification plan was applied to the same set of CT images with an effective CT number for the bone cement disk. The effective CT number was determined by measuring the actual proton range with the bone cement disk. The effects of CT number, thickness, and position of bone cement on the proton range were evaluated in the treatment planning system (TPS) to derive a conversion factor predicting the range shift by correcting the CT number of bone cement. The effective CT number of bone cement was 260 Hounsfield units (HU). The calculated proton range for native CT data was significantly shorter than the measured proton range. However, the calculated range for the corrected CT data with the effective CT number coincided exactly with the measured range. The conversion factor was 209.6 [HU·cm/mm] for bone cement and approximately predicted the range shift resulting from correcting the CT number. We found that the heterogeneity of bone cement could cause incorrect proton ranges in treatment plans using CT images. With an effective CT number of bone cement derived from the proton range and relative stopping power, a more accurate proton range could be calculated in the TPS. The conversion factor could predict the necessity for CT data correction with sufficient accuracy.
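Given the quoted units of the conversion factor (HU·cm/mm), the predicted range shift scales with both the CT-number error and the cement thickness along the beam. The following one-liner shows that dimensional reading; the exact way the paper applies the factor is an assumption here:

```python
def range_shift_mm(delta_hu, thickness_cm, k=209.6):
    """Predicted proton range shift (mm) from a CT-number error.

    delta_hu:     difference between assigned and effective CT number (HU)
    thickness_cm: bone-cement thickness along the beam (cm)
    k:            conversion factor from the paper, 209.6 HU·cm/mm
                  (the usage shown here is a dimensional interpretation)
    """
    return delta_hu * thickness_cm / k
```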
Enhanced visualization of abnormalities in digital-mammographic images
NASA Astrophysics Data System (ADS)
Young, Susan S.; Moore, William E.
2002-05-01
This paper describes two new presentation methods that are intended to improve the ability of radiologists to visualize abnormalities in mammograms by enhancing the appearance of the breast parenchyma pattern relative to the fatty-tissue surroundings. The first method, referred to as mountain-view, is obtained via multiscale edge decomposition through filter banks. The image is displayed in a multiscale edge domain that causes the image to have a topographic-like appearance. The second method displays the image in the intensity domain and is referred to as contrast-enhancement presentation. The input image is first passed through a decomposition filter bank to produce a filtered output (Id). The image at the lowest resolution is processed using a LUT (look-up table) to produce a tone-scaled image (I'). The LUT is designed to optimally map the code value range corresponding to the parenchyma pattern in the mammographic image into the dynamic range of the output medium. The algorithm uses a contrast weight control mechanism to produce the desired weight factors to enhance the edge information corresponding to the parenchyma pattern. The output image is formed using a reconstruction filter bank through I' and enhanced Id.
Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.
Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos
2010-07-01
To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and compare it with that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio, and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal-to-noise ratio and reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding.
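The role of the regularization parameter can be illustrated with a generic regularized iterative reconstruction. The sketch below minimizes a Tikhonov-regularized least-squares objective by gradient descent (Landweber-style iteration); the paper's NUFFT forward operator is replaced here by an explicit matrix purely for illustration, and this is not the authors' specific algorithm:

```python
import numpy as np

def landweber_tikhonov(A, y, lam=0.01, n_iter=500):
    """Gradient descent on ||A x - y||^2 + lam * ||x||^2.

    A:      forward operator (stand-in for the NUFFT of the PROPELLER
            sampling pattern)
    y:      measured (k-space) data
    lam:    regularization parameter trading SNR against resolution
    n_iter: number of iterations
    """
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)  # stable step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * x  # gradient of the objective
        x = x - step * grad
    return x
```

Larger `lam` damps noise amplification at the cost of resolution, which mirrors the trade-off the abstract reports across regularization values.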
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young-Min; Pennycook, Stephen J.; Borisevich, Albina Y.
Octahedral tilt behavior is increasingly recognized as an important contributing factor to the physical behavior of perovskite oxide materials and especially their interfaces, necessitating the development of high-resolution methods of tilt mapping. There are currently two major approaches for quantitative imaging of tilts in scanning transmission electron microscopy (STEM): bright field (BF) and annular bright field (ABF). In this study, we show that BF STEM can be reliably used for measurements of oxygen octahedral tilts. While optimal conditions for BF imaging are more restricted with respect to sample thickness and defocus, we find that BF imaging with an aberration-corrected microscope at an accelerating voltage of 300 kV gives the most accurate quantitative measurement of the oxygen column positions. Using the tilted perovskite structure of BiFeO3 (BFO) as our test sample, we simulate BF and ABF images in a wide range of conditions, identifying the optimal imaging conditions for each mode. Finally, we show that unlike ABF imaging, BF imaging remains directly quantitatively interpretable for a wide range of specimen mistilt, suggesting that it should be preferable to ABF STEM imaging for quantitative structure determination.
Kim, Young-Min; Pennycook, Stephen J.; Borisevich, Albina Y.
2017-04-29
Octahedral tilt behavior is increasingly recognized as an important contributing factor to the physical behavior of perovskite oxide materials and especially their interfaces, necessitating the development of high-resolution methods of tilt mapping. There are currently two major approaches for quantitative imaging of tilts in scanning transmission electron microscopy (STEM): bright field (BF) and annular bright field (ABF). In this study, we show that BF STEM can be reliably used for measurements of oxygen octahedral tilts. While optimal conditions for BF imaging are more restricted with respect to sample thickness and defocus, we find that BF imaging with an aberration-corrected microscope at an accelerating voltage of 300 kV gives the most accurate quantitative measurement of the oxygen column positions. Using the tilted perovskite structure of BiFeO3 (BFO) as our test sample, we simulate BF and ABF images in a wide range of conditions, identifying the optimal imaging conditions for each mode. Finally, we show that unlike ABF imaging, BF imaging remains directly quantitatively interpretable for a wide range of specimen mistilt, suggesting that it should be preferable to ABF STEM imaging for quantitative structure determination.
Dos Santos, Denise Takehana; Costa e Silva, Adriana Paula Andrade; Vannier, Michael Walter; Cavalcanti, Marcelo Gusmão Paraiso
2004-12-01
The purpose of this study was to determine the sensitivity and specificity of multislice computerized tomography (CT) for the diagnosis of maxillofacial fractures, following specific protocols on an independent workstation. The study population consisted of 56 patients with maxillofacial fractures who underwent multislice CT. The original data were transferred to an independent workstation running volumetric imaging software to generate axial images and simultaneous multiplanar (MPR) and 3-dimensional (3D-CT) volume-rendered reconstructed images. The images were then processed and interpreted by 2 examiners, independently of each other, using the following protocols: axial, MPR/axial, 3D-CT images, and the association of axial/MPR/3D images. The clinical/surgical findings were considered the gold standard corroborating the diagnosis of the fractures and their anatomic localization. The statistical analysis was carried out using validity and chi-squared tests. The association of axial/MPR/3D images showed higher sensitivity (95.8%) and specificity (99%) than the other protocols across all regions analyzed. CT imaging demonstrated high specificity and sensitivity for maxillofacial fractures. The association of axial/MPR/3D-CT images added important information relative to the other CT protocols.
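The reported sensitivity and specificity follow directly from confusion-matrix counts. A minimal sketch; the counts below are hypothetical, chosen only to reproduce the quoted 95.8% and 99% rates:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# Hypothetical counts consistent with the reported rates:
sens, spec = sensitivity_specificity(tp=46, fn=2, tn=99, fp=1)
print(round(sens, 3), round(spec, 2))   # 0.958 0.99
```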
Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform
Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos
2013-01-01
Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject, were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER and to compare it to that of conventional gridding. The trade-off between spatial resolution, signal-to-noise ratio (SNR), and image artifacts was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR and reduced artifacts at similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028
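The iterative reconstruction described above solves a regularized least-squares problem with a NUFFT forward model. The following is a minimal sketch of that idea only, substituting a masked Cartesian FFT for the true PROPELLER NUFFT; the operator, regularizer, and parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def iterative_recon(y, mask, lam=0.01, n_iter=100, step=1.0):
    """Minimize ||M F x - y||^2 + lam ||x||^2 by gradient descent, where
    M F (a masked orthonormal FFT) stands in for the NUFFT forward model."""
    F = lambda u: mask * np.fft.fft2(u, norm="ortho")
    FH = lambda v: np.fft.ifft2(mask * v, norm="ortho")   # adjoint of M F
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        x = x - step * (FH(F(x) - y) + lam * x)
    return x

# Usage: recover a smooth object from 60% of its Fourier samples.
rng = np.random.default_rng(0)
obj = np.outer(np.hanning(32), np.hanning(32))
mask = rng.random((32, 32)) < 0.6
y = mask * np.fft.fft2(obj, norm="ortho")
rec = iterative_recon(y, mask, lam=0.001, n_iter=200)
```

The regularization parameter `lam` plays the same role as in the abstract: larger values suppress noise and artifacts at the cost of data fidelity.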
Development of proton CT imaging system using plastic scintillator and CCD camera
NASA Astrophysics Data System (ADS)
Tanaka, Sodai; Nishio, Teiji; Matsushita, Keiichiro; Tsuneda, Masato; Kabuki, Shigeto; Uesaka, Mitsuru
2016-06-01
A proton computed tomography (pCT) imaging system was constructed to evaluate the error of the x-ray CT (xCT)-to-WEL (water-equivalent length) conversion in treatment planning for proton therapy. In this system, the scintillation light integrated along the beam direction is captured with a CCD camera, which enables fast and easy data acquisition. The light intensity is converted to the range of the proton beam using a light-to-range conversion table prepared beforehand, and a pCT image is reconstructed. A demonstration experiment was performed using a 70 MeV proton beam provided by the AVF930 cyclotron at the National Institute of Radiological Sciences. Three-dimensional pCT images were reconstructed from the experimental data. A thin structure of approximately 1 mm was clearly observed, with the spatial resolution of the pCT images at the same level as that of the xCT images. pCT images of various substances were reconstructed to evaluate their pixel values. The image quality was investigated with regard to deterioration caused by effects including multiple Coulomb scattering.
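The light-to-range conversion can be pictured as lookup-table interpolation. A sketch with hypothetical calibration values; the real table would be measured beforehand with beams of known energy:

```python
import numpy as np

# Hypothetical calibration table: integrated scintillation light (a.u.)
# versus proton range in water (mm), monotonic in light intensity.
light_cal = np.array([0.10, 0.35, 0.60, 0.82, 1.00])
range_cal = np.array([5.0, 12.0, 20.0, 28.0, 38.5])

def light_to_range(intensity):
    """Convert per-pixel light intensity to water-equivalent range by
    linear interpolation in the calibration table."""
    return np.interp(intensity, light_cal, range_cal)

print(light_to_range(0.475))   # midway between 0.35 and 0.60 -> 16.0 mm
```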
Methods for reverberation suppression utilizing dual frequency band imaging.
Rau, Jochen M; Måsøy, Svein-Erik; Hansen, Rune; Angelsen, Bjørn; Tangen, Thor Andreas
2013-09-01
Reverberations impair the contrast resolution of diagnostic ultrasound images. Tissue harmonic imaging is a common method to reduce these artifacts, but does not remove all reverberations. Dual frequency band imaging (DBI), utilizing a low frequency pulse which manipulates propagation of the high frequency imaging pulse, has been proposed earlier for reverberation suppression. This article adds two different methods for reverberation suppression with DBI: the delay corrected subtraction (DCS) and the first order content weighting (FOCW) method. Both methods utilize the propagation delay of the imaging pulse of two transmissions with alternating manipulation pressure to extract information about its depth of first scattering. FOCW further utilizes this information to estimate the content of first order scattering in the received signal. Initial evaluation is presented where both methods are applied to simulated and in vivo data. Both methods yield visual and measurable substantial improvement in image contrast. Comparing DCS with FOCW, DCS produces sharper images and retains more details while FOCW achieves best suppression levels and, thus, highest image contrast. The measured improvement in contrast ranges from 8 to 27 dB for DCS and from 4 dB up to the dynamic range for FOCW.
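The core DBI premise, that first-order echoes acquire a manipulation-dependent propagation delay while multiply scattered reverberations accumulate a different one, can be caricatured in a toy model. This is not the authors' DCS or FOCW processing; the reverberation is idealized here as completely unshifted, so plain subtraction of the two transmissions cancels it:

```python
import numpy as np

fs = 50e6                                  # sample rate [Hz]
t = np.arange(0, 4e-6, 1 / fs)
pulse = lambda tau: np.exp(-((t - tau) / 50e-9) ** 2)   # Gaussian echo at delay tau

dtau = 20e-9   # propagation-delay shift imposed by the manipulating LF pulse
# Two transmissions with opposite manipulation polarity: the first-order echo
# is shifted by +/- dtau/2, while the reverberation (idealized) is not shifted.
reverb = 0.3 * pulse(3e-6)
rx_plus = pulse(2e-6 + dtau / 2) + reverb
rx_minus = pulse(2e-6 - dtau / 2) + reverb

# Subtracting the two transmissions cancels the unshifted reverberation while
# the delay-tagged first-order echo survives as a finite difference.
diff = rx_plus - rx_minus
```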
NASA Astrophysics Data System (ADS)
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWRI, the starting and ending frequencies of a linear frequency modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI yields a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWRI images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above the frame rates of today's infrared camera technology.
Laser one-dimensional range profile and the laser two-dimensional range profile of cylinders
NASA Astrophysics Data System (ADS)
Gong, Yanjun; Wang, Mingjun; Gong, Lei
2015-10-01
A laser one-dimensional range profile, i.e., the scattered power of a pulsed laser from a target as a function of range, is a radar imaging technique. The laser two-dimensional range profile is the two-dimensional scattering image of a pulsed laser from a target. Together these are called laser range profiles (LRPs). The laser range profile reflects the shape and surface material of the target. These techniques were motivated by applications of laser radar to target discrimination in ballistic missile defense. This paper gives the radar equation for a pulsed laser and, based on it, derives an analytical model of the laser range profile of a cylinder. Simulation results are presented for the one-dimensional range profiles of several cylinders whose surface material has diffuse Lambertian reflectance, and for different pulse widths. The influence of geometric parameters, pulse width, and attitude on the range profiles is analyzed.
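At a schematic level, a one-dimensional range profile is the transmitted pulse envelope convolved with the target's range-resolved reflectivity. A sketch under that simplification only; it omits the 1/R^4 radar-equation factor and the Lambertian angular weighting that the paper's analytical model includes:

```python
import numpy as np

c = 3e8                                    # speed of light [m/s]
dr = 0.01                                  # range bin [m]
r = np.arange(0, 5, dr)

# Hypothetical range-resolved reflectivity of a target occupying 2.0-2.6 m
rho = np.where((r > 2.0) & (r < 2.6), 1.0, 0.0)

T = 2e-9                                   # laser pulse width [s]
sigma_r = c * T / 2                        # range extent of the pulse (0.3 m)
kernel_r = np.arange(-1, 1, dr)
pulse = np.exp(-(kernel_r / sigma_r) ** 2)

# One-dimensional range profile: received power versus range
profile = np.convolve(rho, pulse, mode="same") * dr
```

Shorter pulses (smaller `T`) sharpen the profile and resolve finer target structure, which is the pulse-width dependence the paper analyzes.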
Limited Angle Dual Modality Breast Imaging
NASA Astrophysics Data System (ADS)
More, Mitali J.; Li, Heng; Goodale, Patricia J.; Zheng, Yibin; Majewski, Stan; Popov, Vladimir; Welch, Benjamin; Williams, Mark B.
2007-06-01
We are developing a dual modality breast scanner that can obtain x-ray transmission and gamma ray emission images in succession at multiple viewing angles with the breast held under mild compression. These views are reconstructed and fused to obtain three-dimensional images that combine structural and functional information. Here, we describe the dual modality system and present results of phantom experiments designed to test the system's ability to obtain fused volumetric dual modality data sets from a limited number of projections, acquired over a limited (less than 180 degrees) angular range. We also present initial results from phantom experiments conducted to optimize the acquisition geometry for gamma imaging. The optimization parameters include the total number of views and the angular range over which these views should be spread, while keeping the total number of detected counts fixed. We have found that in general, for a fixed number of views centered around the direction perpendicular to the direction of compression, in-plane contrast and SNR are improved as the angular range of the views is decreased. The improvement in contrast and SNR with decreasing angular range is much greater for deeper lesions and for a smaller number of views. However, the z-resolution of the lesion is significantly reduced with decreasing angular range. Finally, we present results from limited angle tomography scans using a system with dual, opposing heads.
NASA Astrophysics Data System (ADS)
Yan, Hao; Cervino, Laura; Jia, Xun; Jiang, Steve B.
2012-04-01
While compressed sensing (CS)-based algorithms have been developed for low-dose cone beam CT (CBCT) reconstruction, a clear understanding of the relationship between image quality and imaging dose at low-dose levels is needed. In this paper, we quantitatively investigate this subject in a comprehensive manner with extensive experimental and simulation studies. The basic idea is to plot both the image quality and the imaging dose together as functions of the number of projections and the mAs per projection over the whole clinically relevant range. On this basis, a clear understanding of the tradeoff between image quality and imaging dose can be achieved, and optimal low-dose CBCT scan protocols can be developed to maximize the dose reduction while minimizing the image quality loss for various imaging tasks in image-guided radiation therapy (IGRT). Main findings of this work include: (1) under the CS-based reconstruction framework, image quality degrades little over a large range of dose variation. Image quality degradation becomes evident when the imaging dose (approximated by the x-ray tube load) is decreased below 100 total mAs. An imaging dose lower than 40 total mAs leads to dramatic image degradation and thus should be used cautiously. Optimal low-dose CBCT scan protocols likely fall in the dose range of 40-100 total mAs, depending on the specific IGRT application. (2) Among different scan protocols at a constant low-dose level, super sparse-view reconstruction with fewer than 50 projections is the most challenging case, even with strong regularization. Better image quality can be acquired with low-mAs protocols. (3) The optimal scan protocol is the combination of a medium number of projections and a medium level of mAs/view. This is more evident when the dose is around 72.8 total mAs or below and when the ROI is a low-contrast or high-resolution object. Based on our results, the optimal number of projections is around 90 to 120.
(4) The clinically acceptable lowest imaging dose level is task dependent. In our study, 72.8 mAs is a safe dose level for visualizing low-contrast objects, while 12.2 total mAs is sufficient for detecting high-contrast objects of diameter greater than 3 mm.
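The dose axis in this trade-off is simply the product of the number of projections and the mAs per view. For instance (the view counts below are one hypothetical decomposition of the quoted totals, not the paper's actual protocols):

```python
def total_mas(n_projections, mas_per_projection):
    """Tube-load proxy for CBCT imaging dose: projections x mAs per view."""
    return n_projections * mas_per_projection

print(round(total_mas(364, 0.2), 1))   # 72.8 total mAs
print(round(total_mas(61, 0.2), 1))    # 12.2 total mAs
```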
Network Design in Close-Range Photogrammetry with Short Baseline Images
NASA Astrophysics Data System (ADS)
Barazzetti, L.
2017-08-01
The availability of automated software for image-based 3D modelling has changed the way people acquire images for photogrammetric applications. Short baseline images are required to match image points with SIFT-like algorithms, resulting in more images than are necessary for "old fashioned" photogrammetric projects based on manual measurements. This paper presents some considerations on network design for short baseline image sequences, especially regarding the precision and reliability of bundle adjustment. Simulated results reveal that the large number of 3D points used for image orientation has very limited impact on network precision.
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Jackson, David A.; Podoleanu, Adrian
2018-03-01
Typically, swept source optical coherence tomography (SS-OCT) imaging instruments are capable of a longer axial range than their camera-based (CB) counterparts. However, there are still various applications that would benefit from an extended axial range. In this paper, we propose an interferometer configuration that can be used to extend the axial range of OCT instruments equipped with conventional swept-source lasers up to a few cm. In this configuration, the two arms of the interferometer are equipped with adjustable optical path length rings. The use of semiconductor optical amplifiers in the two rings compensates for optical losses; hence, multiple-path depth reflectivity profiles (A-scans) can be combined axially. In this way, extremely long overall axial ranges are possible. The use of the recirculation loops produces an effect equivalent to extending the coherence length of the swept source laser. Using this approach, the achievable axial imaging range in SS-OCT can reach values well beyond the limit imposed by the coherence length of the laser, in principle exceeding many centimeters. In the present work, we demonstrate axial ranges exceeding 4 cm using a commercial swept source laser and reaching 6 cm using an "in-house" swept source laser. When used alone in a conventional set-up, both these lasers provide less than a few mm of axial range.
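If each pass through the recirculation rings offsets the A-scan by the ring path difference, the stitched axial range grows linearly with the number of usable round trips. A toy calculation with hypothetical numbers, not the paper's measured values:

```python
def combined_axial_range(base_range_mm, ring_delta_mm, n_loops):
    """Overall axial range when A-scans from successive round trips through
    the recirculation rings are stitched end to end (toy model)."""
    return base_range_mm + n_loops * ring_delta_mm

# e.g. a ~4 mm native range plus nine further 4 mm segments -> ~40 mm (4 cm)
print(combined_axial_range(4, 4, 9))   # 40
```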
Prowle, John R; Molan, Maurice P; Hornsey, Emma; Bellomo, Rinaldo
2012-06-01
In septic patients, decreased renal perfusion is considered to play a major role in the pathogenesis of acute kidney injury. However, the accurate measurement of renal blood flow in such patients is problematic and invasive. We sought to overcome such obstacles by measuring renal blood flow in septic patients with acute kidney injury using cine phase-contrast magnetic resonance imaging. Pilot observational study. University-affiliated general adult intensive care unit. Ten adult patients with established septic acute kidney injury and 11 normal volunteers. Cine phase-contrast magnetic resonance imaging measurement of renal blood flow and cardiac output. The median age of the study patients was 62.5 yrs and eight were male. At the time of magnetic resonance imaging, eight patients were mechanically ventilated, nine were on continuous hemofiltration, and five required vasopressors. Cine phase-contrast magnetic resonance imaging examinations were carried out without complication. Median renal blood flow was 482 mL/min (range 335-1137) in septic acute kidney injury and 1260 mL/min (range 791-1750) in healthy controls (p = .003). Renal blood flow indexed to body surface area was 244 mL/min/m2 (range 165-662) in septic acute kidney injury and 525 mL/min/m2 (range 438-869) in controls (p = .004). In patients with septic acute kidney injury, median cardiac index was 3.5 L/min/m2 (range 1.6-8.7), and median renal fraction of cardiac output was only 7.1% (range 4.4-10.8). There was no rank correlation between renal blood flow index and creatinine clearance in patients with septic acute kidney injury (r = .26, p = .45). Cine phase-contrast magnetic resonance imaging can be used to noninvasively and safely assess renal perfusion during critical illness in man. Near-simultaneous accurate measurement of cardiac output enables organ blood flow to be assessed in the context of the global circulation. 
Renal blood flow seems consistently reduced as a fraction of cardiac output in established septic acute kidney injury. Cine phase-contrast magnetic resonance imaging may be a valuable tool to further investigate renal blood flow and the effects of therapies on renal blood flow in critical illness.
Jupiter's Moons: Family Portrait
NASA Technical Reports Server (NTRS)
2007-01-01
This montage shows the best views of Jupiter's four large and diverse 'Galilean' satellites as seen by the Long Range Reconnaissance Imager (LORRI) on the New Horizons spacecraft during its flyby of Jupiter in late February 2007. The four moons are, from left to right: Io, Europa, Ganymede and Callisto. The images have been scaled to represent the true relative sizes of the four moons and are arranged in their order from Jupiter. Io, 3,640 kilometers (2,260 miles) in diameter, was imaged at 03:50 Universal Time on February 28 from a range of 2.7 million kilometers (1.7 million miles). The original image scale was 13 kilometers per pixel, and the image is centered at Io coordinates 6 degrees south, 22 degrees west. Io is notable for its active volcanism, which New Horizons has studied extensively. Europa, 3,120 kilometers (1,938 miles) in diameter, was imaged at 01:28 Universal Time on February 28 from a range of 3 million kilometers (1.8 million miles). The original image scale was 15 kilometers per pixel, and the image is centered at Europa coordinates 6 degrees south, 347 degrees west. Europa's smooth, icy surface likely conceals an ocean of liquid water. New Horizons obtained data on Europa's surface composition and imaged subtle surface features, and analysis of these data may provide new information about the ocean and the icy shell that covers it. New Horizons spied Ganymede, 5,262 kilometers (3,268 miles) in diameter, at 10:01 Universal Time on February 27 from 3.5 million kilometers (2.2 million miles) away. The original scale was 17 kilometers per pixel, and the image is centered at Ganymede coordinates 6 degrees south, 38 degrees west. Ganymede, the largest moon in the solar system, has a dirty ice surface cut by fractures and peppered by impact craters. New Horizons' infrared observations may provide insight into the composition of the moon's surface and interior. 
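The quoted image scales are consistent with scale = slant range x per-pixel field of view. A quick check, assuming approximately 4.96 microradians for LORRI's instantaneous field of view (an approximate published value, used here as an assumption):

```python
def pixel_scale_km(range_km, ifov_urad=4.96):
    """Image scale [km/pixel] = slant range x per-pixel instantaneous
    field of view (IFOV); 4.96 microradians approximates LORRI's IFOV."""
    return range_km * ifov_urad * 1e-6

for moon, slant_range in [("Io", 2.7e6), ("Europa", 3.0e6),
                          ("Ganymede", 3.5e6), ("Callisto", 4.2e6)]:
    print(moon, round(pixel_scale_km(slant_range)), "km/pixel")  # 13, 15, 17, 21
```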
Callisto, 4,820 kilometers (2,995 miles) in diameter, was imaged at 03:50 Universal Time on February 28 from a range of 4.2 million kilometers (2.6 million miles). The original image scale was 21 kilometers per pixel, and the image is centered at Callisto coordinates 4 degrees south, 356 degrees west. Scientists are using the infrared spectra New Horizons gathered of Callisto's ancient, cratered surface to calibrate spectral analysis techniques that will help them to understand the surfaces of Pluto and its moon Charon when New Horizons passes them in 2015.
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographic background, we establish a 3D landscape model from a DEM and DOM produced with digital photogrammetry, which uses aerial image data to generate "4D" products (DOM: Digital Orthophoto Map; DEM: Digital Elevation Model; DLG: Digital Line Graphic; DRG: Digital Raster Graphic). For buildings and other man-made features of interest to users, we reconstruct real 3D features using digital close-range photogrammetry through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, and image matching. Finally, we combine the 3D background with locally measured real images of these large geographic data, realizing the integration of measurable real images and the 4D products. The article discusses the complete workflow and technology, achieving 3D reconstruction and the integration of the large-scale 3D landscape with the metric building models.
Seed viability detection using computerized false-color radiographic image enhancement
NASA Technical Reports Server (NTRS)
Vozzo, J. A.; Marko, Michael
1994-01-01
Seed radiographs are divided into density zones which are related to seed germination. The seeds which germinate have densities corresponding to false-color red. In turn, a seed sorter may be designed which rejects those seeds not having sufficient red to activate a gate along a moving belt carrying the seed source. This results in separating only seeds with the preselected densities representing the biological viability leading to germination. These selected seeds command a higher market value. Actual false-coloring is not required for a computer to distinguish the significant gray-zone range; this range can be predetermined and screened without the necessity of red imaging. Applying false-color enhancement is a means of emphasizing differences in densities of gray within any subject from photographic, radiographic, or video imaging. Within the 0-255 range of gray levels, colors can be assigned to any single level or group of gray levels. Densitometric values then become easily recognized colors which relate to the image density. Choosing a color to identify any given density allows separation by morphology or composition (form or function). Additionally, the relative areas of each color are readily available for determining the distribution of that density by comparison with other densities within the image.
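The described sorter logic, accepting a seed when enough of its area falls inside a preselected gray-level band, can be sketched directly; the band limits and area fraction below are hypothetical:

```python
import numpy as np

def accept_seed(gray_image, lo=140, hi=190, min_fraction=0.2):
    """Accept a seed if enough of its area lies in the gray-level band
    associated with germination (band and fraction are hypothetical)."""
    in_band = (gray_image >= lo) & (gray_image <= hi)
    return in_band.mean() >= min_fraction

seed = np.full((8, 8), 100)
seed[2:6, 2:6] = 160                 # dense core inside the band
print(accept_seed(seed))             # True: 16/64 = 25% of pixels in band
```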
Fusion of radar and ultrasound sensors for concealed weapons detection
NASA Astrophysics Data System (ADS)
Felber, Franklin S.; Davis, Herbert T., III; Mallon, Charles E.; Wild, Norbert C.
1996-06-01
An integrated radar and ultrasound sensor, capable of remotely detecting and imaging concealed weapons, is being developed. A modified frequency-agile, mine-detection radar is intended to specify with high probability of detection at ranges of 1 to 10 m which individuals in a moving crowd may be concealing metallic or nonmetallic weapons. Within about 1 to 5 m, the active ultrasound sensor is intended to enable a user to identify a concealed weapon on a moving person with low false-detection rate, achieved through a real-time centimeter-resolution image of the weapon. The goal for sensor fusion is to have the radar acquire concealed weapons at long ranges and seamlessly hand over tracking data to the ultrasound sensor for high-resolution imaging on a video monitor. We have demonstrated centimeter-resolution ultrasound images of metallic and non-metallic weapons concealed on a human at ranges over 1 m. Processing of the ultrasound images includes filters for noise, frequency, brightness, and contrast. A frequency-agile radar has been developed by JAYCOR under the U.S. Army Advanced Mine Detection Radar Program. The signature of an armed person, detected by this radar, differs appreciably from that of the same person unarmed.
ERIC Educational Resources Information Center
Lebowitz Elkoubi, Allison
2009-01-01
Research on body image and body image disturbance has met with great debate and inconsistency regarding definition, conceptualization, and measurement. The fundamental understanding of body image ranges from being a perceptual or visual concept to actually representing attitudes or judgments individuals hold regarding their bodies. The present…
36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2014 CFR
2014-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2012 CFR
2012-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2011 CFR
2011-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
36 CFR § 1237.18 - What are the environmental standards for audiovisual records storage?
Code of Federal Regulations, 2013 CFR
2013-07-01
... ISO 18920 (incorporated by reference, see § 1237.3). (2) Color images and acetate-based media. Keep in... color images and the deterioration of acetate-based media. (b) Digital images on magnetic tape. For digital images stored on magnetic tape, keep in an area maintained at a constant temperature range of 62...
FITS Liberator: Image processing software
NASA Astrophysics Data System (ADS)
Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David
2012-06-01
The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. The software can create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton Telescope, and Cassini-Huygens or Mars Reconnaissance Orbiter.
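One common scaling used when turning raw FITS data into display images is an arcsinh stretch, which compresses the bright end while preserving faint structure. A generic numpy sketch of that kind of stretch, not FITS Liberator's actual implementation:

```python
import numpy as np

def asinh_stretch(data, beta=10.0):
    """Map raw linear pixel values to [0, 1] display values with an arcsinh
    stretch: near-linear for faint pixels, logarithmic for bright ones."""
    d = data - np.nanmin(data)
    d = d / np.nanmax(d)                     # normalize to [0, 1]
    return np.arcsinh(beta * d) / np.arcsinh(beta)

raw = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])   # high-dynamic-range pixels
print(np.round(asinh_stretch(raw), 2))
```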
Matrix-Assisted Laser Desorption Ionization Imaging Mass Spectrometry: In Situ Molecular Mapping
Angel, Peggi M.; Caprioli, Richard M.
2013-01-01
Matrix-assisted laser desorption ionization imaging mass spectrometry (IMS) is a relatively new imaging modality that allows mapping of a wide range of biomolecules within a thin tissue section. The technology uses a laser beam to directly desorb and ionize molecules from discrete locations on the tissue that are subsequently recorded in a mass spectrometer. IMS is distinguished by the ability to directly measure molecules in situ ranging from small metabolites to proteins, reporting hundreds to thousands of expression patterns from a single imaging experiment. This article reviews recent advances in IMS technology, applications, and experimental strategies that allow it to significantly aid in the discovery and understanding of molecular processes in biological and clinical samples. PMID:23259809
Subatomic Features on the Silicon (111)-(7x7) Surface Observed by Atomic Force Microscopy.
Giessibl; Hembacher; Bielefeldt; Mannhart
2000-07-21
The atomic force microscope images surfaces by sensing the forces between a sharp tip and a sample. If the tip-sample interaction is dominated by short-range forces due to the formation of covalent bonds, the image of an individual atom should reflect the angular symmetry of the interaction. Here, we report on a distinct substructure in the images of individual adatoms on silicon (111)-(7x7): two crescents with a spherical envelope. The crescents are interpreted as images of two atomic orbitals of the front atom of the tip. The key to observing these subatomic features is a force-detection scheme with superior noise performance and enhanced sensitivity to short-range forces.
Superharmonic imaging with chirp coded excitation: filtering spectrally overlapped harmonics.
Harput, Sevan; McLaughlan, James; Cowell, David M J; Freear, Steven
2014-11-01
Superharmonic imaging improves the spatial resolution by using the higher order harmonics generated in tissue. The superharmonic component is formed by combining the third, fourth, and fifth harmonics, which have low energy content and therefore poor SNR. This study uses coded excitation to increase the excitation energy. The SNR improvement is achieved on the receiver side by performing pulse compression with harmonic matched filters. The use of coded signals also introduces new filtering capabilities that are not possible with pulsed excitation. This is especially important when using wideband signals. For narrowband signals, the spectral boundaries of the harmonics are clearly separated and thus easy to filter; however, the available imaging bandwidth is underused. Wideband excitation is preferable for harmonic imaging applications to preserve axial resolution, but it generates spectrally overlapping harmonics that cannot be filtered in the time or frequency domain. After pulse compression, this overlap increases the range side lobes, which appear as imaging artifacts and reduce the B-mode image quality. In this study, the isolation of higher order harmonics was achieved in another domain by using the fan chirp transform (FChT). To show the effect of excitation bandwidth in superharmonic imaging, measurements were performed using linear frequency modulated chirp excitation with fractional bandwidths varying from 10% to 50%. Superharmonic imaging was performed on a wire phantom using wideband chirp excitation. Results are presented with and without the FChT filtering technique, comparing spatial resolution and side lobe levels. Wideband excitation signals achieved better resolution as expected; however, range side lobes as high as -23 dB were observed for the superharmonic component of chirp excitation with 50% fractional bandwidth.
The proposed filtering technique achieved >50 dB range side lobe suppression and improved the image quality without affecting the axial resolution.
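The chirp excitation and matched-filter pulse compression described above can be sketched as follows. All waveform parameters (center frequency, bandwidth, duration, sampling rate) are illustrative choices, not values taken from the study.

```python
# Sketch: linear frequency modulated (LFM) chirp excitation followed by
# matched-filter pulse compression on the receive side.
import numpy as np

fs = 100e6          # sampling rate, Hz (illustrative)
f0 = 5e6            # chirp center frequency, Hz (illustrative)
frac_bw = 0.5       # 50% fractional bandwidth
T = 10e-6           # chirp duration, s

t = np.arange(0, T, 1 / fs)
f_start = f0 * (1 - frac_bw / 2)
k = (f0 * frac_bw) / T                    # chirp rate, Hz/s
chirp = np.sin(2 * np.pi * (f_start * t + 0.5 * k * t ** 2))

# Simulated echo: the chirp delayed by 20 us inside a noisy record.
delay_samples = int(20e-6 * fs)
echo = np.zeros(8192)
echo[delay_samples:delay_samples + chirp.size] += chirp
echo += 0.1 * np.random.default_rng(0).standard_normal(echo.size)

# Pulse compression: correlate with the time-reversed chirp (matched filter).
compressed = np.convolve(echo, chirp[::-1], mode="same")
peak = int(np.argmax(np.abs(compressed)))  # compressed peak marks the echo
```

The matched filter concentrates the distributed chirp energy into a narrow peak, which is what makes the SNR gain of coded excitation possible; the range side lobes discussed in the abstract are the residual energy around that peak.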
NASA Astrophysics Data System (ADS)
Yang, Yujie; Dong, Di; Shi, Liangliang; Wang, Jun; Yang, Xin; Tian, Jie
2015-03-01
Optical projection tomography (OPT) is a mesoscopic scale optical imaging technique for specimens between 1 mm and 10 mm. OPT has proven to be immensely useful in a wide variety of biological applications, such as developmental biology and pathology, but its shortcomings in imaging specimens containing widely differing contrast elements are obvious. An exposure long enough to capture weak signals may oversaturate high intensity areas, whereas a relatively short exposure may leave dim regions indistinguishable from the surrounding background. In this paper, we propose an approach to make a trade-off between capturing weak signals and revealing more details for OPT imaging. This approach consists of three steps. Firstly, the specimens are scanned only once through 360 degrees, at an exposure above normal but below overexposure, to acquire the projection data. This reduces photobleaching and the pre-registration computation required by the multiple different exposures of the conventional high dynamic range (HDR) imaging method. Secondly, three virtual channels are produced for each projection image based on the histogram distribution, to simulate the low, normal and high exposure images used in traditional HDR photography. Finally, each virtual channel is normalized to the full gray scale range and the three channels are recombined into one image using weighting coefficients optimized by a standard eigen-decomposition method. After applying our approach to the projection data, the filtered back projection (FBP) algorithm is carried out for 3-dimensional reconstruction. A neonatal wild-type mouse paw was scanned to verify this approach. Results demonstrated the effectiveness of the proposed approach.
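The virtual-channel step above can be sketched roughly as follows. The histogram split points and the fixed recombination weights here are placeholder assumptions; the paper derives its weights with an eigen-decomposition step not reproduced here.

```python
# Sketch: from a single projection image, build three virtual channels that
# mimic low/normal/high exposures, stretch each to the full gray range, and
# recombine with fixed weights (the paper optimizes these weights instead).
import numpy as np

def virtual_channel_hdr(img, weights=(0.25, 0.5, 0.25)):
    img = img.astype(np.float64)
    lo, hi = np.percentile(img, (33, 66))   # histogram-based split (assumed)
    channels = [
        np.clip(img, img.min(), lo),        # emphasizes dark detail
        np.clip(img, lo, hi),               # mid-tone ("normal exposure")
        np.clip(img, hi, img.max()),        # emphasizes bright detail
    ]
    out = np.zeros_like(img)
    for w, ch in zip(weights, channels):
        span = ch.max() - ch.min()
        if span > 0:
            out += w * (ch - ch.min()) / span   # normalize, then blend
    return out

rng = np.random.default_rng(1)
proj = rng.uniform(0, 4095, size=(64, 64))      # synthetic 12-bit projection
fused = virtual_channel_hdr(proj)
```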
Cassini/VIMS hyperspectral observations of the HUYGENS landing site on Titan
Rodriguez, S.; Le Mouelic, S.; Sotin, Christophe; Clenet, H.; Clark, R.N.; Buratti, B.; Brown, R.H.; McCord, T.B.; Nicholson, P.D.; Baines, K.H.
2006-01-01
Titan is one of the primary scientific objectives of the NASA-ESA-ASI Cassini-Huygens mission. Scattering by haze particles in Titan's atmosphere and numerous methane absorptions dramatically veil Titan's surface in the visible range, though it can be studied more easily in some narrow infrared windows. The Visual and Infrared Mapping Spectrometer (VIMS) instrument onboard the Cassini spacecraft successfully imaged its surface in the atmospheric windows, taking hyperspectral images in the range 0.4-5.2 µm. On 26 October (TA flyby) and 13 December 2004 (TB flyby), the Cassini-Huygens mission flew over Titan at an altitude lower than 1200 km at closest approach. We report here on the analysis of VIMS images of the Huygens landing site acquired at TA and TB, with a spatial resolution ranging from 16 to 14.4 km/pixel. The pure atmospheric backscattering component is corrected by using both an empirical method and a first-order theoretical model. Both approaches provide consistent results. After the removal of scattering, ratio images reveal subtle surface heterogeneities. A particularly high-contrast structure appears in ratio images involving the 1.59 and 2.03 µm images north of the Huygens landing site. Although pure water ice cannot be the only component exposed at Titan's surface, this area is consistent with a local enrichment in exposed water ice and seems to be consistent with interpretations of DISR/Huygens images and spectra. The images also show a morphological structure that can be interpreted as a 150 km diameter impact crater with a central peak. © 2006 Elsevier Ltd. All rights reserved.
TU-AB-207-01: Introduction to Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sechopoulos, I.
2015-06-15
Digital Tomosynthesis (DT) is becoming increasingly common in breast imaging and many other applications. DT is a form of computed tomography in which a limited set of projection images are acquired over a small angular range and reconstructed into a tomographic data set. The angular range and number of projections are determined both by the imaging task and the equipment manufacturer. For example, in breast imaging between 9 and 25 projections are acquired over a range of 15° to 60°. It is equally valid to treat DT as the digital analog of classical tomography - for example, linear tomography. In fact, the name “tomosynthesis” is shorthand for “synthetic tomography”. DT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DT systems are a hybrid between CT and classical tomographic methods. This lecture will consist of three presentations that will provide a complete overview of DT, including a review of the fundamentals of DT, a discussion of testing methods for DT systems, and a description of the clinical applications of DT. While digital breast tomosynthesis will be emphasized, analogies will be drawn to body imaging to illustrate and compare tomosynthesis methods. Learning Objectives: To understand the fundamental principles behind tomosynthesis, including the determinants of image quality and dose. To learn how to test the performance of tomosynthesis imaging systems. To appreciate the uses of tomosynthesis in the clinic and the future applications of tomosynthesis.
TU-AB-207-03: Tomosynthesis: Clinical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maidment, A.
2015-06-15
TU-AB-207-00: Digital Tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2015-06-15
TU-AB-207-02: Testing of Body and Breast Tomosynthesis Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A.
2015-06-15
Image quality associated with the use of an MR-compatible incubator in neonatal neuroimaging.
O'Regan, K; Filan, P; Pandit, N; Maher, M; Fanning, N
2012-04-01
MRI in the neonate poses significant challenges associated with patient transport and monitoring, and the potential for diminished image quality owing to patient motion. The objective of this study was to evaluate the usefulness of a dedicated MR-compatible incubator with integrated radiofrequency coils in improving image quality of MRI studies of the brain acquired in term and preterm neonates using standard MRI equipment. Subjective and objective analyses of image quality of neonatal brain MR examinations were performed before and after the introduction of an MR-compatible incubator. For all studies, the signal-to-noise ratio (SNR) was calculated, image quality was graded (1-3) and each was assessed for image artefact (e.g. motion). Student's t-test and the Mann-Whitney U-test were used to compare mean SNR values. 39 patients were included [mean gestational age 39 weeks (range 30-42 weeks); mean postnatal age 13 days (range 1-56 days); mean weight 3.5 kg (range 1.4-4.5 kg)]. Following the introduction of the MR-compatible incubator, diagnostic quality scans increased from 50 to 89% and motion artefact decreased from 73 to 44% of studies. SNR did not increase initially, but, when using MR sequences and parameters specifically tailored for neonatal brain imaging, SNR increased from 70 to 213 (p=0.001). Use of an MR-compatible incubator in neonatal neuroimaging provides a safe environment for MRI of the neonate and also facilitates patient monitoring and transport. When specifically tailored MR protocols are used, this results in improved image quality.
Endoscopic laser range scanner for minimally invasive, image guided kidney surgery
NASA Astrophysics Data System (ADS)
Friets, Eric; Bieszczad, Jerry; Kynor, David; Norris, James; Davis, Brynmor; Allen, Lindsay; Chambers, Robert; Wolf, Jacob; Glisson, Courtenay; Herrell, S. Duke; Galloway, Robert L.
2013-03-01
Image guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes implementation of an endoscopic laser range scanner (eLRS) system for accurate, intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates is determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds. Lower resolution scans of 27,500 points are acquired in one second. Measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 +/- 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images, onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.
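The geometric principle behind the eLRS range measurement, triangulation between the steered laser beam and the camera's line of sight, can be sketched as follows. The function and numbers are illustrative, not the instrument's actual calibration.

```python
# Sketch: single-point laser triangulation. A point lit by the laser is seen
# by the camera; with a known baseline between laser pivot and camera center,
# the range follows from the law of sines in the laser-camera-target triangle.
import math

def triangulate_range(baseline_mm, laser_angle_rad, camera_angle_rad):
    """Distance from the camera to the laser spot.

    baseline_mm      -- separation between laser pivot and camera (assumed known)
    laser_angle_rad  -- laser beam angle measured from the baseline
    camera_angle_rad -- viewing angle of the lit pixel from the baseline
    """
    apex = math.pi - laser_angle_rad - camera_angle_rad  # angle at the target
    # Law of sines: range / sin(laser_angle) = baseline / sin(apex)
    return baseline_mm * math.sin(laser_angle_rad) / math.sin(apex)

# Example: 20 mm baseline with both angles at 60 degrees gives an equilateral
# triangle, so the recovered range equals the baseline.
r = triangulate_range(20.0, math.radians(60), math.radians(60))
```

Sweeping the fan beam and repeating this computation per illuminated pixel is what yields the dense surface point cloud described above.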
Hyper-spectral imager of the visible band for lunar observations
NASA Astrophysics Data System (ADS)
Lim, Y.-M.; Choi, Y.-J.; Jo, Y.-S.; Lim, T.-H.; Ham, J.; Min, K. W.; Choi, Y.-W.
2013-06-01
A prototype hyper-spectral imager in the visible spectral band was developed for the planned Korean lunar missions in the 2020s. The instrument is based on simple refractive optics that adopted a linear variable filter and an interline charge-coupled device. This prototype imager is capable of mapping the lunar surface at wavelengths ranging from 450 to 900 nm with a spectral resolution of ~8 nm and selectable channels ranging from 5 to 252. The anticipated spatial resolution is 17.2 m from an altitude of 100 km with a swath width of 21 km.
2015-10-23
This image of Kerberos was created by combining four individual Long Range Reconnaissance Imager (LORRI) pictures taken on July 14, 2015, approximately seven hours before NASA's New Horizons' closest approach to Pluto, at a range of 245,600 miles (396,100 km) from Kerberos. The image was deconvolved to recover the highest possible spatial resolution and oversampled by a factor of eight to reduce pixelation effects. Kerberos appears to have a double-lobed shape, approximately 7.4 miles (12 kilometers) across in its long dimension and 2.8 miles (4.5 kilometers) in its shortest dimension. http://photojournal.jpl.nasa.gov/catalog/PIA20034
Multi exposure image fusion algorithm based on YCbCr space
NASA Astrophysics Data System (ADS)
Yang, T. T.; Fang, P. Y.
2018-05-01
To solve the problem that scene details and visual effects are difficult to optimize jointly in high dynamic range image synthesis, we propose a multi exposure image fusion algorithm that processes low dynamic range images in YCbCr space and applies weighted blending separately to the luminance and chrominance components. The experimental results show that the method retains the color of the fused image while balancing the details of the bright and dark regions of the high dynamic range image.
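A minimal sketch of this kind of fusion follows, assuming BT.601 RGB-to-YCbCr conversion and a generic Gaussian "well-exposedness" weight as a stand-in for the paper's actual weighting scheme.

```python
# Sketch: multi-exposure fusion in YCbCr space. Each RGB exposure is converted
# to YCbCr; per-pixel weights favor pixels whose luminance is near mid-gray;
# luma and chroma are then blended with those normalized weights.
import numpy as np

def rgb_to_ycbcr(rgb):  # rgb in [0, 1]; BT.601 coefficients
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 0.5 + (rgb[..., 2] - y) * 0.564
    cr = 0.5 + (rgb[..., 0] - y) * 0.713
    return np.stack([y, cb, cr], axis=-1)

def fuse_exposures(exposures):
    ycc = [rgb_to_ycbcr(e) for e in exposures]
    # Gaussian "well-exposedness": weight luminance near 0.5 most heavily.
    weights = [np.exp(-((im[..., 0] - 0.5) ** 2) / (2 * 0.2 ** 2)) for im in ycc]
    total = np.sum(weights, axis=0) + 1e-12        # avoid divide-by-zero
    fused = np.zeros_like(ycc[0])
    for w, im in zip(weights, ycc):
        fused += (w / total)[..., None] * im       # blend all 3 components
    return fused                                   # fused YCbCr image

rng = np.random.default_rng(2)
under = rng.uniform(0.0, 0.3, size=(8, 8, 3))      # synthetic dark exposure
over  = rng.uniform(0.7, 1.0, size=(8, 8, 3))      # synthetic bright exposure
out = fuse_exposures([under, over])
```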
Adaptive Optics For Imaging Bright Objects Next To Dim Ones
NASA Technical Reports Server (NTRS)
Shao, Michael; Yu, Jeffrey W.; Malbet, Fabien
1996-01-01
Adaptive optics used in imaging optical systems, according to proposal, to enhance high-dynamic-range images (images of bright objects next to dim objects). Designed to alter wavefronts to correct for effects of scattering of light from small bumps on imaging optics. Original intended application of concept in advanced camera installed on Hubble Space Telescope for imaging of such phenomena as large planets near stars other than Sun. Also applicable to other high-quality telescopes and cameras.
Ultrasound Imaging System Video
NASA Technical Reports Server (NTRS)
2002-01-01
In this video, astronaut Peggy Whitson uses the Human Research Facility (HRF) Ultrasound Imaging System in the Destiny Laboratory of the International Space Station (ISS) to image her own heart. The Ultrasound Imaging System provides three-dimensional imaging of the heart and other organs, muscles, and blood vessels. It is capable of high resolution imaging in a wide range of applications, both research and diagnostic, such as Echocardiography (ultrasound of the heart), abdominal, vascular, gynecological, muscle, tendon, and transcranial ultrasound.
A-law/Mu-law Dynamic Range Compression Deconvolution (Preprint)
2008-02-04
… noise filtering via the spectrum proportionality filter, and second the signal deblurring via the inverse filter. … the joint image of the motion impulse response and the noisy blurred image with signal-to-noise ratio 5 … the gray level recovered image using the A-law …
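The A-law/µ-law companding named in the title is the standard logarithmic dynamic range compression of ITU-T G.711. A continuous-form sketch of the µ-law compressor and its inverse (expander), with the conventional µ = 255:

```python
# Sketch: continuous mu-law dynamic range compression and expansion.
# Small-amplitude signals are boosted logarithmically; the expander inverts
# the mapping exactly.
import math

MU = 255.0

def mu_law_compress(x):
    """Map x in [-1, 1] to [-1, 1] with logarithmic compression."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.01
y = mu_law_compress(x)       # small signal gets boosted toward mid-range
x_back = mu_law_expand(y)    # round-trips back to x
```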
Retinex Image Processing: Improved Fidelity To Direct Visual Observation
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.
1996-01-01
Recorded color images differ from direct human viewing by the lack of dynamic range compression and color constancy. Research is summarized which develops the center/surround retinex concept originated by Edwin Land through a single scale design to a multi-scale design with color restoration (MSRCR). The MSRCR synthesizes dynamic range compression, color constancy, and color rendition and, thereby, approaches fidelity to direct observation.
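The center/surround retinex that MSRCR extends to multiple scales can be sketched at a single scale as follows: the output is the log of each pixel over its Gaussian-blurred surround. The separable blur implementation and the scale (sigma) are illustrative choices.

```python
# Sketch: single-scale center/surround retinex. The "surround" is a Gaussian
# blur of the image; the retinex output is center minus surround in log space,
# which provides dynamic range compression and approximate illumination
# invariance.
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()                              # normalized 1-D kernel

def single_scale_retinex(img, sigma=3.0):
    img = img.astype(np.float64) + 1.0              # offset to avoid log(0)
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    # Separable Gaussian blur: filter rows, then columns.
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return np.log(img) - np.log(blur)               # center/surround log ratio

rng = np.random.default_rng(3)
frame = rng.uniform(0, 255, size=(32, 32))          # synthetic gray image
r_out = single_scale_retinex(frame)
```

MSRCR averages several such outputs at different sigmas and adds a color restoration term per channel; that part is omitted here.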
Linear dynamic range enhancement in a CMOS imager
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor)
2008-01-01
A CMOS imager with increased linear dynamic range but without degradation in noise, responsivity, linearity, fixed-pattern noise, or photometric calibration comprises a linear calibrated dual gain pixel in which the gain is reduced after a pre-defined threshold level by switching in an additional capacitance. The pixel may include a novel on-pixel latch circuit that is used to switch in the additional capacitance.
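The dual-gain behavior described above can be illustrated with a simple piecewise-linear response model: high conversion gain below the threshold, reduced slope once the extra capacitance is switched in. All numbers here are made up for illustration and are not from the patent.

```python
# Illustrative model of a dual-gain pixel transfer curve: below the threshold
# the pixel integrates on a small capacitance (high conversion gain); above
# it, added capacitance reduces the slope, extending the linear range before
# saturation. The response is continuous at the switching point.
def dual_gain_response(signal_e, threshold_e=10_000, c_ratio=8.0, full_scale=1.0):
    """Output (arbitrary units) versus collected signal in electrons."""
    high_gain = full_scale * 0.5 / threshold_e   # gain below the knee (assumed)
    low_gain = high_gain / c_ratio               # reduced gain after the switch
    if signal_e <= threshold_e:
        return signal_e * high_gain
    return threshold_e * high_gain + (signal_e - threshold_e) * low_gain
```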
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarroll, R; Rubinstein, A; Kingsley, C
Purpose: New small-animal irradiators include extremely precise IGRT capabilities. However, mouse immobilization and localization remains a challenge. In particular, unlike week-to-week translational displacements, rotational changes in positioning are not easily corrected for in subject setup. Using two methods of setup, we aim to quantify week-to-week rotational variation in mice for the purpose of IGRT planning in small animal studies. Methods: Ten mice were imaged weekly using breath-hold CBCT (X-RAD 225 Cx), with the mouse positioned in a half-pipe support, providing 40 scans. A second group of two mice were positioned in a 3D printed immobilization device, which was created using a CT from a similarly shaped mouse, providing 10 scans. For each mouse, the first image was taken to be the reference image. Subsequent CT images were then rigidly registered, based on bony anatomy. Rotations in the axial (roll), sagittal (pitch), and coronal (yaw) planes were recorded and used to quantify variation in angular setup. Results: For the mice imaged in the half pipe, the average magnitude of roll was found to be 5.4±4.6° (range: −12.9:18.86°), of pitch 1.6±1.3° (range: −1.4:4.7°), and of yaw 1.9±1.5° (range: −5.4:1.1°). For the mice imaged in the printed setup, the average magnitude of roll was found to be 0.64±0.6° (range: −2.1:1.0°), of pitch 0.6±0.4° (range: 0.0:1.3°), and of yaw 0.2±0.1° (range: 0.0:0.4°). The printed setup reduced roll, pitch, and yaw by 88, 62, and 90 percent, respectively. Conclusion: For the typical setup routine, roll in mouse position is the dominant source of rotational variation. However, when a printed device was used, drastic improvements in mouse immobilization were seen. This work provides a promising foundation for mouse immobilization, required for full scale small animal IGRT. Currently, we are making improvements to allow the use of a similar system for MR, PET, and bioluminescence.
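The percent reductions quoted above can be checked directly from the mean magnitudes in the abstract. Note that the yaw reduction computed from the rounded means comes out near 89.5%, consistent with the quoted 90% given rounding of the inputs.

```python
# Quick check of the reported reductions in rotational setup variation,
# computed from the mean magnitudes quoted in the abstract (degrees).
half_pipe = {"roll": 5.4, "pitch": 1.6, "yaw": 1.9}
printed   = {"roll": 0.64, "pitch": 0.6, "yaw": 0.2}

reduction = {
    axis: round(100 * (1 - printed[axis] / half_pipe[axis]), 1)
    for axis in half_pipe
}
# roll ~88.1%, pitch 62.5%, yaw ~89.5% reduction
```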
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, S; Lo, P; Hoffman, J
Purpose: To evaluate the robustness of CAD or Quantitative Imaging methods, they should be tested on a variety of cases and under a variety of image acquisition and reconstruction conditions that represent the heterogeneity encountered in clinical practice. The purpose of this work was to develop a fully-automated pipeline for generating CT images that represent a wide range of dose and reconstruction conditions. Methods: The pipeline consists of three main modules: reduced-dose simulation, image reconstruction, and quantitative analysis. The first two modules of the pipeline can be operated in a completely automated fashion, using configuration files and running the modules in a batch queue. The input to the pipeline is raw projection CT data; this data is used to simulate different levels of dose reduction using a previously-published algorithm. Filtered-backprojection reconstructions are then performed using FreeCT-wFBP, a freely-available reconstruction software for helical CT. We also added support for an in-house, model-based iterative reconstruction algorithm using iterative coordinate-descent optimization, which may be run in tandem with the more conventional recon methods. The reduced-dose simulations and image reconstructions are controlled automatically by a single script, and they can be run in parallel on our research cluster. The pipeline was tested on phantom and lung screening datasets from a clinical scanner (Definition AS, Siemens Healthcare). Results: The images generated from our test datasets appeared to represent a realistic range of acquisition and reconstruction conditions that we would expect to find clinically. The time to generate images was approximately 30 minutes per dose/reconstruction combination on a hybrid CPU/GPU architecture.
Conclusion: The automated research pipeline promises to be a useful tool for either training or evaluating performance of quantitative imaging software such as classifiers and CAD algorithms across the range of acquisition and reconstruction parameters present in the clinical environment. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
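The pipeline's control flow can be sketched as a configuration-driven batch loop that pairs each simulated dose level with each reconstruction method. The function names, file name, and config layout below are illustrative stand-ins; the actual pipeline drives a dose-simulation tool, FreeCT-wFBP, and an in-house iterative reconstructor from configuration files on a cluster queue.

```python
# Sketch: enumerate every dose/reconstruction combination for one raw
# projection dataset, as a stand-in for the batch-queued pipeline driver.
from itertools import product

config = {
    "raw_projection": "case001.ptr",               # hypothetical input file
    "dose_fractions": [1.0, 0.5, 0.25, 0.1],       # fraction of original dose
    "recon_methods": ["wfbp", "iterative_cd"],     # FBP and iterative recon
}

def simulate_dose(raw, fraction):       # stand-in for the dose-reduction step
    return f"{raw}@{fraction:g}dose"

def reconstruct(projections, method):   # stand-in for FreeCT-wFBP / MBIR
    return f"{projections}:{method}"

jobs = [
    reconstruct(simulate_dose(config["raw_projection"], f), m)
    for f, m in product(config["dose_fractions"], config["recon_methods"])
]
# 4 dose levels x 2 recon methods -> 8 image volumes per input dataset
```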
Beamlines of the biomedical imaging and therapy facility at the Canadian light source - part 3
NASA Astrophysics Data System (ADS)
Wysokinski, Tomasz W.; Chapman, Dean; Adams, Gregg; Renier, Michel; Suortti, Pekka; Thomlinson, William
2015-03-01
The BioMedical Imaging and Therapy (BMIT) facility provides synchrotron-specific imaging and radiation therapy capabilities [1-4]. We describe here the Insertion Device (ID) beamline 05ID-2 with the beam terminated in the SOE-1 (Secondary Optical Enclosure) experimental hutch. This endstation is designed for imaging and therapy research primarily in animals ranging in size from mice to humans to horses, as well as tissue specimens including plants. Core research programs include human and animal reproduction, cancer imaging and therapy, spinal cord injury and repair, cardiovascular and lung imaging and disease, bone and cartilage growth and deterioration, mammography, developmental biology, gene expression research as well as the introduction of new imaging methods. The source for the ID beamline is a multi-pole superconducting 4.3 T wiggler [5]. The high field gives a critical energy over 20 keV. The high critical energy presents shielding challenges and great care must be taken to assess shielding requirements [6-9]. The optics in the POE-1 and POE-3 hutches [4,10] prepare a monochromatic beam that is 22 cm wide in the last experimental hutch SOE-1. The double crystal bent-Laue or Bragg monochromator, or the single-crystal K-edge subtraction (KES) monochromator provide an energy range appropriate for imaging studies in animals (20-100+ keV). SOE-1 (excluding the basement structure 4 m below the experimental floor) is 6 m wide, 5 m tall and 10 m long with a removable back wall to accommodate installation and removal of the Large Animal Positioning System (LAPS) capable of positioning and manipulating animals as large as a horse [11]. This end-station also includes a unique detector positioner with a vertical travel range of 4.9 m which is required for the KES imaging angle range of +12.3° to -7.3°. The detector positioner also includes moveable shielding integrated with the safety shutters. 
An update on the status of the other two end-stations at BMIT, described in Parts 1 and 2 [3,4] of this article, is included. PACS codes: 07.85.Qe, 07.85.Tt, 87.62.+n, 87.59.-e