NASA Technical Reports Server (NTRS)
Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)
2011-01-01
A standardized acquisition methodology assists operators in accurately replicating high-resolution B-mode ultrasound images obtained over several spaced-apart examinations. The method utilizes a split-screen display in which the arterial ultrasound image from an earlier examination is shown on one side of the screen while a real-time "live" ultrasound image from the current examination is displayed beside it on the opposite side. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator can bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Using this methodology, dynamic material properties of arterial structures, such as intima-media thickness (IMT) and diameter, are measured in a standard region over successive image frames. The echo edge boundaries of each frame in the sequence are determined automatically, using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
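The frame-to-frame edge tracking described above, in which each frame is seeded by the previous frame's edge coordinates, can be sketched as follows. The gradient criterion and the search-window size are illustrative assumptions, not the patent's actual detector:

```python
import numpy as np

def track_edge(frames, init_edge, search=3):
    """Track an echo edge through successive frames.

    For each image column, the edge row found in the previous frame
    seeds a small vertical search window in the current frame; the row
    with the strongest intensity gradient inside that window becomes
    the new edge estimate.
    """
    edge = np.asarray(init_edge, dtype=int).copy()
    edges = []
    for frame in frames:
        # vertical intensity gradient between adjacent rows
        grad = np.abs(np.diff(frame.astype(float), axis=0))
        for col in range(frame.shape[1]):
            lo = max(edge[col] - search, 0)
            hi = min(edge[col] + search + 1, grad.shape[0])
            edge[col] = lo + int(np.argmax(grad[lo:hi, col]))
        edges.append(edge.copy())
    return edges
```

Because each frame's search starts from the previous result, the tracker follows a slowly moving wall without re-detecting the edge from scratch.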
A fast double shutter for CCD-based metrology
NASA Astrophysics Data System (ADS)
Geisler, R.
2017-02-01
Image-based metrology such as Particle Image Velocimetry (PIV) depends on the comparison of two images of an object taken in fast succession. Cameras for these applications provide the so-called `double shutter' mode: one frame is captured with a short exposure time and, in direct succession, a second frame with a long exposure time can be recorded. The difference in the exposure times is typically not a problem, since illumination is provided by a pulsed light source such as a laser and the measurements are performed in a darkened environment to prevent ambient light from accumulating during the long second exposure. However, measurements of self-luminous processes (e.g. plasma, combustion) as well as experiments in ambient light are difficult to perform and require special equipment (external shutters, high-speed image sensors, multi-sensor systems, etc.). Unfortunately, all these methods incorporate different drawbacks such as reduced resolution, degraded image quality, decreased light sensitivity or increased susceptibility to decalibration. In the solution presented here, off-the-shelf CCD sensors are used with a special timing scheme to combine neighbouring pixels in a binning-like way. As a result, two frames of short exposure time can be captured in fast succession. They are stored in the on-chip vertical register in a line-interleaved pattern, read out in the common way and separated again by software. The two resultant frames are completely congruent; they exhibit no insensitive lines or line shifts and thus enable sub-pixel accurate measurements. A third frame can be captured at full resolution, analogous to the conventional double shutter technique. Image-based measurement techniques such as PIV can benefit from this mode when applied in bright environments. The third frame is useful, e.g., for acceleration measurements or for particle tracking applications.
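The software separation of the line-interleaved readout can be sketched minimally as below; the even/odd line assignment is an assumption for illustration, as real sensors define their own interleave pattern:

```python
import numpy as np

def split_interleaved(raw):
    """Separate a line-interleaved CCD readout into the two exposures.

    Even sensor lines are assumed to hold exposure A and odd lines
    exposure B (the actual interleave pattern is sensor-specific).
    """
    frame_a = raw[0::2, :]
    frame_b = raw[1::2, :]
    return frame_a, frame_b
```

Each output frame has half the vertical resolution of the raw readout, consistent with the binning-like pixel combination described above.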
Chen, Hui; Palmer, N; Dayton, M; Carpenter, A; Schneider, M B; Bell, P M; Bradley, D K; Claus, L D; Fang, L; Hilsabeck, T; Hohenberger, M; Jones, O S; Kilkenny, J D; Kimmel, M W; Robertson, G; Rochau, G; Sanchez, M O; Stahoviak, J W; Trotter, D C; Porter, J L
2016-11-01
A novel x-ray imager, which takes time-resolved gated images along a single line-of-sight, has been successfully implemented at the National Ignition Facility (NIF). This Gated Laser Entrance Hole diagnostic, G-LEH, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories to upgrade the existing Static X-ray Imager diagnostic at NIF. The new diagnostic is capable of capturing two laser-entrance-hole images per shot on its 1024 × 448 pixels photo-detector array, with integration times as short as 1.6 ns per frame. Since its implementation on NIF, the G-LEH diagnostic has successfully acquired images from various experimental campaigns, providing critical new information for understanding the hohlraum performance in inertial confinement fusion (ICF) experiments, such as the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross beam energy transfer, and plasma instability growth near the hohlraum wall.
High Contrast Ultrafast Imaging of the Human Heart
Papadacci, Clement; Pernot, Mathieu; Couade, Mathieu; Fink, Mathias; Tanter, Mickael
2014-01-01
Non-invasive ultrafast imaging of the human heart is a major challenge for imaging intrinsic waves such as electromechanical waves or remotely induced shear waves in elastography. In this paper we propose to perform ultrafast imaging of the heart with adapted sector size by using diverging waves emitted from a classical transthoracic cardiac phased array probe. As in ultrafast imaging with plane wave coherent compounding, diverging waves can be summed coherently to obtain high-quality images of the entire heart at high frame rate in a full field of view. To image shear wave propagation at high SNR, the field of view can be adapted by changing the angular aperture of the transmitted wave. Backscattered echoes from successive circular wave acquisitions are coherently summed at every location in the image to improve image quality while maintaining very high frame rates. The transmitted diverging waves, angular apertures and subaperture sizes are tested in simulation, and ultrafast coherent compounding is implemented on a commercial scanner. The improvement in imaging quality is quantified in a phantom and in vivo on the human heart. Imaging shear wave propagation at 2500 frames/s using 5 diverging waves provides a strong increase in the signal-to-noise ratio of the tissue velocity estimates while maintaining a high frame rate. Finally, ultrafast imaging with 1 to 5 diverging waves is used to image the human heart at a frame rate of 900 frames/s over an entire cardiac cycle. Thanks to spatial coherent compounding, a strong improvement in imaging quality is obtained with a small number of transmitted diverging waves and a high frame rate, which allows imaging the propagation of electromechanical and shear waves with good image quality. PMID:24474135
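The coherent-compounding step, summing backscattered echoes from successive diverging-wave transmits at every pixel before envelope detection, can be illustrated on beamformed complex images. This is a schematic of the principle only, not the authors' scanner implementation:

```python
import numpy as np

def compound(images, coherent=True):
    """Compound beamformed images from successive transmits.

    Coherent compounding sums the complex (RF/IQ) images before taking
    the envelope; incoherent compounding averages envelopes instead and
    forfeits the SNR gain of phase-aligned summation.
    """
    stack = np.stack(images)
    if coherent:
        return np.abs(stack.sum(axis=0)) / len(images)
    return np.abs(stack).mean(axis=0)
```

For N transmits with independent noise, coherent summation improves the signal-to-noise ratio roughly as the square root of N, which is the trade-off against frame rate discussed above.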
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2010-01-01
With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video and results in significant distortions of the retinal images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retinal images, and the Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance image quality. Features of special interest in an image can also be selected manually and tracked by KLT; as a demonstration, a point on a cone is selected manually and the cone is tracked from frame to frame. PMID:21258443
Multiport backside-illuminated CCD imagers for high-frame-rate camera applications
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Sauer, Donald J.; Hseuh, Fu-Lung; Shallcross, Frank V.; Taylor, Gordon C.; Meray, Grazyna M.; Tower, John R.; Harrison, Lorna J.; Lawler, William B.
1994-05-01
Two multiport, second-generation CCD imager designs have been fabricated and successfully tested: a 16-port 512 × 512 array and a 32-port 1024 × 1024 array. Both designs are back illuminated, have on-chip correlated double sampling (CDS) and lateral blooming control, and use a split vertical frame transfer architecture with full frame storage. The 512 × 512 device has been operated at rates over 800 frames per second, and the 1024 × 1024 device at rates over 300 frames per second. The major changes incorporated in the second-generation design are: a reduced gate length in the output area for improved high-clock-rate performance, modified on-chip CDS circuitry for reduced noise, and optimized implants that improve blooming control at lower clock amplitude. This paper discusses the imager design improvements and presents measured performance results at high and moderate frame rates. The design and performance of three moderate-frame-rate cameras are also discussed.
Multiplane wave imaging increases signal-to-noise ratio in ultrafast ultrasound imaging.
Tiran, Elodie; Deffieux, Thomas; Correia, Mafalda; Maresca, David; Osmanski, Bruno-Felix; Sieu, Lim-Anna; Bergel, Antoine; Cohen, Ivan; Pernot, Mathieu; Tanter, Mickael
2015-11-07
Ultrafast imaging using plane or diverging waves has recently enabled new ultrasound imaging modes with improved sensitivity and very high frame rates. Some of these new modalities include shear wave elastography, ultrafast Doppler, ultrafast contrast-enhanced imaging and functional ultrasound imaging. Even though ultrafast imaging has already met with clinical success, further increasing its penetration depth and signal-to-noise ratio for dedicated applications would be valuable. Ultrafast imaging relies on the coherent compounding of backscattered echoes resulting from successive tilted plane wave emissions; this produces high-resolution ultrasound images with a trade-off between final frame rate, contrast and resolution. In this work, we introduce multiplane wave imaging, a new method that strongly improves the signal-to-noise ratio of ultrafast images by virtually increasing the emission signal amplitude without compromising the frame rate. This method relies on the successive transmission of multiple plane waves with differently coded amplitudes and emission angles in a single transmit event. Data corresponding to each single plane wave, at increased amplitude, can then be obtained by recombining the received data of successive events with the proper coefficients. The benefits of multiplane wave imaging for B-mode, shear wave elastography and ultrafast Doppler imaging are demonstrated experimentally. Multiplane wave imaging with 4 plane wave emissions yields a 5.8 ± 0.5 dB increase in signal-to-noise ratio and approximately 10 mm of additional penetration in a calibrated ultrasound phantom (0.7 dB MHz⁻¹ cm⁻¹). In shear wave elastography, the same multiplane wave configuration yields a 2.07 ± 0.05-fold reduction of the particle velocity standard deviation and a two-fold reduction of the standard deviation of the shear wave velocity maps.
In functional ultrasound imaging, the mapping of cerebral blood volume results in a 3 to 6 dB increase of the contrast-to-noise ratio in deep structures of the rodent brain.
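The coded recombination idea above can be illustrated with a 4×4 Hadamard code: each transmit event fires all four plane waves with ±1 amplitude signs, and recombining the four received events with the same code recovers each individual wave's response at four-fold amplitude. The sign-only coding here is a simplification of the paper's amplitude coding scheme:

```python
import numpy as np

def hadamard4():
    """4x4 Hadamard matrix; symmetric, with H @ H = 4 * I."""
    h2 = np.array([[1, 1], [1, -1]])
    return np.kron(h2, h2)

def encode(wave_responses):
    """Received data for 4 coded events, each firing all 4 plane waves."""
    return hadamard4() @ wave_responses

def decode(received_events):
    """Recover each individual plane wave's response at 4x amplitude."""
    return hadamard4() @ received_events
```

Because the decoded response carries four times the signal amplitude while the per-event electronic noise adds only incoherently, the recombination boosts SNR without lowering the effective frame rate.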
Notes for Brazil sampling frame evaluation trip
NASA Technical Reports Server (NTRS)
Horvath, R. (Principal Investigator); Hicks, D. R. (Compiler)
1981-01-01
Field notes describing a trip conducted in Brazil are presented. The trip was conducted to evaluate a sampling frame developed from LANDSAT full-frame images by the USDA Economics and Statistics Service, for the eventual purpose of cropland production estimation with LANDSAT by the Foreign Commodity Production Forecasting Project of the AgRISTARS program. Six areas were analyzed on the basis of land use, cropland in corn and soybeans, field size, and soil type. The analysis indicated generally successful use of LANDSAT images for large-area land use stratification.
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
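The per-subband inter/intra decision can be sketched as a residual-energy comparison, which costs only one flag of side information per subband. The specific predictors below (previous-sample intra prediction, previous-frame inter prediction) are simplified stand-ins for the paper's JPEG-LS-based predictors:

```python
import numpy as np

def choose_mode(subband, prev_subband):
    """Pick inter or intra prediction for one wavelet subband.

    Intra residual: horizontal previous-sample prediction;
    inter residual: the same subband in the previous frame.
    The mode with the smaller absolute residual sum wins.
    """
    intra = np.abs(np.diff(subband, axis=1)).sum()
    inter = np.abs(subband - prev_subband).sum()
    return "inter" if inter < intra else "intra"
```

Static regions thus fall back to cheap temporal prediction, while scene changes or motion switch the subband to spatial prediction, matching the adaptation the abstract describes.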
Improved frame-based estimation of head motion in PET brain imaging.
Mukherjee, J M; Lindsay, C; Mukherjee, A; Olivier, P; Shao, L; King, M A; Licho, R
2016-05-01
Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type.
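Interframe motion estimation between short-duration frames can be illustrated with 2D phase correlation for pure translations; this is a simplified stand-in for the authors' 3D multiresolution registration, which also handles rotations:

```python
import numpy as np

def phase_correlate(ref, moved):
    """Estimate the integer-pixel translation of `moved` relative to `ref`.

    The cross-power spectrum keeps only phase, so its inverse FFT is a
    sharp peak at the displacement between the two frames.
    """
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    F /= np.abs(F) + 1e-12                       # phase-only spectrum
    corr = np.abs(np.fft.ifft2(F))
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    dims = np.array(ref.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]  # wrap to signed shifts
    return tuple(int(s) for s in peak)
```

Each 5-s frame would be registered to a reference frame this way, and the estimated shift inverted to compensate the frame before summation.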
NASA Astrophysics Data System (ADS)
Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun
2012-10-01
Estimating depth has long been a major issue in computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. We used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection is an important task for robot navigation. Many feature-matching techniques have been proposed previously; this paper proposes an improved feature matching between successive video frames that uses a neural network methodology to reduce the computation time of matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned distances based on the Kinect depth data, which the robot can use to determine a navigation path and for obstacle detection applications.
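The baseline the paper's neural-network matcher accelerates is exhaustive nearest-neighbour descriptor matching between successive frames. A minimal brute-force version with Lowe's ratio test, shown here for reference (the network itself is not reproduced from the paper):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching between two frames' feature descriptors.

    For each descriptor in frame A, accept the closest descriptor in
    frame B only if it is clearly better than the second closest
    (Lowe's ratio test), which suppresses ambiguous matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

This search is O(N·M) in the two descriptor counts, which is exactly the cost a learned matcher aims to cut.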
2008-01-30
After NASA's MESSENGER spacecraft completed its successful flyby of Mercury, the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS), took these images of the receding planet. This is a frame from an animation.
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
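The delay-path geometry implied by these numbers can be checked with a quick calculation. Assuming the 6 frames are spread evenly over the quoted 9 × 10⁻⁸ s window (an assumption; the patent allows arbitrary spacing within it), each successive channel must add roughly 4.5 m of optical path:

```python
# Worked numbers for the optical delay relay (illustrative, not from the patent text):
C = 2.998e8            # speed of light in vacuum, m/s
total_window = 9e-8    # s, the 6-frame recording window quoted above
n_frames = 6

dt = total_window / n_frames   # inter-frame interval
extra_path = C * dt            # added optical path per successive channel
print(f"{dt*1e9:.0f} ns between frames -> {extra_path:.1f} m extra path per channel")
# prints: 15 ns between frames -> 4.5 m extra path per channel
```

This is why the "no moving parts" design is practical: a few metres of folded optical path replace any mechanical shutter sequencing.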
Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.
Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl
2015-01-01
The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.
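The multi-frame idea, exploiting several consecutive frames instead of a single pair, can be sketched as a temporally weighted combination of per-frame displacement estimates. Both weighting forms below are hypothetical stand-ins for the two schemes the paper compares:

```python
import numpy as np

def weighted_displacement(fields, scheme="gaussian", sigma=1.0):
    """Combine displacement fields estimated from consecutive frames.

    fields: list of (H, W, 2) displacement fields from frames at
    temporal offsets 1..K relative to the current frame. Weights are
    either uniform or decay with temporal distance (Gaussian).
    """
    K = len(fields)
    offsets = np.arange(1, K + 1)
    if scheme == "gaussian":
        w = np.exp(-0.5 * (offsets / sigma) ** 2)
    else:
        w = np.ones(K)
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, fields))
```

Down-weighting temporally distant frames limits the influence of slow nucleus deformation while still averaging out per-pair registration noise.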
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S
2016-01-01
To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images using voxel resampling and added amplitude deviation over 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. All virtually averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Virtual averaging may enable detailed retinal structure studies on images acquired with a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects.
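The virtual-averaging idea, generating several perturbed resamples of one scan and averaging them as if they were repeated frames, can be sketched as below. The jitter model, parameter names and values are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def virtual_average(img, reps=15, jitter=0.5, seed=0):
    """Noise reduction by averaging perturbed resamples of one B-scan.

    Each repetition shifts the sampling grid by a small random offset
    (crudely modelled here with np.roll) and perturbs amplitudes;
    averaging the repetitions mimics hardware frame averaging.
    """
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(img, dtype=float)
    for _ in range(reps):
        shift = rng.integers(-1, 2, size=2)
        resampled = np.roll(img, tuple(shift), axis=(0, 1))
        acc += resampled + rng.normal(0.0, jitter, img.shape)
    return acc / reps
```

Averaging the 15 repetitions reduces uncorrelated amplitude noise by roughly the square root of the repetition count, which is the mechanism behind the SNR gains reported above.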
Improved quality of intrafraction kilovoltage images by triggered readout of unexposed frames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poulsen, Per Rugaard, E-mail: per.poulsen@rm.dk; Jonassen, Johnny; Jensen, Carsten
2015-11-15
Purpose: The gantry-mounted kilovoltage (kV) imager of modern linear accelerators can be used for real-time tumor localization during radiation treatment delivery. However, the kV image quality often suffers from cross-scatter from the megavoltage (MV) treatment beam. This study investigates readout of unexposed kV frames as a means to improve the kV image quality in a series of experiments and a theoretical model of the observed image quality improvements. Methods: A series of fluoroscopic images were acquired of a solid water phantom with an embedded gold marker and an air cavity, with and without simultaneous irradiation of the phantom with a 6 MV beam delivered perpendicular to the kV beam at 300 and 600 monitor units per minute (MU/min). An in-house built device triggered readout of zero, one, or multiple unexposed frames between the kV exposures. The unexposed frames contained part of the MV scatter, consequently reducing the amount of MV scatter accumulated in the exposed frames. The image quality with and without unexposed frame readout was quantified as the contrast-to-noise ratio (CNR) of the gold marker and air cavity for a range of imaging frequencies from 1 to 15 Hz. To gain more insight into the observed CNR changes, the image lag of the kV imager was measured and used as input in a simple model that describes the CNR with unexposed frame readout in terms of the contrast, kV noise, and MV noise measured without readout of unexposed frames. Results: Without readout of unexposed kV frames, the quality of intratreatment kV images decreased dramatically with reduced kV frequencies due to MV scatter. The gold marker was only visible for imaging frequencies ≥3 Hz at 300 MU/min and ≥5 Hz at 600 MU/min. Visibility of the air cavity required even higher imaging frequencies. Readout of multiple unexposed frames ensured visibility of both structures at all imaging frequencies and a CNR that was independent of the kV frame rate.
The image lag was 12.2%, 2.2%, and 0.9% in the first, second, and third frame after an exposure. The CNR model predicted the CNR with triggered image readout with a mean absolute error of 2.0% for the gold marker. Conclusions: A device that triggers readout of unexposed frames during kV fluoroscopy was built and shown to greatly improve the quality of intratreatment kV images. A simple theoretical model successfully described the CNR improvements with the device.
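The CNR figure of merit used throughout this study has a standard definition: the absolute contrast between a structure and its surroundings, normalized by the background noise. A minimal sketch (the ROI selection is up to the user; this is not the authors' full model):

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio of a structure (e.g. the gold marker).

    roi_signal / roi_background: pixel arrays from the structure and
    from a nearby uniform background region of the kV image.
    """
    contrast = abs(roi_signal.mean() - roi_background.mean())
    return contrast / roi_background.std()
```

MV cross-scatter raises the background noise term, which is how it degrades marker visibility at low kV frame rates; discarding the scatter into unexposed frames restores the denominator.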
Feature Tracking for High Speed AFM Imaging of Biopolymers.
Hartman, Brett; Andersson, Sean B
2018-03-31
The scanning speed of atomic force microscopes continues to advance with some current commercial microscopes achieving on the order of one frame per second and at least one reaching 10 frames per second. Despite the success of these instruments, even higher frame rates are needed with scan ranges larger than are currently achievable. Moreover, there is a significant installed base of slower instruments that would benefit from algorithmic approaches to increasing their frame rate without requiring significant hardware modifications. In this paper, we present an experimental demonstration of high speed scanning on an existing, non-high speed instrument, through the use of a feedback-based, feature-tracking algorithm that reduces imaging time by focusing on features of interest to reduce the total imaging area. Experiments on both circular and square gratings, as well as silicon steps and DNA strands show a reduction in imaging time by a factor of 3-12 over raster scanning, depending on the parameters chosen.
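The core of the feature-tracking speedup is restricting each scan to a window centred on the feature's last known position rather than rastering the full range. A sketch of that windowing step, with illustrative sizes (the paper's feedback controller is not reproduced):

```python
import numpy as np

def next_scan_window(prev_center, window=32, full=256):
    """Clamp a feature-centred scan window inside the full scan range.

    prev_center: (x, y) feature position from the previous frame, in
    scan units. Scanning only this window instead of the full frame
    cuts imaging time roughly by (full / window)^2 per frame.
    """
    half = window // 2
    c = np.clip(np.asarray(prev_center), half, full - half)
    lo = tuple(int(v) for v in c - half)
    hi = tuple(int(v) for v in c + half)
    return lo, hi
```

The 3-12x reduction reported above corresponds to tracking elongated features such as DNA strands, where the saved area depends on how much of the frame the feature occupies.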
NASA Astrophysics Data System (ADS)
Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi
2013-02-01
We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system, and a reflective grating. The spectral resolution of the filter is ~3 cm⁻¹. The wavenumber was scanned from 2800 to 3100 cm⁻¹ with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced an 8-kHz resonant scanner and a galvanometer scanner. We were able to acquire SRS images of 500 × 480 pixels at a frame rate of 30.8 frames/s. These images were then processed by principal component analysis followed by a modified algorithm of independent component analysis. This algorithm allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues, such as lipid droplets, cytoplasm, fibrous texture, nuclei, and water-rich regions, were successfully visualized.
High-frame-rate digital radiographic videography
NASA Astrophysics Data System (ADS)
King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott
1994-10-01
High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an x-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9-inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM demonstrated the system response to a high-velocity, high-contrast target. By gating the P-20 phosphor image from the x-ray image convertor with a second image intensifier (II) and using a 100-microsecond-wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to reduce most of the motion blurring. Measurement of the marker velocity was made using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.
Rehmert, Andrea E; Kisley, Michael A
2013-10-01
Older adults have demonstrated an avoidance of negative information, presumably with a goal of greater emotional satisfaction. Understanding whether avoidance of negative information is a voluntary, motivated choice or an involuntary, automatic response will be important to differentiate, as decision making often involves emotional factors. With the use of an emotional framing event-related potential (ERP) paradigm, the present study investigated whether older adults could alter neural responses to negative stimuli through verbal reframing of evaluative response options. The late positive potential (LPP) response of 50 older adults and 50 younger adults was recorded while participants categorized emotional images in one of two framing conditions: positive ("more or less positive") or negative ("more or less negative"). It was hypothesized that older adults would be able to overcome a presumed tendency to down-regulate neural responding to negative stimuli in the negative framing condition, thus leading to larger LPP wave amplitudes to negative images. A similar effect was predicted for younger adults, but for positively valenced images, such that LPP responses would be increased in the positive framing condition compared with the negative framing condition. Overall, younger adults' LPP wave amplitudes were modulated by framing condition, including a reduction in the negativity bias in the positive frame. Older adults' neural responses were not significantly modulated, even though task-related behavior supported the notion that older adults were able to successfully adopt the negative framing condition.
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low-cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
Results From the New NIF Gated LEH imager
NASA Astrophysics Data System (ADS)
Chen, Hui; Amendt, P.; Barrios, M.; Bradley, D.; Casey, D.; Hinkel, D.; Berzak Hopkins, L.; Kilkenny, J.; Kritcher, A.; Landen, O.; Jones, O.; Ma, T.; Milovich, J.; Michel, P.; Moody, J.; Ralph, J.; Pak, A.; Palmer, N.; Schneider, M.
2016-10-01
A novel ns-gated Laser Entrance Hole (G-LEH) diagnostic has been successfully implemented at the National Ignition Facility (NIF). This diagnostic has acquired images from various experimental campaigns, providing critical information for inertial confinement fusion experiments. The G-LEH diagnostic, which takes time-resolved gated images along a single line of sight, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories into the existing Static X-ray Imager diagnostic at NIF. It is capable of capturing two laser-entrance-hole images per shot on its 1024x448 pixel photo-detector array, with integration times as short as 2 ns per frame. The results presented include the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross-beam energy transfer, and plasma instability growth near the hohlraum wall. This work was performed under the auspices of the U.S. Department of Energy by LLNS, LLC, under Contract No. DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Magri, Alphonso; Krol, Andrzej; Lipson, Edward; Mandel, James; McGraw, Wendy; Lee, Wei; Tillapaugh-Fay, Gwen; Feiglin, David
2009-02-01
This study was undertaken to register 3D parametric breast images derived from Gd-DTPA MR and F-18-FDG PET/CT dynamic image series. Nonlinear curve fitting (Levenberg-Marquardt algorithm) based on realistic two-compartment models was performed voxel-by-voxel separately for MR (Brix) and PET (Patlak). The PET dynamic series consists of 50 frames of 1-minute duration. Each consecutive PET image was nonrigidly registered to the first frame using a finite element method and fiducial skin markers. The 12 post-contrast MR images were nonrigidly registered to the precontrast frame using a free-form deformation (FFD) method. Parametric MR images were registered to parametric PET images via CT using FFD, because the first PET time frame was acquired immediately after the CT image on a PET/CT scanner and is considered registered to the CT image. We conclude that nonrigid registration of PET and MR parametric images using CT data acquired during the PET/CT scan and the FFD method resulted in their improved spatial coregistration. The success of this procedure was limited by a relatively large target registration error, TRE = 15.1+/-7.7 mm, compared to the spatial resolution of PET (6-7 mm), and by swirling image artifacts created in the MR parametric images by the FFD. Further refinement of nonrigid registration of PET and MR parametric images is necessary to enhance visualization and integration of the complex diagnostic information provided by both modalities, which will lead to improved diagnostic performance.
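Of the two compartment models fitted above, the Patlak analysis reduces to a linear regression once the plasma input is known: the slope of C_tissue/C_plasma against the normalized integral of C_plasma estimates the net influx rate Ki. A minimal sketch with synthetic time-activity curves (the variable names and test data are illustrative, not the authors' data):

```python
import numpy as np

def patlak_slope(t, c_tissue, c_plasma):
    """Patlak graphical analysis: regress C_t/C_p against
    (integral of C_p from 0 to t) / C_p; the fitted slope
    estimates the net influx rate Ki (assumes late-frame
    equilibrium, as the graphical method requires)."""
    # trapezoidal cumulative integral of the plasma curve
    integ = np.concatenate(
        ([0.0], np.cumsum((c_plasma[1:] + c_plasma[:-1]) / 2 * np.diff(t))))
    x = integ / c_plasma
    y = c_tissue / c_plasma
    slope, intercept = np.polyfit(x, y, 1)
    return slope
```

In the study this fit is performed voxel-by-voxel over the 50 one-minute PET frames to form a parametric image of Ki.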
Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects
NASA Astrophysics Data System (ADS)
Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.
2013-06-01
High speed imaging has application in a wide area of industry and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system "UPMC Cam," to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubbles under ultrasound are shown to demonstrate the potential application of the system.
Video-rate scanning two-photon excitation fluorescence microscopy and ratio imaging with cameleons.
Fan, G Y; Fujisaki, H; Miyawaki, A; Tsay, R K; Tsien, R Y; Ellisman, M H
1999-01-01
A video-rate (30 frames/s) scanning two-photon excitation microscope has been successfully tested. The microscope, based on a Nikon RCM 8000, incorporates a femtosecond pulsed laser with wavelength tunable from 690 to 1050 nm, prechirper optics for laser pulse-width compression, a resonant galvanometer for video-rate point scanning, and a pair of nonconfocal detectors for fast emission ratioing. An increase in fluorescent emission of 1.75-fold is consistently obtained with the use of the prechirper optics. The nonconfocal detectors provide another 2.25-fold increase in detection efficiency. Ratio imaging and optical sectioning can therefore be performed more efficiently without confocal optics. Faster frame rates, at 60, 120, and 240 frames/s, can be achieved with proportionally reduced scan lines per frame. Useful two-photon images can be acquired at video rate with a laser power as low as 2.7 mW at the specimen with the genetically modified green fluorescent proteins. Preliminary results obtained using this system confirm that the yellow "cameleons" exhibit optical properties similar to those under one-photon excitation conditions. Dynamic two-photon images of cardiac myocytes and ratio images of yellow cameleon-2.1, -3.1, and -3.1nu are also presented.
Vehicle counting system using real-time video processing
NASA Astrophysics Data System (ADS)
Crisóstomo-Romero, Pedro M.
2006-02-01
Transit studies are important for planning a road network with optimal vehicular flow, and a vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image transit in real time. The video camera must be placed at least 6 meters above the street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained with frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
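The pipeline of background subtraction, mathematical morphology, and segmentation described above can be sketched with standard library calls (the thresholds, structuring element, and toy frames are hypothetical; the original system's exact filters are not given in the abstract):

```python
import numpy as np
from scipy import ndimage

def count_vehicles(frame, background, thresh=30, min_pixels=20):
    """Count moving blobs in a street image: background subtraction,
    thresholding, morphological opening to remove noise, then
    connected-component labelling and a size filter."""
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    cleaned = ndimage.binary_opening(diff, structure=np.ones((3, 3)))
    labels, n = ndimage.label(cleaned)
    sizes = np.bincount(labels.ravel())[1:]  # blob sizes; label 0 is background
    return int(np.sum(sizes >= min_pixels))
```

Speed estimates would follow by matching labelled blobs across successive frames and dividing displacement by the frame interval.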
Highway 3D model from image and lidar data
NASA Astrophysics Data System (ADS)
Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan
2014-05-01
We present a new method of highway 3D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.
Detecting fluorescence hot-spots using mosaic maps generated from multimodal endoscope imaging
NASA Astrophysics Data System (ADS)
Yang, Chenying; Soper, Timothy D.; Seibel, Eric J.
2013-03-01
Fluorescence-labeled biomarkers can be detected during endoscopy to guide early cancer biopsies, such as for high-grade dysplasia in Barrett's Esophagus. To enhance intraoperative visualization of the fluorescence hot-spots, a mosaicking technique was developed to create full anatomical maps of the lower esophagus and associated fluorescent hot-spots. The resultant mosaic map contains overlaid reflectance and fluorescence images, and can be used to assist biopsy and document findings. The mosaicking algorithm uses reflectance images to calculate image registration between successive frames, and applies this registration to the simultaneously acquired fluorescence images. During this mosaicking process, the fluorescence signal is enhanced through multi-frame averaging. Preliminary results showed that the technique promises to enhance the detectability of the hot-spots due to the enhanced fluorescence signal.
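The key step of the mosaicking algorithm, estimating registration from the reflectance frames and applying it to the co-acquired fluorescence frames, can be illustrated with a translation-only phase-correlation sketch (an assumption for illustration; the full method likely uses a richer transform than pure translation):

```python
import numpy as np

def register_translation(ref, moving):
    """Estimate the integer (dy, dx) shift such that
    moving == roll(ref, (dy, dx)), via FFT phase correlation."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(moving)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the signed range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def apply_shift(img, dy, dx):
    """Apply the same shift to the simultaneously acquired
    fluorescence frame before multi-frame averaging."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

Averaging the shift-compensated fluorescence frames then raises the hot-spot signal relative to uncorrelated noise.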
Data analysis for GOPEX image frames
NASA Technical Reports Server (NTRS)
Levine, B. M.; Shaik, K. S.; Yan, T.-Y.
1993-01-01
The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and agreement would likely have been better had more laser pulses been received. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to locate pulse positions geometrically with greater certainty.
From image captioning to video summary using deep recurrent networks and unsupervised segmentation
NASA Astrophysics Data System (ADS)
Morosanu, Bogdan-Andrei; Lemnaru, Camelia
2018-04-01
Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
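The divergence-based grouping described above can be sketched concretely: treat each frame's hidden-unit activations as un-normalised log probabilities, softmax them, and cut the sequence wherever the symmetrised KL divergence between successive frames spikes (the activation vectors below are synthetic, and the threshold is an illustrative free parameter):

```python
import numpy as np

def segment_frames(activations, threshold):
    """Unsupervised context segmentation: consecutive frames with low
    divergence share a context; a divergence spike starts a new one.
    Returns a list of (start, end) frame-index pairs, inclusive."""
    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    def sym_kl(p, q):
        # symmetrised Kullback-Leibler divergence
        return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    probs = [softmax(a) for a in activations]
    segments, start = [], 0
    for i in range(1, len(probs)):
        if sym_kl(probs[i - 1], probs[i]) > threshold:
            segments.append((start, i - 1))
            start = i
    segments.append((start, len(probs) - 1))
    return segments
```

A summary would then pick one representative frame per segment and caption it with the pretrained model.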
NASA Astrophysics Data System (ADS)
Baca, Michael J.
1990-09-01
A system to display images generated by the Naval Postgraduate School Infrared Search and Target Designation (IRSTD) system (a modified AN/SAR-8 Advanced Development Model) in near real time was developed using a 33 MHz NIC computer as the central controller. This computer was enhanced with a Data Translation DT2861 Frame Grabber for image processing and an interface board designed and constructed at NPS to provide synchronization between the IRSTD and the Frame Grabber. Images are displayed in false color in a video raster format on a 512 by 480 pixel resolution monitor. Using FORTRAN, programs have been written to acquire, unscramble, expand, and display a 3 deg sector of data. The time line for acquisition, processing, and display has been analyzed, and repetition periods of less than four seconds for successive screen displays have been achieved. This represents a marked improvement over previous methods, which necessitated slower Direct Memory Access transfers of data into the Frame Grabber. Recommendations are made for further improvements to enhance the speed and utility of images produced.
NASA Astrophysics Data System (ADS)
Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.
2007-04-01
AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.
Improved Discrete Approximation of Laplacian of Gaussian
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.
2004-01-01
An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that, without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of processing that must be performed in subsequent correlation operations (e.g., correlations between images in a stereoscopic pair for determining distances, or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
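The LOG-plus-binarisation front end described here can be sketched in a few lines (this illustrates the filtering concept itself, not the paper's reduced-circuitry approximation; sigma is an illustrative parameter):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def binary_log(image, sigma=2.0):
    """Laplacian-of-Gaussian filtering followed by sign binarisation:
    the 1-bit result keeps edge/texture structure while shrinking the
    data handed to the subsequent correlation stage."""
    return (gaussian_laplace(image.astype(float), sigma) > 0).astype(np.uint8)
```

Because only the sign survives, stereo or frame-to-frame correlation can then be done with cheap bitwise operations instead of full-precision arithmetic.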
Image-guided surgery and therapy: current status and future directions
NASA Astrophysics Data System (ADS)
Peters, Terence M.
2001-05-01
Image-guided surgery and therapy is assuming an increasingly important role, particularly considering the current emphasis on minimally-invasive surgical procedures. Volumetric CT and MR images have been used now for some time in conjunction with stereotactic frames, to guide many neurosurgical procedures. With the development of systems that permit surgical instruments to be tracked in space, image-guided surgery now includes the use of frame-less procedures, and the application of the technology has spread beyond neurosurgery to include orthopedic applications and therapy of various soft-tissue organs such as the breast, prostate and heart. Since tracking systems allow image- guided surgery to be undertaken without frames, a great deal of effort has been spent on image-to-image and image-to- patient registration techniques, and upon the means of combining real-time intra-operative images with images acquired pre-operatively. As image-guided surgery systems have become increasingly sophisticated, the greatest challenges to their successful adoption in the operating room of the future relate to the interface between the user and the system. To date, little effort has been expended to ensure that the human factors issues relating to the use of such equipment in the operating room have been adequately addressed. Such systems will only be employed routinely in the OR when they are designed to be intuitive, unobtrusive, and provide simple access to the source of the images.
Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect
NASA Astrophysics Data System (ADS)
Artyukhin, S. G.; Mestetskiy, L. M.
2015-05-01
This paper presents an efficient framework for solving the problem of static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures by their feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes as input a video sequence (a sequence of frames) for marking, puts each frame into correspondence with a gesture from the database, or decides that there is no suitable gesture in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of successive frames marked as the same gesture is grouped into a single static gesture. We propose a method of combined frame segmentation using the depth map and the RGB image. The primary segmentation is based on the depth map; it gives information about the position of the hand and a rough border. Then, based on the color image, the border is refined and the shape of the hand is analyzed. The method of continuous skeletons is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and wrist. The classification feature for a gesture is a description of the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the American Sign Language alphabet. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture base consisting of 2700 frames.
Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl
2015-11-01
Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
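The Kalman-filter backbone of the tracking approach can be sketched as a 2-D constant-velocity filter (a generic textbook parameterisation; the authors' process and measurement noise settings are not given in the abstract):

```python
import numpy as np

class ConstantVelocityKalman:
    """2-D constant-velocity Kalman filter: state [px, py, vx, vy],
    measurement [px, py]. One such filter per tracked particle."""

    def __init__(self, x0, q=1e-2, r=1e-1):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)      # process noise (assumed)
        self.R = r * np.eye(2)      # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]           # predicted position

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]           # corrected position
```

In the two-step association scheme, the predicted positions would supply the local neighbourhoods in which candidate detections are matched before the global multi-frame optimisation.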
Multi-frame X-ray Phase Contrast Imaging (MPCI) for Dynamic Experiments
NASA Astrophysics Data System (ADS)
Iverson, Adam; Carlson, Carl; Sanchez, Nathaniel; Jensen, Brian
2017-06-01
Recent advances in coupling synchrotron X-ray diagnostics to dynamic experiments are providing new information about the response of materials at extremes. For example, propagation based X-ray Phase Contrast Imaging (PCI) which is sensitive to differences in density has been successfully used to study a wide range of phenomena, e.g. jet-formation, compression of additive manufactured (AM) materials, and detonator dynamics. In this talk, we describe the current multi-frame X-ray phase contrast imaging (MPCI) system which allows up to eight frames per experiment, remote optimization, and an improved optical design that increases optical efficiency and accommodates dual-magnification during a dynamic event. Data will be presented that used the dual-magnification feature to obtain multiple images of an exploding foil initiator. In addition, results from static testing will be presented that used a multiple scintillator configuration required to extend the density retrieval to multi-constituent, or heterogeneous systems. The continued development of this diagnostic is fundamentally important to capabilities at the APS including IMPULSE and the Dynamic Compression Sector (DCS), and will benefit future facilities such as MaRIE at Los Alamos National Laboratory.
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion-analysis-based moving object detection from UAV aerial images remains an open problem, largely because proper motion estimation has not been taken into account. Existing approaches do not use motion-based pixel intensity measurements to detect moving objects robustly, and current research mostly relies on either frame differencing or segmentation alone. This research has two aims: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame differencing embedded in the DMM model. The DMM model provides effective search windows, based on the highest pixel intensities, so that SUED segments only the specific regions containing moving objects rather than searching the whole frame. At each stage of the proposed scheme, fusing DMM and SUED extracts moving objects faithfully. Experimental results demonstrate the validity of the proposed methodology.
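The frame-differencing step and a DMM-style search window can be illustrated with a minimal sketch. The windowing heuristic here (bounding box around high-difference pixels) is a simplified stand-in for the paper's DMM model, not its actual formulation:

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh=25):
    """Binary motion mask from the absolute intensity difference of two frames."""
    diff = np.abs(curr.astype(np.int32) - prev.astype(np.int32))
    return (diff > thresh).astype(np.uint8)

def motion_search_window(mask, pad=2):
    """Bounding box around the detected motion, padded by `pad` pixels,
    so segmentation can run on a small window instead of the whole frame."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    y0 = max(int(ys.min()) - pad, 0)
    y1 = min(int(ys.max()) + pad, mask.shape[0] - 1)
    x0 = max(int(xs.min()) - pad, 0)
    x1 = min(int(xs.max()) + pad, mask.shape[1] - 1)
    return (y0, x0, y1, x1)

# Synthetic example: a bright 3x3 object moves one pixel to the right.
prev = np.zeros((32, 32), dtype=np.uint8)
curr = np.zeros((32, 32), dtype=np.uint8)
prev[10:13, 10:13] = 200
curr[10:13, 11:14] = 200
mask = frame_difference_mask(prev, curr)
window = motion_search_window(mask)
```

Only the leading and trailing edges of the moving object differ between frames, so the mask is sparse and the resulting window is much smaller than the frame.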
Practical low-cost visual communication using binary images for deaf sign language.
Manoranjan, M D; Robinson, J A
2000-03-01
Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low bandwidth telephone lines, provide very low temporal resolution of pictures, of the order of less than a frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.
A fuzzy measure approach to motion frame analysis for scene detection. M.S. Thesis - Houston Univ.
NASA Technical Reports Server (NTRS)
Leigh, Albert B.; Pal, Sankar K.
1992-01-01
This paper addresses a solution to the problem of scene estimation of motion video data in the fuzzy set theoretic framework. Using fuzzy image feature extractors, a new algorithm is developed to compute the change of information in each of two successive frames to classify scenes. This classification process of raw input visual data can be used to establish structure for correlation. The algorithm attempts to fulfill the need for nonlinear, frame-accurate access to video data for applications such as video editing and visual document archival/retrieval systems in multimedia environments.
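A minimal sketch of measuring the change of information between two successive frames follows. A plain histogram distance stands in here for the paper's fuzzy image feature extractors; the scene-cut threshold and synthetic frames are illustrative:

```python
import numpy as np

def change_of_information(frame_a, frame_b, bins=16):
    """Normalized histogram difference between two frames, in [0, 1].
    Values near 1 suggest a scene change; values near 0, a continuation."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return 0.5 * np.abs(ha - hb).sum()   # total variation distance

rng = np.random.default_rng(0)
dark = rng.integers(0, 64, size=(64, 64))        # frames from one "scene"
bright = rng.integers(192, 256, size=(64, 64))   # a very different "scene"
same_scene = change_of_information(dark, dark + 1)
cut = change_of_information(dark, bright)
```

Classifying each frame pair by thresholding this score gives the kind of frame-accurate scene boundaries needed for video editing and archival indexing.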
Siamese convolutional networks for tracking the spine motion
NASA Astrophysics Data System (ADS)
Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong
2017-09-01
Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine by digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, our tracker evaluates candidate rotated patches sampled around the target position from the previous frame and outputs a rotated bounding box to locate the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebrae steadily and robustly. Even for images with low contrast and cluttered backgrounds, the presented tracker achieves good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
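The track-by-similarity loop can be sketched with a hand-crafted normalized cross-correlation standing in for the learned Siamese similarity function. Patch rotation is omitted for brevity, and all sizes and search radii below are illustrative:

```python
import numpy as np

def similarity(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches,
    a stand-in for the learned Siamese similarity function."""
    a = patch_a.ravel().astype(float)
    b = patch_b.ravel().astype(float)
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(a @ b) / len(a)   # in [-1, 1]

def track(target, frame, prev_pos, size=8, radius=4):
    """Score candidate windows around the previous target position
    and return the best-matching top-left corner."""
    best, best_pos = -np.inf, prev_pos
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = prev_pos[0] + dy, prev_pos[1] + dx
            if 0 <= y and 0 <= x and \
               y + size <= frame.shape[0] and x + size <= frame.shape[1]:
                s = similarity(target, frame[y:y + size, x:x + size])
                if s > best:
                    best, best_pos = s, (y, x)
    return best_pos

rng = np.random.default_rng(1)
frame0 = rng.normal(size=(64, 64))
target = frame0[20:28, 20:28].copy()          # labeled target in first frame
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))  # scene moved by (2, 3)
pos = track(target, frame1, prev_pos=(20, 20))
```

Because the similarity function is fixed after training, each new frame only requires scoring candidates around the last known position, which is what keeps this family of trackers fast.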
Method for Enhancing a Three Dimensional Image from a Plurality of Frames of Flash LIDAR Data
NASA Technical Reports Server (NTRS)
Bulyshev, Alexander (Inventor); Vanek, Michael D. (Inventor); Amzajerdian, Farzin (Inventor)
2013-01-01
A method for enhancing a three dimensional image from frames of flash LIDAR data includes generating a first distance R(sub i) from a first detector i to a first point on a surface S(sub i). After defining a map with a mesh theta having cells k, a first array S(k), a second array M(k), and a third array D(k) are initialized. The first array corresponds to the surface, the second array corresponds to the elevation map, and the third array D(k) receives an output for the DEM. The surface is projected onto the mesh theta, so that a second distance R(sub k) from a second point on the mesh theta to the detector can be found. From this, a height may be calculated, which permits the generation of a digital elevation map. Also, using sequential frames of flash LIDAR data, vehicle control is possible using an offset between successive frames.
A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.
Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B
2013-09-01
To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system's real-time operation was simulated offline using previously acquired images for 19 IMRT patient deliveries, with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system's sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it to create one intermediate image frame. One hundred such frames were generated and averaged to give the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. To verify the method, 30 pairs of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic image (input image), 26 images were created at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in steps of 0.1; these were visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 out of 25 images, the tumor was successfully detected. In the five control images, no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
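The described pipeline (add Poisson noise, threshold, repeat 100 times, average) can be sketched directly. The synthetic image, lesion contrast, and threshold factor below are illustrative, not the study's data:

```python
import numpy as np

def suprathreshold_sr(image, threshold, n_frames=100, seed=0):
    """Suprathreshold stochastic resonance sketch: draw Poisson noise per
    pixel, binarize at the threshold, and average many such frames."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(image.shape, dtype=float)
    for _ in range(n_frames):
        noisy = rng.poisson(image).astype(float)   # Poisson variate per pixel
        acc += (noisy > threshold).astype(float)   # one intermediate frame
    return acc / n_frames

# Synthetic PET-like slice: faint lesion (mean 12) over background (mean 8).
img = np.full((32, 32), 8.0)
img[12:16, 12:16] = 12.0
out = suprathreshold_sr(img, threshold=1.5 * img.mean())
lesion = out[12:16, 12:16].mean()
background = out[:8, :8].mean()
```

Sub-threshold structure becomes visible in the average because the lesion pixels cross the threshold far more often than background pixels, even though a single noisy frame is nearly binary rubble.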
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
Nondestructive Evaluation of Carbon Fiber Bicycle Frames Using Infrared Thermography
Ibarra-Castanedo, Clemente; Klein, Matthieu; Maldague, Xavier; Sanchez-Beato, Alvaro
2017-01-01
Bicycle frames made of carbon fibre are extremely popular for high-performance cycling due to their stiffness-to-weight ratio, which enables greater power transfer. However, products manufactured using carbon fibre are sensitive to impact damage. Therefore, intelligent nondestructive evaluation is a required step to prevent failures and ensure safe use of the bicycle. This work proposes an inspection method based on active thermography, a proven technique successfully applied to other materials. Different configurations for the inspection are tested, including power and heating time. Moreover, experiments are performed on a real bicycle frame with impact damage generated at different energies. Tests show excellent results, detecting the generated damage during the inspection. When the results are combined with advanced image post-processing methods, the SNR is greatly increased, and the size and localization of the defects are clearly visible in the images. PMID:29156650
NASA Technical Reports Server (NTRS)
Gaskell, R. W.; Synnott, S. P.
1987-01-01
To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.
Computer image processing: Geologic applications
NASA Technical Reports Server (NTRS)
Abrams, M. J.
1978-01-01
Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, and (2) normalization using ground spectral measurements. Of the two, the first proved more successful at removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm could be applied to both frames, and there is no seam where the two images are joined.
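Band ratioing combined with dark-object subtraction can be illustrated on synthetic data. The reflectance, illumination, and haze values below are made up for demonstration; the point is that the ratio cancels illumination variation and retains the spectral (mineralogic) contrast:

```python
import numpy as np

def band_ratio(band_a, band_b, dark_a=0.0, dark_b=0.0, eps=1e-6):
    """Band ratio after dark-object subtraction. Ratios suppress
    illumination/topographic shading and highlight spectral contrast."""
    a = band_a.astype(float) - dark_a
    b = band_b.astype(float) - dark_b
    return a / (b + eps)

# Two synthetic bands: the same surface under varying illumination.
reflectance_a = np.array([[0.2, 0.2], [0.6, 0.6]])  # band A reflectance
reflectance_b = np.array([[0.4, 0.4], [0.3, 0.3]])  # band B reflectance
illum = np.array([[1.0, 0.5], [1.0, 0.5]])          # shading factor per pixel
haze = 0.05                                          # additive path radiance
band_a = reflectance_a * illum + haze
band_b = reflectance_b * illum + haze
ratio = band_ratio(band_a, band_b, dark_a=haze, dark_b=haze)
```

The right-hand column is shaded to half brightness, yet the ratio image is uniform within each material: subtracting the additive haze first is what makes the multiplicative shading cancel cleanly.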
A liquid-crystal-on-silicon color sequential display using frame buffer pixel circuits
NASA Astrophysics Data System (ADS)
Lee, Sangrok
Next generation liquid-crystal-on-silicon (LCOS) high definition (HD) televisions and image projection displays will need to be low-cost and high quality to compete with existing systems based on digital micromirror devices (DMDs), plasma displays, and direct view liquid crystal displays. This thesis presents a novel frame buffer pixel architecture that buffers data for the next image frame while displaying the current frame, offering such a competitive solution. The primary goal of the thesis is to demonstrate the LCOS microdisplay architecture for high quality image projection displays at potentially low cost. The thesis covers four main research areas: new frame buffer pixel circuits to improve LCOS performance, backplane architecture design and testing, liquid crystal modes for the LCOS microdisplay, and system integration and demonstration. The design requirements for the LCOS backplane with a 64 x 32 pixel array are addressed, and the measured electrical characteristics match computer simulation results. Various liquid crystal (LC) modes applicable to LCOS microdisplays and their physical properties are discussed. One- and two-dimensional director simulations are performed for the selected LC modes. Test liquid crystal cells with the selected LC modes are made and their electro-optic effects are characterized. The 64 x 32 LCOS microdisplays fabricated with the best LC mode are optically tested with interface circuitry. The characteristics of the LCOS microdisplays are summarized, concluding with a successful demonstration.
NASA Technical Reports Server (NTRS)
Bourda, Geraldine; Collioud, Arnaud; Charlot, Patrick; Porcas, Richard; Garrington, Simon
2010-01-01
The space astrometry mission Gaia will construct a dense optical QSO-based celestial reference frame. For consistency between optical and radio positions, it will be important to align the Gaia and VLBI frames (International Celestial Reference Frame) with the highest accuracy. In this respect, it is found that only 10% of the ICRF sources (70 sources) are suitable to establish this link, either because most of the ICRF sources are not bright enough at optical wavelengths or because they show extended radio emission which precludes reaching the highest astrometric accuracy. In order to improve the situation, we initiated a multi-step VLBI observational project dedicated to finding additional suitable radio sources for aligning the two frames. The sample consists of about 450 optically-bright radio sources, typically 20 times weaker than the ICRF sources, which have been selected by cross-correlating optical and radio catalogs. The initial observations, aimed at checking whether these sources are detectable with VLBI, and conducted with the European VLBI Network (EVN) in 2007, showed an excellent 90% detection rate. This paper reports on global VLBI observations carried out in March 2008 to image 105 of the 398 previously detected sources. All sources were successfully imaged, revealing compact VLBI structure for about half of them, which is very promising for the future.
Atmospheric Science Data Center
2014-08-01
... between successive frames is not uniform. The flow of the glacier, widening of the rift, and subsequent break-off of the iceberg are ... a gap in image acquisition during Antarctic winter, when the glacier was in continuous darkness. MISR was built and is managed by NASA's ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, Theodore W.; Sultan, Laith R.; Sehgal, Chandra M., E-mail: sehgalc@uphs.upenn.edu
Purpose: To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Methods: Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. Results: FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. Conclusions: FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.
Cary, Theodore W; Reamer, Courtney B; Sultan, Laith R; Mohler, Emile R; Sehgal, Chandra M
2014-02-01
To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.
NASA Astrophysics Data System (ADS)
Jerram, P. A.; Fryer, M.; Pratlong, J.; Pike, A.; Walker, A.; Dierickx, B.; Dupont, B.; Defernez, A.
2017-11-01
CCDs have been used for many years for hyperspectral imaging missions and have been extremely successful. These include the Medium Resolution Imaging Spectrometer (MERIS) [1] on Envisat, the Compact High Resolution Imaging Spectrometer (CHRIS) on Proba, and the Ozone Monitoring Instrument operating in the UV spectral region. ESA is also planning a number of further missions that are likely to use CCD technology (Sentinel 3, 4 and 5). However, CMOS sensors have a number of advantages which mean that they will probably be used for hyperspectral applications in the longer term. There are two main advantages. First, a hyperspectral image consists of spectral lines with large differences in intensity; in a frame-transfer CCD, the faint spectral lines have to be transferred through the part of the imager illuminated by intense lines. This can lead to cross-talk, and while the problem can be reduced by the use of split frame transfer and faster line rates, CMOS sensors do not require a frame transfer and hence inherently do not suffer from it. Second, with a CMOS sensor the intense spectral lines can be read multiple times within a frame, giving a significant increase in dynamic range. We describe the design and initial tests of a CMOS sensor for use in hyperspectral applications. This device has been designed to give as high a dynamic range as possible with minimum cross-talk. The sensor has been manufactured on high-resistivity epitaxial silicon wafers and has been back-thinned, while left relatively thick in order to obtain the maximum quantum efficiency across the entire spectral range.
A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging
NASA Astrophysics Data System (ADS)
Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.
2017-08-01
Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
NASA Astrophysics Data System (ADS)
Takashima, Ichiro; Kajiwara, Riichi; Murano, Kiyo; Iijima, Toshio; Morinaka, Yasuhiro; Komobuchi, Hiroyoshi
2001-04-01
We have designed and built a high-speed CCD imaging system for monitoring neural activity in an exposed animal cortex stained with a voltage-sensitive dye. Two types of custom-made CCD sensors were developed for this system. The type I chip has a resolution of 2664 (H) X 1200 (V) pixels and a wide imaging area of 28.1 X 13.8 mm, while the type II chip has 1776 X 1626 pixels and an active imaging area of 20.4 X 18.7 mm. The CCD arrays were constructed with multiple output amplifiers in order to accelerate the readout rate. The two chips were divided into either 24 (I) or 16 (II) distinct areas that were driven in parallel. The parallel CCD outputs were digitized by 12-bit A/D converters and then stored in the frame memory. The frame memory was constructed with synchronous DRAM modules, which provided a capacity of 128 MB per channel. On-chip and on-memory binning methods were incorporated into the system, e.g., this enabled us to capture 444 X 200 pixel-images for periods of 36 seconds at a rate of 500 frames/second. This system was successfully used to visualize neural activity in the cortices of rats, guinea pigs, and monkeys.
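On-memory binning amounts to summing pixel neighbourhoods in software; a minimal 2x2 example follows (the system's actual binning factors and on-chip hardware path differ, so this is only a conceptual sketch of the trade of resolution for signal and speed):

```python
import numpy as np

def bin2x2(frame):
    """Sum 2x2 pixel neighbourhoods, a software analogue of sensor binning:
    quarters the pixel count while pooling the signal of each neighbourhood."""
    h, w = frame.shape
    assert h % 2 == 0 and w % 2 == 0, "frame dimensions must be even"
    return frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.arange(16, dtype=np.int64).reshape(4, 4)
binned = bin2x2(frame)   # shape (2, 2), each value the sum of a 2x2 block
```

Pooling four pixels into one raises the per-pixel signal (and hence usable frame rate at a given exposure) at the cost of spatial resolution, which is exactly the trade-off the on-chip and on-memory binning modes expose.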
Faurie, Julia; Baudet, Mathilde; Assi, Kondo Claude; Auger, Dominique; Gilbert, Guillaume; Tournoux, Francois; Garcia, Damien
2017-02-01
Recent studies have suggested that intracardiac vortex flow imaging could be of clinical interest for early assessment of diastolic heart function. Doppler vortography has been introduced as a simple color Doppler method to detect and quantify intraventricular vortices. This method locates a vortex core based on the recognition of an antisymmetric pattern in the Doppler velocity field. Because the heart is a fast-moving organ, high frame rates are needed to decipher the whole blood vortex dynamics during diastole. In this paper, we adapted the vortography method to high-frame-rate echocardiography using circular waves. Time-resolved Doppler vortography was first validated in vitro in an ideal forced vortex. We observed a strong correlation between the core vorticity determined by high-frame-rate vortography and the ground-truth vorticity. Vortography was also tested in vivo in ten healthy volunteers using high-frame-rate duplex ultrasonography. The main vortex that forms during left ventricular filling was tracked during two to three successive cardiac cycles, and its core vorticity was determined at a sampling rate of up to 80 duplex images per heartbeat. Three echocardiographic apical views were evaluated. Vortography-derived vorticities were compared with those returned by the 2-D vector flow mapping approach. Comparison with 4-D flow magnetic resonance imaging was also performed in four of the ten volunteers. Strong intermethod agreement was observed when determining the peak vorticity during early filling. It is concluded that high-frame-rate Doppler vortography can accurately investigate diastolic vortex dynamics.
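For an ideal forced (solid-body) vortex like the in-vitro phantom, the core vorticity equals twice the rotation rate. The finite-difference sketch below verifies that identity on a synthetic velocity field; it is not the vortography algorithm itself, which works from the antisymmetric pattern in the Doppler velocity field rather than the full 2-D vector field:

```python
import numpy as np

def core_vorticity(vx, vy, dx=1.0):
    """Vorticity field (curl of the 2-D velocity) via central differences,
    plus the location and value of the strongest vorticity."""
    dvy_dx = np.gradient(vy, dx, axis=1)
    dvx_dy = np.gradient(vx, dx, axis=0)
    w = dvy_dx - dvx_dy
    core = np.unravel_index(np.argmax(np.abs(w)), w.shape)
    return w, core, w[core]

# Solid-body (forced) vortex: v = Omega x r, so vorticity = 2 * Omega.
omega = 3.0
y, x = np.mgrid[-10:11, -10:11].astype(float)
vx, vy = -omega * y, omega * x
w, core, w_core = core_vorticity(vx, vy)
```

Because the field is linear in position, the central differences are exact and the vorticity is uniformly `2 * omega`, the ground truth against which a vortography estimate can be correlated.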
Holographic optical coherence imaging of tumor spheroids
NASA Astrophysics Data System (ADS)
Yu, P.; Mustata, M.; Turek, J. J.; French, P. M. W.; Melloch, M. R.; Nolte, D. D.
2003-07-01
We present depth-resolved coherence-domain images of living tissue using a dynamic holographic semiconductor film. An AlGaAs photorefractive quantum-well device is used in an adaptive interferometer that records coherent backscattered (image-bearing) light from inside rat osteogenic sarcoma tumor spheroids up to 1 mm in diameter in vitro. The data consist of sequential holographic image frames at successive depths through the tumor represented as a visual video "fly-through." The images from the tumor spheroids reveal heterogeneous structures presumably caused by necrosis and microcalcifications characteristic of human tumors in their early avascular growth.
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
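The difference frame and a gradient-based per-pixel error estimate can be sketched as follows. The jitter amplitude, noise level, and decision rule below are illustrative assumptions, not values or formulas from the patent:

```python
import numpy as np

def raw_difference(reference, current):
    """Signed raw difference frame between current and reference images."""
    return current.astype(float) - reference.astype(float)

def spatial_error_estimate(reference, jitter_px=0.5):
    """Per-pixel error bound from local spatial intensity gradients: a
    jitter of `jitter_px` pixels can change a pixel's value by roughly
    |gradient| * jitter_px, so steep edges tolerate larger differences."""
    gy, gx = np.gradient(reference.astype(float))
    return np.hypot(gx, gy) * jitter_px

def change_mask(reference, current, noise_sigma=2.0, jitter_px=0.5):
    """Flag pixels whose difference exceeds the combined jitter + noise bound."""
    diff = np.abs(raw_difference(reference, current))
    err = spatial_error_estimate(reference, jitter_px) + 3.0 * noise_sigma
    return diff > err

ref = np.zeros((16, 16))
ref[:, 8:] = 100.0        # vertical edge: large spatial gradient there
cur = ref.copy()
cur[4, 2] = 50.0          # genuine change in a flat region
mask = change_mask(ref, cur)
```

A half-pixel jitter along the edge would produce large raw differences at edge pixels, but the gradient-weighted error bound absorbs them, so only the genuine change in the flat region is flagged.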
High-Frame-Rate Speckle-Tracking Echocardiography.
Joos, Philippe; Poree, Jonathan; Liebgott, Herve; Vray, Didier; Baudet, Mathilde; Faurie, Julia; Tournoux, Francois; Cloutier, Guy; Nicolas, Barbara; Garcia, Damien
2018-05-01
Conventional echocardiography is the leading modality for noninvasive cardiac imaging. It has recently been shown that high-frame-rate echocardiography using diverging waves could improve cardiac assessment. The spatial resolution and contrast associated with this method are commonly improved by coherent compounding of steered beams. However, owing to fast tissue velocities in the myocardium, the summation process of successive diverging waves can lead to destructive interference if motion compensation (MoCo) is not considered. Coherent compounding methods based on MoCo have demonstrated their potential to provide high-contrast B-mode cardiac images. Ultrafast speckle-tracking echocardiography (STE) based on common speckle-tracking algorithms could substantially benefit from this approach. In this paper, we applied STE to high-frame-rate B-mode images obtained with a specific MoCo technique to quantify the 2-D motion and tissue velocities of the left ventricle. The method was first validated in vitro and then evaluated in vivo in the four-chamber view of 10 volunteers. High-contrast, high-resolution B-mode images were constructed at 500 frames/s. The sequences were generated with a Verasonics scanner and a 2.5-MHz phased array. The 2-D motion was estimated with standard cross correlation combined with three different subpixel adjustment techniques. The estimated in vitro velocity vectors derived from STE were consistent with the expected values, with normalized errors ranging from 4% to 12% in the radial direction and from 10% to 20% in the cross-range direction. Global longitudinal strain of the left ventricle was also obtained from STE in 10 subjects and compared with the results provided by a clinical scanner: group means were not statistically different (p-value = 0.33). The in vitro and in vivo results showed that MoCo enables preservation of the myocardial speckles and in turn allows high-frame-rate STE.
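The displacement-estimation step, standard cross correlation followed by a subpixel adjustment, can be illustrated in 1-D. The three-point parabolic fit below is one common subpixel technique; the paper compares three techniques without naming them here, so this is an assumed example:

```python
import numpy as np

def subpixel_peak(corr_row):
    """Parabolic (3-point) subpixel refinement around the integer
    cross-correlation peak -- one common adjustment technique."""
    i = int(np.argmax(corr_row))
    if i == 0 or i == len(corr_row) - 1:
        return float(i)
    y0, y1, y2 = corr_row[i - 1], corr_row[i], corr_row[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

def estimate_shift(frame_a, frame_b):
    """Estimate 1-D speckle displacement via full cross correlation
    plus parabolic subpixel refinement."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.correlate(b, a, mode="full")
    return subpixel_peak(corr) - (len(a) - 1)   # remove zero-lag offset

# Speckle-like signal shifted by 3 samples.
rng = np.random.default_rng(0)
a = rng.standard_normal(128)
b = np.roll(a, 3)
print(round(estimate_shift(a, b), 1))  # ≈ 3.0
```

In 2-D speckle tracking the same logic is applied to correlation surfaces, refining the row and column peak positions independently.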
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
Staggered Multiple-PRF Ultrafast Color Doppler.
Posada, Daniel; Poree, Jonathan; Pellissier, Arnaud; Chayer, Boris; Tournoux, Francois; Cloutier, Guy; Garcia, Damien
2016-06-01
Color Doppler imaging is an established pulsed ultrasound technique to visualize blood flow non-invasively. High-frame-rate (ultrafast) color Doppler, by emission of plane or circular wavefronts, allows a severalfold increase in frame rates. Conventional and ultrafast color Doppler are both limited by the range-velocity dilemma, which may result in velocity folding (aliasing) for large depths and/or large velocities. We investigated multiple pulse-repetition-frequency (PRF) emissions arranged in a series of staggered intervals to remove aliasing in ultrafast color Doppler. Staggered PRF is an emission process in which time delays between successive pulse transmissions change in an alternating way. We tested staggered dual- and triple-PRF ultrafast color Doppler, 1) in vitro in a spinning disc and a free jet flow, and 2) in vivo in a human left ventricle. The in vitro results showed that the Nyquist velocity could be extended to up to 6 times the conventional limit. We found coefficients of determination r² ≥ 0.98 between the de-aliased and ground-truth velocities. Consistent de-aliased Doppler images were also obtained in the human left heart. Our results demonstrate that staggered multiple-PRF ultrafast color Doppler is efficient for high-velocity high-frame-rate blood flow imaging. This is particularly relevant for new developments in ultrasound imaging relying on accurate velocity measurements.
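The de-aliasing principle for a staggered pair can be sketched numerically: each PRF folds the true velocity into its own Nyquist interval, and the unfolded candidate, within the extended Nyquist range, on which both measurements agree is taken as the true velocity. The exhaustive candidate search below is a toy illustration, not the paper's algorithm:

```python
import numpy as np

def fold(v, nyq):
    """Alias a true velocity into the Nyquist interval [-nyq, nyq)."""
    return (v + nyq) % (2.0 * nyq) - nyq

def dealias_dual_prf(v1, v2, nyq1, nyq2):
    """Staggered dual-PRF de-aliasing sketch: pick the pair of
    unfolded candidates (one per PRF) that agree, restricted to the
    extended Nyquist range of the staggered pair."""
    vmax = nyq1 * nyq2 / (nyq2 - nyq1)   # extended Nyquist limit
    best, best_err = None, np.inf
    for k1 in range(-4, 5):
        c1 = v1 + 2.0 * k1 * nyq1
        if abs(c1) > vmax:
            continue
        for k2 in range(-4, 5):
            c2 = v2 + 2.0 * k2 * nyq2
            if abs(c2) > vmax:
                continue
            if abs(c1 - c2) < best_err:
                best, best_err = 0.5 * (c1 + c2), abs(c1 - c2)
    return best

nyq1, nyq2 = 0.5, 0.75           # m/s, a staggered 2:3 PRF pair
v_true = 1.3                     # beyond both single-PRF limits
v = dealias_dual_prf(fold(v_true, nyq1), fold(v_true, nyq2), nyq1, nyq2)
print(round(v, 2))               # 1.3
```

For this 2:3 pair the extended Nyquist limit is 0.5·0.75/0.25 = 1.5 m/s, triple the lower single-PRF limit, consistent with the severalfold extension reported.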
Robust Small Target Co-Detection from Airborne Infrared Image Sequences.
Gao, Jingli; Wen, Chenglin; Liu, Meiqin
2017-09-29
In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. Firstly, a dense target extraction model based on nonlinear weights is proposed, which suppresses the image background and enhances small targets better than singular-value weighting. Secondly, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates a structural prior in terms of local weighted entropy; thus, it can extract real targets accurately and suppress background clutter efficiently. Finally, the commonality of targets in the spatio-temporal domain is used to construct a target refinement model for false-alarm suppression and target confirmation. Since real targets should appear in both the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association of consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame and tracklet association of the location-correlation maps for successive frames have a strong ability to discriminate between small targets and background clutter. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutter effectively, but also detect targets accurately even in the presence of target-like interference.
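The low-rank-plus-sparse decomposition at the heart of the sparse extraction step can be sketched with a toy alternating scheme: a rank-1 background estimate alternates with entry-wise weighted soft thresholding for the sparse target layer. A real solver would use inexact ALM and the paper's entropy-based weights; both the solver and the uniform weights here are simplifying assumptions:

```python
import numpy as np

def weighted_rpca(D, weights, lam=0.1, n_iter=50):
    """Toy entry-wise weighted robust PCA: alternate a rank-1
    low-rank (background) fit with entry-wise weighted soft
    thresholding for the sparse (target) layer."""
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = s[0] * np.outer(U[:, 0], Vt[0])        # rank-1 background
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * weights, 0.0)
    return L, S

# Smooth (rank-1) background plus one bright small target.
bg = np.outer(np.linspace(1, 2, 8), np.linspace(1, 2, 8))
D = bg.copy(); D[3, 4] += 5.0
W = np.ones_like(D)            # uniform weights for this toy example
L, S = weighted_rpca(D, W, lam=0.5)
idx = np.unravel_index(np.argmax(S), S.shape)
print(idx == (3, 4))           # True: the target lands in the sparse layer
```

In the paper, the weights would be smaller where local weighted entropy indicates target-like structure, making true targets cheaper to place in S than clutter.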
High-speed X-ray imaging pixel array detector for synchrotron bunch isolation
Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; Shanks, Katherine S.; Weiss, Joel T.; Gruner, Sol M.
2016-01-01
A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
Dynamic phase-sensitive optical coherence elastography at a true kilohertz frame-rate
NASA Astrophysics Data System (ADS)
Singh, Manmohan; Wu, Chen; Liu, Chih-Hao; Li, Jiasong; Schill, Alexander; Nair, Achuth; Larin, Kirill V.
2016-03-01
Dynamic optical coherence elastography (OCE) techniques have rapidly emerged as a noninvasive way to characterize the biomechanical properties of tissue. However, clinical applications of the majority of these techniques have been unfeasible due to the extended acquisition time required by multiple temporal OCT acquisitions (M-B mode). Moreover, multiple excitations, large datasets, and prolonged laser exposure prohibit their translation to the clinic, where patient discomfort and safety are critical criteria. Here, we demonstrate the feasibility of noncontact true kilohertz frame-rate dynamic optical coherence elastography by directly imaging a focused air-pulse induced elastic wave with a home-built phase-sensitive OCE system. The OCE system was based on a 4X buffered Fourier Domain Mode Locked swept source laser with an A-scan rate of ~1.5 MHz, and imaged the elastic wave propagation at a frame rate of ~7.3 kHz. Because the elastic wave was directly imaged, only a single excitation was needed per line-scan measurement. Rather than acquiring multiple temporal scans at successive spatial locations as with previous techniques, here, successive B-scans were acquired over the measurement region (B-M mode). Preliminary measurements were taken on tissue-mimicking agar phantoms of various concentrations, and the results showed good agreement with uniaxial mechanical compression testing. Then, the elasticity of an in situ porcine cornea in the whole eye-globe configuration at various intraocular pressures was measured. The results showed that this technique can acquire a depth-resolved elastogram in milliseconds. Furthermore, the ultra-fast acquisition ensured that the laser safety exposure limit for the cornea was not exceeded.
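The wave-speed estimation implied by B-M mode imaging can be sketched on synthetic data: for each lateral position, find the frame at which the air-pulse induced displacement peaks, then fit position against peak time; the slope is the group velocity (stiffer tissue gives a faster wave). The synthetic Gaussian pulse and peak-time fit below are illustrative, not the paper's processing chain:

```python
import numpy as np

def elastic_wave_speed(disp, dx, frame_rate):
    """Estimate elastic-wave group velocity from successive B-scans:
    disp has shape (frames, lateral positions). The slope of a linear
    fit of position vs. peak arrival time gives the wave speed."""
    peak_frames = np.argmax(disp, axis=0)
    times = peak_frames / frame_rate
    positions = np.arange(disp.shape[1]) * dx
    return np.polyfit(times, positions, 1)[0]

# Synthetic Gaussian pulse travelling at 2 m/s, imaged at ~7.3 kHz.
frame_rate, speed = 7300.0, 2.0
dx = speed / frame_rate          # lateral step: one pixel per frame here
t = np.arange(64)[:, None] / frame_rate
x = np.arange(20)[None, :] * dx
disp = np.exp(-((x - speed * t) / 5e-4) ** 2)
print(round(elastic_wave_speed(disp, dx, frame_rate), 2))  # 2.0
```

Converting the recovered wave speed into an elastic modulus would require a mechanical model (e.g. a surface-wave relation), which is outside this sketch.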
Retinal slit lamp video mosaicking.
De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael
2016-06-01
To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. For ophthalmologists, the use of slit lamp images for documentation and analysis purposes remains extremely challenging due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results and state-of-the-art methods were compared and rated by ophthalmologists, showing a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves on state-of-the-art methods and is preferred qualitatively.
Speidel, Michael A; Tomkowiak, Michael T; Raval, Amish N; Dunkerley, David A P; Slagowski, Jordan M; Kahn, Paul; Ku, Jamie; Funk, Tobias
Scanning-beam digital x-ray (SBDX) is an inverse geometry fluoroscopy system for low-dose cardiac imaging. The use of a narrow scanned x-ray beam in SBDX reduces detected x-ray scatter and improves dose efficiency; however, the tight beam collimation also limits the maximum achievable x-ray fluence. To increase the fluence available for imaging, we have constructed a new SBDX prototype with a wider x-ray beam, a larger-area detector, and a new real-time image reconstructor. Imaging is performed with a scanning source that generates 40,328 narrow overlapping projections from 71 × 71 focal spot positions for every 1/15 s scan period. A high-speed 2-mm-thick CdTe photon counting detector was constructed with 320 × 160 elements and a 10.6 cm × 5.3 cm area (full readout every 1.28 μs), providing an 86% increase in area over the previous SBDX prototype. A matching multihole collimator was fabricated from layers of tungsten, brass, and lead, and a multi-GPU reconstructor was assembled to reconstruct the stream of captured detector images into full field-of-view images in real time. Thirty-two tomosynthetic planes spaced by 5 mm, plus a multiplane composite image, are produced for each scan frame. Noise equivalent quanta on the new SBDX prototype measured 63%-71% higher than the previous prototype. The x-ray scatter fraction was 3.9-7.8% when imaging 23.3-32.6 cm acrylic phantoms, versus 2.3-4.2% with the previous prototype. Coronary angiographic imaging at 15 frames/s was successfully performed on the new SBDX prototype, with live display of either a multiplane composite or a single plane image.
Flash trajectory imaging of target 3D motion
NASA Astrophysics Data System (ADS)
Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang
2011-03-01
We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from complex backgrounds and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single image frame so that one can directly obtain the moving trajectory. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can yield motion parameters of moving targets.
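The integration step can be sketched simply: successive range-gated silhouette frames are accumulated into one image, so the target's positions over time form its trajectory directly. The pixel-wise maximum used here is a stand-in for the hardware time delay integration (a sum would also work); the data are synthetic:

```python
import numpy as np

def flash_trajectory(frames):
    """Accumulate range-gated silhouette frames into a single
    trajectory image (here via a pixel-wise maximum)."""
    return np.maximum.reduce([f.astype(float) for f in frames])

# Target silhouette moving diagonally across 5 gated frames.
frames = []
for i in range(5):
    f = np.zeros((8, 8)); f[i, i] = 1.0
    frames.append(f)
traj = flash_trajectory(frames)
print(int(traj.trace()))   # 5: all five positions appear in one frame
```

Motion parameters (speed, direction) then follow from fitting the accumulated positions against the known inter-gate delay.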
Interference-free ultrasound imaging during HIFU therapy, using software tools
NASA Technical Reports Server (NTRS)
Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)
2010-01-01
Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
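The compositing step can be sketched as follows: in each frame, regions whose intensity exceeds a threshold are treated as HIFU-corrupted, and the composite takes each region from any frame in which it is clean. The row-wise granularity and the threshold are illustrative assumptions, not the patented detection scheme:

```python
import numpy as np

def composite_interference_free(frames, interference_threshold=200.0):
    """Build one interference-free image from frames whose corrupted
    regions differ. Rows whose peak intensity exceeds the threshold
    are treated as interference-laden and skipped."""
    h, w = frames[0].shape
    out = np.zeros((h, w))
    filled = np.zeros(h, dtype=bool)
    for f in frames:
        clean = f.max(axis=1) < interference_threshold
        take = clean & ~filled
        out[take] = f[take]
        filled |= take
    return out, filled.all()

# Two frames with the interference band in different rows.
f1 = np.full((6, 4), 50.0); f1[0:3] = 500.0   # top rows corrupted
f2 = np.full((6, 4), 50.0); f2[3:6] = 500.0   # bottom rows corrupted
img, complete = composite_interference_free([f1, f2])
print(complete, img.max())    # True 50.0
```

The frequency offset described in the patent is what guarantees the corrupted band moves between frames, so every region is eventually clean in some frame.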
Artificial Neural Network applied to lightning flashes
NASA Astrophysics Data System (ADS)
Gin, R. B.; Guedes, D.; Bianchi, R.
2013-05-01
The development of video cameras has enabled scientists to study the behavior of lightning discharges with greater precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in C with the OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image-processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still unclassified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and the brightness of the event, removing irrelevant events such as birds, and also detect the event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images, and calculates its number of discharges. The neural network was implemented using the backpropagation algorithm and was trained with 42 training images containing 57 lightning events (one image can contain more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were discarded, and the event's number of discharges was correctly computed.
The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged in a 360° configuration, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.
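The frame-differencing detection stage described above can be sketched with plain numpy (OpenCV would be used in the actual C implementation): a frame is flagged when enough pixels differ from the previous frame, and a brightness check then rejects dim movers such as birds. All thresholds below are illustrative assumptions:

```python
import numpy as np

def detect_events(frames, diff_thresh=40.0, min_pixels=3, min_brightness=120.0):
    """Flag frames with a significant inter-frame difference, then
    keep only events whose changed pixels are bright enough."""
    events = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        changed = diff > diff_thresh
        if changed.sum() >= min_pixels:
            if frames[i][changed].max() >= min_brightness:
                events.append(i)
    return events

# Dim background with one bright streak in frame 2.
frames = [np.full((8, 8), 10.0) for _ in range(4)]
frames[2][2:5, 3] = 200.0
events = detect_events(frames)
print(events)   # [2]: frame 3 differs too, but its changed pixels are dim
```

Note that the streak's disappearance in frame 3 also triggers the difference test, but the brightness check discards it, mirroring the role of the brightness algorithm in the project.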
NASA Astrophysics Data System (ADS)
Wallace, D.; Ng, J. A.; Keall, P. J.; O'Brien, R. T.; Poulsen, P. R.; Juneja, P.; Booth, J. T.
2015-06-01
Kilovoltage intrafraction monitoring (KIM) utilises the kV imager during treatment for real-time tracking of prostate fiducial markers. However, its effectiveness relies on sufficient image quality for the fiducial tracking task. To guide the performance characterisation of KIM under different clinically relevant conditions, the effect of different kV parameters and patient size on image quality, and quantification of MV scatter from the patient to the kV detector panel were investigated in this study. Image quality was determined for a range of kV acquisition frame rates, kV exposure, MV dose rates and patient sizes. Two methods were used to determine image quality; the ratio of kV signal through the patient to the MV scatter from the patient incident on the kilovoltage detector, and the signal-to-noise ratio (SNR). The effect of patient size and frame rate on MV scatter was evaluated in a homogeneous CIRS pelvis phantom and marker segmentation was determined utilising the Rando phantom with embedded markers. MV scatter incident on the detector was shown to be dependent on patient thickness and frame rate. The segmentation code was shown to be successful for all frame rates above 3 Hz for the Rando phantom corresponding to a kV to MV ratio of 0.16 and an SNR of 1.67. For a maximum patient dimension less than 36.4 cm the conservative kV parameters of 5 Hz at 1 mAs can be used to reduce dose while retaining image quality, where the current baseline kV parameters of 10 Hz at 1 mAs is shown to be adequate for marker segmentation up to a patient dimension of 40 cm. In conclusion, the MV scatter component of image quality noise for KIM has been quantified. For most prostate patients, use of KIM with 10 Hz imaging at 1 mAs is adequate however image quality can be maintained and imaging dose reduced by altering existing acquisition parameters.
High-Frame-Rate Doppler Ultrasound Using a Repeated Transmit Sequence
Podkowa, Anthony S.; Oelze, Michael L.; Ketterling, Jeffrey A.
2018-01-01
The maximum detectable velocity of high-frame-rate color flow Doppler ultrasound is limited by the imaging frame rate when using coherent compounding techniques. Traditionally, high quality ultrasonic images are produced at a high frame rate via coherent compounding of steered plane wave reconstructions. However, this compounding operation results in an effective downsampling of the slow-time signal, thereby artificially reducing the frame rate. To alleviate this effect, a new transmit sequence is introduced in which each transmit angle is repeated in succession. This transmit sequence allows for direct comparison between low-resolution, pre-compounded frames at a short time interval in a way that is resistant to sidelobe motion. Use of this transmit sequence increases the maximum detectable velocity by a factor equal to the transmit sequence length. The performance of this new transmit sequence was evaluated using a rotating cylindrical phantom and compared with traditional methods using a 15-MHz linear array transducer. Axial velocity estimates were recorded for a range of ±300 mm/s and compared to the known ground truth. Using these new techniques, the root mean square error was reduced from over 400 mm/s to below 50 mm/s in the high-velocity regime compared to traditional techniques. The standard deviation of the velocity estimate in the same velocity range was reduced from 250 mm/s to 30 mm/s. This result demonstrates the viability of the repeated-transmit-sequence method in detecting and quantifying high-velocity flow.
Specialized CCDs for high-frame-rate visible imaging and UV imaging applications
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Taylor, Gordon C.; Shallcross, Frank V.; Tower, John R.; Lawler, William B.; Harrison, Lorna J.; Socker, Dennis G.; Marchywka, Mike
1993-11-01
This paper reports recent progress by the authors in two distinct charge coupled device (CCD) technology areas. The first technology area is high frame rate, multi-port, frame transfer imagers. A 16-port, 512 × 512, split frame transfer imager and a 32-port, 1024 × 1024, split frame transfer imager are described. The thinned, backside illuminated devices feature on-chip correlated double sampling, buried blooming drains, and a room temperature dark current of less than 50 pA/cm², without surface accumulation. The second technology area is vacuum ultraviolet (UV) frame transfer imagers. A developmental 1024 × 640 frame transfer imager with 20% quantum efficiency at 140 nm is described. The device is fabricated in a p-channel CCD process, thinned for backside illumination, and utilizes special packaging to achieve stable UV response.
Behavioral model of visual perception and recognition
NASA Astrophysics Data System (ADS)
Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.
1993-09-01
In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on OFR have demonstrated the ability of invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR, but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This has provided our model with the ability for invariant representation of complex objects in gray-level images, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. FFR provides both the invariant representation of object features in Sensory Memory and shifts of attention in Motor Memory.
Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates the ability to recognize complex objects (such as faces) in gray-level images invariantly with respect to shift, rotation, and scale.
Encrypting Digital Camera with Automatic Encryption Key Deletion
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2007-01-01
A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
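The scheme can be sketched in software: the previously recorded frame serves as the encryption key for the new frame, and writing the ciphertext back over the key's memory location destroys the key as a side effect. XOR stands in for the unspecified encryption circuit; it is an illustrative assumption, not the patented cipher:

```python
import numpy as np

def encrypt_frame(frame, prev_frame_key):
    """Encrypt a video frame using the previously recorded frame as
    the key (XOR as a stand-in for the encryption circuit)."""
    return np.bitwise_xor(frame, prev_frame_key)

memory = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)
new_frame = np.random.default_rng(2).integers(0, 256, (4, 4), dtype=np.uint8)

key = memory.copy()                      # read circuit fetches the prior frame
memory = encrypt_frame(new_frame, key)   # write circuit overwrites the key slot
# Decryption requires the now-destroyed key; with it, the frame is recovered.
print(np.array_equal(np.bitwise_xor(memory, key), new_frame))  # True
```

The automatic key deletion of the title falls out of the overwrite: once the ciphertext replaces the prior frame in memory, no plaintext key remains in the camera.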
Benioff, Paul
2009-01-01
This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems, as they are both mathematical and physical systems. As mathematical systems they represent numbers. As physical systems in each frame the strings have a discrete Schrodinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid systems states, is discussed. Representations and images of other physical systems in the different frames are also described.
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame-selection technique, and multi-frame blind deconvolution is then performed. No a priori knowledge is used except for the positivity constraint in the blind deconvolution. The use of multi-frame images is beneficial for improving the stability and convergence of the blind deconvolution algorithm. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
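The frame-selection step can be sketched by ranking closed-loop frames with a sharpness metric and keeping the best for the subsequent blind deconvolution. The mean-squared-gradient metric and keep fraction below are assumptions; the paper does not specify its selection criterion here:

```python
import numpy as np

def select_frames(frames, keep_fraction=0.5):
    """Rank AO closed-loop frames by a sharpness metric (mean squared
    gradient, an assumed criterion) and keep the sharpest fraction
    for multi-frame blind deconvolution."""
    def sharpness(f):
        gy, gx = np.gradient(f.astype(float))
        return float(np.mean(gx**2 + gy**2))
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1]          # best first
    n_keep = max(1, int(len(frames) * keep_fraction))
    return [frames[i] for i in order[:n_keep]], scores

sharp = np.zeros((16, 16)); sharp[8:] = 100.0      # strong edge content
blurry = np.full((16, 16), 50.0)                   # featureless frame
kept, scores = select_frames([blurry, sharp], keep_fraction=0.5)
print(len(kept), np.array_equal(kept[0], sharp))   # 1 True
```

Feeding only the sharper frames into the deconvolution is what stabilizes the jointly estimated object and point-spread functions.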
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
... Frames and Image Display Devices and Components Thereof; Notice of Institution of Investigation... United States after importation of certain digital photo frames and image display devices and components... certain digital photo frames and image display devices and components thereof that infringe one or more of...
NASA Astrophysics Data System (ADS)
Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao
2013-12-01
A popular approach to medical image reconstruction has been sparsity regularization, assuming the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system, owing to its capability for sparsely approximating piecewise-smooth functions such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs an adaptive wavelet tight frame that is task specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. A proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality over the traditional tight frame method.
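The l1-regularized minimization step can be illustrated with a small ISTA (iterative shrinkage-thresholding) sketch. ISTA is a standard solver for this problem class, not necessarily the one used in the paper; the function names and the restriction to an orthonormal transform W (a tight frame with frame constant 1) are simplifying assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, W, lam=0.1, n_iter=200):
    """Minimise ||A x - b||^2 + lam * ||W x||_1 for orthonormal W
    by running ISTA on the frame coefficients c = W x."""
    M = A @ W.T                                # forward model in coefficient space
    step = 1.0 / np.linalg.norm(M, 2) ** 2     # 1 / Lipschitz constant of the gradient
    c = np.zeros(W.shape[0])
    for _ in range(n_iter):
        grad = M.T @ (M @ c - b)               # gradient of the data-fit term
        c = soft(c - step * grad, step * lam)  # gradient step + shrinkage
    return W.T @ c                             # synthesise the image estimate
```

With A and W both the identity this reduces to plain soft-thresholding of the data, a useful sanity check.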
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively.
Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
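The continuous frame averaging used to simulate lower frame rates can be reproduced in a few lines. A hypothetical sketch (the average_frames name and the discarding of a trailing remainder are assumptions):

```python
import numpy as np

def average_frames(frames, n):
    """Average every n consecutive frames, mimicking reduction of the
    12.87 Hz EPID stream to 12.87/n Hz (e.g. n=3 gives 4.29 Hz).
    Any trailing frames that do not fill a group are discarded."""
    frames = np.asarray(frames, dtype=float)
    usable = (len(frames) // n) * n
    return frames[:usable].reshape(-1, n, *frames.shape[1:]).mean(axis=1)
```

Averaging trades temporal continuity (more motion blur) for lower noise, which is exactly the tradeoff the study quantifies.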
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-01-01
Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
3-D ultrasound volume reconstruction using the direct frame interpolation method.
Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin
2010-11-01
A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. 
The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
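First-order direct frame interpolation between two adjacent B-mode frames reduces, in the linear case, to a weighted blend at intermediate positions. A minimal sketch of that idea (the paper also evaluates higher interpolation orders over more than two frames, which are not shown; the function name is an assumption):

```python
import numpy as np

def interpolate_frames(f0, f1, n_intermediate):
    """Create n_intermediate evenly spaced frames between two adjacent
    B-mode frames by linear (first-order) interpolation."""
    ts = np.linspace(0.0, 1.0, n_intermediate + 2)[1:-1]   # interior weights only
    return [(1.0 - t) * f0 + t * f1 for t in ts]
```

The reconstructed volume is then filled from the original frames plus these intermediate frames, so no separate hole-filling pass is needed.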
Model-based vision for space applications
NASA Technical Reports Server (NTRS)
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
1992-01-01
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model-based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels per image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
Shear wave speed and dispersion measurements using crawling wave chirps.
Hah, Zaegyoo; Partin, Alexander; Parker, Kevin J
2014-10-01
This article demonstrates the measurement of shear wave speed and shear speed dispersion of biomaterials using a chirp signal that launches waves over a range of frequencies. A biomaterial is vibrated by two vibration sources that generate shear waves inside the medium, which is scanned by an ultrasound imaging system. Doppler processing of the acquired signal produces an image of the square of vibration amplitude that shows repetitive constructive and destructive interference patterns called "crawling waves." With a chirp vibration signal, successive Doppler frames are generated from different source frequencies. The collected frames form a distinctive pattern which is used to calculate the shear speed and shear speed dispersion. A special reciprocal chirp is designed such that the equi-phase lines of a motion slice image are straight lines. Detailed analysis is provided to derive a closed-form solution for calculating the shear wave speed and the dispersion. Also, several phantoms and an ex vivo human liver sample were scanned, and the estimation results are presented. © The Author(s) 2014.
Fast regional readout CMOS Image Sensor for dynamic MLC tracking
NASA Astrophysics Data System (ADS)
Zin, H.; Harris, E.; Osmond, J.; Evans, P.
2014-03-01
Advanced radiotherapy techniques such as volumetric modulated arc therapy (VMAT) require verification of the complex beam delivery, including tracking of multileaf collimators (MLC) and monitoring the dose rate. This work explores the feasibility of a prototype complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) for tracking these complex treatments by utilising fast, region of interest (ROI) readout functionality. An automatic edge-tracking algorithm was used to locate the MLC leaf edges moving at various speeds (from a moving triangle field shape) and imaged at various sensor frame rates. The CIS demonstrates successful edge detection of the dynamic MLC motion within an accuracy of 1.0 mm. This demonstrates the feasibility of the sensor to verify treatment delivery involving dynamic MLC at up to ~400 frames per second (equivalent to the linac pulse rate), which is superior to current techniques such as using electronic portal imaging devices (EPID). The CIS provides the basis for an essential real-time verification tool, useful in assessing accurate delivery of complex high-energy radiation to the tumour and ultimately in achieving better cure rates for cancer patients.
NASA Technical Reports Server (NTRS)
Papanyan, Valeri; Oshle, Edward; Adamo, Daniel
2008-01-01
Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of the jettisoned object's position and velocity vectors. As post-EVA analysis examples, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided within several hours after separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt), and the location of the jettisoned object was then calculated for only several frames of the two synchronized movies. Keywords: photogrammetry, International Space Station, jettisons, image analysis.
Development of high-speed video cameras
NASA Astrophysics Data System (ADS)
Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk
2001-04-01
Presented in this paper is an outline of the R&D activities on high-speed video cameras conducted at Kinki University for more than ten years, currently proceeding as an international cooperative project with the University of Applied Sciences Osnabrück and other organizations. Extensive market research has been carried out, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; its sensor is the same as that developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capture. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is under way, and it will, hopefully, be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.
Quantitation of Fine Displacement in Echography
NASA Astrophysics Data System (ADS)
Masuda, Kohji; Ishihara, Ken; Yoshii, Ken; Furukawa, Toshiyuki; Kumagai, Sadatoshi; Maeda, Hajime; Kodama, Shinzo
1993-05-01
A High-speed Digital Subtraction Echography was developed to visualize the fine displacement of human internal organs. This method indicates differences in position through time-series images of high-frame-rate echography. Fine displacement of less than the ultrasonic wavelength can be observed. This method, however, lacks the ability to quantitatively measure displacement length: the subtraction between two successive images was affected by the displacement direction even when the displacement length was the same. To solve this problem, convolution of the echogram with a Gaussian distribution was used. To express displacement length quantitatively as brightness, normalization using the brightness gradient was applied. The quantitation algorithm was applied to successive B-mode images. Compared to the simply subtracted images, the quantitated images express the motion of organs more precisely. Expansion of the carotid artery and fine motion of ventricular walls can be visualized more easily. Displacement length can be quantitated to within a wavelength, and under more static conditions the system quantitates displacement lengths much less than a wavelength.
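The gradient-normalization idea can be sketched in one dimension: smooth both scan lines with a Gaussian, subtract, and divide the difference by the local brightness gradient so the output is in sample units. This is an illustrative reconstruction of the described principle, not the authors' implementation; the function names and the masking of small gradients are added assumptions.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalised 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def quantitate_displacement(line0, line1, sigma=2.0):
    """Estimate per-sample displacement between two successive scan lines.
    For a small shift d, line1(x) ~ line0(x) - d * grad(line0)(x), so
    d ~ -(line1 - line0) / grad -- the gradient normalisation makes the
    brightness difference direction-independent."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    s0 = np.convolve(line0, k, mode="same")
    s1 = np.convolve(line1, k, mode="same")
    grad = np.gradient(s0)
    d = np.zeros_like(s0)
    ok = np.abs(grad) > 1e-3        # normalise only where the gradient is stable
    d[ok] = -(s1[ok] - s0[ok]) / grad[ok]
    return d
```

On a smooth echo profile shifted by one sample, the estimate recovers a displacement near 1 on the flanks, where the gradient is well defined.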
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull
A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz has been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout, which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side-buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
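The core cross-correlation measurement can be sketched with FFTs. This integer-pixel version is a simplification: Jitter_Correct.m fits the relative phase to a plane for sub-pixel accuracy, which is omitted here, and the measure_shift name is an assumption.

```python
import numpy as np

def measure_shift(ref, frame):
    """Estimate the (row, col) translation of `frame` relative to `ref`
    by locating the peak of the circular cross-correlation, computed
    via FFT. Integer-pixel precision only."""
    F = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    wrap = shift > shape / 2          # map large positive shifts to negative ones
    shift[wrap] -= shape[wrap]
    return shift
```

Correcting the sequence then amounts to translating each frame by the negative of its measured shift.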
500 x 1 byte x 136 images, so each 500 bytes of this dataset represents one scan line of the slice image. For example, using PBM: get frame one: rawtopgm 256 256 < tomato.data > frame1; get frames one to four into a single image: rawtopgm 256 1024 < tomato.data > frame1-4; get frame two (skip
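The same extraction can be done without the PBM tools; a hypothetical numpy equivalent of the rawtopgm piping above (the function name, arguments, and frame geometry are assumptions, since the fragment is truncated):

```python
import numpy as np

def read_raw_frames(path, width, height, n_frames, offset_frames=0):
    """Read n_frames 8-bit grayscale frames from a headerless raw file,
    optionally skipping offset_frames first -- the numpy counterpart of
    piping a slice of the file through rawtopgm."""
    frame_bytes = width * height
    data = np.fromfile(path, dtype=np.uint8,
                       count=frame_bytes * n_frames,
                       offset=frame_bytes * offset_frames)
    return data.reshape(n_frames, height, width)
```

For instance, read_raw_frames("tomato.data", 256, 256, 1, offset_frames=1) would correspond to "get frame two".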
NASA Astrophysics Data System (ADS)
Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.
2004-02-01
This paper describes the architecture, process technology, and performance of a family of high-burst-rate CCDs. These imagers employ high-speed, low-lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than one million (10^6) frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition timing of the vision sensors must be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light reaching a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL that regulates the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking.
Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
The Multimission Image Processing Laboratory's virtual frame buffer interface
NASA Technical Reports Server (NTRS)
Wolfe, T.
1984-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and by system programmers who are adding new frame buffers to a system.
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
Medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. But current objective metrics are not well suited to assessing medical images obtained by frame-accumulation technology: little attention has been paid to gray-scale resolution, which is usually treated via spatial resolution and limited to the 256 gray levels of existing display devices. Thus, this paper proposes a metric, "mean signal-to-noise ratio" (MSNR), based on signal-to-noise ratio, to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean image of a sufficient number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed and their MSNR calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
Application of motion analysis in the study of the effect of botulinum toxin to rat vocal folds
NASA Astrophysics Data System (ADS)
Saadah, Abdul K.; Galatsanos, Nikolas P.; Inagi, K.; Bless, D.
1997-05-01
In the past we proposed a system that measures the deformations of the vocal folds from videostroboscopic images of the larynx. In that system: (1) we extract the boundaries of the vocal folds; (2) we elastically register the vocal fold boundaries in successive frames, which yields the displacement vector field (DVF) between adjacent frames; and (3) we fit, using a least-squares approach, an affine transformation model to succinctly describe the deformations between adjacent frames. In this paper we present, as an example of the capabilities of this system, an initial study of the deformation changes in rat vocal folds pre- and post-injection with botulinum toxin. For this application the generated DVF was segmented into right and left DVFs, and the deformation of each segment was studied separately.
Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao
2015-09-01
The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the attitude transforms between adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement in attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
Snorkelling between the stars: submarine methods for astronomical observations.
NASA Astrophysics Data System (ADS)
Velasco, S.; Quevedo, E.; Font, J.; Oscoz, A.; López, R. L.; Puga, M.; Rebolo, R.; Hernández Brito, J.; Llinas, O.; Marrero Callico, G.; Sarmiento, R.
2017-03-01
Achieving diffraction-limited astronomical observations from ground-based telescopes is very challenging due to the atmospheric effects that contribute to a general blurring of the images. However, astronomy is not the only science facing turbulence problems; obtaining quality images of the undersea world is as ambitious there as it is on the sky. One of the solutions contemplated to reach high-resolution images is the use of multiple frames of the same target, known as fusion super-resolution (Quevedo et al. 2015), which is the principle behind Lucky Imaging (Velasco et al. 2016). Here we present the successful result of joining efforts between undersea and astronomical research in the Canary Islands.
Optical head tracking for functional magnetic resonance imaging using structured light.
Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D
2008-07-01
An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 microm for translations and 0.1 deg for rotations.
Sub-10-ms X-ray tomography using a grating interferometer
NASA Astrophysics Data System (ADS)
Yashiro, Wataru; Noda, Daiji; Kajiwara, Kentaro
2017-05-01
An X-ray phase tomogram was successfully obtained with an exposure time of less than 10 ms by X-ray grating interferometry, an X-ray phase imaging technique that enables high-sensitivity X-ray imaging even of materials consisting of light elements. This high-speed X-ray imaging experiment was performed at BL28B2, SPring-8, where a white X-ray beam is available, and the tomogram was reconstructed from projection images recorded at a frame rate of 100,000 fps. The setup of the experiment will make it possible to realize three-dimensional observation of unrepeatable high-speed phenomena with a time resolution of less than 10 ms.
Lin, Zhicheng
2013-11-01
Visual attention can be deployed to stimuli based on our willful, top-down goal (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of the salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving frame configuration, we presented two frames in succession, forming either apparent translational motion or in mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Despite the cue being presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue location rather than at other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion. Copyright © 2013 Elsevier B.V. All rights reserved.
Lin, Zhicheng
2013-01-01
Visual attention can be deployed to stimuli based on our willful, top-down goals (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving frame configuration, we presented two frames in succession, forming either apparent translational motion or mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Even though the cue is presented in a spatially separate frame, in both translation and mirror reflection, human performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue, compared with other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion. PMID:23942348
Video framerate, resolution and grayscale tradeoffs for undersea telemanipulator
NASA Technical Reports Server (NTRS)
Ranadive, V.; Sheridan, T. B.
1981-01-01
The product of frame rate (F) in frames per second, resolution (R) in total pixels, and grayscale (G) in bits equals the transmission bit rate in bits per second. Thus, for a fixed channel capacity there are tradeoffs between F, R, and G in the sampling of the picture for a particular manual control task, in the present case remote undersea manipulation. A manipulator was used in master/slave mode to study these tradeoffs. Images were systematically degraded from 28 frames per second, 128 x 128 pixels, and 16 levels (4 bits) of grayscale, with various FRG combinations constructed from a real-time digitized (charge-injection) video camera. It was found that frame rate, resolution, and grayscale could be independently reduced without preventing the operator from accomplishing the task. Threshold points were found beyond which degradation prevented any successful performance. A general conclusion is that a well-trained operator can perform familiar remote manipulator tasks with a considerably degraded picture, down to 50 kbits/s.
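The F x R x G product above fixes the channel budget, so frame rate, resolution, and grayscale trade off directly. A minimal sketch (the 1-bit grayscale and resolutions in the loop are illustrative choices, not conditions from the study):

```python
def bit_rate(frames_per_s, pixels, gray_bits):
    """Transmission bit rate implied by the F x R x G product."""
    return frames_per_s * pixels * gray_bits

# Baseline from the study: 28 fps, 128 x 128 pixels, 16-level (4-bit) grayscale.
baseline = bit_rate(28, 128 * 128, 4)
print(f"baseline: {baseline / 1e6:.2f} Mbit/s")

# On a fixed 50 kbit/s channel, achievable frame rate falls as resolution grows
# (1-bit grayscale here is an illustrative choice).
channel = 50_000
for side in (16, 32, 64):
    fps = channel / bit_rate(1, side * side, 1)
    print(f"{side:>2}x{side} @ 1 bit -> {fps:.1f} fps")
```

The baseline works out to roughly 1.8 Mbit/s, which makes the reported usable floor of 50 kbit/s a reduction of more than thirty-fold.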
2001-01-24
The potential for investigating combustion at the limits of flammability, and the implications for spacecraft fire safety, led to the Structures Of Flame Balls At Low Lewis-number (SOFBALL) experiment flown twice aboard the Space Shuttle in 1997. Its success led to a reflight on the STS-107 Research 1 mission planned for 2002. This image is a video frame showing MSL-1 flame balls, which are intrinsically dim and thus require the use of image intensifiers on video cameras. The principal investigator is Dr. Paul Ronney of the University of Southern California, Los Angeles. Glenn Research Center in Cleveland, OH, manages the project.
Multiple-Event, Single-Photon Counting Imaging Sensor
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.
2011-01-01
The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method of registering photon counts. A single-event single-photon counting imaging array allows registration of at most one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low-flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in the pixels. The resulting low photon-collection efficiency substantially undermines the benefit of very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array is capable of performing single-photon counting from ultra-low-light through high-flux environments. In addition, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can be achieved in the detector array.
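The dynamic-range argument can be made concrete with a toy calculation (the 1 ms frame time is an assumed value for illustration, not from the text): a pixel that can register at most N counts per frame saturates at a mean flux of N divided by the frame time.

```python
def saturation_flux(counts_per_frame, frame_time_s):
    """Highest mean photon rate (photons/s) a counting pixel can register
    before its per-frame count capacity saturates."""
    return counts_per_frame / frame_time_s

frame_time = 1e-3  # assumed 1 ms frame time, for illustration only

single_event = saturation_flux(1, frame_time)           # prior art: 1 count/frame
multi_event = saturation_flux(1_000_000, frame_time)    # this work: ~1e6 counts/frame

# The larger per-frame capacity translates directly into dynamic-range gain.
print(f"dynamic range gain: {multi_event / single_event:.0e}x")
```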
Truly hybrid interventional MR/X-ray system: investigation of in vivo applications.
Fahrig, R; Butts, K; Wen, Z; Saunders, R; Kee, S T; Sze, D Y; Daniel, B L; Laerum, F; Pelc, N J
2001-12-01
The purpose of this study was to provide in vivo demonstrations of the functionality of a truly hybrid interventional x-ray/magnetic resonance (MR) system. A digital flat-panel x-ray system (1,024 × 1,024 array of 200-μm pixels, 30 frames per second) was integrated into an interventional 0.5-T magnet. The hybrid system is capable of MR and x-ray imaging of the same field of view without patient movement. Two intravascular procedures were performed in a 22-kg porcine model: placement of a transjugular intrahepatic portosystemic shunt (TIPS) (x-ray-guided catheterization of the hepatic vein, MR fluoroscopy-guided portal puncture, and x-ray-guided stent placement) and mock chemoembolization (x-ray-guided subselective catheterization of a renal artery branch and MR evaluation of perfused volume). The resolution and frame rate of the x-ray fluoroscopy images were sufficient to visualize and place devices, including nitinol guidewires (0.016-0.035-inch diameter) and stents and a 2.3-F catheter. Fifth-order branches of the renal artery could be seen. The quality of both real-time (3.5 frames per second) and standard MR images was not affected by the x-ray system. During MR-guided TIPS placement, the trocar and the portal vein could be easily visualized, allowing successful puncture from hepatic to portal vein. Switching back and forth between x-ray and MR imaging modalities without requiring movement of the patient was demonstrated. The integrated nature of the system could be especially beneficial when x-ray and MR image guidance are used iteratively.
Time multiplexing for increased FOV and resolution in virtual reality
NASA Astrophysics Data System (ADS)
Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj
2017-06-01
We introduce a time-multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement in pixel density, field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the whole. Each partial real image uses the full set of physical pixels available in the display. The partial real images are formed in succession and combine spatially and temporally into a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time-multiplexing strategy requires the real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time-multiplexing scheme in a compact format are shown. This scheme allows the resolution/FOV of the virtual image to be increased not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.
Frame Rate Considerations for Real-Time Abdominal Acoustic Radiation Force Impulse Imaging
Fahey, Brian J.; Palmeri, Mark L.; Trahey, Gregg E.
2008-01-01
With the advent of real-time Acoustic Radiation Force Impulse (ARFI) imaging, elevated frame rates are both desirable and clinically relevant. However, fundamental limits on frame rate are imposed by thermal safety concerns related to the incident radiation force pulses. Abdominal ARFI imaging utilizes a curvilinear scanning geometry that produces markedly different tissue heating patterns than those previously studied for linear arrays or mechanically-translated concave transducers. Finite Element Method (FEM) models were used to simulate these tissue heating patterns and to analyze the impact of tissue heating on the frame rates available for abdominal ARFI imaging. A perfusion model was implemented to account for cooling due to blood flow, and frame rate limits were evaluated in the presence of normal, reduced, and negligible tissue perfusion. Conventional ARFI acquisition techniques were also compared to ARFI imaging with parallel receive tracking in terms of thermal efficiency. Additionally, thermocouple measurements of transducer face temperature increases were acquired to assess the frame rate limits imposed by cumulative heating of the imaging array. Frame rates sufficient for many abdominal imaging applications were found to be safely achievable utilizing available ARFI imaging techniques. PMID:17521042
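A minimal lumped model illustrates how perfusion-dependent cooling bounds the frame rate: assume each ARFI frame deposits a fixed temperature rise and the hot spot relaxes exponentially with time constant τ (a shorter τ standing in for stronger perfusion). All numbers are illustrative assumptions, not the paper's FEM results.

```python
import math

def max_frame_rate(dT_per_frame, dT_limit, tau_s):
    """Upper bound on frame rate under a single-exponential cooling model.
    With period T between frames, the steady-state peak temperature rise is
    dT_per_frame / (1 - exp(-T / tau_s)); solve for the shortest T that
    keeps it at or below dT_limit."""
    if dT_per_frame >= dT_limit:
        return 0.0  # no periodic rate satisfies the limit in this model
    T_min = -tau_s * math.log(1.0 - dT_per_frame / dT_limit)
    return 1.0 / T_min

# A shorter cooling time constant (stronger perfusion) permits a higher rate.
for tau in (5.0, 20.0):  # seconds, assumed values
    print(f"tau = {tau:>4.1f} s -> {max_frame_rate(0.1, 1.0, tau):.2f} frames/s")
```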
Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images
NASA Astrophysics Data System (ADS)
Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.
2014-12-01
Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It was landed on 14 December, 2013 successfully. Hovering and obstacle avoidance stages are essential for CE-3 safety soft landing so that precise spacecraft trajectory in these stages are of great significance to verify orbital control strategy, to optimize orbital design, to accurately determine the landing site of CE-3, and to analyze the geological background of the landing site. Because the time consumption of these stages is just 25s, it is difficult to present spacecraft's subtle movement by Measurement and Control System or by radio observations. Under this background, the trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3 because of its technical advantages such as unaffecting by lunar gravity field spacecraft kinetic model, high resolution, high frame rate, and so on. In this paper, the trajectory of CE-3 before and after entering hovering stage was reconstructed by landing camera images from frame 3092 to frame 3180, which lasted about 9s, under Single Image Space Resection (SISR). The results show that CE-3's subtle changes during hovering stage can be emerged by the reconstructed trajectory. The horizontal accuracy of spacecraft position was up to 1.4m while vertical accuracy was up to 0.76m. The results can be used for orbital control strategy analysis and some other application fields.
Unmanned Vehicle Guidance Using Video Camera/Vehicle Model
NASA Technical Reports Server (NTRS)
Sutherland, T.
1999-01-01
A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
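The software step being moved into hardware is a plain subtraction of consecutive frames; a sketch with NumPy (the threshold value and test pattern are illustrative, not from the project):

```python
import numpy as np

def frame_difference(prev, curr, threshold=10):
    """Absolute difference of two successive 8-bit frames, thresholded
    into a binary change mask (threshold value is illustrative)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Two synthetic 256 x 256 frames: a 10 x 10 target brightens between frames.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (256, 256), dtype=np.uint8)
prev[120:130, 120:130] = 0
curr = prev.copy()
curr[120:130, 120:130] = 255

mask = frame_difference(prev, curr)
print(f"changed pixels: {int(mask.sum())}")
```

Done per pixel in software, this is 65,536 subtractions per 256 x 256 frame pair each video field, which is why off-loading it to hardware frees time for the rest of the algorithm.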
Tracking scanning laser ophthalmoscope (TSLO)
NASA Astrophysics Data System (ADS)
Hammer, Daniel X.; Ferguson, R. Daniel; Magill, John C.; White, Michael A.; Elsner, Ann E.; Webb, Robert H.
2003-07-01
The effectiveness of image stabilization with a retinal tracker in a multi-function, compact scanning laser ophthalmoscope (TSLO) was demonstrated in initial human subject tests. The retinal tracking system uses a confocal reflectometer with a closed loop optical servo system to lock onto features in the fundus. The system is modular to allow configuration for many research and clinical applications, including hyperspectral imaging, multifocal electroretinography (MFERG), perimetry, quantification of macular and photo-pigmentation, imaging of neovascularization and other subretinal structures (drusen, hyper-, and hypo-pigmentation), and endogenous fluorescence imaging. Optical hardware features include dual wavelength imaging and detection, integrated monochromator, higher-order motion control, and a stimulus source. The system software consists of a real-time feedback control algorithm and a user interface. Software enhancements include automatic bias correction, asymmetric feature tracking, image averaging, automatic track re-lock, and acquisition and logging of uncompressed images and video files. Normal adult subjects were tested without mydriasis to optimize the tracking instrumentation and to characterize imaging performance. The retinal tracking system achieves a bandwidth of greater than 1 kHz, which permits tracking at rates that greatly exceed the maximum rate of motion of the human eye. The TSLO stabilized images in all test subjects during ordinary saccades up to 500 deg/sec with an inter-frame accuracy better than 0.05 deg. Feature lock was maintained for minutes despite subject eye blinking. Successful frame averaging allowed image acquisition with decreased noise in low-light applications. The retinal tracking system significantly enhances the imaging capabilities of the scanning laser ophthalmoscope.
A novel approach to automatic threat detection in MMW imagery of people scanned in portals
NASA Astrophysics Data System (ADS)
Vaidya, Nitin M.; Williams, Thomas
2008-04-01
We have developed a novel approach to performing automatic detection of concealed threat objects in passive MMW imagery of people scanned in a portal setting. It is applicable to the significant class of imaging scanners that use the protocol of having the subject rotate in front of the camera in order to image them from several closely spaced directions. Customary methods of dealing with MMW sequences rely on the analysis of the spatial images in a frame-by-frame manner, with information extracted from separate frames combined by some subsequent technique of data association and tracking over time. We contend that the pooling of information over time in traditional methods is not as direct as can be and potentially less efficient in distinguishing threats from clutter. We have formulated a more direct approach to extracting information about the scene as it evolves over time. We propose an atypical spatio-temporal arrangement of the MMW image data - to which we give the descriptive name Row Evolution Image (REI) sequence. This representation exploits the singular aspect of having the subject rotate in front of the camera. We point out which features in REIs are most relevant to detecting threats, and describe the algorithms we have developed to extract them. We demonstrate results of successful automatic detection of threats, including ones whose faint image contrast renders their disambiguation from clutter very challenging. We highlight the ease afforded by the REI approach in permitting specialization of the detection algorithms to different parts of the subject body. Finally, we describe the execution efficiency advantages of our approach, given its natural fit to parallel processing.
WE-AB-BRA-12: Virtual Endoscope Tracking for Endoscopy-CT Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, W; Rao, A; Wendt, R
Purpose: The use of endoscopy in radiotherapy will remain limited until we can register endoscopic video to CT using standard clinical equipment. In this phantom study we tested a registration method using virtual endoscopy to measure CT-space positions from endoscopic video. Methods: Our phantom is a contorted clay cylinder with 2-mm-diameter markers in the luminal surface. These markers are visible on both CT and endoscopic video. Virtual endoscope images were rendered from a polygonal mesh created by segmenting the phantom's luminal surface on CT. We tested registration accuracy by tracking the endoscope's 6-degree-of-freedom coordinates frame-to-frame in a video recorded as it moved through the phantom, and using these coordinates to measure CT-space positions of markers visible in the final frame. To track the endoscope we used the Nelder-Mead method to search for coordinates that render the virtual frame most similar to the next recorded frame. We measured the endoscope's initial-frame coordinates using a set of visible markers, and for image similarity we used a combination of mutual information and gradient alignment. CT-space marker positions were measured by projecting their final-frame pixel addresses through the virtual endoscope to intersect with the mesh. Registration error was quantified as the distance between this intersection and the marker's manually-selected CT-space position. Results: Tracking succeeded for 6 of 8 videos, for which the mean registration error was 4.8±3.5mm (24 measurements total). The mean error in the axial direction (3.1±3.3mm) was larger than in the sagittal or coronal directions (2.0±2.3mm, 1.7±1.6mm). In the other 2 videos, the virtual endoscope got stuck in a false minimum. Conclusion: Our method can successfully track the position and orientation of an endoscope, and it provides accurate spatial mapping from endoscopic video to CT. This method will serve as a foundation for an endoscopy-CT registration framework that is clinically valuable and requires no specialized equipment.
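The frame-to-frame search structure can be sketched with SciPy's Nelder-Mead. The real system renders the CT surface mesh and scores mutual information plus gradient alignment; here a toy "renderer" maps a 6-DOF pose smoothly to a feature vector and the score is a sum of squared differences, so only the optimization structure follows the abstract (all names and values are stand-ins):

```python
import numpy as np
from scipy.optimize import minimize

def render(pose):
    """Stand-in for rendering a virtual endoscope frame at a 6-DOF pose
    (3 translations + 3 rotations): any smooth pose-to-image mapping."""
    return np.concatenate([np.sin(pose), np.cos(2.0 * pose)])

true_pose = np.array([0.8, -0.5, 0.2, 0.1, -0.2, 0.05])  # illustrative
recorded_frame = render(true_pose)  # the "next recorded frame"

def dissimilarity(pose):
    """Negative image similarity; the study used mutual information plus
    gradient alignment, SSD is a stand-in."""
    return float(np.sum((render(pose) - recorded_frame) ** 2))

# Initialize from the previous frame's pose (here: a small perturbation),
# as in frame-to-frame tracking, and search with Nelder-Mead.
init = true_pose + 0.05
result = minimize(dissimilarity, init, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-12, "maxiter": 5000})
```

Because Nelder-Mead is a local, derivative-free search, a poor starting simplex can settle in a false minimum, which is consistent with the failure mode reported for 2 of the 8 videos.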
A Kinect based sign language recognition system using spatio-temporal features
NASA Astrophysics Data System (ADS)
Memiş, Abbas; Albayrak, Songül
2013-12-01
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion-difference-and-accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then a 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed separately on the RGB images and the depth maps. DCT coefficients that represent the sign gestures are picked up via zigzag scanning, and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is applied. Performance of the proposed system is evaluated on a sign database containing 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system achieves promising success rates.
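The temporal-analysis pipeline — accumulate successive frame differences, apply a 2D DCT, collect coefficients in zigzag order — can be sketched as follows (frame data and coefficient count are illustrative; the k-NN classification step with Manhattan distance is omitted):

```python
import numpy as np
from scipy.fftpack import dct

def accumulated_motion_image(frames):
    """Sum the absolute differences of successive grayscale frames."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames, frames[1:]):
        acc += np.abs(curr.astype(np.float64) - prev.astype(np.float64))
    return acc

def dct2(img):
    """2D Discrete Cosine Transform (separable, orthonormal)."""
    return dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")

def zigzag(block, n_coeffs):
    """First n_coeffs coefficients in JPEG-style zigzag order."""
    h, w = block.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))
    return np.array([block[i, j] for i, j in order[:n_coeffs]])

# Toy gesture: a single bright pixel moving along the diagonal.
frames = [np.zeros((8, 8), dtype=np.uint8) for _ in range(3)]
for k, f in enumerate(frames):
    f[k, k] = 1

features = zigzag(dct2(accumulated_motion_image(frames)), 16)
```

Zigzag scanning keeps the low-frequency coefficients, where most of the accumulated-motion energy concentrates, so a short feature vector captures the gesture's gross spatial layout.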
Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution
NASA Astrophysics Data System (ADS)
Tian, Y.; Rao, C. H.; Wei, K.
2008-10-01
Adaptive optics can only partially compensate images blurred by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by a frame-selection technique, are deconvolved; no a priori knowledge is required except a positivity constraint. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
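A positivity-preserving iterative scheme of the kind described can be sketched with multiplicative Richardson-Lucy updates, alternating between a common object estimate and a per-frame PSF estimate. This is a generic blind-deconvolution sketch under periodic boundary conditions, not the authors' exact algorithm:

```python
import numpy as np

def rl_update(est, kernel, obs, eps=1e-12):
    """One multiplicative Richardson-Lucy update of `est` with `kernel`
    held fixed; positivity is preserved automatically because the update
    only multiplies by non-negative ratios."""
    F, iF = np.fft.fft2, np.fft.ifft2
    blur = np.real(iF(F(est) * F(kernel)))
    ratio = obs / (blur + eps)
    return est * np.real(iF(F(ratio) * np.conj(F(kernel))))

def blind_deconvolve(frames, n_iter=20):
    """Multi-frame blind deconvolution: a common object estimate is
    refined against each selected frame in turn, alternating with an
    update of that frame's PSF estimate."""
    shape = frames[0].shape
    obj = np.full(shape, np.mean(frames))           # flat positive start
    psfs = [np.full(shape, 1.0 / obj.size) for _ in frames]
    for _ in range(n_iter):
        for k, obs in enumerate(frames):
            obj = rl_update(obj, psfs[k], obs)
            psfs[k] = rl_update(psfs[k], obj, obs)
            psfs[k] /= psfs[k].sum()                # keep the PSF normalized
    return obj

# Sanity check: with a delta-function PSF, one object update recovers the frame.
truth = np.abs(np.random.default_rng(1).normal(1.0, 0.2, (16, 16)))
delta = np.zeros((16, 16)); delta[0, 0] = 1.0
flat = np.full((16, 16), truth.mean())
recovered = rl_update(flat, delta, truth)
```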
Liang, Zhongwei; Zhou, Liang; Liu, Xiaochu; Wang, Xiaogang
2014-01-01
Tablet image tracking strongly affects the efficiency and reliability of high-speed drug mass production, yet it has emerged as a difficult problem and a targeted focus of production monitoring in recent years, because the objects to be searched for are highly similar in shape and randomly distributed. To track randomly distributed tablets accurately, a calibrated surface of reflected light intensity is established using a surface-fitting approach and transitional-vector determination, describing the shape topology and topography details of the target tablet. On this basis, the mathematical properties of these surfaces are derived, and an artificial neural network (ANN) is employed to classify the moving target tablets by recognizing their different surface properties, so that the instantaneous coordinates of the tablets in one image frame can be determined. By repeating the same pattern recognition on the next image frame, the real-time movements of the target tablet templates are tracked in sequence. This paper provides reliable references and new research ideas for real-time object tracking in drug production practice. PMID:25143781
Guidance of aortic ablation using optical coherence tomography.
Patel, Nirlep A; Li, Xingde; Stamper, Debra L; Fujimoto, James G; Brezinski, Mark E
2003-04-01
There is a significant need for an imaging modality capable of providing guidance for intravascular procedures, as current technologies suffer from significant limitations. In particular, laser ablation of in-stent restenosis, revascularization of chronic total occlusions, and pulmonary vein ablation could benefit from guidance. Optical coherence tomography (OCT), a recently introduced technology, is similar to ultrasound except that it measures the back-reflection of infrared light instead of sound. This study examines the ability of OCT to guide vascular laser ablation. Aorta samples underwent laser ablation using an argon laser at varying power outputs and were monitored with OCT collecting images at 4 frames per second. Samples were compared to the corresponding histopathology. Arterial layers could be differentiated in the image sequences. This allowed correlation of changes in the OCT image with power and duration in addition to histopathology. OCT provides real-time guidance of arterial ablation. At 4 frames per second, OCT was able to show the microstructural changes in the vessel wall during laser ablation. Since current ablation procedures often injure surrounding tissue, the ability to minimize collateral damage to adjoining tissue represents a useful advantage of this system. This study suggests a possible role for OCT in the guidance of intravascular procedures.
Stereo and IMU-Assisted Visual Odometry for Small Robots
NASA Technical Reports Server (NTRS)
2012-01-01
This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
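The disparity-by-correlation step can be sketched as winner-take-all block matching (sum of absolute differences standing in for the flight code's correlation measure; window size, search range, and test pattern are illustrative):

```python
import numpy as np

def disparity_map(left, right, max_disp=16, win=5):
    """For each left-image pixel, find the horizontal shift d that best
    matches a win x win patch in the right image (rectified stereo,
    left-camera reference: a left pixel at x matches the right image at x - d)."""
    h, w = left.shape
    half = win // 2
    L, R = left.astype(np.float64), right.astype(np.float64)
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = L[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - R[y - half:y + half + 1,
                                      x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right image is the left shifted by 4 pixels.
rng = np.random.default_rng(0)
left = rng.random((32, 32))
true_d = 4
right = np.zeros_like(left)
right[:, :-true_d] = left[:, true_d:]

disp = disparity_map(left, right)
```

The nested loops make the cost structure explicit; a real-time implementation vectorizes the per-disparity cost over the whole image, which is what lets a fixed-point DSP reach the reported millions of disparity evaluations per second.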
Constructing spherical panoramas of a bladder phantom from endoscopic video using bundle adjustment
NASA Astrophysics Data System (ADS)
Soper, Timothy D.; Chandler, John E.; Porter, Michael P.; Seibel, Eric J.
2011-03-01
The high recurrence rate of bladder cancer requires patients to undergo frequent surveillance screenings over their lifetime following initial diagnosis and resection. Our laboratory is developing panoramic stitching software that would compile several minutes of cystoscopic video into a single panoramic image, covering the entire bladder, for review by a urologist at a later time or remote location. Global alignment of video frames is achieved with a bundle adjuster that simultaneously recovers both the 3D structure of the bladder and the scope motion using only the video frames as input. The result of the algorithm is a complete 360° spherical panorama of the outer surface. The details of the software algorithms are presented here, along with results from both a virtual cystoscopy and real endoscopic imaging of a bladder phantom. The software successfully stitched several hundred video frames into a single panorama with subpixel accuracy and with no knowledge of the intrinsic camera properties, such as focal length and radial distortion. In the discussion, we outline future work in development of the software and identify factors pertinent to clinical translation of this technology.
Automatic treatment of flight test images using modern tools: SAAB and Aeritalia joint approach
NASA Astrophysics Data System (ADS)
Kaelldahl, A.; Duranti, P.
The use of onboard cine cameras, as well as on-ground cinetheodolites, is very popular in flight testing. The high resolution of film and the high frame rate of cine cameras are still not exceeded by video technology. Video technology can successfully enter the flight test scenario now that the availability of solid-state optical sensors has dramatically reduced the dimensions and weight of TV cameras, allowing them to be placed in positions compatible with space or operational limitations (e.g., HUD cameras). A proper combination of cine and video cameras is the typical solution for a complex flight test program. The output of such devices is very helpful in many flight areas. Several successful applications of this technology are summarized. Analysis of the large amount of data produced (frames of images) requires a very long time and is normally carried out manually. To improve the situation, in the last few years several flight test centers have devoted their attention to techniques that allow quicker and more effective image treatment.
Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Chang-hui; Wei, Kai
Restricted by the observational condition and the hardware, adaptive optics can only make a partial correction of the optical images blurred by atmospheric turbulence. A postprocessing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. By frame selection we mean we first make a selection of the degraded (blurred) images for participation in the iterative blind deconvolution calculation, with no need of any a priori knowledge, and with only a positivity constraint. This method has been applied to the restoration of some stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2m telescope. The experimental results indicate that this method can effectively compensate for the residual errors of the adaptive optics system on the image, and the restored image can reach the diffraction-limited quality.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-18
... Image Display Devices and Components Thereof; Issuance of a Limited Exclusion Order and Cease and Desist... within the United States after importation of certain digital photo frames and image display devices and...: (1) The unlicensed entry of digital photo frames and image display devices and components thereof...
X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers
Willey, T. M.; Champley, K.; Hodgin, R.; ...
2016-06-17
Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. The work described here outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ~80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.
X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers
NASA Astrophysics Data System (ADS)
Willey, T. M.; Champley, K.; Hodgin, R.; Lauderbach, L.; Bagge-Hansen, M.; May, C.; Sanchez, N.; Jensen, B. J.; Iverson, A.; van Buuren, T.
2016-06-01
Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. This work outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ˜80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.
Blind subjects construct conscious mental images of visual scenes encoded in musical form.
Cronly-Dillon, J; Persaud, K C; Blore, R
2000-01-01
Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637
Redies, Christoph; Groß, Franziska
2013-01-01
Frames provide a visual link between artworks and their surround. We asked how image properties change as an observer zooms out from viewing a painting alone, to viewing the painting with its frame and, finally, the framed painting in its museum environment (museum scene). To address this question, we determined three higher-order image properties that are based on histograms of oriented luminance gradients. First, complexity was measured as the sum of the strengths of all gradients in the image. Second, we determined the self-similarity of histograms of the oriented gradients at different levels of spatial analysis. Third, we analyzed how much gradient strength varied across orientations (anisotropy). Results were obtained for three art museums that exhibited paintings from three major periods of Western art. In all three museums, the mean complexity of the frames was higher than that of the paintings or the museum scenes. Frames thus provide a barrier of complexity between the paintings and their exterior. By contrast, self-similarity and anisotropy values of images of framed paintings were intermediate between the images of the paintings and the museum scenes, i.e., the frames provided a transition between the paintings and their surround. We also observed differences between the three museums that may reflect modified frame usage in different art periods. For example, frames in the museum for 20th century art tended to be smaller and less complex than in the two other museums that exhibit paintings from earlier art periods (13th–18th century and 19th century, respectively). Finally, we found that the three properties did not depend on the type of reproduction of the paintings (photographs in museums, scans from books or images from the Google Art Project). To the best of our knowledge, this study is the first to investigate the relation between frames and paintings by measuring physically defined, higher-order image properties. PMID:24265625
Point spread function engineering for iris recognition system design.
Ashok, Amit; Neifeld, Mark A
2010-04-01
Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 × 8 reduces the iris-recognition performance by nearly a factor of 4 (on the CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel-shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves an FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields an FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.
[Improvement of Digital Capsule Endoscopy System and Image Interpolation].
Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai
2016-01-01
Traditional capsule endoscopes collect and transmit analog images, with weak anti-interference ability, low frame rate, and low resolution. This paper presents a new digital image capsule, which collects and transmits digital images, with a frame rate up to 30 frames/s and a pixel resolution of 400 × 400. The image is compressed in the capsule and transmitted outside the capsule for decompression and interpolation. A new interpolation algorithm is proposed, based on the relationship between the image planes, to obtain higher quality colour images. Keywords: capsule endoscopy, digital image, SCCB protocol, image interpolation
Markerless EPID image guided dynamic multi-leaf collimator tracking for lung tumors
NASA Astrophysics Data System (ADS)
Rottmann, J.; Keall, P.; Berbeco, R.
2013-06-01
Compensation of target motion during the delivery of radiotherapy has the potential to improve treatment accuracy, dose conformity and sparing of healthy tissue. We implement an online image guided therapy system based on soft tissue localization (STiL) of the target from electronic portal images and treatment aperture adaptation with a dynamic multi-leaf collimator (DMLC). The treatment aperture is moved synchronously and in real time with the tumor during the entire breathing cycle. The system is implemented and tested on a Varian TX clinical linear accelerator featuring an AS-1000 electronic portal imaging device (EPID) acquiring images at a frame rate of 12.86 Hz throughout the treatment. A position update cycle for the treatment aperture consists of four steps: in the first step, at time t = t0, a frame is grabbed; in the second step, the frame is processed with the STiL algorithm to obtain the tumor position at t = t0; in the third step, the tumor position at t = t0 + δt is predicted to overcome system latencies; and in the fourth step, the DMLC control software calculates the required leaf motions and applies them at time t = t0 + δt. The prediction model is trained before the start of the treatment with data representing the tumor motion. We analyze the system latency with a dynamic chest phantom (4D motion phantom, Washington University). We estimate the average planar position deviation between target and treatment aperture in a clinical setting by driving the phantom with several lung tumor trajectories (recorded from fiducial tracking during radiotherapy delivery to the lung). DMLC tracking for lung stereotactic body radiation therapy without fiducial markers was successfully demonstrated. The inherent system latency is found to be δt = (230 ± 11) ms for a MV portal image acquisition frame rate of 12.86 Hz. The root mean square deviation between tumor and aperture position is smaller than 1 mm.
We demonstrate the feasibility of real-time markerless DMLC tracking with a standard LINAC-mounted electronic portal imaging device (EPID).
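The aperture update cycle above hinges on predicting the tumor position one system latency (δt ≈ 230 ms) ahead of the latest measurement. The paper trains a patient-specific prediction model before treatment; as a minimal sketch, assuming simple linear extrapolation in place of that trained model:

```python
import numpy as np

def predict_position(times, positions, latency):
    """Predict the target position `latency` seconds ahead of the last
    sample by least-squares linear extrapolation of recent samples."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(positions, dtype=float)
    slope, intercept = np.polyfit(t, p, 1)  # fit p ~ slope*t + intercept
    return slope * (t[-1] + latency) + intercept

# Samples arriving at ~12.86 Hz; compensate the measured 230 ms latency.
frame_dt = 1.0 / 12.86
t = [i * frame_dt for i in range(5)]
pos = [2.0 * ti for ti in t]  # hypothetical target moving at 2 mm/s
print(predict_position(t, pos, 0.230))
```

For a target moving linearly this extrapolation is exact; a clinical predictor must also handle the quasi-periodic breathing component, which is why the actual model is trained per patient.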
High speed color imaging through scattering media with a large field of view
NASA Astrophysics Data System (ADS)
Zhuang, Huichang; He, Hexiang; Xie, Xiangsheng; Zhou, Jianying
2016-09-01
Optical imaging through complex media has many important applications. Although progress has been made in recovering optical images through various turbid media, widespread application of the technology is hampered by the recovery speed, the requirement for specific illumination, poor image quality, and a limited field of view. Here we demonstrate that the above-mentioned drawbacks can be essentially overcome. High speed color imaging through turbid media is achieved by taking into account the media memory effect, the point spread function, the exit pupil of the optical system, and the optimized signal-to-noise ratio. By retrieving selected speckles with an enlarged field of view, a high quality image is recovered at a speed limited only by the frame rates of the image capturing devices. An immediate application of the technique is the registration of static and dynamic images under human skin, recovering information with a wearable device.
Ruschin, Mark; Komljenovic, Philip T; Ansell, Steve; Ménard, Cynthia; Bootsma, Gregory; Cho, Young-Bin; Chung, Caroline; Jaffray, David
2013-01-01
Image guidance has improved the precision of fractionated radiation treatment delivery on linear accelerators. Precise radiation delivery is particularly critical when high doses are delivered to complex shapes with steep dose gradients near critical structures, as is the case for intracranial radiosurgery. To reduce potential geometric uncertainties, a cone beam computed tomography (CT) image guidance system was developed in-house to generate high-resolution images of the head at the time of treatment, using a dedicated radiosurgery unit. The performance and initial clinical use of this imaging system are described. A kilovoltage cone beam CT system was integrated with a Leksell Gamma Knife Perfexion radiosurgery unit. The X-ray tube and flat-panel detector are mounted on a translational arm, which is parked above the treatment unit when not in use. Upon descent, a rotational axis provides 210° of rotation for cone beam CT scans. Mechanical integrity of the system was evaluated over a 6-month period. Subsequent clinical commissioning included end-to-end testing of targeting performance and subjective image quality performance in phantoms. The system has been used to image 2 patients, 1 of whom received single-fraction radiosurgery and 1 who received 3 fractions, using a relocatable head frame. Images of phantoms demonstrated soft tissue contrast visibility and submillimeter spatial resolution. A contrast difference of 35 HU was easily detected at a calibration dose of 1.2 cGy (center of head phantom). The shape of the mechanical flex vs scan angle was highly reproducible and exhibited <0.2 mm peak-to-peak variation. With a 0.5-mm voxel pitch, the maximum targeting error was 0.4 mm. Images of 2 patients were analyzed offline and submillimeter agreement was confirmed with the conventional frame. A cone beam CT image guidance system was successfully adapted to a radiosurgery unit. The system is capable of producing high-resolution images of bone and soft tissue.
The system is in clinical use and provides excellent image guidance without invasive frames. Copyright © 2013 Elsevier Inc. All rights reserved.
Terrain Aided Navigation for Remus Autonomous Underwater Vehicle
2014-06-01
[List-of-figures excerpt: Figure 11, several successive sonar pings displayed together in the LTP frame; Figure 12, the linear interpolation of the sonar pings from Figure 11; Figure 13, SIR particle filter algorithm; Figure 26, correlation probability distributions for four different sonar images; Figure 27, particle …]
Fast Fiber-Coupled Imaging Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas
HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s).
Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.
Camera array based light field microscopy
Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai
2015-01-01
This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring the different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
Special raster scanning for reduction of charging effects in scanning electron microscopy.
Suzuki, Kazuhiko; Oho, Eisaku
2014-01-01
A special raster scanning (SRS) method for reducing charging effects is developed for SEM. Both a conventional fast scan (horizontal direction) and an unusual scan (vertical direction) are adopted for acquiring raw data consisting of many sub-images. These data are converted to a proper SEM image using digital image processing techniques. In terms of image sharpness and reduction of charging effects, SRS is compared with the conventional fast scan (with frame-averaging) and the conventional slow scan. Experimental results show the effectiveness of SRS images. By successfully combining the proposed scanning method with low accelerating voltage (LV)-SEMs, it is expected that higher-quality SEM images can be acquired more easily through the considerable reduction of charging effects, while maintaining resolution. © 2013 Wiley Periodicals, Inc.
Mesh quality oriented 3D geometric vascular modeling based on parallel transport frame.
Guo, Jixiang; Li, Shun; Chui, Yim Pan; Qin, Jing; Heng, Pheng Ann
2013-08-01
While a number of methods have been proposed to reconstruct geometrically and topologically accurate 3D vascular models from medical images, little attention has been paid to constantly maintain high mesh quality of these models during the reconstruction procedure, which is essential for many subsequent applications such as simulation-based surgical training and planning. We propose a set of methods to bridge this gap based on parallel transport frame. An improved bifurcation modeling method and two novel trifurcation modeling methods are developed based on 3D Bézier curve segments in order to ensure the continuous surface transition at furcations. In addition, a frame blending scheme is implemented to solve the twisting problem caused by frame mismatch of two successive furcations. A curvature based adaptive sampling scheme combined with a mesh quality guided frame tilting algorithm is developed to construct an evenly distributed, non-concave and self-intersection free surface mesh for vessels with distinct radius and high curvature. Extensive experiments demonstrate that our methodology can generate vascular models with better mesh quality than previous methods in terms of surface mesh quality criteria. Copyright © 2013 Elsevier Ltd. All rights reserved.
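A minimal sketch of the discrete parallel transport idea such modeling builds on (assuming a polyline centerline and the common project-and-renormalize approximation; the paper's Bézier-based furcation handling and frame blending are not shown):

```python
import numpy as np

def parallel_transport_frames(points):
    """Propagate an initial normal along a polyline so the frame does not
    twist about the tangent. Discrete approximation: project the previous
    normal onto the plane perpendicular to each new tangent."""
    pts = np.asarray(points, dtype=float)
    tangents = np.diff(pts, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    # Pick any initial normal perpendicular to the first tangent.
    t0 = tangents[0]
    helper = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(helper, t0)) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    n = np.cross(t0, helper)
    n /= np.linalg.norm(n)
    frames = [(t0, n, np.cross(t0, n))]
    for t in tangents[1:]:
        n = n - np.dot(n, t) * t  # remove component along the new tangent
        n /= np.linalg.norm(n)
        frames.append((t, n, np.cross(t, n)))
    return frames
```

Because the normal is carried forward rather than derived from curvature (as in a Frenet frame), the frame stays well defined on straight segments and does not flip at inflections, which is what makes it suitable for sweeping cross-sections along a vessel centerline.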
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-807] Certain Digital Photo Frames and Image Display Devices and Components Thereof; Commission Determination Not To Review an Initial... importation, and the sale within the United States after importation of certain digital photo frames and image...
High-speed varifocal imaging with a tunable acoustic gradient index of refraction lens.
Mermillod-Blondin, Alexandre; McLeod, Euan; Arnold, Craig B
2008-09-15
Fluidic lenses allow for varifocal optical elements, but current approaches are limited by the speed at which focal length can be changed. Here we demonstrate the use of a tunable acoustic gradient (TAG) index of refraction lens as a fast varifocal element. The optical power of the TAG lens varies continuously, allowing for rapid selection and modification of the effective focal length at time scales of 1 μs and shorter. The wavefront curvature applied to the incident light is experimentally quantified as a function of time, and single-frame imaging is demonstrated. Results indicate that the TAG lens can successfully be employed to perform high-rate imaging at multiple locations.
Sequential detection of web defects
Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.
2001-01-01
A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements is determined to be statistically identical, and a defective indication if the pair of elements is determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of the exemplar frame with corresponding elements of other frames on the web until one of the acceptable or defective indications occurs.
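The three-way accept/defect/undecided decision with fall-back to further frames can be sketched as follows; the mean-difference test and the two thresholds are hypothetical stand-ins for whatever statistical identity test the patent actually specifies:

```python
import numpy as np

ACCEPT_T, DEFECT_T = 2.0, 5.0  # hypothetical thresholds (sensor-dependent)

def classify_element(camera_elem, reference_elems):
    """Sequential three-way test: compare an element of the camera image
    against the corresponding element in successive reference frames until
    the difference is clearly small (accept) or clearly large (defect)."""
    for ref in reference_elems:
        diff = abs(float(np.mean(camera_elem)) - float(np.mean(ref)))
        if diff < ACCEPT_T:
            return "acceptable"
        if diff > DEFECT_T:
            return "defective"
        # otherwise ambiguous: fall through to the next reference frame
    return "undecided"  # exhausted reference frames without a decision
```

The sequential structure is the point: most elements resolve on the first comparison, and only the ambiguous band between the two thresholds costs extra comparisons against other frames on the web.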
Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.
2013-01-01
This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
TH-AB-202-01: Daily Lung Tumor Motion Characterization On EPIDs Using a Markerless Tiling Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozario, T; University of Texas at Dallas, Richardson, TX; Chiu, T
Purpose: Tracking lung tumor motion in real time allows for target dose escalation while simultaneously reducing dose to sensitive structures, thus increasing local control without increasing toxicity. We present a novel intra-fractional markerless lung tumor tracking algorithm using MV treatment beam images acquired during treatment delivery. Strong signals superimposed on the tumor significantly reduce the soft tissue resolution, while the different imaging modalities involved introduce global imaging discrepancies; both reduce comparison accuracy. A simple yet elegant tiling algorithm is reported to overcome these issues. Methods: MV treatment beam images were acquired continuously in beam’s eye view (BEV) by an electronic portal imaging device (EPID) during treatment and analyzed to obtain tumor positions on every frame. Every frame of the MV image was simulated by a composite of two components with separate digitally reconstructed radiographs (DRRs): all non-moving structures and the tumor. The tiling algorithm divides the global composite DRR and the corresponding MV projection into sub-images called tiles. Rigid registration is performed independently on tile-pairs in order to improve local soft tissue resolution. This enables the composite DRR to be transformed accurately to match the MV projection and attain a high correlation value through a pixel-based linear transformation. The highest cumulative correlation for all tile-pairs achieved over a user-defined search range indicates the 2-D coordinates of the tumor location on the MV projection. Results: This algorithm was successfully applied to cine-mode BEV images acquired during two SBRT plans delivered five times with different motion patterns to each of two phantoms. Approximately 15000 beam’s eye view images were analyzed and tumor locations were successfully identified on every projection with a maximum/average error of 1.8 mm / 1.0 mm.
Conclusion: Despite the presence of strong anatomical signal overlapping with the tumor images, this markerless detection algorithm accurately tracks intrafractional lung tumor motion. This project is partially supported by an Elekta research grant.
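The cumulative tile-pair correlation search can be sketched as below; the tile size, search range, integer shifts, and the use of zero-normalized cross-correlation are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-size tiles."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def best_shift(drr, mv, tile=8, search=3):
    """Find the integer 2-D shift of the composite DRR that maximizes the
    summed tile-wise correlation with the MV projection."""
    h, w = mv.shape
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(drr, dy, axis=0), dx, axis=1)
            score = sum(
                ncc(shifted[y:y + tile, x:x + tile], mv[y:y + tile, x:x + tile])
                for y in range(0, h - tile + 1, tile)
                for x in range(0, w - tile + 1, tile))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

Scoring per tile rather than globally is what lets locally faint soft-tissue detail contribute: a few high-contrast tiles cannot dominate the sum the way they would in one whole-image correlation.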
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCowan, P. M., E-mail: pmccowan@cancercare.mb.ca; McCurdy, B. M. C.; Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9
Purpose: The in vivo 3D dose delivered to a patient during volumetric modulated arc therapy (VMAT) delivery can be calculated using electronic portal imaging device (EPID) images. These images must be acquired in cine-mode (i.e., “movie” mode) in order to capture the time-dependent delivery information. The angle subtended by each cine-mode EPID image during an arc can be changed via the frame averaging number selected within the image acquisition software. A large frame average number will decrease the EPID’s angular resolution and will result in a decrease in the accuracy of the dose information contained within each image. Alternatively, fewer EPID images acquired per delivery will decrease the overall 3D patient dose calculation time, which is appealing for large-scale clinical implementation. Therefore, the purpose of this study was to determine the optimal frame average value per EPID image, defined as the highest frame averaging that can be used without an appreciable loss in 3D dose reconstruction accuracy for VMAT treatments. Methods: Six different VMAT plans and six different SBRT-VMAT plans were delivered to an anthropomorphic phantom. Delivery was carried out on a Varian 2300ix model linear accelerator (Linac) equipped with an aS1000 EPID running at a frame acquisition rate of 7.5 Hz. An additional PC was set up at the Linac console area, equipped with specialized frame-grabber hardware and software packages allowing continuous acquisition of all EPID frames during delivery. Frames were averaged into “frame-averaged” EPID images using MATLAB. Each frame-averaged data set was used to calculate the in vivo dose to the patient and then compared to the single EPID frame in vivo dose calculation (the single frame calculation represents the highest possible angular resolution per EPID image).
A mean percentage dose difference of the low dose (<20% prescription dose) and high dose (>80% prescription dose) regions was calculated for each frame-averaged scenario for each plan. The authors defined the limit of acceptable accuracy loss as no more than a ±1% mean dose difference in the high dose region. Optimal frame average numbers were then determined as a function of the Linac’s average gantry speed and the dose per fraction. Results: The authors found that 9 and 11 frame averages were suitable for all VMAT and SBRT-VMAT treatments, respectively. This resulted in no more than a 1% loss to any of the dose region’s mean percentage difference when compared to the single frame reconstruction. The optimized number was dependent on the treatment’s dose per fraction and was determined to be as high as 14 for 12 Gy/fraction (fx), 15 for 8 Gy/fx, 11 for 6 Gy/fx, and 9 for 2 Gy/fx. Conclusions: The authors have determined an optimal EPID frame averaging number for multiple VMAT-type treatments. These are given as a function of the dose per fraction and average gantry speed. These optimized values are now used in the authors’ clinical, 3D, in vivo patient dosimetry program. This provides a reduction in calculation time while maintaining the authors’ required level of accuracy in the dose reconstruction.
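The trade-off being optimized follows from simple arithmetic: at the fixed 7.5 Hz acquisition rate, the gantry angle subtended by each averaged image grows linearly with the frame-average number. A sketch (the 4.8 deg/s gantry speed is an assumed example value, not one from the study):

```python
def angle_per_image(frame_average, gantry_speed_deg_s, frame_rate_hz=7.5):
    """Gantry angle subtended by one frame-averaged EPID image."""
    return frame_average * gantry_speed_deg_s / frame_rate_hz

# e.g. a 9-frame average at an assumed 4.8 deg/s average gantry speed:
print(angle_per_image(9, 4.8))  # prints 5.76 (degrees per image)
```

This is why the optimal frame average shifts with dose per fraction: higher doses per fraction slow the average gantry speed, so more frames can be averaged before each image spans an unacceptably large arc.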
Mitigating fluorescence spectral overlap in wide-field endoscopic imaging
Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.
2013-01-01
The number of molecular species suitable for multispectral fluorescence imaging is limited due to the overlap of the emission spectra of indicator fluorophores, e.g., dyes and nanoparticles. To remove fluorophore emission cross-talk in wide-field multispectral fluorescence molecular imaging, we evaluate three different solutions: (1) image stitching, (2) concurrent imaging with cross-talk ratio subtraction algorithm, and (3) frame-sequential imaging. A phantom with fluorophore emission cross-talk is fabricated, and a 1.2-mm ultrathin scanning fiber endoscope (SFE) is used to test and compare these approaches. Results show that fluorophore emission cross-talk could be successfully avoided or significantly reduced. Near term, the concurrent imaging method of wide-field multispectral fluorescence SFE is viable for early stage cancer detection and localization in vivo. Furthermore, a means to enhance exogenous fluorescence target-to-background ratio by the reduction of tissue autofluorescence background is demonstrated. PMID:23966226
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, W; Hrycushko, B; Yan, Y
Purpose: Traditional external beam radiotherapy for cervical cancer requires setup by external skin marks. In order to improve treatment accuracy and reduce planning margins for more conformal therapy, it is essential to monitor tumor positions interfractionally and intrafractionally. We demonstrate the feasibility of monitoring cervical tumor motion online using EPID imaging from the beam’s eye view. Methods: Prior to treatment, 1–2 cylindrical radio opaque markers were implanted into the inferior aspect of the cervix tumor. During external beam treatments on a Varian 2100C delivered with 4-field 3D plans, treatment beam images were acquired continuously by an EPID. A Matlab program was developed to locate the internal markers on MV images. Based on the 2D marker positions obtained from different treatment fields, their 3D positions were estimated for every treatment fraction. Results: There were 398 images acquired during different treatment fractions of three cervical cancer patients. Markers were successfully located on every frame of image at an analysis speed of about 1 second per frame. Intrafraction motions were evaluated by comparing marker positions relative to the position on the first frame of image. The maximum intrafraction motion of the markers was 1.6 mm. Interfraction motions were evaluated by comparing 3D marker positions at different treatment fractions. The maximum interfraction motion was up to 10 mm. Careful comparison found that this is due to patient positioning, since the bony structures shifted with the markers. Conclusion: This method provides a cost-free and simple solution for online tumor tracking for cervical cancer treatment, since it is feasible to acquire and export EPID images with fast analysis in real time. This method does not need any extra equipment or deliver extra dose to patients.
The online tumor motion information will be very useful to reduce planning margins and improve treatment accuracy, which is particularly important for SBRT treatment with long delivery time.« less
Dangerous gas detection based on infrared video
NASA Astrophysics Data System (ADS)
Ding, Kang; Hong, Hanyu; Huang, Likun
2018-03-01
Gas leak detection by infrared imaging offers the significant advantages of high efficiency and remote, stand-off operation. To enhance an observer's perception of detail and effectively improve the detection limit, we propose a new infrared image detection method for gas leaks that combines a background difference method with a multi-frame interval difference method. Compared with traditional frame-difference methods, the proposed multi-frame interval difference method extracts a more complete target image. By fusing the background difference image with the multi-frame interval difference image, we accumulate information about the infrared target image of the gas leak from multiple cues. Experiments demonstrate that the completeness of the gas leakage trace information is enhanced significantly and that real-time detection can be achieved.
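The fusion of the two cues described above can be sketched as follows. This is a minimal pure-Python sketch under stated assumptions: the paper does not specify its thresholds or post-processing, so the function names, the fixed threshold, and the pixel-wise OR fusion are illustrative choices, not the authors' exact algorithm.

```python
def abs_diff(a, b):
    """Element-wise absolute difference of two equal-size grayscale frames."""
    return [[abs(x - y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def background_difference(frame, background, thresh):
    """Binary mask of pixels deviating from a static background estimate."""
    d = abs_diff(frame, background)
    return [[1 if v > thresh else 0 for v in row] for row in d]

def interval_difference(frames, k, thresh):
    """Multi-frame interval difference: compare frame t with frame t-k
    instead of t-1, so a slowly drifting gas plume leaves a fuller trace."""
    cur, prev = frames[-1], frames[-1 - k]
    d = abs_diff(cur, prev)
    return [[1 if v > thresh else 0 for v in row] for row in d]

def fuse(mask_bg, mask_iv):
    """Accumulate evidence from both cues with a pixel-wise OR."""
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(mask_bg, mask_iv)]
```

A larger interval k trades detection latency for sensitivity to slow motion, which is why the interval difference recovers more of the plume than a consecutive-frame difference.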
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology that aims to remove annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods produce stabilized videos of reduced size, our completion method produces full-frame videos by naturally filling in missing image parts through local alignment of image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer that naturally preserves the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
A knowledge-based object recognition system for applications in the space station
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1988-01-01
A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and feature extraction are completed. The data structure of the primitive viewing knowledge base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. Frame-based structural and relational descriptions of some objects were created and stored in a knowledge base. This knowledge base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Both simulated 3D scenes of simple non-overlapping objects and real camera images of 3D objects of low complexity have been successfully interpreted.
GPU-based multi-volume ray casting within VTK for medical applications.
Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-03-01
Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal, short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the visualization toolkit (VTK) pipeline, with the ability to apply different operations (e.g., transformations, clipping, and cropping) to each volume separately. The implemented method is cross-platform (Windows, Linux, and Mac OS X) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³ voxels. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (>15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), or two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets.
The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
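The front-to-back α-blending at the heart of the MVRC can be sketched for a single ray. This is a generic compositing sketch, not the authors' GPU kernel; it assumes samples from all volumes have already been merged into one front-to-back-ordered list, and the 0.99 early-termination threshold is an illustrative choice.

```python
def composite_front_to_back(samples):
    """Front-to-back alpha blending along one ray.
    `samples` is a list of (color, alpha) pairs ordered front to back,
    already merged from all volumes at shared sample positions."""
    acc_c, acc_a = 0.0, 0.0
    for color, alpha in samples:
        # Each new sample is attenuated by the opacity accumulated so far.
        acc_c += (1.0 - acc_a) * alpha * color
        acc_a += (1.0 - acc_a) * alpha
        if acc_a >= 0.99:  # early ray termination: the ray is nearly opaque
            break
    return acc_c, acc_a
```

Front-to-back ordering (rather than back-to-front) is what makes early ray termination possible, which matters for interactive frame rates when five volumes are sampled per ray.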
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
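The joint ML estimate under a Poisson model is classically computed with the multi-frame Richardson-Lucy iteration, which the following 1-D pure-Python sketch illustrates. This is a hedged stand-in for the paper's algorithm: the regularization term, variance-based frame selection, and PSF estimation model are omitted, and the circular convolution and averaging of per-frame corrections are simplifying assumptions.

```python
def convolve(signal, psf):
    """Circular 1-D convolution (a stand-in for the 2-D image case)."""
    n = len(signal)
    return [sum(signal[(i - j) % n] * psf[j] for j in range(len(psf)))
            for i in range(n)]

def rl_multiframe(frames, psfs, iters):
    """Multi-frame Richardson-Lucy update, the classical iterative ML
    solution under a Poisson noise model: each frame contributes a
    multiplicative correction, and the corrections are averaged."""
    n = len(frames[0])
    est = [sum(f[i] for f in frames) / len(frames) for i in range(n)]
    for _ in range(iters):
        corr = [0.0] * n
        for frame, psf in zip(frames, psfs):
            blurred = convolve(est, psf)
            ratio = [f / max(b, 1e-12) for f, b in zip(frame, blurred)]
            # Back-project with the flipped PSF (exact for symmetric PSFs;
            # a full implementation uses the proper adjoint operator).
            back = convolve(ratio, psf[::-1])
            corr = [c + b for c, b in zip(corr, back)]
        est = [e * c / len(frames) for e, c in zip(est, corr)]
    return est
```

The multiplicative form keeps the estimate non-negative, which is one reason Richardson-Lucy is the standard choice for Poisson (photon-counting) data such as AO imagery.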
High frame-rate en face optical coherence tomography system using KTN optical beam deflector
NASA Astrophysics Data System (ADS)
Ohmi, Masato; Shinya, Yusuke; Imai, Tadayuki; Toyoda, Seiji; Kobayashi, Junya; Sakamoto, Tadashi
2017-02-01
We developed a high frame-rate en face optical coherence tomography (OCT) system using a KTa1-xNbxO3 (KTN) optical beam deflector. In the imaging system, fast-axis scanning was performed at 200 kHz by the KTN optical beam deflector, while slow-axis scanning was performed at 800 Hz by a galvanometer mirror. As a preliminary experiment, we succeeded in obtaining en face OCT images of a human fingerprint at a frame rate of 800 fps, the highest frame rate yet obtained with time-domain (TD) en face OCT imaging. A 3D-OCT image of a sweat gland was also obtained with our imaging system.
Face landmark point tracking using LK pyramid optical flow
NASA Astrophysics Data System (ADS)
Zhang, Gang; Tang, Sikan; Li, Jiaquan
2018-04-01
LK pyramid optical flow is an effective method for object tracking in video; in this paper it is used to track face landmark points. Seven landmark points are considered: the outer and inner corners of the left eye, the inner and outer corners of the right eye, the tip of the nose, and the left and right corners of the mouth. The landmark points are marked by hand in the first frame, and tracking performance is analyzed over subsequent frames. Two kinds of conditions are considered: single factors (the normalized case, pose variation with slow movement, expression variation, illumination variation, occlusion, frontal face with rapid movement, posed face with rapid movement) and combinations of factors (pose and illumination variation, pose and expression variation, pose variation and occlusion, illumination and expression variation, expression variation and occlusion). Global and local measures are introduced to evaluate tracking performance under the different factors and combinations. The global measures comprise the number of images aligned successfully, the average alignment error, and the number of images aligned before failure; the local measures comprise the number of images aligned successfully for each facial component and the average alignment error per component. To test landmark-tracking performance under the different cases, experiments were carried out on image sequences gathered by the authors. Results show that the LK pyramid optical flow method can track face landmark points under the normalized case, expression variation, illumination variation that does not affect facial details, and pose variation, and that different factors and combinations of factors affect alignment performance differently for different landmark points.
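The core of Lucas-Kanade tracking is solving a small linear system per landmark. The sketch below is a single-level, single-point LK step in pure Python under stated assumptions (central-difference gradients, a square window, images as nested lists); production trackers use the full pyramidal, iterative version, e.g. OpenCV's `calcOpticalFlowPyrLK`.

```python
def lk_step(prev_img, next_img, x, y, win=1):
    """One Lucas-Kanade step for a single point: solve the 2x2 normal
    equations built from spatial gradients (central differences) and the
    temporal difference over a (2*win+1)^2 window centered at (x, y)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (prev_img[j][i + 1] - prev_img[j][i - 1]) / 2.0
            iy = (prev_img[j + 1][i] - prev_img[j - 1][i]) / 2.0
            it = next_img[j][i] - prev_img[j][i]
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 += ix * it;  b2 += iy * it
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return 0.0, 0.0  # aperture problem: gradient structure too weak
    # Displacement d = -A^{-1} b, from the brightness-constancy equation.
    dx = -(a22 * b1 - a12 * b2) / det
    dy = -(a11 * b2 - a12 * b1) / det
    return dx, dy
```

The pyramid in pyramidal LK simply repeats this step from coarse to fine resolution so that large inter-frame motions (the paper's "rapidly moving" cases) stay within the window at some level.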
NASA Astrophysics Data System (ADS)
da Silva Nunes, L. C.; dos Santos, Paulo Acioly M.
2004-10-01
We present an application of the stereomicroscope to recovering obliterated firearm serial numbers. We investigate a promising, inexpensive method that combines non-destructive and destructive techniques. Using a stereomicroscope coupled with a digital camera and a flexible cold light source, we capture images of the damaged area; with continued polishing, and sometimes with the help of image processing techniques, we can enhance the observed images, which can also be recorded as evidence. This method has already proven useful in certain cases involving aluminum pistol frames with laser-printed dotted serial numbers, where etching techniques are unsuccessful. We can also examine acid-treated steel surfaces and enhance the images of recovered serial numbers, which sometimes lack definition.
NASA Astrophysics Data System (ADS)
Osada, Masakazu; Tsukui, Hideki
2002-09-01
Picture Archiving and Communication Systems (PACS) connect imaging modalities, image archives, and image workstations to reduce film-handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 × 480 resolution, and 24-bit color, while requiring sufficient image quality for clinical review. We have developed a PACS able to manage ultrasound and endoscopy cine images at this resolution and frame rate, and we investigate suitable compression methods and compression rates for clinical image review. Results show that clinicians require frame-by-frame forward and backward review of cine images, because they carefully examine motion images for particular color patterns that may appear in a single frame. To satisfy this requirement, we chose Motion JPEG, installed it, and confirmed that this specific pattern could be captured. To determine an acceptable compression ratio, we performed a subjective evaluation. No subjects could tell the difference between the original uncompressed images and 1:10 lossy JPEG-compressed images. One subject could distinguish 1:20 lossy compressed images from the originals, although the quality remained acceptable. Thus, ratios of 1:10 to 1:20 are acceptable for reducing data volume and cost while maintaining quality for clinical review.
Modified Mean-Pyramid Coding Scheme
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Romer, Richard
1996-01-01
A modified mean-pyramid coding scheme requires transmission of slightly less data, reducing the data-expansion factor from 1/3 to 1/12. Such schemes support progressive transmission of image data in a sequence of frames: a coarse version of the image is reconstructed after receipt of the first frame, and increasingly refined versions are reconstructed after receipt of each subsequent frame.
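The plain mean pyramid underlying the scheme can be sketched as follows. This pure-Python sketch assumes a square image with power-of-two sides; it builds the ordinary pyramid (whose extra levels give the 1/3 expansion factor the abstract mentions), while the paper's modification, which lowers the factor to 1/12, is not detailed in the abstract and is not reproduced here.

```python
def mean_pyramid(image):
    """Build a mean pyramid: each level halves the resolution by
    averaging disjoint 2x2 blocks. Returned coarse-first, i.e. in
    progressive-transmission order."""
    levels = [image]
    cur = image
    while len(cur) > 1:
        half = [[(cur[2 * r][2 * c] + cur[2 * r][2 * c + 1] +
                  cur[2 * r + 1][2 * c] + cur[2 * r + 1][2 * c + 1]) / 4.0
                 for c in range(len(cur[0]) // 2)]
                for r in range(len(cur) // 2)]
        levels.append(half)
        cur = half
    return levels[::-1]  # coarsest level is transmitted first
```

Transmitting the levels coarse-first is exactly the progressive behavior described: the single-pixel top level already yields a (very coarse) reconstruction, and each subsequent level quadruples the resolution.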
Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography
NASA Astrophysics Data System (ADS)
Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy
2014-09-01
A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has recently been fabricated at Teledyne Imaging Sensors. The primary intended use of the sensor is multi-frame 800 MeV proton radiography at LANL. The core of the hybrid is a large (48 × 49 mm²) stitched CMOS chip with a 1100 × 1100 pixel count and a minimum shutter speed of 50 ns. The performance parameters of this chip are compared with those of the first-generation 3-frame, 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, both the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns-wide proton beam pulses and can record radiographic movies of up to ~1000 frames, typically 3 to 30 minutes in duration. The performance of the global electronic shutter is discussed and compared with that of a high-resolution commercial front-illuminated monolithic CMOS imager.
All-optical framing photography based on hyperspectral imaging method
NASA Astrophysics Data System (ADS)
Liu, Shouxian; Li, Yu; Li, Zeren; Chen, Guanghua; Peng, Qixian; Lei, Jiangbo; Liu, Jun; Yuan, Shuyun
2017-02-01
We propose and experimentally demonstrate a new all-optical framing photography that uses hyperspectral imaging methods to record a chirped pulse's temporal-spatial information. The proposed method consists of three parts: (1) a chirped laser pulse encodes temporal phenomena onto wavelengths; (2) a lenslet array generates a series of integral pupil images; (3) a dispersive device disperses the integral images into the void space of the image sensor. Compared with ultrafast all-optical framing technology (Frayer, 2013, 2014) and sequentially timed all-optical mapping photography (Nakagawa et al., 2014, 2015), our method makes it convenient to adjust the temporal resolution and to flexibly increase the number of frames. Theoretically, the temporal resolution of our scheme is limited by the amount of dispersion added to a Fourier-transform-limited femtosecond laser pulse. Correspondingly, the optimal number of frames is determined by the ratio of the observational time window to the temporal resolution, and the effective pixels of each frame are limited mainly by the M × N dimensions of the lenslet array. For example, if a 40 fs Fourier-transform-limited pulse is stretched to 10 ps, a CCD camera with 2048 × 3072 pixels can record 15 framing images with a temporal resolution of 650 fs and an image size of 100 × 100 pixels. Because the recording part has a spectrometer structure, it offers the further advantage that not only amplitude images but also frequency-domain interferograms can be captured. It is therefore comparatively easy to capture fast dynamics in the refractive-index change of materials. A further dynamic experiment is being conducted.
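The frame-count relation in the example above is easy to check numerically. The sketch below encodes the two stated constraints (time window over temporal resolution, and how many frames of a given size fit on the sensor); the function names are ours, not the paper's.

```python
def frame_count(window_ps, resolution_fs):
    """Optimal number of frames: observational time window divided by
    the temporal resolution (window converted from ps to fs)."""
    return int(window_ps * 1000 / resolution_fs)

def frames_on_sensor(sensor_w, sensor_h, frame_w, frame_h):
    """Upper bound on frame count from sensor area alone
    (simple grid packing, optics ignored)."""
    return (sensor_w // frame_w) * (sensor_h // frame_h)
```

With the paper's numbers, a 10 ps window at 650 fs resolution gives 15 frames, comfortably below the 600 frames of 100 × 100 pixels that a 2048 × 3072 sensor could hold, so the lenslet-array dimensions, not sensor area, are the binding constraint.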
Butail, Sachit; Salerno, Philip; Bollt, Erik M; Porfiri, Maurizio
2015-12-01
Traditional approaches to the analysis of collective behavior entail digitizing the position of each individual, followed by evaluation of pertinent group observables, such as cohesion and polarization. Machine learning may enable considerable advancements in this area by affording the classification of these observables directly from images. While such methods have been successfully implemented in the classification of individual behavior, their potential in the study of collective behavior is largely untested. In this paper, we compare three methods for the analysis of collective behavior: simple tracking (ST) without resolving occlusions, machine learning with real data (MLR), and machine learning with synthetic data (MLS). These methods are evaluated on videos recorded from an experiment studying the effect of ambient light on the shoaling tendency of Giant danios. In particular, we compute the average nearest-neighbor distance (ANND) and polarization using the three methods and compare the values with manually verified ground-truth data. To further assess possible dependence on sampling rate when computing ANND, the comparison is also performed at a low frame rate. Results show that while ST is the most accurate at the higher frame rate for both ANND and polarization, at the low frame rate there is no significant difference in ANND accuracy between the three methods. In terms of computational speed, MLR and MLS take significantly less time to process an image, with MLS better addressing constraints related to the generation of training data. Finally, all methods successfully detect a significant difference in ANND as the ambient light intensity is varied, irrespective of the direction of intensity change.
Joint Transform Correlation for face tracking: elderly fall detection application
NASA Astrophysics Data System (ADS)
Katz, Philippe; Aron, Michael; Alfalou, Ayman
2013-03-01
In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i); the reference image to be used in the next frame (frame i+1) is then updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences covering several situations and different JTC parameters, carried out to quantify their effects on tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest, etc.); (ii) integration of the tracking algorithm into an elderly fall detection application. The first reference image is a face detected by means of Haar descriptors, which is then localized in each new video image by our tracking method. To avoid a bad update of the reference frame, a method based on comparing image intensity histograms is proposed and integrated into our algorithm; this step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face tracking step. A supplementary fall detection step, based on vertical acceleration and position, will be added and studied in further work.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high frame rate visible CCD camera capable of operation at up to 200 frames per second is described. The camera produces a 256 × 256 pixel image by using one quadrant of a 512 × 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 × 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
Time-series animation techniques for visualizing urban growth
Acevedo, W.; Masuoka, P.
1997-01-01
Time-series animation is a visually intuitive way to display urban growth. Animations of land-use change for the Baltimore-Washington region were generated by displaying a series of images in sequential order. Before creating an animation, various issues that affect its appearance should be considered, including the number of original data frames to use, the optimal animation display speed, the number of intermediate frames to create between the known frames, and the output media on which the animations will be displayed. To create new frames between the known years of data, the change in each theme (i.e., urban development, water bodies, transportation routes) must be characterized and an algorithm developed to create the in-between frames. Example time-series animations were created using a temporal GIS database of the Baltimore-Washington area. Creating the animations involved generating raster images of the urban development, water bodies, and principal transportation routes; overlaying the raster images on a background image; and importing the frames to a movie file. Three-dimensional perspective animations were created by draping each image over digital elevation data prior to importing the frames to a movie file. © 1997 Elsevier Science Ltd.
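The in-between-frame step can be sketched generically. The linear cross-dissolve below is only the simplest stand-in, assumed for illustration: for categorical land-use themes the paper characterizes change per theme rather than blending pixel values, so function names and the interpolation rule here are ours.

```python
def inbetween(frame_a, frame_b, t):
    """Linear cross-dissolve: synthesize an intermediate frame at
    fraction t (0..1) between two known data frames."""
    return [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

def animate(keyframes, n_between):
    """Expand a list of key frames, inserting n_between interpolated
    frames in each gap, preserving the key frames themselves."""
    out = []
    for a, b in zip(keyframes, keyframes[1:]):
        out.append(a)
        for k in range(1, n_between + 1):
            out.append(inbetween(a, b, k / (n_between + 1)))
    out.append(keyframes[-1])
    return out
```

The number of in-between frames directly trades animation smoothness against fidelity to the known data years, one of the design issues the abstract lists.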
The impact of verbal framing on brain activity evoked by emotional images.
Kisley, Michael A; Campbell, Alana M; Larson, Jenna M; Naftz, Andrea E; Regnier, Jesse T; Davalos, Deana B
2011-12-01
Emotional stimuli generally command more brain processing resources than non-emotional stimuli, but the magnitude of this effect is subject to voluntary control. Cognitive reappraisal represents one type of emotion regulation that can be voluntarily employed to modulate responses to emotional stimuli. Here, the late positive potential (LPP), a specific event-related brain potential (ERP) component, was measured in response to neutral, positive and negative images while participants performed an evaluative categorization task. One experimental group adopted a "negative frame" in which images were categorized as negative or not. The other adopted a "positive frame" in which the exact same images were categorized as positive or not. Behavioral performance confirmed compliance with random group assignment, and peak LPP amplitude to negative images was affected by group membership: brain responses to negative images were significantly reduced in the "positive frame" group. This suggests that adopting a more positive appraisal frame can modulate brain activity elicited by negative stimuli in the environment.
Correction of projective distortion in long-image-sequence mosaics without prior information
NASA Astrophysics Data System (ADS)
Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie
2010-04-01
Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. 
The proposed method is shown to be effective and suitable for real-time implementation.
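The scale-resetting idea above can be illustrated with a small sketch. One way to define the overall scale of the 2 × 2 affine part, consistent with the abstract's SVD analysis, is the geometric mean of its singular values, which equals sqrt(|det|); this choice and the function names are our assumptions, since the paper's exact decomposition is not given in the abstract.

```python
import math

def affine_scale(a, b, c, d):
    """Overall scale of the 2x2 affine part [[a, b], [c, d]]:
    the geometric mean of its singular values, sqrt(|det|)."""
    return math.sqrt(abs(a * d - b * c))

def normalize_scale(a, b, c, d):
    """Divide out the scale so it becomes 1, preventing cumulative
    shrinkage of frames as more transforms are chained into the mosaic."""
    s = affine_scale(a, b, c, d)
    return (a / s, b / s, c / s, d / s)
```

Because the per-frame scale factor is typically very close to 1, resetting it to exactly 1 introduces only a small matching error while keeping the transformed frame size constant, which is the paper's trade-off.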
Jing, Bowen; Tang, Shanshan; Wu, Liang; Wang, Supin; Wan, Mingxi
2016-12-01
Ultrafast plane wave ultrasonography is employed in this study to visualize the vibration of the larynx and quantify the vibration phase as well as the vibration amplitude of the laryngeal tissue. Ultrasonic images were obtained at 5000 to 10,000 frames/s in the coronal plane at the level of the glottis. Although the image quality degraded when the imaging mode was switched from conventional ultrasonography to ultrafast plane wave ultrasonography, certain anatomic structures such as the vocal folds, as well as the sub- and supraglottic structures, including the false vocal folds, can be identified in the ultrafast plane wave ultrasonic image. The periodic vibration of the vocal fold edge could be visualized in the recorded image sequence during phonation. Furthermore, a motion estimation method was used to quantify the displacement of laryngeal tissue from hundreds of frames of ultrasonic data acquired. Vibratory displacement waveforms of the sub- and supraglottic structures were successfully obtained at a high level of ultrasonic signal correlation. Moreover, statistically significant differences in vibration pattern between the sub- and supraglottic structures were found. Variation of vibration amplitude along the subglottic mucosal surface is significantly smaller than that along the supraglottic mucosal surface. Phase delay of vibration along the subglottic mucosal surface is significantly smaller than that along the supraglottic mucosal surface. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Mikic, I.; Krucinski, S.; Thomas, J. D.
1998-01-01
This paper presents a method for segmentation and tracking of cardiac structures in ultrasound image sequences. The developed algorithm is based on the active contour framework. This approach requires initial placement of the contour close to the desired position in the image, usually an object outline. The best contour shape and position are then calculated under the assumption that, at this configuration, a global energy function associated with the contour attains its minimum. Active contours can be used for tracking by taking the solution from the previous frame as the initial position in the present frame; such an approach, however, fails for large displacements of the object of interest. This paper presents a technique that incorporates information on pixel velocities (optical flow) into the estimate of the initial contour to enable tracking of fast-moving objects. The algorithm was tested on several ultrasound image sequences, each covering one complete cardiac cycle. The contour successfully tracked the boundaries of the mitral valve leaflets, the aortic root, and the endocardial borders of the left ventricle. The algorithm-generated outlines were compared against manual tracings by expert physicians; the automated method produced contours that were within the bounds of intraobserver variability.
Strategic options towards an affordable high-performance infrared camera
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.
2016-05-01
Despite well-documented advantages, the promise of infrared (IR) imaging reaching the low cost achieved by CMOS sensors has been hampered by the inability to attain the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm. Banpil Photonics is developing affordable IR cameras by adopting new strategies to accelerate the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640 × 512 pixel InGaAs uncooled system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption not only by demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, readout electronics compatible with multiple focal-plane arrays, and dense, ultra-small-pixel-pitch devices.
Optical flow estimation on image sequences with differently exposed frames
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-09-01
Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames are combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
Using Digital Radiography To Image Liquid Nitrogen in Voids
NASA Technical Reports Server (NTRS)
Cox, Dwight; Blevins, Elana
2007-01-01
Digital radiography by use of (1) a field-portable x-ray tube that emits low-energy x-rays and (2) an electronic imaging x-ray detector has been found to be an effective technique for detecting liquid nitrogen inside voids in thermal-insulation panels. The technique was conceived as a means of investigating cryopumping (including cryoingestion) as a potential cause of loss of thermal-insulation foam from space-shuttle external fuel tanks. The technique could just as well be used to investigate cryopumping and cryoingestion in other settings. In images formed by use of low-energy x-rays, one can clearly distinguish between voids filled with liquid nitrogen and those filled with gaseous nitrogen or other gases. Conventional film radiography is of some value, but yields only non-real-time still images that do not show the time dependencies of levels of liquids in voids. In contrast, the present digital radiographic technique yields a succession of images in real time at a rate of about 10 frames per second. The digitized images can be saved for subsequent analysis to extract data on the time dependencies of liquid levels and, hence, of flow paths and rates of filling and draining. The succession of images also amounts to a real-time motion picture that can be used as a guide to adjustment of test conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Olsen, J.; Parikh, P.
2014-06-01
Purpose: To evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), and to compare the strengths and weaknesses of each method, with the aim of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparison. To evaluate the segmentation results, expert manual contouring of the organs or tumor by a physician was used as ground truth. Metric values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting the motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting the bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment the liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT.
Future studies include a selection of conformal segmentation methods based on image/organ-specific information, and different filtering methods and their influence on the segmentation results. Parag Parikh receives a research grant from ViewRay. Sasa Mutic has consulting and research agreements with ViewRay. Yanle Hu receives travel reimbursement from ViewRay. Iwan Kawrakow and James Dempsey are ViewRay employees.
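The four agreement metrics reported in this comparison can all be computed from the confusion counts of a binary segmentation mask against a ground-truth mask. A minimal sketch (function name and array handling are ours, not from the study):

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Sensitivity, specificity, Jaccard similarity and Dice coefficient
    for two binary masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)      # segmented and in ground truth
    tn = np.sum(~pred & ~truth)    # correctly left out
    fp = np.sum(pred & ~truth)     # over-segmentation
    fn = np.sum(~pred & truth)     # missed structure
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, jaccard, dice
```

For a motion series, these would be evaluated frame by frame against the expert contours and then summarized per organ.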
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with this problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light-streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by saturated pixels separately by modeling a weighting matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that the restored images have richer details and fewer negative effects compared to state-of-the-art methods.
Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation
NASA Astrophysics Data System (ADS)
Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas
2013-03-01
High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
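The blending step described above reduces to a per-pixel convex combination of the two renderings under the user's weighting surface. A minimal sketch (the function name is ours; how the surface is built from touches is not specified here):

```python
import numpy as np

def touch_blend(original, hdr, weight):
    """Blend HDR and non-HDR versions of the same image with a per-pixel
    weighting surface in [0, 1] (1 = full HDR, 0 = original)."""
    w = np.clip(np.asarray(weight, dtype=float), 0.0, 1.0)
    return w * hdr + (1.0 - w) * original
```

In a photo-editor setting, each touch would bump the weighting surface locally (e.g. with a Gaussian splat) before re-blending, so the hybrid image updates interactively.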
NASA Astrophysics Data System (ADS)
Hong, Inki; Cho, Sanghee; Michel, Christian J.; Casey, Michael E.; Schaefferkoetter, Joshua D.
2014-09-01
A new data handling method is presented for improving the image noise distribution and reducing bias when reconstructing very short frames from low-count dynamic PET acquisitions. The new method, termed ‘Complementary Frame Reconstruction’ (CFR), involves the indirect formation of a count-limited emission image in a short frame through subtraction of two frames with longer acquisition time, where the short-frame data are excluded from the second long frame before reconstruction. This approach can be regarded as an alternative to the AML algorithm recently proposed by Nuyts et al. as a method to reduce the bias in maximum likelihood expectation maximization (MLEM) reconstruction of count-limited data. CFR uses long-scan emission data to stabilize the reconstruction and avoids modification of algorithms such as MLEM. The subtraction between two long-frame images naturally allows negative voxel values and significantly reduces bias introduced in the final image. Simulations based on phantom and clinical data were used to evaluate the accuracy of the reconstructed images in representing the true activity distribution. Applicability to determining the arterial input function in human and small-animal studies is also explored. In situations with limited count rate, e.g. pediatric applications, gated abdominal or cardiac studies, etc., or when using limited doses of short-lived isotopes such as 15O-water, the proposed method will likely be preferred over independent frame reconstruction to address bias and noise issues.
Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs).
McGarry, C K; Grattan, M W D; Cosgrove, V P
2007-12-07
This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than images being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min(-1) and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at d(max) in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min(-1) low-dose acquisition mode. Routine EPID QC contrast tolerance (+/-10) and action (+/-20) levels using the PIPSPro phantom based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D) have been introduced. 
The dose at d(max) from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.
An adaptive enhancement algorithm for infrared video based on modified k-means clustering
NASA Astrophysics Data System (ADS)
Zhang, Linze; Wang, Jingqi; Wu, Wen
2016-09-01
In this paper, we have proposed a video enhancement algorithm to improve the output video of infrared cameras. Video obtained by an infrared camera is sometimes very dark when there is no clear target; in this case, the infrared video is first divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is then histogram-equalized according to the amount of information it contains; we also used a method to solve the problem of final cluster centers falling close to each other in some cases. For the subsequent frame images, the initial cluster centers are determined by the final cluster centers of the previous frame, and the histogram equalization of each sub-image is carried out after segmentation based on K-means clustering. The histogram equalization stretches the gray values of the image over the whole gray-level range, and the gray-level range assigned to each sub-image is determined by its ratio of pixels to the frame image. Experimental results show that this algorithm can improve the contrast of infrared video in which a dim night scene leaves the target indistinct, and can adaptively reduce, within a certain range, the negative effect of overexposed pixels.
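The frame-to-frame warm start described above — seeding each frame's clustering with the previous frame's final centers — can be sketched with a simple 1-D k-means on gray levels (function names and iteration count are ours, and the paper's fix for coincident centers is not reproduced):

```python
import numpy as np

def kmeans_gray(pixels, centers, iters=20):
    """1-D k-means on gray levels.  `centers` seeds the clustering, so a
    new frame can be initialized with the previous frame's final centers,
    which typically converges in very few iterations."""
    pixels = np.asarray(pixels, dtype=float)
    centers = np.asarray(centers, dtype=float).copy()
    labels = np.zeros(pixels.size, dtype=int)
    for _ in range(iters):
        # assign each pixel to the nearest center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its cluster
        for k in range(centers.size):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean()
    return centers, labels
```

After clustering, each label set defines one sub-image whose histogram is equalized over the gray range allotted to it.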
Computer-aided target tracking in motion analysis studies
NASA Astrophysics Data System (ADS)
Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.
1990-08-01
Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
NASA Technical Reports Server (NTRS)
Vanek, Michael D. (Inventor)
2014-01-01
A method for creating a digital elevation map ("DEM") from frames of flash LIDAR data includes generating a first distance R_i from a first detector i to a first point on a surface S_i. After defining a map with a mesh Θ having cells k, a first array S(k), a second array M(k), and a third array D(k) are initialized. The first array corresponds to the surface, the second array corresponds to the elevation map, and the third array D(k) receives the output for the DEM. The surface is projected onto the mesh Θ, so that a second distance R_k from a second point on the mesh Θ to the detector can be found. From this, a height may be calculated, which permits the generation of a digital elevation map. Also, using sequential frames of flash LIDAR data, vehicle control is possible using an offset between successive frames.
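The projection-onto-mesh step amounts to binning range returns into grid cells and recording a height per cell. A minimal sketch of that binning, not the patented method itself (cell bookkeeping, names, and the max-height rule are our illustrative choices):

```python
import numpy as np

def build_dem(points, cell, nx, ny):
    """Bin (x, y, z) LIDAR returns onto a regular nx-by-ny mesh,
    keeping the highest return per cell; empty cells stay NaN."""
    dem = np.full((ny, nx), np.nan)
    for x, y, z in points:
        i, j = int(y // cell), int(x // cell)   # cell indices on the mesh
        if 0 <= i < ny and 0 <= j < nx:
            if np.isnan(dem[i, j]) or z > dem[i, j]:
                dem[i, j] = z
    return dem
```

Differencing DEMs built from successive flash LIDAR frames would then yield the inter-frame offset mentioned for vehicle control.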
NASA Astrophysics Data System (ADS)
Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2015-12-01
In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) in the multi-frame framework since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of a SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with MWNNM method is considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
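WNNM itself performs iteratively reweighted singular-value shrinkage on grouped patches; as a simplified illustration of the multi-frame idea only — stack nearby B-scans, shrink the singular values of the stack (nuclear-norm style), then average — one might write the following (the uniform threshold `tau` replaces the per-value weights of true WNNM):

```python
import numpy as np

def denoise_average(bscans, tau):
    """Stack nearby B-scans as columns, soft-threshold the singular
    values of the stack, then average the denoised frames into one
    low-noise B-scan.  A stand-in for MWNNM, not the paper's algorithm."""
    X = np.stack([b.ravel() for b in bscans], axis=1)   # pixels x frames
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)                        # shrink noise modes
    Xd = (U * s) @ Vt
    return Xd.mean(axis=1).reshape(bscans[0].shape)
```

Because neighbouring B-scans are highly correlated, the stack is close to low rank, so shrinking the small singular values suppresses speckle while the final average boosts SNR.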
1.0 T open-configuration magnetic resonance-guided microwave ablation of pig livers in real time
Dong, Jun; Zhang, Liang; Li, Wang; Mao, Siyue; Wang, Yiqi; Wang, Deling; Shen, Lujun; Dong, Annan; Wu, Peihong
2015-01-01
The current fastest frame rate for a single image slice in MR-guided ablation is 1.3 seconds, which means delayed imaging relative to the average human reaction time of 0.33 seconds. This delayed imaging greatly limits the accuracy of puncture and ablation, and can result in puncture injury or incomplete ablation. To overcome delayed imaging and obtain real-time imaging, the study was performed using a 1.0-T whole-body open-configuration MR scanner in the livers of 10 Wuzhishan pigs. A respiratory-triggered liver matrix array was explored to guide and monitor microwave ablation in real time. We successfully performed the entire ablation procedure under real-time MR guidance at 0.202 s, the fastest frame rate for a single image slice. Puncture times ranged from 23 min down to 3 min; the mean puncture time was shortened to 4.75 minutes and the mean ablation time was 11.25 minutes at a power of 70 W. The mean ablation-zone length and width were 4.62 ± 0.24 cm and 2.64 ± 0.13 cm, respectively. No complications or ablation-related deaths were observed during or after ablation. In the current study, MR was able to guide microwave ablation in real time, as ultrasound does, showing great potential for the treatment of liver tumors. PMID:26315365
First video rate imagery from a 32-channel 22-GHz aperture synthesis passive millimetre wave imager
NASA Astrophysics Data System (ADS)
Salmon, Neil A.; Macpherson, Rod; Harvey, Andy; Hall, Peter; Hayward, Steve; Wilkinson, Peter; Taylor, Chris
2011-11-01
The first video rate imagery from a proof-of-concept 32-channel 22 GHz aperture synthesis imager is reported. This imager has been brought into operation over the first half of year 2011. Receiver noise temperatures have been measured to be ~453 K, close to original specifications, and the measured radiometric sensitivity agrees with the theoretical predictions for aperture synthesis imagers (2 K for a 40 ms integration time). The short-term (few seconds) magnitude stability in the cross-correlations, expressed as a fraction, was measured to have a mean of 3.45×10^-4 with a standard deviation of ~2.30×10^-4, whilst the figure for the phase was found to have a mean of essentially zero with a standard deviation of 0.0181°. The susceptibility of the system to aliasing for point sources in the scene was examined and found to be well understood. The system was calibrated and security-relevant indoor near-field and outdoor far-field imagery was created, at frame rates ranging from 1 to 200 frames per second. The results prove that an aperture synthesis imager can generate imagery in the near-field regime, successfully coping with the curved wave-fronts. The original objective of the project, to deliver a Technology Readiness Level (TRL) 4 laboratory demonstrator for aperture synthesis passive millimetre wave (PMMW) imaging, has been achieved. The project was co-funded by the Technology Strategy Board and the Royal Society of the United Kingdom.
NASA Technical Reports Server (NTRS)
Lawson, R. Paul
2000-01-01
SPEC incorporated designed, built and operated a new instrument, called a pi-Nephelometer, on the NASA DC-8 for the SUCCESS field project. The pi-Nephelometer casts an image of a particle on a 400,000 pixel solid-state camera by freezing the motion of the particle using a 25 ns pulsed, high-power (60 W) laser diode. Unique optical imaging and particle detection systems precisely detect particles and define the depth-of-field so that at least one particle in the image is almost always in focus. A powerful image processing engine processes frames from the solid-state camera, and identifies and records regions of interest (i.e. particle images) in real time. Images of ice crystals are displayed and recorded with 5 micron pixel resolution. In addition, a scattered-light system simultaneously measures the scattering phase function of the imaged particle. The system consists of twenty-eight 1-mm optical fibers connected to microlenses bonded on the surface of avalanche photodiodes (APDs). Data collected with the pi-Nephelometer during the SUCCESS field project were reported in a special issue of Geophysical Research Letters. The pi-Nephelometer provided the basis for development of a commercial imaging probe, called the cloud particle imager (CPI), which has been installed on several research aircraft and used in more than a dozen field programs.
Development Of A Dynamic Radiographic Capability Using High-Speed Video
NASA Astrophysics Data System (ADS)
Bryant, Lawrence E.
1985-02-01
High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging at up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 kV and 300 kV constant-potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt-60 source. Use of a maximum-aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high-speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling-type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement, and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details, with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.
Smear correction of highly variable, frame-transfer CCD images with application to polarimetry.
Iglesias, Francisco A; Feller, Alex; Nagaraju, Krishnappa
2015-07-01
Image smear, produced by the shutterless operation of frame-transfer CCD detectors, can be detrimental for many imaging applications. Existing algorithms used to numerically remove smear do not contemplate cases where intensity levels change considerably between consecutive frame exposures. In this report, we reformulate the smearing model to include specific variations of the sensor illumination. The corresponding desmearing expression and its noise properties are also presented and demonstrated in the context of fast imaging polarimetry.
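For the classical case the report generalizes — a scene that does not change between exposures — smear removal has a closed form: each column picks up a smear term proportional to that column's total signal during the frame transfer, so the column sums can be inverted directly. A sketch of that baseline model (r and the function name are ours; the paper's contribution is extending this to illumination that varies between consecutive frames):

```python
import numpy as np

def desmear(obs, r):
    """Remove frame-transfer smear for a time-invariant scene.
    r = t_line / t_exposure; each column j accumulates smear
    S_j = r * sum_i true[i, j] during the charge transfer,
    so obs = true + S_j for every row of column j."""
    n = obs.shape[0]
    # column sums: sum_obs = sum_true * (1 + n * r)  ->  invert
    col_true_sum = obs.sum(axis=0) / (1.0 + n * r)
    return obs - r * col_true_sum
```

When illumination changes strongly between exposures, this constant-scene assumption fails, which is the situation the reformulated model in the report addresses.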
High frame rate imaging systems developed in Northwest Institute of Nuclear Technology
NASA Astrophysics Data System (ADS)
Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli
2007-01-01
This paper presents high frame rate imaging systems developed in the Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type utilizes the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a flash-light-event capture mode; a specific time sequence is designed to satisfy this requirement. The camera image data can be transmitted to a remote area by coaxial or optical fiber cable and then stored. The second type utilizes the PHOTOBIT complementary metal-oxide-semiconductor (CMOS) PB-MV13 as the image sensor, which has a high resolution of 1280 (H) × 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps at full frame and 4000 fps partially. The prototype scheme of the system is presented. The third type adopts charge-coupled devices (CCDs) as the imagers; MINTRON MTV-1881EX, DALSA CA-D1 and CA-D6 camera heads are used in the systems' development. A comparison of the features of the RA100A-, PB-MV13-, and CA-D6-based systems is given at the end.
Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.
2014-01-01
We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem on the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.
2014-06-23
ISS040-E-017377 (23 June 2014) --- One of the Expedition 40 crew members aboard the International Space Station recorded this image showing several states in the USA and a small part of Mexico, including Baja California, on June 23, 2014. Parts of Nevada are visible in the bottom of the frame. The area in the Mojave Desert where many space shuttle missions successfully ended is visible near the scene's center. The Gulf of Cortez and several hundred miles of the Pacific coast line of Mexico and California are visible in the top portion of the photo. The heavily populated Los Angeles Basin is just above the Mojave site of shuttle landings, with the San Diego area partially obscured by the docked Russian Soyuz vehicle in the foreground. The Salton Sea is just above left center frame.
Approach for counting vehicles in congested traffic flow
NASA Astrophysics Data System (ADS)
Tan, Xiaojun; Li, Jun; Liu, Wei
2005-02-01
More and more image sensors are used in intelligent transportation systems. In practice, occlusion is always a problem when counting vehicles in congested traffic. This paper presents an approach to solve the problem, consisting of three main procedures. Firstly, a new background subtraction algorithm is performed, with the aim of segmenting moving objects from an illumination-variant background. Secondly, object tracking is performed using the CONDENSATION algorithm, which avoids the problem of matching vehicles across successive frames. Thirdly, an inspection procedure is executed to count the vehicles: when a bus first occludes a car and then moves away a few frames later, the car reappears in the scene, and the inspection procedure should find the "new" car and add it as a tracked object.
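The first procedure — segmenting movers from an illumination-variant background — is commonly built on an adaptive background model. A minimal running-average sketch (the blend rate and threshold are illustrative values of ours, not the paper's algorithm):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05, thresh=25.0):
    """Adaptive background model: blend the new frame into the background
    so slow illumination drift is absorbed, then flag pixels that differ
    strongly from the model as moving-object foreground."""
    bg = (1.0 - alpha) * bg + alpha * frame          # slow adaptation
    mask = np.abs(frame - bg) > thresh               # foreground mask
    return bg, mask
```

The foreground blobs from `mask` would then seed the CONDENSATION tracker, and the inspection step watches for blobs that appear mid-scene after an occluder departs.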
NASA Astrophysics Data System (ADS)
Kang, Jinbum; Jang, Won Seuk; Yoo, Yangmo
2018-02-01
Ultrafast compound Doppler imaging based on plane-wave excitation (UCDI) can be used to evaluate cardiovascular diseases using high frame rates. In particular, it provides a fully quantifiable flow analysis over a large region of interest with high spatio-temporal resolution. However, the pulse-repetition frequency (PRF) in the UCDI method is limited for high-velocity flow imaging since it has a tradeoff between the number of plane-wave angles (N) and acquisition time. In this paper, we present high PRF ultrafast sliding compound Doppler imaging method (HUSDI) to improve quantitative flow analysis. With the HUSDI method, full scanline images (i.e. each tilted plane wave data) in a Doppler frame buffer are consecutively summed using a sliding window to create high-quality ensemble data so that there is no reduction in frame rate and flow sensitivity. In addition, by updating a new compounding set with a certain time difference (i.e. sliding window step size or L), the HUSDI method allows various Doppler PRFs with the same acquisition data to enable a fully qualitative, retrospective flow assessment. To evaluate the performance of the proposed HUSDI method, simulation, in vitro and in vivo studies were conducted under diverse flow circumstances. In the simulation and in vitro studies, the HUSDI method showed improved hemodynamic representations without reducing either temporal resolution or sensitivity compared to the UCDI method. For the quantitative analysis, the root mean squared velocity error (RMSVE) was measured using 9 angles (-12° to 12°) with L of 1-9, and the results were found to be comparable to those of the UCDI method (L = N = 9), i.e. ⩽0.24 cm s-1, for all L values. For the in vivo study, the flow data acquired from a full cardiac cycle of the femoral vessels of a healthy volunteer were analyzed using a PW spectrogram, and arterial and venous flows were successfully assessed with high Doppler PRF (e.g. 5 kHz at L = 4). 
These results indicate that the proposed HUSDI method can improve flow visualization and quantification with a higher frame rate, PRF and flow sensitivity in cardiovascular imaging.
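The sliding-window compounding at the heart of HUSDI can be sketched in a few lines: each ensemble sums n consecutive tilted plane-wave frames, and the window advances by the step size L, so ensembles overlap and the compounded frame rate is PRF/L rather than PRF/n (function and parameter names are ours):

```python
import numpy as np

def sliding_compound(frames, n=9, step=4):
    """Sum each window of n consecutive tilted plane-wave frames,
    advancing by `step` frames per output, so consecutive compounded
    frames share n - step acquisitions."""
    out = []
    for start in range(0, len(frames) - n + 1, step):
        out.append(np.sum(frames[start:start + n], axis=0))
    return out
```

Choosing step = n recovers the conventional UCDI case (non-overlapping compounds), while smaller steps trade nothing in ensemble quality for a proportionally higher Doppler PRF, which matches the abstract's L = 4, N = 9 example.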
Kang, Jinbum; Jang, Won Seuk; Yoo, Yangmo
2018-02-09
Ultrafast compound Doppler imaging based on plane-wave excitation (UCDI) can be used to evaluate cardiovascular diseases using high frame rates. In particular, it provides a fully quantifiable flow analysis over a large region of interest with high spatio-temporal resolution. However, the pulse-repetition frequency (PRF) in the UCDI method is limited for high-velocity flow imaging since it has a tradeoff between the number of plane-wave angles (N) and acquisition time. In this paper, we present high PRF ultrafast sliding compound Doppler imaging method (HUSDI) to improve quantitative flow analysis. With the HUSDI method, full scanline images (i.e. each tilted plane wave data) in a Doppler frame buffer are consecutively summed using a sliding window to create high-quality ensemble data so that there is no reduction in frame rate and flow sensitivity. In addition, by updating a new compounding set with a certain time difference (i.e. sliding window step size or L), the HUSDI method allows various Doppler PRFs with the same acquisition data to enable a fully qualitative, retrospective flow assessment. To evaluate the performance of the proposed HUSDI method, simulation, in vitro and in vivo studies were conducted under diverse flow circumstances. In the simulation and in vitro studies, the HUSDI method showed improved hemodynamic representations without reducing either temporal resolution or sensitivity compared to the UCDI method. For the quantitative analysis, the root mean squared velocity error (RMSVE) was measured using 9 angles (-12° to 12°) with L of 1-9, and the results were found to be comparable to those of the UCDI method (L = N = 9), i.e. ⩽0.24 cm s -1 , for all L values. For the in vivo study, the flow data acquired from a full cardiac cycle of the femoral vessels of a healthy volunteer were analyzed using a PW spectrogram, and arterial and venous flows were successfully assessed with high Doppler PRF (e.g. 5 kHz at L = 4). 
These results indicate that the proposed HUSDI method can improve flow visualization and quantification with a higher frame rate, PRF and flow sensitivity in cardiovascular imaging.
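As a sketch of the sliding-window compounding idea (not the authors' implementation; the function name, array shapes and toy data are assumptions for illustration), consecutive tilted plane-wave frames in a Doppler buffer can be summed over a window of N angles that advances by a step L:

```python
import numpy as np

def sliding_compound(frames, n_angles, step):
    """Sum n_angles consecutive tilted plane-wave frames from a Doppler
    buffer into one compounded ensemble, then slide the window forward
    by `step` frames (the parameter L) for the next ensemble."""
    ensembles = []
    for start in range(0, len(frames) - n_angles + 1, step):
        ensembles.append(np.sum(frames[start:start + n_angles], axis=0))
    return np.stack(ensembles)

# Toy buffer: 18 unit "frames" of 4x4 pixels, N = 9 angles, L = 4
buf = np.ones((18, 4, 4))
ens = sliding_compound(buf, n_angles=9, step=4)
print(ens.shape)  # (3, 4, 4): windows start at frames 0, 4 and 8
```

With L = N the scheme reduces to ordinary non-overlapping compounding (the UCDI case); smaller L yields more ensembles, hence a higher effective Doppler PRF, from the same acquired data.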
Algorithms for High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Morookian, John-Michael; Lambert, James
2010-01-01
Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out only the ROI that contains the cornea and pupil. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye.
The algorithm determines which horizontal image slices contain the pupil and cornea and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen (see figure). The expected advantage of the modification is to make the gaze computation less dependent on simplifying assumptions that are sometimes inaccurate.
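The slice-based centroid idea can be sketched as follows (a minimal illustration, not the NPO-30700 code; the function name, threshold and minimum run length are assumptions): scan each row for a contiguous dark run, record its end coordinates, and average the run midpoints across valid rows.

```python
import numpy as np

def pupil_centroid(image, threshold, min_run=3):
    """Estimate the pupil centroid by scanning horizontal rows for
    contiguous dark runs (pixel < threshold), recording each run's end
    coordinates, and averaging the run midpoints over valid rows."""
    xs, ys = [], []
    for y, row in enumerate(image):
        cols = np.flatnonzero(row < threshold)
        if cols.size >= min_run and np.all(np.diff(cols) == 1):
            xs.append((cols[0] + cols[-1]) / 2.0)  # midpoint of this slice
            ys.append(y)
    if not xs:
        return None
    return float(np.mean(xs)), float(np.mean(ys))

# Synthetic eye: a dark disc (pupil) on a bright background
img = np.full((40, 40), 200.0)
yy, xx = np.ogrid[:40, :40]
img[(yy - 20) ** 2 + (xx - 25) ** 2 <= 36] = 10.0
print(pupil_centroid(img, threshold=100))  # (25.0, 20.0)
```

Because every valid slice contributes an independent midpoint, a few corrupted rows (eyelashes, glints) perturb the average only slightly, which is the robustness the abstract refers to.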
Pappas, E P; Seimenis, I; Moutsatsos, A; Georgiou, E; Nomikos, P; Karaiskos, P
2016-10-07
This work provides characterization of system-related geometric distortions present in MRIs used in Gamma Knife (GK) stereotactic radiosurgery (SRS) treatment planning. A custom-made phantom, compatible with the Leksell stereotactic frame model G and encompassing 947 control points (CPs), was utilized. MR images were obtained with and without the frame, thus allowing discrimination of frame-induced distortions. In the absence of the frame and following compensation for field inhomogeneities, the measured average CP displacement owing to gradient nonlinearities was 0.53 mm. In the presence of the frame, by contrast, the detected distortion was greatly increased (up to about 5 mm) in the vicinity of the frame base, due to eddy currents induced in the closed loop of its aluminum material. Frame-related distortion vanished at approximately 90 mm from the frame base. Although the region with the maximum observed distortion may not lie within the GK treatable volume, the presence of the frame results in distortion of the order of 1.5 mm at a 7 cm distance from the center of the Leksell space. Additionally, severe distortions observed outside the treatable volume could compromise delivery accuracy mainly by adversely affecting the registration process (e.g. the position of the lower part of the N-shaped fiducials used to define the stereotactic space may be mis-registered). Images acquired with a modified version of the frame, developed by replacing its front side with an acrylic bar, thus interrupting the closed aluminum loop and reducing the induced eddy currents, were shown to benefit from relatively reduced distortion. System-related distortion was also identified in patient MR images. Using corresponding CT angiography images as a reference, an offset of 1.1 mm was detected for two vessels lying in close proximity to the frame base, while excellent spatial agreement was observed for a vessel far from the frame base.
Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.
1995-01-01
Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded with a visible-wavelength video camera. These data were processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20-second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
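The cascade described above can be sketched in a few lines (a minimal sketch, not the study's implementation: the first-difference highpass, 5-sample moving average, and threshold value are all illustrative assumptions):

```python
import numpy as np

def leak_detect(seq, window=5, thresh=0.5):
    """Per-pixel temporal leak detector: first-difference highpass,
    moving-average smoothing, absolute value, then threshold.
    The filter orders and threshold are illustrative choices."""
    seq = np.asarray(seq, dtype=float)          # shape (T, H, W)
    hp = np.diff(seq, axis=0)                   # simple temporal highpass
    kernel = np.ones(window) / window
    ma = np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), 0, hp)
    detect = np.abs(ma) > thresh                # binary leak / no-leak map
    mean_trace = np.abs(ma).mean(axis=(1, 2))   # full-frame mean vs time
    return detect, mean_trace

# Synthetic sequence: a sustained intensity step (a "leak") at frame 10
T, H, W = 20, 8, 8
seq = np.zeros((T, H, W))
seq[10:, 2:5, 2:5] = 5.0
detect, trace = leak_detect(seq)
print(bool(detect[9, 3, 3]))  # True: the step is flagged
```

The highpass responds only to the sudden change, the moving average suppresses single-frame flicker, and the full-frame mean trace peaks around the step, mirroring the two outputs described in the abstract.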
Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.
2014-01-01
In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248
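A toy variance budget illustrates the detector-dependent trend (this is not the paper's Fisher-information bound; the photon budget, readout noise, and excess-noise factor are assumed, typical-order values): splitting a fixed photon count over more frames accumulates CCD readout noise, whereas an EMCCD's effectively readout-free output pays only a roughly constant excess-noise penalty.

```python
# Toy variance budget for estimating total flux from F frames.
# CCD: shot noise plus readout noise accumulated once per frame.
# EMCCD: negligible readout noise, but excess noise factor ~2
# doubles the shot-noise variance. All numbers are assumptions.
N = 1000.0       # total detected photons over the whole sequence
sigma_r = 6.0    # CCD readout noise, electrons rms per frame
for F in (1, 10, 100, 1000):
    var_ccd = N + F * sigma_r ** 2   # grows with the frame count
    var_emccd = 2.0 * N              # independent of the frame count
    print(F, var_ccd, var_emccd)
```

The CCD variance grows with F while the EMCCD's stays flat, which matches the qualitative finding above: CCD accuracy deteriorates at high frame rates as readout noise overwhelms the few photons per frame, while EMCCD accuracy does not suffer the same penalty.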
NASA Technical Reports Server (NTRS)
Dotson, Jessie L.; Batalha, Natalie; Bryson, Stephen T.; Caldwell, Douglas A.; Clarke, Bruce D.
2010-01-01
NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100 square degree field over a 3.5 yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10(exp 5) preselected stellar targets. The majority of the Kepler field, comprising 4 x 10(exp 6) sources with m_v < 20, is sampled at a much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler Team employs these images for astrometric and photometric reference but makes the images available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely-available long-cadence legacy of photometric variation across a swathe of the Galactic disk.
High-speed adaptive optics line scan confocal retinal imaging for human eye
Wang, Xiaolin; Zhang, Yuhua
2017-01-01
Purpose: Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. Methods: A high speed line camera was employed to acquire retinal images and custom adaptive optics was developed to compensate the wave aberration of the human eye's optics. The spatial resolution and signal to noise ratio were assessed in a model eye and in the living human eye. The improvement of imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes with frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results: The device produced retinal images with cellular level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions: We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458
High-speed adaptive optics line scan confocal retinal imaging for human eye.
Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua
2017-01-01
Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. A high speed line camera was employed to acquire retinal images and custom adaptive optics was developed to compensate the wave aberration of the human eye's optics. The spatial resolution and signal to noise ratio were assessed in a model eye and in the living human eye. The improvement of imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes with frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.
Stray light calibration of the Dawn Framing Camera
NASA Astrophysics Data System (ADS)
Kovacs, Gabor; Sierks, Holger; Nathues, Andreas; Richards, Michael; Gutierrez-Marques, Pablo
2013-10-01
Sensitive imaging systems with high dynamic range onboard spacecraft are susceptible to ghost and stray-light effects. During the design phase, the Dawn Framing Camera was laid out and optimized to minimize these unwanted, parasitic effects. However, the requirement of low distortion in the optical design and the use of a front-lit focal plane array introduced an additional stray light component. This paper presents the ground-based and in-flight procedures characterizing the stray-light artifacts. The in-flight test used the Sun as the stray light source, at different angles of incidence. The spacecraft was commanded to point at predefined solar elongation positions, and long exposure images were recorded. The PSNIT function was calculated from the known illumination and the ground-based calibration information. In the ground-based calibration, several extended and point sources were used with long exposure times in dedicated imaging setups. The tests revealed that the major contribution to the stray light comes from the ghost reflections between the focal plane array and the band pass interference filters. Various laboratory experiments and computer modeling simulations were carried out to quantify the amount of this effect, including the analysis of the diffractive reflection pattern generated by the imaging sensor. The accurate characterization of the detector reflection pattern is the key to successfully predicting the intensity distribution of the ghost image. Based on the results, and the properties of the optical system, a novel correction method is applied in the image processing pipeline. The effect of this correction procedure is also demonstrated with the first images of asteroid Vesta.
Role of "the frame cycle time" in portal dose imaging using an aS500-II EPID.
Al Kattar Elbalaa, Zeina; Foulquier, Jean Noel; Orthuon, Alexandre; Elbalaa, Hanna; Touboul, Emmanuel
2009-09-01
This paper evaluates the role of an acquisition parameter, the frame cycle time (FCT), in the performance of an aS500-II EPID. The work presented rests on the study of the Varian EPID aS500-II and the image acquisition system 3 (IAS3). We are interested in integrated acquisition using asynchronous mode. To better understand the image acquisition operation, we investigated the influence of the frame cycle time on the speed of acquisition, the pixel value of the averaged gray-scale frame and the noise, using 6 and 15 MV X-ray beams and dose rates of 1-6 Gy/min on 2100 C/D Linacs. In the integrated mode not synchronized to beam pulses, only one parameter, the frame cycle time (FCT), influences the pixel value. The pixel value of the averaged gray-scale frame is proportional to this parameter. When the FCT < 55 ms (speed of acquisition V(f/s) > 18 frames/s), the speed of acquisition becomes unstable and leads to a fluctuation of the portal dose response. A timing instability and saturation are detected when the dose per frame exceeds 1.53 MU/frame. Rules were deduced to avoid saturation and to optimize this dosimetric mode. The choice of the acquisition parameter is essential for accurate portal dose imaging.
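The two operating limits quoted above (the 55 ms FCT boundary and the 1.53 MU/frame saturation level) reduce to simple arithmetic. The sketch below checks an assumed operating point of 600 MU/min (the dose rate and function names are illustrative, not from the paper):

```python
def frames_per_second(fct_ms):
    """Acquisition speed implied by the frame cycle time (FCT)."""
    return 1000.0 / fct_ms

def dose_per_frame(dose_rate_mu_per_min, fct_ms):
    """MU accumulated during one frame at a given dose rate and FCT."""
    return dose_rate_mu_per_min / 60.0 * fct_ms / 1000.0

# Assumed operating point: 600 MU/min at the 55 ms FCT boundary
fct = 55.0
print(round(frames_per_second(fct), 1))      # 18.2 frames/s
print(round(dose_per_frame(600.0, fct), 3))  # 0.55 MU/frame
print(dose_per_frame(600.0, fct) > 1.53)     # False: below saturation
```

At this dose rate the per-frame dose stays well under the 1.53 MU/frame saturation threshold; longer FCTs or higher dose rates move the operating point toward the saturation rule the authors deduce.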
Temporal compressive imaging for video
NASA Astrophysics Data System (ADS)
Zhou, Qun; Zhang, Linxia; Ke, Jun
2018-01-01
In many situations, imagers are required to have higher imaging speed, such as in gunpowder blast analysis and the observation of high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
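The TCI forward model behind this setup can be sketched directly (a minimal sketch: the frame size, random data, and binary masks are illustrative assumptions; the paper works at 256×256 with learned or designed masks):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16                    # temporal compression ratio T = 8

x = rng.random((T, H, W))              # high-speed frames to be recovered
masks = rng.integers(0, 2, (T, H, W))  # binary coded masks, one per frame

# Forward model: each temporal frame is modulated by its mask and the
# modulated frames are summed into a single compressive measurement.
y = np.sum(masks * x, axis=0)          # one captured frame of shape (H, W)

# Reconstruction (TwIST, GMM, ...) inverts this model; as in the paper,
# working on 8x8 patches limits reconstruction time and memory usage.
patches = y.reshape(H // 8, 8, W // 8, 8).swapaxes(1, 2)
print(y.shape, patches.shape)          # (16, 16) (2, 2, 8, 8)
```

Each 8×8 measurement patch depends only on the corresponding patches of the T hidden frames, which is what makes the patch-wise reconstruction both parallelizable and memory-light.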
High speed three-dimensional laser scanner with real time processing
NASA Technical Reports Server (NTRS)
Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)
2008-01-01
A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter coupled to the imaging sensor, at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor, and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line. The imaging sensor images the laser reflections from the detail to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame, and for computing the range to the detail using height, depression angle and/or offset. The computer displays the range to the area and detail thereon covered by the image frame.
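A minimal triangulation sketch conveys the geometry described above, under an assumed simplified configuration (sensor looking straight down, laser line at a depression angle; this is not the patent's exact range equation, and all names and numbers are illustrative):

```python
import math

def detail_height(dx_pixels, pixels_per_mm, depression_deg):
    """Triangulation sketch under an assumed geometry: with the sensor
    looking straight down and the laser line projected at a depression
    angle, a detail of height z shifts the imaged line along the scan
    direction by dx = z / tan(angle), so z = dx * tan(angle)."""
    dx_mm = dx_pixels / pixels_per_mm
    return dx_mm * math.tan(math.radians(depression_deg))

# Example: a 12-pixel line shift at 10 px/mm with a 45 deg depression
print(round(detail_height(12, 10.0, 45.0), 3))  # 1.2 (mm of relief)
```

Shallower depression angles magnify the line shift per unit of height (finer depth resolution) at the cost of a longer shadowed region behind tall details, a standard tradeoff in line-scan triangulation.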
Framing Successful School Leadership as a Moral and Democratic Enterprise
ERIC Educational Resources Information Center
Møller, Jorunn
2005-01-01
The article aims at exploring what counts as successful leadership and what the key questions in exploring successful school leadership across countries should be. A main argument is that successful leadership is a contestable concept, and I argue for framing school leadership as a moral and democratic enterprise, which implies a need to protect…
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
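The "bit-flip noise increasing with the misalignment" for direct one-qubit-per-pixel coding can be made concrete with standard qubit geometry (the sin²(θ/2) law below is an illustrative model for a measurement axis tilted by θ from the encoding axis, not a formula quoted from the paper):

```python
import math

def flip_probability(theta_deg):
    """Effective bit-flip probability for direct one-qubit-per-pixel
    coding when the receiver's measurement axis is misaligned by theta
    from the emitter's encoding axis: p = sin^2(theta / 2)."""
    return math.sin(math.radians(theta_deg) / 2.0) ** 2

for theta in (0, 30, 90, 180):
    print(theta, round(flip_probability(theta), 4))
# 0 deg -> 0.0 (aligned frames), 90 deg -> 0.5, 180 deg -> 1.0
```

The error grows monotonically from zero (aligned frames) to certainty (inverted frames), which is exactly the degradation the entangled-pair coding is designed to avoid.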
NASA Technical Reports Server (NTRS)
Czaja, Wojciech; Le Moigne-Stewart, Jacqueline
2014-01-01
In recent years, sophisticated mathematical techniques have been successfully applied to the field of remote sensing to produce significant advances in applications such as registration, integration and fusion of remotely sensed data. Registration, integration and fusion of multiple source imagery are the most important issues when dealing with Earth Science remote sensing data where information from multiple sensors, exhibiting various resolutions, must be integrated. Issues ranging from different sensor geometries, different spectral responses, differing illumination conditions, different seasons, and various amounts of noise need to be dealt with when designing an image registration, integration or fusion method. This tutorial will first define the problems and challenges associated with these applications and then will review some mathematical techniques that have been successfully utilized to solve them. In particular, we will cover topics on geometric multiscale representations, redundant representations and fusion frames, graph operators, diffusion wavelets, as well as spatial-spectral and operator-based data fusion. All the algorithms will be illustrated using remotely sensed data, with an emphasis on current and operational instruments.
Speckle imaging with the MAMA detector: Preliminary results
NASA Technical Reports Server (NTRS)
Horch, E.; Heanue, J. F.; Morgan, J. S.; Timothy, J. G.
1994-01-01
We report on the first successful speckle imaging studies using the Stanford University speckle interferometry system, an instrument that uses a multianode microchannel array (MAMA) detector as the imaging device. The method of producing high-resolution images is based on the analysis of so-called 'near-axis' bispectral subplanes and follows the work of Lohmann et al. (1983). In order to improve the signal-to-noise ratio in the bispectrum, the frame-oversampling technique of Nakajima et al. (1989) is also employed. We present speckle imaging results of binary stars and other objects from V magnitude 5.5 to 11, and the quality of these images is studied. While the Stanford system is capable of good speckle imaging results, it is limited by the overall quantum efficiency of the current MAMA detector (which is due to the response of the photocathode at visible wavelengths and other detector properties) and by channel saturation of the microchannel plate. Both affect the signal-to-noise ratio of the power spectrum and bispectrum.
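The bispectrum averaging at the heart of this method can be sketched in one dimension (a minimal sketch with toy random "frames"; the function name and data are assumptions, and real speckle work operates on 2-D frames with near-axis subplane selection):

```python
import numpy as np

def mean_bispectrum(frames):
    """Average 1-D bispectrum over a stack of speckle frames:
    B(u, v) = <F(u) F(v) conj(F(u + v))>. Averaging over many frames
    raises the bispectrum signal-to-noise ratio; 'near-axis' analysis
    then restricts u and v to small spatial frequencies."""
    n = frames.shape[1]
    u = np.arange(n)
    B = np.zeros((n, n), dtype=complex)
    for f in frames:
        F = np.fft.fft(f)
        B += F[:, None] * F[None, :] * np.conj(F[(u[:, None] + u[None, :]) % n])
    return B / len(frames)

# 16 toy 1-D "speckle frames" of 32 samples each (illustrative data)
frames = np.random.default_rng(1).random((16, 32))
B = mean_bispectrum(frames)
print(B.shape)  # (32, 32)
```

The key property exploited in speckle imaging is that the bispectrum phase is insensitive to the random atmospheric phase shifts of individual frames, so the object phase can be recovered recursively from the averaged bispectrum.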
Multiple-frame IR photo-recorder KIT-3M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E; Wilkins, P; Nebeker, N
2006-05-15
This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared radiation photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation of the spectral range 1-10 micrometers into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 °C to 2000 °C with an exposure time of 1-20 µs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, in pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR-laser radiation.
Do we understand high-level vision?
Cox, David Daniel
2014-04-01
'High-level' vision lacks a single, agreed upon definition, but it might usefully be defined as those stages of visual processing that transition from analyzing local image structure to analyzing the structure of the external world that produced those images. Much work in the last several decades has focused on object recognition as a framing problem for the study of high-level visual cortex, and much progress has been made in this direction. This approach presumes that the operational goal of the visual system is to read out the identity of an object (or objects) in a scene, in spite of variation in position, size, lighting and the presence of other nearby objects. However, while object recognition as an operational framing of high-level vision is intuitively appealing, it is by no means the only task that visual cortex might perform, and the study of object recognition is beset by challenges in building stimulus sets that adequately sample the infinite space of possible stimuli. Here I review the successes and limitations of this work, and ask whether we should reframe our approaches to understanding high-level vision. Copyright © 2014. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Udomkesmalee, Suraphol; Padgett, Curtis; Zhu, David; Lung, Gerald; Howard, Ayanna
2000-01-01
A three-dimensional microelectronic device (3DANN-R) capable of performing general image convolution at a speed of 10(exp 12) operations/second (ops) in a volume of less than 1.5 cubic centimeters has been successfully built under the BMDO/JPL VIGILANTE program. 3DANN-R was developed in partnership with Irvine Sensors Corp., Costa Mesa, California. 3DANN-R is a sugar-cube-sized, low power image convolution engine whose core computation circuitry is capable of performing 64 image convolutions with large (64x64) windows at video frame rates. This paper explores potential applications of 3DANN-R such as target recognition, SAR and hyperspectral data processing, and general machine vision using real data, and discusses technical challenges for providing deployable systems for BMDO surveillance and interceptor programs.
Full-Frame Reference for Test Photo of Moon
2005-09-10
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
A novel framework for objective detection and tracking of TC center from noisy satellite imagery
NASA Astrophysics Data System (ADS)
Johnson, Bibin; Thomas, Sachin; Rani, J. Sheeba
2018-07-01
This paper proposes a novel framework for automatically determining and tracking the center of a tropical cyclone (TC) during its entire life-cycle from the thermal infrared (TIR) channel data of a geostationary satellite. The proposed method handles meteorological images with noise and missing or partial information due to seasonal variability and a lack of significant spatial or vortex features. To retrieve the cyclone center under these circumstances, a synergistic approach based on objective measures and a Numerical Weather Prediction (NWP) model is proposed. The method employs a spatial gradient scheme to process missing and noisy frames, or a spatio-temporal gradient scheme for image sequences that are continuous and contain less noise. The initial estimate of the TC center from the missing imagery is corrected by a NWP-model-based post-processing scheme. The validity of the framework is tested on infrared images of different cyclones obtained from various geostationary satellites such as Meteosat-7, INSAT-3D, and Kalpana-1. The computed track is compared with the actual track data obtained from the Joint Typhoon Warning Center (JTWC), and it shows a reduction of mean track error by 11% compared to other state-of-the-art methods in the presence of missing and noisy frames. The proposed method is also successfully tested for simultaneous retrieval of the TC center from images containing multiple non-overlapping cyclones.
Deconvolution of astronomical images using SOR with adaptive relaxation.
Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J
2011-07-04
We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
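The core iteration (Gauss-Seidel plus relaxation) is compact enough to sketch on a small linear system (a minimal sketch: in deconvolution A would come from the blur operator, and the fixed omega here stands in for the adaptive schedule the paper proposes; the matrix and function name are illustrative assumptions):

```python
import numpy as np

def sor_solve(A, b, omega, n_iter=100):
    """Classical SOR for A x = b: sweep the unknowns in order,
    computing the Gauss-Seidel update for each and blending it with
    the current value through the relaxation parameter omega."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]     # off-diagonal sum
            gs = (b[i] - sigma) / A[i, i]         # Gauss-Seidel update
            x[i] = (1.0 - omega) * x[i] + omega * gs
    return x

# Small symmetric positive-definite test system (illustrative)
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sor_solve(A, b, omega=1.1)
print(np.allclose(A @ x, b))  # True
```

With omega = 1 this reduces to plain Gauss-Seidel; the paper's contribution is precisely a constructive rule for picking and updating omega so that convergence at a finite iteration count is as fast as possible.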
Using the ATL HDI 1000 to collect demodulated RF data for monitoring HIFU lesion formation
NASA Astrophysics Data System (ADS)
Anand, Ajay; Kaczkowski, Peter J.; Daigle, Ron E.; Huang, Lingyun; Paun, Marla; Beach, Kirk W.; Crum, Lawrence A.
2003-05-01
The ability to accurately track and monitor the progress of lesion formation during HIFU (High Intensity Focused Ultrasound) therapy is important for the success of HIFU-based treatment protocols. To aid in the development of algorithms for accurately targeting and monitoring formation of HIFU induced lesions, we have developed a software system to perform RF data acquisition during HIFU therapy using a commercially available clinical ultrasound scanner (ATL HDI 1000, Philips Medical Systems, Bothell, WA). The HDI 1000 scanner functions on a software dominant architecture, permitting straightforward external control of its operation and relatively easy access to quadrature demodulated RF data. A PC running a custom developed program sends control signals to the HIFU module via GPIB and to the HDI 1000 via Telnet, alternately interleaving HIFU exposures and RF frame acquisitions. The system was tested during experiments in which HIFU lesions were created in excised animal tissue. No crosstalk between the HIFU beam and the ultrasound imager was detected, thus demonstrating synchronization. Newly developed acquisition modes allow greater user control in setting the image geometry and scanline density, and enable high-frame-rate acquisition. This system facilitates rapid development of signal-processing based HIFU therapy monitoring algorithms and their implementation in image-guided thermal therapy systems. In addition, the HDI 1000 system can be easily customized for use with other emerging imaging modalities that require access to the RF data such as elastographic methods and new Doppler-based imaging and tissue characterization techniques.
SU-E-T-171: Missing Dose in Integrated EPID Images.
King, B; Seymour, E; Nitschke, K
2012-06-01
A dosimetric artifact has been observed with Varian EPIDs in the presence of beam interrupts. This work determines the root cause and significance of this artifact. Integrated mode EPID images were acquired both with and without a manual beam interrupt for rectangular, sliding gap IMRT fields. Simultaneously, the individual frames were captured on a separate computer using a frame-grabber system. Synchronization of the individual frames with the integrated images allowed the determination of precisely how the EPID behaved during regular operation as well as when a beam interrupt was triggered. The ability of the EPID to reliably monitor a treatment in the presence of beam interrupts was tested by comparing the difference between the interrupt and non-interrupt images. The interrupted images acquired in integrated acquisition mode displayed unanticipated behaviour in the region of the image where the leaves were located when the beam interrupt was triggered. Differences greater than 5% were observed as a result of the interrupt in some cases, with the discrepancies occurring in a non-uniform manner across the imager. The differences measured were not repeatable from one measurement to another. Examination of the individual frames showed that the EPID was consistently losing a small amount of dose at the termination of every exposure. Inclusion of one additional frame in every image rectified the unexpected behaviour, reducing the differences to 1% or less. Although integrated EPID images nominally capture the entire dose delivered during an exposure, a small amount of dose is consistently being lost at the end of every exposure. The amount of missing dose is random, depending on the exact beam termination time within a frame. Inclusion of an extra frame at the end of each exposure effectively rectifies the problem, making the EPID more suitable for clinical dosimetry applications. 
The authors received support from Varian Medical Systems in the form of software and equipment loans as well as technical support. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2016-03-01
Cerebral CT perfusion (CTP) imaging is playing an important role in the diagnosis and treatment of acute ischemic strokes. Meanwhile, the reliability of CTP-based ischemic lesion detection has been challenged due to the noisy appearance and low signal-to-noise ratio of CTP maps. To reduce noise and improve image quality, a rigorous study on the noise transfer properties of CTP systems is highly desirable to provide the needed scientific guidance. This paper concerns how noise in the CTP source images propagates to the final CTP maps. Both theoretical derivations and subsequent validation experiments demonstrated that the noise level of the background frames plays a dominant role in the noise of the cerebral blood volume (CBV) maps. This is in direct contradiction with the general belief that noise of non-background image frames is of greater importance in CTP imaging. The study found that when the radiation doses delivered to the background frames and to all non-background frames are equal, the lowest noise variance is achieved in the final CBV maps. This novel equality condition provides a practical means to optimize radiation dose delivery in CTP data acquisition: radiation exposures should be modulated between background frames and non-background frames so that the above equality condition is satisfied. For several typical CTP acquisition protocols, numerical simulations and an in vivo canine experiment demonstrated that noise of CBV can be effectively reduced using the proposed exposure modulation method.
Composite ultrasound imaging apparatus and method
Morimoto, Alan K.; Bow, Jr., Wallace J.; Strong, David Scott; Dickey, Fred M.
1998-01-01
An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image.
Composite ultrasound imaging apparatus and method
Morimoto, A.K.; Bow, W.J. Jr.; Strong, D.S.; Dickey, F.M.
1998-09-15
An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image. 37 figs.
Keyhole imaging method for dynamic objects behind the occlusion area
NASA Astrophysics Data System (ADS)
Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong
2018-01-01
A method of keyhole imaging based on a camera array is realized to obtain video imagery from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions. The multi-angle video images are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to perform edge detection and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted to accomplish the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is also proposed in this paper. Finally, a video image with a larger field of view behind the keyhole is synthesized from the frame sequence in which every single frame is stitched. The results show that the video is clear and natural and the brightness transitions are smooth. There are no obvious artificial stitching marks in the video, and the method can be applied in different engineering environments.
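At the heart of the two-image registration step, each RANSAC hypothesis fits a 3×3 homography to a small set of SIFT correspondences. A minimal numpy sketch of that inner fit, the classical direct linear transform (DLT), with the RANSAC loop and feature matching omitted (function names are illustrative):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: estimate 3x3 H with dst ~ H @ src
    (in homogeneous coordinates) from >= 4 point correspondences.
    src, dst: (N, 2) arrays of matched points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, float)
    _, _, Vt = np.linalg.svd(A)       # null vector = last right singular vector
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                # normalize so H[2, 2] == 1

def warp_points(H, pts):
    """Apply a homography to (N, 2) points."""
    ph = np.c_[pts, np.ones(len(pts))] @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

A RANSAC wrapper would repeatedly call `fit_homography` on random 4-point subsets and keep the hypothesis with the most inliers under a reprojection-error threshold.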
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images serve as reference images. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame with the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than α, the floor has changed; otherwise, there is an obstacle. The proposed algorithm works in real time, and 96% accuracy has been achieved.
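The decision logic described above reduces to two mean-square-error comparisons. A minimal sketch for grayscale frames (the threshold value `alpha` and the function names are illustrative, not the authors' implementation):

```python
import numpy as np

def detect_obstacle(prev_roi, next_roi, floor_refs, alpha=100.0):
    """Obstacle check over a region of interest, as outlined in the abstract.

    floor_refs: list of reference images, one per stored floor type.
    Returns one of 'clear', 'floor_changed', 'obstacle'."""
    mse = lambda a, b: float(np.mean((a.astype(float) - b.astype(float)) ** 2))
    if mse(prev_roi, next_roi) < alpha:
        return 'clear'                      # frames agree: no obstacle
    # frames differ: either an obstacle appeared or the floor type changed
    if min(mse(next_roi, ref) for ref in floor_refs) < alpha:
        return 'floor_changed'              # frame matches a stored floor type
    return 'obstacle'
```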
Improved optical flow motion estimation for digital image stabilization
NASA Astrophysics Data System (ADS)
Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao
2015-11-01
Optical flow is the instantaneous motion vector at each pixel of an image frame at a given time instant. The gradient-based approach to optical flow computation does not work well when the inter-frame motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: the pyramid yields multi-resolution images; iterating from the highest (coarsest) level down to the lowest (finest) level yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
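The coarse-to-fine idea can be sketched for the simplest global motion model, a pure translation (the paper estimates affine parameters; this reduced numpy sketch also uses circular shifts for warping, an assumption made purely for illustration):

```python
import numpy as np

def downsample(img):
    """One pyramid level: 2x2 box average."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def lk_translation(a, b):
    """One gradient-based (Lucas-Kanade) step for a global translation (dx, dy)."""
    Iy, Ix = np.gradient(a)                 # spatial gradients
    It = b - a                              # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    rhs = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, rhs)

def coarse_to_fine_translation(a, b, levels=3):
    """Estimate the global (dx, dy) of frame b relative to a, coarse to fine."""
    pa, pb = [a], [b]
    for _ in range(levels - 1):             # build the two pyramids
        pa.append(downsample(pa[-1]))
        pb.append(downsample(pb[-1]))
    d = np.zeros(2)                         # accumulated (dx, dy)
    for la, lb in zip(reversed(pa), reversed(pb)):
        d *= 2                              # propagate estimate to finer level
        di = np.round(d).astype(int)
        # undo the integer part of the motion (circular shift for the sketch)
        lb_w = np.roll(lb, (-di[1], -di[0]), axis=(0, 1))
        d = di + lk_translation(la, lb_w)   # refine with a gradient step
    return d
```

Stabilization would then warp each frame back by the accumulated motion relative to the first frame.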
Takeda, Jun; Ishida, Akihiro; Makishima, Yoshinori; Katayama, Ikufumi
2010-01-01
In this review, we demonstrate real-time time-frequency two-dimensional (2D) pump-probe imaging spectroscopy implemented on a single-shot basis and applicable to excited-state dynamics in solid-state organic and biological materials. Using this technique, we successfully mapped ultrafast time-frequency 2D transient absorption signals of β-carotene in solid films over wide temporal and spectral ranges with a very short accumulation time of 20 ms per frame. The results obtained indicate the high potential of this technique as a powerful and unique spectroscopic tool for observing the ultrafast excited-state dynamics of organic and biological materials in the solid state, which undergo rapid photodegradation. PMID:22399879
Frequency-locked pulse sequencer for high-frame-rate monochromatic tissue motion imaging.
Azar, Reza Zahiri; Baghani, Ali; Salcudean, Septimiu E; Rohling, Robert
2011-04-01
To overcome the inherent low frame rate of conventional ultrasound, we have previously presented a system that can be implemented on conventional ultrasound scanners for high-frame-rate imaging of monochromatic tissue motion. The system employs a sector subdivision technique in the sequencer to increase the acquisition rate. To eliminate the delays introduced during data acquisition, a motion phase correction algorithm has also been introduced to create in-phase displacement images. Previous experimental results from tissue-mimicking phantoms showed that the system can achieve effective frame rates of up to a few kilohertz on conventional ultrasound systems. In this short communication, we present a new pulse sequencing strategy that facilitates high-frame-rate imaging of monochromatic motion such that the acquired echo signals are inherently in-phase. The sequencer uses the knowledge of the excitation frequency to synchronize the acquisition of the entire imaging plane to that of an external exciter. This sequencing approach eliminates any need for synchronization or phase correction and has applications in tissue elastography, which we demonstrate with tissue-mimicking phantoms. © 2011 IEEE
UWGSP4: an imaging and graphics superworkstation and its medical applications
NASA Astrophysics Data System (ADS)
Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin
1992-05-01
UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024-pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.
Optical joint correlator for real-time image tracking and retinal surgery
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor)
1991-01-01
A method for tracking an object in a sequence of images is described. Such sequence of images may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames. An optical joint transform correlator apparatus is provided to carry out the process. Such joint transform correlator apparatus forms the basis for laser eye surgical apparatus where an image of the fundus of an eyeball is stabilized and forms the basis for the correlator apparatus to track the position of the eyeball caused by involuntary movement. With knowledge of the eyeball position, a surgical laser can be precisely pointed toward a position on the retina.
New Subarray Readout Patterns for the ACS Wide Field Channel
NASA Astrophysics Data System (ADS)
Golimowski, D.; Anderson, J.; Arslanian, S.; Chiaberge, M.; Grogin, N.; Lim, Pey Lian; Lupie, O.; McMaster, M.; Reinhart, M.; Schiffer, F.; Serrano, B.; Van Marshall, M.; Welty, A.
2017-04-01
At the start of Cycle 24, the original CCD-readout timing patterns used to generate ACS Wide Field Channel (WFC) subarray images were replaced with new patterns adapted from the four-quadrant readout pattern used to generate full-frame WFC images. The primary motivation for this replacement was a substantial reduction of observatory and staff resources needed to support WFC subarray bias calibration, which became a new and challenging obligation after the installation of the ACS CCD Electronics Box Replacement during Servicing Mission 4. The new readout patterns also improve the overall efficiency of observing with WFC subarrays and enable the processing of subarray images through stages of the ACS data calibration pipeline (calacs) that were previously restricted to full-frame WFC images. The new readout patterns replace the original 512×512, 1024×1024, and 2048×2046-pixel subarrays with subarrays having 2048 columns and 512, 1024, and 2048 rows, respectively. Whereas the original square subarrays were limited to certain WFC quadrants, the new rectangular subarrays are available in all four quadrants. The underlying bias structure of the new subarrays now conforms with those of the corresponding regions of the full-frame image, which allows raw frames in all image formats to be calibrated using one contemporaneous full-frame "superbias" reference image. The original subarrays remain available for scientific use, but calibration of these image formats is no longer supported by STScI.
3D range-gated super-resolution imaging based on stereo matching for moving platforms and targets
NASA Astrophysics Data System (ADS)
Sun, Liang; Wang, Xinwei; Zhou, Yan
2017-11-01
3D range-gated super-resolution imaging is a novel 3D reconstruction technique for target detection and recognition with good real-time performance. However, for moving targets or platforms such as airborne, shipborne, remotely operated, and autonomous vehicles, 3D reconstruction has large errors or fails. In order to overcome this drawback, we propose a stereo-matching method for the 3D range-gated super-resolution reconstruction algorithm. In the experiment, the target is a Mario doll with a height of 38 cm at a distance of 34 m, and we obtain two successive frame images of the doll. To confirm that our method is effective, we transform the original images with translation, rotation, scale, and perspective changes, respectively. The experimental results show that our method yields good 3D reconstructions for moving targets or platforms.
Infrared imaging spectrometry by the use of bundled chalcogenide glass fibers and a PtSi CCD camera
NASA Astrophysics Data System (ADS)
Saito, Mitsunori; Kikuchi, Katsuhiro; Tanaka, Chinari; Sone, Hiroshi; Morimoto, Shozo; Yamashita, Toshiharu T.; Nishii, Junji
1999-10-01
A coherent fiber bundle for infrared image transmission was prepared by arranging 8400 chalcogenide (AsS) glass fibers. The fiber bundle, 1 m in length, is transmissive in the infrared spectral region of 1 - 6 micrometer. A remote spectroscopic imaging system was constructed with the fiber bundle and an infrared PtSi CCD camera. The system was used for the real-time observation (frame time: 1/60 s) of gas distribution. Infrared light from a SiC heater was delivered to a gas cell through a chalcogenide fiber, and the transmitted light was observed through the fiber bundle. A band-pass filter was used for the selection of gas species. A He-Ne laser of 3.4 micrometer wavelength was also used for the observation of hydrocarbon gases. Gases bursting from a nozzle were successfully observed with the remote imaging system.
Quantitative image fusion in infrared radiometry
NASA Astrophysics Data System (ADS)
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
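At a single pixel, the 'fuse-then-subtract' estimate amounts to a weighted least squares fit of a linear response p = q·t + bias over the exposure set, performed independently for the scene and dark stacks, followed by subtraction of the fitted photoquantities. A toy per-pixel numpy sketch (the closed-form WLS below and all names are illustrative assumptions, not the paper's exact per-pixel nonlinear estimator):

```python
import numpy as np

def wls_fit(stack, times, weights):
    """Per-pixel weighted least squares fit of stack ~ q * t + bias.

    stack: (E, H, W) pixel values over E exposures; times, weights: (E,).
    Returns (q, bias), each of shape (H, W)."""
    w = np.asarray(weights, float).reshape(-1, 1, 1)
    t = np.asarray(times, float).reshape(-1, 1, 1)
    sw, swt, swtt = w.sum(0), (w * t).sum(0), (w * t * t).sum(0)
    swp, swtp = (w * stack).sum(0), (w * t * stack).sum(0)
    denom = sw * swtt - swt ** 2
    q = (sw * swtp - swt * swp) / denom          # fitted photoquantity rate
    bias = (swtt * swp - swt * swtp) / denom     # fitted bias/offset
    return q, bias

def fuse_then_subtract(scene_stack, dark_stack, times, weights):
    """Fuse the scene and dark stacks independently, then subtract in the
    photoquantity domain."""
    q_scene, _ = wls_fit(scene_stack, times, weights)
    q_dark, _ = wls_fit(dark_stack, times, weights)
    return q_scene - q_dark
```

In practice the weights would come from a per-pixel noise model rather than being uniform, which is the part the nonlinear WLS optimization in the paper addresses.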
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring methods can introduce staircase artifacts in the homogeneous regions of the latent images recovered from degraded images, while wavelet/frame-based image deblurring methods lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame and TV-based methods complement each other, which is verified by theoretical analysis and experimental results. To further improve the quality of the latent images, a nonconvex penalty function is used as the regularization term of the model, which may induce a sparser solution and more accurately estimate the relatively large gradients or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, each subproblem split off by the alternating direction method of multipliers algorithm from the proposed model can be guaranteed to be a convex optimization problem and hence converges to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where a designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to run applications such as real-time optical flow estimation.
Romanek, Kathleen M; McCaul, Kevin D; Sandgren, Ann K
2005-07-01
To examine the effects of age, body image, and risk framing on treatment decision making for breast cancer using a healthy population. An experimental 2 (younger women, older women) X 2 (survival, mortality frame) between-groups design. Midwestern university. Two groups of healthy women: 56 women ages 18-24 from undergraduate psychology courses and 60 women ages 35-60 from the university community. Healthy women imagined that they had been diagnosed with breast cancer and received information regarding lumpectomy versus mastectomy and recurrence rates. Participants indicated whether they would choose lumpectomy or mastectomy and why. Age, framing condition, treatment choice, body image, and reasons for treatment decision. The difference in treatment selection between younger and older women was mediated by concern for appearance. No main effect for risk framing was found; however, older women were somewhat less likely to select lumpectomy when given a mortality frame. Age, mediated by body image, influences treatment selection of lumpectomy versus mastectomy. Framing has no direct effect on treatment decisions, but younger and older women may be affected by risk information differently. Nurses should provide women who recently have been diagnosed with breast cancer with age-appropriate information regarding treatment alternatives to ensure women's active participation in the decision-making process. Women who have different levels of investment in body image also may have different concerns about treatment, and healthcare professionals should be alert to and empathetic of such concerns.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of the frames. The second step is tracking on the super-resolved images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift, whose advantage is a simple calculation based on the HSV color histogram, which copes with varying object colors. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and good lighting conditions.
Applying compressive sensing to TEM video: A substantial frame rate increase on any camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
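The statistical inversion used in the paper is considerably more sophisticated, but the core compressive-sensing step — recovering a sparse signal from fewer coded measurements than unknowns — can be illustrated with a tiny orthogonal matching pursuit solver. This is a stand-in, not the authors' algorithm; the sensing matrix here plays the role of the flattened per-pixel aperture codes:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    resid = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ resid)))   # most correlated atom
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly, then update the residual
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

In the coded-aperture setting, each camera frame contributes rows of `Phi` built from the binary masks applied to the sub-frames, and the sparsity prior lives in a transform domain rather than the pixel basis.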
Applying compressive sensing to TEM video: A substantial frame rate increase on any camera
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; ...
2015-08-13
Robotically-adjustable microstereotactic frames for image-guided neurosurgery
NASA Astrophysics Data System (ADS)
Kratchman, Louis B.; Fitzpatrick, J. Michael
2013-03-01
Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically adjustable, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, which is well within the required accuracy for deep brain stimulation surgery.
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image with a low-budget complementary metal-oxide-semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if implemented in hardware. The authors propose a real-time look-up-table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data and therefore greatly reduces the cost of the CIS. The method supports not only single-image capture but also bracketing to capture three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
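The look-up-table construction is essentially classical histogram matching between the two previews: map each intensity level of the short-exposure preview to the level of the long-exposure preview with the nearest cumulative histogram value. A minimal grayscale numpy sketch (the ILUT in the paper involves more than this; the names and the 8-bit assumption are illustrative):

```python
import numpy as np

def build_lut(short_prev, long_prev, levels=256):
    """Histogram-matching LUT: maps intensities of the short-exposure
    preview onto the distribution of the long-exposure preview."""
    cdf_s = np.cumsum(np.bincount(short_prev.ravel(), minlength=levels)) / short_prev.size
    cdf_l = np.cumsum(np.bincount(long_prev.ravel(), minlength=levels)) / long_prev.size
    # for each source level, find the target level with the nearest CDF value
    lut = np.searchsorted(cdf_l, cdf_s).clip(0, levels - 1).astype(np.uint8)
    return lut

def correct(image, lut):
    """Apply the correction: a pure table lookup, so no frame memory is
    needed once the LUT exists."""
    return lut[image]
```

Because the LUT is built from the small previews before the full-resolution capture arrives, the capture itself can be corrected pixel-by-pixel as it streams out of the sensor, which is the property the hardware implementation exploits.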
Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI
Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.
2017-01-01
Purpose The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
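The golden-angle ordering that permits freely selectable frame-rate reconstructions can be sketched as follows. The spoke-angle formula is standard golden-angle radial practice; the binning helper and the example TR are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def spoke_angles(n_spokes):
    """Golden-angle radial ordering: each successive spoke is rotated by
    ~111.25 deg, so any contiguous subset covers k-space near-uniformly."""
    ga = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees
    return (np.arange(n_spokes) * ga) % 180.0

def bin_spokes(n_spokes, tr_ms, frame_ms):
    """Retrospectively group spokes into frames of a freely chosen
    duration, using acquisition time stamps (spoke_index * TR)."""
    t = np.arange(n_spokes) * tr_ms
    return (t // frame_ms).astype(int)
```

Because the binning happens after acquisition, the same raw data can be re-binned at, say, 57 ms per frame or any other window without re-scanning.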
The Hot Hand Belief and Framing Effects
ERIC Educational Resources Information Center
MacMahon, Clare; Köppen, Jörn; Raab, Markus
2014-01-01
Purpose: Recent evidence of the hot hand in sport--where success breeds success in a positive recency of successful shots, for instance--indicates that this pattern does not actually exist. Yet the belief persists. We used 2 studies to explore the effects of framing on the hot hand belief in sport. We looked at the effect of sport experience and…
ERIC Educational Resources Information Center
Serna, Gabriel
2014-01-01
This essay examines normative aspects of the gainful employment rule and how the policy frame and image miss important implications for student aid policy. Because the economic and social burdens associated with the policy are typically borne by certain socioeconomic and ethnic groups, the policy frame and image do not identify possible negative…
ERIC Educational Resources Information Center
Putwain, David W.; Symes, Wendy
2016-01-01
Previous research has examined how subjective task-value and expectancy of success influence the appraisal of value-promoting messages used by teachers prior to high-stakes examinations. The aim of this study was to examine whether message-frame (gain or loss-framed messages) also influences the appraisal of value-promoting messages. Two hundred…
Research of spectacle frame measurement system based on structured light method
NASA Astrophysics Data System (ADS)
Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin
2016-10-01
Automatic eyeglass lens edging systems are now widely used to automatically cut and polish the uncut lens based on the spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing around the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light to measure the three-dimensional (3D) data of the spectacle frame is proposed. First, we focus on the processing approach that solves the problem of deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposure, and high-dynamic-range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, the gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish between the object and the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. By further fringe center extraction and depth information acquisition through a look-up-table method, the 3D shape of the spectacle frame is recovered.
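The fringe-center extraction step can be illustrated with a one-row gray-centroid sketch, in which each contiguous above-threshold run of pixels yields one sub-pixel stripe center. This generic centroiding is an assumption standing in for the paper's actual extraction method.

```python
import numpy as np

def fringe_centers(row, thresh):
    """Sub-pixel stripe centers along one image row: intensity-weighted
    centroid of each contiguous above-threshold run (gray-centroid method)."""
    centers, start = [], None
    mask = np.append(row > thresh, False)  # sentinel closes a trailing run
    for i, above in enumerate(mask):
        if above and start is None:
            start = i
        elif not above and start is not None:
            seg = row[start:i]
            centers.append(start + (np.arange(len(seg)) * seg).sum() / seg.sum())
            start = None
    return centers
```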
Coincidence ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen
2014-12-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
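The real-time centroiding idea can be sketched as a local-maximum search followed by an intensity-weighted centroid; returning the spot intensity alongside the position is what enables the correlation with PMT peak heights. The window size, threshold, and overall structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def centroid_spots(frame, threshold, win=2):
    """Find local maxima above a threshold and compute the intensity-weighted
    centroid in a small window around each peak.
    Returns (row, col, total_intensity) per detected spot."""
    spots = []
    h, w = frame.shape
    for r in range(win, h - win):
        for c in range(win, w - win):
            patch = frame[r - win:r + win + 1, c - win:c + win + 1]
            if frame[r, c] < threshold or frame[r, c] < patch.max():
                continue  # not a bright local maximum
            total = patch.sum()
            rows, cols = np.mgrid[r - win:r + win + 1, c - win:c + win + 1]
            spots.append(((rows * patch).sum() / total,
                          (cols * patch).sum() / total,
                          total))
    return spots
```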
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popple, R; Bredel, M; Brezovich, I
Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Scans of ten patients wearing the Leksell head frame, acquired with a modality-specific localizer box, were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprised of an experienced radiation oncologist and neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
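A minimal sketch of the mutual-information similarity that such automated CT-MR registration maximizes over candidate transforms is shown below (the metric only, not the optimizer); the bin count and natural-log convention are arbitrary choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint intensity
    histogram; intensity-based registration searches for the rigid
    transform of one image that maximizes this value."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                    # joint probability
    px = p.sum(axis=1, keepdims=True)        # marginal of image a
    py = p.sum(axis=0, keepdims=True)        # marginal of image b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

MI is high when one image predicts the other (even across modalities with different intensity mappings) and zero when they are independent.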
Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates
NASA Astrophysics Data System (ADS)
Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.
2002-03-01
A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are a promising technique for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem, by distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite into a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. In addition to LOD-Sprite, our distribution method holds promise for other IBMR techniques.
Graphics-Printing Program For The HP Paintjet Printer
NASA Technical Reports Server (NTRS)
Atkins, Victor R.
1993-01-01
IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panfil, J; Patel, R; Surucu, M
Purpose: To compare markerless template-based tracking of lung tumors using dual energy (DE) cone-beam computed tomography (CBCT) projections versus single energy (SE) CBCT projections. Methods: A RANDO chest phantom with a simulated tumor in the upper right lung was used to investigate the effectiveness of tumor tracking using DE and SE CBCT projections. Planar kV projections from CBCT acquisitions were captured at 60 kVp (4 mAs) and 120 kVp (1 mAs) using the Varian TrueBeam and non-commercial iTools Capture software. Projections were taken at approximately every 0.53° while the gantry rotated. Due to limitations of the phantom, angles for which the shoulders blocked the tumor were excluded from tracking analysis. DE images were constructed using a weighted logarithmic subtraction that removed bony anatomy while preserving soft tissue structures. The tumors were tracked separately on DE and SE (120 kVp) images using a template-based tracking algorithm. The tracking results were compared to ground truth coordinates designated by a physician. Matches with a distance of greater than 3 mm from ground truth were designated as failing to track. Results: 363 frames were analyzed. The algorithm successfully tracked the tumor on 89.8% (326/363) of DE frames compared to 54.3% (197/363) of SE frames (p<0.0001). Average distance between tracking and ground truth coordinates was 1.27 ± 0.67 mm for DE versus 1.83 ± 0.74 mm for SE (p<0.0001). Conclusion: This study demonstrates the effectiveness of markerless template-based tracking using DE CBCT. DE imaging resulted in better detectability with more accurate localization on average versus SE. Supported by a grant from Varian Medical Systems.
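The weighted logarithmic subtraction can be sketched directly: the weight w that cancels bone equals the ratio of bone attenuation at the two energies, so the bone term drops out of the difference of log-intensities. All numeric values in the accompanying check are synthetic assumptions.

```python
import numpy as np

def dual_energy_subtract(low_kvp, high_kvp, w):
    """Weighted logarithmic subtraction: DE = ln(I_high) - w * ln(I_low).
    With I = exp(-mu * t), choosing w = mu_high / mu_low for bone makes
    bony anatomy vanish while soft tissue (different mu ratio) remains."""
    eps = 1e-6  # guard against log(0) in unattenuated/dead pixels
    return np.log(high_kvp + eps) - w * np.log(low_kvp + eps)
```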
Report on recent results of the PERCIVAL soft X-ray imager
NASA Astrophysics Data System (ADS)
Khromova, A.; Cautero, G.; Giuressi, D.; Menk, R.; Pinaroli, G.; Stebel, L.; Correa, J.; Marras, A.; Wunderer, C. B.; Lange, S.; Tennert, M.; Niemann, M.; Hirsemann, H.; Smoljanin, S.; Reza, S.; Graafsma, H.; Göttlicher, P.; Shevyakov, I.; Supra, J.; Xia, Q.; Zimmer, M.; Guerrini, N.; Marsh, B.; Sedgwick, I.; Nicholls, T.; Turchetta, R.; Pedersen, U.; Tartoni, N.; Hyun, H. J.; Kim, K. S.; Rah, S. Y.; Hoenk, M. E.; Jewell, A. D.; Jones, T. J.; Nikzad, S.
2016-11-01
The PERCIVAL (Pixelated Energy Resolving CMOS Imager, Versatile And Large) soft X-ray 2D imaging detector is based on stitched, wafer-scale sensors possessing a thick epi-layer, which together with back-thinning and back-side illumination yields elevated quantum efficiency in the photon energy range of 125-1000 eV. Main application fields of PERCIVAL are foreseen in photon science with FELs and synchrotron radiation. This requires a high dynamic range up to 10^5 ph @ 250 eV paired with single photon sensitivity with high confidence at moderate frame rates in the range of 10-120 Hz. These figures imply the availability of dynamic gain switching on a pixel-by-pixel basis and a highly parallel, low noise analog and digital readout, which has been realized in the PERCIVAL sensor layout. Different aspects of the detector performance have been assessed using prototype sensors with different pixel and ADC types. This work reports on recent test results obtained with the newest chip prototypes with the improved pixel and ADC architecture. For the target frame rates in the 10-120 Hz range an average noise floor of 14 e− has been determined, indicating the ability to detect single photons with energies above 250 eV. Owing to the successfully implemented adaptive 3-stage multiple-gain switching, the integrated charge level exceeds 4 × 10^6 e− or 57000 X-ray photons at 250 eV per frame at 120 Hz. For all gains the noise level remains below the Poisson limit, also in high-flux conditions. Additionally, a short overview of the updates on the upcoming 2 Mpixel (P2M) detector system (expected at the end of 2016) will be given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-12-15
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the propagated noise from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames used on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that using the suggested gain correction algorithm a minimum number of reference flat frames (i.e., down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames and (b) yields DQE values identical to those of an independent gain correction method based on the subtraction of flat-field images. They believe this provides robust validation of the proposed method.
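The conventional "gold standard" gain correction that the paper modifies can be sketched as follows: normalize by the average reference flat, rescaled to preserve the mean signal. Noise in the averaged flat propagates into the result, which is exactly the effect the paper's modified algorithm compensates for; that modification itself is not reproduced here.

```python
import numpy as np

def gain_correct(raw, flats):
    """Standard flat-field gain correction: divide the raw frame by the
    mean of the reference flats and rescale to keep the overall level.
    With few flats, residual noise in flat_avg leaks into the output,
    which is why the conventional recipe asks for many reference frames."""
    flat_avg = np.mean(flats, axis=0)
    return raw * (flat_avg.mean() / flat_avg)
```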
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer at a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
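For the special case W = I (plain ℓ0-regularized least squares, with the wavelet frame omitted purely for illustration), the extrapolated hard-thresholding iteration can be sketched as below; the proximal map of the ℓ0 penalty is hard thresholding, and the extrapolation weight beta and iteration count are illustrative assumptions.

```python
import numpy as np

def epiht(A, b, lam, steps=100, beta=0.5):
    """Sketch of extrapolated proximal iterative hard thresholding for
    min ||Ax - b||^2 + lam * ||x||_0. Each step: extrapolate, take a
    gradient step, then hard-threshold (the prox of the l0 penalty)."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    for _ in range(steps):
        y = x + beta * (x - x_prev)         # extrapolation step
        z = y - 2.0 * A.T @ (A @ y - b) / L # gradient step
        x_prev, x = x, np.where(z ** 2 > 2.0 * lam / L, z, 0.0)  # hard threshold
    return x
```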
Face antispoofing based on frame difference and multilevel representation
NASA Astrophysics Data System (ADS)
Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad
2017-07-01
Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made by fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., print photo or replay attacks) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also use a multilevel representation that divides the frame difference into multiple blocks at several scales. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms other state-of-the-art methods on different media and quality metrics.
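The frame-difference and multilevel-block representation can be sketched as follows; the toy per-block intensity histogram is a stand-in assumption for the LBP/LPQ/BSIF descriptors used in the paper, and the grid sizes are illustrative.

```python
import numpy as np

def frame_difference(prev, cur):
    """Absolute difference between successive frames (motion cue that
    separates a live face from a static print photo)."""
    return np.abs(cur.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)

def multilevel_features(diff, levels=(1, 2, 4), bins=8):
    """Split the frame difference into 1x1, 2x2 and 4x4 grids of blocks
    and concatenate a per-block descriptor (a toy histogram here)."""
    feats = []
    for n in levels:
        h, w = diff.shape[0] // n, diff.shape[1] // n
        for i in range(n):
            for j in range(n):
                block = diff[i * h:(i + 1) * h, j * w:(j + 1) * w]
                hist, _ = np.histogram(block, bins=bins, range=(0, 256))
                feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)
```

The resulting feature vector would then be ranked by Fisher score and fed to an SVM, as described above.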
High-frame-rate full-vocal-tract 3D dynamic speech imaging.
Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P
2017-04-01
To achieve high temporal frame rate, high spatial resolution, and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm³. Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution, and high nominal frame rate, providing dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Processing Near-Infrared Imagery of the Orion Heatshield During EFT-1 Hypersonic Reentry
NASA Technical Reports Server (NTRS)
Spisz, Thomas S.; Taylor, Jeff C.; Gibson, David M.; Kennerly, Steve; Osei-Wusu, Kwame; Horvath, Thomas J.; Schwartz, Richard J.; Tack, Steven; Bush, Brett C.; Oliver, A. Brandon
2016-01-01
The Scientifically Calibrated In-Flight Imagery (SCIFLI) team captured high-resolution, calibrated, near-infrared imagery of the Orion capsule during atmospheric reentry of the EFT-1 mission. A US Navy NP-3D aircraft equipped with a multi-band optical sensor package, referred to as Cast Glance, acquired imagery of the Orion capsule's heatshield during a period when Orion was slowing from approximately Mach 10 to Mach 7. The line-of-sight distance ranged from approximately 65 to 40 nmi. Global surface temperatures of the capsule's thermal heatshield derived from the near-infrared intensity measurements complemented the in-depth (embedded) thermocouple measurements. Moreover, these derived surface temperatures are essential for assessing the in-depth thermocouple measurements, which rely on inverse heat transfer methods and material response codes to infer the surface temperature. The paper describes the image processing challenges associated with a manually-tracked, high-angular-rate air-to-air observation. Issues included management of significant frame-to-frame motions due to both tracking jerk and jitter as well as distortions due to atmospheric effects. Corrections for changing sky backgrounds (including some cirrus clouds), atmospheric attenuation, and target orientations and ranges also had to be made. The image processing goal is to reduce the detrimental effects of motion (both sensor and capsule), vibration (jitter), and atmospherics to improve image quality without compromising the quantitative integrity of the data, especially local intensity (temperature) variations. The paper details the approach of selecting and utilizing only the highest-quality images, registering several co-temporal image frames to a single image frame to the extent that frame-to-frame distortions allow, and then co-adding the registered frames to improve image quality and reduce noise.
Using preflight calibration data, the registered and averaged infrared intensity images were converted to surface temperatures on the Orion capsule's heatshield. Temperature uncertainties will be discussed relative to uncertainties of surface emissivity and atmospheric transmission loss. Comparison of limited onboard surface thermocouple data to the image derived surface temperature will be presented.
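The register-then-co-add step can be sketched with simple phase correlation for integer translational shifts; the actual pipeline also had to handle rotation, scale, and atmospheric distortion, so this is only a minimal stand-in under that simplifying assumption.

```python
import numpy as np

def phase_correlate(ref, img):
    """Estimate the integer shift between two frames via phase correlation:
    the normalized cross-power spectrum peaks at the displacement."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the signed range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def coadd(frames, ref_index=0):
    """Register each frame to the reference and average; co-adding N
    aligned frames reduces uncorrelated noise by roughly sqrt(N)."""
    ref = frames[ref_index]
    aligned = []
    for f in frames:
        dy, dx = phase_correlate(ref, f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```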
High-speed transport-of-intensity phase microscopy with an electrically tunable lens.
Zuo, Chao; Chen, Qian; Qu, Weijuan; Asundi, Anand
2013-10-07
We present a high-speed transport-of-intensity equation (TIE) quantitative phase microscopy technique, named TL-TIE, created by combining an electrically tunable lens with a conventional transmission microscope. This permits the specimen to be imaged at different focus positions in rapid succession, with constant magnification and no physically moving parts. The simplified image stack collection significantly reduces the acquisition time and allows diffraction-limited through-focus intensity stacks to be collected at 15 frames per second, making dynamic TIE phase imaging possible. The technique is demonstrated by profiling a microlens array using an optimal frequency selection scheme, and by time-lapse imaging of live breast cancer cells, inverting the defocused phase optical transfer function to correct the phase blurring of traditional TIE. Experimental results illustrate the outstanding capability of the technique for quantitative phase imaging: a simple, non-interferometric, high-speed, high-resolution, and unwrapping-free approach with promising applications in micro-optics, the life sciences, and bio-photonics.
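The conventional Fourier-space TIE solution that such systems build on can be sketched under a uniform in-focus intensity assumption; the two-plane central difference and the symbols are standard TIE practice, not specifics of TL-TIE (which additionally corrects the defocused phase transfer function).

```python
import numpy as np

def tie_phase(i_minus, i_plus, i0, dz, wavelength, pixel):
    """Fourier-space TIE solver for uniform intensity I0: starting from
    k * dI/dz = -I0 * laplacian(phi), invert the Laplacian in frequency
    space:  phi = (k / I0) * F^-1[ F(dI/dz) / (4 pi^2 |u|^2) ]."""
    k = 2.0 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2.0 * dz)      # central difference in z
    fy = np.fft.fftfreq(didz.shape[0], d=pixel)
    fx = np.fft.fftfreq(didz.shape[1], d=pixel)
    u2 = 4.0 * np.pi ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    Ff = np.fft.fft2(didz)
    Ff[0, 0] = 0.0   # DC term: phase is only recovered up to a constant
    u2[0, 0] = 1.0   # avoid division by zero at DC
    return (k / i0) * np.fft.ifft2(Ff / u2).real
```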
Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light
NASA Astrophysics Data System (ADS)
Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander
2018-02-01
A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it, while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate shock compression. Physical processes governing the laser-induced dynamic response such as elastic compression, compaction, pore collapse, fracture, and fragmentation have been imaged, and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potential of accessing complementary information from scientific studies of laser-driven shock compression.
Frame by Frame II: A Filmography of the African American Image, 1978-1994.
ERIC Educational Resources Information Center
Klotman, Phyllis R.; Gibson, Gloria J.
A reference guide on African American film professionals, this book is a companion volume to the earlier "Frame by Frame I." It focuses on giving credit to African Americans who have contributed their talents to a film industry that has scarcely recognized their contributions, building on the aforementioned "Frame by Frame I,"…
ERIC Educational Resources Information Center
Allweiss, Alexandra; Grant, Carl A.; Manning, Karla
2015-01-01
This critical article provides insights into how media frames influence our understandings of school reform in urban spaces by examining images of students during the 2013 school closings in Chicago. Using visual framing analysis and informed by framing theory and critiques of neoliberalism we seek to explore two questions: (1) What role do media…
A framed, 16-image Kirkpatrick–Baez x-ray microscope
Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.; ...
2017-09-08
A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser–plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. These core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions.
Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K
2018-03-21
Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations in all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically debias localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated through five datasets of in vitro and fixed-cell samples.
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple imager array and align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over sufficient length time windows.
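The two downsampling schemes compared above can be sketched as below, with bilinear halving approximated by 2×2 block averaging (the aligned-grid special case of bilinear interpolation); halving each dimension of 658x492 is what yields the quarter-pixel-count 329x246 images.

```python
import numpy as np

def zero_order(img, f=2):
    """Zero-order downsampling: keep every f-th pixel, no filtering
    (aliases high-frequency content)."""
    return img[::f, ::f]

def bilinear_half(img):
    """Halve resolution by averaging each 2x2 block; with aligned grids
    bilinear interpolation reduces to this box average."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```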
Image Based Synthesis for Airborne Minefield Data
2005-12-01
Jia, and C.-K. Tang, "Image repairing: robust image synthesis by adaptive ND tensor voting," Proceedings of the IEEE Computer Society Conference on...The utility is capable of synthesizing single-frame data as well as a list of frames along a flight path. The application is developed in MATLAB 6.5 using the
Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L
2016-04-01
Evidence suggests that high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate the potential misinterpretations associated with typical histological approaches to validation, which carry estimated errors of 1 mm. The method can be used to create annotated datasets and to develop automated plaque classification methods, and it can be extended to other intravascular imaging modalities.
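The core idea of optimizing virtual-catheter parameters to maximize cryo/IVOCT lumen overlap can be illustrated with a much-simplified sketch; the Dice objective, the 2-D disk masks, and the grid search over translations are our assumptions, standing in for the paper's 11-parameter model:

```python
import numpy as np

# Illustrative sketch only (not the paper's 11-parameter registration):
# align a model lumen mask to a fixed cryo lumen mask by maximizing their
# overlap (Dice coefficient) over candidate translations.
def dice(a, b):
    """Dice overlap of two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def make_disk(shape, center, radius):
    """Binary disk mask, a toy stand-in for a segmented lumen."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

cryo_lumen = make_disk((64, 64), (32, 32), 10)   # fixed reference mask
# Grid-search candidate shifts of the model lumen; keep the best overlap.
best_dice, best_shift = max(
    (dice(make_disk((64, 64), (32 + dy, 32 + dx), 10), cryo_lumen), (dy, dx))
    for dy in range(-5, 6) for dx in range(-5, 6))
```

In the real method the search space is an 11-parameter polynomial catheter path rather than a 2-D translation, but the objective, maximal lumen overlap, is the same idea.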
GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array
Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.
2014-01-01
Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080
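The data-level parallelism the GPU exploits comes from the fact that each beamformed pixel is an independent delay-and-sum over element signals. The sketch below is a minimal monostatic synthetic-aperture example with one simulated point scatterer; the geometry, pulse shape, and parameters are illustrative assumptions, not the authors' ring-array code:

```python
import numpy as np

# Minimal delay-and-sum synthetic-aperture sketch: every pixel of the image
# is computed independently from the per-element RF data, which is exactly
# the data-level parallelism a GPU exploits.
c, fs = 1540.0, 40e6                       # sound speed (m/s), sample rate (Hz)
elems = np.linspace(-5e-3, 5e-3, 16)       # 16 element positions (m)
scat = (0.0, 20e-3)                        # point scatterer at (x, z)
t = np.arange(2048) / fs

# Simulated per-element RF: a Gaussian pulse at each round-trip delay.
delays = 2 * np.hypot(elems - scat[0], scat[1]) / c
rf = np.exp(-((t[None, :] - delays[:, None]) * 5e6) ** 2)

# Beamform a small grid of candidate pixels around the scatterer.
xs = np.linspace(-2e-3, 2e-3, 21)
zs = np.linspace(18e-3, 22e-3, 21)
image = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        d = 2 * np.hypot(elems - x, z) / c                    # pixel delays
        idx = np.clip(np.round(d * fs).astype(int), 0, t.size - 1)
        image[iz, ix] = rf[np.arange(len(elems)), idx].sum()  # delay and sum

peak = np.unravel_index(np.argmax(image), image.shape)        # brightest pixel
```

On a GPU, the two pixel loops map directly onto a thread grid, which is what makes the frame-rate gain reported above possible.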
1997-08-27
This image of the rock "Wedge" was taken from the Sojourner rover's rear color camera on Sol 37. The position of the rover relative to Wedge is seen in MRPS 83349. The segmented rod visible in the middle of the frame is the deployment arm for the Alpha Proton X-Ray Spectrometer (APXS). The APXS, the bright, cylindrical object at the end of the arm, is positioned against Wedge and is designed to measure the rock's chemical composition. This was done successfully on the night of Sol 37. http://photojournal.jpl.nasa.gov/catalog/PIA00906
License Plate Recognition System for Indian Vehicles
NASA Astrophysics Data System (ADS)
Sanap, P. R.; Narote, S. P.
2010-11-01
We consider the task of recognition of Indian vehicle number plates (also called license plates or registration plates in other countries). A system for Indian number plate recognition must cope with wide variations in the appearance of the plates. Each state uses its own range of designs with font variations between the designs. Also, vehicle owners may place the plates inside glass covered frames or use plates made of nonstandard materials. These issues compound the complexity of automatic number plate recognition, making existing approaches inadequate. We have developed a system that incorporates a novel combination of image processing and artificial neural network technologies to successfully locate and read Indian vehicle number plates in digital images. Commercial application of the system is envisaged.
NASA Astrophysics Data System (ADS)
Cunningham, Cindy C.; Peloquin, Tracy D.
1999-02-01
Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
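The speckle-variance computation itself is compact, which is why it parallelizes so well; a minimal sketch on synthetic frames (not SS-OCT data) follows:

```python
import numpy as np

# Minimal speckle-variance sketch under assumed data: SV is the per-pixel
# intensity variance across N consecutive registered structural frames,
# high where flow decorrelates the speckle and low in static tissue.
def speckle_variance(frames):
    """frames: (N, H, W) registered structural intensities -> (H, W) SV map."""
    return np.var(frames, axis=0)

rng = np.random.default_rng(1)
static = np.full((4, 32, 32), 100.0)                        # stationary tissue
frames = static + rng.normal(0, 1.0, static.shape)          # mild speckle noise
frames[:, 10:14, 10:14] += rng.normal(0, 20.0, (4, 4, 4))   # "vessel": strong decorrelation

sv = speckle_variance(frames)
vessel_sv = sv[10:14, 10:14].mean()   # inside the simulated vessel
tissue_sv = sv[20:, 20:].mean()       # static background
```

The n = 4 in the abstract corresponds to N = 4 frames entering the variance here; subpixel registration beforehand keeps `tissue_sv` low.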
A wavelet-based Bayesian framework for 3D object segmentation in microscopy
NASA Astrophysics Data System (ADS)
Pan, Kangyu; Corrigan, David; Hillebrand, Jens; Ramaswami, Mani; Kokaram, Anil
2012-03-01
In confocal microscopy, target objects are labeled with fluorescent markers in the living specimen and usually appear with irregular brightness in the observed images. Also, because of the presence of out-of-focus objects in the image, the segmentation of 3-D objects in the stack of image slices captured at different depth levels of the specimen still relies heavily on manual analysis. In this paper, a novel Bayesian model is proposed for segmenting 3-D synaptic objects from a given image stack. In order to address the irregular brightness and out-of-focus problems, the segmentation model employs a likelihood using the luminance-invariant 'wavelet features' of image objects in the dual-tree complex wavelet domain, as well as a likelihood based on the vertical intensity profile of the image stack in 3-D. Furthermore, a smoothness 'frame' prior based on a priori knowledge of the connections of the synapses is introduced to the model to enhance the connectivity of the synapses. As a result, our model can successfully segment the in-focus target synaptic objects from a 3-D image stack with irregular brightness.
Descent Through Clouds to Surface
2005-01-18
This frame from an animation is made up from a sequence of images taken by the Descent Imager/Spectral Radiometer (DISR) instrument on board ESA's Huygens probe, during its successful descent to Titan on Jan. 14, 2005. The animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA07234 It shows what a passenger riding on Huygens would have seen. The sequence starts from an altitude of 152 kilometers (about 95 miles) and initially only shows a hazy view looking into thick cloud. As the probe descends, ground features can be discerned and Huygens emerges from the clouds at around 30 kilometers (about 19 miles) altitude. The ground features seem to rotate as Huygens spins slowly under its parachute. The DISR consists of a downward-looking High Resolution Imager (HRI), a Medium Resolution Imager (MRI), which looks out at an angle, and a Side Looking Imager (SLI). For this animation, most images used were captured by the HRI and MRI. Once on the ground, the final landing scene was captured by the SLI. The Descent Imager/Spectral Radiometer is one of two NASA instruments on the probe.
SkySat-1: very high-resolution imagery from a small satellite
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk
2014-10-01
This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
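The SNR benefit of combining frames can be sketched with a toy frame-stacking model (our simplification; the actual "digital TDI" pipeline also handles registration and GSD reduction): averaging N registered frames leaves the signal unchanged while uncorrelated noise drops by roughly sqrt(N):

```python
import numpy as np

# Toy model of multi-frame stacking: after registering N short-exposure
# frames of the same scene, averaging preserves the signal while the
# standard deviation of uncorrelated noise falls by about sqrt(N).
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(50, 200, 64), (64, 1))   # synthetic radiance map
N = 16
frames = scene + rng.normal(0, 10.0, (N, 64, 64))    # noisy raw frames

stacked = frames.mean(axis=0)
noise_single = (frames[0] - scene).std()             # noise of one raw frame
noise_stacked = (stacked - scene).std()              # noise after stacking
gain = noise_single / noise_stacked                  # expect ~sqrt(16) = 4
```

This sqrt(N) behavior is what lets moderate-quality raw frames from a low-cost sensor be combined into high-SNR products on the ground.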
A trillion frames per second: the techniques and applications of light-in-flight photography.
Faccio, Daniele; Velten, Andreas
2018-06-14
Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability when compared with our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum generation, and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on 'light-in-flight' imaging, i.e., applications where the key element is the imaging of light itself at frame rates that allow its motion to be frozen, thereby extracting information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.
Motion Detection in Ultrasound Image-Sequences Using Tensor Voting
NASA Astrophysics Data System (ADS)
Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka
2008-05-01
Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system that adopts a combination of coded excitation and synthetic aperture focusing techniques. In our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce these artifacts, we have examined the application of tensor voting to the imaging method, which adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated three-dimensional ultrasound image sequences.
Frames and counter-frames giving meaning to dementia: a framing analysis of media content.
Van Gorp, Baldwin; Vercruysse, Tom
2012-04-01
Media tend to reinforce the stigmatization of dementia as one of the most dreaded diseases in Western society, which may have repercussions on the quality of life of those with the illness. Persons with dementia, and also those around them, become imbued with the idea that life comes to an end as soon as the diagnosis is pronounced. The aim of this paper is to understand the dominant images related to dementia by means of an inductive framing analysis. The sample is composed of newspaper articles from six Belgian newspapers (2008-2010) and a convenience sample of popular images of the condition in movies, documentaries, literature, and health care communications. The results demonstrate that the most dominant frame postulates that a human being is composed of two distinct parts: a material body and an immaterial mind. If this frame is used, the person with dementia ends up with no identity, which is in opposition to the Western ideals of personal self-fulfilment and individualism. For each dominant frame an alternative counter-frame is defined. It is concluded that the relative absence of counter-frames confirms the negative image of dementia. The inventory may help caregivers and other professionals who want to evaluate their communication strategy. It is discussed that a more resolute use of counter-frames in communication about dementia might mitigate the stigma that surrounds it. Copyright © 2012 Elsevier Ltd. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-31
... Certain Digital Photo Frames and Image Display Devices and Components Thereof, DN 2842; the Commission is... importation of certain digital photo frames and image display devices and components thereof. The complaint...
Pemp, Berthold; Kardon, Randy H; Kircher, Karl; Pernicka, Elisabeth; Schmidt-Erfurth, Ursula; Reitner, Andreas
2013-07-01
Automated detection of subtle changes in peripapillary retinal nerve fibre layer thickness (RNFLT) over time using optical coherence tomography (OCT) is limited by inherent image quality before layer segmentation, stabilization of the scan on the peripapillary retina, and its precise placement on repeated scans. The present study evaluates image quality and reproducibility of spectral domain (SD)-OCT comparing different rates of automatic real-time tracking (ART). Peripapillary RNFLT was measured in 40 healthy eyes on six different days using SD-OCT with an eye-tracking system. Image brightness of unaveraged single-frame OCT B-scans was compared with that of images acquired using ART with 16 and with 100 averaged frames. Short-term and day-to-day reproducibility was evaluated by calculation of intraindividual coefficients of variation (CV) and intraclass correlation coefficients (ICC) for single measurements as well as for seven repeated measurements per study day. Image brightness, short-term reproducibility, and day-to-day reproducibility were significantly improved using ART with 100 frames compared to one and 16 frames. Short-term CV was reduced from 0.94 ± 0.31 % and 0.91 ± 0.54 % in scans of one and 16 frames to 0.56 ± 0.42 % in scans of 100 averaged frames (P ≤ 0.003 each). Day-to-day CV was reduced from 0.98 ± 0.86 % and 0.78 ± 0.56 % to 0.53 ± 0.43 % (P ≤ 0.022 each). The range of ICC was 0.94 to 0.99. Sample size calculations for detecting changes of RNFLT over time in the range of 2 to 5 μm were performed based on intraindividual variability. Image quality and reproducibility of mean peripapillary RNFLT measurements using SD-OCT are improved by averaging OCT images with eye tracking compared to unaveraged single-frame images. Further improvement is achieved by increasing the number of frames per measurement, and by averaging values of repeated measurements per session.
These strategies may allow a more accurate evaluation of RNFLT reduction in clinical trials observing optic nerve degeneration.
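The reproducibility statistic used above, the intraindividual coefficient of variation, is straightforward to compute; the sketch below uses invented RNFLT values (µm) for illustration only:

```python
# Minimal sketch of the reproducibility statistic used above: the
# intraindividual coefficient of variation (CV) of repeated RNFL-thickness
# measurements, in percent. All measurement values here are invented.
def cv_percent(measurements):
    """Sample standard deviation divided by the mean, as a percentage."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return 100.0 * var ** 0.5 / mean

# Seven repeated measurements per session, as in the study design:
single_frame = [98.0, 99.5, 97.2, 99.0, 98.9, 97.6, 98.4]   # µm, 1-frame scans
averaged_100 = [98.2, 98.6, 98.1, 98.5, 98.3, 98.2, 98.4]   # µm, 100-frame ART
```

A lower CV for the averaged series is the quantitative signature of the improvement reported in the abstract.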
Contrast-enhanced MR Angiography of the Abdomen with Highly Accelerated Acquisition Techniques
Mostardi, Petrice M.; Glockner, James F.; Young, Phillip M.
2011-01-01
Purpose: To demonstrate that highly accelerated (net acceleration factor [Rnet] ≥ 10) acquisition techniques can be used to generate three-dimensional (3D) subsecond timing images, as well as diagnostic-quality high-spatial-resolution contrast material–enhanced (CE) renal magnetic resonance (MR) angiograms with a single split dose of contrast material. Materials and Methods: All studies were approved by the institutional review board and were HIPAA compliant; written consent was obtained from all participants. Twenty-two studies were performed in 10 female volunteers (average age, 47 years; range, 27–62 years) and six patients with renovascular disease (three women; average age, 48 years; range, 37–68 years; three men; average age, 60 years; range, 50–67 years; composite average age, 54 years; range, 38–68 years). The two-part protocol consisted of a low-dose (2 mL contrast material) 3D timing image with approximate 1-second frame time, followed by a high-spatial-resolution (1.0–1.6-mm isotropic voxels) breath-hold 3D renal MR angiogram (18 mL) over the full abdominal field of view. Both acquisitions used two-dimensional (2D) sensitivity encoding acceleration factor (R) of eight and 2D homodyne (HD) acceleration (RHD) of 1.4–1.8 for Rnet = R · RHD of 10 or higher. Statistical analysis included determination of mean values and standard deviations of image quality scores performed by two experienced reviewers with use of eight evaluation criteria. Results: The 2-mL 3D time-resolved image successfully portrayed progressive arterial filling in all 22 studies and provided an anatomic overview of the vasculature. Successful timing was also demonstrated in that the renal MR angiogram showed adequate or excellent portrayal of the main renal arteries in 21 of 22 studies. 
Conclusion: Two-dimensional acceleration techniques with Rnet of 10 or higher can be used in CE MR angiography to acquire (a) a 3D image series with 1-second frame time, allowing accurate bolus timing, and (b) a high-spatial-resolution renal angiogram. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11110242/-/DC1 PMID:21900616
Kura, Sreekanth; Xie, Hongyu; Fu, Buyin; Ayata, Cenk; Boas, David A; Sakadžić, Sava
2018-06-01
Resting state functional connectivity (RSFC) allows the study of functional organization in the normal and diseased brain by measuring the spontaneous brain activity generated under resting conditions. Intrinsic optical signal imaging (IOSI) based on multiple illumination wavelengths has been used successfully to compute RSFC maps in animal studies. The IOSI setup complexity would be greatly reduced if only a single wavelength could be used to obtain comparable RSFC maps. We used anesthetized mice and performed various comparisons between the RSFC maps based on a single wavelength as well as on oxy-, deoxy-, and total hemoglobin concentration changes. The RSFC maps based on IOSI at a single wavelength selected for sensitivity to blood volume changes are quantitatively comparable to the RSFC maps based on oxy- and total hemoglobin concentration changes obtained by the more complex IOSI setups. Moreover, RSFC maps do not require CCD cameras with very high frame acquisition rates, since our results demonstrate that they can be computed from data obtained at frame rates as low as 5 Hz. Our results will have general utility for guiding future RSFC studies based on IOSI and for making decisions about IOSI system designs.
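A seed-based sketch of the RSFC computation at a 5 Hz frame rate (toy time courses, not the authors' data) shows why modest frame rates suffice for slow resting-state hemodynamics:

```python
import numpy as np

# Sketch of a seed-based RSFC computation on single-wavelength data:
# correlate every pixel's time course with a seed time course. A 5 Hz
# sampling rate amply resolves slow (<0.1 Hz) hemodynamic fluctuations,
# which is why high camera frame rates are unnecessary.
fs = 5.0                                    # frames per second
t = np.arange(0, 300, 1 / fs)               # a 5-minute resting-state run
rng = np.random.default_rng(2)
slow = np.sin(2 * np.pi * 0.08 * t)         # shared slow fluctuation

seed = slow + 0.5 * rng.standard_normal(t.size)         # seed-region signal
connected = slow + 0.5 * rng.standard_normal(t.size)    # same functional network
unconnected = 0.5 * rng.standard_normal(t.size)         # unrelated region

def corr(a, b):
    """Pearson correlation of two time courses."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

r_conn = corr(seed, connected)
r_unconn = corr(seed, unconnected)
```

Computing `corr` against the seed for every pixel yields the RSFC map; only regions sharing the slow fluctuation correlate strongly.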
Mcclellan, James H.; Ravichandran, Lakshminarayan; Tridandapani, Srini
2013-01-01
Two novel methods for detecting cardiac quiescent phases from B-mode echocardiography using a correlation-based frame-to-frame deviation measure were developed. Accurate knowledge of cardiac quiescence is crucial to the performance of many imaging modalities, including computed tomography coronary angiography (CTCA). Synchronous electrocardiography (ECG) and echocardiography data were obtained from 10 healthy human subjects (four male, six female, 23–45 years) and the interventricular septum (IVS) was observed using the apical four-chamber echocardiographic view. The velocity of the IVS was derived from active contour tracking and verified using tissue Doppler imaging echocardiography methods. In turn, the frame-to-frame deviation methods for identifying quiescence of the IVS were verified using active contour tracking. The timing of the diastolic quiescent phase was found to exhibit both inter- and intra-subject variability, suggesting that the current method of CTCA gating based on the ECG is suboptimal and that gating based on signals derived from cardiac motion are likely more accurate in predicting quiescence for cardiac imaging. Two robust and efficient methods for identifying cardiac quiescent phases from B-mode echocardiographic data were developed and verified. The methods presented in this paper will be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:26609501
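One plausible form of a correlation-based frame-to-frame deviation measure (our sketch on synthetic frames, not necessarily the authors' exact definition) is 1 minus the normalized correlation of consecutive frames, which dips toward zero during quiescent phases:

```python
import numpy as np

# Hedged sketch of a correlation-based frame-to-frame deviation measure:
# deviation = 1 - normalized correlation of consecutive frames, so it
# approaches 0 during cardiac quiescence and rises with septal motion.
def frame_deviation(a, b):
    """1 minus the Pearson correlation of two image frames."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return 1.0 - float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(3)
base = rng.random((32, 32))                                  # one echo frame
moving = np.roll(base, 3, axis=0) + 0.05 * rng.standard_normal((32, 32))
quiet = base + 0.05 * rng.standard_normal((32, 32))          # near-identical frame

dev_moving = frame_deviation(base, moving)   # large interframe motion
dev_quiet = frame_deviation(base, quiet)     # quiescent phase
```

Scanning this deviation over a cardiac cycle and locating its minima is the kind of motion-derived gating signal the paper argues is better than ECG-based gating.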
NASA Astrophysics Data System (ADS)
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has significant scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an effective target detection algorithm is proposed in this paper, based on the frame-to-frame correlation of the moving target and the decorrelation of noise in sequential images, implemented with OpenCV. Firstly, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, we can detect the infrared moving target more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking once the OpenCV algorithms are ported to a DSP platform. Afterwards, we use optimal thresholding to segment the images, transforming the gray images into binary images to provide a better condition for detection in the image sequences. Finally, based on the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, suppress small spurious regions, and smooth region boundaries. Experimental results prove that our algorithm achieves rapid detection of small infrared targets.
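The combined frame-difference/background-subtraction scheme can be sketched in a few lines; the numpy-only version below uses an assumed update rate `alpha` and threshold, not the paper's tuned values:

```python
import numpy as np

# Sketch of the combined detector described above (parameter values are
# assumed): a running background B is updated per pixel as
# B <- (1 - alpha) * B + alpha * F, and a pixel is flagged as target only
# when BOTH the background-subtraction and frame-difference cues exceed
# the threshold, exploiting their complementarity.
def detect(frames, alpha=0.05, thresh=20.0):
    background = frames[0].astype(float)
    prev = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(float)
        bg_mask = np.abs(frame - background) > thresh   # background subtraction
        fd_mask = np.abs(frame - prev) > thresh         # frame difference
        masks.append(bg_mask & fd_mask)                 # combined cue
        background = (1 - alpha) * background + alpha * frame  # adaptive update
        prev = frame
    return masks

# Toy sequence: a small bright target moving over a noisy background.
rng = np.random.default_rng(4)
frames = [rng.normal(50, 2, (24, 24)) for _ in range(5)]
for i, f in enumerate(frames):
    f[5, 4 + i] += 100.0            # target advances one pixel per frame

masks = detect(frames)
hits = [m[5, 5 + i] for i, m in enumerate(masks)]   # target found in each frame?
```

The AND of the two cues suppresses the trailing "ghost" that background subtraction alone leaves at the target's old positions; morphology would then clean up any residual speckle.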
Podkowinski, Dominika; Sharian Varnousfaderani, Ehsan; Simader, Christian; Bogunovic, Hrvoje; Philip, Ana-Maria; Gerendas, Bianca S.
2017-01-01
Background and Objective To determine optimal image averaging settings for Spectralis optical coherence tomography (OCT) in patients with and without cataract. Study Design/Material and Methods In a prospective study, the eyes were imaged before and after cataract surgery using seven different image averaging settings. Image quality was quantitatively evaluated using signal-to-noise ratio, distinction between retinal layer image intensity distributions, and retinal layer segmentation performance. Measures were compared pre- and postoperatively across different degrees of averaging. Results 13 eyes of 13 patients were included and 1092 layer boundaries analyzed. Preoperatively, increasing image averaging led to a logarithmic growth in all image quality measures up to 96 frames. Postoperatively, increasing averaging beyond 16 images resulted in a plateau without further benefits to image quality. Averaging 16 frames postoperatively provided comparable image quality to 96 frames preoperatively. Conclusion In patients with clear media, averaging 16 images provided optimal signal quality. A further increase in averaging was only beneficial in the eyes with senile cataract. However, prolonged acquisition time and possible loss of details have to be taken into account. PMID:28630764
High-Speed Video Observations of a Natural Lightning Stepped Leader
NASA Astrophysics Data System (ADS)
Jordan, D. M.; Hill, J. D.; Uman, M. A.; Yoshida, S.; Kawasaki, Z.
2010-12-01
High-speed video images of one branch of a natural negative lightning stepped leader were obtained at a frame rate of 300 kfps (3.33-μs exposure) on June 18, 2010 at the International Center for Lightning Research and Testing (ICLRT) located on the Camp Blanding Army National Guard Base in north-central Florida. The images were acquired using a 20 mm Nikon lens mounted on a Photron SA1.1 high-speed camera. A total of 225 frames (about 0.75 ms) of the downward stepped leader were captured, followed by 45 frames of the leader channel re-illumination by the return stroke and subsequent decay following the ground attachment of the primary leader channel. Luminous characteristics of dart-stepped leader propagation in triggered lightning obtained by Biagi et al. [2009, 2010] and of long laboratory spark formation [e.g., Bazelyan and Raizer, 1998; Gallimberti et al., 2002] are evident in the frames of the natural lightning stepped leader. Space stems/leaders are imaged in twelve different frames at various distances in front of the descending leader tip, which branches into two distinct components 125 frames after the channel enters the field of view. In each case, the space stem/leader appears to connect to the leader tip above in the subsequent frame, forming a new step. Each connection is associated with significant isolated brightening of the channel at the connection point followed by typically three or four frames of upward propagating re-illumination of the existing leader channel. In total, at least 80 individual steps were imaged.
36 CFR 1194.22 - Web-based intranet and internet information and applications.
Code of Federal Regulations, 2012 CFR
2012-07-01
... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...
36 CFR 1194.22 - Web-based intranet and internet information and applications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...
36 CFR § 1194.22 - Web-based intranet and internet information and applications.
Code of Federal Regulations, 2013 CFR
2013-07-01
... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-12
... Image Display Devices and Components Thereof; Notice of Request for Written Submissions on Remedy, the... importation, and the sale within the United States after importation of certain digital photo frames and image... the President, has 60 days to approve or disapprove the Commission's action. See section 337(j), 19 U...
NASA Astrophysics Data System (ADS)
Megens, Remco T. A.; Reitsma, Sietze; Prinzen, Lenneke; Oude Egbrink, Mirjam G. A.; Engels, Wim; Leenders, Peter J. A.; Brunenberg, Ellen J. L.; Reesink, Koen D.; Janssen, Ben J. A.; Ter Haar Romeny, Bart M.; Slaaf, Dick W.; van Zandvoort, Marc A. M. J.
2010-01-01
In vivo (molecular) imaging of the vessel wall of large arteries at subcellular resolution is crucial for unraveling vascular pathophysiology. We previously showed the applicability of two-photon laser scanning microscopy (TPLSM) in mounted arteries ex vivo. However, in vivo TPLSM has thus far suffered from in-frame and between-frame motion artifacts due to arterial movement with cardiac and respiratory activity. Now, motion artifacts are suppressed by accelerated image acquisition triggered on cardiac and respiratory activity. In vivo TPLSM is performed on rat renal and mouse carotid arteries, both surgically exposed and labeled fluorescently (cell nuclei, elastin, and collagen). The use of short acquisition times consistently limit in-frame motion artifacts. Additionally, triggered imaging reduces between-frame artifacts. Indeed, structures in the vessel wall (cell nuclei, elastic laminae) can be imaged at subcellular resolution. In mechanically damaged carotid arteries, even the subendothelial collagen sheet (~1 μm) is visualized using collagen-targeted quantum dots. We demonstrate stable in vivo imaging of large arteries at subcellular resolution using TPLSM triggered on cardiac and respiratory cycles. This creates great opportunities for studying (diseased) arteries in vivo or immediate validation of in vivo molecular imaging techniques such as magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET).
Woo, Jonghye; Tamarappoo, Balaji; Dey, Damini; Nakazato, Ryo; Le Meunier, Ludovic; Ramesh, Amit; Lazewatsky, Joel; Germano, Guido; Berman, Daniel S; Slomka, Piotr J
2011-11-01
The authors aimed to develop an image-based registration scheme to detect and correct patient motion in stress and rest cardiac positron emission tomography (PET)/CT images. The patient motion correction was of primary interest and the effects of patient motion with the use of flurpiridaz F 18 and (82)Rb were demonstrated. The authors evaluated stress/rest PET myocardial perfusion imaging datasets in 30 patients (60 datasets in total, 21 male and 9 female) using a new perfusion agent (flurpiridaz F 18) (n = 16) and (82)Rb (n = 14), acquired on a Siemens Biograph-64 scanner in list mode. Stress and rest images were reconstructed into 4 ((82)Rb) or 10 (flurpiridaz F 18) dynamic frames (60 s each) using standard reconstruction (2D attenuation weighted ordered subsets expectation maximization). Patient motion correction was achieved by an image-based registration scheme optimizing a cost function using modified normalized cross-correlation that combined global and local features. For comparison, visual scoring of motion was performed on the scale of 0 to 2 (no motion, moderate motion, and large motion) by two experienced observers. The proposed registration technique had a 93% success rate in removing left ventricular motion, as visually assessed. The maximum detected motion extent for stress and rest were 5.2 mm and 4.9 mm for flurpiridaz F 18 perfusion and 3.0 mm and 4.3 mm for (82)Rb perfusion studies, respectively. Motion extent (maximum frame-to-frame displacement) obtained for stress and rest were (2.2 ± 1.1, 1.4 ± 0.7, 1.9 ± 1.3) mm and (2.0 ± 1.1, 1.2 ± 0.9, 1.9 ± 0.9) mm for flurpiridaz F 18 perfusion studies and (1.9 ± 0.7, 0.7 ± 0.6, 1.3 ± 0.6) mm and (2.0 ± 0.9, 0.6 ± 0.4, 1.2 ± 1.2) mm for (82)Rb perfusion studies, respectively. A visually detectable patient motion threshold was established to be ≥2.2 mm, corresponding to visual user scores of 1 and 2.
After motion correction, the average increase in contrast-to-noise ratio (CNR) over all frames with motion larger than the threshold was 16.2% in stress flurpiridaz F 18 studies and 12.2% in rest flurpiridaz F 18 studies. The average CNR increases were 4.6% in stress (82)Rb studies and 4.3% in rest (82)Rb studies. Fully automatic motion correction of dynamic PET frames can be performed accurately, potentially allowing improved image quantification of cardiac PET data.
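The paper's cost function combines global and local features in a modified normalized cross-correlation; that modification is not spelled out in the abstract, so the following is only a minimal sketch of the underlying principle, plain NCC maximized over integer shifts, on a synthetic frame (the Gaussian "myocardium-like" blob is hypothetical test data):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-shape images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def register_shift(ref, mov, max_shift=3):
    """Exhaustive integer-shift search maximizing NCC against a reference frame."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = ncc(ref, shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# Synthetic reference frame with a bright blob, plus a shifted copy of it.
y, x = np.mgrid[0:64, 0:64]
frame = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / 50.0)
moved = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)
print(register_shift(frame, moved))  # (-2, 1): the shift realigning moved to frame
```

A real implementation would operate on the reconstructed dynamic frames and restrict the search to the left-ventricular region rather than the whole image.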
Evaluation of Skybox Video and Still Image products
NASA Astrophysics Data System (ADS)
d'Angelo, P.; Kuschk, G.; Reinartz, P.
2014-11-01
The SkySat-1 satellite launched by Skybox Imaging on November 21, 2013 opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high-definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. Image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching or multi-image matching are used and compared. As no ground-truth height reference model is available to the authors, comparisons over flat surfaces and between differently matched DSMs are performed. Additionally, visual inspection of DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.
Deep brain stimulation with a pre-existing cochlear implant: Surgical technique and outcome.
Eddelman, Daniel; Wewel, Joshua; Wiet, R Mark; Metman, Leo V; Sani, Sepehr
2017-01-01
Patients with previously implanted cranial devices pose a special challenge in deep brain stimulation (DBS) surgery. We report the implantation of bilateral DBS leads in a patient with a cochlear implant. Technical nuances and long-term interdevice functionality are presented. A 70-year-old patient with advancing Parkinson's disease and a previously placed cochlear implant for sensorineural hearing loss was referred for placement of bilateral DBS in the subthalamic nucleus (STN). Prior to DBS, the patient underwent surgical removal of the subgaleal cochlear magnet, followed by stereotactic MRI, frame placement, stereotactic computed tomography (CT), and merging of imaging studies. This technique allowed for successful computational merging, MRI-guided targeting, and lead implantation with acceptable accuracy. Formal testing and programming of both devices were successful without electrical interference. Successful DBS implantation with high-resolution MRI-guided targeting is technically feasible in patients with previously implanted cochlear implants by following proper precautions.
A programmable display layer for virtual reality system architectures.
Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd
2010-01-01
Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We will show three benefits of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to those of a classic level-of-detail approach.
Coincidence ion imaging with a fast frame camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei
2014-12-15
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
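The real-time centroiding step reduces each camera frame to a list of spot positions and intensities; the intensities are what get correlated with the PMT peak heights. As an offline illustration only (the paper's actual algorithms run at 1 kHz and are not reproduced here), connected above-threshold pixels can be grouped and reduced to intensity-weighted centroids like this:

```python
import numpy as np
from collections import deque

def centroid_spots(frame, thresh):
    """Label connected above-threshold pixels (4-connectivity) and return
    ((cy, cx), total_intensity) for each spot, in scan order."""
    mask = frame > thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = frame.shape
    spots = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                queue, pix = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while queue:                      # flood-fill one spot
                    y, x = queue.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                total = sum(frame[p] for p in pix)
                cy = sum(p[0] * frame[p] for p in pix) / total
                cx = sum(p[1] * frame[p] for p in pix) / total
                spots.append(((cy, cx), total))
    return spots
```

The per-spot totals play the role of the "intensity of ion spots" that the system matches against time-of-flight peak heights for multi-hit assignment.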
Liu, Chunbo; Chen, Jingqiu; Liu, Jiaxin; Han, Xiang'e
2018-04-16
To obtain a high imaging frame rate, a computational ghost imaging system scheme is proposed based on an optical fiber phased array (OFPA). Through high-speed electro-optic modulators, the randomly modulated OFPA can provide much faster speckle projection, which can be precomputed according to the geometry of the fiber array and the known modulation phases. Receiving the signal light with a low-pixel-count APD array effectively reduces the required sampling quantity and computational complexity, owing to the reduced data dimensionality, while avoiding the image aliasing caused by the spatial periodicity of the speckles. The results of analysis and simulation show that the frame rate of the proposed imaging system can be significantly improved compared with traditional systems.
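The reconstruction at the heart of computational ghost imaging correlates each known speckle pattern with the corresponding single-pixel (bucket) signal. The sketch below uses uniform random patterns and a hypothetical target; a real OFPA system would substitute the precomputed far-field speckles described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 16, 2000                              # image side, number of speckle frames
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                        # hypothetical target transmittance

patterns = rng.random((m, n, n))             # stand-in for precomputed OFPA speckles
bucket = (patterns * obj).sum(axis=(1, 2))   # bucket-detector signal per frame

# Correlation reconstruction: G(x, y) = <I(x, y) * S> - <I(x, y)> * <S>
ghost = (patterns * bucket[:, None, None]).mean(axis=0) \
        - patterns.mean(axis=0) * bucket.mean()
```

Pixels inside the target covary with the bucket signal and stand out against the near-zero background; the achievable frame rate then depends on how fast new patterns can be modulated and how many samples m the reconstruction needs, which is what the low-pixel APD array helps reduce.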
Integrated sensor with frame memory and programmable resolution for light adaptive imaging
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2004-01-01
An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
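On-chip, the summation is done by the column integrators; the effect on the data can be sketched offline as block summation over an n-by-n pixel patch, where summed signal grows as n² while uncorrelated noise grows only as n, so SNR improves as the patch is enlarged at low light (a simplified model of the sensor's behavior, not its circuit):

```python
import numpy as np

def bin_patches(frame, n):
    """Sum signals from n-by-n neighbouring pixels, trading spatial
    resolution for brightness and signal-to-noise ratio."""
    h, w = frame.shape
    h, w = h - h % n, w - w % n                      # crop to a multiple of n
    return frame[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))
```

For example, a 4×4 frame binned with n=2 yields a 2×2 frame whose entries are the four 2×2 block sums, i.e. quadruple the mean signal per output pixel.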
Automated surface inspection for steel products using computer vision approach.
Xi, Jiaqi; Shentu, Lifeng; Hu, Jikang; Li, Mian
2017-01-10
Surface inspection is a critical step in ensuring product quality in the steel-making industry. In order to relieve inspectors of laborious work and improve the consistency of inspection, much effort has been dedicated to automated inspection using computer vision approaches over the past decades. However, due to non-uniform illumination conditions and similarity between surface textures and defects, the present methods are usually applicable only to very specific cases. In this paper, a new framework for surface inspection is proposed to overcome these limitations. By investigating the image formation process, a quantitative model characterizing the impact of illumination on image quality is developed, based on which the non-uniform brightness in the image can be effectively removed. Then a simple classifier is designed to identify the defects among the surface textures. The significance of this approach lies in its robustness to illumination changes and wide applicability to different inspection scenarios. The proposed approach has been successfully applied to real-time surface inspection of round billets in real manufacturing. Implemented on a conventional industrial PC, the algorithm processes 12.5 frames per second with a successful detection rate of over 90% for turned and skinned billets.
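The paper's quantitative illumination model is not given in the abstract; as a crude stand-in for the brightness-removal step, one can estimate a slowly varying illumination field by block averaging and divide it out (a generic flat-field correction, not the authors' method):

```python
import numpy as np

def flatten_illumination(img, block=8):
    """Divide an image by a block-mean estimate of its illumination field,
    removing slowly varying brightness while preserving fine texture contrast."""
    h, w = img.shape
    f = img.astype(float)
    bh, bw = h // block, w // block
    # Coarse illumination estimate: mean over each block-by-block tile.
    field = f[:bh * block, :bw * block].reshape(bh, block, bw, block).mean(axis=(1, 3))
    field = np.repeat(np.repeat(field, block, axis=0), block, axis=1)
    out = np.ones_like(f)
    out[:bh * block, :bw * block] = f[:bh * block, :bw * block] / np.maximum(field, 1e-6)
    return out
```

Applied to a frame whose brightness ramps smoothly across the billet, the output is approximately constant, so a simple classifier can then separate defects from textures without being fooled by shading.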
Tan, Chaowei; Wang, Bo; Liu, Paul; Liu, Dong
2008-01-01
Wide field of view (WFOV) imaging mode obtains an ultrasound image over an area much larger than the real time window normally available. As the probe is moved over the region of interest, new image frames are combined with prior frames to form a panorama image. Image registration techniques are used to recover the probe motion, eliminating the need for a position sensor. Speckle patterns, which are inherent in ultrasound imaging, change, or become decorrelated, as the scan plane moves, so we pre-smooth the image to reduce the effects of speckle in registration, as well as reducing effects from thermal noise. Because we wish to track the movement of features such as structural boundaries, we use an adaptive mesh over the entire smoothed image to home in on areas with features. Motion estimation using blocks centered at the individual mesh nodes generates a field of motion vectors. After angular correction of motion vectors, we model the overall movement between frames as a nonrigid deformation. The polygon filling algorithm for precise, persistence-based spatial compounding constructs the final speckle-reduced WFOV image.
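The per-node motion estimation can be illustrated with standard block matching; the matching criterion below (sum of absolute differences, SAD) is a common choice but is an assumption here, since the abstract does not name the cost function used:

```python
import numpy as np

def block_motion(prev, curr, node, bs=4, search=3):
    """Estimate the motion vector at one mesh node by minimizing SAD between
    the block around the node in `prev` and shifted blocks in `curr`."""
    y, x = node
    ref = prev[y:y + bs, x:x + bs]
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy:y + dy + bs, x + dx:x + dx + bs]
            if cand.shape != ref.shape:          # skip windows falling off the frame
                continue
            sad = np.abs(ref - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Running this at every node of the adaptive mesh produces the motion-vector field that is then angularly corrected and fit with a nonrigid deformation.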
Huang, Yong; Furtmüller, Georg J.; Tong, Dedi; Zhu, Shan; Lee, W. P. Andrew; Brandacher, Gerald; Kang, Jin U.
2014-01-01
Purpose To demonstrate the feasibility of a miniature handheld optical coherence tomography (OCT) imager for real time intraoperative vascular patency evaluation in the setting of super-microsurgical vessel anastomosis. Methods A novel handheld Fourier domain Doppler optical coherence tomography imager based on a 1.3-µm central wavelength swept source for extravascular imaging was developed. The imager was miniaturized through the adoption of a 2.4-mm diameter microelectromechanical systems (MEMS) scanning mirror; additionally, a 12.7-mm diameter lens system was designed and combined with the MEMS mirror to achieve a small form factor that optimizes functionality as a handheld extravascular OCT imager. To evaluate in-vivo applicability, super-microsurgical vessel anastomosis was performed in a mouse femoral vessel cut and repair model employing conventional interrupted suture technique as well as a novel non-suture cuff technique. Vascular anastomosis patency after clinically successful repair was evaluated using the novel handheld OCT imager. Results With an adjustable lateral image field of view up to 1.5 mm by 1.5 mm, high-resolution simultaneous structural and flow images of the blood vessels were successfully acquired for a BALB/C mouse after orthotopic hind limb transplantation using a non-suture cuff technique and a BALB/C mouse after femoral artery anastomosis using a suture technique. We experimentally quantified the axial and lateral resolutions of the OCT to be 12.6 µm in air and 17.5 µm, respectively. The OCT has a sensitivity of 84 dB and a sensitivity roll-off of 5.7 dB/mm over an imaging range of 5 mm. Imaging with a frame rate of 36 Hz for an image size of 1000 (lateral) × 512 (axial) pixels using a 50,000 A-lines per second swept source was achieved. Quantitative vessel lumen patency, lumen narrowing and thrombosis analyses were performed based on the acquired structural and Doppler images.
Conclusions A miniature handheld OCT imager that can be used for intraoperative evaluation of microvascular anastomosis was successfully demonstrated. PMID:25474742
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabouri, P; Sawant, A; Arai, T
Purpose: MRI has become an attractive tool for tumor motion management. Current MR-compatible phantoms are only capable of reproducing translational motion. This study describes the construction and validation of a more realistic, MRI-compatible lung phantom that is deformable internally as well as externally. We demonstrate a radiotherapy application of this phantom by validating the geometric accuracy of the open-source deformable image registration software NiftyReg (UCL, UK). Methods: The outer shell of a commercially-available dynamic breathing torso phantom was filled with natural latex foam with eleven water tubes. A rigid foam cut-out served as the diaphragm. A high-precision programmable, in-house, MRI-compatible motion platform was used to drive the diaphragm. The phantom was imaged on a 3T scanner (Philips, Ingenia). Twenty-seven tumor traces previously recorded from lung cancer patients were programmed into the phantom and 2D+t image sequences were acquired using a sparse-sampling sequence, k-t BLAST (accn=3, resolution=0.66×0.66×5 mm³; acquisition time=110 ms/slice). The geometric fidelity of the MRI-derived trajectories was validated against those obtained via fluoroscopy using the on-board kV imager on a Truebeam linac. NiftyReg was used to perform frame-by-frame deformable image registration. The location of each marker predicted by NiftyReg was compared with the values calculated by intensity-based segmentation on each frame. Results: In all cases, MR trajectories were within 1 mm of corresponding fluoroscopy trajectories. RMSE between centroid positions obtained from segmentation and those obtained by NiftyReg varied from 0.1 to 0.21 mm in the SI direction and 0.08 to 0.13 mm in the LR direction, showing the high accuracy of deformable registration.
Conclusion: We have successfully designed and demonstrated a phantom that can accurately reproduce deformable motion under a variety of imaging modalities including MRI, CT and x-ray fluoroscopy, making it an invaluable research tool for validating novel motion management strategies. This work was partially supported through research funding from National Institutes of Health (R01CA169102).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zwan, B J; University of Newcastle, Newcastle, NSW; Barnes, M
2016-06-15
Purpose: To automate gantry-resolved linear accelerator (linac) quality assurance (QA) for volumetric modulated arc therapy (VMAT) using an electronic portal imaging device (EPID). Methods: A QA system for VMAT was developed that uses an EPID, frame-grabber assembly and in-house developed image processing software. The system relies solely on the analysis of EPID image frames acquired without the presence of a phantom. Images were acquired at 8.41 frames per second using a frame grabber and ancillary acquisition computer. Each image frame was tagged with a gantry angle from the linac’s on-board gantry angle encoder. Arc-dynamic QA plans were designed to assess the performance of each individual linac component during VMAT. By analysing each image frame acquired during the QA deliveries, the following nine machine performance characteristics were measured as a function of gantry angle: MLC positional accuracy, MLC speed constancy, MLC acceleration constancy, MLC-gantry synchronisation, beam profile constancy, dose rate constancy, gantry speed constancy, dose-gantry angle synchronisation and mechanical sag. All tests were performed on a Varian iX linear accelerator equipped with a 120 leaf Millennium MLC and an aS1000 EPID (Varian Medical Systems, Palo Alto, CA, USA). Results: Machine performance parameters were measured as a function of gantry angle using EPID imaging and compared to machine log files and the treatment plan. Data acquisition is currently underway at 3 centres, incorporating 7 treatment units, at 2-weekly measurement intervals. Conclusion: The proposed system can be applied for streamlined linac QA and commissioning for VMAT. The set of test plans developed can be used to assess the performance of each individual component of the treatment machine during VMAT deliveries as a function of gantry angle.
The methodology does not require the setup of any additional phantom or measurement equipment and the analysis is fully automated to allow for regular routine testing.
Lanotte, M; Cavallo, M; Franzini, A; Grifi, M; Marchese, E; Pantaleoni, M; Piacentino, M; Servello, D
2010-09-01
Deep brain stimulation (DBS) alleviates symptoms of many neurological disorders by applying electrical impulses to the brain by means of implanted electrodes, generally put in place using a conventional stereotactic frame. A new image guided disposable mini-stereotactic system has been designed to help shorten and simplify DBS procedures when compared to standard stereotaxy. A small number of studies have been conducted which demonstrate localization accuracies of the system similar to those achievable by the conventional frame. However, no data are available to date on the economic impact of this new frame. The aim of this paper was to develop a computational model to evaluate the investment required to introduce the image guided mini-stereotactic technology for stereotactic DBS neurosurgery. A standard DBS patient care pathway was developed and related costs were analyzed. A differential analysis was conducted to capture the impact of introducing the image guided system on the procedure workflow. The analysis was carried out in five Italian neurosurgical centers. A computational model was developed to estimate upfront investments and surgery costs, leading to a definition of the best financial option to introduce the new frame. Investments may vary from Euro 1.900 (purchasing of Image Guided [IG] mini-stereotactic frame only) to Euro 158.000.000. Moreover, the model demonstrates how the introduction of the IG mini-stereotactic frame does not substantially affect the DBS procedure costs.
Heterogeneity image patch index and its application to consumer video summarization.
Dang, Chinh T; Radha, Hayder
2014-06-01
Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
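The exact HIP definition is given in the paper, not the abstract; the following is only a rough analogue of an entropy-based patch-heterogeneity score, computed as the Shannon entropy of quantized patch "words" within a frame. A homogeneous frame scores 0, a frame of diverse patches scores high:

```python
import numpy as np

def patch_entropy_index(frame, p=4, levels=8):
    """Entropy (bits) of the distribution of quantized p-by-p patch contents.
    Expects pixel values in [0, 1). A crude heterogeneity proxy, not the HIP index."""
    h, w = frame.shape
    q = (frame * levels).clip(0, levels - 1).astype(int)   # quantize to `levels` bins
    counts = {}
    for y in range(0, h - p + 1, p):
        for x in range(0, w - p + 1, p):
            key = q[y:y + p, x:x + p].tobytes()            # patch as a hashable word
            counts[key] = counts.get(key, 0) + 1
    pr = np.array(list(counts.values()), dtype=float)
    pr /= pr.sum()
    return float(-(pr * np.log2(pr)).sum())
```

Evaluating such a score per frame yields a curve over the sequence, and the summarization stages (candidate selection, affinity matrix, min–max extraction) then operate on that curve and on patch-based dissimilarities.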
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications
Park, Keunyeol; Song, Minkyu
2018-01-01
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm2 with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V of supply voltage and 520 frame/s of the maximum frame rates. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273
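The sensor performs its XOR edge detection in hardware on single-bit pixel data; purely as a hypothetical software analogue of that logic, XORing each binarized pixel with its right neighbour marks horizontal transitions:

```python
def xor_edge(bits):
    """1-bit horizontal edge map: XOR each binarized pixel with its right
    neighbour, so output is 1 exactly where adjacent pixels differ."""
    return [[bits[y][x] ^ bits[y][x + 1] for x in range(len(bits[0]) - 1)]
            for y in range(len(bits))]
```

Because the result is already single-bit edge data, no multi-bit ADC conversion is needed for it, which is the source of the frame-rate and power savings the paper reports.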
Wavelet-based image analysis system for soil texture analysis
NASA Astrophysics Data System (ADS)
Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John
2003-05-01
Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
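The maximum likelihood categorization step can be sketched generically: given per-class statistics of the wavelet-frame texture features, assign a sample to the class with the highest likelihood. The diagonal-Gaussian class model and the feature statistics below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def ml_classify(x, class_stats):
    """Assign feature vector x to the class maximizing a diagonal-Gaussian
    log-likelihood, given {label: (mean_vector, variance_vector)}."""
    best, best_ll = None, -np.inf
    for label, (mu, var) in class_stats.items():
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if ll > best_ll:
            best_ll, best = ll, label
    return best

# Hypothetical texture-feature statistics for two soil categories.
stats = {"clay": (np.array([0.0, 0.0]), np.array([1.0, 1.0])),
         "sand": (np.array([3.0, 3.0]), np.array([1.0, 1.0]))}
print(ml_classify(np.array([2.9, 3.1]), stats))  # "sand"
```

In the paper's system, x would be the wavelet-frame feature vector extracted from a soil-surface image, and the classes would be the three general texture categories.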
Improved Fast, Deep Record Length, Time-Resolved Visible Spectroscopy of Plasmas Using Fiber Grids
NASA Astrophysics Data System (ADS)
Brockington, S.; Case, A.; Cruz, E.; Williams, A.; Witherspoon, F. D.; Horton, R.; Klauser, R.; Hwang, D.
2017-10-01
HyperV Technologies is developing a fiber-coupled, deep record-length, low-light camera head for performing high time resolution spectroscopy on visible emission from plasma events. By coupling the output of a spectrometer to an imaging fiber bundle connected to a bank of amplified silicon photomultipliers, time-resolved spectroscopic imagers of 100 to 1,000 pixels can be constructed. A second generation prototype 32-pixel spectroscopic imager employing this technique was constructed and successfully tested at the University of California at Davis Compact Toroid Injection Experiment (CTIX). Pixel performance of 10 megaframes/sec with record lengths of up to 256,000 frames (25.6 milliseconds) was achieved. Pixel resolution was 12 bits. Pixel pitch can be refined by using grids of 100 μm to 1000 μm diameter fibers. Experimental results will be discussed, along with future plans for this diagnostic. Work supported by USDOE SBIR Grant DE-SC0013801.
NASA Technical Reports Server (NTRS)
2004-01-01
Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.
47 CFR 73.9003 - Compliance requirements for covered demodulator products: Unscreened content.
Code of Federal Regulations, 2010 CFR
2010-10-01
... operating in a mode compatible with the digital visual interface (DVI) rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g. an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may...
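The rule's pixel cap reduces to simple arithmetic; as a quick check (the function name is illustrative, not part of the regulation):

```python
# 47 CFR 73.9003/73.9004: constrained DVI output is limited to an image with
# the visual equivalent of no more than 350,000 pixels per frame at 30 fps.
PIXEL_CAP = 350_000

def within_cap(width, height):
    """True if a width-by-height frame fits under the 350,000-pixel cap."""
    return width * height <= PIXEL_CAP

print(720 * 480)  # 345600, so the cited 720x480 example fits under the cap
```

By contrast, a 1280×720 HD frame (921,600 pixels) exceeds the cap and would have to be down-resolved.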
47 CFR 73.9004 - Compliance requirements for covered demodulator products: Marked content.
Code of Federal Regulations, 2010 CFR
2010-10-01
... compatible with the digital visual interface (DVI) Rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g., an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may be attained by...
Multi-frame image processing with panning cameras and moving subjects
NASA Astrophysics Data System (ADS)
Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric
2014-06-01
Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition, we evaluated algorithm efficacy using field-test video processed with our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
Testing the internal consistency of the standard gamble in 'success' and 'failure' frames.
Oliver, Adam
2004-06-01
Decision making behaviour has often been shown to vary following changes in the way in which choice problems are described (or 'framed'). Moreover, a number of researchers have demonstrated that the standard gamble is prone to internal inconsistency, and loss aversion has been proposed as an explanation for this observed bias. This study attempts to alter the influence of loss aversion by framing the treatment arm of the standard gamble in terms of success (where we may expect the influence of loss aversion to be relatively weak) and in terms of failure (where we may expect the influence of loss aversion to be relatively strong). The objectives of the study are (1) to test whether standard gamble values vary when structurally identical gambles are differentially framed, and (2) to test whether the standard gamble is equally prone to internal inconsistency across the two frames. The results show that compared to framing in terms of treatment success, significantly higher values were inferred when the gamble was framed in terms of treatment failure. However, there was no difference in the quite marked levels of internal inconsistency observed in both frames. It is possible that the essential construct of the standard gamble induces substantial and/or widespread loss aversion irrespective of the way in which the gamble is framed, which offers a fundamental challenge to the usefulness of this value elicitation instrument. It is therefore recommended that further tests are undertaken on more sophisticated corrective procedures designed to limit the influence of loss aversion.
SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.
Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A
2016-11-01
Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, of mother, daughter and neighbouring cells, and the computation of statistics on cellular fluorescence and on the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.
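SuperSegger's actual linking uses machine-learning-assisted error resolution in MATLAB; as a minimal conceptual sketch only, frame-to-frame linking can be reduced to greedy nearest-neighbour matching of cell centroids under a distance gate:

```python
def link_cells(prev, curr, max_dist=5.0):
    """Greedy nearest-neighbour linking of cell centroids across two frames.
    Returns {index_in_prev: index_in_curr} for links closer than max_dist."""
    links, used = {}, set()
    for i, (py, px) in enumerate(prev):
        best, best_d = None, max_dist
        for j, (cy, cx) in enumerate(curr):
            if j in used:                       # each current cell links at most once
                continue
            d = ((py - cy) ** 2 + (px - cx) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            used.add(best)
    return links
```

Real trackers additionally handle division (one-to-two links), cells entering or leaving the field of view, and segmentation errors, which is where SuperSegger's automated error resolution comes in.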
Holographic Optical Coherence Imaging of Rat Osteogenic Sarcoma Tumor Spheroids
NASA Astrophysics Data System (ADS)
Yu, Ping; Mustata, Mirela; Peng, Leilei; Turek, John J.; Melloch, Michael R.; French, Paul M. W.; Nolte, David D.
2004-09-01
Holographic optical coherence imaging is a full-frame variant of coherence-domain imaging. An optoelectronic semiconductor holographic film functions as a coherence filter placed before a conventional digital video camera that passes coherent (structure-bearing) light to the camera during holographic readout while preferentially rejecting scattered light. The data are acquired as a succession of en face images at increasing depth inside the sample in a fly-through acquisition. The samples of living tissue were rat osteogenic sarcoma multicellular tumor spheroids that were grown from a single osteoblast cell line in a bioreactor. Tumor spheroids are nearly spherical and have radial symmetry, presenting a simple geometry for analysis. The tumors investigated ranged in diameter from several hundred micrometers to over 1 mm. Holographic features from the tumors were observed in reflection to depths of 500-600 µm with a total tissue path length of approximately 14 mean free paths. The volumetric data from the tumor spheroids reveal heterogeneous structure, presumably caused by necrosis and microcalcifications characteristic of some human avascular tumors.
Imaging the Moon II: Webcam CCD Observations and Analysis (a Two-Week Lab for Non-Majors)
NASA Astrophysics Data System (ADS)
Sato, T.
2014-07-01
Imaging the Moon is a successful two-week lab involving real sky observations of the Moon in which students make telescopic observations and analyze their own images. Originally developed around the 35 mm film camera, a common household object adapted for astronomical work, the lab now uses webcams as film photography has evolved into an obscure specialty technology and increasing numbers of students have little familiarity with it. The printed circuit board with the CCD is harvested from a commercial webcam and affixed to a tube to mount on a telescope in place of an eyepiece. Image frames are compiled to form a lunar mosaic, and crater sizes are measured. Students also work through the logistical steps of telescope time assignment and scheduling. They learn to keep a schedule and work with uncertainties of weather in ways paralleling research observations. Because there is no need for a campus observatory, this lab can be replicated at a wide variety of institutions.
Muon Trigger for Mobile Phones
NASA Astrophysics Data System (ADS)
Borisyak, M.; Usvyatsov, M.; Mulhearn, M.; Shimmin, C.; Ustyuzhanin, A.
2017-10-01
The CRAYFIS experiment proposes to use privately owned mobile phones as a ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth's atmosphere, these cosmic rays produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As these particles interact with CMOS image sensors, they may leave tracks of faintly-activated pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely on the presence of very bright pixels within an image frame are not efficient in this case. We present a trigger algorithm based on Convolutional Neural Networks which selects images containing such tracks and is evaluated in a lazy manner: the response of each successive layer is computed only if the activation of the current layer satisfies a continuation criterion. Use of neural networks increases the sensitivity considerably compared with image thresholding, while the lazy evaluation allows the trigger to execute under the limited computational power of mobile phones.
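The lazy, layer-by-layer evaluation can be sketched as an early-exit cascade (a schematic only; the layers, thresholds and the `lazy_trigger` name are invented stand-ins for trained convolutional stages and tuned continuation criteria):

```python
import numpy as np

def lazy_trigger(image, layers, criteria):
    """Cascade ('lazy') evaluation: run each stage only while the
    previous stage's peak activation satisfies its continuation
    criterion; reject the image as soon as one stage fails."""
    x = image
    for layer, threshold in zip(layers, criteria):
        x = layer(x)
        if x.max() < threshold:
            return False      # reject early, saving computation
    return True               # all stages passed: keep the image
```

A cheap first stage thus filters out the bulk of noise-only frames, and the expensive later stages run only on promising candidates.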
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazur, T; Wang, Y; Fischer-Valuck, B
2015-06-15
Purpose: To develop a novel and rapid, SIFT-based algorithm for assessing feature motion on cine MR images acquired during MRI-guided radiotherapy treatments. In particular, we apply SIFT descriptors toward both partitioning cine images into respiratory states and tracking regions across frames. Methods: Among a training set of images acquired during a fraction, we densely assign SIFT descriptors to pixels within the images. We cluster these descriptors across all frames in order to produce a dictionary of trackable features. Associating the best-matching descriptors at every frame among the training images to these features, we construct motion traces for the features. We use these traces to define respiratory bins for sorting images in order to facilitate robust pixel-by-pixel tracking. Instead of applying conventional methods for identifying pixel correspondences across frames we utilize a recently-developed algorithm that derives correspondences via a matching objective for SIFT descriptors. Results: We apply these methods to a collection of lung, abdominal, and breast patients. We evaluate the procedure for respiratory binning using target sites exhibiting high-amplitude motion among 20 lung and abdominal patients. In particular, we investigate whether these methods yield minimal variation between images within a bin by perturbing the resulting image distributions among bins. Moreover, we compare the motion between averaged images across respiratory states to 4DCT data for these patients. We evaluate the algorithm for obtaining pixel correspondences between frames by tracking contours among a set of breast patients. As an initial case, we track easily-identifiable edges of lumpectomy cavities that show minimal motion over treatment. Conclusions: These SIFT-based methods reliably extract motion information from cine MR images acquired during patient treatments.
While we performed our analysis retrospectively, the algorithm lends itself to prospective motion assessment. Applications of these methods include motion assessment, identifying treatment windows for gating, and determining optimal margins for treatment.
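The respiratory-binning step, reduced to a one-dimensional motion trace, might look like the following sketch (amplitude-quantile binning and the `bin_by_phase` name are assumed simplifications; the paper's sorting is driven by clustered SIFT-descriptor traces):

```python
import numpy as np

def bin_by_phase(trace, n_bins=4):
    """Assign each frame to a respiratory bin by the amplitude
    quantile of a 1-D feature-motion trace, so that each bin
    holds frames of similar respiratory displacement."""
    edges = np.quantile(trace, np.linspace(0.0, 1.0, n_bins + 1))
    edges[-1] += 1e-9  # make the top edge inclusive of the maximum
    return np.clip(np.digitize(trace, edges) - 1, 0, n_bins - 1)
```

Frames sharing a bin can then be averaged or tracked together, since intra-bin anatomical variation is small by construction.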
Quiet Clean Short-haul Experimental Engine (QCSEE) composite fan frame design report
NASA Technical Reports Server (NTRS)
Mitchell, S. C.
1978-01-01
An advanced composite frame which is flight-weight and integrates the functions of several structures was developed for the over the wing (OTW) engine and for the under the wing (UTW) engine. The composite material system selected as the basic material for the frame is Type AS graphite fiber in a Hercules 3501 epoxy resin matrix. The frame was analyzed using a finite element digital computer program. This program was used in an iterative fashion to arrive at practical thicknesses and ply orientations to achieve a final design that met all strength and stiffness requirements for critical conditions. Using this information, the detail design of each of the individual parts of the frame was completed and released. On the basis of these designs, the required tooling was designed to fabricate the various component parts of the frame. To verify the structural integrity of the critical joint areas, a full-scale test was conducted on the frame before engine testing. The testing of the frame established critical spring constants and subjected the frame to three critical load cases. The successful static load test was followed by 153 and 58 hours respectively of successful running on the UTW and OTW engines.
Tanaka, Rie; Sanada, Shigeru; Okazaki, Nobuo; Kobayashi, Takeshi; Fujimura, Masaki; Yasui, Masahide; Matsui, Takeshi; Nakayama, Kazuya; Nanbu, Yuko; Matsui, Osamu
2006-10-01
Dynamic flat panel detectors (FPD) permit acquisition of distortion-free radiographs with a large field of view and high image quality. The present study was performed to evaluate pulmonary function using breathing chest radiography with a dynamic FPD. We report primary results of a clinical study and a computer algorithm for quantifying and visualizing relative local pulmonary airflow. Dynamic chest radiographs of 18 subjects (1 emphysema, 2 asthma, 4 interstitial pneumonia, 1 pulmonary nodule, and 10 normal controls) were obtained during respiration using an FPD system. We measured respiratory changes in distance from the lung apex to the diaphragm (DLD) and pixel values in each lung area. Subsequently, the interframe differences (D-frame) and difference values between maximum inspiratory and expiratory phases (D-max) were calculated. D-max in each lung represents relative vital capacity (VC), and regional D-frames represent pulmonary airflow in each local area. D-frames were superimposed on dynamic chest radiographs in the form of a color display (fusion images). The results obtained using our methods were compared with findings on computed tomography (CT) images and pulmonary function tests (PFT), which were examined before inclusion in the study. In normal subjects, the D-frames were distributed symmetrically in both lungs throughout all respiratory phases. However, subjects with pulmonary diseases showed D-frame distribution patterns that differed from the normal pattern. In subjects with air trapping, there were some areas with D-frames near zero indicated as colorless areas on fusion images. These areas also corresponded to the areas showing air trapping on computed tomography images. In asthma, obstructive abnormality was indicated by areas continuously showing D-frames near zero in the upper lung.
Patients with interstitial pneumonia commonly showed fusion images with an uneven color distribution accompanied by increased D-frames in the area identified as normal on computed tomography images. Furthermore, measurement of DLD was very effective for evaluating diaphragmatic kinetics. This is a rapid and simple method for evaluation of respiratory kinetics for pulmonary diseases, which can reveal abnormalities in diaphragmatic kinetics and regional lung ventilation. Furthermore, quantification and visualization of respiratory kinetics is useful as an aid in interpreting dynamic chest radiographs.
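In simplified form, the D-frame and D-max quantities described above can be computed from an image stack as follows (a sketch; using the mean pixel value as a proxy for respiratory phase is an assumption made here for illustration, not the paper's method of identifying the maximum inspiratory and expiratory frames):

```python
import numpy as np

def airflow_maps(frames):
    """frames: (T, H, W) stack of breathing chest radiographs.
    Returns the interframe differences (D-frame) and the difference
    between the maximum-inspiratory and maximum-expiratory frames
    (D-max), following the definitions above in simplified form."""
    frames = np.asarray(frames, dtype=float)
    d_frame = np.diff(frames, axis=0)          # (T-1, H, W)
    # Assumption: mean pixel value tracks lung air content,
    # so its extrema mark peak inspiration and expiration.
    mean_val = frames.mean(axis=(1, 2))
    insp_i, exp_i = mean_val.argmax(), mean_val.argmin()
    d_max = frames[insp_i] - frames[exp_i]
    return d_frame, d_max
```

Regions where successive D-frames stay near zero, as in the air-trapping cases above, would appear as near-zero (colorless) areas in a fusion overlay of `d_frame`.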
Cheetah: A high frame rate, high resolution SWIR image camera
NASA Astrophysics Data System (ADS)
Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob
2008-10-01
A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
1.56 Terahertz 2-frames per second standoff imaging
NASA Astrophysics Data System (ADS)
Goyette, Thomas M.; Dickinson, Jason C.; Linden, Kurt J.; Neal, William R.; Joseph, Cecil S.; Gorveatt, William J.; Waldman, Jerry; Giles, Robert; Nixon, William E.
2008-02-01
A Terahertz imaging system intended to demonstrate identification of objects concealed under clothing was designed, assembled, and tested. The system design was based on a 2.5 m standoff distance, with a capability of visualizing a 0.5 m by 0.5 m scene at an image rate of 2 frames per second. The system optical design consisted of a 1.56 THz laser beam, which was raster swept by a dual torsion mirror scanner. The beam was focused onto the scan subject by a stationary 50 cm-diameter focusing mirror. A heterodyne detection technique was used to down-convert the backscattered signal. The system demonstrated a 1.5 cm spot resolution. Human subjects were scanned at a frame rate of 2 frames per second. Hidden metal objects were detected under a jacket worn by the human subject. A movie combining data and video images was produced by scanning a human subject through 180° of azimuth in 0.7° increments over 1.5 minutes.
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
NASA Astrophysics Data System (ADS)
Panayiotou, M.; King, A. P.; Ma, Y.; Housden, R. J.; Rinaldi, C. A.; Gill, J.; Cooklin, M.; O'Neill, M.; Rhode, K. S.
2013-11-01
The motion and deformation of catheters that lie inside cardiac structures can provide valuable information about the motion of the heart. In this paper we describe the formation of a novel statistical model of the motion of a coronary sinus (CS) catheter based on principal component analysis of tracked electrode locations from standard mono-plane x-ray fluoroscopy images. We demonstrate the application of our model for the purposes of retrospective cardiac and respiratory gating of x-ray fluoroscopy images in normal dose x-ray fluoroscopy images, and demonstrate how a modification of the technique allows application to very low dose scenarios. We validated our method on ten mono-plane imaging sequences comprising a total of 610 frames from ten different patients undergoing radiofrequency ablation for the treatment of atrial fibrillation. For normal dose images we established systole, end-inspiration and end-expiration gating with success rates of 100%, 92.1% and 86.9%, respectively. For very low dose applications, the method was tested on the same ten mono-plane x-ray fluoroscopy sequences without noise and with added noise at signal to noise ratio (SNR) values of √50, √10, √8, √6, √5, √2 and √1 to simulate the image quality of increasingly lower dose x-ray images. The method was able to detect the CS catheter even in the lowest SNR images with median errors not exceeding 2.6 mm per electrode. Furthermore, gating success rates of 100%, 71.4% and 85.7% were achieved at the low SNR value of √2, representing a dose reduction of more than 25 times. Thus, the technique has the potential to extract useful information whilst substantially reducing the radiation exposure.
Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.
Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R
2015-11-05
Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image, by optimizing a dissimilarity measure. Currently, extensive user interaction is needed, which makes analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three optimizers, two different implementations were evaluated: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods were able to find the 3D pose within 1 mm and 1° in more than 85% of cases. IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most expensive method, and the application of analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames can be analyzed automatically in clinical data and only 15% of the frames require manual supervision. The optimal success rate on phantom data (97% with IpOpt) indicates that even less supervision may become feasible. Copyright © 2015 Elsevier Ltd. All rights reserved.
Background suppression of infrared small target image based on inter-frame registration
NASA Astrophysics Data System (ADS)
Ye, Xiubo; Xue, Bindang
2018-04-01
We propose a multi-frame background suppression method for remote infrared small target detection. Inter-frame information is necessary when heavy background clutter makes it difficult to distinguish real targets from false alarms. A registration procedure based on point matching in image patches is used to compensate for local deformation of the background. The target can then be separated by background subtraction. Experiments show that our method serves as an effective preliminary step for target detection.
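A heavily simplified stand-in for this pipeline replaces patch-wise point matching with global phase correlation to estimate a purely translational background shift before subtraction (an illustration only; the paper compensates local deformation, which a single global shift cannot capture):

```python
import numpy as np

def suppress_background(prev, curr):
    """Estimate the global integer-pixel shift between two frames by
    phase correlation, align the previous frame to the current one,
    and subtract, leaving moving targets in the residual."""
    F = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    aligned = np.roll(prev, (-dy, -dx), axis=(0, 1))
    return curr - aligned
```

On a pure circular background shift the residual vanishes, so any small bright target that moved differently from the background stands out in the difference image.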
Martian Dust Devil Action in Gale Crater, Sol 1597
2017-02-27
This frame from a sequence of images shows a dust-carrying whirlwind, called a dust devil, scooting across the ground inside Gale Crater, as observed on the local summer afternoon of NASA's Curiosity Mars Rover's 1,597th Martian day, or sol (Feb. 1, 2017). Set within a broader southward view from the rover's Navigation Camera, the rectangular area outlined in black was imaged multiple times over a span of several minutes to check for dust devils. Images from the period with most activity are shown in the inset area. The images are in pairs that were taken about 12 seconds apart, with an interval of about 90 seconds between pairs. Timing is accelerated and not fully proportional in this animation. A dust devil is most evident in the 10th, 11th and 12th frames. In the first and fifth frames, dust blowing across the ground appears as a pale horizontal streak. Contrast has been modified to make frame-to-frame changes easier to see. A black frame is added between repeats of the sequence. On Mars as on Earth, dust devils are whirlwinds that result from sunshine warming the ground, prompting convective rising of air that has gained heat from the ground. Observations of Martian dust devils provide information about wind directions and interaction between the surface and the atmosphere. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21270
The segmentation of bones in pelvic CT images based on extraction of key frames.
Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen
2018-05-22
Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and the normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, a physician's judgment is needed; therefore the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
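Of the three similarity measures used for key-frame extraction, the normalized correlation coefficient alone already yields a workable sketch (`extract_key_frames` and the threshold value are hypothetical; the paper combines NCC with pixel difference and mutual information):

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient between two images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def extract_key_frames(frames, thresh=0.95):
    """Keep a frame as a new key frame whenever its NCC with the
    most recent key frame drops below thresh, i.e. whenever the
    anatomy has changed enough to need a fresh reference slice."""
    keys = [0]
    for i in range(1, len(frames)):
        if ncc(frames[keys[-1]], frames[i]) < thresh:
            keys.append(i)
    return keys
```

With a threshold tuned so that roughly 13% of slices survive, as reported above, only the key frames would then be segmented in full and the rest propagated from them.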
Nakazawa, Hisato; Mori, Yoshimasa; Yamamuro, Osamu; Komori, Masataka; Shibamoto, Yuta; Uchiyama, Yukio; Tsugawa, Takahiko; Hagiwara, Masahiro
2014-01-01
We assessed the geometric distortion of 1.5-Tesla (T) and 3.0-T magnetic resonance (MR) images with the Leksell skull frame system using three types of cranial quick fixation screws (QFSs) of different materials—aluminum, aluminum with tungsten tip, and titanium—for skull frame fixation. Two kinds of acrylic phantoms were placed on a Leksell skull frame using the three types of screws, and were scanned with computed tomography (CT), 1.5-T MR imaging and 3.0-T MR imaging. The 3D coordinates for both strengths of MR imaging were compared with those for CT. The deviations of the measured coordinates at selected points (x = 50, 100 and 150; y = 50, 100 and 150) were indicated on different axial planes (z = 50, 75, 100, 125 and 150). The errors of coordinates with QFSs of aluminum, tungsten-tipped aluminum, and titanium were <1.0, 1.0 and 2.0 mm in the entire treatable area, respectively, with 1.5 T. In the 3.0-T field, the errors with aluminum QFSs were <1.0 mm only around the center, while the errors with tungsten-tipped aluminum and titanium were >2.0 mm in most positions. The geometric accuracy of the Leksell skull frame system with 1.5-T MR imaging was high and valid for clinical use. However, the geometric errors with 3.0-T MR imaging were larger than those of 1.5-T MR imaging and were acceptable only with aluminum QFSs, and then only around the central region. PMID:25034732
Spread-Spectrum Beamforming and Clutter Filtering for Plane-Wave Color Doppler Imaging.
Mansour, Omar; Poepping, Tamie L; Lacefield, James C
2016-07-21
Plane-wave imaging is desirable for its ability to achieve high frame rates, allowing the capture of fast dynamic events and continuous Doppler data. In most implementations of plane-wave imaging, multiple low-resolution images from different plane wave tilt angles are compounded to form a single high-resolution image, thereby reducing the frame rate. Compounding improves the lateral beam profile in the high-resolution image, but it also acts as a low-pass filter in slow time that causes attenuation and aliasing of signals with high Doppler shifts. This paper introduces a spread-spectrum color Doppler imaging method that produces high-resolution images without the use of compounding, thereby eliminating the tradeoff between beam quality, maximum unaliased Doppler frequency, and frame rate. The method uses a long, random sequence of transmit angles rather than a linear sweep of plane wave directions. The random angle sequence randomizes the phase of off-focus (clutter) signals, thereby spreading the clutter power in the Doppler spectrum, while keeping the spectrum of the in-focus signal intact. The ensemble of randomly tilted low-resolution frames also acts as the Doppler ensemble, so it can be much longer than a conventional linear sweep, thereby improving beam formation while also making the slow-time Doppler sampling frequency equal to the pulse repetition frequency. Experiments performed using a carotid artery phantom with constant flow demonstrate that the spread-spectrum method more accurately measures the parabolic flow profile of the vessel and outperforms conventional plane-wave Doppler in both contrast resolution and estimation of high flow velocities. The spread-spectrum method is expected to be valuable for Doppler applications that require measurement of high velocities at high frame rates.
Toshiba TDF-500 High Resolution Viewing And Analysis System
NASA Astrophysics Data System (ADS)
Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.
1988-06-01
A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator-interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40 Hz frame and 80 Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewers' and array processors' instantaneous high memory bandwidth requirement, an ultra-fast memory system is used. This memory system has a bandwidth capability of 400 MB/sec and a total capacity of 256 MB. This bandwidth is more than adequate to support several high resolution CRTs and also the fast processing unit. This fully integrated approach allows effective real-time image processing. The integrated design of the viewing system, memory system and array processor is key to the imaging system. This paper describes the architecture of the imaging system.
Reducing misfocus-related motion artefacts in laser speckle contrast imaging.
Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer
2015-01-01
Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus, which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
Lindsey, Brooks D; Shelton, Sarah E; Martin, K Heath; Ozgun, Kathryn A; Rojas, Juan D; Foster, F Stuart; Dayton, Paul A
2017-04-01
Mapping blood perfusion quantitatively allows localization of abnormal physiology and can improve understanding of disease progression. Dynamic contrast-enhanced ultrasound is a low-cost, real-time technique for imaging perfusion dynamics with microbubble contrast agents. Previously, we have demonstrated another contrast agent-specific ultrasound imaging technique, acoustic angiography, which forms static anatomical images of the superharmonic signal produced by microbubbles. In this work, we seek to determine whether acoustic angiography can be utilized for high resolution perfusion imaging in vivo by examining the effect of acquisition rate on superharmonic imaging at low flow rates and demonstrating the feasibility of dynamic contrast-enhanced superharmonic perfusion imaging for the first time. Results in the chorioallantoic membrane model indicate that frame rate and frame averaging do not affect the measured diameter of individual vessels observed, but that frame rate does influence the detection of vessels near and below the resolution limit. The highest number of resolvable vessels was observed at an intermediate frame rate of 3 Hz using a mechanically-steered prototype transducer. We also demonstrate the feasibility of quantitatively mapping perfusion rate in 2D in a mouse model with spatial resolution of ~100 μm. This type of imaging could provide non-invasive, high resolution quantification of microvascular function at penetration depths of several centimeters.
ERIC Educational Resources Information Center
Minix-Wilkins, Roxanne M.
2015-01-01
The purpose of this phenomenological study was to explore the professional development experiences of successful secondary principals framed within the practices of the transformational leadership theory. At this stage in the research, professional development will be generally defined as all of the types of training that the administrator…
Facilitation of listening comprehension by visual information under noisy listening condition
NASA Astrophysics Data System (ADS)
Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi
2009-02-01
Comprehension of a sentence was measured under a wide range of delay conditions between auditory and visual stimuli, in an environment of low auditory clarity with pink noise at levels of -10 dB and -15 dB. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (=132 msec) or less, that the image was not helpful for comprehension when the delay was 8 frames (=264 msec) or more, and that in some cases of the largest delay (32 frames) the video image interfered with comprehension.
Systemic racism and U.S. health care.
Feagin, Joe; Bennefield, Zinobia
2014-02-01
This article draws upon a major social science theoretical approach-systemic racism theory-to assess decades of empirical research on racial dimensions of U.S. health care and public health institutions. From the 1600s, the oppression of Americans of color has been systemic and rationalized using a white racial framing-with its constituent racist stereotypes, ideologies, images, narratives, and emotions. We review historical literature on racially exploitative medical and public health practices that helped generate and sustain this racial framing and related structural discrimination targeting Americans of color. We examine contemporary research on racial differentials in medical practices, white clinicians' racial framing, and views of patients and physicians of color to demonstrate the continuing reality of systemic racism throughout health care and public health institutions. We conclude from research that institutionalized white socioeconomic resources, discrimination, and racialized framing from centuries of slavery, segregation, and contemporary white oppression severely limit and restrict access of many Americans of color to adequate socioeconomic resources-and to adequate health care and health outcomes. Dealing justly with continuing racial "disparities" in health and health care requires a conceptual paradigm that realistically assesses U.S. society's white-racist roots and contemporary racist realities. We conclude briefly with examples of successful public policies that have brought structural changes in racial and class differentials in health care and public health in the U.S. and other countries. Copyright © 2013 Elsevier Ltd. All rights reserved.
Ambient-Light-Canceling Camera Using Subtraction of Frames
NASA Technical Reports Server (NTRS)
Morookian, John Michael
2004-01-01
The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period.
Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and would be illuminated with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
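The subtraction step is simple enough to sketch directly. A minimal NumPy illustration of the ALCC principle, where the `cancel_ambient` helper and all pixel values are hypothetical (the real system would operate on digitized camera frames):

```python
import numpy as np

def cancel_ambient(signal_plus_bg, bg):
    """Subtract the background-only frame from the LED-on frame.

    Both inputs are 2-D arrays of pixel values from consecutive frames;
    the result keeps only the LED (signal) contribution, clipped at zero
    since negative light is not physical.
    """
    diff = signal_plus_bg.astype(np.int16) - bg.astype(np.int16)
    return np.clip(diff, 0, None).astype(np.uint8)

# Synthetic example: constant ambient level plus one bright corneal glint.
ambient = np.full((8, 8), 40, dtype=np.uint8)
led_on = ambient.copy()
led_on[3, 4] = 240          # glint only present while the LED is pulsed on
signal = cancel_ambient(led_on, ambient)
```

In this toy frame pair the ambient level cancels exactly, leaving only the glint pixel nonzero; with real sensors, motion between the two frames is what the high ROI readout rate is meant to suppress.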
Multiple enface image averaging for enhanced optical coherence tomography angiography imaging.
Uji, Akihito; Balasubramanian, Siva; Lei, Jianqin; Baghdasaryan, Elmira; Al-Sheikh, Mayss; Borrelli, Enrico; Sadda, SriniVas R
2018-05-31
To investigate the effect of multiple enface image averaging on the image quality of optical coherence tomography angiography (OCTA). Twenty-one normal volunteers were enrolled in this study. For each subject, one eye was imaged with the 3 × 3 mm scan protocol and the other eye with the 6 × 6 mm scan protocol, centred on the fovea, using the ZEISS Angioplex™ spectral-domain OCTA device. Eyes were repeatedly imaged to obtain nine OCTA cube scan sets, and the nine superficial capillary plexus (SCP) and deep capillary plexus (DCP) enface images were individually averaged after registration. Eighteen eyes with a 3 × 3 mm scan field and 14 eyes with a 6 × 6 mm scan field were studied. Averaged images showed more continuous vessels and less background noise in both the SCP and the DCP as the number of frames used for averaging increased, with both the 3 × 3 and 6 × 6 mm scan protocols. The intensity histogram of the vessels changed dramatically after averaging. Contrast-to-noise ratio (CNR) and subjectively assessed image quality scores also increased as the number of frames used for averaging increased in all image types. However, the additional benefit in quality diminished when averaging more than five frames. Averaging only three frames achieved significant improvement in CNR and in the score assigned by certified graders. Multiple image averaging of OCTA enface images was found to be both objectively and subjectively effective for enhancing image quality. These findings may be of value for developing optimal OCTA imaging protocols for future studies. © 2018 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
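The CNR gain from averaging registered frames can be sketched numerically. The synthetic "vessel" pattern, noise level, and CNR definition below are illustrative assumptions, not the study's grading protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical registered enface frames: a fixed vessel pattern plus
# independent speckle-like noise in each repeated scan.
truth = np.zeros((64, 64))
truth[30:34, :] = 1.0                          # a horizontal "vessel"
frames = [truth + rng.normal(0, 0.5, truth.shape) for _ in range(9)]

def cnr(img, vessel_mask, bg_mask):
    """Contrast-to-noise ratio between vessel and background regions."""
    return abs(img[vessel_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()

vessel = truth > 0.5
single = frames[0]
avg3 = np.mean(frames[:3], axis=0)             # three-frame average
avg9 = np.mean(frames, axis=0)                 # nine-frame average

# Noise falls roughly as sqrt(N), so CNR improves with the number of
# averaged frames, with diminishing returns beyond a few frames.
cnr1, cnr3, cnr9 = (cnr(im, vessel, ~vessel) for im in (single, avg3, avg9))
```

The sqrt(N) scaling is also why the study sees most of the benefit by three to five frames: going from one frame to three roughly halves the noise, while going from five to nine buys comparatively little.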
(abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, Kenneth C.
1994-01-01
We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort. First, we are developing the computer graphics technology needed to synthesize a realistic image sequence of a person speaking selected speech sequences. Second, we are developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that covers the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme. From this range we digitize the video frame which represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is recorded which is the basis for synthesizing a matching video sequence; the speaker need not be the same as the one used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing. Image sequence keyframes necessary for this processing are based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
NASA Astrophysics Data System (ADS)
Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.
2017-10-01
Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely, a transformation of a real frame into the result that would be obtained if all the axes were perfectly adjusted. For such adjustment, images of stars as infinitely distant objects were used: with perfectly aligned cameras, the star images on both the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial approximation of the lens (without aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would be if measured with an ideally aligned setup, and the distance to the cloud is then calculated. The results were compared with data from a laser range finder.
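The difference between the 3rd-order and the 3rd-plus-5th-order distortion models is easy to demonstrate numerically. In the sketch below the standard radial distortion form and the coefficient values are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

def distort(x, y, k1, k2=0.0):
    """Radial lens distortion of the 3rd order (k1) and, optionally,
    the 5th order as well (k2). Coordinates are normalized so that the
    frame edge lies near r = 1."""
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return x * factor, y * factor

# Hypothetical coefficients, for illustration only.
k1, k2 = -0.05, 0.01

x_center_3, _ = distort(0.05, 0.0, k1)
x_center_5, _ = distort(0.05, 0.0, k1, k2)
x_edge_3, _ = distort(0.95, 0.0, k1)
x_edge_5, _ = distort(0.95, 0.0, k1, k2)

# The two models nearly coincide near the optical axis but diverge
# strongly near the frame edge, as the paper observes.
center_gap = abs(x_center_3 - x_center_5)
edge_gap = abs(x_edge_3 - x_edge_5)
```

Because the extra 5th-order term scales as r^4, its contribution near the axis is negligible while at the edge it dominates the model difference, which is exactly where the paper reports the models disagree.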
Coincidence electron/ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin
2015-05-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
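The intensity-correlation step that gives multi-hit capability can be sketched as rank matching: brighter phosphor spots go with taller MCP timing peaks. The `pair_hits` helper and all numbers below are illustrative assumptions, not the authors' real-time algorithm:

```python
import numpy as np

def pair_hits(spot_intensities, tof_peak_heights):
    """Pair camera spots with TOF peaks by matching intensity rank.

    Brighter phosphor spots correspond to taller MCP timing peaks, so
    sorting both lists by magnitude and pairing by rank resolves which
    arrival time belongs to which (x, y) position in a multi-hit frame.
    """
    spot_order = np.argsort(spot_intensities)[::-1]
    peak_order = np.argsort(tof_peak_heights)[::-1]
    return list(zip(spot_order, peak_order))

# Two hits on one frame: spot 0 is dim, spot 1 bright; the first TOF
# peak is tall, the second short (all values hypothetical).
pairs = pair_hits([120.0, 900.0], [8.5, 1.2])
```

Here the bright spot (index 1) is paired with the tall peak (index 0) and the dim spot with the short peak, recovering a position-plus-time record for each particle.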
Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A
2015-01-01
The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used via the RenderScript API for high-performance frame-by-frame image acquisition and computing in order to obtain the PPG signal and PP interval time series. The relative error of mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices has been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frame processing.
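The core signal chain (per-frame intensity, peak detection, PP intervals) can be sketched in a few lines. This is a plain NumPy illustration with a synthetic 60-bpm signal, not the paper's GPU/RenderScript pipeline, and the simple local-maximum detector is an assumption:

```python
import numpy as np

FS = 30.0  # camera frame rate in Hz (the maximum rate cited)

def pp_intervals(frame_means, fs=FS):
    """Pulse-to-pulse intervals (ms) from a per-frame PPG signal.

    frame_means holds the mean intensity of each camera frame; local
    maxima above the overall mean are taken as pulse peaks. This is a
    toy detector, not the paper's processing chain.
    """
    x = np.asarray(frame_means, dtype=float)
    peaks = [i for i in range(1, x.size - 1)
             if x[i] > x[i - 1] and x[i] >= x[i + 1] and x[i] > x.mean()]
    return np.diff(peaks) / fs * 1000.0

# Synthetic 60-bpm pulse sampled at 30 fps: one peak every ~30 frames.
t = np.arange(300) / FS
ivals = pp_intervals(np.sin(2 * np.pi * 1.0 * t))
```

Note that at 30 fps each PP interval is quantized to ~33 ms steps, which is why interpolation and careful peak localization matter when SDE values below 10 ms are reported.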
Automatic video summarization driven by a spatio-temporal attention model
NASA Astrophysics Data System (ADS)
Barland, R.; Saadane, A.
2008-02-01
According to the literature, automatic video summarization techniques can be classified in two categories according to the nature of the output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most published approaches are based on the image signal and use pixel characterization, histogram techniques, or block-based image decomposition. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
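The selection rule, pick frames where the combined saliency variation exceeds a threshold, can be sketched with a single-channel stand-in. Here the mean absolute frame difference substitutes for the paper's three-channel saliency model, and the scene data and threshold are invented:

```python
import numpy as np

def saliency_variation(frames):
    """Per-frame global saliency change, sketched as the mean absolute
    difference between consecutive intensity maps (a stand-in for the
    intensity/color/temporal-contrast channels of the paper)."""
    return [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]

def key_frames(frames, thresh):
    """Indices of frames where salient content changes more than thresh."""
    v = saliency_variation(frames)
    return [i + 1 for i, d in enumerate(v) if d > thresh]

# Hypothetical sequence: a static scene with a shot change at frame 3.
scene_a = np.zeros((16, 16))
scene_b = np.ones((16, 16))
frames = [scene_a, scene_a, scene_a, scene_b, scene_b]
keys = key_frames(frames, thresh=0.5)
```

In the full method each channel's variation would be computed on saliency maps rather than raw intensities before being fused into the global variation signal.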
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riblett, MJ; Weiss, E; Hugo, GD
Purpose: To evaluate the performance of a 4D-CBCT registration and reconstruction method that corrects for respiratory motion and enhances image quality under clinically relevant conditions. Methods: Building on previous work, which tested feasibility of a motion-compensation workflow using image datasets superior to clinical acquisitions, this study assesses workflow performance under clinical conditions in terms of image quality improvement. Evaluated workflows utilized a combination of groupwise deformable image registration (DIR) and image reconstruction. Four-dimensional cone beam CT (4D-CBCT) FDK reconstructions were registered to either mean or respiratory phase reference frame images to model respiratory motion. The resulting 4D transformation was used to deform projection data during the FDK backprojection operation to create a motion-compensated reconstruction. To simulate clinically realistic conditions, superior quality projection datasets were sampled using a phase-binned striding method. Tissue interface sharpness (TIS) was defined as the slope of a sigmoid curve fit to the lung-diaphragm boundary or to the carina tissue-airway boundary when no diaphragm was discernable. Image quality improvement was assessed in 19 clinical cases by evaluating mitigation of view-aliasing artifacts, tissue interface sharpness recovery, and noise reduction. Results: For clinical datasets, evaluated average TIS recovery relative to base 4D-CBCT reconstructions was observed to be 87% using fixed-frame registration alone; 87% using fixed-frame with motion-compensated reconstruction; 92% using mean-frame registration alone; and 90% using mean-frame with motion-compensated reconstruction. Soft tissue noise was reduced on average by 43% and 44% for the fixed-frame registration and registration with motion-compensation methods, respectively, and by 40% and 42% for the corresponding mean-frame methods.
Considerable reductions in view aliasing artifacts were observed for each method. Conclusion: Data-driven groupwise registration and motion-compensated reconstruction have the potential to improve the quality of 4D-CBCT images acquired under clinical conditions. For clinical image datasets, the addition of motion compensation after groupwise registration visibly reduced artifact impact. This work was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA166119. Hugo and Weiss hold a research agreement with Philips Healthcare and license agreement with Varian Medical Systems. Weiss receives royalties from UpToDate. Christensen receives funds from Roger Koch to support research.
Segar, Michelle L; Updegraff, John A; Zikmund-Fisher, Brian J; Richardson, Caroline R
2012-01-01
The reasons for exercising that are featured in health communications brand exercise and socialize individuals about why they should be physically active. Discovering which reasons for exercising are associated with high-quality motivation and behavioral regulation is essential to promoting physical activity and weight control that can be sustained over time. This study investigates whether framing physical activity in advertisements featuring distinct types of goals differentially influences body image and behavioral regulations based on self-determination theory among overweight and obese individuals. Using a three-arm randomized trial, overweight and obese women and men (aged 40-60 yr, n = 1690) read one of three ads framing physical activity as a way to achieve (1) better health, (2) weight loss, or (3) daily well-being. Framing effects were estimated in an ANOVA model with pairwise comparisons using the Bonferroni correction. This study showed that there are immediate framing effects on physical activity behavioral regulations and body image from reading a one-page advertisement about physical activity and that gender and BMI moderate these effects. Framing physical activity as a way to enhance daily well-being positively influenced participants' perceptions about the experience of being physically active and enhanced body image among overweight women, but not men. The experiment had less impact among the obese study participants compared to those who were overweight. These findings support a growing body of research suggesting that, compared to weight loss, framing physical activity for daily well-being is a better gain-frame message for overweight women in midlife.
NASA Astrophysics Data System (ADS)
Stolzenburg, Maribeth; Marshall, Thomas C.; Karunarathne, Sumedhe; Orville, Richard E.
2018-10-01
Using video data recorded at 50,000 frames per second for nearby negative lightning flashes, estimates are derived for the length of positive upward connecting leaders (UCLs) that presumably formed prior to new ground attachments. Return strokes were 1.7 to 7.8 km distant, yielding image resolutions of 4.25 to 19.5 m. No UCLs are imaged in these data, indicating those features were too transient or too dim compared to other lightning processes that are imaged at these resolutions. Upper bound lengths for 17 presumed UCLs are determined from the height above flat ground or water of the successful stepped leader tip in the image immediately prior to (within 20 μs before) the return stroke. Better estimates of maximum UCL lengths are determined using the downward stepped leader tip's speed of advance and the estimated return stroke time within its first frame. For 17 strokes, the upper bound length of the possible UCL averages 31.6 m and ranges from 11.3 to 50.3 m. Among the close strokes (those with spatial resolution <8 m per pixel), the five which connected to water (salt water lagoon) have UCL upper-bound estimates whose average (24.1 m) is significantly shorter than that of the three close strokes which connected to land (36.9 m). The better estimates of maximum UCL lengths for the eight close strokes average 20.2 m, with a slightly shorter average of 18.3 m for the five that connected to water. All the better estimates of UCL maximum lengths are <38 m in this dataset.
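The two length estimates reduce to simple arithmetic per stroke. The sketch below uses entirely hypothetical numbers (tip height, leader speed, and timing are assumptions, not values from the study):

```python
# Two ways of estimating the upward connecting leader (UCL) length for
# one stroke, with made-up numbers for illustration.

# Upper bound: the stepped-leader tip height above flat ground in the
# last image before the return stroke (within 20 us of it).
tip_height_m = 35.0
upper_bound_m = tip_height_m

# Better estimate: the downward leader tip keeps advancing at its
# measured speed until the estimated return-stroke time within the
# final 20-us frame, so only the remaining gap is bridged by the UCL.
leader_speed_m_per_s = 4.0e5      # assumed stepped-leader tip speed
time_into_frame_s = 15e-6         # assumed return-stroke time in frame
leader_advance_m = leader_speed_m_per_s * time_into_frame_s
better_estimate_m = tip_height_m - leader_advance_m
```

The better estimate is always at or below the upper bound, since the downward leader closes part of the final gap before the UCL needs to span the rest.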
Chen, Yuling; Lou, Yang; Yen, Jesse
2017-07-01
During conventional ultrasound imaging, the need for multiple transmissions per image and the time of flight for a desired imaging depth limit the frame rate of the system. Using a single plane wave pulse during each transmission, followed by parallel receive processing, allows for high frame rate imaging. However, image quality is degraded because of the lack of transmit focusing. Beamforming by spatial matched filtering (SMF) is a promising method which focuses ultrasonic energy using spatial filters constructed from the transmit-receive impulse response of the system. Studies by other researchers have shown that SMF beamforming can provide dynamic transmit-receive focusing throughout the field of view. In this paper, we apply SMF beamforming to plane wave transmissions (PWTs) to achieve both dynamic transmit-receive focusing at all imaging depths and a high imaging frame rate (>5000 frames per second). We demonstrated mathematically, through analysis based on the narrowband Rayleigh-Sommerfeld diffraction theory, that the combined method (PWT + SMF) achieves two-way focusing. Moreover, the broadband performance of PWT + SMF was quantified in terms of lateral resolution and contrast from both computer simulations and experimental data. Results were compared between SMF beamforming and conventional delay-and-sum (DAS) beamforming in both simulations and experiments. At an imaging depth of 40 mm, simulation results showed a 29% lateral resolution improvement and a 160% contrast improvement with PWT + SMF. These improvements were 17% and 48% for experimental data with noise.
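The matched-filtering idea behind SMF can be illustrated in one dimension: correlate the received line with the known transmit-receive impulse response, and the output peaks at the target with maximal output SNR. The 2-D spatial filters of the paper extend this to the full field; the tone-burst impulse response and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Transmit-receive impulse response, modeled here as a short
# Gaussian-windowed tone burst (all parameters illustrative).
t = np.arange(17)
impulse_response = np.exp(-((t - 8) / 3.0) ** 2) * np.sin(2 * np.pi * t / 8)

# Received RF line: the echo of one scatterer at sample 30, plus noise.
rf = np.zeros(128)
rf[30:30 + impulse_response.size] += impulse_response
rf += rng.normal(0, 0.05, rf.size)

# Matched filtering = correlating the line with the impulse response;
# by the matched-filter theorem the peak output SNR occurs where the
# template aligns with the echo.
out = np.correlate(rf, impulse_response, mode="valid")
peak = int(np.argmax(out))
```

In the paper's setting the "template" is the spatially varying transmit-receive response of a plane-wave insonification, so the correlation simultaneously provides transmit and receive focusing at every depth.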
2001-01-24
The potential for investigating combustion at the limits of flammability, and the implications for spacecraft fire safety, led to the Structures Of Flame Balls At Low Lewis-number (SOFBALL) experiment flown twice aboard the Space Shuttle in 1997. That success led to a reflight on the STS-107 Research 1 mission planned for 2002. Shown here are video frames captured during the Microgravity Sciences Lab-1 mission in 1997. Flame balls are intrinsically dim, thus requiring the use of image intensifiers on video cameras. The principal investigator is Dr. Paul Ronney of the University of Southern California, Los Angeles. Glenn Research Center in Cleveland, OH, manages the project.
Weber, Michael; Mickoleit, Michaela; Huisken, Jan
2014-01-01
This chapter introduces the concept of light sheet microscopy along with practical advice on how to design and build such an instrument. Selective plane illumination microscopy is presented as an alternative to confocal microscopy due to several superior features such as high-speed full-frame acquisition, minimal phototoxicity, and multiview sample rotation. Based on our experience over the last 10 years, we summarize the key concepts in light sheet microscopy, typical implementations, and successful applications. In particular, sample mounting for long time-lapse imaging and the resulting challenges in data processing are discussed in detail. © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rodrigues, Pedro L.; Rodrigues, Nuno F.; Fonseca, Jaime C.; Vilaça, João. L.
2015-03-01
An accurate percutaneous puncture is essential for disintegration and removal of renal stones. Although this procedure has proven to be safe, organs surrounding the renal target may be accidentally perforated. This work describes a new intraoperative framework in which tracked surgical tools are superimposed within 4D ultrasound imaging for security assessment of the percutaneous puncture trajectory (PPT). A PPT is first generated from the skin puncture site towards an anatomical target, using the information retrieved by electromagnetic motion tracking sensors coupled to surgical tools. Then, 2D ultrasound images acquired with a tracked probe are used to reconstruct a 4D ultrasound volume around the PPT under GPU processing. Volume hole-filling was performed at different processing time intervals by a tri-linear interpolation method. At spaced time intervals, the volume of the anatomical structures was segmented to ascertain whether any vital structure lay along the PPT and might compromise surgical success. To enhance the volume visualization of the reconstructed structures, different render transfer functions were used. Results: real-time US volume reconstruction and rendering at more than 25 frames/s was only possible when rendering three orthogonal slice views alone. Using the whole reconstructed volume, 8-15 frames/s were achieved, and only 3 frames/s were reached when segmentation and detection of structures intersecting the PPT were introduced. The proposed framework creates a virtual and intuitive platform that can be used to identify and validate a PPT to safely and accurately perform the puncture in percutaneous nephrolithotomy.
DHMI: dynamic holographic microscopy interface
NASA Astrophysics Data System (ADS)
He, Xuefei; Zheng, Yujie; Lee, Woei Ming
2016-12-01
Digital holographic microscopy (DHM) is a powerful in-vitro biological imaging tool. In this paper, we report a fully automated off-axis digital holographic microscopy system, complete with a graphical user interface in the Matlab environment. The interface primarily includes Fourier domain processing, phase reconstruction, aberration compensation and autofocusing. A variety of imaging operations, such as region-of-interest selection, de-noising modes (filtering and averaging), low frame rate imaging for immediate reconstruction, and a high frame rate imaging routine (~27 fps), are implemented to facilitate ease of use.
Technique for identifying, tracing, or tracking objects in image data
Anderson, Robert J [Albuquerque, NM; Rothganger, Fredrick [Albuquerque, NM
2012-08-28
A technique for computer vision uses a polygon contour to trace an object. The technique includes rendering a polygon contour superimposed over a first frame of image data. The polygon contour is iteratively refined to more accurately trace the object within the first frame after each iteration. The refinement includes computing image energies along lengths of contour lines of the polygon contour and adjusting positions of the contour lines based at least in part on the image energies.
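The image-energy computation at the heart of the refinement step can be sketched as follows. The `image_energy` helper, its gradient-magnitude energy, and the test image are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def image_energy(img, p0, p1, samples=20):
    """Mean gradient magnitude sampled along a contour segment from p0
    to p1 (row, col); high energy means the segment lies on a strong
    edge of the object being traced."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ts = np.linspace(0.0, 1.0, samples)
    pts = [(round(p0[0] + t * (p1[0] - p0[0])),
            round(p0[1] + t * (p1[1] - p0[1]))) for t in ts]
    return float(np.mean([mag[r, c] for r, c in pts]))

# A bright square on a dark background: a contour segment lying on the
# square's left edge should score higher than one in empty background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
on_edge = image_energy(img, (8, 8), (23, 8))
off_edge = image_energy(img, (8, 28), (23, 28))
```

An iterative refinement loop would then nudge each contour line toward positions of higher image energy until the polygon settles on the object boundary.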
Evaluation of experimental UAV video change detection
NASA Astrophysics Data System (ADS)
Bartelsen, J.; Saur, G.; Teutsch, C.
2016-10-01
During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost overlooked when change detection is executed manually. With respect to the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger,1 and Saur et al.2 and have built upon the ideas of Saur and Bartelsen.3 The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects.
The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition and based on our approach, we illustrate the results showing the change detection in short, but real video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material, capable of change detection, will be acquired.
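The register-then-difference scheme can be sketched with the simplest special case of a homography, a pure translation; a full implementation would estimate a projective homography from matched features. The frames, the shift, and the change threshold below are invented:

```python
import numpy as np

def warp_translate(img, dr, dc):
    """Warp an image by an integer translation (the simplest special
    case of a homography), filling exposed borders with zeros."""
    out = np.zeros_like(img)
    h, w = img.shape
    src = img[max(0, -dr):h - max(0, dr), max(0, -dc):w - max(0, dc)]
    out[max(0, dr):max(0, dr) + src.shape[0],
        max(0, dc):max(0, dc) + src.shape[1]] = src
    return out

# "Before" frame, and an "after" frame seen from a slightly shifted
# viewpoint and containing one genuine change (a new bright object).
before = np.zeros((20, 20))
before[5:8, 5:8] = 1.0
after = warp_translate(before, 2, 3)
after[15, 15] = 1.0                     # the change to be detected

# Pixel-wise registration: warp "after" back by the known shift; the
# difference image then isolates the genuine change from camera motion.
registered = warp_translate(after, -2, -3)
diff = np.abs(registered - before)
changed = np.argwhere(diff > 0.5)
```

Without the registration step the whole shifted scene would light up in the difference image, which is precisely the false-alarm mechanism (imprecise registration, parallax) the paper's difference analysis tries to suppress.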
Smart CMOS image sensor for lightning detection and imaging.
Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor
2013-03-01
We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed as part of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing, allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
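The in-pixel operation, a frame-to-frame difference against an adjustable threshold, can be sketched in software. The frame contents and threshold below are invented; the real sensor performs this comparison in analog circuitry inside each pixel at 1 kHz:

```python
import numpy as np

def detect_lightning(prev_frame, curr_frame, threshold):
    """Frame-to-frame difference with an adjustable threshold, mimicking
    the in-pixel comparison: returns the (row, col) coordinates of
    pixels whose brightness jumped by more than threshold since the
    previous 1 kHz frame."""
    diff = curr_frame.astype(np.int32) - prev_frame.astype(np.int32)
    return np.argwhere(diff > threshold)

# A faint pulse on an otherwise static background scene (8-bit values).
prev = np.full((256, 256), 100, dtype=np.uint8)
curr = prev.copy()
curr[120, 200] += 30                    # faint lightning pulse
hits = detect_lightning(prev, curr, threshold=20)
```

Differencing against the previous frame makes the detector insensitive to the static cloud-top background, so even a faint pulse stands out anywhere on the large-format array.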
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1984-01-01
A series of images of a portion of a TM frame of Lake Ontario are presented. The top left frame is the TM Band 6 image; the top right image is a conventional contrast-stretched image. The bottom left image is a Band 5 to Band 3 ratio image. This image is used to generate a primitive land cover classification. Each land cover class (Water, Urban, Forest, Agriculture) is assigned a Band 6 emissivity value. The ratio image is then combined with the Band 6 image and atmospheric propagation data to generate the bottom right image. This image represents a display of data whose digital count can be directly related to estimated surface temperature. The resolution appears higher because the process cell is the size of the TM shortwave pixels.
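The ratio-based classification step can be sketched numerically. All digital counts, ratio thresholds, and emissivity values below are invented for illustration; the report's actual class boundaries and emissivities are not given here:

```python
import numpy as np

# Band 5 / Band 3 ratio classification feeding per-class emissivities
# into the Band 6 temperature product (a 2x2 toy scene).
band3 = np.array([[10.0, 40.0], [60.0, 20.0]])
band5 = np.array([[ 2.0, 80.0], [30.0, 50.0]])
ratio = band5 / band3

# Crude cover classes from the ratio (thresholds purely illustrative):
# water reflects little in Band 5, vegetation much more.
classes = np.empty(ratio.shape, dtype=object)
classes[ratio < 0.5] = "Water"
classes[(ratio >= 0.5) & (ratio < 1.0)] = "Urban"
classes[(ratio >= 1.0) & (ratio < 2.0)] = "Agriculture"
classes[ratio >= 2.0] = "Forest"

# Each class gets an assumed Band 6 emissivity for the radiometric step.
emissivity = {"Water": 0.99, "Urban": 0.95, "Agriculture": 0.97, "Forest": 0.98}
eps = np.vectorize(emissivity.get)(classes)
```

The per-pixel emissivity map `eps` is what would then be combined with the Band 6 radiances and atmospheric propagation data to convert digital counts to estimated surface temperature.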
Luo, Xiongbiao; Mori, Kensaku
2014-06-01
Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume rendering images driven by pre-operative images. The paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. Applied to endoscope tracking, the proposed measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, or normalized sum of squared differences. In clinical data evaluation, the tracking error was reduced significantly, from at least 14.6 mm to 4.5 mm, and processing was accelerated to more than 30 frames per second using a graphics processing unit.
NASA Astrophysics Data System (ADS)
Shine, R. A.
1997-05-01
Over the last decade, a repertoire of techniques has been developed and/or refined to improve the quality of high spatial resolution solar movies taken from ground based observatories. These include real time image motion corrections, frame selection, phase diversity measurements of the wavefront, and extensive post processing to partially remove atmospheric distortion. Their practical application has been made possible by the increasing availability and decreasing cost of large CCDs with fast digital readouts and high speed computer workstations with large memories. Most successful have been broad band (0.3 to 10 nm) filtergram movies which can use exposure times of 10 to 30 ms, short enough to ``freeze'' atmospheric motions. Even so, only a handful of movies with excellent image quality for more than an hour have been obtained to date. Narrowband filtergrams (about 0.01 nm), such as those required for constructing magnetograms and Dopplergrams, have been more challenging, although some single images approach the quality of the best continuum images. Some promising new techniques and instruments, together with persistence and good luck, should continue the progress made in the last several years.
Real-time chirp-coded imaging with a programmable ultrasound biomicroscope.
Bosisio, Mattéo R; Hasquenoph, Jean-Michel; Sandrin, Laurent; Laugier, Pascal; Bridal, S Lori; Yon, Sylvain
2010-03-01
Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (
NASA Astrophysics Data System (ADS)
Li, Senhu; Sarment, David
2015-12-01
Minimally invasive neurosurgery needs intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system used with a compact, mobile intraoperative CT imager is introduced in this work. A tracking frame was designed that can be easily attached to a commercially available skull clamp. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated by high-precision 3D printing, an accurate, fully automatic registration method was developed in a simple and low-cost approach; the frame also helped in estimating fiducial localization errors in image space through image processing and in patient space through calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348+/-0.028 mm, compared with a manual registration error of 1.976+/-0.778 mm. The system in this study provided robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring user interaction during the neurosurgery.
The hot hand belief and framing effects.
MacMahon, Clare; Köppen, Jörn; Raab, Markus
2014-09-01
Recent evidence of the hot hand in sport-where success breeds success in a positive recency of successful shots, for instance-indicates that this pattern does not actually exist. Yet the belief persists. We used 2 studies to explore the effects of framing on the hot hand belief in sport. We looked at the effect of sport experience and task on the perception of baseball pitch behavior as well as the hot hand belief and free-throw behavior in basketball. Study 1 asked participants to designate outcomes with different alternation rates as the result of baseball pitches or coin tosses. Study 2 examined basketball free-throw behavior and measured predicted success before each shot as well as general belief in the hot hand pattern. The results of Study 1 illustrate that experience and stimulus alternation rates influence the perception of chance in human performance tasks. Study 2 shows that physically performing an act and making judgments are related. Specifically, beliefs were related to overall performance, with more successful shooters showing greater belief in the hot hand and greater predicted success for upcoming shots. Both of these studies highlight that the hot hand belief is influenced by framing, which leads to instability and situational contingencies. We show the specific effects of framing using accumulated experience of the individual with the sport and knowledge of its structure and specific experience with sport actions (basketball shots) prior to judgments.
Universal ICT Picosecond Camera
NASA Astrophysics Data System (ADS)
Lebedev, Vitaly B.; Syrtzev, V. N.; Tolmachyov, A. M.; Feldman, Gregory G.; Chernyshov, N. A.
1989-06-01
The paper reports on the design of an ICT camera operating in the mode of linear or three-frame image scan. The camera incorporates two tubes: time-analyzing ICT PIM-107 1 with cathode S-11, and brightness amplifier PMU-2V (gain about 10^4) for the image shaped by the first tube. The camera is designed on the basis of streak camera AGAT-SF3 2 with almost the same power sources, but substantially modified pulse electronics. Schematically, the design of tube PIM-107 is depicted in the figure. The tube consists of cermet housing 1 and photocathode 2, made in a separate vacuum volume and introduced into the housing by means of a manipulator. In the direct vicinity of the photocathode, an accelerating electrode made of a fine-structure grid is located. An electrostatic lens formed by focusing electrode 4 and anode diaphragm 5 produces a beam of electrons with a "remote crossover". The authors have suggested this term for an electron beam whose crossover is 40 to 60 mm away from the anode diaphragm plane, which guarantees high sensitivity of scan plates 6 with respect to multiaperture framing diaphragm 7. Beyond every diaphragm aperture, a pair of deflecting plates 8 is found, shielded from compensation plates 10 by diaphragm 9. The electronic image produced by the photocathode is focused on luminescent screen 11. The tube is controlled with the help of two saw-tooth voltages applied in antiphase across plates 6 and 10. Plates 6 serve for sweeping the electron beam over the surface of diaphragm 7; the beam is either allowed toward the screen or delayed by the diaphragm walls. In such a manner, three frames are obtained, the number corresponding to that of the diaphragm apertures. Plates 10 serve for compensation of the image streak sweep on the screen. To avoid overlapping of frames, plates 8 receive static potentials responsible for shifting the frames on the screen.
By changing the potentials applied to plates 8, one can control the spacing between frames and partially or fully overlap them. This control is independent of the frame repetition frequency and of the frame duration, and only determines frame positioning on the screen. Since diaphragm 7 is located in the area of the crossover, where the electron trajectories cross, the frame is not decomposed into separate elements during its formation. The image is transferred onto the screen during practically the entire frame duration, increasing the aperture ratio of the tube as compared to that in Ref. 3.
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image and a table of features matched pairwise between frames. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
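The 3D-position step can be illustrated with a minimal least-squares ray intersection; this is a generic stand-in for the optimal ray projection method, whose details the abstract does not give:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3D point closest to a bundle of camera rays: minimize
    the summed squared distance to each ray via 3x3 normal equations."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Two cameras whose rays both pass through the point (1, 1, 5).
p = triangulate(
    origins=[np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])],
    directions=[np.array([1.0, 1.0, 5.0]), np.array([-1.0, 1.0, 5.0])],
)
```

With noisy rays the same normal equations return the point minimizing the summed squared ray distances, which is what makes alternating landmark-position and camera-pose refinement well posed.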
VIRTUAL FRAME BUFFER INTERFACE
NASA Technical Reports Server (NTRS)
Wolfe, T. L.
1994-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor supplied user interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. The Virtual Frame Buffer Interface program makes all frame buffers appear as a generic frame buffer with a specified set of characteristics, allowing programmers to write code which will run unmodified on all supported hardware. The Virtual Frame Buffer Interface converts generic commands to actual device commands. The virtual frame buffer consists of a definition of capabilities and FORTRAN subroutines that are called by application programs. The virtual frame buffer routines may be treated as subroutines, logical functions, or integer functions by the application program. Routines are included that allocate and manage hardware resources such as frame buffers, monitors, video switches, trackballs, tablets and joysticks; access image memory planes; and perform alphanumeric font or text generation. The subroutines for the various "real" frame buffers are in separate VAX/VMS shared libraries allowing modification, correction or enhancement of the virtual interface without affecting application programs. The Virtual Frame Buffer Interface program was developed in FORTRAN 77 for a DEC VAX 11/780 or a DEC VAX 11/750 under VMS 4.X. It supports ADAGE IK3000, DEANZA IP8500, Low Resolution RAMTEK 9460, and High Resolution RAMTEK 9460 Frame Buffers. It has a central memory requirement of approximately 150K. This program was developed in 1985.
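A minimal sketch of the virtual-interface idea, in Python rather than the original FORTRAN 77; the driver class and command-tuple format are invented for illustration:

```python
# Application code calls one generic interface; a per-device driver
# translates generic commands into device-specific commands, so the
# application runs unmodified on any supported frame buffer.
class VirtualFrameBuffer:
    def __init__(self, driver):
        self.driver = driver

    def write_pixel(self, x, y, value):
        # Generic command; the driver decides the device-specific encoding.
        self.driver.send(self.driver.encode_pixel(x, y, value))

class Ramtek9460Driver:
    """Hypothetical stand-in for one of the supported frame buffers."""
    def __init__(self):
        self.log = []

    def encode_pixel(self, x, y, value):
        return ("PIXEL", x, y, value)

    def send(self, cmd):
        self.log.append(cmd)

fb = VirtualFrameBuffer(Ramtek9460Driver())
fb.write_pixel(10, 20, 255)
```

Keeping each real driver in its own library, as the original did with VAX/VMS shared libraries, means a driver fix never forces relinking of application programs.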
Camenga, Deepa R.; Hieftje, Kimberly D.; Fiellin, Lynn E.; Edelman, E. Jennifer; Rosenthal, Marjorie S.; Duncan, Lindsay R.
2014-01-01
Few studies have explored the application of message framing to promote health behaviors in adolescents. In this exploratory study, we examined young adolescents’ selection of gain- versus loss-framed images and messages when designing an HIV-prevention intervention to promote delayed sexual initiation. Twenty-six adolescents (aged 10–14 years) participated in six focus groups and created and discussed posters to persuade their peers to delay the initiation of sexual activity. Focus groups were audio-recorded and transcribed. A five-person multidisciplinary team analyzed the posters and focus group transcripts using thematic analysis. The majority of the posters (18/26, 69%) contained both gain- and loss-framed content. Of the 93/170 (56%) images and messages with framing, similar proportions were gain- (48/93, 52%) and loss-framed (45/93, 48%). Most gain-framed content (23/48, 48%) focused on academic achievement, whereas loss-framed content focused on pregnancy (20/45, 44%) and HIV/AIDS (14/45, 31%). These preliminary data suggest that young adolescents may prefer a combination of gain- and loss-framing in health materials to promote reduction in sexual risk behaviors. PMID:24452229
Wavelet denoising of multiframe optical coherence tomography data
Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2012-01-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
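A toy one-dimensional version of the idea, assuming a single-level Haar transform and a crude threshold weighting (the published algorithm's local noise and structure estimation is more elaborate):

```python
import numpy as np

def haar_1d(x):
    """One-level Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_1d(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_frames(frames, noise_sigma):
    """Transform each frame, keep detail coefficients only where their mean
    across frames rises clearly above the noise level, then average and
    reconstruct. The threshold rule here is a crude stand-in for the paper's
    local noise and structure estimation."""
    coeffs = [haar_1d(f) for f in frames]
    a_mean = np.mean([a for a, _ in coeffs], axis=0)
    d_mean = np.mean([d for _, d in coeffs], axis=0)
    weight = (np.abs(d_mean) > 2 * noise_sigma).astype(float)
    return inv_haar_1d(a_mean, weight * d_mean)

# Eight noisy observations of a step signal.
rng = np.random.default_rng(1)
clean = np.array([0.0, 0, 0, 0, 4, 4, 4, 4])
frames = [clean + rng.normal(0, 0.05, size=8) for _ in range(8)]
denoised = denoise_frames(frames, noise_sigma=0.05)
```

Weighting in the transform domain rather than averaging pixels is what lets the method suppress noise without the sharpness loss of plain frame averaging.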
Backside-illuminated 6.6-μm pixel video-rate CCDs for scientific imaging applications
NASA Astrophysics Data System (ADS)
Tower, John R.; Levine, Peter A.; Hsueh, Fu-Lung; Patel, Vipulkumar; Swain, Pradyumna K.; Meray, Grazyna M.; Andrews, James T.; Dawson, Robin M.; Sudol, Thomas M.; Andreas, Robert
2000-05-01
A family of backside-illuminated CCD imagers with 6.6 μm pixels has been developed. The imagers feature full 12-bit (> 4,000:1) dynamic range with a measured noise floor of < 10 e RMS at 5 MHz clock rates, and a measured full-well capacity of > 50,000 e. The modulation transfer function performance is excellent, with measured MTF at Nyquist of 46% for 500 nm illumination. Three device types have been developed. The first device is a 1K × 1K full-frame device with a single output port, which can be run as a 1K × 512 frame-transfer device. The second device is a 512 × 512 frame-transfer device with a single output port. The third device is a 512 × 512 split frame-transfer device with four output ports. All feature the high quantum efficiency afforded by backside illumination.
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. The method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame; thus, they may be adjusted to the spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image, and the restored SAR image is synthesized from these coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
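For illustration, the moment-matching idea can be reduced to a scalar sketch: the ratio E[x²]/E[|x|]² of a zero-mean GG variable depends only on the shape factor, so the shape can be recovered by a one-dimensional search. This sketch uses clean samples; the paper's estimator additionally accounts for the speckle statistics of the observed noisy signal:

```python
import math
import random

def gg_moment_ratio(nu):
    """E[x^2] / E[|x|]^2 for a zero-mean generalized Gaussian with shape nu
    (nu = 2 is Gaussian, nu = 1 is Laplacian); this ratio is scale-free."""
    g = math.gamma
    return g(1.0 / nu) * g(3.0 / nu) / g(2.0 / nu) ** 2

def estimate_shape(samples):
    """Moment-matching estimate of the GG shape factor. The ratio above
    decreases monotonically in nu, so a bisection search recovers it."""
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    target = m2 / (m1 * m1)
    lo, hi = 0.1, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if gg_moment_ratio(mid) > target:
            lo = mid          # ratio too high -> shape must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Recover the shape of a Gaussian (nu = 2) from samples.
random.seed(0)
nu_hat = estimate_shape([random.gauss(0.0, 1.0) for _ in range(100_000)])
```

Estimating such parameters from a local window around each coefficient, rather than globally, is what makes the pdf model space-varying.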
The college journey and academic engagement: how metaphor use enhances identity-based motivation.
Landau, Mark J; Oyserman, Daphna; Keefer, Lucas A; Smith, George C
2014-05-01
People commonly talk about goals metaphorically as destinations on physical paths extending into the future or as contained in future periods. Does metaphor use have consequences for people's motivation to engage in goal-directed action? Three experiments examine the effect of metaphor use on students' engagement with their academic possible identity: their image of themselves as academically successful graduates. Students primed to frame their academic possible identity using the goal-as-journey metaphor reported stronger academic intention, and displayed increased effort on academic tasks, compared to students primed with a nonacademic possible identity, a different metaphoric framing (goal-as-contained-entity), and past academic achievements (Studies 1-2). This motivating effect persisted up to a week later as reflected in final exam performance (Study 3). Four experiments examine the cognitive processes underlying this effect. Conceptual metaphor theory posits that an accessible metaphor transfers knowledge between dissimilar concepts. As predicted in this paradigm, a journey-metaphoric framing of a possible academic identity transferred confidence in the procedure, or action sequence, required to attain that possible identity, which in turn led participants to perceive that possible identity as more connected to their current identity (Study 4). Drawing on identity-based motivation theory, we hypothesized that strengthened current/possible identity connection would mediate the journey framing's motivating effect. This mediational process predicted students' academic engagement (Study 5) and an online sample's engagement with possible identities in other domains (Study 6). Also as predicted, journey framing increased academic engagement particularly among students reporting a weak connection to their academic possible identity (Study 7).
Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan
2017-01-01
In this paper, an irregular-displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide-semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. The displacements are resolved by a hybrid approach: (i) alignment of the low-resolution (LR) images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm2. PMID:29657866
Research on seismic behavior and filling effect of a new CFT column-CFT beam frame structure
NASA Astrophysics Data System (ADS)
Wang, Ying; Shima, Hiroshi
2009-12-01
Concrete-filled steel tube (CFT) structures are widely used in practice today. Self-compacting concrete (SCC) was employed to construct a new CFT column-CFT beam frame structure (hereinafter cited as the new CFT frame structure) in this research. Three specimens, two CFT column-CFT beam joints and one hollow steel column-I beam joint, were tested to investigate the seismic behavior of the new CFT frame structure. The experimental results showed that SCC can be successfully compacted into the new CFT frame structure joints in the lab, and that the joints provide adequate seismic behavior. In order to further assess the filling effect of SCC in a long steel tube, a scale column-beam subassembly made of acrylic plates was employed for a concrete visual model experiment. The results showed that the concrete could be successfully cast into the subassembly, indicating that the new CFT frame structure can be constructed in real buildings.
NASA Astrophysics Data System (ADS)
Liang, Yu-Li
Multimedia data is increasingly important in scientific discovery and in people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing such data. Among all formats, still images and videos are the most commonly used. Images are compact in size but contain no motion information. Videos record motion but are sometimes too large to analyze. Sequential images, sets of continuous images at low frame rate, stand out because they are smaller than videos yet still retain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the Greenland ice sheet in sequential satellite images. The dynamics of supraglacial lakes deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the lake-tracking procedure with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task.
We propose SafeVchat, the first solution that achieves a satisfactory detection rate, using facial features and a skin color model. To harness all the features in the scene, we further developed another system using multiple types of local descriptors within a Bag-of-Visual-Words framework. In addition, a new contour feature for detecting obscene content is investigated.
A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications
Fu, Bo; Pitter, Mark C.; Russell, Noah A.
2011-01-01
Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system around a randomly addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
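The bandwidth split behind the dual-rate scheme is simple pixel arithmetic; the numbers below are illustrative, not those of the Daphnia experiment:

```python
def roi_frame_rate(bandwidth_pps, roi_pixels, full_frame_pixels, full_frame_fps):
    """Pixel-budget arithmetic behind the dual-rate scheme: the low-speed
    full-frame stream takes its share of the sensor's pixels-per-second
    bandwidth, and whatever remains sets the achievable ROI frame rate."""
    remaining = bandwidth_pps - full_frame_pixels * full_frame_fps
    if remaining <= 0:
        raise ValueError("full-frame stream alone exceeds the bandwidth")
    return remaining / roi_pixels

# Illustrative numbers: a 10 Mpixel/s link, 640x480 full frames at 5 fps,
# and a 64x64 region of interest.
rate = roi_frame_rate(10_000_000, 64 * 64, 640 * 480, 5)
```

Because the ROI rate scales inversely with ROI area, shrinking the window is what turns a modest-bandwidth sensor into a kilohertz-class camera.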
Immobilization precision of a modified GTC frame.
Winey, Brian; Daartz, Juliane; Dankers, Frank; Bussière, Marc
2012-05-10
The purpose of this study was to evaluate and quantify the interfraction reproducibility and intrafraction immobilization precision of a modified GTC frame. The error of the patient alignment and imaging systems was measured using a cranial skull phantom with simulated, predetermined shifts. The kV setup images were acquired with a room-mounted set of kV sources and panels. Translations and rotations calculated by the computer alignment software, relying upon three implanted fiducials, were compared to the known shifts, and the accuracy of the imaging and positioning systems was calculated. Orthogonal kV setup images for 45 proton SRT patients and 1002 fractions (average 22.3 fractions/patient) were analyzed for interfraction and intrafraction immobilization precision using a modified GTC frame. The modified frame employs a radiotransparent carbon cup and molded pillow to allow more treatment angles from posterior directions for cranial lesions. Patients and the phantom were aligned with three 1.5 mm stainless steel fiducials implanted into the skull. The accuracy and variance of the patient positioning and imaging systems were measured to be 0.10 ± 0.06 mm, with a maximum rotational uncertainty of ±0.07°. 957 pairs of interfraction image sets and 974 intrafraction image sets were analyzed, and 3D translations and rotations were recorded. The 3D vector interfraction setup reproducibility was 0.13 mm ± 1.8 mm for translations, with the largest rotational uncertainty being ±1.07°. The intrafraction immobilization precision was 0.19 mm ± 0.66 mm for translations, with the largest rotational uncertainty being ±0.50°. The modified GTC frame provides reproducible setup and effective intrafraction immobilization while allowing the complete range of entrance angles from the posterior direction.
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
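A minimal sketch of the forward model and a per-pixel recovery, assuming known binary shutter codes; the real sensor reconstructs 32 frames from 15 compressed frames with a more sophisticated method than the plain least-squares inverse used here:

```python
import numpy as np

# Toy forward model of temporally compressive capture: each of the n_meas
# recorded frames integrates the scene through its own random binary shutter
# code, and time-resolved frames are recovered by inverting the code matrix.
rng = np.random.default_rng(0)
n_time, n_meas = 32, 15
codes = rng.integers(0, 2, (n_meas, n_time)).astype(float)  # shutter codes

x_true = np.zeros(n_time)
x_true[10] = 1.0                    # a single brief emission event
y = codes @ x_true                  # the 15 compressed pixel measurements

x_hat, *_ = np.linalg.lstsq(codes, y, rcond=None)
```

The least-squares estimate reproduces the measurements exactly; recovering x_true itself from fewer measurements than time bins additionally requires a sparsity or smoothness prior, as in compressive-sensing reconstructions.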
NASA Astrophysics Data System (ADS)
Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin
2017-03-01
Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of the artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for a cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.
Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns
Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang
2014-01-01
Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physicians' time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from those without any abnormality; most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a "texton histogram" of an image block as features. The histogram captures the distribution of different "textons" representing various textures in an endoscopy image. The textons are representative response vectors obtained by applying a combination of the Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns to the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
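The texture-feature idea can be illustrated with the Local Binary Pattern half of the pipeline. This is a minimal sketch of a basic 8-neighbour LBP and a normalised code histogram used as a feature vector; the paper's full method additionally runs the LM filter bank and clusters responses into textons, which is omitted here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern codes for an image block."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]                      # centre pixels (borders dropped)
    # Offsets of the 8 neighbours, ordered clockwise from top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def texton_histogram(gray, bins=256):
    """Normalised histogram of LBP codes, used as a texture feature vector."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

feat = texton_histogram(np.random.default_rng(0).random((64, 64)))
print(feat.shape)
```

A classifier then separates "normal" from "abnormal" blocks in this histogram space.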
Dissecting key components of the Ca2+ homeostasis game by multifunctional fluorescence imaging
NASA Astrophysics Data System (ADS)
Bastianello, Stefano; Ciubotaru, Catalin D.; Beltramello, Martina; Mammano, Fabio
2004-07-01
Different sub-cellular compartments and organelles, such as the cytosol, endoplasmic reticulum and mitochondria, are known to be differentially involved in Ca2+ homeostasis. It is thus of primary concern to develop imaging paradigms that make it possible to distinguish these diverse components. To this end, we have constructed a complete system that performs multi-functional imaging under software control. The main hardware components of this system are a piezoelectric actuator, used to set the objective lens position, a fast-switching monochromator, used to select the excitation wavelength, a beam splitter, used to separate emission wavelengths, and an I/O interface to control the hardware. For these demonstrative experiments, cultured HeLa cells were transfected with a Ca2+ sensitive fluorescent biosensor (cameleon) targeted to the mitochondria (mtCam), and also loaded with cytosolic Fura2. The main system clock was provided by the frame-valid signal (FVAL) of a cooled CCD camera that captured wide-field fluorescence images of the two probes. Excitation wavelength and objective lens position were rapidly set during silent periods between successive exposures, with a minimum inter-frame interval of 2 ms. Triplets of images were acquired at 340, 380 and 430 nm excitation wavelengths at each of three adjacent focal planes, separated by 250 nm. Optical sectioning was enhanced off-line by applying a nearest-neighbor deconvolution algorithm based on a directly estimated point-spread function (PSF). To measure the PSF, image stacks of sub-resolution fluorescent beads, incorporated into the cell cytoplasm by electroporation, were acquired under identical imaging conditions. The different dynamics of cytosolic and mitochondrial Ca2+ signals evoked by histamine could be distinguished clearly, with sub-micron resolution.
Other FRET-based probes capable of sensing different chemical modifications of the cellular environment can be integrated in this approach, which is intrinsically suitable for the analysis of the interactions and cross-talks between different signaling pathways (e.g. Ca2+ and cAMP).
Processing Infrared Images For Fire Management Applications
NASA Astrophysics Data System (ADS)
Warren, John R.; Pratt, William K.
1981-12-01
The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.
Development of two-framing camera with large format and ultrahigh speed
NASA Astrophysics Data System (ADS)
Jiang, Xiaoguo; Wang, Yuan; Wang, Yi
2012-10-01
A high-speed imaging facility is important and necessary for building a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind a lens with a long focal length, mainly consists of a lens-coupled gated image intensifier, a CCD camera and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval time between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each 1024×1024 in size, can be captured simultaneously with the developed camera. Besides, this camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pixel/s, which fully meets the measurement requirements of the Dragon-I LIA.
Lees, Heidi; Zapata, Félix; Vaher, Merike; García-Ruiz, Carmen
2018-07-01
This novel investigation focused on studying the transfer of explosive residues (TNT, HMTD, PETN, ANFO, dynamite, black powder, NH4NO3, KNO3, NaClO3) in ten consecutive fingerprints to two different surfaces - cotton fabric and polycarbonate plastic - by using multispectral imaging (MSI). Imaging was performed employing a reflex camera in a purpose-built photo studio. Images were processed in MATLAB to select the most discriminating frame - the one that provided the sharpest contrast between the explosive and the material in the red-green-blue (RGB) visible region. The amount of explosive residues transferred in each fingerprint was determined as the number of pixels containing explosive particles. First, the pattern of PETN transfer by ten different persons in successive fingerprints was studied. No significant differences in the pattern of transfer of PETN between subjects were observed, which was also confirmed by multivariate analysis of variance (MANOVA). Then, the transfer of traces of the nine above explosives in ten consecutive fingerprints to cotton fabric and polycarbonate plastic was investigated. The obtained results demonstrated that the amount of explosive residues deposited in successive fingerprints tended to undergo a power or exponential decrease, with the exception of the inorganic salts (NH4NO3, KNO3, NaClO3) and ANFO (which consists of 90% NH4NO3).
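The power-versus-exponential comparison in this abstract amounts to fitting both models in log space and comparing residuals. The sketch below uses made-up pixel counts (not data from the paper) purely to show the two fits.

```python
import numpy as np

# Hypothetical pixel counts of explosive residue in ten consecutive
# fingerprints (illustrative values only, not from the paper)
n = np.arange(1, 11)
pixels = np.array([5200, 2100, 1150, 700, 480, 330, 260, 200, 170, 140], float)

# Fit both candidate models with ordinary least squares in log space:
#   power:        y = a * n**b    ->  log y = log a + b * log n
#   exponential:  y = a * exp(b*n) -> log y = log a + b * n
logy = np.log(pixels)
slope_pow, loga_pow = np.polyfit(np.log(n), logy, 1)
slope_exp, loga_exp = np.polyfit(n, logy, 1)

res_pow = logy - (loga_pow + slope_pow * np.log(n))
res_exp = logy - (loga_exp + slope_exp * n)
print("power-law   SSE:", np.sum(res_pow**2))
print("exponential SSE:", np.sum(res_exp**2))
```

Whichever model leaves the smaller sum of squared log-residuals describes the decay better; for a true transfer process the fitted slope is negative in both cases.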
Large Binocular Telescope Observations of Europa Occulting Io's Volcanoes at 4.8 μm
NASA Astrophysics Data System (ADS)
Skrutskie, Michael F.; Conrad, Albert; Resnick, Aaron; Leisenring, Jarron; Hinz, Phil; de Pater, Imke; de Kleer, Katherine; Spencer, John; Skemer, Andrew; Woodward, Charles E.; Davies, Ashley Gerard; Defrére, Denis
2015-11-01
On 8 March 2015 Europa passed nearly centrally in front of Io. The Large Binocular Telescope observed this event in dual-aperture AO-corrected Fizeau interferometric imaging mode using the mid-infrared imager LMIRcam operating behind the Large Binocular Telescope Interferometer (LBTI) at a broadband wavelength of 4.8 μm (M-band). Occultation light curves generated from frames recorded every 123 milliseconds show that both Loki and Pele/Pillan were well resolved. Europa's center shifted by 2 kilometers relative to Io from frame to frame. The derived light curve for Loki is consistent with the double-lobed structure reported by Conrad et al. (2015) using direct interferometric imaging with LBTI.
Data rate enhancement of optical camera communications by compensating inter-frame gaps
NASA Astrophysics Data System (ADS)
Nguyen, Duy Thong; Park, Youngil
2017-07-01
Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and the image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of the OCC system, it is still much lower than that of the photodiode-based LiFi system. One major reason for this low data rate is the inter-frame gap (IFG) of the image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for this IFG efficiently with an interleaved Hamming coding scheme. The proposed scheme is implemented and its performance is measured.
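The abstract does not spell out the code construction, but the idea of interleaved Hamming coding can be sketched with the standard Hamming(7,4) code: codewords are written row-wise and transmitted column-wise, so the symbols lost in one inter-frame gap hit each codeword at most once and remain correctable. Everything below (code choice, block size, error pattern) is an illustrative assumption.

```python
import numpy as np

# Hamming(7,4) generator and parity-check matrices (systematic form)
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(d4):          # 4 data bits -> 7-bit codeword
    return (np.array(d4) @ G) % 2

def decode(r7):          # correct up to one bit error, return 4 data bits
    r = np.array(r7).copy()
    s = (H @ r) % 2
    if s.any():
        j = int(np.argmax((H.T == s).all(axis=1)))  # column matching syndrome
        r[j] ^= 1
    return r[:4]

# Interleave codewords so one corrupted bit per frame gap hits each codeword once
words = [encode([1, 0, 1, 1]), encode([0, 1, 1, 0]), encode([1, 1, 1, 1])]
block = np.array(words)           # 3 codewords x 7 bits
tx = block.T.flatten()            # transmit column-wise

# Simulate one bit corrupted around the inter-frame gap
rx = tx.copy()
rx[0] ^= 1

rx_block = rx.reshape(7, 3).T     # de-interleave back to codewords
decoded = [decode(w) for w in rx_block]
print(decoded[0].tolist())        # recovers the original data bits
```

Without the interleaver, a burst spanning several consecutive symbols could put two errors into one codeword, which Hamming(7,4) cannot correct.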
Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin
2015-01-01
Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405
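The colour-generation step can be sketched as a chroma transfer in YCbCr space: keep the luminance of each received grey-scale frame and borrow the chroma of the nearest transmitted colour key frame. This is a minimal stand-in for the paper's dictionary-based scheme, using standard BT.601 conversion matrices; the 4×4 "frames" are toy data.

```python
import numpy as np

# ITU-R BT.601 RGB -> YCbCr conversion matrix and its inverse
RGB2YCC = np.array([[ 0.299,   0.587,   0.114 ],
                    [-0.1687, -0.3313,  0.5   ],
                    [ 0.5,    -0.4187, -0.0813]])
YCC2RGB = np.linalg.inv(RGB2YCC)

def colourise(grey, colour_key):
    """Rebuild an RGB frame from a grey-scale frame, borrowing the
    chroma (Cb, Cr) of the nearest transmitted colour key frame."""
    ycc_key = colour_key @ RGB2YCC.T        # per-pixel YCbCr of the key frame
    ycc = np.empty_like(ycc_key)
    ycc[..., 0] = grey                      # luminance from the grey frame
    ycc[..., 1:] = ycc_key[..., 1:]         # chroma from the key frame
    return np.clip(ycc @ YCC2RGB.T, 0.0, 1.0)

key = np.random.default_rng(1).random((4, 4, 3))   # toy colour key frame
grey = key @ np.array([0.299, 0.587, 0.114])       # grey version of next frame
rebuilt = colourise(grey, key)
print(np.abs(rebuilt - key).max())                 # small when frames are similar
```

The power saving comes from the transmission side: only one of every N frames carries three channels, the rest carry one.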
Statistical Deconvolution for Superresolution Fluorescence Microscopy
Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei
2012-01-01
Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
NASA Astrophysics Data System (ADS)
Kura, Sreekanth; Xie, Hongyu; Fu, Buyin; Ayata, Cenk; Boas, David A.; Sakadžić, Sava
2018-06-01
Objective. Resting state functional connectivity (RSFC) allows the study of functional organization in normal and diseased brain by measuring the spontaneous brain activity generated under resting conditions. Intrinsic optical signal imaging (IOSI) based on multiple illumination wavelengths has been used successfully to compute RSFC maps in animal studies. The IOSI setup complexity would be greatly reduced if only a single wavelength could be used to obtain comparable RSFC maps. Approach. We used anesthetized mice and performed various comparisons between the RSFC maps based on a single wavelength as well as on oxy-, deoxy- and total hemoglobin concentration changes. Main results. The RSFC maps based on IOSI at a single wavelength selected for sensitivity to blood volume changes are quantitatively comparable to the RSFC maps based on oxy- and total hemoglobin concentration changes obtained by the more complex IOSI setups. Moreover, RSFC maps do not require CCD cameras with very high frame acquisition rates, since our results demonstrate that they can be computed from data obtained at frame rates as low as 5 Hz. Significance. Our results will have general utility for guiding future RSFC studies based on IOSI and making decisions about IOSI system designs.
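An RSFC map is, at its core, a seed-based correlation map over pixel time courses. The sketch below fabricates a single-wavelength reflectance series at 5 Hz (the lowest rate the abstract reports as sufficient) in which two regions share a slow signal, then correlates every pixel with a seed; all sizes and signal parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W = 300, 16, 16                 # 60 s at 5 Hz, toy image size

# Toy single-wavelength reflectance series: two regions share a slow
# "resting-state" signal, the rest is uncorrelated noise
t = np.arange(T)
slow = np.sin(2 * np.pi * 0.1 * t / 5.0)        # 0.1 Hz at 5 Hz sampling
frames = rng.standard_normal((T, H, W)) * 0.5
frames[:, 2:6, 2:6] += slow[:, None, None]
frames[:, 10:14, 10:14] += slow[:, None, None]

# Seed-based RSFC: correlate every pixel's time course with the seed's
seed = frames[:, 3:5, 3:5].mean(axis=(1, 2))
f = frames - frames.mean(axis=0)
s = seed - seed.mean()
corr = (f * s[:, None, None]).sum(axis=0) / (
    np.sqrt((f**2).sum(axis=0)) * np.sqrt((s**2).sum()))

print(round(float(corr[11, 11]), 2))  # high in the functionally connected region
```

The distant region correlates with the seed while background pixels do not, which is the pattern an RSFC map visualizes.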
PFM2: a 32 × 32 processor for X-ray diffraction imaging at FELs
NASA Astrophysics Data System (ADS)
Manghisoni, M.; Fabris, L.; Re, V.; Traversi, G.; Ratti, L.; Grassi, M.; Lodola, L.; Malcovati, P.; Vacchi, C.; Pancheri, L.; Benkechcache, M. E. A.; Dalla Betta, G.-F.; Xu, H.; Verzellesi, G.; Ronchin, S.; Boscardin, M.; Batignani, G.; Bettarini, S.; Casarosa, G.; Forti, F.; Giorgi, M.; Paladino, A.; Paoloni, E.; Rizzo, G.; Morsani, F.
2016-11-01
This work is concerned with the design of a readout chip for application to experiments at the next generation of X-ray Free Electron Lasers (FEL). The ASIC, named PixFEL Matrix (PFM2), has been designed in a 65 nm CMOS technology and consists of 32 × 32 pixels. Each cell covers an area of 110 × 110 μm2 and includes a low-noise charge sensitive amplifier (CSA) with dynamic signal compression, a time-variant shaper used to process the preamplifier output signal, a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC) and digital circuitry for channel control and data readout. Two different solutions for the readout channel, based on different versions of the time-variant filter, have been integrated in the chip. Both solutions can be operated in such a way as to cope with the high frame rate (exceeding 1 MHz) foreseen for future X-ray FEL machines. The ASIC will be bump bonded to a slim/active edge pixel sensor to form the first demonstrator for the PixFEL X-ray imager. This work has been carried out in the framework of the PixFEL project funded by Istituto Nazionale di Fisica Nucleare (INFN), Italy.
Corrected High-Frame Rate Anchored Ultrasound with Software Alignment
ERIC Educational Resources Information Center
Miller, Amanda L.; Finch, Kenneth B.
2011-01-01
Purpose: To improve lingual ultrasound imaging with the Corrected High Frame Rate Anchored Ultrasound with Software Alignment (CHAUSA; Miller, 2008) method. Method: A production study of the IsiXhosa alveolar click is presented. Articulatory-to-acoustic alignment is demonstrated using a Tri-Modal 3-ms pulse generator. Images from 2 simultaneous…
"Mathematicians Would Say It This Way": An Investigation of Teachers' Framings of Mathematicians
ERIC Educational Resources Information Center
Cirillo, Michelle; Herbel-Eisenmann, Beth
2011-01-01
Although popular media often provides negative images of mathematicians, we contend that mathematics classroom practices can also contribute to students' images of mathematicians. In this study, we examined eight mathematics teachers' framings of mathematicians in their classrooms. Here, we analyze classroom observations to explore some of the…
Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F
2018-02-01
Retinal motion detection with an accuracy of 0.77 arcmin corresponding to 3.7 µm on the retina is demonstrated with a novel digital micromirror device based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7 millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts.
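Measuring frame-to-frame motion against a reference image is commonly done with phase correlation; the sketch below shows that core step on synthetic data. It is a generic stand-in: the paper's algorithm works on subsampled DMD frames and reaches sub-pixel accuracy, while this recovers only integer shifts of a full frame.

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Integer-pixel shift of `frame` relative to `ref` via phase correlation."""
    R = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
    r = np.real(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Map wrap-around peak positions to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
ref = rng.random((64, 64))                   # reference (confocal) image
moved = np.roll(ref, (5, -3), axis=(0, 1))   # simulated eye motion
print(phase_correlation_shift(ref, moved))   # (5, -3)
```

A sequence of such shift estimates, one per snapshot, yields the motion trace that can then be used to re-register the frames before averaging.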
Underwater image mosaicking and visual odometry
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott
2017-05-01
This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU) - an integrated sensor package that combines multiple accelerometers and gyros to produce a three dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e. featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
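The final velocity computation described in items (i)-(iii) reduces to scaling the pixel translation by the ground sample distance. The sketch below uses a pinhole-camera approximation with entirely hypothetical numbers (altitude, focal length in pixels, frame interval); the paper itself does not give these values.

```python
def ground_velocity(tx_px, ty_px, altitude_m, focal_px, dt_s):
    """Velocity over the seabed from the frame-to-frame translation (pixels),
    the vehicle's altitude above the seabed, and the camera focal length
    expressed in pixels.

    Ground sample distance (m/pixel) = altitude / focal length (pixels)."""
    gsd = altitude_m / focal_px
    return tx_px * gsd / dt_s, ty_px * gsd / dt_s

# Hypothetical numbers: 12 px/frame shift at 10 Hz, 3 m altitude, f = 800 px
vx, vy = ground_velocity(12.0, 0.0, 3.0, 800.0, 0.1)
print(f"vx = {vx:.3f} m/s")   # 12 * (3/800) / 0.1 = 0.45 m/s
```

The translation components come from the frame-to-frame transform matrix estimated during registration; the altitude typically comes from the vehicle's sonar altimeter.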
Features of Jupiter's Great Red Spot
NASA Technical Reports Server (NTRS)
1996-01-01
This montage features activity in the turbulent region of Jupiter's Great Red Spot (GRS). Four sets of images of the GRS were taken through various filters of the Galileo imaging system over an 11.5 hour period on 26 June, 1996 Universal Time. The sequence was designed to reveal cloud motions. The top and bottom frames on the left are of the same area, northeast of the GRS, viewed through the methane (732 nm) filter but about 70 minutes apart. The top left and top middle frames are of the same area and at the same time, but the top middle frame is taken at a wavelength (886 nm) where methane absorbs more strongly. (Only high clouds can reflect sunlight in this wavelength.) Brightness differences are caused by the different depths of features in the two images. The bottom middle frame shows reflected light at a wavelength (757 nm) where there are essentially no absorbers in the Jovian atmosphere. The white spot is to the northwest of the GRS; its appearance at different wavelengths suggests that the brightest elements are 30 km higher than the surrounding clouds. The top and bottom frames on the right, taken nine hours apart and in the violet (415 nm) filter, show the time evolution of an atmospheric wave northeast of the GRS. Visible crests in the top right frame are much less apparent 9 hours later in the bottom right frame. The misalignment of the north-south wave crests with the observed northwestward local wind may indicate a shift in wind direction (wind shear) with height. The areas within the dark lines are 'truth windows' or sections of the images which were transmitted to Earth using less data compression. Each of the six squares covers 4.8 degrees of latitude and longitude (about 6000 square kilometers). North is at the top of each frame.
Launched in October 1989, Galileo entered orbit around Jupiter on December 7, 1995. The spacecraft's mission is to conduct detailed studies of the giant planet, its largest moons and the Jovian magnetic environment. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
NASA Astrophysics Data System (ADS)
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWRI, the starting and ending frequencies of a linear frequency modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWRI images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.
Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.
Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng
2014-01-01
Head movement during dynamic brain PET/CT imaging results in a mismatch between CT and dynamic PET images. It can cause artifacts in CT-based attenuation corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with cerebellum as the reference region was used to generate the regional distribution volume ratio (DVR) for the FDDNP scans before and after MC. For FDG studies, the image derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high accuracy of registration between PET and CT and improved PET images after MC. In the patient study, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans could improve the qualitative and quantitative aspects of images of both tracers.
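The Logan reference-region analysis used here estimates DVR as the late-time slope of one integral ratio against another. The sketch below implements that graphical step on synthetic time-activity curves; the curve shapes and the linear-phase start index are illustrative assumptions, not values from the study.

```python
import numpy as np

def logan_dvr(ct, cref, t, t_star_idx):
    """Logan graphical analysis with a reference region: the slope of
    Y = int(ct)/ct versus X = int(cref)/ct over the linear phase
    (from index t_star_idx on) estimates the distribution volume ratio."""
    int_ct = np.concatenate(([0], np.cumsum(np.diff(t) * 0.5 * (ct[1:] + ct[:-1]))))
    int_cr = np.concatenate(([0], np.cumsum(np.diff(t) * 0.5 * (cref[1:] + cref[:-1]))))
    X = int_cr[t_star_idx:] / ct[t_star_idx:]
    Y = int_ct[t_star_idx:] / ct[t_star_idx:]
    slope, _ = np.polyfit(X, Y, 1)
    return slope

t = np.linspace(0, 60, 61)            # minutes
cref = t * np.exp(-t / 20.0)          # toy reference-region (cerebellum) curve
ct = 1.5 * cref                       # toy target region with DVR = 1.5
print(round(logan_dvr(ct, cref, t, 20), 3))   # -> 1.5
```

Because head motion corrupts the frame-wise activity values entering these integrals, correcting the frames before the fit changes the resulting DVR images, which is what the study quantifies.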
Application of automatic threshold in dynamic target recognition with low contrast
NASA Astrophysics Data System (ADS)
Miao, Hua; Guo, Xiaoming; Chen, Yu
2014-11-01
A hybrid photoelectric joint transform correlator can realize automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing targets with low contrast using a photoelectric joint transform correlator, because of the differences in attitude, brightness and grayscale between target and template, only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images at a speed of 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information and better preservation of the outlines of target and template, so this method plays a very important role in target recognition with the optical correlation method. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because the outline information is broken to some extent; the optimal threshold is obtained by manual intervention in most cases. Aiming at the characteristics of dynamic targets, an improved automatic threshold procedure was implemented by multiplying the OTSU threshold of target and template by a scale coefficient of the processed image, and combining it with mathematical morphology. The optimal threshold can be achieved automatically by this improved processing for dynamic low-contrast target images. The recognition rate of dynamic targets is improved through decreased background-noise effects and increased correlation information. A series of dynamic tank images with a speed of about 70 km/h was adopted as target images. The 1st frame of this series can correlate only with the 3rd frame without any processing. With OTSU thresholding, the 80th frame can be recognized. By automatic threshold processing of the joint images, this number can be increased to 89 frames.
Experimental results show that the improved automatic threshold processing has particular value for the recognition of dynamic low-contrast targets.
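The scale-adjusted Otsu threshold at the heart of this approach can be sketched in a few lines of numpy. This is a hedged illustration, not the authors' code: the synthetic image, the 0.8 scale coefficient, and the function names are all hypothetical, and the morphology step is omitted:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Classic Otsu threshold: pick the level maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 weight
    m = np.cumsum(p * centers)              # cumulative first moment
    w1 = 1.0 - w0
    ok = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[ok] = (m[-1] * w0[ok] - m[ok]) ** 2 / (w0[ok] * w1[ok])
    return centers[np.argmax(sigma_b)]

def scaled_otsu_binarize(img, scale=0.8):
    """Improved-threshold idea: scale the Otsu level (scale < 1 keeps more of
    a faint target's outline) before binarizing."""
    t = scale * otsu_threshold(img)
    return (img >= t).astype(np.uint8), t

rng = np.random.default_rng(0)
img = rng.normal(50.0, 5.0, (64, 64))      # noisy background
img[20:40, 20:40] += 30.0                  # low-contrast target block
mask, t = scaled_otsu_binarize(img)
print(round(float(t), 1), int(mask.sum()))
```

Lowering the threshold below the plain Otsu level keeps more of the dim target's outline pixels, which is the effect the improved processing exploits before the morphology cleanup.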
NASA Astrophysics Data System (ADS)
Magri, Alphonso William
This study was undertaken to develop a nonsurgical breast biopsy technique based on Gd-DTPA contrast-enhanced magnetic resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measurements SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartment Patlak model. Three parameters of this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters of this model were fitted: the rate of Gd exiting the compartment representing the extracellular space of a lesion; the rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities.
Curve fitting used for PET/CT and MR series was accomplished by application of the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling evaluation was based on the ability of parameter values to differentiate between tissue types. This evaluation was used on registered and unregistered image series and found that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to biopsy still needs to be done with a larger set of subject data.
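The per-voxel Levenberg-Marquardt fit can be sketched as follows; the two-exponential `enhancement` curve is a stand-in for the Patlak/Brix models, and all parameter values are illustrative, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def enhancement(t, A, k_fast, k_slow):
    """Hypothetical two-compartment curve: fast uptake, slow washout."""
    return A * (np.exp(-k_slow * t) - np.exp(-k_fast * t))

t = np.linspace(0.0, 10.0, 60)        # sampling times of the dynamic series
true_p = (2.0, 1.5, 0.2)              # ground-truth parameters for the test
rng = np.random.default_rng(1)
y = enhancement(t, *true_p) + rng.normal(0.0, 0.01, t.size)

# method='lm' selects scipy's Levenberg-Marquardt for unbounded problems
popt, _ = curve_fit(enhancement, t, y, p0=(1.0, 1.0, 0.5), method='lm')
print(np.round(popt, 2))
```

Looping this fit over every voxel's time course and storing `popt` per voxel is what builds the 3D parametric images described in the abstract.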
Murugesan, Yahini Prabha; Alsadoon, Abeer; Manoranjan, Paul; Prasad, P W C
2018-06-01
Augmented reality-based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video. The proposed system consists of a rotational matrix and translation vector algorithm to reduce the geometric error and improve the depth perception by including 2 stereo cameras and a translucent mirror in the operating room. The results on the mandible/maxilla area show that the new algorithm improves the video accuracy by 0.30-0.40 mm (in terms of overlay error) and the processing rate to 10-13 frames/s compared to 7-10 frames/s in existing systems. The depth perception increased by 90-100 mm. The proposed system concentrates on reducing the geometric error. Thus, this study provides an acceptable range of accuracy with a shorter operating time, which provides surgeons with a smooth surgical flow. Copyright © 2018 John Wiley & Sons, Ltd.
Ebe, Kazuyu; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji
2015-08-01
To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio-caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient's tumor motion. A substitute target with the patient's tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors' QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients' tumor motions were evaluated as waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar; thus, 13 of the 16 trajectories were analyzed. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm.
The error values differed by less than 1 mm from 4D modeling function errors and gimbal motion errors in the ExacTrac log analyses (n = 13). The newly developed video image-based QA system, including in-house software, can analyze more than a thousand images (33 frames/s). Positional errors are approximately equivalent to those in ExacTrac log analyses. This system is useful for the visual illustration of the progress of the tracking state and for the quantification of positional accuracy during dynamic tumor tracking irradiation in the Vero4DRT system.
Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.
Ji, L; Danuser, G
2005-12-01
We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
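The idea of integrating correlation scores over multiple frame pairs can be sketched as below. This is a simplified numpy illustration under assumed conditions (pure translation, integer shifts, white-noise texture), not the authors' implementation:

```python
import numpy as np

def ncc_map(block, search, ms):
    """Normalized cross-correlation of `block` against every integer shift
    within +/-ms inside `search` (the block's neighbourhood in the next frame)."""
    h, w = block.shape
    b = block - block.mean()
    nb = np.sqrt((b * b).sum()) + 1e-12
    out = np.zeros((2 * ms + 1, 2 * ms + 1))
    for dy in range(-ms, ms + 1):
        for dx in range(-ms, ms + 1):
            c = search[ms + dy:ms + dy + h, ms + dx:ms + dx + w]
            c = c - c.mean()
            out[dy + ms, dx + ms] = (b * c).sum() / (nb * (np.sqrt((c * c).sum()) + 1e-12))
    return out

# Synthetic quasi-stationary flow: texture moving (1, 2) px/frame, with noise
# strong enough that a single frame pair gives an unreliable peak.
rng = np.random.default_rng(2)
base = rng.normal(0.0, 1.0, (200, 200))
frames = [np.roll(base, (i, 2 * i), axis=(0, 1)) + rng.normal(0.0, 2.0, (200, 200))
          for i in range(6)]

y0, x0, bs, ms = 80, 80, 16, 4
integrated = np.zeros((2 * ms + 1, 2 * ms + 1))
for f0, f1 in zip(frames[:-1], frames[1:]):
    block = f0[y0:y0 + bs, x0:x0 + bs]
    search = f1[y0 - ms:y0 + bs + ms, x0 - ms:x0 + bs + ms]
    integrated += ncc_map(block, search, ms)   # sum scores over frame pairs

dy, dx = np.unravel_index(integrated.argmax(), integrated.shape)
print(dy - ms, dx - ms)   # the per-frame flow vector
```

Summing the score maps raises the true peak linearly while the noise grows only as the square root of the number of pairs, which is why the integrated maximum becomes globally significant where a single pair would fail.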
Noise and sensitivity of x-ray framing cameras at Nike (abstract)
NASA Astrophysics Data System (ADS)
Pawley, C. J.; Deniz, A. V.; Lehecka, T.
1999-01-01
X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.
Classical and neural methods of image sequence interpolation
NASA Astrophysics Data System (ADS)
Skoneczny, Slawomir; Szostakowski, Jaroslaw
2001-08-01
An image interpolation problem is often encountered in many areas. Some examples are interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in normal TV or HDTV, or reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology used may be either classical or based on neural networks, depending on the demands of the specific interpolation problem.
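The gap between direct and motion-compensated interframe interpolation can be shown with a toy example; here the motion vector is assumed known, whereas a real interpolator would have to estimate it:

```python
import numpy as np

rng = np.random.default_rng(7)
f0 = rng.normal(size=(64, 64))                 # frame at t
f2 = np.roll(f0, (4, 2), axis=(0, 1))          # frame at t+2: scene shifted (4, 2)
truth = np.roll(f0, (2, 1), axis=(0, 1))       # the missing frame at t+1

direct = 0.5 * (f0 + f2)                       # direct temporal average
mc = 0.5 * (np.roll(f0, (2, 1), axis=(0, 1))   # warp f0 forward half-way
            + np.roll(f2, (-2, -1), axis=(0, 1)))  # warp f2 backward half-way

e_direct = np.abs(direct - truth).max()
e_mc = np.abs(mc - truth).max()
print(e_direct > e_mc)                         # motion compensation wins
```

Direct averaging blurs moving content into a double exposure, while warping both neighbours halfway along the motion vector reconstructs the missing frame; the practical difficulty, which the surveyed methods address, is estimating that vector reliably.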
NASA Astrophysics Data System (ADS)
Ait Moulay Larbi, E.; Bouley, S.; Dassou, A.; Benkhaldoun, Z.; Baratoux, D.; Lazrek, M.
2013-12-01
We present the research environment of our network and highlight some results from the analysis of the first lunar meteoroid impacts detected in Morocco. We present an example of ground-based instrumentation for carrying out a successful search for lunar flash phenomena, and discuss the interest of monitoring these phenomena, focusing on the determination of the positions of the resulting craters on the Moon. Precise localization of impact flashes is very advantageous, especially as several new craters will be identified in the near future by LROC or other robotic spacecraft cameras. The two flashes reported in this study are optimally situated in the central region of the lunar disk, which reduces the mismatch between the barycenter of radiation and the actual position of the impact. Smaller-scale lunar features are easily identified after superposition of a large number of images, which increases the signal-to-noise ratio and produces an optimal image of the non-illuminated fraction of the Moon. The sub-pixel shift of each image relative to the first frame (base frame) was determined by fitting the correlation peak obtained in Fourier space to a 2-dimensional Gaussian, following Schaum and McHugh [1996] and Baratoux et al. [2001]. To further improve the positioning, the signal of the flash is fitted to a 2-dimensional Gaussian in each frame (previously shifted to the base image) where the flash is present. The barycenter of the flash is given as the average of the centers of the 2-dimensional Gaussian functions, rounded to the nearest integer. Two impact flashes were detected from the AGM observatory in Marrakech, on February 6, 2013, at 06:29:56.7 UT and on April 14, 2013, at 20:00:45.4 UT. The characteristics of each flash are given in the table below.
The diameter of the crater formed on the lunar surface can be estimated using Gault's formula for craters of less than 100 m in diameter; the results show that the meteoroids likely produced craters of about 2.5 m and 4.4 m in diameter for Flash 1 and Flash 2, respectively.
Characteristics of lunar impact flashes
40 MHz high-frequency ultrafast ultrasound imaging.
Huang, Chih-Chung; Chen, Pei-Yu; Peng, Po-Hsun; Lee, Po-Yang
2017-06-01
Ultrafast high-frame-rate ultrasound imaging based on coherent plane-wave compounding has been developed for many biomedical applications. Most coherent plane-wave compounding systems operate at 3-15 MHz, and the image resolution in this frequency range is not sufficient for visualizing tissue microstructure. Therefore, the purpose of this study was to implement high-frequency ultrafast ultrasound imaging operating at 40 MHz. In the simulation study, plane-wave compounding imaging and conventional multifocus B-mode imaging were performed using the Field II toolbox of MATLAB. In experiments, plane-wave compounding images were obtained from a 256-channel ultrasound research platform with a 40 MHz array transducer. All images were assessed using point-spread functions and cyst phantoms. The in vivo experiment was performed on zebrafish. Since high-frequency ultrasound exhibits lower penetration, chirp excitation was applied in simulation to increase the imaging depth. The simulation results showed that a lateral resolution of up to 66.93 μm and a contrast of up to 56.41 dB were achieved when compounding 75 plane-wave angles. The experimental results showed that a lateral resolution of up to 74.83 μm and a contrast of up to 44.62 dB were achieved when compounding 75 plane-wave angles. The dead zone and compounding noise are about 1.2 mm and 2.0 mm in depth, respectively, for experimental compounding imaging. The structure of the zebrafish heart was observed clearly using plane-wave compounding imaging. The use of fewer than 23 angles for compounding allowed a frame rate higher than 1000 frames per second, while the lateral resolution plateaus at about 72 μm once more than 10 angles are compounded. This study demonstrates the highest operational frequency reported for ultrafast high-frame-rate ultrasound imaging. © 2017 American Association of Physicists in Medicine.
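The trade-off between the number of compounded angles and the frame rate follows directly from the pulse repetition frequency. The PRF below is an assumed illustrative value; the abstract states only that fewer than 23 angles allowed more than 1000 frames per second:

```python
# Compounded frame rate = pulse repetition frequency / number of angles,
# since every compounded frame needs one transmit event per angle.
prf = 24_000                      # transmit events per second (assumed value)
for n_angles in (5, 10, 23, 75):
    rate = prf / n_angles         # compounded frames per second
    print(n_angles, round(rate, 1))
```

Each added angle improves resolution and contrast but divides the achievable frame rate, which is the balance the 10-to-23-angle operating range in the abstract reflects.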
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
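The split half of ROAM's queue-driven refinement can be sketched as below. This is a strongly simplified illustration: the error metric is a stand-in for ROAM's nested wedgie bounds, and the merge queue and the neighbour forced-splits needed for crack-free meshes are omitted:

```python
import heapq
import numpy as np

def split_to_budget(heights, budget):
    """Greedy ROAM-style refinement: always split the leaf triangle with the
    largest error until the triangle budget is met. A triangle is
    (left, right, apex) with its hypotenuse from left to right; the error is a
    simplified wedge metric (height at the hypotenuse midpoint vs. the
    interpolated height)."""
    def err(tri):
        (lx, ly), (rx, ry), _ = tri
        mx, my = (lx + rx) // 2, (ly + ry) // 2
        return abs(heights[my][mx] - 0.5 * (heights[ly][lx] + heights[ry][rx]))

    n = len(heights) - 1
    roots = [((0, 0), (n, n), (0, n)), ((n, n), (0, 0), (n, 0))]
    heap = [(-err(t), i, t) for i, t in enumerate(roots)]
    heapq.heapify(heap)
    serial, leaves, done = len(heap), len(heap), []
    while heap and leaves < budget:
        _, _, (l, r, a) = heapq.heappop(heap)
        if abs(l[0] - r[0]) <= 1 and abs(l[1] - r[1]) <= 1:
            done.append((l, r, a))                 # cannot be split further
            continue
        m = ((l[0] + r[0]) // 2, (l[1] + r[1]) // 2)
        leaves += 1                                # one leaf becomes two
        for child in ((a, l, m), (r, a, m)):
            heapq.heappush(heap, (-err(child), serial, child))
            serial += 1
    return done + [t for _, _, t in heap]

h = np.zeros((5, 5))
h[2, 2] = 1.0                                      # a bump in the height field
mesh = split_to_budget(h, 8)
print(len(mesh))
```

Because the queue always yields the worst leaf, the triangle budget is hit directly, mirroring ROAM's ability to achieve specified triangle counts; the full algorithm adds a merge queue so that only the few triangles that change per frame are touched.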
High-performance floating-point image computing workstation for medical applications
NASA Astrophysics Data System (ADS)
Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin
1990-07-01
The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1: 1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. 
Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.
Evaluation of sequential images for photogrammetrically point determination
NASA Astrophysics Data System (ADS)
Kowalczyk, M.
2011-12-01
Close range photogrammetry encounters many problems in reconstructing the three-dimensional shape of objects. The relative orientation parameters of the photos usually play the key role in solving this problem. Automation of the process is difficult because of the complexity of the recorded scene and the configuration of camera positions, which often makes automatic joining of photos into one set impossible. Application of a camcorder is a solution widely proposed in the literature to support the creation of 3D models. The main advantages of this tool are the large number of recorded images and camera positions; the exterior orientation changes only slightly between two neighboring frames. These features of a film sequence make it possible to create models with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents results of experiments determining the interior orientation parameters of several sets of frames imaging a three-dimensional test field. This section describes the calibration repeatability of film frames taken from a camcorder, which is important for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken to determine a group of check points, for the purpose of controlling the camera's application in measurement tasks. Finally, some results are presented of experiments comparing the determination of recorded object points in 3D space. In common digital photogrammetry, where separate photos are used, the first levels of the image pyramids are connected using feature-based matching. This complicated process creates many contingencies that can produce false detections of image similarities.
In the case of a digital film camera, authors avoid this step and go straight to area-based matching, relying on the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photos comes from the whole-image distance. This image-distance method can work with more than just the two dimensions of the translation vector: scale and angles are also used to improve the image matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other, so the procedure searching for pairs of points works faster and more accurately because the analyzed areas can be reduced. Another proposed solution, based on an image created by adding the differences between particular frames, gives rougher results but works much faster than standard matching.
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R
2017-11-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing then yields an overall high-resolution, high-quality 3D image. In theory, superresolution processing including deconvolution can address the diffraction limit, lateral scan density and background noise problems together. In experiment, an improvement in lateral resolution of ~3 times, reaching 7.81 µm and 2.19 µm with sample-arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, was confirmed by imaging a known resolution test target. Improved lateral resolution was also demonstrated on in vitro skin C-scan images. For in vivo 3D SD-OCT imaging of human skin, fingerprint and retina layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random, minor, unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high-quality B-scan images of these in vivo tissues.
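The core of multi-frame superresolution, placing sub-pixel-shifted low-resolution frames onto a finer common grid, can be sketched as follows. This shift-and-add illustration assumes known shifts and omits the deconvolution step the abstract mentions:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place each low-resolution frame onto a `factor`-times finer grid at its
    sub-pixel offset (in low-res pixels, assumed in [0, 1)) and average."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += f
        cnt[oy::factor, ox::factor] += 1
    return acc / np.maximum(cnt, 1)

# A fine-grid scene sampled four times with half-pixel offsets.
factor = 2
rng = np.random.default_rng(4)
hi = rng.normal(size=(64, 64))
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [hi[int(dy * factor)::factor, int(dx * factor)::factor] for dy, dx in shifts]
sr = shift_and_add(frames, shifts, factor)
print(np.abs(sr - hi).max())    # exact recovery on this noiseless sampling
```

In the OCT setting, the shifts come from the multi-modal volume registration of the C-scans rather than being known, and deconvolution is applied afterwards to counter the diffraction-limited spot.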
Real-time CT-video registration for continuous endoscopic guidance
NASA Astrophysics Data System (ADS)
Merritt, Scott A.; Rai, Lav; Higgins, William E.
2006-03-01
Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The methods proposed previously either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per second with minimal user intervention.
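The warm-start strategy, initializing each frame's registration from the previous frame's result, can be sketched as below. Nelder-Mead on a pure-translation SSD cost is a stand-in for the paper's gradient-based 2D/3D optimization; all values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

def register(ref, frame, x0):
    """Translation aligning `frame` to `ref`, warm-started at x0 (the previous
    frame's estimate), so only a short local search is needed per frame."""
    def cost(p):
        return np.mean((nd_shift(frame, p, order=3, mode='nearest') - ref) ** 2)
    return minimize(cost, x0, method='Nelder-Mead',
                    options={'xatol': 1e-4, 'fatol': 1e-12}).x

rng = np.random.default_rng(5)
ref = gaussian_filter(rng.normal(size=(64, 64)), 3)   # smooth synthetic view
est, errs = np.zeros(2), []
for i in range(1, 5):
    s = (0.4 * i, -0.3 * i)                           # slowly drifting viewpoint
    frame = nd_shift(ref, s, order=3, mode='nearest')
    est = register(ref, frame, est)                   # warm start: previous result
    errs.append(float(np.abs(est + np.array(s)).max()))
print([round(e, 3) for e in errs])
```

Because consecutive video frames differ by only a small motion, the previous optimum is already inside the new basin of attraction, which is what lets the continuous registration keep up at video rate without manual re-initialization.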
Temporal enhancement of two-dimensional color doppler echocardiography
NASA Astrophysics Data System (ADS)
Terentjev, Alexey B.; Settlemier, Scott H.; Perrin, Douglas P.; del Nido, Pedro J.; Shturts, Igor V.; Vasilyev, Nikolay V.
2016-03-01
Two-dimensional color Doppler echocardiography is widely used for assessing blood flow inside the heart and blood vessels. Currently, frame acquisition time for this method varies from tens to hundreds of milliseconds, depending on the Doppler sector parameters. This leads to low frame rates in the resulting video sequences, equal to tens of Hz, which is insufficient for some diagnostic purposes, especially in pediatrics. In this paper, we present a new approach to the reconstruction of 2D color Doppler cardiac images that increases the frame rate to hundreds of Hz. The approach relies on a modified method of frame reordering originally applied to real-time 3D echocardiography; there are no previous publications describing the application of this method to 2D color Doppler data. The approach has been tested on several in-vivo cardiac 2D color Doppler datasets with an approximate duration of 30 s and a native frame rate of 15 Hz. The resulting image sequences had frame rates equivalent to 500 Hz.
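A minimal sketch of retrospective frame reordering, the principle borrowed here from 3D echocardiography: frames acquired at a low native rate over many cardiac cycles are re-indexed by their phase within the cycle, synthesizing one densely sampled cycle. The cycle length is assumed known (e.g., from ECG gating), and all numbers are illustrative:

```python
import numpy as np

fs = 15.0                          # native frame rate, Hz
T = 0.341                          # cardiac period, s (assumed known)
t = np.arange(450) / fs            # 30 s of acquisition
phase = np.mod(t, T) / T           # each frame's phase in [0, 1)
order = np.argsort(phase)          # reordering of the raw sequence by phase
dt = np.diff(np.sort(phase)) * T   # frame spacing after reordering, s
rate = 1.0 / np.mean(dt)           # equivalent frame rate, Hz
print(len(order), round(rate, 1))
```

The method depends on the native rate and the cardiac period being incommensurate, so that successive beats are sampled at ever-different phases; the equivalent rate then scales with the total number of acquired frames rather than the acquisition rate.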
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During the process of microscanning, a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented in real time with currently available high-performance processors; the image shifts must be iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of the translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm significantly reduces the computational burden compared with the original EM algorithm, making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
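A one-step shift estimate in the spirit of the proposed registration algorithm can be sketched by linearizing the image about the reference and solving a 2x2 least-squares system. This is a generic gradient-based formulation, not the authors' algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def one_step_shift(ref, frame):
    """One-step sub-pixel shift: linearize frame ≈ ref - (∇ref)·d and solve a
    2x2 least-squares system; no iterative cost-function search. Returns the
    translation to apply to `frame` to align it with `ref`."""
    gy, gx = np.gradient(ref)
    diff = frame - ref
    s = (slice(4, -4), slice(4, -4))         # ignore border effects
    gy, gx, diff = gy[s], gx[s], diff[s]
    A = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                  [np.sum(gy * gx), np.sum(gx * gx)]])
    b = np.array([np.sum(gy * diff), np.sum(gx * diff)])
    return np.linalg.solve(A, b)

rng = np.random.default_rng(6)
ref = gaussian_filter(rng.normal(size=(64, 64)), 3)   # smooth synthetic scene
frame = nd_shift(ref, (0.3, -0.2), order=3, mode='nearest')
est = one_step_shift(ref, frame)
print(np.round(est, 3))
```

Solving the normal equations once replaces the repeated cost-function evaluations of the iterative scheme, which is the computational saving the one-step registration provides before the simplified EM reconstruction.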
Hologram generation by horizontal scanning of a high-speed spatial light modulator.
Takaki, Yasuhiro; Okada, Naoya
2009-06-10
In order to increase the image size and the viewing zone angle of a hologram, a high-speed spatial light modulator (SLM) is imaged as a vertically long image by an anamorphic imaging system, and this image is scanned horizontally by a galvano scanner. The reduction in horizontal pixel pitch of the SLM provides a wide viewing zone angle. The increased image height and horizontal scanning increased the image size. We demonstrated the generation of a hologram having a 15 degrees horizontal viewing zone angle and an image size of 3.4 inches with a frame rate of 60 Hz using a digital micromirror device with a frame rate of 13.333 kHz as a high-speed SLM.
NASA Astrophysics Data System (ADS)
Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel
2015-03-01
The fast heart rate of the mouse (~7 Hz) makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease, and truly real-time 3D imaging cannot be achieved with established imaging modalities. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution in the heart was subsequently imaged. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from the background and to assess cardiac function. The 3D nature of the experimental data allowed determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, previously unachieved by any other cardiac imaging modality. Furthermore, the ICG distribution dynamics allowed determination of the pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential of optoacoustic cardiac imaging and is expected to make a major contribution to future preclinical studies of animal models of cardiovascular health and disease.
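Given per-frame ventricular volumes at ~7-8 frames per cardiac cycle, the beat-by-beat function parameters follow directly; the volume numbers below are illustrative, not from the study:

```python
import numpy as np

vol = np.array([52.0, 50, 44, 38, 33, 35, 45, 51,    # beat 1: volumes (uL)
                53.0, 49, 43, 37, 32, 36, 46, 52])   # beat 2
beats = vol.reshape(2, 8)          # 8 volumetric frames per beat
edv = beats.max(axis=1)            # end-diastolic volume, per beat
esv = beats.min(axis=1)            # end-systolic volume, per beat
sv = edv - esv                     # stroke volume, per beat
ef = sv / edv                      # ejection fraction, per beat
print(sv.tolist(), np.round(ef, 3).tolist())
```

With volumes available on every frame of every beat, these indices can be reported per beat rather than as a cycle-averaged value, which is the beat-by-beat capability the abstract highlights.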
Mission Specialist Hawley works with the SWUIS experiment
2013-11-18
STS093-350-022 (22-27 July 1999) --- Astronaut Steven A. Hawley, mission specialist, works with the Southwest Ultraviolet Imaging System (SWUIS) experiment onboard the Earth-orbiting Space Shuttle Columbia. The SWUIS is based around a Maksutov-design Ultraviolet (UV) telescope and a UV-sensitive, image-intensified Charge-Coupled Device (CCD) camera that frames at video frame rates.
High resolution metric imaging payload
NASA Astrophysics Data System (ADS)
Delclaud, Y.
2017-11-01
Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, in the framework of Earth-observation systems able to provide military and government users with metric images from space. This leadership allowed Alcatel to propose for the export market, within a French collaborative framework, a complete space-based system for metric observation.
NASA Technical Reports Server (NTRS)
2000-01-01
Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.
Computer quantitation of coronary angiograms
NASA Technical Reports Server (NTRS)
Ledbetter, D. C.; Selzer, R. H.; Gordon, R. M.; Blankenhorn, D. H.; Sanmarco, M. E.
1978-01-01
A computer technique is being developed at the Jet Propulsion Laboratory to automate the measurement of coronary stenosis. A Vanguard 35 mm film transport is optically coupled to a Spatial Data System vidicon/digitizer, which in turn is controlled by a DEC PDP 11/55 computer. Programs have been developed to track the edges of the arterial shadow, to locate normal and atherosclerotic vessel sections, and to measure percent stenosis. Multiple-frame analysis techniques are being investigated that involve, on the one hand, averaging stenosis measurements from adjacent frames and, on the other, averaging adjacent frame images directly and then measuring stenosis from the averaged image. For the latter case, geometric transformations are used to force registration of vessel images whose spatial orientation changes.
Marker-less multi-frame motion tracking and compensation in PET-brain imaging
NASA Astrophysics Data System (ADS)
Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.
2015-03-01
In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion-tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can be used not only with Multi-frame Acquisition (MAF) PET motion correction, but also that precise timing can be employed to determine only the frames that actually need correction. This speeds up reconstruction by eliminating the unnecessary subdivision of frames.
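The precise-timing idea, flagging only the acquisition frames whose intra-frame motion warrants correction, can be sketched as follows. The pose representation (a scalar displacement from a reference pose) and the 2 mm threshold are illustrative assumptions, not values from the paper:

```python
def frames_needing_correction(pose_trace, frame_bounds, threshold_mm=2.0):
    """pose_trace: list of (t_seconds, displacement_mm from reference pose),
    sampled by the 60 Hz tracker. frame_bounds: list of (t_start, t_end)
    for each PET acquisition frame. Returns indices of frames whose peak
    intra-frame motion exceeds the threshold."""
    flagged = []
    for i, (t0, t1) in enumerate(frame_bounds):
        samples = [d for t, d in pose_trace if t0 <= t < t1]
        if samples and max(samples) > threshold_mm:
            flagged.append(i)
    return flagged

# 2 s of stillness followed by 1 s of 5 mm head motion, sampled at 60 Hz:
pose_trace = [(i / 60.0, 0.2) for i in range(120)]
pose_trace += [(2.0 + i / 60.0, 5.0) for i in range(60)]
frame_bounds = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
flagged = frames_needing_correction(pose_trace, frame_bounds)  # only frame 2
```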
Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope
Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok
2017-01-01
Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243
Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru
The use of the computed-tomography (CT) coordinate system as the frame of reference in single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of this high-resolution frame of reference on xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image-quality phantom were scanned using the SPECT/CT system. xSPECT reconstructions were performed with reference CT images of differing display field-of-view (DFOV) and pixel sizes. The pixel sizes of the reconstructed xSPECT images remained close to 2.4 mm, the size of the originally acquired projection data, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image-quality phantom were independent of the DFOV size of the reference CT images. The results of this study revealed that the image quality of reconstructed xSPECT images is not influenced by the resolution of the frame of reference used in SPECT reconstruction.
Video Image Stabilization and Registration (VISAR) Software
NASA Technical Reports Server (NTRS)
1999-01-01
Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the defects of a single video frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes horizontal and vertical camera motion as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
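The multi-frame clarification step rests on a standard fact: averaging N registered frames suppresses zero-mean noise by roughly 1/sqrt(N). A minimal sketch assuming the frames are already stabilized (real VISAR also corrects rotation and zoom before stacking):

```python
import random

def stack_frames(frames):
    """Pixel-wise mean of pre-registered frames (equal-length pixel lists)."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

random.seed(0)
truth = [10.0, 50.0, 200.0, 120.0]          # the underlying scene
# 64 frames of the same scene, each corrupted by Gaussian noise (sigma = 20):
frames = [[p + random.gauss(0, 20) for p in truth] for _ in range(64)]
stacked = stack_frames(frames)
# residual noise of the stack (~20/sqrt(64) = 2.5) is far below a single frame's
```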
Ultrafast Ultrasound Imaging With Cascaded Dual-Polarity Waves.
Zhang, Yang; Guo, Yuexin; Lee, Wei-Ning
2018-04-01
Ultrafast ultrasound imaging using plane or diverging waves, instead of focused beams, has greatly advanced the development of novel ultrasound imaging methods for evaluating tissue functions beyond anatomical information. However, the sonographic signal-to-noise ratio (SNR) of ultrafast imaging remains limited due to the lack of transmission focusing and thus insufficient acoustic energy delivery. We hereby propose a new ultrafast ultrasound imaging methodology with cascaded dual-polarity waves (CDWs), which consist of a pulse train with positive and negative polarities. A new coding scheme and a corresponding linear decoding process were designed to recover signals with increased amplitude, thus increasing the SNR without sacrificing the frame rate. The newly designed CDW ultrafast ultrasound imaging technique achieved higher-quality B-mode images than coherent plane-wave compounding (CPWC) and multiplane wave (MW) imaging in a calibration phantom, ex vivo pork belly, and in vivo human back muscle. CDW imaging shows a significant improvement in the SNR (10.71 dB versus CPWC and 7.62 dB versus MW), penetration depth (36.94% versus CPWC and 35.14% versus MW), and contrast ratio in deep regions (5.97 dB versus CPWC and 5.05 dB versus MW) without compromising other image quality metrics, such as spatial resolution and frame rate. The enhanced image quality and ultrafast frame rates offered by CDW imaging hold great potential for various novel imaging applications.
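As a hypothetical two-pulse illustration of the polarity-coding idea (a simplification, not the paper's actual cascaded scheme): transmitting the sum and difference of two pulses and linearly decoding the echoes doubles each pulse's amplitude, while uncorrelated noise grows only by sqrt(2), a net SNR gain:

```python
def decode(r1, r2):
    """Linear decode of echoes from transmit codes [+1,+1] and [+1,-1]:
    summing/differencing recovers each pulse's echo at twice its amplitude."""
    s1 = [a + b for a, b in zip(r1, r2)]  # pulse-1 echo, amplitude x2
    s2 = [a - b for a, b in zip(r1, r2)]  # pulse-2 echo, amplitude x2
    return s1, s2

# Noise-free check with scalar "echoes" of two pulses p1, p2:
p1, p2 = 3.0, 5.0
r1 = [p1 + p2]   # echo of transmit event with code [+1, +1]
r2 = [p1 - p2]   # echo of transmit event with code [+1, -1]
s1, s2 = decode(r1, r2)   # s1 recovers 2*p1, s2 recovers 2*p2
```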
Co-adding techniques for image-based wavefront sensing for segmented-mirror telescopes
NASA Astrophysics Data System (ADS)
Smith, J. S.; Aronstein, David L.; Dean, Bruce H.; Acton, D. S.
2007-09-01
Image-based wavefront sensing algorithms are being used to characterize the optical performance for a variety of current and planned astronomical telescopes. Phase retrieval recovers the optical wavefront that correlates to a series of diversity-defocused point-spread functions (PSFs), where multiple frames can be acquired at each defocus setting. Multiple frames of data can be co-added in different ways; two extremes are in "image-plane space," to average the frames for each defocused PSF and use phase retrieval once on the averaged images, or in "pupil-plane space," to use phase retrieval on each PSF frame individually and average the resulting wavefronts. The choice of co-add methodology is particularly noteworthy for segmented-mirror telescopes that are subject to noise that causes uncorrelated motions between groups of segments. Using models and data from the James Webb Space Telescope (JWST) Testbed Telescope (TBT), we show how different sources of noise (uncorrelated segment jitter, turbulence, and common-mode noise) and different parts of the optical wavefront, segment and global aberrations, contribute to choosing the co-add method. Of particular interest, segment piston is more accurately recovered in "image-plane space" co-adding, while segment tip/tilt is recovered in "pupil-plane space" co-adding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.
2014-08-15
Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has transmitted through the multileaf collimator (MLC) and to integrate this correction into the backscatter-shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software, then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC-transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model, which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all head-and-neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and a Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, giving a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model.
The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
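The frame-by-frame correction in the Methods can be sketched directly: pixels under the MLC in each frame are scaled by 1.3, then the frames are summed into the integrated image. The array layout and the boolean mask input are illustrative; the actual software identifies the sub-MLC region in each frame from the leaf positions:

```python
MLC_FACTOR = 1.3  # under-response correction for MLC-transmitted radiation

def correct_and_integrate(frames, masks):
    """frames: list of 2D pixel arrays (one per EPID frame).
    masks: matching 2D booleans, True where the pixel lies under an
    MLC leaf in that frame. Returns the corrected integrated image."""
    rows, cols = len(frames[0]), len(frames[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for frame, mask in zip(frames, masks):
        for r in range(rows):
            for c in range(cols):
                v = frame[r][c]
                out[r][c] += v * MLC_FACTOR if mask[r][c] else v
    return out

# One 2x2 frame; pixels (0,0) and (1,1) lie under the MLC:
frames = [[[1, 2], [3, 4]]]
masks = [[[True, False], [False, True]]]
out = correct_and_integrate(frames, masks)
```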
Multi-frame super-resolution with quality self-assessment for retinal fundus videos.
Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P
2014-01-01
This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. To compensate for heterogeneous illumination of the fundus, we integrate retrospective illumination correction into the underlying imaging model for photometric registration. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancement of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where sensitivity was increased by 13% using super-resolution reconstruction.
NASA Astrophysics Data System (ADS)
Bernier, Jean D.
1991-09-01
The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software developments in protected mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing, through a JDR-PR10 prototype card, between the frame grabber and the host computer AT bus enables each load of the frame grabber memory buffers to be effected under software control. The protected mode assembly language program can refresh the display of a six degree pseudo-color sector in the scanner rotation within the two second period of the scanner. A study of the imaging properties of the NPS-IRSTD is presented with preliminary work on image analysis and contrast enhancement of infrared background scenes.
SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, S; Rao, A; Wendt, R
Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
NASA Astrophysics Data System (ADS)
Nunez, Jorge; Llacer, Jorge
1993-10-01
This paper describes a general Bayesian iterative algorithm with an entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to successfully reconstruct images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semireal simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, each with one-half the exposure time, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.
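For reference, the Richardson-Lucy iteration against which the Bayesian algorithm is compared can be sketched in one dimension for noise-free Poisson-type data (the paper's algorithm additionally handles an entropy prior, readout noise, background, and a default map; the edge handling below is a simplifying assumption):

```python
def convolve(x, psf):
    """1-D convolution with edge clamping; psf has odd length."""
    k = len(psf) // 2
    return [sum(psf[j] * x[min(max(i + j - k, 0), len(x) - 1)]
                for j in range(len(psf))) for i in range(len(x))]

def richardson_lucy(data, psf, iters=50):
    """Multiplicative Richardson-Lucy updates; positivity is preserved
    because a positive start is only ever multiplied by non-negative
    correction factors. Assumes a symmetric PSF, so the correlation in
    the correction step equals convolution."""
    est = [sum(data) / len(data)] * len(data)   # flat positive start
    for _ in range(iters):
        blur = convolve(est, psf)
        ratio = [d / max(b, 1e-12) for d, b in zip(data, blur)]
        corr = convolve(ratio, psf)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]
truth = [0, 0, 10, 0, 0, 6, 0, 0]
data = convolve(truth, psf)                     # noise-free blurred "observation"
restored = richardson_lucy(data, psf, iters=200)  # peaks sharpen back toward truth
```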
NASA Astrophysics Data System (ADS)
Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu
2016-03-01
The determination of in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual-bone approach registering one bone at a time, each with optimization of a six-degrees-of-freedom (6DOF) parameter; 2) a sequential approach registering one bone at a time but using the previous bone's result as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly for patella registration (74% success rate) compared to the individual-bone approach (34% success); femur and tibia-fibula registration had a 100% success rate with each approach.
Mars Orbiter Camera Views the 'Face on Mars' - Comparison with Viking
NASA Technical Reports Server (NTRS)
1998-01-01
Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility at 9:15 AM, and the raw image was immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.
The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long. In this comparison, the best Viking image has been enlarged to 3.3 times its original resolution, and the MOC image has been decreased by a similar 3.3 times, creating images of roughly the same size. In addition, the MOC images have been geometrically transformed to a more overhead projection (different from the mercator map projection of PIA01440 & 1441) for ease of comparison with the Viking image. The left image is a portion of Viking Orbiter 1 frame 070A13, the middle image is a portion of the MOC frame shown normally, and the right image is the same MOC frame but with the brightness inverted to simulate the approximate lighting conditions of the Viking image. Processing: Image processing has been applied to the images in order to improve the visibility of features. This processing included the following steps: The image was processed to remove the sensitivity differences between adjacent picture elements (calibrated). This removes the vertical streaking. The contrast and brightness of the image was adjusted, and 'filters' were applied to enhance detail at several scales. The image was then geometrically warped to meet the computed position information for a mercator-type map. This corrected for the left-right flip and the non-vertical viewing angle (about 45° from vertical), but also introduced some vertical 'elongation' of the image for the same reason Greenland looks larger than Africa on a mercator map of the Earth.
A section of the image, containing the 'Face' and a couple of nearby impact craters and hills, was 'cut' out of the full image and reproduced separately. See PIA01440-1442 for additional processing steps. Also see PIA01236 for the raw image. Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.
Large format Geiger-mode avalanche photodiode LADAR camera
NASA Astrophysics Data System (ADS)
Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Danny, Harrison
2013-05-01
Recently Spectrolab successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWaP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square miles per hour with a resolution of 25 in²/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey large areas more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate is improved by a factor of 10. With a programmable bin size from 0.3 ps to 0.5 ns and 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of Spectrolab's 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan
2016-09-01
Tracking of dim moving targets in infrared image sequences in the presence of high clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) approach, which processes the image sequence over a number of frames before making decisions on the target track and existence, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets along the viewing direction, we use 3-D TBD with multiple sensors and also strictly analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. We also establish the relationship between the pixel coordinates of each image frame and the reference coordinates.
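The merit-function accumulation at the heart of TBD-DP can be sketched in one dimension: each pixel's merit is its current intensity plus the best accumulated merit among predecessor pixels reachable under the assumed target motion, and thresholding the final merit yields detection. A toy sketch of that recursion (the paper's version is 3-D and multi-sensor):

```python
def tbd_dp(frames, max_step=1):
    """Dynamic-programming track-before-detect merit accumulation.
    frames: list of per-frame intensity lists (same length).
    The target may move at most max_step pixels between frames.
    Returns the per-pixel accumulated merit after the last frame."""
    merit = list(frames[0])
    n = len(merit)
    for frame in frames[1:]:
        new = []
        for i in range(n):
            lo, hi = max(0, i - max_step), min(n - 1, i + max_step)
            new.append(frame[i] + max(merit[lo:hi + 1]))  # best predecessor
        merit = new
    return merit

# A dim target drifting right one pixel per frame (noise-free toy clutter):
frames = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [0, 0, 0, 1, 0]]
merit = tbd_dp(frames)   # merit peaks at the target's final position
```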
Sang, Xiahan; LeBeau, James M
2014-03-01
We report the development of revolving scanning transmission electron microscopy--RevSTEM--a technique that enables characterization and removal of sample drift distortion from atomic resolution images without the need for a priori crystal structure information. To measure and correct the distortion, we acquire an image series while rotating the scan coordinate system between successive frames. Through theory and experiment, we show that the revolving image series captures the information necessary to analyze sample drift rate and direction. At atomic resolution, we quantify the image distortion using the projective standard deviation, a rapid, real-space method to directly measure lattice vector angles. By fitting these angles to a physical model, we show that the refined drift parameters provide the input needed to correct distortion across the series. We demonstrate that RevSTEM simultaneously removes the need for a priori structure information to correct distortion, leads to a dramatically improved signal-to-noise ratio, and enables picometer precision and accuracy regardless of drift rate. Copyright © 2013 Elsevier B.V. All rights reserved.
Informative-frame filtering in endoscopy videos
NASA Astrophysics Data System (ADS)
An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny
2005-04-01
Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in these videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden of further processing, such as computer-aided image processing or human experts' examinations, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify the video frames into these two classes using a combination of Discrete Fourier Transform (DFT), texture analysis, and k-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (precision, sensitivity, specificity, and accuracy).
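As a hypothetical stand-in for the paper's DFT/texture/k-means classifier, even a plain gradient-energy sharpness score separates focused texture from out-of-focus smear, which illustrates why focus-sensitive features can discriminate informative from non-informative frames. A 1-D scanline sketch (real endoscopy frames are 2-D):

```python
def sharpness(scanline):
    """Mean squared difference of neighbouring pixels: near zero for
    blurred/flat content, large for in-focus texture."""
    diffs = [(b - a) ** 2 for a, b in zip(scanline, scanline[1:])]
    return sum(diffs) / len(diffs)

focused = [10, 200, 15, 180, 20, 190, 25]      # strong mucosal texture
blurred = [100, 102, 101, 103, 102, 101, 100]  # out-of-focus smear
# sharpness(focused) is orders of magnitude above sharpness(blurred)
```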
Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing
NASA Technical Reports Server (NTRS)
Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio
2013-01-01
The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owner. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients including ARM and RSM (Remote Sensing Mast) update their related rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
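The frame-tree query can be sketched by storing, for each frame, a transform to its parent and composing transforms up to a common root. A minimal sketch using translations only; FM maintains full coordinate transforms, and the frame names and offsets below are illustrative:

```python
class FrameTree:
    """Tree of coordinate frames; each frame stores its parent and an
    offset (translation) expressed in the parent frame."""

    def __init__(self):
        self.parent = {}  # frame -> (parent frame, offset in parent)

    def add(self, frame, parent, offset):
        self.parent[frame] = (parent, offset)

    def to_root(self, frame):
        """Cumulative offset from `frame` to the tree root."""
        total = (0.0, 0.0, 0.0)
        while frame in self.parent:
            frame, off = self.parent[frame]
            total = tuple(a + b for a, b in zip(total, off))
        return total

    def transform(self, src, dst):
        """Offset of the src frame's origin expressed relative to dst."""
        a, b = self.to_root(src), self.to_root(dst)
        return tuple(x - y for x, y in zip(a, b))

tree = FrameTree()
tree.add("site1", "surface", (100.0, 50.0, 0.0))
tree.add("rover", "site1", (2.0, 3.0, 0.0))
tree.add("rsm_camera", "rover", (0.5, 0.0, 1.2))
# rsm_camera origin relative to site1: (2.5, 3.0, 1.2)
```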
Wear your hat: representational resistance in safer sex discourse.
Nelson, S D
1994-01-01
Through an analysis of four posters used by the AIDS Action Committee of Massachusetts, this article asks how representation can effectively promote safer sex practices. The images under investigation have different targeted groups--one is aimed at African-American men, one at Latinas, and two at gay men. Using a framework that connects definitions of sex in the respective communities with differences surrounding gender, race, and class, the imagery is unpacked in order to expose the effects of safer sex representation. This essay then argues that the degree to which ingrained definitions of sex are challenged constitutes a determining factor in the success or failure of safer sex representations.
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
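The lookup-table animation trick can be sketched numerically: halftone the quadrature components to 1 bit/pixel once, then change only the per-frame weights, using the identity sin(kx - wt) = cos(wt)·sin(kx) - sin(wt)·cos(kx). Random-threshold dithering stands in here for the paper's halftoning, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, frames = 256, 8, 16
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
sin_img = np.tile(np.sin(k * x), (N, 1))   # spatial sine-phase component
cos_img = np.tile(np.cos(k * x), (N, 1))   # spatial cosine-phase component

# Reduce each component to 1 bit/pixel by thresholding against independent
# random dither (bit density encodes the local gray level); map bits to +/-1.
bit_sin = np.where(sin_img > rng.uniform(-1, 1, (N, N)), 1.0, -1.0)
bit_cos = np.where(cos_img > rng.uniform(-1, 1, (N, N)), 1.0, -1.0)

# Animate by changing only the lookup-table weights, never the bit planes.
corrs = []
for t in range(frames):
    phase = 2 * np.pi * t / frames
    frame = np.cos(phase) * bit_sin - np.sin(phase) * bit_cos
    ideal = np.tile(np.sin(k * x - phase), (N, 1))   # target drifting grating
    corrs.append(np.corrcoef(frame.ravel(), ideal.ravel())[0, 1])
min_corr = min(corrs)
```

Every displayed frame correlates with the ideal drifting grating even though only two scalar weights change per frame, which is exactly what makes the lookup-table approach cheap enough to run at frame rate.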
Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but performed in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 μs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
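Row-wise digital binning can be sketched as follows (synthetic data; the streak amplitude and noise level are illustrative assumptions, with the bin size matching the ~8-pixel tagged-region thickness mentioned above):

```python
import numpy as np

def bin_rows(img, n=8):
    """Sum groups of n adjacent rows in post-processing (digital binning)."""
    h, w = img.shape
    return img[: h - h % n].reshape(-1, n, w).sum(axis=1)

rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (64, 128))   # read-noise-dominated frame
img[24:32, :] += 0.5                    # faint emission streak, 8 px thick
binned = bin_rows(img, 8)               # shape (8, 128); streak lands in row 3
```

Summing n rows grows the streak signal by n while the independent read noise grows only by sqrt(n), so the binned signal-to-noise ratio improves by roughly sqrt(n), which is why binning helped the low-SNR un-intensified cameras most.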
Io's Sodium Cloud (Clear and Green-Yellow Filters)
NASA Technical Reports Server (NTRS)
1997-01-01
The green-yellow filter and clear filter images of Io which were released over the past two days were originally exposed on the same frame. The camera pointed in slightly different directions for the two exposures, placing a clear filter image of Io on the top half of the frame, and a green-yellow filter image of Io on the bottom half of the frame. This picture shows that entire original frame in false color, the most intense emission appearing white.
East is to the right. Most of Io's visible surface is in shadow, though one can see part of an illuminated crescent on its western side. The burst of white light near Io's eastern equatorial edge (most distinctive in the green filter image) is sunlight scattered by the plume of the volcano Prometheus. There is much more bright light near Io in the clear filter image, since that filter's wider wavelength range admits more scattered light from Prometheus' sunlit plume and Io's illuminated crescent. Thus in the clear filter image especially, Prometheus's plume was bright enough to produce several white spikes which extend radially outward from the center of the plume emission. These spikes are artifacts produced by the optics of the camera. Two of the spikes in the clear filter image appear against Io's shadowed surface, and the lower of these is pointing towards a bright round spot. That spot corresponds to thermal emission from the volcano Pele. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-08
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with a satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour.
Results suggest performing an image-quality assessment before segmentation and combining different methods to achieve optimal segmentation with the on-board MR-IGRT system.
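The two evaluation measures can be sketched directly from their definitions (toy square masks stand in for the manual and automatic ROIs):

```python
import numpy as np

def dice(a, b):
    """Dice overlap coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def tre(a, b):
    """Target registration error: distance between mask centroids (pixels)."""
    ca = np.array(np.nonzero(a)).mean(axis=1)
    cb = np.array(np.nonzero(b)).mean(axis=1)
    return float(np.linalg.norm(ca - cb))

manual = np.zeros((100, 100), bool)
manual[30:60, 30:60] = True            # ground-truth ROI
auto = np.zeros((100, 100), bool)
auto[35:65, 35:65] = True              # automatic ROI, offset by (5, 5)
d, t_err = dice(manual, auto), tre(manual, auto)
```

Here the 5-pixel diagonal offset yields a TRE of sqrt(50) ≈ 7.07 pixels and a Dice coefficient of 1250/1800 ≈ 0.69, illustrating how the two metrics capture overlap and localization separately.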
Interactive wire-frame ship hullform generation and display
NASA Technical Reports Server (NTRS)
Calkins, D. E.; Garbini, J. L.; Ishimaru, J.
1984-01-01
An interactive automated procedure to generate a wire frame graphic image of a ship hullform, which uses a digitizing tablet in conjunction with the hullform lines drawing, was developed. The geometric image created is displayed on an Evans & Sutherland PS-300 graphics terminal for real time interactive viewing or is output as hard copy on an inexpensive dot matrix printer.
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area, of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
Swirling Dust in Gale Crater, Mars, Sol 1613
2017-02-27
This frame from a sequence of images shows a dust-carrying whirlwind, called a dust devil, on lower Mount Sharp inside Gale Crater, as viewed by NASA's Curiosity Mars Rover during the summer afternoon of the rover's 1,613rd Martian day, or sol (Feb. 18, 2017). Set within a broader southward view from the rover's Navigation Camera, the rectangular area outlined in black was imaged multiple times over a span of several minutes to check for dust devils. Images from the period with most activity are shown in the inset area. The images are in pairs that were taken about 12 seconds apart, with an interval of about 90 seconds between pairs. Timing is accelerated and not fully proportional in this animation. Contrast has been modified to make frame-to-frame changes easier to see. A black frame provides a marker between repeats of the sequence. On Mars as on Earth, dust devils result from sunshine warming the ground, prompting convective rising of air that has gained heat from the ground. Observations of dust devils provide information about wind directions and interaction between the surface and the atmosphere. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21483
Imaging a seizure model in zebrafish with structured illumination light sheet microscopy
NASA Astrophysics Data System (ADS)
Liu, Yang; Dale, Savannah; Ball, Rebecca; VanLeuven, Ariel J.; Baraban, Scott; Sornborger, Andrew; Lauderdale, James D.; Kner, Peter
2018-02-01
Zebrafish are a promising vertebrate model for elucidating how neural circuits generate behavior under normal and pathological conditions. The Baraban group first demonstrated that zebrafish larvae are valuable for investigating seizure events and can be used as a model for epilepsy in humans. Because of their small size and transparency, zebrafish embryos are ideal for imaging seizure activity using calcium indicators. Light-sheet microscopy is well suited to capturing neural activity in zebrafish because it is capable of optical sectioning, high frame rates, and low excitation intensities. We describe work in our lab to use light-sheet microscopy for high-speed long-time imaging of neural activity in wildtype and mutant zebrafish to better understand the connectivity and activity of inhibitory neural networks when GABAergic signaling is altered in vivo. We show that, with light-sheet microscopy, neural activity can be recorded at 23 frames per second in two colors for over 10 minutes, allowing us to capture rare seizure events in mutants. We have further implemented structured illumination to increase resolution and contrast in the vertical and axial directions during high-speed imaging at an effective frame rate of over 7 frames per second.
Research on compression performance of ultrahigh-definition videos
NASA Astrophysics Data System (ADS)
Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di
2017-11-01
With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the increasing data volume. The resulting storage and transmission demands cannot be met simply by expanding hard-disk capacity and upgrading transmission devices. Based on full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and interprediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and frame I. Then, using this strategy together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, with super-resolution reconstruction technology, the reconstructed video quality is further improved. The experiments show that the performance of the proposed compression method for a single image (frame I) and for video sequences is superior to that of HEVC in a low-bit-rate environment.
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
4D microvascular imaging based on ultrafast Doppler tomography.
Demené, Charlie; Tiran, Elodie; Sieu, Lim-Anna; Bergel, Antoine; Gennisson, Jean Luc; Pernot, Mathieu; Deffieux, Thomas; Cohen, Ivan; Tanter, Mickael
2016-02-15
4D ultrasound microvascular imaging was demonstrated by applying ultrafast Doppler tomography (UFD-T) to the imaging of brain hemodynamics in rodents. In vivo real-time imaging of the rat brain was performed using ultrasonic plane wave transmissions at very high frame rates (18,000 frames per second). Such ultrafast frame rates allow for highly sensitive and wide-field-of-view 2D Doppler imaging of blood vessels far beyond conventional ultrasonography. Voxel anisotropy (100 μm × 100 μm × 500 μm) was corrected for by using a tomographic approach, which consisted of ultrafast acquisitions repeated for different imaging plane orientations over multiple cardiac cycles. UFD-T allows for 4D dynamic microvascular imaging of deep-seated vasculature (up to 20 mm) with a very high 4D resolution (respectively 100 μm × 100 μm × 100 μm and 10 ms) and high sensitivity to flow in small vessels (>1 mm/s) for a whole-brain imaging technique without requiring any contrast agent. 4D ultrasound microvascular imaging in vivo could become a valuable tool for the study of brain hemodynamics, such as cerebral flow autoregulation or vascular remodeling after ischemic stroke recovery, and, more generally, tumor vasculature response to therapeutic treatment. Copyright © 2015 Elsevier Inc. All rights reserved.
Jun, Jungmi
2016-07-01
This study examines how the Korean medical tourism industry frames its service, benefit, and credibility issues through texts and images of online brochures. The results of content analysis suggest that the Korean medical tourism industry attempts to frame their medical/health services as "excellence in surgeries and cancer care" and "advanced health technology and facilities." However, the use of cost-saving appeals was limited, which can be seen as a strategy to avoid consumers' association of lower cost with lower quality services, and to stress safety and credibility.
Advances in indirect detector systems for ultra high-speed hard X-ray imaging with synchrotron light
NASA Astrophysics Data System (ADS)
Olbinado, M. P.; Grenzer, J.; Pradel, P.; De Resseguier, T.; Vagovic, P.; Zdora, M.-C.; Guzenko, V. A.; David, C.; Rack, A.
2018-04-01
We report on indirect X-ray detector systems for various full-field, ultra high-speed X-ray imaging methodologies, such as X-ray phase-contrast radiography, diffraction topography, grating interferometry and speckle-based imaging performed at the hard X-ray imaging beamline ID19 of the European Synchrotron—ESRF. Our work highlights the versatility of indirect X-ray detectors for multiple goals, such as single synchrotron pulse isolation, multiple-frame recording at up to millions of frames per second, high efficiency, and high spatial resolution. Besides the technical advancements, potential applications are briefly introduced and discussed.
NASA Astrophysics Data System (ADS)
Raphael, David T.; McIntee, Diane; Tsuruda, Jay S.; Colletti, Patrick; Tatevossian, Raymond; Frazier, James
2006-03-01
We explored multiple image processing approaches by which to display the segmented adult brachial plexus in a three-dimensional manner. Magnetic resonance neurography (MRN) 1.5-Tesla scans with STIR sequences, which preferentially highlight nerves, were performed in adult volunteers to generate high-resolution raw images. Using multiple software programs, the raw MRN images were then manipulated so as to achieve segmentation of plexus neurovascular structures, which were incorporated into three different visualization schemes: rotating upper thoracic girdle skeletal frames, dynamic fly-throughs parallel to the clavicle, and thin slab volume-rendered composite projections.
Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.
Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar
2017-11-03
Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre-/post-surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) measure as a similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is effected using the versatile Particle Swarm Optimization method, which is easy to implement and requires few tuning parameters. The developed approach has been tested and verified successfully on a number of medical image datasets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
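A minimal sketch of the modified similarity metric, assuming a plain gradient-magnitude image as a stand-in for the paper's GVF field, an illustrative weight `w`, and an assumed histogram bin count (PSO maximization is omitted; only the metric is shown):

```python
import numpy as np

def mutual_info(a, b, bins=32):
    """Mutual information estimated from the joint histogram of two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def grad_mag(img):
    """Gradient magnitude (spatial stand-in for the GVF intensity)."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def combined_metric(fixed, moving, w=0.7):
    """Weighted blend of intensity MI and gradient-based MI."""
    return w * mutual_info(fixed, moving) + \
        (1 - w) * mutual_info(grad_mag(fixed), grad_mag(moving))

# Synthetic check: an aligned noisy copy should score above a shifted one.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 200.0)
rng = np.random.default_rng(3)
aligned = fixed + 0.01 * rng.normal(size=fixed.shape)
shifted = np.roll(fixed, 16, axis=1) + 0.01 * rng.normal(size=fixed.shape)
```

An optimizer such as PSO would search the transform parameters that maximize `combined_metric`; the blend makes the objective sensitive to spatial structure as well as to intensity statistics.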
Ta, Casey N; Eghtedari, Mohammad; Mattrey, Robert F; Kono, Yuko; Kummel, Andrew C
2014-11-01
Contrast-enhanced ultrasound (CEUS) cines of focal liver lesions (FLLs) can be quantitatively analyzed to measure tumor perfusion on a pixel-by-pixel basis for diagnostic indication. However, CEUS cines acquired freehand and during free breathing cause nonuniform in-plane and out-of-plane motion from frame to frame. These motions create fluctuations in the time-intensity curves (TICs), reducing the accuracy of quantitative measurements. Out-of-plane motion cannot be corrected by image registration in 2-dimensional CEUS and degrades the quality of in-plane motion correction (IPMC). A 2-tier IPMC strategy and adaptive out-of-plane motion filter (OPMF) are proposed to provide a stable correction of nonuniform motion to reduce the impact of motion on quantitative analyses. A total of 22 cines of FLLs were imaged with dual B-mode and contrast specific imaging to acquire a 3-minute TIC. B-mode images were analyzed for motion, and the motion correction was applied to both B-mode and contrast images. For IPMC, the main reference frame was automatically selected for each cine, and subreference frames were selected in each respiratory cycle and sequentially registered toward the main reference frame. All other frames were sequentially registered toward the local subreference frame. Four OPMFs were developed and tested: subsample normalized correlation (NC), subsample sum of absolute differences, mean frame NC, and histogram. The frames that were most dissimilar to the OPMF reference frame using 1 of the 4 above criteria in each respiratory cycle were adaptively removed by thresholding against the low-pass filter of the similarity curve. Out-of-plane motion filter was quantitatively evaluated by an out-of-plane motion metric (OPMM) that measured normalized variance in the high-pass filtered TIC within the tumor region-of-interest with low OPMM being the goal. 
Results for IPMC and OPMF were qualitatively evaluated by 2 blinded observers who ranked the motion in the cines before and after various combinations of motion correction steps. Quantitative measurements showed that 2-tier IPMC and OPMF improved imaging stability. With IPMC, the NC B-mode metric increased from 0.504 ± 0.149 to 0.585 ± 0.145 over all cines (P < 0.001). Two-tier IPMC also produced better fits on the contrast-specific TIC than industry standard IPMC techniques did (P < 0.02). In-plane motion correction and OPMF were shown to improve goodness of fit for pixel-by-pixel analysis (P < 0.001). Out-of-plane motion filter reduced variance in the contrast-specific signal as shown by a median decrease of 49.8% in the OPMM. Two-tier IPMC and OPMF were also shown to qualitatively reduce motion. Observers consistently ranked cines with IPMC higher than the same cine before IPMC (P < 0.001) as well as ranked cines with OPMF higher than when they were uncorrected. The 2-tier sequential IPMC and adaptive OPMF significantly reduced motion in 3-minute CEUS cines of FLLs, thereby overcoming the challenges of drift and irregular breathing motion in long cines. The 2-tier IPMC strategy provided stable motion correction tolerant of out-of-plane motion throughout the cine by sequentially registering subreference frames that bypassed the motion cycles, thereby overcoming the lack of a nearly stationary reference point in long cines. Out-of-plane motion filter reduced apparent motion by adaptively removing frames imaged off-plane from the automatically selected OPMF reference frame, thereby tolerating nonuniform breathing motion. Selection of the best OPMF by minimizing OPMM effectively reduced motion under a wide variety of motion patterns applicable to clinical CEUS. These semiautomated processes only required user input for region-of-interest selection and can improve the accuracy of quantitative perfusion measurements.
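The out-of-plane motion metric can be sketched as the normalized variance of a high-pass-filtered time-intensity curve (the moving-average window length, sampling rate, and synthetic wash-in curve are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

def opmm(tic, win=41):
    """Out-of-plane motion metric: variance of the high-pass-filtered
    time-intensity curve, normalized by its squared mean. The window
    (~one breathing period here) defines the low-pass baseline."""
    lowpass = np.convolve(tic, np.ones(win) / win, mode="valid")
    resid = tic[win // 2: win // 2 + lowpass.size] - lowpass  # high-pass part
    return float(np.var(resid) / np.mean(tic) ** 2)

t = np.linspace(0, 30, 300)                  # 10 Hz sampling, illustrative
steady = 100 * (1 - np.exp(-t / 5))          # idealized contrast wash-in TIC
breathing = 15 * np.sin(2 * np.pi * t / 4)   # ~4 s out-of-plane fluctuation
m_clean, m_noisy = opmm(steady), opmm(steady + breathing)
```

A TIC contaminated by periodic out-of-plane motion scores far higher than the smooth wash-in curve, so minimizing this metric over candidate filters selects the one that best removes off-plane frames.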
Smart Camera Technology Increases Quality
NASA Technical Reports Server (NTRS)
2004-01-01
When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full, after which subsequent information is lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.
Design and construction of a high frame rate imaging system
NASA Astrophysics Data System (ADS)
Wang, Jing; Waugaman, John L.; Liu, Anjun; Lu, Jian-Yu
2002-05-01
A new high frame rate imaging method has been developed recently [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44, 839-856 (1997)]. This method may have clinical applications in imaging fast-moving objects such as the human heart, as well as in velocity vector imaging and low-speckle imaging. To implement the method, an imaging system has been designed. The system consists of one main printed circuit board (PCB) and 16 channel boards (each channel board contains 8 channels), in addition to a set-top box for connections to a personal computer (PC), a front panel board for user control and message display, and a power control and distribution board. The main board contains a field programmable gate array (FPGA) and controls all channels (each channel also has its own FPGA). We will report the analog and digital circuit design and simulations, multilayer PCB design with commercial software (Protel 99), PCB signal integrity testing and system RFI/EMI shielding, and the assembly and construction of the entire system. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
Logic design and implementation of FPGA for a high frame rate ultrasound imaging system
NASA Astrophysics Data System (ADS)
Liu, Anjun; Wang, Jing; Lu, Jian-Yu
2002-05-01
Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-gain control (TGC), Doppler imaging, harmonic imaging, as well as coded transmissions] is designed. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of SDRAMs to reduce peak power, transmission and imaging modality selections, ECG data acquisition and synchronization, a 160 MHz delay locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses
Kim, Hyun Seok; Park, Kwang Suk
2017-01-01
Most retinal prostheses use a head-fixed camera and a video processing unit. Some studies have proposed various image processing methods to improve visual perception for patients. However, previous studies focused only on using spatial information. The present study proposes a spatiotemporal pixelization method mimicking fixational eye movements to generate stimulation images for artificial retina arrays by combining spatial and temporal information. Input images were sampled at a resolution four times higher than the number of pixel arrays. We subsampled this image and generated four different phosphene images. We then evaluated the recognition scores of characters by sequentially presenting phosphene images with varying pixel array sizes (6 × 6, 8 × 8 and 10 × 10) and stimulus frame rates (10 Hz, 15 Hz, 20 Hz, 30 Hz, and 60 Hz). The proposed method showed the highest recognition score at a stimulus frame rate of approximately 20 Hz. The method also significantly improved the recognition score for complex characters. This method provides a new way to increase practical resolution beyond a restricted spatial resolution by distributing the higher-resolution image across high-rate time slots. PMID:29073735
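The subsampling step above, where an image sampled at four times the array resolution is split into four phase-shifted phosphene frames, can be sketched as follows. This is a minimal illustration assuming a 2x oversampling factor per axis; the function name is hypothetical.

```python
import numpy as np

def spatiotemporal_frames(img):
    """Split an image sampled at 2x the array resolution (per axis) into
    four phase-shifted subsampled frames, one per fixational-shift
    position, to be presented sequentially."""
    return [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
```

Presenting the four frames in sequence at a sufficient rate conveys the full-resolution content through the lower-resolution array.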
Tiny videos: a large data set for nonparametric video retrieval and frame classification.
Karpenko, Alexandre; Aarabi, Parham
2011-03-01
In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation-an exemplar-based clustering algorithm-achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
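The exemplar-based frame sampling described above can be sketched with scikit-learn's affinity propagation clustering, which returns exemplar indices directly. The per-frame feature vectors here are generic stand-ins, not the tiny-video descriptors used in the paper.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def select_keyframes(features):
    """Pick exemplar frames via affinity propagation clustering.
    `features` is an (n_frames, n_dims) array of per-frame descriptors;
    the returned exemplar indices are the sampled key frames."""
    ap = AffinityPropagation(random_state=0).fit(features)
    return ap.cluster_centers_indices_, ap.labels_
```

Because affinity propagation selects actual data points as exemplars, each cluster's representative is a real frame rather than a synthetic average.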
Handheld probe for a portable high frame rate photoacoustic/ultrasound imaging system
NASA Astrophysics Data System (ADS)
Daoudi, K.; van den Berg, P. J.; Rabot, O.; Kohl, A.; Tisserand, S.; Brands, P.; Steenbergen, W.
2013-03-01
Photoacoustics is a hybrid imaging modality based on the detection of acoustic waves generated by the absorption of pulsed light by tissue chromophores. In current research, this technique uses large and costly photoacoustic systems with low frame-rate imaging. To open the door for widespread clinical use, a compact, cost-effective and fast system is required. In this paper we report on the development of a small, compact handheld pulsed-laser probe to be connected to a portable ultrasound system for real-time photoacoustic and ultrasound imaging. The probe integrates diode lasers driven by an electrical driver developed for very short high-power pulses. It uses specifically developed, highly efficient diode stacks with repetition rates up to 10 kHz, emitting at 800 nm wavelength. The emitted beam is collimated and shaped with a compact micro-optics beam-shaping system delivering a homogenized rectangular laser-beam intensity distribution. The laser block is integrated with an ultrasound transducer in an ergonomically designed handheld probe. This probe is a building block enabling a low-cost, high frame rate photoacoustic and ultrasound imaging system. The probe was used with a modified ultrasound scanner and was tested by imaging a tissue-mimicking phantom.
1999-06-01
Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
In Vivo Mammalian Brain Imaging Using One- and Two-Photon Fluorescence Microendoscopy
Jung, Juergen C.; Mehta, Amit D.; Aksay, Emre; Stepnoski, Raymond; Schnitzer, Mark J.
2010-01-01
One of the major limitations in the current set of techniques available to neuroscientists is a dearth of methods for imaging individual cells deep within the brains of live animals. To overcome this limitation, we developed two forms of minimally invasive fluorescence microendoscopy and tested their abilities to image cells in vivo. Both one- and two-photon fluorescence microendoscopy are based on compound gradient refractive index (GRIN) lenses that are 350–1,000 μm in diameter and provide micron-scale resolution. One-photon microendoscopy allows full-frame images to be viewed by eye or with a camera, and is well suited to fast frame-rate imaging. Two-photon microendoscopy is a laser-scanning modality that provides optical sectioning deep within tissue. Using in vivo microendoscopy we acquired video-rate movies of thalamic and CA1 hippocampal red blood cell dynamics and still-frame images of CA1 neurons and dendrites in anesthetized rats and mice. Microendoscopy will help meet the growing demand for in vivo cellular imaging created by the rapid emergence of new synthetic and genetically encoded fluorophores that can be used to label specific brain areas or cell classes. PMID:15128753
Remote driving with reduced bandwidth communication
NASA Technical Reports Server (NTRS)
Depiero, Frederick W.; Noell, Timothy E.; Gee, Timothy F.
1993-01-01
Oak Ridge National Laboratory has developed a real-time video transmission system for low bandwidth remote operations. The system supports both continuous transmission of video for remote driving and progressive transmission of still images. Inherent in the system design is a spatiotemporal limitation to the effects of channel errors. The average data rate of the system is 64,000 bits/s, a compression of approximately 1000:1 for the black-and-white National Television System Committee (NTSC) video. The image quality of the transmissions is maintained at a level that supports teleoperation of a high mobility multipurpose wheeled vehicle at speeds up to 15 mph on a moguled dirt track. Video compression is achieved by using Laplacian image pyramids and a combination of classical techniques. Certain subbands of the image pyramid are transmitted by using interframe differencing with a periodic refresh to aid in bandwidth reduction. Images are also foveated to concentrate image detail in a steerable region. The system supports dynamic video quality adjustments between frame rate, image detail, and foveation rate. A typical configuration for the system used during driving has a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of less than 1 s.
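The Laplacian image pyramid mentioned above decomposes an image into band-pass detail subbands plus a low-pass residual, and is exactly invertible. A minimal sketch follows, using simple 2x2 block averaging for the low-pass step rather than the specific filters of the ORNL system.

```python
import numpy as np

def downsample(img):
    """Low-pass and decimate via 2x2 block averaging."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbour 2x expansion."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose into band-pass (detail) subbands plus a low-pass base."""
    pyr = []
    for _ in range(levels):
        low = downsample(img)
        pyr.append(img - upsample(low))  # detail (Laplacian) band
        img = low
    pyr.append(img)                      # final low-pass residual
    return pyr

def reconstruct(pyr):
    """Invert the pyramid exactly: up-sample and add back each band."""
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = upsample(img) + band
    return img
```

In the transmission system, only selected subbands are sent each frame (with interframe differencing), which is what makes the bandwidth reduction possible.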
Building a 2.5D Digital Elevation Model from 2D Imagery
NASA Technical Reports Server (NTRS)
Padgett, Curtis W.; Ansar, Adnan I.; Brennan, Shane; Cheng, Yang; Clouse, Daniel S.; Almeida, Eduardo
2013-01-01
When projecting imagery into a georeferenced coordinate frame, one needs to have some model of the geographical region that is being projected to. This model can sometimes be a simple geometrical curve, such as an ellipse or even a plane. However, to obtain accurate projections, one needs to have a more sophisticated model that encodes the undulations in the terrain including things like mountains, valleys, and even manmade structures. The product that is often used for this purpose is a Digital Elevation Model (DEM). The technology presented here generates a high-quality DEM from a collection of 2D images taken from multiple viewpoints, plus pose data for each of the images and a camera model for the sensor. The technology assumes that the images are all of the same region of the environment. The pose data for each image is used as an initial estimate of the geometric relationship between the images, but the pose data is often noisy and not of sufficient quality to build a high-quality DEM. Therefore, the source imagery is passed through a feature-tracking algorithm and multi-plane-homography algorithm, which refine the geometric transforms between images. The images and their refined poses are then passed to a stereo algorithm, which generates dense 3D data for each image in the sequence. The 3D data from each image is then placed into a consistent coordinate frame and passed to a routine that divides the coordinate frame into a number of cells. The 3D points that fall into each cell are collected, and basic statistics are applied to determine the elevation of that cell. The result of this step is a DEM that is in an arbitrary coordinate frame. This DEM is then filtered and smoothed in order to remove small artifacts. 
The final step in the algorithm is to take the initial DEM and rotate and translate it to be in the world coordinate frame [such as UTM (Universal Transverse Mercator), MGRS (Military Grid Reference System), or geodetic] such that it can be saved in a standard DEM format and used for projection.
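The cell-binning step described above, collecting 3D points per grid cell and applying a basic statistic to estimate elevation, can be sketched as follows. This is a minimal illustration using the median per cell; the function name and cell size are hypothetical.

```python
import numpy as np

def rasterize_dem(points, cell=1.0):
    """Grid (x, y, z) points into square cells of side `cell` and take
    the median z per cell. Returns the DEM array (NaN for empty cells)
    and the (x, y) world coordinate of cell (0, 0)."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    origin = xy.min(axis=0)
    ij = xy - origin                       # non-negative cell indices
    ncols, nrows = ij.max(axis=0) + 1
    dem = np.full((nrows, ncols), np.nan)
    for i, j in np.unique(ij, axis=0):
        mask = (ij[:, 0] == i) & (ij[:, 1] == j)
        dem[j, i] = np.median(points[mask, 2])
    return dem, origin * cell
```

A robust statistic such as the median helps suppress stray 3D points from stereo outliers before the smoothing step.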
Immobilization precision of a modified GTC frame
Daartz, Juliane; Dankers, Frank; Bussière, Marc
2012-01-01
The purpose of this study was to evaluate and quantify the interfraction reproducibility and intrafraction immobilization precision of a modified GTC frame. The error of the patient alignment and imaging systems was measured using a cranial skull phantom with simulated, predetermined shifts. The kV setup images were acquired with a room‐mounted set of kV sources and panels. Calculated translations and rotations provided by the computer alignment software relying upon three implanted fiducials were compared to the known shifts, and the accuracy of the imaging and positioning systems was calculated. Orthogonal kV setup images for 45 proton SRT patients and 1002 fractions (average 22.3 fractions/patient) were analyzed for interfraction and intrafraction immobilization precision using a modified GTC frame. The modified frame employs a radiotransparent carbon cup and molded pillow to allow for more treatment angles from posterior directions for cranial lesions. Patients and the phantom were aligned with three 1.5 mm stainless steel fiducials implanted into the skull. The accuracy and variance of the patient positioning and imaging systems were measured to be 0.10±0.06 mm, with the maximum uncertainty of rotation being ±0.07°. 957 pairs of interfraction image sets and 974 intrafraction image sets were analyzed. 3D translations and rotations were recorded. The 3D vector interfraction setup reproducibility was 0.13 mm ±1.8 mm for translations, with the largest rotational uncertainty being ±1.07°. The intrafraction immobilization efficacy was 0.19 mm ±0.66 mm for translations, with the largest rotational uncertainty being ±0.50°. The modified GTC frame provides reproducible setup and effective intrafraction immobilization, while allowing for the complete range of entrance angles from the posterior direction. PACS number: 87.53.Ly, 87.55.Qr PMID:22584167
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilpatrick, W.
1982-01-01
While literal language is successfully being subjected to automatic analysis, metaphors remain intractable. Using Minsky's frame theory, the metaphoric process is viewed as a copying of stereotypic terminal clusters from the frames of the 1° and 2° terms of the metaphor. Stereotypic values from the two original frames share equal status in this new frame, while non-stereotypic values from the two will be kept separate for possible use in metaphoric extension. The a-frame analysis is illustrated by application to non-literary novel metaphors. Frames provide the quantity of information needed for interpretation. Certain frame values are marked as stereotypic. Creativity is realized by the construction of a new a-frame, and the tension is realized by the presence in a single a-frame of both shared stereotypic and discrete non-stereotypic values. 10 references.
A method for detecting small targets based on cumulative weighted value of target properties
NASA Astrophysics Data System (ADS)
Jin, Xing; Sun, Gang; Wang, Wei-hua; Liu, Fang; Chen, Zeng-ping
2015-03-01
Laser detection based on the "cat's eye effect" has become a hot research topic because of its active nature, in contrast to the passivity of acoustic and infrared detection; target detection is one of the core technologies in such a system. This paper puts forward a method for detecting small targets based on a cumulative weighted value of target properties, evaluated on given data. First, we compute frame differences between images and apply morphological image processing. Second, we segment the images, screen candidate targets, and identify locations of interest. Finally, by comparing across a number of frames, we locate the target. In an experiment on 394 real frames, the method detected small targets efficiently.
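The frame-differencing and morphological-screening steps above can be sketched as follows. This is a minimal illustration with scipy.ndimage; the thresholds and area limits are hypothetical stand-ins for the paper's property-based screening.

```python
import numpy as np
from scipy import ndimage

def detect_small_targets(prev, curr, thresh=30, min_area=2, max_area=50):
    """Frame-difference a pair of images, clean the binary change map
    with morphological opening, and return the centroids of blobs whose
    area falls in the expected small-target range."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    diff = ndimage.binary_opening(diff)          # suppress isolated noise pixels
    labels, n = ndimage.label(diff)              # connected-component labelling
    sizes = ndimage.sum(diff, labels, range(1, n + 1))
    keep = [k + 1 for k, s in enumerate(sizes) if min_area <= s <= max_area]
    return ndimage.center_of_mass(diff, labels, keep)
```

The full method additionally accumulates weighted evidence for each candidate location over multiple frames before declaring a detection.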
A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Coffey, S. K.
2012-10-01
For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm2 of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (˜200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.
Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.
2017-01-01
The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can jointly address the diffraction limit, lateral scan density, and background noise problems. In experiments, an improvement in lateral resolution by ~3 times, reaching 7.81 µm and 2.19 µm using sample-arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, was confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint and retina layers, we used the multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans due to random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high-quality B-scan images of these in vivo tissues. PMID:29188089
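The core idea of combining sub-spot-spacing-shifted low-resolution acquisitions into one higher-resolution lateral image can be sketched with a shift-and-add step. This assumes four frames with known half-pixel shifts and omits the registration and deconvolution stages of the full method.

```python
import numpy as np

def shift_and_add(frames):
    """Combine four low-res frames with known half-pixel shifts
    (0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5) into one 2x-resolution image
    by interleaving each frame onto its sub-grid of the fine lattice."""
    h, w = frames[0].shape
    hi = np.zeros((2 * h, 2 * w))
    for (dy, dx), f in zip([(0, 0), (0, 1), (1, 0), (1, 1)], frames):
        hi[dy::2, dx::2] = f
    return hi
```

In practice the shifts are estimated by volume registration and are not exact half-pixels, so the fine grid is filled by interpolation and followed by deconvolution.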
Ping Gong; Pengfei Song; Shigao Chen
2017-06-01
The development of ultrafast ultrasound imaging offers great opportunities to improve imaging technologies, such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, there are tradeoffs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Various approaches have been proposed to address this tradeoff, such as multiplane wave imaging or attempts to implement synthetic transmit aperture imaging. In this paper, we propose an ultrafast synthetic transmit aperture (USTA) imaging technique using Hadamard-encoded virtual sources with overlapping sub-apertures to enhance both image SNR and resolution without sacrificing frame rate. This method includes three steps: 1) create virtual sources using sub-apertures; 2) encode virtual sources using a Hadamard matrix; and 3) add short time intervals (a few microseconds) between transmissions of different virtual sources to allow overlapping sub-apertures. The USTA was tested experimentally with a point target, a B-mode phantom, and in vivo human kidney micro-vessel imaging. Compared with standard coherent diverging wave compounding at the same frame rate, improvements in image SNR, lateral resolution (+33%, with B-mode phantom imaging), and contrast ratio (+3.8 dB, with in vivo human kidney micro-vessel imaging) were achieved. The f-number of virtual sources, the number of virtual sources used, and the number of elements used in each sub-aperture can be flexibly adjusted to enhance resolution and SNR. This allows very flexible optimization of USTA for different applications.
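The Hadamard-encoding step can be illustrated with a simple synthetic example: each transmission event fires all virtual sources weighted by one row of a Hadamard matrix, and the individual source responses are recovered by applying the matrix again. This sketch assumes a linear medium and ignores the inter-transmission delays of the actual USTA sequence.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_encode(signals):
    """Encode per-source signals: row k of the received data is the
    mixture from one event firing all sources with +/-1 Hadamard weights.
    `signals` is (n_sources, n_samples); n_sources must be a power of 2."""
    H = hadamard(len(signals))
    return H @ signals

def hadamard_decode(received):
    """Recover individual source responses; for a Sylvester Hadamard
    matrix, H @ H = n * I, so decoding is one more multiply and a scale."""
    n = received.shape[0]
    return hadamard(n) @ received / n
```

Because every event transmits with all sources active, each decoded response carries the SNR benefit of the full set of transmissions.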
Correia, Mafalda; Provost, Jean; Chatelin, Simon; Villemain, Olivier; Tanter, Mickael; Pernot, Mathieu
2016-01-01
Transthoracic shear wave elastography of the myocardium remains very challenging due to the poor quality of transthoracic ultrafast imaging and the presence of clutter noise, jitter, phase aberration, and ultrasound reverberation. Several approaches, such as diverging-wave coherent compounding or focused harmonic imaging, have been proposed to improve the imaging quality. In this study, we introduce ultrafast harmonic coherent compounding (UHCC), in which pulse-inverted diverging-waves are emitted and coherently compounded, and show that such an approach can be used to enhance both Shear Wave Elastography (SWE) and high frame rate B-mode imaging. UHCC SWE was first tested in phantoms containing an aberrating layer and was compared against pulse-inversion harmonic imaging and against ultrafast coherent compounding (UCC) imaging at the fundamental frequency. In-vivo feasibility of the technique was then evaluated in six healthy volunteers by measuring myocardial stiffness during diastole in transthoracic imaging. We also demonstrated that improvements in imaging quality could be achieved using UHCC B-mode imaging in healthy volunteers. The quality of transthoracic images of the heart was found to improve with the number of pulse-inverted diverging waves, with a reduction of the mean imaging clutter level of up to 13.8 dB when compared against UCC at the fundamental frequency. These results demonstrate that UHCC B-mode imaging is promising for imaging deep tissues exposed to aberration sources at a high frame rate. PMID:26890730
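The pulse-inversion principle behind UHCC can be demonstrated with a toy nonlinear medium: transmitting a pulse and its inverted copy and summing the two echoes cancels the linear (fundamental) component while the even-harmonic component adds coherently. The quadratic "medium" below is purely illustrative.

```python
import numpy as np

def pulse_inversion(response, pulse):
    """Sum the echoes of a pulse and its inverted copy.
    Odd (linear) terms cancel; even-harmonic terms add coherently."""
    return response(pulse) + response(-pulse)

# Toy nonlinear medium: linear echo plus a quadratic (2nd-harmonic) term.
medium = lambda p: p + 0.1 * p**2

t = np.linspace(0.0, 1.0, 500, endpoint=False)
pulse = np.sin(2 * np.pi * 5 * t)                # 5 Hz fundamental
summed = pulse_inversion(medium, pulse)          # only the quadratic term survives
```

The spectrum of `summed` contains energy at twice the transmit frequency (and DC) but none at the fundamental, which is why pulse inversion suppresses clutter dominated by linear propagation.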
Owen, Kevin; Fuller, Michael I.; Hossack, John A.
2015-01-01
Two-dimensional arrays present significant beamforming computational challenges because of their high channel count and data rate. These challenges are even more stringent when incorporating a 2-D transducer array into a battery-powered hand-held device, placing significant demands on power efficiency. Previous work in sonar and ultrasound indicates that 2-D array beamforming can be decomposed into two separable line-array beamforming operations. This has been used in conjunction with frequency-domain phase-based focusing to achieve fast volume imaging. In this paper, we analyze the imaging and computational performance of approximate near-field separable beamforming for high-quality delay-and-sum (DAS) beamforming and for a low-cost, phase-rotation-only beamforming method known as direct-sampled in-phase quadrature (DSIQ) beamforming. We show that when high-quality time-delay interpolation is used, separable DAS focusing introduces no noticeable imaging degradation under practical conditions. Similar results for DSIQ focusing are observed. In addition, a slight modification to the DSIQ focusing method greatly increases imaging contrast, making it comparable to that of DAS, despite having a wider main lobe and higher side lobes resulting from the limitations of phase-only time-delay interpolation. Compared with non-separable 2-D imaging, up to a 20-fold increase in frame rate is possible with the separable method. When implemented on a smart-phone-oriented processor to focus data from a 60 × 60 channel array using a 40 × 40 aperture, the frame rate per C-mode volume slice increases from 16 to 255 Hz for DAS, and from 11 to 193 Hz for DSIQ. Energy usage per frame is similarly reduced from 75 to 4.8 mJ/frame for DAS, and from 107 to 6.3 mJ/frame for DSIQ. We also show that the separable method outperforms 2-D FFT-based focusing by a factor of 1.64 at these data sizes. 
These data indicate that, with the optimal design choices, separable 2-D beamforming can significantly improve frame rate and battery life for hand-held devices with 2-D arrays. PMID:22828829
Framing Vision: An Examination of Framing, Sensegiving, and Sensemaking during a Change Initiative
ERIC Educational Resources Information Center
Hamilton, William
2016-01-01
The purpose of this short article is to review the findings from an instrumental case study that examines how a college president used what this article refers to as "frame alignment processes" to mobilize internal and external support for a college initiative--one that achieved success under the current president. Specifically, I…
2009-10-01
The detector is housed in a cryostat and cooled below 77 K by a Stirling cryocooler, as represented in Figure 5 (detector cryostat and cryocooler; figure labels: cold shield, detector plane, cryocooler, cryocooler compressor, fixed frame, roll frame, pitch frame, yaw frame). The read-out frequency of the detectors is adapted to the ground speed of the plane above... SIELETERS: a Static Fourier
NASA Astrophysics Data System (ADS)
Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.
2011-12-01
The time evolution and spatial distributions of transient luminous events (TLEs) are the key parameters for identifying the relationship between TLEs and parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scales of TLEs are typically less than a few milliseconds, new imaging techniques that enable us to capture images with a high time resolution of < 1 ms are needed. By courtesy of the "Cosmic Shore" project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010, in winter Japan. The high-speed II-CMOS camera can capture images at 8,300 frames per second (fps), corresponding to a time resolution of 120 µs. The high-vision three-CCD camera can capture high-quality, true-color images of TLEs with a 1920x1080 pixel size at a frame rate of 30 fps. During the two observation flights, we succeeded in detecting a total of 28 sprite events and 3 elve events. Following this success, we conducted a combined aircraft and ground-based campaign of TLE observations on the High Plains in the summer US. We installed the same NHK high-speed and high-vision cameras in a jet aircraft. Between June 27 and July 10, 2011, we operated aircraft observations on 8 nights, captured TLE images of over a hundred events with the high-vision camera, and acquired over 40 simultaneous high-speed image sequences. 
In this presentation, we will outline the two aircraft campaigns, describe the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and show initial results of the high-speed image data analysis of TLEs from the summer US campaign.
Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV kV imaging
NASA Astrophysics Data System (ADS)
Liu, W.; Wiersma, R. D.; Mao, W.; Luxton, G.; Xing, L.
2008-12-01
To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ~0.5 mm for the normal adult breathing pattern to ~1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. 
In general, highly accurate real-time tracking of implanted markers using hybrid MV-kV imaging is achievable and the technique should be useful to improve the beam targeting accuracy of arc therapy.
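The MV-kV triangulation at the heart of this technique estimates a marker's 3D position from two calibrated rays (one per imager). A minimal sketch follows: given two ray origins and directions, solve the least-squares closest-point problem and return the midpoint of the connecting segment. The geometry here is hypothetical; the actual system derives the rays from the calibrated pixel-to-world transformation.

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Closest-point 3D position from two rays o + t*d.
    Solves the normal equations for the ray parameters minimizing the
    gap between the rays, then returns the midpoint of that gap."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For rays that truly intersect the midpoint is the intersection; for noisy, skew rays it is the standard least-squares estimate of the marker position.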
Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.
Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L
2008-12-21
To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. 
In general, highly accurate real-time tracking of implanted markers using hybrid MV-kV imaging is achievable and the technique should be useful to improve the beam targeting accuracy of arc therapy.
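The core of the MV-kV triangulation can be illustrated with a minimal sketch: once the system is calibrated, a marker detected in the MV and kV projections defines two rays in 3D world coordinates, and the marker position can be taken as the midpoint of the closest approach of the two rays. The ray geometry below (`p1`, `d1`, `p2`, `d2`) is hypothetical, not the paper's calibrated values.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest-point triangulation of two skew rays.

    p1, p2: ray origins (e.g. MV and kV source positions);
    d1, d2: direction vectors toward the detected marker projection.
    Returns the midpoint of the shortest segment joining the rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

For intersecting rays the two closest points coincide, so the midpoint is the exact intersection; for nearly parallel rays the 2x2 system becomes ill-conditioned, which is why stereoscopic setups favor large angular separation between the MV and kV views.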
Validation of an improved abnormality insertion method for medical image perception investigations
NASA Astrophysics Data System (ADS)
Madsen, Mark T.; Durst, Gregory R.; Caldwell, Robert T.; Schartz, Kevin M.; Thompson, Brad H.; Berbaum, Kevin S.
2009-02-01
The ability to insert abnormalities in clinical tomographic images makes image perception studies with medical images practical. We describe a new insertion technique and its experimental validation that uses complementary image masks to select an abnormality from a library and place it at a desired location. The method was validated using a 4-alternative forced-choice experiment. For each case, four quadrants were simultaneously displayed consisting of 5 consecutive frames of a chest CT with a pulmonary nodule. One quadrant was unaltered, while the other 3 had the nodule from the unaltered quadrant artificially inserted. 26 different sets were generated and repeated with order scrambling for a total of 52 cases. The cases were viewed by radiology staff and residents who ranked each quadrant by realistic appearance. On average, the observers were able to correctly identify the unaltered quadrant in 42% of cases, and identify the unaltered quadrant both times it appeared in 25% of cases. Consensus, defined by a majority of readers, correctly identified the unaltered quadrant in only 29% of 52 cases. For repeats, the consensus observer successfully identified the unaltered quadrant only once. We conclude that the insertion method can be used to reliably place abnormalities in perception experiments.
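The complementary-mask insertion can be sketched as a simple alpha blend: a lesion-selection mask weights the library abnormality, and its complement preserves the surrounding background. The function below is a hypothetical single-slice illustration, not the authors' implementation (which operates on consecutive CT frames).

```python
import numpy as np

def insert_abnormality(background, lesion, mask, y, x):
    """Blend a lesion patch into a background slice using complementary
    masks: `mask` selects the lesion, `1 - mask` keeps the background."""
    out = background.astype(float).copy()
    h, w = lesion.shape
    roi = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = (1.0 - mask) * roi + mask * lesion
    return out
```

A soft-edged mask (values between 0 and 1 near the lesion boundary) is what makes the inserted abnormality blend without a visible seam, which is the property the 4-alternative forced-choice experiment tests.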
Heuristic approach to image registration
NASA Astrophysics Data System (ADS)
Gertner, Izidor; Maslov, Igor V.
2000-08-01
Image registration, i.e. the correct mapping of images obtained from different sensor readings onto a common reference frame, is a critical part of multi-sensor ATR/AOR systems based on readings from different types of sensors. In order to fuse two different sensor readings of the same object, the readings have to be put into a common coordinate system. This task can be formulated as an optimization problem in the space of all possible affine transformations of an image. In this paper, a combination of heuristic methods is explored to register gray-scale images. A modification of the Genetic Algorithm is used as the first step in the global search for the optimal transformation. It covers the entire search space with (randomly or heuristically) scattered probe points and helps significantly reduce the search space to a subspace of the potentially most successful transformations. Due to its discrete character, however, the Genetic Algorithm in general cannot converge once it comes close to the optimum. Its termination point can be specified either as some predefined number of generations or as the achievement of a certain acceptable convergence level. To refine the search, the potentially optimal subspaces are then searched using the Taboo and Simulated Annealing methods, which are more delicate and efficient for local search.
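The two-stage search can be sketched for the simplest case, pure translation (the paper searches the full affine space): a scattered-probe global stage standing in for the Genetic Algorithm, followed by an annealing-style local walk. All parameter values here (`span`, `probes`, `iters`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(ref, mov, dx, dy):
    """Mean-squared difference over the overlap of ref and mov shifted by (dx, dy)."""
    h, w = ref.shape
    x0, x1 = max(0, dx), min(w, w + dx)
    y0, y1 = max(0, dy), min(h, h + dy)
    a = ref[y0:y1, x0:x1]
    b = mov[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return float(np.mean((a - b) ** 2))

def register(ref, mov, span=8, probes=40, iters=200):
    # Stage 1 (GA-like global probing): scatter candidate shifts over the
    # search space; (0, 0) is included as a baseline probe.
    cands = np.vstack([[0, 0], rng.integers(-span, span + 1, size=(probes, 2))])
    best = min(map(tuple, cands), key=lambda c: cost(ref, mov, *c))
    best_c = cost(ref, mov, *best)
    # Stage 2 (annealing-style local refinement): small random steps,
    # occasionally accepting uphill moves, keeping the best shift seen.
    cur, cur_c = best, best_c
    for i in range(iters):
        T = max(1e-6, 0.01 * (1 - i / iters))  # cooling schedule
        nxt = (int(np.clip(cur[0] + rng.integers(-1, 2), -span, span)),
               int(np.clip(cur[1] + rng.integers(-1, 2), -span, span)))
        c = cost(ref, mov, *nxt)
        if c < cur_c or rng.random() < np.exp(-(c - cur_c) / T):
            cur, cur_c = nxt, c
        if cur_c < best_c:
            best, best_c = cur, cur_c
    return best
```

The division of labor mirrors the paper's argument: the scattered probes cannot land exactly on the optimum of a fine-grained search space, while the local walk refines only a small neighborhood and so would be hopeless without the global stage.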
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
It is desirable to aid the pilot during low-altitude helicopter flight by detecting obstacles and planning obstacle-free flight paths, both to improve safety and to reduce pilot workload. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
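The recursive range refinement can be illustrated in miniature: each frame yields an instantaneous range measurement (here from stereo disparity, Z = fB/d), and a scalar Kalman update fuses it with the running estimate. This is a hypothetical scalar stand-in for the paper's full extended Kalman filter, which additionally uses camera motion to predict feature locations in future images.

```python
def stereo_range(f, B, disparity):
    """Instantaneous range from stereo disparity: Z = f * B / d,
    with focal length f (pixels) and baseline B (metres)."""
    return f * B / disparity

class RangeFilter:
    """Scalar Kalman filter that recursively refines the range of a
    static feature from noisy per-frame range measurements."""

    def __init__(self, z0, p0):
        self.z = z0   # range estimate
        self.p = p0   # estimate variance

    def update(self, z_meas, r):
        """Fuse one measurement with variance r; returns refined range."""
        k = self.p / (self.p + r)        # Kalman gain
        self.z += k * (z_meas - self.z)  # correct the estimate
        self.p *= (1.0 - k)              # shrink the uncertainty
        return self.z
```

Each update both pulls the estimate toward the measurement and shrinks the variance, which is the "range refinement through recursive range estimation" the abstract describes.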
Fluorescence imaging to study cancer burden on lymph nodes
NASA Astrophysics Data System (ADS)
D'Souza, Alisha V.; Elliott, Jonathan T.; Gunn, Jason R.; Samkoe, Kimberley S.; Tichauer, Kenneth M.; Pogue, Brian W.
2015-03-01
The morbidity and complexity involved in lymph node staging via surgical resection and biopsy call for staging techniques that are less invasive. While visible blue dyes are commonly used in locating sentinel lymph nodes, since they follow tumor-draining lymphatic vessels, they do not provide a metric to evaluate the presence of cancer. An area of active research is the use of fluorescent dyes to assess the tumor burden of sentinel and secondary lymph nodes. The goal of this work was to deploy and test an intra-nodal cancer-cell injection model to enable planar fluorescence imaging of a clinically relevant blue dye, specifically methylene blue, along with a cancer-targeting tracer, an Affibody labeled with IRDye800CW, and subsequently to segregate tumor-bearing from normal lymph nodes. This direct-injection-based tumor model was employed in athymic rats (6 normal, 4 controls, 6 cancer-bearing), in which luciferase-expressing breast cancer cells were injected into axillary lymph nodes. Tumor presence in the nodes was confirmed by bioluminescence imaging before and after fluorescence imaging. Lymphatic uptake from the injection site (intradermal on the forepaw) to the lymph node was imaged at approximately 2 frames/minute. Large variability was observed within each cohort.
Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.
Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2018-01-24
Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA GeForce GTX 1080) was approximately 300-500 times faster than generation using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.
Evaluation of a MMW active through-the-wall surveillance system
NASA Astrophysics Data System (ADS)
Currie, Nicholas C.; Stiefvater, Kenneth
2002-08-01
This paper discusses the TWS data collected with a state-of-the-art 100 GHz radar imager developed for law enforcement use by Millivision, PPC. The system collects a cube of data consisting of 16 azimuth elements by 16 elevation elements by 256 range elements. The cube represents 11 degrees by 11 degrees by 25 m of coverage. The relatively narrow field-of-view (FOV) was extended by physically moving the antenna in 11 degree segments and collecting data which is stitched together into larger images, e.g. a 3X3 stitched image represents 33 degrees by 33 degrees by 26 m of coverage. Unfortunately, this stitching process required up to 5 minutes to collect a single (3X3) stitched image. Thus, motion had to be simulated. The paper will discuss the phenomenology of the MMW radar return from various objects including walls, wall-corners, desks and other furniture, and persons simulating walking. Successive frames from a simulated move of a man and woman walking will be presented, and the actual movie shown at the presentation. Comments will be offered as to the practicality of active MMW imaging for TWS applications.
Lack of visible change around active hotspots on Io
NASA Technical Reports Server (NTRS)
1996-01-01
Detail of changes around two hotspots on Jupiter's moon Io as seen by Voyager 1 in April 1979 (left) and NASA's Galileo spacecraft on September 7th, 1996 (middle and right). The right frame was created with images from the Galileo Solid State Imaging system's near-infrared (756 nm), green, and violet filters. For better comparison, the middle frame mimics Voyager colors. The calderas at the top and at the lower right of the images correspond to the locations of hotspots detected by the Near Infrared Mapping Spectrometer aboard the Galileo spacecraft during its second orbit. There are no significant morphologic changes around these hot calderas; however, the diffuse red deposits, which are simply dark in the Voyager colors, appear to be associated with recent and/or ongoing volcanic activity. The three calderas range in size from approximately 100 kilometers to approximately 150 kilometers in diameter. The caldera in the lower right of each frame is named Malik. North is to the top of all frames.
The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of the California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
PCIE interface design for high-speed image storage system based on SSD
NASA Astrophysics Data System (ADS)
Wang, Shiming
2015-02-01
This paper proposes and implements a standardized interface for a miniaturized high-speed image storage system that combines a PowerPC with an FPGA and uses the PCIE bus as the high-speed switching channel. Attached to the PowerPC, an mSATA-interface SSD (Solid State Drive) array implements RAID 3 storage. In addition, a high-speed real-time image compression IP core (a patented design with domestically leading compression rate and image quality) can be embedded in the FPGA, enabling the system to record a higher image data rate or achieve a longer recording time. The mSATA SSD uses a notebook-memory-card buckle-type mounting, making it possible to complete a replacement in 5 seconds with a single hand, thus increasing the total length of repeated recordings. MSI (Message Signaled Interrupts) handling guarantees the stability and reliability of continuous DMA transmission. Furthermore, remote display, control, and upload-to-backup functions can be realized over a gigabit network alone. At a selectable 25 frames/s or 30 frames/s, upload speeds can exceed 84 MB/s. Compared with existing FLASH-array high-speed memory systems, it has a higher degree of modularity, better stability, and higher efficiency in development, maintenance, and upgrading. Its data access rate is up to 300 MB/s, realizing miniaturization, standardization, and modularization of the high-speed image storage system, making it well suited to image acquisition, storage, and real-time transmission to a server on mobile equipment.
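The quoted rates can be sanity-checked with a one-line bandwidth budget: the sustained write rate is frame size times frame rate. The resolution and pixel depth below are assumptions for illustration; the paper does not state them.

```python
def required_rate_mb_s(width, height, bytes_per_px, fps):
    """Sustained write bandwidth (MB/s) needed to record raw frames."""
    return width * height * bytes_per_px * fps / 1e6

# A hypothetical 1920x1080 sensor at 2 bytes/pixel and 30 frames/s needs
# roughly 124 MB/s sustained, comfortably within the 300 MB/s access rate.
```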
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Huang, Howard; Lei, Chen; Kim, Younsu; Boctor, Emad M.
2017-03-01
Photoacoustic (PA) imaging has shown its potential for many clinical applications, but current research and usage of PA imaging are constrained by additional hardware costs to collect channel data, as the PA signals are incorrectly processed in existing clinical ultrasound systems. This problem arises from the fact that ultrasound systems beamform the PA signals as echoes from the ultrasound transducer instead of directly from illuminated sources. Consequently, conventional implementations of PA imaging rely on parallel channel acquisition from research platforms, which are not only slow and expensive, but are also mostly not approved by the FDA for clinical use. In previous studies, we have proposed the synthetic-aperture based photoacoustic re-beamformer (SPARE), which uses ultrasound beamformed radio frequency (RF) data as the input, readily available in clinical ultrasound scanners. The goal of this work is to implement the SPARE beamformer in a clinical ultrasound system, and to experimentally demonstrate its real-time visualization. Assuming a high pulse repetition frequency (PRF) laser is used, a PZT-based pseudo PA source transmission was synchronized with the ultrasound line trigger. As a result, the frame rate increases when limiting the image field-of-view (FOV), with 50 to 20 frames per second achieved for FOVs from 35 mm to 70 mm depth, respectively. Although in reality the maximum PRF of laser firing limits the PA image frame rate, this result indicates that the developed software is capable of displaying PA images at the maximum possible frame rate for a given laser system without acquiring channel data.
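The reported frame-rate/FOV trade-off follows from the line-sequential acquisition budget: each scan line must wait one round-trip time-of-flight to the imaging depth, so halving the depth roughly doubles the achievable frame rate. The line count and speed of sound below are assumptions for illustration, not the paper's system parameters.

```python
C = 1540.0  # assumed speed of sound in soft tissue, m/s

def max_frame_rate(depth_m, n_lines):
    """Upper bound on line-sequential frame rate: each of the n_lines
    scan lines waits one round-trip time-of-flight to depth_m."""
    t_line = 2.0 * depth_m / C          # round-trip time per line, s
    return 1.0 / (n_lines * t_line)     # frames per second
```

For an assumed 128-line image this bound is about 86 fps at 70 mm depth; the lower rates reported in the paper reflect additional per-frame processing and the laser PRF, which this acoustic bound ignores.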
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepard, A; Bednarz, B
Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection, and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to determine a reference frame most similar to the current frame from a series of training set images that were acquired during a pretreatment scan. Segmented features in the reference frame provide the basis for the localized block matching to determine the feature locations in the current frame. The boundary points of the reference frame segmentation are used as the initial locations for the block matching and NCC is used to find the most similar block in the current frame. The best matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Due to the lack of a training set associated with the image sequences, the first 200 frames of the image sets were considered a valid training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option.
This work is partially funded by NIH grant R01CA190298.
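The localized block-matching stage can be sketched with normalized cross-correlation over a small search window; the window size `search`, the patch size, and the seeded test data are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(frame, template, y0, x0, search=5):
    """Find the block in `frame` most similar to `template`, searching
    within +/- `search` pixels of the previous location (y0, x0)."""
    h, w = template.shape
    best, best_pos = -2.0, (y0, x0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue  # block would fall outside the frame
            s = ncc(frame[y:y + h, x:x + w], template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Running this for every boundary point of the reference-frame segmentation, seeded at that point's previous location, yields the updated feature boundary described in the Methods.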
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2014-01-01
Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel and effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP is better at recovering the true sparsity pattern than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation on handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632
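For reference, plain OMP, the baseline the paper improves upon (OOMP adds an optimized atom-selection step not shown here), can be sketched as greedy atom selection with a least-squares re-fit on the growing support:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k atoms of
    dictionary D (columns, unit norm) to approximate signal y, returning
    the sparse coefficient vector."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all coefficients on the selected support (least squares)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

The least-squares re-fit after every selection is what makes the pursuit "orthogonal": the residual is always orthogonal to the span of the chosen atoms, so no atom is selected twice for exact-arithmetic inputs.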