Sample records for array imaging algorithms

  1. Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging

    PubMed Central

    Guo, Qijia; Wang, Jie; Chang, Tianying

    2017-01-01

    The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high-precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The sparser the array dimensions, the more efficient the new algorithm becomes. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant across the key characteristics of the approaches, especially anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083

  2. Three-dimensional near-field MIMO array imaging using range migration techniques.

    PubMed

    Zhuge, Xiaodong; Yarovoy, Alexander G

    2012-06-01

    This paper presents a 3-D near-field imaging algorithm that is formulated for a 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm fully compensates for the curvature of the wavefront in the near field through a specifically defined interpolation process and provides extremely high computational efficiency through application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and with measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.
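
    To make the frequency-wavenumber reconstruction above concrete, here is a minimal monostatic 2-D range migration (Stolt interpolation) sketch in Python/NumPy. It is a simplification, not the paper's MIMO formulation (which involves separate transmit and receive wavenumber spectra); all names and parameters are illustrative.

      import numpy as np

      def range_migration_2d(data, dx, freqs, c=3e8):
          """Monostatic range migration. data[x, f]: samples along a linear
          aperture; freqs assumed increasing."""
          nx, nf = data.shape
          k = 2 * np.pi * freqs / c                  # temporal wavenumbers
          kx = 2 * np.pi * np.fft.fftfreq(nx, dx)    # spatial frequencies along aperture
          D = np.fft.fft(data, axis=0)               # into the (kx, k) domain
          kz_out = np.linspace(1e-6, 2 * k[-1], nf)  # uniform output kz grid
          S = np.zeros((nx, nf), dtype=complex)
          for i in range(nx):
              # Stolt mapping kz = sqrt((2k)^2 - kx^2) compensates wavefront curvature
              kz = np.sqrt(np.maximum((2 * k) ** 2 - kx[i] ** 2, 0.0))
              ok = kz > 0
              if ok.sum() < 2:
                  continue                           # evanescent region for this kx
              S[i] = (np.interp(kz_out, kz[ok], D[i, ok].real)
                      + 1j * np.interp(kz_out, kz[ok], D[i, ok].imag))
          return np.fft.ifft2(S)                     # complex image over (x, z)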

  3. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking

    PubMed Central

    Kiku, Daisuke; Okutomi, Masatoshi

    2017-01-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407

  4. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    PubMed

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
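
    The two records above describe the same method, so one sketch serves both. Below is a minimal, hedged illustration of the residual-interpolation idea for estimating the red plane of a Bayer mosaic: form residuals against a tentative estimate at the sampled pixels, smooth them, and add them back. For brevity, the already-demosaicked green plane stands in for the guided-filter tentative estimate used by RI proper, and ARI's adaptive combination and per-pixel iteration selection are omitted; names are illustrative.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def fill_sparse(values, mask, size=5):
          """Normalized box filtering: spreads sparse samples over the full grid."""
          num = uniform_filter(values * mask, size=size)
          den = uniform_filter(mask.astype(float), size=size)
          return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

      def residual_interpolate_red(cfa, green_full, red_mask):
          """cfa: raw mosaic (float); green_full: demosaicked green;
          red_mask: 1.0 at red-sampled sites, 0.0 elsewhere."""
          residual = (cfa - green_full) * red_mask    # R minus tentative estimate, at R sites
          residual_full = fill_sparse(residual, red_mask)
          return green_full + residual_full           # tentative estimate plus residual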

  5. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over a limited-bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.

  6. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over a limited-bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
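
    The two patent records above describe the same pipeline (decimate, compress, transmit, decompress, interpolate, sharpen), so one sketch follows. It uses Pillow with JPEG as the predefined codec and an unsharp mask standing in for the patent's edge-sharpening stage; the reduction factor and quality setting are illustrative, and the input is assumed to be an RGB image.

      import io
      from PIL import Image, ImageFilter

      def compress_decompress(img, factor=2, quality=75):
          """img: RGB PIL Image. Returns reconstructed image and transmitted bytes."""
          w, h = img.size
          small = img.resize((w // factor, h // factor), Image.LANCZOS)  # 2-D decimation
          buf = io.BytesIO()
          small.save(buf, format="JPEG", quality=quality)    # predefined compression
          nbytes = buf.tell()                                # bytes actually "transmitted"
          buf.seek(0)
          recovered = Image.open(buf)                        # inverse of the codec
          upsized = recovered.resize((w, h), Image.BICUBIC)  # interpolate to original size
          sharp = upsized.filter(ImageFilter.UnsharpMask(radius=2, percent=120))
          return sharp, nbytes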

  7. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern

    PubMed Central

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-01-01

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then converted into the final color image by conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the few RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher than those of conventional CFAs especially in low-light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method. PMID:28657602

  8. Colorization-Based RGB-White Color Interpolation using Color Filter Array with Randomly Sampled Pattern.

    PubMed

    Oh, Paul; Lee, Sukho; Kang, Moon Gi

    2017-06-28

    Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than the other color pixels in the filter array, especially in low-light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then converted into the final color image by conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the few RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, higher than those of conventional CFAs especially in low-light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method.
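
    The two records above are versions of the same paper; the sketch below illustrates the colorization step they rely on, not the authors' exact formulation. Sparse color-minus-white residuals are propagated across the dense W plane by W-guided neighbor averaging, a Jacobi-style approximation of colorization solvers such as Levin et al.'s; the iteration count and similarity width are illustrative.

      import numpy as np

      def colorize_channel(w, seed_vals, seed_mask, iters=200, sigma=0.05):
          """w: dense white plane in [0, 1]; seed_vals: color-minus-W residuals
          at seed pixels (zero elsewhere); seed_mask: True at RGB seed sites."""
          c = seed_vals.copy()
          for _ in range(iters):
              acc = np.zeros_like(c)
              wsum = np.zeros_like(c)
              for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  wn = np.roll(w, shift, axis=(0, 1))
                  cn = np.roll(c, shift, axis=(0, 1))
                  wt = np.exp(-(w - wn) ** 2 / (2 * sigma ** 2))  # W-similarity weight
                  acc += wt * cn
                  wsum += wt
              # keep the RGB seeds fixed; relax everything else toward the guided average
              c = np.where(seed_mask, seed_vals, acc / np.maximum(wsum, 1e-12))
          return w + c   # reconstructed channel = white plane + propagated residual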

  9. The wavenumber algorithm for full-matrix imaging using an ultrasonic array.

    PubMed

    Hunter, Alan J; Drinkwater, Bruce W; Wilcox, Paul D

    2008-11-01

    Ultrasonic imaging using full-matrix capture, e.g., via the total focusing method (TFM), has been shown to increase angular inspection coverage and improve sensitivity to small defects in nondestructive evaluation. In this paper, we develop a Fourier-domain approach to full-matrix imaging based on the wavenumber algorithm used in synthetic aperture radar and sonar. The extension to the wavenumber algorithm for full-matrix data is described and the performance of the new algorithm compared with the TFM, which we use as a representative benchmark for the time-domain algorithms. The wavenumber algorithm provides a mathematically rigorous solution to the inverse problem for the assumed forward wave propagation model, whereas the TFM employs heuristic delay-and-sum beamforming. Consequently, the wavenumber algorithm has an improved point-spread function and provides better imagery. However, the major advantage of the wavenumber algorithm is its superior computational performance. For large arrays and images, the wavenumber algorithm is several orders of magnitude faster than the TFM. On the other hand, the key advantage of the TFM is its flexibility. The wavenumber algorithm requires a regularly sampled linear array, while the TFM can handle arbitrary imaging geometries. The TFM and the wavenumber algorithm are compared using simulated and experimental data.
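
    As a point of reference for the comparison above, here is a minimal TFM (delay-and-sum on full-matrix capture) sketch; the wavenumber algorithm itself requires the paper's Fourier-domain machinery and is not reproduced here. Array geometry, sampling parameters, and names are illustrative.

      import numpy as np

      def tfm(fmc, fs, t0, elem_x, xs, zs, c=5900.0):
          """fmc[tx, rx, t]: full-matrix A-scans sampled at fs starting at t0;
          elem_x: element x-positions; image evaluated on the xs-zs grid."""
          X, Z = np.meshgrid(xs, zs)
          dists = [np.hypot(X - ex, Z) for ex in elem_x]   # element-to-pixel distances
          img = np.zeros_like(X, dtype=float)
          n = len(elem_x)
          for tx in range(n):
              for rx in range(n):
                  tof = (dists[tx] + dists[rx]) / c        # round-trip time of flight
                  idx = np.clip(np.rint((tof - t0) * fs).astype(int),
                                0, fmc.shape[2] - 1)
                  img = img + fmc[tx, rx, idx]             # delay and sum
          return np.abs(img)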

  10. A Flexible Annular-Array Imaging Platform for Micro-Ultrasound

    PubMed Central

    Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei

    2013-01-01

    Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element imaging systems and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at a higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image processing algorithms were implemented with novel field-programmable gate array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform utilizes a printed circuit board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests, including hardware, algorithm, wire phantom, and tissue-mimicking phantom measurements, were conducted to demonstrate the good performance of the platform. The calculated contrast-to-noise ratio (CNR) of the tissue phantom measurements was higher than 1.2 over imaging depths of 3.8 to 8.7 mm. The platform supported more than 25 images per second for real-time image acquisition. The depth of field showed about a 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923

  11. Delamination Detection Using Guided Wave Phased Arrays

    NASA Technical Reports Server (NTRS)

    Tian, Zhenhua; Yu, Lingyu; Leckey, Cara

    2016-01-01

    This paper presents a method for detecting multiple delaminations in composite laminates using non-contact phased arrays. The phased arrays are implemented with a non-contact scanning laser Doppler vibrometer (SLDV). The array imaging algorithm is performed in the frequency domain where both the guided wave dispersion effect and direction dependent wave properties are considered. By using the non-contact SLDV array with a frequency domain imaging algorithm, an intensity image of the composite plate can be generated for delamination detection. For the proof of concept, a laboratory test is performed using a non-contact phased array to detect two delaminations (created through quasi-static impact test) at different locations in a composite plate. Using the non-contact phased array and frequency domain imaging, the two impact-induced delaminations are successfully detected. This study shows that the non-contact phased array method is a potentially effective method for rapid delamination inspection in large composite structures.

  12. Solution for the nonuniformity correction of infrared focal plane arrays.

    PubMed

    Zhou, Huixin; Liu, Shangqian; Lai, Rui; Wang, Dabao; Cheng, Yubao

    2005-05-20

    Based on the S-curve model of the detector response of infrared focal plane arrays (IRFPAs), an improved two-point correction algorithm is presented. The algorithm first transforms the nonlinear image data into linear data and then uses the normal two-point algorithm to correct the linear data. The algorithm effectively overcomes the influence of the nonlinearity of the detector's response, improving the correction precision and enlarging the dynamic range of the response. A real-time imaging-signal-processing system for IRFPAs, based on a digital signal processor and field-programmable gate arrays, is also presented. The nonuniformity correction capability of the presented solution is validated by experimental imaging with a 128 × 128 pixel IRFPA camera prototype.
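
    A minimal sketch of the corrected scheme described above: first linearize the S-curve response, then apply the standard two-point gain/offset correction derived from two uniform (blackbody) reference exposures. The linearization function is left as a caller-supplied inverse of the fitted response curve; all names are illustrative.

      import numpy as np

      def two_point_nuc(raw, low_ref, high_ref, linearize=None):
          """low_ref/high_ref: per-pixel mean responses to two uniform scenes."""
          if linearize is not None:   # inverse of the fitted S-curve, per the paper's idea
              raw, low_ref, high_ref = (linearize(a) for a in (raw, low_ref, high_ref))
          span = np.maximum(high_ref - low_ref, 1e-9)
          gain = (high_ref.mean() - low_ref.mean()) / span   # per-pixel gain correction
          offset = low_ref.mean() - gain * low_ref           # per-pixel offset correction
          return gain * raw + offset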

  13. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  14. Switched Antenna Array Tile for Real-Time Microwave Imaging Aperture

    DTIC Science & Technology

    2016-06-26

    Switched Antenna Array Tile for Real-Time Microwave Imaging Aperture. William F. Moulder, Janusz J. Majewski, Charles M. Coldwell, James D. Krieger... Fast Imaging Algorithm. Fig. 1: diagram of real-time imaging array, with fabricated antenna tile (10 mm / 250 mm)... IV. CONCLUSIONS: A switched array tile to be used in a real-time imaging aperture has been presented. Design and realization of the tile were...

  15. Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays

    PubMed Central

    Li, David Day-Uei; Ameer-Beg, Simon; Arlt, Jochen; Tyndall, David; Walker, Richard; Matthews, Daniel R.; Visitkul, Viput; Richardson, Justin; Henderson, Robert K.

    2012-01-01

    We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performances are demonstrated on the data obtained from 0.13 μm CMOS SPAD arrays and the multiple-decay data obtained from scanning PMT systems. In vivo two photon fluorescence lifetime imaging data of FITC-albumin labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how different algorithms perform on bi-decay data. The proposed techniques are capable of producing lifetime images with enough contrast. PMID:22778606

  16. Fast photoacoustic imaging system based on 320-element linear transducer array.

    PubMed

    Yin, Bangzheng; Xing, Da; Wang, Yi; Zeng, Yaguang; Tan, Yi; Chen, Qun

    2004-04-07

    A fast photoacoustic (PA) imaging system, based on a 320-transducer linear array, was developed and tested on a tissue phantom. To reconstruct a test tomographic image, 64 time-domain PA signals were acquired from a tissue phantom with embedded light-absorption targets. Signal acquisition was accomplished by utilizing 11 phase-controlled sub-arrays, each consisting of four transducers. The results show that the system can rapidly map the optical absorption of a tissue phantom and effectively detect the embedded light-absorbing targets. By utilizing the multi-element linear transducer array and a phase-controlled imaging algorithm, PA tomograms can be acquired more efficiently than with other existing technologies and algorithms. The methodology and equipment thus provide a rapid and reliable approach to PA imaging that may have potential applications in noninvasive imaging and clinical diagnosis.

  17. 3-Dimensional stereo implementation of photoacoustic imaging based on a new image reconstruction algorithm without using discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu

    2017-05-01

    In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system, and we introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key idea of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a search region by using the geometric distance between each sensor element of the acoustic detector and the corresponding search region, represented by a grid. We derive the mathematical equation for the magnitude of this existence possibility, which can be used to implement the proposed algorithm, and we derive the equations for both the one-dimensional and two-dimensional sensing array cases. k-Wave simulation data are used to compare the image quality of the proposed algorithm with that of the conventional algorithm, in which the FFT is necessarily used. The k-Wave MATLAB simulation results demonstrate the effectiveness of the proposed reconstruction algorithm.

  18. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.

  19. Implementation of total focusing method for phased array ultrasonic imaging on FPGA

    NASA Astrophysics Data System (ADS)

    Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2015-02-01

    This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) and Full Matrix Capture (FMC). The system was entirely described in the Verilog HDL language and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish an image coordinate system and divide the image into a grid; calculate the complete acoustic path from each transmitting element to each receiving element via a focal point, and transform it into an index value; index the sound pressure values from ROM and superimpose them to obtain the pixel value of one focal point; and compute the pixel values of all focal points to obtain the final image. The imaging results show that this algorithm yields defect images with high SNR, and the FPGA's parallel processing capability provides high speed, so the system offers a complete, well-performing imaging interface.

  20. Image Reconstruction in Radio Astronomy with Non-Coplanar Synthesis Arrays

    NASA Astrophysics Data System (ADS)

    Goodrick, L.

    2015-03-01

    Traditional radio astronomy imaging techniques assume that the interferometric array is coplanar, with a small field of view, and that the two-dimensional Fourier relationship between brightness and visibility remains valid, allowing the Fast Fourier Transform to be used. In practice, to acquire more accurate data, non-coplanar baseline effects need to be incorporated, as small height variations in the array plane introduce the w spatial frequency component. This component adds an additional phase shift to the incoming signals. There are two approaches to account for non-coplanar baseline effects: either the full three-dimensional brightness and visibility model can be used to reconstruct an image, or the non-coplanar effects can be removed, reducing the three-dimensional relationship to the two-dimensional one. This thesis describes and implements the w-projection and w-stacking algorithms. The aim of these algorithms is to account for the phase error introduced by non-coplanar synthesis array configurations, making the recovered visibilities more faithful to the actual brightness distribution model. This is done by reducing the 3D visibilities to a 2D visibility model. The algorithms also have the added benefit of wide-field imaging, although w-stacking supports a wider field of view at the cost of more FFT bin support. For w-projection, the w-term is accounted for in the visibility domain by convolving it out of the problem with a convolution kernel, allowing the use of the two-dimensional Fast Fourier Transform. Similarly, the w-stacking algorithm applies a phase correction to image layers in the image domain to produce an intensity model that accounts for the non-coplanar baseline effects. This project considers the KAT7 array for simulation and analysis of the limitations and advantages of both algorithms. Additionally, a variant of the Högbom CLEAN algorithm was used which employs contour trimming for flagging extended source emission. The CLEAN algorithm is an iterative two-dimensional deconvolution method that can further improve image fidelity by removing the effects of the point spread function, which can obscure source data.
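
    The w-stacking idea lends itself to a compact sketch: partition visibilities into w-layers, grid and FFT each layer, then apply the image-plane phase correction exp(2*pi*i*w*(sqrt(1 - l^2 - m^2) - 1)) before summing. The version below uses nearest-cell gridding and a handful of layers purely for illustration; all names and parameters are assumptions, and w-projection's convolution kernels are not shown.

      import numpy as np

      def w_stacking(u, v, w, vis, npix, cell, n_planes=8):
          """u, v, w in wavelengths; cell: pixel size in direction cosines (l, m)."""
          l = (np.arange(npix) - npix // 2) * cell
          L, M = np.meshgrid(l, l)
          n_minus_1 = np.sqrt(np.maximum(1 - L**2 - M**2, 0.0)) - 1
          du = 1.0 / (npix * cell)                      # uv-cell size for this image
          edges = np.linspace(w.min(), w.max() + 1e-9, n_planes + 1)
          img = np.zeros((npix, npix), dtype=complex)
          for p in range(n_planes):
              sel = (w >= edges[p]) & (w < edges[p + 1])
              if not sel.any():
                  continue
              grid = np.zeros((npix, npix), dtype=complex)
              iu = (np.rint(u[sel] / du).astype(int) + npix // 2) % npix
              iv = (np.rint(v[sel] / du).astype(int) + npix // 2) % npix
              np.add.at(grid, (iv, iu), vis[sel])       # nearest-cell gridding
              layer = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
              w_mid = 0.5 * (edges[p] + edges[p + 1])
              img += layer * np.exp(2j * np.pi * w_mid * n_minus_1)  # per-layer w correction
          return img.real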

  1. Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm

    NASA Technical Reports Server (NTRS)

    Sidick, Erikin

    2009-01-01

    A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot measurably shifts from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use scenes as well as point sources for wavefront error detection. Qualifying an extended scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique, based on the ACC algorithm, analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended scene image. If it is viable, then it can be used for error correction; if it is not, the image fails and will not be further processed. By testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, for example from m = -5 to 5 along the x-direction, by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction. If the resulting shift estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted. Otherwise, it is rejected.
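
    The self-test loop described above is easy to sketch. The stand-in shift estimator below uses integer-pixel phase correlation rather than the subpixel ACC estimator of the record (whose 0.03-pixel threshold presumes subpixel estimates), so the tolerance here is illustrative; all names are assumptions.

      import numpy as np

      def estimate_shift(ref, test):
          """Integer (dy, dx) of `test` relative to `ref` via phase correlation."""
          R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(test)
          corr = np.abs(np.fft.ifft2(R / np.maximum(np.abs(R), 1e-12)))
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          ny, nx = ref.shape
          wrap = lambda d, n: (d + n // 2) % n - n // 2   # map to signed shift
          return wrap(dy, ny), wrap(dx, nx)

      def cell_is_viable(cell, shifts=range(-5, 6), tol=0.5):
          """Apply known x-shifts to a test cell; demand the estimator recovers them."""
          for m in shifts:
              dy, dx = estimate_shift(cell, np.roll(cell, m, axis=1))
              if abs(dx - m) > tol or abs(dy) > tol:
                  return False      # scene too weak or ambiguous: reject this frame
          return True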

  2. SELF CALIBRATED STMR ARRAY FOR MATERIAL CHARACTERIZATION AND SHM OF ORTHOTROPIC PLATE-LIKE STRUCTURES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnuvardhan, J.; Muralidharan, Ajith; Balasubramaniam, Krishnan

    A full ring STMR array patch has been used for Structural Health Monitoring (SHM) of anisotropic materials, where the elastic moduli corresponding to the virgin sample were used in the calculations. In the present work, in-situ SHM has been successfully demonstrated using a novel compact sensor patch (double ring single quadrant small footprint STMR array) through simultaneous reconstruction of the elastic moduli, material symmetry, and orientation of principal planes, together with defect imaging. The directly received signals were used to measure Lamb wave velocities, which were fed into a slowness-based reconstruction algorithm using a genetic algorithm to recover the elastic moduli, material symmetry, and orientation of principal planes. The measured signals, along with the reconstructed elastic moduli, were used in the phased addition algorithm for imaging the damage present on the structure. To show the applicability of the method, simulations were carried out with the double ring single quadrant STMR array configuration to image defects, and the results are compared with images obtained using simulation data from the full ring STMR array configuration. Experimental validation was carried out using a 3.15 mm quasi-isotropic graphite-epoxy composite. The double ring single quadrant STMR array has advantages over the full ring STMR array, as it can carry out in-situ SHM with a limited footprint on the structure.

  3. Three-Dimensional Microwave Imaging for Indoor Environments

    NASA Astrophysics Data System (ADS)

    Scott, Simon

    Microwave imaging involves the use of antenna arrays, operating at microwave and millimeter-wave frequencies, for capturing images of real-world objects. Typically, one or more antennas in the array illuminate the scene with a radio-frequency (RF) signal. Part of this signal reflects back to the other antennas, which record both the amplitude and phase of the reflected signal. These reflected RF signals are then processed to form an image of the scene. This work focuses on using planar antenna arrays, operating between 17 and 26 GHz, to capture three-dimensional images of people and other objects inside a room. Such an imaging system enables applications such as indoor positioning and tracking, health monitoring and hand gesture recognition. Microwave imaging techniques based on beamforming cannot be used for indoor imaging, as most objects lie within the array near-field. Therefore, the range-migration algorithm (RMA) is used instead, as it compensates for the curvature of the reflected wavefronts, hence enabling near-field imaging. It is also based on fast Fourier transforms and is therefore computationally efficient. A number of novel RMA variants were developed to support a wider variety of antenna array configurations, as well as to generate 3-D velocity maps of objects moving around a room. The choice of antenna array configuration, microwave transceiver components and transmit power has a significant effect on both the energy consumed by the imaging system and the quality of the resulting images. A generic microwave imaging testbed was therefore built to characterize the effect of these antenna array parameters on image quality in the 20 GHz band. All variants of the RMA were compared and found to produce good-quality three-dimensional images with transmit power levels as low as 1 μW. With an array size of 80 × 80 antennas, most of the imaging algorithms were able to image objects at 0.5 m range with 12.5 mm resolution, although some were only able to achieve 20 mm resolution. Increasing the size of the antenna array further results in a proportional improvement in image resolution and image SNR, until the resolution reaches the half-wavelength limit. While microwave imaging is not a new technology, it has seen little commercial success due to the cost and power consumption of the large number of antennas and radio transceivers required to build such a system. The cost and power consumption can be reduced by using low-power and low-cost components in both the transmit and receive RF chains, even if these components have poor noise figures. Alternatively, the cost and power consumption can be reduced by decreasing the number of antennas in the array, while keeping the aperture constant. This reduction in antenna count is achieved by randomly depopulating the array, resulting in a sparse antenna array. A novel compressive sensing algorithm, coupled with the wavelet transform, is used to process the samples collected by the sparse array and form a 3-D image of the scene. This algorithm works well for antenna arrays that are up to 96% sparse, equating to a 25-fold reduction in the number of required antennas. For microwave imaging to be useful, it needs to capture images of the scene in real time. The architecture of a system capable of capturing real-time 3-D microwave images is therefore designed. The system consists of a modular antenna array, constructed by plugging RF daughtercards into a carrier board. Each daughtercard is a self-contained radio system, containing an antenna, RF transceiver, baseband signal chain, and analog-to-digital converters. A small number of daughtercards have been built, and proven to be suitable for real-time microwave imaging. By arranging these daughtercards in different ways, any antenna array pattern can be built. This architecture allows real-time microwave imaging systems to be rapidly prototyped, while still being able to generate images at video frame rates.

  4. Synthetic Aperture Imaging Polarimeter: Postprint

    DTIC Science & Technology

    2010-02-01

    The mechanical design of the SAIP prototype revolves around the concept of a modular array. The modular aspect allows for the array to be built in... imagery of source. The top row images are of the actual fringe pattern incident on the SAIP prototype array. These pictures were taken through the... processed images associated with each of the inputs. The results demonstrated that the SAIP prototype array works in conjunction with the algorithm

  5. Noise-cancellation-based nonuniformity correction algorithm for infrared focal-plane arrays.

    PubMed

    Godoy, Sebastián E; Pezoa, Jorge E; Torres, Sergio N

    2008-10-10

    The spatial fixed-pattern noise (FPN) inherently generated in infrared (IR) imaging systems severely compromises the quality of the acquired imagery, even making such images unsuitable for some applications. The FPN refers to the inability of the photodetectors in the focal-plane array to render a uniform output image when a uniform-intensity scene is being imaged. We present a noise-cancellation-based algorithm that compensates for the additive component of the FPN. The proposed method relies on the assumption that a source of noise correlated to the additive FPN is available to the IR camera. An important feature of the algorithm is that all the calculations reduce to a simple equation, which allows for bias compensation of the raw imagery. The algorithm's performance is tested using real IR image sequences and is compared to some classical methodologies.

  6. Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material

    NASA Astrophysics Data System (ADS)

    Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.

    2014-02-01

    Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest to industry and the NDE research community and is expected to become increasingly important for next-generation power plants. A test sample of coarse-grained Inconel 625, representative of future power plant components, has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A-, B-, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. However, in recent years, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvements. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms, applying it to three FMC imaging algorithms: the Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and Probability of False Alarm (PFA). The data are captured in pulse-echo mode using 64-element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular case.

  7. Sharpening spots: correcting for bleedover in cDNA array images.

    PubMed

    Therneau, Terry; Tschumper, Renee C; Jelinek, Diane

    2002-03-01

    For cDNA array methods that depend on imaging of a radiolabel, we show that bleedover of one spot onto another, due to the gap between the array and the imaging media, can be a major problem. The images can be sharpened, however, using a blind convolution method based on the EM algorithm. The sharpened images look like a set of donuts, which concurs with our knowledge of the spotting process. Oversharpened images are actually useful as well, in locating the centers of each spot.
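
    The EM-based sharpening can be illustrated with the (non-blind) Richardson-Lucy update, the classic EM algorithm for deconvolution under Poisson noise; the paper's blind variant alternately re-estimates the PSF, which is omitted here. The Gaussian bleedover PSF and iteration count are assumptions.

      import numpy as np
      from scipy.signal import fftconvolve

      def gaussian_psf(size=15, sigma=3.0):
          """Assumed bleedover kernel; a real PSF would be estimated blindly."""
          ax = np.arange(size) - size // 2
          g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
          return g / g.sum()

      def richardson_lucy(observed, psf, iters=30):
          """Multiplicative EM updates; sharpens spots blurred by bleedover."""
          est = np.full_like(observed, observed.mean(), dtype=float)
          psf_flip = psf[::-1, ::-1]
          for _ in range(iters):
              blurred = fftconvolve(est, psf, mode="same")
              ratio = observed / np.maximum(blurred, 1e-12)
              est *= fftconvolve(ratio, psf_flip, mode="same")
          return est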

  8. Using redundancy of round-trip ultrasound signal for non-continuous arrays: Application to gap and blockage compensation.

    PubMed

    Robert, Jean-Luc; Erkamp, Ramon; Korukonda, Sanghamithra; Vignon, François; Radulescu, Emil

    2015-11-01

    In ultrasound imaging, an array of elements is used to image a medium. If part of the array is blocked by an obstacle, or if the array is made from several sub-arrays separated by a gap, grating lobes appear and the image is degraded. The grating lobes are caused by missing spatial frequencies, corresponding to the blocked or non-existing elements. However, in an active imaging system, where elements are used both for transmitting and receiving, the round trip signal is redundant: different pairs of transmit and receive elements carry similar information. It is shown here that, if the gaps are smaller than the active sub-apertures, this redundancy can be used to compensate for the missing signals and recover full resolution. Three algorithms are proposed: one is based on a synthetic aperture method, a second one uses dual-apodization beamforming, and the third one is a radio frequency (RF) data based deconvolution. The algorithms are evaluated on simulated and experimental data sets. An application could be imaging through ribs with a large aperture.

  9. Automated Handling of Garments for Pressing

    DTIC Science & Technology

    1991-09-30

    Table of contents excerpt: Parallel Algorithms for 2D Kalman Filtering, D.J. Potter and M.P. Cline (p. 47); Hash Table and Sorted Array: A Case Study of... Kalman Filtering on the Connection Machine, M.A. Palis and D.K. Krecker (p. 55); Parallel Sorting of Large Arrays on the MasPar...; Algorithms for Seam Sensing (p. 24); 6.1 Karel(TM) Algorithms (p. 24); 6.1.1 Image Filtering

  10. Real-time algorithm for acoustic imaging with a microphone array.

    PubMed

    Huang, Xun

    2009-05-01

    Acoustic phased arrays have become an important testing tool in aeroacoustic research, where the conventional beamforming algorithm has been adopted as a classical processing technique. The computation, however, has to be performed off-line due to its expense. An innovative algorithm with real-time capability is proposed in this work. The algorithm resembles a classical time-domain observer, extended for array processing into the frequency domain. The observer-based algorithm is beneficial mainly for its capability of operating over sampling blocks recursively. Expensive experimental time can therefore be reduced extensively, since any defect in a test can be corrected instantaneously.
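
    The recursive, block-wise character of the method can be sketched as follows: per block, update a cross-spectral matrix (CSM) estimate with an exponential forgetting factor, then evaluate conventional frequency-domain beamforming on it. This is a generic observer-flavored recursion under assumed names and a single frequency bin, not the paper's specific observer design.

      import numpy as np

      def update_csm(csm, block, bin_idx, alpha=0.05):
          """block: (n_mics, n_samples) time data; recursive single-bin CSM update."""
          spec = np.fft.rfft(block, axis=1)[:, bin_idx][:, None]
          return (1 - alpha) * csm + alpha * (spec @ spec.conj().T)

      def beamform_map(csm, mic_xy, grid_xy, freq, c=343.0):
          """Conventional beamforming power at each steering point, one frequency."""
          powers = np.empty(len(grid_xy))
          for i, g in enumerate(grid_xy):
              d = np.linalg.norm(mic_xy - g, axis=1)                # mic-to-point distances
              w = np.exp(-2j * np.pi * freq * d / c) / len(mic_xy)  # steering vector
              powers[i] = np.real(w.conj() @ csm @ w)
          return powers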

  11. Imaging through turbulence using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2015-09-01

    Atmospheric turbulence can significantly affect imaging through paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under conditions of deep or strong turbulence, the object is hard to recognize through direct imaging. Conventional imaging methods cannot handle these problems efficiently: the time required for lucky imaging can increase significantly, and image processing approaches require much more complex and iterative de-blurring algorithms. We propose an alternative approach using a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array (MLA) to form a mini Keplerian telescope array. Therefore, the image obtained by a conventional method is separated into an array of images that contain multiple copies of the object's image and less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be performed on the video collected by the plenoptic sensor. The corresponding algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the difference can be regarded as the turbulence effect. As a result, retrieval of the object's image and extraction of the turbulence effect can be performed simultaneously.

  12. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
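
    Steps (1) and (2) of the pipeline can be sketched with PyWavelets and SciPy: a two-level DWT, a DCT on the low-frequency band, and coarse quantization. The Minimize-Matrix-Size step, arithmetic coding, and the FMS decoder are omitted; the wavelet choice and quantization steps are illustrative assumptions.

      import numpy as np
      import pywt
      from scipy.fft import dctn, idctn

      def analyze(img, q_dc=16.0, q_ac=40.0):
          """Two-level DWT, DCT on the approximation band, then quantize."""
          ll, hi2, hi1 = pywt.wavedec2(img.astype(float), "db2", level=2)
          dc_q = np.round(dctn(ll, norm="ortho") / q_dc)
          hi_q = [tuple(np.round(b / q_ac) for b in lvl) for lvl in (hi2, hi1)]
          return dc_q, hi_q

      def synthesize(dc_q, hi_q, q_dc=16.0, q_ac=40.0):
          """Dequantize and invert both transforms."""
          ll = idctn(dc_q * q_dc, norm="ortho")
          coeffs = [ll] + [tuple(b * q_ac for b in lvl) for lvl in hi_q]
          return pywt.waverec2(coeffs, "db2")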

  13. Demosaicking algorithm for the Kodak-RGBW color filter array

    NASA Astrophysics Data System (ADS)

    Rafinazari, M.; Dubois, E.

    2015-01-01

    Digital cameras capture images through different Color Filter Arrays (CFAs) and then reconstruct the full color image. Each CFA pixel captures only one primary color component; the other primary components are estimated using information from neighboring pixels. During demosaicking, the two unknown color components are estimated at each pixel location. Most demosaicking algorithms use the RGB Bayer CFA pattern with red, green and blue filters. The least-squares luma-chroma demultiplexing method is a state-of-the-art demosaicking method for the Bayer CFA. In this paper we develop a new demosaicking algorithm for the Kodak-RGBW CFA. This particular CFA reduces noise and improves the quality of the reconstructed images by adding white pixels. We have applied non-adaptive and adaptive demosaicking methods using the Kodak-RGBW CFA on the standard Kodak image dataset, and the results have been compared with previous work.

  14. Early Breast Cancer Diagnosis Using Microwave Imaging via Space-Frequency Algorithm

    NASA Astrophysics Data System (ADS)

    Vemulapalli, Spandana

    Conventional breast cancer detection methods have limitations ranging from ionizing radiation and low specificity to high cost. These limitations make way for a suitable alternative, microwave imaging, as a screening technique for the detection of breast cancer. The discernible differences between benign, malignant, and healthy breast tissues, together with the absence of ionizing radiation, make microwave imaging a feasible breast cancer detection technique. Earlier studies have shown the variation of the electrical properties of healthy and malignant tissues as a function of frequency, which motivates a high bandwidth requirement. Ultrawideband, wideband, and narrowband arrays have been designed, simulated, and optimized for high (44%), medium (33%), and low (7%) bandwidths, respectively, using the electromagnetic (EM) simulation software FEKO. These arrays are then used to illuminate the breast model (phantom), and the backscattered signals are obtained in the near field for each case. The Microwave Imaging via Space-Time (MIST) beamforming algorithm in the frequency domain is next applied to these near-field monostatic frequency-response signals for image reconstruction of the breast model. The main purpose of this investigation is to assess the impact of bandwidth and implement a novel imaging technique for use in the early detection of breast cancer. Earlier studies show the implementation of the MIST imaging algorithm on time-domain signals via a frequency-domain beamformer; here, the performance evaluation of the imaging algorithm on the frequency-response signals is carried out in the frequency domain. The energy profile of the breast in the spatial domain is created via the frequency-domain Parseval's theorem. The beamformer weights (not including the effect of the skin) are calculated using the MIST algorithm for the ultrawideband, wideband, and narrowband arrays, respectively. Quality metrics such as dynamic range and radiometric resolution are also evaluated for all three types of arrays.

  15. A methodology for evaluating detection performance of ultrasonic array imaging algorithms for coarse-grained materials.

    PubMed

    Van Pamel, Anton; Brett, Colin R; Lowe, Michael J S

    2014-12-01

    Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest and is expected to become increasingly important for next-generation electricity power plants. Conventional ultrasonic A-, B-, and C-scans have been found to suffer from strong background noise caused by grain scattering, which can severely limit the detection of defects. However, in recent years, array probes and full matrix capture (FMC) imaging algorithms have unlocked exciting possibilities for improvements. To improve and compare these algorithms, we must rely on robust methodologies to quantify their performance. This article proposes such a methodology to evaluate the detection performance of imaging algorithms. For illustration, the methodology is applied to some example data using three FMC imaging algorithms; total focusing method (TFM), phase-coherent imaging (PCI), and decomposition of the time-reversal operator with multiple scattering filter (DORT MSF). However, it is important to note that this is solely to illustrate the methodology; this article does not attempt the broader investigation of different cases that would be needed to compare the performance of these algorithms in general. The methodology considers the statistics of detection, presenting the detection performance as probability of detection (POD) and probability of false alarm (PFA). A test sample of coarse-grained nickel super alloy, manufactured to represent materials used for future power plant components and containing some simple artificial defects, is used to illustrate the method on the candidate algorithms. The data are captured in pulse-echo mode using 64-element array probes at center frequencies of 1 and 5 MHz. In this particular case, it turns out that all three algorithms are shown to perform very similarly when comparing their flaw detection capabilities.
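
    The statistical core of the methodology above reduces to a threshold sweep over image amplitudes at known defect locations versus defect-free regions, tracing POD against PFA (an ROC-style curve). A minimal sketch under assumed inputs:

      import numpy as np

      def pod_pfa_curve(flaw_peaks, background_peaks, n_thresh=200):
          """flaw_peaks: image amplitude at each known defect; background_peaks:
          peak amplitudes sampled from defect-free areas of the same images."""
          lo = min(flaw_peaks.min(), background_peaks.min())
          hi = max(flaw_peaks.max(), background_peaks.max())
          thresholds = np.linspace(lo, hi, n_thresh)
          pod = np.array([(flaw_peaks >= t).mean() for t in thresholds])
          pfa = np.array([(background_peaks >= t).mean() for t in thresholds])
          return pfa, pod    # one (PFA, POD) operating point per threshold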

  16. A Generic and Efficient E-field Parallel Imaging Correlator for Next-Generation Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Beardsley, Adam P.; Bowman, Judd D.; Morales, Miguel F.

    2017-05-01

    Modern radio telescopes are favouring densely packed array layouts with large numbers of antennas (N_A ≳ 1000). Since the complexity of traditional correlators scales as O(N_A^2), there will be a steep cost for realizing the full imaging potential of these powerful instruments. Through our generic and efficient E-field Parallel Imaging Correlator (EPIC), we present the first software demonstration of a generalized direct imaging algorithm, namely the Modular Optimal Frequency Fourier imager. Not only does it bring down the cost for dense layouts to O(N_A log_2 N_A), but it can also image from irregular layouts and heterogeneous arrays of antennas. EPIC is highly modular, parallelizable, implemented in object-oriented Python, and publicly available. We have verified the images produced to be equivalent to those from traditional techniques to within a precision set by gridding coarseness. We have also validated our implementation on data observed with the Long Wavelength Array (LWA1). We provide a detailed framework for imaging with heterogeneous arrays and show that EPIC robustly estimates the input sky model for such arrays. Antenna layouts with dense filling factors consisting of a large number of antennas, such as the LWA, the Square Kilometre Array, the Hydrogen Epoch of Reionization Array, and the Canadian Hydrogen Intensity Mapping Experiment, will gain significant computational advantage by deploying an optimized version of EPIC. The algorithm is a strong candidate for instruments targeting transient searches of fast radio bursts as well as planetary and exoplanetary phenomena, due to the availability of high-speed calibrated time-domain images and low output bandwidth relative to visibility-based systems.
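
    The direct-imaging idea admits a compact per-snapshot sketch: grid the antenna voltages by position (in wavelengths), FFT the aperture grid to form an E-field image, and accumulate squared magnitudes over snapshots. Calibration, polarization, and the optimal gridding kernels of the MOFF formalism are omitted; all names are illustrative.

      import numpy as np

      def epic_snapshot(ant_xy_wavelengths, voltages, npix, cell):
          """One direct-imaging frame; cell: pixel size in direction-cosine units."""
          du = 1.0 / (npix * cell)                  # aperture-grid cell size
          iu = (np.rint(ant_xy_wavelengths[:, 0] / du).astype(int) + npix // 2) % npix
          iv = (np.rint(ant_xy_wavelengths[:, 1] / du).astype(int) + npix // 2) % npix
          grid = np.zeros((npix, npix), dtype=complex)
          np.add.at(grid, (iv, iu), voltages)       # nearest-cell E-field gridding
          efield = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(grid)))
          return np.abs(efield) ** 2                # average many frames for sensitivity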

  17. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation in few instruction cycles and therefore satisfy the requirements of low- and mid-level high-speed image processing. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect our major components. A programming language and corresponding tool chain for this computational image sensor were also developed.

  18. Comparative study of 2D ultrasound imaging methods in the f-k domain and evaluation of their performances in a realistic NDT configuration

    NASA Astrophysics Data System (ADS)

    Merabet, Lucas; Robert, Sébastien; Prada, Claire

    2018-04-01

    In this paper, we present two frequency-domain algorithms for 2D imaging with plane wave emissions, namely Stolt's migration and Lu's method. The theoretical background is first presented, followed by an analysis of the algorithms' complexities. The frequency-domain methods are then compared to time-domain plane wave imaging in a realistic inspection configuration where the array elements are not in contact with the specimen. Imaging of defects located far away from the array aperture is assessed, and computation times for the three methods are presented as a function of the number of pixels in the reconstructed image. We show that Lu's method reduces computation time by a factor of up to 33 compared to the time-domain algorithm, and we demonstrate the limitations of Stolt's migration for defects far from the aperture.

  19. Estimation of color filter array data from JPEG images for improved demosaicking

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Reeves, Stanley J.

    2006-02-01

    On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.

  20. 2006 Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Ireland, Michael; Monnier, John D.; Thiebaut, Eric; Rengaswamy, Sridharan; Baron, Fabien; Young, John S.; Kraus, Stefan; hide

    2006-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Five different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the OI-FITS format. The data are calibrated power spectra and bispectra measured with an array intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  1. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, using SAD as the similarity measure. The reference image is then layered and the parallax is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its high-precision requirements on the depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range at an impressive rendering speed and with satisfactory image quality: relative to real viewpoint images, the average SSIM value of the results reaches 0.9525, the PSNR reaches 38.353 and the image histogram similarity reaches 93.77%.
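
    The SAD step compares a small window around each reference pixel against horizontally shifted windows in the other view and keeps the lowest-cost shift (winner-take-all). A minimal sketch of that depth-estimation stage; the window size and disparity range are illustrative:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def sad_disparity(left, right, max_disp=16, win=9):
          """Per-pixel disparity by SAD block matching, left image as reference."""
          h, w = left.shape
          cost = np.full((max_disp + 1, h, w), np.inf)
          for d in range(max_disp + 1):
              diff = np.abs(left[:, d:] - right[:, :w - d])     # shift candidate d
              cost[d, :, d:] = uniform_filter(diff, size=win)   # windowed SAD
          return np.argmin(cost, axis=0)                        # winner-take-all

      # Synthetic check: a texture shifted by 5 pixels comes back as disparity ~5.
      rng = np.random.default_rng(2)
      right = rng.random((64, 96))
      left = np.roll(right, 5, axis=1)
      print(np.median(sad_disparity(left, right, max_disp=10)))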

  2. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5×5×2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24×2.5×3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms are described along with their performance. It is shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) measurements, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). The simulations have been well validated against measured data. Results of the image reconstruction algorithms at various speeds and distances are presented, as well as the localization capability. Utilizing imaging information is shown to yield signal-to-noise gains over spectroscopic algorithms alone.

  3. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom for far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated on practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The technique proved effective in identifying the broadband and non-stationary noise produced by these sources.

  4. Increasing feasibility of the field-programmable gate array implementation of an iterative image registration using a kernel-warping algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.

    2017-09-01

    Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. Many algorithms have been proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for a field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system, rather than a single-FPGA system, is successfully developed to implement the KWA, in order to compensate for the insufficient hardware resources of one FPGA and to increase the parallel processing ability and scalability of the system.

  5. An Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Monnier, John D.; Zhaod, Ming; Young, John S.; Thorsteinsson, Hrobjartur; Meimon, Serge C.; Mugnier, Laurent; LeBesnerais, Guy; hide

    2004-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Six different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the interferometry Data Exchange Standard and is designed to simulate a specific problem relevant to long-baseline imaging. The data are calibrated power spectra and bispectra measured with a fictitious array, intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  6. Damage localization in aluminum plate with compact rectangular phased piezoelectric transducer array

    NASA Astrophysics Data System (ADS)

    Liu, Zenghua; Sun, Kunming; Song, Guorong; He, Cunfu; Wu, Bin

    2016-03-01

    In this work, a detection method for damage in plate-like structures using a compact rectangular phased piezoelectric transducer array of 16 piezoelectric elements is presented. This compact array can not only detect and locate a single defect (a through hole) in a plate, but also identify multiple defects (through holes and a surface defect simulated by an iron pillar glued to the plate). The experiments proved that the compact rectangular phased transducer array can cover the full area of a plate structure and detect multiple defects simultaneously. The processing algorithm proposed in this paper contains two parts: signal filtering and damage imaging. The former removes noise from the signals using the continuous wavelet transform, which provides a plot of wavelet coefficients from which the narrow-frequency-band signal can easily be extracted. The latter implements damage detection and localization. In order to accurately locate defects and improve the imaging quality, two images were obtained from the amplitude and phase information: one with the Total Focusing Method (TFM) and a phase image with the Sign Coherence Factor (SCF). Furthermore, an image compounding technique for the compact rectangular phased piezoelectric transducer array is proposed, in which the compounded image is obtained by combining the TFM image with the SCF image, greatly improving the resolution and contrast of the image.
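
    The TFM stage is a delay-and-sum over the full matrix of transmit-receive pairs: every pixel sums each A-scan at the time-of-flight from transmitter to pixel to receiver. A minimal sketch, assuming full-matrix-capture data fmc[tx, rx, t], elements on the z = 0 line, and a known constant wave speed; in practice the analytic signal (e.g. scipy.signal.hilbert) is summed rather than the raw traces:

      import numpy as np

      def tfm_image(fmc, elem_x, c, fs, xs, zs):
          """Total Focusing Method image from full-matrix-capture data.

          fmc:    array [n_el, n_el, n_t] of time traces for every tx/rx pair
          elem_x: element x-positions (elements on z = 0); c, fs: speed, sample rate
          xs, zs: image pixel axes
          """
          n_el, _, n_t = fmc.shape
          X, Z = np.meshgrid(xs, zs, indexing="ij")
          dist = np.sqrt((X[None] - elem_x[:, None, None]) ** 2 + Z[None] ** 2)
          img = np.zeros(X.shape)
          for tx in range(n_el):
              for rx in range(n_el):
                  t_idx = np.rint((dist[tx] + dist[rx]) / c * fs).astype(int)
                  valid = t_idx < n_t                     # drop delays past the trace end
                  img += np.where(valid, fmc[tx, rx, np.minimum(t_idx, n_t - 1)], 0.0)
          return np.abs(img)

    The SCF image referred to above weights each pixel by how consistently the per-pair contributions agree in sign; one common compounding choice is the pixelwise product of the two images.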

  7. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array; the lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  8. Advanced Fast 3-D Electromagnetic Solver for Microwave Tomography Imaging.

    PubMed

    Simonov, Nikolai; Kim, Bo-Ra; Lee, Kwang-Jae; Jeon, Soon-Ik; Son, Seong-Ho

    2017-10-01

    This paper describes a fast forward electromagnetic solver (FFS) for the image reconstruction algorithm of our microwave tomography system. Our apparatus is a preclinical prototype of a biomedical imaging system, designed for the purpose of early breast cancer detection. It operates in the 3-6-GHz frequency band using a circular array of probe antennas immersed in a matching liquid; it produces image reconstructions of the permittivity and conductivity profiles of the breast under examination. Our reconstruction algorithm solves the electromagnetic (EM) inverse problem and takes into account the real EM properties of the probe antenna array as well as the influence of the patient's body and that of the upper metal screen sheet. This FFS algorithm is much faster than conventional EM simulation solvers: on the same PC, the CST solver takes ~45 min, while the FFS takes ~1 s of effective simulation time for the same EM model of a numerical breast phantom.

  9. Development of a Receiver Processor For UAV Video Signal Acquisition and Tracking Using Digital Phased Array Antenna

    DTIC Science & Technology

    2010-09-01

    [Only the report's list of figures survives in this indexed excerpt; the recoverable captions are: Figure 26, image of the phased array antenna; Figure 38, computation of the correction angle from the array factor and sum/difference beams; Figure 39, front panel of the tracking algorithm.]

  10. GPU-based real-time trinocular stereo vision

    NASA Astrophysics Data System (ADS)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been shown to provide higher accuracy in stereo matching, which can benefit applications such as distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed on a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.

  11. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations, acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as clean. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry, with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e., spatially band-limited. Finally, we show through simulations the efficiency of our method, for the reconstruction of both images of point sources and complex extended sources. matlab code is available on GitHub.

  12. Wideband RELAX and wideband CLEAN for aeroacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.
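
    The CLEAN family shares one core loop: find the brightest peak of the dirty map, subtract a scaled copy of the array's point-spread function there, and repeat; WB-CLEAN iterates essentially this step across frequency bins. A minimal Hogbom-style sketch of that loop; the loop gain and stopping rule are illustrative:

      import numpy as np

      def clean(dirty, psf, gain=0.1, n_iter=200, threshold=1e-3):
          """Peel the PSF off the dirty image peak by peak.

          dirty, psf: same-shape arrays; the PSF peak marks its reference pixel.
          Returns CLEAN components [(row, col, amplitude)] and the residual map.
          """
          res = dirty.astype(float).copy()
          cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
          comps = []
          for _ in range(n_iter):
              py, px = np.unravel_index(np.argmax(np.abs(res)), res.shape)
              if np.abs(res[py, px]) < threshold:
                  break
              amp = gain * res[py, px]
              # np.roll wraps at the borders; production code clips instead
              res -= amp * np.roll(np.roll(psf, py - cy, axis=0), px - cx, axis=1)
              comps.append((py, px, amp))
          return comps, res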

  13. Wideband RELAX and wideband CLEAN for aeroacoustic imaging.

    PubMed

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  14. Breast cancer detection using time reversal

    NASA Astrophysics Data System (ADS)

    Sheikh Sajjadieh, Mohammad Hossein

    Breast cancer is the second leading cause of cancer death after lung cancer among women. Mammography and magnetic resonance imaging (MRI) have certain limitations in detecting breast cancer, especially during its early stage of development. A number of studies have shown that microwave breast cancer detection has the potential to become a successful clinical complement to conventional X-ray mammography. Microwave breast imaging is performed by illuminating the breast tissues with an electromagnetic waveform and recording its reflections (backscatters) emanating from variations in the normal breast tissues and tumour cells, if present, using an antenna array. These backscatters, referred to as the overall (tumour and clutter) response, are processed to estimate the tumour response, which is applied as input to array imaging algorithms used to estimate the location of the tumour. Due to changes in the breast profile over time, the commonly utilized background subtraction procedures used to estimate the target (tumour) response in array processing are impractical for breast cancer detection. The thesis proposes a new tumour estimation algorithm based on a combination of the data adaptive filter with the envelope detection filter (DAF/EDF), which collectively do not require a training step. After establishing the superiority of the DAF/EDF-based approach, the thesis shows that the time reversal (TR) array imaging algorithms outperform their conventional counterparts in detecting and localizing tumour cells in breast tissues at SNRs ranging from 15 to 30 dB.

  15. Configuration study for a 30 GHz monolithic receive array, volume 2

    NASA Technical Reports Server (NTRS)

    Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.

    1984-01-01

    The formalism of the sidelobe suppression algorithm and the method used to calculate the system noise figure for a 30 GHz monolithic receive array are presented. Results of array element weight determination and performance studies of a Gregorian aperture image system are also given.

  16. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel records only one color, since the image sensor can measure only one color per pixel; the empty color channels are then filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered, and this pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming that the color filter array pattern is known. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between the original pixels and the interpolated pixels. The variance of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern and provides superior performance compared with conventional methods.
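
    The underlying cue is that measured pixels keep more high-frequency energy than interpolated ones, so a variance statistic computed on a color-difference image separates the candidate lattices. A highly simplified sketch of that idea for the four Bayer layouts; the scoring statistic below is a crude stand-in for the paper's exact measure:

      import numpy as np

      BAYER = {"RGGB": (0, 0), "GRBG": (0, 1), "GBRG": (1, 0), "BGGR": (1, 1)}

      def pattern_score(rgb, r_offset):
          """Score one candidate layout on a demosaicked RGB image.

          Under the correct layout, the green-red difference variance differs
          most strongly between the lattice of measured-R pixels and the
          opposite diagonal lattice.
          """
          d = rgb[..., 1].astype(float) - rgb[..., 0].astype(float)
          dy, dx = r_offset
          return abs(np.var(d[dy::2, dx::2]) - np.var(d[1 - dy::2, 1 - dx::2]))

      def identify_cfa(rgb):
          return max(BAYER, key=lambda name: pattern_score(rgb, BAYER[name]))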

  17. High-resolution imaging using a wideband MIMO radar system with two distributed arrays.

    PubMed

    Wang, Dang-wei; Ma, Xiao-yan; Chen, A-Lei; Su, Yi

    2010-05-01

    Imaging a fast maneuvering target has been an active research area in past decades. Usually, an array antenna with multiple elements is implemented to avoid the motion compensation involved in inverse synthetic aperture radar (ISAR) imaging, but this comes at the price of high hardware complexity, in contrast to the complex algorithms used in single-antenna ISAR imaging systems. In this paper, a wideband multiple-input multiple-output (MIMO) radar system with two distributed arrays is proposed to reduce the hardware complexity of the system. Furthermore, the system model, the equivalent array production method and the imaging procedure are presented. Compared with a classical real aperture radar (RAR) imaging system, an important contribution of our method is that lower hardware complexity suffices, since many additional virtual array elements can be obtained. Numerical simulations are provided to test our system and imaging method.

  18. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation.

    PubMed

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-05-08

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small target detection. It is difficult to eliminate it using the calibration-based techniques and the current scene-based nonuniformity algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading the scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The result shows that the algorithm has obvious advantages compared with the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resources consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system.
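
    A temporal high-pass NUC tracks each pixel's slowly drifting offset with a long-time-constant low-pass filter and subtracts it; the fuzzy scene classification gates that update so genuine scene detail is not absorbed into the offset estimate. A bare-bones sketch of the filter core, with a crude frame-difference gate standing in for the fuzzy classifier:

      import numpy as np

      class TemporalHighpassNUC:
          """Per-pixel temporal high-pass nonuniformity correction.

          The running low-pass 'lp' tracks the fixed pattern; the update is
          frozen where frame-to-frame change suggests scene motion rather
          than pattern noise (stand-in for the fuzzy scene classification).
          """
          def __init__(self, shape, alpha=0.01, gate=5.0):
              self.lp = np.zeros(shape)
              self.prev = None
              self.alpha, self.gate = alpha, gate

          def __call__(self, frame):
              frame = frame.astype(float)
              if self.prev is None:
                  a = self.alpha
              else:
                  moving = np.abs(frame - self.prev) > self.gate
                  a = np.where(moving, 0.0, self.alpha)   # freeze update on scene content
              self.lp += a * (frame - self.lp)            # slow per-pixel low-pass
              self.prev = frame
              return frame - self.lp                      # high-pass output = corrected frame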

  19. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    NASA Astrophysics Data System (ADS)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
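
    The projection step itself is standard: estimate the sample covariance of the array snapshots, take the dominant eigenvectors as the interference subspace U, and apply P = I - U U^H; SP-SAP simply does this independently per subarray, within which the RFI is still mutually correlated. A minimal sketch, assuming one dominant interferer and an illustrative subarray grouping:

      import numpy as np

      def project_out_rfi(snapshots, n_rfi=1):
          """Project the estimated RFI subspace out of array snapshots.

          snapshots: complex array [n_ant, n_samples].
          """
          R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
          _, eigvecs = np.linalg.eigh(R)                           # ascending eigenvalues
          U = eigvecs[:, -n_rfi:]                                  # dominant = RFI subspace
          P = np.eye(R.shape[0]) - U @ U.conj().T
          return P @ snapshots

      def sp_sap(snapshots, subarrays, n_rfi=1):
          """Apply subspace projection per subarray, e.g. subarrays = [[0,1,2], [3,4,5]]."""
          out = snapshots.copy()
          for idx in subarrays:
              out[idx] = project_out_rfi(snapshots[idx], n_rfi)
          return out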

  20. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3×3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either a certain relationship between the visible colors, which is not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.

  1. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video imagery from behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to image the scene behind the keyhole from four directions, saving the multi-angle video images as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to detect the edges of the images and fill them. The stitching of the four images is built on a two-image stitching algorithm, in which the SIFT method accomplishes the initial matching of the images and the RANSAC algorithm is then applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is also proposed in this paper. Finally, a video image with a larger field of view behind the keyhole is synthesized from the frame sequence in which every single frame has been stitched. The results show that the video is clear and natural, the brightness transition is smooth, and there are no obvious stitching artifacts, so the method can be applied in different engineering environments.
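
    The pairwise SIFT-plus-RANSAC step described above maps directly onto standard OpenCV calls: detect keypoints, keep matches that pass Lowe's ratio test, estimate a homography with RANSAC, and warp. A condensed sketch of that step; the thresholds and canvas size are illustrative, and it assumes an OpenCV build with SIFT available:

      import cv2
      import numpy as np

      def stitch_pair(img1, img2, ratio=0.75, ransac_thresh=5.0):
          """Warp img2 into img1's frame via SIFT matches and a RANSAC homography."""
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(img1, None)
          k2, d2 = sift.detectAndCompute(img2, None)
          matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
          good = [m for m, n in matches if m.distance < ratio * n.distance]  # ratio test
          src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)  # drops outliers
          h, w = img1.shape[:2]
          pano = cv2.warpPerspective(img2, H, (2 * w, h))  # crude canvas-size guess
          pano[:h, :w] = img1                              # keep reference view on the overlap
          return pano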

  2. Photoacoustic Imaging of Animals with an Annular Transducer Array

    NASA Astrophysics Data System (ADS)

    Yang, Di-Wu; Zhou, Zhi-Bin; Zeng, Lv-Ming; Zhou, Xin; Chen, Xing-Hui

    2014-07-01

    A photoacoustic system with an annular transducer array is presented for rapid, high-resolution photoacoustic tomography of animals. An eight-channel data acquisition system is applied to capture the photoacoustic signals by using multiplexing and the total time of data acquisition and transferring is within 3 s. A limited-view filtered back projection algorithm is used to reconstruct the photoacoustic images. Experiments are performed on a mouse head and a rabbit head and clear photoacoustic images are obtained. The experimental results demonstrate that this imaging system holds the potential for imaging the human brain.

  3. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
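
    The detection rule reduces to: learn a low-dimensional PCA subspace from background frames, reconstruct each incoming frame from that subspace, and threshold the reconstruction error. A compact numpy version of the same dataflow the FPGA parallelizes; the dimensions and threshold are illustrative:

      import numpy as np

      def fit_background(frames, n_comp=8):
          """PCA background model from a stack of background frames [n, h, w]."""
          X = frames.reshape(len(frames), -1).astype(float)
          mean = X.mean(axis=0)
          _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
          return mean, Vt[:n_comp]          # mean image and subspace basis

      def detect_motion(frame, mean, basis, thresh=25.0):
          """Flag pixels poorly explained by the background subspace."""
          x = frame.ravel().astype(float) - mean
          recon = basis.T @ (basis @ x)     # project onto the subspace and back
          return (np.abs(x - recon) > thresh).reshape(frame.shape)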

  4. An intelligent architecture based on Field Programmable Gate Arrays designed to detect moving objects by using Principal Component Analysis.

    PubMed

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.

  5. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla.

    PubMed

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1(-)) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct image. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. In vivo sensitivity estimation and imaging acceleration with rotating RF coil arrays at 7 Tesla

    NASA Astrophysics Data System (ADS)

    Li, Mingyan; Jin, Jin; Zuo, Zhentao; Liu, Feng; Trakic, Adnan; Weber, Ewald; Zhuo, Yan; Xue, Rong; Crozier, Stuart

    2015-03-01

    Using a new rotating SENSitivity Encoding (rotating-SENSE) algorithm, we have successfully demonstrated that the rotating radiofrequency coil array (RRFCA) was capable of achieving a significant reduction in scan time and a uniform image reconstruction for a homogeneous phantom at 7 Tesla. However, at 7 Tesla the in vivo sensitivity profiles (B1-) become distinct at various angular positions. Therefore, sensitivity maps at other angular positions cannot be obtained by numerically rotating the acquired ones. In this work, a novel sensitivity estimation method for the RRFCA was developed and validated with human brain imaging. This method employed a library database and registration techniques to estimate coil sensitivity at an arbitrary angular position. The estimated sensitivity maps were then compared to the acquired sensitivity maps. The results indicate that the proposed method is capable of accurately estimating both magnitude and phase of sensitivity at an arbitrary angular position, which enables us to employ the rotating-SENSE algorithm to accelerate acquisition and reconstruct image. Compared to a stationary coil array with the same number of coil elements, the RRFCA was able to reconstruct images with better quality at a high reduction factor. It is hoped that the proposed rotation-dependent sensitivity estimation algorithm and the acceleration ability of the RRFCA will be particularly useful for ultra high field MRI.

  7. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Chengguang; Drinkwater, Bruce W.

    In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that in the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.

  8. An improved non-uniformity correction algorithm and its hardware implementation on FPGA

    NASA Astrophysics Data System (ADS)

    Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong

    2017-09-01

    The non-uniformity of infrared focal plane arrays (IRFPA) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement them. This paper therefore proposes an improved neural-network-based NUC algorithm built on the guided image filter and a projection-based motion detection algorithm. First, the guided image filter is utilized to obtain an accurate desired image and thereby reduce artificial ghosting. Then, a projection-based motion detection algorithm determines whether the correction coefficients should be updated, which overcomes the problem of image blurring. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are used to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm can effectively eliminate fixed-pattern noise with less image blurring and artificial ghosting. The proposed hardware design requires few logic elements in the FPGA and few clock cycles to process one frame of the image.

  9. Architecture and data processing alternatives for the TSE computer. Volume 3: Execution of a parallel counting algorithm using array logic (Tse) devices

    NASA Technical Reports Server (NTRS)

    Metcalfe, A. G.; Bodenheimer, R. E.

    1976-01-01

    A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, a Tse instruction set, and a software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
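
    In software terms, the combinational counting structure is a balanced reduction tree: adjacent partial counts are summed in log2(N) stages. A toy sketch of the same dataflow, where each loop iteration corresponds to one hardware adder stage:

      import numpy as np

      def tree_count(bits):
          """Count logic-1 elements with a log2(n)-depth pairwise reduction."""
          counts = np.asarray(bits, dtype=int).ravel()
          assert (counts.size & (counts.size - 1)) == 0, "power-of-two size for simplicity"
          while counts.size > 1:
              counts = counts[0::2] + counts[1::2]   # one adder stage: pairwise sums
          return int(counts[0])

      image = np.random.default_rng(3).integers(0, 2, size=(16, 16))
      assert tree_count(image) == image.sum()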

  10. Maximum likelihood positioning and energy correction for scintillation detectors

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten

    2016-02-01

    An algorithm for determining the crystal pixel and the gamma-ray energy with scintillation detectors for PET is presented. The algorithm uses likelihood maximisation (ML) and therefore is inherently robust to missing data caused by defective or paralysed photodetector pixels. We tested the algorithm on a highly integrated MRI-compatible small animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and length of 12 mm. Light sharing was used to read out the scintillation light from the 30 × 30 scintillator pixel array with an 8 × 8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage of the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG-based methods. Further, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.
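
    The ML step compares the measured SiPM light pattern against a per-crystal expected pattern under a Poisson noise model and picks the crystal maximizing the likelihood; a defective or paralysed pixel is handled by simply dropping its term from the sum, which is the robustness the abstract refers to. A schematic numpy version, with the calibrated light-model table left as a placeholder:

      import numpy as np

      def ml_position(measured, light_model, mask=None):
          """Maximum-likelihood crystal lookup for one scintillation event.

          measured:    photon counts per photodetector pixel, shape [n_pix]
          light_model: expected counts per pixel for each crystal, [n_crystal, n_pix]
          mask:        optional boolean [n_pix]; False marks defect pixels to ignore
          """
          m = np.ones(measured.shape, bool) if mask is None else mask
          mu = np.clip(light_model[:, m], 1e-9, None)
          x = measured[m]
          # Poisson log-likelihood, dropping terms independent of the crystal choice
          loglik = (x * np.log(mu) - mu).sum(axis=1)
          best = int(np.argmax(loglik))
          energy = x.sum() / light_model[best, m].sum()   # simple ML energy scale
          return best, energy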

  11. Development of a 35-MHz piezo-composite ultrasound array for medical imaging.

    PubMed

    Cannata, Jonathan M; Williams, Jay A; Zhou, Qifa; Ritter, Timothy A; Shung, K Kirk

    2006-01-01

    This paper discusses the development of a 64-element 35-MHz composite ultrasonic array. This array was designed primarily for ocular imaging applications, and features 2-2 composite elements mechanically diced out of a fine-grain, high-density Navy Type VI ceramic. Array elements were spaced at a 50-μm pitch, interconnected via a custom flexible circuit and matched to the 50-ohm system electronics via a 75-ohm transmission line coaxial cable. Elevation focusing was achieved using a cylindrically shaped epoxy lens. One functional 64-element array was fabricated and tested. Bandwidths averaging 55%, 23-dB insertion loss, and crosstalk less than -24 dB were measured. An image of a tungsten wire target phantom was acquired using a synthetic aperture reconstruction algorithm. The results from this imaging test demonstrate resolution exceeding 50 μm axially and 100 μm laterally.

  12. A Method of Sky Ripple Residual Nonuniformity Reduction for a Cooled Infrared Imager and Hardware Implementation

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Li, Shuo; Zhang, Xu; Zhu, Jin

    2017-01-01

    Cooled infrared detector arrays always suffer from undesired ripple residual nonuniformity (RNU) in sky scene observations. The ripple residual nonuniformity seriously affects the imaging quality, especially for small target detection. It is difficult to eliminate it using the calibration-based techniques and the current scene-based nonuniformity algorithms. In this paper, we present a modified temporal high-pass nonuniformity correction algorithm using fuzzy scene classification. The fuzzy scene classification is designed to control the correction threshold so that the algorithm can remove ripple RNU without degrading the scene details. We test the algorithm on a real infrared sequence by comparing it to several well-established methods. The result shows that the algorithm has obvious advantages compared with the tested methods in terms of detail conservation and convergence speed for ripple RNU correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA), which has two advantages: (1) low resources consumption; and (2) small hardware delay (less than 10 image rows). It has been successfully applied in an actual system. PMID:28481320

  13. Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture.

    PubMed

    Narayanan, Balaji; Hardie, Russell C; Muse, Robert A

    2005-06-10

    Spatial fixed-pattern noise is a common and major problem in modern infrared imagers owing to the nonuniform response of the photodiodes in the focal plane array of the imaging system. In addition, the nonuniform response of the readout and digitization electronics, which are involved in multiplexing the signals from the photodiodes, causes further nonuniformity. We describe a novel scene-based nonuniformity correction algorithm that treats the aggregate nonuniformity in separate stages. First, the nonuniformity from the readout amplifiers is corrected by use of knowledge of the readout architecture of the imaging system. Second, the nonuniformity resulting from the individual detectors is corrected with a nonlinear filter-based method. We demonstrate the performance of the proposed algorithm by applying it to simulated imagery and real infrared data. Quantitative results in terms of the mean absolute error and the signal-to-noise ratio are also presented to demonstrate the efficacy of the proposed algorithm. One advantage of the proposed algorithm is that it requires only a few frames to obtain high-quality corrections.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.

  15. An Efficient Image Recovery Algorithm for Diffraction Tomography Systems

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1993-01-01

    A diffraction tomography system has potential application in the ultrasonic medical imaging area. It is capable of achieving imagery with the ultimate resolution of one quarter of the wavelength by collecting ultrasonic backscattering data from a circular array of sensors and reconstructing the object reflectivity using a digital image recovery algorithm performed by a computer. One advantage of such a system is that it allows a relatively lower-frequency wave to penetrate more deeply into the object and still achieve imagery with a reasonable resolution. An efficient image recovery algorithm for the diffraction tomography system was originally developed for processing wide-beam spaceborne SAR data...

  16. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.

  17. Sparse interferometric millimeter-wave array for centimeter-level 100-m standoff imaging

    NASA Astrophysics Data System (ADS)

    Suen, Jonathan Y.; Lubin, Philip M.; Solomon, Steven L.; Ginn, Robert P.

    2013-05-01

    We present work on the development of a long-range standoff concealed weapons detection system capable of imaging under very heavy clothing at distances exceeding 100 m with cm resolution. The system is based on a combination of phased array technologies used in radio astronomy and SAR radar, using a coherent, multi-frequency reconstruction algorithm which can run at up to 1000 Hz frame rates with high SNR using a multi-tone transceiver. We show the flexible design space of our system as well as algorithm development, predicted system performance and impairments, and simulated reconstructed images. The system can be used for a variety of purposes, including portal applications, crowd scanning and tactical situations; additional uses include seeing through dust and fog.

  18. Model-based Estimation for Pose, Velocity of Projectile from Stereo Linear Array Image

    NASA Astrophysics Data System (ADS)

    Zhao, Zhuxin; Wen, Gongjian; Zhang, Xing; Li, Deren

    2012-01-01

    The pose (position and attitude) and velocity of in-flight projectiles have a major influence on performance and accuracy. A cost-effective method for measuring gun-boosted projectiles is proposed. The method uses only one linear array image collected by a stereo vision system combining a digital line-scan camera and a mirror near the muzzle. From the projectile's stereo image, the motion parameters (pose and velocity) are acquired using a model-based optimization algorithm, which achieves optimal estimation of the parameters by matching the stereo projection of the projectile with that of a 3D model of the same size. The speed and the AOA (angle of attack) can also be determined subsequently. Experiments were performed to test the proposed method.

  19. Plenoptic projection fluorescence tomography.

    PubMed

    Iglesias, Ignacio; Ripoll, Jorge

    2014-09-22

    A new method to obtain the three-dimensional localization of fluorochrome distributions in micrometric samples is presented. It uses a microlens array coupled to the image port of a standard microscope to obtain tomographic data by a filtered back-projection algorithm. Scanning of the microlens array is proposed to obtain a dense data set for reconstruction. Simulation and experimental results are shown and the implications of this approach in fast 3D imaging are discussed.

  20. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm which is easy to implement by hardware. The proposed NUC algorithm is based on the linear correction scheme with the useful method of pixel offset correction coefficients update. The new approach to IRFPA response nonuniformity correction consists in the use of pixel response change determined at the actual operating conditions in relation to the reference ones by means of shutter to compensate a pixel offset temporal drift. Moreover, it permits to remove any optics shading effect in the output image as well. To show efficiency of the proposed NUC algorithm some test results for microbolometer IRFPA are presented.

  1. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection.

    PubMed

    Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi

    2017-06-08

    A new method for extracting the dots is proposed by the reflected light image of porous silicon (PSi) microarray utilization in this paper. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of the reflected light image segmentation, the array cells are segmented into dots as large as possible and the distance between the dots is equal in the corrected image. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex background and complete the binarization processing of the image. The tilt correction algorithm has a shorter computation time, which makes it highly suitable for tilt correction of reflected light images. The segmentation algorithm makes the dots in a regular arrangement, excludes the edges and the bright spots. This method could be utilized in the fast, accurate and automatic dots extraction of the PSi microarray reflected light image.

  2. Image Processing of Porous Silicon Microarray in Refractive Index Change Detection

    PubMed Central

    Guo, Zhiqing; Jia, Zhenhong; Yang, Jie; Kasabov, Nikola; Li, Chuanxi

    2017-01-01

    A new method for extracting the dots is proposed by the reflected light image of porous silicon (PSi) microarray utilization in this paper. The method consists of three parts: pretreatment, tilt correction and spot segmentation. First, based on the characteristics of different components in HSV (Hue, Saturation, Value) space, a special pretreatment is proposed for the reflected light image to obtain the contour edges of the array cells in the image. Second, through the geometric relationship of the target object between the initial external rectangle and the minimum bounding rectangle (MBR), a new tilt correction algorithm based on the MBR is proposed to adjust the image. Third, based on the specific requirements of the reflected light image segmentation, the array cells are segmented into dots as large as possible and the distance between the dots is equal in the corrected image. Experimental results show that the pretreatment part of this method can effectively avoid the influence of complex background and complete the binarization processing of the image. The tilt correction algorithm has a shorter computation time, which makes it highly suitable for tilt correction of reflected light images. The segmentation algorithm makes the dots in a regular arrangement, excludes the edges and the bright spots. This method could be utilized in the fast, accurate and automatic dots extraction of the PSi microarray reflected light image. PMID:28594383

  3. The NMR phased array.

    PubMed

    Roemer, P B; Edelstein, W A; Hayes, C E; Souza, S P; Mueller, O M

    1990-11-01

    We describe methods for simultaneously acquiring and subsequently combining data from a multitude of closely positioned NMR receiving coils. The approach is conceptually similar to phased array radar and ultrasound and hence we call our techniques the "NMR phased array." The NMR phased array offers the signal-to-noise ratio (SNR) and resolution of a small surface coil over fields-of-view (FOV) normally associated with body imaging with no increase in imaging time. The NMR phased array can be applied to both imaging and spectroscopy for all pulse sequences. The problematic interactions among nearby surface coils is eliminated (a) by overlapping adjacent coils to give zero mutual inductance, hence zero interaction, and (b) by attaching low input impedance preamplifiers to all coils, thus eliminating interference among next nearest and more distant neighbors. We derive an algorithm for combining the data from the phased array elements to yield an image with optimum SNR. Other techniques which are easier to implement at the cost of lower SNR are explored. Phased array imaging is demonstrated with high resolution (512 x 512, 48-cm FOV, and 32-cm FOV) spin-echo images of the thoracic and lumbar spine. Data were acquired from four-element linear spine arrays, the first made of 12-cm square coils and the second made of 8-cm square coils. When compared with images from a single 15 x 30-cm rectangular coil and identical imaging parameters, the phased array yields a 2X and 3X higher SNR at the depth of the spine (approximately 7 cm).

  4. Imaging of downward-looking linear array SAR using three-dimensional spatial smoothing MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Siqian; Kuang, Gangyao

    2014-10-01

    In this paper, a novel three-dimensional imaging algorithm of downward-looking linear array SAR is presented. To improve the resolution, multiple signal classification (MUSIC) algorithm has been used. However, since the scattering centers are always correlated in real SAR system, the estimated covariance matrix becomes singular. To address the problem, a three-dimensional spatial smoothing method is proposed in this paper to restore the singular covariance matrix to a full-rank one. The three-dimensional signal matrix can be divided into a set of orthogonal three-dimensional subspaces. The main idea of the method is based on extracting the array correlation matrix as the average of all correlation matrices from the subspaces. In addition, the spectral height of the peaks contains no information with regard to the scattering intensity of the different scattering centers, thus it is difficulty to reconstruct the backscattering information. The least square strategy is used to estimate the amplitude of the scattering center in this paper. The above results of the theoretical analysis are verified by 3-D scene simulations and experiments on real data.

  5. Improved Phased Array Imaging of a Model Jet

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Podboy, Gary G.

    2010-01-01

    An advanced phased array system, OptiNav Array 48, and a new deconvolution algorithm, TIDY, have been used to make octave band images of supersonic and subsonic jet noise produced by the NASA Glenn Small Hot Jet Acoustic Rig (SHJAR). The results are much more detailed than previous jet noise images. Shock cell structures and the production of screech in an underexpanded supersonic jet are observed directly. Some trends are similar to observations using spherical and elliptic mirrors that partially informed the two-source model of jet noise, but the radial distribution of high frequency noise near the nozzle appears to differ from expectations of this model. The beamforming approach has been validated by agreement between the integrated image results and the conventional microphone data.

  6. A novel weighted-direction color interpolation

    NASA Astrophysics Data System (ADS)

    Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng

    2013-08-01

    A digital camera capture images by covering the sensor surface with a color filter array (CFA), only get a color sample at pixel location. Demosaicking is a process by estimating the missing color components of each pixel to get a full resolution image. In this paper, a new algorithm based on edge adaptive and different weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results based on Kodak images show that the proposed algorithm obtain higher quality images compared to other methods in numerical and visual aspects.

  7. Arraycount, an algorithm for automatic cell counting in microwell arrays.

    PubMed

    Kachouie, Nezamoddin; Kang, Lifeng; Khademhosseini, Ali

    2009-09-01

    Microscale technologies have emerged as a powerful tool for studying and manipulating biological systems and miniaturizing experiments. However, the lack of software complementing these techniques has made it difficult to apply them for many high-throughput experiments. This work establishes Arraycount, an approach to automatically count cells in microwell arrays. The procedure consists of fluorescent microscope imaging of cells that are seeded in microwells of a microarray system and then analyzing images via computer to recognize the array and count cells inside each microwell. To start counting, green and red fluorescent images (representing live and dead cells, respectively) are extracted from the original image and processed separately. A template-matching algorithm is proposed in which pre-defined well and cell templates are matched against the red and green images to locate microwells and cells. Subsequently, local maxima in the correlation maps are determined and local maxima maps are thresholded. At the end, the software records the cell counts for each detected microwell on the original image in high-throughput. The automated counting was shown to be accurate compared with manual counting, with a difference of approximately 1-2 cells per microwell: based on cell concentration, the absolute difference between manual and automatic counting measurements was 2.5-13%.

  8. A hyperspectral image projector for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.

    2007-04-01

    We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.

  9. A pipelined architecture for real time correction of non-uniformity in infrared focal plane arrays imaging system using multiprocessors

    NASA Astrophysics Data System (ADS)

    Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan

    2010-07-01

    This paper proposes a kind of pipelined electric circuit architecture implemented in FPGA, a very large scale integrated circuit (VLSI), which efficiently deals with the real time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this image system. Each processor undertakes own systematic task, coordinating its work with each other's. The system on programmable chip (SOPC) in FPGA works steadily under the global clock frequency of 96Mhz. Adequate time allowance makes FPGA perform NUC image pre-processing algorithm with ease, which has offered favorable guarantee for the work of post image processing in DSP. And at the meantime, this paper presents a hardware (HW) and software (SW) co-design in FPGA. Thus, this systematic architecture yields an image processing system with multiprocessor, and a smart solution to the satisfaction with the performance of the system.

  10. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.

    PubMed

    Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S

    2018-03-05

    A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.

  11. A Comparison of Lightning Flashes as Observed by the Lightning Imaging Sensor and the North Alabama Lightning Mapping Array

    NASA Technical Reports Server (NTRS)

    Bateman, M. G.; Mach, D. M.; McCaul, M. G.; Bailey, J. C.; Christian, H. J.

    2008-01-01

    The Lightning Imaging Sensor (LIS) aboard the TRMM satellite has been collecting optical lightning data since November 1997. A Lightning Mapping Array (LMA) that senses VHF impulses from lightning was installed in North Alabama in the Fall of 2001. A dataset has been compiled to compare data from both instruments for all times when the LIS was passing over the domain of our LMA. We have algorithms for both instruments to group pixels or point sources into lightning flashes. This study presents the comparison statistics of the flash data output (flash duration, size, and amplitude) from both algorithms. We will present the results of this comparison study and show "point-level" data to explain the differences. AS we head closer to realizing a Global Lightning Mapper (GLM) on GOES-R, better understanding and ground truth of each of these instruments and their respective flash algorithms is needed.

  12. Eigenspace-based minimum variance adaptive beamformer combined with delay multiply and sum: experimental study

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-02-01

    Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to a low resolution and high sidelobes. Delay multiply and sum (DMAS) was used to address the incapabilities of DAS, providing a higher image quality. However, the resolution improvement is not well enough compared to eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra, called EIBMV-DMAS, using the expansion of DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is experimentally evaluated where the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method degrades the sidelobes for about 365 %, 221 % and 40 %, compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR about 158 %, 63 % and 20 %, respectively.

  13. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.

  14. Comparing an FPGA to a Cell for an Image Processing Application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.

    2010-12-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to discover the parallel nature of modern image processing algorithms. On the other hand, PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, Iris Recognition Systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.

  15. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The imagery resolution of imaging systems for remote sensing is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method which combines motion estimation and image deconvolution both for area-array and TDI remote sensing has been proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from estimated image motion vectors. Eventually, the clear image can be recovered by the Richardson-Lucy (RL) iterative deconvolution algorithm from the blurred image of the prime camera with the constructed PSF. The image deconvolution for the area-array detector is direct. While for the TDICCD detector, an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that, the performance of the proposed concept is convincing. Blurred and distorted images can be properly recovered not only for visual observation, but also with significant objective evaluation increment.

  16. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    NASA Technical Reports Server (NTRS)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. Researchers describe a parallel algorithm which is implemented on the MPP for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from .5 to 0.7 seconds for 128 x 128 images.

  17. An evaluation to design high performance pinhole array detector module for four head SPECT: a simulation study

    NASA Astrophysics Data System (ADS)

    Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.

    2014-09-01

    The purpose of this study is to derive optimized parameters for a detector module employing an off-the-shelf X-ray camera and a pinhole array collimator applicable for a range of different SPECT systems. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were performed to estimate the performance of the pinhole array collimators and were compared to that of low energy high resolution (LEHR) parallel-hole collimator in a four head SPECT system. A detector module was simulated to have 48 mm by 48 mm active area along with 1mm, 1.6mm and 2 mm pinhole aperture sizes at 0.48 mm pitch on a tungsten plate. Perpendicular lead septa were employed to verify overlapping and non-overlapping projections against a proper acceptance angle without lead septa. A uniform shape cylindrical water phantom was used to evaluate the performance of the proposed four head SPECT system of the pinhole array detector module. For each head, 100 pinhole configurations were evaluated based on sensitivity and detection efficiency for 140 keV γ-rays, and compared to LEHR parallel-hole collimator. SPECT images were reconstructed based on filtered back projection (FBP) algorithm where neither scatter nor attenuation corrections were performed. A better reconstruction algorithm development for this specific system is in progress. Nevertheless, activity distribution was well visualized using the backprojection algorithm. In this study, we have evaluated several quantitative and comparative analyses for a pinhole array imaging system providing high detection efficiency and better system sensitivity over a large FOV, comparing to the conventional four head SPECT system. The proposed detector module is expected to provide improved performance in various SPECT imaging.

  18. Ripple FPN reduced algorithm based on temporal high-pass filter and hardware implementation

    NASA Astrophysics Data System (ADS)

    Li, Yiyang; Li, Shuo; Zhang, Zhipeng; Jin, Weiqi; Wu, Lei; Jin, Minglei

    2016-11-01

    Cooled infrared detector arrays always suffer from undesired Ripple Fixed-Pattern Noise (FPN) when observe the scene of sky. The Ripple Fixed-Pattern Noise seriously affect the imaging quality of thermal imager, especially for small target detection and tracking. It is hard to eliminate the FPN by the Calibration based techniques and the current scene-based nonuniformity algorithms. In this paper, we present a modified space low-pass and temporal high-pass nonuniformity correction algorithm using adaptive time domain threshold (THP&GM). The threshold is designed to significantly reduce ghosting artifacts. We test the algorithm on real infrared in comparison to several previously published methods. This algorithm not only can effectively correct common FPN such as Stripe, but also has obviously advantage compared with the current methods in terms of detail protection and convergence speed, especially for Ripple FPN correction. Furthermore, we display our architecture with a prototype built on a Xilinx Virtex-5 XC5VLX50T field-programmable gate array (FPGA). The hardware implementation of the algorithm based on FPGA has two advantages: (1) low resources consumption, and (2) small hardware delay (less than 20 lines). The hardware has been successfully applied in actual system.

  19. Application of dot-matrix illumination of liquid crystal phase space light modulator in 3D imaging of APD array

    NASA Astrophysics Data System (ADS)

    Wang, Shuai; Sun, Huayan; Guo, Huichao

    2018-01-01

    Aiming at the problem of beam scanning in low-resolution APD array in three-dimensional imaging, a method of beam scanning with liquid crystal phase-space optical modulator is proposed to realize high-resolution imaging by low-resolution APD array. First, a liquid crystal phase spatial light modulator is used to generate a beam array and then a beam array is scanned. Since the sub-beam divergence angle in the beam array is smaller than the field angle of a single pixel in the APD array, the APD's pixels respond only to the three-dimensional information of the beam illumination position. Through the scanning of the beam array, a single pixel is used to collect the target three-dimensional information multiple times, thereby improving the resolution of the APD detector. Finally, MATLAB is used to simulate the algorithm in this paper by using two-dimensional scalar diffraction theory, which realizes the splitting and scanning with a resolution of 5 x 5. The feasibility is verified theoretically.

  20. Resolving phase ambiguities in the calibration of redundant interferometric arrays: implications for array design

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy G.; Tarokh, Vahid; Rachlin, Yaron; Shah, Vinay N.; Ashcom, Jonathan B.

    2016-10-01

    We provide new results enabling robust interferometric image reconstruction in the presence of unknown aperture piston variation via the technique of redundant spacing calibration (RSC). The RSC technique uses redundant measurements of the same interferometric baseline with different pairs of apertures to reveal the piston variation among these pairs. In both optical and radio interferometry, the presence of phase-wrapping ambiguities in the measurements is a fundamental issue that needs to be addressed for reliable image reconstruction. In this paper, we show that these ambiguities affect recently developed RSC phasor-based reconstruction approaches operating on the complex visibilities, as well as traditional phase-based approaches operating on their logarithm. We also derive new sufficient conditions for an interferometric array to be immune to these ambiguities in the sense that their effect can be rendered benign in image reconstruction. This property, which we call wrap-invariance, has implications for the reliability of imaging via classical three-baseline phase closures as well as generalized closures. We show that wrap-invariance is conferred upon arrays whose interferometric graph satisfies a certain cycle-free condition. For cases in which this condition is not satisfied, a simple algorithm is provided for identifying those graph cycles which prevent its satisfaction. We apply this algorithm to diagnose and correct a member of a pattern family popular in the literature.

  1. Isolation and measurement of the features of arrays of cell aggregates formed by dielectrophoresis using the user-specified Multi Regions Masking (MRM) technique

    NASA Astrophysics Data System (ADS)

    Yusvana, Rama; Headon, Denis; Markx, Gerard H.

    2009-08-01

    The use of dielectrophoresis for the construction of artificial skin tissue with skin cells in follicle-like 3D cell aggregates in well-defined patterns is demonstrated. To analyse the patterns produced and to study their development after their formation a Virtual Instrument (VI) system was developed using the LabVIEW IMAQ Vision Development Module. A series of programming functions (algorithms) was used to isolate the features on the image (in our case; the patterned aggregates) and separate them from all other unwanted regions on the image. The image was subsequently converted into a binary version, covering only the desired microarray regions which could then be analysed by computer for automatic object measurements. The analysis utilized the simple and easy-to-use User-Specified Multi-Regions Masking (MRM) technique, which allows one to concentrate the analysis on the desired regions specified in the mask. This simplified the algorithms for the analysis of images of cell arrays having similar geometrical properties. By having a collection of scripts containing masks of different patterns, it was possible to quickly and efficiently develop sets of custom virtual instruments for the offline or online analysis of images of cell arrays in the database.

  2. Breast ultrasound tomography with two parallel transducer arrays

    NASA Astrophysics Data System (ADS)

    Huang, Lianjie; Shin, Junseob; Chen, Ting; Lin, Youzuo; Gao, Kai; Intrator, Miranda; Hanson, Kenneth

    2016-03-01

    Breast ultrasound tomography is an emerging imaging modality to reconstruct the sound speed, density, and ultrasound attenuation of the breast in addition to ultrasound reflection/beamforming images for breast cancer detection and characterization. We recently designed and manufactured a new synthetic-aperture breast ultrasound tomography prototype with two parallel transducer arrays consisting of a total of 768 transducer elements. The transducer arrays are translated vertically to scan the breast in a warm water tank from the chest wall/axillary region to the nipple region to acquire ultrasound transmission and reflection data for whole-breast ultrasound tomography imaging. The distance of these two ultrasound transducer arrays is adjustable for scanning breasts with different sizes. We use our breast ultrasound tomography prototype to acquire phantom and in vivo patient ultrasound data to study its feasibility for breast imaging. We apply our recently developed ultrasound imaging and tomography algorithms to ultrasound data acquired using our breast ultrasound tomography system. Our in vivo patient imaging results demonstrate that our breast ultrasound tomography can detect breast lesions shown on clinical ultrasound and mammographic images.

  3. Computational Ghost Imaging for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates on a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target. In CGI, the measurement obtained from the reference arm (with the high-resolution detector) is replaced by a computational derivation of the measurement-plane intensity profile of the reference-arm beam. The algorithms applied to computational ghost imaging have diversified beyond simple correlation measurements, and now include modern reconstruction algorithms based on compressive sensing.

  4. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964

  5. Effects of illumination on image reconstruction via Fourier ptychography

    NASA Astrophysics Data System (ADS)

    Cao, Xinrui; Sinzinger, Stefan

    2017-12-01

    The Fourier ptychographic microscopy (FPM) technique provides high-resolution images by combining a traditional imaging system, e.g. a microscope or a 4f-imaging system, with a multiplexing illumination system, e.g. an LED array and numerical image processing for enhanced image reconstruction. In order to numerically combine images that are captured under varying illumination angles, an iterative phase-retrieval algorithm is often applied. However, in practice, the performance of the FPM algorithm degrades due to the imperfections of the optical system, the image noise caused by the camera, etc. To eliminate the influence of the aberrations of the imaging system, an embedded pupil function recovery (EPRY)-FPM algorithm has been proposed [Opt. Express 22, 4960-4972 (2014)]. In this paper, we study how the performance of FPM and EPRY-FPM algorithms are affected by imperfections of the illumination system using both numerical simulations and experiments. The investigated imperfections include varying and non-uniform intensities, and wavefront aberrations. Our study shows that the aberrations of the illumination system significantly affect the performance of both FPM and EPRY-FPM algorithms. Hence, in practice, aberrations in the illumination system gain significant influence on the resulting image quality.

  6. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of Uav Thermography

    NASA Astrophysics Data System (ADS)

    Kim, D.; Youn, J.; Kim, C.

    2017-08-01

    As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it will be a time-consuming way to inspect large-scale PV power plants by a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from an UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of thermal intensity (surface temperature) characteristics of each PV module to verify the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels in the fault detection algorithm is not applicable. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from individual array automatically. The performance of the proposed algorithm was tested on three sample images; this verified a detection accuracy of defective panels of 97 % or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection compared to a global detection rule.

  7. Fringe pattern demodulation using the one-dimensional continuous wavelet transform: field-programmable gate array implementation.

    PubMed

    Abid, Abdulbasit

    2013-03-01

    This paper presents a thorough discussion of the proposed field-programmable gate array (FPGA) implementation for fringe pattern demodulation using the one-dimensional continuous wavelet transform (1D-CWT) algorithm. This algorithm is also known as wavelet transform profilometry. Initially, the 1D-CWT is programmed using the C programming language and compiled into VHDL using the ImpulseC tool. This VHDL code is implemented on the Altera Cyclone IV GX EP4CGX150DF31C7 FPGA. A fringe pattern image with a size of 512×512 pixels is presented to the FPGA, which processes the image using the 1D-CWT algorithm. The FPGA requires approximately 100 ms to process the image and produce a wrapped phase map. For performance comparison purposes, the 1D-CWT algorithm is programmed using the C language. The C code is then compiled using the Intel compiler version 13.0. The compiled code is run on a Dell Precision state-of-the-art workstation. The time required to process the fringe pattern image is approximately 1 s. In order to further reduce the execution time, the 1D-CWT is reprogramed using Intel Integrated Primitive Performance (IPP) Library Version 7.1. The execution time was reduced to approximately 650 ms. This confirms that at least sixfold speedup was gained using FPGA implementation over a state-of-the-art workstation that executes heavily optimized implementation of the 1D-CWT algorithm.

  8. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.

  9. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, H; Xing, L; Liang, Z

    Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of eachmore » tissue type - white matter, grey matter and cerebral spinal fluid inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue type distributions on perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective to initialize tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions, which is valuable for the early diagnosis of certain brain diseases, e.g. multiple sclerosis.« less

  10. Noninvasive photoacoustic detecting intraocular foreign bodies with an annular transducer array.

    PubMed

    Yang, Diwu; Zeng, Lvming; Pan, Changning; Zhao, Xuehui; Ji, Xuanrong

    2013-01-14

    We present a fast photoacoustic imaging system based on an annular transducer array for detection of intraocular foreign bodies. An eight-channel data acquisition system is applied to capture the photoacoustic signals using multiplexing and the total time of data acquisition and transferring is within 3 s. A limited-view filtered back projection algorithm is used to reconstruct the photoacoustic images. Experimental models of intraocular metal and glass foreign bodies were constructed on ex vivo pig's eyes and clear photoacoustic images of intraocular foreign bodies were obtained. Experimental results demonstrate the photoacoustic imaging system holds the potential for in clinic detecting the intraocular foreign bodies.

  11. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.

  12. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging†

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  13. The IMS Software Integration Platform

    DTIC Science & Technology

    1993-04-12

    products to incorporate all data shared by the IMS applications. Some entities (time-series, images, a algorithm -specific parameters) must be managed...dbwhoanii, dbcancel Transaction Management: dbcommit, dbrollback Key Counter Assignment: dbgetcounter String Handling: cstr ~to~pad, pad-to- cstr Error...increment *value; String Maniputation: int cstr topad (array, string, arraylength) char *array, *string; int arrayjlength; int pad tocstr (string

  14. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of single image, overcame the disadvantages of "ghost artifacts" and bulk calculating costs in traditional NUC algorithms. A point-sampling matrix was designed to validate the measurements of CS on the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were solved with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image with only 25% pixels. A small difference was found between the correction results using 100% pixels and the reconstruction results using 40% pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  15. CFA-aware features for steganalysis of color images

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica

    2015-03-01

    Color interpolation is a form of upsampling, which introduces constraints on the relationship between neighboring pixels in a color image. These constraints can be utilized to substantially boost the accuracy of steganography detectors. In this paper, we introduce a rich model formed by 3D co-occurrences of color noise residuals split according to the structure of the Bayer color filter array to further improve detection. Some color interpolation algorithms, AHD and PPG, impose pixel constraints so tight that extremely accurate detection becomes possible with merely eight features eliminating the need for model richification. We carry out experiments on non-adaptive LSB matching and the content-adaptive algorithm WOW on five different color interpolation algorithms. In contrast to grayscale images, in color images that exhibit traces of color interpolation the security of WOW is significantly lower and, depending on the interpolation algorithm, may even be lower than non-adaptive LSB matching.

  16. Nonuniformity correction for an infrared focal plane array based on diamond search block matching.

    PubMed

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.

  17. PhylArray: phylogenetic probe design algorithm for microarray.

    PubMed

    Militon, Cécile; Rimour, Sébastien; Missaoui, Mohieddine; Biderre, Corinne; Barra, Vincent; Hill, David; Moné, Anne; Gagne, Geneviève; Meier, Harald; Peyretaillade, Eric; Peyret, Pierre

    2007-10-01

    Microbial diversity is still largely unknown in most environments, such as soils. In order to get access to this microbial 'black-box', the development of powerful tools such as microarrays are necessary. However, the reliability of this approach relies on probe efficiency, in particular sensitivity, specificity and explorative power, in order to obtain an image of the microbial communities that is close to reality. We propose a new probe design algorithm that is able to select microarray probes targeting SSU rRNA at any phylogenetic level. This original approach, implemented in a program called 'PhylArray', designs a combination of degenerate and non-degenerate probes for each target taxon. Comparative experimental evaluations indicate that probes designed with PhylArray yield a higher sensitivity and specificity than those designed by conventional approaches. Applying the combined PhyArray/GoArrays strategy helps to optimize the hybridization performance of short probes. Finally, hybridizations with environmental targets have shown that the use of the PhylArray strategy can draw attention to even previously unknown bacteria.

  18. Thin-film sparse boundary array design for passive acoustic mapping during ultrasound therapy.

    PubMed

    Coviello, Christian M; Kozick, Richard J; Hurrell, Andrew; Smith, Penny Probert; Coussios, Constantin-C

    2012-10-01

    A new 2-D hydrophone array for ultrasound therapy monitoring is presented, along with a novel algorithm for passive acoustic mapping using a sparse weighted aperture. The array is constructed using existing polyvinylidene fluoride (PVDF) ultrasound sensor technology, and is utilized for its broadband characteristics and its high receive sensitivity. For most 2-D arrays, high-resolution imagery is desired, which requires a large aperture at the cost of a large number of elements. The proposed array's geometry is sparse, with elements only on the boundary of the rectangular aperture. The missing information from the interior is filled in using linear imaging techniques. After receiving acoustic emissions during ultrasound therapy, this algorithm applies an apodization to the sparse aperture to limit side lobes and then reconstructs acoustic activity with high spatiotemporal resolution. Experiments show verification of the theoretical point spread function, and cavitation maps in agar phantoms correspond closely to predicted areas, showing the validity of the array and methodology.

  19. Robust sparse image reconstruction of radio interferometric observations with PURIFY

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; McEwen, Jason D.; d'Avezac, Mayeul; Carrillo, Rafael E.; Onose, Alexandru; Wiaux, Yves

    2018-01-01

    Next-generation radio interferometers, such as the Square Kilometre Array, will revolutionize our understanding of the Universe through their unprecedented sensitivity and resolution. However, to realize these goals, significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and scalability for big data. In this work, we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers algorithm presented in a recent article. First, we assess the impact of the interpolation kernel used to perform gridding and degridding on sparse image reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as well as prolate spheroidal wave functions while providing a computational saving and an analytic form. Secondly, we apply PURIFY to real interferometric observations from the Very Large Array and the Australia Telescope Compact Array and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Thirdly, we discuss how PURIFY reconstructions exhibit additional advantages over those recovered by CLEAN. The latest version of PURIFY, with developments presented in this work, is made publicly available.
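
    As an aside on the gridding step discussed above, here is a minimal sketch of a Kaiser-Bessel interpolation kernel; the width and beta defaults are illustrative, not PURIFY's actual settings:

```python
import numpy as np
from scipy.special import i0  # zeroth-order modified Bessel function

def kaiser_bessel(u, width=6.0, beta=13.9):
    """Kaiser-Bessel gridding kernel (a sketch). 'u' is the distance from
    the grid point in grid cells; the kernel is zero for |u| > width/2."""
    u = np.asarray(u, dtype=float)
    inside = np.abs(u) <= width / 2
    arg = np.sqrt(np.maximum(1.0 - (2.0 * u / width) ** 2, 0.0))
    return np.where(inside, i0(beta * arg) / i0(beta), 0.0)
```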

  20. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Bhatnagar, S.; Cornwell, T. J.

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to the antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.

  1. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas, are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to the antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.

  2. Quasi-Epipolar Resampling of High Resolution Satellite Stereo Imagery for Semi Global Matching

    NASA Astrophysics Data System (ADS)

    Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.

    2015-12-01

    Semi-global matching is a well-known stereo matching algorithm in the photogrammetry and computer vision communities, and epipolar images are assumed as its input. The epipolar geometry of linear array scanners is not a straight line, as it is in the case of frame cameras, and traditional epipolar resampling algorithms demand rational polynomial coefficients (RPCs), a physical sensor model, or ground control points. In this paper we propose a new epipolar resampling method which works without the need for this information. In the proposed method, automatic feature extraction algorithms are employed to generate corresponding features for registering stereo pairs, and the original images are divided into small tiles. By omitting the need for extra information, the speed of the matching algorithm is increased and the need for large temporary memory is decreased. Our experiments on a GeoEye-1 stereo pair captured over Qom city in Iran demonstrate that the epipolar images are generated with sub-pixel accuracy.

  3. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance can therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming, and the spectral correlation between color planes is exploited only moderately. The experimental results confirm that the proposed algorithm performs excellently on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency, providing a better tradeoff among adaptability, performance, and computational cost than the existing algorithms.

  4. Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.

    PubMed

    Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan

    2014-09-22

    A method for fast reconstruction of off-axis digital holograms based on a digital spatial multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM) approach, the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm reduces the two-dimensional (2-D) Fourier transforms (FTs) of four N×N arrays to the equivalent of 1.25 2-D FTs of one N×N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of fast preview systems for dynamic wavefront imaging in digital holography.
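
    A minimal sketch of the spatial multiplexing idea, assuming four holograms and hypothetical carrier tilts; the authors' actual tilt choices and transform bookkeeping are not reproduced here:

```python
import numpy as np

def spatial_multiplex(holograms, tilts, pixel):
    """Synthesize one SM function from several off-axis holograms (a sketch).
    Each hologram is multiplied by a tilted plane wave and the results are
    summed, so a single FFT separates their spectra. 'tilts' are (fx, fy)
    carrier shifts in cycles per metre; 'pixel' is the pixel pitch in metres."""
    ny, nx = holograms[0].shape
    y, x = np.mgrid[0:ny, 0:nx] * pixel
    sm = np.zeros((ny, nx), dtype=complex)
    for h, (fx, fy) in zip(holograms, tilts):
        sm += h * np.exp(2j * np.pi * (fx * x + fy * y))
    # one FFT of the combined function instead of four separate FFTs
    return np.fft.fftshift(np.fft.fft2(sm))
```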

  5. Phase retrieval with Fourier-weighted projections.

    PubMed

    Guizar-Sicairos, Manuel; Fienup, James R

    2008-03-01

    In coherent lensless imaging, the presence of image sidelobes, which arise as a natural consequence of the finite nature of the detector array, was recognized early as a convergence issue for phase retrieval algorithms that rely on an object support constraint. To mitigate the problem of truncated far-field measurement, a controlled analytic continuation by means of an iterative transform algorithm with weighted projections is proposed and tested. This approach avoids the use of sidelobe reduction windows and achieves full-resolution reconstructions.
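
    A rough sketch of an iterative transform loop with Fourier-weighted projections, under the assumption that a weight map in [0, 1] de-emphasizes the truncated region of the measured far-field magnitude; this is a generic error-reduction-style stand-in, not the authors' exact algorithm:

```python
import numpy as np

def weighted_fourier_projection(g, measured_mag, weight, support, n_iter=200):
    """Phase retrieval with Fourier-weighted projections (a sketch).
    g: initial object estimate; measured_mag: far-field magnitude;
    weight: per-sample confidence in [0, 1]; support: binary object mask."""
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # blend the measured magnitude in proportion to its weight,
        # leaving low-confidence (truncated) samples nearly free
        mag = weight * measured_mag + (1.0 - weight) * np.abs(G)
        G = mag * np.exp(1j * np.angle(G))
        g = np.real(np.fft.ifft2(G))
        g *= support  # object-domain support constraint
    return g
```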

  6. Simulations For Investigating the Contrast Mechanism of Biological Cells with High Frequency Scanning Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Juntarapaso, Yada

    Scanning acoustic microscopy (SAM) is one of the most powerful techniques for nondestructive evaluation and is a promising tool for characterizing the elastic properties of biological tissues and cells. Exploring single cells is important because there is a connection between single-cell biomechanics and human cancer. SAM has been accepted and extensively utilized for acoustical cellular and tissue imaging, including measurements of the mechanical and elastic properties of biological specimens. SAM provides superb advantages in that it is non-invasive, can measure the mechanical properties of biological cells or tissues, and requires no fixation or chemical staining. The first objective of this research is to develop a program for simulating the images and contrast mechanisms obtained by high-frequency SAM. Computer simulation algorithms based on MATLAB were built for simulating the images and contrast mechanisms. The mechanical properties of HeLa and MCF-7 cells were computed from measurements of the output signal amplitude as a function of the distance from the focal plane of the acoustic lens, known as V(z). Algorithms for simulating V(z) responses involved the calculation of the reflectance function and were created based on ray theory and wave theory. The second objective is to design transducer arrays for SAM. Theoretical simulations of high-frequency ultrasound array designs, based on the Field II program, were performed to enhance image resolution and volumetric imaging capability. Phased array beamforming with dynamic apodization and focusing was employed in the simulations. The new transducer array design will be state-of-the-art in improving the performance of SAM by electronic scanning and potentially providing 4-D images of the specimen.

  7. Inspection design using 2D phased array, TFM and cueMAP software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGilp, Ailidh; Dziewierz, Jerzy; Lardner, Tim

    2014-02-18

    A simulation suite, cueMAP, has been developed to facilitate the design of inspection processes and sparse 2D array configurations. At the core of cueMAP is a Total Focusing Method (TFM) imaging algorithm that enables computer-assisted design of ultrasonic inspection scenarios, including the design of bespoke array configurations to match the inspection criteria. This in-house developed TFM code allows for interactive evaluation of image quality indicators of ultrasonic imaging performance when utilizing a 2D phased array working in FMC/TFM mode. The cueMAP software uses a series of TFM images to build a map of the resolution, contrast, and sensitivity of the imaging performance for a simulated reflector swept across the inspection volume. The software takes into account probe properties, wedge or water standoff, and the effects of specimen curvature. In the validation process of this new software package, two 2D arrays have been evaluated on 304 stainless steel samples, typical of the primary circuit in nuclear plants. Thick-section samples have been inspected using a 1 MHz 2D matrix array. Due to the processing efficiency of the software, the data collected from these array configurations has been used to investigate the influence of sub-aperture operation on inspection performance.

  8. Next generation miniature simultaneous multi-hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Gupta, Neelam

    2014-03-01

    This paper presents the concept for a hyperspectral imaging system using a Fabry-Perot tunable filter (FPTF) array fabricated with micro-opto-electro-mechanical systems (MOEMS) technology [1]. Using an FPTF array for hyperspectral imaging considerably relaxes the wavelength tuning requirements, because each element in the array covers only a reduced portion of the spectrum. Pacific Advanced Technology and ARL present the results of a concept design and analysis of a MOEMS-based tunable Fabry-Perot filter array performing simultaneous multispectral and hyperspectral imaging with relatively high spatial resolution. The concept design was developed with the support of an Army SBIR Phase I program. The Fabry-Perot tunable MOEMS filter array was combined with a miniature optics array and a 1024 x 1024 pixel focal plane array to produce 16 colors in every camera frame. Each color image has a spatial resolution of 256 x 256 pixels with an IFOV of 1.7 mrad and a FOV of 25 degrees. The spectral images are collected simultaneously, giving high-resolution spectral-spatial-temporal information in each frame of the camera, thus enabling the implementation of real-time spectral-temporal-spatial algorithms that provide high sensitivity for the detection of weak signals in a high-clutter background with low sensitivity to camera motion. The challenge in the design was the independent actuation of each Fabry-Perot element in the array to allow individual tuning. An additional challenge was the need to maximize the fill factor to improve spatial coverage with minimal dead space. This paper only addresses the concept design and analysis of the Fabry-Perot tunable filter array; a previous paper presented at SPIE DSS in 2012 described the design of the optical array.

  9. Prototype Focal-Plane-Array Optoelectronic Image Processor

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey

    1995-01-01

    Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black-and-white) images. Each processing element corresponds to one picture element of the image and is located at that picture element. Includes input-plane photodetector in the form of a parasitic phototransistor that is part of the processing circuit. Output of each processing circuit is used to modulate one picture element in an output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform the image into a set of features suitable for high-level processing, e.g., recognition.

  10. LC-lens array with light field algorithm for 3D biomedical applications

    NASA Astrophysics Data System (ADS)

    Huang, Yi-Pai; Hsieh, Po-Yuan; Hassanfiroozi, Amir; Martinez, Manuel; Javidi, Bahram; Chu, Chao-Yu; Hsuan, Yun; Chu, Wen-Chun

    2016-03-01

    In this paper, a liquid crystal lens (LC-lens) array is utilized in 3D biomedical applications, including a 3D endoscope and a light field microscope. Compared with a conventional plastic lens array, which is usually placed in a 3D endoscope or light field microscope to record image disparity, the LC-lens array has the added flexibility of electrically changing its focal length. By using the LC-lens array, the working distance and image quality of the 3D endoscope and microscope can be enhanced. Furthermore, 2D/3D switching is achieved by simply turning the electrical power to the LC-lens array off or on. In the 3D endoscope case, a hexagonal micro LC-lens array with 350 um diameter elements was placed at the front end of a 1 mm diameter endoscope. With an electric field applied to the LC-lens array, the 3D specimen is recorded as if from seven micro-cameras with different disparities, from which a 3D reconstruction of the specimen can be computed. If the electric field on the LC-lens array is turned off, a conventional high-resolution 2D endoscope image is recorded instead. In the light field microscope case, the LC-lens array was placed in front of the CMOS sensor. Its main purpose is to extend the refocusing distance of the light field microscope, which is usually very narrow in focused light field microscope systems, by montaging many light field images sequentially focused on different depths. By adjusting the focal length of the LC-lens array from 2.4 mm to 2.9 mm, the refocusing distance was extended from 1 mm to 11.3 mm. Moreover, an LC wedge can be used to electrically shift the optical axis and increase the resolution of the light field.

  11. Diffraction-limited imaging with very large telescopes; Proceedings of the NATO Advanced Study Institute, Cargese, France, Sept. 13-23, 1988

    NASA Astrophysics Data System (ADS)

    Alloin, D. M.; Mariotti, J.-M.

    Recent advances in optics and observation techniques for very large astronomical telescopes are discussed in reviews and reports. Topics addressed include Fourier optics and coherence, optical propagation and image formation through a turbulent atmosphere, radio telescopes, continuously deformable telescopes for optical interferometry (I), amplitude estimation from speckle I, noise calibration of speckle imagery, and amplitude estimation from diluted-array I. Consideration is given to first-order imaging methods, speckle imaging with the PAPA detector and the Knox-Thompson algorithm, phase-closure imaging, real-time wavefront sensing and adaptive optics, differential I, astrophysical programs for high-angular-resolution optical I, cophasing telescope arrays, aperture synthesis for space observatories, and lunar occultations for milliarcsecond resolution.

  12. Accuracy of measurement of star images on a pixel array

    NASA Technical Reports Server (NTRS)

    King, I. R.

    1983-01-01

    Algorithms are developed for predicting the accuracy with which the brightness of a star can be determined from its image on a digital detector array, as a function of the brightness of the background. The assumption is made that a known profile is being fitted by least squares. The two profiles used correspond to ST images and to ground-based observations. The first result is an approximate rule of thumb for equivalent noise area. More rigorous results are then given in tabular form. The size of the pixels, relative to the image size, is taken into account. Astrometric accuracy is also discussed briefly; the error, relative to image size, is very similar to the photometric error relative to brightness.

  13. Infrared super-resolution imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei

    2014-03-01

    The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. The reconstruction premise is that the relative positions of the infrared objects in the low-resolution image sequence remain fixed, and the image restoration amounts to inverting an ill-posed problem without fixed rules. This limits the super-resolution reconstruction ability, the application area, and the stability of such algorithms. To this end, we propose a super-resolution reconstruction method based on compressed sensing. In this method, we select a Toeplitz matrix as the measurement matrix and realize it by a phase mask method. We investigate the complementary matching pursuit algorithm and select it as the recovery algorithm. In order to adapt to moving targets and decrease imaging time, we use an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling theorem and can greatly improve the spatial resolution of the infrared image. Image contrast and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have wide application prospects.
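
    For illustration of the recovery step, here is a sketch of standard orthogonal matching pursuit, used as a stand-in for the complementary matching pursuit variant the paper selects; A and the sparsity level k are illustrative inputs:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit (a sketch, not the paper's algorithm).
    A: measurement matrix (m x n), y: measurements, k: assumed sparsity.
    Returns a k-sparse estimate x with y approximately equal to A @ x."""
    residual, idx = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x[idx] = coef
    return x
```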

  14. Improvement of resolution in full-view linear-array photoacoustic computed tomography using a novel adaptive weighting method

    NASA Astrophysics Data System (ADS)

    Omidi, Parsa; Diop, Mamadou; Carson, Jeffrey; Nasiriavanaki, Mohammadreza

    2017-03-01

    Linear-array-based photoacoustic computed tomography is a popular methodology for deep, high-resolution imaging. However, issues such as phase aberration, side-lobe effects, and propagation limitations deteriorate the resolution. The effect of phase aberration, caused by acoustic attenuation and the assumption of a constant speed of sound (SoS), can be reduced by applying an adaptive weighting method such as the coherence factor (CF). Utilizing an adaptive beamforming algorithm such as minimum variance (MV) can improve the resolution at the focal point by suppressing the side lobes. Moreover, the invisibility of directional objects emitting parallel to the detection plane, such as vessels and other absorbing structures stretched in the direction perpendicular to the detection plane, can degrade resolution. In this study, we propose a full-view array-level weighting algorithm in which different weights are assigned to different positions of the linear array based on an orientation algorithm that uses the histogram of oriented gradients (HOG). Simulation results obtained from a synthetic phantom show the superior performance of the proposed method over the existing reconstruction methods.
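
    A minimal sketch of the coherence factor weighting mentioned above, computed for a block of reconstruction pixels from the delayed per-element signals (array shapes are illustrative):

```python
import numpy as np

def coherence_factor(delayed):
    """Coherence factor per reconstruction pixel (a minimal sketch).
    delayed: shape (N, n_pixels), the N per-element signals after focusing
    delays. CF = |coherent sum|^2 / (N * incoherent sum), in [0, 1]; the
    reconstructed value at each pixel is then scaled by its CF."""
    n = delayed.shape[0]
    coherent = np.abs(delayed.sum(axis=0)) ** 2
    incoherent = (np.abs(delayed) ** 2).sum(axis=0)
    return coherent / (n * incoherent + 1e-12)
```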

  15. Fiber array based hyperspectral Raman imaging for chemical selective analysis of malaria-infected red blood cells.

    PubMed

    Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten

    2015-09-24

    A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber into a top-hat profile with a very homogeneous intensity distribution that fulfills the conditions of Koehler illumination. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman-scattered light into separated spectral channels with very high contrast, and the Raman image can be correlated with a corresponding white-light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied to automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemically selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes.

  16. Experimental Demonstration of Adaptive Infrared Multispectral Imaging using Plasmonic Filter Array.

    PubMed

    Jang, Woo-Yong; Ku, Zahyun; Jeon, Jiyeon; Kim, Jun Oh; Lee, Sang Jun; Park, James; Noyola, Michael J; Urbas, Augustine

    2016-10-10

    In our previous theoretical study, we performed target detection using a plasmonic sensor array incorporating the data-processing technique termed "algorithmic spectrometry". We achieved the reconstruction of a target spectrum by extracting intensity at multiple wavelengths with high resolution from the image data obtained from the plasmonic array. The ultimate goal is to develop a full-scale focal plane array with a plasmonic opto-coupler in order to move towards the next generation of versatile infrared cameras. To this end, and as an intermediate step, this paper reports the experimental demonstration of adaptive multispectral imagery using fabricated plasmonic spectral filter arrays and proposed target detection scenarios. Each plasmonic filter was designed using periodic circular holes perforated through a gold layer, and an enhanced target detection strategy was proposed to refine the original spectrometry concept for spatial and spectral computation of the data measured from the plasmonic array. Both the spectrum of blackbody radiation and a metal ring object at multiple wavelengths were successfully reconstructed using the weighted superposition of plasmonic output images as specified in the proposed detection strategy. In addition, plasmonic filter arrays were theoretically tested on a target at extremely high temperature as a challenging scenario for the detection scheme.

  17. Three-Dimensional Medical Image Analysis Using Local Dynamic Algorithm Selection on a Multiple-Instruction, Multiple-Data Architecture

    DTIC Science & Technology

    1989-01-01

    Each volume element is represented by a number, called a Hounsfield Unit (HU), which represents the attenuation within the volume relative to the attenuation of the same volume of water. Hounsfield Unit values range from -1000 to +3000, with a value of zero assigned to the attenuation of water. A HU value of -1000 corresponds to air ... The data volume is represented by a 3D array. Each array element represents a single voxel, and the value of the array entry is the corresponding scaled Hounsfield Unit value.

  18. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    DTIC Science & Technology

    2018-03-10

    ... can be generated using only two sensors in the physical array. In case of redundancy in the difference coarray, there is more than one antenna pair that ... estimation results based on the MUSIC algorithm using multi-frequency co-prime arrays. Both proportional and nonproportional source spectra cases are ... be made in this case as well. However, two differences can be noticed by comparing the RMSE plots in Figs. 11 and 13. First, the RMSE takes on lower ...

  19. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
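
    A compact sketch of the third approach described above, assuming the PyWavelets package and simple hard thresholding in place of the full quantize-and-entropy-code chain:

```python
import numpy as np
import pywt  # PyWavelets

def compress_cube(cube, wavelet="db4", level=3, keep=0.05):
    """Three-dimensional wavelet transform coding of a hyperspectral cube
    (a sketch). cube: (rows, cols, bands) float array; 'keep' is the assumed
    fraction of coefficients retained; quantization/entropy coding omitted."""
    coeffs = pywt.wavedecn(cube, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # hard-threshold all but the largest coefficients
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr = pywt.threshold(arr, thresh, mode="hard")
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)
```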

  20. Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging

    PubMed Central

    Izquierdo, Alberto; Villacorta, Juan José; del Val Puente, Lara; Suárez, Luis

    2016-01-01

    This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of MEMS sensors was performed, and the beam pattern of a module, based on an 8 × 8 planar array and of several clusters of modules, was obtained. A flexible framework, formed by an FPGA, an embedded processor, a computer desktop, and a graphic processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person are presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system. PMID:27727174

  1. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    PubMed

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods.
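
    The interleaving described above can be sketched generically as follows, with a placeholder denoise callable standing in for the kernel-PCA block step (2-D single-coil case; all parameters are illustrative):

```python
import numpy as np

def iterative_recon(y, mask, denoise, n_iter=30, step=1.0):
    """Interleaved MR reconstruction (a sketch). y: undersampled k-space,
    mask: binary sampling pattern, denoise: artifact-removal step (the paper
    uses KPCA on image-block arrays; any denoiser fits this skeleton)."""
    x = np.fft.ifft2(y)
    for _ in range(n_iter):
        # gradient step enforcing consistency with acquired k-space samples
        residual = mask * (np.fft.fft2(x) - y)
        x = x - step * np.fft.ifft2(residual)
        # undersampling-artifact removal in the image domain
        x = denoise(x)
    return x
```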

  2. Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.

    PubMed

    Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan

    2013-10-21

    We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.

  3. Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array

    PubMed Central

    Navruz, Isa; Coskun, Ahmet F.; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan

    2013-01-01

    We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ∼9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ∼3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also gets rid of spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears. PMID:23939637

  4. Implementation of Multispectral Image Classification on a Remote Adaptive Computer

    NASA Technical Reports Server (NTRS)

    Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna

    1999-01-01

    As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays (FPGAs) enable the implementation of algorithms at the hardware gate level, leading to orders of magnitude performance increase over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.

  5. Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images

    PubMed Central

    Izquierdo, Alberto; Suárez, Luis; Suárez, David

    2017-01-01

    Using arrays of digital MEMS (Micro-Electro-Mechanical System) microphones with FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows building systems with hundreds of sensors at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. The virtual array is implemented by moving a physical array of 64 (8 × 8) microphones over a grid of 10 × 10 positions using a 2D positioning system, yielding a spatial aperture of 1 × 1 m2. Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array have been analyzed; the beamforming algorithms must assume spherical waves because of the large dimensions of the array in comparison with the distance between the target (a mannequin) and the array. Finally, acoustic images of the mannequin, obtained for different frequency and range values, show high angular resolution and the possibility of identifying different parts of the mannequin's body.

  6. NeuroSeek dual-color image processing infrared focal plane array

    NASA Astrophysics Data System (ADS)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, on-focal-plane biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration (VLSI) chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  7. MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera

    NASA Astrophysics Data System (ADS)

    Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.

    2017-10-01

    An inexpensive upconverting MMW/THz imaging method is suggested here, based on a glow discharge detector (GDD) and a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and a GDD focal plane array can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, scanning-based three-dimensional imaging systems prohibit real-time operation; this is easily and economically solved using a GDD array, which enables acquiring distance and magnitude information from all GDD pixels in the array simultaneously. The 3D image can then be obtained using methods such as frequency-modulated continuous-wave (FMCW) direct chirp modulation and time-of-flight (TOF) measurement.

  8. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the fixed pattern noise (FPN) caused by nonuniformity using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with a newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method achieves high correction performance, as validated quantitatively by experiments with a simulated test sequence and a real infrared image sequence.
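
    A minimal sketch of one NLMS-style correction step, assuming the usual NN-NUC choice of a local spatial average as the desired target; the paper's variable-step-size rule is reduced here to a fixed mu:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nlms_nuc_step(frame, gain, offset, mu=0.1, eps=1e-6):
    """One NLMS update of per-pixel gain/offset (a sketch, not the paper's
    exact VSS rule or NN structure). frame, gain, offset: float arrays."""
    corrected = gain * frame + offset
    target = uniform_filter(corrected, size=3)  # local-average desired value
    error = corrected - target
    norm = frame**2 + 1.0 + eps                 # per-pixel input energy
    gain -= (mu / norm) * error * frame
    offset -= (mu / norm) * error
    return gain, offset, corrected
```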

  9. Optimization of sparse synthetic transmit aperture imaging with coded excitation and frequency division.

    PubMed

    Behar, Vera; Adam, Dan

    2005-12-01

    An effective aperture approach is used for optimization of a sparse synthetic transmit aperture (STA) imaging system with coded excitation and frequency division. A new two-stage algorithm is proposed for optimization of both the positions of the transmit elements and the weights of the receive elements. In order to increase the signal-to-noise ratio in a synthetic aperture system, temporal encoding of the excitation signals is employed. When comparing excitation by linear frequency modulation (LFM) signals and phase shift key modulation (PSKM) signals, the analysis shows that chirps are better for excitation, since at the output of a compression filter the sidelobes generated are much smaller than those produced by the binary PSKM signals. Here, an implementation of fast STA imaging is studied via spatial encoding with frequency division of the LFM signals. The proposed system employs a 64-element array with only four active elements used during transmit. The two-dimensional point spread function (PSF) produced by such a sparse STA system is compared to the PSF produced by an equivalent phased array system, using the Field II simulation program. The analysis demonstrates the superiority of the new sparse STA imaging system while using coded excitation and frequency division. Compared to a conventional phased array imaging system, this system acquires images of equivalent quality 60 times faster, when the transmit elements are fired in pairs consecutively and the power level used during transmit is very low. The fastest acquisition time is achieved when all transmit elements are fired simultaneously, which improves detectability, but at the cost of a slight degradation of the axial resolution. In real-time implementation, however, it must be borne in mind that the frame rate of a STA imaging system depends not only on the acquisition time of the data but also on the processing time needed for image reconstruction. Compared to phased array imaging, a significant increase in the frame rate of a STA imaging system is possible if and only if an equivalently time-efficient algorithm is used for image reconstruction.

  10. Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter.

    PubMed

    Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo

    2017-02-01

    In uncooled long-wave IR camera systems, the temperature of a focal plane array (FPA) is variable along with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly affected by the FPA temperature, obviously changes as well, resulting in reduced image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are called to update the current offset parameters via a temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [minimizing the sum of the squares of errors algorithm (MSSE), template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly trace the response drift of the detector units when the FPA temperature changes. The quality of the proposed algorithm is as good as MSSE, while the processing time is as short as TBS, which means the proposed algorithm is good for real-time control and at the same time has a high correction effect.
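
    A sketch of the offset-table lookup described above, assuming linear interpolation between the two calibration temperatures bracketing the current FPA temperature:

```python
import numpy as np

def offsets_at_temperature(t_fpa, table_temps, offset_tables):
    """Interpolate per-pixel offsets for the current FPA temperature (a
    sketch). table_temps: sorted 1-D array of calibration temperatures;
    offset_tables: per-pixel offset maps, shape (len(table_temps), H, W)."""
    i = np.searchsorted(table_temps, t_fpa)
    i = int(np.clip(i, 1, len(table_temps) - 1))
    t0, t1 = table_temps[i - 1], table_temps[i]
    w = (t_fpa - t0) / (t1 - t0)
    return (1.0 - w) * offset_tables[i - 1] + w * offset_tables[i]

# the correction itself then follows the two-point linear model:
# corrected = gain * raw + offsets_at_temperature(t_fpa, temps, tables)
```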

  11. Superresolution with the focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Chunev, Georgi; Lumsdaine, Andrew

    2011-03-01

    Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.

  12. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. Computational polarization difference imaging, which replaces the mechanical rotation of the polarization analyzer and shortens the time spent selecting the optimum orthogonal ∥ and ⊥ axes, is an improvement on conventional PDI, but it originally obtained the output image by manually setting the weight coefficient to an identical constant for all pixels. In this paper, an algorithm is proposed to combine the Q and U parameters of the Stokes vector through pixel-level image fusion based on the nonsubsampled contourlet transform. An experimental system, built from a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to capture target images at different angles, is used to verify the proposed algorithm. The results show that the output of our algorithm reveals more details of the flat target and has higher contrast than the original computational polarization difference imaging.

  13. Total focusing method with correlation processing of antenna array signals

    NASA Astrophysics Data System (ADS)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of the complete set of antenna array signals used in the image reconstruction algorithm. Experimental results of 3D reconstruction of various reflectors, with and without correlation processing, are presented. The software 'IDealSystem3D' by IDeal-Technologies was used for the experiments. Copper wires of different diameters placed in a water bath were used as reflectors. Correlation processing yields a more accurate reconstruction of the reflector image and increases the signal-to-noise ratio. The experimental results were processed using an original program that allows varying the parameters of the antenna array and the sampling frequency.
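
    Since the article builds on total-focusing-style reconstruction from a full set of array signals, a minimal delay-and-sum TFM sketch over a full matrix capture may help; nearest-sample lookup replaces interpolation, and the article's correlation preprocessing step is omitted:

```python
import numpy as np

def tfm_image(fmc, coords, grid, c, fs):
    """Total focusing method over a full matrix capture (a sketch).
    fmc[tx, rx, t]: inter-element signals; coords: (N, 2) element positions;
    grid: (M, 2) image-point positions; c: wave speed; fs: sampling rate.
    All names and shapes here are illustrative."""
    n = coords.shape[0]
    image = np.zeros(grid.shape[0])
    # distances from every image point to every element, shape (M, N)
    dist = np.linalg.norm(grid[:, None, :] - coords[None, :, :], axis=2)
    for tx in range(n):
        for rx in range(n):
            delay = (dist[:, tx] + dist[:, rx]) / c  # round-trip time
            idx = np.clip((delay * fs).astype(int), 0, fmc.shape[2] - 1)
            image += fmc[tx, rx, idx]
    return np.abs(image)
```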

  14. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    NASA Technical Reports Server (NTRS)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  15. In-TFT-array-process micro defect inspection using nonlinear principal component analysis.

    PubMed

    Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang

    2009-11-20

    Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacturing and has received much attention in the field of automatic optical inspection (AOI). Previously, most attention was focused on macro-scale Mura-defect detection in the cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those introduced in the TFT array process, the first process in TFT-LCD manufacturing. Defect inspection in the TFT array process is therefore a demanding task. This paper presents a novel inspection scheme based on the kernel principal component analysis (KPCA) algorithm, a nonlinear version of the well-known PCA algorithm. The inspection scheme can not only detect defects in the images captured from the surface of LCD panels, but also recognize the types of the detected defects automatically. Results, based on real images provided by an LCD manufacturer in Taiwan, indicate that the KPCA-based defect inspection scheme is able to achieve a defect detection rate of over 99% and a high defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image.

  16. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information that must be stored without any distortion. This paper presents a 3D-watermark-based medical image watermarking scheme that embeds the watermark in the non-ROI regions of the image while preserving the ROI. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D EIA data are then embedded into the host image. Watermark extraction is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed by the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each of which possesses its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  17. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.

  18. Model Based Reconstruction of UT Array Data

    NASA Astrophysics Data System (ADS)

    Calmon, P.; Iakovleva, E.; Fidahoussen, A.; Ribay, G.; Chatillon, S.

    2008-02-01

    Beyond the detection of defects, their characterization (identification, positioning, sizing) is one goal of great importance often assigned to the analysis of NDT data. The first step of such analysis in the case of ultrasonic testing amounts to image in the part the detected echoes. This operation is in general achieved by considering time of flights and by applying simplified algorithms which are often valid only on canonical situations. In this communication we present an overview of different imaging techniques studied at CEA LIST and based on the exploitation of direct models which enable to address complex configurations and are available in the CIVA software plat-form. We discuss in particular ray-model based algorithms, algorithms derived from classical synthetic focusing and processing of the full inter-element matrix (MUSIC algorithm).

  19. Experimental Demonstration of Adaptive Infrared Multispectral Imaging using Plasmonic Filter Array

    PubMed Central

    Jang, Woo-Yong; Ku, Zahyun; Jeon, Jiyeon; Kim, Jun Oh; Lee, Sang Jun; Park, James; Noyola, Michael J.; Urbas, Augustine

    2016-01-01

    In our previous theoretical study, we performed target detection using a plasmonic sensor array incorporating the data-processing technique termed “algorithmic spectrometry”. We achieved the reconstruction of a target spectrum by extracting intensity at multiple wavelengths with high resolution from the image data obtained from the plasmonic array. The ultimate goal is to develop a full-scale focal plane array with a plasmonic opto-coupler in order to move towards the next generation of versatile infrared cameras. To this end, and as an intermediate step, this paper reports the experimental demonstration of adaptive multispectral imagery using fabricated plasmonic spectral filter arrays and proposed target detection scenarios. Each plasmonic filter was designed using periodic circular holes perforated through a gold layer, and an enhanced target detection strategy was proposed to refine the original spectrometry concept for spatial and spectral computation of the data measured from the plasmonic array. Both the spectrum of blackbody radiation and a metal ring object at multiple wavelengths were successfully reconstructed using the weighted superposition of plasmonic output images as specified in the proposed detection strategy. In addition, plasmonic filter arrays were theoretically tested on a target at extremely high temperature as a challenging scenario for the detection scheme. PMID:27721506

  20. Volumetric visualization algorithm development for an FPGA-based custom computing machine

    NASA Astrophysics Data System (ADS)

    Sallinen, Sami J.; Alakuijala, Jyrki; Helminen, Hannu; Laitinen, Joakim

    1998-05-01

    Rendering volumetric medical images is a burdensome computational task for contemporary computers due to the large size of the data sets. Custom designed reconfigurable hardware could considerably speed up volume visualization if an algorithm suitable for the platform is used. We present an algorithm and speedup techniques for visualizing volumetric medical CT and MR images with a custom-computing machine based on a Field Programmable Gate Array (FPGA). We also present simulated performance results of the proposed algorithm calculated with a software implementation running on a desktop PC. Our algorithm is capable of generating perspective projection renderings of single and multiple isosurfaces with transparency, simulated X-ray images, and Maximum Intensity Projections (MIP). Although more speedup techniques exist for parallel projection than for perspective projection, we have constrained ourselves to perspective viewing, because of its importance in the field of radiotherapy. The algorithm we have developed is based on ray casting, and the rendering is sped up by three different methods: shading speedup by gradient precalculation, a new generalized version of Ray-Acceleration by Distance Coding (RADC), and background ray elimination by speculative ray selection.

  1. Enhanced linear-array photoacoustic beamforming using modified coherence factor.

    PubMed

    Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador

    2018-02-01

    Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using DAS beamformer leads to low-resolution images as well as high sidelobes due to nondesired contribution of off-axis signals. Coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, to mainly improve the contrast. We demonstrate that the numerator of the formula of CF contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer instead of the available DAS on the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF leads to for about 45% and 40% improvement, in comparison with CF, in the terms of signal-to-noise ratio and full-width-half-maximum, respectively. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  2. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.

  3. Using phase information to enhance speckle noise reduction in the ultrasonic NDE of coarse grain materials

    NASA Astrophysics Data System (ADS)

    Lardner, Timothy; Li, Minghui; Gachagan, Anthony

    2014-02-01

    Materials with a coarse grain structure are becoming increasingly prevalent in industry due to their resilience to stress and corrosion. These materials are difficult to inspect with ultrasound because reflections from the grains lead to high noise levels which hinder the echoes of interest. Spatially Averaged Sub-Aperture Correlation Imaging (SASACI) is an advanced array beamforming technique that uses the cross-correlation between images from array sub-apertures to generate an image weighting matrix, in order to reduce noise levels. This paper presents a method inspired by SASACI to further improve imaging using phase information to refine focusing and reduce noise. A-scans from adjacent array elements are cross-correlated using both signal amplitude and phase to refine delay laws and minimize phase aberration. The phase-based and amplitude-based corrected images are used as inputs to a two-dimensional cross-correlation algorithm that will output a weighting matrix that can be applied to any conventional image. This approach was validated experimentally using a 5MHz array a coarse grained Inconel 625 step wedge, and compared to the Total Focusing Method (TFM). Initial results have seen SNR improvements of over 20dB compared to TFM, and a resolution that is much higher.

  4. An algorithm for automatic crystal identification in pixelated scintillation detectors using thin plate splines and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L.

    2016-02-01

    A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position sensitive detector. Such detectors using light sharing to read out crystal elements require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409 crystal dual-layer offset LYSO crystal array readout by a 32 pixel SiPM array. For these detector flood images, depending on user defined input parameters, the algorithm runtime ranged between 17.5-82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial template model used.

  5. Polarization Smoothing Generalized MUSIC Algorithm with Polarization Sensitive Array for Low Angle Estimation.

    PubMed

    Tan, Jun; Nie, Zaiping

    2018-05-12

    Direction of Arrival (DOA) estimation of low-altitude targets is difficult due to the multipath coherent interference from the ground reflection image of the targets, especially for very high frequency (VHF) radars, which have antennae that are severely restricted in terms of aperture and height. The polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing and generalized MUSIC algorithm for polarization sensitive arrays (PSAs), was proposed to solve this problem in this paper. Firstly, the polarization smoothing pre-processing was exploited to eliminate the coherence between the direct and the specular signals. Secondly, we constructed the generalized MUSIC algorithm for low angle estimation. Finally, based on the geometry information of the symmetry multipath model, the proposed algorithm was introduced to convert the two-dimensional searching into one-dimensional searching, thus reducing the computational burden. Numerical results were provided to verify the effectiveness of the proposed method, showing that the proposed algorithm has significantly improved angle estimation performance in the low-angle area compared with the available methods, especially when the grazing angle is near zero.

  6. A Reconstruction Algorithm for Breast Cancer Imaging With Electrical Impedance Tomography in Mammography Geometry

    PubMed Central

    Kao, Tzu-Jen; Isaacson, David; Saulnier, Gary J.; Newell, Jonathan C.

    2009-01-01

    The conductivity and permittivity of breast tumors are known to differ significantly from those of normal breast tissues, and electrical impedance tomography (EIT) is being studied as a modality for breast cancer imaging to exploit these differences. At present, X-ray mammography is the primary standard imaging modality used for breast cancer screening in clinical practice, so it is desirable to study EIT in the geometry of mammography. This paper presents a forward model of a simplified mammography geometry and a reconstruction algorithm for breast tumor imaging using EIT techniques. The mammography geometry is modeled as a rectangular box with electrode arrays on the top and bottom planes. A forward model for the electrical impedance imaging problem is derived for a homogeneous conductivity distribution and is validated by experiment using a phantom tank. A reconstruction algorithm for breast tumor imaging based on a linearization approach and the proposed forward model is presented. It is found that the proposed reconstruction algorithm performs well in the phantom experiment, and that the locations of a 5-mm-cube metal target and a 6-mm-cube agar target could be recovered at a target depth of 15 mm using a 32 electrode system. PMID:17405377

  7. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.

  8. Nanohole-array-based device for 2D snapshot multispectral imaging

    PubMed Central

    Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.

    2013-01-01

    We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065

  9. Improved Line Tracing Methods for Removal of Bad Streaks Noise in CCD Line Array Image—A Case Study with GF-1 Images

    PubMed Central

    Wang, Bo; Bao, Jianwei; Wang, Shikui; Wang, Houjun; Sheng, Qinghong

    2017-01-01

    Remote sensing images could provide us with tremendous quantities of large-scale information. Noise artifacts (stripes), however, made the images inappropriate for vitalization and batch process. An effective restoration method would make images ready for further analysis. In this paper, a new method is proposed to correct the stripes and bad abnormal pixels in charge-coupled device (CCD) linear array images. The method involved a line tracing method, limiting the location of noise to a rectangular region, and corrected abnormal pixels with the Lagrange polynomial algorithm. The proposed detection and restoration method were applied to Gaofen-1 satellite (GF-1) images, and the performance of this method was evaluated by omission ratio and false detection ratio, which reached 0.6% and 0%, respectively. This method saved 55.9% of the time, compared with traditional method. PMID:28441754

  10. Holographic imaging of natural-fiber-containing materials

    DOEpatents

    Bunch, Kyle J [Richland, WA; Tucker, Brian J [Pasco, WA; Severtsen, Ronald H [Richland, WA; Hall, Thomas E [Kennewick, WA; McMakin, Douglas L [Richland, WA; Lechelt, Wayne M [West Richland, WA; Griffin, Jeffrey W [Kennewick, WA; Sheen, David M [Richland, WA

    2010-12-21

    The present invention includes methods and apparatuses for imaging material properties in natural-fiber-containing materials. In particular, the images can provide quantified measures of localized moisture content. Embodiments of the invention utilize an array of antennas and at least one transceiver to collect amplitude and phase data from radiation interacting with the natural-fiber-containing materials. The antennas and the transceivers are configured to transmit and receive electromagnetic radiation at one or more frequencies, which are between 50 MHz and 1 THz. A conveyance system passes the natural-fiber-containing materials through a field of view of the array of antennas. A computing device is configured to apply a synthetic imaging algorithm to construct a three-dimensional image of the natural-fiber-containing materials that provides a quantified measure of localized moisture content. The image and the quantified measure are both based on the amplitude data, the phase data, or both.

  11. Recursive algorithms for bias and gain nonuniformity correction in infrared videos.

    PubMed

    Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R

    2012-12-01

    Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN) that degrades image quality, which is also known as spatial nonuniformity. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-square and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. The synthetic and real IRFPA videos experimentally show that the proposed solutions are competitive with the state-of-the-art in FPN reduction, by presenting recovered images with higher fidelity.

  12. A refraction-corrected tomographic algorithm for immersion laser-ultrasonic imaging of solids with piecewise linear surface profile

    NASA Astrophysics Data System (ADS)

    Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.

    2018-05-01

    In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.

  13. Highly accelerated acquisition and homogeneous image reconstruction with rotating RF coil array at 7T-A phantom based study.

    PubMed

    Li, Mingyan; Zuo, Zhentao; Jin, Jin; Xue, Rong; Trakic, Adnan; Weber, Ewald; Liu, Feng; Crozier, Stuart

    2014-03-01

    Parallel imaging (PI) is widely used for imaging acceleration by means of coil spatial sensitivities associated with phased array coils (PACs). By employing a time-division multiplexing technique, a single-channel rotating radiofrequency coil (RRFC) provides an alternative method to reduce scan time. Strategically combining these two concepts could provide enhanced acceleration and efficiency. In this work, the imaging acceleration ability and homogeneous image reconstruction strategy of 4-element rotating radiofrequency coil array (RRFCA) was numerically investigated and experimental validated at 7T with a homogeneous phantom. Each coil of RRFCA was capable of acquiring a large number of sensitivity profiles, leading to a better acceleration performance illustrated by the improved geometry-maps that have lower maximum values and more uniform distributions compared to 4- and 8-element stationary arrays. A reconstruction algorithm, rotating SENSitivity Encoding (rotating SENSE), was proposed to provide image reconstruction. Additionally, by optimally choosing the angular sampling positions and transmit profiles under the rotating scheme, phantom images could be faithfully reconstructed. The results indicate that, the proposed technique is able to provide homogeneous reconstructions with overall higher and more uniform signal-to-noise ratio (SNR) distributions at high reduction factors. It is hoped that, by employing the high imaging acceleration and homogeneous imaging reconstruction ability of RRFCA, the proposed method will facilitate human imaging for ultra high field MRI. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    DTIC Science & Technology

    2016-04-01

    Interferometry 1.1 Chapter Overview In this Section, we introduce the physics -based principles of optical interferometry, thereby providing a foundation for...particular physical structure (i.e. the existence of a certain type of loop in the interferometric graph), and provide a simple algorithm for identifying...mathematical conditions for wrap invariance to a physical condition on aperture placement is more intuitive when considering the raw phase measurements as

  15. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    DTIC Science & Technology

    2016-04-01

    Chapter 1 Fundamentals of Optical Interferometry 1.1 Chapter Overview In this chapter, we introduce the physics -based principles of optical...particular physical structure (i.e. the existence of a certain type of loop in the interferometric graph), and provide a simple algorithm for... physical condition on aperture placement is more intuitive when considering the raw phase measurements as opposed to their closures. For this reason

  16. Double-Stage Delay Multiply and Sum Beamforming Algorithm: Application to Linear-Array Photoacoustic Imaging.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-01-01

    Photoacoustic imaging (PAI) is an emerging medical imaging modality capable of providing high spatial resolution of Ultrasound (US) imaging and high contrast of optical imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in PAI. However, using DAS beamformer leads to low resolution images and considerable contribution of off-axis signals. A new paradigm namely delay-multiply-and-sum (DMAS), which was originally used as a reconstruction algorithm in confocal microwave imaging, was introduced to overcome the challenges in DAS. DMAS was used in PAI systems and it was shown that this algorithm results in resolution improvement and sidelobe degrading. However, DMAS is still sensitive to high levels of noise, and resolution improvement is not satisfying. Here, we propose a novel algorithm based on DAS algebra inside DMAS formula expansion, double stage DMAS (DS-DMAS), which improves the image resolution and levels of sidelobe, and is much less sensitive to high level of noise compared to DMAS. The performance of DS-DMAS algorithm is evaluated numerically and experimentally. The resulted images are evaluated qualitatively and quantitatively using established quality metrics including signal-to-noise ratio (SNR), full-width-half-maximum (FWHM) and contrast ratio (CR). It is shown that DS-DMAS outperforms DAS and DMAS at the expense of higher computational load. DS-DMAS reduces the lateral valley for about 15 dB and improves the SNR and FWHM better than 13% and 30%, respectively. Moreover, the levels of sidelobe are reduced for about 10 dB in comparison with those in DMAS.

  17. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.

  18. FPGA implementation of image dehazing algorithm for real time applications

    NASA Astrophysics Data System (ADS)

    Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.

    2017-09-01

    Weather degradation such as haze, fog, mist, etc. severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomena, which makes this problem non trivial. Dehazing is an essential preprocessing stage in applications such as long range imaging, border security, intelligent transportation system, etc. However, these applications require low latency of the preprocessing block. In this work, single image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although conventional single image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two stage image dehazing architecture is introduced, wherein, dark channel and airlight are estimated in the first stage. Whereas, transmission map and intensity restoration are computed in the next stages. The algorithm is implemented using Xilinx Vivado software and validated by using Xilinx zc702 development board, which contains an Artix7 equivalent Field Programmable Gate Array (FPGA) and ARM Cortex A9 dual core processor. Additionally, high definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second for the image resolution of 1920x1080 which is suitable of real time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.

  19. Locating and Quantifying Broadband Fan Sources Using In-Duct Microphones

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Walker, Bruce E.; Sutliff, Daniel L.

    2010-01-01

    In-duct beamforming techniques have been developed for locating broadband noise sources on a low-speed fan and quantifying the acoustic power in the inlet and aft fan ducts. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. Several of the blades were modified to provide a broadband source to evaluate the efficacy of the in-duct beamforming technique. Phased arrays consisting of rings and line arrays of microphones were employed. For the imaging, the data were mathematically resampled in the frame of reference of the rotating fan. For both the imaging and power measurement steps, array steering vectors were computed using annular duct modal expansions, selected subsets of the cross spectral matrix elements were used, and the DAMAS and CLEAN-SC deconvolution algorithms were applied.

  20. Portable Ultrasound Imaging of the Brain for Use in Forward Battlefield Areas

    DTIC Science & Technology

    2011-03-01

    ultrasound measurement of skull thickness and sound speed, phase correction of beam distortion, the tomographic reconstruction algorithm, and the final...produce a coherent imaging source. We propose a corrective technique that will use ultrasound-based phased -array beam correction [3], optimized...not expected to be a significant factor in the ability to phase -correct the imaging beam . In addition to planning (2.2.1), the data is also be used

  1. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  2. Submillimeter video imaging with a superconducting bolometer array

    NASA Astrophysics Data System (ADS)

    Becker, Daniel Thomas

    Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.

  3. A coherent through-wall MIMO phased array imaging radar based on time-duplexed switching

    NASA Astrophysics Data System (ADS)

    Chen, Qingchao; Chetty, Kevin; Brennan, Paul; Lok, Lai Bun; Ritchie, Matthiew; Woodbridge, Karl

    2017-05-01

    Through-the-Wall (TW) radar sensors are gaining increasing interest for security, surveillance and search and rescue applications. Additionally, the integration of Multiple-Input, Multiple-Output (MIMO) techniques with phased array radar is allowing higher performance at lower cost. In this paper we present a 4-by-4 TW MIMO phased array imaging radar operating at 2.4 GHz with 200 MHz bandwidth. To achieve high imaging resolution in a cost-effective manner, the 4 Tx and 4 Rx elements are used to synthesize a uniform linear array (ULA) of 16 virtual elements. Furthermore, the transmitter is based on a single-channel 4-element time-multiplexed switched array. In transmission, the radar utilizes frequency modulated continuous wave (FMCW) waveforms that undergo de-ramping on receive to allow digitization at relatively low sampling rates, which then simplifies the imaging process. This architecture has been designed for the short-range TW scenarios envisaged, and permits sufficient time to switch between antenna elements. The paper first outlines the system characteristics before describing the key signal processing and imaging algorithms which are based on traditional Fast Fourier Transform (FFT) processing. These techniques are implemented in LabVIEW software. Finally, we report results from an experimental campaign that investigated the imaging capabilities of the system and demonstrated the detection of personnel targets. Moreover, we show that multiple targets within a room with greater than approximately 1 meter separation can be distinguished from one another.

  4. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    PubMed Central

    Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337–43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring technique currently exists. PMID:23807573

  5. RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error.

    PubMed

    Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize

    2018-06-01

    The sub-pixel arrangement of the RGBG panel and the image with RGB format are different and the algorithm that converts RGB to RGBG is urgently needed to display an image with RGB arrangement on the RGBG panel. However, the information loss is still large although color fringing artifacts are weakened in the published papers that study this conversion. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) the edge detection is first proposed to distinguish image details with serious color fringing artifacts and image details which are prone to be lost in the process of RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, the weighting factor 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the process of RGB-RGBG conversion, a special mechanism to minimize square error is proposed. The experiment shows that the color fringing artifacts are slightly improved by EDMSE, and the values of MSE of the image processed are 19.6% and 7% smaller than those of the image processed by the direct assignment and weighting factor algorithm, respectively. The proposed algorithm is implemented on a field programmable gate array to enable the image display on the RGBG panel.

  6. [Near infrared spectroscopy system structure with MOEMS scanning mirror array].

    PubMed

    Luo, Biao; Wen, Zhi-Yu; Wen, Zhong-Quan; Chen, Li; Qian, Rong-Rong

    2011-11-01

    A method which uses MOEMS mirror array optical structure to reduce the high cost of infrared spectrometer is given in the present paper. This method resolved the problem that MOEMS mirror array can not be used in simple infrared spectrometer because the problem of imaging irregularity in infrared spectroscopy and a new structure for spectral imaging was designed. According to the requirements of imaging spot, this method used optical design software ZEMAX and standard-specific aberrations of the optimization algorithm, designed and optimized the optical structure. It works from 900 to 1 400 nm. The results of design analysis showed that with the light source slit width of 50 microm, the spectrophotometric system is superior to the theoretical resolution of 6 nm, and the size of the available spot is 0.042 mm x 0.08 mm. Verification examples show that the design meets the requirements of the imaging regularity, and can be used for MOEMS mirror reflectance scan. And it was also verified that the use of a new MOEMS mirror array spectrometer model is feasible. Finally, analyze the relationship between the location of the detector and the maximum deflection angle of micro-mirror was analyzed.

  7. Super-resolution for imagery from integrated microgrid polarimeters.

    PubMed

    Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M

    2011-07-04

    Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.

  8. 4-D ultrafast shear-wave imaging.

    PubMed

    Gennisson, Jean-Luc; Provost, Jean; Deffieux, Thomas; Papadacci, Clément; Imbault, Marion; Pernot, Mathieu; Tanter, Mickael

    2015-06-01

    Over the last ten years, shear wave elastography (SWE) has seen considerable development and is now routinely used in clinics to provide mechanical characterization of tissues to improve diagnosis. The most advanced technique relies on the use of an ultrafast scanner to generate and image shear waves in real time in a 2-D plane at several thousands of frames per second. We have recently introduced 3-D ultrafast ultrasound imaging to acquire with matrix probes the 3-D propagation of shear waves generated by a dedicated radiation pressure transducer in a single acquisition. In this study, we demonstrate 3-D SWE based on ultrafast volumetric imaging in a clinically applicable configuration. A 32 × 32 matrix phased array driven by a customized, programmable, 1024-channel ultrasound system was designed to perform 4-D shear-wave imaging. A matrix phased array was used to generate and control in 3-D the shear waves inside the medium using the acoustic radiation force. The same matrix array was used with 3-D coherent plane wave compounding to perform high-quality ultrafast imaging of the shear wave propagation. Volumetric ultrafast acquisitions were then beamformed in 3-D using a delay-and-sum algorithm. 3-D volumetric maps of the shear modulus were reconstructed using a time-of-flight algorithm based on local multiscale cross-correlation of shear wave profiles in the three main directions using directional filters. Results are first presented in an isotropic homogeneous and elastic breast phantom. Then, a full 3-D stiffness reconstruction of the breast was performed in vivo on healthy volunteers. This new full 3-D ultrafast ultrasound system paves the way toward real-time 3-D SWE.

  9. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles

    PubMed Central

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe

    2017-01-01

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l’information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N-th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work. PMID:28718788

  10. Implementation of an IMU Aided Image Stacking Algorithm in a Digital Camera for Unmanned Aerial Vehicles.

    PubMed

    Audi, Ahmad; Pierrot-Deseilligny, Marc; Meynard, Christophe; Thom, Christian

    2017-07-18

    Images acquired with a long exposure time using a camera embedded on UAVs (Unmanned Aerial Vehicles) exhibit motion blur due to the erratic movements of the UAV. The aim of the present work is to be able to acquire several images with a short exposure time and use an image processing algorithm to produce a stacked image with an equivalent long exposure time. Our method is based on the feature point image registration technique. The algorithm is implemented on the light-weight IGN (Institut national de l'information géographique) camera, which has an IMU (Inertial Measurement Unit) sensor and an SoC (System on Chip)/FPGA (Field-Programmable Gate Array). To obtain the correct parameters for the resampling of the images, the proposed method accurately estimates the geometrical transformation between the first and the N -th images. Feature points are detected in the first image using the FAST (Features from Accelerated Segment Test) detector, then homologous points on other images are obtained by template matching using an initial position benefiting greatly from the presence of the IMU sensor. The SoC/FPGA in the camera is used to speed up some parts of the algorithm in order to achieve real-time performance as our ultimate objective is to exclusively write the resulting image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, resource usage summary, resulting processing time, resulting images and block diagrams of the described architecture. The resulting stacked image obtained for real surveys does not seem visually impaired. An interesting by-product of this algorithm is the 3D rotation estimated by a photogrammetric method between poses, which can be used to recalibrate in real time the gyrometers of the IMU. Timing results demonstrate that the image resampling part of this algorithm is the most demanding processing task and should also be accelerated in the FPGA in future work.

  11. Fast uncooled module 32×32 array of polycrystalline PbSe used for muzzle flash detection

    NASA Astrophysics Data System (ADS)

    Kastek, Mariusz; Dulski, Rafał; Trzaskawka, Piotr; Bieszczad, Grzegorz

    2011-06-01

    The paper presents some aspects of muzzle flash detection using low resolution polycrystalline PbSe uncooled 32×32 detectors array. This system for muzzle flash detection works in MWIR (3 - 5 microns) region and it is based on VPD (Vapor Phase Deposition) technology. The low density uncooled 32×32 array is suitable for being used in low cost IR imagers sensitive in the MWIR band with frame rates exceeding 1.000 Hz. The FPA detector, read-out electronics and processing electronics (allowing the implementation of some algorithms for muzzle flash detection) has been presented. The system has been tested at field test ground. Results of detection range measurement with two types of optical systems (wide and narrow field of view) have been shown. The initial results of testing of some algorithms for muzzle flash detection have been also presented.

  12. In-TFT-Array-Process Micro Defect Inspection Using Nonlinear Principal Component Analysis

    PubMed Central

    Liu, Yi-Hung; Wang, Chi-Kai; Ting, Yung; Lin, Wei-Zhi; Kang, Zhi-Hao; Chen, Ching-Shun; Hwang, Jih-Shang

    2009-01-01

    Defect inspection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacture, and has received much attention in the field of automatic optical inspection (AOI). Previously, most focus was put on the problems of macro-scale Mura-defect detection in cell process, but it has recently been found that the defects which substantially influence the yield rate of LCD panels are actually those in the TFT array process, which is the first process in TFT-LCD manufacturing. Defect inspection in TFT array process is therefore considered a difficult task. This paper presents a novel inspection scheme based on kernel principal component analysis (KPCA) algorithm, which is a nonlinear version of the well-known PCA algorithm. The inspection scheme can not only detect the defects from the images captured from the surface of LCD panels, but also recognize the types of the detected defects automatically. Results, based on real images provided by a LCD manufacturer in Taiwan, indicate that the KPCA-based defect inspection scheme is able to achieve a defect detection rate of over 99% and a high defect classification rate of over 96% when the imbalanced support vector machine (ISVM) with 2-norm soft margin is employed as the classifier. More importantly, the inspection time is less than 1 s per input image. PMID:20057957

  13. Access control violation prevention by low-cost infrared detection

    NASA Astrophysics Data System (ADS)

    Rimmer, Andrew N.

    2004-09-01

    A low cost 16x16 un-cooled pyroelectric detector array, allied with advanced tracking and detection algorithms, has enabled the development of a universal detector with a wide range of applications in people monitoring and homeland security. Violation of access control systems, whether controlled by proximity card, biometrics, swipe card or similar, may occur by 'tailgating' or 'piggybacking' where an 'approved' entrant with a valid entry card is accompanied by a closely spaced 'non-approved' entrant. The violation may be under duress, where the accompanying person is attempting to enter a secure facility by force or threat. Alternatively, the violation may be benign where staff members collude either through habit or lassitude, either with each other or with third parties, without considering the security consequences. Examples of the latter could include schools, hospitals or maternity homes. The 16x16 pyroelectric array is integrated into a detector or imaging system which incorporates data processing, target extraction and decision making algorithms. The algorithms apply interpolation to the array output, allowing a higher level of resolution than might otherwise be expected from such a low resolution array. The pyroelectric detection principle means that the detection will work in variable light conditions and even in complete darkness, if required. The algorithms can monitor the shape, form, temperature and number of persons in the scene and utilise this information to determine whether a violation has occurred or not. As people are seen as 'hot blobs' and are not individually recognisable, civil liberties are not infringed in the detection process. The output from the detector is a simple alarm signal which may act as input to the access control system as an alert or to trigger CCTV image display and storage. The applications for a tailgate detector can be demonstrated across many medium security applications where there are no physical means to prevent this type of security breach.

  14. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    NASA Astrophysics Data System (ADS)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI)1 . This system consists of a single Liquid Crystal (LC) phase retarder as a spectral modulator and a gray scale sensor array to capture a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with the Compressive Sensing (CS) guidelines and applying appropriate algorithms we demonstrated reconstruction of spectral (hyper/ ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers, therefore it is prone to imperfections and spatial nonuniformity. In this work, we present the study of this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.

  15. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing-temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics and (3) optical path effects. The non-uniformity will be reduced by non-uniformity correction (NUC) algorithms. The NUC algorithms are often divided into calibration-based non-uniformity correction (CBNUC) algorithms and scene-based non-uniformity correction (SBNUC) algorithms. As non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source which SBNUC algorithms do not need into the view, so the SBNUC algorithm becomes an essential part of infrared imaging system. The SBNUC algorithms' poor robustness often leads two defects: artifacts and over-correction, meanwhile due to complicated calculation process and large storage consumption, hardware implementation of the SBNUC algorithms is difficult, especially in Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing defects. The hardware implementation of the algorithm only based on FPGA has two advantages: (1) low resources consumption, and (2) small hardware delay: less than 20 lines, it can be transplanted to a variety of infrared detectors equipped with FPGA image processing module, it can reduce the stripe non-uniformity and the ripple non-uniformity.

  16. Explosive Detection in Aviation Applications Using CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martz, H E; Crawford, C R

    2011-02-15

    CT scanners are deployed world-wide to detect explosives in checked and carry-on baggage. Though very similar to single- and dual-energy multi-slice CT scanners used today in medical imaging, some recently developed explosives detection scanners employ multiple sources and detector arrays to eliminate mechanical rotation of a gantry, photon counting detectors for spectral imaging, and limited number of views to reduce cost. For each bag scanned, the resulting reconstructed images are first processed by automated threat recognition algorithms to screen for explosives and other threats. Human operators review the images only when these automated algorithms report the presence of possible threats.more » The US Department of Homeland Security (DHS) has requirements for future scanners that include dealing with a larger number of threats, higher probability of detection, lower false alarm rates and lower operating costs. One tactic that DHS is pursuing to achieve these requirements is to augment the capabilities of the established security vendors with third-party algorithm developers. A third-party in this context refers to academics and companies other than the established vendors. DHS is particularly interested in exploring the model that has been used very successfully by the medical imaging industry, in which university researchers develop algorithms that are eventually deployed in commercial medical imaging equipment. The purpose of this paper is to discuss opportunities for third-parties to develop advanced reconstruction and threat detection algorithms.« less

  17. Radiation Hardened Low Power Digital Signal Processor

    DTIC Science & Technology

    2005-04-15

    Image Figure 53.0 Point Spread Function PSF Figure 54.0 Restored Image and Restored PSF Figure 55.0 Newly Created Array Figure 56.0 Deblurred Image and... noise and interference rejection. WOA’s of 32-taps and greater are easily managed by the TCSP. An architecture that could efficiently perform filter...to quickly calculate a Remez filter impulse response to be used in place of the window function. Using the Remez exchange algorithm to calculate the

  18. Cipher image damage and decisions in real time

    NASA Astrophysics Data System (ADS)

    Silva-García, Victor Manuel; Flores-Carapia, Rolando; Rentería-Márquez, Carlos; Luna-Benoso, Benjamín; Jiménez-Vázquez, Cesar Antonio; González-Ramírez, Marlon David

    2015-01-01

    This paper proposes a method for constructing permutations on m position arrangements. Our objective is to encrypt color images using advanced encryption standard (AES), using variable permutations means a different one for each 128-bit block in the first round after the x-or operation is applied. Furthermore, this research offers the possibility of knowing the original image when the encrypted figure suffered a failure from either an attack or not. This is achieved by permuting the original image pixel positions before being encrypted with AES variable permutations, which means building a pseudorandom permutation of 250,000 position arrays or more. To this end, an algorithm that defines a bijective function between the nonnegative integer and permutation sets is built. From this algorithm, the way to build permutations on the 0,1,…,m-1 array, knowing m-1 constants, is presented. The transcendental numbers are used to select these m-1 constants in a pseudorandom way. The quality of the proposed encryption according to the following criteria is evaluated: the correlation coefficient, the entropy, and the discrete Fourier transform. A goodness-of-fit test for each basic color image is proposed to measure the bits randomness degree of the encrypted figure. On the other hand, cipher images are obtained in a loss-less encryption way, i.e., no JPEG file formats are used.

  19. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has soon introduced new processing challenges. The price paid for the wealth spatial and spectral information available from hyperspectral sensors is the enormous amounts of data that they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed-up computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques. Techniques include four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, and (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in selection of parallel hyperspectral algorithms for specific applications.

  20. Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters

    NASA Astrophysics Data System (ADS)

    Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada

    2011-01-01

    We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize the regularity pattern of the mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, is an improvement on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter compared to that of most existing reconstruction algorithms in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation in the mechanical linear scanning framework are also reported.

  1. Three-dimensional imaging through turbid media based on polarization-difference liquid-crystal microlens array

    NASA Astrophysics Data System (ADS)

    Xin, Zhaowei; Wei, Dong; Li, Dapeng; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Wang, Haiwei; Xie, Changsheng

    2018-02-01

    In this paper, a polarization difference liquid-crystal microlens array (PD-LCMLA) for three dimensional imaging application through turbid media is fabricated and demonstrated. This device is composed of a twisted nematic liquidcrystal cell (TNLCC), a polarizer and a liquid-crystal microlens array. The polarizer is sandwiched between the TNLCC and LCMLA to help the polarization difference system achieving the orthogonal polarization raw images. The prototyped camera for polarization difference imaging has been constructed by integrating the PD-LCMLA with an image sensor. The orthogonally polarized light-field images are recorded by switching the working state of the TNLCC. Here, by using a special microstructure in conjunction with the polarization-difference algorithm, we demonstrate that the three-dimensional information in the scattering media can be retrieved from the polarization-difference imaging system with an electrically tunable PD-LCMLA. We further investigate the system's potential function based on the flexible microstructure. The microstructure provides a wide operation range in the manipulation of incident beams and also emerges multiple operation modes for imaging applications, such as conventional planar imaging, polarization imaging mode, and polarization-difference imaging mode. Since the PD-LCMLA demonstrates a very low power consumption, multiple imaging modes and simple manufacturing, this kind of device presents a potential to be used in many other optical and electro-optical systems.

  2. Implementation and image processing of a multi-focusing bionic compound eye

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Guo, Yongcai; Luo, Jiasai

    2018-01-01

    In this paper, a new BCE with multi-focusing microlens array (MLA) was proposed. The BCE consist of detachable micro-hole array (MHA), multi-focusing MLA and spherical substrate, thus allowing it to have a large FOV without crosstalk and stray light. The MHA was fabricated by the precision machining and the parameters of the microlens varied depend on the aperture of micro-hole, through which the implementation of the multi-focusing MLA was realized under the negative pressure. Without the pattern transfer and substrate reshaping, the whole fabrication method was capable of accomplishing within several minutes by using microinjection technology. Furthermore, the method is cost-effective and easy for operation, thus providing a feasible method for the mass production of the BCE. The corresponding image processing was used to realize the image stitching for the sub-image of each single microlens, which offering an integral image in large FOV. The image stitching was implemented through the overlap between the adjacent sub-images and the feature points between the adjacent sub-images were captured by the Harris point detection. By using the adaptive non-maximal suppression, numerous potential mismatching points were eliminated and the algorithm efficiency was proved effectively. Following this, the random sample consensus (RANSAC) was used for feature points matching, by which the relation of projection transformation of the image is obtained. The implementation of the accurate image matching was then realized after the smooth transition by weighted average method. Experimental results indicate that the image-stitching algorithm can be applied for the curved BCE in large field.

  3. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    NASA Technical Reports Server (NTRS)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  4. Chirp-coded excitation imaging with a high-frequency ultrasound annular array.

    PubMed

    Mamou, Jonathan; Ketterling, Jeffrey A; Silverman, Ronald H

    2008-02-01

    High-frequency ultrasound (HFU, > 15 MHz) is an effective means of obtaining fine-resolution images of biological tissues for applications such as opthalmologic, dermatologic, and small animal imaging. HFU has two inherent drawbacks. First, HFU images have a limited depth of field (DOF) because of the short wavelength and the low fixed F-number of conventional HFU transducers. Second, HFU can be used to image only a few millimeters deep into a tissue because attenuation increases with frequency. In this study, a five-element annular array was used in conjunction with a synthetic-focusing algorithm to extend the DOF. The annular array had an aperture of 10 mm, a focal length of 31 mm, and a center frequency of 17 MHz. To increase penetration depth, 8-micros, chirp-coded signals were designed, input into an arbitrary waveform generator, and used to excite each array element. After data acquisition, the received signals were linearly filtered to restore axial resolution and increase the SNR. To compare the chirpcoded imaging method with conventional impulse imaging in terms of resolution, a 25-microm diameter wire was scanned and the -6-dB axial and lateral resolutions were computed at depths ranging from 20.5 to 40.5 mm. The results demonstrated that chirp-coded excitation did not degrade axial or lateral resolution. A tissue-mimicking phantom containing 10-microm glass beads was scanned, and backscattered signals were analyzed to evaluate SNR and penetration depth. Finally, ex vivo ophthalmic images were formed and chirpcoded images showed features that were not visible in conventional impulse images.

  5. Computations on the massively parallel processor at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Strong, James P.

    1991-01-01

    Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.

  6. Self characterization of a coded aperture array for neutron source imaging

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (˜100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  7. 3D augmented reality with integral imaging display

    NASA Astrophysics Data System (ADS)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  8. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate for the case of the linear polarization P, this approach fails to properly account for the complex vector nature resulting in a process which is dependent on the axes under which the deconvolution is performed. We present here an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dwedney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and lower computation cost due to reduced iterations than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys and in particular that the complex version of an SDI CLEAN should be used.

  9. Evaluation of a hyperspectral image database for demosaicking purposes

    NASA Astrophysics Data System (ADS)

    Larabi, Mohamed-Chaker; Süsstrunk, Sabine

    2011-01-01

    We present a study on the the applicability of hyperspectral images to evaluate color filter array (CFA) design and the performance of demosaicking algorithms. The aim is to simulate a typical digital still camera processing pipe-line and to compare two different scenarios: evaluate the performance of demosaicking algorithms applied to raw camera RGB values before color rendering to sRGB, and evaluate the performance of demosaicking algorithms applied on the final sRGB color rendered image. The second scenario is the most frequently used one in literature because CFA design and algorithms are usually tested on a set of existing images that are already rendered, such as the Kodak Photo CD set containing the well-known lighthouse image. We simulate the camera processing pipe-line with measured spectral sensitivity functions of a real camera. Modeling a Bayer CFA, we select three linear demosaicking techniques in order to perform the tests. The evaluation is done using CMSE, CPSNR, s-CIELAB and MSSIM metrics to compare demosaicking results. We find that the performance, and especially the difference between demosaicking algorithms, is indeed significant depending if the mosaicking/demosaicking is applied to camera raw values as opposed to already rendered sRGB images. We argue that evaluating the former gives a better indication how a CFA/demosaicking combination will work in practice, and that it is in the interest of the community to create a hyperspectral image dataset dedicated to that effect.

  10. Quantitative ultrasonic testing of acoustically anisotropic materials with verification on austenitic and dissimilar weld joints

    NASA Astrophysics Data System (ADS)

    Boller, C.; Pudovikov, S.; Bulavinov, A.

    2012-05-01

    Austenitic stainless steel materials are widely used in a variety of industry sectors. In particular, the material is qualified to meet the design criteria of high quality in safety related applications. For example, the primary loop of the most of the nuclear power plants in the world, due to high durability and corrosion resistance, is made of this material. Certain operating conditions may cause a range of changes in the integrity of the component, and therefore require nondestructive testing at reasonable intervals. These in-service inspections are often performed using ultrasonic techniques, in particular when cracking is of specific concern. However, the coarse, dendritic grain structure of the weld material, formed during the welding process, is extreme and unpredictably anisotropic. Such structure is no longer direction-independent to the ultrasonic wave propagation; therefore, the ultrasonic beam deflects and redirects and the wave front becomes distorted. Thus, the use of conventional ultrasonic testing techniques using fixed beam angles is very limited and the application of ultrasonic Phased Array techniques becomes desirable. The "Sampling Phased Array" technique, invented and developed by Fraunhofer IZFP, allows the acquisition of time signals (A-scans) for each individual transducer element of the array along with fast image reconstruction techniques based on synthetic focusing algorithms. The reconstruction considers the sound propagation from each image pixel to the individual sensor element. For anisotropic media, where the sound beam is deflected and the sound path is not known a-priori, a novel phase adjustment technique called "Reverse Phase Matching" is implemented. By taking into account the anisotropy and inhomogeneity of the weld structure, a ray tracing algorithm for modeling the acoustic wave propagation and calculating the sound propagation time is applied. This technique can be utilized for 2D and 3D real time image reconstruction. The "Gradient Constant Descent Method" (GECDM), an iterative algorithm, is implemented, which is essential for examination of inhomogeneous anisotropic media having unknown properties (elastic constants). The Sampling Phased Array technique with Reverse Phase Matching extended by GECDM-technique determines unknown elastic constants and provides reliable and efficient quantitative flaw detection in the austenitic welds. The validation of ray-tracing algorithm and GECDM-method is performed by number of experiments on test specimens with artificial as well as natural material flaws. A mechanized system for ultrasonic testing of stainless steel and dissimilar welds is developed. The system works on both conventional and Sampling Phased Array techniques. The new frontend ultrasonic unit with optical data link allows the 3D visualization of the inspection results in real time.

  11. Array signal recovery algorithm for a single-RF-channel DBF array

    NASA Astrophysics Data System (ADS)

    Zhang, Duo; Wu, Wen; Fang, Da Gang

    2016-12-01

    An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.

  12. Learning receptor positions from imperfectly known motions

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1990-01-01

    An algorithm is described for learning image interpolation functions for sensor arrays whose sensor positions are somewhat disordered. The learning is based on failures of translation invariance, so it does not require knowledge of the images being presented to the visual system. Previously reported implementations of the method assumed the visual system to have precise knowledge of the translations. It is demonstrated that translation estimates computed from the imperfectly interpolated images can have enough accuracy to allow the learning process to converge to a correct interpolation.

  13. Fizeau Fourier transform imaging spectroscopy: missing data reconstruction.

    PubMed

    Thurman, Samuel T; Fienup, James R

    2008-04-28

    Fizeau Fourier transform imaging spectroscopy yields both spatial and spectral information about an object. Spectral information, however, is not obtained for a finite area of low spatial frequencies. A nonlinear reconstruction algorithm based on a gray-world approximation is presented. Reconstruction results from simulated data agree well with ideal Michelson interferometer-based spectral imagery. This result implies that segmented-aperture telescopes and multiple telescope arrays designed for conventional imaging can be used to gather useful spectral data through Fizeau FTIS without the need for additional hardware.

  14. Development of an HL7 interface engine, based on tree structure and streaming algorithm, for large-size messages which include image data.

    PubMed

    Um, Ki Sung; Kwak, Yun Sik; Cho, Hune; Kim, Il Kon

    2005-11-01

    A basic assumption of Health Level Seven (HL7) protocol is 'No limitation of message length'. However, most existing commercial HL7 interface engines do limit message length because they use the string array method, which is run in the main memory for the HL7 message parsing process. Specifically, messages with image and multi-media data create a long string array and thus cause the computer system to raise critical and fatal problem. Consequently, HL7 messages cannot handle the image and multi-media data necessary in modern medical records. This study aims to solve this problem with the 'streaming algorithm' method. This new method for HL7 message parsing applies the character-stream object which process character by character between the main memory and hard disk device with the consequence that the processing load on main memory could be alleviated. The main functions of this new engine are generating, parsing, validating, browsing, sending, and receiving HL7 messages. Also, the engine can parse and generate XML-formatted HL7 messages. This new HL7 engine successfully exchanged HL7 messages with 10 megabyte size images and discharge summary information between two university hospitals.

  15. Nanophotonic projection system.

    PubMed

    Aflatouni, Firooz; Abiri, Behrooz; Rekhi, Angad; Hajimiri, Ali

    2015-08-10

    Low-power integrated projection technology can play a key role in development of low-cost mobile devices with built-in high-resolution projectors. Low-cost 3D imaging and holography systems are also among applications of such a technology. In this paper, an integrated projection system based on a two-dimensional optical phased array with fast beam steering capability is reported. Forward biased p-i-n phase modulators with 200MHz bandwidth are used per each array element for rapid phase control. An optimization algorithm is implemented to compensate for the phase dependent attenuation of the p-i-n modulators. Using rapid vector scanning technique, images were formed and recorded within a single snapshot of the IR camera.

  16. Tomographic and analog 3-D simulations using NORA. [Non-Overlapping Redundant Image Array formed by multiple pinholes

    NASA Technical Reports Server (NTRS)

    Yin, L. I.; Trombka, J. I.; Bielefeld, M. J.; Seltzer, S. M.

    1984-01-01

    The results of two computer simulations demonstrate the feasibility of using the nonoverlapping redundant array (NORA) to form three-dimensional images of objects with X-rays. Pinholes admit the X-rays to nonoverlapping points on a detector. The object is reconstructed in the analog mode by optical correlation and in the digital mode by tomographic computations. Trials were run with a stick-figure pyramid and extended objects with out-of-focus backgrounds. Substitution of spherical optical lenses for the pinholes increased the light transmission sufficiently that objects could be easily viewed in a dark room. Out-of-focus aberrations in tomographic reconstruction could be eliminated using Chang's (1976) algorithm.

  17. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    PubMed Central

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-01-01

    To solve the problem on inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projection onto convex set (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit agricultural crop visual interpolation. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important to estimate the PSF of the HR image by using multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images can be proven. In addition, the novel slant knife-edge method is employed, which can improve the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image to the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality of reconstructed images than that produced by the blind SR method and the bicubic interpolation method. PMID:28208837

  18. Annular phased-array high-intensity focused ultrasound device for image-guided therapy of uterine fibroids.

    PubMed

    Held, Robert Thomas; Zderic, Vesna; Nguyen, Thuc Nghi; Vaezy, Shahram

    2006-02-01

    An ultrasound (US), image-guided high-intensity focused ultrasound (HIFU) device was developed for noninvasive ablation of uterine fibroids. The HIFU device was an annular phased array, with a focal depth range of 30-60 mm, a natural focus of 50 mm, and a resonant frequency of 3 MHz. The in-house control software was developed to operate the HIFU electronics drive system for inducing tissue coagulation at different distances from the array. A novel imaging algorithm was developed to minimize the HIFU-induced noise in the US images. The device was able to produce lesions in bovine serum albumin-embedded polyacrylamide gels and excised pig liver. The lesions could be seen on the US images as hyperechoic regions. Depths ranging from 30 to 60 mm were sonicated at acoustic intensities of 4100 and 6100 W/cm2 for 15 s each, with the latter producing average lesion volumes at least 63% larger than the former. Tissue sonication patterns that began distal to the transducer produced longer lesions than those that began proximally. The variation in lesion dimensions indicates the possible development of HIFU protocols that increase HIFU throughput and shorten tumor treatment times.

  19. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control to the system. SRAM stores the image data, and DSP controls the image acquisition system through the SCCB (Omni Vision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use the edge-oriented adaptive interpolation algorithm for the edge pixels and bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method can get high processing speed, decrease the computational complexity, and effectively preserve the image edges.

  20. Eight channel transmit array volume coil using on-coil radiofrequency current sources

    PubMed Central

    Kurpad, Krishna N.; Boskamp, Eddy B.

    2014-01-01

    Background At imaging frequencies associated with high-field MRI, the combined effects of increased load-coil interaction and shortened wavelength results in degradation of circular polarization and B1 field homogeneity in the imaging volume. Radio frequency (RF) shimming is known to mitigate the problem of B1 field inhomogeneity. Transmit arrays with well decoupled transmitting elements enable accurate B1 field pattern control using simple, non-iterative algorithms. Methods An eight channel transmit array was constructed. Each channel consisted of a transmitting element driven by a dedicated on-coil RF current source. The coil current distributions of characteristic transverse electromagnetic (TEM) coil resonant modes were non-iteratively set up on each transmitting element and 3T MRI images of a mineral oil phantom were obtained. Results B1 field patterns of several linear and quadrature TEM coil resonant modes that typically occur at different resonant frequencies were replicated at 128 MHz without having to retune the transmit array. The generated B1 field patterns agreed well with simulation in most cases. Conclusions Independent control of current amplitude and phase on each transmitting element was demonstrated. The transmit array with on-coil RF current sources enables B1 field shimming in a simple and predictable manner. PMID:24834418

  1. Time-Reversal MUSIC Imaging with Time-Domain Gating Technique

    NASA Astrophysics Data System (ADS)

    Choi, Heedong; Ogawa, Yasutaka; Nishimura, Toshihiko; Ohgane, Takeo

    A time-reversal (TR) approach with multiple signal classification (MUSIC) provides super-resolution for detection and localization using multistatic data collected from an array antenna system. The theory of TR-MUSIC assumes that the number of antenna elements is greater than that of scatterers (targets). Furthermore, it requires many sets of frequency-domain data (snapshots) in seriously noisy environments. Unfortunately, these conditions are not practical for real environments due to the restriction of a reasonable antenna structure as well as limited measurement time. We propose an approach that treats both noise reduction and relaxation of the transceiver restriction by using a time-domain gating technique accompanied with the Fourier transform before applying the TR-MUSIC imaging algorithm. Instead of utilizing the conventional multistatic data matrix (MDM), we employ a modified MDM obtained from the gating technique. The resulting imaging functions yield more reliable images with only a few snapshots regardless of the limitation of the antenna arrays.

  2. An Adaptive Cross-Correlation Algorithm for Extended-Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    This viewgraph presentation reviews the Adaptive Cross-Correlation (ACC) Algorithm for extended scene-Shack Hartmann wavefront (WF) sensing. A Shack-Hartmann sensor places a lenslet array at a plane conjugate to the WF error source. Each sub-aperture lenslet samples the WF in the corresponding patch of the WF. A description of the ACC algorithm is included. The ACC has several benefits; amongst them are: ACC requires only about 4 image-shifting iterations to achieve 0.01 pixel accuracy and ACC is insensitive to both background light and noise much more robust than centroiding,

  3. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

  4. Similar Tensor Arrays - A Framework for Storage of Tensor Array Data

    NASA Astrophysics Data System (ADS)

    Brun, Anders; Martin-Fernandez, Marcos; Acar, Burak; Munoz-Moreno, Emma; Cammoun, Leila; Sigfridsson, Andreas; Sosa-Cabrera, Dario; Svensson, Björn; Herberthson, Magnus; Knutsson, Hans

    This chapter describes a framework for storage of tensor array data, useful to describe regularly sampled tensor fields. The main component of the framework, called Similar Tensor Array Core (STAC), is the result of a collaboration between research groups within the SIMILAR network of excellence. It aims to capture the essence of regularly sampled tensor fields using a minimal set of attributes and can therefore be used as a “greatest common divisor” and interface between tensor array processing algorithms. This is potentially useful in applied fields like medical image analysis, in particular in Diffusion Tensor MRI, where misinterpretation of tensor array data is a common source of errors. By promoting a strictly geometric perspective on tensor arrays, with a close resemblance to the terminology used in differential geometry, (STAC) removes ambiguities and guides the user to define all necessary information. In contrast to existing tensor array file formats, it is minimalistic and based on an intrinsic and geometric interpretation of the array itself, without references to other coordinate systems.

  5. Regression analysis for LED color detection of visual-MIMO system

    NASA Astrophysics Data System (ADS)

    Banik, Partha Pratim; Saha, Rappy; Kim, Ki-Doo

    2018-04-01

    Color detection from a light emitting diode (LED) array using a smartphone camera is very difficult in a visual multiple-input multiple-output (visual-MIMO) system. In this paper, we propose a method to determine the LED color using a smartphone camera by applying regression analysis. We employ a multivariate regression model to identify the LED color. After taking a picture of an LED array, we select the LED array region, and detect the LED using an image processing algorithm. We then apply the k-means clustering algorithm to determine the number of potential colors for feature extraction of each LED. Finally, we apply the multivariate regression model to predict the color of the transmitted LEDs. In this paper, we show our results for three types of environmental light condition: room environmental light, low environmental light (560 lux), and strong environmental light (2450 lux). We compare the results of our proposed algorithm from the analysis of training and test R-Square (%) values, percentage of closeness of transmitted and predicted colors, and we also mention about the number of distorted test data points from the analysis of distortion bar graph in CIE1931 color space.

  6. Dedicated hardware processor and corresponding system-on-chip design for real-time laser speckle imaging.

    PubMed

    Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming

    2011-11-01

    Laser speckle imaging (LSI) is a noninvasive and full-field optical imaging technique which produces two-dimensional blood flow maps of tissues from the raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for the real-time processing of laser speckle imaging. The algorithm is developed and optimized specifically for LSI processing in the field programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI in FPGA. The pipeline processing scheme and parallel computing architecture are introduced into the design of this LSI hardware processor. When the LSI hardware processor is implemented in the FPGA running at the maximum frequency of 130 MHz, up to 85 raw images with the resolution of 640×480 pixels can be processed per second. Meanwhile, we also present a system on chip (SOC) solution for LSI processing by integrating the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip. This SOC solution also can be used to produce an application specific integrated circuit for LSI processing.

  7. Automatic identification of cochlear implant electrode arrays for post-operative assessment

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Schuman, Theodore A.; Wright, Charles G.; Labadie, Robert F.; Dawant, Benoit M.

    2011-03-01

    Cochlear implantation is a procedure performed to treat profound hearing loss. Accurately determining the postoperative position of the implant in vivo would permit studying the correlations between implant position and hearing restoration. To solve this problem, we present an approach based on parametric Gradient Vector Flow snakes to segment the electrode array in post-operative CT. By combining this with existing methods for localizing intra-cochlear anatomy, we have developed a system that permits accurate assessment of the implant position in vivo. The system is validated using a set of seven temporal bone specimens. The algorithms were run on pre- and post-operative CTs of the specimens, and the results were compared to histological images. It was found that the position of the arrays observed in the histological images is in excellent agreement with the position of their automatically generated 3D reconstructions in the CT scans.

  8. Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing

    NASA Astrophysics Data System (ADS)

    Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.

    2009-05-01

    A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a highfidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

  9. Flight Results from the HST SM4 Relative Navigation Sensor System

    NASA Technical Reports Server (NTRS)

    Naasz, Bo; Eepoel, John Van; Queen, Steve; Southward, C. Michael; Hannah, Joel

    2010-01-01

    On May 11, 2009, Space Shuttle Atlantis roared off of Launch Pad 39A enroute to the Hubble Space Telescope (HST) to undertake its final servicing of HST, Servicing Mission 4. Onboard Atlantis was a small payload called the Relative Navigation Sensor experiment, which included three cameras of varying focal ranges, avionics to record images and estimate, in real time, the relative position and attitude (aka "pose") of the telescope during rendezvous and deploy. The avionics package, known as SpaceCube and developed at the Goddard Space Flight Center, performed image processing using field programmable gate arrays to accelerate this process, and in addition executed two different pose algorithms in parallel, the Goddard Natural Feature Image Recognition and the ULTOR Passive Pose and Position Engine (P3E) algorithms

  10. Research of centroiding algorithms for extended and elongated spot of sodium laser guide star

    NASA Astrophysics Data System (ADS)

    Shao, Yayun; Zhang, Yudong; Wei, Kai

    2016-10-01

    Laser guide stars (LGSs) increase the sky coverage of astronomical adaptive optics systems. But spot array obtained by Shack-Hartmann wave front sensors (WFSs) turns extended and elongated, due to the thickness and size limitation of sodium LGS, which affects the accuracy of the wave front reconstruction algorithm. In this paper, we compared three different centroiding algorithms , the Center-of-Gravity (CoG), weighted CoG (WCoG) and Intensity Weighted Centroid (IWC), as well as those accuracies for various extended and elongated spots. In addition, we compared the reconstructed image data from those three algorithms with theoretical results, and proved that WCoG and IWC are the best wave front reconstruction algorithms for extended and elongated spot among all the algorithms.

  11. An Algorithm to Identify and Localize Suitable Dock Locations from 3-D LiDAR Scans

    DTIC Science & Technology

    2013-05-10

    Locations from 3-D LiDAR Scans 5b. GRANT NUMBER 5c. PROGRAM ELEMENT NUMBER 6. AUTHOR(S) Graves, Mitchell Robert 5d. PROJECT NUMBER...Ranging ( LiDAR ) scans. A LiDAR sensor is a sensor that collects range images from a rotating array of vertically aligned lasers. Our solution leverages...Algorithm, Dock, Locations, Point Clouds, LiDAR , Identify 16. SECURITY CLASSIFICATION OF: 17. LIMITATION OF ABSTRACT 18. NUMBER OF PAGES 19a

  12. SAR Reduction in 7T C-Spine Imaging Using a “Dark Modes” Transmit Array Strategy

    PubMed Central

    Eryaman, Yigitcan; Guerin, Bastien; Keil, Boris; Mareyam, Azma; Herraiz, Joaquin L.; Kosior, Robert K.; Martin, Adrian; Torrado-Carvajal, Angel; Malpica, Norberto; Hernandez-Tamames, Juan A.; Schiavi, Emanuele; Adalsteinsson, Elfar; Wald, Lawrence L.

    2016-01-01

    Purpose Local specific absorption rate (SAR) limits many applications of parallel transmit (pTx) in ultra high-field imaging. In this Note, we introduce the use of an array element, which is intentionally inefficient at generating spin excitation (a “dark mode”) to attempt a partial cancellation of the electric field from those elements that do generate excitation. We show that adding dipole elements oriented orthogonal to their conventional orientation to a linear array of conventional loop elements can lower the local SAR hotspot in a C-spine array at 7 T. Methods We model electromagnetic fields in a head/torso model to calculate SAR and excitation B1+ patterns generated by conventional loop arrays and loop arrays with added electric dipole elements. We utilize the dark modes that are generated by the intentional and inefficient orientation of dipole elements in order to reduce peak 10g local SAR while maintaining excitation fidelity. Results For B1+ shimming in the spine, the addition of dipole elements did not significantly alter the B1+ spatial pattern but reduced local SAR by 36%. Conclusion The dipole elements provide a sufficiently complimentary B1+ and electric field pattern to the loop array that can be exploited by the radiofrequency shimming algorithm to reduce local SAR. PMID:24753012

  13. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, and build and execute an array of image analysis routines, and provides a mechanism to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  14. Design of a Lossless Image Compression System for Video Capsule Endoscopy and Its Performance in In-Vivo Trials

    PubMed Central

    Khan, Tareq H.; Wahid, Khan A.

    2014-01-01

    In this paper, a new low complexity and lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and variable-length predictive with a combination of Golomb-Rice and unary encoding. All these components have been heavily optimized for low-power and low-cost and lossless in nature. As a result, the entire compression system does not incur any loss of image information. Unlike transform based algorithms, the compressor can be interfaced with commercial image sensors which send pixel data in raster-scan fashion that eliminates the need of having large buffer memory. The compression algorithm is capable to work with white light imaging (WLI) and narrow band imaging (NBI) with average compression ratio of 78% and 84% respectively. Finally, a complete capsule endoscopy system is developed on a single, low-power, 65-nm field programmable gate arrays (FPGA) chip. The prototype is developed using circular PCBs having a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig's intestine have been conducted using the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution to wireless capsule endoscopy with lossless and yet acceptable level of compression. PMID:25375753

  15. Array architectures for iterative algorithms

    NASA Technical Reports Server (NTRS)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  16. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.

    PubMed

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M

    2018-04-12

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.

  17. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes

    PubMed Central

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.

    2018-01-01

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114

  18. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-01

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857

  19. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.

    PubMed

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-13

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods.

  20. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    NASA Astrophysics Data System (ADS)

    Dong, Bing; Ren, De-Qing; Zhang, Xi

    2011-08-01

    An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noises in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point source imaging optimization is given. The feasibility and good performance of the SPGD algorithm is demonstrated by an experimental system featured with a 140-actuator deformable mirror and a Hartmann-Shark wavefront sensor. Then the SPGD based AO is applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment and the image contrast shows improvement from 10-3 to 10-4.5 at an angular distance of 2λ/D after being corrected by SPGD based AO.

  1. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of the methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated object as well as incoherently illuminated objects. Our result shows that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" by ordinary cameras is not achievable.

  2. Cygnus A super-resolved via convex optimization from VLA data

    NASA Astrophysics Data System (ADS)

    Dabbech, A.; Onose, A.; Abdulaziz, A.; Perley, R. A.; Smirnov, O. M.; Wiaux, Y.

    2018-05-01

    We leverage the Sparsity Averaging Re-weighted Analysis approach for interferometric imaging, that is based on convex optimization, for the super-resolution of Cyg A from observations at the frequencies 8.422 and 6.678 GHz with the Karl G. Jansky Very Large Array (VLA). The associated average sparsity and positivity priors enable image reconstruction beyond instrumental resolution. An adaptive Preconditioned primal-dual algorithmic structure is developed for imaging in the presence of unknown noise levels and calibration errors. We demonstrate the superior performance of the algorithm with respect to the conventional CLEAN-based methods, reflected in super-resolved images with high fidelity. The high-resolution features of the recovered images are validated by referring to maps of Cyg A at higher frequencies, more precisely 17.324 and 14.252 GHz. We also confirm the recent discovery of a radio transient in Cyg A, revealed in the recovered images of the investigated data sets. Our MATLAB code is available online on GitHub.

  3. Novel application of windowed beamforming function imaging for FLGPR

    NASA Astrophysics Data System (ADS)

    Xique, Ismael J.; Burns, Joseph W.; Thelen, Brian J.; LaRose, Ryan M.

    2018-04-01

    Backprojection of cross-correlated array data, using algorithms such as coherent interferometric imaging (Borcea, et al., 2006), has been advanced as a method to improve the statistical stability of images of targets in an inhomogeneous medium. Recently, the Windowed Beamforming Energy (WBE) function algorithm has been introduced as a functionally equivalent approach, which is significantly less computationally burdensome (Borcea, et al., 2011). WBE produces similar results through the use of a quadratic function summing signals after beamforming in transmission and reception, and windowing in the time domain. We investigate the application of WBE to improve the detection of buried targets with forward looking ground penetrating MIMO radar (FLGPR) data. The formulation of WBE as well the software implementation of WBE for the FLGPR data collection will be discussed. WBE imaging results are compared to standard backprojection and Coherence Factor imaging. Additionally, the effectiveness of WBE on field-collected data is demonstrated qualitatively through images and quantitatively through the use of a CFAR statistic on buried targets of a variety of contrast levels.

  4. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    ERIC Educational Resources Information Center

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  5. Robust frequency diversity based algorithm for clutter noise reduction of ultrasonic signals using multiple sub-spectrum phase coherence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongzhang, R.; Xiao, B.; Lardner, T.

    2014-02-18

    This paper presents a robust frequency diversity based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques like Split Spectrum Processing (SSP) is highly dependent on the parameter selection, especially when the signal to noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. A signal is reconstructed for each selected band in which a defect is present when all frequency components are in uniform sign. Combining all reconstructed signalsmore » through averaging gives a probability profile of potential defect position. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture is applied on the austenitic steel and high nickel alloy (HNA) samples with 5MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances SNR by 20dB for both samples and consequently, defects are more visible in B-scan images created from the large amount of A-scan traces. Importantly, the proposed algorithm is considered robust, while SSP is shown to fail on the austenitic steel data and achieves less SNR enhancement on the HNA data.« less

  6. High-resolution lithospheric imaging with seismic interferometry

    NASA Astrophysics Data System (ADS)

    Ruigrok, Elmer; Campman, Xander; Draganov, Deyan; Wapenaar, Kees

    2010-10-01

    In recent years, there has been an increase in the deployment of relatively dense arrays of seismic stations. The availability of spatially densely sampled global and regional seismic data has stimulated the adoption of industry-style imaging algorithms applied to converted- and scattered-wave energy from distant earthquakes, leading to relatively high-resolution images of the lower crust and upper mantle. We use seismic interferometry to extract reflection responses from the coda of transmitted energy from distant earthquakes. In theory, higher-resolution images can be obtained when migrating reflections obtained with seismic interferometry rather than with conversions, traditionally used in lithospheric imaging methods. Moreover, reflection data allow the straightforward application of algorithms previously developed in exploration seismology. In particular, the availability of reflection data allows us to extract from it a velocity model using standard multichannel data-processing methods. However, the success of our approach relies mainly on a favourable distribution of earthquakes. In this paper, we investigate how the quality of the reflection response obtained with interferometry is influenced by the distribution of earthquakes and the complexity of the transmitted wavefields. Our analysis shows that a reasonable reflection response could be extracted if (1) the array is approximately aligned with an active zone of earthquakes, (2) different phase responses are used to gather adequate angular illumination of the array and (3) the illumination directions are properly accounted for during processing. We illustrate our analysis using a synthetic data set with similar illumination and source-side reverberation characteristics as field data recorded during the 2000-2001 Laramie broad-band experiment. Finally, we apply our method to the Laramie data, retrieving reflection data. We extract a 2-D velocity model from the reflections and use this model to migrate the data. On the final reflectivity image, we observe a discontinuity in the reflections. We interpret this discontinuity as the Cheyenne Belt, a suture zone between Archean and Proterozoic terranes.

  7. Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.

  8. Scene-based nonuniformity correction technique for infrared focal-plane arrays.

    PubMed

    Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong

    2009-04-20

    A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.

  9. Geometrical superresolved imaging using nonperiodic spatial masking.

    PubMed

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.

  10. NASA Tech Briefs, April 2006

    NASA Technical Reports Server (NTRS)

    2006-01-01

    The topics covered include: 1) Replaceable Sensor System for Bioreactor Monitoring; 2) Unitary Shaft-Angle and Shaft-Speed Sensor Assemblies; 3) Arrays of Nano Tunnel Junctions as Infrared Image Sensors; 4) Catalytic-Metal/PdO(sub x)/SiC Schottky-Diode Gas Sensors; 5) Compact, Precise Inertial Rotation Sensors for Spacecraft; 6) Universal Controller for Spacecraft Mechanisms; 7) The Flostation - an Immersive Cyberspace System; 8) Algorithm for Aligning an Array of Receiving Radio Antennas; 9) Single-Chip T/R Module for 1.2 GHz; 10) Quantum Entanglement Molecular Absorption Spectrum Simulator; 11) FuzzObserver; 12) Internet Distribution of Spacecraft Telemetry Data; 13) Semi-Automated Identification of Rocks in Images; 14) Pattern-Recognition Algorithm for Locking Laser Frequency; 15) Designing Cure Cycles for Matrix/Fiber Composite Parts; 16) Controlling Herds of Cooperative Robots; 17) Modification of a Limbed Robot to Favor Climbing; 18) Vacuum-Assisted, Constant-Force Exercise Device; 19) Production of Tuber-Inducing Factor; 20) Quantum-Dot Laser for Wavelengths of 1.8 to 2.3 micron; 21) Tunable Filter Made From Three Coupled WGM Resonators; and 22) Dynamic Pupil Masking for Phasing Telescope Mirror Segments.

  11. Advanced texture filtering: a versatile framework for reconstructing multi-dimensional image data on heterogeneous architectures

    NASA Astrophysics Data System (ADS)

    Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich

    2015-01-01

    Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, the assumption that rendering algorithms are in general executed on GPUs is however no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We prove the effectiveness of our software architecture by integrating it into and validating it using the open source direct volume rendering (DVR) software DeskVOX.

  12. Self characterization of a coded aperture array for neutron source imaging

    DOE PAGES

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; ...

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning DT plasma during the stagnation stage of ICF implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be preciselymore » aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.« less

  13. Development and Characterization of a Dither-Based Super-Resolution Reconstruction Method for Fiber Imaging Arrays

    NASA Astrophysics Data System (ADS)

    Languirand, Eric Robert

    Chemical imaging is an important tool for providing insight into function, role, and spatial distribution of analytes. This thesis describes the use of imaging fiber bundles (IFB) for super-resolution reconstruction using surface enhanced Raman scattering (SERS) showing improvement in resolution with arrayed bundles for the first time. Additionally this thesis describes characteristics of the IFB with regards to cross-talk as a function of aperture size. The first part of this thesis characterizes the IFB for both tapered and untapered bundles in terms of cross-talk. Cross-talk is defined as the amount of light leaking from a central fiber element in the imaging fiber bundle to surrounding fiber elements. To make this measurement ubiquitous for all imaging bundles, quantum dots were employed. Untapered and tapered IFB possess cross-talk of 2% or less, with fiber elements down to 32nm. The second part of this thesis employs a super resolution reconstruction algorithm using projection onto convex sets for resolution improvement. When using IFB arrays, the point spread function (PSF) of the array can be known accurately if the fiber elements over fill the pixel detector array. Therefore, the use of the known PSF compared to a general blurring kernel was evaluated. Relative increases in resolution of 12% and 2% at the 95% confidence level are found, when compared to a reference image, for the general blurring kernel and PSF, respectively. The third part of this thesis shows for the first time the use of SERS with a dithered IFB array coupled with super-resolution reconstruction. The resolution improvement across a step-edge is shown to be approximately 20% when compared to a reference image. This provides an additional means of increasing the resolution of fiber bundles beyond that of just tapering. Furthermore, this provides a new avenue for nanoscale imaging using these bundles. Lastly, synthetic data with varying degrees of signal-to-noise (S/N) were employed to explore the relationship S/N has with the reconstruction process. It is generally shown that increasing the number images used in the reconstruction process and increasing the S/N will improve the reconstruction providing larger increases in resolution.

  14. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    NASA Astrophysics Data System (ADS)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy, with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular, the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big-data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.

  15. Computational microscopy: illumination coding and nonlinear optimization enables gigapixel 3D phase imaging

    NASA Astrophysics Data System (ADS)

    Tian, Lei; Waller, Laura

    2017-05-01

    Microscope lenses can have either large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a 2nd order quasi-Newton's method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and 1st order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super resolution in all three dimensions. Experimental results from an LED array microscope were demonstrated.

  16. Post-launch validation of Multispectral Thermal Imager (MTI) data and algorithms

    NASA Astrophysics Data System (ADS)

    Garrett, Alfred J.; Kurzeja, Robert J.; O'Steen, B. L.; Parker, Matthew J.; Pendergast, Malcolm M.; Villa-Aleman, Eliel

    1999-10-01

    Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL) and the Savannah River Technology Center (SRTC) have developed a diverse group of algorithms for processing and analyzing the data that will be collected by the Multispectral Thermal Imager (MTI) after launch late in 1999. Each of these algorithms must be verified by comparison to independent surface and atmospheric measurements. SRTC has selected 13 sites in the continental U.S. for ground truth data collections. These sites include a high altitude cold water target (Crater Lake), cooling lakes and towers in the warm, humid southeastern U.S., Department of Energy (DOE) climate research sites, the NASA Stennis satellite Validation and Verification (V&V) target array, waste sites at the Savannah River Site, mining sites in the Four Corners area and dry lake beds in Nevada. SRTC has established mutually beneficial relationships with the organizations that manage these sites to make use of their operating and research data and to install additional instrumentation needed for MTI algorithm V&V.

  17. Full-field fan-beam x-ray fluorescence computed tomography system design with linear-array detectors and pinhole collimation: a rapid Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Zhang, Siyuan; Li, Liang; Li, Ruizhe; Chen, Zhiqiang

    2017-11-01

    We present the design concept and initial simulations for a polychromatic full-field fan-beam x-ray fluorescence computed tomography (XFCT) device with pinhole collimators and linear-array photon counting detectors. The phantom is irradiated by a fan-beam polychromatic x-ray source filtered by copper. Fluorescent photons are stimulated and then collected by two linear-array photon counting detectors with pinhole collimators. The Compton scatter correction and the attenuation correction are applied in the data processing, and the maximum-likelihood expectation maximization algorithm is applied for the image reconstruction of XFCT. The physical modeling of the XFCT imaging system was described, and a set of rapid Monte Carlo simulations was carried out to examine the feasibility and sensitivity of the XFCT system. Different concentrations of gadolinium (Gd) and gold (Au) solutions were used as contrast agents in simulations. Results show that 0.04% of Gd and 0.065% of Au can be well reconstructed with the full scan time set at 6 min. Compared with using the XFCT system with a pencil-beam source or a single-pixel detector, using a full-field fan-beam XFCT device with linear-array detectors results in significant scanning time reduction and may satisfy requirements of rapid imaging, such as in vivo imaging experiments.

  18. Comparison of the Extended Kalman Filter and the Unscented Kalman Filter for Magnetocardiography activation time imaging

    NASA Astrophysics Data System (ADS)

    Ahrens, H.; Argin, F.; Klinkenbusch, L.

    2013-07-01

    The non-invasive and radiation-free imaging of the electrical activity of the heart with Electrocardiography (ECG) or Magnetocardiography (MCG) can be helpful for physicians for instance in the localization of the origin of cardiac arrhythmia. In this paper we compare two Kalman Filter algorithms for the solution of a nonlinear state-space model and for the subsequent imaging of the activation/depolarization times of the heart muscle: the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF). The algorithms are compared for simulations of a (6×6) magnetometer array, a torso model with piecewise homogeneous conductivities, 946 current dipoles located in a small part of the heart (apex), and several noise levels. It is found that for all tested noise levels the convergence of the activation times is faster for the UKF.

  19. A novel parallel architecture for local histogram equalization

    NASA Astrophysics Data System (ADS)

    Ohannessian, Mesrob I.; Choueiter, Ghinwa F.; Diab, Hassan

    2005-07-01

    Local histogram equalization is an image enhancement algorithm that has found wide application in the pre-processing stage of areas such as computer vision, pattern recognition and medical imaging. The computationally intensive nature of the procedure, however, is a main limitation when real time interactive applications are in question. This work explores the possibility of performing parallel local histogram equalization, using an array of special purpose elementary processors, through an HDL implementation that targets FPGA or ASIC platforms. A novel parallelization scheme is presented and the corresponding architecture is derived. The algorithm is reduced to pixel-level operations. Processing elements are assigned image blocks, to maintain a reasonable performance-cost ratio. To further simplify both processor and memory organizations, a bit-serial access scheme is used. A brief performance assessment is provided to illustrate and quantify the merit of the approach.

  20. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface.

    PubMed

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-06-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an imaging reconstruction algorithm. The metasurface with spatio-temporal dispersive property ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can convert efficiently the detailed spatial information of the probed object into one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the desired imaging resolution is related to the distance between the object and metasurface. When the object is placed in the vicinity of the metasurface, the super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition, high-/super-resolution images without employing expensive hardware (e.g. mechanical scanner, antenna array, etc.). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging.

  1. Design of sparse Halbach magnet arrays for portable MRI using a genetic algorithm.

    PubMed

    Cooley, Clarissa Zimmerman; Haskell, Melissa W; Cauley, Stephen F; Sappo, Charlotte; Lapierre, Cristen D; Ha, Christopher G; Stockmann, Jason P; Wald, Lawrence L

    2018-01-01

    Permanent magnet arrays offer several attributes attractive for the development of a low-cost portable MRI scanner for brain imaging. They offer the potential for a relatively lightweight, low to mid-field system with no cryogenics, a small fringe field, and no electrical power requirements or heat dissipation needs. The cylindrical Halbach array, however, requires external shimming or mechanical adjustments to produce B 0 fields with standard MRI homogeneity levels (e.g., 0.1 ppm over FOV), particularly when constrained or truncated geometries are needed, such as a head-only magnet where the magnet length is constrained by the shoulders. For portable scanners using rotation of the magnet for spatial encoding with generalized projections, the spatial pattern of the field is important since it acts as the encoding field. In either a static or rotating magnet, it will be important to be able to optimize the field pattern of cylindrical Halbach arrays in a way that retains construction simplicity. To achieve this, we present a method for designing an optimized cylindrical Halbach magnet using the genetic algorithm to achieve either homogeneity (for standard MRI applications) or a favorable spatial encoding field pattern (for rotational spatial encoding applications). We compare the chosen designs against a standard, fully populated sparse Halbach design, and evaluate optimized spatial encoding fields using point-spread-function and image simulations. We validate the calculations by comparing to the measured field of a constructed magnet. The experimentally implemented design produced fields in good agreement with the predicted fields, and the genetic algorithm was successful in improving the chosen metrics. For the uniform target field, an order of magnitude homogeneity improvement was achieved compared to the un-optimized, fully populated design. For the rotational encoding design the resolution uniformity is improved by 95% compared to a uniformly populated design.

  2. A Novel Optical/digital Processing System for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Boone, Bradley G.; Shukla, Oodaye B.

    1993-01-01

    This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably threshold images from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation test and evaluation using simple synthetic object data will be described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.

  3. Shift-connected SIMD array architectures for digital optical computing systems, with algorithms for numerical transforms and partial differential equations

    NASA Astrophysics Data System (ADS)

    Drabik, Timothy J.; Lee, Sing H.

    1986-11-01

    The intrinsic parallelism characteristics of easily realizable optical SIMD arrays prompt their present consideration in the implementation of highly structured algorithms for the numerical solution of multidimensional partial differential equations and the computation of fast numerical transforms. Attention is given to a system, comprising several spatial light modulators (SLMs), an optical read/write memory, and a functional block, which performs simple, space-invariant shifts on images with sufficient flexibility to implement the fastest known methods for partial differential equations as well as a wide variety of numerical transforms in two or more dimensions. Either fixed or floating-point arithmetic may be used. A performance projection of more than 1 billion floating point operations/sec using SLMs with 1000 x 1000-resolution and operating at 1-MHz frame rates is made.

  4. Operational verification of a 40-MHz annular array transducer

    PubMed Central

    Ketterling, Jeffrey A.; Ramachandran, Sarayu; Aristizäbal, Orlando

    2006-01-01

    An experimental system to take advantage of the imaging capabilities of a 5-ring polyvinylidene fluoride (PVDF) based annular array is presented. The array has a 6 mm total aperture and a 12 mm geometric focus. The experimental system is designed to pulse a single element of the array and then digitize the received data of all array channels simultaneously. All transmit/receive pairs are digitized and then the data are post-processed with a synthetic focusing technique to achieve an enhanced depth of field (DOF). The performance of the array is experimentally tested with a wire phantom consisting of 25-μm diameter wires diagonally spaced at 1 mm by 1 mm intervals. The phantom permitted the efficacy of the synthetic focusing algorithm to be tested and was also used for two-way beam characterization. Experimental results are compared to a spatial impulse response method beam simulation. After synthetic focusing, the two-way echo amplitude was enhanced over the range of 8 to 19 mm and the 6-dB DOF spanned from 9 to 15 mm. For a wire at a fixed axial depth, the relative time delays between transmit/receive ring pairs agreed with theoretical predictions to within ± 2 ns. To further test the system, B-mode images of an excised bovine eye are rendered. PMID:16555771

  5. Direction of Arrival Estimation Using a Reconfigurable Array

    DTIC Science & Technology

    2005-05-06

    civilian world. Keywords: Direction-of-arrival Estimation MUSIC algorithm Reconfigurable Array Experimental Created by Neevia Personal...14. SUBJECT TERMS: Direction-of-arrival ; Estimation ; MUSIC algorithm ; Reconfigurable ; Array ; Experimental 16. PRICE CODE 17...9 1.5 MuSiC Algorithm

  6. The approximate entropy concept extended to three dimensions for calibrated, single parameter structural complexity interrogation of volumetric images.

    PubMed

    Moore, Christopher; Marchant, Thomas

    2017-07-12

    Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the tell tale signs of abnormality and its delineation for treatment demand experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.

  7. The approximate entropy concept extended to three dimensions for calibrated, single parameter structural complexity interrogation of volumetric images

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Marchant, Thomas

    2017-08-01

    Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the tell tale signs of abnormality and its delineation for treatment demand experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.

  8. Handheld ultrasound array imaging device

    NASA Astrophysics Data System (ADS)

    Hwang, Juin-Jet; Quistgaard, Jens

    1999-06-01

    A handheld ultrasound imaging device, one that weighs less than five pounds, has been developed for diagnosing trauma in the combat battlefield as well as a variety of commercial mobile diagnostic applications. This handheld device consists of four component ASICs, each is designed using the state of the art microelectronics technologies. These ASICs are integrated with a convex array transducer to allow high quality imaging of soft tissues and blood flow in real time. The device is designed to be battery driven or ac powered with built-in image storage and cineloop playback capability. Design methodologies of a handheld device are fundamentally different to those of a cart-based system. As system architecture, signal and image processing algorithm as well as image control circuit and software in this device is deigned suitably for large-scale integration, the image performance of this device is designed to be adequate to the intent applications. To elongate the battery life, low power design rules and power management circuits are incorporated in the design of each component ASIC. The performance of the prototype device is currently being evaluated for various applications such as a primary image screening tool, fetal imaging in Obstetrics, foreign object detection and wound assessment for emergency care, etc.

  9. Combination of Rivest-Shamir-Adleman Algorithm and End of File Method for Data Security

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Amalia, Amalia; Elviwani

    2018-03-01

    Data security is one of the crucial issues in the delivery of information. One of the ways which used to secure the data is by encoding it into something else that is not comprehensible by human beings by using some crypto graphical techniques. The Rivest-Shamir-Adleman (RSA) cryptographic algorithm has been proven robust to secure messages. Since this algorithm uses two different keys (i.e., public key and private key) at the time of encryption and decryption, it is classified as asymmetric cryptography algorithm. Steganography is a method that is used to secure a message by inserting the bits of the message into a larger media such as an image. One of the known steganography methods is End of File (EoF). In this research, the cipher text resulted from the RSA algorithm is compiled into an array form and appended to the end of the image. The result of the EoF is the image which has a line with black gradations under it. This line contains the secret message. This combination of cryptography and steganography in securing the message is expected to increase the security of the message, since the message encryption technique (RSA) is mixed with the data hiding technique (EoF).

  10. Directed branch growth in aligned nanowire arrays.

    PubMed

    Beaudry, Allan L; LaForge, Joshua M; Tucker, Ryan T; Sorge, Jason B; Adamski, Nicholas L; Li, Peng; Taschuk, Michael T; Brett, Michael J

    2014-01-01

    Branch growth is directed along two, three, or four in-plane directions in vertically aligned nanowire arrays using vapor-liquid-solid glancing angle deposition (VLS-GLAD) flux engineering. In this work, a dynamically controlled collimated vapor flux guides branch placement during the self-catalyzed epitaxial growth of branched indium tin oxide nanowire arrays. The flux is positioned to grow branches on select nanowire facets, enabling fabrication of aligned nanotree arrays with L-, T-, or X-branching. In addition, a flux motion algorithm is designed to selectively elongate branches along one in-plane axis. Nanotrees are found to be aligned across large areas by X-ray diffraction pole figure analysis and through branch length and orientation measurements collected over 140 μm(2) from scanning electron microscopy images for each array. The pathway to guided assembly of nanowire architectures with controlled interconnectivity in three-dimensions using VLS-GLAD is discussed.

  11. A New FPGA Architecture of FAST and BRIEF Algorithm for On-Board Corner Detection and Matching.

    PubMed

    Huang, Jingjin; Zhou, Guoqing; Zhou, Xiang; Zhang, Rongting

    2018-03-28

    Although some researchers have proposed the Field Programmable Gate Array (FPGA) architectures of Feature From Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF) algorithm, there is no consideration of image data storage in these traditional architectures that will result in no image data that can be reused by the follow-up algorithms. This paper proposes a new FPGA architecture that considers the reuse of sub-image data. In the proposed architecture, a remainder-based method is firstly designed for reading the sub-image, a FAST detector and a BRIEF descriptor are combined for corner detection and matching. Six pairs of satellite images with different textures, which are located in the Mentougou district, Beijing, China, are used to evaluate the performance of the proposed architecture. The Modelsim simulation results found that: (i) the proposed architecture is effective for sub-image reading from DDR3 at a minimum cost; (ii) the FPGA implementation is corrected and efficient for corner detection and matching, such as the average value of matching rate of natural areas and artificial areas are approximately 67% and 83%, respectively, which are close to PC's and the processing speed by FPGA is approximately 31 and 2.5 times faster than those by PC processing and by GPU processing, respectively.

  12. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Matthies, Larry H.

    1998-01-01

    Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.

  13. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.

  14. High-Frequency Chirp Ultrasound Imaging with an Annular-array for Ophthalmologic and Small-Animal Imaging

    PubMed Central

    Mamou, Jonathan; Aristizábal, Orlando; Silverman, Ronald H.; Ketterling, Jeffrey A.; Turnbull, Daniel H.

    2009-01-01

    High-frequency ultrasound (HFU, > 20 MHz) is an attractive means of obtaining fine-resolution images of biological tissues for ophthalmologic, dermatological, and small-animal imaging applications. Even with current improvements in circuit designs and high-frequency equipment, HFU suffers from two inherent limitations. First, HFU images have a limited depth of field (DOF) because of the short wavelength and the low fixed F-number of conventional HFU transducers. Second, HFU is usually limited to shallow imaging because of the significant attenuation in most tissues. In a previous study, a five-element annular array with a 17-MHz center frequency was excited using chirp-coded signals, and a synthetic-focusing algorithm was used to extend the DOF and increase penetration depth. In the present study, a similar approach with two different five-element annular arrays operating near a center frequency of 35-MHz is implemented and validated. Following validation studies, the chirp-imaging methods were applied to imaging vitreous-hemorrhage mimicking phantoms and mouse embryos. Images of the vitreous phantom showed increased sensitivity using the chirp method compared to a standard monocycle imaging method, and blood droplets could be visualized 4 mm deeper into the phantom. Three-dimensional datasets of 12.5-day-old, mouse-embryo heads were acquired in utero using chirp and conventional excitations. Images were formed and brains ventricles were segmented and reconstructed in three dimensions. The brain-ventricle volumes for the monocycle excitation exhibited artifacts that were not apparent on the chirp-based dataset reconstruction. PMID:19394754

  15. Radiometric calibration of an ultra-compact microbolometer thermal imaging module

    NASA Astrophysics Data System (ADS)

    Riesland, David W.; Nugent, Paul W.; Laurie, Seth; Shaw, Joseph A.

    2017-05-01

    As microbolometer focal plane array formats are steadily decreasing, new challenges arise in correcting for thermal drift in the calibration coefficients. As the thermal mass of the cameras decrease the focal plane becomes more sensitive to external thermal inputs. This paper shows results from a temperature compensation algorithm for characterizing and radiometrically calibrating a FLIR Lepton camera.

  16. High-Resolution Array with Prony, MUSIC, and ESPRIT Algorithms

    DTIC Science & Technology

    1992-08-25

    N avalI Research La bora tory AD-A255 514 Washington, DC 20375-5320 NRL/FR/5324-92-9397 High-resolution Array with Prony, music , and ESPRIT...unlimited t"orm n pprovoiREPORT DOCUMENTATION PAGE OMB. o 0 104 0188 4. TITLE AND SUBTITLE S. FUNDING NUMBERS High-resolution Array with Prony. MUSIC . and...the array high-resolution properties of three algorithms: the Prony algo- rithm, the MUSIC algorithm, and the ESPRIT algorithm. MUSIC has been much

  17. NASA Tech Briefs, December 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics include: A Deep Space Network Portable Radio Science Receiver; Detecting Phase Boundaries in Hard-Sphere Suspensions; Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery; Very-Long-Distance Remote Hearing and Vibrometry; Using GPS to Detect Imminent Tsunamis; Stream Flow Prediction by Remote Sensing and Genetic Programming; Pilotless Frame Synchronization Using LDPC Code Constraints; Radiometer on a Chip; Measuring Luminescence Lifetime With Help of a DSP; Modulation Based on Probability Density Functions; Ku Telemetry Modulator for Suborbital Vehicles; Photonic Links for High-Performance Arraying of Antennas; Reconfigurable, Bi-Directional Flexfet Level Shifter for Low-Power, Rad-Hard Integration; Hardware-Efficient Monitoring of I/O Signals; Video System for Viewing From a Remote or Windowless Cockpit; Spacesuit Data Display and Management System; IEEE 1394 Hub With Fault Containment; Compact, Miniature MMIC Receiver Modules for an MMIC Array Spectrograph; Waveguide Transition for Submillimeter-Wave MMICs; Magnetic-Field-Tunable Superconducting Rectifier; Bonded Invar Clip Removal Using Foil Heaters; Fabricating Radial Groove Gratings Using Projection Photolithography; Gratings Fabricated on Flat Surfaces and Reproduced on Non-Flat Substrates; Method for Measuring the Volume-Scattering Function of Water; Method of Heating a Foam-Based Catalyst Bed; Small Deflection Energy Analyzer for Energy and Angular Distributions; Polymeric Bladder for Storing Liquid Oxygen; Pyrotechnic Simulator/Stray-Voltage Detector; Inventions Utilizing Microfluidics and Colloidal Particles; RuO2 Thermometer for Ultra-Low Temperatures; Ultra-Compact, High-Resolution LADAR System for 3D Imaging; Dual-Channel Multi-Purpose Telescope; Objective Lens Optimized for Wavefront Delivery, Pupil Imaging, and Pupil Ghosting; CMOS Camera Array With Onboard Memory; Quickly Approximating the Distance Between Two Objects; Processing Images of Craters for Spacecraft Navigation; Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System; Rover Slip Validation and Prediction Algorithm; Safety and Quality Training Simulator; Supply-Chain Optimization Template; Algorithm for Computing Particle/Surface Interactions; Cryogenic Pupil Alignment Test Architecture for Aberrated Pupil Images; and Thermal Transport Model for Heat Sink Design.

  18. Processing Cones: A Computational Structure for Image Analysis.

    DTIC Science & Technology

    1981-12-01

    image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a protypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level) and projection operations (downward

  19. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    NASA Astrophysics Data System (ADS)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.

  20. Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C

    2015-05-01

    We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-π,π) in phase reconstruction by interferometers, the reconstructed phase obtained by the plenoptic sensors can be continuous up to a multiple of 2π. When compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to happen and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its object lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper, we will show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms.

  1. Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2015-05-01

    We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-pi, pi) in phase reconstruction by interferometers, the reconstructed phase obtained by the plenoptic sensors can be continuous up to a multiple of 2pi. When compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to happen and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its object lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper we will show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms.

  2. Low-cost, high-speed back-end processing system for high-frequency ultrasound B-mode imaging.

    PubMed

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T; Shung, K Kirk

    2009-07-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution.

  3. Low-Cost, High-Speed Back-End Processing System for High-Frequency Ultrasound B-Mode Imaging

    PubMed Central

    Chang, Jin Ho; Sun, Lei; Yen, Jesse T.; Shung, K. Kirk

    2009-01-01

    For real-time visualization of the mouse heart (6 to 13 beats per second), a back-end processing system involving high-speed signal processing functions to form and display images has been developed. This back-end system was designed with new signal processing algorithms to achieve a frame rate of more than 400 images per second. These algorithms were implemented in a simple and cost-effective manner with a single field-programmable gate array (FPGA) and software programs written in C++. The operating speed of the back-end system was investigated by recording the time required for transferring an image to a personal computer. Experimental results showed that the back-end system is capable of producing 433 images per second. To evaluate the imaging performance of the back-end system, a complete imaging system was built. This imaging system, which consisted of a recently reported high-speed mechanical sector scanner assembled with the back-end system, was tested by imaging a wire phantom, a pig eye (in vitro), and a mouse heart (in vivo). It was shown that this system is capable of providing high spatial resolution images with fast temporal resolution. PMID:19574160

  4. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group, can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse images recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.

  5. NASA Tech Briefs, March 2008

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Topics covered include: WRATS Integrated Data Acquisition System; Breadboard Signal Processor for Arraying DSN Antennas; Digital Receiver Phase Meter; Split-Block Waveguide Polarization Twist for 220 to 325 GHz; Nano-Multiplication-Region Avalanche Photodiodes and Arrays; Tailored Asymmetry for Enhanced Coupling to WGM Resonators; Disabling CNT Electronic Devices by Use of Electron Beams; Conical Bearingless Motor/Generators; Integrated Force Method for Indeterminate Structures; Carbon-Nanotube-Based Electrodes for Biomedical Applications; Compact Directional Microwave Antenna for Localized Heating; Using Hyperspectral Imagery to Identify Turfgrass Stresses; Shaping Diffraction-Grating Grooves to Optimize Efficiency; Low-Light-Shift Cesium Fountain without Mechanical Shutters; Magnetic Compensation for Second-Order Doppler Shift in LITS; Nanostructures Exploit Hybrid-Polariton Resonances; Microfluidics, Chromatography, and Atomic-Force Microscopy; Model of Image Artifacts from Dust Particles; Pattern-Recognition System for Approaching a Known Target; Orchestrator Telemetry Processing Pipeline; Scheme for Quantum Computing Immune to Decoherence; Spin-Stabilized Microsatellites with Solar Concentrators; Phase Calibration of Antenna Arrays Aimed at Spacecraft; Ring Bus Architecture for a Solid-State Recorder; and Image Compression Algorithm Altered to Improve Stereo Ranging.

  6. Rapid wide-field Mueller matrix polarimetry imaging based on four photoelastic modulators with no moving parts.

    PubMed

    Alali, Sanaz; Gribble, Adam; Vitkin, I Alex

    2016-03-01

    A new polarimetry method is demonstrated to image the entire Mueller matrix of a turbid sample using four photoelastic modulators (PEMs) and a charge coupled device (CCD) camera, with no moving parts. Accurate wide-field imaging is enabled with a field-programmable gate array (FPGA) optical gating technique and an evolutionary algorithm (EA) that optimizes imaging times. This technique accurately and rapidly measured the Mueller matrices of air, polarization elements, and turbid phantoms. The system should prove advantageous for Mueller matrix analysis of turbid samples (e.g., biological tissues) over large fields of view, in less than a second.

  7. Multispectral high-resolution hologram generation using orthographic projection images

    NASA Astrophysics Data System (ADS)

    Muniraj, I.; Guo, C.; Sheridan, J. T.

    2016-08-01

    We present a new method of synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPI). A high-resolution multiple perspectives of 3D objects (i.e., two dimensional elemental image array) are captured under incoherent white light using synthetic aperture integral imaging (SAII) technique and their OPIs are obtained respectively. The reference beam is then multiplied with the corresponding OPI and integrated to form a Fourier hologram. Eventually, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally and the results support the feasibility of the proposed method.

  8. Chimenea and other tools: Automated imaging of multi-epoch radio-synthesis data with CASA

    NASA Astrophysics Data System (ADS)

    Staley, T. D.; Anderson, G. E.

    2015-11-01

    In preparing the way for the Square Kilometre Array and its pathfinders, there is a pressing need to begin probing the transient sky in a fully robotic fashion using the current generation of radio telescopes. Effective exploitation of such surveys requires a largely automated data-reduction process. This paper introduces an end-to-end automated reduction pipeline, AMIsurvey, used for calibrating and imaging data from the Arcminute Microkelvin Imager Large Array. AMIsurvey makes use of several component libraries which have been packaged separately for open-source release. The most scientifically significant of these is chimenea, which implements a telescope-agnostic algorithm for automated imaging of pre-calibrated multi-epoch radio-synthesis data, of the sort typically acquired for transient surveys or follow-up. The algorithm aims to improve upon standard imaging pipelines by utilizing iterative RMS-estimation and automated source-detection to avoid so called 'Clean-bias', and makes use of CASA subroutines for the underlying image-synthesis operations. At a lower level, AMIsurvey relies upon two libraries, drive-ami and drive-casa, built to allow use of mature radio-astronomy software packages from within Python scripts. While targeted at automated imaging, the drive-casa interface can also be used to automate interaction with any of the CASA subroutines from a generic Python process. Additionally, these packages may be of wider technical interest beyond radio-astronomy, since they demonstrate use of the Python library pexpect to emulate terminal interaction with an external process. This approach allows for rapid development of a Python interface to any legacy or externally-maintained pipeline which accepts command-line input, without requiring alterations to the original code.

  9. Single scan parameterization of space-variant point spread functions in image space via a printed array: the impact for two PET/CT scanners.

    PubMed

    Kotasidis, F A; Matthews, J C; Angelis, G I; Noonan, P J; Jackson, A; Price, P; Lionheart, W R; Reader, A J

    2011-05-21

    Incorporation of a resolution model during statistical image reconstruction often produces images of improved resolution and signal-to-noise ratio. A novel and practical methodology to rapidly and accurately determine the overall emission and detection blurring component of the system matrix using a printed point source array within a custom-made Perspex phantom is presented. The array was scanned at different positions and orientations within the field of view (FOV) to examine the feasibility of extrapolating the measured point source blurring to other locations in the FOV and the robustness of measurements from a single point source array scan. We measured the spatially-variant image-based blurring on two PET/CT scanners, the B-Hi-Rez and the TruePoint TrueV. These measured spatially-variant kernels and the spatially-invariant kernel at the FOV centre were then incorporated within an ordinary Poisson ordered subset expectation maximization (OP-OSEM) algorithm and compared to the manufacturer's implementation using projection space resolution modelling (RM). Comparisons were based on a point source array, the NEMA IEC image quality phantom, the Cologne resolution phantom and two clinical studies (carbon-11 labelled anti-sense oligonucleotide [(11)C]-ASO and fluorine-18 labelled fluoro-l-thymidine [(18)F]-FLT). Robust and accurate measurements of spatially-variant image blurring were successfully obtained from a single scan. Spatially-variant resolution modelling resulted in notable resolution improvements away from the centre of the FOV. Comparison between spatially-variant image-space methods and the projection-space approach (the first such report, using a range of studies) demonstrated very similar performance with our image-based implementation producing slightly better contrast recovery (CR) for the same level of image roughness (IR). These results demonstrate that image-based resolution modelling within reconstruction is a valid alternative to projection-based modelling, and that, when using the proposed practical methodology, the necessary resolution measurements can be obtained from a single scan. This approach avoids the relatively time-consuming and involved procedures previously proposed in the literature.

  10. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    PubMed Central

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-01-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment. PMID:27922592

  11. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    NASA Astrophysics Data System (ADS)

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-12-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  12. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification.

    PubMed

    Bradbury, Kyle; Saboo, Raghav; L Johnson, Timothy; Malof, Jordan M; Devarajan, Arjun; Zhang, Wuming; M Collins, Leslie; G Newell, Richard

    2016-12-06

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  13. Imaging through water turbulence with a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2016-09-01

    A plenoptic sensor can be used to improve the image formation process in a conventional camera. Through this process, the conventional image is mapped to an image array that represents the image's photon paths along different angular directions. Therefore, it can be used to resolve imaging problems where severe distortion happens. Especially for objects observed at moderate range (10m to 200m) through turbulent water, the image can be twisted to be entirely unrecognizable and correction algorithms need to be applied. In this paper, we show how to use a plenoptic sensor to recover an unknown object in line of sight through significant water turbulence distortion. In general, our approach can be applied to both atmospheric turbulence and water turbulence conditions.

  14. Detection of pavement cracks using tiled fuzzy Hough transform

    NASA Astrophysics Data System (ADS)

    Mathavan, Senthan; Vaheesan, Kanapathippillai; Kumar, Akash; Chandrakumar, Chanjief; Kamal, Khurram; Rahman, Mujib; Stonecliffe-Jones, Martyn

    2017-09-01

    Surface cracks can be the bellwether of the failure of a road. Hence, crack detection is indispensable for the condition monitoring and quality control of road surfaces. Pavement images have high levels of intensity variation and texture content; hence, the crack detection is generally difficult. Moreover, shallow cracks are very low contrast, making their detection difficult. Therefore, studies on pavement crack detection are active even after years of research. The fuzzy Hough transform is employed, for the first time, to detect cracks from pavement images. A careful consideration is given to the fact that cracks consist of near straight segments embedded in a surface of considerable texture. In this regard, the fuzzy part of the algorithm tackles the segments that are not perfectly straight. Moreover, tiled detection helps reduce the contribution of texture and noise pixels to the accumulator array. The proposed algorithm is compared against a state-of-the-art algorithm for a number of crack datasets, demonstrating its strengths. Precision and recall values of more than 75% are obtained, on different image sets of varying textures and other effects, captured by industrial pavement imagers. The paper also recommends numerical values for parameters used in the proposed method.

  15. Terahertz holography for imaging amplitude and phase objects.

    PubMed

    Hack, Erwin; Zolliker, Peter

    2014-06-30

    A non-monochromatic THz Quantum Cascade Laser and an uncooled micro-bolometer array detector with VGA resolution are used in a beam-splitter free holographic set-up to measure amplitude and phase objects in transmission. Phase maps of the diffraction pattern are retrieved using the Fourier transform carrier fringe method; while a Fresnel-Kirchhoff back propagation algorithm is used to reconstruct the complex object image. A lateral resolution of 280 µm and a relative phase sensitivity of about 0.5 rad are estimated from reconstructed images of a metallic Siemens star and a polypropylene test structure, respectively. Simulations corroborate the experimental results.

  16. HIFU Monitoring and Control with Dual-Mode Ultrasound Arrays

    NASA Astrophysics Data System (ADS)

    Casper, Andrew Jacob

    The biological effects of high-intensity focused ultrasound (HIFU) have been known and studied for decades. HIFU has been shown capable of treating a wide variety of diseases and disorders. However, despite its demonstrated potential, HIFU has been slow to gain clinical acceptance. This is due, in part, to the difficulty associated with robustly monitoring and controlling the delivery of the HIFU energy. The non-invasive nature of the surgery makes the assessment of treatment progression difficult, leading to long treatment times and a significant risk of under treatment. This thesis research develops new techniques and systems for robustly monitoring HIFU therapies for the safe and efficacious delivery of the intended treatment. Systems and algorithms were developed for the two most common modes of HIFU delivery systems: single-element and phased array applicators. Delivering HIFU with a single element transducer is a widely used technique in HIFU therapies. The simplicity of a single element offers many benefits in terms of cost and overall system complexity. Typical monitoring schemes rely on an external device (e.g. diagnostic ultrasound or MRI) to assess the progression of therapy. The research presented in this thesis explores using the same element to both deliver and monitor the HIFU therapy. The use of a dual-mode ultrasound transducer (DMUT) required the development of an FPGA based single-channel arbitrary waveform generator and high-speed data acquisition unit. Data collected from initial uncontrolled ablations led to the development of monitoring and control algorithms which were implemented directly on the FPGA. Close integration between the data acquisition and arbitrary waveform units allowed for fast, low latency control over the ablation process. Results are presented that demonstrate control of HIFU therapies over a broad range of intensities and in multiple in vitro tissues. The second area of investigation expands the DMUT research to an ultrasound phased-array. The phased-array allows for electronic steering of the HIFU focus and imaging of the acoustic medium. Investigating the dual-mode ultrasound array (DMUA) required the design and construction of a novel ultrasound-guided focused ultrasound (USgFUS) platform. The platform consisted of custom hardware designed for the unique requirements of operating a phased-array in both therapeutic and imaging modes. The platform also required the development of FPGA based signal processing and GPU based beamforming algorithms for online monitoring of the therapy process. The results presented in this thesis represent the first demonstration of a real-time USgFUS platform based around a DMUA. Experimental imaging and therapy results from series of animal experiments, including a 12 animal GLP study, are presented. In addition, in vitro control results, which build upon the DMUT work, are presented.

  17. Wide-Field Imaging Interferometry Spatial-Spectral Image Synthesis Algorithms

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.; Leisawitz, David T.; Rinehart, Stephen A.; Memarsadeghi, Nargess; Sinukoff, Evan J.

    2012-01-01

    Developed is an algorithmic approach for wide field of view interferometric spatial-spectral image synthesis. The data collected from the interferometer consists of a set of double-Fourier image data cubes, one cube per baseline. These cubes are each three-dimensional consisting of arrays of two-dimensional detector counts versus delay line position. For each baseline a moving delay line allows collection of a large set of interferograms over the 2D wide field detector grid; one sampled interferogram per detector pixel per baseline. This aggregate set of interferograms, is algorithmically processed to construct a single spatial-spectral cube with angular resolution approaching the ratio of the wavelength to longest baseline. The wide field imaging is accomplished by insuring that the range of motion of the delay line encompasses the zero optical path difference fringe for each detector pixel in the desired field-of-view. Each baseline cube is incoherent relative to all other baseline cubes and thus has only phase information relative to itself. This lost phase information is recovered by having point, or otherwise known, sources within the field-of-view. The reference source phase is known and utilized as a constraint to recover the coherent phase relation between the baseline cubes and is key to the image synthesis. Described will be the mathematical formalism, with phase referencing and results will be shown using data collected from NASA/GSFC Wide-Field Imaging Interferometry Testbed (WIIT).

  18. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have lead to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance non-uniformity correction (NUC) techniques are applied. Standard calibration based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation alternates to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance resulting in less residual non-uniformity as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras. The initial results show, using a third order polynomial with 16-bit precision, significant improvement over the one and two-point correction algorithms. All algorithm have been implemented in software with satisfactory results and the third order gain equalization non-uniformity correction algorithm has been implemented in hardware.

  19. Smart concrete slabs with embedded tubular PZT transducers for damage detection

    NASA Astrophysics Data System (ADS)

    Gao, Weihang; Huo, Linsheng; Li, Hongnan; Song, Gangbing

    2018-02-01

    The objective of this study is to develop a new concept and methodology of smart concrete slab (SCS) with embedded tubular lead zirconate titanate transducer array for image based damage detection. Stress waves, as the detecting signals, are generated by the embedded tubular piezoceramic transducers in the SCS. Tubular piezoceramic transducers are used due to their capacity of generating radially uniform stress waves in a two-dimensional concrete slab (such as bridge decks and walls), increasing the monitoring range. A circular type delay-and-sum (DAS) imaging algorithm is developed to image the active acoustic sources based on the direct response received by each sensor. After the scattering signals from the damage are obtained by subtracting the baseline response of the concrete structures from those of the defective ones, the elliptical type DAS imaging algorithm is employed to process the scattering signals and reconstruct the image of the damage. Finally, two experiments, including active acoustic source monitoring and damage imaging for concrete structures, are carried out to illustrate and demonstrate the effectiveness of the proposed method.

  20. Supersampling multiframe blind deconvolution resolution enhancement of adaptive-optics-compensated imagery of LEO satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2000-10-01

    A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described which can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive optics compensated imagery taken by the Starfire Optical Range 3.5 meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques which includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptive compensated PSF to be used to partially eliminate the aliasing effects associated with sub- Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss which occurs when imaging in wide FOV modes.

  1. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array.

    PubMed

    Yan, Gang; Zhou, Li

    2018-02-21

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.

  2. Solar thematic maps for space weather operations

    USGS Publications Warehouse

    Rigler, E. Joshua; Hill, Steven M.; Reinard, Alysha A.; Steenburgh, Robert A.

    2012-01-01

    Thematic maps are arrays of labels, or "themes", associated with discrete locations in space and time. Borrowing heavily from the terrestrial remote sensing discipline, a numerical technique based on Bayes' theorem captures operational expertise in the form of trained theme statistics, then uses this to automatically assign labels to solar image pixels. Ultimately, regular thematic maps of the solar corona will be generated from high-cadence, high-resolution SUVI images, the solar ultraviolet imager slated to fly on NOAA's next-generation GOES-R series of satellites starting ~2016. These thematic maps will not only provide quicker, more consistent synoptic views of the sun for space weather forecasters, but digital thematic pixel masks (e.g., coronal hole, active region, flare, etc.), necessary for a new generation of operational solar data products, will be generated. This paper presents the mathematical underpinnings of our thematic mapper, as well as some practical algorithmic considerations. Then, using images from the Solar Dynamics Observatory (SDO) Advanced Imaging Array (AIA) as test data, it presents results from validation experiments designed to ascertain the robustness of the technique with respect to differing expert opinions and changing solar conditions.

  3. An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array

    PubMed Central

    Zhou, Li

    2018-01-01

    This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310

  4. Architecture and prototypical implementation of a semantic querying system for big Earth observation image bases

    PubMed Central

    Tiede, Dirk; Baraldi, Andrea; Sudmanns, Martin; Belgiu, Mariana; Lang, Stefan

    2017-01-01

    ABSTRACT Spatiotemporal analytics of multi-source Earth observation (EO) big data is a pre-condition for semantic content-based image retrieval (SCBIR). As a proof of concept, an innovative EO semantic querying (EO-SQ) subsystem was designed and prototypically implemented in series with an EO image understanding (EO-IU) subsystem. The EO-IU subsystem is automatically generating ESA Level 2 products (scene classification map, up to basic land cover units) from optical satellite data. The EO-SQ subsystem comprises a graphical user interface (GUI) and an array database embedded in a client server model. In the array database, all EO images are stored as a space-time data cube together with their Level 2 products generated by the EO-IU subsystem. The GUI allows users to (a) develop a conceptual world model based on a graphically supported query pipeline as a combination of spatial and temporal operators and/or standard algorithms and (b) create, save and share within the client-server architecture complex semantic queries/decision rules, suitable for SCBIR and/or spatiotemporal EO image analytics, consistent with the conceptual world model. PMID:29098143

  5. Ionospheric Specifications for SAR Interferometry (ISSI)

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Chapman, Bruce D; Freeman, Anthony; Szeliga, Walter; Buckley, Sean M.; Rosen, Paul A.; Lavalle, Marco

    2013-01-01

    The ISSI software package is designed to image the ionosphere from space by calibrating and processing polarimetric synthetic aperture radar (PolSAR) data collected from low Earth orbit satellites. Signals transmitted and received by a PolSAR are subject to the Faraday rotation effect as they traverse the magnetized ionosphere. The ISSI algorithms combine the horizontally and vertically polarized (with respect to the radar system) SAR signals to estimate Faraday rotation and ionospheric total electron content (TEC) with spatial resolutions of sub-kilometers to kilometers, and to derive radar system calibration parameters. The ISSI software package has been designed and developed to integrate the algorithms, process PolSAR data, and image as well as visualize the ionospheric measurements. A number of tests have been conducted using ISSI with PolSAR data collected from various latitude regions using the phase array-type L-band synthetic aperture radar (PALSAR) onboard Japan Aerospace Exploration Agency's Advanced Land Observing Satellite mission, and also with Global Positioning System data. These tests have demonstrated and validated SAR-derived ionospheric images and data correction algorithms.

  6. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms for restoring and superresolving various imagery data captured by diffraction-limited sensing operations are also presented.

  7. Radar Array Processing of Experimental Data Via the Scan-MUSIC Algorithm

    DTIC Science & Technology

    2004-06-01

    Radar Array Processing of Experimental Data Via the Scan- MUSIC Algorithm by Canh Ly ARL-TR-3135 June 2004...Processing of Experimental Data Via the Scan- MUSIC Algorithm Canh Ly Sensors and Electron Devices Directorate, ARL...NUMBER 5b. GRANT NUMBER 4. TITLE AND SUBTITLE Radar Array Processing of Experimental Data Via the Scan- MUSIC Algorithm 5c. PROGRAM ELEMENT NUMBER 5d

  8. Image-Based Focusing

    NASA Astrophysics Data System (ADS)

    Selker, Ted

    1983-05-01

    Lens focusing using a hardware model of a retina (Reticon RL256 light sensitive array) with a low cost processor (8085 with 512 bytes of ROM and 512 bytes of RAM) was built. This system was developed and tested on a variety of visual stimuli to demonstrate that: a)an algorithm which moves a lens to maximize the sum of the difference of light level on adjacent light sensors will converge to best focus in all but contrived situations. This is a simpler algorithm than any previously suggested; b) it is feasible to use unmodified video sensor arrays with in-expensive processors to aid video camera use. In the future, software could be developed to extend the processor's usefulness, possibly to track an actor by panning and zooming to give a earners operator increased ease of framing; c) lateral inhibition is an adequate basis for determining best focus. This supports a simple anatomically motivated model of how our brain focuses our eyes.

  9. Cortical Enhanced Tissue Segmentation of Neonatal Brain MR Images Acquired by a Dedicated Phased Array Coil

    PubMed Central

    Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang

    2010-01-01

    The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially for brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combing images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter for generation of cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with additional two population atlas based segmentation methods. Results show that the proposed method is capable of segmenting the neonatal brain with the highest accuracy, compared to other two methods. PMID:20862268

  10. Albion: the UK 3rd generation high-performance thermal imaging programme

    NASA Astrophysics Data System (ADS)

    McEwen, R. K.; Lupton, M.; Lawrence, M.; Knowles, P.; Wilson, M.; Dennis, P. N. J.; Gordon, N. T.; Lees, D. J.; Parsons, J. F.

    2007-04-01

    The first generation of high performance thermal imaging sensors in the UK was based on two axis opto-mechanical scanning systems and small (4-16 element) arrays of the SPRITE detector, developed during the 1970s. Almost two decades later, a 2nd Generation system, STAIRS C was introduced, based on single axis scanning and a long linear array of approximately 3000 elements. The UK has now begun the industrialisation of 3 rd Generation High Performance Thermal Imaging under a programme known as "Albion". Three new high performance cadmium mercury telluride arrays are being manufactured. The CMT material is grown by MOVPE on low cost substrates and bump bonded to the silicon read out circuit (ROIC). To maintain low production costs, all three detectors are designed to fit with existing standard Integrated Detector Cooling Assemblies (IDCAs). The two largest focal planes are conventional devices operating in the MWIR and LWIR spectral bands. A smaller format LWIR device is also described which has a smart ROIC, enabling much longer stare times than are feasible with conventional pixel circuits, thus achieving very high sensitivity. A new reference surface technology for thermal imaging sensors is described, based on Negative Luminescence (NL), which offers several advantages over conventional peltier references, improving the quality of the Non-Uniformity Correction (NUC) algorithms.

  11. Application of wavefield imaging to characterize scattering from artificial and impact damage in composite laminate panels

    NASA Astrophysics Data System (ADS)

    Williams, Westin B.; Michaels, Thomas E.; Michaels, Jennifer E.

    2018-04-01

    Composite materials used for aerospace applications are highly susceptible to impacts, which can result in barely visible delaminations. Reliable and fast detection of such damage is needed before structural failures occur. One approach is to use ultrasonic guided waves generated from a sparse array consisting of permanently mounted or embedded transducers for performing structural health monitoring. This array can detect introduction of damage after baseline subtraction, and also provide localization and characterization information via the minimum variance imaging algorithm. Imaging performance can vary considerably depending upon where damage is located with respect to the array; however, prior work has shown that knowledge of expected scattering can improve imaging consistency for artificial damage at various locations. In this study, anisotropic material attenuation and wave speed are estimated as a function of propagation angle using wavefield data recorded along radial lines at multiple angles with respect to an omnidirectional guided wave source. Additionally, full wavefield data are recorded before and after the introduction of artificial and impact damage so that wavefield baseline subtraction may be applied. 3-D filtering techniques are then used to reduce noise and isolate scattered waves. A model for estimating scattering of a circular defect is developed and scattering estimates for both artificial and impact damage are presented and compared.

  12. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  13. Maximum-likelihood-based extended-source spatial acquisition and tracking for planetary optical communications

    NASA Astrophysics Data System (ADS)

    Tsou, Haiping; Yan, Tsun-Yee

    1999-04-01

    This paper describes an extended-source spatial acquisition and tracking scheme for planetary optical communications. This scheme uses the Sun-lit Earth image as the beacon signal, which can be computed according to the current Sun-Earth-Probe angle from a pre-stored Earth image or a received snapshot taken by other Earth-orbiting satellite. Onboard the spacecraft, the reference image is correlated in the transform domain with the received image obtained from a detector array, which is assumed to have each of its pixels corrupted by an independent additive white Gaussian noise. The coordinate of the ground station is acquired and tracked, respectively, by an open-loop acquisition algorithm and a closed-loop tracking algorithm derived from the maximum likelihood criterion. As shown in the paper, the optimal spatial acquisition requires solving two nonlinear equations, or iteratively solving their linearized variants, to estimate the coordinate when translation in the relative positions of onboard and ground transceivers is considered. Similar assumption of linearization leads to the closed-loop spatial tracking algorithm in which the loop feedback signals can be derived from the weighted transform-domain correlation. Numerical results using a sample Sun-lit Earth image demonstrate that sub-pixel resolutions can be achieved by this scheme in a high disturbance environment.

  14. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    NASA Astrophysics Data System (ADS)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms as well as the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade composed of motion detection, depth computation, and edge detection, can have a significant impact in reducing the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate data search reduction of up to 95%, which results in the system being able to process up to 50 1024×768 pixels images per second with a significantly reduced number of false positives.

  15. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface

    PubMed Central

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-01-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an imaging reconstruction algorithm. The metasurface with spatio-temporal dispersive property ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can convert efficiently the detailed spatial information of the probed object into one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the desired imaging resolution is related to the distance between the object and metasurface. When the object is placed in the vicinity of the metasurface, the super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition, high-/super-resolution images without employing expensive hardware (e.g. mechanical scanner, antenna array, etc.). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging. PMID:27246668

  16. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain, phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates unknown parameters of the array model, i. e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm is improved with the increasing of SNR or number of snapshots. The W2D-MUSIC algorithm based on the improved array model is implemented to locate sound sources. These two algorithms compose the robust sound source approach. The more accurate steering vectors can be provided for further processing such as adaptive beamforming algorithm. Numerical examples confirm effectiveness of this proposed approach.

  17. Fully digital routing logic for single-photon avalanche diode arrays in highly efficient time-resolved imaging

    NASA Astrophysics Data System (ADS)

    Cominelli, Alessandro; Acconcia, Giulia; Ghioni, Massimo; Rech, Ivan

    2018-03-01

    Time-correlated single-photon counting (TCSPC) is a powerful optical technique, which permits recording fast luminous signals with picosecond precision. Unfortunately, given its repetitive nature, TCSPC is recognized as a relatively slow technique, especially when a large time-resolved image has to be recorded. In recent years, there has been a fast trend toward the development of TCPSC imagers. Unfortunately, present systems still suffer from a trade-off between number of channels and performance. Even worse, the overall measurement speed is still limited well below the saturation of the transfer bandwidth toward the external processor. We present a routing algorithm that enables a smart connection between a 32×32 detector array and five shared high-performance converters able to provide an overall conversion rate up to 10 Gbit/s. The proposed solution exploits a fully digital logic circuit distributed in a tree structure to limit the number and length of interconnections, which is a major issue in densely integrated circuits. The behavior of the logic has been validated by means of a field-programmable gate array, while a fully integrated prototype has been designed in 180-nm technology and analyzed by means of postlayout simulations.

  18. Combinatorial algorithms for design of DNA arrays.

    PubMed

    Hannenhalli, Sridhar; Hubell, Earl; Lipshutz, Robert; Pevzner, Pavel A

    2002-01-01

    Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (border length minimization problem) and reducing the complexity of masks (mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20-30% as compared to a standard array design under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.

  19. Large-N correlator systems for low frequency radio astronomy

    NASA Astrophysics Data System (ADS)

    Foster, Griffin

    Low frequency radio astronomy has entered a second golden age driven by the development of a new class of large-N interferometric arrays. The low frequency array (LOFAR) and a number of redshifted HI Epoch of Reionization (EoR) arrays are currently undergoing commission and regularly observing. Future arrays of unprecedented sensitivity and resolutions at low frequencies, such as the square kilometer array (SKA) and the hydrogen epoch of reionization array (HERA), are in development. The combination of advancements in specialized field programmable gate array (FPGA) hardware for signal processing, computing and graphics processing unit (GPU) resources, and new imaging and calibration algorithms has opened up the oft underused radio band below 300 MHz. These interferometric arrays require efficient implementation of digital signal processing (DSP) hardware to compute the baseline correlations. FPGA technology provides an optimal platform to develop new correlators. The significant growth in data rates from these systems requires automated software to reduce the correlations in real time before storing the data products to disk. Low frequency, widefield observations introduce a number of unique calibration and imaging challenges. The efficient implementation of FX correlators using FPGA hardware is presented. Two correlators have been developed, one for the 32 element BEST-2 array at Medicina Observatory and the other for the 96 element LOFAR station at Chilbolton Observatory. In addition, calibration and imaging software has been developed for each system which makes use of the radio interferometry measurement equation (RIME) to derive calibrations. A process for generating sky maps from widefield LOFAR station observations is presented. Shapelets, a method of modelling extended structures such as resolved sources and beam patterns has been adapted for radio astronomy use to further improve system calibration. Scaling of computing technology allows for the development of larger correlator systems, which in turn allows for improvements in sensitivity and resolution. This requires new calibration techniques which account for a broad range of systematic effects.

  20. Maximum likelihood positioning algorithm for high-resolution PET scanners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick

    2016-06-15

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods:more » The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II {sup D} PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley increased up to a factor of 3 for 2-mm-diameter phantom rods by rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.« less

  1. Damage imaging using Lamb waves for SHM applications

    NASA Astrophysics Data System (ADS)

    Stepinski, Tadeusz; Ambroziński, Łukasz; Uhl, Tadeusz

    2015-03-01

    2-D ultrasonic arrays, due to their beam-steering capability and all azimuth angle coverage are a very promising tool for the inspection of plate-like structures using Lamb waves (LW). Contrary to the classical linear phased arrays (PAs) the 2D arrays enable unequivocal defect localization and they are even capable of mode selectivity of the received LWs . Recently, it has been shown that multistatic synthetic focusing (SF) algorithms applied for 2D arrays are much more effective than the classical phase array mode commonly used in NDT. The multistatic SF assumes multiple transmissions of elements in a transmitting aperture and off-line processing of the data acquired by a receiving aperture. In the simplest implementation of the technique, only a single multiplexed input and a number of output channels are required, which results in significant hardware simplification compared with the PA systems. On the one hand implementation of the multistatic SF to 2D arrays creates additional degrees of freedom during the design of the array topology, which complicates the array design process. On the other hand, it enables designing sparse arrays with performance similar to that of the fully populated dense arrays. In this paper we present a general systematic approach to the design and optimization of imaging systems based on the 2D array operating in the multistatic mode. We start from presenting principles of the SF schemes applied to LW imaging. Then, we outline the coarray concept and demonstrate how it can be used for reducing number of elements of the 2D arrays. Finally, efficient tools for the investigation and experimental verification of the designed 2D array prototypes are presented. The first step in the investigation is theoretical evaluation performed using frequency-dependent structure transfer function (STF), which enables approximate simulation of an array excited with a tone-burst in a dispersive medium. Finally, we show how scanning laser vibrometer, sensing waves in multiple points corresponding to the locations of the 2D receiving array elements, can be used as a tool for rapid experimental verification of the developed topologies. The presented methods are discussed in terms of the beampatterns and sparse versions of the fully populated array topologies are be presented. The effect of apodization applied to the array elements is also investigated. Both simulated and experimental results are included.

  2. New inverse synthetic aperture radar algorithm for translational motion compensation

    NASA Astrophysics Data System (ADS)

    Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.

    1991-10-01

    Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.

  3. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field- Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    PubMed

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field- programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  4. Radio Observations of the Ionosphere From an Imaging Array and a CubeSat

    NASA Astrophysics Data System (ADS)

    Isham, B.; Gustavsson, B.; Bullett, T. W.; Bergman, J. E. S.; Rincón-Charris, A.; Bruhn, F.; Funk, P.

    2017-12-01

    The ionosphere is a source of many radio emissions in the various low-frequency, medium-frequency, and high-frequency bands (0 to 30 MHz). In addition to natural radio emissions, artificial emissions can be stimulated using high-power radiowave ionospheric modification facilities. Two complementary projects are underway for the purpose of improving our knowledge of the processes of radio emissions from the ionosphere. One project is the Aguadilla radio array, located in northwestern Puerto Rico. The Aguadilla array is intended to produce 2 to 25 MHz radio images of the ionosphere, as well as to perform bistatic radar imaging of the ionosphere over Puerto Rico. The array will consist of multiple antenna elements, each of which is a single active (electromagnetically short) crossed electric dipole. The elements are arranged within a roughly 200 by 300-meter core array, in a semi-random pattern providing an optimal distribution of baseline vectors, with 6-meter minimum spacing to eliminate spacial aliasing. In addition, several elements are arranged in a partial ring around the central core, providing a roughly four times expanded region in u-v space for improved image resolution and quality. Phase is maintained via cabled connections to a central location. A remote array is also being developed, in which phase is maintained between elements by through the use of GPS-disciplined rubidium clocks. The other project involves the GimmeRF radio instrument, designed for 0.3 to 30 MHz vector observation of the radio electric field, and planned for launch in 2020 on a CubeSat. The data rate that can be sustained by GimmeRF far exceeds any available communication strategy. By exploiting fast on-board computing and efficient artificial intelligence (AI) algorithms for analysis and data selection, the usage of the telemetry link can be optimized and value added to the mission. Radio images recorded by the radio array from below the ionosphere can be directly compared with the radio data received by GimmeRF in the topside ionosphere, with the goal of better understanding the geometry and therefore the mechanisms of the radio emission processes.

  5. Fizeau interferometric cophasing of segmented mirrors: experimental validation.

    PubMed

    Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter

    2014-06-02

    We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9% that served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientic imaging cameras making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would co-phase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.

  6. Towards real-time non contact spatial resolved oxygenation monitoring using a multi spectral filter array camera in various light conditions

    NASA Astrophysics Data System (ADS)

    Bauer, Jacob R.; van Beekum, Karlijn; Klaessens, John; Noordmans, Herke Jan; Boer, Christa; Hardeberg, Jon Y.; Verdaasdonk, Rudolf M.

    2018-02-01

    Non contact spatial resolved oxygenation measurements remain an open challenge in the biomedical field and non contact patient monitoring. Although point measurements are the clinical standard till this day, regional differences in the oxygenation will improve the quality and safety of care. Recent developments in spectral imaging resulted in spectral filter array cameras (SFA). These provide the means to acquire spatial spectral videos in real-time and allow a spatial approach to spectroscopy. In this study, the performance of a 25 channel near infrared SFA camera was studied to obtain spatial oxygenation maps of hands during an occlusion of the left upper arm in 7 healthy volunteers. For comparison a clinical oxygenation monitoring system, INVOS, was used as a reference. In case of the NIRS SFA camera, oxygenation curves were derived from 2-3 wavelength bands with a custom made fast analysis software using a basic algorithm. Dynamic oxygenation changes were determined with the NIR SFA camera and INVOS system at different regional locations of the occluded versus non-occluded hands and showed to be in good agreement. To increase the signal to noise ratio, algorithm and image acquisition were optimised. The measurement were robust to different illumination conditions with NIR light sources. This study shows that imaging of relative oxygenation changes over larger body areas is potentially possible in real time.

  7. SMI adaptive antenna arrays for weak interfering signals. [Sample Matrix Inversion

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.

    1986-01-01

    The performance of adaptive antenna arrays in the presence of weak interfering signals (below thermal noise) is studied. It is shown that a conventional adaptive antenna array sample matrix inversion (SMI) algorithm is unable to suppress such interfering signals. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is redefined such that the effect of thermal noise on the weights of adaptive arrays is reduced. Thus, the weights are dictated by relatively weak signals. It is shown that the modified algorithm provides the desired interference protection.

  8. A class of parallel algorithms for computation of the manipulator inertia matrix

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1989-01-01

    Parallel and parallel/pipeline algorithms for computation of the manipulator inertia matrix are presented. An algorithm based on composite rigid-body spatial inertia method, which provides better features for parallelization, is used for the computation of the inertia matrix. Two parallel algorithms are developed which achieve the time lower bound in computation. Also described is the mapping of these algorithms with topological variation on a two-dimensional processor array, with nearest-neighbor connection, and with cardinality variation on a linear processor array. An efficient parallel/pipeline algorithm for the linear array was also developed, but at significantly higher efficiency.

  9. TH-A-BRF-11: Image Intensity Non-Uniformities Between MRI Simulation and Diagnostic MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, E

    2014-06-15

    Purpose: MRI simulation for MRI-based radiotherapy demands that patients be setup in treatment position, which frequently involves use of alternative radiofrequency (RF) coil configurations to accommodate immobilized patients. However, alternative RF coil geometries may exacerbate image intensity non-uniformities (IINU) beyond those observed in diagnostic MRI, which may challenge image segmentation and registration accuracy as well as confound studies assessing radiotherapy response when MR simulation images are used as baselines for evaluation. The goal of this work was to determine whether differences in IINU exist between MR simulation and diagnostic MR images. Methods: ACR-MRI phantom images were acquired at 3T usingmore » a spin-echo sequence (TE/TR:20/500ms, rBW:62.5kHz, TH/skip:5/5mm). MR simulation images were obtained by wrapping two flexible phased-array RF coils around the phantom. Diagnostic MR images were obtained by placing the phantom into a commercial phased-array head coil. Pre-scan normalization was enabled in both cases. Images were transferred offline and corrected for IINU using the MNI N3 algorithm. Coefficients of variation (CV=σ/μ) were calculated for each slice. Wilcoxon matched-pairs and Mann-Whitney tests compared CV values between original and N3 images and between MR simulation and diagnostic MR images. Results: Significant differences in CV were detected between original and N3 images in both MRI simulation and diagnostic MRI groups (p=0.010, p=0.010). In addition, significant differences in CV were detected between original MR simulation and original and N3 diagnostic MR images (p=0.0256, p=0.0016). However, no significant differences in CV were detected between N3 MR simulation images and original or N3 diagnostic MR images, demonstrating the importance of correcting MR simulation images beyond pre-scan normalization prior to use in radiotherapy. Conclusions: Alternative RF coil configurations used in MRI simulation can Result in significant IINU differences compared to diagnostic MR images. The MNI N3 algorithm reduced MR simulation IINU to levels observed in diagnostic MR images. Funding provided by Advancing a Healthier Wisconsin.« less

  10. SU-F-T-41: 3D MTP-TRUS for Prostate Implant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, P

    Purpose: Prostate brachytherapy is an effective treatment for early prostate cancer. The current prostate implant is limited to using 2D transrectal ultrassound (TRUS) or machenical motor driven 2D array either in the end or on the side. Real-time 3D images can improve the accuracy of the guidance of prostate implant. The concept of our system is to allow realtime full visualization of the entire prostate with the multiple transverse scan. Methods: The prototype of 3D Multiple-Transverse-Plane Transrectal Ultrasound probe (MTP-TRUS) has been designed by us and manufactured by Blatek inc. It has 7 convex linear arrays and each array hasmore » 96 elements. It is connected to cQuest Fire bird research system (Cephasonics inc.) which is a flexible and configurable ultrasound-development platform. The size of cQuest Firebird system is compact and supports the real-time wireless image transferring. A relay based mux board is designed for the cQuest Firebird system to be able to connect 672 elements. Results: The center frequency of probe is 6MHz±10%. The diameter of probe is 3cm and the length is 20cm. The element pitch is 0.205 mm. Array focus is 30mm and spacing 1.6cm. The beam data for each array was measured and met our expectation. The interface board of MTP-TURS is made and able to connect to cQuest Firebird system. The image display interface is still under the development. Our real-time needle tracking algorithm will be implemented too. Conclusion: Our MTP-TRUS system for prostate implant will be able to acquire real-time 3D images of prostate and do the real-time needle segmentation and tracking. The system is compact and have wireless function.« less

  11. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array.

    PubMed

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-03-11

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile's rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm.

  12. Alignment of sensor arrays in optical instruments using a geometric approach.

    PubMed

    Sawyer, Travis W

    2018-02-01

    Alignment of sensor arrays in optical instruments is critical to maximize the instrument's performance. While many commercial systems use standardized mounting threads for alignment, custom systems require specialized equipment and alignment procedures. These alignment procedures can be time-consuming, dependent on operator experience, and have low repeatability. Furthermore, each alignment solution must be considered on a case-by-case basis, leading to additional time and resource cost. Here I present a method to align a sensor array using geometric analysis. By imaging a grid pattern of dots, I show that it is possible to calculate the misalignment for a sensor in five degrees of freedom simultaneously. I first test the approach by simulating different cases of misalignment using Zemax before applying the method to experimentally acquired data of sensor misalignment for an echelle spectrograph. The results show that the algorithm effectively quantifies misalignment in five degrees of freedom for an F/5 imaging system, accurate to within ±0.87  deg in rotation and ±0.86  μm in translation. Furthermore, the results suggest that the method can also be applied to non-imaging systems with a small penalty to precision. This general approach can potentially improve the alignment of sensor arrays in custom instruments by offering an accurate, quantitative approach to calculating misalignment in five degrees of freedom simultaneously.

  13. Using Passive Cavitation Images to Classify High-Intensity Focused Ultrasound Lesions

    PubMed Central

    Haworth, Kevin J.; Salgaonkar, Vasant A.; Corregan, Nicholas M.; Holland, Christy K.; Mast, T. Douglas

    2015-01-01

    Passive cavitation imaging provides spatially resolved monitoring of cavitation emissions. However the diffraction limit of a linear imaging array results in relatively poor range resolution. Poor range resolution has limited prior analyses of the spatial specificity and sensitivity of passive cavitation imaging for predicting thermal lesion formation. In this study, this limitation is overcome by orienting a linear array orthogonal to the HIFU propagation direction and performing passive imaging. Fourteen lesions were formed in ex vivo bovine liver samples as a result of 1.1 MHz continuous-wave ultrasound exposure. The lesions were classified as focal, “tadpole”, or pre-focal based on their shape and location. Passive cavitation images were beam-formed from emissions at the fundamental, harmonic, ultraharmonic, and inharmonic frequencies with an established algorithm. Using the area under a receiver operator characteristic curve (AUROC), fundamental, harmonic, and ultraharmonic emissions were shown to be significant predictors of lesion formation for all lesion types. For both harmonic and ultraharmonic emissions, pre-focal lesions were classified most successfully (AUROC values of 0.87 and 0.88, respectively), followed by tadpole lesions (AUROC values of 0.77 and 0.64, respectively), and focal lesions (AUROC values of 0.65 and 0.60, respectively). PMID:26051309

  14. Stationary intraoral tomosynthesis for dental imaging

    NASA Astrophysics Data System (ADS)

    Inscoe, Christina R.; Wu, Gongting; Soulioti, Danai E.; Platin, Enrique; Mol, Andre; Gaalaas, Laurence R.; Anderson, Michael R.; Tucker, Andrew W.; Boyce, Sarah; Shan, Jing; Gonzales, Brian; Lu, Jianping; Zhou, Otto

    2017-03-01

    Despite recent advances in dental radiography, the diagnostic accuracies for some of the most common dental diseases have not improved significantly, and in some cases remain low. Intraoral x-ray is the most commonly used x-ray diagnostic tool in dental clinics. It however suffers from the typical limitations of a 2D imaging modality including structure overlap. Cone-beam computed tomography (CBCT) uses high radiation dose and suffers from image artifacts and relatively low resolution. The purpose of this study is to investigate the feasibility of developing a stationary intraoral tomosynthesis (s-IOT) using spatially distributed carbon nanotube (CNT) x-ray array technology, and to evaluate its diagnostic accuracy compared to conventional 2D intraoral x-ray. A bench-top s-IOT device was constructed using a linear CNT based X-ray source array and a digital intraoral detector. Image reconstruction was performed using an iterative reconstruction algorithm. Studies were performed to optimize the imaging configuration. For evaluation of s-IOT's diagnostic accuracy, images of a dental quality assurance phantom, and extracted human tooth specimens were acquired. Results show s-IOT increases the diagnostic sensitivity for caries compared to intraoral x-ray at a comparable dose level.

  15. Synthetic aperture integration (SAI) algorithm for SAR imaging

    DOEpatents

    Chambers, David H; Mast, Jeffrey E; Paglieroni, David W; Beer, N. Reginald

    2013-07-09

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  16. SMI adaptive antenna arrays for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Gupta, I. J.

    1987-01-01

    The performance of adaptive antenna arrays is studied when a sample matrix inversion (SMI) algorithm is used to control array weights. It is shown that conventional SMI adaptive antennas, like other adaptive antennas, are unable to suppress weak interfering signals (below thermal noise) encountered in broadcasting satellite communication systems. To overcome this problem, the SMI algorithm is modified. In the modified algorithm, the covariance matrix is modified such that the effect of thermal noise on the weights of the adaptive array is reduced. Thus, the weights are dictated by relatively weak coherent signals. It is shown that the modified algorithm provides the desired interference protection. The use of defocused feeds as auxiliary elements of an SMI adaptive array is also discussed.

  17. Comparison of Quantum Dots-in-a-Double-Well and Quantum Dots-in-a-Well Focal Plane Arrays in the Long-Wave Infrared

    DTIC Science & Technology

    2011-07-01

    taken with the same camera head, operating temperature, range of calibrated blackbody illuminations, and using the same long-wavelength IR ( LWIR ) f/2...measurements shown in this article and are tabulated for comparison purposes only. Images were taken with all four devices using an f/2 LWIR lens (8–12 μm...These were acquired after a nonuniformity correction. A custom image-scaling algorithm was used to avoid the standard nonuniformity corrected scaling

  18. A micro-CL system and its applications

    NASA Astrophysics Data System (ADS)

    Wei, Zenghui; Yuan, Lulu; Liu, Baodong; Wei, Cunfeng; Sun, Cuili; Yin, Pengfei; Wei, Long

    2017-11-01

    The computed laminography (CL) method is preferable to computed tomography for the non-destructive testing of plate-like objects. A micro-CL system is developed for three-dimensional imaging of plate-like objects. The details of the micro-CL system are described, including the system architecture, scanning modes, and reconstruction algorithm. The experiment results of plate-like fossils, insulated gate bipolar translator module, ball grid array packaging, and printed circuit board are also presented to demonstrate micro-CL's ability for 3D imaging of flat specimens and universal applicability in various fields.

  19. A micro-CL system and its applications.

    PubMed

    Wei, Zenghui; Yuan, Lulu; Liu, Baodong; Wei, Cunfeng; Sun, Cuili; Yin, Pengfei; Wei, Long

    2017-11-01

    The computed laminography (CL) method is preferable to computed tomography for the non-destructive testing of plate-like objects. A micro-CL system is developed for three-dimensional imaging of plate-like objects. The details of the micro-CL system are described, including the system architecture, scanning modes, and reconstruction algorithm. The experiment results of plate-like fossils, insulated gate bipolar translator module, ball grid array packaging, and printed circuit board are also presented to demonstrate micro-CL's ability for 3D imaging of flat specimens and universal applicability in various fields.

  20. Linear antenna array optimization using flower pollination algorithm.

    PubMed

    Saxena, Prerna; Kothari, Ashwin

    2016-01-01

    Flower pollination algorithm (FPA) is a new nature-inspired evolutionary algorithm used to solve multi-objective optimization problems. The aim of this paper is to introduce FPA to the electromagnetics and antenna community for the optimization of linear antenna arrays. FPA is applied for the first time to linear array so as to obtain optimized antenna positions in order to achieve an array pattern with minimum side lobe level along with placement of deep nulls in desired directions. Various design examples are presented that illustrate the use of FPA for linear antenna array optimization, and subsequently the results are validated by benchmarking along with results obtained using other state-of-the-art, nature-inspired evolutionary algorithms such as particle swarm optimization, ant colony optimization and cat swarm optimization. The results suggest that in most cases, FPA outperforms the other evolutionary algorithms and at times it yields a similar performance.

  1. Geometric analysis and restitution of digital multispectral scanner data arrays

    NASA Technical Reports Server (NTRS)

    Baker, J. R.; Mikhail, E. M.

    1975-01-01

    An investigation was conducted to define causes of geometric defects within digital multispectral scanner (MSS) data arrays, to analyze the resulting geometric errors, and to investigate restitution methods to correct or reduce these errors. Geometric transformation relationships for scanned data, from which collinearity equations may be derived, served as the basis of parametric methods of analysis and restitution of MSS digital data arrays. The linearization of these collinearity equations is presented. Algorithms considered for use in analysis and restitution included the MSS collinearity equations, piecewise polynomials based on linearized collinearity equations, and nonparametric algorithms. A proposed system for geometric analysis and restitution of MSS digital data arrays was used to evaluate these algorithms, utilizing actual MSS data arrays. It was shown that collinearity equations and nonparametric algorithms both yield acceptable results, but nonparametric algorithms possess definite advantages in computational efficiency. Piecewise polynomials were found to yield inferior results.

  2. Pulse-encoded ultrasound imaging of the vitreous with an annular array.

    PubMed

    Silverman, Ronald H; Ketterling, Jeffrey A; Mamou, Jonathan; Lloyd, Harriet O; Filoux, Erwan; Coleman, D Jackson

    2012-01-01

    The vitreous body is nearly transparent both optically and ultrasonically. Conventional 10- to 12-MHz diagnostic ultrasound can detect vitreous inhomogeneities at high gain settings, but has limited resolution and sensitivity, especially outside the fixed focal zone near the retina. To improve visualization of faint intravitreal fluid/gel interfaces, the authors fabricated a spherically curved 20-MHz five-element annular array ultrasound transducer, implemented a synthetic-focusing algorithm to extend the depth-of-field, and used a pulse-encoding strategy to increase sensitivity. The authors evaluated a human subject with a recent posterior vitreous detachment and compared the annular array with conventional 10-MHz ultrasound and spectral-domain optical coherence tomography. With synthetic focusing and chirp pulse-encoding, the array allowed visualization of the formed and fluid components of the vitreous with improved sensitivity and resolution compared with the conventional B-scan. Although optical coherence tomography allowed assessment of the posterior vitreoretinal interface, the ultrasound array allowed evaluation of the entire vitreous body. Copyright 2012, SLACK Incorporated.

  3. Volumetric Light-field Encryption at the Microscopic Scale

    PubMed Central

    Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C.; Sheridan, John T.; Jia, Shu

    2017-01-01

    We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale. PMID:28059149

  4. Volumetric Light-field Encryption at the Microscopic Scale

    NASA Astrophysics Data System (ADS)

    Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C.; Sheridan, John T.; Jia, Shu

    2017-01-01

    We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale.

  5. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    NASA Astrophysics Data System (ADS)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defects inspection method and make the automation of robust image region of interests(ROI) delineation strategy a reality in production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system are mainly tied to the surface quality inspection for strip, billet and slab surface etcetera. In this work we take into account the complementary advantages in two common machine vision (MV) systems(line array CCD traditional scanning imaging (LS-imaging) and area array CCD laser three-dimensional (3D) scanning imaging (AL-imaging)), and through establishing the model of fuzzy-rough sets in the detection system the seeds for relative fuzzy connectedness(RFC) delineation for ROI can placed adaptively, which introduces the upper and lower approximation sets for RIO definition, and by which the boundary region can be delineated by RFC region competitive classification mechanism. For the first time, a Multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defects inspection that allows an automatic way of AI algorithms and powerful ROI delineation strategies to be applied to the MV inspection field.

  6. Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.

    PubMed

    Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng

    2018-05-14

    In spatial spectrum estimation, difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as more DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some exist array structures but less than the upper-bound indicated by minimum redundant array (MRA). We further apply this cascade array into multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, which is based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.

  7. Efficient image acquisition design for a cancer detection system

    NASA Astrophysics Data System (ADS)

    Nguyen, Dung; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2013-09-01

    Modern imaging modalities, such as Computed Tomography (CT), Digital Breast Tomosynthesis (DBT) or Magnetic Resonance Tomography (MRT) are able to acquire volumetric images with an isotropic resolution in micrometer (um) or millimeter (mm) range. When used in interactive telemedicine applications, these raw images need a huge storage unit, thereby necessitating the use of high bandwidth data communication link. To reduce the cost of transmission and enable archiving, especially for medical applications, image compression is performed. Recent advances in compression algorithms have resulted in a vast array of data compression techniques, but because of the characteristics of these images, there are challenges to overcome to transmit these images efficiently. In addition, the recent studies raise the low dose mammography risk on high risk patient. Our preliminary studies indicate that by bringing the compression before the analog-to-digital conversion (ADC) stage is more efficient than other compression techniques after the ADC. The linearity characteristic of the compressed sensing and ability to perform the digital signal processing (DSP) during data conversion open up a new area of research regarding the roles of sparsity in medical image registration, medical image analysis (for example, automatic image processing algorithm to efficiently extract the relevant information for the clinician), further Xray dose reduction for mammography, and contrast enhancement.

  8. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  9. Stereo Cloud Height and Wind Determination Using Measurements from a Single Focal Plane

    NASA Astrophysics Data System (ADS)

    Demajistre, R.; Kelly, M. A.

    2014-12-01

    We present here a method for extracting cloud heights and winds from an aircraft or orbital platform using measurements from a single focal plane, exploiting the motion of the platform to provide multiple views of the cloud tops. To illustrate this method we use data acquired during aircraft flight tests of a set of simple stereo imagers that are well suited to this purpose. Each of these imagers has three linear arrays on the focal plane, one looking forward, one looking aft, and one looking down. Push-broom images from each of these arrays are constructed, and then a spatial correlation analysis is used to deduce the delays and displacements required for wind and cloud height determination. We will present the algorithms necessary for the retrievals, as well as the methods used to determine the uncertainties of the derived cloud heights and winds. We will apply the retrievals and uncertainty determination to a number of image sets acquired by the airborne sensors. We then generalize these results to potential space based observations made by similar types of sensors.

  10. Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2002-09-01

    We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO- compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide- field-of-view (FOV) modes.

  11. Matrix phased array (MPA) imaging technology for resistance spot welds

    NASA Astrophysics Data System (ADS)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-01

    A three-dimensional MPA probe has been incorporated with a high speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery operated portable MPA ultrasonic imaging system is in the automotive industry which a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering an average of five-thousand spot welds in a medium size passenger vehicle, the amount of time and effort given to popping the welds and measuring nugget size are immeasurable in addition to the millions of dollars' worth of scrap metals recycled per plant per year. This wasteful labor intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick and heavy mild steels to thin and light high strength steels. Consequently, the necessity of developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and weld nugget imaging process are discussed.

  12. Matrix phased array (MPA) imaging technology for resistance spot welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-18

    A three-dimensional MPA probe has been incorporated with a high speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery operated portable MPA ultrasonic imaging system is in the automotive industry which a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering an average of five-thousand spot welds in a medium size passenger vehicle, the amount of time and effort given to popping the welds and measuring nugget size are immeasurable in addition to the millions of dollars' worth ofmore » scrap metals recycled per plant per year. This wasteful labor intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick and heavy mild steels to thin and light high strength steels. Consequently, the necessity of developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and weld nugget imaging process are discussed.« less

  13. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    NASA Astrophysics Data System (ADS)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham Circle Algorithm is presented in this paper. Based on the characteristics of the model parameters this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an inputted image size of 320 × 240 pixels in real-time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than the modern iris unwrapping technique used today∗.

  14. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, and various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and transversibility. An object-tracking algorithm is discussed and it is noted that tracking speed is in the 50-75 pixels/s range.

  15. Transvaginal 3D Image-Guided High Intensity Focused Ultrasound Array

    NASA Astrophysics Data System (ADS)

    Held, Robert; Nguyen, Thuc Nghi; Vaezy, Shahram

    2005-03-01

    The goal of this project is to develop a transvaginal image-guided High Intensity Focused Ultrasound (HIFU) device using piezocomposite HIFU array technology, and commercially-available ultrasound imaging. Potential applications include treatment of uterine fibroids and abnormal uterine bleeding. The HIFU transducer was an annular phased array, with a focal length range of 30-60 mm, an elliptically-shaped aperture of 35×60 mm, and an operating frequency of 3 MHz. A pillow-shaped bag with water circulation will be used for coupling the HIFU energy into the tissue. An intra-cavity imaging probe (C9-5, Philips) was integrated with the HIFU array such that the focal axis of the HIFU transducer was within the image plane. The entire device will be covered by a gel-filled condom when inserted in the vaginal cavity. To control it, software packages were developed in the LabView programming environment. An imaging algorithm processed the ultrasound image to remove noise patterns due to the HIFU signal. The device will be equipped with a three-dimensional tracking system, using a six-degrees-of-freedom articulating arm. Necrotic lesions were produced in a tissue-mimicking phantom and a turkey breast sample for all focal lengths. Various HIFU doses allow various necrotic lesion shapes, including thin ellipsoidal, spherical, wide cylindrical, and teardrop-shaped. Software control of the device allows multiple foci to be activated sequentially for desired lesion patterns. Ultrasound imaging synchronization can be achieved using hardware signals obtained from the imaging system, or software signals determined empirically for various imaging probes. The image-guided HIFU device will provide a valuable tool in visualization of uterine fibroid tumors for the purposes of planning and subsequent HIFU treatment of the tumor, all in a 3D environment. The control system allows for various lesions of different shapes to be optimally positioned in the tumor to cover the entire tumor volume. Real-time ultrasound imaging for guidance and monitoring of HIFU treatment provides an effective method for outpatient-based procedures.

  16. Investigation of radio astronomy image processing techniques for use in the passive millimetre-wave security screening environment

    NASA Astrophysics Data System (ADS)

    Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.

    2014-06-01

    Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow for a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi- Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the defacto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application used for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost-savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable a much wider usability by the imaging community outside of radio astronomy and thus would be directly relevant to portal screening security systems in the microwave and millimetre wave bands.

  17. Pre-Hardware Optimization of Spacecraft Image Processing Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Petrick, David J.; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Day, John H. (Technical Monitor)

    2002-01-01

    Spacecraft telemetry rates and telemetry product complexity have steadily increased over the last decade presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image data processing and color picture generation application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The proposed solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms, and reconfigurable computing hardware (RC) technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processors (DSP). It has been shown that this approach can provide superior inexpensive performance for a chosen application on the ground station or on-board a spacecraft.

  18. Synthesis of concentric circular antenna arrays using dragonfly algorithm

    NASA Astrophysics Data System (ADS)

    Babayigit, B.

    2018-05-01

    Due to the strong non-linear relationship between the array factor and the array elements, concentric circular antenna array (CCAA) synthesis problem is challenging. Nature-inspired optimisation techniques have been playing an important role in solving array synthesis problems. Dragonfly algorithm (DA) is a novel nature-inspired optimisation technique which is based on the static and dynamic swarming behaviours of dragonflies in nature. This paper presents the design of CCAAs to get low sidelobes using DA. The effectiveness of the proposed DA is investigated in two different (with and without centre element) cases of two three-ring (having 4-, 6-, 8-element or 8-, 10-, 12-element) CCAA design. The radiation pattern of each design cases is obtained by finding optimal excitation weights of the array elements using DA. Simulation results show that the proposed algorithm outperforms the other state-of-the-art techniques (symbiotic organisms search, biogeography-based optimisation, sequential quadratic programming, opposition-based gravitational search algorithm, cat swarm optimisation, firefly algorithm, evolutionary programming) for all design cases. DA can be a promising technique for electromagnetic problems.

  19. Combined spectroscopic imaging and chemometric approach for automatically partitioning tissue types in human prostate tissue biopsies

    NASA Astrophysics Data System (ADS)

    Haka, Abigail S.; Kidder, Linda H.; Lewis, E. Neil

    2001-07-01

    We have applied Fourier transform infrared (FTIR) spectroscopic imaging, coupling a mercury cadmium telluride (MCT) focal plane array detector (FPA) and a Michelson step scan interferometer, to the investigation of various states of malignant human prostate tissue. The MCT FPA used consists of 64x64 pixels, each 61 micrometers 2, and has a spectral range of 2-10.5 microns. Each imaging data set was collected at 16-1 resolution, resulting in 512 image planes and a total of 4096 interferograms. In this article we describe a method for separating different tissue types contained within FTIR spectroscopic imaging data sets of human prostate tissue biopsies. We present images, generated by the Fuzzy C-Means clustering algorithm, which demonstrate the successful partitioning of distinct tissue type domains. Additionally, analysis of differences in the centroid spectra corresponding to different tissue types provides an insight into their biochemical composition. Lastly, we demonstrate the ability to partition tissue type regions in a different data set using centroid spectra calculated from the original data set. This has implications for the use of the Fuzzy C-Means algorithm as an automated technique for the separation and examination of tissue domains in biopsy samples.

  20. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859

  1. Radial magnetic resonance imaging (MRI) using a rotating radiofrequency (RF) coil at 9.4 T.

    PubMed

    Li, Mingyan; Weber, Ewald; Jin, Jin; Hugger, Thimo; Tesiram, Yasvir; Ullmann, Peter; Stark, Simon; Fuentes, Miguel; Junge, Sven; Liu, Feng; Crozier, Stuart

    2018-02-01

    The rotating radiofrequency coil (RRFC) has been developed recently as an alternative approach to multi-channel phased-array coils. The single-element RRFC avoids inter-channel coupling and allows a larger coil element with better B 1 field penetration when compared with an array counterpart. However, dedicated image reconstruction algorithms require accurate estimation of temporally varying coil sensitivities to remove artefacts caused by coil rotation. Various methods have been developed to estimate unknown sensitivity profiles from a few experimentally measured sensitivity maps, but these methods become problematic when the RRFC is used as a transceiver coil. In this work, a novel and practical radial encoding method is introduced for the RRFC to facilitate image reconstruction without the measurement or estimation of rotation-dependent sensitivity profiles. Theoretical analyses suggest that the rotation-dependent sensitivities of the RRFC can be used to create a uniform profile with careful choice of sampling positions and imaging parameters. To test this new imaging method, dedicated electronics were designed and built to control the RRFC speed and hence positions in synchrony with imaging parameters. High-quality phantom and animal images acquired on a 9.4 T pre-clinical scanner demonstrate the feasibility and potential of this new RRFC method. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Breast imaging using the Twente photoacoustic mammoscope (PAM): new clinical measurements

    NASA Astrophysics Data System (ADS)

    Heijblom, Michelle; Piras, Daniele; Ten Tije, Ellen; Xia, Wenfeng; van Hespen, Johan; Klaase, Joost; van den Engh, Frank; van Leeuwen, Ton; Steenbergen, Wiendelt; Manohar, Srirang

    2011-07-01

    Worldwide, yearly about 450,000 women die from the consequences of breast cancer. Current imaging modalities are not optimal in discriminating benign from malignant tissue. Visualizing the malignancy-associated increased hemoglobin concentration might significantly improve early diagnosis of breast cancer. Since photoacoustic imaging can visualize hemoglobin in tissue with optical contrast and ultrasound-like resolution, it is potentially an ideal method for early breast cancer imaging. The Twente Photoacoustic Mammoscope (PAM) has been developed specifically for breast imaging. Recently, a large clinical study has been started in the Medisch Spectrum Twente in Oldenzaal using PAM. In PAM, the breast is slightly compressed between a window for laser light illumination and a flat array ultrasound detector. The measurements are performed using a Q-switched Nd:YAG laser, pulsed at 1064 nm and a 1 MHz unfocused ultrasound detector array. Three-dimensional data are reconstructed using a delay and sum reconstruction algorithm. Those reconstructed images are compared with conventional imaging and histopathology. In the first phase of the study 12 patients with a malignant lesion and 2 patients with a benign cyst have been measured. The results are used to guide developments in photoacoustic mammography in order to pave the way towards an optimal technique for early diagnosis of breast cancer.

  3. Non-contact finger vein acquisition system using NIR laser

    NASA Astrophysics Data System (ADS)

    Kim, Jiman; Kong, Hyoun-Joong; Park, Sangyun; Noh, SeungWoo; Lee, Seung-Rae; Kim, Taejeong; Kim, Hee Chan

    2009-02-01

    Authentication using finger vein pattern has substantial advantage than other biometrics. Because human vein patterns are hidden inside the skin and tissue, it is hard to forge vein structure. But conventional system using NIR LED array has two drawbacks. First, direct contact with LED array raise sanitary problem. Second, because of discreteness of LEDs, non-uniform illumination exists. We propose non-contact finger vein acquisition system using NIR laser and Laser line generator lens. Laser line generator lens makes evenly distributed line laser from focused laser light. Line laser is aimed on the finger longitudinally. NIR camera was used for image acquisition. 200 index finger vein images from 20 candidates are collected. Same finger vein pattern extraction algorithm was used to evaluate two sets of images. Acquired images from proposed non-contact system do not show any non-uniform illumination in contrary with conventional system. Also results of matching are comparable to conventional system. We developed Non-contact finger vein acquisition system. It can prevent potential cross contamination of skin diseases. Also the system can produce uniformly illuminated images unlike conventional system. With the benefit of non-contact, proposed system shows almost equivalent performance compared with conventional system.

  4. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155

  5. Three-Dimensional ISAR Imaging Method for High-Speed Targets in Short-Range Using Impulse Radar Based on SIMO Array

    PubMed Central

    Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei

    2016-01-01

    This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets in short-range using an impulse radar. According to the requirements for high-speed target measurement in short-range, this paper establishes the single-input multiple-output (SIMO) antenna array, and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry relationship of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile’s rotation angle and the rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm. PMID:26978372

  6. Feasibility study of hidden flow imaging based on laser speckle technique using multiperspectives contrast images

    NASA Astrophysics Data System (ADS)

    Abookasis, David; Moshe, Tomer

    2014-11-01

    This paper demonstrates the insertion of lens array in the front of a CCD camera in a laser speckle imaging (LSI) like-technique to acquire multiple speckle reflectance projections for imaging blood flow in an intact biological tissue. In some of LSI applications, flow imaging is obtained by thinning or removing of the upper tissue layers to access blood vessels. In contrast, with the proposed approach flow imaging can be achieved while the tissue is intact. In the system, each lens from an hexagonal lens array observed the sample from slightly different perspectives and captured with a CCD camera. In the computer, these multiview raw images are converted to speckled contrast maps. Then, a self-deconvolution shift-and-add algorithm is employed for processing yields high contrast flow information. The method is experimentally validated first with a plastic tube filled with scattering liquid running at different controlled flow rates hidden in a biological tissue and then extensively tested for imaging of cerebral blood flow in an intact rodent head experience different conditions. A total of fifteen mice were used in the experiments divided randomly into three groups as follows: Group 1 (n=5) consisted of injured mice experience hypoxic ischemic brain injury monitored for ~40 min. Group 2 (n=5) injured mice experience anoxic brain injury monitored up to 20 min. Group 3 (n=5) experience functional activation monitored up to ~35 min. To increase tissue transparency and the penetration depth of photons through head tissue layers, an optical clearing method was employed. To our knowledge, this work presents for the first time the use of lens array in LSI scheme.

  7. Antenna analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Smith, William T.

    1992-01-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary). A comparison between the simulated and actual W-L techniques is shown for a triangular-shaped pattern. Dolph-Chebyshev is a different class of synthesis technique in that D-C is used for side lobe control as opposed to pattern shaping. The interesting thing about D-C synthesis is that the side lobes have the same amplitude. Five-element arrays were used. Again, 41 pattern samples were used for the input. Nine actual D-C patterns ranging from -10 dB to -30 dB side lobe levels were used to train the network. A comparison between simulated and actual D-C techniques for a pattern with -22 dB side lobe level is shown. The goal for this research was to evaluate the performance of neural network computing with antennas. Future applications will employ the backpropagation training algorithm to drastically reduce the computational complexity involved in performing EM compensation for surface errors in large space reflector antennas.

  8. Antenna analysis using neural networks

    NASA Astrophysics Data System (ADS)

    Smith, William T.

    1992-09-01

    Conventional computing schemes have long been used to analyze problems in electromagnetics (EM). The vast majority of EM applications require computationally intensive algorithms involving numerical integration and solutions to large systems of equations. The feasibility of using neural network computing algorithms for antenna analysis is investigated. The ultimate goal is to use a trained neural network algorithm to reduce the computational demands of existing reflector surface error compensation techniques. Neural networks are computational algorithms based on neurobiological systems. Neural nets consist of massively parallel interconnected nonlinear computational elements. They are often employed in pattern recognition and image processing problems. Recently, neural network analysis has been applied in the electromagnetics area for the design of frequency selective surfaces and beam forming networks. The backpropagation training algorithm was employed to simulate classical antenna array synthesis techniques. The Woodward-Lawson (W-L) and Dolph-Chebyshev (D-C) array pattern synthesis techniques were used to train the neural network. The inputs to the network were samples of the desired synthesis pattern. The outputs are the array element excitations required to synthesize the desired pattern. Once trained, the network is used to simulate the W-L or D-C techniques. Various sector patterns and cosecant-type patterns (27 total) generated using W-L synthesis were used to train the network. Desired pattern samples were then fed to the neural network. The outputs of the network were the simulated W-L excitations. A 20 element linear array was used. There were 41 input pattern samples with 40 output excitations (20 real parts, 20 imaginary).

  9. Wide field imaging problems in radio astronomy

    NASA Astrophysics Data System (ADS)

    Cornwell, T. J.; Golap, K.; Bhatnagar, S.

    2005-03-01

    The new generation of synthesis radio telescopes now being proposed, designed, and constructed face substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.

  10. In vivo ultrasound imaging of the bone cortex

    NASA Astrophysics Data System (ADS)

    Renaud, Guillaume; Kruizinga, Pieter; Cassereau, Didier; Laugier, Pascal

    2018-06-01

    Current clinical ultrasound scanners cannot be used to image the interior morphology of bones because these scanners fail to address the complicated physics involved for exact image reconstruction. Here, we show that if the physics is properly addressed, bone cortex can be imaged using a conventional transducer array and a programmable ultrasound scanner. We provide in vivo proof for this technique by scanning the radius and tibia of two healthy volunteers and comparing the thickness of the radius bone with high-resolution peripheral x-ray computed tomography. Our method assumes a medium that is composed of different homogeneous layers with unique elastic anisotropy and ultrasonic wave-speed values. The applicable values of these layers are found by optimizing image sharpness and intensity over a range of relevant values. In the algorithm of image reconstruction we take wave refraction between the layers into account using a ray-tracing technique. The estimated values of the ultrasonic wave-speed and anisotropy in cortical bone are in agreement with ex vivo studies reported in the literature. These parameters are of interest since they were proposed as biomarkers for cortical bone quality. In this paper we discuss the physics involved with ultrasound imaging of bone and provide an algorithm to successfully image the first segment of cortical bone.

  11. A real-time tracking system of infrared dim and small target based on FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Rong, Sheng-hui; Zhou, Hui-xin; Qin, Han-lin; Wang, Bing-jian; Qian, Kun

    2014-11-01

    A core technology in the infrared warning system is the detection tracking of dim and small targets with complicated background. Consequently, running the detection algorithm on the hardware platform has highly practical value in the military field. In this paper, a real-time detection tracking system of infrared dim and small target which is used FPGA (Field Programmable Gate Array) and DSP (Digital Signal Processor) as the core was designed and the corresponding detection tracking algorithm and the signal flow is elaborated. At the first stage, the FPGA obtain the infrared image sequence from the sensor, then it suppresses background clutter by mathematical morphology method and enhances the target intensity by Laplacian of Gaussian operator. At the second stage, the DSP obtain both the original image and the filtered image form the FPGA via the video port. Then it segments the target from the filtered image by an adaptive threshold segmentation method and gets rid of false target by pipeline filter. Experimental results show that our system can achieve higher detection rate and lower false alarm rate.

  12. A VLSI-Based High-Performance Raster Image System.

    DTIC Science & Technology

    1986-05-08

    and data in broadcast form to the array of memory -hips in the frame buffer, shown in the bottom block. This is simply a physical structure to hold up...Principal Investigator: John Poulton Collaboration on algorithm development: Prof. Jack Goldfeather (Dept. of Mathematics, Carleton Collge ...1983) Cheng-Hong Hsieh (MS, Computer Science, May, 1985) Jeff P. Hultquist Susan Spach Undergraduate ResearLh Assistant: Sonya Holder (BS, Physics , May

  13. Crustal thickness variations across the Blue Ridge mountains, southern Appalachians: an alternative procedure for migrating wide-angle reflection data

    Treesearch

    Robert B. Hawman

    2008-01-01

    Migration of wide-angle reflections generated by quarry blasts suggests that crustal thickness increases from 38 km beneath the Carolina Terrane to 47–51 km along the southeastern flank of the Blue Ridge. The migration algorithm, developed for generating single-fold images from explosions and earthquakes recorded with isolated, short-aperture arrays, uses the localized...

  14. Automatic localization of landmark sets in head CT images with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2016-03-01

    Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information and we have shown in a study with 68 participants that 78% of long term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm proves to be a reliable technique to replace the manual step.

  15. Development of methods for the analysis of multi-mode TFM images

    NASA Astrophysics Data System (ADS)

    Sy, K.; Bredif, P.; Iakovleva, E.; Roy, O.; Lesselier, D.

    2018-05-01

    TFM (Total Focusing Method) is an advanced post-processing imaging algorithm of ultrasonic array data that shows good potential in defect detection and characterization. It can be employed using an infinite number of paths between transducer and focusing point. Depending upon the geometry and the characteristics of the defect in a given part, there are not the same modes that are appropriate for the defect reconstruction. Furthermore, non-physical indications can be observed, prone to misinterpretation. These imaging artifacts are due to the coexistence of several contributions involving several modes of propagation and interactions with possible defects and/or the geometry of the part. Two methods for filtering artifacts and reducing the number of TFM images are developed and illustrated.

  16. Measuring and correcting wobble in large-scale transmission radiography.

    PubMed

    Rogers, Thomas W; Ollier, James; Morton, Edward J; Griffin, Lewis D

    2017-01-01

    Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale (∼5 m tall) of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. We consider the measurement of wobble and amelioration of the consequent loss of image precision. Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest based method for instantaneous estimation of detector wobble; and (v) using these estimates to apply corrections to captured images of difficult scenes. We show that these methods are able to correct for 87% of image error due wobble, and when applied to difficult images, a significant visible improvement in the intensity-windowed image quality is observed. The method improves the precision of wobble affected images, which should help improve detection of threats and the identification of different materials in the image.

  17. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other image processing representative algorithm, such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a night project. However, the computationally intensive algorithms still warrant alternative implementation using reconfigurable computing technologies (RC) such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low cost compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithm, such as FFT2, image folding and phase diversity for the NASA Solar Viewing Interferometer Prototype (SVIP) using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also assures avoidance of a single point of failure, while using commercially available hardware. This, combined with the control algorithms pre-hardware optimization, or the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on-board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study greenhouse gases CO2, C2H, H2O, O3, O2, N2O from Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing for control and for robotics missions using vision sensors. It presents a top-level description of technologies required for the design and construction of SVIP and EASI and to advance the spatial-spectral imaging and large-scale space interferometry science and engineering.

  18. Translational-circular scanning for magneto-acoustic tomography with current injection.

    PubMed

    Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-01-27

    Magneto-acoustic tomography with current injection involves using electrical impedance imaging technology. To explore the potential applications in imaging biological tissue and enhance image quality, a new scan mode for the transducer is proposed that is based on translational and circular scanning to record acoustic signals from sources. An imaging algorithm to analyze these signals is developed in respect to this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of this scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image based on the signals. In simulations, the acoustic-source image is correlated with the conductivity at the sample boundaries of the sample, but image results change depending on distance and angular aspect of the transducer. In general, as angle and distance decreases, the image quality improves. Moreover, experimental data confirmed the correlation. The acoustic-source images resulting from the alternative scanning mode has yielded the outline of a phantom medium. This scan mode enables improvements to be made in the sensitivity of the detecting unit and a change to a transducer array that would improve the efficiency and accuracy of acoustic-source images.

  19. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    NASA Astrophysics Data System (ADS)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on DSP is proposed in this paper. The non-uniform response in infrared focal plane array (IRFPA) detectors produces corrupted images with a fixed-pattern noise(FPN).We introduced and analyzed the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. A design of DSP-based NUC development platform for IRFPA is described. The DSP hardware platform designed is of low power consumption, with 32-bit fixed point DSP TMS320DM643 as the kernel processor. The dependability and expansibility of the software have been improved by DSP/BIOS real-time operating system and Reference Framework 5. In order to realize real-time performance, the calibration parameters update is set at a lower task priority then video input and output in DSP/BIOS. In this way, calibration parameters updating will not affect video streams. The work flow of the system and the strategy of real-time realization are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.

  20. Real-time free-viewpoint DIBR for large-size 3DLED

    NASA Astrophysics Data System (ADS)

    Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru

    2017-10-01

    Three-dimensional (3D) display technologies make great progress in recent years, and lenticular array based 3D display is a relatively mature technology, which most likely to commercial. In naked-eye-3D display, the screen size is one of the most important factors that affect the viewing experience. In order to construct a large-size naked-eye-3D display system, the LED display is used. However, the pixel misalignment is an inherent defect of the LED screen, which will influences the rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth(T+D) format is chosen for the display content, and the modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. In order to achieve realtime, the whole algorithm is implemented on GPU. With the state-of-the-art hardware and the efficient algorithm, a naked-eye-3D display system with a LED screen size of 6m × 1.8m is achieved. Experiment shows that the algorithm can process the 43-view 3D video with 4K × 2K resolution in real time on GPU, and vivid 3D experience is perceived.

  1. Sparse array angle estimation using reduced-dimension ESPRIT-MUSIC in MIMO radar.

    PubMed

    Zhang, Chaozhu; Pang, Yucai

    2013-01-01

    Sparse linear arrays provide better performance than the filled linear arrays in terms of angle estimation and resolution with reduced size and low cost. However, they are subject to manifold ambiguity. In this paper, both the transmit array and receive array are sparse linear arrays in the bistatic MIMO radar. Firstly, we present an ESPRIT-MUSIC method in which ESPRIT algorithm is used to obtain ambiguous angle estimates. The disambiguation algorithm uses MUSIC-based procedure to identify the true direction cosine estimate from a set of ambiguous candidate estimates. The paired transmit angle and receive angle can be estimated and the manifold ambiguity can be solved. However, the proposed algorithm has high computational complexity due to the requirement of two-dimension search. Further, the Reduced-Dimension ESPRIT-MUSIC (RD-ESPRIT-MUSIC) is proposed to reduce the complexity of the algorithm. And the RD-ESPRIT-MUSIC only demands one-dimension search. Simulation results demonstrate the effectiveness of the method.

  2. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.

  3. Preliminary earthquake locations in the Kenai Peninsula recorded by the MOOS Array and their relationship to structure in the 1964 great earthquake zone

    NASA Astrophysics Data System (ADS)

    Li, J.; Abers, G. A.; Christensen, D. H.; Kim, Y.; Calkins, J. A.

    2011-12-01

    Earthquakes in subduction zones are mostly generated at the interface between the subducting and overlying plates. In 2006-2009, the MOOS (Multidisciplinary Observations Of Subduction) seismic array was deployed around the Kenai Peninsula, Alaska, consisting of 34 broadband seismometers recording for 1-3 years. This region spans the eastern end of the Aleutian megathrust that ruptured in the 1964 Mw 9.2 great earthquake, the second largest recorded earthquake, and ongoing seismicity is abundant. Here, we report an initial analysis of seismicity recorded by MOOS, in the context of preliminary imaging. There were 16,462 events detected in one year from initial STA/LTA signal detections and subsequent event associations from the MOOS Array. We manually reviewed them to eliminate distant earthquakes and noise, leaving 11,879 local earthquakes. To refine this catalog, an adaptive auto-regressive onset estimation algorithm was applied, doubling the original dataset and producing 20,659 P picks and 22,999 S picks for one month (September 2007). Inspection shows that this approach lead to almost negligible false alarms and many more events than hand picking. Within the well-sampled part of the array, roughly 200 km by 300 km, we locate 250% more earthquakes for one month than the permanent network catalog, or 10 earthquakes per day on this patch of the megathrust. Although the preliminary locations of earthquakes still show some scatter, we can see a concentration of events in a ~20-km-wide belt, part of which can be interpreted as seismogenic thrust zone. In conjunction with the seismicity study, we are imaging the plate interface with receiver functions. The main seismicity zone corresponds to the top of a low-velocity layer imaged in receiver functions, nominally attributed to the top of the downgoing plate. As we refine velocity models and apply relative relocation algorithms, we expect to improve the precision of the locations substantially. When combined with image of velocity structure from scattered wave migration, we can test whether the thrust zone is above the Yakutat terrane or between the Yakutat terrane and the subducting Pacific plate. Our refined relocations will also improve our understanding of other active faults (e.g., splay faults) and their relationship to the plate boundary.

  4. A Systolic Array-Based FPGA Parallel Architecture for the BLAST Algorithm

    PubMed Central

    Guo, Xinyu; Wang, Hong; Devabhaktuni, Vijay

    2012-01-01

    A design of systolic array-based Field Programmable Gate Array (FPGA) parallel architecture for Basic Local Alignment Search Tool (BLAST) Algorithm is proposed. BLAST is a heuristic biological sequence alignment algorithm which has been used by bioinformatics experts. In contrast to other designs that detect at most one hit in one-clock-cycle, our design applies a Multiple Hits Detection Module which is a pipelining systolic array to search multiple hits in a single-clock-cycle. Further, we designed a Hits Combination Block which combines overlapping hits from systolic array into one hit. These implementations completed the first and second step of BLAST architecture and achieved significant speedup comparing with previously published architectures. PMID:25969747

  5. Research on illumination uniformity of high-power LED array light source

    NASA Astrophysics Data System (ADS)

    Yu, Xiaolong; Wei, Xueye; Zhang, Ou; Zhang, Xinwei

    2018-06-01

    Uniform illumination is one of the most important problem that must be solved in the application of high-power LED array. A numerical optimization algorithm, is applied to obtain the best LED array typesetting so that the light intensity of the target surface is evenly distributed. An evaluation function is set up through the standard deviation of the illuminance function, then the particle swarm optimization algorithm is utilized to optimize different arrays. Furthermore, the light intensity distribution is obtained by optical ray tracing method. Finally, a hybrid array is designed and the optical ray tracing method is applied to simulate the array. The simulation results, which is consistent with the traditional theoretical calculation, show that the algorithm introduced in this paper is reasonable and effective.

  6. A tetrahedron beam computed tomography benchtop system with a multiple pixel field emission x-ray tube.

    PubMed

    Xu, Xiaochao; Kim, Joshua; Laganis, Philip; Schulze, Derek; Liang, Yongguang; Zhang, Tiezhi

    2011-10-01

    To demonstrate the feasibility of Tetrahedron Beam Computed Tomography (TBCT) using a carbon nanotube (CNT) multiple pixel field emission x-ray (MPFEX) tube. A multiple pixel x-ray source facilitates the creation of novel x-ray imaging modalities. In a previous publication, the authors proposed a Tetrahedron Beam Computed Tomography (TBCT) imaging system which comprises a linear source array and a linear detector array that are orthogonal to each other. TBCT is expected to reduce scatter compared with Cone Beam Computed Tomography (CBCT) and to have better detector performance. Therefore, it may produce improved image quality for image guided radiotherapy. In this study, a TBCT benchtop system has been developed with an MPFEX tube. The tube has 75 CNT cold cathodes, which generate 75 x-ray focal spots on an elongated anode, and has 4 mm pixel spacing. An in-house-developed, 5-row CT detector array using silicon photodiodes and CdWO(4) scintillators was employed in the system. Hardware and software were developed for tube control and detector data acquisition. The raw data were preprocessed for beam hardening and detector response linearity and were reconstructed with an FDK-based image reconstruction algorithm. The focal spots were measured at about 1 × 2 mm(2) using a star phantom. Each cathode generates around 3 mA cathode current with 2190 V gate voltage. The benchtop system is able to perform TBCT scans with a prolonged scanning time. Images of a commercial CT phantom were successfully acquired. A prototype system was developed, and preliminary phantom images were successfully acquired. MPFEX is a promising x-ray source for TBCT. Further improvement of tube output is needed in order for it to be used in clinical TBCT systems.

  7. Three-Dimensional Super-Resolution: Theory, Modeling, and Field Tests Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Vincent E.; Hines, Glenn; Pierrottet, Diego; Reisse, Robert

    2014-01-01

    Many flash lidar applications continue to demand higher three-dimensional image resolution beyond the current state-of-the-art technology of the detector arrays and their associated readout circuits. Even with the available number of focal plane pixels, the required number of photons for illuminating all the pixels may impose impractical requirements on the laser pulse energy or the receiver aperture size. Therefore, image resolution enhancement by means of a super-resolution algorithm in near real time presents a very attractive solution for a wide range of flash lidar applications. This paper describes a superresolution technique and illustrates its performance and merits for generating three-dimensional image frames at a video rate.

  8. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described, a system wherein the output supplied by the high resolution, position sensitive photomultipiler tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.

  9. High-Resolution Gamma-Ray Imaging Measurements Using Externally Segmented Germanium Detectors

    NASA Technical Reports Server (NTRS)

    Callas, J.; Mahoney, W.; Skelton, R.; Varnell, L.; Wheaton, W.

    1994-01-01

    Fully two-dimensional gamma-ray imaging with simultaneous high-resolution spectroscopy has been demonstrated using an externally segmented germanium sensor. The system employs a single high-purity coaxial detector with its outer electrode segmented into 5 distinct charge collection regions and a lead coded aperture with a uniformly redundant array (URA) pattern. A series of one-dimensional responses was collected around 511 keV while the system was rotated in steps through 180 degrees. A non-negative, linear least-squares algorithm was then employed to reconstruct a 2-dimensional image. Corrections for multiple scattering in the detector, and the finite distance of source and detector are made in the reconstruction process.

  10. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    PubMed

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.

  11. Accurate permittivity measurements for microwave imaging via ultra-wideband removal of spurious reflectors.

    PubMed

    Pelletier, Mathew G; Viera, Joseph A; Wanjura, John; Holt, Greg

    2010-01-01

    The use of microwave imaging is becoming more prevalent for detection of interior hidden defects in manufactured and packaged materials. In applications for detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such internal moisture, thereby alerting personnel to damaging levels of the hidden moisture before material degradation occurs. One impediment to this type of imaging occurs with nearby objects create strong reflections that create destructive and constructive interference, at the receiver, as the material is conveyed past the imaging antenna array. In an effort to remove the influence of the reflectors, such as metal bale ties, research was conducted to develop an algorithm for removal of the influence of the local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in such products as cotton bales, as well as other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties.

  12. 3D near-infrared imaging based on a single-photon avalanche diode array sensor

    NASA Astrophysics Data System (ADS)

    Mata Pavia, Juan; Charbon, Edoardo; Wolf, Martin

    2011-07-01

    An imager for optical tomography was designed based on a detector with 128×128 single-photon pixels that included a bank of 32 time-to-digital converters. Due to the high spatial resolution and the possibility of performing time resolved measurements, a new contact-less setup has been conceived in which scanning of the object is not necessary. This enables one to perform high-resolution optical tomography with much higher acquisition rate, which is fundamental in clinical applications. The setup has a resolution of 97ps and operates with a laser source with an average power of 3mW. This new imaging system generated a high amount of data that could not be processed by established methods, therefore new concepts and algorithms were developed to take full advantage of it. Images were generated using a new reconstruction algorithm that combined general inverse problem methods with Fourier transforms in order to reduce the complexity of the problem. Simulations show that the potential resolution of the new setup is in the order of millimeters. Experiments have been performed to confirm this potential. Images derived from the measurements demonstrate that we have already reached a resolution of 5mm.

  13. An embedded system for face classification in infrared video using sparse representation

    NASA Astrophysics Data System (ADS)

    Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel

    2017-09-01

    We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, we feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by a dedicated hardware using a deeply pipelined architecture on an Field-Programmable Gate Array (FPGA).

  14. Generalized enhanced suffix array construction in external memory.

    PubMed

    Louza, Felipe A; Telles, Guilherme P; Hoffmann, Steve; Ciferri, Cristina D A

    2017-01-01

    Suffix arrays, augmented by additional data structures, allow solving efficiently many string processing problems. The external memory construction of the generalized suffix array for a string collection is a fundamental task when the size of the input collection or the data structure exceeds the available internal memory. In this article we present and analyze [Formula: see text] [introduced in CPM (External memory generalized suffix and [Formula: see text] arrays construction. In: Proceedings of CPM. pp 201-10, 2013)], the first external memory algorithm to construct generalized suffix arrays augmented with the longest common prefix array for a string collection. Our algorithm relies on a combination of buffers, induced sorting and a heap to avoid direct string comparisons. We performed experiments that covered different aspects of our algorithm, including running time, efficiency, external memory access, internal phases and the influence of different optimization strategies. On real datasets of size up to 24 GB and using 2 GB of internal memory, [Formula: see text] showed a competitive performance when compared to [Formula: see text] and [Formula: see text], which are efficient algorithms for a single string according to the related literature. We also show the effect of disk caching managed by the operating system on our algorithm. The proposed algorithm was validated through performance tests using real datasets from different domains, in various combinations, and showed a competitive performance. Our algorithm can also construct the generalized Burrows-Wheeler transform of a string collection with no additional cost except by the output time.

  15. An Improved Scheduling Algorithm for Data Transmission in Ultrasonic Phased Arrays with Multi-Group Ultrasonic Sensors

    PubMed Central

    Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji

    2017-01-01

    High data transmission efficiency is a key requirement for an ultrasonic phased array with multi-group ultrasonic sensors. Here, a novel FIFOs scheduling algorithm was proposed and the data transmission efficiency with hardware technology was improved. This algorithm includes FIFOs as caches for the ultrasonic scanning data obtained from the sensors with the output data in a bandwidth-sharing way, on the basis of which an optimal length ratio of all the FIFOs is achieved, allowing the reading operations to be switched among all the FIFOs without time slot waiting. Therefore, this algorithm enhances the utilization ratio of the reading bandwidth resources so as to obtain higher efficiency than the traditional scheduling algorithms. The reliability and validity of the algorithm are substantiated after its implementation in the field programmable gate array (FPGA) technology, and the bandwidth utilization ratio and the real-time performance of the ultrasonic phased array are enhanced. PMID:29035345

  16. Scene-based nonuniformity correction algorithm based on interframe registration.

    PubMed

    Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao

    2011-06-01

    In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. This method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images to make any two detectors with the same scene produce the same output value. In this way, the accumulation of the registration error can be avoided and the NUC can be achieved. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied with infrared image sequences with simulated nonuniformity and infrared imagery with real nonuniformity. It shows a significantly fast and reliable fixed-pattern noise reduction and obtains an effective frame-by-frame adaptive estimation of each detector's gain and offset.

  17. Magnetic resonance imaging diffusion tensor tractography: evaluation of anatomic accuracy of different fiber tracking software packages.

    PubMed

    Feigl, Guenther C; Hiergeist, Wolfgang; Fellner, Claudia; Schebesch, Karl-Michael M; Doenitz, Christian; Finkenzeller, Thomas; Brawanski, Alexander; Schlaier, Juergen

    2014-01-01

    Diffusion tensor imaging (DTI)-based tractography has become an integral part of preoperative diagnostic imaging in many neurosurgical centers, and other nonsurgical specialties depend increasingly on DTI tractography as a diagnostic tool. The aim of this study was to analyze the anatomic accuracy of visualized white matter fiber pathways using different, readily available DTI tractography software programs. Magnetic resonance imaging scans of the head of 20 healthy volunteers were acquired using a Siemens Symphony TIM 1.5T scanner and a 12-channel head array coil. The standard settings of the scans in this study were 12 diffusion directions and 5-mm slices. The fornices were chosen as an anatomic structure for the comparative fiber tracking. Identical data sets were loaded into nine different fiber tracking packages that used different algorithms. The nine software packages and algorithms used were NeuroQLab (modified tensor deflection [TEND] algorithm), Sörensen DTI task card (modified streamline tracking technique algorithm), Siemens DTI module (modified fourth-order Runge-Kutta algorithm), six different software packages from Trackvis (interpolated streamline algorithm, modified FACT algorithm, second-order Runge-Kutta algorithm, Q-ball [FACT algorithm], tensorline algorithm, Q-ball [second-order Runge-Kutta algorithm]), DTI Query (modified streamline tracking technique algorithm), Medinria (modified TEND algorithm), Brainvoyager (modified TEND algorithm), DTI Studio modified FACT algorithm, and the BrainLab DTI module based on the modified Runge-Kutta algorithm. Three examiners (a neuroradiologist, a magnetic resonance imaging physicist, and a neurosurgeon) served as examiners. They were double-blinded with respect to the test subject and the fiber tracking software used in the presented images. Each examiner evaluated 301 images. The examiners were instructed to evaluate screenshots from the different programs based on two main criteria: (i) anatomic accuracy of the course of the displayed fibers and (ii) number of fibers displayed outside the anatomic boundaries. The mean overall grade for anatomic accuracy was 2.2 (range, 1.1-3.6) with a standard deviation (SD) of 0.9. The mean overall grade for incorrectly displayed fibers was 2.5 (range, 1.6-3.5) with a SD of 0.6. The mean grade of the overall program ranking was 2.3 with a SD of 0.6. The overall mean grade of the program ranked number one (NeuroQLab) was 1.7 (range, 1.5-2.8). The mean overall grade of the program ranked last (BrainLab iPlan Cranial 2.6 DTI Module) was 3.3 (range, 1.7-4). The difference between the mean grades of these two programs was statistically highly significant (P < 0.0001). There was no statistically significant difference between the programs ranked 1-3: NeuroQLab, Sörensen DTI Task Card, and Siemens DTI module. The results of this study show that there is a statistically significant difference in the anatomic accuracy of the tested DTI fiber tracking programs. Although incorrectly displayed fibers could lead to wrong conclusions in the neurosciences field, which relies heavily on this noninvasive imaging technique, incorrectly displayed fibers in neurosurgery could lead to surgical decisions potentially harmful for the patient if used without intraoperative cortical stimulation. DTI fiber tracking presents a valuable noninvasive preoperative imaging tool, which requires further validation after important standardization of the acquisition and processing techniques currently available. 
Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Imaging electrical conductivity, permeability, and/or permittivity contrasts using the Born Scattering Inversion (BSI)

    NASA Astrophysics Data System (ADS)

    Darrh, A.; Downs, C. M.; Poppeliers, C.

    2017-12-01

    Born Scattering Inversion (BSI) of electromagnetic (EM) data is a geophysical imaging methodology for mapping weak conductivity, permeability, and/or permittivity contrasts in the subsurface. The high computational cost of full waveform inversion is reduced by adopting the First Born Approximation for scattered EM fields. This linearizes the inverse problem in terms of Born scattering amplitudes for a set of effective EM body sources within a 3D imaging volume. Estimation of scatterer amplitudes is subsequently achieved by solving the normal equations. Our present BSI numerical experiments entail Fourier transforming real-valued synthetic EM data to the frequency-domain, and minimizing the L2 residual between complex-valued observed and predicted data. We are testing the ability of BSI to resolve simple scattering models. For our initial experiments, synthetic data are acquired by three-component (3C) electric field receivers distributed on a plane above a single point electric dipole within a homogeneous and isotropic wholespace. To suppress artifacts, candidate Born scatterer locations are confined to a volume beneath the receiver array. Also, we explore two different numerical linear algebra algorithms for solving the normal equations: Damped Least Squares (DLS), and Non-Negative Least Squares (NNLS). Results from NNLS accurately recover the source location only for a large dense 3C receiver array, but fail when the array is decimated, or is restricted to horizontal component data. Using all receiver stations and all components per station, NNLS results are relatively insensitive to a sub-sampled frequency spectrum, suggesting that coarse frequency-domain sampling may be adequate for good target resolution. Results from DLS are insensitive to diminishing array density, but contain spatially oscillatory structure. DLS-generated images are consistently centered at the known point source location, despite an abundance of surrounding structure.

  19. Multi-microphone adaptive array augmented with visual cueing.

    PubMed

    Gibson, Paul L; Hedin, Dan S; Davies-Venn, Evelyn E; Nelson, Peggy; Kramer, Kevin

    2012-01-01

    We present the development of an audiovisual array that enables hearing aid users to converse with multiple speakers in reverberant environments with significant speech babble noise where their hearing aids do not function well. The system concept consists of a smartphone, a smartphone accessory, and a smartphone software application. The smartphone accessory concept is a multi-microphone audiovisual array in a form factor that allows attachment to the back of the smartphone. The accessory will also contain a lower power radio by which it can transmit audio signals to compatible hearing aids. The smartphone software application concept will use the smartphone's built in camera to acquire images and perform real-time face detection using the built-in face detection support of the smartphone. The audiovisual beamforming algorithm uses the location of talking targets to improve the signal to noise ratio and consequently improve the user's speech intelligibility. Since the proposed array system leverages a handheld consumer electronic device, it will be portable and low cost. A PC based experimental system was developed to demonstrate the feasibility of an audiovisual multi-microphone array and these results are presented.

  20. Design of a Synthetic Aperture Array to Support Experiments in Active Control of Scattering

    DTIC Science & Technology

    1990-06-01

    becomes necessary to validate the theory and test the control system algorithms . While experiments in open water would be most like the anticipated...mathematical development of the beamforming algorithms used as well as an estimate of their applicability to the specifics of beamforming in a reverberant...Chebyshev array have been proposed. The method used in ARRAY, a nested product algorithm , proposed by Bresler [21] is recommended by Pozar [19] and

  1. Analysis of L-band Multi-Channel Sea Clutter

    DTIC Science & Technology

    2010-08-01

    Some researchers found that the use of a hybrid algorithm of PS and GA could accelerate the convergence for array beamforming designs (Yeo and Lu...to be shown is array failure correction using the PS algorithm . Assume element 5 of a 32 half-wavelength spacing linear array is in failure. The goal... algorithm . The blue one is the 20 dB Chebyshev pattern and the template in red is the goal pattern to achieve. Two corrected beam patterns are

  2. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    USDA-ARS?s Scientific Manuscript database

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  3. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in details. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on triangular array and another based on rectangular array, are presented for the multiphase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-pase operations. Performance issues are also considered.

  4. Configuration Considerations for Low Frequency Arrays

    NASA Astrophysics Data System (ADS)

    Lonsdale, C. J.

    2005-12-01

    The advance of digital signal processing capabilities has spurred a new effort to exploit the lowest radio frequencies observable from the ground, from ˜10 MHz to a few hundred MHz. Multiple scientifically and technically complementary instruments are planned, including the Mileura Widefield Array (MWA) in the 80-300 MHz range, and the Long Wavelength Array (LWA) in the 20-80 MHz range. The latter instrument will target relatively high angular resolution, and baselines up to a few hundred km. An important practical question for the design of such an array is how to distribute the collecting area on the ground. The answer to this question profoundly affects both cost and performance. In this contribution, the factors which determine the anticipated performance of any such array are examined, paying particular attention to the viability and accuracy of array calibration. It is argued that due to the severity of ionospheric effects in particular, it will be difficult or impossible to achieve routine, high dynamic range imaging with a geographically large low frequency array, unless a large number of physically separate array stations is built. This conclusion is general, is based on the need for adequate sampling of ionospheric irregularities, and is independent of the calibration algorithms and techniques that might be employed. It is further argued that array configuration figures of merit that are traditionally used for higher frequency arrays are inappropriate, and a different set of criteria are proposed.

  5. Short- and long-term memory effects in intensified array detectors - Influence on airborne laser fluorosensor measurements

    NASA Astrophysics Data System (ADS)

    Bristow, Michael P.; Edmonds, Curtis M.; Bundy, Donald H.; Turner, Rudolpha M.

    1989-02-01

    Phosphorescence and thermoluminescence memory effects in the phosphors of image intensifiers are investigated, with application to the performance improvement of intensified optical multichannel analyzers. Algorithms have been developed which can be used to remove these effects from airborne measurements of laser-induced fluorescence spectra of aquatic and terrestrial targets. The present method can be adapted to situations involving different gating routines, repetition rates, and diode group sizes.

  6. Performance assessment of a compressive sensing single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Preece, Bradley L.

    2017-04-01

    Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process to recover the image. CS has the potential to acquire imagery with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld object target set was collected using an shortwave infrared single-pixel CS camera for various ranges and number of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques are discussed.

  7. The ultrasound brain helmet: new transducers and volume registration for in vivo simultaneous multi-transducer 3-D transcranial imaging.

    PubMed

    Lindsey, Brooks D; Light, Edward D; Nicoletto, Heather A; Bennett, Ellen R; Laskowitz, Daniel T; Smith, Stephen W

    2011-06-01

    Because stroke remains an important and time-sensitive health concern in developed nations, we present a system capable of fusing 3-D transcranial ultrasound volumes acquired from two sides of the head. This system uses custom sparse array transducers built on flexible multilayer circuits that can be positioned for simultaneous imaging through both temporal acoustic windows, allowing for potential registration of multiple real-time 3-D scans of cerebral vasculature. We examine hardware considerations for new matrix arrays-transducer design and interconnects-in this application. Specifically, it is proposed that SNR may be increased by reducing the length of probe cables. This claim is evaluated as part of the presented system through simulation, experimental data, and in vivo imaging. Ultimately, gains in SNR of 7 dB are realized by replacing a standard probe cable with a much shorter flex interconnect; higher gains may be possible using ribbon-based probe cables. In vivo images are presented, showing cerebral arteries with and without the use of microbubble contrast agent; they have been registered and fused using a simple algorithm which maximizes normalized cross-correlation.

  8. Stereo matching and view interpolation based on image domain triangulation.

    PubMed

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.

  9. Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array.

    PubMed

    Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan

    2011-04-07

    We report a portable lensless on-chip microscope that can achieve <1 µm resolution over a wide field-of-view of ∼ 24 mm(2) without the use of any mechanical scanning. This compact on-chip microscope weighs ∼ 95 g and is based on partially coherent digital in-line holography. Multiple fiber-optic waveguides are butt-coupled to light emitting diodes, which are controlled by a low-cost micro-controller to sequentially illuminate the sample. The resulting lensfree holograms are then captured by a digital sensor-array and are rapidly processed using a pixel super-resolution algorithm to generate much higher resolution holographic images (both phase and amplitude) of the objects. This wide-field and high-resolution on-chip microscope, being compact and light-weight, would be important for global health problems such as diagnosis of infectious diseases in remote locations. Toward this end, we validate the performance of this field-portable microscope by imaging human malaria parasites (Plasmodium falciparum) in thin blood smears. Our results constitute the first-time that a lensfree on-chip microscope has successfully imaged malaria parasites.

  10. Data Reduction and Image Reconstruction Techniques for Non-redundant Masking

    NASA Astrophysics Data System (ADS)

    Sallum, S.; Eisner, J.

    2017-11-01

    The technique of non-redundant masking (NRM) transforms a conventional telescope into an interferometric array. In practice, this provides a much better constrained point-spread function than a filled aperture and thus higher resolution than traditional imaging methods. Here, we describe an NRM data reduction pipeline. We discuss strategies for NRM observations regarding dithering patterns and calibrator selection. We describe relevant image calibrations and use example Large Binocular Telescope data sets to show their effects on the scatter in the Fourier measurements. We also describe the various ways to calculate Fourier quantities, and discuss different calibration strategies. We present the results of image reconstructions from simulated observations where we adjust prior images, weighting schemes, and error bar estimation. We compare two imaging algorithms and discuss implications for reconstructing images from real observations. Finally, we explore how the current state of the art compares to next-generation Extremely Large Telescopes.

  11. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for a fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image also called Bayer pattern. Once the blur level has been estimated the image is restored through the application of a new inverse filtering technique. This algorithm gives sharp images reducing ringing and crisping artifact, involving wider region of frequency. Experimental results show the effectiveness of the method, both in subjective and numerical way, by comparison with other techniques found in literature.

  12. Modelling spatiotemporal change using multidimensional arrays Meng

    NASA Astrophysics Data System (ADS)

    Lu, Meng; Appel, Marius; Pebesma, Edzer

    2017-04-01

    The large variety of remote sensors, model simulations, and in-situ records provide great opportunities to model environmental change. The massive amount of high-dimensional data calls for methods to integrate data from various sources and to analyse spatiotemporal and thematic information jointly. An array is a collection of elements ordered and indexed in arbitrary dimensions, which naturally represent spatiotemporal phenomena that are identified by their geographic locations and recording time. In addition, array regridding (e.g., resampling, down-/up-scaling), dimension reduction, and spatiotemporal statistical algorithms are readily applicable to arrays. However, the role of arrays in big geoscientific data analysis has not been systematically studied: How can arrays discretise continuous spatiotemporal phenomena? How can arrays facilitate the extraction of multidimensional information? How can arrays provide a clean, scalable and reproducible change modelling process that is communicable between mathematicians, computer scientist, Earth system scientist and stakeholders? This study emphasises on detecting spatiotemporal change using satellite image time series. Current change detection methods using satellite image time series commonly analyse data in separate steps: 1) forming a vegetation index, 2) conducting time series analysis on each pixel, and 3) post-processing and mapping time series analysis results, which does not consider spatiotemporal correlations and ignores much of the spectral information. Multidimensional information can be better extracted by jointly considering spatial, spectral, and temporal information. To approach this goal, we use principal component analysis to extract multispectral information and spatial autoregressive models to account for spatial correlation in residual based time series structural change modelling. We also discuss the potential of multivariate non-parametric time series structural change methods, hierarchical modelling, and extreme event detection methods to model spatiotemporal change. We show how array operations can facilitate expressing these methods, and how the open-source array data management and analytics software SciDB and R can be used to scale the process and make it easily reproducible.

  13. Path planning on cellular nonlinear network using active wave computing technique

    NASA Astrophysics Data System (ADS)

    Yeniçeri, Ramazan; Yalçın, Müstak E.

    2009-05-01

    This paper introduces a simple algorithm to solve robot path finding problem using active wave computing techniques. A two-dimensional Cellular Neural/Nonlinear Network (CNN), consist of relaxation oscillators, has been used to generate active waves and to process the visual information. The network, which has been implemented on a Field Programmable Gate Array (FPGA) chip, has the feature of being programmed, controlled and observed by a host computer. The arena of the robot is modelled as the medium of the active waves on the network. Active waves are employed to cover the whole medium with their own dynamics, by starting from an initial point. The proposed algorithm is achieved by observing the motion of the wave-front of the active waves. Host program first loads the arena model onto the active wave generator network and command to start the generation. Then periodically pulls the network image from the generator hardware to analyze evolution of the active waves. When the algorithm is completed, vectorial data image is generated. The path from any of the pixel on this image to the active wave generating pixel is drawn by the vectors on this image. The robot arena may be a complicated labyrinth or may have a simple geometry. But, the arena surface always must be flat. Our Autowave Generator CNN implementation which is settled on the Xilinx University Program Virtex-II Pro Development System is operated by a MATLAB program running on the host computer. As the active wave generator hardware has 16, 384 neurons, an arena with 128 × 128 pixels can be modeled and solved by the algorithm. The system also has a monitor and network image is depicted on the monitor simultaneously.

  14. The research on multi-projection correction based on color coding grid array

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu

    2017-10-01

    There are many disadvantages such as lower timeliness, greater manual intervention in multi-channel projection system, in order to solve the above problems, this paper proposes a multi-projector correction technology based on color coding grid array. Firstly, a color structured light stripe is generated by using the De Bruijn sequences, then meshing the feature information of the color structured light stripe image. We put the meshing colored grid intersection as the center of the circle, and build a white solid circle as the feature sample set of projected images. It makes the constructed feature sample set not only has the perceptual localization, but also has good noise immunity. Secondly, we establish the subpixel geometric mapping relationship between the projection screen and the individual projectors by using the structure of light encoding and decoding based on the color array, and the geometrical mapping relation is used to solve the homography matrix of each projector. Lastly the brightness inconsistency of the multi-channel projection overlap area is seriously interfered, it leads to the corrected image doesn't fit well with the observer's visual needs, and we obtain the projection display image of visual consistency by using the luminance fusion correction algorithm. The experimental results show that this method not only effectively solved the problem of distortion of multi-projection screen and the issue of luminance interference in overlapping region, but also improved the calibration efficient of multi-channel projective system and reduced the maintenance cost of intelligent multi-projection system.

  15. Two sided residual refocusing for acoustic lens based photoacoustic imaging system.

    PubMed

    Kalloor Joseph, Francis; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2018-05-30

    In photoacoustic (PA) imaging, an acoustic lens-based system can form a focused image of an object plane. A real-time C-scan PA image can be formed by simply time gating the transducer response. While most of the focusing action is done by the lens, residual refocusing is needed to image multiple depths with high resolution simultaneously. However, a refocusing algorithm for PA camera has not been studied so far in the literature. In this work, we reformulate this residual refocusing problem for a PA camera into a two-sided wave propagation from a planar sensor array. One part of the problem deals with forward wave propagation while the other deals with time reversal. We have chosen a Fast Fourier Transform (FFT) based wave propagation model for the refocusing to maintain the real-time nature of the system. We have conducted Point Spread Function (PSF) measurement experiments at multiple depths and refocused the signal using the proposed method. Full Width at Half Maximum (FWHM), peak value and Signal to Noise Ratio (SNR) of the refocused PSF is analyzed to quantify the effect of refocusing. We believe that using a two-dimensional transducer array combined with the proposed refocusing, can lead to real-time volumetric imaging using a lens based PA imaging system. © 2018 Institute of Physics and Engineering in Medicine.

  16. Phase-aberration correction with a 3-D ultrasound scanner: feasibility study.

    PubMed

    Ivancevich, Nikolas M; Dahl, Jeremy J; Trahey, Gregg E; Smith, Stephen W

    2006-08-01

    We tested the feasibility of using adaptive imaging, namely phase-aberration correction, with two-dimensional (2-D) arrays and real-time, 3-D ultrasound. Because of the high spatial frequency content of aberrators, 2-D arrays, which generally have smaller pitch and thus higher spatial sampling frequency, and 3-D imaging show potential to improve the performance of adaptive imaging. Phase-correction algorithms improve image quality by compensating for tissue-induced errors in beamforming. Using the illustrative example of transcranial ultrasound, we have evaluated our ability to perform adaptive imaging with a real-time, 3-D scanner. We have used a polymer casting of a human temporal bone, root-mean-square (RMS) phase variation of 45.0 ns, full-width-half-maximum (FWHM) correlation length of 3.35 mm, and an electronic aberrator, 100 ns RMS, 3.76 mm correlation, with tissue phantoms as illustrative examples of near-field, phase-screen aberrators. Using the multilag, least-squares, cross-correlation method, we have shown the ability of 3-D adaptive imaging to increase anechoic cyst identification, image brightness, contrast-to-speckle ratio (CSR), and, in 3-D color Doppler experiments, the ability to visualize flow. For a physical aberrator skull casting we saw CSR increase by 13% from 1.01 to 1.14, while the number of detectable cysts increased from 4.3 to 7.7.

  17. High-performance technology for indexing of high volumes of Earth remote sensing data

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Kolesenkov, Aleksandr N.; Kostrov, Boris V.

    2017-10-01

    The present paper has suggested a technology for search, indexing, cataloging and distribution of aerospace images on the basis of geo-information approach, cluster and spectral analysis. It has considered information and algorithmic support of the system. Functional circuit of the system and structure of the geographical data base have been developed on the basis of the geographical online portal technology. Taking into account heterogeneity of information obtained from various sources it is reasonable to apply a geoinformation platform that allows analyzing space location of objects and territories and executing complex processing of information. Geoinformation platform is based on cartographic fundamentals with the uniform coordinate system, the geographical data base, a set of algorithms and program modules for execution of various tasks. The technology for adding by particular users and companies of images taken by means of professional and amateur devices and also processed by various software tools to the array system has been suggested. Complex usage of visual and instrumental approaches allows significantly expanding an application area of Earth remote sensing data. Development and implementation of new algorithms based on the complex usage of new methods for processing of structured and unstructured data of high volumes will increase periodicity and rate of data updating. The paper has shown that application of original algorithms for search, indexing and cataloging of aerospace images will provide an easy access to information spread by hundreds of suppliers and allow increasing an access rate to aerospace images up to 5 times in comparison with current analogues.

  18. Methods for validating the presence of and characterizing proteins deposited onto an array

    DOEpatents

    Schabacker, Daniel S.

    2010-09-21

    A method of determining if proteins have been transferred from liquid-phase protein fractions to an array comprising staining the array with a total protein stain and imaging the array, optionally comparing the staining with a standard curve generated by staining known amounts of a known protein on the same or a similar array; a method of characterizing proteins transferred from liquid-phase protein fractions to an array including staining the array with a post-translational modification-specific (PTM-specific) stain and imaging the array and, optionally, after staining the array with a PTM-specific stain and imaging the array, washing the array, re-staining the array with a total protein stain, imaging the array, and comparing the imaging with the PTM-specific stain with the imaging with the total protein stain; stained arrays; and images of stained arrays.

  19. Implicit, nonswitching, vector-oriented algorithm for steady transonic flow

    NASA Technical Reports Server (NTRS)

    Lottati, I.

    1983-01-01

    A rapid computation of a sequence of transonic flow solutions has to be performed in many areas of aerodynamic technology. The employment of low-cost vector array processors makes the conduction of such calculations economically feasible. However, for a full utilization of the new hardware, the developed algorithms must take advantage of the special characteristics of the vector array processor. The present investigation has the objective to develop an efficient algorithm for solving transonic flow problems governed by mixed partial differential equations on an array processor.

  20. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    PubMed

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited and simulated prosthetic vision (SPV) is used to evaluate potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirement on visual neuroprosthetic characteristics to restore various functions such as reading, objects and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current arrays of electrodes is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, vision distance was limited to 3, 6, or 9 m, respectively. In the second strategy, the rendering was not based on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environments were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate information regarding the environment. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  1. Materials, devices, techniques, and applications for Z-plane focal plane array technology II; Proceedings of the Meeting, San Diego, CA, July 12, 13, 1990

    NASA Astrophysics Data System (ADS)

    Carson, John C.

    1990-11-01

    Various papers on materials, devices, techniques, and applications for X-plane focal plane array technology are presented. Individual topics addressed include: application of Z-plane technology to the remote sensing of the earth from GEO, applications of smart neuromorphic focal planes, image-processing of Z-plane technology, neural network Z-plane implementation with very high interconnection rates, using a small IR surveillance satellite for tactical applications, establishing requirements for homing applications, Z-plane technology. Also discussed are: on-array spike suppression signal processing, algorithms for on-focal-plane gamma circumvention and time-delay integration, current HYMOSS Z-technology, packaging of electrons for on- and off-FPA signal processing, space/performance qualification of tape automated bonded devices, automation in tape automated bonding, high-speed/high-volume radiometric testing of Z-technology focal planes, 128-layer HYMOSS-module fabrication issues, automation of IRFPA production processes.

  2. The design and realization of a three-dimensional video system by means of a CCD array

    NASA Astrophysics Data System (ADS)

    Boizard, J. L.

    1985-12-01

    Design features and principles and initial tests of a prototype three-dimensional robot vision system based on a laser source and a CCD detector array is described. The use of a laser as a coherent illumination source permits the determination of the relief using one emitter since the location of the source is a known quantity with low distortion. The CCD signal detector array furnishes an acceptable signal/noise ratio and, when wired to an appropriate signal processing system, furnishes real-time data on the return signals, i.e., the characteristic points of an object being scanned. Signal processing involves integration of 29 kB of data per 100 samples, with sampling occurring at a rate of 5 MHz (the CCDs) and yielding an image every 12 msec. Algorithms for filtering errors from the data stream are discussed.

  3. Quarterly Research Performance Progress Report (2015 Q3). Ultrasonic Phased Arrays and Interactive Reflectivity Tomography for Nondestructive Inspection of Injection and Production Wells in Geothermal Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Polsky, Yarom; Kisner, Roger A

    2015-09-01

    For the past quarter, we have placed our effort in implementing the first version of the ModelBased Iterative Reconstruction (MBIR) algorithm, assembling and testing the electronics, designing transducers mounts, and defining our laboratory test samples. We have successfully developed the first implementation of MBIR for ultrasound imaging. The current algorithm was tested with synthetic data and we are currently making new modifications for the reconstruction of real ultrasound data. Beside assembling and testing the electronics, we developed a LabView graphic user interface (GUI) to fully control the ultrasonic phased array, adjust the time-delays of the transducers, and store the measuredmore » reflections. As part of preparing for a laboratory-scale demonstration, the design and fabrication of the laboratory samples has begun. Three cement blocks with embedded objects will be fabricated, characterized, and used to demonstrate the capabilities of the system. During the next quarter, we will continue to improve the current MBIR forward model and integrate the reconstruction code with the LabView GUI. In addition, we will define focal laws for the ultrasonic phased array and perform the laboratory demonstration. We expect to perform laboratory demonstration by the end of October 2015.« less

  4. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase-D

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Amin, N.; Fung, J.

    1989-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely high frequency (EHF) frequency hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor is under development. An interactive software package for the operation of the array, capable of applying different control algorithms is being written. A maximum signal to jammer plus noise ratio (SJNR) was found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate gradient approach was found to be a very efficient way to implement anti-jamming arrays based on maximum SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement the algorithm on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used in the evaluation of multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computation burden through continued analysis of maximum SJNR met with limited success. A method to acquire and track an incoming laser beam is proposed.

  5. Experiment on a three-beam adaptive array for EHF frequency-hopped signals using a fast algorithm, phase E

    NASA Astrophysics Data System (ADS)

    Yen, J. L.; Kremer, P.; Fung, J.

    1990-05-01

    The Department of National Defence (Canada) has been conducting studies into multi-beam adaptive arrays for extremely high frequency (EHF) frequency hopped signals. A three-beam 43 GHz adaptive antenna and a beam control processor is under development. An interactive software package for the operation of the array, capable of applying different control algorithms is being written. A maximum signal to jammer plus noise ratio (SJNR) has been found to provide superior performance in preventing degradation of user signals in the presence of nearby jammers. A new fast algorithm using a modified conjugate gradient approach has been found to be a very efficient way to implement anti-jamming arrays based on maximum SJNR criterion. The present study was intended to refine and simplify this algorithm and to implement the algorithm on an experimental array for real-time evaluation of anti-jamming performance. A three-beam adaptive array was used. A simulation package was used in the evaluation of multi-beam systems using more than three beams and different user-jammer scenarios. An attempt to further reduce the computation burden through further analysis of maximum SJNR met with limited success. The investigation of a new angle detector for spatial tracking in heterodyne laser space communications was completed.

  6. FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples

    PubMed Central

    Scrofani, G.; Sola-Pikabea, J.; Llavador, A.; Sanchez-Ortiga, E.; Barreiro, J. C.; Saavedra, G.; Garcia-Sucerquia, J.; Martínez-Corral, M.

    2017-01-01

    In this work, Fourier integral microscope (FIMic), an ultimate design of 3D-integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the microscope objective of the host microscope, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D-micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold enhanced depth of field and 2-time better spatial resolution in comparison with conventional integral microscopy is reported. Our claims are supported by theoretical analysis and experimental images of a resolution test target, cotton fibers, and in-vivo 3D-imaging of biological specimens. PMID:29359107

  7. New-style defect inspection system of film

    NASA Astrophysics Data System (ADS)

    Liang, Yan; Liu, Wenyao; Liu, Ming; Lee, Ronggang

    2002-09-01

    An inspection system has been developed for on-line detection of film defects, which bases on combination of photoelectric imaging and digital image processing. The system runs in high speed of maximum 60m/min. Moving film is illuminated by LED array which emits even infrared (peak wavelength λp=940nm), and infrared images are obtained with a high quality and high speed CCD camera. The application software based on Visual C++6.0 under Windows processes images in real time by means of such algorithms as median filter, edge detection and projection, etc. The system is made up of four modules, which are introduced in detail in the paper. On-line experiment results shows that the inspection system can recognize defects precisely in high speed and run reliably in practical application.

  8. A novel algorithm for the reconstruction of an entrance beam fluence from treatment exit patient portal dosimetry images

    NASA Astrophysics Data System (ADS)

    Sperling, Nicholas Niven

    The problem of determining the in vivo dosimetry for patients undergoing radiation treatment has been an area of interest since the development of the field. Most methods which have found clinical acceptance work by use of a proxy dosimeter, e.g.: glass rods, using radiophotoluminescence; thermoluminescent dosimeters (TLD), typically CaF or LiF; Metal Oxide Silicon Field Effect Transistor (MOSFET) dosimeters, using threshold voltage shift; Optically Stimulated Luminescent Dosimeters (OSLD), composed of Carbon doped Aluminum Dioxide crystals; RadioChromic film, using leuko-dye polymers; Silicon Diode dosimeters, typically p-type; and ion chambers. More recent methods employ Electronic Portal Image Devices (EPID), or dosimeter arrays, for entrance or exit beam fluence determination. The difficulty with the proxy in vivo dosimetery methods is the requirement that they be placed at the particular location where the dose is to be determined. This precludes measurements across the entire patient volume. These methods are best suited where the dose at a particular location is required. The more recent methods of in vivo dosimetry make use of detector arrays and reconstruction techniques to determine dose throughout the patient volume. One method uses an array of ion chambers located upstream of the patient. This requires a special hardware device and places an additional attenuator in the beam path, which may not be desirable. A final approach is to use the existing EPID, which is part of most modern linear accelerators, to image the patient using the treatment beam. Methods exist to deconvolve the detector function of the EPID using a series of weighted exponentials. Additionally, this method has been extended to determine in vivo dosimetry. The method developed here employs the use of EPID images and an iterative deconvolution algorithm to reconstruct the impinging primary beam fluence on the patient. This primary fluence may then be employed to determine dose through the entire patient volume. The method requires patient specific information, including a CT for deconvolution/dose reconstruction. With the large-scale adoption of Cone Beam CT (CBCT) systems on modern linear accelerators, a treatment time CT is readily available for use in this deconvolution and in dose representation.

  9. GOES-R Geostationary Lightning Mapper Performance Specifications and Algorithms

    NASA Technical Reports Server (NTRS)

    Mach, Douglas M.; Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Petersen, William A.; Boldi, Robert A.; Carey, Lawrence D.; Bateman, Monte G.; Buchler, Dennis E.; McCaul, E. William, Jr.

    2008-01-01

    The Geostationary Lightning Mapper (GLM) is a single channel, near-IR imager/optical transient event detector, used to detect, locate and measure total lightning activity over the full-disk. The next generation NOAA Geostationary Operational Environmental Satellite (GOES-R) series will carry a GLM that will provide continuous day and night observations of lightning. The mission objectives for the GLM are to: (1) Provide continuous, full-disk lightning measurements for storm warning and nowcasting, (2) Provide early warning of tornadic activity, and (2) Accumulate a long-term database to track decadal changes of lightning. The GLM owes its heritage to the NASA Lightning Imaging Sensor (1997- present) and the Optical Transient Detector (1995-2000), which were developed for the Earth Observing System and have produced a combined 13 year data record of global lightning activity. GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms and applications. The science data will consist of lightning "events", "groups", and "flashes". The algorithm is being designed to be an efficient user of the computational resources. This may include parallelization of the code and the concept of sub-dividing the GLM FOV into regions to be processed in parallel. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds (e.g., Lightning Mapping Arrays in North Alabama, Oklahoma, Central Florida, and the Washington DC Metropolitan area) are being used to develop the prelaunch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution.

  10. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    NASA Astrophysics Data System (ADS)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, delay-and-sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely delay-multiply-and-sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers.

  11. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of recently developed hybrid Virtex-4FX field-programmable gate array (FPGA) to run diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration for increased onboard processing capability than previously demonstrated in software. The original C-language program demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, or cloud or unclassified. Current onboard processors, such as on EO-1, have limited computing power, extremely limited active storage capability and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.

  12. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  13. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank- ( L 1 , L 2 , · ) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  14. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    PubMed

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, seed localization error is (0.58 +/- 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm while compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/ iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  15. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four {sup 103}Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selectedmore » from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58{+-}0.33) mm. For all four patient cases, the mean registration error is better than 1 mm while compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate {approx}1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.« less

  16. Evaluation of PET Imaging Resolution Using 350 mu{m} Pixelated CZT as a VP-PET Insert Detector

    NASA Astrophysics Data System (ADS)

    Yin, Yongzhi; Chen, Ximeng; Li, Chongzheng; Wu, Heyu; Komarov, Sergey; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2014-02-01

    A cadmium-zinc-telluride (CZT) detector with 350 μm pitch pixels was studied in high-resolution positron emission tomography (PET) imaging applications. The PET imaging system was based on coincidence detection between a CZT detector and a lutetium oxyorthosilicate (LSO)-based Inveon PET detector in virtual-pinhole PET geometry. The LSO detector is a 20 ×20 array, with 1.6 mm pitches, and 10 mm thickness. The CZT detector uses ac 20 ×20 ×5 mm substrate, with 350 μm pitch pixelated anodes and a coplanar cathode. A NEMA NU4 Na-22 point source of 250 μm in diameter was imaged by this system. Experiments show that the image resolution of single-pixel photopeak events was 590 μm FWHM while the image resolution of double-pixel photopeak events was 640 μm FWHM. The inclusion of double-pixel full-energy events increased the sensitivity of the imaging system. To validate the imaging experiment, we conducted a Monte Carlo (MC) simulation for the same PET system in Geant4 Application for Emission Tomography. We defined LSO detectors as a scanner ring and 350 μm pixelated CZT detectors as an insert ring. GATE simulated coincidence data were sorted into an insert-scanner sinogram and reconstructed. The image resolution of MC-simulated data (which did not factor in positron range and acolinearity effect) was 460 μm at FWHM for single-pixel events. The image resolutions of experimental data, MC simulated data, and theoretical calculation are all close to 500 μm FWHM when the proposed 350 μm pixelated CZT detector is used as a PET insert. The interpolation algorithm for the charge sharing events was also investigated. The PET image that was reconstructed using the interpolation algorithm shows improved image resolution compared with the image resolution without interpolation algorithm.

  17. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; DoroslovačKi, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC (RM)), wherein the estimation of the directions of arrivals (DOA) requires the computation of the roots of a (2N - 2) -order polynomial, where N represents number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L represents number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.

  18. Phased array inspection of large size forged steel parts

    NASA Astrophysics Data System (ADS)

    Dupont-Marillia, Frederic; Jahazi, Mohammad; Belanger, Pierre

    2018-04-01

    High strength forged steel requires uncompromising quality to warrant advance performance for numerous critical applications. Ultrasonic inspection is commonly used in nondestructive testing to detect cracks and other defects. In steel blocks of relatively small dimensions (at least two directions not exceeding a few centimetres), phased array inspection is a trusted method to generate images of the inside of the blocks and therefore identify and size defects. However, casting of large size forged ingots introduces changes of mechanical parameters such as grain size, the Young's modulus, the Poisson's ratio, and the chemical composition. These heterogeneities affect the wave propagation, and consequently, the reliability of ultrasonic inspection and the imaging capabilities for these blocks. In this context, a custom phased array transducer designed for a 40-ton bainitic forged ingot was investigated. Following a previous study that provided local mechanical parameters for a similar block, two-dimensional simulations were made to compute the optimal transducer parameters including the pitch, width and number of elements. It appeared that depending on the number of elements, backwall reconstruction can generate high amplitude artefacts. Indeed, the large dimensions of the simulated block introduce numerous constructive interferences from backwall reflections which may lead to important artefacts. To increase image quality, the reconstruction algorithm was adapted and promising results were observed and compared with the scattering cone filter method available in the CIVA software.

  19. Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera.

    PubMed

    Cao, Yanpeng; Tisse, Christel-Loic

    2014-02-01

    In this Letter, we propose an efficient and accurate solution to remove temperature-dependent nonuniformity effects introduced by the imaging optics. This single-image-based approach computes optics-related fixed pattern noise (FPN) by fitting the derivatives of correction model to the gradient components, locally computed on an infrared image. A modified bilateral filtering algorithm is applied to local pixel output variations, so that the refined gradients are most likely caused by the nonuniformity associated with optics. The estimated bias field is subtracted from the raw infrared imagery to compensate the intensity variations caused by optics. The proposed method is fundamentally different from the existing nonuniformity correction (NUC) techniques developed for focal plane arrays (FPAs) and provides an essential image processing functionality to achieve completely shutterless NUC for uncooled long-wave infrared (LWIR) imaging systems.

  20. Imaging of heart acoustic based on the sub-space methods using a microphone array.

    PubMed

    Moghaddasi, Hanie; Almasganj, Farshad; Zoroufian, Arezoo

    2017-07-01

    Heart disease is one of the leading causes of death around the world. Phonocardiogram (PCG) is an important bio-signal which represents the acoustic activity of heart, typically without any spatiotemporal information of the involved acoustic sources. The aim of this study is to analyze the PCG by employing a microphone array by which the heart internal sound sources could be localized, too. In this paper, it is intended to propose a modality by which the locations of the active sources in the heart could also be investigated, during a cardiac cycle. In this way, a microphone array with six microphones is employed as the recording set up to be put on the human chest. In the following, the Group Delay MUSIC algorithm which is a sub-space based localization method is used to estimate the location of the heart sources in different phases of the PCG. We achieved to 0.14cm mean error for the sources of first heart sound (S 1 ) simulator and 0.21cm mean error for the sources of second heart sound (S 2 ) simulator with Group Delay MUSIC algorithm. The acoustical diagrams created for human subjects show distinct patterns in various phases of the cardiac cycles such as the first and second heart sounds. Moreover, the evaluated source locations for the heart valves are matched with the ones that are obtained via the 4-dimensional (4D) echocardiography applied, to a real human case. Imaging of heart acoustic map presents a new outlook to indicate the acoustic properties of cardiovascular system and disorders of valves and thereby, in the future, could be used as a new diagnostic tool. Copyright © 2017. Published by Elsevier B.V.

  1. Recombination of the steering vector of the triangle grid array in quaternions and the reduction of the MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Bai, Chen; Han, Dongjuan

    2018-04-01

    MUSIC is widely used on DOA estimation. Triangle grid is a common kind of the arrangement of array, but it is more complicated than rectangular array in calculation of steering vector. In this paper, the quaternions algorithm can reduce dimension of vector and make the calculation easier.

  2. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.

  3. Adaptive Beamforming Algorithms for High Resolution Microwave Imaging

    DTIC Science & Technology

    1991-04-01

    frequency- and phase -locked. With a system of radio camera size it must be assumed that oscillators will drift and, similarly, that electronic circuits in...propagation-induced phase errors an array as large as the one under discussion is likely to experience differ- ent weather conditions across it. The nominal...human optical system. Such a passing-scene display with human optical resolving power would be available to the air - man at night as well as during the

  4. Digital and optical shape representation and pattern recognition; Proceedings of the Meeting, Orlando, FL, Apr. 4-6, 1988

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Editor)

    1988-01-01

    The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.

  5. Depth estimation using a lightfield camera

    NASA Astrophysics Data System (ADS)

    Roper, Carissa

    The latest innovation to camera design has come in the form of the lightfield, or plenoptic, camera that captures 4-D radiance data rather than just the 2-D scene image via microlens arrays. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth of field in different portions of a given scene. There are limitations to the precision due to hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.

  6. Imaging open-path Fourier transform infrared spectrometer for 3D cloud profiling

    NASA Astrophysics Data System (ADS)

    Rentz Dupuis, Julia; Mansur, David J.; Vaillancourt, Robert; Carlson, David; Evans, Thomas; Schundler, Elizabeth; Todd, Lori; Mottus, Kathleen

    2009-05-01

    OPTRA is developing an imaging open-path Fourier transform infrared (I-OP-FTIR) spectrometer for 3D profiling of chemical and biological agent simulant plumes released into test ranges and chambers. An array of I-OP-FTIR instruments positioned around the perimeter of the test site, in concert with advanced spectroscopic algorithms, enables real time tomographic reconstruction of the plume. The approach is intended as a referee measurement for test ranges and chambers. This Small Business Technology Transfer (STTR) effort combines the instrumentation and spectroscopic capabilities of OPTRA, Inc. with the computed tomographic expertise of the University of North Carolina, Chapel Hill.

  7. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependant on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc) that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, the filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using conventional algebraic reconstruction (ART) and refractive index corrected ART (ART-rc) algorithms. The reconstructions based on FDK algorithm for cone-beam tomography has also been carried out for comparison. Line scanners and point detectors, are used to obtain reconstructions plane by plane. The rays passing through dose region with a RI mismatch does not reach the detector in the same plane depending on the angle of incidence and RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using ART-rc algorithm with water as RI matched medium is 71.8%, an increase of 6.4% compared to that achieved using conventional ART algorithm. Smaller diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry is seen to deteriorate in some planes as those regions are not scanned. Refraction correction is important and needs to be taken in to consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array based scanners as it is not possible to identify refracted rays in the sinogram space.

  8. Impulse radar imaging system for concealed object detection

    NASA Astrophysics Data System (ADS)

    Podd, F. J. W.; David, M.; Iqbal, G.; Hussain, F.; Morris, D.; Osakue, E.; Yeow, Y.; Zahir, S.; Armitage, D. W.; Peyton, A. J.

    2013-10-01

    Electromagnetic systems for imaging concealed objects at checkpoints typically employ radiation at millimetre and terahertz frequencies. These systems have been shown to be effective and provide a sufficiently high resolution image. However there are difficulties and current electromagnetic systems have limitations particularly in accurately differentiating between threat and innocuous objects based on shape, surface emissivity or reflectivity, which are indicative parameters. In addition, water has a high absorption coefficient at millimetre wavelength and terahertz frequencies, which makes it more difficult for these frequencies to image through thick damp clothing. This paper considers the potential of using ultra wideband (UWB) in the low gigahertz range. The application of this frequency band to security screening appears to be a relatively new field. The business case for implementing the UWB system has been made financially viable by the recent availability of low-cost integrated circuits operating at these frequencies. Although designed for the communication sector, these devices can perform the required UWB radar measurements as well. This paper reports the implementation of a 2 to 5 GHz bandwidth linear array scanner. The paper describes the design and fabrication of transmitter and receiver antenna arrays whose individual elements are a type of antipodal Vivaldi antenna. The antenna's frequency and angular response were simulated in CST Microwave Studio and compared with laboratory measurements. The data pre-processing methods of background subtraction and deconvolution are implemented to improve the image quality. The background subtraction method uses a reference dataset to remove antenna crosstalk and room reflections from the dataset. The deconvolution method uses a Wiener filter to "sharpen" the returned echoes which improves the resolution of the reconstructed image. The filter uses an impulse response reference dataset and a signal-to-noise parameter to determine how the frequencies contained in the echo dataset are normalised. The chosen image reconstruction algorithm is based on the back-projection method. The algorithm was implemented in MATLAB and uses a pre-calculated sensitivity matrix to increase the computation speed. The results include both 2D and 3D image datasets. The 3D datasets were obtained by scanning the dual sixteen element linear antenna array over the test object. The system has been tested on both humans and mannequin test objects. The front surface of an object placed on the human/mannequin torso is clearly visible, but its presence is also seen from a tell-tale imaging characteristic. This characteristic is caused by a reduction in the wave velocity as the electromagnetic radiation passes through the object, and manifests as an indentation in the reconstructed image that is readily identifiable. The prototype system has been shown to easily detect a 12 mm x 30 mm x70 mm plastic object concealed under clothing.

  9. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    The focus of most direction-of-arrival (DOA) estimation problems has been based mainly on a two-dimensional (2D) scenario where we only need to estimate the azimuth angle. But in various practical situations we have to deal with a three-dimensional scenario. The importance of being able to estimate both azimuth and elevation angles with high accuracy and low complexity is of interest. We present the theoretical and the practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays each consisting of 8 microphones to do some field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.

  10. Computationally Efficient Radio Frequency Source Localization for Radio Interferometric Arrays

    NASA Astrophysics Data System (ADS)

    Steeb, J.-W.; Davidson, David B.; Wijnholds, Stefan J.

    2018-03-01

    Radio frequency interference (RFI) is an ever-increasing problem for remote sensing and radio astronomy, with radio telescope arrays especially vulnerable to RFI. Localizing the RFI source is the first step to dealing with the culprit system. In this paper, a new localization algorithm for interferometric arrays with low array beam sidelobes is presented. The algorithm has been adapted to work both in the near field and far field (only the direction of arrival can be recovered when the source is in the far field). In the near field the computational complexity of the algorithm is linear with search grid size compared to cubic scaling of the state-of-the-art 3-D MUltiple SIgnal Classification (MUSIC) method. The new method is as accurate as 3-D MUSIC. The trade-off is that the proposed algorithm requires a once-off a priori calculation and storing of weighting matrices. The accuracy of the algorithm is validated using data generated by low-frequency array while a hexacopter was flying around it and broadcasting a continuous-wave signal. For the flight, the mean distance between the differential GPS positions and the corresponding estimated positions of the hexacopter is 2 m at a wavelength of 6.7 m.

  11. Comparison of ISS Power System Telemetry with Analytically Derived Data for Shadowed Cases

    NASA Technical Reports Server (NTRS)

    Fincannon, H. James

    2002-01-01

    Accurate International Space Station (ISS) power prediction requires the quantification of solar array shadowing. Prior papers have discussed the NASA Glenn Research Center (GRC) ISS power system tool SPACE (System Power Analysis for Capability Evaluation) and its integrated shadowing algorithms. On-orbit telemetry has become available that permits the correlation of theoretical shadowing predictions with actual data. This paper documents the comparison of a shadowing metric (total solar array current) as derived from SPACE predictions and on-orbit flight telemetry data for representative significant shadowing cases. Images from flight video recordings and the SPACE computer program graphical output are used to illustrate the comparison. The accuracy of the SPACE shadowing capability is demonstrated for the cases examined.

  12. Effect of data truncation in an implementation of pixel clustering on a custom computing machine

    NASA Astrophysics Data System (ADS)

    Leeser, Miriam E.; Theiler, James P.; Estlick, Michael; Kitaryeva, Natalya V.; Szymanski, John J.

    2000-10-01

    We investigate the effect of truncating the precision of hyperspectral image data for the purpose of more efficiently segmenting the image using a variant of k-means clustering. We describe the implementation of the algorithm on field-programmable gate array (FPGA) hardware. Truncating the data to only a few bits per pixel in each spectral channel permits a more compact hardware design, enabling greater parallelism, and ultimately a more rapid execution. It also enables the storage of larger images in the onboard memory. In exchange for faster clustering, however, one trades off the quality of the produced segmentation. We find, however, that the clustering algorithm can tolerate considerable data truncation with little degradation in cluster quality. This robustness to truncated data can be extended by computing the cluster centers to a few more bits of precision than the data. Since there are so many more pixels than centers, the more aggressive data truncation leads to significant gains in the number of pixels that can be stored in memory and processed in hardware concurrently.

  13. Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    PubMed Central

    Torres-Huitzil, Cesar

    2013-01-01

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k × k kernel requires of k 2 − 1 comparisons per sample for a direct implementation; thus, performance scales expensively with the kernel size k. Faster computations can be achieved by kernel decomposition and using constant time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture design uses less computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture is able to compute max/min filters, on 1024 × 1024 images with up to 255 × 255 kernels, in around 8.4 milliseconds, 120 frames per second, at a clock frequency of 250 MHz. The implementation is highly scalable for the kernel size with good performance/area tradeoff suitable for embedded applications. The applicability of the architecture is shown for local adaptive image thresholding. PMID:24288456

  14. Exploring three faint source detections methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity/noise ratio, these objects can be easily missed by automated detection methods, which have been classically based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state of the art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources of radio interferometric images.

  15. Using passive cavitation images to classify high-intensity focused ultrasound lesions.

    PubMed

    Haworth, Kevin J; Salgaonkar, Vasant A; Corregan, Nicholas M; Holland, Christy K; Mast, T Douglas

    2015-09-01

    Passive cavitation imaging provides spatially resolved monitoring of cavitation emissions. However, the diffraction limit of a linear imaging array results in relatively poor range resolution. Poor range resolution has limited prior analyses of the spatial specificity and sensitivity of passive cavitation imaging in predicting thermal lesion formation. In this study, this limitation is overcome by orienting a linear array orthogonal to the high-intensity focused ultrasound propagation direction and performing passive imaging. Fourteen lesions were formed in ex vivo bovine liver samples as a result of 1.1-MHz continuous-wave ultrasound exposure. The lesions were classified as focal, "tadpole" or pre-focal based on their shape and location. Passive cavitation images were beamformed from emissions at the fundamental, harmonic, ultraharmonic and inharmonic frequencies with an established algorithm. Using the area under a receiver operating characteristic curve (AUROC), fundamental, harmonic and ultraharmonic emissions were found to be significant predictors of lesion formation for all lesion types. For both harmonic and ultraharmonic emissions, pre-focal lesions were classified most successfully (AUROC values of 0.87 and 0.88, respectively), followed by tadpole lesions (AUROC values of 0.77 and 0.64, respectively) and focal lesions (AUROC values of 0.65 and 0.60, respectively). Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  16. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.

  17. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    PubMed

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

    Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have an insufficient degree of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramer⁻Rao lower bound of DPD for multiple sources with a moving coprime array, is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  18. Systolic array processing of the sequential decoding algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.

  19. Superconducting Quantum Interference Device Array Based High Frequency Direction Finding on an Airborne Platform

    DTIC Science & Technology

    is performed using the MUSIC algorithm on the signals received on the non-uniform phased array, and the ESPRIT algorithm is used on the signals...received on the non-colocated vector sensor. The simulation results show that the MUSIC algorithm using 2D Bi-SQUIDs is able to differentiate two signals

  20. Accurate Permittivity Measurements for Microwave Imaging via Ultra-Wideband Removal of Spurious Reflectors

    PubMed Central

    Pelletier, Mathew G.; Viera, Joseph A.; Wanjura, John; Holt, Greg

    2010-01-01

    The use of microwave imaging is becoming more prevalent for detection of interior hidden defects in manufactured and packaged materials. In applications for detection of hidden moisture, microwave tomography can be used to image the material and then perform an inverse calculation to derive an estimate of the variability of the hidden material, such internal moisture, thereby alerting personnel to damaging levels of the hidden moisture before material degradation occurs. One impediment to this type of imaging occurs with nearby objects create strong reflections that create destructive and constructive interference, at the receiver, as the material is conveyed past the imaging antenna array. In an effort to remove the influence of the reflectors, such as metal bale ties, research was conducted to develop an algorithm for removal of the influence of the local proximity reflectors from the microwave images. This research effort produced a technique, based upon the use of ultra-wideband signals, for the removal of spurious reflections created by local proximity reflectors. This improvement enables accurate microwave measurements of moisture in such products as cotton bales, as well as other physical properties such as density or material composition. The proposed algorithm was shown to reduce errors by a 4:1 ratio and is an enabling technology for imaging applications in the presence of metal bale ties. PMID:22163668

  1. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    PubMed

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ ( h ), is defined as the half of the expected squared differences of pixel values between any two data locations with a lag distance of h . Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O ( n 2 ) Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.

  2. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures

    PubMed Central

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2016-01-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real time implementation of the algorithm. A semi-variance, γ(h), is defined as the half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O (n2) Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development Kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor. PMID:28428829

  3. List-mode PET image reconstruction for motion correction using the Intel XEON PHI co-processor

    NASA Astrophysics Data System (ADS)

    Ryder, W. J.; Angelis, G. I.; Bashar, R.; Gillam, J. E.; Fulton, R.; Meikle, S.

    2014-03-01

    List-mode image reconstruction with motion correction is computationally expensive, as it requires projection of hundreds of millions of rays through a 3D array. To decrease reconstruction time it is possible to use symmetric multiprocessing computers or graphics processing units. The former can have high financial costs, while the latter can require refactoring of algorithms. The Xeon Phi is a new co-processor card with a Many Integrated Core architecture that can run 4 multiple-instruction, multiple data threads per core with each thread having a 512-bit single instruction, multiple data vector register. Thus, it is possible to run in the region of 220 threads simultaneously. The aim of this study was to investigate whether the Xeon Phi co-processor card is a viable alternative to an x86 Linux server for accelerating List-mode PET image reconstruction for motion correction. An existing list-mode image reconstruction algorithm with motion correction was ported to run on the Xeon Phi coprocessor with the multi-threading implemented using pthreads. There were no differences between images reconstructed using the Phi co-processor card and images reconstructed using the same algorithm run on a Linux server. However, it was found that the reconstruction runtimes were 3 times greater for the Phi than the server. A new version of the image reconstruction algorithm was developed in C++ using OpenMP for mutli-threading and the Phi runtimes decreased to 1.67 times that of the host Linux server. Data transfer from the host to co-processor card was found to be a rate-limiting step; this needs to be carefully considered in order to maximize runtime speeds. When considering the purchase price of a Linux workstation with Xeon Phi co-processor card and top of the range Linux server, the former is a cost-effective computation resource for list-mode image reconstruction. A multi-Phi workstation could be a viable alternative to cluster computers at a lower cost for medical imaging applications.

  4. A Linked List-Based Algorithm for Blob Detection on Embedded Vision-Based Sensors.

    PubMed

    Acevedo-Avila, Ricardo; Gonzalez-Mendoza, Miguel; Garcia-Garcia, Andres

    2016-05-28

    Blob detection is a common task in vision-based applications. Most existing algorithms are aimed at execution on general purpose computers; while very few can be adapted to the computing restrictions present in embedded platforms. This paper focuses on the design of an algorithm capable of real-time blob detection that minimizes system memory consumption. The proposed algorithm detects objects in one image scan; it is based on a linked-list data structure tree used to label blobs depending on their shape and node information. An example application showing the results of a blob detection co-processor has been built on a low-powered field programmable gate array hardware as a step towards developing a smart video surveillance system. The detection method is intended for general purpose application. As such, several test cases focused on character recognition are also examined. The results obtained present a fair trade-off between accuracy and memory requirements; and prove the validity of the proposed approach for real-time implementation on resource-constrained computing platforms.

  5. Image Algebra Matlab language version 2.3 for image processing and compression research

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.; Hayden, Eric

    2010-08-01

    Image algebra is a rigorous, concise notation that unifies linear and nonlinear mathematics in the image domain. Image algebra was developed under DARPA and US Air Force sponsorship at University of Florida for over 15 years beginning in 1984. Image algebra has been implemented in a variety of programming languages designed specifically to support the development of image processing and computer vision algorithms and software. The University of Florida has been associated with development of the languages FORTRAN, Ada, Lisp, and C++. The latter implementation involved a class library, iac++, that supported image algebra programming in C++. Since image processing and computer vision are generally performed with operands that are array-based, the Matlab™ programming language is ideal for implementing the common subset of image algebra. Objects include sets and set operations, images and operations on images, as well as templates and image-template convolution operations. This implementation, called Image Algebra Matlab (IAM), has been found to be useful for research in data, image, and video compression, as described herein. Due to the widespread acceptance of the Matlab programming language in the computing community, IAM offers exciting possibilities for supporting a large group of users. The control over an object's computational resources provided to the algorithm designer by Matlab means that IAM programs can employ versatile representations for the operands and operations of the algebra, which are supported by the underlying libraries written in Matlab. In a previous publication, we showed how the functionality of IAC++ could be carried forth into a Matlab implementation, and provided practical details of a prototype implementation called IAM Version 1. In this paper, we further elaborate the purpose and structure of image algebra, then present a maturing implementation of Image Algebra Matlab called IAM Version 2.3, which extends the previous implementation of IAM to include polymorphic operations over different point sets, as well as recursive convolution operations and functional composition. We also show how image algebra and IAM can be employed in image processing and compression research, as well as algorithm development and analysis.

  6. Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging

    NASA Astrophysics Data System (ADS)

    Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ

    2015-01-01

    Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction dependent gains.The antenna power pattern varies both as a function of time and frequency. The broad band time varying nature of the antenna power pattern when not corrected leads to gross errors in full stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution while not accounting for the time and frequency dependence of the antenna power pattern. Simulations were conducted with the wideband full stokes power pattern of the Very Large Array(VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wide-band full-pol mosaic imaging as well and algorithms to correct these errors will be crucial for many up-coming large area surveys (e.g. VLASS)

  7. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    PubMed Central

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-01-01

    In order to improve the accuracy of ultrasonic phased array focusing time delay, analyzing the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can simultaneously process. Moreover, we summarized the general formula of arbitrary multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on 8× interpolation CIC filter parallel algorithm. Improving the algorithmic structure, 12.5% of addition and 29.2% of multiplication was reduced, meanwhile the speed of computation is still very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter’s pass band is flatter, the transition band becomes steep, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on Field Programming Gate Array (FPGA). In the case of system clock is 125 MHz, after 8× interpolation filtering and decomposition, time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the algorithm we proposed has strong feasibility. Because of the fast calculation, small computational amount and high resolution, this algorithm is especially suitable for applications with high time delay accuracy and fast detection. PMID:29023385

  8. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    PubMed

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, analyzing the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can simultaneously process. Moreover, we summarized the general formula of arbitrary multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on 8× interpolation CIC filter parallel algorithm. Improving the algorithmic structure, 12.5% of addition and 29.2% of multiplication was reduced, meanwhile the speed of computation is still very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter's pass band is flatter, the transition band becomes steep, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on Field Programming Gate Array (FPGA). In the case of system clock is 125 MHz, after 8× interpolation filtering and decomposition, time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the algorithm we proposed has strong feasibility. Because of the fast calculation, small computational amount and high resolution, this algorithm is especially suitable for applications with high time delay accuracy and fast detection.

  9. A software framework for pipelined arithmetic algorithms in field programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Kim, J. B.; Won, E.

    2018-03-01

    Pipelined algorithms implemented in field programmable gate arrays are extensively used for hardware triggers in the modern experimental high energy physics field and the complexity of such algorithms increases rapidly. For development of such hardware triggers, algorithms are developed in C++, ported to hardware description language for synthesizing firmware, and then ported back to C++ for simulating the firmware response down to the single bit level. We present a C++ software framework which automatically simulates and generates hardware description language code for pipelined arithmetic algorithms.

  10. Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; Huang, Lianjie

    2015-01-28

    Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversionmore » method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.« less

  11. Transmission mode adaptive beamforming for planar phased arrays and its application to 3D ultrasonic transcranial imaging

    NASA Astrophysics Data System (ADS)

    Shapoori, Kiyanoosh; Sadler, Jeffrey; Wydra, Adrian; Malyarenko, Eugene; Sinclair, Anthony; Maev, Roman G.

    2013-03-01

    A new adaptive beamforming method for accurately focusing ultrasound behind highly scattering layers of human skull and its application to 3D transcranial imaging via small-aperture planar phased arrays are reported. Due to its undulating, inhomogeneous, porous, and highly attenuative structure, human skull bone severely distorts ultrasonic beams produced by conventional focusing methods in both imaging and therapeutic applications. Strong acoustical mismatch between the skull and brain tissues, in addition to the skull's undulating topology across the active area of a planar ultrasonic probe, could cause multiple reflections and unpredictable refraction during beamforming and imaging processes. Such effects could significantly deflect the probe's beam from the intended focal point. Presented here is a theoretical basis and simulation results of an adaptive beamforming method that compensates for the latter effects in transmission mode, accompanied by experimental verification. The probe is a custom-designed 2 MHz, 256-element matrix array with 0.45 mm element size and 0.1mm kerf. Through its small footprint, it is possible to accurately measure the profile of the skull segment in contact with the probe and feed the results into our ray tracing program. The latter calculates the new time delay patterns adapted to the geometrical and acoustical properties of the skull phantom segment in contact with the probe. The time delay patterns correct for the refraction at the skull-brain boundary and bring the distorted beam back to its intended focus. The algorithms were implemented on the ultrasound open-platform ULA-OP (developed at the University of Florence).

  12. Adaptive algorithms of position and energy reconstruction in Anger-camera type detectors: experimental data processing in ANTS

    NASA Astrophysics Data System (ADS)

    Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.

    2013-05-01

    The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/

  13. OpenMP Parallelization and Optimization of Graph-based Machine Learning Algorithms

    DTIC Science & Technology

    2016-05-01

    composed of hyper - spectral video sequences recording the release of chemical plumes at the Dugway Proving Ground. We use the 329 frames of the...video. Each frame is a hyper - spectral image with dimension 128 × 320 × 129, where 129 is the dimension of the channel of each pixel. The total number of...j=1 . Then we use the nested for- loop to calculate the values of WXY by the formula (1). We then put the corresponding value in an array which

  14. Far-field DOA estimation and source localization for different scenarios in a distributed sensor network

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz

    Recent developments in the integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms would be an efficient option for the defined problem and as the result; the necessity of developing a new algorithm becomes undeniable. In this dissertation, we present the theoretical and the practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent source DOA estimation in a limited reverberant environment. Then, we provide a least-square (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider the determination of the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. In this dissertation, for each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied into a real-time sensor network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.

  15. Analysis of seismic waves crossing the Santa Clara Valley using the three-component MUSIQUE array algorithm

    NASA Astrophysics Data System (ADS)

    Hobiger, Manuel; Cornou, Cécile; Bard, Pierre-Yves; Le Bihan, Nicolas; Imperatori, Walter

    2016-10-01

    We introduce the MUSIQUE algorithm and apply it to seismic wavefield recordings in California. The algorithm is designed to analyse seismic signals recorded by arrays of three-component seismic sensors. It is based on the MUSIC and the quaternion-MUSIC algorithms. In a first step, the MUSIC algorithm is applied in order to estimate the backazimuth and velocity of incident seismic waves and to discriminate between Love and possible Rayleigh waves. In a second step, the polarization parameters of possible Rayleigh waves are analysed using quaternion-MUSIC, distinguishing retrograde and prograde Rayleigh waves and determining their ellipticity. In this study, we apply the MUSIQUE algorithm to seismic wavefield recordings of the San Jose Dense Seismic Array. This array has been installed in 1999 in the Evergreen Basin, a sedimentary basin in the Eastern Santa Clara Valley. The analysis includes 22 regional earthquakes with epicentres between 40 and 600 km distant from the array and covering different backazimuths with respect to the array. The azimuthal distribution and the energy partition of the different surface wave types are analysed. Love waves dominate the wavefield for the vast majority of the events. For close events in the north, the wavefield is dominated by the first harmonic mode of Love waves, for farther events, the fundamental mode dominates. The energy distribution is different for earthquakes occurring northwest and southeast of the array. In both cases, the waves crossing the array are mostly arriving from the respective hemicycle. However, scattered Love waves arriving from the south can be seen for all earthquakes. Combining the information of all events, it is possible to retrieve the Love wave dispersion curves of the fundamental and the first harmonic mode. The particle motion of the fundamental mode of Rayleigh waves is retrograde and for the first harmonic mode, it is prograde. For both modes, we can also retrieve dispersion and ellipticity curves. Wave motion simulations for two earthquakes are in good agreement with the real data results and confirm the identification of the wave scattering formations to the south of the array, which generate the scattered Love waves visible for all earthquakes.

  16. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  17. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single camera plenoptic PIV to produce a 3D/3C vector field, where it was found that lateral displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300× 200× 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera and a 289× 193 array of microlenses and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.

  18. The Goes-R Geostationary Lightning Mapper (GLM): Algorithm and Instrument Status

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.; Blakeslee, Richard J.; Koshak, William J.; Mach, Douglas

    2010-01-01

    The Geostationary Operational Environmental Satellite (GOES-R) is the next series to follow the existing GOES system currently operating over the Western Hemisphere. Superior spacecraft and instrument technology will support expanded detection of environmental phenomena, resulting in more timely and accurate forecasts and warnings. Advancements over current GOES capabilities include a new capability for total lightning detection (cloud and cloud-to-ground flashes) from the Geostationary Lightning Mapper (GLM), and improved capability for the Advanced Baseline Imager (ABI). The Geostationary Lighting Mapper (GLM) will map total lightning activity (in-cloud and cloud-to-ground lighting flashes) continuously day and night with near-uniform spatial resolution of 8 km with a product refresh rate of less than 20 sec over the Americas and adjacent oceanic regions. This will aid in forecasting severe storms and tornado activity, and convective weather impacts on aviation safety and efficiency. In parallel with the instrument development (a prototype and 4 flight models), a GOES-R Risk Reduction Team and Algorithm Working Group Lightning Applications Team have begun to develop the Level 2 algorithms, cal/val performance monitoring tools, and new applications. Proxy total lightning data from the NASA Lightning Imaging Sensor on the Tropical Rainfall Measuring Mission (TRMM) satellite and regional test beds are being used to develop the pre-launch algorithms and applications, and also improve our knowledge of thunderstorm initiation and evolution. A joint field campaign with Brazilian researchers in 2010-2011 will produce concurrent observations from a VHF lightning mapping array, Meteosat multi-band imagery, Tropical Rainfall Measuring Mission (TRMM) Lightning Imaging Sensor (LIS) overpasses, and related ground and in-situ lightning and meteorological measurements in the vicinity of Sao Paulo. These data will provide a new comprehensive proxy data set for algorithm and application development.

  19. Localization and separation of acoustic sources by using a 2.5-dimensional circular microphone array.

    PubMed

    Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen

    2017-07-01

    Circular microphone arrays (CMAs) are sufficient in many immersive audio applications because azimuthal angles of sources are considered more important than the elevation angles in those occasions. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for some applications which involves three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on the top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively. The direction of arrival (DOA) is estimated from the product of two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from two arrays are further processed by the normalized least-mean-square algorithm with the internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprised of a 24-element CMA and an eight-element LLA is constructed. Objective perceptual evaluation of speech quality test and a subjective listening test are also undertaken.

  20. Anisotropic scattering of discrete particle arrays.

    PubMed

    Paul, Joseph S; Fu, Wai Chong; Dokos, Socrates; Box, Michael

    2010-05-01

    Far-field intensities of light scattered from a linear centro-symmetric array illuminated by a plane wave of incident light are estimated at a series of detector angles. The intensities are computed from the superposition of E-fields scattered by the individual array elements. An average scattering phase function is used to model the scattered fields of individual array elements. The nature of scattering from the array is investigated using an image (theta-phi plot) of the far-field intensities computed at a series of locations obtained by rotating the detector angle from 0 degrees to 360 degrees, corresponding to each angle of incidence in the interval [0 degrees 360 degrees]. The diffraction patterns observed from the theta-Phi plot are compared with those for isotropic scattering. In the absence of prior information on the array geometry, the intensities corresponding to theta-Phi pairs satisfying the Bragg condition are used to estimate the phase function. An algorithmic procedure is presented for this purpose and tested using synthetic data. The relative error between estimated and theoretical values of the phase function is shown to be determined by the mean spacing factor, the number of elements, and the far-field distance. An empirical relationship is presented to calculate the optimal far-field distance for a given specification of the percentage error.

  1. Defect detection in slab surface: a novel dual Charge-coupled Device imaging-based fuzzy connectedness strategy.

    PubMed

    Zhao, Liming; Ouyang, Qi; Chen, Dengfu; Udupa, Jayaram K; Wang, Huiqian; Zeng, Yuebin

    2014-11-01

    To provide an accurate surface defects inspection system and make the automation of robust image segmentation method a reality in routine production line, a general approach is presented for continuous casting slab (CC-slab) surface defects extraction and delineation. The applicability of the system is not tied to CC-slab exclusively. We combined the line array CCD (Charge-coupled Device) traditional scanning imaging (LS-imaging) and area array CCD laser three-dimensional (3D) scanning imaging (AL-imaging) strategies in designing the system. Its aim is to suppress the respective imaging system's limitations. In the system, the images acquired from the two CCD sensors are carefully aligned in space and in time by maximum mutual information-based full-fledged registration schema. Subsequently, the image information is fused from these two subsystems such as the unbroken 2D information in LS-imaging and 3D depressed information in AL-imaging. Finally, on the basis of the established dual scanning imaging system the region of interest (ROI) localization by seed specification was designed, and the delineation for ROI by iterative relative fuzzy connectedness (IRFC) algorithm was utilized to get a precise inspection result. Our method takes into account the complementary advantages in the two common machine vision (MV) systems and it performs competitively with the state-of-the-art as seen from the comparison of experimental results. For the first time, a joint imaging scanning strategy is proposed for CC-slab surface defect inspection that allows a feasible way of powerful ROI delineation strategies to be applied to the MV inspection field. Multi-ROI delineation by using IRFC in this research field may further improve the results.

  2. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  3. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace`s equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  4. Precise DOA Estimation Using SAGE Algorithm with a Cylindrical Array

    NASA Astrophysics Data System (ADS)

    Takanashi, Masaki; Nishimura, Toshihiko; Ogawa, Yasutaka; Ohgane, Takeo

    A uniform circular array (UCA) is a well-known array configuration which can accomplish estimation of 360° field of view with identical accuracy. However, a UCA cannot estimate coherent signals because we cannot apply the SSP owing to the structure of UCA. Although a variety of studies on UCA in coherent multipath environments have been done, it is impossible to estimate the DOA of coherent signals with different incident polar angles. Then, we have proposed Root-MUSIC algorithm with a cylindrical array. However, the estimation performance is degraded when incident signals arrive with close polar angles. To solve this problem, in the letter, we propose to use SAGE algorithm with a cylindrical array. Here, we adopt a CLA Root-MUSIC for the initial estimation and decompose two-dimensional search to double one-dimensional search to reduce the calculation load. The results show that the proposal achieves high resolution with low complexity.

  5. Fast beampattern evaluation by polynomial rooting

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Uhlich, S.; Yang, B.

    2011-07-01

    Current automotive radar systems measure the distance, the relative velocity and the direction of objects in their environment. This information enables the car to support the driver. The direction estimation capabilities of a sensor array depend on its beampattern. To find the array configuration leading to the best angle estimation by a global optimization algorithm, a huge amount of beampatterns have to be calculated to detect their maxima. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern fast and reliably, leading to accelerated array optimizations. The algorithm works for arrays having the sensors on a uniformly spaced grid. We use a general version of the gcd (greatest common divisor) function in order to write the problem as a polynomial. We differentiate and root the polynomial to get the extrema of the beampattern. In addition, we show a method to reduce the computational burden even more by decreasing the order of the polynomial.

  6. An iterative algorithm for soft tissue reconstruction from truncated flat panel projections

    NASA Astrophysics Data System (ADS)

    Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.

    2006-03-01

    The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT is providing pre-operative 3D information, there is a need for 3D imaging of low contrast soft tissue during interventions in a number of areas including neurology, cardiac electro-physiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, it is an important problem in 3D imaging on a C-arm to address the need to generate a 3D reconstruction representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging where the views are gated by the electrocardiogram(ECG) to combat motion artifacts.

  7. Design of a temperature control system using incremental PID algorithm for a special homemade shortwave infrared spatial remote sensor based on FPGA

    NASA Astrophysics Data System (ADS)

    Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting

    2010-11-01

    An image spectrometer of a spatial remote sensing satellite requires shortwave band range from 2.1μm to 3μm which is one of the most important bands in remote sensing. We designed an infrared sub-system of the image spectrometer using a homemade 640x1 InGaAs shortwave infrared sensor working on FPA system which requires high uniformity and low level of dark current. The working temperature should be -15+/-0.2 Degree Celsius. This paper studies the model of noise for focal plane array (FPA) system, investigated the relationship with temperature and dark current noise, and adopts Incremental PID algorithm to generate PWM wave in order to control the temperature of the sensor. There are four modules compose of the FPGA module design. All of the modules are coded by VHDL and implemented in FPGA device APA300. Experiment shows the intelligent temperature control system succeeds in controlling the temperature of the sensor.

  8. Feasibility of Using Wideband Microwave System for Non-Invasive Detection and Monitoring of Pulmonary Oedema

    NASA Astrophysics Data System (ADS)

    Rezaeieh, S. Ahdi; Zamani, A.; Bialkowski, K. S.; Mahmoud, A.; Abbosh, A. M.

    2015-09-01

    Pulmonary oedema is a common manifestation of various fatal diseases that can be caused by cardiac or non-cardiac syndromes. The accumulated fluid has a considerably higher dielectric constant compared to lungs’ tissues, and can thus be detected using microwave techniques. Therefore, a non-invasive microwave system for the early detection of pulmonary oedema is presented. It employs a platform in the form of foam-based bed that contains two linear arrays of wideband antennas covering the band 0.7-1 GHz. The platform is designed such that during the tests, the subject lays on the bed with the back of the torso facing the antenna arrays. The antennas are controlled using a switching network that is connected to a compact network analyzer. A novel frequency-based imaging algorithm is used to process the recorded signals and generate an image of the torso showing any accumulated fluids in the lungs. The system is verified on an artificial torso phantom, and animal organs. As a feasibility study, preclinical tests are conducted on healthy subjects to determinate the type of obtained images, the statistics and threshold levels of their intensity to differentiate between healthy and unhealthy subjects.

  9. Feasibility of Using Wideband Microwave System for Non-Invasive Detection and Monitoring of Pulmonary Oedema

    PubMed Central

    Rezaeieh, S. Ahdi; Zamani, A.; Bialkowski, K. S.; Mahmoud, A.; Abbosh, A. M.

    2015-01-01

    Pulmonary oedema is a common manifestation of various fatal diseases that can be caused by cardiac or non-cardiac syndromes. The accumulated fluid has a considerably higher dielectric constant compared to lungs’ tissues, and can thus be detected using microwave techniques. Therefore, a non-invasive microwave system for the early detection of pulmonary oedema is presented. It employs a platform in the form of foam-based bed that contains two linear arrays of wideband antennas covering the band 0.7–1 GHz. The platform is designed such that during the tests, the subject lays on the bed with the back of the torso facing the antenna arrays. The antennas are controlled using a switching network that is connected to a compact network analyzer. A novel frequency-based imaging algorithm is used to process the recorded signals and generate an image of the torso showing any accumulated fluids in the lungs. The system is verified on an artificial torso phantom, and animal organs. As a feasibility study, preclinical tests are conducted on healthy subjects to determinate the type of obtained images, the statistics and threshold levels of their intensity to differentiate between healthy and unhealthy subjects. PMID:26365299

  10. Fast Image Texture Classification Using Decision Trees

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.

  11. Design and development of a dedicated mammary and axillary region positron emission tomography system

    NASA Astrophysics Data System (ADS)

    Doshi, Niraj Kumar

    Breast cancer is the second leading cause of cancer death in women. Currently, mammography and physical breast examination, both non-invasive techniques, provide the two most effective methods available for screening potential breast cancer patients. During the management of patients, however, several invasive techniques such as axillary lymph node dissection, core biopsies and lumpectomies, are utilized to determine the stage or malignancy of the disease with significant cost and morbidity associated with them. Positron Emission Tomography (PET), using [F-18] fluorodeoxyglucose (FDG) tracer is a sensitive and non-invasive imaging modality that may be a cost-effective alternative to certain invasive procedures. In this project we have developed a low cost, high performance, dedicated PET camera (maxPET) for mammary and axillary region imaging. The system consists of two 15x15 cm2 planar scintillation detector arrays composed of modular detectors operating in coincidence. The modular detectors are comprised of a 9x9 array of 3x3x20 mm3 lutetiurn oxyorthosilicate (LSO) detector elements, read out by a 5x5 array of position- sensitive photomultiplier tubes. The average measured intrinsic spatial resolution of a detector module is 2.26 mm with a sensitivity of up to 40% for a central point source. The measured coincidence timing resolution for two modules is 2.4 ns. The average energy resolution measured across the entire two detector plates is 21.6%. The coincidence timing resolution for the entire system is 8.1 ns. A line bar phantom was imaged and images were reconstructed using the focal plane tomography algorithm. A 4 mm projection image resolution was measured based on profiles taken through the line bar phantom images. The goal of the maxPET system will be to aid in breast cancer patient management by assisting in imaging women with dense, fibro-glandular breasts, detecting axillary lymph node metastases without surgery, monitoring chemotherapy effectiveness and assisting in visualization of recurrence and tumoral boundaries.

  12. A tetrahedron beam computed tomography benchtop system with a multiple pixel field emission x-ray tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xiaochao; Kim, Joshua; Laganis, Philip

    2011-10-15

    Purpose: To demonstrate the feasibility of Tetrahedron Beam Computed Tomography (TBCT) using a carbon nanotube (CNT) multiple pixel field emission x-ray (MPFEX) tube. Methods: A multiple pixel x-ray source facilitates the creation of novel x-ray imaging modalities. In a previous publication, the authors proposed a Tetrahedron Beam Computed Tomography (TBCT) imaging system which comprises a linear source array and a linear detector array that are orthogonal to each other. TBCT is expected to reduce scatter compared with Cone Beam Computed Tomography (CBCT) and to have better detector performance. Therefore, it may produce improved image quality for image guided radiotherapy. Inmore » this study, a TBCT benchtop system has been developed with an MPFEX tube. The tube has 75 CNT cold cathodes, which generate 75 x-ray focal spots on an elongated anode, and has 4 mm pixel spacing. An in-house-developed, 5-row CT detector array using silicon photodiodes and CdWO{sub 4} scintillators was employed in the system. Hardware and software were developed for tube control and detector data acquisition. The raw data were preprocessed for beam hardening and detector response linearity and were reconstructed with an FDK-based image reconstruction algorithm. Results: The focal spots were measured at about 1 x 2 mm{sup 2} using a star phantom. Each cathode generates around 3 mA cathode current with 2190 V gate voltage. The benchtop system is able to perform TBCT scans with a prolonged scanning time. Images of a commercial CT phantom were successfully acquired. Conclusions: A prototype system was developed, and preliminary phantom images were successfully acquired. MPFEX is a promising x-ray source for TBCT. Further improvement of tube output is needed in order for it to be used in clinical TBCT systems.« less

  13. An Interactive Program on Digitizing Historical Seismograms

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Xu, T.

    2013-12-01

    Retrieving information from historical seismograms is of great importance since they are considered the unique sources that provide quantitative information of historical earthquakes. Modern techniques of seismology require digital forms of seismograms that are essentially a sequence of time-amplitude pairs. However, the historical seismograms, after scanned into computers, are two dimensional arrays. Each element of the arrays contains the grayscale value or RGB value of the corresponding pixel. The problem of digitizing historical seismograms, referred to as converting historical seismograms to digital seismograms, can be formulated as an inverse problem that generating sequences of time-amplitude pairs from a two dimension arrays. This problem has infinite solutions. The algorithm for automatic digitization of historical seismogram presented considers several features of seismograms, including continuity, smoothness of the seismic traces as the prior information, and assumes that the amplitude is a single-valued function of time. An interactive program based on the algorithm is also presented. The program is developed using Matlab GUI and has both automatic and manual modality digitization. Users can easily switch between them, and try different combinations to get the optimal results. Several examples are given to illustrate the results of digitizing seismograms using the program, including a photographic record and a wide-angle reflection/refraction seismogram. Digitized result of the program (redrawn using Golden Software Surfer for high resolution image). (a) shows the result of automatic digitization, and (b) is the result after manual correction.

  14. A super-resolution ultrasound method for brain vascular mapping

    PubMed Central

    O'Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    Purpose: High-resolution vascular imaging has not been achieved in the brain due to limitations of current clinical imaging modalities. The authors present a method for transcranial ultrasound imaging of single micrometer-size bubbles within a tube phantom. Methods: Emissions from single bubbles within a tube phantom were mapped through an ex vivo human skull using a sparse hemispherical receiver array and a passive beamforming algorithm. Noninvasive phase and amplitude correction techniques were applied to compensate for the aberrating effects of the skull bone. The positions of the individual bubbles were estimated beyond the diffraction limit of ultrasound to produce a super-resolution image of the tube phantom, which was compared with microcomputed tomography (micro-CT). Results: The resulting super-resolution ultrasound image is comparable to results obtained via the micro-CT for small tissue specimen imaging. Conclusions: This method provides superior resolution to deep-tissue contrast ultrasound and has the potential to be extended to provide complete vascular network imaging in the brain. PMID:24320408

  15. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins.

    PubMed

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N Samba; Gopalaswamy, Arjun M; Karanth, K Ullas

    2009-06-23

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin.

  16. A tiger cannot change its stripes: using a three-dimensional model to match images of living tigers and tiger skins

    PubMed Central

    Hiby, Lex; Lovell, Phil; Patil, Narendra; Kumar, N. Samba; Gopalaswamy, Arjun M.; Karanth, K. Ullas

    2009-01-01

    The tiger is one of many species in which individuals can be identified by surface patterns. Camera traps can be used to record individual tigers moving over an array of locations and provide data for monitoring and studying populations and devising conservation strategies. We suggest using a combination of algorithms to calculate similarity scores between pattern samples scanned from the images to automate the search for a match to a new image. We show how using a three-dimensional surface model of a tiger to scan the pattern samples allows comparison of images that differ widely in camera angles and body posture. The software, which is free to download, considerably reduces the effort required to maintain an image catalogue and we suggest it could be used to trace the origin of a tiger skin by searching a central database of living tigers' images for matches to an image of the skin. PMID:19324633

  17. All-optical endoscopic probe for high resolution 3D photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Ansari, R.; Zhang, E.; Desjardins, A. E.; Beard, P. C.

    2017-03-01

    A novel all-optical forward-viewing photoacoustic probe using a flexible coherent fibre-optic bundle and a Fabry- Perot (FP) ultrasound sensor has been developed. The fibre bundle, along with the FP sensor at its distal end, synthesizes a high density 2D array of wideband ultrasound detectors. Photoacoustic waves arriving at the sensor are spatially mapped by optically scanning the proximal end face of the bundle in 2D with a CW wavelength-tunable interrogation laser. 3D images are formed from the detected signals using a time-reversal image reconstruction algorithm. The system has been characterized in terms of its PSF, noise-equivalent pressure and field of view. Finally, the high resolution 3D imaging capability has been demonstrated using arbitrary shaped phantoms and duck embryo.

  18. Multiple Source DF (Direction Finding) Signal Processing: An Experimental System,

    DTIC Science & Technology

    The MUltiple SIgnal Characterization ( MUSIC ) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of...the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been

  19. Segmented Mirror Image Degradation Due to Surface Dust, Alignment and Figure

    NASA Technical Reports Server (NTRS)

    Schreur, Julian J.

    1999-01-01

    In 1996 an algorithm was developed to include the effects of surface roughness in the calculation of the point spread function of a telescope mirror. This algorithm has been extended to include the effects of alignment errors and figure errors for the individual elements, and an overall contamination by surface dust. The final algorithm builds an array for a guard-banded pupil function of a mirror that may or may not have a central hole, a central reflecting segment, or an outer ring of segments. The central hole, central reflecting segment, and outer ring may be circular or polygonal, and the outer segments may have trimmed comers. The modeled point spread functions show that x-tilt and y-tilt, or the corresponding R-tilt and theta-tilt for a segment in an outer ring, is readily apparent for maximum wavefront errors of 0.1 lambda. A similar sized piston error is also apparent, but integral wavelength piston errors are not. Severe piston error introduces a focus error of the opposite sign, so piston could be adjusted to compensate for segments with varying focal lengths. Dust affects the image principally by decreasing the Strehl ratio, or peak intensity of the image. For an eight-meter telescope a 25% coverage by dust produced a scattered light intensity of 10(exp -9) of the peak intensity, a level well below detectability.

  20. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    PubMed

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, delay-and-sum (DAS) beamformer is a common beamforming algorithm having a simple implementation. However, it results in a poor resolution and high sidelobes. To address these challenges, a new algorithm namely delay-multiply-and-sum (DMAS) was introduced having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm MVB-DMAS results in about 31, 18, and 8 dB sidelobes reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvement in full-width-half-maximum about 96%, 94%, and 45% and signal-to-noise ratio about 89%, 15%, and 35% compared to DAS, DMAS, MV, respectively. In particular, at the depth of 33 mm of the experimental images, MVB-DMAS results in about 20 dB sidelobes reduction in comparison with other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

Top