Fast algorithm for wavefront reconstruction in XAO/SCAO with pyramid wavefront sensor
NASA Astrophysics Data System (ADS)
Shatokhina, Iuliia; Obereder, Andreas; Ramlau, Ronny
2014-08-01
We present a fast wavefront reconstruction algorithm developed for an extreme adaptive optics system equipped with a pyramid wavefront sensor on a 42 m telescope. The method is called the Preprocessed Cumulative Reconstructor with domain decomposition (P-CuReD). It is based on the theoretical relationship between pyramid and Shack-Hartmann wavefront sensor data and consists of two consecutive steps: data preprocessing, followed by application of the CuReD algorithm, a fast method for wavefront reconstruction from Shack-Hartmann sensor data. Closed-loop simulation results show that the P-CuReD method provides the same reconstruction quality as, and is significantly faster than, a matrix-vector multiplication (MVM) reconstructor.
NASA Astrophysics Data System (ADS)
Chen, Xiao-jun; Dong, Li-zhi; Wang, Shuai; Yang, Ping; Xu, Bing
2017-11-01
In quadri-wave lateral shearing interferometry (QWLSI), when the intensity distribution of the incident light wave is non-uniform, part of the intensity information couples with the wavefront derivatives and causes wavefront reconstruction errors. In this paper, we propose two algorithms to reduce the influence of a non-uniform intensity distribution on wavefront reconstruction. Our simulation results demonstrate that the reconstructed amplitude distribution (RAD) algorithm can effectively reduce the influence of the intensity distribution on the wavefront reconstruction and that the collected amplitude distribution (CAD) algorithm can almost eliminate it.
Two-dimensional wavefront reconstruction based on double-shearing and least squares fitting
NASA Astrophysics Data System (ADS)
Liang, Peiying; Ding, Jianping; Zhu, Yangqing; Dong, Qian; Huang, Yuhua; Zhu, Zhen
2017-06-01
A two-dimensional wavefront reconstruction method based on double-shearing and least-squares fitting is proposed in this paper. Four one-dimensional phase estimates of the measured wavefront, corresponding to the two shears and the two orthogonal directions, are calculated from the differential phase, which solves the problem of the missing spectrum; the two-dimensional wavefront is then reconstructed by least-squares fitting. Numerical simulations of the proposed algorithm are carried out to verify the feasibility of this method. The influence of noise arising from different shear amounts and different intensities on the reconstruction accuracy is studied and compared with results from the algorithm based on single-shearing and least-squares fitting. Finally, a two-grating lateral shearing interference experiment is carried out to verify the wavefront reconstruction algorithm based on double-shearing and least-squares fitting.
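The least-squares step can be illustrated in one dimension: stack the difference equations for both shears into a single overdetermined system and pin the unknown piston. This is a generic sketch; the shear amounts and the test phase below are arbitrary choices, not the paper's configuration.

```python
import numpy as np

def shear_measurements(phi, s):
    """Differential phase for a lateral shear of s samples: d[i] = phi[i+s] - phi[i]."""
    return phi[s:] - phi[:-s]

def reconstruct_two_shears(d1, s1, d2, s2, n):
    """Least-squares estimate of phi (n samples) from two shear data sets.
    Each row encodes phi[i+s] - phi[i] = d[i]; a final row pins the piston."""
    rows, rhs = [], []
    for d, s in ((d1, s1), (d2, s2)):
        for i in range(n - s):
            r = np.zeros(n)
            r[i + s], r[i] = 1.0, -1.0
            rows.append(r)
            rhs.append(d[i])
    rows.append(np.ones(n) / n)   # zero-mean (piston) constraint
    rhs.append(0.0)
    A, b = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Test phase (zero mean) and noiseless measurements at shears 1 and 3.
x = np.linspace(-1, 1, 32)
phi = 0.7 * x**2 - 0.2 * x**3
phi -= phi.mean()
est = reconstruct_two_shears(shear_measurements(phi, 1), 1,
                             shear_measurements(phi, 3), 3, phi.size)
```

With noiseless data the system is consistent and the recovery is exact up to the pinned piston; with noise, the redundancy between the two shears is what the least-squares fit exploits.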
Implementation of a rapid correction algorithm for adaptive optics using a plenoptic sensor
NASA Astrophysics Data System (ADS)
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2016-09-01
Adaptive optics relies on the accuracy and speed of a wavefront sensor in order to provide quick corrections to distortions in the optical system. In the weaker atmospheric turbulence often encountered in astronomical applications, a traditional Shack-Hartmann sensor has proved to be very effective. However, in the stronger atmospheric turbulence often encountered near the surface of the Earth, turbulence no longer solely causes small tilts in the wavefront. Instead, lasers passing through strong or "deep" atmospheric turbulence encounter beam breakup, which results in interference effects and discontinuities in the incoming wavefront. In these situations, a Shack-Hartmann sensor can no longer effectively determine the shape of the incoming wavefront. We propose a wavefront reconstruction and correction algorithm based on the plenoptic sensor. The plenoptic sensor's design allows it to match and exceed the wavefront sensing capabilities of a Shack-Hartmann sensor for our application. Novel wavefront reconstruction algorithms can take advantage of the plenoptic sensor to provide the rapid wavefront reconstruction necessary for real-time turbulence correction. To test the integrity of the plenoptic sensor and its reconstruction algorithms, we use artificially generated turbulence in a lab-scale environment to simulate the structure and speed of outdoor atmospheric turbulence. By analyzing the performance of our system with and without the closed-loop plenoptic sensor adaptive optics system, we show that the plenoptic sensor is effective in mitigating real-time lab-generated atmospheric turbulence.
NASA Astrophysics Data System (ADS)
Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen
2016-11-01
To improve the quality of adaptive optics (AO) images, we study an AO image restoration algorithm based on wavefront reconstruction and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). We then develop an iterative solution for AO image restoration that addresses the joint deconvolution problem. Image restoration experiments are performed to verify the effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) of our algorithm are increased by 36.92% and 27.44%, respectively, the computation time is decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.
Real-time implementing wavefront reconstruction for adaptive optics
NASA Astrophysics Data System (ADS)
Wang, Caixia; Li, Mei; Wang, Chunhong; Zhou, Luchun; Jiang, Wenhan
2004-12-01
Real-time wavefront reconstruction capability is important for an adaptive optics (AO) system. The system bandwidth and the real-time processing ability of the wavefront processor are mainly limited by the speed of calculation. Compensating atmospheric turbulence requires a sufficient number of subapertures and a high sampling frequency, which increases the number of reconstruction operations accordingly. Since AO system performance improves as calculation latency decreases, it is necessary to study how to speed up wavefront reconstruction. There are two ways to improve its real-time performance. One is to transform the wavefront reconstruction matrix, for example with wavelets or the FFT; the other is to enhance the performance of the processing element. Analysis shows that the former cuts latency at the cost of reconstruction precision, so the latter approach is adopted in this article. Exploiting the structure of the wavefront reconstruction algorithm, a systolic array implemented in an FPGA is designed for real-time wavefront reconstruction. The system delay is greatly reduced through pipelining and parallel processing; the minimum latency equals the reconstruction calculation of a single subaperture.
Wavefront Reconstruction and Mirror Surface Optimization for Adaptive Optics
2014-06-01
TERMS: Wavefront reconstruction, Adaptive optics, Wavelets, Atmospheric turbulence, Branch points, Mirror surface optimization, Space telescope, Segmented... contribution adapts the proposed algorithm to work when branch points are present from significant atmospheric turbulence. An analysis of vector spaces... estimate the distortion of the collected light caused by the atmosphere and corrected by adaptive optics. A generalized orthogonal wavelet wavefront...
Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients.
Solano-Altamirano, Juan Manuel; Vázquez-Otero, Alejandro; Khikhlukha, Danila; Dormido, Raquel; Duro, Natividad
2017-11-30
In this paper, we propose a new algorithm to reconstruct optics surfaces (i.e., wavefronts) from gradients defined on a circular domain by means of spherical harmonics. The experimental results indicate that this algorithm achieves the same accuracy as reconstruction based on classical Zernike polynomials while using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), in which several polynomial sets are coded. The library therefore constitutes a robust software alternative for wavefront reconstruction in the high-energy laser field, optical surface reconstruction, and, more generally, surface reconstruction from gradients. It is a candidate for integration into control systems for optical devices, or similarly for use in ad hoc simulations. Moreover, it has been developed with flexibility in mind and includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in high-energy laser production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can easily be extended with additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features.
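The gradient-fitting idea common to the Zernike and spherical-harmonics variants can be sketched with any differentiable basis. In this illustrative sketch, low-order monomials stand in for the polynomial sets coded in the library, and the sample points and test surface are arbitrary assumptions.

```python
import numpy as np

# Each basis entry: (value, d/dx, d/dy). Low-order monomials stand in for
# the Zernike or spherical-harmonics sets discussed in the paper.
basis = [
    (lambda x, y: x,           lambda x, y: np.ones_like(x),  lambda x, y: np.zeros_like(y)),
    (lambda x, y: y,           lambda x, y: np.zeros_like(x), lambda x, y: np.ones_like(y)),
    (lambda x, y: x * y,       lambda x, y: y,                lambda x, y: x),
    (lambda x, y: x**2 - y**2, lambda x, y: 2 * x,            lambda x, y: -2 * y),
    (lambda x, y: x**2 + y**2, lambda x, y: 2 * x,            lambda x, y: 2 * y),
]

def fit_from_gradients(x, y, gx, gy):
    """Least-squares modal coefficients from sampled surface gradients:
    stack the x- and y-derivative columns of every basis function."""
    A = np.vstack([np.concatenate([dx(x, y), dy(x, y)]) for _, dx, dy in basis]).T
    g = np.concatenate([gx, gy])
    return np.linalg.lstsq(A, g, rcond=None)[0]

# Random sample points on the unit disk; test surface W = 0.5*x*y + 0.3*(x^2 + y^2).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (2, 400))
x, y = pts[:, (pts**2).sum(axis=0) <= 1.0]
c_true = np.array([0.0, 0.0, 0.5, 0.0, 0.3])
gx = 0.5 * y + 0.6 * x   # analytic dW/dx
gy = 0.5 * x + 0.6 * y   # analytic dW/dy
c = fit_from_gradients(x, y, gx, gy)
```

Swapping the monomials for an orthogonal set (Zernike on the disk, or the spherical-harmonics expansion of the paper) changes only the basis table, not the fitting step.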
Determination of wavefront structure for a Hartmann wavefront sensor using a phase-retrieval method.
Polo, A; Kutchoukov, V; Bociort, F; Pereira, S F; Urbach, H P
2012-03-26
We apply a phase retrieval algorithm to the intensity pattern of a Hartmann wavefront sensor to measure with enhanced accuracy the phase structure of a Hartmann hole array. It is shown that the rms wavefront error achieved by phase reconstruction is one order of magnitude smaller than the one obtained from a typical centroid algorithm. Experimental results are consistent with a phase measurement performed independently using a Shack-Hartmann wavefront sensor.
Layer-oriented multigrid wavefront reconstruction algorithms for multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-02-01
Multi-conjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of AO degrees of freedom. In this paper, we develop an iterative sparse matrix implementation of minimum variance wavefront reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method, using a multigrid preconditioner incorporating a layer-oriented (block) symmetric Gauss-Seidel iterative smoothing operator. We present open-loop numerical simulation results to illustrate algorithm convergence.
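The preconditioned conjugate gradient iteration at the core of this approach can be sketched compactly. In the sketch below, a plain Jacobi (diagonal) preconditioner stands in for the paper's layer-oriented multigrid preconditioner, and a 1-D Laplacian stands in for the AO system matrix; both are illustrative assumptions, not the actual MCAO operators.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=300):
    """Preconditioned conjugate gradients for an SPD system A x = b;
    M_inv is an approximation to A^-1 applied once per iteration."""
    x = np.zeros_like(b)
    r = b.copy()               # residual b - A x, with x = 0
    z = M_inv @ r              # preconditioned residual
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it + 1

# 1-D Laplacian stands in for the AO system matrix; a Jacobi (diagonal)
# preconditioner stands in for the layer-oriented multigrid one.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M_inv = np.diag(1.0 / np.diag(A))
b = np.sin(np.linspace(0, np.pi, n))
x, iters = pcg(A, b, M_inv)
```

The point of the method is that only matrix-vector products with sparse operators are needed, which is what makes it scale where explicit reconstruction matrices do not.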
Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control
NASA Astrophysics Data System (ADS)
Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel
2014-08-01
We present advances on Spline based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions defined on the SH subaperture array. This approach allows high accuracy, through the possible use of nonlinear basis functions, and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, a two-stage procedure based on decomposing the sensor domain into sub-domains, each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction, while obtaining a reconstruction accuracy for simulated E-ELT turbulence within 1% of the global method's accuracy. Furthermore, a generalization of the methodology is proposed that makes direct use of SH intensity measurements, leading to improved reconstruction accuracy compared to centroid algorithms using spatial gradients.
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method uses a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. Compared with conventional methods, the SM algorithm simplifies the two-dimensional (2-D) Fourier transforms (FTs) of four N×N arrays into a 1.25-D FT of one N×N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
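The multiplexing idea, tagging each hologram with its own tilted plane-wave carrier, summing, and separating the frames again in the Fourier domain, can be demonstrated with a 1-D toy model. The carriers, bandwidth, and random signals below are arbitrary stand-ins for the recorded holograms, not the paper's experimental parameters.

```python
import numpy as np

n, bw = 256, 8                       # samples per record, signal half-bandwidth
carriers = [32, 96, 160, 224]        # tilt (carrier) frequencies, well separated
x = np.arange(n)
rng = np.random.default_rng(1)

def bandlimited():
    """Random complex signal whose spectrum lives within |f| < bw."""
    spec = np.zeros(n, complex)
    idx = np.r_[0:bw, n - bw + 1:n]
    spec[idx] = rng.normal(size=idx.size) + 1j * rng.normal(size=idx.size)
    return np.fft.ifft(spec)

frames = [bandlimited() for _ in carriers]
# Multiplex: each frame rides on its own tilted plane wave; sum into one record.
sm = sum(f * np.exp(2j * np.pi * c * x / n) for f, c in zip(frames, carriers))

def demux(sm, c):
    """Shift one carrier back to baseband and low-pass to isolate its frame."""
    spec = np.fft.fft(sm * np.exp(-2j * np.pi * c * x / n))
    spec[bw:n - bw + 1] = 0.0        # keep only the baseband window
    return np.fft.ifft(spec)

rec = [demux(sm, c) for c in carriers]
```

Because the carriers are separated by more than twice the signal bandwidth, the four spectra do not overlap and each frame is recovered exactly from the single multiplexed record.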
Shatokhina, Iuliia; Obereder, Andreas; Rosensteiner, Matthias; Ramlau, Ronny
2013-04-20
We present a fast method for wavefront reconstruction from pyramid wavefront sensor (P-WFS) measurements. The method is based on an analytical relation between pyramid and Shack-Hartmann sensor (SH-WFS) data. The algorithm consists of two steps: a transformation of the P-WFS data to SH data, followed by the application of the cumulative reconstructor with domain decomposition, a wavefront reconstructor for SH-WFS measurements. Closed-loop simulations confirm that our method provides the same quality as the standard matrix-vector multiplication method. A complexity analysis as well as speed tests confirm that the method is very fast, so it can be used on extremely large telescopes, e.g., for eXtreme adaptive optics (XAO) systems.
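The cumulative reconstructor integrates slope measurements line by line with running sums, which is what makes its cost linear in the number of subapertures. A deliberately minimal 1-D sketch of that integration step, without the noise-damping averaging and domain decomposition of the actual algorithm, might look like this (grid and test wavefront are arbitrary choices):

```python
import numpy as np

def cumulative_1d(s, h=1.0):
    """Integrate one line of slopes by a running (cumulative) sum, then
    remove the piston. A toy version of the line-wise integration at the
    heart of cumulative reconstructors."""
    phi = np.concatenate([[0.0], np.cumsum(s) * h])
    return phi - phi.mean()

# Forward model: slopes are finite differences of the true wavefront.
x = np.linspace(0, 1, 65)
h = np.diff(x)[0]
w = np.sin(2 * np.pi * x)
w -= w.mean()
s = np.diff(w) / h
west = cumulative_1d(s, h=h)
```

In the real reconstructor, many such line integrals over rows and columns are combined and averaged to control noise propagation, and the domain decomposition stitches sub-aperture blocks together.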
Simulation results for a finite element-based cumulative reconstructor
NASA Astrophysics Data System (ADS)
Wagner, Roland; Neubauer, Andreas; Ramlau, Ronny
2017-10-01
Modern ground-based telescopes rely on adaptive optics (AO) systems to compensate for image degradation caused by atmospheric turbulence. Within an AO system, measurements of incoming light from guide stars are used to adjust, in real time, deformable mirror(s) that correct for atmospheric distortions. The incoming wavefront has to be derived from sensor measurements, and this intermediate result is then translated into the shape(s) of the deformable mirror(s). Rapid changes of the atmosphere create the need for fast wavefront reconstruction algorithms. We review a fast matrix-free algorithm, developed by Neubauer, that reconstructs the incoming wavefront from Shack-Hartmann measurements based on a finite element discretization of the telescope aperture. The method is enhanced by a domain decomposition ansatz. We show that this algorithm reaches the quality of standard approaches in end-to-end simulations while maintaining the speed of recently introduced solvers of linear complexity.
Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems
NASA Technical Reports Server (NTRS)
Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy
2007-01-01
High contrast imaging from space relies on coronagraphs to limit diffraction and on a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^-10 for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed-loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region of the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity introduced by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.
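The energy-minimization step admits a compact linear-algebra sketch: if a Jacobian G maps deformable-mirror commands to the complex field in the scoring region, the command minimizing the region's energy is a linear least-squares solution. The random G and starting field below are purely hypothetical stand-ins, not a model of any real coronagraph.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_act = 50, 20
# Hypothetical (random) Jacobian from DM commands to the complex field in
# the scoring region, and an estimated starting field; stand-ins only.
G = rng.normal(size=(m, n_act)) + 1j * rng.normal(size=(m, n_act))
E0 = rng.normal(size=m) + 1j * rng.normal(size=m)

def dm_command(G, E):
    """Real DM command a minimizing the region energy ||E + G a||^2,
    obtained by stacking real and imaginary parts into one real system."""
    A = np.vstack([G.real, G.imag])
    b = -np.concatenate([E.real, E.imag])
    return np.linalg.lstsq(A, b, rcond=None)[0]

a = dm_command(G, E0)
E1 = E0 + G @ a            # residual field after one correction step
```

In practice the field E is not measured directly but estimated (here via DM phase diversity), and the step is repeated in closed loop with a gain below unity for robustness to estimation error.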
Optimization of the Hartmann-Shack microlens array
NASA Astrophysics Data System (ADS)
de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William
2011-04-01
In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular arrays with a larger number of microlenses by arrays with fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or of the number of microlenses needed in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.
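The genetic-algorithm search can be sketched generically. The fitness function below (conditioning of a small Vandermonde fit over candidate 1-D sampling positions) is a made-up stand-in for the paper's wavefront reconstruction error, and all GA parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def fitness(positions):
    """Toy stand-in for the reconstruction-error metric: how well k sample
    positions on [-1, 1] support a least-squares cubic fit, measured by
    the conditioning of the Vandermonde system (lower is better)."""
    V = np.vander(np.sort(positions), 4)
    return np.linalg.cond(V)

def ga(pop_size=30, k=10, gens=60, mut=0.05):
    """Minimal elitist genetic algorithm: truncation selection,
    blend crossover, Gaussian mutation."""
    pop = rng.uniform(-1, 1, (pop_size, k))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]     # keep better half
        idx = rng.integers(0, elite.shape[0], (pop_size - elite.shape[0], 2))
        parents = elite[idx]                                  # random parent pairs
        w = rng.uniform(size=(parents.shape[0], 1))
        children = w * parents[:, 0] + (1 - w) * parents[:, 1]  # blend crossover
        children += mut * rng.normal(size=children.shape)       # mutation
        pop = np.vstack([elite, np.clip(children, -1, 1)])
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmin(scores)], scores.min()

best, best_score = ga()
```

Replacing `fitness` with a wavefront-statistics-driven reconstruction error, and the genome with 2-D microlens positions, gives the structure of the optimization described in the paper.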
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with a conventional adaptive optics (AO) system, a wavefront sensorless (WFSless) AO system need not measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control commonly treats the performance metric as a function of the control parameters and then uses a control algorithm to improve that metric. Model-based control algorithms include modal control algorithms, nonlinear control algorithms, and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining model-free with model-based control are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques, and extended objects.
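Of the model-free metric-optimization algorithms the survey groups together, stochastic parallel gradient descent (SPGD) is a standard representative: perturb all control channels simultaneously, observe the metric change, and step accordingly. The quadratic toy metric, gains, and perturbation amplitude below are arbitrary illustrative choices.

```python
import numpy as np

def spgd(metric, u0, gain=2.0, perturb=0.1, iters=500, seed=0):
    """Stochastic parallel gradient descent: apply +/- random parallel
    perturbations to all channels, use the metric difference as a
    stochastic gradient estimate, and descend."""
    rng = np.random.default_rng(seed)
    u = u0.astype(float).copy()
    for _ in range(iters):
        du = perturb * rng.choice([-1.0, 1.0], size=u.size)
        dJ = metric(u + du) - metric(u - du)   # two-sided metric probe
        u -= gain * dJ * du                    # descend: minimize the metric
    return u

# Toy metric: residual wavefront energy vs. actuator commands (quadratic bowl).
target = np.array([0.4, -0.2, 0.1, 0.3])
metric = lambda u: float(np.sum((u - target) ** 2))
u = spgd(metric, np.zeros(4))
```

In a real WFSless loop, `metric` would be a measured image-sharpness or focal-spot quality signal rather than an analytic function, and the gain sign is chosen to maximize or minimize accordingly.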
Convolution- and Fourier-transform-based reconstructors for pyramid wavefront sensor.
Shatokhina, Iuliia; Ramlau, Ronny
2017-08-01
In this paper, we present two novel algorithms for wavefront reconstruction from pyramid-type wavefront sensor data. An overview of the current state of the art in the application of pyramid-type wavefront sensors shows that the novel algorithms can be applied in various scientific fields such as astronomy, ophthalmology, and microscopy. Assuming a computationally very challenging setting corresponding to extreme adaptive optics (XAO) on the European Extremely Large Telescope, we present the results of end-to-end simulations and compare the achieved AO correction quality (in terms of the long-exposure Strehl ratio) to other methods, such as matrix-vector multiplication and the preprocessed cumulative reconstructor with domain decomposition. We also compare our novel algorithms to other existing methods for this type of sensor in terms of applicability, computational complexity, and closed-loop performance.
Shack-Hartmann wavefront sensor using a Raspberry Pi embedded system
NASA Astrophysics Data System (ADS)
Contreras-Martinez, Ramiro; Garduño-Mejía, Jesús; Rosete-Aguilar, Martha; Román-Moreno, Carlos J.
2017-05-01
In this work we present the design and manufacture of a compact Shack-Hartmann wavefront sensor using a Raspberry Pi and a microlens array. The main goal of this sensor is to recover the wavefront of a laser beam and to characterize its spatial phase using a simple and compact Raspberry Pi and the Raspberry Pi embedded camera. The recovery algorithm is based on a modified version of the Southwell method and was written in Python as well as its user interface. Experimental results and reconstructed wavefronts are presented.
Wavefront sensing with a thin diffuser
NASA Astrophysics Data System (ADS)
Berto, Pascal; Rigneault, Hervé; Guillon, Marc
2017-12-01
We propose and implement a broadband, compact, and low-cost wavefront sensing scheme by simply placing a thin diffuser in the close vicinity of a camera. The local wavefront gradient is determined from the local translation of the speckle pattern. The translation vector map is computed with a fast diffeomorphic image registration algorithm and integrated to reconstruct the wavefront profile. The simple translation of speckle grains under local wavefront tip/tilt is ensured by the so-called "memory effect" of the diffuser. Quantitative wavefront measurements are experimentally demonstrated both for the first few Zernike polynomials and for phase-imaging applications requiring high resolution. Finally, we provide a theoretical description of the resolution limit, which is supported experimentally.
Iterative wave-front reconstruction in the Fourier domain.
Bond, Charlotte Z; Correia, Carlos M; Sauvage, Jean-François; Neichel, Benoit; Fusco, Thierry
2017-05-15
The use of Fourier methods in wave-front reconstruction can significantly reduce the computation time for large telescopes with a high number of degrees of freedom. However, Fourier algorithms for discrete data require a rectangular data set that conforms to specific boundary requirements, whereas wave-front sensor data is typically defined over a circular domain (the telescope pupil). Here we present an iterative Gerchberg routine, modified for the purposes of discrete wave-front reconstruction, which adapts the measurement data (wave-front sensor slopes) for Fourier analysis, fulfilling the requirements of the fast Fourier transform (FFT) and providing accurate reconstruction. The routine is used in the adaptation step only and can be coupled to any other Wiener-like or least-squares method. We compare simulations using this method with previous Fourier methods and show an increase in performance in terms of Strehl ratio and a reduction in noise propagation for a 40×40 SPHERE-like adaptive optics system. For closed-loop operation with minimal iterations, the Gerchberg method improves the Strehl ratio from 95.4% to 96.9% in K-band. This corresponds to a ~40 nm improvement in rms wavefront error, and it avoids the high spatial frequency errors present in other methods, providing an increase in contrast towards the edge of the correctable band.
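The Gerchberg-style adaptation step, alternately enforcing a band limit in the Fourier domain and the measured values inside the pupil, can be demonstrated in 1-D. The sizes, bandwidth, and pupil extent below are arbitrary, and the sketch extrapolates direct samples rather than slopes, so it illustrates the iteration, not the paper's full slope-adaptation routine.

```python
import numpy as np

def gerchberg(data, mask, bw, iters=200):
    """Alternate between (i) projecting onto the band limit in Fourier
    space and (ii) restoring the measured samples inside the 'pupil'
    (mask), extrapolating the data past the pupil edge so it satisfies
    FFT boundary assumptions."""
    n = data.size
    x = np.where(mask, data, 0.0)
    for _ in range(iters):
        X = np.fft.fft(x)
        X[bw:n - bw + 1] = 0.0           # project onto the band limit
        x = np.fft.ifft(X).real
        x[mask] = data[mask]             # project onto the measurements
    return x

# Band-limited truth, measured only inside a central 'pupil'.
n, bw = 128, 6
rng = np.random.default_rng(3)
spec = np.zeros(n, complex)
spec[1:bw] = rng.normal(size=bw - 1) + 1j * rng.normal(size=bw - 1)
spec[n - bw + 1:] = np.conj(spec[1:bw][::-1])   # Hermitian -> real signal
truth = np.fft.ifft(spec).real
mask = np.zeros(n, bool)
mask[19:109] = True
x = gerchberg(truth, mask, bw)
```

Both steps are projections onto convex sets, so the error outside the pupil decreases monotonically, and with enough known samples the extension converges to the band-limited truth.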
Combined approach to the Hubble Space Telescope wave-front distortion analysis
NASA Astrophysics Data System (ADS)
Roddier, Claude; Roddier, Francois
1993-06-01
Stellar images taken by the HST at various focus positions have been analyzed to estimate wave-front distortion. Rather than using a single algorithm, we found that better results were obtained by combining the advantages of various algorithms. For the planetary camera, the most accurate algorithms consistently gave a spherical aberration of -0.290-micron rms with a maximum deviation of 0.005 micron. Evidence was found that the spherical aberration is essentially produced by the primary mirror. The illumination in the telescope pupil plane was reconstructed and evidence was found for a slight camera misalignment.
Gilles, L; Ellerbroek, B L
2010-11-01
Real-time turbulence profiling is necessary to tune tomographic wavefront reconstruction algorithms for wide-field adaptive optics (AO) systems on large to extremely large telescopes, and to perform a variety of image post-processing tasks involving point-spread function reconstruction. This paper describes a computationally efficient and accurate numerical technique inspired by the slope detection and ranging (SLODAR) method to perform this task in real time from properly selected Shack-Hartmann wavefront sensor measurements accumulated over a few hundred frames from a pair of laser guide stars, thus eliminating the need for an additional instrument. The algorithm is introduced, followed by a theoretical influence function analysis illustrating its impulse response to high-resolution turbulence profiles. Finally, its performance is assessed in the context of the Thirty Meter Telescope multi-conjugate adaptive optics system via end-to-end wave optics Monte Carlo simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahk, S.-W.
2008-06-17
The analytic frequency responses of the traditional wavefront reconstructors of Hudgin, Fried, and Southwell are presented, which exhibit amplification or attenuation of the original signal at high spatial frequencies. To overcome this problem, a reconstructor with unity frequency response is developed based on a band-limited derivative calculation. The algorithm is both numerically and experimentally confirmed.
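For periodic data, a Fourier-domain slope integrator makes the frequency-response discussion concrete: since differentiation multiplies each Fourier mode by i*k, dividing by the exact derivative symbols recovers the wavefront. The sketch below is the textbook least-squares Fourier integrator on a periodic grid, not necessarily the band-limited variant developed in the paper.

```python
import numpy as np

def fourier_integrate(sx, sy):
    """Least-squares Fourier-domain reconstructor for a periodic wavefront:
    with Sx = i*kx*W and Sy = i*ky*W, W = (kx*Sx + ky*Sy) / (i*(kx^2 + ky^2))."""
    ny, nx = sx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx)[None, :]   # radians per sample
    ky = 2 * np.pi * np.fft.fftfreq(ny)[:, None]
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                 # avoid 0/0; piston set below
    W = (kx * np.fft.fft2(sx) + ky * np.fft.fft2(sy)) / (1j * k2)
    W[0, 0] = 0.0                                  # piston is unobservable
    return np.fft.ifft2(W).real

# Band-limited periodic test wavefront with analytic derivatives.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
w = np.sin(2 * np.pi * 3 * xx / n) * np.cos(2 * np.pi * 2 * yy / n)
wx = (2 * np.pi * 3 / n) * np.cos(2 * np.pi * 3 * xx / n) * np.cos(2 * np.pi * 2 * yy / n)
wy = -(2 * np.pi * 2 / n) * np.sin(2 * np.pi * 3 * xx / n) * np.sin(2 * np.pi * 2 * yy / n)
west = fourier_integrate(wx, wy)
```

The Hudgin, Fried, and Southwell geometries replace the exact symbols i*kx, i*ky with finite-difference symbols, which is precisely where their non-unity frequency responses come from.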
NASA Astrophysics Data System (ADS)
Kupke, Renate; Gavel, Don; Johnson, Jess; Reinig, Marc
2008-07-01
We investigate the implementation of the non-modulating pyramid wave-front sensor (P-WFS) in the context of Lick Observatory's ViLLaGEs visible light AO system on the Nickel 1-meter telescope. A complete adaptive optics correction is explored, using a non-modulated P-WFS in slope-sensing mode to bootstrap to a regime in which the P-WFS can act as a direct phase sensor. An iterative approach to reconstructing the wave-front phase, given the pyramid wave-front sensor's non-linear signal, is developed. Using Monte Carlo simulations, the iterative reconstruction method's photon noise propagation behavior is compared to both the pyramid sensor used in slope-sensing mode and the theoretical performance limits of the traditional Shack-Hartmann sensor. We determine that bootstrapping with the P-WFS as a slope sensor does not offer enough correction to bring the phase residuals into a regime in which the iterative algorithm can provide much improvement in phase measurement. Both the iterative phase reconstructor and the slope reconstruction methods are found to offer an advantage in noise propagation over Shack-Hartmann sensors.
NASA Astrophysics Data System (ADS)
Thiebaut, C.; Perraud, L.; Delvit, J. M.; Latry, C.
2016-07-01
We present an on-board satellite implementation of a gradient-based (optical flow) algorithm for estimating the shifts between images of a Shack-Hartmann wave-front sensor on extended landscapes. The proposed algorithm has low complexity compared with classical correlation methods, which is a big advantage for use on board a satellite at high instrument data rates and in real time. The electronic board used for this implementation is designed for space applications and is composed of radiation-hardened software and hardware. The processing times of both the shift estimation and the pre-processing steps are compatible with on-board real-time computation.
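A gradient-based (optical-flow) shift estimator of the kind described can be sketched in a few lines: linearize the shifted image about the reference and solve a tiny least-squares system per subaperture. The single-global-shift, Gaussian-scene setup below is an illustrative assumption, not the flight algorithm.

```python
import numpy as np

def estimate_shift(ref, img):
    """Global shift between two frames from image gradients: linearizing
    img(x) ~ ref(x) - d . grad(ref) gives  [gx gy] d = ref - img,
    solved in the least-squares sense."""
    gy, gx = np.gradient(ref)                  # axis 0 = y, axis 1 = x
    A = np.column_stack([gx.ravel(), gy.ravel()])
    b = (ref - img).ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                                   # (dx, dy) in pixels

# Smooth synthetic 'landscape' (a Gaussian blob) and a subpixel-shifted copy.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda dx, dy: np.exp(-(((xx - 32 - dx)**2 + (yy - 32 - dy)**2)
                               / (2 * 6.0**2)))
ref, img = blob(0, 0), blob(0.2, -0.1)         # img is ref shifted by (0.2, -0.1) px
dx, dy = estimate_shift(ref, img)
```

The cost per subaperture is one gradient evaluation and a 2×2 normal-equations solve, which is where the complexity advantage over full cross-correlation searches comes from.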
High-NA metrology and sensing on Berkeley MET5
NASA Astrophysics Data System (ADS)
Miyakawa, Ryan; Anderson, Chris; Naulleau, Patrick
2017-03-01
In this paper we compare two non-interferometric wavefront sensors suitable for in-situ high-NA EUV optical testing. The first is the AIS sensor, which has been deployed in both inspection and exposure tools. AIS is a compact, optical test that directly measures a wavefront by probing various parts of the imaging optic pupil and measuring localized wavefront curvature. The second is an image-based technique that uses an iterative algorithm based on simulated annealing to reconstruct a wavefront based on matching aerial images through focus. In this technique, customized illumination is used to probe the pupil at specific points to optimize differences in aberration signatures.
The AOLI Non-Linear Curvature Wavefront Sensor: High sensitivity reconstruction for low-order AO
NASA Astrophysics Data System (ADS)
Crass, Jonathan; King, David; Mackay, Craig
2013-12-01
Many adaptive optics (AO) systems in use today require bright reference objects to determine the effects of atmospheric distortions on incoming wavefronts. This requirement arises because Shack-Hartmann wavefront sensors (SHWFS) distribute incoming light from reference objects into a large number of sub-apertures. Bright natural reference objects occur infrequently across the sky, leading to the use of laser guide stars, which add complexity to wavefront measurement systems. The non-linear curvature wavefront sensor as described by Guyon et al. has been shown to offer a significant increase in sensitivity when compared to a SHWFS. This facilitates much greater sky coverage using natural guide stars alone. This paper describes the current status of the non-linear curvature wavefront sensor being developed as part of an adaptive optics system for the Adaptive Optics Lucky Imager (AOLI) project. The sensor comprises two photon-counting EMCCD detectors from E2V Technologies, recording intensity at four near-pupil planes. These images are used with a reconstruction algorithm to determine the phase correction to be applied by an ALPAO 241-element deformable mirror. The overall system is intended to provide low-order correction for a Lucky Imaging based multi-CCD imaging camera. We present the current optical design of the instrument, including methods to minimise inherent optical effects, principally chromaticity. Wavefront reconstruction methods are discussed and strategies for their optimisation to run at the required real-time speeds are introduced. Finally, we discuss laboratory work with a demonstrator setup of the system.
NASA Astrophysics Data System (ADS)
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2015-09-01
Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (Cn² < 10⁻¹³ m⁻²/³). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine tune the wavefront correction. This two-stage algorithm can find use in free space optical communication systems, in directed energy applications, as well as for image correction purposes.
The AOLI low-order non-linear curvature wavefront sensor: laboratory and on-sky results
NASA Astrophysics Data System (ADS)
Crass, Jonathan; King, David; MacKay, Craig
2014-08-01
Many adaptive optics (AO) systems in use today require the use of bright reference objects to determine the effects of atmospheric distortions. Typically these systems use Shack-Hartmann wavefront sensors (SHWFS) to distribute incoming light from a reference object between a large number of sub-apertures. Guyon et al. evaluated the sensitivity of several different wavefront sensing techniques and proposed the non-linear Curvature Wavefront Sensor (nlCWFS), offering improved sensitivity across a range of orders of distortion. On large ground-based telescopes this can provide nearly 100% sky coverage using natural guide stars. We present work being undertaken on the nlCWFS development for the Adaptive Optics Lucky Imager (AOLI) project. The wavefront sensor is being developed as part of a low-order adaptive optics system for use in a dedicated instrument providing an AO corrected beam to a Lucky Imaging based science detector. The nlCWFS provides a total of four reference images on two photon-counting EMCCDs for use in the wavefront reconstruction process. We present results from both laboratory work using a calibration system and the first on-sky data obtained with the nlCWFS at the 4.2 metre William Herschel Telescope, La Palma. In addition, we describe the updated optical design of the wavefront sensor, strategies for minimising intrinsic effects and methods to maximise sensitivity using photon-counting detectors. We discuss on-going work to develop the high speed reconstruction algorithm required for the nlCWFS technique. This includes strategies to implement the technique on graphics processing units (GPUs) and to minimise computing overheads to obtain a prior for rapid convergence of the wavefront reconstruction. Finally, we evaluate the sensitivity of the wavefront sensor based upon both data and low-photon count strategies.
A hierarchical wavefront reconstruction algorithm for gradient sensors
NASA Astrophysics Data System (ADS)
Bharmal, Nazim; Bitenc, Urban; Basden, Alastair; Myers, Richard
2013-12-01
ELT-scale extreme adaptive optics systems will require new approaches to compute the wavefront suitably quickly, when the computational burden of applying an MVM is no longer practical. An approach is demonstrated here which is hierarchical in transforming wavefront slopes from a WFS into a wavefront, and then to actuator values. First, simple integration in 1D is used to create 1D-wavefront estimates with unknown starting points at the edges of independent spatial domains. Second, these starting points are estimated globally. Because these starting points are a sub-set of the overall grid where wavefront values are to be estimated, sparse representations are produced and the numerical complexity can be chosen by the spacing of the starting-point grid relative to the overall grid. Using a combination of algebraic expressions, sparse representation, and a conjugate gradient solver, the number of non-parallelized operations for reconstruction on a 100x100 sub-aperture sized problem is ~600,000, or O(N^3/2), which is approximately the same as for each thread of an MVM solution parallelized over 100 threads. To reduce the effects of noise propagation within each domain, a noise reduction algorithm can be applied which ensures the continuity of the wavefront. Applying this additional step has a cost of ~1,200,000 operations. We conclude by briefly discussing how the final step of converting from wavefront to actuator values can be achieved.
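The two-step scheme above (independent 1-D integrations, then a global estimate of the unknown domain starting points) can be illustrated with a toy 1-D sketch. The chained boundary matching below is a simplified, noise-free stand-in for the paper's global sparse conjugate-gradient solve, and all function and variable names are illustrative:

```python
import numpy as np

def hierarchical_reconstruct_1d(slopes, domain_size, dx=1.0):
    """Integrate slopes independently inside each spatial domain, then
    fix each domain's unknown starting point so the wavefront is
    continuous across domain boundaries (noise-free toy version)."""
    n = slopes.size + 1  # number of wavefront samples
    # Step 1: simple cumulative integration per domain (unknown offset).
    pieces = []
    for s in range(0, n - 1, domain_size):
        e = min(s + domain_size, n - 1)
        piece = np.concatenate([[0.0], np.cumsum(slopes[s:e]) * dx])
        pieces.append((s, e, piece))
    # Step 2: chain offsets so adjacent domains agree at shared samples
    # (stand-in for the global conjugate-gradient starting-point solve).
    offsets = [0.0]
    for k in range(1, len(pieces)):
        offsets.append(offsets[-1] + pieces[k - 1][2][-1] - pieces[k][2][0])
    out = np.empty(n)
    for (s, e, piece), c in zip(pieces, offsets):
        out[s:e + 1] = piece + c
    return out - out.mean()  # remove the unobservable piston term
```

With consistent (noise-free) slopes this recovers the wavefront exactly up to piston; the paper's noise-reduction step addresses the error propagation that arises once the slopes are noisy.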
Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang
2016-01-01
The misalignment between recorded in-focus and out-of-focus images used by the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first performs a spatial 2-D cross-correlation of the misaligned images, reducing the offset to within 1 or 2 pixels and narrowing the search range for alignment. It then achieves adaptive correction without subpixel fine alignment by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm in improving measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
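The coarse-alignment step described above (a 2-D cross-correlation that shrinks the residual offset to a pixel or two) can be sketched with an FFT-based circular correlation. This is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

def coarse_offset(ref, img):
    """Estimate the integer-pixel shift of `img` relative to `ref` by
    locating the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Any residual subpixel misalignment left after shifting by this estimate would then be absorbed by the tip-tilt terms added to the out-of-focus channel's OTF.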
Efficient irregular wavefront propagation algorithms on Intel® Xeon Phi™
Gomes, Jeremias M.; Teodoro, George; de Melo, Alba; Kong, Jun; Kurc, Tahsin; Saltz, Joel H.
2016-01-01
We investigate the execution of the Irregular Wavefront Propagation Pattern (IWPP), a fundamental computing structure used in several image analysis operations, on the Intel® Xeon Phi™ co-processor. An efficient implementation of IWPP on the Xeon Phi is a challenging problem because of IWPP’s irregularity and the use of atomic instructions in the original IWPP algorithm to resolve race conditions. On the Xeon Phi, the use of SIMD and vectorization instructions is critical to attain high performance. However, SIMD atomic instructions are not supported. Therefore, we propose a new IWPP algorithm that can take advantage of the supported SIMD instruction set. We also evaluate an alternate storage container (priority queue) to track active elements in the wavefront in an effort to improve the parallel algorithm efficiency. The new IWPP algorithm is evaluated with Morphological Reconstruction and Imfill operations as use cases. Our results show performance improvements of up to 5.63× on top of the original IWPP due to vectorization. Moreover, the new IWPP achieves speedups of 45.7× and 1.62×, respectively, as compared to efficient CPU and GPU implementations. PMID:27298591
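The IWPP itself is easy to state in serial form: active elements propagate values to neighbors that satisfy the propagation condition, and updated neighbors join the wavefront. The sketch below applies the pattern to grayscale morphological reconstruction (one of the paper's use cases) with a plain FIFO queue; it illustrates only the propagation pattern and has none of the SIMD, vectorization, or priority-queue optimizations discussed above:

```python
from collections import deque
import numpy as np

def morph_reconstruct(marker, mask):
    """Grayscale morphological reconstruction by dilation written as an
    irregular wavefront propagation: each active pixel tries to raise its
    4-neighbors toward min(own value, mask value), and raised neighbors
    join the wavefront queue until no pixel changes."""
    out = np.minimum(marker, mask).astype(float)
    h, w = out.shape
    q = deque((y, x) for y in range(h) for x in range(w))  # seed: all pixels
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                cand = min(out[y, x], mask[ny, nx])
                if cand > out[ny, nx]:      # propagation condition
                    out[ny, nx] = cand
                    q.append((ny, nx))      # neighbor joins the wavefront
    return out
```

The irregularity the paper addresses is visible here: which pixels are active, and when, depends entirely on the data, which is what makes SIMD execution without atomic updates non-trivial.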
Experimental detection of optical vortices with a Shack-Hartmann wavefront sensor.
Murphy, Kevin; Burke, Daniel; Devaney, Nicholas; Dainty, Chris
2010-07-19
Laboratory experiments are carried out to detect optical vortices in conditions typical of those experienced when a laser beam is propagated through the atmosphere. A Spatial Light Modulator (SLM) is used to mimic atmospheric turbulence, and a Shack-Hartmann wavefront sensor is used to measure the slopes of the wavefront surface. A matched filter algorithm determines the positions of the Shack-Hartmann spot centroids more robustly than a centroiding algorithm. The slope discrepancy is then obtained by subtracting the slopes calculated from a least-squares reconstruction of the phase from the slopes measured by the wavefront sensor. The slope discrepancy field is used as an input to the branch point potential method to determine whether a vortex is present and, if so, to give its position and sign. The use of the slope discrepancy technique greatly improves the detection rate of the branch point potential method. This work is the first time the branch point potential method has been used to detect optical vortices in an experimental setup.
Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha
2014-01-01
Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover by two methods the known wavefront aberrations expected from a population of normally-aberrated human eyes and bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with standard deviations (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. 
The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, while the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
Tehrani, Kayvan F.; Zhang, Yiwen; Shen, Ping; Kner, Peter
2017-01-01
Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and the number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae, achieving a resolution of 146 nm. PMID:29188105
Spline based least squares integration for two-dimensional shape or wavefront reconstruction
Huang, Lei; Xue, Junpeng; Gao, Bo; ...
2016-12-21
In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison. Especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
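For contrast with the spline-based scheme, the classical zonal least-squares baseline that such methods are compared against can be sketched as follows, assuming forward-difference slope maps and a dense solve (adequate only for small grids; this is an illustrative baseline, not the paper's spline method):

```python
import numpy as np

def lsq_integrate(sx, sy, dx=1.0):
    """Plain zonal least-squares integration of x/y slope maps into a
    height map. One equation per adjacent pixel pair enforces
    z[j] - z[i] = slope * dx; an extra row pins the mean (piston) to 0."""
    h, w = sx.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, b = [], []
    for (i, j), s in (((idx[:, :-1], idx[:, 1:]), sx[:, :-1]),
                      ((idx[:-1, :], idx[1:, :]), sy[:-1, :])):
        for a, c, v in zip(i.ravel(), j.ravel(), s.ravel()):
            r = np.zeros(n)
            r[a], r[c] = -1.0, 1.0
            rows.append(r)
            b.append(v * dx)
    # Remove the unobservable piston term.
    rows.append(np.ones(n) / n)
    b.append(0.0)
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return z.reshape(h, w)
```

The paper's point is that replacing the implicit piecewise-constant slope model in this baseline with spline fits reduces the algorithm error, particularly at the boundaries.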
Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor.
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C
2015-05-01
We have designed a plenoptic sensor to retrieve phase and amplitude changes resulting from a laser beam's propagation through atmospheric turbulence. Compared with the commonly restricted domain of (-π,π) in phase reconstruction by interferometers, the reconstructed phase obtained by the plenoptic sensors can be continuous up to a multiple of 2π. When compared with conventional Shack-Hartmann sensors, ambiguities caused by interference or low intensity, such as branch points and branch cuts, are less likely to happen and can be adaptively avoided by our reconstruction algorithm. In the design of our plenoptic sensor, we modified the fundamental structure of a light field camera into a mini Keplerian telescope array by accurately cascading the back focal plane of its object lens with a microlens array's front focal plane and matching the numerical aperture of both components. Unlike light field cameras designed for incoherent imaging purposes, our plenoptic sensor operates on the complex amplitude of the incident beam and distributes it into a matrix of images that are simpler and less subject to interference than a global image of the beam. Then, with the proposed reconstruction algorithms, the plenoptic sensor is able to reconstruct the wavefront and a phase screen at an appropriate depth in the field that causes the equivalent distortion on the beam. The reconstructed results can be used to guide adaptive optics systems in directing beam propagation through atmospheric turbulence. In this paper, we will show the theoretical analysis and experimental results obtained with the plenoptic sensor and its reconstruction algorithms.
Tomographic diffractive microscopy with a wavefront sensor.
Ruan, Y; Bon, P; Mudry, E; Maire, G; Chaumet, P C; Giovannini, H; Belkebir, K; Talneau, A; Wattellier, B; Monneret, S; Sentenac, A
2012-05-15
Tomographic diffractive microscopy is a recent imaging technique that reconstructs quantitatively the three-dimensional permittivity map of a sample with a resolution better than that of conventional wide-field microscopy. Its main drawbacks lie in the complexity of the setup and in the slowness of the image recording as both the amplitude and the phase of the field scattered by the sample need to be measured for hundreds of successive illumination angles. In this Letter, we show that, using a wavefront sensor, tomographic diffractive microscopy can be implemented easily on a conventional microscope. Moreover, the number of illuminations can be dramatically decreased if a constrained reconstruction algorithm is used to recover the sample map of permittivity.
Liu, Rui; Milkie, Daniel E; Kerlin, Aaron; MacLennan, Bryan; Ji, Na
2014-01-27
In traditional zonal wavefront sensing for adaptive optics, after local wavefront gradients are obtained, the entire wavefront can be calculated by assuming that the wavefront is a continuous surface. Such an approach leads to sub-optimal performance in reconstructing wavefronts which are either discontinuous or undersampled by the zonal wavefront sensor. Here, we report a new method to reconstruct the wavefront by directly measuring local wavefront phases in parallel using a multidither coherent optical adaptive technique. This method determines the relative phase of each pupil segment independently, and thus produces an accurate wavefront even for discontinuous wavefronts. We implemented this method in an adaptive optical two-photon fluorescence microscope and demonstrated its superior performance in correcting large or discontinuous aberrations.
Bimorph deformable mirror: an appropriate wavefront corrector for retinal imaging?
NASA Astrophysics Data System (ADS)
Laut, Sophie; Jones, Steve; Park, Hyunkyu; Horsley, David A.; Olivier, Scot; Werner, John S.
2005-11-01
The purpose of this study was to evaluate the performance of a bimorph deformable mirror from AOptix, inserted into an adaptive optics system designed for in-vivo retinal imaging at high resolution, and to determine its suitability as a wavefront corrector for vision science and ophthalmological instrumentation. We present results obtained in a closed-loop system and compare them with previous open-loop performance measurements. Our goal was to obtain precise wavefront reconstruction with rapid convergence of the control algorithm. The quality of the reconstruction is expressed in terms of the root-mean-squared (RMS) wavefront residual error and the number of frames required to perform compensation. Our instrument uses a Hartmann-Shack sensor for the wavefront measurements. We also determined the precision and ability of the deformable mirror to compensate the most common types of aberrations present in the human eye (defocus, cylinder, astigmatism and coma), and the quality of its correction, in terms of the maximum amplitude of the corrected wavefront. In addition to wavefront correction, we also used the closed-loop system to generate an arbitrary aberration pattern by entering the desired Hartmann-Shack centroid locations as input to the AO controller. These centroid locations were computed in Matlab for a user-defined aberration pattern, allowing us to test the ability of the DM to generate and compensate for various aberrations. We conclude that this device, in combination with another DM based on Micro-Electro-Mechanical Systems (MEMS) technology, may provide better compensation of the higher-order ocular wavefront aberrations of the human eye.
Efficient Irregular Wavefront Propagation Algorithms on Hybrid CPU-GPU Machines
Teodoro, George; Pan, Tony; Kurc, Tahsin; Kong, Jun; Cooper, Lee; Saltz, Joel
2013-01-01
We address the problem of efficient execution of a computation pattern, referred to here as the irregular wavefront propagation pattern (IWPP), on hybrid systems with multiple CPUs and GPUs. The IWPP is common in several image processing operations. In the IWPP, data elements in the wavefront propagate waves to their neighboring elements on a grid if a propagation condition is satisfied. Elements receiving the propagated waves become part of the wavefront. This pattern results in irregular data accesses and computations. We develop and evaluate strategies for efficient computation and propagation of wavefronts using a multi-level queue structure. This queue structure improves the utilization of fast memories in a GPU and reduces synchronization overheads. We also develop a tile-based parallelization strategy to support execution on multiple CPUs and GPUs. We evaluate our approaches on a state-of-the-art GPU accelerated machine (equipped with 3 GPUs and 2 multicore CPUs) using the IWPP implementations of two widely used image processing operations: morphological reconstruction and Euclidean distance transform. Our results show significant performance improvements on GPUs. The use of multiple CPUs and GPUs cooperatively attains speedups of 50× and 85× with respect to single core CPU executions for morphological reconstruction and Euclidean distance transform, respectively. PMID:23908562
Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.
2014-10-01
A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera and allows for multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in solving coherent illumination cases such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results will be demonstrated, and an improved version of this modified plenoptic camera will be discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide systems in adaptive optics to make intelligent analysis and corrections.
Non-null annular subaperture stitching interferometry for aspheric test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian
2015-10-01
A non-null annular subaperture stitching interferometry (NASSI) method, combining the subaperture stitching idea with a non-null test method, is proposed for steep aspheric testing. Compared with standard annular subaperture stitching interferometry (ASSI), a partial null lens (PNL) is employed as an alternative to the transmission sphere to generate different aspherical wavefronts as references. The number of subapertures needed for full coverage is thus greatly reduced, because the aspherical wavefronts better match the local slope of aspheric surfaces. Instead of various mathematical stitching algorithms, a simultaneous reverse optimizing reconstruction (SROR) method based on system modeling and ray tracing is proposed for full-aperture figure error reconstruction. All the subaperture measurements are simulated simultaneously with a multi-configuration model in a ray-tracing program, including modeling of the interferometric system and of subaperture misalignments. With the multi-configuration model, the full-aperture figure error is extracted in the form of Zernike polynomials from subaperture wavefront data by the SROR method. This method concurrently accomplishes subaperture retrace error and misalignment correction, requiring neither complex mathematical algorithms nor subaperture overlaps. A numerical simulation compares the performance of NASSI and standard ASSI, demonstrating the high accuracy of NASSI in testing steep aspheric surfaces. Experimental results of NASSI are shown to be in good agreement with those of a Zygo® Verifire™ Asphere interferometer.
Computerized lateral-shear interferometer
NASA Astrophysics Data System (ADS)
Hasegan, Sorin A.; Jianu, Angela; Vlad, Valentin I.
1998-07-01
A lateral-shear interferometer, coupled with a computer for laser wavefront analysis, is described. A CCD camera is used to transfer the fringe images through a frame-grabber into a PC. 3D phase maps are obtained by fringe pattern processing using a new algorithm for direct spatial reconstruction of the optical phase. The program describes phase maps by Zernike polynomials yielding an analytical description of the wavefront aberration. A compact lateral-shear interferometer has been built using a laser diode as light source, a CCD camera and a rechargeable battery supply, which allows measurements in-situ, if necessary.
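Describing a measured phase map by Zernike coefficients, as the program above does, amounts to a linear least-squares fit over the pupil. A minimal sketch for a few low-order modes follows (illustrative only; the instrument's actual mode set and normalization convention are not specified in the abstract):

```python
import numpy as np

def fit_zernike(phase, n_modes=6):
    """Least-squares fit of a phase map to low-order Zernike polynomials
    on the unit disk: piston, tip, tilt, defocus, and two astigmatisms
    (Noll-style normalization), returning the coefficient vector."""
    h, w = phase.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    pupil = r <= 1.0
    modes = [np.ones_like(r),                      # piston
             2 * r * np.cos(t), 2 * r * np.sin(t), # tip, tilt
             np.sqrt(3) * (2 * r**2 - 1),          # defocus
             np.sqrt(6) * r**2 * np.cos(2 * t),    # astigmatism 0/90
             np.sqrt(6) * r**2 * np.sin(2 * t)]    # astigmatism 45
    A = np.stack([m[pupil] for m in modes[:n_modes]], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, phase[pupil], rcond=None)
    return coeffs
```

The coefficient vector then serves as the analytical description of the wavefront aberration that the abstract mentions.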
NASA Astrophysics Data System (ADS)
Baránek, M.; Běhal, J.; Bouchal, Z.
2018-01-01
In the phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for the simplicity of implementation. This iterative process can advantageously be deployed in the combination with a spatial light modulator (SLM) enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of the aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
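For reference, the basic GS iteration alternates between two planes, enforcing the known amplitude in each while keeping the retrieved phase. The sketch below uses a plain FFT propagator and omits the vortex target-spot and SLM-specific details discussed above:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
    """Retrieve a source-plane phase that maps a known source amplitude
    onto a desired target-plane amplitude, with FFT propagation between
    the two planes (plain GS, no aberration-correction extras)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, source_amp.shape)  # random start
    for _ in range(n_iter):
        field = source_amp * np.exp(1j * phase)            # enforce source amplitude
        focal = np.fft.fft2(field)                         # propagate forward
        focal = target_amp * np.exp(1j * np.angle(focal))  # enforce target amplitude
        phase = np.angle(np.fft.ifft2(focal))              # propagate back, keep phase
    return phase
```

The modification the abstract describes replaces the plain target spot with a vortex image spot and optimizes the spiral phase modulation used to form it; the iteration structure itself is unchanged.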
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakowatz, C.V. Jr.; Wahl, D.E.; Thompson, P.A.
1996-12-31
Wavefront curvature defocus effects can occur in spotlight-mode SAR imagery when reconstructed via the well-known polar formatting algorithm (PFA) under certain scenarios that include imaging at close range, use of a very low center frequency, and/or imaging of very large scenes. The range migration algorithm (RMA), also known as seismic migration, was developed to accommodate these wavefront curvature effects. However, the along-track upsampling of the phase history data required by the original version of range migration can in certain instances represent a major computational burden. A more recent version of migration processing, the Frequency Domain Replication and Downsampling (FReD) algorithm, obviates the need to upsample and is accordingly more efficient. In this paper the authors demonstrate that the combination of traditional polar formatting with appropriate space-variant post-filtering for refocus can be as efficient or even more efficient than FReD under some imaging conditions, as demonstrated by the computer-simulated results in this paper. The post-filter can be pre-calculated from a theoretical derivation of the curvature effect. The conclusion is that the new polar formatting with post-filtering algorithm (PF2) should be considered as a viable candidate for a spotlight-mode image formation processor when curvature effects are present.
Correia, Carlos M; Teixeira, Joel
2014-12-01
Computationally efficient wave-front reconstruction techniques for astronomical adaptive-optics (AO) systems have seen great development in the past decade. Algorithms developed in the spatial-frequency (Fourier) domain have gathered much attention, especially for high-contrast imaging systems. In this paper we present the Wiener filter (resulting in the maximization of the Strehl ratio) and further develop formulae for the anti-aliasing (AA) Wiener filter that optimally takes into account high-order wave-front terms folded in-band during the sensing (i.e., discrete sampling) process. We employ a continuous spatial-frequency representation for the forward measurement operators and derive the Wiener filter when aliasing is explicitly taken into account. We further investigate the reconstructed wave-front, measurement-noise, and aliasing propagation coefficients as a function of the system order, and compare them to classical least-squares filter estimates. Regarding high-contrast systems, we provide achievable performance results as a function of an ensemble of forward models for the Shack-Hartmann wave-front sensor (using sparse and nonsparse representations) and compute point-spread-function raw intensities. We find that for a 32×32 single-conjugate AO system the aliasing propagation coefficient is roughly 60% of that of the least-squares filter, whereas the noise propagation is around 80%. Contrast improvements by factors of up to 2 are achievable across the field in the H band. For current and next-generation high-contrast imagers, despite better aliasing mitigation, AA Wiener filtering cannot be used as a standalone method and must therefore be combined with optical spatial filters deployed before image formation actually takes place.
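A spatial-frequency Wiener reconstruction filter has the familiar closed form R = H* S_phi / (|H|^2 S_phi + S_n). A sketch with assumed values for the subaperture pitch, Fried parameter, and noise PSD, and with a simplified derivative-style x-slope operator H standing in for the paper's full forward models:

```python
import numpy as np

# Spatial-frequency grid for a d = 0.2 m subaperture pitch (assumed values)
n, d = 32, 0.2
k = np.fft.fftfreq(n, d)
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2
k2[0, 0] = np.inf                                  # exclude the piston mode

# Kolmogorov phase PSD ~ 0.0229 r0^(-5/3) |k|^(-11/3), with r0 = 0.16 m assumed
S_phi = 0.0229 * (1 / 0.16)**(5 / 3) * k2**(-11 / 6)
H = 2j * np.pi * kx                                # idealized x-slope measurement operator
S_n = 1e-2                                         # flat measurement-noise PSD (assumed)

# Wiener reconstruction filter: minimizes the residual phase variance
R = np.conj(H) * S_phi / (np.abs(H)**2 * S_phi + S_n)
```

The anti-aliasing variant of the paper additionally folds the out-of-band PSD into the denominator; this sketch shows only the in-band filter shape.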
Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-09-01
Multiconjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wave-front control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10^-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 10^3 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
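The preconditioned conjugate gradient loop at the heart of this approach is generic; only the preconditioner changes. A minimal sketch with a toy diagonal (Jacobi) preconditioner standing in for the multigrid or BSGS preconditioners described above:

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A. M_solve(r) applies the preconditioner inverse (e.g. a
    block Gauss-Seidel sweep or a multigrid cycle in the MCAO setting)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system with a diagonal (Jacobi) preconditioner as a stand-in
rng = np.random.default_rng(1)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = pcg(A, b, M_solve=lambda r: r / np.diag(A))
```

Each iteration costs one product with A, which is why sparse A makes the whole solve scale almost linearly with the number of phase grid points.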
Simplified Phase Diversity algorithm based on a first-order Taylor expansion.
Zhang, Dong; Zhang, Xiaobin; Xu, Shuyan; Liu, Nannan; Zhao, Luoxin
2016-10-01
We present a simplified solution to phase diversity when the observed object is a point source. It utilizes an iterative linearization of the point spread function (PSF) at two or more diversity planes by first-order Taylor expansion to reconstruct the initial wavefront. To enhance the influence of the PSF in the defocused plane, which is usually very dim compared with that in the focal plane, we build a new model with a Tikhonov regularization function. The new model can not only increase the computational speed, but also reduce the influence of noise. Using PSFs obtained from Zemax, we reconstruct the wavefront of the Hubble Space Telescope (HST) at the edge of the field of view (FOV) when the telescope is in either the nominal state or a misaligned state. We also set up an experiment, consisting of an imaging system and a deformable mirror, to validate the correctness of the presented model. The results show that the new model improves the computational speed while maintaining high wavefront detection accuracy.
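The regularized Gauss-Newton step behind such a linearized model reduces to solving (JᵀJ + λI)a = Jᵀ∆I, where J is the Jacobian of the stacked PSFs with respect to the aberration coefficients. A toy sketch with a random Jacobian rather than one derived from an actual PSF model:

```python
import numpy as np

def tikhonov_step(J, dI, lam):
    """One Gauss-Newton update for the linearized model dI ~ J @ da,
    Tikhonov-regularized so the dim defocused-plane data are not
    swamped by noise (lam is the regularization weight)."""
    JtJ = J.T @ J
    rhs = J.T @ dI
    return np.linalg.solve(JtJ + lam * np.eye(J.shape[1]), rhs)

# Toy problem: recover 5 aberration coefficients from stacked
# focal- and defocused-plane intensity differences (simulated J)
rng = np.random.default_rng(2)
J = rng.standard_normal((200, 5))
a_true = np.array([0.5, -0.2, 0.1, 0.0, 0.3])
dI = J @ a_true + 0.01 * rng.standard_normal(200)
a_hat = tikhonov_step(J, dI, lam=1e-3)
```

In the paper's iterative scheme this step would be repeated, re-linearizing the PSFs about the updated coefficients each time.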
NASA Astrophysics Data System (ADS)
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng
2018-03-01
Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free-space optical (FSO) communication. To solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed, following a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is performed for multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction recovers the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection. Compared with a minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower operational complexity. Our findings have theoretical significance for the design of coherent FSO communication systems.
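The soft-threshold denoising step is standard wavelet practice: shrink the detail coefficients toward zero by the threshold, zeroing anything smaller. A minimal numpy sketch with a hand-rolled one-level Haar transform (the paper's actual wavelet and threshold choice are not specified here):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients: shrink toward zero
    by t, zeroing anything smaller in magnitude."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# One level of a Haar decomposition of a 1-D phase slice (illustrative)
x = np.array([1.0, 1.2, 0.9, 1.1, 5.0, 5.1, 4.9, 5.2])
approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass coefficients
detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass coefficients
detail_dn = soft_threshold(detail, 0.1)

# Inverse Haar transform recovers the denoised signal
x_dn = np.empty_like(x)
x_dn[0::2] = (approx + detail_dn) / np.sqrt(2)
x_dn[1::2] = (approx - detail_dn) / np.sqrt(2)
```

In the paper this shrinkage is applied at each decomposition level before fractal interpolation upsamples the estimated phase to the finer grid.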
Reference-free Shack-Hartmann wavefront sensor.
Zhao, Liping; Guo, Wenjiang; Li, Xiang; Chen, I-Ming
2011-08-01
The traditional Shack-Hartmann wavefront sensing (SHWS) system measures the wavefront slope by calculating the centroid shift between the sample and a reference piece, after which the wavefront is reconstructed by a suitable iterative reconstruction method. Because a reference is required, many issues arise that limit the system in most applications. This Letter proposes a reference-free wavefront sensing (RFWS) methodology, and an RFWS system is built in which wavefront slope changes are measured by introducing a lateral disturbance to the sampling aperture. By applying Southwell reconstruction twice to the measured data, the form of the wavefront at the sampling plane can be well reconstructed. A theoretical simulation platform for RFWS is established, and various surface forms are investigated. Practical measurements with two measurement systems, SHWS and our RFWS, are conducted, analyzed, and compared. All the simulation and measurement results demonstrate the correctness and effectiveness of the method. © 2011 Optical Society of America
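Southwell-style zonal reconstruction fits the phase difference between neighbouring grid nodes to the average of the two measured slopes, then solves the resulting overdetermined system in the least-squares sense. A dense toy sketch (a real sensor would use sparse or iterative solvers):

```python
import numpy as np

def reconstruct_from_slopes(sx, sy, h=1.0):
    """Southwell-style zonal least-squares reconstruction: each
    node-to-node phase difference is fitted to the average of the two
    neighbouring measured slopes (dense toy version)."""
    n = sx.shape[0]
    A, b = [], []
    for i in range(n):                 # x-direction equations
        for j in range(n - 1):
            row = np.zeros(n * n)
            row[i * n + j + 1], row[i * n + j] = 1 / h, -1 / h
            A.append(row)
            b.append(0.5 * (sx[i, j] + sx[i, j + 1]))
    for i in range(n - 1):             # y-direction equations
        for j in range(n):
            row = np.zeros(n * n)
            row[(i + 1) * n + j], row[i * n + j] = 1 / h, -1 / h
            A.append(row)
            b.append(0.5 * (sy[i, j] + sy[i + 1, j]))
    phi, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    phi = phi.reshape(n, n)
    return phi - phi.mean()            # piston is unobservable; remove it

# Sanity check on a tilted wavefront phi = 0.3x + 0.1y (constant slopes)
n = 8
y, x = np.mgrid[0:n, 0:n].astype(float)
phi_true = 0.3 * x + 0.1 * y
phi_rec = reconstruct_from_slopes(np.full((n, n), 0.3), np.full((n, n), 0.1))
```

The RFWS method applies such a reconstruction twice: once on the disturbed-aperture slope changes and once to recover the wavefront form itself.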
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOREN,NEALL E.
Wavefront curvature defocus effects occur in spotlight-mode SAR imagery when reconstructed via the well-known polar-formatting algorithm (PFA) under certain imaging scenarios. These include imaging at close range, using a very low radar center frequency, utilizing high resolution, and/or imaging very large scenes. Wavefront curvature effects arise from the unrealistic assumption of strictly planar wavefronts illuminating the imaged scene. This dissertation presents a method for the correction of wavefront curvature defocus effects under these scenarios, concentrating on the generalized squint-mode imaging scenario and its computational aspects. This correction is accomplished through an efficient one-dimensional, image-domain filter applied as a post-processing step to PFA. This post-filter, referred to as SVPF, is precalculated from a theoretical derivation of the wavefront curvature effect and varies as a function of scene location. Prior to SVPF, severe restrictions were placed on the imaged scene size in order to avoid defocus effects under these scenarios when using PFA. The SVPF algorithm eliminates the need for scene size restrictions when wavefront curvature effects are present, correcting for wavefront curvature in broadside as well as squinted collection modes while imposing little additional computational penalty for squinted images. This dissertation covers the theoretical development, implementation, and analysis of the generalized, squint-mode SVPF algorithm (of which broadside mode is a special case) and provides examples of its capabilities and limitations, as well as offering guidelines for maximizing its computational efficiency. Tradeoffs between the PFA/SVPF combination and other spotlight-mode SAR image formation techniques are discussed with regard to computational burden, image quality, and imaging geometry constraints.
It is demonstrated that other methods fail to exhibit a clear computational advantage over polar formatting in conjunction with SVPF. This research concludes that PFA in conjunction with SVPF provides a computationally efficient spotlight-mode image formation solution that solves the wavefront curvature problem for most standoff distances and patch sizes, regardless of squint, resolution, or radar center frequency. Additional advantages are that SVPF is not iterative and has no dependence on the visual contents of the scene, resulting in a deterministic computational complexity that typically adds only thirty percent to the overall image formation time.
High-resolution wavefront reconstruction using the frozen flow hypothesis
NASA Astrophysics Data System (ADS)
Liu, Xuewen; Liang, Yonghui; Liu, Jin; Xu, Jieping
2017-10-01
This paper describes an approach to reconstructing wavefronts on a finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the FFH assumption, slope data from the WFS can be connected to a finer, composite slope grid through translation and downsampling, with the elements of the transformation matrices determined by wind information. Frames of slopes are then combined, and slopes on the finer grid are reconstructed by solving a sparse, large-scale, ill-posed least-squares problem. Using the reconstructed finer slope data and adopting the Fried geometry of the WFS, high-resolution wavefronts are then reconstructed. The results show that this method is robust even with detector noise and inaccurate wind information, and that under bad seeing conditions, high-frequency information in the wavefronts can be recovered more accurately than when correlations between WFS frames are ignored.
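The core idea, connecting several downsampled, wind-shifted frames to one finer grid through a stacked linear system, can be sketched in 1-D with integer shifts; this is a strong simplification of the paper's sparse 2-D formulation:

```python
import numpy as np

def combine_frames(frames, shifts, fine_n, factor):
    """Stack wind-shifted, downsampled WFS frames into one linear
    system on a finer grid and solve it with light Tikhonov damping
    (the real problem is sparse, large-scale, and ill-posed)."""
    rows, b = [], []
    for frame, s in zip(frames, shifts):
        for k, val in enumerate(frame):
            idx = k * factor + s           # fine-grid index seen by this sample
            if idx < fine_n:
                row = np.zeros(fine_n)
                row[idx] = 1.0
                rows.append(row)
                b.append(val)
    A, b = np.asarray(rows), np.asarray(b)
    lam = 1e-6                             # damping for ill-posed cases
    return np.linalg.solve(A.T @ A + lam * np.eye(fine_n), A.T @ b)

# 1-D toy: two frames, shifted by one fine-grid step, cover the grid
fine = np.sin(np.linspace(0, np.pi, 16))   # "true" fine slope profile
frames = [fine[0::2], fine[1::2]]          # downsample by 2, shifts 0 and 1
x_rec = combine_frames(frames, shifts=[0, 1], fine_n=16, factor=2)
```

With non-integer, wind-derived shifts the rows would contain interpolation weights instead of single ones, which is where the ill-posedness enters.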
Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.
Pathak, Biswajit; Boruah, Bosanta R
2017-12-01
Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014)] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
NASA Astrophysics Data System (ADS)
Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing
2015-08-01
Among the various wavefront control algorithms used in adaptive optics (AO) systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from the wavefront slopes via a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time performance and stability. However, as the number of wavefront sensor sub-apertures and deformable mirror actuators in AO systems increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of AO systems. In this paper we apply an iterative wavefront control algorithm to high-resolution AO systems, in which the voltage of each actuator is obtained through iteration, gaining a great advantage in computation and storage. For an AO system with thousands of actuators, the estimated computational complexity is about O(n^2) to O(n^3) for the direct gradient wavefront control algorithm, versus about O(n) to O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators in the AO system. The larger the number of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
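The contrast between the two approaches: applying a precomputed dense reconstructor costs O(n^2) per frame, while an iterative solver only needs repeated products with a (typically sparse) matrix. A plain conjugate-gradient sketch on toy interaction-matrix normal equations, with an assumed random slopes-per-actuator matrix D:

```python
import numpy as np

def cg_solve(A, b, n_iter=100, tol=1e-10):
    """Plain conjugate gradient: each iteration costs one product with
    A. If A is sparse with O(n) nonzeros, total cost per solve is far
    below the O(n^2) of applying a precomputed dense reconstructor."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Toy interaction-matrix normal equations: D^T D v = D^T s
rng = np.random.default_rng(3)
D = rng.standard_normal((120, 40))   # slopes-per-actuator matrix (assumed)
s = rng.standard_normal(120)         # measured slopes
A = D.T @ D
v = cg_solve(A, D.T @ s)
```

In a real system the product with D^T D would be applied matrix-free from the sparse D, never forming the dense normal matrix.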
Three-dimensional near-field MIMO array imaging using range migration techniques.
Zhuge, Xiaodong; Yarovoy, Alexander G
2012-06-01
This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.
Zhu, Zhaoyi; Mu, Quanquan; Li, Dayu; Yang, Chengliang; Cao, Zhaoliang; Hu, Lifa; Xuan, Li
2016-10-17
The centroid-based Shack-Hartmann wavefront sensor (SHWFS) treats the sampled wavefronts in the sub-apertures as planes, and the slopes of the sub-wavefronts are used to reconstruct the whole-pupil wavefront. The problem is that the centroid method may fail to sense the high-order modes under strong turbulence, decreasing the precision of the whole-pupil wavefront reconstruction. To solve this problem, we propose a sub-wavefront estimation method for the SHWFS based on the focal-plane sensing technique, by which more Zernike modes than the two slopes can be sensed in each sub-aperture. In this paper, the effects of the related parameters, such as the spot size, the phase offset with its set amplitude, and the pixel number in each sub-aperture, on the sub-wavefront estimation method are analyzed, and these parameters are optimized to achieve high efficiency. After the optimization, open-loop measurement is realized. For the sub-wavefront sensing, we achieve a large linearity range of 3.0 rad RMS for Zernike modes Z2 and Z3, and 2.0 rad RMS for Zernike modes Z4 to Z6, when the pixel number does not exceed 8 × 8 in each sub-aperture. The whole-pupil wavefront reconstruction with the modified SHWFS is realized to analyze the improvements brought by the optimized sub-wavefront estimation method. Sixty-five Zernike modes can be reconstructed with a modified SHWFS containing only 7 × 7 sub-apertures, which could reconstruct only 35 modes with the centroid method, and the mean RMS error of the residual phases is less than 0.2 rad^2, lower than the 0.35 rad^2 of the centroid method.
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
NASA Astrophysics Data System (ADS)
Steinbock, Michael J.; Hyde, Milo W.
2012-10-01
Adaptive optics is used in applications such as laser communication, remote sensing, and laser weapon systems to estimate and correct for atmospheric distortions of propagated light in real time. Within an adaptive optics system, a reconstruction process interprets the raw wavefront sensor measurements and calculates an estimate of the unwrapped phase function to be sent through a control law and applied to a wavefront correction device. This research is focused on adaptive optics using a self-referencing interferometer wavefront sensor, which directly measures the wrapped wavefront phase. Therefore, its measurements must be reconstructed for use on a continuous-facesheet deformable mirror. In testing and evaluating a novel class of branch-point-tolerant wavefront reconstructors based on the post-processing congruence operation technique, an increase in Strehl ratio compared to a traditional least-squares reconstructor was noted even in non-scintillated fields. To investigate this further, this paper uses wave-optics simulations to eliminate many of the variables of a hardware adaptive optics system, so as to focus on the reconstruction techniques alone. The simulation results, along with a discussion of the physical reasoning for this phenomenon, are provided. For any application using a self-referencing interferometer wavefront sensor with low signal levels or high localized wavefront gradients, understanding this phenomenon is critical when applying a traditional least-squares wavefront reconstructor.
Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system
NASA Astrophysics Data System (ADS)
Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen
2018-02-01
In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications such as an adaptive optics system (AOS) for real-time wavefront measurement. A Camera Link frame grabber with an embedded FPGA is adopted to increase the sensor's response speed, taking advantage of its high data transmission bandwidth. Instead of waiting for a full frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks, letting the image data transmission synchronize with the wavefront reconstruction. We also design a mechanism to control the deformable mirror within the same FPGA, and verify the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design achieves a 266 Hz cyclic speed, limited by the camera frame rate, while leaving 40% of the logic slices free for additional flexible designs.
Pupil-segmentation-based adaptive optics for microscopy
NASA Astrophysics Data System (ADS)
Ji, Na; Milkie, Daniel E.; Betzig, Eric
2011-03-01
Inhomogeneous optical properties of biological samples make it difficult to obtain diffraction-limited resolution in depth. Correcting the sample-induced optical aberrations requires adaptive optics (AO). However, the direct wavefront-sensing approach commonly used in astronomy is not suitable for most biological samples because they strongly scatter light. We developed an image-based AO approach that is insensitive to sample scattering. By comparing images of the sample taken with different segments of the pupil illuminated, the local tilt in the wavefront is measured from the image shift. The aberrated wavefront is then obtained either by measuring the local phase directly using interference or with phase reconstruction algorithms similar to those used in astronomical AO. We implemented this pupil-segmentation-based approach in a two-photon fluorescence microscope and demonstrated that diffraction-limited resolution can be recovered from nonbiological and biological samples.
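The local-tilt measurement reduces to estimating the translation between two sub-pupil images; phase correlation is one standard way to do it (a sketch, not necessarily the authors' estimator):

```python
import numpy as np

def image_shift(ref, img):
    """Estimate the integer (dy, dx) translation between two images by
    phase correlation; in pupil-segmentation AO the shift of each
    sub-pupil image gives the local wavefront tilt."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    F /= np.abs(F) + 1e-12                     # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n = ref.shape[0]
    # Map wrapped peak coordinates to signed shifts
    return ((dy + n // 2) % n) - n // 2, ((dx + n // 2) % n) - n // 2

# Toy: a Gaussian spot shifted by (2, -3) pixels
n = 32
y, x = np.mgrid[0:n, 0:n]
spot = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
shifted = np.roll(np.roll(spot, 2, axis=0), -3, axis=1)
dy, dx = image_shift(shifted, spot)
```

Subpixel accuracy, needed for fine tilt measurement, would come from interpolating around the correlation peak.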
NASA Astrophysics Data System (ADS)
Katkovnik, Vladimir; Shevkunov, Igor; Petrov, Nikolay V.; Egiazarian, Karen
2017-06-01
In-line lensless holography is considered with random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random due to the random phase modulation, and noisy with a Poissonian noise distribution. It is shown by computational experiments that high-accuracy reconstructions can be achieved with resolution going up to two-thirds of the wavelength. With respect to the sensor pixel size this is super-resolution by a factor of 32. The algorithm designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data is based on the general methodology developed for phase retrieval with pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/lasip/DDT/index3.html.
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
Methods for coherent lensless imaging and X-ray wavefront measurements
NASA Astrophysics Data System (ADS)
Guizar Sicairos, Manuel
X-ray diffractive imaging is set apart from other high-resolution imaging techniques (e.g. scanning electron or atomic force microscopy) for its high penetration depth, which enables tomographic 3D imaging of thick samples and buried structures. Furthermore, using short x-ray pulses, it enables the capability to take ultrafast snapshots, giving a unique opportunity to probe nanoscale dynamics at femtosecond time scales. In this thesis we present improvements to phase retrieval algorithms, assess their performance through numerical simulations, and develop new methods for both imaging and wavefront measurement. Building on the original work by Faulkner and Rodenburg, we developed an improved reconstruction algorithm for phase retrieval with transverse translations of the object relative to the illumination beam. Based on gradient-based nonlinear optimization, this algorithm is capable of estimating the object, and at the same time refining the initial knowledge of the incident illumination and the object translations. The advantages of this algorithm over the original iterative transform approach are shown through numerical simulations. Phase retrieval has already shown substantial success in wavefront sensing at optical wavelengths. Although in principle the algorithms can be used at any wavelength, in practice the focus-diversity mechanism that makes optical phase retrieval robust is not practical to implement for x-rays. In this thesis we also describe the novel application of phase retrieval with transverse translations to the problem of x-ray wavefront sensing. This approach allows the characterization of the complex-valued x-ray field in-situ and at-wavelength and has several practical and algorithmic advantages over conventional focused beam measurement techniques. 
A few of these advantages include improved robustness through diverse measurements, reconstruction from far-field intensity measurements only, and significant relaxation of experimental requirements over other beam characterization approaches. Furthermore, we show that a one-dimensional version of this technique can be used to characterize an x-ray line focus produced by a cylindrical focusing element. We provide experimental demonstrations of the latter at hard x-ray wavelengths, where we have characterized the beams focused by a kinoform lens and an elliptical mirror. In both experiments the reconstructions exhibited good agreement with independent measurements, and in the latter a small mirror misalignment was inferred from the phase retrieval reconstruction. These experiments pave the way for the application of robust phase retrieval algorithms for in-situ alignment and performance characterization of x-ray optics for nanofocusing. We also present a study on how transverse translations help with the well-known uniqueness problem of one-dimensional phase retrieval. We also present a novel method for x-ray holography that is capable of reconstructing an image using an off-axis extended reference in a non-iterative computation, greatly generalizing an earlier approach by Podorov et al. The approach, based on the numerical application of derivatives on the field autocorrelation, was developed from first mathematical principles. We conducted a thorough theoretical study to develop technical and intuitive understanding of this technique and derived sufficient separation conditions required for an artifact-free reconstruction. We studied the effects of missing information in the Fourier domain, and of an imperfect reference, and we provide a signal-to-noise ratio comparison with the more traditional approach of Fourier transform holography. 
We demonstrated this new holographic approach through proof-of-principle optical experiments and later experimentally at soft x-ray wavelengths, where we compared its performance to Fourier transform holography, iterative phase retrieval and state-of-the-art zone-plate x-ray imaging techniques (scanning and full-field). Finally, we present a demonstration of the technique using a single 20 fs pulse from a high-harmonic table-top source. Holography with an extended reference is shown to provide fast, good quality images that are robust to noise and artifacts that arise from missing information due to a beam stop. (Abstract shortened by UMI.)
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficiently accurate reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM). In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible, and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and a gradient-based method have been developed. We present a detailed comparison of our reconstructors, in terms of both quality and speed, in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
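The Kaczmarz step used in the tomography stage projects the current iterate onto one measurement hyperplane at a time, cycling through the rows. A minimal dense sketch on a toy consistent system (the real operator couples guide-star directions to turbulent layers):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50):
    """Kaczmarz iteration: sweep over the rows of A, projecting the
    current iterate onto each row's hyperplane in turn."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

# Toy consistent overdetermined system
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x_hat = kaczmarz(A, b, n_sweeps=200)
```

Each projection touches only one row, so the per-step cost stays low even when the full system is too large for direct MVM-style solves.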
Segmented Mirror Telescope Model and Simulation
2011-06-01
mirror surface is treated as a grid of masses and springs. The actuators have surface-normal forces applied to individual masses. The equation to ... are not widely treated in the literature. The required modifications for the wavefront reconstruction algorithm of a circular aperture to correctly ... Zernike polynomials, which are particularly suitable to describe the common optical characterizations of astigmatism, coma, defocus and others [9]
Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang
2016-12-14
An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field RCSs (Radar Cross Section) of targets in the terahertz regime.
Noise reduction in digital holography based on a filtering algorithm
NASA Astrophysics Data System (ADS)
Zhang, Wenhui; Cao, Liangcai; Zhang, Hua; Jin, Guofan; Brady, David
2018-02-01
Holography is a tool that records the object wavefront by interference. The complex amplitude of the object wave is coded into a two-dimensional hologram. Unfortunately, the conjugate wave and the background wave also appear at the object plane during reconstruction, as noise, which blurs the reconstructed object. From the perspective of wave propagation, we propose a filtering algorithm to obtain a noise-reduced reconstruction. Because the hologram is a kind of amplitude grating, three waves appear upon reconstruction: the object wave, the conjugate wave and the background wave. The background is easy to eliminate by frequency-domain filtering. The object wave and the conjugate wave are the signals to be dealt with. These two waves, as a whole, propagate in space. However, when detected at the original object plane, the object wave diffracts into a sparse pattern while the conjugate wave diffracts into a diffused pattern that forms the noise. Hence, the noise can be reduced based on this difference with a filtering algorithm. Both amplitude and phase distributions are faithfully retrieved in our simulation and experimental demonstrations.
Weighted spline based integration for reconstruction of freeform wavefront.
Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra
2018-02-10
In the present work, a spline-based integration technique for reconstructing a freeform wavefront from slope data has been implemented. The slope data of a freeform surface contain noise due to the machining process, and this introduces reconstruction error. We propose a weighted cubic-spline-based least-squares integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted with a piecewise polynomial, whose coefficients are determined using a smoothing cubic-spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least-squares technique to reconstruct the freeform wavefront. Simulation studies show improved results with the proposed technique as compared to the existing cubic-spline-based integration (CSLI) and Southwell methods. The proposed reconstruction method has been experimentally applied to a subaperture-stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI, which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology applications.
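A zonal least-squares integration of gridded slope data, in the spirit of the Southwell scheme that WCSLI is compared against, can be sketched as below; this unweighted version omits the smoothing-spline fitting step described in the abstract.

```python
import numpy as np

def integrate_slopes(gx, gy, h=1.0):
    """Zonal least-squares integration of gridded slope data.

    Each finite-difference equation ties neighbouring phase values to
    the averaged measured slope between them; the stacked system is
    solved by least squares with the piston (mean) pinned to zero.
    """
    ny, nx = gx.shape
    n = ny * nx
    idx = lambda i, j: i * nx + j
    A_rows, b = [], []
    for i in range(ny):                      # x-direction differences
        for j in range(nx - 1):
            r = np.zeros(n)
            r[idx(i, j + 1)], r[idx(i, j)] = 1.0, -1.0
            A_rows.append(r)
            b.append(h * 0.5 * (gx[i, j] + gx[i, j + 1]))
    for i in range(ny - 1):                  # y-direction differences
        for j in range(nx):
            r = np.zeros(n)
            r[idx(i + 1, j)], r[idx(i, j)] = 1.0, -1.0
            A_rows.append(r)
            b.append(h * 0.5 * (gy[i, j] + gy[i + 1, j]))
    A_rows.append(np.ones(n) / n)            # pin piston: mean(phi) = 0
    b.append(0.0)
    phi, *_ = np.linalg.lstsq(np.vstack(A_rows), np.array(b), rcond=None)
    return phi.reshape(ny, nx)

# Demo: a pure tilt wavefront is recovered exactly from constant slopes.
gx = np.full((6, 6), 0.3)
gy = np.full((6, 6), -0.2)
phi = integrate_slopes(gx, gy)
```

The weighted variant of the paper would replace the uniform equation weights with values derived from the smoothing-spline fit; a sparse solver would be used for realistic grid sizes.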
Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent
2013-05-01
This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt.45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.
Improved algorithm of ray tracing in ICF cryogenic targets
NASA Astrophysics Data System (ADS)
Zhang, Rui; Yang, Yongying; Ling, Tong; Jiang, Jiabin
2016-10-01
High-precision ray tracing inside inertial confinement fusion (ICF) cryogenic targets plays an important role in the reconstruction of the three-dimensional density distribution by the algebraic reconstruction technique (ART). The traditional Runge-Kutta methods, which are restricted by the precision of the grid division and the step size of ray tracing, cannot produce an accurate calculation in the case of refractive index saltation. In this paper, we propose an improved ray-tracing algorithm based on the Runge-Kutta methods and Snell's law of refraction to achieve high tracing precision. On refractive index boundaries, we apply Snell's law of refraction together with a contact-point search algorithm to ensure the accuracy of the simulation. Inside the cryogenic target, a combination of the Runge-Kutta methods and a self-adaptive step algorithm is employed for computation. The original refractive index data, which are used to mesh the target, can be obtained by experimental measurement or from an a priori refractive index distribution function. A finite-difference method is used to calculate the refractive index gradient at mesh nodes, and distance-weighted average interpolation is utilized to obtain the refractive index and its gradient at each point in space. In the simulation, we take an ideal ICF target, a Luneburg lens and a graded-index rod as models to calculate the spot diagram and wavefront map. Comparison of the simulation results with Zemax shows that the improved algorithm, based on the fourth-order Runge-Kutta method and Snell's law of refraction, exhibits high accuracy. The relative error of the spot diagram is 0.2%, and the peak-to-valley (PV) and root-mean-square (RMS) errors of the wavefront map are less than λ/35 and λ/100, respectively.
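The Runge-Kutta part of such a tracer integrates the graded-index ray equation d/ds(n dr/ds) = ∇n; a minimal fixed-step RK4 sketch (without the boundary Snell's-law handling or the adaptive stepping described above) might look like:

```python
import numpy as np

def trace_ray(r0, d0, n_func, grad_n, ds=0.01, steps=1000):
    """RK4 integration of the graded-index ray equation
        d/ds (n dr/ds) = grad n,
    written as the first-order system r' = v / n(r), v' = grad n(r),
    where v = n * dr/ds is the optical direction vector.
    """
    def deriv(state):
        r, v = state[:3], state[3:]
        return np.concatenate([v / n_func(r), grad_n(r)])
    state = np.concatenate([r0, n_func(r0) * d0 / np.linalg.norm(d0)])
    for _ in range(steps):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * ds * k1)
        k3 = deriv(state + 0.5 * ds * k2)
        k4 = deriv(state + ds * k3)
        state = state + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[:3], state[3:]

# Sanity check: in a homogeneous medium the ray must stay straight.
n_const = lambda r: 1.5
g_zero = lambda r: np.zeros(3)
r_end, v_end = trace_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                         n_const, g_zero)
```

The paper's improvement amounts to detecting where the ray crosses an index discontinuity, stopping the RK4 step there, and applying Snell's law at the located contact point before continuing.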
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wavefront reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
Enhancing the performance of the light field microscope using wavefront coding
Cohen, Noy; Yang, Samuel; Andalman, Aaron; Broxton, Michael; Grosenick, Logan; Deisseroth, Karl; Horowitz, Mark; Levoy, Marc
2014-01-01
Light field microscopy has been proposed as a new high-speed volumetric computational imaging method that enables reconstruction of 3-D volumes from captured projections of the 4-D light field. Recently, a detailed physical optics model of the light field microscope has been derived, which led to the development of a deconvolution algorithm that reconstructs 3-D volumes with high spatial resolution. However, the spatial resolution of the reconstructions has been shown to be non-uniform across depth, with some z planes showing high resolution and others, particularly at the center of the imaged volume, showing very low resolution. In this paper, we enhance the performance of the light field microscope using wavefront coding techniques. By including phase masks in the optical path of the microscope we are able to address this non-uniform resolution limitation. We have also found that superior control over the performance of the light field microscope can be achieved by using two phase masks rather than one, placed at the objective’s back focal plane and at the microscope’s native image plane. We present an extended optical model for our wavefront coded light field microscope and develop a performance metric based on Fisher information, which we use to choose adequate phase masks parameters. We validate our approach using both simulated data and experimental resolution measurements of a USAF 1951 resolution target; and demonstrate the utility for biological applications with in vivo volumetric calcium imaging of larval zebrafish brain. PMID:25322056
Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation
Barbero, Sergio; Thibos, Larry N.
2007-01-01
Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. In practice, however, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is demonstrated that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
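For uniform intensity I0, the TIE reduces to a Poisson equation, ∇²φ = -(k/I0) ∂I/∂z, which can be inverted with FFTs under periodic boundary conditions; the sketch below shows this textbook inversion, not the paper's own implementation.

```python
import numpy as np

def tie_solve(dIdz, I0, k, dx):
    """FFT-based Poisson solve of the uniform-intensity TIE:
        laplacian(phi) = -(k / I0) * dI/dz
    under periodic boundary conditions (piston set to zero).
    """
    ny, nx = dIdz.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                       # avoid divide-by-zero at DC
    rhs = -(k / I0) * dIdz
    phi_hat = -np.fft.fft2(rhs) / k2     # laplacian <-> multiply by -k^2
    phi_hat[0, 0] = 0.0                  # piston is undetermined
    return np.real(np.fft.ifft2(phi_hat))

# Demo: recover a single-mode phase from its analytic Laplacian.
N = 64
X, Y = np.meshgrid(np.arange(N), np.arange(N))
phi_true = np.cos(2 * np.pi * X / N)
k0, I0 = 2 * np.pi / 0.5e-6, 1.0
dIdz = -(I0 / k0) * (-(2 * np.pi / N) ** 2 * phi_true)
phi_rec = tie_solve(dIdz, I0, k0, 1.0)
```

The division by k² shows why the problem becomes ill-conditioned in practice: low-frequency noise in the measured ∂I/∂z is strongly amplified, which is exactly what the wavelet prefiltering discussed above mitigates.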
Fourier transform-wavefront reconstruction for the pyramid wavefront sensor
NASA Astrophysics Data System (ADS)
Quirós-Pacheco, Fernando; Correia, Carlos; Esposito, Simone
The application of Fourier-transform reconstruction techniques to the pyramid wavefront sensor has been investigated. A preliminary study based on end-to-end simulations of an adaptive optics system with ≈40x40 subapertures and actuators shows that the performance of the Fourier-transform reconstructor (FTR) is of the same order of magnitude as that obtained with a conventional matrix-vector multiply (MVM) method.
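A basic Fourier-transform reconstructor recovers the phase from its two gradient maps by spectral division; a minimal periodic-boundary sketch (ignoring aperture masking and any pyramid-specific filtering) is:

```python
import numpy as np

def ftr(gx, gy, dx=1.0):
    """Fourier-transform reconstructor: recover phi from its gradients.

    With G = FFT(grad phi) = i * k * FFT(phi), solve
        phi_hat = -i (kx*Gx + ky*Gy) / (kx^2 + ky^2).
    Periodic boundaries; the piston term is set to zero.
    """
    ny, nx = gx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                       # avoid divide-by-zero at DC
    phi_hat = -1j * (KX * np.fft.fft2(gx) + KY * np.fft.fft2(gy)) / k2
    phi_hat[0, 0] = 0.0                  # piston is undetermined
    return np.real(np.fft.ifft2(phi_hat))

# Demo: exact recovery of a two-mode phase from its analytic gradients.
N = 64
X, Y = np.meshgrid(np.arange(N), np.arange(N))
phi_true = np.sin(2 * np.pi * X / N) + np.cos(2 * np.pi * Y / N)
gx = (2 * np.pi / N) * np.cos(2 * np.pi * X / N)
gy = -(2 * np.pi / N) * np.sin(2 * np.pi * Y / N)
phi_rec = ftr(gx, gy)
```

The O(N log N) cost of the two FFTs is what makes the FTR attractive against an MVM, whose cost grows quadratically with the number of actuators.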
NASA Astrophysics Data System (ADS)
Mao, Heng; Wang, Xiao; Zhao, Dazun
2009-05-01
As a wavefront sensing (WFS) tool, the Baseline algorithm, an iterative-transform phase-retrieval algorithm, estimates the phase distribution at the pupil from known PSFs at defocus planes. By using multiple phase diversities and appropriate phase-unwrapping methods, this algorithm can achieve a reliable unique solution and high-dynamic-range phase measurement. In this paper, a Baseline-algorithm-based wavefront sensing experiment with a modified phase-unwrapping step has been implemented, and corresponding graphical user interface (GUI) software is presented. The adaptability and repeatability of the Baseline algorithm have been validated in experiments. Moreover, taking ZYGO interferometric results as a reference, the WFS accuracy of the algorithm has been calibrated.
Phase retrieval using a modified Shack-Hartmann wavefront sensor with defocus.
Li, Changwei; Li, Bangming; Zhang, Sijiong
2014-02-01
This paper proposes a modified Shack-Hartmann wavefront sensor for phase retrieval. The sensor is revamped by placing a detector at a defocused plane before the focal plane of the lenslet array of the Shack-Hartmann sensor. The algorithm for phase retrieval is an optimization with initial Zernike coefficients calculated by the conventional phase reconstruction of the Shack-Hartmann sensor. Numerical simulations show that the proposed sensor permits sensitive, accurate phase retrieval. Furthermore, experiments tested the feasibility of phase retrieval using the proposed sensor. The surface irregularity for a flat mirror was measured by the proposed method and a Veeco interferometer, respectively. The irregularity for the mirror measured by the proposed method is in very good agreement with that measured using the Veeco interferometer.
Two Improved Algorithms for Envelope and Wavefront Reduction
NASA Technical Reports Server (NTRS)
Kumfert, Gary; Pothen, Alex
1997-01-01
Two algorithms for reordering sparse, symmetric matrices or undirected graphs to reduce envelope and wavefront are considered. The first is a combinatorial algorithm introduced by Sloan and further developed by Duff, Reid, and Scott; we describe enhancements to the Sloan algorithm that improve its quality and reduce its run time. Our test problems fall into two classes with differing asymptotic behavior of their envelope parameters as a function of the weights in the Sloan algorithm. We describe an efficient O(n log n + m) time implementation of the Sloan algorithm, where n is the number of rows (vertices), and m is the number of nonzeros (edges). On a collection of test problems, the improved Sloan algorithm required, on average, only twice the time required by the simpler Reverse Cuthill-McKee algorithm while improving the mean square wavefront by a factor of three. The second algorithm is a hybrid that combines a spectral algorithm for envelope and wavefront reduction with a refinement step that uses a modified Sloan algorithm. The hybrid algorithm reduces the envelope size and mean square wavefront obtained from the Sloan algorithm at the cost of greater running times. We illustrate how these reductions translate into tangible benefits for frontal Cholesky factorization and incomplete factorization preconditioning.
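For reference, the simpler Reverse Cuthill-McKee baseline mentioned above can be sketched in a few lines: a breadth-first search started from a low-degree vertex, visiting neighbours in order of increasing degree, with the final ordering reversed.

```python
from collections import deque

def rcm_order(adj):
    """Reverse Cuthill-McKee ordering of an undirected graph.

    adj: dict mapping each vertex to an iterable of its neighbours.
    Returns a permutation of the vertices that tends to reduce the
    bandwidth/envelope of the corresponding sparse symmetric matrix.
    """
    degree = {v: len(list(ns)) for v, ns in adj.items()}
    visited, order = set(), []
    # Start each component from a vertex of minimum degree.
    for start in sorted(adj, key=lambda v: degree[v]):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v], key=lambda u: degree[u]):
                if w not in visited:
                    visited.add(w)
                    queue.append(w)
    return order[::-1]  # reversing the BFS order gives RCM

# Demo: on a path graph the ordering is an end-to-end sweep.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
order = rcm_order(path)
```

The Sloan algorithm refines this idea by replacing the FIFO queue with a priority queue whose key mixes vertex degree with distance to an end vertex, which is where the weight-dependent behavior discussed above comes from.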
NASA Astrophysics Data System (ADS)
Liu, Ke; Wang, Jiannian; Wang, Hai; Li, Yanqiu
2018-07-01
For multi-lateral shearing interferometers (multi-LSIs), the measurement accuracy can be enhanced by estimating the wavefront under test from the multidirectional phase information encoded in the shearing interferogram. Usually the multi-LSIs reconstruct the test wavefront from the phase derivatives in multiple directions using the discrete Fourier transform (DFT) method, which is suitable only for small shear ratios and is relatively sensitive to noise. To improve the accuracy of multi-LSIs, wavefront reconstruction from the multidirectional phase differences using difference Zernike polynomial fitting (DZPF) is proposed in this paper. For the DZPF method applied to the quadriwave LSI, difference Zernike polynomials in only two orthogonal shear directions are required to represent the phase differences in multiple shear directions. In this way, the test wavefront can be reconstructed from the phase differences in multiple shear directions using a noise-variance-weighted least-squares method with almost no extra computational burden compared with the usual recovery from the phase differences in two orthogonal directions. Numerical simulation results show that the DZPF method maintains high reconstruction accuracy over a wider range of shear ratios and has much better noise immunity than the DFT method. A null test of the quadriwave LSI has been conducted, and the experimental results show that the measurement accuracy of the quadriwave LSI can be improved from 0.0054 λ rms to 0.0029 λ rms (λ = 632.8 nm) by substituting the proposed DZPF method for the DFT method in the wavefront reconstruction process.
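The core of a difference-polynomial fit can be sketched by forming the sheared differences of each basis function and solving a least-squares system for the modal coefficients; the monomial basis below is a stand-in for the difference Zernike polynomials of the paper, and the unweighted solve omits the noise-variance weighting.

```python
import numpy as np

def fit_from_shear_differences(dW_x, dW_y, basis, shear_px):
    """Least-squares modal fit from two lateral-shear difference maps.

    basis: list of 2-D arrays B_k sampled on the same grid as the
    wavefront. The sheared difference of each basis function plays the
    role of a 'difference polynomial'; coefficients c_k are chosen so
    that sum_k c_k * diff(B_k) matches both measured difference maps.
    """
    s = shear_px
    cols = []
    for B in basis:
        dBx = B[:, s:] - B[:, :-s]       # x-shear difference
        dBy = B[s:, :] - B[:-s, :]       # y-shear difference
        cols.append(np.concatenate([dBx.ravel(), dBy.ravel()]))
    A = np.stack(cols, axis=1)
    rhs = np.concatenate([dW_x.ravel(), dW_y.ravel()])
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return c

# Demo: recover the coefficients of a known wavefront (piston drops out
# of the differences, so it is deliberately absent from the basis).
N, s = 32, 4
X, Y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
basis = [X, Y, X * Y]
W = 2.0 * X - 1.0 * Y + 0.5 * X * Y
c = fit_from_shear_differences(W[:, s:] - W[:, :-s],
                               W[s:, :] - W[:-s, :], basis, s)
```

Extending to four shear directions, as in the quadriwave LSI, amounts to stacking additional difference maps as extra rows of the same least-squares system.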
Iterative-Transform Phase Diversity: An Object and Wavefront Recovery Algorithm
NASA Technical Reports Server (NTRS)
Smith, J. Scott
2011-01-01
Presented is a solution for recovering the wavefront and an extended object. It builds upon the VSM architecture and deconvolution algorithms. Simulations are shown for recovering the wavefront and extended object from noisy data.
NASA Astrophysics Data System (ADS)
Hutterer, Victoria; Ramlau, Ronny
2018-03-01
The new generation of extremely large telescopes includes adaptive optics systems to correct for atmospheric blurring. In this paper, we present a new method of wavefront reconstruction from non-modulated pyramid wavefront sensor data. The approach is based on a simplified sensor model represented as the finite Hilbert transform of the incoming phase. Due to the non-compactness of the finite Hilbert transform operator, the classical theory for singular systems is not applicable. Nevertheless, we can express the Moore-Penrose inverse as a singular-value-type expansion with weighted Chebyshev polynomials.
Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2012-01-01
The Iterative Transform Phase Diversity algorithm is designed to solve the problem of recovering the wavefront in the exit pupil of an optical system and the object being imaged. This algorithm builds upon the robust convergence capability of Variable Sampling Mapping (VSM), in combination with the known success of various deconvolution algorithms. VSM is an alternative method for enforcing the amplitude constraints of a Misell-Gerchberg-Saxton (MGS) algorithm. When provided the object and additional optical parameters, VSM can accurately recover the exit pupil wavefront. By combining VSM and deconvolution, one is able to simultaneously recover the wavefront and the object.
Wavefront reconstruction using computer-generated holograms
NASA Astrophysics Data System (ADS)
Schulze, Christian; Flamm, Daniel; Schmidt, Oliver A.; Duparré, Michael
2012-02-01
We propose a new method to determine the wavefront of a laser beam, based on modal decomposition using computer-generated holograms (CGHs). The beam under test illuminates a CGH with a specific inscribed transmission function that enables the measurement of modal amplitudes and phases by evaluating the first diffraction order of the hologram. Since we use an angular multiplexing technique, our method is innately capable of real-time measurement of amplitude and phase, yielding the complete information about the optical field. A measurement of the Stokes parameters, i.e., of the polarization state, makes it possible to calculate the Poynting vector. Two wavefront reconstruction possibilities are outlined: reconstruction from the phase for scalar beams and reconstruction from the Poynting vector for inhomogeneously polarized beams. To quantify individual aberrations, the reconstructed wavefront is decomposed into Zernike polynomials. Our technique is applied to beams emerging from different kinds of multimode optical fibers, such as step-index, photonic crystal and multicore fibers; here, results are shown for a step-index fiber as an example and compared to a Shack-Hartmann measurement that serves as a reference.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by two orders of magnitude.
Yu, Honghao; Chang, Jun; Liu, Xin; Wu, Chuhan; He, Yifan; Zhang, Yongjian
2017-04-17
Herein, we propose a new security-enhancing method that employs wavefront aberrations as optical keys to improve the resistance of conventional double-random phase encoding (DRPE) optical cryptosystems. This study has two main innovations. First, we exploit a special afocal reflecting beam expander to produce different types of aberrations, and the wavefront distortion can be altered by changing the shape of the afocal reflecting system with a deformable mirror. Then, we reconstruct the wavefront aberrations via surface fitting of Zernike polynomials and use the reconstructed aberrations as novel asymmetric vector keys. The ideal wavefront and the distorted wavefront obtained by wavefront sensing can be regarded as a private/public key pair. The wavelength and the focal length of the Fourier lens can be used as additional keys to increase the number of degrees of freedom. This novel cryptosystem can enhance the resistance to various attacks aimed at DRPE systems. Finally, we conduct ZEMAX and MATLAB simulations to demonstrate the superiority of the method.
Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai
2015-08-10
Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront of a laser beam under test over a square area from the phase-difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error-propagation coefficients is deduced for the case where the phase-difference data of the overlapping area contain random noise. A matrix T is proposed that can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of the impact can be estimated by calculating the F norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms and noise-propagation coefficients, and between shear ratio, sampling points and the norm of the T matrix, are analyzed. These results provide theoretical reference and guidance for the optimized design of radial shearing interferometry systems.
Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A.
2014-12-01
The wavefront retrieval by gradient descent algorithm that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images under partially coherent illumination. For accurate retrieval, we modeled partial coherence as well as object transmittance in the gradient descent algorithm. However, this modeling increases the computation time due to the complexity of the partially coherent imaging simulation that is used repeatedly in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.
Cousin, Seth L; Bueno, Juan M; Forget, Nicolas; Austin, Dane R; Biegert, J
2012-08-01
We demonstrate a simplified arrangement for spatiotemporal ultrashort pulse characterization called Hartmann-Shack assisted, multidimensional, shaper-based technique for electric-field reconstruction. It employs an acousto-optic pulse shaper in combination with a second-order nonlinear crystal and a Hartmann-Shack wavefront sensor. The shaper is used as a tunable bandpass filter, and the wavefronts and intensities of quasimonochromatic spectral slices of the pulse are obtained using the Hartmann-Shack wavefront sensor. The wavefronts and intensities of the spectral slices are related to one another using shaper-assisted frequency-resolved optical gating measurements, performed at particular points in the beam. This enables a three-dimensional reconstruction of the amplitude and phase of the pulse. We present some example pulse measurements and discuss the operating parameters of the device.
Detecting higher-order wavefront errors with an astigmatic hybrid wavefront sensor.
Barwick, Shane
2009-06-01
The reconstruction of wavefront errors from measurements over subapertures can be made more accurate if a fully characterized quadratic surface can be fitted to the local wavefront surface. An astigmatic hybrid wavefront sensor with added neural network postprocessing is shown to have this capability, provided that the focal image of each subaperture is sufficiently sampled. Furthermore, complete local curvature information is obtained with a single image without splitting beam power.
A wavefront reconstruction method for 3-D cylindrical subsurface radar imaging.
Flores-Tapia, Daniel; Thomas, Gabriel; Pistorius, Stephen
2008-10-01
In recent years, the use of radar technology has been proposed in a wide range of subsurface imaging applications. Traditionally, linear scan trajectories are used to acquire data in most subsurface radar applications. However, novel applications, such as breast microwave imaging and wood inspection, require the use of nonlinear scan trajectories in order to adjust to the geometry of the scanned area. This paper proposes a novel reconstruction algorithm for subsurface radar data acquired along cylindrical scan trajectories. The spectrum of the collected data is processed in order to locate the spatial origin of the target reflections and remove the spreading of the target reflections which results from the different signal travel times along the scan trajectory. The proposed algorithm was successfully tested using experimental data collected from phantoms that mimic high contrast subsurface radar scenarios, yielding promising results. Practical considerations such as spatial resolution and sampling constraints are discussed and illustrated as well.
NASA Astrophysics Data System (ADS)
Niu, Chaojun; Han, Xiang'e.
2015-10-01
Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free-space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness and few parameters to set. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and demonstrate its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation under different turbulence conditions before and after correction. The results show that the ABC algorithm has a much faster correction speed than the DE algorithm and better correction ability in strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.
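A minimal ABC optimizer, of the kind that could drive a wavefront-sensorless correction loop by optimizing an image-sharpness metric, can be sketched as follows; here it simply minimizes a sphere test function rather than a real AO merit function.

```python
import numpy as np

def abc_minimize(f, dim, bounds, n_food=20, limit=30, iters=200, seed=0):
    """Minimal artificial bee colony (ABC) sketch for minimizing f >= 0.

    Employed and onlooker bees perturb one random coordinate of a food
    source toward a random partner; scouts restart sources whose trial
    counter exceeds `limit`.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_food, dim))
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)

    def try_move(i):
        j = rng.integers(dim)
        k = rng.integers(n_food - 1)
        k = k + 1 if k >= i else k          # random partner != i
        cand = X[i].copy()
        cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        cand = np.clip(cand, lo, hi)
        fc = f(cand)
        if fc < fit[i]:                     # greedy selection
            X[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):             # employed-bee phase
            try_move(i)
        p = 1.0 / (1.0 + fit)               # fitness-proportional choice
        p /= p.sum()
        for i in rng.choice(n_food, size=n_food, p=p):  # onlooker phase
            try_move(i)
        for i in np.where(trials > limit)[0]:           # scout phase
            X[i] = rng.uniform(lo, hi, dim)
            fit[i] = f(X[i])
            trials[i] = 0
    best = int(np.argmin(fit))
    return X[best], fit[best]

# Demo: minimize a 3-D sphere function over [-5, 5]^3.
x_best, f_best = abc_minimize(lambda x: float(np.sum(x * x)),
                              dim=3, bounds=(-5.0, 5.0))
```

In a sensorless AO loop, each candidate would be a vector of deformable-mirror voltages and f would be (the negative of) a measured metric such as coupling efficiency, so each evaluation costs one hardware measurement.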
Reconstruction-free sensitive wavefront sensor based on continuous position sensitive detectors.
Godin, Thomas; Fromager, Michael; Cagniot, Emmanuel; Brunel, Marc; Aït-Ameur, Kamel
2013-12-01
We propose a new device that is able to perform highly sensitive wavefront measurements based on the use of continuous position sensitive detectors and without resorting to any reconstruction process. We demonstrate experimentally its ability to measure small wavefront distortions through the characterization of pump-induced refractive index changes in laser materials. In addition, it is shown using computer-generated holograms that this device can detect phase discontinuities as well as improve the quality of measurements of sharp phase variations. Results are compared to reference Shack-Hartmann measurements, and dramatic enhancements are obtained.
Computational test bench and flow chart for wavefront sensors
NASA Astrophysics Data System (ADS)
Abecassis, Úrsula V.; de Lima Monteiro, Davies W.; Salles, Luciana P.; Stanigher, Rafaela; Borges, Euller
2014-05-01
The wavefront reconstruction diagram addresses the need in the literature for a broader overview of the many methods and optoelectronic devices used for wavefront reconstruction, and shows the interactions between them. A computational platform has been developed that uses the diagram's guidance for deciding on the best technique and on the photosensitive and electronic structures to be implemented. This work is directed at an ophthalmological application: the development of an instrument to aid the diagnosis of optical aberrations of the human eye.
Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.
Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian
2015-10-05
The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability in fringe spacing and achieves PS without the need for mechanically moving parts. However, a small amount of detector or scatter noise can affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. Rewrapping the Fourier-reconstructed wavefronts resulted in phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method over the Fourier unwrapping method in the presence of noise is shown. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.
Complex wavefront sensing with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2016-09-01
There are many techniques to achieve basic wavefront sensing tasks in the weak atmospheric turbulence regime. However, in strong and deep turbulence situations, the complexity of a propagating wavefront increases significantly. Typically, beam breakup will happen and various portions of the beam will randomly interfere with each other. Consequently, some conventional techniques for wavefront sensing turn out to be inaccurate and misleading. For example, a Shack-Hartmann sensor will be confused by multi-spot or zero-spot results in some cells. The curvature sensor will be affected by random interference patterns in both the image acquired before the focal plane and the image acquired after it. We propose the use of a plenoptic sensor to solve complex wavefront sensing problems. In fact, our results show that even for multiple beams (whose wavelengths can be the same) passing through the same turbulent channel, the plenoptic sensor can reconstruct the turbulence-induced distortion accurately. In this paper, we demonstrate the plenoptic mapping principle to analyze and reconstruct the complex wavefront of a distorted laser beam.
A First Order Wavefront Estimation Algorithm for P1640 Calibrator
NASA Technical Reports Server (NTRS)
Zhai, C.; Vasisht, G.; Shao, M.; Lockhart, T.; Cady, E.; Oppenheimer, B.; Burruss, R.; Roberts, J.; Beichman, C.; Brenner, D.
2012-01-01
P1640 calibrator is a wavefront sensor working with the P1640 coronagraph and the Palomar 3000-actuator adaptive optics system (P3K) at the Palomar 200-inch Hale telescope. It measures the wavefront by interfering post-coronagraph light with a reference beam formed by low-pass filtering the blocked light from the coronagraph focal plane mask. The P1640 instrument has a similar architecture to the Gemini Planet Imager (GPI), and its performance is currently limited by quasi-static speckles due to non-common-path wavefront errors, which arise because the light follows different paths to the AO wavefront sensor and the coronagraph mask. By measuring the wavefront after the coronagraph mask, the non-common-path wavefront error can be estimated and corrected by feeding the error signal back to the deformable mirror (DM) of the P3K AO system. Here, we present a first order wavefront estimation algorithm and an instrument calibration scheme used in experiments done recently at Palomar Observatory. We calibrate the P1640 calibrator by measuring its responses to poking DM actuators in a sparse checkerboard pattern at different amplitudes. The calibration yields a complex normalization factor for wavefront estimation and establishes the registration of the DM actuators at the pupil camera of the P1640 calibrator, necessary for wavefront correction. The improvement in imaging quality after feeding the wavefront correction back to the AO system demonstrates the efficacy of the algorithm.
Effects of illumination on image reconstruction via Fourier ptychography
NASA Astrophysics Data System (ADS)
Cao, Xinrui; Sinzinger, Stefan
2017-12-01
The Fourier ptychographic microscopy (FPM) technique provides high-resolution images by combining a traditional imaging system, e.g. a microscope or a 4f imaging system, with a multiplexing illumination system, e.g. an LED array, and numerical image processing for enhanced image reconstruction. In order to numerically combine images that are captured under varying illumination angles, an iterative phase-retrieval algorithm is often applied. However, in practice, the performance of the FPM algorithm degrades due to imperfections of the optical system, image noise caused by the camera, etc. To eliminate the influence of the aberrations of the imaging system, an embedded pupil function recovery (EPRY)-FPM algorithm has been proposed [Opt. Express 22, 4960-4972 (2014)]. In this paper, we study how the performance of the FPM and EPRY-FPM algorithms is affected by imperfections of the illumination system, using both numerical simulations and experiments. The investigated imperfections include varying and non-uniform intensities, and wavefront aberrations. Our study shows that aberrations of the illumination system significantly affect the performance of both the FPM and EPRY-FPM algorithms. Hence, in practice, aberrations in the illumination system have a significant influence on the resulting image quality.
Simpler Adaptive Optics using a Single Device for Processing and Control
NASA Astrophysics Data System (ADS)
Zovaro, A.; Bennet, F.; Rye, D.; D'Orgeville, C.; Rigaut, F.; Price, I.; Ritchie, I.; Smith, C.
The management of low Earth orbit is becoming more urgent as satellite and debris densities climb, in order to avoid a Kessler syndrome. A key part of this management is to precisely measure the orbit of both active satellites and debris. The Research School of Astronomy and Astrophysics at the Australian National University has been developing an adaptive optics (AO) system to image and range orbiting objects. The AO system provides atmospheric correction for imaging and laser ranging, allowing for the detection of smaller angular targets and drastically increasing the number of detectable objects. AO systems are by nature complex and high-cost, often costing millions of dollars and taking years to design. It is not unusual for AO systems to comprise multiple servers, digital signal processors (DSPs) and field programmable gate arrays (FPGAs), with dedicated tasks such as wavefront sensor data processing or wavefront reconstruction. While this multi-platform approach has been necessary in AO systems to date due to computation and latency requirements, this may no longer be the case for those with less demanding processing needs. In recent years, large strides have been made in FPGA and microcontroller technology, with today's devices having clock speeds in excess of 200 MHz whilst using a < 5 V power supply. AO systems using a single such device for all data processing and control may present a far simpler, cheaper, smaller and more efficient solution than existing systems. A novel AO system design based around a single, low-cost controller is presented. The objective is to determine the performance which can be achieved in terms of bandwidth and correction order, with a focus on optimisation and parallelisation of AO algorithms such as wavefront measurement and reconstruction. The AO system consists of a Shack-Hartmann wavefront sensor and a deformable mirror to correct light from a 1.8 m telescope for the purpose of imaging orbiting satellites.
The microcontroller or FPGA interfaces directly with the wavefront sensor detector and the deformable mirror. Wavefront slopes are calculated from each detector frame and converted into actuator commands to complete the closed-loop AO control system. A particular challenge of this system is to optimise the AO algorithms to achieve a high frame rate (> 1 kHz) with low latency (< 1 ms) for good AO correction. As part of the Space Environment Cooperative Research Centre (SERC), this AO system design will be used as a demonstrator for what is possible with ground-based AO-corrected satellite imaging and ranging systems. The ability to directly and efficiently interface the wavefront sensor and deformable mirror is an important step in reducing the cost and complexity of an AO system. It is hoped that in the future this design can be modified for use in general AO applications, such as in 1-3 m telescopes for space surveillance, or even for amateur astronomy.
Statistical virtual eye model based on wavefront aberration
Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie
2012-01-01
Wavefront aberration directly affects the quality of the retinal image. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of a virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospects of the virtual eye model are emphasized. PMID:23173112
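The Zernike representation reviewed here can be illustrated with a minimal sketch that evaluates a wavefront map on a unit pupil from a few hand-picked coefficients; the three hard-coded modes and the omitted normalization are simplifying assumptions, not the reviewed model.

```python
import numpy as np

def zernike_wavefront(coeffs, n=128):
    # Evaluate a wavefront map on a unit pupil from named Zernike-style modes.
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    pupil = r <= 1.0
    modes = {
        "defocus": 2 * r**2 - 1,              # Z(2,0), unnormalized
        "astig0": r**2 * np.cos(2 * theta),   # Z(2,2)
        "astig45": r**2 * np.sin(2 * theta),  # Z(2,-2)
    }
    w = np.zeros((n, n))
    for name, c in coeffs.items():
        w += c * modes[name]
    # Points outside the pupil carry no wavefront value.
    return np.where(pupil, w, np.nan), pupil

w, pupil = zernike_wavefront({"defocus": 0.5, "astig0": -0.2})
rms = np.nanstd(w)   # wavefront RMS over the pupil
```

A full virtual eye model would extend the mode dictionary to the measured coefficient set and apply the standard normalization.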
Wavefront sensing with all-digital Stokes measurements
NASA Astrophysics Data System (ADS)
Dudley, Angela; Milione, Giovanni; Alfano, Robert R.; Forbes, Andrew
2014-09-01
A long-standing question in optics has been how to efficiently measure the phase (or wavefront) of an optical field. This has led to numerous publications and commercial devices based on, for example, phase-shifting interferometry, wavefront reconstruction via modal decomposition, and Shack-Hartmann wavefront sensors. In this work we develop a new technique to extract the phase which, in contrast to the previously mentioned methods, is based on polarization (or Stokes) measurements. We outline a simple, all-digital approach using only a spatial light modulator and a polarization grating to exploit the amplitude and phase relationship between orthogonal states of polarization to determine the phase of an optical field. We implement this technique to reconstruct the phase of static and propagating optical vortices.
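The amplitude-phase relationship between orthogonal polarization states that such a technique exploits can be shown schematically: for a synthetic field with components Ex and Ey, the inter-component phase follows from the Stokes parameters S2 and S3. The field and the sign convention below are illustrative assumptions, not the authors' measurement model.

```python
import numpy as np

n = 64
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
phi_true = np.arctan2(y, x)          # vortex-like phase to be recovered

Ex = np.ones((n, n), dtype=complex)  # reference polarization component
Ey = np.exp(1j * phi_true)           # orthogonal component carrying the phase

# Stokes parameters built from the two components (one common sign convention).
S2 = 2 * np.real(Ex * np.conj(Ey))
S3 = -2 * np.imag(Ex * np.conj(Ey))
phi_rec = np.arctan2(S3, S2)         # inter-component phase from S2 and S3
err = np.max(np.abs(phi_rec - phi_true))
```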
Model-based sensor-less wavefront aberration correction in optical coherence tomography.
Verstraete, Hans R G W; Wahls, Sander; Kalkman, Jeroen; Verhaegen, Michel
2015-12-15
Several sensor-less wavefront aberration correction methods that correct nonlinear wavefront aberrations by maximizing the optical coherence tomography (OCT) signal are tested on an OCT setup. A conventional coordinate search method is compared to two model-based optimization methods. The first model-based method takes advantage of the well-known optimization algorithm (NEWUOA) and utilizes a quadratic model. The second model-based method (DONE) is new and utilizes a random multidimensional Fourier-basis expansion. The model-based algorithms achieve lower wavefront errors with up to ten times fewer measurements. Furthermore, the newly proposed DONE method outperforms the NEWUOA method significantly. The DONE algorithm is tested on OCT images and shows a significantly improved image quality.
Comparison of different 3D wavefront sensing and reconstruction techniques for MCAO
NASA Astrophysics Data System (ADS)
Bello, Dolores; Vérinaud, Christophe; Conan, Jean-Marc; Fusco, Thierry; Carbillet, Marcel; Esposito, Simone
2003-02-01
The vertical distribution of the turbulence limits the field of view of classical adaptive optics due to anisoplanatism. Multiconjugate adaptive optics (MCAO) uses several deformable mirrors conjugated to different layers in the atmosphere to overcome this effect. In the last few years, many studies and developments have addressed the analysis of the turbulence volume and the choice of wavefront reconstruction techniques. An extensive study of MCAO modelling and performance estimation has been done at OAA and ONERA. The Monte Carlo codes developed make it possible to simulate and investigate many aspects: comparison of turbulence analysis strategies (tomography or layer oriented) and comparison of different reconstruction approaches. For instance, in the layer-oriented approach, the control for a given deformable mirror can be deduced either from the whole set of wavefront sensor measurements or from the associated wavefront sensor only. Numerical simulations are presented showing the advantages and disadvantages of these different options for several cases depending on the number, geometry and magnitude of the guide stars.
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration with just one perturbation applied to the deformable mirror (one correction per perturbation); the correction is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which obtains one aberration correction only after applying N perturbations to the deformable mirror (one correction per N perturbations).
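The relation the method relies on can be probed numerically without assuming anything about the authors' implementation: the sketch below computes the second moment of a far-field intensity pattern for two tilt aberrations of different strength and checks that it grows with the aberration. Grid sizes and scalings are ad hoc.

```python
import numpy as np

def second_moment(phase, pad=4):
    # Far-field intensity of a circular pupil carrying the given phase, and
    # the intensity's second moment about the optical axis (arbitrary units).
    n = phase.shape[0]
    yy, xx = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    pupil = (np.hypot(xx, yy) <= 1.0).astype(float)
    field = np.zeros((pad * n, pad * n), dtype=complex)
    field[:n, :n] = pupil * np.exp(1j * phase)   # zero-padded pupil field
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    m = pad * n
    v, u = np.mgrid[-m // 2:m // 2, -m // 2:m // 2]
    return np.sum(psf * (u**2 + v**2)) / np.sum(psf)

n = 64
yy, xx = np.mgrid[-1:1:1j * n, -1:1:1j * n]
m_small = second_moment(1.0 * xx)   # weak tilt: small mean-square gradient
m_large = second_moment(3.0 * xx)   # stronger tilt: larger mean-square gradient
```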
Optimal wavefront estimation of incoherent sources
NASA Astrophysics Data System (ADS)
Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler
2014-08-01
Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.
Shack-Hartmann wavefront sensor with large dynamic range.
Xia, Mingliang; Li, Chao; Hu, Lifa; Cao, Zhaoliang; Mu, Quanquan; Xuan, Li
2010-01-01
A new spot centroid detection algorithm for a Shack-Hartmann wavefront sensor (SHWFS) is experimentally investigated. The algorithm is a kind of dynamic tracking algorithm that tracks and calculates the corresponding spot centroid of the current spot map based on the spot centroid of the previous spot map, according to the strong correlation of the wavefront slope and the centroid of the corresponding spot between temporally adjacent SHWFS measurements. That is, for adjacent measurements, the spot centroid movement will usually fall within some range. Using the algorithm, the dynamic range of an SHWFS can be expanded by a factor of three in the measurement of tilt aberration compared with the conventional algorithm, more than 1.3 times in the measurement of defocus aberration, and more than 2 times in the measurement of the mixture of spherical aberration plus coma aberration. The algorithm is applied in our SHWFS to measure the distorted wavefront of the human eye. The experimental results of the adaptive optics (AO) system for retina imaging are presented to prove its feasibility for highly aberrated eyes.
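The dynamic-tracking idea can be sketched as follows: the centroid of the current spot is computed inside a window centred on the previous frame's centroid rather than inside a fixed subaperture cell. All names and sizes below are illustrative, not the authors' implementation.

```python
import numpy as np

def gaussian_spot(shape, cy, cx, sigma=1.5):
    # Synthetic Shack-Hartmann spot at (cy, cx).
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma**2))

def windowed_centroid(img, prev_yx, half=6):
    # Centroid computed inside a window centred on the previous centroid.
    py, px = int(round(prev_yx[0])), int(round(prev_yx[1]))
    y0, y1 = max(py - half, 0), min(py + half + 1, img.shape[0])
    x0, x1 = max(px - half, 0), min(px + half + 1, img.shape[1])
    win = img[y0:y1, x0:x1]
    yy, xx = np.mgrid[y0:y1, x0:x1]
    s = win.sum()
    return (np.sum(yy * win) / s, np.sum(xx * win) / s)

# The spot drifts between frames; track it from a known starting centroid.
frame1 = gaussian_spot((32, 32), 12.0, 14.0)
frame2 = gaussian_spot((32, 32), 14.5, 16.0)
c1 = windowed_centroid(frame1, (12.0, 14.0))
c2 = windowed_centroid(frame2, c1)
```

Because the window follows the spot, a displacement that would leave a fixed subaperture cell (and saturate the dynamic range of the conventional algorithm) stays trackable.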
Design and Implementation of the PALM-3000 Real-Time Control System
NASA Technical Reports Server (NTRS)
Truong, Tuan N.; Bouchez, Antonin H.; Burruss, Rick S.; Dekany, Richard G.; Guiwits, Stephen R.; Roberts, Jennifer E.; Shelton, Jean C.; Troy, Mitchell
2012-01-01
This paper reflects, from a computational perspective, on the experience gathered in designing and implementing real-time control of the PALM-3000 adaptive optics system currently in operation at the Palomar Observatory. We review the algorithms that serve as functional requirements driving the architecture developed, and describe key design issues and solutions that contributed to the system's low compute latency. Additionally, we describe an implementation of dense matrix-vector multiplication for wavefront reconstruction that exceeds 95% of the maximum sustained achievable bandwidth on an NVIDIA GeForce 8800 GTX GPU.
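The per-frame workload described, a dense matrix-vector multiply mapping wavefront-sensor slopes to actuator commands, reduces to a single product. The sketch below uses toy dimensions and a random reconstructor, not PALM-3000's; on a GPU the same product is what gets tiled and bandwidth-optimized.

```python
import numpy as np

rng = np.random.default_rng(1)
n_slopes, n_actuators = 512, 241   # illustrative sizes only
# Precomputed reconstructor matrix (in practice derived from calibration).
R = rng.normal(size=(n_actuators, n_slopes)) / n_slopes
slopes = rng.normal(size=n_slopes)   # one frame of wavefront-sensor slopes
commands = R @ slopes                # the per-frame dense MVM (the hot loop)
```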
Development of a pyramidal wavefront sensor test-bench at INO
NASA Astrophysics Data System (ADS)
Turbide, Simon; Wang, Min; Gauvin, Jonny; Martin, Olivier; Savard, Maxime; Bourqui, Pascal; Veran, Jean-Pierre; Deschenes, William; Anctil, Genevieve; Chateauneuf, François
2013-12-01
A key technical element of adaptive optics in astronomy is wavefront sensing (WFS). One of the advantages of the pyramid wavefront sensor (P-WFS) over the widely used Shack-Hartmann wavefront sensor is its increased sensitivity in closed-loop applications. A high-sensitivity, large-dynamic-range WFS such as the P-WFS still needs further investigation to justify its use in future Extremely Large Telescope applications. At INO, we have recently carried out the optical design, testing and performance evaluation of a P-WFS bench setup. The optical design of the bench setup mainly consists of a super-LED fiber source, a source collimator, a spatial light modulator (SLM), relay lenses, a tip-tilt mirror, a Fourier-transforming lens, and a four-faceted glass pyramid with a large vertex angle, as well as pupil re-imaging optics. The phase-only SLM is used in the bench setup to generate atmospheric turbulence with a maximum phase shift of more than 2π at each pixel (256 grey levels). Like a modified Foucault knife-edge test, the refractive pyramid element produces four images of the entrance pupil on a CCD camera. The Fourier-transforming lens, placed before the pyramid prism, is designed for telecentric output to allow dynamic modulation (rotation of the beam around the pyramid-prism center) by the tip-tilt mirror. Furthermore, a diffraction-based P-WFS model has been developed. This model includes most of the system limitations, such as the discrete SLM voltage steps and the CCD pixel pitch; the pyramid edge and tip effects are considered as well. The modal wavefront reconstruction algorithm relies on the construction of an interaction matrix (one for each modulation amplitude). Each column of the interaction matrix represents the combination of the four pupil images for a given wavefront aberration.
The close agreement between the data and the model suggests that the limitation of the system is not the P-WFS itself, but rather its environment, such as source intensity fluctuations and vibration of the optical bench. Finally, the phase-reconstruction errors of the P-WFS have been compared to those of a Shack-Hartmann sensor, showing the regions of interest of the former system. The bench setup will address astronomical as well as commercial applications, such as biomedical imaging.
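The interaction-matrix reconstruction step described above can be sketched generically: columns of the matrix hold the sensor responses to unit amounts of each mode, and the control matrix is its pseudo-inverse. The linear sensor model below is synthetic, not the P-WFS diffraction model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_signals, n_modes = 200, 15
# Columns of M are the sensor responses to unit amounts of each mode
# (in practice measured by applying the modes one at a time).
M = rng.normal(size=(n_signals, n_modes))
control = np.linalg.pinv(M)                    # control (reconstruction) matrix

a_true = rng.normal(size=n_modes)              # unknown modal aberration
signal = M @ a_true + 0.01 * rng.normal(size=n_signals)  # noisy measurement
a_hat = control @ signal                       # reconstructed mode coefficients
err = np.max(np.abs(a_hat - a_true))
```

In the bench described above, one such matrix would be built per modulation amplitude.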
Plenoptic wavefront sensor with scattering pupil.
Vdovin, Gleb; Soloviev, Oleg; Loktev, Mikhail
2014-04-21
We consider a wavefront sensor combining a scattering pupil with a plenoptic imager. Such a sensor uses the same reconstruction principle as the Hartmann-Shack sensor, but it is free from the ambiguity in spot location caused by the periodic structure of the sensor matrix, and it allows for a wider range of measured aberrations. In our study, the sensor with a scattering pupil demonstrated a good match between the introduced and reconstructed aberrations, both in simulation and in experiment. The concept is expected to be applicable to the optical metrology of strongly distorted wavefronts, especially for measurements through dirty, distorted, or scattering windows and pupils, such as cataract eyes.
Analysis of Spacelab-III Reconstructed Wavefronts by Non-Holographic Methods
NASA Technical Reports Server (NTRS)
Vikram, Chandra S.; Witherow, William K.; Rose, M. Franklin (Technical Monitor)
2001-01-01
Holography has been used in several past space missions. One popular experimental mode deals with the study of fluid refractive properties in a crystal growth cell. The perceived advantage of holography is that it stores and reconstructs wavefronts so that complete information is available later on the ground. That means the wavefront can be analyzed not only by traditional holographic interferometry but by other means as well. We have successfully demonstrated two such methods, described here: one is deflectometry using a Ronchi grating, and the other is confocal optical processing. These results, obtained using holograms from the Spacelab-III mission on triglycine sulfate crystal growth, clearly demonstrate that a single piece of hardware (holography) can do the work of several fluid experimental systems. Finally, though not demonstrated experimentally, the possibility of other analysis modes, such as speckle techniques and video holography using the reconstructed wavefronts, is described. Since only traditional holographic interferometry has been used in the past, leading to the argument that non-holographic interferometry hardware in space could do the job, the present study firmly establishes the advantage of holography.
Parallel-Computing Architecture for JWST Wavefront-Sensing Algorithms
2011-09-01
[Fragmentary abstract] Phase retrieval is an image-based wavefront-sensing technique. For broadband illumination problems, hand-tuning the matrix sizes was found to account for a speedup of up to 86x.
Solar multi-conjugate adaptive optics performance improvement
NASA Astrophysics Data System (ADS)
Zhang, Zhicheng; Zhang, Xiaofang; Song, Jie
2015-08-01
In order to overcome the effect of atmospheric anisoplanatism, multi-conjugate adaptive optics (MCAO) has been widely used to widen the field of view (FOV) of solar telescopes. MCAO corrects turbulence by means of several deformable mirrors (DMs) conjugated to different altitudes, overcoming the small corrected FOV achievable with classical AO. With the assistance of the Multi-threaded Adaptive Optics Simulator (MAOS), we can make a 3D reconstruction of the distorted wavefront; the correction is applied by one or more DMs. This technique benefits from information about atmospheric turbulence at different layers, which can be used to reconstruct the wavefront extremely well. In MAOS, the sensors are simulated either as idealized wavefront gradient sensors, as tip-tilt sensors based on the best Zernike fit, or as a WFS using physical optics and incorporating user-specified pixel characteristics and a matched-filter pixel processing algorithm. Considering only atmospheric anisoplanatism, we focus on how the performance of a solar MCAO system is related to the number of DMs and their conjugate heights. We theoretically quantify the performance of the tomographic solar MCAO system. The results indicate that the tomographic AO system can improve the average Strehl ratio of a solar telescope by employing only one or two DMs conjugated to the optimum altitudes, and that the Strehl ratio increases significantly when more deformable mirrors are used. Furthermore, we discuss the effects of DM conjugate altitude on the correction achievable by the MCAO system, and present the optimum DM conjugate altitudes.
Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.
Haber, Aleksandar; Verhaegen, Michel
2016-11-15
We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
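The locality claim can be illustrated in a 1-D Hudgin-like toy problem (a simplification for illustration; the paper treats 2-D sensor geometries): a regularized least-squares reconstructor has rows that decay away from the diagonal, so truncating each row to the point's neighbourhood yields a sparse operator whose estimate stays close to the dense one.

```python
import numpy as np

n = 128
# Hudgin-like 1-D geometry: measured slopes are first differences of the wavefront.
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n, k=0)
alpha = 0.1   # regularization / noise weight
# Minimum-variance-style reconstructor; its rows decay away from the diagonal.
R = np.linalg.solve(D.T @ D + alpha * np.eye(n), D.T)
# Localize: keep only entries within a neighbourhood of each evaluation point.
i = np.arange(n)[:, None]
j = np.arange(n - 1)[None, :]
T = np.where(np.abs(i - j) <= 20, R, 0.0)

rng = np.random.default_rng(2)
w_true = np.cumsum(0.1 * rng.normal(size=n))
w_true -= w_true.mean()            # remove the unobservable piston mode
slopes = D @ w_true
w_full = R @ slopes                # dense "optimal" estimate
w_local = T @ slopes               # sparse, localized estimate
rel_err = np.linalg.norm(w_local - w_full) / np.linalg.norm(w_full)
```

In practice T would be stored as a sparse matrix (e.g. with scipy.sparse), making the estimate a single sparse matrix-vector product.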
Virtual pyramid wavefront sensor for phase unwrapping.
Akondi, Vyas; Vohnsen, Brian; Marcos, Susana
2016-10-10
Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.
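For background, the wrapping problem being addressed can be shown in one dimension: noise-free, well-sampled wrapped phase can be unwrapped by integrating wrapped first differences (Itoh's method), which is what numpy's unwrap does. The virtual-sensor approach above targets the noisy cases where this simple route becomes unreliable.

```python
import numpy as np

x = np.linspace(0.0, 4.0, 200)
phi_true = 3 * np.pi * x**2 / 4             # smooth phase spanning many 2*pi
wrapped = np.angle(np.exp(1j * phi_true))   # interferometer-style wrapped phase
unwrapped = np.unwrap(wrapped)              # integrate wrapped first differences
err = np.max(np.abs(unwrapped - phi_true))  # both start at 0, so no 2*pi offset
```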
Dual-thread parallel control strategy for ophthalmic adaptive optics.
Yu, Yongxin; Zhang, Yuhua
2015-01-01
To improve ophthalmic adaptive optics speed and compensate for ocular wavefront aberration of high temporal frequency, the adaptive optics wavefront correction has been implemented with a control scheme including two parallel threads; one is dedicated to wavefront detection and the other conducts wavefront reconstruction and compensation. With a custom Shack-Hartmann wavefront sensor that measures the ocular wave aberration with 193 subapertures across the pupil, adaptive optics has achieved a closed loop updating frequency up to 110 Hz, and demonstrated robust compensation for ocular wave aberration up to 50 Hz in an adaptive optics scanning laser ophthalmoscope. PMID:25866498
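The dual-thread split can be sketched schematically (this is not the authors' code): one thread stands in for wavefront detection and hands frames off through a queue, while the other stands in for reconstruction and compensation, so the two loops run concurrently.

```python
import queue
import threading

frames = queue.Queue(maxsize=4)   # hand-off buffer between the two threads
corrections = []

def detection_thread(n_frames=20):
    # Stand-in for wavefront detection: produce slope "frames".
    for i in range(n_frames):
        frames.put(("slopes", i))
    frames.put(None)              # sentinel: acquisition finished

def reconstruction_thread():
    # Stand-in for wavefront reconstruction and compensation.
    while True:
        item = frames.get()
        if item is None:
            break
        _, i = item
        corrections.append(2 * i)  # placeholder "DM command" computation

t1 = threading.Thread(target=detection_thread)
t2 = threading.Thread(target=reconstruction_thread)
t1.start(); t2.start()
t1.join(); t2.join()
```

The bounded queue provides back-pressure: detection blocks if reconstruction falls behind, keeping the pipeline latency bounded.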
Experimental results for correlation-based wavefront sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A; Palmer, D W; LaFortune, K N
2005-07-01
Correlation wavefront sensing can improve adaptive optics (AO) system performance in two key areas. For point-source-based AO systems, correlation is more accurate, more robust to changing conditions, and provides lower noise than a centroiding algorithm. Experimental results from the Lick AO system and the SSHCL laser AO system confirm this. For remote imaging, correlation enables the use of extended objects for wavefront sensing. Results from short horizontal-path experiments show algorithm properties and requirements.
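The correlation step itself can be sketched as an FFT-based cross-correlation whose peak location gives the shift of the subaperture image relative to a reference (integer pixels only here; real systems add sub-pixel interpolation around the peak). The scene below is synthetic.

```python
import numpy as np

def corr_shift(ref, img):
    # FFT-based circular cross-correlation; the peak gives the displacement
    # of img relative to ref.
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(c), c.shape)
    # Map peak indices to signed shifts.
    return tuple((p + s // 2) % s - s // 2 for p, s in zip(peak, c.shape))

rng = np.random.default_rng(3)
ref = rng.random((32, 32))                      # extended reference scene
img = np.roll(ref, shift=(3, -2), axis=(0, 1))  # same scene, displaced
shift = corr_shift(ref, img)
```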
An analysis of printing conditions for wavefront overlapping printing
NASA Astrophysics Data System (ADS)
Ichihashi, Y.; Yamamoto, K.; Wakunami, K.; Oi, R.; Okui, M.; Senoh, T.
2017-03-01
Wavefront printing for digitally-designed holograms has attracted attention recently. In this printing approach, a spatial light modulator (SLM) displays the hologram data, and the wavefront is reproduced by irradiating the hologram with a reference light, in the same way as in electronic holography. However, the pixel count of current SLM devices is not enough to display an entire hologram. To generate a practical digitally-designed hologram, the entire hologram data is divided into a set of sub-holograms, and the wavefront reproduced by each sub-hologram is sequentially recorded in a tiling manner using an X-Y motorized stage. Due to the limited positioning accuracy of the X-Y motorized stage and the temporally incoherent recording, the phase continuity of the recorded/reproduced wavefront is lost between neighboring sub-holograms. In this paper, we generate holograms that have different sub-hologram sizes, with and without overlap, and verify the effect of sub-hologram size on the reconstructed images. The results show that the reconstructed images degrade with decreasing sub-hologram size, and that there is little or no degradation of quality when wavefront printing with overlap is used.
Terahertz wavefront assessment based on 2D electro-optic imaging
NASA Astrophysics Data System (ADS)
Cahyadi, Harsono; Ichikawa, Ryuji; Degert, Jérôme; Freysz, Eric; Yasui, Takeshi; Abraham, Emmanuel
2015-03-01
Complete characterization of terahertz (THz) radiation has been an interesting yet challenging problem for many years. In the visible region, wavefront assessment has proven to be a powerful tool for beam profiling and characterization; it requires two-dimensional (2D) single-shot acquisition of the beam cross-section to provide the spatial profile in the time and frequency domains. In the THz region, the main problem is the lack of effective THz cameras to satisfy this need. In this communication, we propose a simple setup based on free-space collinear 2D electro-optic sampling in a ZnTe crystal for the characterization of THz wavefronts. In principle, we map the optically converted, time-resolved data of the THz pulse by changing the time delay between the probe pulse and the generated THz pulse. The temporal waveforms from different lens-ZnTe distances clearly indicate the evolution of the THz beam as it converges, focuses, or diverges. From the Fourier transform of the temporal waveforms, we can obtain the spectral profile of a broadband THz wave, which in this case lies within the 0.1-2 THz range. The spectral profile also provides the frequency dependence of the THz pulse amplitude. The comparison between experimental and theoretical results at selected frequencies (here 0.285 and 1.035 THz) shows good agreement, suggesting that our system is capable of THz wavefront characterization. Furthermore, the implementation of the Hartmann/Shack-Hartmann sensor principle enables the reconstruction of THz wavefronts. We demonstrate the reconstruction of THz wavefronts that change from planar to spherical due to the insertion of a convex THz lens in the THz beam path. We apply and compare two different reconstruction methods: linear integration and Zernike polynomials. We conclude that the Zernike method provides a smoother wavefront shape that can later be elaborated into a quantitative and qualitative analysis of the wavefront distortion.
Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.
Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M
2017-05-15
We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first and second moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy for SH arrays with a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for applications in low-light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher-order aberrations, and that SABRE-M can match the performance of SABRE on a SH grid of halved sampling.
An ANN-Based Smart Tomographic Reconstructor in a Dynamic Environment
de Cos Juez, Francisco J.; Lasheras, Fernando Sánchez; Roqueñí, Nieves; Osborn, James
2012-01-01
In astronomy, the light emitted by an object travels through the vacuum of space and then the turbulent atmosphere before arriving at a ground-based telescope. In passing through the atmosphere, a series of turbulent layers modify the light's wave-front in such a way that Adaptive Optics reconstruction techniques are needed to improve the image quality. A novel reconstruction technique based on Artificial Neural Networks (ANN) is proposed. The network is designed to use the local tilts of the wave-front measured by a Shack-Hartmann Wave-front Sensor (SHWFS) as inputs and estimate the turbulence in terms of Zernike coefficients. The ANN used is a Multi-Layer Perceptron (MLP) trained with simulated data with one turbulent layer changing in altitude. The reconstructor was tested using three different atmospheric profiles and compared with two existing reconstruction techniques: Least Squares type Matrix Vector Multiplication (LS) and Learn and Apply (L + A). PMID:23012524
Waffle mode error in the AEOS adaptive optics point-spread function
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Graham, James R.
2003-02-01
Adaptive optics (AO) systems have improved astronomical imaging capabilities significantly over the last decade, and have the potential to revolutionize the kinds of science done with 4-5m class ground-based telescopes. However, with sufficiently detailed study and analysis, existing AO systems can be improved beyond their original specified error budgets. Indeed, modeling AO systems has been a major activity in the past decade: sources of noise in the atmosphere and the wavefront sensing (WFS) control loop have received a great deal of attention, and many detailed and sophisticated control-theoretic and numerical models predicting AO performance already exist. However, in terms of AO system performance improvements, wavefront reconstruction (WFR) and wavefront calibration techniques have commanded relatively little attention. We elucidate the nature of some of these reconstruction problems, and demonstrate their existence in data from the AEOS AO system. We simulate the AO correction of AEOS in the I-band, and show that the magnitude of the `waffle mode' error in the AEOS reconstructor is considerably larger than expected. We suggest ways of reducing the magnitude of this error, and, in doing so, open up ways of understanding how wavefront reconstruction might handle bad actuators and partially-illuminated WFS subapertures.
Baranec, Christoph; Dekany, Richard
2008-10-01
We introduce a Shack-Hartmann wavefront sensor for adaptive optics that enables dynamic control of the spatial sampling of an incoming wavefront using a segmented mirror microelectrical mechanical systems (MEMS) device. Unlike a conventional lenslet array, subapertures are defined by either segments or groups of segments of a mirror array, with the ability to change spatial pupil sampling arbitrarily by redefining the segment grouping. Control over the spatial sampling of the wavefront allows for the minimization of wavefront reconstruction error for different intensities of guide source and different atmospheric conditions, which in turn maximizes an adaptive optics system's delivered Strehl ratio. Requirements for the MEMS devices needed in this Shack-Hartmann wavefront sensor are also presented.
NASA Astrophysics Data System (ADS)
Woeger, Friedrich; Rimmele, Thomas
2009-10-01
We analyze the effect of anisoplanatic atmospheric turbulence on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor (HSWFS). We have numerically simulated an extended-source HSWFS, using a scenery of the solar surface that is imaged through anisoplanatic atmospheric turbulence and imaging optics. Solar extended-source HSWFSs often use cross-correlation algorithms in combination with subpixel shift finding algorithms to estimate the wavefront gradient, two of which were tested for their effect on the measurement accuracy. We find that the measurement error of an extended-source HSWFS is governed mainly by the optical geometry of the HSWFS, the employed subpixel finding algorithm, and phase anisoplanatism. Our results show that effects of scintillation anisoplanatism are negligible when cross-correlation algorithms are used.
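Cross-correlation shift finding typically refines the integer correlation peak with a subpixel interpolation step. A minimal sketch of one common choice, three-point parabolic peak interpolation (an illustrative algorithm; the two algorithms actually tested in the paper may differ):

```python
import numpy as np

def parabolic_subpixel(corr, peak):
    """Refine an integer correlation peak to subpixel precision by fitting a
    1-D parabola through the peak value and its two neighbours on each axis."""
    py, px = peak
    def vertex(m1, c, p1):
        # Vertex offset of the parabola through (-1, m1), (0, c), (+1, p1).
        denom = m1 - 2.0 * c + p1
        return 0.0 if denom == 0.0 else 0.5 * (m1 - p1) / denom
    dy = vertex(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = vertex(corr[py, px - 1], corr[py, px], corr[py + 1, px] if False else corr[py, px + 1])
    return py + dy, px + dx
```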
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach can obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iteration steps than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
NASA Astrophysics Data System (ADS)
Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying
2018-03-01
In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. First, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build iterative solution formulas for the AO image based on our proposed algorithm and describe the implementation of the multi-frame AO image joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The research results have practical value for actual AO image restoration.
Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing
2014-10-01
Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region is the full unit square circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. This study compares these four orthogonal polynomials by theoretical analysis and numerical experiments in terms of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
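For the product-form bases, a modal fit over the square reduces to linear least squares on sampled basis values. A minimal sketch using 2D Legendre products (illustrative only; the function names and normalization here are assumptions, not the paper's code):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre2d_basis(x, y, order):
    """Design matrix of products L_i(x) * L_j(y) with i + j <= order,
    evaluated at sample points on the square [-1, 1]^2."""
    cols = []
    for i in range(order + 1):
        for j in range(order + 1 - i):
            ci = np.zeros(i + 1); ci[i] = 1.0   # coefficient vector selecting L_i
            cj = np.zeros(j + 1); cj[j] = 1.0
            cols.append(legendre.legval(x, ci) * legendre.legval(y, cj))
    return np.column_stack(cols)

def fit_wavefront(x, y, w, order=3):
    """Modal least-squares fit of wavefront samples w over a square aperture."""
    A = legendre2d_basis(x, y, order)
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    return coeffs, A @ coeffs
```

The same fitting skeleton applies to the other three bases; only the design-matrix columns change.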
NASA Astrophysics Data System (ADS)
Xuan, Li; He, Bin; Hu, Li-Fa; Li, Da-Yu; Xu, Huan-Yu; Zhang, Xing-Yun; Wang, Shao-Xin; Wang, Yu-Kun; Yang, Cheng-Liang; Cao, Zhao-Liang; Mu, Quan-Quan; Lu, Xing-Hai
2016-09-01
Multi-conjugate adaptive optics (MCAO) systems have been investigated and used in large-aperture optical telescopes for high-resolution imaging with a large field of view (FOV). The atmospheric tomographic phase reconstruction and the projection of the three-dimensional turbulence volume onto wavefront correctors, such as deformable mirrors (DMs) or liquid crystal wavefront correctors (LCWCs), are very important steps in the data processing of an MCAO controller. In this paper, a method based on the wavefront reconstruction performance of MCAO is presented to evaluate the optimized configuration of multiple laser guide stars (LGSs) and the appropriate conjugation heights of the LCWCs. Analytical formulations are derived for the different configurations and are used to generate optimized parameters for MCAO. Several examples are given to demonstrate our LGS configuration optimization method. Compared with traditional methods, our method has the minimum wavefront tomographic error, which will help achieve higher imaging resolution over a large FOV in MCAO. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174274, 11174279, 61205021, 11204299, 61475152, and 61405194) and the State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences.
Development of a wavefront sensor for terahertz pulses.
Abraham, Emmanuel; Cahyadi, Harsono; Brossard, Mathilde; Degert, Jérôme; Freysz, Eric; Yasui, Takeshi
2016-03-07
Wavefront characterization of terahertz pulses is essential to optimize the far-field intensity distribution of time-domain (imaging) spectrometers or to increase the peak power of intense terahertz sources. In this paper, we report on the wavefront measurement of terahertz pulses using a Hartmann sensor associated with a 2D electro-optic imaging system composed of a ZnTe crystal and a CMOS camera. We quantitatively determined the deformations of planar and converging spherical wavefronts using the modal least-squares Zernike reconstruction method. Associated with deformable mirrors, the sensor will also open the route to terahertz adaptive optics.
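Modal least-squares Zernike reconstruction from Hartmann slopes solves a linear system built from the Zernike gradients. A minimal sketch for the first five Noll modes (tip, tilt, defocus, and the two astigmatisms) on the unit disk; the mode set and normalization here are illustrative assumptions, not necessarily those of the paper:

```python
import numpy as np

def zernike_gradients(x, y):
    """x- and y-gradients of five Noll Zernike modes on the unit disk:
    tip 2x, tilt 2y, defocus sqrt(3)(2r^2 - 1), oblique astigmatism
    2*sqrt(6)*x*y, vertical astigmatism sqrt(6)(x^2 - y^2)."""
    rt3, rt6 = np.sqrt(3.0), np.sqrt(6.0)
    one, zero = np.ones_like(x), np.zeros_like(x)
    gx = np.column_stack([2 * one, zero, 4 * rt3 * x, 2 * rt6 * y,  2 * rt6 * x])
    gy = np.column_stack([zero, 2 * one, 4 * rt3 * y, 2 * rt6 * x, -2 * rt6 * y])
    return gx, gy

def modal_reconstruct(x, y, sx, sy):
    """Least-squares Zernike coefficients from measured x/y slopes."""
    gx, gy = zernike_gradients(x, y)
    A = np.vstack([gx, gy])            # stack both slope equation sets
    s = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, s, rcond=None)
    return coeffs
```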
An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes
Magdaleno, Eduardo; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel
2010-01-01
In this paper we show a fast, specialized hardware implementation of the wavefront phase recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream using a massively parallel approach, and they are efficient enough to resolve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing time requirements. The FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier transforms (FFTs). Thus we have carried out a comparison between our novel FPGA 2D-FFT and other implementations. PMID:22315523
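As background on the FFT-based flavor of phase recovery that such a pipeline accelerates, a periodic phase map can be recovered from its x/y slopes by dividing the slope spectrum by the spatial-frequency vector in Fourier space. This is a standard textbook method shown here in NumPy for illustration, not the patented CAFADIS pipeline:

```python
import numpy as np

def fft_integrate_slopes(sx, sy):
    """Fourier-transform wavefront reconstruction from slope maps sx, sy."""
    n, m = sx.shape
    ky = np.fft.fftfreq(n)[:, None]          # cycles per sample
    kx = np.fft.fftfreq(m)[None, :]
    Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                           # avoid 0/0; piston handled below
    Phi = (-1j / (2.0 * np.pi)) * (kx * Sx + ky * Sy) / k2
    Phi[0, 0] = 0.0                          # piston is unrecoverable from slopes
    return np.fft.ifft2(Phi).real
```

The whole reconstruction is two forward FFTs and one inverse FFT, which is why a fast 2D-FFT core dominates the hardware design.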
Design and realization of adaptive optical principle system without wavefront sensing
NASA Astrophysics Data System (ADS)
Wang, Xiaobin; Niu, Chaojun; Guo, Yaxing; Han, Xiang'e.
2018-02-01
In this paper, we focus on improving the performance of a free-space optical communication system through research on wavefront-sensorless adaptive optics. We use a phase-only liquid crystal spatial light modulator (SLM) as the wavefront corrector. The optical intensity distribution of the distorted wavefront is detected by a CCD. We develop a wavefront controller based on ARM and software based on the Linux operating system; the controller drives both the CCD camera and the wavefront corrector. There are two SLMs in the experimental system: one simulates atmospheric turbulence and the other compensates the wavefront distortion. The experimental results show that the performance quality metric (the total gray value of 25 pixels) increases from 3037 to 4863 after 200 iterations. It is also demonstrated that our wavefront-sensorless adaptive optics system based on the SPGD algorithm performs well in compensating wavefront distortion.
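The SPGD (stochastic parallel gradient descent) loop used in such wavefront-sensorless systems can be sketched generically as follows; `metric` stands for any scalar image-quality measure (such as the total gray value of selected pixels), and the gain and perturbation amplitude are illustrative assumptions:

```python
import numpy as np

def spgd(metric, u0, gain=1.0, amp=0.1, iters=2000, rng=None):
    """Stochastic Parallel Gradient Descent: maximize a scalar quality
    metric J(u) over corrector control values u using paired random
    (Bernoulli) perturbations applied to all channels in parallel."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(iters):
        du = amp * rng.choice([-1.0, 1.0], size=u.shape)  # +/- perturbation
        dJ = metric(u + du) - metric(u - du)              # two-sided difference
        u += gain * dJ * du                               # gradient-like update
    return u
```

Each iteration needs only two metric evaluations regardless of the number of corrector channels, which is the attraction of SPGD for high-dimensional correctors.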
Digital pyramid wavefront sensor with tunable modulation.
Akondi, Vyas; Castillo, Sara; Vohnsen, Brian
2013-07-29
The pyramid wavefront sensor is known for its high sensitivity and a dynamic range that can be tuned by mechanically altering its modulation amplitude. Here, a novel modulating digital scheme employing a reflective phase-only spatial light modulator is demonstrated. The use of the modulator allows an easily reconfigurable pyramid with digital control of the apex angle and modulation geometry, without the need for any mechanically moving parts. Aberrations introduced by a 140-actuator deformable mirror were simultaneously sensed with the help of a commercial Hartmann-Shack wavefront sensor. The wavefronts reconstructed using the digital pyramid wavefront sensor matched very closely those sensed by the Hartmann-Shack. It is noted that tunable modulation is necessary to operate the wavefront sensor in the linear regime and to accurately sense aberrations. Through simulations, it is shown that the wavefront sensor can be extended to astronomical applications as well. This novel digital pyramid wavefront sensor has the potential to become an attractive option in both open- and closed-loop adaptive optics systems.
Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm
NASA Technical Reports Server (NTRS)
Sidick, Erikin
2009-01-01
A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot measurably shifts from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use scenes as well as point sources for wavefront error detection. Qualifying an extended-scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique based on the ACC algorithm analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended-scene image. If it is viable, then it can be used for error correction; if it is not, the image fails and will not be processed further. By testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, say, from m = -5 to 5 along the x-direction by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction.
If the resulting shift estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted. Otherwise, it is rejected.
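The accept/reject self-test described above can be sketched as follows. Note that this sketch substitutes a plain integer-pixel FFT correlator and a looser threshold for the ACC algorithm and its 0.03-pixel threshold, purely to illustrate the logic:

```python
import numpy as np

def estimate_shift(test, ref):
    """Integer-pixel shift via FFT cross-correlation (stand-in for ACC)."""
    corr = np.fft.ifft2(np.fft.fft2(test) * np.conj(np.fft.fft2(ref))).real
    est = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape, dtype=float)
    est[est > dims / 2] -= dims[est > dims / 2]
    return est

def scene_is_viable(cell, shifts=range(-5, 6), threshold=0.5):
    """Shift a cell by known integer amounts along x and y, re-estimate each
    shift, and accept the scene only if every error stays below threshold."""
    for m in shifts:
        est = estimate_shift(np.roll(cell, m, axis=1), cell)   # x-direction
        if abs(est[1] - m) > threshold or abs(est[0]) > threshold:
            return False
        est = estimate_shift(np.roll(cell, m, axis=0), cell)   # y-direction
        if abs(est[0] - m) > threshold or abs(est[1]) > threshold:
            return False
    return True
```

A featureless cell gives ambiguous correlations, so its estimated shifts disagree with the commanded ones and the scene is rejected.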
NASA Astrophysics Data System (ADS)
Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu
2018-04-01
A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally, and it functions as a guideline for the integration path. The presented method can be employed for wavefront estimation from slopes over a generally shaped surface, with slope noise equivalent to that of practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by irregular shapes of the surface under test. The performance of QMPI is discussed through simulations and an experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration, in contrast to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision demands.
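For comparison, the Southwell zonal reconstruction (SZR) baseline can be sketched as a least-squares problem matching neighbouring phase differences to averaged slopes. This is a dense toy implementation for small grids; practical SZR uses sparse or iterative solvers:

```python
import numpy as np

def southwell_reconstruct(sx, sy):
    """Southwell zonal reconstruction: phase values sit on the same grid as
    the slopes; each neighbouring phase difference is matched, in the
    least-squares sense, to the average of the two adjacent slope samples."""
    n, m = sx.shape
    idx = lambda i, j: i * m + j
    rows, cols, vals, b = [], [], [], []
    r = 0
    for i in range(n):                       # x-direction difference equations
        for j in range(m - 1):
            rows += [r, r]; cols += [idx(i, j + 1), idx(i, j)]; vals += [1.0, -1.0]
            b.append(0.5 * (sx[i, j] + sx[i, j + 1])); r += 1
    for i in range(n - 1):                   # y-direction difference equations
        for j in range(m):
            rows += [r, r]; cols += [idx(i + 1, j), idx(i, j)]; vals += [1.0, -1.0]
            b.append(0.5 * (sy[i, j] + sy[i + 1, j])); r += 1
    A = np.zeros((r, n * m))
    A[rows, cols] = vals
    phi, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    phi = phi.reshape(n, m)
    return phi - phi.mean()                  # remove the unobservable piston
```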
NASA Astrophysics Data System (ADS)
Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi
2016-08-01
The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles including: the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).
The wavefront of the radio signal emitted by cosmic ray air showers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apel, W.D.; Bekk, K.; Blümer, J.
2014-09-01
Analyzing measurements of the LOPES antenna array together with corresponding CoREAS simulations for more than 300 measured events with energy above 10^17 eV and zenith angles smaller than 45°, we find that the radio wavefront of cosmic-ray air showers is of approximately hyperbolic shape. The simulations predict a slightly steeper wavefront towards East than towards West, but this asymmetry is negligible against the measurement uncertainties of LOPES. At axis distances ≳ 50 m, the wavefront can be approximated by a simple cone. According to the simulations, the cone angle is clearly correlated with the shower maximum. Thus, we confirm earlier predictions that arrival time measurements can be used to study the longitudinal shower development, but now using a realistic wavefront. Moreover, we show that the hyperbolic wavefront is compatible with our measurement, and we present several experimental indications that the cone angle is indeed sensitive to the shower development. Consequently, the wavefront can be used to statistically study the primary composition of ultra-high energy cosmic rays. At LOPES, the experimentally achieved precision for the shower maximum is limited by measurement uncertainties to approximately 140 g/cm^2. But the simulations indicate that under better conditions this method might yield an accuracy for the atmospheric depth of the shower maximum, X_max, better than 30 g/cm^2. This would be competitive with the established air-fluorescence and air-Cherenkov techniques, where the radio technique offers the advantage of a significantly higher duty-cycle. Finally, the hyperbolic wavefront can be used to reconstruct the shower geometry more accurately, which potentially allows a better reconstruction of all other shower parameters, too.
The wavefront of the radio signal emitted by cosmic ray air showers
NASA Astrophysics Data System (ADS)
Apel, W. D.; Arteaga-Velázquez, J. C.; Bähren, L.; Bekk, K.; Bertaina, M.; Biermann, P. L.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Falcke, H.; Fuchs, B.; Gemmeke, H.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Horneffer, A.; Huber, D.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Krömer, O.; Kuijpers, J.; Link, K.; Łuczak, P.; Ludwig, M.; Mathes, H. J.; Melissas, M.; Morello, C.; Oehlschläger, J.; Palmieri, N.; Pierog, T.; Rautenberg, J.; Rebel, H.; Roth, M.; Rühle, C.; Saftoiu, A.; Schieler, H.; Schmidt, A.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Weindl, A.; Wochele, J.; Zabierowski, J.; Zensus, J. A.
2014-09-01
Analyzing measurements of the LOPES antenna array together with corresponding CoREAS simulations for more than 300 measured events with energy above 10^17 eV and zenith angles smaller than 45°, we find that the radio wavefront of cosmic-ray air showers is of approximately hyperbolic shape. The simulations predict a slightly steeper wavefront towards East than towards West, but this asymmetry is negligible against the measurement uncertainties of LOPES. At axis distances ≳ 50 m, the wavefront can be approximated by a simple cone. According to the simulations, the cone angle is clearly correlated with the shower maximum. Thus, we confirm earlier predictions that arrival time measurements can be used to study the longitudinal shower development, but now using a realistic wavefront. Moreover, we show that the hyperbolic wavefront is compatible with our measurement, and we present several experimental indications that the cone angle is indeed sensitive to the shower development. Consequently, the wavefront can be used to statistically study the primary composition of ultra-high energy cosmic rays. At LOPES, the experimentally achieved precision for the shower maximum is limited by measurement uncertainties to approximately 140 g/cm^2. But the simulations indicate that under better conditions this method might yield an accuracy for the atmospheric depth of the shower maximum, X_max, better than 30 g/cm^2. This would be competitive with the established air-fluorescence and air-Cherenkov techniques, where the radio technique offers the advantage of a significantly higher duty-cycle. Finally, the hyperbolic wavefront can be used to reconstruct the shower geometry more accurately, which potentially allows a better reconstruction of all other shower parameters, too.
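As an illustration of fitting a hyperbolic wavefront to arrival times, one simple parametrization is c·τ(r) = sqrt(r²·sin²ρ + (c·b)²) − c·b, which squaring turns into a linear least-squares problem. This particular parametrization and fitting route are assumptions for illustration, not necessarily those used in the LOPES analysis:

```python
import numpy as np

C = 299792458.0  # vacuum speed of light (m/s)

def fit_hyperbolic_wavefront(r, tau):
    """Fit c*tau = sqrt(r^2 sin^2(rho) + (c*b)^2) - c*b to arrival-time
    delays tau (s) at axis distances r (m). Squaring gives the linear form
    (c*tau)^2 = r^2 * sin^2(rho) - 2*c^2*b*tau, so sin^2(rho) and b follow
    from ordinary least squares. Returns the cone angle rho (rad) and b (s)."""
    r, tau = np.asarray(r, float), np.asarray(tau, float)
    y = (C * tau) ** 2
    A = np.column_stack([r ** 2, -2.0 * C ** 2 * tau])
    (sin2rho, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.arcsin(np.sqrt(sin2rho))), float(b)
```

At large r the model tends to c·τ ≈ r·sinρ, i.e. the simple cone mentioned in the abstract, with ρ the cone angle.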
Two dimensional wavefront retrieval using lateral shearing interferometry
NASA Astrophysics Data System (ADS)
Mancilla-Escobar, B.; Malacara-Hernández, Z.; Malacara-Hernández, D.
2018-06-01
A new zonal two-dimensional method for wavefront retrieval from a surface under test using lateral shearing interferometry is presented. A modified Saunders method and phase-shifting techniques are combined to generate a method for wavefront reconstruction. The result is a wavefront with an error below 0.7 λ and without any global high-frequency filtering. A zonal analysis over square cells along the surface is made, obtaining a polynomial expression for the wavefront deformations over each cell. The main advantage of this method over previously published methods is that no global filtering of high spatial frequencies is applied. Thus, a global smoothing of the wavefront deformations is avoided, allowing the detection of deformations of relatively small extent, that is, with high spatial frequencies. Additionally, local curvature and low-order aberration coefficients are obtained in each cell.
Dong, Bing; Booth, Martin J
2018-01-22
In adaptive optical microscopy of thick biological tissue, strong scattering and aberrations can change the effective pupil shape by rendering some Shack-Hartmann spots unusable. The change of pupil shape leads to a change of wavefront reconstruction or control matrix that should be updated accordingly. Modified slope and modal wavefront control methods based on measurements of a Shack-Hartmann wavefront sensor are proposed to accommodate an arbitrarily shaped pupil. Furthermore, we present partial wavefront control methods that remove specific aberration modes like tip, tilt and defocus from the control loop. The proposed control methods were investigated and compared by simulation using experimentally obtained aberration data. The performance was then tested experimentally through closed-loop aberration corrections using an obscured pupil.
AWARE - The Automated EUV Wave Analysis and REduction algorithm
NASA Astrophysics Data System (ADS)
Ireland, J.; Inglis, A. R.; Shih, A. Y.; Christe, S.; Mumford, S.; Hayes, L. A.; Thompson, B. J.
2016-10-01
Extreme ultraviolet (EUV) waves are large-scale propagating disturbances observed in the solar corona, frequently associated with coronal mass ejections and flares. Since their discovery, over two hundred papers discussing their properties, causes and physics have been published. However, their fundamental nature and the physics of their interactions with other solar phenomena are still not understood. To further the understanding of EUV waves, and their relation to other solar phenomena, we have constructed the Automated Wave Analysis and REduction (AWARE) algorithm for the detection of EUV waves over the full Sun. The AWARE algorithm is based on a novel image processing approach to isolating the bright wavefront of the EUV wave as it propagates across the corona. AWARE detects the presence of a wavefront, and measures the distance, velocity and acceleration of that wavefront across the Sun. Results from AWARE are compared to results from other algorithms for some well-known EUV wave events. Suggestions are also given for further refinements to the basic algorithm presented here.
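The kinematic part of such an analysis, estimating velocity and acceleration of the detected wavefront, can be sketched under the assumption of a quadratic distance-time model (an illustrative fit, not AWARE's actual implementation):

```python
import numpy as np

def wavefront_kinematics(t, d):
    """Velocity and acceleration of a propagating wavefront from a quadratic
    least-squares fit d(t) = d0 + v*t + 0.5*a*t^2 to distance measurements."""
    half_a, v, _d0 = np.polyfit(t, d, 2)     # highest-degree coefficient first
    return v, 2.0 * half_a
```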
Tomographic wavefront retrieval by combined use of geometric and plenoptic sensors
NASA Astrophysics Data System (ADS)
Trujillo-Sevilla, J. M.; Rodríguez-Ramos, L. F.; Fernández-Valdivia, Juan J.; Marichal-Hernández, José G.; Rodríguez-Ramos, J. M.
2014-05-01
Modern astronomical telescopes take advantage of multi-conjugate adaptive optics, in which wavefront sensors play a key role. A single sensor capable of measuring wavefront phases at any angle of observation would be helpful when improving atmospheric tomographic reconstruction. A new sensor combining both geometric and plenoptic arrangements is proposed, and a simulation demonstrating its working principle is also shown. Results show that this sensor is feasible, and also that single extended objects can be used to perform tomography of atmospheric turbulence.
NASA Astrophysics Data System (ADS)
Di, Jianglei; Zhao, Jianlin; Sun, Weiwei; Jiang, Hongzhen; Yan, Xiaobo
2009-10-01
Digital holographic microscopy allows the numerical reconstruction of the complex wavefront of samples, especially biological samples such as living cells. In digital holographic microscopy, a microscope objective is introduced to improve the transverse resolution of the sample; however, it also introduces a phase aberration into the object wavefront, which affects the phase distribution of the reconstructed image. We propose here a numerical method to compensate for the phase aberration of thin transparent objects with a single hologram. Least squares surface fitting, using far fewer points than the full matrix of the original hologram, is performed on the unwrapped phase distribution to remove the unwanted wavefront curvature. The proposed method is demonstrated on samples of cicada wings and epidermal cells of garlic, and the experimental results are consistent with those of the double-exposure method.
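The curvature-removal step described above can be sketched as a least-squares quadratic surface fit on a sparse subset of the unwrapped phase. This is a hedged illustration, not the authors' procedure; the specimen phase, aberration coefficients, and point count are invented:

```python
import numpy as np

# Fit a quadratic surface to a sparse subset of the unwrapped phase and
# subtract it, removing objective-induced wavefront curvature.
n = 64
y, x = np.mgrid[0:n, 0:n] / float(n)
aberration = 3.0 * (x**2 + y**2) + 0.7 * x + 0.2              # curvature term
sample = 2.0 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.02)  # "specimen"
phase = sample + aberration                                    # unwrapped phase

def basis(xv, yv):
    """Quadratic surface basis: 1, x, y, x^2, y^2, xy."""
    return np.column_stack([np.ones_like(xv), xv, yv,
                            xv**2, yv**2, xv * yv])

# Fit on far fewer points than the full hologram matrix, chosen in the
# background where the specimen contributes almost nothing.
bg = ((x - 0.5)**2 + (y - 0.5)**2) > 0.15
rng = np.random.default_rng(0)
idx = rng.choice(np.flatnonzero(bg.ravel()), size=200, replace=False)
c, *_ = np.linalg.lstsq(basis(x.ravel()[idx], y.ravel()[idx]),
                        phase.ravel()[idx], rcond=None)

corrected = phase - (basis(x.ravel(), y.ravel()) @ c).reshape(n, n)
```

After subtraction, the corrected phase is dominated by the specimen term, with the parabolic curvature absorbed by the fitted surface.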
Phase-Retrieval Uncertainty Estimation and Algorithm Comparison for the JWST-ISIM Test Campaign
NASA Technical Reports Server (NTRS)
Aronstein, David L.; Smith, J. Scott
2016-01-01
Phase retrieval, the process of determining the exit-pupil wavefront of an optical instrument from image-plane intensity measurements, is the baseline methodology for characterizing the wavefront for the suite of science instruments (SIs) in the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST). JWST is a large, infrared space telescope with a 6.5-meter diameter primary mirror. JWST is currently NASA's flagship mission and will be the premier space observatory of the next decade. ISIM contains four optical benches with nine unique instruments, including redundancies. ISIM was characterized at the Goddard Space Flight Center (GSFC) in Greenbelt, MD in a series of cryogenic vacuum tests using a telescope simulator. During these tests, phase-retrieval algorithms were used to characterize the instruments. The objective of this paper is to describe the Monte-Carlo simulations that were used to establish uncertainties (i.e., error bars) for the wavefronts of the various instruments in ISIM. Multiple retrieval algorithms were used in the analysis of ISIM phase-retrieval focus-sweep data, including an iterative-transform algorithm and a nonlinear optimization algorithm. These algorithms emphasize the recovery of numerous optical parameters, including low-order wavefront composition described by Zernike polynomial terms and high-order wavefront described by a point-by-point map, location of instrument best focus, focal ratio, exit-pupil amplitude, the morphology of any extended object, and optical jitter. The secondary objective of this paper is to report on the relative accuracies of these algorithms for the ISIM instrument tests, and a comparison of their computational complexity and their performance on central and graphics processing unit clusters.
From a phase-retrieval perspective, the ISIM test campaign includes a variety of source illumination bandwidths, various image-plane sampling criteria above and below the Nyquist-Shannon critical sampling value, various extended object sizes, and several other impactful effects.
Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping
2009-11-10
A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and converts distorted wavefront detection into centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of the wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, utilizing image processing techniques for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction of the digital SHWS, unevenness and instability of the light source, as well as deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability compared with other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
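A minimal sketch of thresholded, windowed centroiding in the spirit of the algorithm above (the threshold rule, window size, and test spot are assumptions, not the paper's exact parameters):

```python
import numpy as np

# Adaptive threshold suppresses background noise; a dynamic window
# re-centred on the coarse peak excludes light from neighbouring spots.
def centroid(spot, win=7):
    img = spot.astype(float)
    img -= img.mean() + 3.0 * img.std()   # adaptive threshold (assumed form)
    img[img < 0] = 0.0
    cy, cx = np.unravel_index(np.argmax(img), img.shape)  # coarse peak
    h = win // 2
    y0, y1 = max(cy - h, 0), min(cy + h + 1, img.shape[0])  # dynamic window
    x0, x1 = max(cx - h, 0), min(cx + h + 1, img.shape[1])
    w = img[y0:y1, x0:x1]
    yy, xx = np.mgrid[y0:y1, x0:x1]
    m = w.sum()
    return (yy * w).sum() / m, (xx * w).sum() / m

# Synthetic Gaussian focal spot centred at (20.3, 14.7) on a flat background.
yy, xx = np.mgrid[0:32, 0:32]
spot = np.exp(-((yy - 20.3)**2 + (xx - 14.7)**2) / 3.0) + 0.01
cy_est, cx_est = centroid(spot)
```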
Wavefront Control Toolbox for James Webb Space Telescope Testbed
NASA Technical Reports Server (NTRS)
Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin
2007-01-01
We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to the optical models of the James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments present on the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed that converge to a local minimum of the root-mean-square (rms) wavefront error using a singular value decomposition of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and a wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows for trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
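The SVD step described above can be illustrated with a toy control solve: given an influence matrix, find the actuator commands that minimize the rms wavefront error, truncating ill-conditioned modes. The influence matrix here is random stand-in data, not a JWST model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_act = 100, 6
A = rng.standard_normal((n_pix, n_act))   # influence functions as columns

def svd_control(A, wfe, tol=1e-6):
    """Commands minimising rms wavefront error via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv_s = np.where(s > tol * s[0], 1.0 / s, 0.0)  # drop weak modes
    return -(Vt.T * inv_s) @ (U.T @ wfe)            # commands to null error

x_true = rng.standard_normal(n_act)
wfe = A @ x_true                          # misalignment-induced error
cmd = svd_control(A, wfe)
residual = wfe + A @ cmd                  # wavefront after correction
```

Truncating small singular values trades a slightly larger residual for bounded actuator commands, the usual motivation for SVD-based control.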
Bueno, Juan M; Acosta, Eva; Schwarz, Christina; Artal, Pablo
2010-01-20
A dual setup composed of a point diffraction interferometer (PDI) and a Hartmann-Shack (HS) wavefront sensor was built to compare the estimates of wavefront aberrations provided by the two different and complementary techniques when applied to different phase plates. Results show that under the same experimental and fitting conditions both techniques provide similar information concerning the wavefront aberration map. When taking into account all Zernike terms up to 6th order, the maximum difference in root-mean-square wavefront error was 0.08 μm, which reduced to 0.03 μm when lower-order terms were excluded. The effects of the pupil size and the order of the Zernike expansion used to reconstruct the wavefront were evaluated. The combination of the two techniques can accurately measure complicated phase profiles, combining the robustness of the HS and the higher resolution and dynamic range of the PDI.
Underwater Turbulence Detection Using Gated Wavefront Sensing Technique
Bi, Ying; Xu, Xiping; Chow, Eddy Mun Tik
2018-01-01
Laser sensing has been applied in various underwater applications, ranging from underwater detection to laser underwater communications. However, there are several great challenges when profiling underwater turbulence effects. Underwater detection is greatly affected by the turbulence effect, where the acquired image suffers excessive noise, blurring, and deformation. In this paper, we propose a novel underwater turbulence detection method based on a gated wavefront sensing technique. First, we elaborate on the operating principle of gated wavefront sensing and wavefront reconstruction. We then set up an experimental system in order to validate the feasibility of our proposed method. The effect of underwater turbulence on detection is examined at different distances, and under different turbulence levels. The experimental results obtained from our gated wavefront sensing system indicate that underwater turbulence can be detected and analyzed. The proposed gated wavefront sensing system has the advantage of a simple structure and high detection efficiency for underwater environments. PMID:29518889
ESO/ST-ECF Data Analysis Workshop, 5th, Garching, Germany, Apr. 26, 27, 1993, Proceedings
NASA Astrophysics Data System (ADS)
Grosbol, Preben; de Ruijsscher, Resy
1993-01-01
Various papers on astronomical data analysis are presented. Individual topics addressed include: surface photometry of early-type galaxies, wavelet transform and adaptive filtering, package for surface photometry of galaxies, calibration of large-field mosaics, surface photometry of galaxies with HST, wavefront-supported image deconvolution, seeing effects on elliptical galaxies, multiple algorithms deconvolution program, enhancement of Skylab X-ray images, MIDAS procedures for the image analysis of E-S0 galaxies, photometric data reductions under MIDAS, crowded field photometry with deconvolved images, the DENIS Deep Near Infrared Survey. Also discussed are: analysis of astronomical time series, detection of low-amplitude stellar pulsations, new SOT method for frequency analysis, chaotic attractor reconstruction and applications to variable stars, reconstructing a 1D signal from irregular samples, automatic analysis for time series with large gaps, prospects for content-based image retrieval, redshift survey in the South Galactic Pole Region.
Phase retrieval algorithm for JWST Flight and Testbed Telescope
NASA Astrophysics Data System (ADS)
Dean, Bruce H.; Aronstein, David L.; Smith, J. Scott; Shiri, Ron; Acton, D. Scott
2006-06-01
An image-based wavefront sensing and control algorithm for the James Webb Space Telescope (JWST) is presented. The algorithm heritage is discussed in addition to implications for algorithm performance dictated by NASA's Technology Readiness Level (TRL) 6. The algorithm uses feedback through an adaptive diversity function to avoid the need for phase-unwrapping post-processing steps. Algorithm results are demonstrated using JWST Testbed Telescope (TBT) commissioning data and the accuracy is assessed by comparison with interferometer results on a multi-wave phase aberration. Strategies for minimizing aliasing artifacts in the recovered phase are presented and orthogonal basis functions are implemented for representing wavefronts in irregular hexagonal apertures. Algorithm implementation on a parallel cluster of high-speed digital signal processors (DSPs) is also discussed.
NASA Astrophysics Data System (ADS)
Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel J.; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Sarunic, Marinko V.; Verhaegen, Michel; Jian, Yifan
2017-02-01
Optical Coherence Tomography (OCT) has revolutionized modern ophthalmology, providing depth-resolved images of the retinal layers in a system that is suited to a clinical environment. A limitation of the performance and utilization of OCT systems has been the lateral resolution. Through the combination of wavefront sensorless adaptive optics with dual variable optical elements, we present a compact lens-based OCT system that is capable of imaging the photoreceptor mosaic. We utilized a commercially available variable focal length lens to correct for a wide range of defocus commonly found in patient eyes, and a multi-actuator adaptive lens, after linearization of the hysteresis in the piezoelectric actuators, for aberration correction to obtain near diffraction-limited imaging at the retina. A parallel processing computational platform permitted real-time image acquisition and display. The Data-based Online Nonlinear Extremum seeker (DONE) algorithm was used for real-time optimization of the wavefront sensorless adaptive optics OCT, and its performance was compared with a coordinate search algorithm. Cross-sectional images of the retinal layers and en face images of the cone photoreceptor mosaic acquired in vivo from research volunteers before and after wavefront sensorless adaptive optics (WSAO) optimization are presented. Applying the DONE algorithm in vivo for wavefront sensorless AO-OCT demonstrates that it drastically improves the signal while achieving a computational time of 1 ms per iteration, making it applicable to high-speed real-time applications.
Sidick, Erkin
2013-09-10
An adaptive periodic-correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the subimages in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper, we assess the amount of that error and propose a method to minimize it.
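Not the APC algorithm itself, but the underlying correlation idea can be sketched: estimate the shift between two extended-scene subimages from the peak of their FFT-based cross-correlation (integer shifts, periodic boundaries, synthetic scene):

```python
import numpy as np

def corr_shift(ref, img):
    """Signed integer shift of img relative to ref via circular correlation."""
    c = np.fft.ifft2(np.fft.fft2(ref).conj() * np.fft.fft2(img)).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    n = ref.shape[0]
    wrap = lambda d: d - n if d > n // 2 else d   # map to signed shift
    return wrap(dy), wrap(dx)

rng = np.random.default_rng(2)
scene = rng.standard_normal((32, 32))             # extended-scene stand-in
shifted = np.roll(scene, (3, -5), axis=(0, 1))    # known shift to recover
dy, dx = corr_shift(scene, shifted)
```

APC refines this basic estimator to stay accurate when the subimages are also distorted relative to each other, which plain correlation does not address.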
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2012-01-01
The Adaptive Periodic-Correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the sub-images in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift-estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper we assess the amount of that error and propose a method to minimize it.
NASA Astrophysics Data System (ADS)
Maguen, Ezra I.; Salz, James J.; McDonald, Marguerite B.; Pettit, George H.; Papaioannou, Thanassis; Grundfest, Warren S.
2002-06-01
A study was undertaken to assess whether results of laser vision correction with the LADARVision 193-nm excimer laser (Alcon/Autonomous Technologies) can be improved with the use of wavefront analysis generated by a proprietary system including a Hartmann-Shack sensor and expressed using Zernike polynomials. A total of 82 eyes underwent LASIK in several centers with an improved algorithm, using the CustomCornea system. A subgroup of 48 eyes of 24 patients was randomized so that one eye underwent conventional treatment and the fellow eye underwent treatment based on wavefront analysis. Treatment parameters were equal for each type of refractive error. Of all eyes, 83% had uncorrected vision of 20/20 or better and 95% were 20/25 or better. In all groups, uncorrected visual acuities did not improve significantly in eyes treated with wavefront analysis compared to conventional treatments. Higher-order aberrations were consistently better corrected in eyes undergoing treatment based on wavefront analysis for LASIK at 6 months postop. In addition, the number of eyes with reduced RMS was significantly higher in the subset of eyes treated with a wavefront algorithm (38% vs. 5%). Wavefront technology may improve the outcomes of laser vision correction with the LADARVision excimer laser. Further refinements of the technology and clinical trials will contribute to this goal.
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-03-01
The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.
NASA Astrophysics Data System (ADS)
Wahl, Daniel J.; Zhang, Pengfei; Jian, Yifan; Bonora, Stefano; Sarunic, Marinko V.; Zawadzki, Robert J.
2017-02-01
Adaptive optics (AO) is essential for achieving diffraction limited resolution in large numerical aperture (NA) in-vivo retinal imaging in small animals. Cellular-resolution in-vivo imaging of fluorescently labeled cells is highly desirable for studying pathophysiology in animal models of retina diseases in pre-clinical vision research. Currently, wavefront sensor-based (WFS-based) AO is widely used for retinal imaging and has demonstrated great success. However, the performance can be limited by several factors including common path errors, wavefront reconstruction errors and an ill-defined reference plane on the retina. Wavefront sensorless (WFS-less) AO has the advantage of avoiding these issues at the cost of algorithmic execution time. We have investigated WFS-less AO on a fluorescence scanning laser ophthalmoscopy (fSLO) system that was originally designed for WFS-based AO. The WFS-based AO uses a Shack-Hartmann WFS and a continuous surface deformable mirror in a closed-loop control system to measure and correct for aberrations induced by the mouse eye. The WFS-less AO performs an open-loop modal optimization with an image quality metric. After WFS-less AO aberration correction, the WFS was used as a control of the closed-loop WFS-less AO operation. We can easily switch between WFS-based and WFS-less control of the deformable mirror multiple times within an imaging session for the same mouse. This allows for a direct comparison between these two types of AO correction for fSLO. Our results demonstrate volumetric AO-fSLO imaging of mouse retinal cells labeled with GFP. Most significantly, we have analyzed and compared the aberration correction results for WFS-based and WFS-less AO imaging.
Myopic aberrations: Simulation based comparison of curvature and Hartmann Shack wavefront sensors
NASA Astrophysics Data System (ADS)
Basavaraju, Roopashree M.; Akondi, Vyas; Weddell, Stephen J.; Budihal, Raghavendra Prasad
2014-02-01
In comparison with a Hartmann Shack wavefront sensor, the curvature wavefront sensor is known for its higher sensitivity and greater dynamic range. The aim of this study is to numerically investigate the merits of using a curvature wavefront sensor, in comparison with a Hartmann Shack (HS) wavefront sensor, to analyze aberrations of the myopic eye. Aberrations were statistically generated using Zernike coefficient data of 41 myopic subjects obtained from the literature. The curvature sensor is relatively simple to implement, and the processing of extra- and intra-focal images was linearly resolved using the Radon transform to provide Zernike modes corresponding to statistically generated aberrations. Simulations of the HS wavefront sensor involve the evaluation of the focal spot pattern from simulated aberrations. Optical wavefronts were reconstructed using the slope geometry of Southwell. Monte Carlo simulation was used to find critical parameters for accurate wavefront sensing and to investigate the performance of HS and curvature sensors. The performance of the HS sensor is highly dependent on the number of subapertures and the curvature sensor is largely dependent on the number of Zernike modes used to represent the aberration and the effective propagation distance. It is shown that in order to achieve high wavefront sensing accuracy while measuring aberrations of the myopic eye, a simpler and cost effective curvature wavefront sensor is a reliable alternative to a high resolution HS wavefront sensor with a large number of subapertures.
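The Southwell slope geometry mentioned above can be illustrated with a minimal zonal reconstructor: phases and slopes share grid points, and averaged neighbouring slopes are equated to finite phase differences, solved by least squares. A sketch with an invented tilted-plane test wavefront:

```python
import numpy as np

def southwell(sx, sy, h=1.0):
    """Zonal least-squares reconstruction in the Southwell geometry."""
    n = sx.shape[0]
    rows, cols, vals, b = [], [], [], []
    def idx(i, j): return i * n + j
    r = 0
    for i in range(n):
        for j in range(n - 1):        # horizontal neighbour equations
            rows += [r, r]; cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0, -1.0]
            b.append(h * 0.5 * (sx[i, j] + sx[i, j + 1])); r += 1
    for i in range(n - 1):
        for j in range(n):            # vertical neighbour equations
            rows += [r, r]; cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0, -1.0]
            b.append(h * 0.5 * (sy[i, j] + sy[i + 1, j])); r += 1
    A = np.zeros((r, n * n))
    A[rows, cols] = vals
    phi, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return (phi - phi.mean()).reshape(n, n)   # piston removed

# Tilted-plane test wavefront: slopes are constant, recovery is exact.
n = 8
y, x = np.mgrid[0:n, 0:n].astype(float)
phi_true = 0.3 * x + 0.1 * y
sx = np.full((n, n), 0.3)
sy = np.full((n, n), 0.1)
phi_rec = southwell(sx, sy)
```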
Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor
NASA Astrophysics Data System (ADS)
Nagy, J.; Kelly, K.
2013-09-01
Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high-quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.
Broadband Phase Retrieval for Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A focus-diverse phase-retrieval algorithm has been shown to perform adequately for the purpose of image-based wavefront sensing when (1) broadband light (typically spanning the visible spectrum) is used in forming the images by use of an optical system under test and (2) the assumption of monochromaticity is applied to the broadband image data. Heretofore, it had been assumed that in order to obtain adequate performance, it is necessary to use narrowband or monochromatic light. Some background information, including definitions of terms and a brief description of pertinent aspects of image-based phase retrieval, is prerequisite to a meaningful summary of the present development. Phase retrieval is a general term used in optics to denote estimation of optical imperfections or aberrations of an optical system under test. The term image-based wavefront sensing refers to a general class of algorithms that recover optical phase information, and phase-retrieval algorithms constitute a subset of this class. In phase retrieval, one utilizes the measured response of the optical system under test to produce a phase estimate. The optical response of the system is defined as the image of a point-source object, which could be a star or a laboratory point source. The phase-retrieval problem is characterized as image-based in the sense that a charge-coupled-device camera, preferably of scientific imaging quality, is used to collect image data where the optical system would normally form an image. In a variant of phase retrieval, denoted phase-diverse phase retrieval [which can include focus-diverse phase retrieval (in which various defocus planes are used)], an additional known aberration (or an equivalent diversity function) is superimposed as an aid in estimating unknown aberrations by use of an image-based wavefront-sensing algorithm. 
Image-based phase retrieval differs from other wavefront-sensing methods, such as interferometry, shearing interferometry, curvature wavefront sensing, and Shack-Hartmann sensing, all of which entail disadvantages in comparison with image-based methods. The main disadvantages of these non-image-based methods are the complexity of the test equipment and the need for a wavefront reference.
NASA Astrophysics Data System (ADS)
Campos-García, Manuel; Granados-Agustín, Fermín.; Cornejo-Rodríguez, Alejandro; Estrada-Molina, Amilcar; Avendaño-Alejo, Maximino; Moreno-Oliva, Víctor Iván.
2013-11-01
In order to obtain a clearer interpretation of the Intensity Transport Equation (ITE), in this work we propose an algorithm to solve it for some particular wavefronts and their corresponding intensity distributions. By simulating intensity distributions in some planes, the ITE turns into a Poisson equation with Neumann boundary conditions. The Poisson equation is solved by means of the iterative successive over-relaxation (SOR) algorithm.
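The Poisson-with-Neumann-conditions solve described above can be sketched with a small SOR loop. Illustrative only; the mirrored-ghost boundary treatment, relaxation factor, and test wavefront are assumptions, not the authors' setup:

```python
import numpy as np

def sor_poisson_neumann(f, h, omega=1.5, iters=400):
    """SOR solve of lap(phi) = f with homogeneous Neumann (mirrored) BCs.

    The free additive constant is fixed by removing the mean each sweep.
    """
    n = f.shape[0]
    phi = np.zeros_like(f)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                up    = phi[i-1, j] if i > 0     else phi[i+1, j]  # mirror
                down  = phi[i+1, j] if i < n-1   else phi[i-1, j]
                left  = phi[i, j-1] if j > 0     else phi[i, j+1]
                right = phi[i, j+1] if j < n-1   else phi[i, j-1]
                gs = 0.25 * (up + down + left + right - h * h * f[i, j])
                phi[i, j] += omega * (gs - phi[i, j])   # over-relaxed update
        phi -= phi.mean()
    return phi

# Verify with phi = cos(pi x) cos(pi y), whose normal derivative vanishes
# on the unit-square boundary; its Laplacian is -2 pi^2 phi.
n = 17
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi_true = np.cos(np.pi * X) * np.cos(np.pi * Y)
f = -2.0 * np.pi**2 * phi_true
phi = sor_poisson_neumann(f, h=x[1] - x[0])
err = np.abs(phi - (phi_true - phi_true.mean())).max()
```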
Zou, Weiyao; Burns, Stephen A.
2012-01-01
A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. PMID:22441462
Zou, Weiyao; Burns, Stephen A
2012-03-20
A Lagrange multiplier-based damped least-squares control algorithm for woofer-tweeter (W-T) dual deformable-mirror (DM) adaptive optics (AO) is tested with a breadboard system. We show that the algorithm can complementarily command the two DMs to correct wavefront aberrations within a single optimization process: the woofer DM correcting the high-stroke, low-order aberrations, and the tweeter DM correcting the low-stroke, high-order aberrations. The optimal damping factor for a DM is found to be the median of the eigenvalue spectrum of the influence matrix of that DM. Wavefront control accuracy is maximized with the optimized control parameters. For the breadboard system, the residual wavefront error can be controlled to the precision of 0.03 μm in root mean square. The W-T dual-DM AO has applications in both ophthalmology and astronomy. © 2012 Optical Society of America
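The damped least-squares command step can be sketched in Tikhonov form, with the damping factor set to the median eigenvalue of the influence matrix spectrum, as the abstract reports to be optimal. The influence matrix below is random stand-in data, and the quadratic penalty is an assumed simplification of the Lagrange-multiplier formulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_act = 40, 10
A = rng.standard_normal((n_meas, n_act))   # influence matrix of one DM

AtA = A.T @ A
damping = np.median(np.linalg.eigvalsh(AtA))   # median of eigenvalue spectrum

def damped_command(A, wfe, mu):
    """Minimise |A c + wfe|^2 + mu |c|^2 (damped least squares)."""
    n = A.shape[1]
    return -np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ wfe)

wfe = A @ rng.standard_normal(n_act)       # measured wavefront error
c = damped_command(A, wfe, damping)
resid = np.linalg.norm(wfe + A @ c)        # residual after the command
```

The damping shrinks the commanded strokes at the cost of a slightly larger residual, which is what lets the woofer and tweeter share the correction without fighting each other.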
Projected Pupil Plane Pattern: an alternative LGS wavefront sensing technique
NASA Astrophysics Data System (ADS)
Yang, Huizhe; Bharmal, Nazim A.; Myers, Richard M.
2018-07-01
We have analysed and simulated a novel alternative Laser Guide Star (LGS) configuration termed Projected Pupil Plane Pattern (PPPP), including wavefront sensing and the reconstruction method. A key advantage of this method is that a collimated beam is launched through the telescope primary mirror, therefore the wavefront measurements do not suffer from the effects of focal anisoplanatism. A detailed simulation including the upward wave optics propagation, return path imaging, and linearized wavefront reconstruction has been presented. The conclusions that we draw from the simulation include the optimum pixel number across the pupil N = 32, the optimum number of Zernike modes (which is 78), propagation altitudes h1 = 10 km and h2 = 20 km for Rayleigh scattered returns, and the choice for the laser beam modulation (Gaussian beam). We also investigate the effects of turbulence profiles with multiple layers and find that it does not reduce PPPP performance as long as the turbulence layers are below h1. A signal-to-noise ratio analysis has been given when photon and read noise are introduced. Finally, we compare the PPPP performance with a conventional Shack-Hartmann Wavefront Sensor in an open loop, using Rayleigh LGS or sodium LGS, for 4-m and 10-m telescopes, respectively. For this purpose, we use a full Monte Carlo end-to-end AO simulation tool, Soapy. From these results, we confirm that PPPP does not suffer from focus anisoplanatism.
Projected Pupil Plane Pattern: an alternative LGS wavefront sensing technique
NASA Astrophysics Data System (ADS)
Yang, Huizhe; Bharmal, Nazim A.; Myers, Richard M.
2018-04-01
We have analyzed and simulated a novel alternative LGS configuration termed Projected Pupil Plane Pattern (PPPP), including wavefront sensing and the reconstruction method. A key advantage of this method is that a collimated beam is launched through the telescope primary mirror, therefore the wavefront measurements do not suffer from the effects of focal anisoplanatism. A detailed simulation including the upward wave optics propagation, return path imaging and linearized wavefront reconstruction has been presented. The conclusions that we draw from the simulation include the optimum pixel number across the pupil N=32, the optimum number of Zernike modes (which is 78), propagation altitudes h1 = 10 km and h2 = 20 km for Rayleigh scattered returns, and the choice for the laser beam modulation (Gaussian beam). We also investigate the effects of turbulence profiles with multiple layers and find that it does not reduce PPPP performance as long as the turbulence layers are below h1. A signal-to-noise ratio (SNR) analysis has been given when photon and read noise are introduced. Finally, we compare the PPPP performance with a conventional Shack-Hartmann Wavefront Sensor (WFS) in open loop, using Rayleigh LGS or sodium LGS, for 4-m and 10-m telescopes respectively. For this purpose we use a full Monte-Carlo end-to-end AO simulation tool, Soapy. From these results we confirm that PPPP does not suffer from focus anisoplanatism.
Transformation of a Plane Wavefront in Hemispherical Lenses Made of Leuco-Sapphire
NASA Astrophysics Data System (ADS)
Vetrov, V. N.; Ignatenkov, B. A.; Yakobson, V. E.
2018-01-01
An algorithm is developed for calculating the wavefronts of ordinary and extraordinary waves after propagation through hemispherical components made of a uniaxial crystal. The influence of the frequency dispersion of n_o and n_e, as well as of a change in the direction of the optic axis of the crystal, on the extraordinary wavefront in hemispheres made of leuco-sapphire and a plastically deformed analog thereof is determined.
Estimating stochastic noise using in situ measurements from a linear wavefront slope sensor.
Bharmal, Nazim Ali; Reeves, Andrew P
2016-01-15
It is shown how the solenoidal component of noise from the measurements of a wavefront slope sensor can be utilized to estimate the total noise: specifically, the ensemble noise variance. It is well known that solenoidal noise is orthogonal to the reconstruction of the wavefront under conditions of low scintillation (absence of wavefront vortices). Therefore, it can be retrieved even with a nonzero slope signal present. By explicitly estimating the solenoidal noise from an ensemble of slopes, it can be retrieved for any wavefront sensor configuration. Furthermore, the ensemble variance is demonstrated to be related to the total noise variance via a straightforward relationship. This relationship is revealed via the method of the explicit estimation: it consists of a small, heuristic set of four constants that do not depend on the underlying statistics of the incoming wavefront. These constants seem to apply to all situations (data from a laboratory experiment as well as many configurations of numerical simulation), so the method is concluded to be generic.
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
Two-level image authentication by two-step phase-shifting interferometry and compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-01-01
A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit (OMP) CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform results in a remarkable peak at the central location of the nonlinear correlation coefficient distribution of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Morgan, Rhonda M.; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.
2007-01-01
We have developed a new, adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene images captured by a Shack-Hartmann wavefront sensor (SH-WFS). It determines the positions of all of the extended-scene image cells relative to a reference cell using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have also set up a testbed for an extended-scene SH-WFS, and tested the ACC algorithm with measured data from both point-source and extended-scene images. In this paper we describe our algorithm and present our experimental results.
Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.
Bedggood, Phillip; Metha, Andrew
2010-01-01
Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.
An Adaptive Cross-Correlation Algorithm for Extended-Scene Shack-Hartmann Wavefront Sensing
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.
2007-01-01
This viewgraph presentation reviews the adaptive cross-correlation (ACC) algorithm for extended-scene Shack-Hartmann wavefront (WF) sensing. A Shack-Hartmann sensor places a lenslet array at a plane conjugate to the WF error source. Each sub-aperture lenslet samples the WF in the corresponding patch of the WF. A description of the ACC algorithm is included. The ACC algorithm has several benefits, among them: it requires only about 4 image-shifting iterations to achieve 0.01-pixel accuracy, and it is insensitive to both background light and noise, making it much more robust than centroiding.
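A simplified sketch of FFT-based subpixel shift estimation between two extended-scene cells (this uses the phase slope of the cross-power spectrum rather than the ACC iteration itself; scene and shift values are illustrative):

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
f = np.fft.fftfreq(n)
ky, kx = np.meshgrid(f, f, indexing="ij")
# Band-limited periodic "extended scene"
spec = np.fft.fft2(rng.normal(size=(n, n))) * np.exp(-200.0 * (ky**2 + kx**2))
scene = np.real(np.fft.ifft2(spec))

def fourier_shift(img, dy, dx):
    """Shift a periodic image by (dy, dx) pixels via the Fourier shift theorem."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.exp(-2j * np.pi * (ky * dy + kx * dx))))

shifted = fourier_shift(scene, 3.25, -1.5)

# Cross-power spectrum: its phase is a plane whose slope encodes the shift
R = np.fft.fft2(shifted) * np.conj(np.fft.fft2(scene))
phase = np.angle(R)
# Fit the plane on a few low frequencies only (no phase wrapping there)
mask = (np.abs(ky) <= 2 / n) & (np.abs(kx) <= 2 / n) & ((ky != 0) | (kx != 0))
A = np.column_stack([-2 * np.pi * ky[mask], -2 * np.pi * kx[mask]])
dy_est, dx_est = np.linalg.lstsq(A, phase[mask], rcond=None)[0]
print(dy_est, dx_est)  # close to (3.25, -1.5)
```

For real sensor data with noise and non-common content, an iterative scheme such as ACC refines the estimate over a few such shift-and-correlate passes.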
Feasibility study of a layer-oriented wavefront sensor for solar telescopes.
Marino, Jose; Wöger, Friedrich
2014-02-01
Solar multiconjugate adaptive optics systems rely on several wavefront sensors, which measure the incoming turbulent phase along several field directions to produce a tomographic reconstruction of the turbulent phase. In this paper, we explore an alternative wavefront sensing approach that attempts to directly measure the turbulent phase present at a particular height in the atmosphere: a layer-oriented cross-correlating Shack-Hartmann wavefront sensor (SHWFS). In an experiment at the Dunn Solar Telescope, we built a prototype layer-oriented cross-correlating SHWFS system conjugated to two separate atmospheric heights. We present the data obtained in the observations and complement these with ray-tracing computations to achieve a better understanding of the instrument's performance and limitations. The results obtained in this study strongly indicate that a layer-oriented cross-correlating SHWFS is not a practical design to measure the wavefront at a high layer in the atmosphere.
Advanced Imaging Optics Utilizing Wavefront Coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen
2015-06-01
Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
Polarimetric image reconstruction algorithms
NASA Astrophysics Data System (ADS)
Valenzuela, John R.
In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. 
Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.
Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L
2002-09-01
We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
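The core of such a scheme, minus the preconditioner, can be sketched as plain conjugate gradients applied to the 5-point discrete Poisson operator (illustrative only; the MGCG method would additionally apply a multigrid cycle to the residual each iteration to obtain the n log n cost):

```python
import numpy as np

def apply_laplacian(u):
    """5-point discrete Laplacian with zero (Dirichlet) boundary values."""
    lap = 4.0 * u
    lap[1:, :] -= u[:-1, :]
    lap[:-1, :] -= u[1:, :]
    lap[:, 1:] -= u[:, :-1]
    lap[:, :-1] -= u[:, 1:]
    return lap

def cg_poisson(b, iters=500):
    """Unpreconditioned CG; a multigrid preconditioner would transform r here."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = (r * r).sum()
    rs0 = rs
    for _ in range(iters):
        Ap = apply_laplacian(p)
        alpha = rs / (p * Ap).sum()
        x += alpha * p
        r -= alpha * Ap
        rs_new = (r * r).sum()
        if rs_new < 1e-20 * rs0:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
truth = rng.normal(size=(32, 32))
b = apply_laplacian(truth)          # synthetic right-hand side
x = cg_poisson(b)
print(np.abs(x - truth).max())      # recovers truth to high accuracy
```

Without preconditioning the iteration count grows with grid size, which is exactly the problem the multigrid preconditioner addresses.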
Ultra-high resolution coded wavefront sensor.
Wang, Congli; Dun, Xiong; Fu, Qiang; Heidrich, Wolfgang
2017-06-12
Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.
Chirped pulse digital holography for measuring the sequence of ultrafast optical wavefronts
NASA Astrophysics Data System (ADS)
Karasawa, Naoki
2018-04-01
Optical setups for measuring the sequence of ultrafast optical wavefronts using a chirped pulse as a reference wave in digital holography are proposed and analyzed. In this method, multiple ultrafast object pulses are used to probe the temporal evolution of ultrafast phenomena and they are interfered with a chirped reference wave to record a digital hologram. Wavefronts at different times can be reconstructed separately from the recorded hologram when the reference pulse can be treated as a quasi-monochromatic wave during the pulse width of each object pulse. The feasibility of this method is demonstrated by numerical simulation.
Study of the performance of image restoration under different wavefront aberrations
NASA Astrophysics Data System (ADS)
Wang, Xinqiu; Hu, Xinqi
2016-10-01
Image restoration is an effective way to improve the quality of images degraded by wavefront aberrations. If the wavefront aberration is too large, however, the restoration performance will be poor. In this paper, the relationship between the performance of image restoration and the degree of wavefront aberration is studied. A set of different wavefront aberrations is constructed from Zernike polynomials, and the corresponding PSF under white-light illumination is calculated. A set of blurred images is then obtained through convolution. Next we recover the images with the regularized Richardson-Lucy algorithm and use the RMS difference between the original image and the corresponding deblurred image to evaluate the quality of restoration. Consequently, we determine the range of wavefront errors over which the recovered images are acceptable.
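A minimal unregularized Richardson-Lucy loop illustrates the multiplicative update at the heart of such restoration (the paper uses a regularized variant; PSF, object, and iteration count here are illustrative):

```python
import numpy as np

n = 64
fftconv = lambda img, H: np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Gaussian blur PSF, centered at the array origin for FFT convolution
ax = np.arange(n) - n // 2
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))

obj = np.full((n, n), 0.01)              # faint background
obj[20, 20], obj[40, 30] = 1.0, 2.0      # two bright point-like features
blurred = fftconv(obj, H)

# Multiplicative Richardson-Lucy updates (flux-preserving, nonnegative)
est = np.full((n, n), blurred.mean())
for _ in range(100):
    ratio = blurred / np.maximum(fftconv(est, H), 1e-12)
    est *= fftconv(ratio, np.conj(H))    # correlate the ratio with the PSF
rms = lambda a: np.sqrt(((a - obj) ** 2).mean())
print(rms(blurred), rms(est))            # deconvolution reduces the error
```

Regularization (as in the paper's variant) damps the noise amplification that plain Richardson-Lucy exhibits at high iteration counts.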
An Improved Wavefront Control Algorithm for Large Space Telescopes
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Basinger, Scott A.; Redding, David C.
2008-01-01
Wavefront sensing and control is required throughout the mission lifecycle of large space telescopes such as James Webb Space Telescope (JWST). When an optic of such a telescope is controlled with both surface-deforming and rigid-body actuators, the sensitivity-matrix obtained from the exit pupil wavefront vector divided by the corresponding actuator command value can sometimes become singular due to difference in actuator types and in actuator command values. In this paper, we propose a simple approach for preventing a sensitivity-matrix from singularity. We also introduce a new "minimum-wavefront and optimal control compensator". It uses an optimal control gain matrix obtained by feeding back the actuator commands along with the measured or estimated wavefront phase information to the estimator, thus eliminating the actuator modes that are not observable in the wavefront sensing process.
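One standard remedy for ill-conditioning of this kind, shown here as a hedged sketch rather than the paper's actual compensator, is to rescale the sensitivity-matrix columns to comparable norms before inverting; the matrix sizes and response magnitudes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy sensitivity matrix: 100 wavefront samples; 5 surface-deforming actuators
# with strong response and 2 rigid-body actuators with very weak response
S = np.hstack([rng.normal(size=(100, 5)), 1e-6 * rng.normal(size=(100, 2))])
print(np.linalg.cond(S))             # huge: mixed actuator types

# Rescale columns to unit norm before inverting, then undo the scaling
scale = np.linalg.norm(S, axis=0)
S_bal = S / scale
print(np.linalg.cond(S_bal))         # modest

wf = rng.normal(size=100)                    # measured wavefront vector
cmd = (np.linalg.pinv(S_bal) @ wf) / scale   # actuator commands
print(np.linalg.norm(wf - S @ cmd))          # least-squares residual
```

The balanced matrix is inverted stably, and the scaling is undone afterwards so the commands are expressed in each actuator's native units.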
Real-time blind deconvolution of retinal images in adaptive optics scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2011-06-01
With the use of adaptive optics (AO), ocular aberrations can be compensated to obtain high-resolution images of the living human retina. However, the wavefront correction is not perfect due to wavefront measurement error and hardware restrictions. Thus, it is necessary to use a deconvolution algorithm to recover the retinal images. In this paper, a blind deconvolution technique called the incremental Wiener filter is used to restore adaptive optics confocal scanning laser ophthalmoscope (AOSLO) images. The point-spread function (PSF) measured by the wavefront sensor is only used as an initial value for our algorithm. We also implement the incremental Wiener filter on a graphics processing unit (GPU) in real time. When the image size is 512 × 480 pixels, six iterations of our algorithm take only about 10 ms. Retinal blood vessels as well as cells in retinal images are restored by our algorithm, and the PSFs are also revised. Retinal images with and without adaptive optics are both restored. The results show that the incremental Wiener filter reduces noise and improves image quality.
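The classical (non-incremental) Wiener filter underlying such schemes can be sketched in the frequency domain; the blur, object, and `nsr` constant below are illustrative values, not parameters from the paper:

```python
import numpy as np

n = 128
rng = np.random.default_rng(4)
f = np.fft.fftfreq(n)
ky, kx = np.meshgrid(f, f, indexing="ij")
k2 = kx**2 + ky**2

# Smooth periodic test object and a Gaussian blur OTF
obj = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * np.exp(-300 * k2)))
H = np.exp(-500 * k2)
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * H))

# Classical Wiener deconvolution; nsr > 0 bounds the gain where the OTF is
# tiny, which is what prevents noise amplification on real (noisy) data
nsr = 1e-4
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

rms = lambda a: np.sqrt(((a - obj) ** 2).mean())
print(rms(blurred), rms(restored))   # restoration reduces the error
```

An incremental or blind variant additionally updates the PSF estimate between iterations, which is what lets the measured wavefront-sensor PSF serve only as a starting point.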
Implementation of a Wavefront-Sensing Algorithm
NASA Technical Reports Server (NTRS)
Smith, Jeffrey S.; Dean, Bruce; Aronstein, David
2013-01-01
A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.
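A minimal iterative-transform (Gerchberg-Saxton-style) phase-retrieval loop, far simpler than the adaptive-diversity algorithm this software implements, illustrates the basic pupil-plane/focal-plane alternation; the aperture and aberration below are illustrative:

```python
import numpy as np

n = 64
yy, xx = (np.mgrid[0:n, 0:n] - n // 2) / (n // 4)
pupil = ((xx**2 + yy**2) <= 1.0).astype(float)      # circular aperture
true_phase = 0.5 * xx + 0.8 * (xx**2 + yy**2)       # tilt + defocus (rad)
meas_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))

# Alternate between the measured focal-plane modulus and the pupil constraint
est = pupil.astype(complex)                         # start with a flat phase
errs = []
for _ in range(200):
    F = np.fft.fft2(est)
    errs.append(np.sqrt(((np.abs(F) - meas_amp) ** 2).mean()))
    F = meas_amp * np.exp(1j * np.angle(F))         # impose measured modulus
    est = np.fft.ifft2(F)
    est = pupil * np.exp(1j * np.angle(est))        # impose unit pupil amplitude
print(errs[0], errs[-1])
```

Defocus diversity, as used by the actual software, replaces the single focal-plane constraint with several defocused intensity planes, which greatly improves robustness and capture range.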
Adaptive Optics for the Thirty Meter Telescope
NASA Astrophysics Data System (ADS)
Ellerbroek, Brent
2013-12-01
This paper provides an overview of the progress made since the last AO4ELT conference towards developing the first-light AO architecture for the Thirty Meter Telescope (TMT). The Preliminary Design of the facility AO system NFIRAOS has been concluded by the Herzberg Institute of Astrophysics. Work on the client Infrared Imaging Spectrograph (IRIS) has progressed in parallel, including a successful Conceptual Design Review and prototyping of On-Instrument WFS (OIWFS) hardware. Progress on the design for the Laser Guide Star Facility (LGSF) continues at the Institute of Optics and Electronics in Chengdu, China, including the final acceptance of the Conceptual Design and modest revisions for the updated TMT telescope structure. Design and prototyping activities continue for lasers, wavefront sensing detectors, detector readout electronics, real-time control (RTC) processors, and deformable mirrors (DMs) with their associated drive electronics. Highlights include development of a prototype sum frequency guide star laser at the Technical Institute of Physics and Chemistry (Beijing); fabrication/test of prototype natural- and laser-guide star wavefront sensor CCDs for NFIRAOS by MIT Lincoln Laboratory and W.M. Keck Observatory; a trade study of RTC control algorithms and processors, with prototyping of GPU and FPGA architectures by TMT and the Dominion Radio Astrophysical Observatory; and fabrication/test of a 6x60 actuator DM prototype by CILAS. Work with the University of British Columbia LIDAR is continuing, in collaboration with ESO, to measure the spatial/temporal variability of the sodium layer and characterize the sodium coupling efficiency of several guide star laser systems. AO performance budgets have been further detailed. Modeling topics receiving particular attention include performance vs. computational cost tradeoffs for RTC algorithms; optimizing performance of the tip/tilt, plate scale, and sodium focus tracking loops controlled by the NGS on-instrument wavefront sensors; sky coverage; PSF reconstruction for LGS MCAO; and precision astrometry for the galactic center and other observations.
Qualification of a Null Lens Using Image-Based Phase Retrieval
NASA Technical Reports Server (NTRS)
Bolcar, Matthew R.; Aronstein, David L.; Hill, Peter C.; Smith, J. Scott; Zielinski, Thomas P.
2012-01-01
In measuring the figure error of an aspheric optic using a null lens, the wavefront contribution from the null lens must be independently and accurately characterized in order to isolate the optical performance of the aspheric optic alone. Various techniques can be used to characterize such a null lens, including interferometry, profilometry and image-based methods. Only image-based methods, such as phase retrieval, can measure the null-lens wavefront in situ: in single pass, at the same conjugates and in the same alignment state in which the null lens will ultimately be used, with no additional optical components. Due to the intended purpose of a null lens (e.g., to null a large aspheric wavefront with a near-equal-but-opposite spherical wavefront), characterizing a null-lens wavefront presents several challenges to image-based phase retrieval: large wavefront slopes and high-dynamic-range data decrease the capture range of phase-retrieval algorithms, increase the requirements on the fidelity of the forward model of the optical system, and make it difficult to extract diagnostic information (e.g., the system F/#) from the image data. In this paper, we present a study of these effects on phase-retrieval algorithms in the context of a null lens used in component development for the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. Approaches for mitigation are also discussed.
Real-Time Wavefront Control for the PALM-3000 High Order Adaptive Optics System
NASA Technical Reports Server (NTRS)
Truong, Tuan N.; Bouchez, Antonin H.; Dekany, Richard G.; Guiwits, Stephen R.; Roberts, Jennifer E.; Troy, Mitchell
2008-01-01
We present a cost-effective, scalable real-time wavefront control architecture based on off-the-shelf graphics processing units hosted in an ultra-low latency, high-bandwidth interconnect PC cluster environment, composed of modules written in the component-oriented language nesC. The architecture enables full-matrix reconstruction of the wavefront at up to 2 kHz with latency under 250 µs for the PALM-3000 adaptive optics system, a state-of-the-art upgrade to the 5.1-meter Hale Telescope that consists of a 64 x 64 subaperture Shack-Hartmann wavefront sensor and a 3368-active-actuator high-order deformable mirror in series with a 241-active-actuator tweeter DM. The architecture can easily scale up to support much larger AO systems at higher rates and lower latency.
Du, Yongzhao; Fu, Yuqing; Zheng, Lixin
2016-12-20
A real-time complex amplitude reconstruction method for determining the dynamic beam quality M2 factor, based on a Mach-Zehnder self-referencing interferometer wavefront sensor, is developed. With the proposed complex amplitude reconstruction method, full characterization of the laser beam, including amplitude (intensity profile) and phase information, can be recovered from a single interference pattern with the Fourier fringe-pattern analysis method in a one-shot measurement. With the reconstructed complex amplitude, the beam field at any position z along the propagation direction can be obtained using diffraction integral theory. The beam quality M2 factor of the dynamic beam is then calculated according to the method specified in the ISO 11146 standard. The feasibility of the proposed method is demonstrated with theoretical analysis and experiments, including static and dynamic beam processes. The method is simple and fast, operates without movable parts, and makes it possible to investigate laser beams under conditions that are inaccessible to existing methods.
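The Fourier fringe-pattern analysis step (Takeda's method) can be sketched as follows; the carrier frequency, test wavefront, and filter width are illustrative values:

```python
import numpy as np

n = 256
y, x = np.mgrid[0:n, 0:n] / n
phase = 3.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)  # wavefront (rad)
f0 = 32                                  # carrier frequency, cycles per frame
I = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * x + phase)  # single interferogram

# Takeda's method: isolate the +f0 sideband, demodulate, take the angle
Fs = np.fft.fftshift(np.fft.fft2(I))
c, w = n // 2, 16
side = np.zeros_like(Fs)
side[:, c + f0 - w : c + f0 + w] = Fs[:, c + f0 - w : c + f0 + w]
analytic = np.fft.ifft2(np.fft.ifftshift(side))
recovered = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))
print(np.abs(recovered - phase).max())   # small residual
```

Because both amplitude and phase come from a single frame, the same machinery supports one-shot characterization of a dynamic beam.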
Transmitted wavefront testing with large dynamic range based on computer-aided deflectometry
NASA Astrophysics Data System (ADS)
Wang, Daodang; Xu, Ping; Gong, Zhidong; Xie, Zhongmin; Liang, Rongguang; Xu, Xinke; Kong, Ming; Zhao, Jun
2018-06-01
The transmitted wavefront testing technique is demanded for the performance evaluation of transmission optics and transparent glass, in which the achievable dynamic range is a key issue. A computer-aided deflectometric testing method with fringe projection is proposed for the accurate testing of transmitted wavefronts with a large dynamic range. Ray tracing of the modeled testing system is carried out to achieve the virtual ‘null’ testing of transmitted wavefront aberrations. The ray aberration is obtained from the ray tracing result and measured slope, with which the test wavefront aberration can be reconstructed. To eliminate testing system modeling errors, a system geometry calibration based on computer-aided reverse optimization is applied to realize accurate testing. Both numerical simulation and experiments have been carried out to demonstrate the feasibility and high accuracy of the proposed testing method. The proposed testing method can achieve a large dynamic range compared with the interferometric method, providing a simple, low-cost and accurate way for the testing of transmitted wavefronts from various kinds of optics and a large amount of industrial transmission elements.
Quantitative phase imaging using a programmable wavefront sensor
NASA Astrophysics Data System (ADS)
Soldevila, F.; Durán, V.; Clemente, P.; Lancis, J.; Tajahuerce, E.
2018-02-01
We perform phase imaging using a non-interferometric approach to measure the complex amplitude of a wavefront. We overcome the limitations in spatial resolution, optical efficiency, and dynamic range that are found in Shack-Hartmann wavefront sensing. To do so, we sample the wavefront with a high-speed spatial light modulator. A single lens forms a time-dependent light distribution on its focal plane, where a position detector is placed. Our approach is lenslet-free and does not rely on any kind of iterative or unwrap algorithm. The validity of our technique is demonstrated by performing both aberration sensing and phase imaging of transparent samples.
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
NASA Astrophysics Data System (ADS)
Huang, Lei; Zhou, Chenlu; Gong, Mali; Ma, Xingkun; Bian, Qi
2016-07-01
The deformable mirror (DM) is a widely used wavefront corrector in adaptive optics systems, especially in astronomy, imaging, and laser optics. A new DM structure, the 3D DM, is proposed; it has removable actuators and can correct different aberrations with different actuator arrangements. A 3D DM consists of several reflection mirrors, each with a single actuator and independent of the others. Two actuator-arrangement algorithms are compared: the random disturbance algorithm (RDA) and the global arrangement algorithm (GAA). The correction effects of the two algorithms are analyzed and compared through numerical simulation. The simulation results show that a 3D DM with removable actuators can clearly improve the correction effects.
NASA Astrophysics Data System (ADS)
Meng, X. F.; Peng, X.; Cai, L. Z.; Li, A. M.; Gao, Z.; Wang, Y. R.
2009-08-01
A hybrid cryptosystem is proposed, in which one image is encrypted to two interferograms with the aid of double random-phase encoding (DRPE) and two-step phase-shifting interferometry (2-PSI), then three pairs of public-private keys are utilized to encode and decode the session keys (geometrical parameters, the second random-phase mask) and interferograms. In the stage of decryption, the ciphered image can be decrypted by wavefront reconstruction, inverse Fresnel diffraction, and real amplitude normalization. This approach can successfully solve the problem of key management and dispatch, resulting in increased security strength. The feasibility of the proposed cryptosystem and its robustness against some types of attack are verified and analyzed by computer simulations.
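The two-step phase-shifting wavefront-reconstruction step can be illustrated in the simplified case where both the reference and object amplitudes are known (a sketch of the 2-PSI principle, not the paper's full cryptosystem):

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n] / n
phi = 2.0 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)   # object wavefront (rad)
Ao, Ar = 1.0, 1.0                                           # object/reference amplitudes

# Two interferograms recorded with phase shifts 0 and pi/2
I1 = Ar**2 + Ao**2 + 2 * Ar * Ao * np.cos(phi)
I2 = Ar**2 + Ao**2 + 2 * Ar * Ao * np.cos(phi - np.pi / 2)  # cos(phi-pi/2)=sin(phi)

cosphi = (I1 - Ar**2 - Ao**2) / (2 * Ar * Ao)
sinphi = (I2 - Ar**2 - Ao**2) / (2 * Ar * Ao)
phi_rec = np.arctan2(sinphi, cosphi)
print(np.abs(phi_rec - phi).max())   # exact up to floating point
```

In the cryptosystem the reconstructed wavefront is then propagated back by inverse Fresnel diffraction and amplitude-normalized to recover the plaintext image.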
Method and apparatus for wavefront sensing
Bahk, Seung-Whan
2016-08-23
A method of measuring characteristics of a wavefront of an incident beam includes obtaining an interferogram associated with the incident beam passing through a transmission mask and Fourier transforming the interferogram to provide a frequency domain interferogram. The method also includes selecting a subset of harmonics from the frequency domain interferogram, individually inverse Fourier transforming each of the subset of harmonics to provide a set of spatial domain harmonics, and extracting a phase profile from each of the set of spatial domain harmonics. The method further includes removing phase discontinuities in the phase profile, rotating the phase profile, and reconstructing a phase front of the wavefront of the incident beam.
Generation of atmospheric wavefronts using binary micromirror arrays.
Anzuola, Esdras; Belmonte, Aniceto
2016-04-10
To simulate in the laboratory the influence that a turbulent atmosphere has on light beams, we introduce a practical method for generating atmospheric wavefront distortions that considers digital holographic reconstruction using a programmable binary micromirror array. We analyze the efficiency of the approach for different configurations of the micromirror array and experimentally demonstrate the benchtop technique. Though the mirrors on the digital array can only be positioned in one of two states, we show that the holographic technique can be used to devise a wide variety of atmospheric wavefront aberrations in a controllable and predictable way for a fraction of the cost of phase-only spatial light modulators.
Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fijany, A.; Milman, M.; Redding, D.
1994-12-31
In this paper, massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, the large size of the system and the high sampling-rate requirement make the implementation of this control algorithm computationally challenging, since it demands a sustained throughput on the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
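The reduction to a discrete Poisson equation means any fast Poisson solver applies. The sketch below uses the standard sine-eigenbasis (DST) diagonalization of the 5-point Laplacian, not the paper's Fast Invariant Imbedding algorithm; grid size and boundary conditions are illustrative assumptions.

```python
import numpy as np

def poisson_solve(rhs, h=1.0):
    """Solve the 5-point discrete Poisson equation -Laplacian(u) = rhs on an
    n-by-n interior grid with zero Dirichlet boundaries, by expanding in the
    sine eigenbasis of the 1-D Laplacian (dense matrices here; an FFT-based
    sine transform makes the same step O(n^2 log n))."""
    n = rhs.shape[0]
    j = np.arange(1, n + 1)
    S = np.sin(np.pi * np.outer(j, j) / (n + 1))       # DST-I basis
    lam = 2.0 - 2.0 * np.cos(np.pi * j / (n + 1))      # 1-D eigenvalues
    denom = (lam[:, None] + lam[None, :]) / h**2       # eigenvalues of -Lap.
    rhat = S @ rhs @ S                                 # forward transform
    return (2.0 / (n + 1))**2 * (S @ (rhat / denom) @ S)

# Verify against a 5-point Laplacian applied to a random interior field.
rng = np.random.default_rng(1)
u = np.zeros((33, 33))
u[1:-1, 1:-1] = rng.random((31, 31))
rhs = (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
       - u[1:-1, :-2] - u[1:-1, 2:])                   # -Lap(u) with h = 1
```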
End-to-end learning for digital hologram reconstruction
NASA Astrophysics Data System (ADS)
Xu, Zhimin; Zuo, Si; Lam, Edmund Y.
2018-02-01
Digital holography is a well-known method to perform three-dimensional imaging by recording the light wavefront information originating from the object. Not only the intensity, but also the phase distribution of the wavefront can then be computed from the recorded hologram in the numerical reconstruction process. However, reconstructions via traditional methods suffer from various artifacts caused by the twin image, the zero-order term, and image-sensor noise. Here we demonstrate that an end-to-end deep neural network (DNN) can learn to perform both intensity and phase recovery directly from an intensity-only hologram. We experimentally show that the artifacts can be effectively suppressed. Meanwhile, our network needs no preprocessing for initialization and is comparably fast to train and test relative to a recently published learning-based method. In addition, we validate that a further performance improvement can be achieved by introducing a sparsity prior.
Optimal reconstruction for closed-loop ground-layer adaptive optics with elongated spots.
Béchet, Clémentine; Tallon, Michel; Tallon-Bosc, Isabelle; Thiébaut, Éric; Le Louarn, Miska; Clare, Richard M
2010-11-01
The design of the laser-guide-star-based adaptive optics (AO) systems for the Extremely Large Telescopes requires careful study of the issue of elongated spots produced on Shack-Hartmann wavefront sensors. The importance of a correct modeling of the nonuniformity and correlations of the noise induced by this elongation has already been demonstrated for wavefront reconstruction. We report here on the first (to our knowledge) end-to-end simulations of closed-loop ground-layer AO with laser guide stars with such an improved noise model. The results are compared with the level of performance predicted by a classical noise model for the reconstruction. The performance is studied in terms of ensquared energy and confirms that, thanks to the improved noise model, central or side launching of the lasers does not affect the performance with respect to the laser guide stars' flux. These two launching schemes also perform similarly whatever the atmospheric turbulence strength.
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and the optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line improvement of average visual acuity over the full-magnitude correction and the correction by Guirao, given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
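The SPGD update itself is simple to state: perturb all coefficients at once and step along the perturbation in proportion to the observed change in the metric. A minimal sketch, with a toy quadratic metric standing in for the log visual Strehl metric of the paper; gains and perturbation sizes are invented:

```python
import numpy as np

def spgd_maximize(metric, x0, gain=0.3, perturb=0.1, iters=500, seed=0):
    """Stochastic parallel gradient ascent: perturb all parameters
    simultaneously by a random +/-perturb vector, then step along the
    perturbation scaled by the observed metric change."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        delta = perturb * rng.choice([-1.0, 1.0], size=x.size)
        dj = metric(x + delta) - metric(x - delta)   # two-sided estimate
        x += gain * dj * delta
    return x

# Toy metric with its peak at coeffs == target (stand-in for image quality).
target = np.array([0.5, -1.0, 0.25])
metric = lambda c: -np.sum((c - target) ** 2)
best = spgd_maximize(metric, np.zeros(3))
```

SPGD needs only metric evaluations, not gradients, which is why it suits experimentally measured image-quality metrics.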
Phase Contrast Wavefront Sensing for Adaptive Optics
NASA Technical Reports Server (NTRS)
Bloemhof, E. E.; Wallace, J. K.; Bloemhof, E. E.
2004-01-01
Most ground-based adaptive optics systems use one of a small number of wavefront sensor technologies, notably (for relatively high-order systems) the Shack-Hartmann sensor, which provides local measurements of the phase slope (first derivative) at a number of regularly spaced points across the telescope pupil. The curvature sensor, with response proportional to the second derivative of the phase, is also sometimes used, but has undesirable noise-propagation properties during wavefront reconstruction as the number of actuators becomes large. It is interesting to consider, for astronomical adaptive optics, the "phase contrast" technique originally developed for microscopy by Zernike to allow convenient viewing of phase objects. In this technique, the wavefront sensor provides a direct measurement of the local value of the phase in each sub-aperture of the pupil. This approach has some obvious disadvantages compared to Shack-Hartmann wavefront sensing, but some less obvious yet substantial advantages as well. Here we evaluate the relative merits in a practical ground-based adaptive optics system.
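The direct phase measurement claimed for phase contrast is easy to verify numerically. The sketch below applies a pi/2 phase shift to the zero-order Fourier component of a weak phase object, after which the detected intensity is approximately 1 + 2*phi; the phase object itself is invented.

```python
import numpy as np

# Zernike phase-contrast sketch: phase-shift the DC (zero-order) component
# of the pupil field by pi/2; for a weak phase object the detected
# intensity becomes approximately 1 + 2*phi.
n = 128
y, x = np.mgrid[:n, :n] / n
phi = 0.05 * np.sin(2 * np.pi * x) * np.cos(4 * np.pi * y)  # weak phase object
field = np.exp(1j * phi)

spec = np.fft.fft2(field)
spec[0, 0] *= 1j                      # pi/2 "phase dot" on the zero order
intensity = np.abs(np.fft.ifft2(spec)) ** 2
est = (intensity - 1.0) / 2.0         # linearized local phase estimate
```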
Adaptive optics; Proceedings of the Meeting, Arlington, VA, April 10, 11, 1985
NASA Astrophysics Data System (ADS)
Ludman, J. E.
Papers are presented on the directed energy program for ballistic missile defense, a self-referencing wavefront interferometer for laser sources, the effects of mirror grating distortions on diffraction spots at wavefront sensors, and the optical design of an all-reflecting, high-resolution camera for active-optics on ground-based telescopes. Also considered are transverse coherence length observations, time dependent statistics of upper atmosphere optical turbulence, high altitude acoustic soundings, and the Cramer-Rao lower bound on wavefront sensor error. Other topics include wavefront reconstruction from noisy slope or difference data using the discrete Fourier transform, acoustooptic adaptive signal processing, the recording of phase deformations on a PLZT wafer for holographic and spatial light modulator applications, and an optical phase reconstructor using a multiplier-accumulator approach. Papers are also presented on an integrated optics wavefront measurement sensor, a new optical preprocessor for automatic vision systems, a model for predicting infrared atmospheric emission fluctuations, and optical logic gates and flip-flops based on polarization-bistable semiconductor lasers.
NASA Astrophysics Data System (ADS)
Schramm, Stefan; Schikowski, Patrick; Lerm, Elena; Kaeding, André; Haueisen, Jens; Baumgarten, Daniel
2016-07-01
Objective measurement of straylight in the human eye with a Shack-Hartmann (SH) wavefront aberrometer is limited in imaging angle. We propose a measurement principle and a point spread function (PSF) reconstruction algorithm to overcome this limitation. In our optical setup, a variable stop replaces the stop conventionally used to suppress reflections and scatter in SH aberrometers. We record images with 21 diameters of the stop. From each SH image, the average intensity of the pupil is computed and normalized. The intensities represent integral values of the PSF. We reconstruct the PSF, which is the derivative of the intensities with respect to the visual angle. A modified Stiles-Holladay approximation is fitted to the reconstructed PSF, resulting in a straylight parameter. A proof-of-principle study was carried out on eight healthy young volunteers. Scatter filters were positioned in front of the volunteers' eyes to simulate straylight. The straylight parameter was compared to the C-Quant measurements and the filter values. The PSF parameter shows strong correlation with the density of the filters and a linear relation to the C-Quant straylight parameter. Our measurement and reconstruction techniques allow for objective straylight analysis of visual angles up to 4 deg.
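The reconstruction step (the PSF as the derivative of the stop-integrated intensities with respect to visual angle) can be illustrated with a hypothetical inverse-square straylight falloff. The 21 stop sizes mirror the paper's 21 stop diameters, but the numbers are otherwise invented.

```python
import numpy as np

# Each stop integrates the PSF out to its radius, so the intensity values
# are samples of the PSF's integral; differentiating with respect to the
# visual angle recovers the PSF (simple central differences suffice here).
angles = np.linspace(0.1, 4.0, 21)          # stop radii in degrees (21 stops)
true_psf = 1.0 / angles**2                  # hypothetical straylight falloff
intensity = np.cumsum(true_psf) * (angles[1] - angles[0])  # integral values
psf_rec = np.gradient(intensity, angles)    # derivative w.r.t. visual angle
```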
Optimization of wavefront coding imaging system using heuristic algorithms
NASA Astrophysics Data System (ADS)
González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.
2017-08-01
Wavefront Coding (WFC) systems make use of an aspheric Phase Mask (PM) and digital image processing to extend the Depth of Field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a point spread function (PSF) that is nearly defocus-invariant. In this paper, the optimization of the phase-deviation parameter is done by means of genetic algorithms (GAs). Here, the merit function minimizes the mean square error (MSE) between the diffraction-limited Modulation Transfer Function (MTF) and the MTF of the wavefront-coded system at different amounts of misfocus. WFC systems were simulated using the cubic, trefoil, and 4-Zernike-polynomial phase masks. Numerical results show defocus invariance in all cases. Nevertheless, the best results are obtained using the trefoil phase mask, because its decoded image is almost free of artifacts.
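A minimal version of the optimization loop can be sketched with a 1-D pupil and a tiny real-coded GA. The defocus values, GA settings, and restriction to a cubic mask are illustrative assumptions; the merit here compares the coded in-focus and defocused MTFs with each other, rather than with the diffraction-limited MTF as in the paper.

```python
import numpy as np

def mtf_1d(alpha, psi, n=256):
    """|MTF| of a 1-D pupil with cubic phase alpha*u^3 and defocus psi*u^2."""
    u = np.linspace(-1, 1, n)
    pupil = np.exp(1j * (alpha * u**3 + psi * u**2))
    psf = np.abs(np.fft.fft(pupil, 4 * n)) ** 2
    mtf = np.abs(np.fft.fft(psf))
    return mtf / mtf[0]

def merit(alpha):
    """MSE between in-focus and defocused MTFs: small = defocus-invariant."""
    return sum(np.mean((mtf_1d(alpha, 0.0) - mtf_1d(alpha, psi)) ** 2)
               for psi in (5.0, 10.0))

def ga_minimize(f, lo, hi, pop=20, gens=30, seed=0):
    """Tiny real-coded GA: tournament selection, blend crossover, mutation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = np.array([f(v) for v in x])
        parents = np.array([x[min(rng.integers(0, pop, 2),
                                  key=lambda i: fit[i])] for _ in range(pop)])
        w = rng.random(pop)
        x = w * parents + (1 - w) * parents[rng.permutation(pop)]  # crossover
        x += rng.normal(0.0, 0.05 * (hi - lo), pop)                # mutation
        x = np.clip(x, lo, hi)
    return x[np.argmin([f(v) for v in x])]

best = ga_minimize(merit, 0.0, 60.0)   # optimize the phase-deviation parameter
```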
Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V
2015-02-01
Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.
TRL-6 for JWST wavefront sensing and control
NASA Astrophysics Data System (ADS)
Feinberg, Lee D.; Dean, Bruce H.; Aronstein, David L.; Bowers, Charles W.; Hayden, William; Lyon, Richard G.; Shiri, Ron; Smith, J. Scott; Acton, D. Scott; Carey, Larkin; Contos, Adam; Sabatke, Erin; Schwenker, John; Shields, Duncan; Towell, Tim; Shi, Fang; Meza, Luis
2007-09-01
NASA's Technology Readiness Level (TRL)-6 is documented for the James Webb Space Telescope (JWST) Wavefront Sensing and Control (WFSC) subsystem. The WFSC subsystem is needed to align the Optical Telescope Element (OTE) after all deployments have occurred, and achieves that requirement through a robust commissioning sequence consisting of unique commissioning algorithms, all of which are part of the WFSC algorithm suite. This paper identifies the technology need, algorithm heritage, describes the finished TRL-6 design platform, and summarizes the TRL-6 test results and compliance. Additionally, the performance requirements needed to satisfy JWST science goals as well as the criterion that relate to the TRL-6 Testbed Telescope (TBT) performance requirements are discussed.
Self-interference digital holography with a geometric-phase hologram lens.
Choi, KiHong; Yim, Junkyu; Yoo, Seunghwi; Min, Sung-Wook
2017-10-01
Self-interference digital holography (SIDH) is actively studied because hologram acquisition under incoherent illumination is possible. The key component in this system is the wavefront modulating optics, which splits an incoming object wave into two different wavefront curvatures. In this Letter, the geometric-phase hologram lens is introduced into the SIDH system to act as a polarization-sensitive wavefront modulator and a single-path beam splitter. This special optic offers several attractive features: high transparency, a modulation efficiency of up to 99%, a thickness of only a few millimeters, and a flat structure. The demonstration system is devised, and the numerical reconstruction results from an acquired complex hologram are presented.
Wavefront Control Testbed (WCT) Experiment Results
NASA Technical Reports Server (NTRS)
Burns, Laura A.; Basinger, Scott A.; Campion, Scott D.; Faust, Jessica A.; Feinberg, Lee D.; Hayden, William L.; Lowman, Andrew E.; Ohara, Catherine M.; Petrone, Peter P., III
2004-01-01
The Wavefront Control Testbed (WCT) was created to develop and test wavefront sensing and control algorithms and software for the segmented James Webb Space Telescope (JWST). Last year, we changed the system configuration from three sparse aperture segments to a filled aperture with three pie-shaped segments. With this upgrade we have performed experiments on fine phasing with line-of-sight and segment-to-segment jitter; dispersed fringe visibility and grism angle; high dynamic range tilt sensing; coarse phasing with large aberrations; and sampled sub-aperture testing. This paper reviews the results of these experiments.
Improved particle position accuracy from off-axis holograms using a Chebyshev model.
Öhman, Johan; Sjödahl, Mikael
2018-01-01
Side scattered light from micrometer-sized particles is recorded using an off-axis digital holographic setup. From holograms, a volume is reconstructed with information about both intensity and phase. Finding particle positions is non-trivial, since poor axial resolution elongates particles in the reconstruction. To overcome this problem, the reconstructed wavefront around a particle is used to find the axial position. The method is based on the change in the sign of the curvature around the true particle position plane. The wavefront curvature is directly linked to the phase response in the reconstruction. In this paper we propose a new method of estimating the curvature based on a parametric model. The model is based on Chebyshev polynomials and is fit to the phase anomaly and compared to a plane wave in the reconstructed volume. From the model coefficients, it is possible to find particle locations. Simulated results show increased performance in the presence of noise, compared to the use of finite difference methods. The standard deviation is decreased from 3-39 μm to 6-10 μm for varying noise levels. Experimental results show a corresponding improvement where the standard deviation is decreased from 18 μm to 13 μm.
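The benefit of differentiating a fitted Chebyshev model rather than the raw samples is easy to demonstrate on a noisy 1-D phase profile; the profile, noise level, and model order below are invented, not the paper's values.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit a low-order Chebyshev model to a noisy phase profile and differentiate
# the model: the fitted curvature is far less noisy than finite differences.
rng = np.random.default_rng(2)
z = np.linspace(-1, 1, 200)
phase = 0.5 * z**2 - 0.2 * z              # hypothetical phase anomaly
noisy = phase + rng.normal(0.0, 0.02, z.size)

coeffs = C.chebfit(z, noisy, deg=4)       # least-squares Chebyshev fit
curv_model = C.chebval(z, C.chebder(coeffs, 2))    # model curvature
curv_fd = np.gradient(np.gradient(noisy, z), z)    # finite differences

true_curv = np.full_like(z, 1.0)          # exact d2/dz2 of 0.5 z^2 - 0.2 z
```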
Shear Wave Wavefront Mapping Using Ultrasound Color Flow Imaging.
Yamakoshi, Yoshiki; Kasahara, Toshihiro; Iijima, Tomohiro; Yuminaka, Yasushi
2015-10-01
A wavefront reconstruction method for a continuous shear wave is proposed. The method uses ultrasound color flow imaging (CFI) to detect the shear wave's wavefront. When the shear wave vibration frequency satisfies the required frequency condition and the displacement amplitude satisfies the displacement amplitude condition, zero and maximum flow velocities appear at the shear wave vibration phases of zero and π rad, respectively. These specific flow velocities produce the shear wave's wavefront map in CFI. An important feature of this method is that the shear wave propagation is observed in real time without addition of extra functions to the ultrasound imaging system. The experiments are performed using a 6.5 MHz CFI system. The shear wave is excited by a multilayer piezoelectric actuator. In a phantom experiment, the shear wave velocities estimated using the proposed method and those estimated using a system based on displacement measurement show good agreement.
Wavefront correction using machine learning methods for single molecule localization microscopy
NASA Astrophysics Data System (ADS)
Tehrani, Kayvan F.; Xu, Jianquan; Kner, Peter
2015-03-01
Optical aberrations are a major challenge in imaging biological samples. In particular, in single molecule localization (SML) microscopy techniques (STORM, PALM, etc.), a high-Strehl-ratio point spread function (PSF) is necessary to achieve sub-diffraction resolution. Distortions in the PSF shape directly reduce the resolution of SML microscopy. The system aberrations caused by imperfections in the optics and instruments can be compensated using Adaptive Optics (AO) techniques prior to imaging. However, aberrations caused by the biological sample, both static and dynamic, have to be dealt with in real time. A challenge for wavefront correction in SML microscopy is robust optimization in the presence of noise, because of the naturally high fluctuations in photon emission from single molecules. Here we demonstrate particle swarm optimization for real-time correction of the wavefront using an intensity-independent metric. We show that the particle swarm algorithm converges faster than the genetic algorithm for bright fluorophores.
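A bare-bones particle swarm loop of the kind compared here can be sketched as follows, with a toy quadratic metric standing in for the intensity-independent image metric; swarm size, inertia, and acceleration constants are illustrative assumptions.

```python
import numpy as np

def pso_maximize(metric, dim, n_particles=15, iters=80, seed=0):
    """Minimal particle swarm: each particle keeps its personal best and is
    pulled toward both it and the global best with random weights."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2, 2, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([metric(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.6 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.array([metric(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest

# Toy stand-in metric: peaks when the residual aberration coefficients are 0.
metric = lambda c: -np.sum(c ** 2)
best = pso_maximize(metric, dim=4)
```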
Melde, Kai; Mark, Andrew G; Qiu, Tian; Fischer, Peer
2016-09-22
Holographic techniques are fundamental to applications such as volumetric displays, high-density data storage and optical tweezers that require spatial control of intricate optical or acoustic fields within a three-dimensional volume. The basis of holography is spatial storage of the phase and/or amplitude profile of the desired wavefront in a manner that allows that wavefront to be reconstructed by interference when the hologram is illuminated with a suitable coherent source. Modern computer-generated holography skips the process of recording a hologram from a physical scene, and instead calculates the required phase profile before rendering it for reconstruction. In ultrasound applications, the phase profile is typically generated by discrete and independently driven ultrasound sources; however, these can only be used in small numbers, which limits the complexity or degrees of freedom that can be attained in the wavefront. Here we introduce monolithic acoustic holograms, which can reconstruct diffraction-limited acoustic pressure fields and thus arbitrary ultrasound beams. We use rapid fabrication to craft the holograms and achieve reconstruction degrees of freedom two orders of magnitude higher than commercial phased array sources. The technique is inexpensive, appropriate for both transmission and reflection elements, and scales well to higher information content, larger aperture size and higher power. The complex three-dimensional pressure and phase distributions produced by these acoustic holograms allow us to demonstrate new approaches to controlled ultrasonic manipulation of solids in water, and of liquids and solids in air. We expect that acoustic holograms will enable new capabilities in beam-steering and the contactless transfer of power, improve medical imaging, and drive new applications of ultrasound.
A novel algorithm for fast and efficient multifocus wavefront shaping
NASA Astrophysics Data System (ADS)
Fayyaz, Zahra; Nasiriavanaki, Mohammadreza
2018-02-01
Wavefront shaping using a spatial light modulator (SLM) is a popular method for focusing light through turbid media such as biological tissue. In iterative optimization methods, because of the very large number of SLM pixels, pixels are usually grouped into larger bins whose phase values are adjusted to obtain an optimum phase map, and hence a focus. In this study, an efficient optimization algorithm is proposed to obtain an arbitrary map of foci utilizing all the SLM pixels or small bin sizes. The application of this methodology in dermatology, hair removal in particular, is explored and discussed.
Precise calibration of pupil images in pyramid wavefront sensor.
Liu, Yong; Mu, Quanquan; Cao, Zhaoliang; Hu, Lifa; Yang, Chengliang; Xuan, Li
2017-04-20
The pyramid wavefront sensor (PWFS) is a novel wavefront sensor with several inspiring advantages compared with Shack-Hartmann wavefront sensors. The PWFS uses four pupil images to calculate the local tilt of the incoming wavefront. Pupil images are conjugated with a telescope pupil so that each pixel in the pupil image is diffraction-limited by the telescope pupil diameter, thus the sensing error of the PWFS is much lower than that of the Shack-Hartmann sensor and is related to the extraction and alignment accuracy of pupil images. However, precise extraction of these images is difficult to conduct in practice. Aiming at improving the sensing accuracy, we analyzed the physical model of calibration of a PWFS and put forward an extraction algorithm. The process was verified via a closed-loop correction experiment. The results showed that the sensing accuracy of the PWFS increased after applying the calibration and extraction method.
Sommargren, Gary E.
1999-01-01
An interferometer which has the capability of measuring optical elements and systems with an accuracy of λ/1000, where λ is the wavelength of visible light. Whereas current interferometers employ a reference surface, which inherently limits the accuracy of the measurement to about λ/50, this interferometer uses an essentially perfect spherical reference wavefront generated by the fundamental process of diffraction. Whereas current interferometers illuminate the optic to be tested with an aberrated wavefront, which also limits the accuracy of the measurement, this interferometer uses an essentially perfect spherical measurement wavefront generated by the fundamental process of diffraction. This interferometer is adjustable to give unity fringe visibility, which maximizes the signal-to-noise ratio, and has the means to introduce a controlled, prescribed relative phase shift between the reference wavefront and the wavefront from the optics under test, which permits analysis of the interference fringe pattern using standard phase extraction algorithms.
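The "standard phase extraction algorithms" referred to are typically N-step phase-shifting formulas; the common four-step variant is sketched below on an invented test wavefront.

```python
import numpy as np

# Four-step phase extraction: record the fringe pattern at reference phase
# shifts of 0, pi/2, pi and 3*pi/2, then recover the wrapped test phase
# from the four intensity frames.
N = 128
yy, xx = np.mgrid[:N, :N] / N
phi = 2 * np.pi * (xx**2 + 0.3 * yy)          # hypothetical wavefront under test
frames = [1.0 + 0.8 * np.cos(phi + s)         # fringe frames, visibility 0.8
          for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

# I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi), so:
wrapped = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])
```

The result is the phase modulo 2π; an unwrapping step then recovers the continuous wavefront.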
Holographic imaging with a Shack-Hartmann wavefront sensor.
Gong, Hai; Soloviev, Oleg; Wilding, Dean; Pozzi, Paolo; Verhaegen, Michel; Vdovin, Gleb
2016-06-27
A high-resolution Shack-Hartmann wavefront sensor has been used for coherent holographic imaging, by computer reconstruction and propagation of the complex field in a lensless imaging setup. The resolution of the images obtained with the experimental data is in good agreement with diffraction theory. Although proper calibration with a reference beam improves the image quality, the method has potential for reference-less holographic imaging with spatially coherent monochromatic and narrowband polychromatic sources in microscopy and in imaging through turbulence.
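The numerical propagation of the reconstructed complex field can be done with the standard angular-spectrum method, sketched below; wavelength, pixel pitch, and propagation distance are invented values.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z by multiplying its
    spatial-frequency spectrum with the free-space transfer function
    exp(i*kz*z); evanescent components are suppressed."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - fx2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Propagating forward then backward must return the original field.
rng = np.random.default_rng(3)
f0 = rng.random((64, 64)) * np.exp(2j * np.pi * rng.random((64, 64)))
f1 = angular_spectrum(f0, wavelength=0.6e-6, dx=5e-6, z=1e-3)
back = angular_spectrum(f1, wavelength=0.6e-6, dx=5e-6, z=-1e-3)
```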
NASA Technical Reports Server (NTRS)
Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver
2012-01-01
Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring the individual segments into the fine-phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm has recently been developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part describes the full details of the algorithm validation process on the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of the AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm, or <1%, algorithm accuracy is demonstrated.
Volume hologram with random encoded reference beam for secure data encryption
NASA Astrophysics Data System (ADS)
Markov, Vladimir B.; Weber, David C.; Trolinger, James D.
2000-04-01
A method is presented to store biometric and/or other important information on an ID card in the form of a Card Hologram that cannot be read or duplicated without the use of a special Key Hologram that is secured inside an automated reader. The Key Hologram produces the unique wavefront required to release the information contained in a complex, 3D diffraction pattern recorded in a volume hologram attached to the card. Experimental results are presented in which the image of an Air Force resolution target is recorded and reconstructed in a volume material using a random speckle wavefront; the image cannot be viewed using a simple wavefront such as a collimated or diverging laser beam.
Phase unwrapping in digital holography based on non-subsampled contourlet transform
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-01-01
In the digital holographic measurement of complex surfaces, phase unwrapping is a critical step for accurate reconstruction. The phases of the complex amplitudes calculated from interferometric holograms are disturbed by speckle noise, so reliable unwrapping results are difficult to obtain. Most existing unwrapping algorithms first apply denoising operations to obtain noise-free phases and then unwrap the phase pixel by pixel. This approach is sensitive to spikes and prone to unreliable results in practice. In this paper, a robust unwrapping algorithm based on the non-subsampled contourlet transform (NSCT) is developed. The multiscale and directional decomposition of the NSCT enhances the boundary between adjacent phase levels, so the influence of local noise can be eliminated in the transform domain. The wrapped phase map is segmented into several regions corresponding to different phase levels. Finally, an unwrapped phase map is obtained by elevating the phases of a whole segment instead of individual pixels, to avoid unwrapping errors caused by local spikes. This algorithm is suitable for dealing with complex and noisy wavefronts. Its universality and superiority in digital holographic interferometry have been demonstrated by both numerical analysis and practical experiments.
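The pixel-by-pixel unwrapping that segment-based methods improve upon is, in one dimension, the classical Itoh algorithm: add the multiple of 2π that keeps each successive phase difference in (−π, π]. A minimal sketch on an invented phase ramp (NumPy's `np.unwrap` implements the same rule):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh unwrapping: estimate the integer fringe-order jump at each
    sample from the wrapped differences and subtract its running sum."""
    d = np.diff(wrapped)
    jumps = np.round(d / (2 * np.pi))       # integer fringe-order jumps
    return wrapped - np.concatenate(([0.0], np.cumsum(2 * np.pi * jumps)))

x = np.linspace(0, 1, 400)
true_phase = 30 * x**2                      # spans several fringes
wrapped = np.angle(np.exp(1j * true_phase))
```

This works only when the true phase changes by less than π between samples, which is exactly the assumption that speckle spikes violate and that the segment-wise approach avoids.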
Observations of starburst galaxies: Science and supporting technology
NASA Astrophysics Data System (ADS)
Laag, Edward Aric
In chapter 1 we report on the development of wavefront reconstruction and control algorithms for multi-conjugate adaptive optics (MCAO) and the results of testing them in the laboratory under conditions that simulate an 8 meter class telescope. The UCO/Lick Observatory Laboratory for Adaptive Optics Multi-Conjugate testbed allows us to test wide field of view adaptive optics systems as they might be instantiated in the near future on giant telescopes. In particular, we have been investigating the performance of MCAO using five laser beacons for wavefront sensing and a minimum variance algorithm for control of two conjugate deformable mirrors. We have demonstrated improved Strehl ratio and enlarged field of view performance when compared to conventional AO techniques. We have demonstrated improved MCAO performance with the implementation of a routine that minimizes the generalized isoplanatism when turbulent layers do not correspond to deformable mirror conjugate altitudes. Finally, we have demonstrated suitability of the system for closed-loop operation when configured to feed back conditional mean estimates of wavefront residuals rather than the directly measured residuals. This technique has recently been referred to as the "pseudo-open-loop" control law in the literature. Chapter 2 introduces the Multi-wavelength Extreme Starburst Sample (MESS), a new catalog of 138 star-forming galaxies (0.1 < z < 0.3) optically selected from the SDSS using emission line strength diagnostics to have SFR ≥ 50 M⊙ yr^-1 based on a Kroupa IMF. The MESS was designed to complement samples of nearby star forming galaxies such as the luminous infrared galaxies (LIRGs), and ultraviolet luminous galaxies (UVLGs). Observations using the multiband imaging photometer (MIPS; 24, 70, and 160 μm channels) on the Spitzer Space Telescope indicate the MESS galaxies have IR luminosities similar to those of LIRGs, with an estimated median L_TIR ~ 3 × 10^11 L⊙.
The selection criteria for the MESS suggest that these galaxies may be less obscured than typical far-IR selected galaxies with similar estimated SFRs. We estimate the SFRs directly from the luminosities to determine the agreement between these methods for the MESS.
Near Real-Time Image Reconstruction
NASA Astrophysics Data System (ADS)
Denker, C.; Yang, G.; Wang, H.
2001-08-01
In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieving diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer, which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
Verstraete, Hans R. G. W.; Heisler, Morgan; Ju, Myeong Jin; Wahl, Daniel; Bliek, Laurens; Kalkman, Jeroen; Bonora, Stefano; Jian, Yifan; Verhaegen, Michel; Sarunic, Marinko V.
2017-01-01
In this report, which is an international collaboration of OCT, adaptive optics, and control research, we demonstrate the Data-based Online Nonlinear Extremum-seeker (DONE) algorithm to guide the image based optimization for wavefront sensorless adaptive optics (WFSL-AO) OCT for in vivo human retinal imaging. The ocular aberrations were corrected using a multi-actuator adaptive lens after linearization of the hysteresis in the piezoelectric actuators. The DONE algorithm succeeded in drastically improving image quality and the OCT signal intensity, up to a factor seven, while achieving a computational time of 1 ms per iteration, making it applicable for many high speed applications. We demonstrate the correction of five aberrations using 70 iterations of the DONE algorithm performed over 2.8 s of continuous volumetric OCT acquisition. Data acquired from an imaging phantom and in vivo from human research volunteers are presented. PMID:28736670
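The DONE optimizer itself builds a surrogate model of the image-quality metric; as a far simpler stand-in, the core sensorless-AO idea of maximizing an image metric over a few aberration modes can be sketched as follows. The Gaussian metric and the coordinate-wise parabolic search below are invented for illustration and are not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
true_ab = rng.normal(0, 0.5, 5)        # hidden aberrations in 5 modes

def metric(correction):
    """Proxy image-quality metric: peaks when the correction cancels
    the aberration (a stand-in for OCT signal intensity)."""
    residual = true_ab + correction
    return np.exp(-np.sum(residual**2))

# coordinate-wise parabolic search: probe each mode at -d, 0, +d and
# jump to the vertex of the fitted parabola (much simpler than the
# surrogate-model-based DONE optimizer)
corr = np.zeros(5)
d = 0.3
for _ in range(10):                    # a few sweeps over all 5 modes
    for i in range(5):
        probes = []
        for step in (-d, 0.0, d):
            c = corr.copy()
            c[i] += step
            probes.append(metric(c))
        m_minus, m0, m_plus = probes
        denom = m_minus - 2 * m0 + m_plus
        if denom < 0:                  # concave: move to the vertex
            corr[i] += 0.5 * d * (m_minus - m_plus) / denom
print(np.max(np.abs(corr + true_ab)) < 1e-3)  # True
```

Three metric evaluations per mode mirror the "few measurements per aberration" economy that makes sensorless AO fast, though DONE achieves it with one measurement per iteration.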
Light-field and holographic three-dimensional displays [Invited].
Yamaguchi, Masahiro
2016-12-01
A perfect three-dimensional (3D) display that satisfies all depth cues in human vision is possible if a light field can be reproduced exactly as it appeared when it emerged from a real object. The light field can be generated based on either light ray or wavefront reconstruction, with the latter known as holography. This paper first provides an overview of the advances of ray-based and wavefront-based 3D display technologies, including integral photography and holography, and the integration of those technologies with digital information systems. Hardcopy displays have already been used in some applications, whereas the electronic display of a light field is under active investigation. Next, a fundamental question in this technology field is addressed: what is the difference between ray-based and wavefront-based methods for light-field 3D displays? In considering this question, it is of particular interest to look at the technology of holographic stereograms. The phase information in holography contributes to the resolution of a reconstructed image, especially for deep 3D images. Moreover, issues facing the electronic display system of light fields are discussed, including the resolution of the spatial light modulator, the computational techniques of holography, and the speckle in holographic images.
High-Contrast Coronagraph Performance in the Presence of DM Actuator Defects
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Shaklan, Stuart; Cady, Eric
2015-01-01
Deformable Mirrors (DMs) are critical elements in high contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Occasionally DM actuators or their associated cables or electronics fail, requiring a wavefront control algorithm to compensate for actuators that may be displaced from their neighbors by hundreds of nanometers. We have carried out experiments on our High-Contrast Imaging Testbed (HCIT) to study the impact of failed actuators in partial fulfillment of the Terrestrial Planet Finder Coronagraph optical model validation milestone. We show that the wavefront control algorithm adapts to several broken actuators and maintains dark-hole contrast in broadband light.
NASA Astrophysics Data System (ADS)
Chen, Shanyong; Li, Shengyi; Wang, Guilin
2014-11-01
The wavefront error of large telescopes needs to be measured to check the system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary and so on. This is usually realized with a focal plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes because of the high cost and the technological difficulty of producing the large ACF. A subaperture test with a smaller ACF is hence proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF and measurement noise. Different error sources have different impacts on the wavefront error. Basically, the surface error of the ACF behaves like a systematic error, and the astigmatism will be accumulated and enlarged if the azimuth of the subapertures remains fixed. It is difficult to calibrate the ACF accurately because it suffers considerable deformation induced by gravity or mechanical clamping force. Therefore a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error. We suggest the ACF be rotated around the optical axis of the telescope for the subaperture test. The algorithm is also able to correct the subaperture tip-tilt based on the overlapping consistency. Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the real overlapping points can be recognized in this plane. Therefore lateral positioning error of the subapertures has no impact on the stitched wavefront. In contrast, the angular positioning error changes the azimuth of the ACF and hence the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which differs from the intuitive evenly spaced layout.
Finally, measurement noise can never be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in dealing with the surface error and misalignment of the ACF, and with noise suppression, which provides guidelines for the optomechanical design of the stitching test system.
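A minimal 1-D illustration of the overlapping-consistency idea behind the tip-tilt correction (all numbers hypothetical): two subaperture measurements of the same wavefront carry unknown piston/tilt offsets, and a least-squares fit on the overlap region aligns them before stitching.

```python
import numpy as np

# true wavefront sampled on a common grid (hypothetical test profile)
x = np.linspace(-1, 1, 101)
w = 0.3 * x**2 - 0.1 * x**3

# two subaperture measurements with unknown piston/tilt offsets
i1, i2 = slice(0, 60), slice(40, 101)        # 20-sample overlap
m1 = w[i1] + 0.05 + 0.20 * x[i1]             # piston 0.05, tilt 0.20
m2 = w[i2] - 0.08 + 0.35 * x[i2]

# correct subaperture 2 so it agrees with subaperture 1 on the overlap:
# find piston p and tilt t minimising ||(m2 + p + t*x) - m1|| there
ov1, ov2, xov = m1[40:60], m2[0:20], x[40:60]
A = np.column_stack([np.ones_like(xov), xov])
p, t = np.linalg.lstsq(A, ov1 - ov2, rcond=None)[0]
m2_corr = m2 + p + t * x[i2]

# stitched map reproduces the wavefront in subaperture 1's frame
stitched = np.concatenate([m1[:40], 0.5 * (m1[40:] + m2_corr[:20]),
                           m2_corr[20:]])
ref = w + 0.05 + 0.20 * x
print(np.allclose(stitched, ref))  # True
```

The paper's self-calibrated algorithm additionally separates the ACF surface error as a shared systematic term across rotated subapertures; this sketch only shows the per-subaperture alignment step.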
Solar tomography adaptive optics.
Ren, Deqing; Zhu, Yongtian; Zhang, Xi; Dou, Jiangpei; Zhao, Gang
2014-03-10
Conventional solar adaptive optics uses one deformable mirror (DM) and one guide star for wave-front sensing, which seriously limits high-resolution imaging over a large field of view (FOV). Recent progress toward multiconjugate adaptive optics indicates that wave-front distortion induced by atmospheric turbulence at different altitudes can be reconstructed by using multiple guide stars. To maximize the performance over a large FOV, we propose a solar tomography adaptive optics (TAO) system that uses tomographic wave-front information with a single DM. We show that by fully taking advantage of the knowledge of the three-dimensional wave-front distribution, a classical solar adaptive optics system with one DM can provide an extra performance gain for high-resolution imaging over a large FOV in the near infrared. The TAO will allow existing one-deformable-mirror solar adaptive optics systems to deliver better performance over a large FOV for high-resolution magnetic field investigations, where solar activities occur in a two-dimensional field up to 60'', and where the near infrared is superior to the visible in terms of magnetic field sensitivity.
NASA Astrophysics Data System (ADS)
Barr, David; Basden, Alastair; Dipper, Nigel; Schwartz, Noah; Vick, Andy; Schnetler, Hermine
2014-08-01
We present wavefront reconstruction acceleration of high-order AO systems using an Intel Xeon Phi processor. The Xeon Phi is a coprocessor providing many integrated cores and designed for accelerating compute intensive, numerical codes. Unlike other accelerator technologies, it allows virtually unchanged C/C++ to be recompiled to run on the Xeon Phi, giving the potential of making development, upgrade and maintenance faster and less complex. We benchmark the Xeon Phi in the context of AO real-time control by running a matrix vector multiply (MVM) algorithm. We investigate variability in execution time and demonstrate a substantial speed-up in loop frequency. We examine the integration of a Xeon Phi into an existing RTC system and show that performance improvements can be achieved with limited development effort.
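A back-of-the-envelope version of such an MVM benchmark, here in plain NumPy with invented problem sizes rather than the paper's Xeon Phi code, shows the two quantities an AO real-time controller cares about: mean latency and execution-time jitter.

```python
import time
import numpy as np

# hypothetical sizes for a high-order AO system: ~5000 slopes in,
# ~5000 actuator commands out (not the paper's actual dimensions)
n_slopes, n_acts = 5000, 5000
R = np.random.default_rng(1).standard_normal((n_acts, n_slopes)).astype(np.float32)
s = np.random.default_rng(2).standard_normal(n_slopes).astype(np.float32)

# time repeated reconstructions; jitter matters as much as the mean,
# since a single late frame degrades the AO loop
times = []
for _ in range(100):
    t0 = time.perf_counter()
    a = R @ s                      # the MVM reconstruction step
    times.append(time.perf_counter() - t0)
times = np.array(times)
print(f"mean {times.mean()*1e3:.2f} ms, "
      f"jitter (max-min) {(times.max()-times.min())*1e3:.2f} ms")
```

On an accelerator such as the Xeon Phi the same multiply would be distributed across cores; the point of the paper is that this can be done with essentially unchanged C/C++ code.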
Yue, Dan; Nie, Haitao; Li, Ye; Ying, Changsheng
2018-03-01
Wavefront sensorless (WFSless) adaptive optics (AO) systems have been widely studied in recent years. To reach optimum results, such systems require an efficient correction method. This paper presents a fast wavefront correction approach for a WFSless AO system based mainly on the linear phase diversity (PD) technique. The fast closed-loop control algorithm is built on the linear relationship between the drive voltages of the deformable mirror (DM) and the far-field images of the system, which is obtained through the linear PD algorithm combined with the influence function of the DM. A large number of phase screens under different turbulence strengths are simulated to test the performance of the proposed method. The numerical simulation results show that the method has a fast convergence rate and strong correction ability: a few correction iterations achieve good results and effectively improve the imaging quality of the system, while requiring fewer CCD measurements.
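The closed-loop principle, a gain matrix obtained by inverting a calibrated linear response between DM voltages and image features, can be sketched generically. The linear model, dimensions and gains below are invented; this is not the authors' PD-based estimator.

```python
import numpy as np

rng = np.random.default_rng(8)
n_meas, n_act = 20, 6

# hypothetical linear response: image features y = M (v + v_ab)
M = rng.normal(size=(n_meas, n_act))
v_ab = rng.normal(size=n_act)              # aberration in DM command space

def measure(v):
    """Far-field image features for DM command v (linearised model)."""
    return M @ (v + v_ab)

# calibrate the interaction matrix by poking each actuator once
probe = 0.1
IM = np.column_stack([(measure(probe * e) - measure(0 * e)) / probe
                      for e in np.eye(n_act)])
G = np.linalg.pinv(IM)                     # reconstructor

# closed loop: iterate with a loop gain below 1
v = np.zeros(n_act)
for _ in range(10):
    v -= 0.5 * (G @ measure(v))
print(np.linalg.norm(v + v_ab) < 1e-2)  # residual driven to zero: True
```

With a perfectly linear plant the residual contracts by the loop gain each iteration, which is why a few correction iterations suffice in the paper's simulations.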
NASA Astrophysics Data System (ADS)
Wei, Ping; Li, Xinyang; Luo, Xi; Li, Jianfeng
2018-02-01
The centroid method is commonly adopted to locate the spot in the sub-apertures of the Shack-Hartmann wavefront sensor (SH-WFS); the image must be preprocessed before calculating the spot location because the centroid method is extremely sensitive to noise. In this paper, the SH-WFS image was simulated according to the characteristics of the noise, background and intensity distribution. Optimal parameters of the SH-WFS image preprocessing method were derived for different signal-to-noise ratio (SNR) conditions, with the wavefront reconstruction error as the evaluation index. Two image preprocessing methods, thresholding and windowing combined with thresholding, were compared by studying the applicable range of SNR and analyzing the stability of the two methods.
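A minimal sketch of why the thresholding step matters for centroiding (synthetic spot; all parameters invented): a constant background biases the raw centre of gravity toward the frame centre, and subtracting a threshold removes that bias.

```python
import numpy as np

def spot_centroid(img, threshold):
    """Centre of gravity of a sub-aperture spot after thresholding.

    Pixels at or below `threshold` are zeroed and the threshold is
    subtracted elsewhere, so background far from the spot does not
    bias the centroid.
    """
    img = np.where(img > threshold, img - threshold, 0.0)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# synthetic sub-aperture: Gaussian spot at (12.3, 7.6) plus background
ys, xs = np.indices((16, 16))
spot = np.exp(-((xs - 12.3)**2 + (ys - 7.6)**2) / 2.0)
noisy = spot + 0.05                      # constant background offset

cx_raw, cy_raw = spot_centroid(noisy, 0.0)    # biased toward frame centre
cx_thr, cy_thr = spot_centroid(noisy, 0.05)   # background removed
print(abs(cx_raw - 12.3) > abs(cx_thr - 12.3))  # thresholding helps: True
```

Choosing the threshold (and window size) as a function of SNR, with the reconstruction error as the figure of merit, is exactly the optimization the paper carries out.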
Wavefront metrology for coherent hard X-rays by scanning a microsphere.
Skjønsfjell, Eirik Torbjørn Bakken; Chushkin, Yuriy; Zontone, Federico; Patil, Nilesh; Gibaud, Alain; Breiby, Dag W
2016-05-16
Characterization of the wavefront of an X-ray beam is of primary importance for all applications where coherence plays a major role. Imaging techniques based on numerically retrieving the phase from interference patterns often rely on an a priori assumption about the wavefront shape. In Coherent X-ray Diffraction Imaging (CXDI), a planar incoming wave field is often assumed for the inversion of the measured diffraction pattern, which allows retrieving the real-space image via a simple Fourier transformation. It is therefore important to know how reliably the plane-wave approximation describes the real wavefront. Here, we demonstrate that the quantitative wavefront shape and flux distribution of an X-ray beam used for CXDI can be measured by using a micrometer-size metal-coated polymer sphere, serving a role similar to the hole array in a Hartmann wavefront sensor. The method relies on monitoring the shape and center of the scattered intensity distribution in the far field using a 2D area detector while raster-scanning the microsphere with respect to the incoming beam. The reconstructed X-ray wavefront was found to have a well-defined central region of approximately 16 µm diameter and a weaker, asymmetric intensity distribution extending 30 µm from the beam center. The phase-front distortion was primarily spherical, with an effective radius of 0.55 m that matches the distance to the last upstream beam-defining slit, and could be accurately represented by Zernike polynomials.
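The final step, representing a predominantly spherical phase front by a defocus Zernike term and reading off an effective radius, can be sketched as follows. The paraxial sag model, grid and illuminated-region size are simplified assumptions, not the experiment's actual geometry.

```python
import numpy as np

# sample a spherical phase front of radius R over a circular region
R_true = 0.55                              # metres, as in the abstract
a = 15e-6                                  # 30 um illuminated region radius
ys, xs = np.mgrid[-1:1:101j, -1:1:101j]    # normalised pupil coordinates
rho2 = xs**2 + ys**2
pupil = rho2 <= 1.0
W = (a**2 * rho2) / (2 * R_true)           # paraxial sag of a sphere

# least-squares fit of low-order Zernike terms (piston, tilts, defocus)
basis = np.column_stack([np.ones(pupil.sum()), xs[pupil], ys[pupil],
                         2 * rho2[pupil] - 1.0])
coef = np.linalg.lstsq(basis, W[pupil], rcond=None)[0]

# invert the defocus coefficient c4 = a^2 / (4R) for the radius
R_fit = a**2 / (4 * coef[3])
print(abs(R_fit - R_true) < 1e-6)  # True
```

In the real measurement the wavefront samples come from the raster scan of the microsphere rather than from an analytic sag, and higher-order Zernike terms capture the residual asymmetry.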
Phase shifting diffraction interferometer
Sommargren, Gary E.
1996-01-01
An interferometer which has the capability of measuring optical elements and systems with an accuracy of λ/1000, where λ is the wavelength of visible light. Whereas current interferometers employ a reference surface, which inherently limits the accuracy of the measurement to about λ/50, this interferometer uses an essentially perfect spherical reference wavefront generated by the fundamental process of diffraction. This interferometer is adjustable to give unity fringe visibility, which maximizes the signal-to-noise ratio, and has the means to introduce a controlled, prescribed relative phase shift between the reference wavefront and the wavefront from the optics under test, which permits analysis of the interference fringe pattern using standard phase extraction algorithms.
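A standard example of such a phase extraction algorithm is the four-step method, sketched here on simulated fringes (the fringe model and parameters are illustrative, not taken from the patent): four interferograms with π/2 phase shifts yield the test phase through an arctangent.

```python
import numpy as np

# simulate four interferograms with controlled phase shifts of the
# reference wavefront: I_k = A + B*cos(phi + k*pi/2)
rng = np.random.default_rng(3)
phi = rng.uniform(-np.pi, np.pi, (64, 64))      # test-optic phase map
A, B = 1.0, 0.6                                 # bias and fringe modulation
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]

# standard four-step phase extraction
I0, I1, I2, I3 = frames
phi_rec = np.arctan2(I3 - I1, I0 - I2)
print(np.allclose(phi_rec, phi))  # True
```

Note that both the bias A and the modulation B cancel in the ratio, which is why the method tolerates non-ideal fringe visibility; the recovered phase is wrapped and would still need unwrapping for a continuous surface map.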
Yang, Ping; Ning, Yu; Lei, Xiang; Xu, Bing; Li, Xinyang; Dong, Lizhi; Yan, Hu; Liu, Wenjing; Jiang, Wenhan; Liu, Lei; Wang, Chao; Liang, Xingbo; Tang, Xiaojun
2010-03-29
We present a slab laser amplifier beam cleanup experimental system based on a 39-actuator rectangular piezoelectric deformable mirror. Rather than using a wave-front sensor to measure distortions in the wave-front and then applying a conjugate wave-front to compensate for them, the system uses a Stochastic Parallel Gradient Descent algorithm to maximize the power contained within a far-field designated bucket. Experimental results demonstrate that at an output power of 335 W, more than 30% of the energy is concentrated in the 1× diffraction-limited area, while the beam quality is greatly enhanced.
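A toy version of the SPGD loop on a synthetic metric (the Gaussian metric is a stand-in for power-in-the-bucket; gain, perturbation size and iteration count are invented): all actuators are perturbed simultaneously, and the metric change weights the update.

```python
import numpy as np

rng = np.random.default_rng(4)
n_act = 39                                  # actuator count from the paper

true_ab = rng.normal(0, 0.3, n_act)         # hypothetical phase distortion

def far_field_metric(u):
    """Stand-in for power-in-the-bucket: maximal when the DM command u
    cancels the distortion."""
    return np.exp(-np.sum((u + true_ab)**2))

# stochastic parallel gradient descent: perturb all actuators at once
u = np.zeros(n_act)
gain, sigma = 40.0, 0.01
for _ in range(3000):
    du = sigma * rng.choice([-1.0, 1.0], n_act)   # Bernoulli perturbation
    dJ = far_field_metric(u + du) - far_field_metric(u - du)
    u = u + gain * dJ * du                        # ascend the metric
print(far_field_metric(u) > 0.9)  # True: distortion largely cancelled
```

The appeal for laser cleanup is exactly what this sketch shows: only a scalar photodetector signal is needed, never a wave-front measurement, at the cost of many metric evaluations.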
NASA Astrophysics Data System (ADS)
Smith, Malcolm; Kerley, Dan; Chapin, Edward L.; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi
2016-07-01
Prototyping and benchmarking were performed for the Real-Time Controller (RTC) of the Narrow Field InfraRed Adaptive Optics System (NFIRAOS). To perform wavefront correction, NFIRAOS utilizes two deformable mirrors (DM) and one tip/tilt stage (TTS). The RTC receives wavefront information from six Laser Guide Star (LGS) Shack-Hartmann WaveFront Sensors (WFS), one high-order Natural Guide Star Pyramid WaveFront Sensor (PWFS) and multiple low-order instrument detectors. The RTC uses this information to determine the commands to send to the wavefront correctors. NFIRAOS is the first-light AO system for the Thirty Meter Telescope (TMT). The prototyping was performed using dual-socket high-performance Linux servers with the real-time (PREEMPT_RT) patch and demonstrated the viability of a commercial off-the-shelf (COTS) hardware approach to large-scale AO reconstruction. In particular, a large custom matrix vector multiplication (MVM) was benchmarked and met the latency requirements. In addition, all major inter-machine communication was verified to be adequate using 10Gb and 40Gb Ethernet. The results of this prototyping have enabled a CPU-based NFIRAOS RTC design to proceed with confidence that COTS hardware can meet the demanding performance requirements.
Efficient level set methods for constructing wavefronts in three spatial dimensions
NASA Astrophysics Data System (ADS)
Cheng, Li-Tien
2007-10-01
Wavefront construction in geometrical optics has long faced the twin difficulties of dealing with multi-valued forms and resolution of wavefront surfaces. A recent change in viewpoint, however, has demonstrated that working in phase space on bicharacteristic strips using Eulerian methods can bypass both difficulties. The level set method for interface dynamics is a suitable choice for the Eulerian method. Unfortunately, in three-dimensional space, the setting of interest for most practical applications, the advantages of this method are largely offset by a new problem: the high dimension of phase space. In this work, we present new types of level set algorithms that remove this obstacle and demonstrate their ability to accurately construct wavefronts under high resolution. These results propel the level set method forward significantly as a competitive approach in geometrical optics under realistic conditions.
Design of wavefront coding optical system with annular aperture
NASA Astrophysics Data System (ADS)
Chen, Xinhua; Zhou, Jiankang; Shen, Weimin
2016-10-01
Wavefront coding can extend the depth of field of a traditional optical system by inserting a phase mask into the pupil plane. In this paper, the point spread function (PSF) of a wavefront coding system with an annular aperture is analyzed. The stationary phase method and the fast Fourier transform (FFT) method are used to compute the diffraction integral, respectively. The OTF invariance is analyzed for the annular aperture with a cubic phase mask under different obscuration ratios. Based on these results, a wavefront coding system using a Maksutov-Cassegrain configuration is designed. It is an F/8.21 catadioptric system with an annular aperture, and its focal length is 821 mm. The strength of the cubic phase mask is optimized with a user-defined operand in Zemax. The Wiener filtering algorithm is used to restore the images, and numerical simulation proves the validity of the design.
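The OTF-invariance analysis can be sketched numerically: compare how much defocus changes the MTF of an annular pupil with and without a cubic mask. The mask strength, obscuration ratio and defocus amount below are invented values chosen only to make the contrast visible.

```python
import numpy as np

N = 256
ys, xs = np.mgrid[-1:1:1j*N, -1:1:1j*N]
rho2 = xs**2 + ys**2
pupil = ((rho2 <= 1.0) & (rho2 >= 0.3**2)).astype(float)  # obscuration 0.3

def mtf(phase):
    """Normalised MTF of the annular pupil carrying `phase` (radians)."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.ifft2(field, (2 * N, 2 * N)))**2
    otf = np.fft.fft2(psf)
    return np.abs(otf) / np.abs(otf[0, 0])

cubic = 60.0 * (xs**3 + ys**3)     # cubic mask strength: invented value
defocus = 15.0 * rho2              # roughly 2.4 waves of defocus

# defocus changes the clear-aperture MTF a lot, the coded MTF only a little
d_clear = np.mean(np.abs(mtf(0 * rho2) - mtf(defocus)))
d_coded = np.mean(np.abs(mtf(cubic) - mtf(cubic + defocus)))
print(d_coded < d_clear)  # True
```

The coded MTF is lower but nearly defocus-invariant, which is what allows a single Wiener restoration filter to work across the whole extended depth of field.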
NASA Technical Reports Server (NTRS)
Leboeuf, Claudia M.; Davila, Pamela S.; Redding, David C.; Morell, Armando; Lowman, Andrew E.; Wilson, Mark E.; Young, Eric W.; Pacini, Linda K.; Coulter, Dan R.
1998-01-01
As part of the technology validation strategy of the next generation space telescope (NGST), a system testbed is being developed at GSFC, in partnership with JPL and Marshall Space Flight Center (MSFC), which will include all of the component functions envisioned in an NGST active optical system. The system will include an actively controlled, segmented primary mirror; actively controlled secondary, deformable, and fast steering mirrors; wavefront sensing optics; wavefront control algorithms; a telescope simulator module; and an interferometric wavefront sensor for use in comparing the final wavefronts obtained from different tests. The developmental cryogenic active telescope testbed (DCATT) will be implemented in three phases. Phase 1 will focus on operating the testbed at ambient temperature. During Phase 2, a cryocapable segmented telescope will be developed and cooled to cryogenic temperature to investigate the impact on the ability to correct the wavefront and stabilize the image. In Phase 3, it is planned to incorporate industry-developed flight-like components, such as figure-controlled mirror segments; cryogenic, low-hold-power actuators; or different wavefront sensing and control hardware or software. A very important element of the program is the development and subsequent validation of the integrated multidisciplinary models. The Phase 1 testbed objectives, plans, configuration, and design will be discussed.
The design of wavefront coded imaging system
NASA Astrophysics Data System (ADS)
Lan, Shun; Cen, Zhaofeng; Li, Xiaotong
2016-10-01
Wavefront coding is a new method to extend the depth of field that combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet system. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of similarly blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is independent of focus: it is nearly constant with misfocus and has no regions of zeros. All object information can be completely recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to a conventional optical system, the wavefront coded imaging system obtains better-quality images under different object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm, which are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while keeping higher optical power and resolution at the image plane.
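The digital filtering step mentioned above is typically a Wiener deconvolution; a minimal sketch with a toy blur kernel (not the designed system's coded PSF) shows the mechanism and the regularization constant that causes the residual restoration artifacts.

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.random((64, 64))                 # stand-in scene
h = np.zeros((64, 64))
h[:5, :5] = 1 / 25                         # toy 5x5 box blur kernel
H = np.fft.fft2(h)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Wiener filter: W = H* / (|H|^2 + k); k trades noise amplification
# against sharpness and is the source of mild restoration artifacts
k = 1e-4
W = np.conj(H) / (np.abs(H)**2 + k)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
print(np.mean((restored - img)**2) < np.mean((blurred - img)**2))  # True
```

For a wavefront coded system the same single filter can be applied at every defocus position precisely because the OTF is focus-invariant and has no zeros to divide by.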
Testing and Calibration of Phase Plates for JWST Optical Simulator
NASA Technical Reports Server (NTRS)
Gong, Qian; Chu, Jenny; Tournois, Severine; Eichhorn, William; Kubalak, David
2011-01-01
Three phase plates were designed to simulate the JWST segmented primary mirror wavefront at three on-orbit alignment stages: coarse phasing, intermediate phasing, and fine phasing. The purpose is to verify JWST's on-orbit wavefront sensing capability. Amongst the three stages, coarse alignment is defined as a piston error between adjacent segments of 30 µm to 300 µm, intermediate as 0.4 µm to 10 µm, and fine as below 0.4 µm. The phase plates were made of fused silica and were assembled in the JWST Optical Simulator (OSIM). The piston difference was realized by the thickness difference of two adjacent segments. The two important parameters for the phase plates are piston and wavefront errors. The Dispersed Fringe Sensor (DFS) method was used for the initial coarse piston evaluation, which is the emphasis of this paper. A Point Diffraction Interferometer (PDI) is used for fine piston and wavefront error. In order to remove the piston's 2π uncertainty with the PDI, three laser wavelengths, 640 nm, 660 nm, and 780 nm, are used for the measurement. The DFS test setup, analysis algorithm and results are presented. The phase plate design concept and its application (i.e. verifying the JWST on-orbit alignment algorithm) are described. The layout of the JWST OSIM and the function of the phase plates in OSIM are also addressed briefly.
FPGA-accelerated adaptive optics wavefront control
NASA Astrophysics Data System (ADS)
Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.
2014-03-01
The speed of real-time adaptive optical systems is primarily restricted by the data processing hardware and computational aspects. Furthermore, mirror layouts with increasing numbers of actuators reduce the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden is a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcoming this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. Employing a Xilinx Kintex-7 FPGA, a custom-developed PCIe card is outlined that accelerates the analysis of the Shack-Hartmann wavefront sensor. A recently developed real-time capable spot detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n^3), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA), as well as by a novel hardware implementation of the SHWFS algorithm. Further benefits come from the Streaming SIMD Extensions (SSE), which intensively use the parallelization capability of the processor to further reduce the latency and increase the bandwidth of the closed loop. Due to this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.
Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A
2011-07-01
We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations with a woofer DM and small-amplitude higher-order aberrations with a tweeter DM. We measured the in vivo performance of high-resolution retinal imaging with the dual-DM AOSLO. We compared the simultaneous LM-based DLS dual-DM controller with both a single-DM controller and a successive dual-DM controller. We evaluated performance using both wavefront (RMS) and image-quality metrics, including brightness and power spectrum. The simultaneous LM-based dual-DM AO can consistently provide near diffraction-limited in vivo routine imaging of the human retina.
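At its core, a damped least-squares controller solves a regularized normal equation for the concatenated woofer and tweeter commands. The sketch below is generic (random matrices stand in for the real influence matrices, and the scalar damping replaces the authors' Lagrange-multiplier weighting); it shows the defining property that damping bounds the actuator strokes.

```python
import numpy as np

rng = np.random.default_rng(6)
n_slopes, n_woofer, n_tweeter = 60, 8, 30

# stacked influence matrix: slopes = A @ [woofer; tweeter] commands
A = np.hstack([rng.normal(size=(n_slopes, n_woofer)),
               rng.normal(size=(n_slopes, n_tweeter))])
s = rng.normal(size=n_slopes)              # measured slope vector

# damped least squares: (A^T A + lam*I) x = A^T s; the damping term
# regularises ill-conditioned modes shared by the two mirrors
lam = 0.1
x = np.linalg.solve(A.T @ A + lam * np.eye(n_woofer + n_tweeter), A.T @ s)

x0 = np.linalg.lstsq(A, s, rcond=None)[0]  # undamped solution
print(np.linalg.norm(x) < np.linalg.norm(x0))  # damping shrinks commands
```

In the actual controller the constraint structure additionally steers low-order content to the woofer and high-order content to the tweeter, which a single scalar damping term does not capture.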
Pre-correction of distorted Bessel-Gauss beams without wavefront detection
NASA Astrophysics Data System (ADS)
Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing
2017-12-01
By exploiting the rapid phase retrieval provided by the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme to correct, with good performance, distorted Bessel-Gauss beams resulting from inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is then computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of improved mode purity and mitigated inter-channel cross talk.
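The Gerchberg-Saxton iteration itself can be sketched compactly: alternate between the two planes, enforcing the known amplitude in each while keeping the running phase estimate (synthetic data; a uniform probe amplitude is assumed for simplicity).

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64
true_phase = rng.uniform(-np.pi, np.pi, (N, N))   # unknown phase screen
amp_in = np.ones((N, N))                          # probe beam amplitude
far = np.fft.fft2(amp_in * np.exp(1j * true_phase))
amp_far = np.abs(far)                             # measured far-field modulus

# Gerchberg-Saxton: iterate between planes, imposing the known
# amplitude in each while retaining the current phase estimate
phase = np.zeros((N, N))
for _ in range(200):
    g = np.fft.fft2(amp_in * np.exp(1j * phase))
    g = amp_far * np.exp(1j * np.angle(g))        # impose far-field modulus
    f = np.fft.ifft2(g)
    phase = np.angle(f)                           # impose input modulus

err = np.abs(np.abs(np.fft.fft2(amp_in * np.exp(1j * phase))) - amp_far).mean()
print(err / amp_far.mean() < 0.5)  # True: modulus error reduced
```

The recovered phase (up to trivial ambiguities such as a global offset) is what would be written, conjugated, onto the correction element to pre-compensate the distortion.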
NASA Technical Reports Server (NTRS)
Shi, Fang; Basinger, Scott A.; Redding, David C.
2006-01-01
Dispersed Fringe Sensing (DFS) is an efficient and robust method for coarse phasing of a segmented primary mirror such as that of the James Webb Space Telescope (JWST). In this paper, modeling and simulations are used to study the effect of segmented mirror aberrations on the fringe image, DFS signals, and DFS detection accuracy. The study has shown that, due to the pixelation spatial filter effect of DFS signal extraction, the effect of wavefront error is reduced, and that the DFS algorithm becomes more robust against wavefront aberration when a multi-trace DFS approach is used. We also studied the JWST Dispersed Hartmann Sensor (DHS) performance in the presence of wavefront aberrations caused by gravity sag, and we used the scaled gravity sag to explore how JWST DHS performance depends on the level of wavefront aberration. This also includes the effect of line-of-sight jitter.
NASA Astrophysics Data System (ADS)
Powell, Keith B.; Vaitheeswaran, Vidhya
2010-07-01
The MMT observatory has recently implemented and tested an optimal wavefront controller for the NGS adaptive optics system. Open-loop atmospheric data collected at the telescope is used as the input to a MATLAB-based analytical model. The model uses nonlinear constrained minimization to determine controller gains and optimize the system performance. The real-time controller performing the adaptive optics closed-loop operation is implemented on a dedicated high-performance PC-based quad-core server. The controller algorithm is written in C and uses the GNU Scientific Library for linear algebra. Tests at the MMT confirmed that the optimal controller significantly reduced the residual RMS wavefront compared with the previous controller. Significant reductions in image FWHM and increased peak intensities were obtained in the J, H and K bands. The optimal PID controller is now operating as the baseline wavefront controller for the MMT NGS-AO system.
Adaptive wavefront shaping for controlling nonlinear multimode interactions in optical fibres
NASA Astrophysics Data System (ADS)
Tzang, Omer; Caravaca-Aguirre, Antonio M.; Wagner, Kelvin; Piestun, Rafael
2018-06-01
Recent progress in wavefront shaping has enabled control of light propagation inside linear media to focus and image through scattering objects. In particular, light propagation in multimode fibres comprises complex intermodal interactions and rich spatiotemporal dynamics. Control of physical phenomena in multimode fibres and its applications are in their infancy, opening opportunities to take advantage of complex nonlinear modal dynamics. Here, we demonstrate a wavefront shaping approach for controlling nonlinear phenomena in multimode fibres. Using a spatial light modulator at the fibre input, real-time spectral feedback and a genetic algorithm optimization, we control a highly nonlinear multimode stimulated Raman scattering cascade and its interplay with four-wave mixing via a flexible implicit control on the superposition of modes coupled into the fibre. We show versatile spectrum manipulations including shifts, suppression, and enhancement of Stokes and anti-Stokes peaks. These demonstrations illustrate the power of wavefront shaping to control and optimize nonlinear wave propagation.
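The genetic optimization in this experiment scores each candidate SLM pattern by a measured spectral feature. A generic sketch of such a population-based search (population size, crossover scheme, and mutation scale here are illustrative assumptions; in the experiment the fitness function would be the real-time spectral feedback, not a closed-form expression):

```python
import numpy as np

rng = np.random.default_rng(0)

def genetic_optimize(fitness, n_genes, pop=20, gens=60, mut=0.1):
    """Toy genetic optimizer over a vector of phase values in [0, 2*pi).
    Keeps the top half of each generation (elitism) and fills the rest
    with uniform-crossover children plus Gaussian mutation."""
    population = rng.uniform(0, 2 * np.pi, (pop, n_genes))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        order = np.argsort(scores)[::-1]          # best first (maximization)
        parents = population[order[:pop // 2]]
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n_genes) < 0.5      # uniform crossover
            child = np.where(mask, a, b)
            child = child + mut * rng.normal(size=n_genes)  # mutation
            kids.append(child)
        population = np.vstack([parents, kids])
    scores = np.array([fitness(ind) for ind in population])
    return population[np.argmax(scores)]
```

Because the elite parents are carried over unchanged, the best measured fitness never degrades between generations, which matters when each evaluation is a physical measurement.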
NASA Astrophysics Data System (ADS)
Basden, A. G.; Bardou, L.; Bonaccini Calia, D.; Buey, T.; Centrone, M.; Chemla, F.; Gach, J. L.; Gendron, E.; Gratadour, D.; Guidolin, I.; Jenkins, D. R.; Marchetti, E.; Morris, T. J.; Myers, R. M.; Osborn, J.; Reeves, A. P.; Reyes, M.; Rousset, G.; Lombardi, G.; Townson, M. J.; Vidal, F.
2017-04-01
The performance of adaptive optics systems is partially dependent on the algorithms used within the real-time control system to compute wavefront slope measurements. We demonstrate the use of a matched filter algorithm for the processing of elongated laser guide star (LGS) Shack-Hartmann images, using the CANARY adaptive optics instrument on the 4.2 m William Herschel Telescope and the European Southern Observatory Wendelstein LGS Unit placed 40 m away. This algorithm has been selected for use with the forthcoming Thirty Meter Telescope, but until now had not been demonstrated on-sky. From the results of a first observing run, we show that the use of matched filtering improves our adaptive optics system performance, with increases in on-sky H-band Strehl measured up to about a factor of 1.1 with respect to a conventional centre of gravity approach. We describe the algorithm used, and the methods that we implemented to enable on-sky demonstration.
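At its core, matched-filter processing of an elongated LGS spot measures the spot's displacement relative to a reference image. An integer-pixel correlation sketch of that step (the on-sky algorithm is the linearized, noise-weighted matched filter with sub-pixel interpolation, which this deliberately omits):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def correlation_shift(image, reference):
    """Estimate the (dy, dx) shift of a spot image relative to a reference
    by locating the peak of their FFT-based cross-correlation."""
    corr = np.real(ifft2(fft2(image) * np.conj(fft2(reference))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak coordinates into a signed shift range
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)
```

Unlike a centre of gravity, the correlation peak is insensitive to the elongated shape of the reference spot, which is the property exploited for LGS sensing.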
Optical Studies of model binary miscibility gap system
NASA Technical Reports Server (NTRS)
Lacy, L. L.; Witherow, W. K.; Facemire, B. R.; Nishioka, G. M.
1982-01-01
In order to develop a better understanding of separation processes in binary miscibility gap metal alloys, model transparent fluid systems were studied. The system selected was diethylene glycol-ethyl salicylate which has convenient working temperatures (288 to 350 K), low toxicity, and is relatively easy to purify. The system is well characterized with respect to its phase diagram, density, surface and interfacial tensions, viscosity and other pertinent physical properties. Studies of migration of the dispersed phase in a thermal gradient were performed using conventional photomicroscopy. Velocities of the droplets of the dispersed phase were measured and compared to calculated rates which included both Stokes and thermal components. A holographic microscopy system was used to study growth, coalescence, and particle motions. Sequential holograms allowed determination of particle size distribution changes with respect to time and temperature. Holographic microscopy is capable of recording particle densities up to 10 to the 7th power particles/cu cm and is able to resolve particles of the order of 2 to 3 microns in diameter throughout the entire volume of the test cell. The reconstructed hologram produces a wavefront that is identical to the original wavefront as it existed when the hologram was made. The reconstructed wavefront is analyzed using a variety of conventional optical methods.
Perrin, Stephane; Baranski, Maciej; Froehly, Luc; Albero, Jorge; Passilly, Nicolas; Gorecki, Christophe
2015-11-01
We report a simple method, based on intensity measurements, for the characterization of the wavefront and aberrations produced by micro-optical focusing elements. This method employs the setup presented earlier in [Opt. Express 22, 13202 (2014)] for measurements of the 3D point spread function, on which a basic phase-retrieval algorithm is applied. This combination allows for retrieval of the wavefront generated by the micro-optical element and, in addition, quantification of the optical aberrations through the wavefront decomposition with Zernike polynomials. The optical setup requires only an in-motion imaging system. The technique, adapted for the optimization of micro-optical component fabrication, is demonstrated by characterizing a planoconvex microlens.
Akondi, Vyas; Pérez-Merino, Pablo; Martinez-Enriquez, Eduardo; Dorronsoro, Carlos; Alejandre, Nicolás; Jiménez-Alfaro, Ignacio; Marcos, Susana
2017-04-01
Standard evaluation of aberrations from wavefront slope measurements in patients implanted with a rotationally asymmetric multifocal intraocular lens (IOL), the Lentis Mplus (Oculentis GmbH, Berlin, Germany), results in large magnitude primary vertical coma, which is attributed to the intrinsic IOL design. The new proposed method analyzes aberrometry data, allowing disentangling the IOL power pupillary distribution from the true higher order aberrations of the eye. The new method of wavefront reconstruction uses retinal spots obtained at both the near and far foci. The method was tested using ray tracing optical simulations in a computer eye model virtually implanted with the Lentis Mplus IOL, with a generic cornea or with anterior segment geometry obtained from custom quantitative spectral-domain optical coherence tomography in a real patient. The method was applied to laser ray tracing aberrometry data at near and far fixation obtained in a patient implanted with the Lentis Mplus IOL. Higher order aberrations evaluated from simulated and real retinal spot diagrams following the new reconstruction approach matched the nominal aberrations (approximately 98%). Previously reported primary vertical coma in patients implanted with this IOL lost significance with the application of the proposed reconstruction. Custom analysis of ray tracing-based retinal spot diagrams allowed decoupling of the true higher order aberrations of the patient's eye from the power pupillary distribution of a rotationally asymmetric multifocal IOL, therefore providing the appropriate phase map to accurately evaluate through-focus optical quality. [J Refract Surg. 2017;33(4):257-265.]. Copyright 2017, SLACK Incorporated.
3D shape reconstruction of specular surfaces by using phase measuring deflectometry
NASA Astrophysics Data System (ADS)
Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan
2016-10-01
The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry often have complex shapes and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of the existing estimations. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce in this paper an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry. The Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. The real shape is then recovered using a modified Zonal wavefront reconstruction algorithm. By combining the advantages of Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows a consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulations and experimental measurements demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% relative to the Zonal estimation, and the surface error is about 6.68 μm for reconstruction over 391×529 pixels of an experimentally measured spherical mirror. In general, this method converges rapidly with high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
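The Zonal step above relies on successive over-relaxation. As a minimal illustration of SOR-based zonal reconstruction from forward-difference gradient data (the Hudgin-style geometry and the relaxation factor are assumptions for this sketch, not the paper's exact formulation):

```python
import numpy as np

def sor_reconstruct(gx, gy, omega=1.5, n_iter=500):
    """Zonal surface estimate from forward-difference gradients via SOR.
    gx: shape (n, m-1), differences along x; gy: shape (n-1, m), along y.
    Each sweep replaces w[i, j] by a relaxed average of the values its
    neighbours imply through the measured gradients."""
    n, m = gy.shape[0] + 1, gx.shape[1] + 1
    w = np.zeros((n, m))
    for _ in range(n_iter):
        for i in range(n):
            for j in range(m):
                s, k = 0.0, 0
                if j > 0:     s += w[i, j-1] + gx[i, j-1]; k += 1
                if j < m - 1: s += w[i, j+1] - gx[i, j];   k += 1
                if i > 0:     s += w[i-1, j] + gy[i-1, j]; k += 1
                if i < n - 1: s += w[i+1, j] - gy[i, j];   k += 1
                w[i, j] = (1 - omega) * w[i, j] + omega * s / k
    return w - w.mean()  # the surface is defined up to a piston term
```

Seeding `w` with a coarse Modal estimate instead of zeros, as the paper proposes, is precisely what cuts the iteration count.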
Optimizing phase to enhance optical trap stiffness.
Taylor, Michael A
2017-04-03
Phase optimization offers promising capabilities in optical tweezers, allowing huge increases in the applied forces, trap stiffness, or measurement sensitivity. One key obstacle to potential applications is the lack of an efficient algorithm to compute an optimized phase profile, with enhanced trapping experiments relying on slow programs that would take up to a week to converge. Here we introduce an algorithm that reduces the wait from days to minutes. We characterize the achievable increase in trap stiffness and its dependence on particle size, refractive index, and optical polarization. We further show that phase-only control can achieve almost all of the enhancement possible with full wavefront shaping; for instance, phase control allows 62 times higher trap stiffness for 10 μm silica spheres in water, while amplitude control and non-trivial polarization further increase this by factors of 1.26 and 1.01, respectively. This algorithm will facilitate future applications in optical trapping, and more generally in wavefront optimization.
Experimental Verification of Sparse Aperture Mask for Low Order Wavefront Sensing
NASA Astrophysics Data System (ADS)
Subedi, Hari; Kasdin, N. Jeremy
2017-01-01
To directly image exoplanets, future space-based missions are equipped with coronagraphs, which manipulate the diffraction of starlight and create regions of high contrast called dark holes. Theoretically, coronagraphs can be designed to achieve the high level of contrast required to image exoplanets, which are billions of times dimmer than their host stars; however, aberrations caused by optical imperfections and thermal fluctuations degrade the contrast in the dark holes. Focal plane wavefront control (FPWC) algorithms using deformable mirrors (DMs) are used to mitigate the quasi-static aberrations caused by optical imperfections. Although the FPWC methods correct the quasi-static aberrations, they are blind to dynamic errors caused by telescope jitter and thermal fluctuations. At Princeton's High Contrast Imaging Lab we have developed a new technique that integrates a sparse aperture mask (SAM) with the coronagraph to estimate these low-order dynamic wavefront errors. This poster shows the effectiveness of a SAM low-order wavefront sensor in estimating and correcting these errors via simulation and experiment, and compares the results to other methods, such as the Zernike Wavefront Sensor planned for WFIRST.
Phase Diversity Applied to Sunspot Observations
NASA Astrophysics Data System (ADS)
Tritschler, A.; Schmidt, W.; Knolker, M.
We present preliminary results of a multi-colour phase diversity experiment carried out with the Multichannel Filter System of the Vacuum Tower Telescope at the Observatorio del Teide on Tenerife. We apply phase-diversity imaging to a time sequence of sunspot filtergrams taken in three continuum bands and correct the seeing influence for each image. A newly developed phase diversity device allowing for the projection of both the focused and the defocused image onto a single CCD chip was used in one of the wavelength channels. With the information about the wavefront obtained by the image reconstruction algorithm the restoration of the other two bands can be performed as well. The processed and restored data set will then be used to derive the temperature and proper motion of the umbral dots. Data analysis is still under way, and final results will be given in a forthcoming article.
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of applications that apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time, by performing matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate the optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
Analysis of wave propagation and wavefront sensing in target-in-the-loop beam control systems
NASA Astrophysics Data System (ADS)
Vorontsov, Mikhail A.; Kolosov, Valeri V.
2004-10-01
Target-in-the-loop (TIL) wave propagation geometry represents perhaps the most challenging case for adaptive optics applications related to the maximization of irradiance power density on extended, remotely located surfaces in the presence of dynamically changing refractive index inhomogeneities in the propagation medium. We introduce a TIL propagation model that uses a combination of the parabolic equation describing outgoing wave propagation, and the equation describing the evolution of the mutual intensity function (MIF) for the backscattered (returned) wave. The resulting evolution equation for the MIF is further simplified by the use of the smooth refractive index approximation. This approximation enables derivation of the transport equation for the returned wave brightness function, analyzed here using the method of characteristics (brightness function trajectories). The equations for the brightness function trajectories (ray equations) can be efficiently integrated numerically. We also consider wavefront sensors that perform sensing of speckle-averaged characteristics of the wavefront phase (TIL sensors). Analysis of the wavefront phase reconstructed from Shack-Hartmann TIL sensor measurements shows that an extended target introduces a phase modulation (target-induced phase) that cannot be easily separated from the atmospheric turbulence-related phase aberrations. We also show that wavefront sensing results depend on the extended target shape, surface roughness, and the outgoing beam intensity distribution on the target surface.
NASA Astrophysics Data System (ADS)
Egron, Sylvain; Lajoie, Charles-Philippe; Leboulleux, Lucie; N'Diaye, Mamadou; Pueyo, Laurent; Choquet, Élodie; Perrin, Marshall D.; Ygouf, Marie; Michau, Vincent; Bonnefois, Aurélie; Fusco, Thierry; Escolle, Clément; Ferrari, Marc; Hugot, Emmanuel; Soummer, Rémi
2016-07-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope, including both commissioning and maintenance activities. JOST is complementary to existing testbeds for JWST (e.g. the Ball Aerospace Testbed Telescope TBT) given its compact scale and flexibility, ease of use, and colocation at the JWST Science and Operations Center. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides similar image quality to JWST (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip and tilt, and (2) the second lens, which stands in for the secondary mirror, in tip, tilt and x, y, z positions. We present the full linear control alignment infrastructure developed for JOST, with an emphasis on multi-field wavefront sensing and control. Our implementation of the wavefront sensing (WFS) algorithms using phase diversity is experimentally tested. The wavefront control (WFC) algorithms, which rely on a linear model for optical aberrations induced by small misalignments of the three lenses, are tested and validated on simulations.
Manipulating Digital Holograms to Modify Phase of Reconstructed Wavefronts
NASA Astrophysics Data System (ADS)
Ferraro, Pietro; Paturzo, Melania; Memmolo, Pasquale; Finizio, Andrea
2010-04-01
We show that through an adaptive deformation of digital holograms it is possible to manage the depth of focus in the numerical reconstruction. Deformation is applied to the original hologram with the aim of putting simultaneously in focus, in one reconstructed image plane, different objects lying at different distances from the hologram plane (i.e. the CCD sensor) but within the same field of view. In the same way it is possible to extend the depth of field for a tilted 3D object, bringing the whole object into focus.
NASA Astrophysics Data System (ADS)
Mao, Heng; Wang, Xiao; Zhao, Dazun
2007-07-01
The Baseline algorithm, as a tool in wavefront sensing (WFS), incorporates the phase-diverse phase retrieval (PDPR) method with a hybrid unwrapping approach to ensure a unique pupil phase estimate with high WFS accuracy, even in the case of high dynamic range aberration, as long as the pupil shape is a convex set. However, for a complicated pupil, such as that in obstructed-pupil optics, the said unwrapping approach would fail owing to the fake values at points located in obstructed areas of the pupil. Thus a modified unwrapping approach that can minimize the negative effects of the obstructed areas is proposed. Simulations have shown the validity of this unwrapping approach when it is embedded in the Baseline algorithm.
NASA Astrophysics Data System (ADS)
Butts, Robert R.
1997-08-01
A low noise, high resolution Shack-Hartmann wavefront sensor was included in the ABLE-ACE instrument suite to obtain direct high resolution phase measurements of the 0.53 micrometer pulsed laser beam propagated through high altitude atmospheric turbulence. The wavefront sensor employed a Fried geometry using a lenslet array which provided approximately 17 subapertures across the pupil. The lenslets focused the light in each subaperture onto a 21 by 21 array of pixels in the camera focal plane, with 8 pixels across the central lobe of the diffraction-limited spot. The goal of the experiment was to measure the effects of turbulence in the free atmosphere on propagation, but the wavefront sensor also detected the aberrations induced by the aircraft boundary layer and the receiver aircraft internal beam path. Data analysis methods used to extract the desired atmospheric contribution to the phase measurements from data corrupted by non-atmospheric aberrations are described. Approaches which were used included a reconstruction of the phase as a linear combination of Zernike polynomials coupled with optimal estimators, and computation of structure functions of the subaperture slopes. The theoretical basis for the data analysis techniques is presented. Results are described, and comparisons with theory and simulations are shown. Estimates of average turbulence strength along the propagation path from the wavefront sensor showed good agreement with other sensors. The Zernike spectra calculated from the wavefront sensor data were consistent with the standard Kolmogorov model of turbulence.
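Reconstructing phase as a linear combination of Zernike polynomials from sub-aperture slopes reduces to a linear least-squares fit of modal coefficients against the basis-function gradients. A small sketch of that modal fit (the basis here is a toy tip/tilt/defocus-like set on a square grid, standing in for sampled Zernike derivatives):

```python
import numpy as np

def fit_modes(gx, gy, basis_grads):
    """Least-squares modal coefficients from measured x/y slope maps.
    basis_grads: list of (dZ/dx, dZ/dy) array pairs, one pair per mode."""
    A = np.stack([np.concatenate([bx.ravel(), by.ravel()])
                  for bx, by in basis_grads], axis=1)
    b = np.concatenate([gx.ravel(), gy.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

With noisy slope data the same normal equations can be weighted by subaperture signal-to-noise, which is where the abstract's "optimal estimators" would enter.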
Improvements to the modal holographic wavefront sensor.
Kong, Fanpeng; Lambert, Andrew
2016-05-01
The Zernike coefficients of a light wavefront can be calculated directly by intensity ratios of pairs of spots in the reconstructed image plane of a holographic wavefront sensor (HWFS). However, the response curve of the HWFS heavily depends on the position and size of the detector for each spot and the distortions introduced by other aberrations. In this paper, we propose a method to measure the intensity of each spot by setting a threshold to select effective pixels and using the weighted average intensity within a selected window. Compared with using the integral intensity over a small window for each spot, we show through a numerical simulation that the proposed method reduces the dependency of the HWFS's response curve on the selection of the detector window. We also recorded a HWFS on a holographic plate using a blue laser and demonstrated its capability to detect the strength of encoded Zernike terms in an aberrated beam.
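A minimal sketch of the proposed spot-intensity measure — threshold to select effective pixels, then an intensity-weighted average over them (the threshold fraction here is an illustrative choice, not the paper's value):

```python
import numpy as np

def spot_intensity(window, frac=0.2):
    """Spot intensity via thresholded, intensity-weighted averaging:
    pixels below frac * max are discarded, and the survivors are averaged
    with their own intensities as weights."""
    pix = window[window >= frac * window.max()]
    return (pix * pix).sum() / pix.sum()  # intensity-weighted mean intensity
```

Compared with integrating over a fixed small window, this makes the measure insensitive to the exact window placement, which is the dependency the paper aims to reduce.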
Three-dimensional imaging of cultural heritage artifacts with holographic printers
NASA Astrophysics Data System (ADS)
Kang, Hoonjong; Stoykova, Elena; Berberova, Nataliya; Park, Jiyong; Nazarova, Dimana; Park, Joo Sup; Kim, Youngmin; Hong, Sunghee; Ivanov, Branimir; Malinowski, Nikola
2016-01-01
Holography is defined as a two-step process of capture and reconstruction of the light wavefront scattered from three-dimensional (3D) objects. Capture of the wavefront is possible due to the encoding of both amplitude and phase in the hologram as a result of interference between the light beam coming from the object and a mutually coherent reference beam. The three-dimensional imaging provided by holography motivates the development of digital holographic imaging methods based on computer generation of holograms, such as holographic displays and holographic printers. The holographic printing technique relies on combining digital 3D object representation and encoding of the holographic data with recording of analog white-light viewable reflection holograms. The paper considers 3D content generation for a holographic stereogram printer and a wavefront printer as a means of analog recording of specific artifacts, which are complicated objects with regard to conventional analog holography restrictions.
Faramarzi, Amir; Moshirfar, Majid; Karimian, Farid; Delfazayebaher, Siamak; Kheiri, Bahareh
2017-12-01
To compare the refractive and higher-order aberrations (HOAs) outcomes after photorefractive keratectomy (PRK) in patients with significant astigmatism using aspheric versus wavefront-guided aspheric profiles. Ophthalmic Research Center and Department of Ophthalmology, Shahid Beheshti University of Medical Sciences, Negah Eye Hospital, Tehran, Iran. Prospective randomized case series. One eye of each patient with a refractive astigmatism more than 2.00 diopters (D) randomly received aspheric PRK. In the other eye, wavefront-guided and aspheric treatment was performed using a personalized treatment advanced algorithm. Visual acuity, refractive errors, and HOAs were compared between the 2 groups preoperatively and 12 months postoperatively. The study comprised 32 patients (64 eyes). The mean preoperative refractive astigmatism was -4.07 D ± 1.64 (SD) and -4.02 ± 1.55 D in the aspheric group and wavefront-guided aspheric group, respectively (P = .2). The mean postoperative astigmatism was -0.46 ± 0.37 D and -0.82 ± 0.53 D in the aspheric group and wavefront-guided aspheric group, respectively (P = .02). Postoperatively, the root mean square of total HOAs was significantly increased in both groups. However, compared with wavefront-guided aspheric PRK, aspheric PRK induced fewer HOAs (P = .003). In eyes with high astigmatism, post-PRK residual astigmatism was lower in the aspheric group than in the wavefront-guided aspheric group. The increase in HOAs was significantly higher in the wavefront-guided aspheric group than in the aspheric group. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Jian, Yifan; Xu, Jing; Gradowski, Martin A.; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.
2014-01-01
We present wavefront sensorless adaptive optics (WSAO) Fourier domain optical coherence tomography (FD-OCT) for in vivo small animal retinal imaging. WSAO is especially attractive for mouse retinal imaging because it simplifies the optical design and eliminates the need for wavefront sensing, which is difficult in the small animal eye. GPU-accelerated processing of the OCT data permitted real-time extraction of image quality metrics (intensity) for arbitrarily selected retinal layers to be optimized. Modal control of a commercially available segmented deformable mirror (IrisAO Inc.) provided rapid convergence using a sequential search algorithm. Image quality improvements with WSAO OCT are presented for both pigmented and albino mouse retinal data, acquired in vivo. PMID:24575347
Advanced Wavefront Sensing and Control Testbed (AWCT)
NASA Technical Reports Server (NTRS)
Shi, Fang; Basinger, Scott A.; Diaz, Rosemary T.; Gappinger, Robert O.; Tang, Hong; Lam, Raymond K.; Sidick, Erkin; Hein, Randall C.; Rud, Mayer; Troy, Mitchell
2010-01-01
The Advanced Wavefront Sensing and Control Testbed (AWCT) is built as a versatile facility for developing and demonstrating, in hardware, future technologies for wavefront sensing and control algorithms for active optical systems. The testbed includes a source projector for a broadband point source and a suite of extended scene targets, a dispersed fringe sensor, a Shack-Hartmann camera, and an imaging camera capable of phase retrieval wavefront sensing. The testbed also provides two easily accessible conjugated pupil planes which can accommodate active optical devices such as fast steering mirrors, deformable mirrors, and segmented mirrors. In this paper, we describe the testbed optical design, testbed configurations and capabilities, as well as initial results from the testbed hardware integration and tests.
Towards feasible and effective predictive wavefront control for adaptive optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyneer, L A; Veran, J
We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen-flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over one year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.
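Under the frozen-flow assumption, the predictor's job reduces to translating the current phase estimate along the wind vector before the next frame. An integer-pixel toy version of that idea (Predictive Fourier Control itself performs the prediction per Fourier mode within a closed-loop state-space filter, which this sketch omits):

```python
import numpy as np

def predict_frozen_flow(phase, wind_px):
    """One-step phase-screen prediction under Taylor's frozen-flow
    hypothesis: the turbulent layer is assumed to translate rigidly by
    wind_px = (dy, dx) pixels per frame (periodic boundary for simplicity)."""
    dy, dx = wind_px
    return np.roll(phase, shift=(dy, dx), axis=(0, 1))
```

The year of Altair telemetry in the abstract is what justifies this model: if frozen flow carries substantial power, a shifted copy of the current screen is a strong prior for the next one.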
Ma, Xingkun; Huang, Lei; Bian, Qi; Gong, Mali
2014-09-10
The wavefront correction ability of a deformable mirror with a multireflection waveguide was investigated and compared via simulations. By dividing a conventional actuator array into a multireflection waveguide consisting of single-actuator units, an arbitrary actuator pattern could be achieved. A stochastic parallel perturbation algorithm was proposed to find the optimal actuator pattern for a particular aberration. Compared with a conventional actuator array, the multireflection waveguide showed significant advantages in the correction of higher-order aberrations.
Novel algorithm implementations in DARC: the Durham AO real-time controller
NASA Astrophysics Data System (ADS)
Basden, Alastair; Bitenc, Urban; Jenkins, David
2016-07-01
The Durham AO Real-time Controller has been used on-sky with the CANARY AO demonstrator instrument since 2010, and is also used to provide control for several AO test-benches, including DRAGON. Over this period, many new real-time algorithms have been developed, implemented and demonstrated, leading to performance improvements for CANARY. Additionally, the computational performance of this real-time system has continued to improve. Here, we provide details about recent updates and changes made to DARC, and the relevance of these updates, including new algorithms, to forthcoming AO systems. We present the computational performance of DARC when used on different hardware platforms, including hardware accelerators, and determine the relevance and potential for ELT scale systems. Recent updates to DARC have included algorithms to handle elongated laser guide star images, including correlation wavefront sensing, with options to automatically update references during AO loop operation. Additionally, sub-aperture masking options have been developed to increase signal to noise ratio when operating with non-symmetrical wavefront sensor images. The development of end-user tools has progressed with new options for configuration and control of the system. New wavefront sensor camera models and DM models have been integrated with the system, increasing the number of possible hardware configurations available, and a fully open-source AO system is now a reality, including drivers necessary for commercial cameras and DMs. The computational performance of DARC makes it suitable for ELT scale systems when implemented on suitable hardware. We present tests made on different hardware platforms, along with the strategies taken to optimise DARC for these systems.
Wavefront sensing and adaptive control in phased array of fiber collimators
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.
2011-03-01
A new wavefront control approach for mitigation of atmospheric turbulence-induced wavefront phase aberrations in coherent fiber-array-based laser beam projection systems is introduced and analyzed. This approach is based on integration of wavefront sensing capabilities directly into the fiber-array transmitter aperture. In the coherent fiber array considered, we assume that each fiber collimator (subaperture) of the array is capable of precompensation of local (on-subaperture) wavefront phase tip and tilt aberrations using controllable rapid displacement of the tip of the delivery fiber at the collimating lens focal plane. In the technique proposed, this tip and tilt phase aberration control is based on maximization of the optical power received through the same fiber collimator using the stochastic parallel gradient descent (SPGD) technique. The coordinates of the fiber tip after the local tip and tilt aberrations are mitigated correspond to the coordinates of the focal-spot centroid of the optical wave backscattered off the target. As in a conventional Shack-Hartmann wavefront sensor, the phase function over the entire fiber-array aperture can then be retrieved from the coordinates obtained. The piston phases that are required for coherent combining (phase locking) of the outgoing beams at the target plane can be further calculated from the reconstructed wavefront phase. Results of analysis and numerical simulations are presented. Performance of adaptive precompensation of phase aberrations in this type of laser beam projection system is compared for various system configurations characterized by the number of fiber collimators and atmospheric turbulence conditions.
The wavefront control concept presented can be effectively applied to long-range laser beam projection scenarios in which the time delay associated with the double-pass laser beam propagation to the target and back is comparable to or even exceeds the characteristic time scale of atmospheric turbulence change - scenarios in which conventional target-in-the-loop phase-locking techniques fail.
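The SPGD technique named in the abstract is well documented in the adaptive-optics literature: perturb all control channels at once with a random dither, measure the metric on both sides of the perturbation, and step along the estimated gradient. A minimal sketch follows; the toy Gaussian "received power" metric and the target offsets (0.7, -0.3) are stand-ins, not the authors' system:

```python
import numpy as np

def spgd_maximize(metric, u0, gain=2.0, delta=0.1, iters=2000, rng=None):
    """Stochastic parallel gradient descent (ascent) on a metric J(u).

    Each iteration applies a random +/-delta perturbation to all control
    channels in parallel, measures the metric on both sides, and updates
    the controls along the estimated gradient direction.
    """
    rng = np.random.default_rng(rng)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + du) - metric(u - du)
        u += gain * dJ * du          # parallel update of every channel
    return u

# Toy stand-in for the received optical power: peaked at the
# (hypothetical) optimal tip/tilt offsets (0.7, -0.3).
target = np.array([0.7, -0.3])
power = lambda u: np.exp(-np.sum((u - target) ** 2))

u_opt = spgd_maximize(power, u0=np.zeros(2), rng=1)
```

Because the update needs only the scalar metric, not its analytic gradient, the same loop applies whether the feedback is received fiber power (as here) or a focal-spot sharpness measure.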
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational cost of correlation-based centroiding methods used for point-source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF), using a Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces the computational cost by 90%. A comprehensive simulation indicates that CCF performs better than the other functions under various light-level conditions. In addition, the effectiveness of the fast search algorithms has been verified.
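The combination the abstract describes - a similarity function evaluated only at shifts visited by a coarse-to-fine search - can be sketched as follows. This is an illustrative SDF-plus-three-step-search toy on a synthetic Gaussian spot, not the paper's implementation; the periodic `np.roll` shift is a simplification:

```python
import numpy as np

def gauss_spot(shape, center, sigma=1.5):
    """Synthetic Shack-Hartmann spot (Gaussian model)."""
    y, x = np.indices(shape)
    return np.exp(-((x - center[1])**2 + (y - center[0])**2) / (2*sigma**2))

def sdf(img, ref, dy, dx):
    """Square-difference similarity for an integer shift (dy, dx)."""
    shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
    return np.sum((img - shifted)**2)

def three_step_search(img, ref, step=4):
    """Coarse-to-fine search for the shift minimizing the SDF.

    Instead of scanning every shift in the full search window, each
    round evaluates only the 3x3 neighbourhood of the current best
    shift and then halves the step size - far fewer SDF evaluations.
    """
    best = (0, 0)
    while step >= 1:
        cands = [(best[0] + i*step, best[1] + j*step)
                 for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(cands, key=lambda s: sdf(img, ref, *s))
        step //= 2
    return best

ref = gauss_spot((32, 32), (16, 16))        # reference spot template
img = gauss_spot((32, 32), (19, 14))        # same spot displaced by (+3, -2)
shift = three_step_search(img, ref)
```

The saving is the point: a full scan of a 9x9 shift window costs 81 similarity evaluations, while the three-step search above costs 27.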
Atmospheric turbulence profiling with unknown power spectral density
NASA Astrophysics Data System (ADS)
Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny
2018-04-01
Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method for retrieving information about the atmosphere from telescope data is the so-called SLODAR approach, where the atmospheric turbulence profile is estimated from correlations of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that jointly estimating the turbulence profile above ground and the unknown power spectral density at the ground is an ill-posed problem, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Moreover, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.
Modeling of light-emitting diode wavefronts for the optimization of transmission holograms.
Karthaus, Daniela; Giehl, Markus; Sandfuchs, Oliver; Sinzinger, Stefan
2017-06-20
The objective of applying transmission holograms in automotive headlamp systems requires adapting the holograms to divergent and polychromatic light sources such as light-emitting diodes (LEDs). In this paper, four different options for describing the scalar light waves emitted by a typical automotive LED are considered. These include a new approach to determine the LED's wavefront from interferometric measurements. Computer-generated holograms are designed considering the different LED approximations and recorded into a photopolymer. The holograms are reconstructed with the LED, and the resulting images are analyzed to evaluate the quality of the wave descriptions. We show that our new approach leads to better results than the other wave descriptions. The enhancement is evaluated by the correlation between reconstructed and ideal images. Compared with the next best approximation, a spherical wave, the correlation coefficient increased by 0.18% at 532 nm, 1.69% at 590 nm, and 0.75% at 620 nm.
Zernike Wavefront Sensor Modeling Development for LOWFS on WFIRST-AFTA
NASA Technical Reports Server (NTRS)
Wang, Xu; Wallace, J. Kent; Shi, Fang
2015-01-01
The WFIRST-AFTA design makes use of an existing 2.4 m telescope for direct imaging of exoplanets. To maintain the high contrast needed for the coronagraph, the wavefront error (WFE) of the optical system needs to be continuously sensed and controlled. Low Order Wavefront Sensing (LOWFS) uses the rejected starlight from an intermediate focal plane to sense wavefront changes (mostly thermally induced low-order WFE) by combining the LOWFS mask (a phase plate with a reflective layer located at the small center region) with the starlight rejection masks, i.e. the Hybrid Lyot Coronagraph (HLC) occulter or the Shaped Pupil Coronagraph (SPC) field stop. The Zernike wavefront sensor (ZWFS) measures phase via the phase-contrast method and is known to be photon-noise optimal for measuring low-order aberrations. Recently, the ZWFS was selected as the baseline LOWFS technology on WFIRST-AFTA for its good sensitivity, accuracy, and easy integration with the starlight rejection mask. In this paper, we review the theory of ZWFS operation, describe the ZWFS algorithm development, and summarize various numerical sensitivity studies of the sensor performance. Finally, the predicted sensor performance for the SPC and HLC configurations is presented.
A Novel Method of High Accuracy, Wavefront Phase and Amplitude Correction for Coronagraphy
NASA Technical Reports Server (NTRS)
Bowers, Charles W.; Woodgate, Bruce E.; Lyon, Richard G.
2003-01-01
Detection of extra-solar, and especially terrestrial-like, planets using coronagraphy requires an extremely high level of wavefront correction. For example, the study of Woodruff et al. (2002) has shown that phase uniformity of order 10^-4 lambda (rms) must be achieved over the critical range of spatial frequencies to produce the ~10^10 contrast needed for the Terrestrial Planet Finder (TPF) mission. Correction of wavefront phase errors to this level may be accomplished by using a very high precision deformable mirror (DM). However, not only phase but also amplitude uniformity of the same scale (~10^-4) and over the same spatial frequency range must be simultaneously obtained to remove all residual speckle in the image plane. We present a design for producing simultaneous wavefront phase and amplitude uniformity to high levels from an input wavefront of lower quality. The design uses a dual Michelson interferometer arrangement incorporating two DMs and a single fixed mirror (all at pupils) and two beamsplitters: one with unequal (asymmetric) beam splitting and one with symmetric beam splitting. This design allows high-precision correction of both phase and amplitude using DMs with relatively coarse steps and permits a simple correction algorithm.
Coadding Techniques for Image-based Wavefront Sensing for Segmented-mirror Telescopes
NASA Technical Reports Server (NTRS)
Smith, Scott; Aronstein, David; Dean, Bruce; Acton, Scott
2007-01-01
Image-based wavefront sensing algorithms are being used to characterize optical performance for a variety of current and planned astronomical telescopes. Phase retrieval recovers the optical wavefront that correlates to a series of diversity-defocused point-spread functions (PSFs), where multiple frames can be acquired at each defocus setting. Multiple frames of data can be coadded in different ways; two extremes are in "image-plane space," to average the frames for each defocused PSF and use phase retrieval once on the averaged images, or in "pupil-plane space," to use phase retrieval on every set of PSFs individually and average the resulting wavefronts. The choice of coadd methodology is particularly noteworthy for segmented-mirror telescopes that are subject to noise that causes uncorrelated motions between groups of segments. Using data collected on and simulations of the James Webb Space Telescope Testbed Telescope (TBT) commissioned at Ball Aerospace, we show how different sources of noise (uncorrelated segment jitter, turbulence, and common-mode noise) and different parts of the optical wavefront, segment and global aberrations, contribute to choosing the coadd method. Of particular interest, segment piston is more accurately recovered in "image-plane space" coadding, while segment tip/tilt is recovered in "pupil-plane space" coadding.
Focusing of light through turbid media by curve fitting optimization
NASA Astrophysics Data System (ADS)
Gong, Changmei; Wu, Tengfei; Liu, Jietao; Li, Huijuan; Shao, Xiaopeng; Zhang, Jianqi
2016-12-01
The construction of the wavefront phase plays a critical role in focusing light through turbid media. We introduce the curve fitting algorithm (CFA) into the feedback control procedure for wavefront optimization. Unlike the existing continuous sequential algorithm (CSA), the CFA locates the optimal phase by fitting a curve to the measured signals. Simulation results show that, like the genetic algorithm (GA), the proposed CFA technique is far less susceptible to experimental noise than the CSA. Furthermore, only three measurements of the feedback signal are enough for the CFA to fit the optimal phase while obtaining a higher focal intensity than the CSA and the GA, dramatically shortening the optimization time by a factor of 3 compared with the CSA and the GA. The proposed CFA approach can be applied to enhance the focal intensity and boost the focusing speed in the fields of biological imaging, particle trapping, laser therapy, and so on, and might help to focus light through dynamic turbid media.
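The abstract does not state the fitting function, but its claim that three measurements suffice is consistent with the common assumption that the feedback signal varies as a cosine of the tested phase, in which case three samples determine the optimum in closed form. A sketch under that assumption (the hidden optimum of 1.1 rad is synthetic):

```python
import numpy as np

def optimal_phase_from_three(I1, I2, I3):
    """Fit I(theta) = a + b*cos(theta - phi0) through measurements taken
    at test phases theta = 0, 2*pi/3, 4*pi/3, and return the phase phi0
    that maximizes the signal (assumes modulation amplitude b > 0)."""
    return np.arctan2(np.sqrt(3.0) * (I2 - I3), 2.0*I1 - I2 - I3)

# Synthetic feedback signal with a hidden optimal phase of 1.1 rad.
phi0 = 1.1
signal = lambda theta: 2.0 + 0.8 * np.cos(theta - phi0)
thetas = [0.0, 2*np.pi/3, 4*np.pi/3]
est = optimal_phase_from_three(*[signal(t) for t in thetas])
```

Whatever curve is fitted, the cost argument is the same: three probe measurements per segment versus the many phase steps a sequential scan would need.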
Novel 3D Compression Methods for Geometry, Connectivity and Texture
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2016-06-01
A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with the texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without any reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided against a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
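The difference-encoding step described for adjacent vertices can be illustrated with a generic quantize-then-delta scheme; this is a stand-in to show why such residuals compress well under an entropy coder, not the paper's GM-Algorithm:

```python
import numpy as np

def delta_encode(vertices, step=1e-4):
    """Quantize float coordinates to integers, then store differences
    between consecutive vertices. The small residual integers compress
    far better under an entropy coder (e.g. arithmetic coding) than the
    raw floats. (Illustrative stand-in, not the paper's GM-Algorithm.)"""
    q = np.round(np.asarray(vertices) / step).astype(np.int64)
    deltas = np.diff(q, axis=0,
                     prepend=np.zeros((1, q.shape[1]), dtype=q.dtype))
    return deltas, step

def delta_decode(deltas, step):
    """Invert delta_encode: running sum, then de-quantize."""
    return np.cumsum(deltas, axis=0) * step

verts = np.array([[0.1234, 0.50, -1.0000],
                  [0.1250, 0.50, -1.0002],
                  [0.1300, 0.49, -1.0000]])
enc, step = delta_encode(verts)
dec = delta_decode(enc, step)
```

The round trip is lossy only up to the quantization step (here half of 1e-4 per coordinate), which mirrors the usual trade-off in mesh-geometry coders.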
Airway segmentation and analysis for the study of mouse models of lung disease using micro-CT
NASA Astrophysics Data System (ADS)
Artaechevarria, X.; Pérez-Martín, D.; Ceresa, M.; de Biurrun, G.; Blanco, D.; Montuenga, L. M.; van Ginneken, B.; Ortiz-de-Solorzano, C.; Muñoz-Barrutia, A.
2009-11-01
Animal models of lung disease are gaining importance in understanding the underlying mechanisms of diseases such as emphysema and lung cancer. Micro-CT allows in vivo imaging of these models, thus permitting the study of the progression of the disease or the effect of therapeutic drugs in longitudinal studies. Automated analysis of micro-CT images can be helpful to understand the physiology of diseased lungs, especially when combined with measurements of respiratory system input impedance. In this work, we present a fast and robust murine airway segmentation and reconstruction algorithm. The algorithm is based on a propagating fast marching wavefront that, as it grows, divides the tree into segments. We devised a number of specific rules to guarantee that the front propagates only inside the airways and to avoid leaking into the parenchyma. The algorithm was tested on normal mice, a mouse model of chronic inflammation and a mouse model of emphysema. A comparison with manual segmentations of two independent observers shows that the specificity and sensitivity values of our method are comparable to the inter-observer variability, and radius measurements of the mainstem bronchi reveal significant differences between healthy and diseased mice. Combining measurements of the automatically segmented airways with the parameters of the constant phase model provides extra information on how disease affects lung function.
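The propagating-front idea in the abstract can be sketched as simple region growing on a toy 2D slice. The intensity threshold below is a hypothetical stand-in for the paper's leak-prevention rules, which are more elaborate:

```python
from collections import deque
import numpy as np

def grow_airway(image, seed, threshold):
    """Propagate a wavefront from a seed pixel, accepting only pixels
    darker than `threshold` (air appears dark in CT). The threshold is
    a stand-in for the paper's leak-prevention rules."""
    mask = np.zeros(image.shape, dtype=bool)
    front = deque([seed])
    mask[seed] = True
    while front:
        y, x = front.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx] and image[ny, nx] < threshold):
                mask[ny, nx] = True
                front.append((ny, nx))
    return mask

# Toy "slice": a dark branching airway (value 0) in bright tissue (100).
img = np.full((7, 7), 100)
img[1:6, 3] = 0          # vertical branch
img[3, 1:6] = 0          # horizontal branch
mask = grow_airway(img, seed=(1, 3), threshold=50)
```

Segment-wise labelling, as in the paper, would additionally record where the front splits into disconnected components, marking branch points of the airway tree.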
Wave front sensing for next generation earth observation telescope
NASA Astrophysics Data System (ADS)
Delvit, J.-M.; Thiebaut, C.; Latry, C.; Blanchet, G.
2017-09-01
High-resolution observation systems are highly dependent on optics quality and are usually designed to be nearly diffraction limited. Such performance allows setting the Nyquist frequency closer to the cut-off frequency, or equivalently minimizing the pupil diameter for a given ground sampling distance target. Up to now, defocus is the only aberration that is allowed to evolve slowly and that may be corrected in flight, using an open-loop correction based on ground estimation and upload of a refocusing command. For instance, the Pleiades satellites' defocus is assessed from star acquisitions, and refocusing is done by thermal actuation of the M2 mirror. Next-generation systems under study at CNES should include active optics in order to accommodate evolving aberrations not limited to defocus, due for instance to variable in-orbit thermal conditions. Active optics relies on aberration estimation through an onboard Wave Front Sensor (WFS). One option is a Shack-Hartmann sensor, which can be used on extended scenes (unknown landscapes). A wavefront computation algorithm must then be implemented on board the satellite to provide the wavefront error measurement for the control loop. In the worst-case scenario, this measurement must be computed before each image acquisition. A robust and fast shift estimation algorithm between Shack-Hartmann images is therefore needed to fulfill this requirement. A fast gradient-based algorithm using optical flow with a Lucas-Kanade method has been studied and implemented on an electronic device developed by CNES. Measurement accuracy depends on the Wave Front Error (WFE), the landscape frequency content, the number of estimated aberrations, the a priori knowledge of high-order aberrations and the characteristics of the sensor. CNES has performed a full-scale sensitivity analysis over the whole parameter set with this internally developed algorithm.
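The gradient-based Lucas-Kanade shift estimation mentioned above reduces, for a single global shift, to a 2x2 least-squares problem built from the image gradients. A minimal single-step sketch (the Gaussian test scene and the 0.3/-0.2 pixel shifts are synthetic):

```python
import numpy as np

def lk_shift(ref, img):
    """Single-step Lucas-Kanade estimate of the global (dx, dy) shift
    between two images: linearize img ~ ref + Ix*dx + Iy*dy and solve
    the 2x2 normal equations accumulated over all pixels."""
    Iy, Ix = np.gradient(ref)                 # axis 0 = y, axis 1 = x
    It = img - ref
    A = np.array([[np.sum(Ix*Ix), np.sum(Ix*Iy)],
                  [np.sum(Ix*Iy), np.sum(Iy*Iy)]])
    b = -np.array([np.sum(Ix*It), np.sum(Iy*It)])
    return np.linalg.solve(A, b)              # (dx, dy)

# Smooth test scene: a broad Gaussian, analytically shifted by (0.3, -0.2).
y, x = np.mgrid[0:64, 0:64]
scene = lambda sx, sy: np.exp(-(((x - 32 - sx)**2 + (y - 32 - sy)**2) / 60.0))
dx, dy = lk_shift(scene(0, 0), scene(0.3, -0.2))
```

In a real sensor this step would be iterated, warping one subaperture image toward the other, and the scene's frequency content (the abstract's point) enters through the conditioning of the 2x2 matrix `A`.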
NASA Astrophysics Data System (ADS)
Ssessanga, Nicholas; Kim, Yong Ha; Jeong, Se-Heon
2017-03-01
A statistical study of the relationship between the perturbation component ΔTEC (total electron content) and the F2 layer peak height (hmF2) during nighttime medium-scale traveling ionospheric disturbances is presented. The results are obtained by using a time-dependent computerized ionospheric tomography (CIT) technique, realized by using slant total electron content observations from a dense Global Positioning System receiver network over Japan (with more than 1000 receivers) together with a multiplicative algebraic reconstruction technique. Reconstructions from CIT were validated by using ionosonde and occultation measurements. A total of 36 different time snapshots of the ionosphere in which medium-scale traveling ionospheric disturbances (MSTIDs) were evident were analyzed, obtained from a data set covering the years 2011 to 2014. The reconstructed surface wavefronts of the ΔTEC and hmF2 structure were found to be aligned along the northwest-southeast direction. These results confirm that nighttime MSTIDs are driven by electrodynamic forces related to the Perkins instability, which explains the northwest-southeast wavefront alignment based on F region electrodynamics. Furthermore, from the statistical analysis, hmF2 varied quasiperiodically in altitude with dominant peak-to-peak amplitudes between 10 and 40 km. In addition, ΔTEC and hmF2 were 60% anticorrelated.
Co-adding techniques for image-based wavefront sensing for segmented-mirror telescopes
NASA Astrophysics Data System (ADS)
Smith, J. S.; Aronstein, David L.; Dean, Bruce H.; Acton, D. S.
2007-09-01
Image-based wavefront sensing algorithms are being used to characterize the optical performance for a variety of current and planned astronomical telescopes. Phase retrieval recovers the optical wavefront that correlates to a series of diversity-defocused point-spread functions (PSFs), where multiple frames can be acquired at each defocus setting. Multiple frames of data can be co-added in different ways; two extremes are in "image-plane space," to average the frames for each defocused PSF and use phase retrieval once on the averaged images, or in "pupil-plane space," to use phase retrieval on each PSF frame individually and average the resulting wavefronts. The choice of co-add methodology is particularly noteworthy for segmented-mirror telescopes that are subject to noise that causes uncorrelated motions between groups of segments. Using models and data from the James Webb Space Telescope (JWST) Testbed Telescope (TBT), we show how different sources of noise (uncorrelated segment jitter, turbulence, and common-mode noise) and different parts of the optical wavefront, segment and global aberrations, contribute to choosing the co-add method. Of particular interest, segment piston is more accurately recovered in "image-plane space" co-adding, while segment tip/tilt is recovered in "pupil-plane space" co-adding.
Fresnel transform phase retrieval from magnitude.
Pitts, Todd A; Greenleaf, James F
2003-08-01
This report presents a generalized projection method for recovering the phase of a finite support, two-dimensional signal from knowledge of its magnitude in the spatial position and Fresnel transform domains. We establish the uniqueness of sampled monochromatic scalar field phase given Fresnel transform magnitude and finite region of support constraints for complex signals. We derive an optimally relaxed version of the algorithm resulting in a significant reduction in the number of iterations needed to obtain useful results. An advantage of using the Fresnel transform (as opposed to Fourier) for measurement is that the shift-invariance of the transform operator implies retention of object location information in the transformed image magnitude. As a practical application in the context of ultrasound beam measurement we discuss the determination of small optical phase shifts from near field optical intensity distributions. Experimental data are used to reconstruct the phase shape of an optical field immediately after propagating through a wide bandwidth ultrasonic pulse. The phase of each point on the optical wavefront is proportional to the ray sum of pressure through the ultrasound pulse (assuming low ultrasonic intensity). An entire pressure field was reconstructed in three dimensions and compared with a calibrated hydrophone measurement. The comparison is excellent, demonstrating that the phase retrieval is quantitative.
NASA Astrophysics Data System (ADS)
Egron, Sylvain; Soummer, Rémi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Levecq, Olivier; Mazoyer, Johan; N'Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand
2017-09-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope, such as JWST. With the JWST Science and Operations Center co-located at STScI, JOST was developed both to provide a platform for staff training and to test alternate wavefront sensing and control strategies for independent validation or future improvements beyond the baseline operations. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides image quality similar to JWST's (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip and tilt, and (2) the second lens, which stands in for the secondary mirror, in tip, tilt and x, y, z positions. We present the most recent experimental results for the segmented mirror alignment. Our implementation of the wavefront sensing (WFS) algorithms using phase diversity is tested both in simulation and experimentally. The wavefront control (WFC) algorithms, which rely on a linear model for the optical aberrations induced by misalignment of the secondary lens and the segmented mirror, are tested and validated both in simulations and experimentally. In this proceeding, we present the performance of the full active optics control loop in the presence of perturbations on the segmented mirror, and we detail the quality of the alignment correction.
3D imaging and wavefront sensing with a plenoptic objective
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.
2011-06-01
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to mitigate the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations related to the aforementioned aspects, but also two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Sodium artificial laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields.
Imbe, Masatoshi
2018-03-20
The optical configuration proposed in this paper consists of a 4-f optical setup with a wavefront modulation device, such as a concave mirror or a spatial light modulator, on the Fourier plane. The transverse magnification of images reconstructed with the proposed configuration is independent of the locations of the object and the image sensor; therefore, reconstructed images of objects at different distances are scaled with a fixed transverse magnification. This property is derived from Fourier optics and verified mathematically with the optical matrix method. Numerical simulation and experimental results are also given to confirm the fixed magnification of the reconstructed images.
Wavefront shaping to correct intraocular scattering
NASA Astrophysics Data System (ADS)
Artal, Pablo; Arias, Augusto; Fernández, Enrique
2018-02-01
Cataract is a common ocular pathology that increases the amount of intraocular scattering. It degrades the quality of vision through both blur and contrast reduction of the retinal images. In this work, we propose a non-invasive method, based on wavefront shaping (WS), to minimize cataract effects. For the experimental demonstration of the method, a liquid crystal on silicon (LCoS) spatial light modulator was used for both reproduction and reduction of realistic cataract effects. The LCoS area was separated into two halves conjugated with the eye's pupil by a telescope with unit magnification. Thus, while phase maps inducing programmable amounts of intraocular scattering (related to cataract severity) were displayed on one half of the LCoS, test wavefronts were sequentially displayed on the other. The resulting imaging improvements were visually evaluated by subjects with no known ocular pathology viewing through the instrument. The diffracted intensity at the exit pupil is analyzed as feedback for the implemented algorithms in the search for the optimum wavefront. Numerical and experimental results of the imaging improvements are presented and discussed.
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
Simulating large atmospheric phase screens using a woofer-tweeter algorithm.
Buscher, David F
2016-10-03
We describe an algorithm for simulating atmospheric wavefront perturbations over ranges of spatial and temporal scales spanning more than 4 orders of magnitude. An open-source implementation of the algorithm written in Python can simulate the evolution of the perturbations more than an order-of-magnitude faster than real time. Testing of the implementation using metrics appropriate to adaptive optics systems and long-baseline interferometers show accuracies at the few percent level or better.
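The "tweeter" half of such a woofer-tweeter scheme is commonly an FFT-based screen that filters white noise by the square root of the Kolmogorov spectrum; the large-scale power this method under-represents is what the "woofer" supplies. A minimal sketch of the FFT part only (normalization conventions vary between references, so treat the scaling as indicative):

```python
import numpy as np

def kolmogorov_screen(n, pixel_scale, r0, rng=None):
    """FFT-based Kolmogorov phase screen: filter complex white noise by
    the square root of Phi(k) = 0.023 r0^(-5/3) k^(-11/3).

    Low-order (large-scale) power is under-represented by a single FFT
    screen of finite size; a woofer-tweeter scheme adds it separately.
    """
    rng = np.random.default_rng(rng)
    df = 1.0 / (n * pixel_scale)                   # frequency grid spacing
    fx = np.fft.fftfreq(n, d=pixel_scale)
    f = np.sqrt(fx[None, :]**2 + fx[:, None]**2)
    f[0, 0] = np.inf                               # suppress the piston term
    psd = 0.023 * r0**(-5.0/3.0) * f**(-11.0/3.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real

phi = kolmogorov_screen(n=256, pixel_scale=0.02, r0=0.15, rng=0)
```

Spanning four orders of magnitude in scale, as the abstract describes, then amounts to combining screens like this one at several resolutions rather than brute-forcing one enormous grid.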
NASA Astrophysics Data System (ADS)
Vorontsov, Mikhail A.; Kolosov, Valeriy V.
2004-12-01
Target-in-the-loop (TIL) wave propagation geometry represents perhaps the most challenging case for adaptive optics applications related to the maximization of irradiance power density on extended, remotely located surfaces in the presence of dynamically changing refractive index inhomogeneities in the propagation medium. We introduce a TIL propagation model that combines the parabolic equation describing outgoing wave propagation with the equation describing the evolution of the mutual coherence function (MCF) of the backscattered (returned) wave. The resulting evolution equation for the MCF is further simplified by use of the smooth refractive index approximation. This approximation enables derivation of the transport equation for the returned-wave brightness function, analyzed here using the method of characteristics (brightness function trajectories). The equations for the brightness function trajectories (ray equations) can be efficiently integrated numerically. We also consider wavefront sensors that sense speckle-averaged characteristics of the wavefront phase (TIL sensors). Analysis of the wavefront phase reconstructed from Shack-Hartmann TIL sensor measurements shows that an extended target introduces a phase modulation (target-induced phase) that cannot be easily separated from the atmospheric turbulence-related phase aberrations. We also show that wavefront sensing results depend on the extended target's shape, surface roughness, and the outgoing beam's intensity distribution on the target surface.
Wavefront tilt feedforward for the Formation Interferometer Testbed (FIT)
NASA Technical Reports Server (NTRS)
Shields, J. F.; Liewer, K.; Wehmeier, U.
2002-01-01
Separated spacecraft interferometry is a candidate architecture for several future NASA missions. The Formation Interferometer Testbed (FIT) is a ground based testbed dedicated to the validation of this key technology for a formation of two spacecraft. In separated spacecraft interferometry, the residual relative motion of the component spacecraft must be compensated for by articulation of the optical components. In this paper, the design of the FIT interferometer pointing control system is described. This control system is composed of a metrology pointing loop that maintains an optical link between the two spacecraft and two stellar pointing loops for stabilizing the stellar wavefront at both the right and left apertures of the instrument. A novel feedforward algorithm is used to decouple the metrology loop from the left side stellar loop. Experimental results from the testbed are presented that verify this approach and that fully demonstrate the performance of the algorithm.
Coherent control of plasma dynamics by feedback-optimized wavefront manipulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Z.-H.; Hou, B.; Gao, G.
2015-05-15
Plasmas generated by an intense laser pulse can support coherent structures, such as large-amplitude wakefields, that can affect the outcome of an experiment. We investigate the coherent control of plasma dynamics by feedback-optimized wavefront manipulation using a deformable mirror. The experimental outcome is directly used as feedback in an evolutionary algorithm for optimization of the phase front of the driving laser pulse. In this paper, we applied this method to two different experiments: (i) acceleration of electrons in laser-driven plasma waves and (ii) self-compression of optical pulses induced by ionization nonlinearity. The manipulation of the laser wavefront leads to orders-of-magnitude improvement in electron beam properties such as the peak charge, beam divergence, and transverse emittance. The demonstration of coherent control for plasmas opens new possibilities for future laser-based accelerators and their applications.
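The feedback scheme described above can be sketched as a simple evolutionary loop over deformable-mirror actuator values. Since we cannot run the experiment, the figure of merit below is a stand-in quadratic surrogate with a hidden optimum; the population size, mutation width, and selection rule are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def figure_of_merit(phase):
    """Stand-in for the experimental feedback signal (e.g. electron beam
    charge); here it simply rewards proximity to a hidden optimal wavefront."""
    target = np.linspace(-1.0, 1.0, phase.size)   # hypothetical optimum
    return -np.sum((phase - target) ** 2)

def evolve_dm(n_actuators=16, pop_size=30, n_gen=60, sigma=0.3):
    """Minimal evolutionary optimizer: keep the best half of the population,
    refill it with mutated copies, repeat. Parameters are illustrative."""
    pop = rng.normal(0.0, 1.0, (pop_size, n_actuators))
    for _ in range(n_gen):
        scores = np.array([figure_of_merit(p) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]
        children = elite + rng.normal(0.0, sigma, elite.shape)
        pop = np.vstack([elite, children])
    scores = np.array([figure_of_merit(p) for p in pop])
    return pop[np.argmax(scores)], scores.max()
```

Because the elite is carried over unchanged, the best score is monotonically non-decreasing, which matches the "experiment as feedback" setting where each generation can only improve on the best mirror shape found so far.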
Resolution enhancement in digital holography by self-extrapolation of holograms.
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-03-25
It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
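The pad-then-iterate idea can be sketched in a Gerchberg-Papoulis style loop: pad the measured hologram, enforce a finite object support in the conjugate domain, and re-impose the measured pixels each pass so the padded border fills in. This is a schematic sketch under our own assumptions; the paper uses full diffraction propagation between planes, whereas here a bare FFT and a user-supplied support mask stand in for it.

```python
import numpy as np

def self_extrapolate(hologram, pad, support, n_iter=200):
    """Iterative self-extrapolation sketch. 'hologram' is the measured
    (real) array, 'pad' the border width in pixels, and 'support' a boolean
    object-support mask in the conjugate (here: FFT) domain."""
    n = hologram.shape[0]
    big = np.zeros((n + 2 * pad, n + 2 * pad))
    sl = slice(pad, pad + n)
    big[sl, sl] = hologram
    field = big.astype(complex)
    for _ in range(n_iter):
        obj = np.fft.fft2(field)
        obj *= support                 # object-domain (support) constraint
        field = np.fft.ifft2(obj)
        field[sl, sl] = hologram       # re-impose the measured pixels
    return field
```

Each iteration alternates between the data constraint (measured central pixels) and the object constraint (finite support), which is what allows the wavefront beyond the detector edge to be retrieved.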
Dynamics and Stability of Acoustic Wavefronts in the Ocean
2014-09-30
...processes on underwater acoustic fields. The 3-D HWT algorithm was also applied to investigate long-range propagation of infrasound in the atmosphere... oceanographic processes on underwater sound propagation and also has been demonstrated to be an efficient and robust technique for modeling infrasound... algorithm by modeling propagation of infrasound generated by Eyjafjallajökull volcano in southern Iceland. Eruptions of this volcano were recorded by
Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.
Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro
2018-04-16
In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total amount of computational complexity severely increases with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that using the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.
Advancements to the planogram frequency–distance rebinning algorithm
Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E
2010-01-01
In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. 
Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790
High speed real-time wavefront processing system for a solid-state laser system
NASA Astrophysics Data System (ADS)
Liu, Yuan; Yang, Ping; Chen, Shanqiu; Ma, Lifang; Xu, Bing
2008-03-01
A high-speed real-time wavefront processing system for a solid-state laser beam cleanup system has been built. The system consists of a Core 2 Industrial PC (IPC) running Linux and the real-time Linux (RT-Linux) operating system (OS), a PCI image grabber, and a D/A card. More often than not, the phase aberrations of the output beam from solid-state lasers vary rapidly with intracavity thermal effects and environmental influences. To compensate for these phase aberrations successfully, a high-speed real-time wavefront processing system is presented. Compared to former systems, this system improves the processing speed considerably. In the new system, the acquisition of image data, the output of control voltage data, and the implementation of the reconstruction and control algorithm are treated as real-time tasks in kernel space, while the display of wavefront information and the man-machine interface are treated as non-real-time tasks in user space. Parallel processing of the real-time tasks in Symmetric Multi-Processing (SMP) mode is the main strategy for improving speed. In this paper, the performance and efficiency of the wavefront processing system are analyzed. Open-loop experimental results show that the sampling frequency of the system is up to 3300 Hz, and that the system can cope well with the phase aberrations of solid-state lasers.
Target-in-the-loop beam control: basic considerations for analysis and wave-front sensing
NASA Astrophysics Data System (ADS)
Vorontsov, Mikhail A.; Kolosov, Valeriy
2005-01-01
Target-in-the-loop (TIL) wave propagation geometry represents perhaps the most challenging case for adaptive optics applications that are related to maximization of irradiance power density on extended remotely located surfaces in the presence of dynamically changing refractive-index inhomogeneities in the propagation medium. We introduce a TIL propagation model that uses a combination of the parabolic equation describing coherent outgoing-wave propagation, and the equation describing evolution of the mutual correlation function (MCF) for the backscattered wave (return wave). The resulting evolution equation for the MCF is further simplified by use of the smooth-refractive-index approximation. This approximation permits derivation of the transport equation for the return-wave brightness function, analyzed here by the method of characteristics (brightness function trajectories). The equations for the brightness function trajectories (ray equations) can be efficiently integrated numerically. We also consider wave-front sensors that perform sensing of speckle-averaged characteristics of the wave-front phase (TIL sensors). Analysis of the wave-front phase reconstructed from Shack-Hartmann TIL sensor measurements shows that an extended target introduces a phase modulation (target-induced phase) that cannot be easily separated from the atmospheric-turbulence-related phase aberrations. We also show that wave-front sensing results depend on the extended target shape, surface roughness, and outgoing-beam intensity distribution on the target surface. For targets with smooth surfaces and nonflat shapes, the target-induced phase can contain aberrations. The presence of target-induced aberrations in the conjugated phase may result in a deterioration of adaptive system performance.
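The method of characteristics turns the brightness-function transport equation into ordinary ray equations that, as the abstract notes, can be integrated numerically. A paraxial sketch of that integration is shown below; the single Gaussian "lenslet" index profile, the step integrator, and all numerical values are our illustrative assumptions, not the paper's propagation model.

```python
import numpy as np

def dn_dx(x):
    """Transverse gradient of an illustrative smooth refractive-index
    perturbation (one weak Gaussian lenslet); not a turbulence model."""
    n1, w = 1e-6, 0.05
    return -2.0 * n1 * (x / w**2) * np.exp(-(x / w) ** 2)

def trace_ray(x0, theta0, z_max=10.0, n_steps=20000):
    """Integrate the paraxial characteristic (ray) equations
    dx/dz = theta, dtheta/dz = dn/dx with a symplectic Euler step."""
    x, theta = float(x0), float(theta0)
    dz = z_max / n_steps
    for _ in range(n_steps):
        theta += dn_dx(x) * dz
        x += theta * dz
    return x, theta
```

An on-axis ray through the symmetric index bump propagates undeflected, while an off-axis ray is bent back toward the axis, which is the behaviour the brightness-function trajectories encode.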
NASA Astrophysics Data System (ADS)
La Foy, Roderick; Vlachos, Pavlos
2011-11-01
An optimally designed MLOS tomographic reconstruction algorithm for use in 3D PIV and PTV applications is analyzed. Using a set of optimized reconstruction parameters, the reconstructions produced by the MLOS algorithm are shown to be comparable to reconstructions produced by the MART algorithm for a range of camera geometries, camera numbers, and particle seeding densities. The resultant velocity-field error calculated using PIV and PTV algorithms is further minimized by applying both pre- and post-processing to the reconstructed data sets.
Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo
2016-01-01
To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra-reader, inter-reader, and inter-reconstruction-algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43 ± 10.56 years) with 42 pulmonary tumors (22.56 ± 8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction-algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray-level co-occurrence matrix (GLCM)-based features showed significant differences (p < 0.05) among reconstruction algorithms. As for variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV ≤ 5%). Inter-reader variability was larger than intra-reader or inter-reconstruction-algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction-algorithm variability was significantly greater than inter-reader variability (p < 0.013). Most of the radiomic features were significantly affected by the reconstruction algorithms. Inter-reconstruction-algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.
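The robustness criterion used above (CV ≤ 5% across reconstructions) is straightforward to compute. A minimal sketch, with hypothetical feature values rather than the study's data:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV (%) of one radiomic feature measured under several conditions
    (e.g. FBP vs. two iterative-reconstruction levels)."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

def robust_features(feature_table, threshold=5.0):
    """Return names of features whose CV across reconstructions is at or
    below the threshold, mirroring the 5% robustness criterion above."""
    return [name for name, vals in feature_table.items()
            if coefficient_of_variation(vals) <= threshold]
```

For example, a feature measured as 4.00, 4.05, and 3.98 under three reconstruction settings has a CV below 1% and would count as robust, while one measured as 10, 14, and 7 would not.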
LSPV+7, a branch-point-tolerant reconstructor for strong turbulence adaptive optics.
Steinbock, Michael J; Hyde, Milo W; Schmidt, Jason D
2014-06-20
Optical wave propagation through long paths of extended turbulence presents unique challenges to adaptive optics (AO) systems. As scintillation and branch points develop in the beacon phase, challenges arise in accurately unwrapping the received wavefront and optimizing the reconstructed phase with respect to branch cut placement on a continuous facesheet deformable mirror. Several applications are currently restricted by these capability limits: laser communication, laser weapons, remote sensing, and ground-based astronomy. This paper presents a set of temporally evolving AO simulations comparing traditional least-squares reconstruction techniques to a complex-exponential reconstructor and several other reconstructors derived from the postprocessing congruence operation. The reconstructors' behavior in closed-loop operation is compared and discussed, providing several insights into the fundamental strengths and limitations of each reconstructor type. This research utilizes a self-referencing interferometer (SRI) as the high-order wavefront sensor, driving a traditional linear control law in conjunction with a cooperative point source beacon. The SRI model includes practical optical considerations and frame-by-frame fiber coupling effects to allow for realistic noise modeling. The "LSPV+7" reconstructor is shown to offer the best performance in terms of Strehl ratio and correction stability, outperforming the traditional least-squares reconstructed system by an average of 120% in the studied scenarios. Utilizing a continuous facesheet deformable mirror, these reconstructors offer significant AO performance improvements in strong turbulence applications without the need for segmented deformable mirrors.
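The traditional least-squares baseline that the paper compares against can be sketched for a small slope grid. This toy reconstructor (our construction, on a Hudgin-style geometry) fits a single-valued phase to measured x/y phase differences, which is also why such reconstructors discard the rotational, branch-point part of the wavefront.

```python
import numpy as np

def hudgin_reconstruct(sx, sy):
    """Least-squares phase reconstruction from x/y slope grids (Hudgin
    geometry): solve D @ phi = s in the least-squares sense, where each row
    of D encodes one first difference. Piston is removed at the end."""
    n = sy.shape[0] + 1            # phase grid is n x n
    idx = np.arange(n * n).reshape(n, n)
    rows, rhs = [], []
    for i in range(n):
        for j in range(n - 1):     # x-slopes: phi[i, j+1] - phi[i, j]
            r = np.zeros(n * n)
            r[idx[i, j + 1]], r[idx[i, j]] = 1.0, -1.0
            rows.append(r); rhs.append(sx[i, j])
    for i in range(n - 1):
        for j in range(n):         # y-slopes: phi[i+1, j] - phi[i, j]
            r = np.zeros(n * n)
            r[idx[i + 1, j]], r[idx[i, j]] = 1.0, -1.0
            rows.append(r); rhs.append(sy[i, j])
    phi, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    phi = phi.reshape(n, n)
    return phi - phi.mean()        # remove arbitrary piston
```

For curl-free slope data (no branch points) the reconstruction is exact up to piston; when branch points are present, the least-squares fit smooths over the hidden phase, which motivates the branch-point-tolerant reconstructors studied in the paper.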
NASA Astrophysics Data System (ADS)
Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua
2016-07-01
On the basis of an analysis of the cosine light field with a determined analytic expression and of the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of this algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix (FGI) is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix (PGI); the reconstruction process and the reconstruction error are analyzed, and on this basis simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is comparable to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the images reconstructed by the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the images reconstructed by the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the FGI reconstruction decreases slowly, whereas the PSNR of the PGI and CGI reconstructions decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter, achieving a reconstruction-denoising capability higher than that of the CGI algorithm. The FGI algorithm can thus improve both the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
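The core of the scheme above is linear: measurements y = A x with a known deterministic measurement matrix A, inverted with the pseudo-inverse. A minimal sketch follows; using the complex DFT matrix directly is our idealization (a physical system would project real-valued, e.g. shifted-cosine, illumination patterns as the abstract's preset light fields do).

```python
import numpy as np

def dft_measurement_matrix(m, n):
    """First m rows of the n x n discrete Fourier transform matrix,
    used here as an idealized deterministic measurement matrix."""
    k = np.arange(m)[:, None]
    t = np.arange(n)[None, :]
    return np.exp(-2j * np.pi * k * t / n)

def ghost_reconstruct(A, y):
    """Pseudo-inverse reconstruction x_hat = A^+ y."""
    return np.linalg.pinv(A) @ y
```

With a full set of measurements (m = n) the DFT matrix is invertible and the object is recovered exactly; reducing m below n degrades the reconstruction gracefully, consistent with the slow PSNR decrease reported for the FGI algorithm.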
Fizeau interferometric cophasing of segmented mirrors: experimental validation.
Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter
2014-06-02
We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9%, which served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled-energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root-mean-square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing of an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientific imaging cameras, making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would cophase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.
Peak-locking centroid bias in Shack-Hartmann wavefront sensing
NASA Astrophysics Data System (ADS)
Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.
2018-05-01
Shack-Hartmann wavefront sensing relies on accurate spot-centre measurement. Several algorithms have been developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended-scene community, the importance of the accuracy (bias error due to peak locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed, but these solutions allow only partial bias correction. To date, no systematic study of the bias error had been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images, and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best-performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7, to values of ≲0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
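The first peak-finding algorithm in the list, the 1D parabola, is the standard three-point vertex interpolation around the brightest pixel. A minimal sketch (our implementation; the bias it exhibits on non-parabolic spot profiles is exactly the peak-locking effect the paper quantifies):

```python
import numpy as np

def parabola_subpixel_peak(y):
    """1D three-point parabola interpolation around the brightest sample.
    Returns the estimated peak position in pixels; the sub-pixel offset is
    the vertex of the parabola through (k-1, k, k+1)."""
    k = int(np.argmax(y))
    if k == 0 or k == len(y) - 1:
        return float(k)            # no neighbours to interpolate with
    a, b, c = y[k - 1], y[k], y[k + 1]
    return k + 0.5 * (a - c) / (a - 2.0 * b + c)
```

On a Gaussian spot the parabola model is only approximate, so the recovered position carries a small bias that depends on the true sub-pixel position, the antisymmetric behaviour the proposed correction exploits.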
Polarimeter Blind Deconvolution Using Image Diversity
2007-09-01
...significant presence when imaging through turbulence and its ease of production in the laboratory. An innovative algorithm for detection and estimation... 1.2.2.2 Atmospheric Turbulence. Atmospheric turbulence spatially distorts the wavefront as light passes through it and causes blurring of images in an... intensity image. Various values of β are used in the experiments. The optimal β value varied with the input and the algorithm. The hybrid seemed to
Methods of multi-conjugate adaptive optics for astronomy
NASA Astrophysics Data System (ADS)
Flicker, Ralf
2003-07-01
This work analyses several aspects of multi-conjugate adaptive optics (MCAO) for astronomy. The research ranges from fundamental and technical studies for present-day MCAO projects, to feasibility studies of high-order MCAO instruments for the extremely large telescopes (ELTs) of the future. The first part is an introductory exposition on atmospheric turbulence, adaptive optics (AO) and MCAO, establishing the framework within which the research was carried out. The second part (papers I-VI) commences with a fundamental design-parameter study of MCAO systems, based upon a first-order performance-estimation Monte Carlo simulation. It is investigated how the number and geometry of deformable mirrors and reference beacons, and the choice of wavefront reconstruction algorithm, affect system performance. Multi-conjugation introduces the possibility of optically canceling scintillation in part, at the expense of additional optics, by applying the phase correction in a certain sequence. The effects of scintillation when this sequence is not observed are investigated. As a step toward characterizing anisoplanatism in conventional AO systems, images made with the AO instrument Hokupa'a on the Gemini-North Telescope were analysed with respect to the anisoplanatism signal. By model-fitting of simulated data, conclusions could be drawn about the vertical distribution of turbulence above the observatory site (Mauna Kea), and the significance to future AO and MCAO instruments with conjugated deformable mirrors is addressed. The problem of tilt anisoplanatism in MCAO systems relying on artificial reference beacons, laser guide stars (LGSs), is analysed, and analytical models for predicting the effects of tilt anisoplanatism are devised. A method is presented for real-time retrieval of the tilt anisoplanatism point spread function (PSF), using control loop data. An independent PSF estimation of high accuracy is thus obtained, which enables accurate PSF photometry and deconvolution.
Lastly, a first-order performance estimation method is presented by which MCAO systems for ELTs may be studied efficiently, using sparse matrix techniques for wavefront reconstruction and a hybrid numerical/analytical simulation model. MCAO simulation results are presented for a wide range of telescope diameters up to 100 meters, and the effects of LGSs and a finite turbulence outer scale are investigated.
Focusing light through random scattering media by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-01-01
The focusing of light through random scattering materials using wavefront shaping is studied in detail. We propose a new approach, the four-element division algorithm, to improve the average convergence rate and signal-to-noise ratio of focusing. Using 4096 independently controlled segments of light, the intensity at the target is enhanced 72-fold over the original intensity at the same position. The four-element division algorithm and existing phase-control algorithms for focusing through scattering media are compared in both numerical simulation and experiment. It is found that the four-element division algorithm is particularly advantageous for improving the average convergence rate of focusing.
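For context, the class of "existing phase-control algorithms" the paper compares against includes segment-wise optimizers such as the stepwise sequential algorithm, sketched below against a simulated random transmission matrix. This is not the four-element division algorithm itself (whose partitioning scheme we do not reproduce); the transmission-matrix model, segment count, and phase-step count are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def focus_intensity(phases, tm):
    """Intensity at the target: coherent sum of segment fields through one
    row 'tm' of a random transmission matrix (simulated scattering medium)."""
    field = np.sum(np.exp(1j * phases) * tm)
    return np.abs(field) ** 2

def stepwise_sequential(n_segments=64, n_trials=8):
    """Stepwise sequential optimization: for each SLM segment in turn, test
    a few phase values and keep the one maximizing the target intensity."""
    tm = (rng.normal(size=n_segments) + 1j * rng.normal(size=n_segments)) / np.sqrt(2)
    phases = np.zeros(n_segments)
    test_phases = np.linspace(0.0, 2.0 * np.pi, n_trials, endpoint=False)
    for k in range(n_segments):
        best = max(test_phases,
                   key=lambda p: focus_intensity(np.r_[phases[:k], p, phases[k + 1:]], tm))
        phases[k] = best
    return focus_intensity(phases, tm), focus_intensity(np.zeros(n_segments), tm)
```

Because the candidate set always contains the current phase, the focus intensity never decreases during the sweep; partitioning schemes such as four-element division aim to reach a given enhancement in fewer measurements than this segment-by-segment sweep.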
Objective straylight assessment of the human eye with a novel device
NASA Astrophysics Data System (ADS)
Schramm, Stefan; Schikowski, Patrick; Lerm, Elena; Kaeding, André; Klemm, Matthias; Haueisen, Jens; Baumgarten, Daniel
2016-03-01
Forward scattered light from the anterior segment of the human eye can be measured by Shack-Hartmann (SH) wavefront aberrometers over a limited visual angle. We propose a novel Point Spread Function (PSF) reconstruction algorithm based on SH measurements with a novel measurement device to overcome these limitations. In our optical setup, we use a Digital Mirror Device as a variable field stop, in place of the conventional pinhole that suppresses scatter and reflections. Images with 21 different stop diameters were captured, and from each image the average subaperture image intensity and the average intensity of the pupil were computed. The 21 intensities represent integral values of the PSF, which is consequently reconstructed by differentiation with respect to the visual angle. A generalized form of the Stiles-Holladay approximation is fitted to the PSF, resulting in a straylight parameter Log(IS). Additionally, the transmission loss of the eye is computed. As a proof of principle, a study on 13 healthy young volunteers was carried out. Scatter filters were positioned in front of the volunteer's eye during C-Quant and scatter measurements to generate straylight emulating scatter in the lens. The straylight parameter is compared to the C-Quant measurement parameter Log(ISC) and the scatter density of the filters SDF with a partial correlation. Log(IS) shows significant correlation with the SDF and Log(ISC). The correlation is more prominent between Log(IS) combined with the transmission loss and the SDF and Log(ISC). Our novel measurement and reconstruction technique allows for objective straylight analysis over visual angles of up to 4 degrees.
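The reconstruction step relies on the fact that each stop diameter measures an integral of the PSF over a disk, so differentiating with respect to the stop radius recovers the radial profile. A minimal numerical sketch, assuming a radially symmetric PSF (the relation PSF(θ) = I'(θ) / (2πθ) follows from that symmetry; the sampling and units are illustrative):

```python
import numpy as np

def reconstruct_psf(stop_radii, integrated_intensity):
    """Recover a radial PSF profile from intensities measured with growing
    field stops: I(theta) integrates the PSF over a disk of radius theta,
    so PSF(theta) = dI/dtheta / (2 * pi * theta)."""
    theta = np.asarray(stop_radii, dtype=float)
    dI = np.gradient(np.asarray(integrated_intensity, dtype=float), theta)
    return dI / (2.0 * np.pi * theta)
```

Checking against an analytic case: for PSF(θ) = exp(−θ), the disk integral is I(θ) = 2π(1 − e^(−θ)(1 + θ)), and differentiating the sampled I recovers the exponential profile.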
Transverse Pupil Shifts for Adaptive Optics Non-Common Path Calibration
NASA Technical Reports Server (NTRS)
Bloemhof, Eric E.
2011-01-01
A simple new way of obtaining absolute wavefront measurements with a laboratory Fizeau interferometer was recently devised. In that case, the observed wavefront map is the difference of two cavity surfaces, those of the mirror under test and of an unknown reference surface on the Fizeau's transmission flat. The absolute surface of each can be determined by applying standard wavefront reconstruction techniques to two grids of absolute surface height differences of the mirror under test, obtained from pairs of measurements made with slight transverse shifts in X and Y. Adaptive optics systems typically provide an actuated periscope between the wavefront sensor (WFS) and the common-mode optics, used for lateral registration of the deformable mirror (DM) to the WFS. This periscope permits independent adjustment of either the pupil or the focal spot incident on the WFS. It would be used to give the required lateral pupil motion between common and non-common segments, analogous to the lateral shifts of the two phase contributions in the lab Fizeau. The technique is based on a completely new approach to the calibration of phase. It offers unusual flexibility with regard to the transverse spatial-frequency scales probed, and gives results quickly, making use of no auxiliary equipment other than that built into the adaptive optics system. The new technique may be applied to provide novel calibration information about other optical systems in which the beam may be shifted transversely in a controlled way.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, which breaks through the limit of the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compressive sampling of image signals. In imaging procedures based on compressed sensing theory, not only can the storage space be reduced, but the demand on detector resolution can also be greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and recovers edge information well. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes, verifying the stability of the algorithm, and we compare typical reconstruction algorithms under the same coding mode. On the basis of the minimum-total-variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that this reconstruction algorithm has great advantages over the traditional classical TV-based algorithms: at low measurement rates it can quickly and accurately recover the target image.
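The TV-regularized recovery problem can be sketched in one dimension with plain gradient descent on a smoothed TV penalty. This is a simplified stand-in for the abstract's method (which uses an augmented Lagrangian term and the alternating direction method); the smoothing parameter, step size, and iteration count are our assumptions.

```python
import numpy as np

def tv_cs_reconstruct(A, y, lam=0.05, lr=0.01, n_iter=3000, eps=1e-6):
    """Gradient-descent sketch of TV-regularized compressive-sensing
    recovery: minimize ||A x - y||^2 + lam * sum_i sqrt((x_{i+1}-x_i)^2 + eps),
    i.e. a data-fidelity term plus a smoothed 1D total variation."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_fid = 2.0 * A.T @ (A @ x - y)     # gradient of ||Ax - y||^2
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)           # gradient of smoothed |d|
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= g
        grad_tv[1:] += g
        x -= lr * (grad_fid + lam * grad_tv)
    return x
```

Even with fewer measurements than unknowns, the TV prior selects the piecewise-constant solution consistent with the data, which is why TV models suit edge-dominated images.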
NASA Astrophysics Data System (ADS)
Huang, Kuo-Ting; Chen, Hsi-Chao; Lin, Ssu-Fan; Lin, Ke-Ming; Syue, Hong-Ye
2012-09-01
While tin-doped indium oxide (ITO) has been extensively applied in flexible electronics, residual stress remains an obstacle to overcome. This study investigated the residual stress of flexible electronics with a double-beam shadow moiré interferometer and focused on improving precision with phase-shifting interferometry (PSI). According to the out-of-plane displacement equation, the theoretical error depends on the grating pitch and the angle between the incident light and the CCD. Because the double-beam interferometer is a symmetrical system, the angle error could be reduced to 0.03% for an angle shift of 10°. The experimental error of the double-beam moiré interferometer, however, still reached 2.2% owing to vibration noise and interferogram noise. To improve the measurement precision, PSI was introduced into the double-beam shadow moiré interferometer. The wavefront phase was reconstructed from five interferograms with the Hariharan algorithm. Measurement results on a standard cylinder indicate that the error can be reduced from 2.2% to less than 1% with PSI. The deformation of flexible electronics can thus be reconstructed rapidly and the residual stress calculated with the Stoney correction formula. This shadow moiré interferometer with PSI can improve the precision of residual stress measurements for flexible electronics.
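The Hariharan algorithm mentioned above is the standard five-frame phase-shifting formula for π/2 phase steps: with frames at shifts −π, −π/2, 0, π/2, π, the phase is φ = arctan[2(I2 − I4) / (2I3 − I1 − I5)]. A one-line sketch:

```python
import numpy as np

def hariharan_phase(I1, I2, I3, I4, I5):
    """Hariharan five-frame phase-shifting algorithm (pi/2 steps).
    For I_k = A + B*cos(phi + delta_k), the combination
    2*(I2 - I4) = 4B*sin(phi) and 2*I3 - I1 - I5 = 4B*cos(phi),
    so atan2 recovers phi independent of background A and modulation B."""
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)
```

The inputs can be full interferogram arrays, in which case the wrapped phase map of the whole wavefront is returned pixelwise.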
Ultrasonic transmission at solid-liquid interfaces
NASA Astrophysics Data System (ADS)
Wadley, Haydn N. G.; Queheillalt, Douglas T.; Lu, Yichi
1996-11-01
New non-invasive solid-liquid interface sensing technologies are a key element in the development of improved Bridgman growth techniques for synthesizing single-crystal semiconductor materials. Laser-generated, optically detected ultrasonic techniques have the potential to satisfy this need. Using an anisotropic 3D ray-tracing methodology combined with elastic constant data measured near the melting point, ultrasonic propagation in cylindrical single-crystal bodies containing either a convex, flat, or concave solid-liquid interface has been simulated. Ray paths, wavefronts, and the time of flight (TOF) of rays that travel from a source to an arbitrarily positioned receiver have all been calculated. Experimentally measured TOF data have been collected using laser-generated, optically detected ultrasound on model systems with independently known interface shapes. Both numerically simulated and experimental data have shown that the solidification region can be easily identified from transmission TOF measurements because the velocity in the liquid is much smaller than that in the solid. Since convex and concave solid-liquid interfaces produce distinctively different TOF profiles, the interface shape can also be readily determined from the TOF data. When TOF data collected in the diametral plane are used in conjunction with a nonlinear least-squares algorithm, the interface geometry has been successfully reconstructed and the ultrasonic velocities of both the solid and the liquid obtained with reconstruction errors of less than 5 percent.
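The inversion step, fitting an interface geometry and two velocities to diametral TOF data, can be sketched with a toy forward model and a brute-force least-squares search. The circular liquid-zone geometry, the grid search in place of an iterative solver, and all numerical values are our illustrative assumptions, not the paper's.

```python
import numpy as np

def tof_model(y, r_liq, v_solid, v_liquid, diameter=50.0):
    """Time of flight across a diametral chord at height y: a chord of total
    length 'diameter' crosses a liquid zone modeled as a disk of radius
    r_liq, where the wave travels at the slower liquid velocity."""
    in_liquid = 2.0 * np.sqrt(np.maximum(r_liq**2 - y**2, 0.0))
    return (diameter - in_liquid) / v_solid + in_liquid / v_liquid

def fit_interface(y, tof, r_grid, vs_grid, vl_grid):
    """Brute-force nonlinear least squares over a parameter grid, standing
    in for the iterative least-squares solver used in the study."""
    best, best_err = None, np.inf
    for r in r_grid:
        for vs in vs_grid:
            for vl in vl_grid:
                err = np.sum((tof_model(y, r, vs, vl) - tof) ** 2)
                if err < best_err:
                    best, best_err = (r, vs, vl), err
    return best
```

Because the liquid velocity is much lower, chords crossing the liquid zone arrive measurably later, which is what makes both the zone radius and the two velocities identifiable from the TOF profile.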
A holographic technique for recording a hypervelocity projectile with front surface resolution.
Kurtz, R L; Loh, H Y
1970-05-01
Any motion of the scene during the exposure of a hologram results in a spatial modulation of the recorded fringe contrast. On reconstruction, this produces a spatial amplitude modulation of the reconstructed wavefront, which results in a blurring of the image, not unlike that of a conventional photograph. For motion of the scene sufficient to change the path length of the signal arm by a half wavelength, this blurring is generally prohibitive. This paper describes a proposed holographic technique which offers promise for front surface resolution of targets moving at high speeds, heretofore unobtainable by conventional methods.
NASA Astrophysics Data System (ADS)
Leboulleux, Lucie; N'Diaye, Mamadou; Riggs, A. J. E.; Egron, Sylvain; Mazoyer, Johan; Pueyo, Laurent; Choquet, Elodie; Perrin, Marshall D.; Kasdin, Jeremy; Sauvage, Jean-François; Fusco, Thierry; Soummer, Rémi
2016-07-01
Segmented telescopes are a possible approach to enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures and segment gaps, makes high-contrast imaging very challenging. The High-contrast imager for Complex Aperture Telescopes (HiCAT) was designed to study and develop solutions for such telescope pupils using wavefront control and starlight suppression. The testbed design has the flexibility to enable studies of increasing complexity in telescope aperture geometry, starting with off-axis telescopes, then on-axis telescopes with central obstruction and support structures (e.g. the Wide Field Infrared Survey Telescope [WFIRST]), up to on-axis segmented telescopes, including various concepts for a Large UV, Optical, IR telescope (LUVOIR) such as the High Definition Space Telescope (HDST). We completed optical alignment in the summer of 2014, and a first deformable mirror was successfully integrated into the testbed, with a total wavefront error of 13 nm RMS over an 18 mm diameter circular pupil in open loop. HiCAT will also be fitted with a segmented mirror conjugated with a shaped pupil representing the HDST configuration, to directly study wavefront control in the presence of segment gaps, a central obstruction and spiders. We recently applied a focal plane wavefront control method combined with a classical Lyot coronagraph on HiCAT, and we found limitations on contrast performance due to vibration effects. In this communication, we analyze this instability and study its impact on the performance of wavefront control algorithms. We present our Speckle Nulling code, which controls and corrects wavefront errors in both simulation and testbed modes. This routine is first tested in simulation, without instability, to validate our code. We then add simulated vibrations to study the degradation of contrast performance in the presence of these effects.
Identification of the focal plane wavefront control system using E-M algorithm
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy; Vanderbei, Robert
2017-09-01
In a typical focal plane wavefront control (FPWC) system, such as the adaptive optics system of NASA's WFIRST mission, the controllers and estimators in use are usually model-based. As a result, the modeling accuracy of the system influences the ultimate performance of the control and estimation. Currently, a linear state space model is used, calculated from lab measurements using Fourier optics. Although the physical model is clearly defined, it is usually biased due to incorrect distance measurements, imperfect diagnoses of the optical aberrations, and our limited knowledge of the deformable mirrors (actuator gains and influence functions). In this paper, we present a new approach for estimating the linear state space model of a FPWC system using the expectation-maximization (E-M) algorithm. Simulation and lab results from Princeton's High Contrast Imaging Lab (HCIL) show that the E-M algorithm handles both amplitude and phase errors well and accurately recovers the system. Using the recovered state space model, the controller creates dark holes more quickly. The final accuracy of the model depends on the amount of data used for learning.
N'Gom, Moussa; Lien, Miao-Bin; Estakhri, Nooshin M; Norris, Theodore B; Michielssen, Eric; Nadakuditi, Raj Rao
2017-05-31
Complex Semi-Definite Programming (SDP) is introduced as a novel approach to phase retrieval enabled control of monochromatic light transmission through highly scattering media. In a simple optical setup, a spatial light modulator is used to generate a random sequence of phase-modulated wavefronts, and the resulting intensity speckle patterns in the transmitted light are acquired on a camera. The SDP algorithm allows computation of the complex transmission matrix of the system from this sequence of intensity-only measurements, without need for a reference beam. Once the transmission matrix is determined, optimal wavefronts are computed that focus the incident beam to any position or sequence of positions on the far side of the scattering medium, without the need for any subsequent measurements or wavefront shaping iterations. The number of measurements required and the degree of enhancement of the intensity at focus are determined by the number of pixels controlled by the spatial light modulator.
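The final focusing step can be sketched independently of the SDP recovery. Assuming (for illustration only) that the complex transmission matrix T is already known, a phase-only SLM pattern that conjugates the phases of one row of T makes all contributions arrive in phase at the corresponding output pixel; the matrix below is a random surrogate, not a measured one:

```python
import numpy as np

rng = np.random.default_rng(0)
n_out, n_in = 64, 128                     # camera pixels x SLM pixels
T = (rng.standard_normal((n_out, n_in)) +
     1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)

target = 10                               # output pixel to focus on

# Phase-only SLM pattern: conjugate the phases of the target row of T
# so every input pixel's contribution adds constructively at the target.
field_in = np.exp(-1j * np.angle(T[target]))

out = T @ field_in                        # shaped-wavefront output field
flat = T @ np.ones(n_in)                  # unshaped (flat-phase) input

enhancement = np.abs(out[target])**2 / np.mean(np.abs(flat)**2)
```

With N controlled pixels the expected enhancement over the mean speckle intensity scales roughly with N, which is the scaling the abstract alludes to.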
Model-based wavefront sensorless adaptive optics system for large aberrations and extended objects.
Yang, Huizhen; Soloviev, Oleg; Verhaegen, Michel
2015-09-21
A model-based wavefront sensorless (WFSless) adaptive optics (AO) system with a 61-element deformable mirror (DM) is simulated to correct the imaging of a turbulence-degraded extended object. A fast closed-loop control algorithm, based on the linear relation between the mean square of the aberration gradients and the second moment of the image intensity distribution, is used to generate the control signals for the DM actuators. The restoration capability and the convergence rate of the AO system are investigated for wavefront aberrations of different turbulence strengths. Simulation results show that the model-based WFSless AO system can successfully restore images degraded by different turbulence strengths, achieving correction very close to the capability of the given DM. Compared with the ideal correction of the 61-element DM, the averaged relative error of the RMS value is 6%. The convergence rate of the AO system is independent of the turbulence strength and depends only on the number of DM actuators.
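The image-quality metric at the heart of this control law, the second moment of the image intensity distribution, is easy to sketch: a well-corrected (tighter) image has a smaller second moment about its centroid. This is an illustrative metric computation only, not the paper's full control loop:

```python
import numpy as np

def second_moment(img):
    """Second moment of the image intensity about its centroid --
    smaller values indicate a tighter, better-corrected image."""
    n = img.shape[0]
    y, x = np.mgrid[0:n, 0:n]
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    return (((x - cx)**2 + (y - cy)**2) * img).sum() / total

n = 65
y, x = np.mgrid[0:n, 0:n]
r2 = (x - n // 2)**2 + (y - n // 2)**2
sharp = np.exp(-r2 / (2 * 2.0**2))        # tight (well-corrected) spot
blurred = np.exp(-r2 / (2 * 6.0**2))      # aberrated, spread-out spot
```

The model-based algorithm exploits the (approximately linear) dependence of this metric on the mean-square aberration gradients to solve for all actuator commands from a small number of probe images.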
Polans, James; Cunefare, David; Cole, Eli; Keller, Brenton; Mettu, Priyatham S.; Cousins, Scott W.; Allingham, Michael J.; Izatt, Joseph A.; Farsiu, Sina
2017-01-01
Optical coherence tomography angiography (OCTA) is a promising technique for non-invasive visualization of vessel networks in the human eye. We debut a system capable of acquiring wide field-of-view (>70°) OCT angiograms without mosaicking. Additionally, we report on enhancing the visualization of peripheral microvasculature using wavefront sensorless adaptive optics (WSAO). We employed a fast WSAO algorithm that enabled wavefront correction in <2 seconds by iterating the mirror shape at the speed of OCT B-scans rather than volumes. Also, we contrasted ~7° field-of-view OCTA angiograms acquired in the periphery with and without WSAO correction. On average, WSAO improved the sharpness of microvasculature by 65% in healthy and 38% in diseased eyes. Preliminary observations demonstrated that the location of 7° images could be identified directly from the wide field-of-view angiogram. A pilot study on a normal subject and patients with diabetic retinopathy showed the impact of utilizing WSAO for OCTA when visualizing peripheral vasculature pathologies. PMID:28059209
GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications including: improved temporal resolution reconstruction, 4D respiratory phase specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. For an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce its reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
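The PICCS objective balances similarity to the prior image against sparsity of the image itself, subject to data fidelity. The 1D sketch below uses a smoothed total-variation term and plain gradient descent; the weighting alpha, the penalty smoothing, and the step size are illustrative choices, not the published formulation or the GPU implementation:

```python
import numpy as np

def tv_smooth(z, eps=1e-3):
    d = np.diff(z)
    return np.sum(np.sqrt(d**2 + eps))

def grad_tv(z, eps=1e-3):
    d = np.diff(z)
    w = d / np.sqrt(d**2 + eps)
    g = np.zeros_like(z)
    g[:-1] -= w                                   # d_j = z[j+1] - z[j]
    g[1:] += w
    return g

rng = np.random.default_rng(1)
n = 32
x_true = np.zeros(n); x_true[10:20] = 1.0         # piecewise-constant object
x_prior = np.zeros(n); x_prior[10:20] = 0.9       # imperfect FBP-like prior
A = rng.standard_normal((12, n)) / np.sqrt(n)     # undersampled measurements
y = A @ x_true

alpha, lam, step = 0.5, 5.0, 2e-3

def objective(x):
    # PICCS-style cost: TV of (x - prior) plus TV of x plus data fidelity
    return (alpha * tv_smooth(x - x_prior) + (1 - alpha) * tv_smooth(x)
            + 0.5 * lam * np.sum((A @ x - y)**2))

x = x_prior.copy()
f0 = objective(x)
for _ in range(500):
    g = (alpha * grad_tv(x - x_prior) + (1 - alpha) * grad_tv(x)
         + lam * A.T @ (A @ x - y))
    x -= step * g
```

In the full algorithm each term is evaluated over 3D volumes with forward and back projections, which is where the GPU parallelism pays off.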
Pant, Jeevan K; Krishnan, Sridhar
2014-04-01
A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for enhancing its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference, called the lp(2d) pseudo-norm, of the signal. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented using the proposed signal reconstruction algorithm, and the dictionary update step is implemented using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian-learning-based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
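The lp(2d) penalty can be sketched with a smoothed lp pseudo-norm of the second-order difference, minimized here by plain gradient descent rather than the authors' sequential conjugate-gradient method; the signal, measurement matrix, and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p, eps = 64, 24, 0.8, 1e-2
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)  # smooth test signal

Phi = rng.standard_normal((m, n)) / np.sqrt(n)   # compressive measurements
y = Phi @ x_true

D2 = np.diff(np.eye(n), n=2, axis=0)             # second-order difference operator

def objective(x, lam=0.05):
    d = D2 @ x
    # data fidelity + smoothed lp pseudo-norm of second differences
    return 0.5 * np.sum((Phi @ x - y)**2) + lam * np.sum((d**2 + eps)**(p / 2))

def gradient(x, lam=0.05):
    d = D2 @ x
    g_pen = D2.T @ (p * d * (d**2 + eps)**(p / 2 - 1))
    return Phi.T @ (Phi @ x - y) + lam * g_pen

x = np.zeros(n)
f0 = objective(x)
for _ in range(2000):
    x -= 0.05 * gradient(x)
```

Penalizing second differences (rather than the signal or its first difference) favors piecewise-smooth reconstructions, which suits temporally correlated ECG waveforms.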
Impact of beacon wavelength on phase-compensation performance
NASA Astrophysics Data System (ADS)
Enterline, Allison A.; Spencer, Mark F.; Burrell, Derek J.; Brennan, Terry J.
2017-09-01
This study evaluates the effects of beacon-wavelength mismatch on phase-compensation performance. In general, beacon-wavelength mismatch occurs at the system level because the beacon-illuminator laser (BIL) and high-energy laser (HEL) are often at different wavelengths. Such is the case, for example, when using an aperture sharing element to isolate the beam-control sensor suite from the blinding nature of the HEL. With that said, this study uses the WavePlex Toolbox in MATLAB® to model ideal spherical wave propagation through various atmospheric-turbulence conditions. To quantify phase-compensation performance, we also model a nominal adaptive-optics (AO) system. We achieve correction from a Shack-Hartmann wavefront sensor and continuous-face-sheet deformable mirror using a least-squares phase reconstruction algorithm in the Fried geometry and a leaky integrator control law. To this end, we plot the power in the bucket metric as a function of BIL-HEL wavelength difference. Our initial results show that positive BIL-HEL wavelength differences achieve better phase compensation performance compared to negative BIL-HEL wavelength differences (i.e., red BILs outperform blue BILs). This outcome is consistent with past results.
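The reconstruction and control steps named above can be illustrated on a toy grid. The sketch below builds a Fried-geometry slope operator (each subaperture slope is the average of the two corner-to-corner phase differences), solves for the phase by least squares, and applies one leaky-integrator update; the grid size, gain, and leak factor are illustrative, and this is not the WavePlex model:

```python
import numpy as np

n = 4                                    # subapertures per side; phase on (n+1)^2 corners
N = (n + 1)**2

def idx(i, j):
    return i * (n + 1) + j

rows_x, rows_y = [], []
for i in range(n):
    for j in range(n):
        gx = np.zeros(N); gy = np.zeros(N)
        # Fried geometry: subaperture slope = average of the two
        # corner phase differences across the subaperture.
        gx[idx(i, j + 1)] += 0.5; gx[idx(i, j)] -= 0.5
        gx[idx(i + 1, j + 1)] += 0.5; gx[idx(i + 1, j)] -= 0.5
        gy[idx(i + 1, j)] += 0.5; gy[idx(i, j)] -= 0.5
        gy[idx(i + 1, j + 1)] += 0.5; gy[idx(i, j + 1)] -= 0.5
        rows_x.append(gx); rows_y.append(gy)
G = np.array(rows_x + rows_y)

rng = np.random.default_rng(3)
phi = rng.standard_normal(N)
s = G @ phi                              # noiseless slope measurements

phi_hat = np.linalg.lstsq(G, s, rcond=None)[0]   # least-squares reconstruction

# Leaky integrator control law: command <- leak * command + gain * estimate
leak, gain = 0.99, 0.5
cmd = np.zeros(N)
cmd = leak * cmd + gain * phi_hat
```

Note that the Fried geometry leaves piston (and the waffle mode) unsensed, so the reconstruction is checked against the slopes rather than the raw phase.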
Accuracy of modal wavefront estimation from eye transverse aberration measurements
NASA Astrophysics Data System (ADS)
Chyzh, Igor H.; Sokurenko, Vyacheslav M.
2001-01-01
The influence of random errors in the measurement of eye transverse aberrations on the accuracy of reconstructing the wave aberration, as well as the ametropia and astigmatism parameters, is investigated. The dependence of these errors on the ratio between the number of measurement points and the number of polynomial coefficients is found for different pupil locations of the measurement points. Recommendations are proposed for setting these ratios.
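The role of the points-to-coefficients ratio can be demonstrated with a simplified modal fit. As an illustrative stand-in for Zernike modes, the sketch uses low-order monomials; transverse aberrations are modeled as wavefront slopes at random pupil points, and the Monte-Carlo-averaged coefficient error is compared for a small versus a large ratio:

```python
import numpy as np

rng = np.random.default_rng(4)
powers = [(a, b) for a in range(4) for b in range(4) if 1 <= a + b <= 3]
K = len(powers)                           # number of modal coefficients (9)

def slope_matrix(pts):
    """Rows: x-slopes then y-slopes of each monomial mode at each point."""
    x, y = pts[:, 0], pts[:, 1]
    Ax = np.column_stack([a * x**max(a - 1, 0) * y**b for a, b in powers])
    Ay = np.column_stack([b * x**a * y**max(b - 1, 0) for a, b in powers])
    return np.vstack([Ax, Ay])

def trial(n_pts, sigma=0.05):
    c_true = rng.standard_normal(K)
    r = np.sqrt(rng.uniform(0, 1, n_pts))
    th = rng.uniform(0, 2 * np.pi, n_pts)
    pts = np.column_stack([r * np.cos(th), r * np.sin(th)])  # uniform over pupil
    A = slope_matrix(pts)
    s = A @ c_true + sigma * rng.standard_normal(2 * n_pts)  # noisy slopes
    c_hat = np.linalg.lstsq(A, s, rcond=None)[0]
    return np.linalg.norm(c_hat - c_true)

err_few = np.mean([trial(12) for _ in range(200)])    # low points/coeffs ratio
err_many = np.mean([trial(60) for _ in range(200)])   # high points/coeffs ratio
```

Raising the ratio of measurement points to fitted coefficients suppresses the propagation of random measurement noise into the modal coefficients, which is the effect the study quantifies.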
NASA Astrophysics Data System (ADS)
Andreeva, Olga V.; Dement'ev, Dmitry A.; Chekalin, Sergey V.; Kompanets, V. O.; Matveets, Yu. A.; Serov, Oleg B.; Smolovich, Anatoly M.
2002-05-01
The recording geometry and recording media for the method of achromatic wavefront reconstruction are discussed. Femtosecond recordings were obtained on thick slabs of dichromated gelatin and on samples of silver-containing porous glass. Applications of the method to ultrafast laser spectroscopy and to phase conjugation are suggested.
NASA Astrophysics Data System (ADS)
King, Sharon V.; Yuan, Shuai; Preza, Chrysanthe
2018-03-01
Effectiveness of extended depth of field microscopy (EDFM) implementation with wavefront encoding methods is reduced by depth-induced spherical aberration (SA) due to reliance of this approach on a defined point spread function (PSF). Evaluation of the engineered PSF's robustness to SA, when a specific phase mask design is used, is presented in terms of the final restored image quality. Synthetic intermediate images were generated using selected generalized cubic and cubic phase mask designs. Experimental intermediate images were acquired using the same phase mask designs projected from a liquid crystal spatial light modulator. Intermediate images were restored using the penalized space-invariant expectation maximization and the regularized linear least squares algorithms. In the presence of depth-induced SA, systems characterized by radially symmetric PSFs, coupled with model-based computational methods, achieve microscope imaging performance with fewer deviations in structural fidelity (e.g., artifacts) in simulation and experiment and 50% more accurate positioning of 1-μm beads at 10-μm depth in simulation than those with radially asymmetric PSFs. Despite a drop in the signal-to-noise ratio after processing, EDFM is shown to achieve the conventional resolution limit when a model-based reconstruction algorithm with appropriate regularization is used. These trends are also found in images of fixed fluorescently labeled brine shrimp, not adjacent to the coverslip, and fluorescently labeled mitochondria in live cells.
Fast autonomous holographic adaptive optics
NASA Astrophysics Data System (ADS)
Andersen, G.
2010-07-01
We have created a new adaptive optics system using a holographic modal wavefront sensing method capable of autonomous (computer-free) closed-loop control of a MEMS deformable mirror. A multiplexed hologram is recorded using the maximum and minimum actuator positions on the deformable mirror as the "modes". On reconstruction, an input beam will be diffracted into pairs of focal spots - the ratio of particular pairs determines the absolute wavefront phase at a particular actuator location. The wavefront measurement is made using a fast, sensitive photo-detector array such as a multi-pixel photon counter. This information is then used to directly control each actuator in the MEMS DM without the need for any computer in the loop. We present initial results of a 32-actuator prototype device. We further demonstrate that being an all-optical, parallel processing scheme, the speed is independent of the number of actuators. In fact, the limitations on speed are ultimately determined by the maximum driving speed of the DM actuators themselves. Finally, being modal in nature, the system is largely insensitive to both obscuration and scintillation. This should make it ideal for laser beam transmission or imaging under highly turbulent conditions.
The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2014-06-01
Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices validated for images from one reconstruction algorithm are therefore also valid for the other reconstruction algorithms.
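One of the indices compared above, the global inhomogeneity (GI) index, has a compact definition: the sum of absolute deviations of the tidal impedance change from its median, normalized by the total tidal impedance change, over the identified lung region. The sketch below assumes that definition; the images and mask are synthetic:

```python
import numpy as np

def global_inhomogeneity(tidal_image, lung_mask):
    """GI index: sum of |DI - median(DI)| over the lung region,
    normalised by the total tidal impedance change there."""
    di = tidal_image[lung_mask]
    return np.sum(np.abs(di - np.median(di))) / np.sum(di)

mask = np.ones((8, 8), dtype=bool)               # toy lung region
uniform = np.ones((8, 8))                        # homogeneous ventilation
patchy = np.ones((8, 8)); patchy[:, :4] = 3.0    # left/right imbalance

gi_uniform = global_inhomogeneity(uniform, mask)
gi_patchy = global_inhomogeneity(patchy, mask)
```

A perfectly homogeneous tidal image yields GI = 0, and the index grows with ventilation inhomogeneity, independent of the absolute impedance scale.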
Filtered refocusing: a volumetric reconstruction algorithm for plenoptic-PIV
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2016-09-01
A new algorithm for reconstruction of 3D particle fields from plenoptic image data is presented. The algorithm is based on the technique of computational refocusing with the addition of a post reconstruction filter to remove the out of focus particles. This new algorithm is tested in terms of reconstruction quality on synthetic particle fields as well as a synthetically generated 3D Gaussian ring vortex. Preliminary results indicate that the new algorithm performs as well as the MART algorithm (used in previous work) in terms of the reconstructed particle position accuracy, but produces more elongated particles. The major advantage to the new algorithm is the dramatic reduction in the computational cost required to reconstruct a volume. It is shown that the new algorithm takes 1/9th the time to reconstruct the same volume as MART while using minimal resources. Experimental results are presented in the form of the wake behind a cylinder at a Reynolds number of 185.
Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography
Wang, Kun; Su, Richard; Oraevsky, Alexander A; Anastasio, Mark A
2012-01-01
Iterative image reconstruction algorithms for optoacoustic tomography (OAT), also known as photoacoustic tomography, have the ability to improve image quality over analytic algorithms due to their ability to incorporate accurate models of the imaging physics, instrument response, and measurement noise. However, to date, there have been few reported attempts to employ advanced iterative image reconstruction algorithms for improving image quality in three-dimensional (3D) OAT. In this work, we implement and investigate two iterative image reconstruction methods for use with a 3D OAT small animal imager: namely, a penalized least-squares (PLS) method employing a quadratic smoothness penalty and a PLS method employing a total variation norm penalty. The reconstruction algorithms employ accurate models of the ultrasonic transducer impulse responses. Experimental data sets are employed to compare the performances of the iterative reconstruction algorithms to that of a 3D filtered backprojection (FBP) algorithm. By use of quantitative measures of image quality, we demonstrate that the iterative reconstruction algorithms can mitigate image artifacts and preserve spatial resolution more effectively than FBP algorithms. These features suggest that the use of advanced image reconstruction algorithms can improve the effectiveness of 3D OAT while reducing the amount of data required for biomedical applications. PMID:22864062
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine whether the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
Feng, Yanqiu; Song, Yanli; Wang, Cong; Xin, Xuegang; Feng, Qianjin; Chen, Wufan
2013-10-01
To develop and test a new algorithm for fast direct Fourier transform (DrFT) reconstruction of MR data on non-Cartesian trajectories composed of lines with equally spaced points. The DrFT, which is normally used as a reference in evaluating the accuracy of other reconstruction methods, can reconstruct images directly from non-Cartesian MR data without interpolation. However, DrFT reconstruction involves substantially intensive computation, which makes it impractical for routine clinical applications. In this article, the Chirp transform algorithm was introduced to accelerate the DrFT reconstruction of radial and Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) MRI data located on trajectories composed of lines with equally spaced points. The performance of the proposed Chirp transform algorithm-DrFT was evaluated using simulation and in vivo MRI data. After implementation on a graphics processing unit, the proposed Chirp transform algorithm-DrFT achieved an acceleration of approximately one order of magnitude, and the speed-up factor was further increased to approximately three orders of magnitude compared with the traditional single-thread DrFT reconstruction. Implementing the Chirp transform algorithm-DrFT on the graphics processing unit can thus efficiently calculate the DrFT reconstruction of radial and PROPELLER MRI data. Copyright © 2012 Wiley Periodicals, Inc.
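The Chirp transform algorithm exploits the fact that a DFT evaluated at equally spaced frequencies can be rewritten as a convolution (Bluestein's identity nk = (n² + k² − (k−n)²)/2) and therefore computed with FFTs. A minimal 1D sketch, assuming one trajectory line with start frequency f0 and step df, is:

```python
import numpy as np

def chirp_dft(x, M, f0, df):
    """Evaluate X_k = sum_n x[n] * exp(-2j*pi*(f0 + k*df)*n), k = 0..M-1,
    via Bluestein's chirp-transform trick: the identity
    n*k = (n**2 + k**2 - (k - n)**2) / 2 turns the sum into a convolution
    computable with FFTs in O((N+M) log(N+M))."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(M)
    y = x * np.exp(-2j * np.pi * f0 * n) * np.exp(-1j * np.pi * df * n**2)
    L = 1
    while L < N + M - 1:                 # FFT length for linear convolution
        L *= 2
    v = np.zeros(L, dtype=complex)       # chirp kernel, wrapped circularly
    v[:M] = np.exp(1j * np.pi * df * k**2)
    v[L - N + 1:] = np.exp(1j * np.pi * df * np.arange(N - 1, 0, -1)**2)
    g = np.fft.ifft(np.fft.fft(y, L) * np.fft.fft(v))[:M]
    return g * np.exp(-1j * np.pi * df * k**2)

rng = np.random.default_rng(5)
x = rng.standard_normal(50) + 1j * rng.standard_normal(50)
```

With f0 = 0 and df = 1/N this reduces exactly to the ordinary DFT, which makes the routine easy to validate; the MRI application applies such transforms line by line over the radial or PROPELLER trajectory.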
Advanced Techniques for Ultrasonic Imaging in the Presence of Material and Geometrical Complexity
NASA Astrophysics Data System (ADS)
Brath, Alexander Joseph
The complexity of modern engineering systems is increasing in several ways: advances in materials science are leading to the design of materials which are optimized for material strength, conductivity, temperature resistance etc., leading to complex material microstructure; the combination of additive manufacturing and shape optimization algorithms are leading to components with incredibly intricate geometrical complexity; and engineering systems are being designed to operate at larger scales in ever harsher environments. As a result, at the same time that there is an increasing need for reliable and accurate defect detection and monitoring capabilities, many of the currently available non-destructive evaluation techniques are rendered ineffective by this increasing material and geometrical complexity. This thesis addresses the challenges posed by inspection and monitoring problems in complex engineering systems with a three-part approach. In order to address material complexities, a model of wavefront propagation in anisotropic materials is developed, along with efficient numerical techniques to solve for the wavefront propagation in inhomogeneous, anisotropic material. Since material and geometrical complexities significantly affect the ability of ultrasonic energy to penetrate into the specimen, measurement configurations are tailored to specific applications which utilize arrays of either piezoelectric (PZT) or electromagnetic acoustic transducers (EMAT). These measurement configurations include novel array architectures as well as the exploration of ice as an acoustic coupling medium. Imaging algorithms which were previously developed for isotropic materials with simple geometry are adapted to utilize the more powerful wavefront propagation model and novel measurement configurations.
NASA Astrophysics Data System (ADS)
Kingsbury, Lana K.; Atcheson, Paul D.
2004-10-01
The Northrop-Grumman/Ball/Kodak team is building the JWST observatory that will be launched in 2011. To develop the flight wavefront sensing and control (WFS&C) algorithms and software, Ball is designing and building a 1 meter diameter, functionally accurate version of the JWST optical telescope element (OTE). This testbed telescope (TBT) will incorporate the same optical element control capability as the flight OTE. The secondary mirror will be controlled by a 6 degree of freedom (dof) hexapod, and each of the 18 segmented primary mirror assemblies will have 6 dof hexapod control as well as radius of curvature adjustment capability. In addition to the highly adjustable primary and secondary mirrors, the TBT will include a rigid tertiary mirror, 2 fold mirrors (to direct light into the TBT) and a very stable supporting structure. The configured residual wavefront error of the total telescope system will be better than 175 nm RMS double pass. The primary and secondary mirror hexapod assemblies enable 5 nm piston resolution, 0.0014 arcsec tilt resolution, 100 nm translation resolution, and 0.04497 arcsec clocking resolution. The supporting structure (specifically the secondary mirror support structure) is designed to ensure that the primary mirror segments will not change their despace position relative to the secondary mirror (spaced > 1 meter apart) by more than 500 nm within a one hour period of ambient clean room operation.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.
Novosad, Philip; Reader, Andrew J
2016-06-21
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [(11)C]SCH23390 data, showing promising results.
Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.
Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang
2012-06-20
Zernike functions are orthogonal within the unit circle, but they are not orthogonal over discrete points such as CCD arrays or finite element grids. This loss of orthogonality results in wavefront reconstruction errors. By using roots of Legendre polynomials, a set of points within the unit circle can be constructed so that Zernike functions over the set are discretely orthogonal. In addition, the location tolerances of the points are studied by perturbation analysis, which shows that the positioning precision requirements are not very strict. Computer simulations show that this approach provides a very accurate wavefront reconstruction with the proposed sampling set.
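The principle behind this construction can be illustrated in one dimension, where it reduces to classical Gauss-Legendre quadrature: sampled at the roots of a Legendre polynomial and weighted accordingly, lower-order Legendre polynomials become discretely orthogonal. A minimal numpy sketch (not the authors' code; the 2-D Zernike construction maps such roots into the unit circle):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Roots of the degree-8 Legendre polynomial and the Gauss quadrature weights
nodes, weights = leggauss(8)

# Evaluate two lower-order Legendre polynomials at those roots
P2 = Legendre.basis(2)(nodes)
P3 = Legendre.basis(3)(nodes)

cross = np.sum(weights * P2 * P3)  # discrete inner product: ~0 (orthogonal)
norm = np.sum(weights * P2 * P2)   # matches the continuous norm 2/(2*2+1) = 0.4
```

Because Gauss-Legendre quadrature is exact for polynomials up to degree 15 here, the discrete inner products reproduce the continuous ones exactly, which is precisely the property the paper's 2-D point set restores for Zernike functions.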
Horizon: A Proposal for Large Aperture, Active Optics in Geosynchronous Orbit
NASA Technical Reports Server (NTRS)
Chesters, Dennis; Jenstrom, Del
2000-01-01
In 1999, NASA's New Millennium Program called for proposals to validate new technology in high-earth orbit for the Earth Observing-3 (NMP EO3) mission to fly in 2003. In response, we proposed to test a large aperture, active optics telescope in geosynchronous orbit. This would flight-qualify new technologies for both Earth and Space science: 1) a future instrument with LANDSAT image resolution and radiometric quality watching continuously from geosynchronous station, and 2) the Next Generation Space Telescope (NGST) for deep space imaging. Six enabling technologies were to be flight-qualified: 1) a 3-meter, lightweight segmented primary mirror, 2) mirror actuators and mechanisms, 3) a deformable mirror, 4) coarse phasing techniques, 5) phase retrieval for wavefront control during stellar viewing, and 6) phase diversity for wavefront control during Earth viewing. Three enhancing technologies were to be flight-validated: 1) mirror deployment and latching mechanisms, 2) an advanced microcontroller, and 3) GPS at GEO. In particular, two wavefront sensing algorithms, phase retrieval by JPL and phase diversity by ERIM International, were to sense optical system alignment and focus errors, and to correct them using high-precision mirror mechanisms. Active corrections based on Earth scenes are challenging because phase diversity images must be collected from extended, dynamically changing scenes. In addition, an Earth-facing telescope in GEO orbit is subject to a powerful diurnal thermal and radiometric cycle not experienced by deep-space astronomy. The Horizon proposal was a bare-bones design for a lightweight, large-aperture, active optical system that is a practical blend of science requirements, emerging technologies, budget constraints, launch vehicle considerations, orbital mechanics, optical hardware, phase-determination algorithms, communication strategy, computational burdens, and first-rate cooperation among earth and space scientists, engineers and managers.
This manuscript presents excerpts from the Horizon proposal's sections that describe the Earth science requirements, the structural-thermal-optical design, the wavefront sensing and control, and the on-orbit validation.
2008-09-01
algorithms that have been proposed to accomplish it fall into three broad categories. Eikonal solvers (e.g., Vidale, 1988, 1990; Podvin and Lecomte, 1991) … Among finite-difference eikonal solvers, the FMM algorithm works by following a wavefront as it moves across a volume of grid points, updating the travel times in the grid according to the eikonal differential equation, using a second-order finite-difference scheme. We chose to use FMM for our comparison because …
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for determining missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the improved algorithm does not need the iterative reconstruction procedure, run to convergence for every fixed EOF, to determine the optimal EOF mode; the convergence criterion is reached only once. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on that single optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some of the reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs used for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm significantly enhances the accuracy of reconstruction and shortens the computational time.
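The core EOF gap-filling iteration shared by these DINEOF variants can be sketched with a numpy toy on a synthetic low-rank field (not the authors' implementation; the mode count k = 2 and the 20% missing fraction are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic rank-2 "SST" field: space x time
data = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))
mask = rng.random(data.shape) < 0.2      # 20% of the values are missing
obs = data.copy()
obs[mask] = np.nan

filled = np.where(mask, 0.0, obs)        # initial guess for the gaps
for _ in range(500):
    # Truncated SVD reconstruction with the k = 2 leading EOF modes
    u, s, vt = np.linalg.svd(filled, full_matrices=False)
    recon = (u[:, :2] * s[:2]) @ vt[:2]
    # Keep observed values fixed; only the gaps are updated each pass
    filled = np.where(mask, recon, obs)
```

The VE-DINEOF idea described above amounts to letting the number of retained modes vary between these passes instead of fixing it in advance.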
Markov prior-based block-matching algorithm for superdimension reconstruction of porous media
NASA Astrophysics Data System (ADS)
Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong
2018-04-01
A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as the matching criterion, although it may not truly reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regularization terms are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already reconstructed blocks in three perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and from the traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
NASA Astrophysics Data System (ADS)
Ren, Zhong; Liu, Guodong; Huang, Zhen
2012-11-01
Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms have been used, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simply using classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. In this paper, improved wavelet denoising combined with a parallel-beam FBP algorithm is therefore used to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. The experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms using two evaluation criteria, the mean-square error (MSE) and the peak signal-to-noise ratio (PSNR), the improved FBP based on db2 and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than those of the others. Therefore, this improved FBP algorithm has potential value in medical imaging.
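The wavelet-denoising step can be sketched with a single-level Haar transform standing in for the paper's db2 basis (an assumption made to keep the example numpy-only): the detail coefficients, which carry most of the projection noise, are soft-thresholded before the projections are handed to FBP.

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar wavelet soft-threshold denoising of a projection row."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)        # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(signal)
    out[0::2] = (a + d) / np.sqrt(2)                      # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))   # smooth stand-in for one projection
noisy = clean + rng.normal(0.0, 0.3, 256)
denoised = haar_denoise(noisy, thresh=0.9)       # threshold ~3x the noise level
```

A smooth projection has tiny detail coefficients, so thresholding removes roughly the detail-band half of the noise power while leaving the signal nearly untouched; the paper's multi-scale db2 version extends the same idea over several decomposition levels.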
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
NASA Technical Reports Server (NTRS)
Downie, John D.
1990-01-01
A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting atmospherically induced wavefront aberrations. The required control computations during each cycle take a finite amount of time, and longer time delays result in larger residual wavefront error variance because the atmosphere continues to change during that time. An optical processor, with its inherent speed, may therefore be well suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.
Algorithm for Wavefront Sensing Using an Extended Scene
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Green, Joseph; Ohara, Catherine
2008-01-01
A recently conceived algorithm for processing image data acquired by a Shack-Hartmann (SH) wavefront sensor is not subject to the restriction, previously applicable in SH wavefront sensing, that the image be formed from a distant star or other equivalent of a point light source. That is to say, the image can be of an extended scene. (One still has the option of using a point source.) The algorithm can be implemented in commercially available software on ordinary computers. The steps of the algorithm are the following: 1. Suppose that the image comprises M sub-images. Determine the x,y Cartesian coordinates of the centers of these sub-images and store them in a 2xM matrix. 2. Within each sub-image, choose an NxN-pixel cell centered at the coordinates determined in step 1. For the ith sub-image, let this cell be denoted si(x,y). Let the cell of another sub-image (preferably near the center of the whole extended-scene image) be designated the reference cell, denoted r(x,y). 3. Calculate the fast Fourier transforms of the sub-sub-images in the central N'xN' portions (where N' < N and both are preferably powers of 2) of r(x,y) and si(x,y). 4. Multiply the two transforms to obtain a cross-correlation function Ci(u,v) in the Fourier domain. Then let the phase of Ci(u,v) constitute a phase function, phi(u,v). 5. Fit u and v slopes to phi(u,v) over a small u,v subdomain. 6. Compute the fast Fourier transform Si(u,v) of the full NxN cell si(x,y). Multiply this transform by the u and v phase slopes obtained in step 5. Then compute the inverse fast Fourier transform of the product. 7. Repeat steps 4 through 6 in an iteration loop, accumulating the u and v slopes, until a maximum iteration number is reached or the change in image shift becomes smaller than a predetermined tolerance. 8. Repeat steps 4 through 7 for the cells of all other sub-images.
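Steps 3 through 5 amount to estimating a sub-image shift from the slope of the cross-spectrum phase. A minimal numpy sketch of that kernel (not the flight code; the cell size is hypothetical, and a circular integer shift stands in for the scene displacement):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
ref = rng.standard_normal((N, N))            # reference cell r(x,y)
shift = (2, 1)                               # (row, col) displacement to recover
sub = np.roll(ref, shift, axis=(0, 1))       # shifted sub-image cell si(x,y)

R, S = np.fft.fft2(ref), np.fft.fft2(sub)
C = S * np.conj(R)                           # cross-spectrum: phase is linear in (u,v)
phi = np.angle(C)

u = np.fft.fftfreq(N) * N                    # integer frequency indices
U, V = np.meshgrid(u, u, indexing="ij")
mask = (np.abs(U) <= 3) & (np.abs(V) <= 3)   # small subdomain avoids phase wrapping

# Least-squares fit of the u and v phase slopes (step 5)
A = np.column_stack([U[mask], V[mask]])
a, b = np.linalg.lstsq(A, phi[mask], rcond=None)[0]
dy, dx = -a * N / (2 * np.pi), -b * N / (2 * np.pi)  # slopes -> image shift
```

For a circular shift the phase is exactly linear over the unwrapped subdomain, so the fit recovers (dy, dx) = (2, 1) directly; in the full algorithm this estimate feeds the iterative shift refinement of steps 6 and 7.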
Advances in Focal Plane Wavefront Estimation for Directly Imaging Exoplanets
NASA Astrophysics Data System (ADS)
Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Groff, Tyler Dean
2015-01-01
To image cold exoplanets directly in visible light, an instrument on a telescope needs to suppress starlight by about 9 orders of magnitude at small separations from the star. A coronagraph changes the point spread function to create regions of high contrast where exoplanets or disks can be seen. Aberrations on the optics degrade the contrast by several orders of magnitude, so all high-contrast imaging systems incorporate one or more deformable mirrors (DMs) to recover regions of high contrast. With a coronagraphic instrument planned for the WFIRST-AFTA space telescope, there is a pressing need for faster, more robust estimation and control schemes for the DMs. Non-common path aberrations limit conventional phase conjugation schemes to medium star-to-planet contrast ratios of about 1e-6. High-contrast imaging requires estimation and control of both phase and amplitude in the same beam path as the science camera. Field estimation is a challenge since only intensity is measured; the most common approach, including that planned for WFIRST-AFTA, is to use DMs to create diversity, via pairs of small probe shapes, thereby allowing disambiguation of the electric field. Most implementations of DM Diversity require at least five images per electric field estimate and require narrowband measurements. This paper describes our new estimation algorithms that improve the speed (by using fewer images) and bandwidth of focal plane wavefront estimation. For narrowband estimation, we are testing nonlinear, recursive algorithms such as an iterative extended Kalman filter (IEKF) to use three images each iteration and build better, more robust estimates. We are also exploring the use of broadband estimation without the need for narrowband sub-filters and measurements. Here we present simulations of these algorithms with realistic noise and small signals to show how they might perform for WFIRST-AFTA. 
Once validated in simulations, we will test these algorithms experimentally in Princeton's HCIL and in the Jet Propulsion Laboratory's (JPL's) High Contrast Imaging Testbed (HCIT). Developing these faster, more robust wavefront estimators is crucial for increasing the science yield of the WFIRST-AFTA coronagraphic instrument.
Li, Yanqiu; Liu, Shi; Schlaberg, H. Inaki
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed that considers both the acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better than those of static algorithms such as the least squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is thus provided for temperature field reconstruction by acoustic tomography. PMID:28895930
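As a baseline for the static methods the paper compares against, standard Tikhonov regularization solves a penalized least-squares problem. A toy numpy sketch (the path-length matrix, noise level, and the weight lambda = 0.1 are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 30))        # stand-in for the acoustic path-length matrix
x_true = rng.standard_normal(30)         # discretized temperature field (arbitrary units)
b = A @ x_true + 0.01 * rng.standard_normal(60)   # noisy travel-time measurements

lam = 0.1                                # regularization weight
# Tikhonov solution: x = argmin ||Ax - b||^2 + lam * ||x||^2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(30), A.T @ b)
```

The dynamic algorithm described above augments this purely measurement-driven objective with a temporal evolution term, which is what improves noise immunity over such static solvers.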
CT cardiac imaging: evolution from 2D to 3D backprojection
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Pan, Tinsu; Sasaki, Kosuke
2004-04-01
The state-of-the-art multiple detector-row CT, which usually employs fan beam reconstruction algorithms by approximating a cone beam geometry into a fan beam geometry, has been well recognized as an important modality for cardiac imaging. At present, multiple detector-row CT is evolving into volumetric CT, in which cone beam reconstruction algorithms are needed to combat the cone beam artifacts caused by large cone angles. An ECG-gated cardiac cone beam reconstruction algorithm based upon the so-called semi-CB geometry is implemented in this study. To obtain the highest temporal resolution, only the projection data corresponding to 180° plus the cone angle are row-wise rebinned into the semi-CB geometry for three-dimensional reconstruction. Data extrapolation is utilized to extend the z-coverage of the ECG-gated cardiac cone beam reconstruction algorithm towards the edge of the CT detector. A helical body phantom is used to evaluate the ECG-gated cone beam reconstruction algorithm's z-coverage and its capability of suppressing cone beam artifacts. Furthermore, two sets of cardiac data scanned by a multiple detector-row CT scanner at 16 x 1.25 mm and normalized pitches of 0.275 and 0.3, respectively, are used to evaluate the ECG-gated CB reconstruction algorithm's imaging performance. As a reference, the images reconstructed by a fan beam reconstruction algorithm for multiple detector-row CT are also presented. The qualitative evaluation shows that the ECG-gated cone beam reconstruction algorithm outperforms its fan beam counterpart in terms of cone beam artifact suppression and z-coverage while temporal resolution is well maintained. Consequently, the scan speed can be increased to reduce the contrast agent amount and injection time, and to improve patient comfort and x-ray dose efficiency.
Based upon this comparison, it is believed that, with the transition of multiple detector-row CT into volumetric CT, ECG-gated cone beam reconstruction algorithms will provide better image quality for CT cardiac applications.
On-Orbit Multi-Field Wavefront Control with a Kalman Filter
NASA Technical Reports Server (NTRS)
Lou, John; Sigrist, Norbert; Basinger, Scott; Redding, David
2008-01-01
A document describes a multi-field wavefront control (WFC) procedure for the James Webb Space Telescope (JWST) on-orbit optical telescope element (OTE) fine-phasing using wavefront measurements at the NIRCam pupil. The control is applied to the JWST primary mirror (PM) segments and secondary mirror (SM) simultaneously with a carefully selected ordering. Through computer simulations, the multi-field WFC procedure shows that it can reduce the initial system wavefront error (WFE), as caused by random initial system misalignments within the JWST fine-phasing error budget, from a few dozen micrometers to below 50 nm across the entire NIRCam field of view, and the WFC procedure is also computationally stable, as the Monte Carlo simulations indicate. With the incorporation of a Kalman filter (KF) as an optical state estimator into the WFC process, the robustness of the JWST OTE alignment process can be further improved. In the presence of some large optical misalignments, the Kalman state estimator can provide a reasonable estimate of the optical state, especially for those degrees of freedom that have a significant impact on the system WFE. The state estimate allows for a few corrections to the optical state to push the system towards its nominal state, and the result is that a large part of the WFE can be eliminated in this step. When the multi-field WFC procedure is applied after the Kalman state estimate and correction, the stability of fine-phasing control is much more certain. The Kalman filter has been successfully applied to diverse applications as a robust and optimal state estimator. In the context of space-based optical system alignment based on wavefront measurements, a KF state estimator can combine all available wavefront measurements, past and present, as well as measurement and actuation error statistics, to generate a maximum-likelihood optimal state estimate.
The strength and flexibility of the KF algorithm make it attractive for use in real-time optical system alignment when WFC alone cannot effectively align the system.
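Stripped of the optical model, the estimator's core is the standard Kalman recursion. A scalar numpy sketch on a hypothetical constant misalignment (the noise level, units, and state model are illustrative, not JWST values):

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = 0.8                 # hypothetical static misalignment (arbitrary units)
R = 0.05 ** 2                # measurement noise variance
x_hat, P = 0.0, 1.0          # prior state estimate and its covariance

for _ in range(50):
    z = x_true + rng.normal(0.0, 0.05)   # noisy wavefront-derived measurement
    K = P / (P + R)                      # Kalman gain
    x_hat = x_hat + K * (z - x_hat)      # state update: blend prior with measurement
    P = (1 - K) * P                      # covariance shrinks as data accumulates
```

Each new wavefront measurement tightens the posterior, which is the sense in which the filter "combines all available wavefront measurements, past and present" into an optimal estimate.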
Reducing the latency of the Fractal Iterative Method to half an iteration
NASA Astrophysics Data System (ADS)
Béchet, Clémentine; Tallon, Michel
2013-12-01
The fractal iterative method for atmospheric tomography (FRiM-3D) has been introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach that avoids iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way of using the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and reduced to at most half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally demonstrate the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.
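The value of a warm start for an iterative reconstructor can be seen with plain conjugate gradient on a generic SPD system (a stand-in, not FRiM-3D itself; the system size, iteration counts, and the 1% frame-to-frame change are illustrative assumptions):

```python
import numpy as np

def cg(A, b, x0, iters):
    """Plain conjugate gradient with a fixed iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

rng = np.random.default_rng(4)
M = rng.standard_normal((40, 40))
A = M @ M.T + 40 * np.eye(40)              # SPD stand-in for the reconstruction system
b1 = rng.standard_normal(40)               # measurements at one AO frame
b2 = b1 + 0.01 * rng.standard_normal(40)   # next frame: turbulence barely changed

x1 = cg(A, b1, np.zeros(40), 30)           # converged solution for frame 1
cold = cg(A, b2, np.zeros(40), 2)          # 2 iterations from scratch
warm = cg(A, b2, x1, 2)                    # 2 iterations warm-started from x1
```

Because consecutive AO frames are highly correlated, the warm-started solve reaches a far smaller residual in the same iteration budget, which is the observation the paper pushes to its limit by eliminating the per-frame iterations altogether.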
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Kern, Brian; Kuhnert, Andreas; Shaklan, Stuart
2013-01-01
We compare the broadband contrast performance of several Phase Induced Amplitude Apodization (PIAA) coronagraph configurations through modeling and simulations. The basic optical design of the PIAA coronagraph is the same as NASA's High Contrast Imaging Testbed (HCIT) setup at the Jet Propulsion Laboratory (JPL). Using a deformable mirror and a broadband wavefront sensing and control algorithm, we create a "dark hole" in the broadband point-spread function (PSF) with an inner working angle (IWA) of 2 (f lambda/D)_sky. We evaluate two systems in parallel. One is a perfect system having the design PIAA output amplitude and no wavefront error at its exit pupil. The other is a realistic system having the design PIAA output amplitude and the measured residual wavefront error. We also investigate the effect of Lyot stops of various sizes when a post-apodizer is and is not present. Our simulations show that the best 7.5%-broadband contrast value achievable with the current PIAA coronagraph is approximately 1.5x10^-8.
A nonlinear OPC technique for laser beam control in turbulent atmosphere
NASA Astrophysics Data System (ADS)
Markov, V.; Khizhnyak, A.; Sprangle, P.; Ting, A.; DeSandre, L.; Hafizi, B.
2013-05-01
A viable beam control technique is critical for effective laser beam transmission through turbulent atmosphere. Most established approaches require information on the impact of perturbations on the wavefronts of propagated waves. Such information can be acquired by measuring the characteristics of the target-scattered light arriving from a small, preferably diffraction-limited, beacon. This paper discusses an innovative beam control approach that can support the formation of a tight laser beacon in deep turbulence conditions. The technique employs Brillouin enhanced four-wave mixing (BEFWM) to generate a localized beacon spot on a remote image-resolved target. Formation of the tight beacon does not require a wavefront sensor, an AO system, or a predictive feedback algorithm. Unlike conventional adaptive optics methods, which allow wavefront conjugation, the proposed total field conjugation technique is critical for beam control in the presence of strong turbulence and can be achieved by using this nonlinear BEFWM technique. The phase information retrieved from the established beacon beam can then be used in conjunction with an AO system to propagate laser beams in deep turbulence.
NASA Astrophysics Data System (ADS)
Yang, Jiamiao; Shen, Yuecheng; Liu, Yan; Hemphill, Ashton S.; Wang, Lihong V.
2017-11-01
Optical scattering prevents light from being focused through thick biological tissue at depths greater than ~1 mm. To break this optical diffusion limit, digital optical phase conjugation (DOPC) based wavefront shaping techniques are being actively developed. Previous DOPC systems employed spatial light modulators that modulated either the phase or the amplitude of the conjugate light field. Here, we achieve optical focusing through scattering media by using polarization modulation based generalized DOPC. First, we describe an algorithm to extract the polarization map from the measured scattered field. Then, we validate the algorithm through numerical simulations and find that the focusing contrast achieved by polarization modulation is similar to that achieved by phase modulation. Finally, we build a system using an inexpensive twisted nematic liquid crystal based spatial light modulator (SLM) and experimentally demonstrate light focusing through 3-mm-thick chicken breast tissue. Since polarization modulation based SLMs are widely used in displays and offer ever-increasing pixel counts with the prevalence of 4K displays, they are inexpensive and valuable devices for wavefront shaping.
Surgical and healing changes to ocular aberrations following refractive surgery
NASA Astrophysics Data System (ADS)
Straub, Jochen; Schwiegerling, Jim
2003-07-01
Purpose: To measure ocular aberrations before and at several time periods after LASIK surgery to determine the change to the aberration structure of the eye. Methods: A Shack-Hartmann wavefront sensor was used to measure 88 LASIK patients pre-operatively and at 1 week and 12 months following surgery. Reconstructed wavefront errors were compared to identify induced differences. Manifest refraction was measured at 1 week, 1 month, 3 months, 6 months and 12 months following surgery. Sphere, cylinder, spherical aberration, and pupil diameter were analyzed. Results: A dramatic elevation in spherical aberration is seen following surgery. This elevation appears almost immediately and remains for the duration of the study. A temporary increase in pupil size is seen following surgery. Conclusions: LASIK surgery dramatically reduces defocus and astigmatism in the eye, but simultaneously increases spherical aberration levels. This increase occurs at the time of surgery and is not an effect of the healing response.
Non-overlap subaperture interferometric testing for large optics
NASA Astrophysics Data System (ADS)
Wu, Xin; Yu, Yingjie; Zeng, Wenhan; Qi, Te; Chen, Mingyi; Jiang, Xiangqian
2017-08-01
It has been shown that the number of subapertures and the amount of overlap have a significant influence on stitching accuracy. In this paper, a non-overlap subaperture interferometric testing method (NOSAI) is proposed to inspect large optical components. This method greatly reduces the number of subapertures and the influence of environmental interference while maintaining reconstruction accuracy. A general subaperture distribution pattern of NOSAI is also proposed for large rectangular surfaces. Square Zernike polynomials are employed to fit such wavefronts. The effect of the minimum number of fitting terms on the accuracy of NOSAI, and the sensitivities of NOSAI to subaperture alignment errors, systematic power errors, and random noise, are discussed. Experimental results validate the feasibility and accuracy of the proposed NOSAI in comparison with the wavefront obtained by a large-aperture interferometer and the surface stitched by the multi-aperture overlap-scanning technique (MAOST).
NASA Astrophysics Data System (ADS)
Yamakoshi, Yoshiki; Yamamoto, Atsushi; Kasahara, Toshihiro; Iijima, Tomohiro; Yuminaka, Yasushi
2015-07-01
We have proposed a quantitative shear wave imaging technique based on continuous shear wave excitation. The shear wave wavefront is observed directly by color flow imaging using a general-purpose ultrasonic imaging system. In this study, the proposed method is applied in vivo, and two shear wave maps are obtained for the skeletal muscle of the shoulder: the shear wave phase map, which shows the shear wave propagation inside the medium, and the shear wave velocity map. To excite the shear wave inside the skeletal muscle of the shoulder, a hybrid ultrasonic wave transducer, which combines a small vibrator with an ultrasonic wave probe, is adopted. The shear wave velocity of the supraspinatus muscle measured by the proposed method is 4.11 ± 0.06 m/s (N = 4). This value is consistent with those obtained by the acoustic radiation force impulse method.
Image reconstruction through thin scattering media by simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua
2018-07-01
We propose a method for reconstructing the image of an object behind thin scattering media by phase modulation. An optimized phase mask is obtained by modulating the scattered light with a simulated annealing algorithm, using the correlation coefficient as the fitness function to evaluate the quality of the reconstructed image. Images optimized by the simulated annealing algorithm and by a genetic algorithm are compared in detail. The experimental results show that the proposed method achieves better definition and higher speed than the genetic algorithm.
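A minimal sketch of the simulated annealing idea, under stated assumptions: the scattering medium is reduced to a fixed random transmission vector, and focal intensity stands in for the paper's correlation-coefficient fitness. All sizes and schedule constants are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
# Toy scattering medium: one fixed random complex transmission per SLM segment.
t = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def fitness(phase):
    """Focal intensity for a given SLM phase pattern (stand-in fitness)."""
    return np.abs(np.sum(t * np.exp(1j * phase))) ** 2

phase = np.zeros(N)
current = fitness(phase)
T = 1.0                                    # annealing temperature
for _ in range(20000):
    k = rng.integers(N)                    # perturb one SLM segment
    old = phase[k]
    phase[k] = rng.uniform(0, 2 * np.pi)
    trial = fitness(phase)
    # Metropolis rule: always keep improvements, sometimes keep worse moves.
    if trial >= current or rng.random() < np.exp((trial - current) / (T * current)):
        current = trial
    else:
        phase[k] = old                     # reject the move
    T *= 0.9995                            # geometric cooling schedule

ideal = np.sum(np.abs(t)) ** 2             # all contributions perfectly phased
```

With this schedule the focal intensity climbs to a large fraction of the ideal fully-phased value.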
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
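The ML-EM baseline that the OE algorithm is compared against can be sketched in a few lines. This is a toy sinogram-free setup with a made-up random system matrix, not list-mode TOF data; the multiplicative EM update is the standard one for Poisson emission data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_det = 32, 64
A = rng.uniform(0, 1, (n_det, n_pix))     # toy system matrix (made up)
lam_true = rng.uniform(0.5, 2.0, n_pix)   # true emission rates
y = rng.poisson(A @ lam_true)             # Poisson-distributed counts

def negloglik(lam):
    """Poisson negative log-likelihood (up to a constant)."""
    m = A @ lam
    return np.sum(m - y * np.log(m + 1e-12))

sens = A.sum(axis=0)                      # sensitivity of each pixel
lam = np.ones(n_pix)
start = negloglik(lam)
for _ in range(100):
    m = A @ lam                                   # forward projection
    lam = lam / sens * (A.T @ (y / (m + 1e-12)))  # multiplicative EM update
end = negloglik(lam)
```

EM monotonically increases the likelihood, so the negative log-likelihood decreases across iterations.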
Hemphill, Ashton S; Shen, Yuecheng; Liu, Yan; Wang, Lihong V
2017-11-27
In biological applications, optical focusing is limited by the diffusion of light, which prevents focusing at depths greater than ∼1 mm in soft tissue. Wavefront shaping extends the depth by compensating for phase distortions induced by scattering and thus allows for focusing light through biological tissue beyond the optical diffusion limit by using constructive interference. However, due to physiological motion, light scattering in tissue is deterministic only within a brief speckle correlation time. In in vivo tissue, this speckle correlation time is on the order of milliseconds, and so the wavefront must be optimized within this brief period. The speed of digital wavefront shaping has typically been limited by the relatively long time required to measure and display the optimal phase pattern. This limitation stems from the low speeds of cameras, data transfer and processing, and spatial light modulators. While binary-phase modulation requiring only two images for the phase measurement has recently been reported, most techniques require at least three frames for the full-phase measurement. Here, we present a full-phase digital optical phase conjugation method based on off-axis holography for single-shot optical focusing through scattering media. By using off-axis holography in conjunction with graphics processing unit based processing, we take advantage of the single-shot full-phase measurement while using parallel computation to quickly reconstruct the phase map. With this system, we can focus light through scattering media with a system latency of approximately 9 ms, on the order of the in vivo speckle correlation time.
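The single-shot, full-phase measurement via off-axis holography can be sketched as below (an illustrative toy, not the authors' pipeline): the +1 diffraction order of the hologram spectrum is shifted to DC and low-pass filtered, and the angle of the filtered field is the object phase. The carrier frequency and object phase here are made up.

```python
import numpy as np

N = 128
x, y = np.meshgrid(np.arange(N), np.arange(N))

# Unknown smooth object phase and a tilted (off-axis) reference beam.
phi = 1.5 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 15 ** 2))
fc = 32                                    # carrier frequency, cycles/frame
obj = np.exp(1j * phi)
ref = np.exp(2j * np.pi * fc * x / N)
hologram = np.abs(obj + ref) ** 2          # single recorded intensity frame

# Shift the O * conj(R) order (centered at -fc) to DC, then low-pass filter.
H = np.fft.fft2(hologram)
H = np.roll(H, fc, axis=1)
fx = np.fft.fftfreq(N) * N
FX, FY = np.meshgrid(fx, fx)
mask = (FX ** 2 + FY ** 2 < (fc / 2) ** 2).astype(float)
field = np.fft.ifft2(H * mask)             # filtered complex object field
recovered = np.angle(field)                # full phase from one frame
```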
Adaptive optics program update at TMT
NASA Astrophysics Data System (ADS)
Boyer, C.; Ellerbroek, B.
2016-07-01
The TMT first light AO facility consists of the Narrow Field Infra-Red AO System (NFIRAOS), the associated Laser Guide Star Facility (LGSF) and the AO Executive Software (AOESW). Design, fabrication and prototyping activities of the TMT first light AO systems and their components have significantly ramped up in Canada, China, France, and in the US. NFIRAOS is an order 60 x 60 laser guide star (LGS) multi-conjugate AO (MCAO) system, which provides uniform, diffraction-limited performance in the J, H, and K bands over 34 x 34 arc sec fields with 50 per cent sky coverage at the galactic pole, as required to support the TMT science cases. NFIRAOS includes two deformable mirrors, six laser guide star wavefront sensors, one high order Pyramid WFS for natural guide star AO, and up to three low-order, IR, natural guide star on-instrument wavefront sensors (OIWFS) and four on-detector guide windows (ODGW) within each client instrument. The first light LGSF system includes six sodium lasers to generate the NFIRAOS laser guide stars. In this paper, we will provide an update on the progress in designing, prototyping, fabricating and modeling the TMT first light AO systems and their AO components over the last two years. TMT is continuing with detailed AO modeling to support the design and development of the first light AO systems and components. Major modeling topics studied during the last two years include further studies in the area of pyramid wavefront sensing, high precision astrometry, PSF reconstruction for LGS MCAO, LGSF wavefront error budget and sophisticated low order mode temporal filtering.
NASA Astrophysics Data System (ADS)
Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Roh, Seungkuk
2016-05-01
In this paper, we propose a new image reconstruction algorithm that accounts for the geometric information of the acoustic sources and sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometry of the region of interest (ROI) and the finite size of the acoustic sensor element. In the new algorithm, the mathematical analysis is simple and the software implementation is straightforward, because no FFT is required. We verify the effectiveness of the proposed reconstruction algorithm through simulations using the MATLAB k-Wave toolbox.
The algorithm of central axis in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang
2017-09-01
Reverse engineering is an important technique for product imitation and new product development, and its core technology, surface reconstruction, is an active research topic. Among the various surface reconstruction algorithms, reconstruction from the central (medial) axis is an important approach. This paper surveys medial-axis-based reconstruction algorithms, points out the problems of the various methods and where they need improvement, and discusses future directions for axis-based surface reconstruction.
A reconstruction algorithm for helical CT imaging on PI-planes.
Liang, Hongzhu; Zhang, Cishen; Yan, Ming
2006-01-01
In this paper, a Feldkamp-type approximate reconstruction algorithm is presented for helical cone-beam computed tomography. To effectively suppress artifacts due to large-cone-angle scanning, it is proposed to reconstruct the object point-wise on uniquely customized tilted PI-planes that lie close to the data-collecting helices of the corresponding points. Such a reconstruction scheme can considerably suppress cone-angle scanning artifacts. Computer simulations show that the proposed algorithm provides improved imaging performance compared with existing approximate cone-beam reconstruction algorithms.
Photoacoustic image reconstruction via deep learning
NASA Astrophysics Data System (ADS)
Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes
2018-02-01
Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow prior knowledge such as smoothness, total variation (TV), or sparsity constraints to be included. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Iterative algorithms have additional drawbacks: for example, the reconstruction quality strongly depends on a priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper, we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
NASA Astrophysics Data System (ADS)
Siemons, M.; Hulleman, C. N.; Thorsen, R. Ø.; Smith, C. S.; Stallinga, S.
2018-04-01
Point Spread Function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters such as the emission color or dipole orientation as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can be used in the corresponding localization algorithms in order to model the intricate spot shape and deformations correctly. The complexity of the optical architecture and fit model makes PSF engineering approaches particularly sensitive to optical aberrations. Here, we present a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs. We achieve high-precision wavefront control, to a level below 20 m$\lambda$ wavefront aberration over a 30 minute time window after the calibration procedure, using a separate light path for calibrating the pixel-to-pixel variations of the SLM, and alignment of the SLM with respect to the optical axis and Fourier plane within 3 $\mu$m ($x/y$) and 100 $\mu$m ($z$) error. Aberrations are retrieved from a fit of the vectorial PSF model to a bead $z$-stack and compensated with a residual wavefront error comparable to the error of the SLM calibration step. This well-calibrated and corrected setup makes it possible to create complex '3D+$\lambda$' PSFs that fit very well to the vectorial PSF model. Proof-of-principle bead experiments show precisions below 10 nm in $x$, $y$, and $\lambda$, and below 20 nm in $z$ over an axial range of 1 $\mu$m with 2000 signal photons and 12 background photons.
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
A number of powerful total variation (TV) regularization methods show great promise for enhancing image quality in limited-data cone-beam CT (CBCT) reconstruction. These TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms, and suggests an appropriate way of selecting values for each individual parameter. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm preserves the edges of the reconstructed images better, with fewer sensitive parameters to tune.
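The TV steepest-descent step shared by this family of algorithms can be sketched as below. This is a minimal 2D denoising toy, not a full CBCT reconstruction: it descends the gradient of a smoothed total-variation norm; the step size, iteration count, and phantom are made up.

```python
import numpy as np

def tv_descent(img, n_iter=100, step=0.02, eps=1e-6):
    """A few steepest-descent iterations on a smoothed TV norm (periodic grid)."""
    x = img.copy()
    for _ in range(n_iter):
        dx = np.roll(x, -1, axis=0) - x            # forward differences
        dy = np.roll(x, -1, axis=1) - x
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)     # smoothed gradient magnitude
        px, py = dx / mag, dy / mag
        # gradient of TV = -div(p); subtract it to descend
        grad_tv = -((px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1)))
        x -= step * grad_tv
    return x

rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                  # piecewise-constant phantom
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_descent(noisy)
```

In the full algorithms this step alternates with a data-consistency (POCS or projection-controlled) update.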
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-01-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, which are less sensitive to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968
Theory and algorithms for image reconstruction on chords and within regions of interest
NASA Astrophysics Data System (ADS)
Zou, Yu; Pan, Xiaochuan; Sidky, Emil Y.
2005-11-01
We introduce a formula for image reconstruction on a chord of a general source trajectory. We subsequently develop three algorithms for exact image reconstruction on a chord from data acquired with the general trajectory. Interestingly, two of the developed algorithms can accommodate data containing transverse truncations. The widely used helical trajectory and other trajectories discussed in the literature can be interpreted as special cases of the general trajectory, and the developed theory and algorithms are thus directly applicable to reconstructing images exactly from data acquired with these trajectories. For instance, chords on a helical trajectory are equivalent to the n-PI-line segments; in this situation, the proposed algorithms reduce to the algorithms we proposed previously for image reconstruction on PI-line segments. We have performed preliminary numerical studies, which include image reconstruction on chords of a two-circle trajectory, which is nonsmooth, and on n-PI lines of a helical trajectory, which is smooth. Quantitative results of these studies verify and demonstrate the proposed theory and algorithms.
An Analysis of Fundamental Waffle Mode in Early AEOS Adaptive Optics Images
NASA Astrophysics Data System (ADS)
Makidon, Russell B.; Sivaramakrishnan, Anand; Perrin, Marshall D.; Roberts, Lewis C., Jr.; Oppenheimer, Ben R.; Soummer, Rémi; Graham, James R.
2005-08-01
Adaptive optics (AO) systems have significantly improved astronomical imaging capabilities over the last decade and are revolutionizing the kinds of science possible with 4-5 m class ground-based telescopes. A thorough understanding of AO system performance at the telescope can enable new frontiers of science as observations push AO systems to their performance limits. We look at recent advances with wave-front reconstruction (WFR) on the Advanced Electro-Optical System (AEOS) 3.6 m telescope to show how progress made in improving WFR can be measured directly in improved science images. We describe how a ``waffle mode'' wave-front error (which is not sensed by a Fried geometry Shack-Hartmann wave-front sensor) affects the AO point-spread function. We model details of AEOS AO to simulate a PSF that matches the actual AO PSF in the I band and show that while the older observed AEOS PSF contained several times more waffle error than expected, improved WFR techniques noticeably improve AEOS AO performance. We estimate the impact of these improved WFRs on H-band imaging at AEOS, chosen based on the optimization of the Lyot Project near-infrared coronagraph at this bandpass. Based on observations made at the Maui Space Surveillance System, operated by Detachment 15 of the US Air Force Research Laboratory's Directed Energy Directorate.
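The waffle-mode blindness described above can be verified in a few lines: a checkerboard phase on the actuator grid produces exactly zero average x- and y-slopes in every Fried-geometry Shack-Hartmann subaperture, so the sensor cannot see it. The grid size here is arbitrary.

```python
import numpy as np

n = 8
i, j = np.mgrid[:n, :n]
waffle = (-1.0) ** (i + j)                 # checkerboard actuator pattern

# Fried geometry: each subaperture sits between four actuators; its x/y
# slopes are the averages of the phase differences along its two edges.
sx = 0.5 * ((waffle[1:, 1:] - waffle[1:, :-1]) + (waffle[:-1, 1:] - waffle[:-1, :-1]))
sy = 0.5 * ((waffle[1:, 1:] - waffle[:-1, 1:]) + (waffle[1:, :-1] - waffle[:-1, :-1]))
```

The pattern is nonzero everywhere, yet all measured slopes vanish, i.e. waffle lies in the null space of the reconstructor.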
Holographic Adaptive Laser Optics System
NASA Astrophysics Data System (ADS)
Andersen, G.; Ghebremichael, F.
2011-09-01
We have created a new adaptive optics system using a holographic modal wavefront sensing method with the autonomous (computer-free) closed-loop control of a MEMS deformable mirror (DM). A multiplexed hologram is recorded using the maximum and minimum actuator positions on the deformable mirror as the “modes”. On reconstruction, an input beam is diffracted into pairs of focal spots and the ratio of the intensities of certain pairs determines the absolute wavefront phase at a particular actuator location. The wavefront measurement is made using fast, sensitive silicon photomultiplier arrays with the parallel outputs directly controlling individual actuators in the MEMS DM. In this talk, we will present the results from an all-optical, ultra-compact system that runs in closed-loop without the need for a computer. The speed is limited only by the response time of any given DM actuator and not the number of actuators. In our case, our 32-actuator prototype device already operates at 10 kHz and our next generation system is being designed for > 100 kHz. As a modal system, it is largely insensitive to scintillation and obscuration and is thus ideal for extreme adaptive optics applications. We will present information on how HALOS can be used for image correction and beam propagation as well as several other novel applications.
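A toy sketch of the modal sensing principle (the influence function and the ±1 rad recorded amplitudes are made up, not HALOS parameters): the multiplexed hologram yields two spots whose intensities are the correlations of the input with the "max" and "min" recorded modes, and their normalized difference encodes the actuator phase.

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
mode = np.exp(-x ** 2 / 0.1)               # one actuator's influence function

def spot_pair(a):
    """Intensities of the two diffracted spots for actuator amplitude a."""
    field = np.exp(1j * a * mode)
    i_max = np.abs(np.vdot(np.exp(1j * mode), field)) ** 2    # +1 rad mode
    i_min = np.abs(np.vdot(np.exp(-1j * mode), field)) ** 2   # -1 rad mode
    return i_max, i_min

def ratio(a):
    """Normalized intensity difference; increases with the actuator phase."""
    i_max, i_min = spot_pair(a)
    return (i_max - i_min) / (i_max + i_min)
```

In hardware this ratio is formed directly from photomultiplier outputs, so no reconstruction computer is needed.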
Three-dimensional Bragg coherent diffraction imaging of an extended ZnO crystal.
Huang, Xiaojing; Harder, Ross; Leake, Steven; Clark, Jesse; Robinson, Ian
2012-08-01
A complex three-dimensional quantitative image of an extended zinc oxide (ZnO) crystal has been obtained using Bragg coherent diffraction imaging integrated with ptychography. By scanning a 2.5 µm-long arm of a ZnO tetrapod across a 1.3 µm X-ray beam with fine step sizes while measuring a three-dimensional diffraction pattern at each scan spot, the three-dimensional electron density and projected displacement field of the entire crystal were recovered. The simultaneously reconstructed complex wavefront of the illumination combined with its coherence properties determined by a partial coherence analysis implemented in the reconstruction process provide a comprehensive characterization of the incident X-ray beam.
3D mapping of turbulence: a laboratory experiment
NASA Astrophysics Data System (ADS)
Le Louarn, Miska; Dainty, Christopher; Paterson, Carl; Tallon, Michel
2000-07-01
In this paper, we present the first experimental results of the 3D mapping method. 3D mapping of turbulence is a method to remove the cone effect with multiple laser guide stars and multiple deformable mirrors. A laboratory experiment was realized to verify the theoretical predictions. The setup consisted of two turbulent phase screens (made with liquid crystal devices) and a Shack-Hartmann wavefront sensor. We describe the interaction matrix involved in reconstructing Zernike commands for multiple deformable mirrors from the slope measurements made from laser guide stars. It is shown that mirror commands can indeed be reconstructed with the 3D mapping method. Limiting factors of the method, brought to light by this experiment, are discussed.
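The interaction-matrix approach mentioned above can be sketched generically (dimensions and the matrix itself are made up here): poke each mode, record the sensor slopes as columns of the interaction matrix, then apply its pseudoinverse as the least-squares reconstructor.

```python
import numpy as np

rng = np.random.default_rng(6)
n_slopes, n_modes = 40, 10
M = rng.standard_normal((n_slopes, n_modes))  # calibrated interaction matrix
R = np.linalg.pinv(M)                         # least-squares reconstructor

cmd_true = rng.standard_normal(n_modes)       # unknown mirror commands
slopes = M @ cmd_true                         # noiseless slope measurement
cmd_rec = R @ slopes                          # reconstructed commands
```

With noiseless slopes and a full-column-rank interaction matrix the commands are recovered exactly; in practice, noise and poorly sensed modes motivate truncated or regularized inverses.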
NASA Astrophysics Data System (ADS)
Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun
2017-12-01
Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
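The classic Gerchberg-Saxton iteration underlying the modified SPP-GS scheme can be sketched as below (a plain two-plane GS toy with made-up amplitudes, not the paper's probe-beam variant): alternate between pupil and focal planes, imposing the known amplitude in each while keeping the current phase.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
true_phase = 0.8 * np.sin(2 * np.pi * np.arange(N) / N)[:, None] * np.ones(N)
pupil_amp = np.ones((N, N))                       # known pupil amplitude
focal_amp = np.abs(np.fft.fft2(pupil_amp * np.exp(1j * true_phase)))

# Start from a random phase guess and iterate between the two planes.
g = pupil_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))
for _ in range(200):
    G = np.fft.fft2(g)
    G = focal_amp * np.exp(1j * np.angle(G))      # impose focal amplitude
    g = np.fft.ifft2(G)
    g = pupil_amp * np.exp(1j * np.angle(g))      # impose pupil amplitude

# Residual mismatch between the achieved and measured focal amplitudes.
err = np.linalg.norm(np.abs(np.fft.fft2(g)) - focal_amp) / np.linalg.norm(focal_amp)
```

GS has the error-reduction property, so this residual is non-increasing over iterations.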
NASA Astrophysics Data System (ADS)
Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza
2015-05-01
In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself serves as the reference. The point spread function (PSF) is then simulated and used for image reconstruction with the Lucy-Richardson technique. A method for quantitatively evaluating the Lucy-Richardson deconvolution is also presented.
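The Lucy-Richardson restoration step can be sketched as follows (an illustrative toy with a made-up Gaussian PSF, not the Hartmann-derived PSF of the paper): multiplicative updates drive the estimate toward consistency with the blurred data.

```python
import numpy as np

def fft_convolve(a, b):
    """Circular convolution via FFT; b is a PSF centered in the frame."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(b))))

def richardson_lucy(blurred, psf, n_iter=50):
    psf = psf / psf.sum()
    psf_mirror = np.roll(psf[::-1, ::-1], 1, axis=(0, 1))  # mirrored PSF
    est = np.clip(blurred, 1e-6, None)        # nonnegative starting estimate
    for _ in range(n_iter):
        ratio = blurred / (fft_convolve(est, psf) + 1e-12)
        est = est * fft_convolve(ratio, psf_mirror)
    return est

N = 64
yy, xx = np.mgrid[:N, :N]
psf = np.exp(-((xx - N / 2) ** 2 + (yy - N / 2) ** 2) / (2 * 2.0 ** 2))
scene = np.zeros((N, N))
scene[20:28, 30:38] = 1.0                     # toy point-like object
blurred = fft_convolve(scene, psf / psf.sum())
restored = richardson_lucy(blurred, psf)
```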
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.
Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm
NASA Astrophysics Data System (ADS)
Elahi, Sana; kaleem, Muhammad; Omer, Hammad
2018-01-01
Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on p-thresholding technique for CS-MRI image reconstruction. The use of p-thresholding function promotes sparsity in the image which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
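The ISTA scheme the paper builds on can be sketched for a 1D toy problem (plain soft thresholding is used as the shrinkage step here; the paper swaps in a p-thresholding function): recover a sparse signal from undersampled Fourier (k-space) samples by alternating a gradient step with shrinkage.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256
x_true = np.zeros(N)
x_true[rng.choice(N, 8, replace=False)] = 2.0 + rng.standard_normal(8)

keep = rng.choice(N, 96, replace=False)    # sampled k-space indices

def A(x):                                  # undersampled unitary DFT
    return np.fft.fft(x, norm="ortho")[keep]

def AT(y):                                 # adjoint: zero-fill + inverse DFT
    full = np.zeros(N, dtype=complex)
    full[keep] = y
    return np.fft.ifft(full, norm="ortho")

y = A(x_true)
lam = 0.05                                 # shrinkage threshold
x = np.zeros(N, dtype=complex)
for _ in range(300):
    x = x + AT(y - A(x))                                        # gradient step
    x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0.0)  # shrink

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Replacing the soft-shrinkage line with a p-power shrinkage yields the non-convex variant the paper studies.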
Fast and robust estimation of ophthalmic wavefront aberrations
NASA Astrophysics Data System (ADS)
Dillon, Keith
2016-12-01
Rapidly rising levels of myopia, particularly in the developing world, have led to an increased need for inexpensive and automated approaches to optometry. A simple and robust technique is provided for estimating major ophthalmic aberrations using a gradient-based wavefront sensor. The approach is based on the use of numerical calculations to produce diverse combinations of phase components, followed by Fourier transforms to calculate the coefficients. The approach does not utilize phase unwrapping nor iterative solution of inverse problems. This makes the method very fast and tolerant to image artifacts, which do not need to be detected and masked or interpolated as is needed in other techniques. These features make it a promising algorithm on which to base low-cost devices for applications that may have limited access to expert maintenance and operation.
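One standard non-iterative way to get a wavefront from gradient-sensor data is Fourier-domain least-squares integration (Frankot-Chellappa style); the paper's method goes further and extracts named aberration coefficients, but the integration core can be sketched as below with a made-up periodic test wavefront.

```python
import numpy as np

N = 64
X, Y = np.meshgrid(np.arange(N), np.arange(N))
w_true = np.cos(2 * np.pi * 3 * X / N) + np.sin(2 * np.pi * 2 * Y / N)

# Simulated gradient measurements: forward differences on a periodic grid.
gx = np.roll(w_true, -1, axis=1) - w_true
gy = np.roll(w_true, -1, axis=0) - w_true

# Least-squares integration in the Fourier domain, using the exact
# frequency response of the discrete forward difference.
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
dx = np.exp(2j * np.pi * FX) - 1
dy = np.exp(2j * np.pi * FY) - 1
num = np.conj(dx) * np.fft.fft2(gx) + np.conj(dy) * np.fft.fft2(gy)
den = np.abs(dx) ** 2 + np.abs(dy) ** 2
den[0, 0] = 1.0                            # piston is unobservable from gradients
w_rec = np.real(np.fft.ifft2(num / den))
w_rec -= w_rec.mean() - w_true.mean()      # fix the arbitrary piston
```

Because it is a single pair of FFTs with no unwrapping or iteration, this style of reconstruction shares the speed and robustness properties the abstract emphasizes.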
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction algorithm that minimizes a weighted least squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose.
We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
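A drastically simplified, CPU-only sketch of the weighted-least-squares-plus-TV objective, here for a 1-D signal with a smoothed TV term so that plain gradient descent applies. The smoothing `eps`, the step size, and the denoising test geometry are illustrative choices, not the paper's GPU implementation:

```python
import numpy as np

def cost_grad(x, A, b, w, lam, eps=1e-2):
    # Gradient of 0.5 * sum(w * (A x - b)^2) + lam * sum(sqrt(dx^2 + eps)),
    # where dx are forward differences of the 1-D signal x.
    r = A @ x - b
    d = np.diff(x)
    u = d / np.sqrt(d ** 2 + eps)   # derivative of the smoothed TV term
    g_tv = np.zeros_like(x)
    g_tv[:-1] -= u
    g_tv[1:] += u
    return A.T @ (w * r) + lam * g_tv

def reconstruct(A, b, w, lam=0.3, step=0.05, n_iter=1000):
    # Plain gradient descent; the paper instead uses an accelerated,
    # multi-GPU scheme on full 3D volumes.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * cost_grad(x, A, b, w, lam)
    return x
```

With A the identity this reduces to TV denoising, which already shows the characteristic noise suppression on piecewise-constant signals.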
Xiaodong Zhuge; Palenstijn, Willem Jan; Batenburg, Kees Joost
2016-01-01
In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments on simulated data, experimental μCT data, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography community with an easy-to-use and robust algorithm for DT.
Enabling vendor independent photoacoustic imaging systems with asynchronous laser source
NASA Astrophysics Data System (ADS)
Wu, Yixuan; Zhang, Haichong K.; Boctor, Emad M.
2018-02-01
Channel data acquisition and synchronization between laser excitation and PA signal acquisition are two fundamental hardware requirements for photoacoustic (PA) imaging. Unfortunately, most clinical ultrasound scanners provide neither. Therefore, specialized and less economical research platforms are generally used, which hinders a smooth clinical transition of PA imaging. In previous studies, we proposed an algorithm to achieve PA imaging using ultrasound post-beamformed (USPB) RF data instead of channel data. This work focuses on enabling clinical ultrasound scanners to implement PA imaging without requiring synchronization between the laser excitation and PA signal acquisition. Laser synchronization inherently consists of two aspects: frequency and phase. We synchronize the laser and the ultrasound scanner without any communication between them by investigating USPB images of a point-target phantom in two steps. First, the frequency is estimated by solving a nonlinear optimization problem, under the assumption that the segmented wavefront can only be beamformed into a single spot when synchronization is achieved. Second, after making the frequencies of the two systems identical, the phase delay is estimated by optimizing the image quality while varying the phase value. The proposed method is validated through simulation by manually adding both frequency and phase errors, then applying the proposed algorithm to correct the errors and reconstruct PA images. Compared with the ground truth, simulation results indicate that the remaining errors in frequency correction and phase correction are 0.28% and 2.34%, respectively, which affirms the potential of overcoming hardware barriers to PA imaging through a software solution.
NASA Astrophysics Data System (ADS)
Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo
2008-03-01
In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by selecting an adaptive disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair with the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesis time of a reconstructed image to about 7.02 s.
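The disparity-estimation step can be illustrated with plain block matching; narrowing `d_range` per region from neighboring estimates is the spirit of the adaptive search. The function and all parameters below are a hypothetical simplification for illustration, not the paper's ADSA:

```python
import numpy as np

def disparity_sad(left, right, x, y, block=4, d_range=(0, 16)):
    # Sum-of-absolute-differences block matching: for the block at (x, y)
    # in the left image, find the horizontal shift d into the right image
    # with the lowest matching cost. An adaptive scheme would shrink
    # d_range using disparities already found at neighboring blocks.
    ref = left[y:y + block, x:x + block]
    best_d, best_cost = d_range[0], np.inf
    for d in range(d_range[0], d_range[1] + 1):
        if x - d < 0:
            break  # candidate window would leave the image
        cand = right[y:y + block, x - d:x - d + block]
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Restricting `d_range` is what saves time: the cost loop is linear in the number of candidate disparities.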
NASA Astrophysics Data System (ADS)
Mickevicius, Nikolai J.; Paulson, Eric S.
2017-04-01
The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D-MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-guided radiation therapy (MRI-gRT). A 3D stack-of-stars (SoS), self-navigated 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high-performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms, using computing hardware more advanced than that typically found on product MRI scanners, can result in acquisition and reconstruction of high-quality respiratory-correlated 4D-MRI images in less than five minutes.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
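A toy version of the nesting idea, with a fully linear model y ≈ A·Θ·B (A: system matrix, B: temporal basis functions, Θ: parametric image). The costly tomographic update runs once per outer loop while the cheap parametric fit runs many sub-iterations, which is the source of the acceleration. Gradient steps stand in for the EM / optimization-transfer surrogates of the paper, and all sizes and names are illustrative:

```python
import numpy as np

def nested_direct_recon(A, B, y, n_outer=500, n_inner=20):
    # Direct linear-parametric reconstruction: y[:, t] ≈ A @ (Theta @ B[:, t]).
    # Outer loop: one (expensive) image-domain gradient step on the data term.
    # Inner loop: many (cheap) sub-iterations refitting the parametric
    # coefficients Theta to the updated image-domain target.
    n_pix, n_par = A.shape[1], B.shape[0]
    Theta = np.zeros((n_pix, n_par))
    sA = 1.0 / np.linalg.norm(A, 2) ** 2
    sB = 1.0 / np.linalg.norm(B, 2) ** 2
    for _ in range(n_outer):
        X = Theta @ B                         # current dynamic image series
        X_t = X + sA * A.T @ (y - A @ X)      # costly tomographic step
        for _ in range(n_inner):              # cheap parametric sub-iterations
            Theta = Theta + sB * (X_t - Theta @ B) @ B.T
        # With the inner loop run to convergence, this is a projected
        # gradient method on the data-fit term, constrained to range(B).
    return Theta
```

Because the inner updates involve only the small basis B, extra sub-iterations cost almost nothing compared with another tomographic step.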
Axial Cone-Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering.
Tang, Shaojie; Tang, Xiangyang
2016-09-01
The backprojection-filtration (BPF) and derivative backprojection filtered (DBPF) algorithms, which share Hilbert filtering as a common algorithmic feature, were originally derived for exact helical reconstruction from cone-beam (CB) scan data and for axial reconstruction from fan-beam data, respectively. These two algorithms can be heuristically extended to image reconstruction from axial CB scan data, but they induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution to eliminate these artifacts: an integration of the three-dimensional (3-D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous tests of reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Preliminary results show that the orthogonal butterfly filtering eliminates the severe streak artifacts at off-central planes in images reconstructed by the 3-D weighted axial CB-BPF/DBPF algorithm. Integrated with orthogonal butterfly filtering, the 3-D weighted CB-BPF/DBPF algorithm performs at least as well as the 3-D weighted CB-FBP algorithm in image reconstruction from axial CB scan data, and the proposed combination can serve as an algorithmic solution for CT imaging in extensive clinical and preclinical applications.
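The Hilbert filtering common to both algorithms can be written as a single frequency-domain multiplication by -i·sign(f). This generic sketch shows only that shared step, not the weighting or butterfly-filtering parts of the paper:

```python
import numpy as np

def hilbert_filter(signal):
    # Discrete Hilbert transform of a real 1-D signal via the FFT:
    # multiply the spectrum by -i * sign(frequency) and invert.
    n = len(signal)
    H = -1j * np.sign(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(signal) * H))
```

A quick sanity check is the classical identity H{cos} = sin, which the transform reproduces exactly on a periodic grid.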
Beam shaping by using small-aperture SLM and DM in a high power laser
NASA Astrophysics Data System (ADS)
Li, Sensen; Lu, Zhiwei; Du, Pengyuan; Wang, Yulei; Ding, Lei; Yan, Xiusheng
2018-03-01
High-power lasers play an important role in many fields, such as directed energy weapons, optoelectronic countermeasures, inertial confinement fusion, industrial processing and scientific research. A uniform near field and wavefront are important parts of beam quality for high-power lasers, conducive to maintaining high spatial beam quality during propagation. We demonstrate experimentally that the spatial intensity and wavefront distribution at the output of a complex high-power solid-state laser system can be well compensated by using a small-aperture spatial light modulator (SLM) and deformable mirror (DM) in the front stage. The experimental setup is a hundred-joule-level Nd:glass laser system operating at three wavelengths, 1053 nm (1ω), 527 nm (2ω) and 351 nm (3ω), with a 3 ns pulse duration and a final output beam aperture of 60 mm, while the clear aperture of the electrically addressable SLM is less than 20 mm and the effective diameter of the 52-actuator DM is about 15 mm. The key point of the beam shaping system is that the two front-stage beam shaping devices need to precompensate for the gain nonuniformity and wavefront distortion of the laser system. The details of the iterative algorithm for improving the beam quality are presented. Experimental results show that the output near field and wavefront are both nearly flat-topped after beam shaping, with a near-field modulation of 1.26:1 and a wavefront peak-to-valley value of 0.29 λ at 1053 nm.
Full-Color Plasmonic Metasurface Holograms.
Wan, Weiwei; Gao, Jie; Yang, Xiaodong
2016-12-27
Holography is one of the most attractive approaches for reconstructing optical images, due to its capability of recording both the amplitude and phase information on light scattered from objects. Recently, optical metasurfaces for manipulating the wavefront of light with well-controlled amplitude, phase, and polarization have been utilized to reproduce computer-generated holograms. However, the currently available metasurface holograms have only been designed to achieve limited colors and record either amplitude or phase information. This fact significantly limits the performance of metasurface holograms to reconstruct full-color images with low noise and high quality. Here, we report the design and realization of ultrathin plasmonic metasurface holograms made of subwavelength nanoslits for reconstructing both two- and three-dimensional full-color holographic images. The wavelength-multiplexed metasurface holograms with both amplitude and phase modulations at subwavelength scale can faithfully produce not only three primary colors but also their secondary colors. Our results will advance various holographic applications.
NASA Astrophysics Data System (ADS)
Zhao, Yuchen; Zemmamouche, Redouane; Vandenrijt, Jean-François; Georges, Marc P.
2018-05-01
A combination of digital holographic interferometry (DHI) and digital speckle photography (DSP) allows in-plane and out-of-plane displacement measurement between two states of an object. The former can be determined by correlating the two speckle patterns, whereas the latter is given by the phase difference obtained from DHI. We show that the amplitude of the numerically reconstructed object wavefront obtained from Fresnel in-line digital holography (DH), in combination with phase-shifting techniques, can be used as the speckle pattern in DSP. The accuracy of the in-plane measurement is improved after correcting the phase errors induced by the reference wave during the reconstruction process. Furthermore, unlike a conventional imaging system, Fresnel DH offers the possibility to resize the pixel size of the speckle patterns in the reconstruction plane under the same optical configuration simply by zero-padding the hologram. This flexibility of speckle-size adjustment in Fresnel DH ensures the accuracy of the estimation result using DSP.
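The zero-padding property follows directly from the single-FFT Fresnel reconstruction geometry, where the reconstruction-plane pixel pitch is λd/(NΔ): padding increases the sample count N while the hologram pitch Δ stays fixed, so the reconstructed pixel shrinks. The numeric values in the check below are illustrative, not from the experiment:

```python
def fresnel_pixel_size(wavelength, distance, n_pixels, pitch):
    # Pixel size in the reconstruction plane of a single-FFT Fresnel
    # transform: delta = lambda * d / (N * pitch). Zero-padding the
    # hologram raises N, resampling the reconstructed speckle pattern
    # onto a finer grid without changing the optical configuration.
    return wavelength * distance / (n_pixels * pitch)
```

Doubling N by zero-padding therefore halves the reconstruction-plane pixel size.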
Plasmonic nanoparticle scattering for color holograms
Montelongo, Yunuen; Tenorio-Pearl, Jaime Oscar; Williams, Calum; Zhang, Shuang; Milne, William Ireland; Wilkinson, Timothy David
2014-01-01
This work presents an original approach to create holograms based on the optical scattering of plasmonic nanoparticles. By analogy to the diffraction produced by the scattering of atoms in X-ray crystallography, we show that plasmonic nanoparticles can produce a wave-front reconstruction when they are sampled on a diffractive plane. By applying this method, all of the scattering characteristics of the nanoparticles are transferred to the reconstructed field. Hence, we demonstrate that a narrow-band reconstruction can be achieved for direct white light illumination on an array of plasmonic nanoparticles. Furthermore, multicolor capabilities are shown with minimal cross-talk by multiplexing different plasmonic nanoparticles at subwavelength distances. The holograms were fabricated from a single subwavelength thin film of silver and demonstrate that the total amount of binary information stored in the plane can exceed the limits of diffraction and that this wavelength modulation can be detected optically in the far field. PMID:25122675
Digital micromirror device as amplitude diffuser for multiple-plane phase retrieval
NASA Astrophysics Data System (ADS)
Abregana, Timothy Joseph T.; Hermosa, Nathaniel P.; Almoro, Percival F.
2017-06-01
Previous implementations of the phase diffuser used in the multiple-plane phase retrieval method included a diffuser glass plate with fixed optical properties or a programmable yet expensive spatial light modulator. Here, a model for phase retrieval based on a digital micromirror device as an amplitude diffuser is presented. The technique offers a programmable, convenient and low-cost amplitude diffuser for non-stagnating iterative phase retrieval, and is demonstrated in the reconstruction of smooth object wavefronts.
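A compact sketch of the multiple-plane phase retrieval loop with angular-spectrum propagation: the field is propagated from plane to plane, and at each plane the measured amplitude is enforced while the evolving phase estimate is kept. The diffuser modeling and capture geometry of the paper are omitted, and all parameter values are illustrative:

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, pitch):
    # Free-space propagation of a sampled field by the angular spectrum
    # method; evanescent components are clamped to zero spatial frequency.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def multiplane_phase_retrieval(amplitudes, wavelength, dz, pitch, n_iter=20):
    # Cycle through the measurement planes (spaced dz apart), replacing
    # the amplitude with each measurement and keeping the phase estimate.
    field = amplitudes[0].astype(complex)
    for _ in range(n_iter):
        for k in range(1, len(amplitudes)):
            field = angular_spectrum(field, wavelength, dz, pitch)
            field = amplitudes[k] * np.exp(1j * np.angle(field))
        # propagate back to the first plane and re-apply its amplitude
        field = angular_spectrum(field, wavelength,
                                 -(len(amplitudes) - 1) * dz, pitch)
        field = amplitudes[0] * np.exp(1j * np.angle(field))
    return field
```

The diffuser's role in the actual method is to diversify the recorded amplitudes so this iteration does not stagnate.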
NASA Astrophysics Data System (ADS)
Jin, Zhenyu; Lin, Jing; Liu, Zhong
2008-07-01
By studying the classical techniques (such as the Shack-Hartmann wavefront sensor) used to test the aberrations of ground-based astronomical optical telescopes, we put forward two testing methods based on high-resolution image reconstruction: one based on the averaged short-exposure OTF, and the other based on the speckle interferometric OTF introduced by Antoine Labeyrie. Research by J. Ohtsubo, F. Roddier, Richard Barakat and J.-Y. Zhang indicated that the speckle interferometric transfer function (SITF) statistics are affected by the telescope's optical aberrations; that is, the SITF statistics are a function of the optical system aberration and the atmospheric Fried parameter (seeing). Diffraction-limited information about the telescope can be obtained through two statistical treatments of abundant speckle images: with the first method, we extract low-frequency information such as the full width at half maximum (FWHM) of the telescope PSF to estimate the optical quality; with the second method, we obtain a more precise description of the telescope PSF including high-frequency information. We will apply the two testing methods to the 2.4 m optical telescope of the GMG Observatory in China to validate their repeatability and correctness, and compare the results with those obtained with the Shack-Hartmann wavefront sensor. This is described in detail in our paper.
High-speed parallel implementation of a modified PBR algorithm on DSP-based EH topology
NASA Astrophysics Data System (ADS)
Rajan, K.; Patnaik, L. M.; Ramakrishna, J.
1997-08-01
The Algebraic Reconstruction Technique (ART) is an age-old method for solving the problem of three-dimensional (3-D) reconstruction from projections in electron microscopy and radiology. In medical applications, direct 3-D reconstruction is at the forefront of investigation. The simultaneous iterative reconstruction technique (SIRT) is an ART-type algorithm with the potential of generating, in a few iterations, tomographic images of a quality comparable to that of convolution backprojection (CBP) methods. Pixel-based reconstruction (PBR) is similar to SIRT reconstruction, and it has been shown that PBR algorithms give better quality pictures than SIRT algorithms. In this work, we propose a few modifications to the PBR algorithms; the modified algorithms are shown to give better quality pictures than the original PBR algorithms. The PBR algorithm and the modified PBR algorithms are highly compute intensive, and not many attempts have been made to reconstruct objects in the true 3-D sense because of the high computational overhead. In this study, we have developed parallel two-dimensional (2-D) and 3-D reconstruction algorithms based on modified PBR. We attempt to solve the two problems encountered by the PBR and modified PBR algorithms, i.e., the long computational time and the large memory requirements, by parallelizing the algorithm on a multiprocessor system. We investigate possible task and data partitioning schemes by exploiting the potential parallelism in the PBR algorithm subject to minimizing the memory requirement. We have implemented an extended hypercube (EH) architecture for high-speed execution of the 3-D reconstruction algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs) and dual-port random access memories (DPR) as channels between the PEs.
We discuss and compare the performance of the PBR algorithm on an IBM 6000 RISC workstation, on a Silicon Graphics Indigo 2 workstation, and on an EH system. The results show that an EH(3,1) using DSP chips as PEs executes the modified PBR algorithm about 100 times faster than an IBM 6000 RISC workstation. We have also executed the algorithms on a 4-node IBM SP2 parallel computer; the execution time of the algorithm on an EH(3,1) is better than that of the 4-node IBM SP2 system. The speed-up of an EH(3,1) system with eight PEs and one network controller is approximately 7.85.
Low dose reconstruction algorithm for differential phase contrast imaging.
Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco
2011-01-01
Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refraction index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that benefits from compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial-derivative matrix. In this way, the compressed sensing reconstruction problem of DPCI can be transformed into an already-solved problem in transmission CT. Our algorithm has the potential to reconstruct the refraction index distribution of the sample from highly undersampled projection data, and can thus significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
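The discretization step can be mimicked by composing an ordinary projection matrix with a finite-difference operator acting along the detector, so the forward model becomes D·A instead of A. Here `A` is any system matrix and the forward-difference stencil is an assumed choice, not necessarily the paper's:

```python
import numpy as np

def differential_system_matrix(A, n_det):
    # DPCI measures the detector-direction derivative of the projections,
    # so the linear forward model is D @ A, with D a forward-difference
    # matrix along the detector (last row left zero at the boundary).
    D = np.zeros((n_det, n_det))
    idx = np.arange(n_det - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    return D @ A
```

Once the model is in this matrix form, any algebraic or compressed-sensing solver written for transmission CT applies unchanged.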
Optimization-based reconstruction for reduction of CBCT artifact in IGRT
NASA Astrophysics Data System (ADS)
Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan
2016-04-01
Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image guided radiation therapy (IGRT) by providing 3D spatial information about the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that the ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm indicate a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.
NASA Astrophysics Data System (ADS)
Deng, Shaoyong; Zhang, Shiqiang; He, Minbo; Zhang, Zheng; Guan, Xiaowei
2017-05-01
The positive-branch confocal unstable resonator with an inhomogeneous gain medium was studied for a commonly used high-energy DF laser system. The fast-changing process of the resonator's eigenmodes was coupled with the slow-changing thermal deformation of the cavity mirrors. The influence of the thermal deformation of the cavity mirrors on the outcoupled beam quality and on the transmission loss of the high-frequency components of the high-energy laser was computed. The simulations were performed with programs written in MATLAB and the GLAD software, combining finite elements with the Fox-Li iteration algorithm. Effects of thermal distortion, misalignment of the cavity mirrors, and the inhomogeneous distribution of the gain medium were introduced to simulate the real physical conditions of the laser cavity. The wavefront distribution and beam quality (including the RMS of the wavefront, power in the bucket, Strehl ratio, diffraction limit β, position of the beam spot center, spot size, and far-field intensity distribution) of the distorted outcoupled beam were studied. The conclusions of the simulation agree with the experimental results. This work provides a reference for the wavefront correction range required of the adaptive optics system in the internal beam path.
A density based algorithm to detect cavities and holes from planar points
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used to approximate the shape of a set of planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary by iteratively removing triangles from the Delaunay triangulation, and is divided into two steps: rough and refined shape reconstruction. The rough shape reconstruction is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical measure called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness over the point set. Experimental comparison with other shape reconstruction approaches shows that the proposed algorithm accurately yields the boundaries of cavities and holes over varying point-set densities and distributions.
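One plausible reading of the "compactness of a point" is the variation (here, the coefficient of variation) of the lengths of its incident edges: points on cavity or hole boundaries touch both short interior edges and long edges spanning the empty region, so their variation is high. The exact formulation in the paper may differ, and the Delaunay edge list is assumed to be computed elsewhere:

```python
import numpy as np

def point_compactness(points, edges):
    # For each point, collect the lengths of its incident edges and score
    # the point by std/mean of those lengths. Near-zero means uniform
    # surroundings; high values flag candidate cavity/hole boundaries.
    pts = np.asarray(points, dtype=float)
    incident = {i: [] for i in range(len(pts))}
    for a, b in edges:
        length = np.linalg.norm(pts[a] - pts[b])
        incident[a].append(length)
        incident[b].append(length)
    comp = np.zeros(len(pts))
    for i, lengths in incident.items():
        if lengths:
            lengths = np.array(lengths)
            comp[i] = lengths.std() / lengths.mean()
    return comp
```

Thresholding this score, or locating its sharp changes, separates boundary points from interior points.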
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term preferentially performs smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original total variation norm. During the reconstruction process, pixels at edges are gradually identified and given a small penalty weight. Our iterative algorithm is implemented on a GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context, and that our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information in low-contrast structures and therefore maintains acceptable spatial resolution. PMID:21860076
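The penalty-weight idea can be sketched as a gradient-dependent weight multiplying the TV integrand: pixels with a large local gradient (likely edges) receive a small weight and are smoothed less. The weight formula and `delta` below are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def edge_weights(x, delta=0.05):
    # Weight in (0, 1] per pixel: near 1 in flat regions, small where the
    # local gradient magnitude is large relative to delta (edges).
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    g = np.hypot(gx, gy)
    return 1.0 / (1.0 + (g / delta) ** 2)

def weighted_tv(x, w):
    # Edge-preserving TV value: per-pixel gradient magnitude scaled by w.
    gx = np.diff(x, axis=0, append=x[-1:, :])
    gy = np.diff(x, axis=1, append=x[:, -1:])
    return float(np.sum(w * np.hypot(gx, gy)))
```

In the full algorithm the weights are updated as the reconstruction progresses, so edges identified in later iterations stop being penalized.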
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to depend on the PET reconstruction method. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function modeling (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and the blood pools of the ventricles were generated from the dynamic image series. The kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. The kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented a relatively higher residual in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and the TOF and TPSF reconstructions were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging.
TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF were recommended for cardiac 11 C-acetate kinetic analysis.
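As a sketch of the 1-tissue-compartment modeling step described above, the following fits K1 and k2 to a synthetic time-activity curve; the input function, time grid, and parameter values are illustrative, not from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Time grid (minutes) and a synthetic arterial input function C_p(t).
t = np.linspace(0, 20, 201)
dt = t[1] - t[0]
cp = t * np.exp(-t)  # gamma-variate-like input, arbitrary units (illustrative)

def one_tissue_tac(t, K1, k2):
    """Tissue TAC of a 1-tissue-compartment model:
    C_t(t) = K1 * exp(-k2*t) convolved with C_p(t)."""
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(irf, cp)[: len(t)] * dt

# Simulate a noiseless myocardial TAC with known parameters, then refit.
true_K1, true_k2 = 0.8, 0.15
tac = one_tissue_tac(t, true_K1, true_k2)
(est_K1, est_k2), _ = curve_fit(one_tissue_tac, t, tac, p0=(0.5, 0.1))
```

With noiseless data the fit recovers the generating parameters; on real TACs a measured input function and noise weighting would be required.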
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been implemented successfully as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided free to the scientific community, for regular use and for possible further optimization.
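The abstract does not disclose the proprietary algorithms it compares against, but the general class of algebraic iterative reconstruction it belongs to can be sketched with a minimal Kaczmarz (ART) sweep on a toy system; sizes and data are invented for illustration:

```python
import numpy as np

# Toy system: 4-pixel "image" x, 6 ray sums (rows of A), consistent data b.
rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0, 2.0, 1.5])
A = rng.random((6, 4))
b = A @ x_true

# Kaczmarz / ART: project the current estimate onto each ray's hyperplane.
x = np.zeros(4)
for _ in range(500):              # sweeps over all rays
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i

err = np.linalg.norm(x - x_true)
```

For consistent data the iterates converge to the true image; sparse-view CT adds regularization on top of such updates.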
5-D interpolation with wave-front attributes
NASA Astrophysics Data System (ADS)
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning such as the angle of emergence and wave-front curvatures. These attributes include structural information of subsurface features such as the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work to address the two problems mentioned above, and we call the result wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given.
The comparison reveals significant advantages for steeply dipping events with the 5-D WABI method compared to the rank-reduction-based 5-D interpolation technique. Diffraction tails benefit substantially from this improved performance of the partial CRS stacking approach, while the CPU time is comparable to that consumed by the rank-reduction-based method.
Report to the Congress on the Strategic Defense Initiative, 1991
1991-05-01
...ultraviolet, and infrared radiation-hardened charge-coupled device imagers, step-stare sensor signal processing algorithms, and processor... ...Demonstration Experiment (LODE) resolved central issues associated with wavefront sensing and control, and the 4-meter Large Advanced Mirror Program (LAMP)... Figure 4-16: Firepond CO2 Imaging Radar Demonstration; Figure 4-17: IBSS and the Shuttle.
Axial Cone Beam Reconstruction by Weighted BPF/DBPF and Orthogonal Butterfly Filtering
Tang, Shaojie; Tang, Xiangyang
2016-01-01
Goal: The backprojection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, were originally derived for exact helical reconstruction from cone beam (CB) scan data and for axial reconstruction from fan beam data, respectively. These two algorithms can be heuristically extended to image reconstruction from axial CB scan data, but they induce severe artifacts in images located away from the central plane determined by the circular source trajectory. We propose an algorithmic solution herein to eliminate the artifacts. Methods: The solution is an integration of the three-dimensional (3D) weighted axial CB-BPF/DBPF algorithm with orthogonal butterfly filtering, namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering. Using the computer-simulated Forbild head and thoracic phantoms, which are rigorous in inspecting reconstruction accuracy, and an anthropomorphic thoracic phantom with projection data acquired by a CT scanner, we evaluate the performance of the proposed algorithm. Results: Preliminary results show that the orthogonal butterfly filtering can eliminate the severe streak artifacts at off-central planes in images reconstructed by the 3D weighted axial CB-BPF/DBPF algorithm. Conclusion: Integrated with orthogonal butterfly filtering, the 3D weighted CB-BPF/DBPF algorithm can perform at least as well as the 3D weighted CB-FBP algorithm in image reconstruction from axial CB scan data. Significance: The proposed 3D weighted axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering can be an algorithmic solution for CT imaging in extensive clinical and preclinical applications. PMID:26660512
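Hilbert filtering, named above as the common algorithmic feature of the BPF/DBPF family, can be illustrated in one dimension with SciPy's analytic-signal routine; the test signal below is invented for illustration:

```python
import numpy as np
from scipy.signal import hilbert

# scipy.signal.hilbert returns the analytic signal; its imaginary part is
# the discrete Hilbert transform of the input.
n = 256
t = np.arange(n)
sig = np.cos(2 * np.pi * 8 * t / n)      # 8 full cycles over the window
ht = np.imag(hilbert(sig))               # Hilbert transform: cos -> sin

err = np.max(np.abs(ht - np.sin(2 * np.pi * 8 * t / n)))
```

In BPF/DBPF this 1-D filtering is applied along chords of the reconstruction volume rather than along a time axis.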
Laser guide star wavefront sensing for ground-layer adaptive optics on extremely large telescopes.
Clare, Richard M; Le Louarn, Miska; Béchet, Clementine
2011-02-01
We propose ground-layer adaptive optics (GLAO) to improve the seeing on the 42 m European Extremely Large Telescope. Shack-Hartmann wavefront sensors (WFSs) with laser guide stars (LGSs) will experience significant spot elongation due to off-axis observation. This spot elongation influences the design of the laser launch location, laser power, WFS detector, and centroiding algorithm for LGS GLAO on an extremely large telescope. We show, using end-to-end numerical simulations, that with a noise-weighted matrix-vector-multiply reconstructor, the performance in terms of 50% ensquared energy (EE) of the side and central launch of the lasers is equivalent, the matched filter and weighted center of gravity centroiding algorithms are the most promising, and approximately 10×10 undersampled pixels are optimal. Significant improvement in the 50% EE can be observed with a few tens of photons/subaperture/frame, and no significant gain is seen by adding more than 200 photons/subaperture/frame. The LGS GLAO is not particularly sensitive to the sodium profile present in the mesosphere nor to a short-timescale (less than 100 s) evolution of the sodium profile. The performance of LGS GLAO is, however, sensitive to the atmospheric turbulence profile.
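A minimal sketch of the weighted center-of-gravity centroiding compared above, applied to a synthetic elongated spot; the spot shape, grid size, and weighting window are invented for illustration:

```python
import numpy as np

def weighted_cog(img, w):
    """Weighted centre-of-gravity centroid of a 2-D spot image."""
    wi = img * w
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    s = wi.sum()
    return (xs * wi).sum() / s, (ys * wi).sum() / s

# Synthetic elongated Gaussian spot at (x, y) = (6.3, 4.7) on a 12x12 grid,
# mimicking LGS spot elongation.
ys, xs = np.mgrid[0:12, 0:12]
spot = np.exp(-((xs - 6.3) ** 2 / 4.0 + (ys - 4.7) ** 2 / 1.5))

# Gaussian weighting window centred on the expected spot position.
w = np.exp(-((xs - 6.3) ** 2 + (ys - 4.7) ** 2) / 8.0)
cx, cy = weighted_cog(spot, w)
```

The weighting window suppresses noisy wings of the elongated spot, which is why weighted CoG competes with the matched filter in the study above.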
Ahmad, Moiz; Balter, Peter; Pan, Tinsu
2011-10-01
Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm.
With graphics processing unit hardware used to accelerate computations, the 4D-VOI reconstruction required a 40-s reconstruction time. 4D-VOI reconstruction effectively reduces undersampling artifacts and resolves lung tumor motion in 4D-CBCT. The 4D-VOI reconstruction is computationally inexpensive compared with more sophisticated iterative algorithms. Compared with these algorithms, our 4D-VOI reconstruction is an attractive alternative in 4D-CBCT for reconstructing target motion without generating numerous streak artifacts.
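The respiration-correlated sorting underlying 4D reconstruction can be sketched as a phase-binning step; the scan length, breathing period, and bin count below are illustrative values, not the paper's:

```python
import numpy as np

# Assign each projection a respiratory phase bin (respiration-correlated
# sorting, the first step of any 4D-CBCT reconstruction).
n_proj, n_bins = 600, 10
t = np.linspace(0, 60, n_proj)            # 1-min scan, seconds
phase = (t % 4.0) / 4.0                   # 4-s breathing cycle, phase in [0, 1)
bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)

# Each bin receives only ~1/n_bins of the projections, which is the
# undersampling that VOI-restricted 4D reconstruction mitigates.
counts = np.bincount(bins, minlength=n_bins)
```

In 4D-VOI, only the voxels inside the VOI are reconstructed per-bin; voxels outside use all projections uncorrelated.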
UV Reconstruction Algorithm And Diurnal Cycle Variability
NASA Astrophysics Data System (ADS)
Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara
2009-03-01
UV reconstruction is a method of estimating surface UV with the use of available actinometrical and aerological measurements. UV reconstruction is necessary for the study of long-term UV change. A typical series of UV measurements is no longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. For elaborating the improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland, were collected with the following instruments: a NILU-UV multi-channel radiometer, a Kipp&Zonen pyranometer, and radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used for reconstruction of UV at four Polish sites: Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
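The CMF defined above is a simple ratio of measured to clear-sky-modelled irradiance; a toy computation with invented numbers:

```python
# Cloud Modification Factor: measured irradiance divided by the clear-sky
# model value. The irradiances below are hypothetical, not the paper's data.
measured_uv = 1.8          # W/m^2, hypothetical measured UV irradiance
modelled_clear_uv = 2.4    # W/m^2, hypothetical clear-sky model value
cmf_uv = measured_uv / modelled_clear_uv   # < 1 under cloud
```

The reconstruction then maps the CMF observed in global radiation (where long records exist) to the CMF in UV via an empirical relation.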
Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters
NASA Astrophysics Data System (ADS)
Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada
2011-01-01
We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, improves on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates the cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter, compared to that of most existing reconstruction algorithms, in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework is also reported.
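For reference, the standard (1-D, non-cyclic) Savitzky-Golay smoothing that the CSG filter generalizes is a local least-squares polynomial fit per window; a sketch with SciPy and synthetic data:

```python
import numpy as np
from scipy.signal import savgol_filter

# Savitzky-Golay: fit a low-order polynomial in a sliding window by least
# squares and evaluate it at the window centre.
rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + 0.1 * rng.standard_normal(200)
smooth = savgol_filter(noisy, window_length=21, polyorder=3)

rms_before = np.sqrt(np.mean((noisy - np.sin(x)) ** 2))
rms_after = np.sqrt(np.mean((smooth - np.sin(x)) ** 2))
```

The CSG extension replaces the 1-D window with a 3-D one and adds a cyclic indicator term so the same fit also interpolates between slices.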
Experimental scheme and restoration algorithm of block compression sensing
NASA Astrophysics Data System (ADS)
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed Sensing (CS) can use the sparseness of a target to obtain its image with much less data than that required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
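A minimal orthogonal matching pursuit, the greedy recovery algorithm named above, on a synthetic sparse-recovery problem; dimensions, sparsity, and seed are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: greedily select k atoms of A,
    re-solving a least-squares fit on the growing support each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)           # unit-norm sensing atoms
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]   # 3-sparse signal
y = A @ x_true                            # 40 compressive measurements
x_hat = omp(A, y, 3)
err = np.linalg.norm(x_hat - x_true)
```

With 40 Gaussian measurements of a 3-sparse, 100-element signal, OMP recovers the support and coefficients essentially exactly.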
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show the great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore, optimization is needed to find the algorithm that best exploits the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
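The merit parameters named above (bias, variance, MSE) are direct voxel-wise statistics of the difference between the true phantom and a reconstruction; a toy computation with an invented phantom:

```python
import numpy as np

# Toy phantom and a biased, noisy "reconstruction" of it.
rng = np.random.default_rng(3)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0
recon = truth + 0.05 + 0.1 * rng.standard_normal(truth.shape)

diff = recon - truth
bias = diff.mean()          # systematic offset
variance = diff.var()       # noise power
mse = np.mean(diff ** 2)    # equals variance + bias**2
```

The decomposition MSE = variance + bias² is what lets such studies separate systematic reconstruction error from noise.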
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however is that strict adherence to correction limits of convergent algorithms extends the number of iterations and ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
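An ordered-subsets EM update of the kind cited above as a convergence accelerator can be sketched on a toy nonnegative system; this is generic OSEM, not the paper's AM algorithm, and all sizes are illustrative:

```python
import numpy as np

# Toy nonnegative system with noiseless, consistent data.
rng = np.random.default_rng(4)
A = rng.random((24, 8)) + 0.1        # system matrix (strictly positive)
x_true = rng.random(8) + 0.5
y = A @ x_true

# Four interleaved row subsets; each sub-iteration applies the EM
# multiplicative update using only that subset's data.
subsets = [np.arange(s, 24, 4) for s in range(4)]
x = np.ones(8)
for _ in range(200):                  # full iterations
    for idx in subsets:
        As, ys = A[idx], y[idx]
        x *= (As.T @ (ys / (As @ x))) / As.sum(axis=0)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Cycling through subsets applies roughly one full-data EM step's worth of correction per sub-iteration, which is the source of the acceleration factors reported above.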
JWST Wavefront Control Toolbox
NASA Technical Reports Server (NTRS)
Shin, Shahram Ron; Aronstein, David L.
2011-01-01
A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces, using influence functions to correct for possible misalignments of the James Webb Space Telescope (JWST). The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing a cost function, and can be used for either constrained or unconstrained optimization. The control process involves 1 to 7 degrees of freedom per primary-mirror segment, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.
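A minimal sketch of influence-function-based least-squares wavefront control, the core operation such a toolbox wraps; matrix sizes and the random influence functions are illustrative, and the snippet is in Python rather than the toolbox's Matlab:

```python
import numpy as np

# IF maps actuator/segment commands to wavefront samples (the influence
# functions); given a measured wavefront, solve for cancelling commands.
rng = np.random.default_rng(5)
n_phase, n_act = 120, 12
IF = rng.standard_normal((n_phase, n_act))     # influence-function matrix
true_cmd = rng.standard_normal(n_act)
wavefront = IF @ true_cmd                      # measured aberration

cmd = -np.linalg.pinv(IF) @ wavefront          # least-squares correction
residual = np.linalg.norm(wavefront + IF @ cmd)
```

In practice the pseudoinverse is regularized and command limits (the moving-part constraints noted above) turn this into a constrained optimization.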
Adaptive compensation of aberrations in ultrafast 3D microscopy using a deformable mirror
NASA Astrophysics Data System (ADS)
Sherman, Leah R.; Albert, O.; Schmidt, Christoph F.; Vdovin, Gleb V.; Mourou, Gerard A.; Norris, Theodore B.
2000-05-01
3D imaging using a multiphoton scanning confocal microscope is ultimately limited by aberrations of the system. We describe a system that adaptively compensates the aberrations with a deformable mirror. We have increased the transverse scanning range of the microscope by a factor of three by compensating off-axis aberrations. We have also significantly increased the longitudinal scanning depth by compensating the spherical aberrations caused by penetration into the sample. Our correction is based on a genetic algorithm that uses the second-harmonic or two-photon fluorescence signal excited in the sample by femtosecond pulses as the enhancement parameter. This allows us to globally optimize the wavefront without a wavefront measurement. To improve the speed of the optimization, we use Zernike polynomials as the basis for correction. Corrections can be stored in a database for look-up with future samples.
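A toy version of the wavefront-sensorless genetic-algorithm optimization over Zernike coefficients described above; the surrogate fitness standing in for the two-photon signal, and all population parameters, are invented for illustration:

```python
import numpy as np

# Tiny genetic algorithm over Zernike coefficients z, maximising a
# surrogate "signal" that peaks when the aberration is cancelled.
rng = np.random.default_rng(6)
n_modes, pop_size, n_gen = 5, 30, 60
target = rng.uniform(-1, 1, n_modes)        # unknown optimal coefficients

def signal(z):                               # surrogate fitness (invented)
    return np.exp(-np.sum((z - target) ** 2))

pop = rng.uniform(-2, 2, (pop_size, n_modes))
for _ in range(n_gen):
    fit = np.array([signal(z) for z in pop])
    parents = pop[np.argsort(fit)[-pop_size // 2:]]          # selection
    children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
    children = children + 0.1 * rng.standard_normal(children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([signal(z) for z in pop])]
err = np.linalg.norm(best - target)
```

On the microscope, each fitness evaluation is a real exposure, so keeping the search space to a few Zernike modes is what makes this tractable.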
2014-09-01
...to develop an optimized system design and associated image reconstruction algorithms for a hybrid three-dimensional (3D) breast imaging system... (i) developed time-of-flight extraction algorithms to perform USCT, (ii) developing image reconstruction algorithms for USCT, (iii) developed...
Reduced projection angles for binary tomography with particle aggregation.
Al-Rifaie, Mohammad Majid; Blackwell, Tim
This paper extends the particle aggregate reconstruction technique (PART), a reconstruction algorithm for binary tomography based on the movement of particles. PART supposes that pixel values are particles, and that particles diffuse through the image, staying together in regions of uniform pixel value known as aggregates. In this work, a variation of this algorithm is proposed, with a focus on reducing the number of projections and on whether this impacts the reconstruction of images. The algorithm is tested on three phantoms of varying sizes and numbers of forward projections, and compared to filtered back projection, a random search algorithm, and SART, a standard algebraic reconstruction method. It is shown that the proposed algorithm outperforms the aforementioned algorithms on small numbers of projections. This potentially makes the algorithm attractive in scenarios where collecting less projection data is inevitable.
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. 
The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small, low-contrast microcalcifications, the FBP reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. Texture analysis showed higher contrast and lower homogeneity for the FBP algorithm than for the other algorithms. Patient images reconstructed with the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms, and considering the type of the breast and the acquisition techniques used (e.g., angular range, dose distribution), should include actual patients or patient-like phantoms to increase the potential for practical applications.
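The CNR figure of merit used in the comparison above can be computed directly from lesion and background samples; one common definition is sketched below with invented numbers:

```python
import numpy as np

# CNR = (mean lesion - mean background) / background standard deviation.
# The pixel statistics below are synthetic, for illustration only.
rng = np.random.default_rng(7)
background = 100 + 5 * rng.standard_normal(5000)   # background ROI pixels
lesion = 120 + 5 * rng.standard_normal(500)        # lesion ROI pixels
cnr = (lesion.mean() - background.mean()) / background.std()
```

Definitions vary (some divide by a pooled standard deviation), so the exact formula should be stated alongside any reported CNR.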
Wind reconstruction algorithm for Viking Lander 1
NASA Astrophysics Data System (ADS)
Kynkäänniemi, Tuomas; Kemppinen, Osku; Harri, Ari-Matti; Schmidt, Walter
2017-06-01
The wind measurement sensors of Viking Lander 1 (VL1) were only fully operational for the first 45 sols of the mission. We have developed an algorithm for reconstructing the wind measurement data after the wind measurement sensor failures. The algorithm for wind reconstruction enables the processing of wind data during the complete VL1 mission. The heater element of the quadrant sensor, which provided auxiliary measurement for wind direction, failed during the 45th sol of the VL1 mission. Additionally, one of the wind sensors of VL1 broke down during sol 378. Regardless of the failures, it was still possible to reconstruct the wind measurement data, because the failed components of the sensors did not prevent the determination of the wind direction and speed, as some of the components of the wind measurement setup remained intact for the complete mission. This article concentrates on presenting the wind reconstruction algorithm and methods for validating the operation of the algorithm. The algorithm enables the reconstruction of wind measurements for the complete VL1 mission. The amount of available sols is extended from 350 to 2245 sols.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.
Li, Liang; Wang, Bigong; Wang, Ge
2016-01-01
In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients, and an initial MRI image of a patient is reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performs significantly better than reconstruction using the DL algorithm from MRI data alone.
Sparsity-constrained PET image reconstruction with learned dictionaries
NASA Astrophysics Data System (ADS)
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to what the other MAP algorithms achieve. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in simulations at various noise levels and in patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
Szostek, Kamil; Piórkowski, Adam
2016-10-01
Ultrasound (US) imaging is one of the most popular techniques used in clinical diagnosis, mainly due to the lack of adverse effects on patients and the simplicity of US equipment. However, the characteristics of the medium cause US imaging to reconstruct examined tissues imprecisely. The artifacts are the result of wave phenomena, i.e. diffraction or refraction, and should be recognized during examination to avoid misinterpretation of a US image. Currently, US training is based on teaching materials and simulators, and ultrasound simulation has become an active research area in medical computer science. Many US simulators are limited by the complexity of the wave phenomena, which leads to computationally intensive simulations that are difficult to run in real time. To achieve the required frame rate, the vast majority of simulators simplify or neglect wave diffraction and refraction. The following paper proposes a solution for an ultrasound simulator based on methods known in geophysics. To improve simulation quality, a wavefront construction method was adapted that takes the refraction phenomena into account. This technique uses ray tracing and velocity averaging to construct wavefronts in the simulation. Instead of a geological medium, real CT scans are applied. This approach can produce more realistic projections of pathological findings and is also capable of providing real-time simulation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Local ROI Reconstruction via Generalized FBP and BPF Algorithms along More Flexible Curves.
Yu, Hengyong; Ye, Yangbo; Zhao, Shiying; Wang, Ge
2006-01-01
We study the local region-of-interest (ROI) reconstruction problem, also referred to as the local CT problem. Our scheme includes two steps: (a) the local truncated normal-dose projections are extended to a global dataset by combining a few global low-dose projections; (b) the ROI is reconstructed by either the generalized filtered backprojection (FBP) or the backprojection-filtration (BPF) algorithm. The simulation results show that both the FBP and BPF algorithms can reconstruct satisfactory results, with image quality in the ROI comparable to that of the corresponding global CT reconstruction.
Dual-view-zone tabletop 3D display system based on integral imaging.
He, Min-Yang; Zhang, Han-Le; Deng, Huan; Li, Xiao-Wei; Li, Da-Hai; Wang, Qiong-Hua
2018-02-01
In this paper, we propose a dual-view-zone tabletop 3D display system based on integral imaging by using a multiplexed holographic optical element (MHOE) that has the optical properties of two sets of microlens arrays. The MHOE is recorded by a reference beam using the single-exposure method. The reference beam records the wavefronts of a microlens array from two different directions. Thus, when the display beam is projected on the MHOE, two wavefronts with the different directions will be rebuilt and the 3D virtual images can be reconstructed in two viewing zones. The MHOE has angle and wavelength selectivity. Under the conditions of the matched wavelength and the angle of the display beam, the diffraction efficiency of the MHOE is greatest. Because the unmatched light just passes through the MHOE, the MHOE has the advantage of a see-through display. The experimental results confirm the feasibility of the dual-view-zone tabletop 3D display system.
Numerical modeling and simulation studies for the M4 adaptive mirror of the E-ELT
NASA Astrophysics Data System (ADS)
Carbillet, Marcel; Riccardi, Armando; Xompero, Marco
2012-07-01
We report in this paper on the progress of numerical modeling and simulation studies of the M4 adaptive mirror, a representative of "adaptive secondary mirror" technology, for the European Extremely Large Telescope (E-ELT). The work is based on both dedicated routines and the existing code of the Software Package CADS. The points addressed are essentially the specific problems raised by this particular type of voice-coil adaptive mirror on the E-ELT: (*) the segmentation of the adaptive mirror, which implies a fitting error due in part to the edges of its six petals, as well as possible co-phasing problems to be evaluated in terms of interaction with the wavefront sensor (here a pyramid); (**) the necessary presence of "master" and "slave" actuators, whose management, in terms of wavefront reconstruction, requires considering different strategies. The ongoing work on the two points above is described in detail, and some preliminary results are given.
Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja
2017-09-01
To evaluate the reproducibility of the expression of the Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). The Slovenian cohort (20 PD patients, 20 NC) was scanned with a Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. The American cohort (20 PD patients, 7 NC) was scanned with a GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously-validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of PDRP to discriminate PD patients from NC, the differences and correlation between the corresponding subject scores, and ROC analysis results across the different reconstruction algorithms. The expression of the PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of the reconstruction algorithm. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in PDRP expression among the algorithms remained within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Jurling, Alden S; Fienup, James R
2014-03-01
Extending previous work by Thurman on wavefront sensing for segmented-aperture systems, we developed an algorithm for estimating segment tips and tilts from multiple point spread functions in different defocused planes. We also developed methods for overcoming two common modes for stagnation in nonlinear optimization-based phase retrieval algorithms for segmented systems. We showed that when used together, these methods largely solve the capture range problem in focus-diverse phase retrieval for segmented systems with large tips and tilts. Monte Carlo simulations produced a rate of success better than 98% for the combined approach.
NOTE: A BPF-type algorithm for CT with a curved PI detector
NASA Astrophysics Data System (ADS)
Tang, Jie; Zhang, Li; Chen, Zhiqiang; Xing, Yuxiang; Cheng, Jianping
2006-08-01
Helical cone-beam CT is used widely nowadays because of its rapid scan speed and efficient utilization of x-ray dose. Recently, an exact reconstruction algorithm for helical cone-beam CT was proposed (Zou and Pan 2004a Phys. Med. Biol. 49 941-59). The algorithm is referred to as a backprojection-filtering (BPF) algorithm. This BPF algorithm for a helical cone-beam CT with a flat-panel detector (FPD-HCBCT) requires minimum data within the Tam-Danielsson window and can naturally address the problem of ROI reconstruction from data truncated in both longitudinal and transversal directions. In practical CT systems, detectors are expensive and always take a very important position in the total cost. Hence, we work on an exact reconstruction algorithm for a CT system with a detector of the smallest size, i.e., a curved PI detector fitting the Tam-Danielsson window. The reconstruction algorithm is derived following the framework of the BPF algorithm. Numerical simulations are done to validate our algorithm in this study.
Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L
2015-07-07
Images of underwater objects are distorted by refraction at the water-glass-air interfaces, and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform algorithm (DLT) to reconstruct position information, which does not model refraction explicitly. Here we present a refraction-corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating the 3D reconstruction error, i.e. the difference between the actual and reconstructed position of a marker. We found that reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and errors are essentially insensitive to camera position and orientation and to the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust, it allows cameras to be oriented at large angles of incidence, and it facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
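The refraction correction at each interface reduces to the vector form of Snell's law. A minimal sketch (not the authors' RCRT code; the function name and flat-interface setup are illustrative assumptions):

```python
import numpy as np

def refract(d_in, normal, n1, n2):
    """Bend a unit ray direction d_in crossing from index n1 into n2.

    `normal` is the unit interface normal pointing into the incident
    medium (against the ray). Returns None on total internal reflection.
    """
    cos_i = -np.dot(normal, d_in)
    r = n1 / n2
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return r * d_in + (r * cos_i - cos_t) * normal
```

Tracing each camera ray through the air-glass-water stack with this rule and intersecting the refracted rays from all cameras yields the corrected 3D marker position.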
Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan
2012-01-01
The primal-dual optimization algorithm developed by Chambolle and Pock (CP) in 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
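To illustrate the prototyping workflow, here is a hedged sketch of one CP instance: 1D ROF (TV) denoising, min_u 0.5*||u - f||^2 + lam*||Du||_1. The toy problem and parameter choices are mine, not from the paper:

```python
import numpy as np

def tv_denoise_cp(f, lam=0.3, n_iter=500):
    """Chambolle-Pock primal-dual iteration for 1D ROF denoising:
    min_u 0.5*||u - f||^2 + lam*||D u||_1, D = forward difference."""
    D = lambda u: np.diff(u)
    # Adjoint of D (negative divergence with zero boundary conditions).
    Dt = lambda p: np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))
    u = f.copy()
    ubar = f.copy()
    p = np.zeros(len(f) - 1)
    sigma = tau = 0.45          # sigma * tau * ||D||^2 < 1 since ||D||^2 <= 4
    for _ in range(n_iter):
        # Dual ascent step + projection onto the l-inf ball of radius lam.
        p = np.clip(p + sigma * D(ubar), -lam, lam)
        # Primal descent step + prox of the quadratic data term.
        u_new = (u - tau * Dt(p) + tau * f) / (1.0 + tau)
        ubar = 2.0 * u_new - u
        u = u_new
    return u
```

Swapping the data term, the linear operator, and the two prox steps is all it takes to retarget the same loop at a CT problem, which is the prototyping convenience the paper demonstrates.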
Accounting for hardware imperfections in EIT image reconstruction algorithms.
Hartinger, Alzbeta E; Gagnon, Hervé; Guardo, Robert
2007-07-01
Electrical impedance tomography (EIT) is a non-invasive technique for imaging the conductivity distribution of a body section. Different types of EIT images can be reconstructed: absolute, time difference and frequency difference. Reconstruction algorithms are sensitive to many errors which translate into image artefacts. These errors generally result from incorrect modelling or inaccurate measurements. Every reconstruction algorithm incorporates a model of the physical set-up which must be as accurate as possible since any discrepancy with the actual set-up will cause image artefacts. Several methods have been proposed in the literature to improve the model realism, such as creating anatomical-shaped meshes, adding a complete electrode model and tracking changes in electrode contact impedances and positions. Absolute and frequency difference reconstruction algorithms are particularly sensitive to measurement errors and generally assume that measurements are made with an ideal EIT system. Real EIT systems have hardware imperfections that cause measurement errors. These errors translate into image artefacts since the reconstruction algorithm cannot properly discriminate genuine measurement variations produced by the medium under study from those caused by hardware imperfections. We therefore propose a method for eliminating these artefacts by integrating a model of the system hardware imperfections into the reconstruction algorithms. The effectiveness of the method has been evaluated by reconstructing absolute, time difference and frequency difference images with and without the hardware model from data acquired on a resistor mesh phantom. Results have shown that artefacts are smaller for images reconstructed with the model, especially for frequency difference imaging.
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without degrading reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample, imaged at the Biomedical Imaging and Therapy bending-magnet beamline of the Canadian Light Source, demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
Modified Polar-Format Software for Processing SAR Data
NASA Technical Reports Server (NTRS)
Chen, Curtis
2003-01-01
HMPF is a computer program that implements a modified polar-format algorithm for processing data from spaceborne synthetic-aperture radar (SAR) systems. Unlike prior polar-format processing algorithms, this algorithm is based on the assumption that the radar signal wavefronts are spherical rather than planar. The algorithm provides for resampling of SAR pulse data from slant range to radial distance from the center of a reference sphere that is nominally the local Earth surface. Then, invoking the projection-slice theorem, the resampled pulse data are Fourier-transformed over radial distance, arranged in the wavenumber domain according to the acquisition geometry, resampled to a Cartesian grid, and inverse-Fourier-transformed. The result of this process is the focused SAR image. HMPF, and perhaps other programs that implement variants of the algorithm, may give better accuracy than do prior algorithms for processing strip-map SAR data from high altitudes and may give better phase preservation relative to prior polar-format algorithms for processing spotlight-mode SAR data.
NASA Astrophysics Data System (ADS)
Luo, Shouhua; Shen, Tao; Sun, Yi; Li, Jing; Li, Guang; Tang, Xiangyang
2018-04-01
In high resolution (microscopic) CT applications, the scan field of view should cover the entire specimen or sample to allow complete data acquisition and image reconstruction. However, truncation may occur in the projection data, resulting in artifacts in reconstructed images. In this study, we propose a low resolution image constrained reconstruction algorithm (LRICR) for interior tomography in microscopic CT at high resolution. In general, multi-resolution acquisition based methods can solve the data truncation problem if the projection data acquired at low resolution are used to fill in the truncated projection data acquired at high resolution. However, most existing methods place quite strict restrictions on the data acquisition geometry, which greatly limits their utility in practice. In the proposed LRICR algorithm, full and partial data acquisitions (scans) at low and high resolutions, respectively, are carried out. Using the image reconstructed from sparse projection data acquired at low resolution as the prior, a microscopic image at high resolution is reconstructed from the truncated projection data acquired at high resolution. Two synthesized digital phantoms, a raw bamboo culm and a specimen of mouse femur were utilized to evaluate and verify the performance of the proposed LRICR algorithm. Compared with the conventional TV minimization based algorithm and the multi-resolution scout-reconstruction algorithm, the proposed LRICR algorithm shows significant improvement in reducing the artifacts caused by data truncation, providing a practical solution for high-quality and reliable interior tomography in microscopic CT applications.
Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng
2016-12-21
In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in the computed tomography algorithm, a 3D phantom model of electrical conductivity is set up. Both sphere scanning and cylinder scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distribution of the eddy current and the acoustic source as well as the acoustic pressure can be obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix together with the reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm are a better match than those using the previous one, the correlation coefficient of sphere scanning geometry is 98.49% and that of cylinder scanning geometry is 94.96%. Comparison between the ideal point transducer and the realistic transducer shows that the correlation coefficients are 90.2% in sphere scanning geometry and 86.35% in cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. 
We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can obviously improve the resolution of the reconstructed image. This study can be applied to analyse the effect of the position of the transducer and the scanning geometry on imaging. It may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.
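The singular-value-decomposition step can be sketched as a truncated pseudo-inverse of the transducer model matrix; the tridiagonal "blur" matrix in the test and the truncation tolerance are illustrative assumptions, not the paper's measured model:

```python
import numpy as np

def truncated_svd_inverse(H, tol=1e-3):
    """Regularized pseudo-inverse of a transducer model matrix H:
    singular values below tol * s_max are discarded before inversion,
    so ill-conditioned directions are not amplified."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)  # s is sorted descending
    return Vt.T @ (s_inv[:, None] * U.T)
```

Applying the resulting matrix to the blurred acoustic-source image undoes the transducer response in the well-conditioned part of its spectrum, which is the deblurring effect the abstract describes.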
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Ma, Liang; Wang, Bin
2018-01-01
In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system relies on an important theoretical result: the linear relation between the mean-square gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the masked detector signal, MDS). The linear dependence between MSG and MDS for point-source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of the MDS will deviate from the theoretical one because of detector noise, and this deviation will affect the correction performance. The theoretical results under noise are obtained through derivation, and the linear relation between MSG and MDS under noise is then examined using the imaging model. Results show that the linear relation between MSG and MDS is maintained well under noise, which provides theoretical support for applications of the model-based WFSless system.
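The two quantities in the MSG-MDS relation can be computed directly from a pupil-plane phase screen. A hedged sketch (grid size, mask radius and the normalization of the second moment are my assumptions for illustration):

```python
import numpy as np

def mds(phase, pupil, mask_radius):
    """Masked detector signal: second moment of the masked focal-plane
    intensity of a point source with pupil-plane aberration `phase`."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    n = psf.shape[0]
    y, x = np.indices(psf.shape) - n // 2
    r2 = x ** 2 + y ** 2
    mask = r2 <= mask_radius ** 2          # circular focal-plane mask
    return np.sum(r2 * psf * mask) / np.sum(psf)

def msg(phase, pupil):
    """Mean-square gradient of the wavefront over the pupil."""
    gy, gx = np.gradient(phase)
    return np.mean((gx ** 2 + gy ** 2)[pupil > 0])
```

Scaling an aberration up increases both quantities together; plotting `mds` against `msg` over many random phase screens is the numerical check of linearity the paper describes.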
2011-01-01
Background Gene regulatory networks play essential roles in living organisms: they control growth, keep internal metabolism running and respond to external environmental changes. Understanding the connections and the activity levels of regulators is important for research on gene regulatory networks. While relevance-score-based algorithms that reconstruct gene regulatory networks from transcriptome data can infer genome-wide networks, they are unfortunately prone to false positive results. Transcription factor activities (TFAs) quantitatively reflect the ability of a transcription factor to regulate target genes. However, classic relevance-score-based gene regulatory network reconstruction algorithms use models that do not include the TFA layer, thus missing a key regulatory element. Results This work integrates TFA prediction algorithms with relevance-score-based network reconstruction algorithms to reconstruct gene regulatory networks with improved accuracy over classic relevance-score-based algorithms. The method is called Gene expression and Transcription factor activity based Relevance Network (GTRNetwork). Different combinations of TFA prediction algorithms and relevance score functions have been tested to find the most efficient combination. When the integrated GTRNetwork method was applied to E. coli data, the reconstructed genome-wide gene regulatory network predicted 381 new regulatory links. The reconstructed network, including the predicted new regulatory links, shows promising biological significance. Many of the new links are verified by known TF binding site information, and many others can be verified from the literature and databases such as EcoCyc. The reconstructed gene regulatory network is applied to a recent transcriptome analysis of E. coli during isobutanol stress.
In addition to the 16 significantly changed TFAs detected in the original paper, another 7 significantly changed TFAs were detected using our reconstructed network. Conclusions The GTRNetwork algorithm introduces the hidden TFA layer into classic relevance-score-based gene regulatory network reconstruction processes. Integrating TFA biological information with regulatory network reconstruction algorithms improves detection of new links and reduces the rate of false positives. The application of GTRNetwork to E. coli gene transcriptome data gives a set of potential regulatory links with promising biological significance for isobutanol stress and other conditions. PMID:21668997
Towards the automatization of the Foucault knife-edge quantitative test
NASA Astrophysics Data System (ADS)
Rodríguez, G.; Villa, J.; Martínez, G.; de la Rosa, I.; Ivanov, R.
2017-08-01
Given the increasing need for simple, economical and reliable methods and instruments for performing quality tests of optical surfaces such as mirrors and lenses, in recent years we resumed the study of the long-forgotten Foucault knife-edge test from the point of view of physical optics. This ultimately yielded a closed mathematical expression that directly relates the knife-edge position along the paraxial displacement axis with the observable irradiance pattern, which later allowed us to propose a quantitative methodology for estimating the wavefront error of an aspherical mirror with precision akin to interferometry. In this work, we present a further improved digital image processing algorithm in which the sigmoidal cost function for calculating the transient slope point of each associated intensity-illumination profile is replaced by a simplified version, making the estimation of the wavefront gradient markedly more stable and efficient. At the same time, the Fourier-based algorithm employed for gradient integration has been replaced by a regularized quadratic cost function that allows a considerably easier introduction of the region of interest (ROI); solved by means of a conjugate-gradient method, it largely increases the overall accuracy and efficiency of the algorithm. This revised methodology can easily be implemented on most single-board microcontrollers on the market, enabling a fully integrated automated test apparatus and opening a realistic path toward a stand-alone optical mirror analyzer prototype.
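The gradient-integration step amounts to a linear least-squares problem: find the wavefront whose finite differences best match the measured gradients. A small dense sketch (the paper solves a regularized version with a conjugate-gradient method; the grid size and forward-difference convention here are my assumptions):

```python
import numpy as np

def integrate_gradients(gx, gy):
    """Least-squares wavefront w from forward-difference gradients:
    gx[i, j] ~ w[i, j+1] - w[i, j],  gy[i, j] ~ w[i+1, j] - w[i, j].
    Dense formulation for small grids; piston is left undetermined."""
    ny, nx = gx.shape
    N = ny * nx
    idx = lambda i, j: i * nx + j
    rows, rhs = [], []
    for i in range(ny):                 # x-gradient equations
        for j in range(nx - 1):
            r = np.zeros(N)
            r[idx(i, j + 1)], r[idx(i, j)] = 1.0, -1.0
            rows.append(r)
            rhs.append(gx[i, j])
    for i in range(ny - 1):             # y-gradient equations
        for j in range(nx):
            r = np.zeros(N)
            r[idx(i + 1, j)], r[idx(i, j)] = 1.0, -1.0
            rows.append(r)
            rhs.append(gy[i, j])
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return w.reshape(ny, nx)
```

On realistic grids the normal equations are sparse and Poisson-like, which is why an iterative conjugate-gradient solver, as in the paper, is the practical choice over this dense formulation.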
Simulation and performance of an artificial retina for 40 MHz track reconstruction
Abba, A.; Bedeschi, F.; Citterio, M.; ...
2015-03-05
We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb at the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We found the performance of the retina pattern-recognition algorithm comparable with that of the full LHCb reconstruction algorithm.
A Survey of the Use of Iterative Reconstruction Algorithms in Electron Microscopy
Otón, J.; Vilas, J. L.; Kazemi, M.; Melero, R.; del Caño, L.; Cuenca, J.; Conesa, P.; Gómez-Blanco, J.; Marabini, R.; Carazo, J. M.
2017-01-01
One of the key steps in Electron Microscopy is the tomographic reconstruction of a three-dimensional (3D) map of the specimen being studied from a set of two-dimensional (2D) projections acquired at the microscope. This tomographic reconstruction may be performed with different reconstruction algorithms that can be grouped into several large families: direct Fourier inversion methods, back-projection methods, Radon methods, or iterative algorithms. In this review, we focus on the latter family of algorithms, explaining the mathematical rationale behind the different algorithms in this family as they have been introduced in the field of Electron Microscopy. We cover their use in Single Particle Analysis (SPA) as well as in Electron Tomography (ET). PMID:29312997
Material Interface Reconstruction in VisIt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meredith, J S
In this paper, we first survey a variety of approaches to material interface reconstruction and their applicability to visualization, and we investigate the details of the current reconstruction algorithm in the VisIt scientific analysis and visualization tool. We then provide a novel implementation of the original VisIt algorithm that makes use of a wide range of the finite element zoo during reconstruction. This approach results in dramatic improvements in quality and performance without sacrificing the strengths of the VisIt algorithm as it relates to visualization.
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is first summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise through bivariate shrinkage and achieve better image reconstruction quality. For overall QBCS reconstruction performance, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. Through an experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
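The iterative projected Landweber core of such frameworks alternates a gradient step on the data fit with a thresholding projection. A hedged sketch, with the identity sparsifying transform and plain soft thresholding standing in for the paper's wavelet-domain entropy-aware model (my simplifying assumptions):

```python
import numpy as np

def projected_landweber(y, Phi, lam=0.05, n_iter=300):
    """Landweber gradient step on ||y - Phi x||^2 followed by a
    soft-threshold projection (ISTA-style iteration); the sparsifying
    transform is the identity in this sketch."""
    x = Phi.T @ y                                  # backprojection start
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2       # step <= 1/L guarantees descent
    for _ in range(n_iter):
        x = x + step * Phi.T @ (y - Phi @ x)       # Landweber step
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x
```

In the QBCS-EPL setting the thresholding factor would instead be driven by the entropy-based bitrate model, and the shrinkage would act on wavelet coefficients rather than on the signal itself.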
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective way to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest-neighbor selection. Building on the sparse-representation approach to super-resolution image reconstruction, a super-resolution reconstruction algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean-distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also yielding better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to the standard MART, with the benefit of reduced computational time.
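The MART family at the heart of these algorithms is a multiplicative update; a minimal generic sketch follows (toy weights and exact synthetic data, not a tomo-PIV implementation — SMART and BIMART differ in applying the corrections simultaneously or blockwise rather than ray by ray).

```python
import numpy as np

def mart(W, p, n_vox, iters=20, mu=1.0):
    """Multiplicative ART: each ray corrects the voxels it crosses by the
    ratio of measured to reprojected intensity, raised to a relaxed power."""
    E = np.ones(n_vox)                       # strictly positive initial guess
    for _ in range(iters):
        for i in range(len(p)):              # one update per camera pixel / ray
            proj = W[i] @ E                  # reprojection of current volume
            if proj > 0 and p[i] > 0:
                E = E * (p[i] / proj) ** (mu * W[i])
    return E

# two voxels seen by three rays; exact data, so MART should recover [2, 3]
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
p = W @ np.array([2.0, 3.0])
E = mart(W, p, n_vox=2)
```

On this consistent toy system the iteration reaches the exact intensities and then stays fixed, since every measured/reprojected ratio equals one.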
NASA Astrophysics Data System (ADS)
Humphries, T.; Winn, J.; Faridani, A.
2017-08-01
Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. However, many CT systems generate x-rays from a polyenergetic spectrum, in which case the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited-angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
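The superiorization pattern itself is compact. A minimal sketch, assuming plain Landweber as the basic algorithm (the paper superiorizes a polyenergetic reconstruction algorithm, which is not reproduced here): TV-reducing perturbations of summable size are interleaved with the unchanged basic iteration, and a perturbation is accepted only if it does not increase TV.

```python
import numpy as np

def tv(x):
    """Anisotropic total variation of a 2-D image."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def tv_subgrad(x):
    """A subgradient of anisotropic TV via finite differences."""
    gx, gy = np.sign(np.diff(x, axis=0)), np.sign(np.diff(x, axis=1))
    g = np.zeros_like(x)
    g[:-1, :] -= gx; g[1:, :] += gx
    g[:, :-1] -= gy; g[:, 1:] += gy
    return g

def superiorized_landweber(A, y, shape, iters=200, a=0.5):
    """Interleave TV-reducing perturbations with plain Landweber steps."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    beta = 1.0
    for _ in range(iters):
        g = tv_subgrad(x.reshape(shape)).ravel()
        n = np.linalg.norm(g)
        if n > 0:
            trial = x - beta * g / n            # candidate perturbation
            if tv(trial.reshape(shape)) <= tv(x.reshape(shape)):
                x = trial                       # accept only if TV non-increasing
        beta *= a                               # summable perturbation sizes
        x = x + step * (A.T @ (y - A @ x))      # basic (unperturbed) iteration
    return x.reshape(shape)

rng = np.random.default_rng(4)
truth = np.zeros((6, 6)); truth[2:5, 2:5] = 1.0      # piecewise-constant phantom
A = rng.standard_normal((30, 36)) / 6.0              # toy linear "scanner"
y = A @ truth.ravel()
rec = superiorized_landweber(A, y, (6, 6))
```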
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan
2010-01-01
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack–Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463
Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B
2010-04-01
Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved by our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
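The energy-minimization model can be sketched with a generic forward-backward splitting loop. This is a CPU toy, not the authors' GPU code: a small random matrix stands in for the cone-beam projector, and the TV proximal step is approximated by a short inner gradient loop on a Huber-smoothed TV, which is an assumption of the sketch rather than the paper's method.

```python
import numpy as np

def huber_tv_grad(u, delta=0.05):
    """Gradient of a Huber-smoothed anisotropic TV penalty."""
    hx = np.clip(np.diff(u, axis=0) / delta, -1.0, 1.0)
    hy = np.clip(np.diff(u, axis=1) / delta, -1.0, 1.0)
    g = np.zeros_like(u)
    g[:-1, :] -= hx; g[1:, :] += hx
    g[:, :-1] -= hy; g[:, 1:] += hy
    return g

def prox_tv(z, t, delta=0.05, inner=15):
    """Approximate prox of t*TV by a few damped gradient steps."""
    u = z.copy()
    step = 1.0 / (1.0 + 8.0 * t / delta)   # keeps the inner loop stable
    for _ in range(inner):
        u -= step * ((u - z) + t * huber_tv_grad(u, delta))
    return u

def fbs_tv(A, y, shape, reg=0.02, iters=60):
    """Forward-backward splitting: gradient step on the data-fidelity term,
    then an (approximate) TV proximal step."""
    x = np.zeros(A.shape[1])
    s = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        x = x - s * (A.T @ (A @ x - y))                  # forward step
        x = prox_tv(x.reshape(shape), s * reg).ravel()   # backward step
    return x.reshape(shape)

rng = np.random.default_rng(5)
truth = np.zeros((6, 6)); truth[1:4, 1:4] = 1.0
A = rng.standard_normal((24, 36)) / 6.0     # stand-in for "few noisy projections"
y = A @ truth.ravel() + 0.01 * rng.standard_normal(24)
rec = fbs_tv(A, y, (6, 6))
```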
Results of the NFIRAOS RTC trade study
NASA Astrophysics Data System (ADS)
Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent L.; Gilles, Luc; Herriot, Glen; Kerley, Daniel A.; Ljusic, Zoran; McVeigh, Eric A.; Prior, Robert; Smith, Malcolm; Wang, Lianqi
2014-07-01
With two large deformable mirrors with a total of more than 7000 actuators that need to be driven from the measurements of six 60x60 LGS WFSs (total 1.23Mpixels) at 800Hz with a latency of less than one frame, NFIRAOS presents an interesting real-time computing challenge. This paper reports on a recent trade study to evaluate which current technology could meet this challenge, with the plan to select a baseline architecture by the beginning of NFIRAOS construction in 2014. We have evaluated a number of architectures, ranging from very specialized layouts with custom boards to more generic architectures made from commercial off-the-shelf units (CPUs with or without accelerator boards). For each architecture, we have found the most suitable algorithm, mapped it onto the hardware and evaluated the performance through benchmarking whenever possible. We have evaluated a large number of criteria, including cost, power consumption, reliability and flexibility, and proceeded with scoring each architecture based on these criteria. We have found that, with today's technology, the NFIRAOS requirements are well within reach of off-the-shelf commercial hardware running a parallel implementation of the straightforward matrix-vector multiply (MVM) algorithm for wave-front reconstruction. Even accelerators such as GPUs and Xeon Phis are no longer necessary. Indeed, we have found that the entire NFIRAOS RTC can be handled by seven 2U high-end PC-servers using 10GbE connectivity. Accelerators are only required for the off-line process of updating the control matrix every ~10 s, as observing conditions change.
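The wavefront-reconstruction MVM itself, and the row-block split that lets several servers share it, can be sketched at toy scale. The sizes below are roughly 10x smaller than NFIRAOS and the matrix values are random placeholders; only the structure (one row block per server, results concatenated) reflects the parallel scheme described.

```python
import numpy as np

rng = np.random.default_rng(6)
# toy scale: 700 actuators, 4320 slope measurements (NFIRAOS is ~10x larger)
n_act, n_slopes = 700, 4320
R = rng.standard_normal((n_act, n_slopes)).astype(np.float32) * 1e-3  # control matrix
s = rng.standard_normal(n_slopes).astype(np.float32)                  # WFS slope vector

a_full = R @ s                                 # single-node matrix-vector multiply

# the same product split into row blocks, one block per server
blocks = np.array_split(R, 7, axis=0)
a_dist = np.concatenate([B @ s for B in blocks])
```

Each server computes only its own actuators' commands, so the blocks require no communication beyond broadcasting the slope vector.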
The phylogeography and spatiotemporal spread of south-central skunk rabies virus.
Kuzmina, Natalia A; Lemey, Philippe; Kuzmin, Ivan V; Mayes, Bonny C; Ellison, James A; Orciari, Lillian A; Hightower, Dillon; Taylor, Steven T; Rupprecht, Charles E
2013-01-01
The south-central skunk rabies virus (SCSK) is the most broadly distributed terrestrial viral lineage in North America. Skunk rabies has not been efficiently targeted by oral vaccination campaigns and represents a natural system of pathogen invasion, yielding insights into rabies emergence. In the present study we reconstructed the spatiotemporal spread of SCSK over the whole territory of its circulation using a combination of Bayesian methods. The analysis, based on 241 glycoprotein gene sequences, demonstrated that SCSK is much more divergent phylogenetically than was appreciated previously. According to our analyses the SCSK originated in the territory of Texas ~170 years ago, and spread geographically during the following decades. The wavefront velocity in the northward direction was significantly greater than in the eastward and westward directions. Rivers (except the Mississippi River and Rio Grande River) did not constitute significant barriers for epizootic spread, in contrast to deserts and mountains. The mean dispersal rate of skunk rabies was lower than those of raccoon and fox rabies. Viral lineages have circulated in their areas with limited evidence of geographic spread over decades. However, spatiotemporal reconstruction shows that after a long period of stability the dispersal rate and wavefront velocity of SCSK are increasing. Our results indicate that there is a need to develop control measures for SCSK, and suggest how such measures can be implemented most efficiently. Our approach can be extrapolated to other rabies reservoirs and used as a tool for investigation of epizootic patterns and planning interventions towards disease elimination.
Laboratory and telescope demonstration of the TP3-WFS for the adaptive optics segment of AOLI
NASA Astrophysics Data System (ADS)
Colodro-Conde, C.; Velasco, S.; Fernández-Valdivia, J. J.; López, R.; Oscoz, A.; Rebolo, R.; Femenía, B.; King, D. L.; Labadie, L.; Mackay, C.; Muthusubramanian, B.; Pérez Garrido, A.; Puga, M.; Rodríguez-Coira, G.; Rodríguez-Ramos, L. F.; Rodríguez-Ramos, J. M.; Toledo-Moreo, R.; Villó-Pérez, I.
2017-05-01
Adaptive Optics Lucky Imager (AOLI) is a state-of-the-art instrument that combines adaptive optics (AO) and lucky imaging (LI) with the objective of obtaining diffraction-limited images at visible wavelengths at mid- and large-size ground-based telescopes. The key innovation of AOLI is the development and use of the new Two Pupil Plane Positions Wavefront Sensor (TP3-WFS). The TP3-WFS, working in the visible band, represents an advance over classical wavefront sensors such as the Shack-Hartmann WFS because it can theoretically use fainter natural reference stars, which would ultimately provide better sky coverage to AO instruments using this newer sensor. This paper describes the software, algorithms and procedures that enabled AOLI to become the first astronomical instrument performing real-time AO corrections in a telescope with this new type of WFS, including the first control-related results at the William Herschel Telescope.
Wavefront sensorless adaptive optics temporal focusing-based multiphoton microscopy
Chang, Chia-Yuan; Cheng, Li-Chung; Su, Hung-Wei; Hu, Yvonne Yuling; Cho, Keng-Chi; Yen, Wei-Chung; Xu, Chris; Dong, Chen Yuan; Chen, Shean-Jen
2014-01-01
Temporal profile distortions reduce excitation efficiency and image quality in temporal focusing-based multiphoton microscopy. To compensate for the distortions, a wavefront sensorless adaptive optics system (AOS) was integrated into the microscope. The feedback control signal of the AOS was acquired from local image intensity maximization via a hill-climbing algorithm. The control signal was then utilized to drive a deformable mirror in such a way as to eliminate the distortions. With the AOS correction, not only is the axial excitation symmetrically refocused, but the axial resolution with full two-photon excited fluorescence (TPEF) intensity is also maintained. Hence, the contrast of the TPEF image of an R6G-doped PMMA thin film is enhanced along with a 3.7-fold increase in intensity. Furthermore, the TPEF image quality of 1 μm fluorescent beads sealed in agarose gel at different depths is improved. PMID:24940539
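A hill-climbing feedback loop of this kind is straightforward to sketch. Here the TPEF intensity metric is mocked by a smooth function of hypothetical DM mode coefficients; in the real system the metric would be read from the camera, and the mode basis and step schedule are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
true_aberration = rng.normal(size=5)          # unknown distortion (5 toy DM modes)

def tpef_metric(dm_coeffs):
    """Stand-in feedback: intensity peaks when the DM cancels the aberration
    (the real system measures local TPEF image intensity instead)."""
    return float(np.exp(-np.sum((dm_coeffs - true_aberration) ** 2)))

def hill_climb(metric, n_modes=5, step=0.5, shrink=0.5, rounds=60):
    """Coordinate-wise hill climbing: try +/- step on each mode, keep any
    move that raises the metric, halve the step once nothing improves."""
    c = np.zeros(n_modes)
    best = metric(c)
    for _ in range(rounds):
        improved = False
        for i in range(n_modes):
            for d in (step, -step):
                trial = c.copy(); trial[i] += d
                m = metric(trial)
                if m > best:
                    c, best, improved = trial, m, True
        if not improved:
            step *= shrink                    # refine once no move helps
    return c, best

coeffs, best = hill_climb(tpef_metric)
```

The metric never decreases during the search, which is the property that makes sensorless correction safe to run in closed loop.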
NASA Astrophysics Data System (ADS)
Tang, Xiangyang
2003-05-01
In multi-slice helical CT, the single-tilted-plane-based reconstruction algorithm has been proposed to combat helical and cone beam artifacts by tilting a reconstruction plane to fit a helical source trajectory optimally. Furthermore, to improve the noise characteristics or dose efficiency of the single-tilted-plane-based reconstruction algorithm, the multi-tilted-plane-based reconstruction algorithm has been proposed, in which the reconstruction plane deviates from the globally optimized pose due to an extra rotation about the third axis. As a result, the capability of suppressing helical and cone beam artifacts in the multi-tilted-plane-based reconstruction algorithm is compromised. An optimized tilted-plane-based reconstruction algorithm is proposed in this paper, in which a matched view weighting strategy is proposed to optimize both the capability of suppressing helical and cone beam artifacts and the noise characteristics. A helical body phantom is employed to quantitatively evaluate the imaging performance of the matched view weighting approach by tabulating the artifact index and noise characteristics, showing that matched view weighting improves both helical artifact suppression and noise characteristics or dose efficiency significantly in comparison to the case in which non-matched view weighting is applied. Finally, it is believed that the matched view weighting approach is of practical importance in the development of multi-slice helical CT, because it maintains the computational structure of fan beam filtered backprojection and demands no extra computational resources.
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographical methods, electron density profiles can also be estimated from the ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate parameters of any existing analytical function which defines electron density with respect to height using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models the wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural characteristics of the ionosphere are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence, in a 3-D voxel structure.
The algorithm is also used for various purposes including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated with the IONOLAB-RAY results both for quiet and disturbed ionospheric states in Central Europe using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better compared to standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in determination of the reflection height (true height) of signals and critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J.; Lang, Michael; Pakin, Scott
2011-09-30
Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contains wavefront processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J; Lang, Michael; Pakin, Scott
2009-01-01
Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
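The dependency structure common to both reports can be illustrated with a serial anti-diagonal sweep. This shows only the ordering constraint (each cell waits for its north and west neighbors); the papers' contribution is how the diagonals are grouped and scheduled across the communication hierarchy, which is not modeled here.

```python
import numpy as np

def wavefront_sweep(a):
    """Visit cells anti-diagonal by anti-diagonal, so every cell is processed
    after its north and west neighbours; the cells on one anti-diagonal are
    exactly those that could run in parallel on different processors. Each
    cell here takes its own value plus the max of its two upstream results."""
    n, m = a.shape
    out = np.zeros_like(a)
    for d in range(n + m - 1):                      # one anti-diagonal per step
        for i in range(max(0, d - m + 1), min(n, d + 1)):
            j = d - i
            up = out[i - 1, j] if i > 0 else 0
            left = out[i, j - 1] if j > 0 else 0
            out[i, j] = a[i, j] + max(up, left)
    return out

result = wavefront_sweep(np.ones((3, 3), dtype=int))
```

On an all-ones grid every cell accumulates the longest monotone path to it, so cell (i, j) ends up holding i + j + 1.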
Comparison of laser ray-tracing and skiascopic ocular wavefront-sensing devices
Bartsch, D-UG; Bessho, K; Gomez, L; Freeman, WR
2009-01-01
Purpose To compare two wavefront-sensing devices based on different principles. Methods Thirty-eight healthy eyes of 19 patients were measured five times in the reproducibility study. Twenty eyes of 10 patients were measured in the comparison study. The Tracey Visual Function Analyzer (VFA), based on the ray-tracing principle, and the Nidek optical pathway difference (OPD)-Scan, based on the dynamic skiascopy principle, were compared. The standard deviation (SD) of root mean square (RMS) errors was compared to verify the reproducibility. We evaluated RMS errors, Zernike terms and conventional refractive indices (Sph, Cyl, Ax, and spherical equivalent). Results In RMS error readings, both devices showed similar ratios of SD to the mean measurement value (VFA: 57.5±11.7%, OPD-Scan: 53.9±10.9%). Comparison on the same eye showed that almost all terms were significantly greater using the VFA than using the OPD-Scan. However, certain high spatial frequency aberrations (tetrafoil, pentafoil, and hexafoil) were consistently measured near zero with the OPD-Scan. Conclusion Both devices showed similar levels of reproducibility; however, there was considerable difference in the wavefront readings between machines when measuring the same eye. Differences in the number of sample points, centration, and measurement algorithms between the two instruments may explain our results. PMID:17571088
NASA Astrophysics Data System (ADS)
Zhou, Meiling; Singh, Alok Kumar; Pedrini, Giancarlo; Osten, Wolfgang; Min, Junwei; Yao, Baoli
2018-03-01
We present a tunable output-frequency filter (TOF) algorithm to reconstruct the object from noisy experimental data under low-power partially coherent illumination, such as LED, when imaging through scattering media. In the iterative algorithm, we employ Gaussian functions with different filter windows at different stages of iteration process to reduce corruption from experimental noise to search for a global minimum in the reconstruction. In comparison with the conventional iterative phase retrieval algorithm, we demonstrate that the proposed TOF algorithm achieves consistent and reliable reconstruction in the presence of experimental noise. Moreover, the spatial resolution and distinctive features are retained in the reconstruction since the filter is applied only to the region outside the object. The feasibility of the proposed method is proved by experimental results.
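The shape of such an iteration can be sketched as follows. This is an illustrative error-reduction-style loop, not the authors' algorithm: the Gaussian window schedule and the support-based blending (smoothing applied only outside the object region, as the abstract describes) are assumptions of the sketch.

```python
import numpy as np

def tof_reconstruct(meas_amp, support, iters=80):
    """Iterative phase retrieval with a tunable Gaussian frequency window:
    the measured Fourier magnitude is enforced each pass, and the region
    outside the object support is replaced by a Gaussian-smoothed copy whose
    window starts narrow (heavy smoothing against noise) and then widens."""
    n = meas_amp.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    f2 = fx ** 2 + fy ** 2
    x = np.ones_like(meas_amp)
    for k in range(iters):
        X = np.fft.fft2(x)
        X = meas_amp * np.exp(1j * np.angle(X))       # keep measured magnitude
        x = np.real(np.fft.ifft2(X))
        sigma = 0.05 + 0.45 * k / max(iters - 1, 1)   # widening filter window
        smooth = np.real(np.fft.ifft2(np.fft.fft2(x) * np.exp(-f2 / (2 * sigma ** 2))))
        x = support * x + (1.0 - support) * smooth    # filter only outside object
    return x

rng = np.random.default_rng(8)
obj = np.zeros((32, 32)); obj[12:20, 12:20] = 1.0
support = (obj > 0).astype(float)
meas = np.abs(np.abs(np.fft.fft2(obj)) + 0.01 * rng.standard_normal((32, 32)))
rec = tof_reconstruct(meas, support)
```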
Three-dimensional dictionary-learning reconstruction of (23)Na MRI data.
Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2016-04-01
To reduce noise and artifacts in (23)Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial (23)Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo (23)Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo (23)Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled (23)Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.
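The sparse-coding step that a learned dictionary is paired with at reconstruction time is commonly orthogonal matching pursuit; a minimal dense-matrix sketch follows (illustrative sizes; the paper's K-SVD training, 3D radial sampling and gridding are not shown, and the abstract does not state which pursuit algorithm was used).

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of dictionary D,
    refitting the coefficients by least squares after every selection."""
    resid, idx = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))   # atom most correlated with residual
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef              # orthogonalized residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(9)
D = rng.standard_normal((64, 32))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
y = 2.0 * D[:, 3] - 1.0 * D[:, 7]         # exactly 2-sparse signal
x = omp(D, y, k=2)
```

With an incoherent dictionary and an exactly 2-sparse signal, the two generating atoms are recovered with their coefficients.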
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-01-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement. PMID:20617122
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm.
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-02-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement.
Objective performance assessment of five computed tomography iterative reconstruction algorithms.
Omotayo, Azeez; Elbakri, Idris
2016-11-22
Iterative algorithms are gaining clinical acceptance in CT. We performed objective phantom-based image quality evaluation of five commercial iterative reconstruction algorithms available on four different multi-detector CT (MDCT) scanners at different dose levels as well as the conventional filtered back-projection (FBP) reconstruction. Using the Catphan500 phantom, we evaluated image noise, contrast-to-noise ratio (CNR), modulation transfer function (MTF) and noise-power spectrum (NPS). The algorithms were evaluated over a CTDIvol range of 0.75-18.7 mGy on four major MDCT scanners: GE DiscoveryCT750HD (algorithms: ASIR™ and VEO™); Siemens Somatom Definition AS+ (algorithm: SAFIRE™); Toshiba Aquilion64 (algorithm: AIDR3D™); and Philips Ingenuity iCT256 (algorithm: iDose4™). Images were reconstructed using FBP and the respective iterative algorithms on the four scanners. Use of iterative algorithms decreased image noise and increased CNR, relative to FBP. In the dose range of 1.3-1.5 mGy, noise reduction using iterative algorithms was in the range of 11%-51% on GE DiscoveryCT750HD, 10%-52% on Siemens Somatom Definition AS+, 49%-62% on Toshiba Aquilion64, and 13%-44% on Philips Ingenuity iCT256. The corresponding CNR increase was in the range 11%-105% on GE, 11%-106% on Siemens, 85%-145% on Toshiba and 13%-77% on Philips respectively. Most algorithms did not affect the MTF, except for VEO™ which produced an increase in the limiting resolution of up to 30%. A shift in the peak of the NPS curve towards lower frequencies and a decrease in NPS amplitude were obtained with all iterative algorithms. VEO™ required long reconstruction times, while all other algorithms produced reconstructions in real time. Compared to FBP, iterative algorithms reduced image noise and increased CNR. The iterative algorithms available on different scanners achieved different levels of noise reduction and CNR increase while spatial resolution improvements were obtained only with VEO™. 
This study is useful in that it provides a performance assessment of the iterative algorithms available from several mainstream CT manufacturers.
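The noise and CNR figures of merit used in such phantom evaluations can be sketched directly from image ROIs. The snippet below is a minimal NumPy illustration on a synthetic image, with ROI positions chosen arbitrarily; it is not the Catphan500 protocol itself.

```python
import numpy as np

def roi_noise(img, roi):
    """Image noise: standard deviation inside a uniform ROI (tuple of slices)."""
    return float(np.std(img[roi]))

def cnr(img, roi_obj, roi_bg):
    """Contrast-to-noise ratio: |mean difference| over background noise."""
    return abs(img[roi_obj].mean() - img[roi_bg].mean()) / img[roi_bg].std()

# synthetic test image: background noise sigma = 1, insert of mean 10
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, size=(64, 64))
img[20:30, 20:30] += 10.0
value = cnr(img, (slice(20, 30), slice(20, 30)), (slice(40, 60), slice(40, 60)))
```

MTF and NPS estimation require edge/wire inserts and ensemble averaging of noise-only ROIs and are omitted here.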
PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.
Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge
2005-08-01
Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector assembly remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only algorithm capable of exactly reconstructing region-of-interest images from data containing transverse truncations.
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured light 3D reconstruction. Structured light images contain a pattern of light and shadows projected on the surface of the object, which are captured by the sensor at very high resolutions. Our algorithm is concerned with compressing such images to a high degree with minimum loss without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size Algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size Algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and more accurate reconstruction of the 3D models.
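The first stage of the pipeline, a single-level 2-D DWT splitting the image into LL, LH, HL, and HH sub-bands, can be sketched with the Haar wavelet (the simplest choice; the abstract does not commit to a particular wavelet, and the DCT and Minimize-Matrix-Size stages are omitted here):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns LL, LH, HL, HH sub-bands.
    img must have even dimensions; averages/differences of 2x2 blocks."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)
```

Each sub-band has half the resolution of the input; in the described scheme the LL band is further transformed by DCT and decomposed a second time.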
Region-of-interest image reconstruction in circular cone-beam microCT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Seungryong; Bian, Junguo; Pelizzari, Charles A.
2007-12-15
Cone-beam microcomputed tomography (microCT) is one of the most popular choices for small animal imaging, which is becoming an important tool for studying animal models with transplanted diseases. Region-of-interest (ROI) imaging techniques in CT, which can reconstruct an ROI image from the projection data set of the ROI, can be used not only for reducing imaging-radiation exposure to the subject and scatter to the detector but also for potentially increasing the spatial resolution of the reconstructed images. Increasing spatial resolution in microCT images can facilitate improved accuracy in many assessment tasks. A method proposed previously for increasing CT image spatial resolution entails the exploitation of the geometric magnification in cone-beam CT. Due to finite detector size, however, this method can lead to data truncation for a large geometric magnification. The Feldkamp-Davis-Kress (FDK) algorithm yields images with artifacts when truncated data are used, whereas the recently developed backprojection filtration (BPF) algorithm is capable of reconstructing ROI images without truncation artifacts from truncated cone-beam data. We apply the BPF algorithm to reconstructing ROI images from truncated data of three different objects acquired by our circular cone-beam microCT system. Reconstructed images by use of the FDK and BPF algorithms from both truncated and nontruncated cone-beam data are compared. The results of the experimental studies demonstrate that, from certain truncated data, the BPF algorithm can reconstruct ROI images with quality comparable to that reconstructed from nontruncated data. In contrast, the FDK algorithm yields ROI images with truncation artifacts. Therefore, an implication of the studies is that, when truncated data are acquired with a configuration of a large geometric magnification, the BPF algorithm can be used for effective enhancement of the spatial resolution of a ROI image.
NASA Astrophysics Data System (ADS)
Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei
2018-02-01
Based on the discrete algebraic reconstruction technique (DART), this study aims to address and test a new improved algorithm applied to incomplete projection data to generate a high quality reconstruction image by reducing the artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian based on compressed sensing is first used in the initial reconstruction for segmentation of the DART to get higher contrast graphics for boundary and non-boundary pixels. Then, the block matching 3D filtering operator was used to suppress the noise and to improve the gray distribution of the reconstructed image. Finally, simulation studies on the polychromatic spectrum were performed to test the performance of the new algorithm. Study results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data. The SNRs and AGs of the new images reconstructed by DART-ALBM were on average 30%-40% and 10% higher than the images reconstructed by DART algorithms. Since the improved DART-ALBM algorithm has a better robustness to limited-view reconstruction, which not only makes the edge of the image clear but also makes the gray distribution of non-boundary pixels better, it has the potential to improve image quality from incomplete projections or sparse projections.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
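A schematic of the underlying optimization, gradient descent on a Shannon-type negative entropy plus a quadratic data-fidelity term, is sketched below. This is a plain, unaccelerated illustration with an arbitrary Lagrange weight and step size, not the CAMERA algorithm or its NMR-specific entropy functional:

```python
import numpy as np

def maxent_objective(x, A, b, lam):
    # negative entropy (Shannon-type) plus quadratic data-fidelity term
    return float(np.sum(x * np.log(x) - x) + 0.5 * lam * np.sum((A @ x - b) ** 2))

def maxent_gd(A, b, lam=1.0, step=0.05, iters=500):
    """Fixed-step gradient descent on the regularized MaxEnt objective."""
    x = np.ones(A.shape[1])                    # feasible positive start
    for _ in range(iters):
        grad = np.log(x) + lam * (A.T @ (A @ x - b))
        x = np.clip(x - step * grad, 1e-8, None)   # keep x strictly positive
    return x

# toy underdetermined problem: 8 measurements of a 16-point positive spectrum
rng = np.random.default_rng(2)
A = rng.normal(size=(8, 16)) / 4.0
b = A @ np.abs(rng.normal(size=16))
x = maxent_gd(A, b)
```

Accelerated first-order schemes such as CAMERA replace this fixed-step descent with momentum-based updates and problem-specific entropy functionals.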
NASA Astrophysics Data System (ADS)
Tang, Shaojie; Tang, Xiangyang
2016-03-01
Axial cone-beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As potential candidates with analytic form for the task, the backprojection-filtration (BPF) and derivative backprojection filtered (DBPF) algorithms, which share Hilbert filtering as a common algorithmic feature, were originally derived for exact helical and axial reconstruction from CB and fan-beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via the adoption of virtual PI-line segments. Unfortunately, however, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPF/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by the post-BP Hilbert transform can be eliminated, at a possible expense of losing the BPF/DBPF capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of axial CB-BPF/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in a region of interest (ROI).
Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A
2004-07-01
To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms. Twenty-five patients with smoker's emphysema and 25 patients with alpha1-antitrypsin deficiency underwent 3 scans at 2-week intervals. A low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. The reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume-adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, the bone algorithm and a very low radiation dose result in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantitating both subtypes of emphysema. A CT protocol with a radiation dose down to 16 mAs and a soft or detail reconstruction algorithm is recommended.
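The two densitometric indices, PD-15 and RA-910, reduce to simple histogram operations on the lung HU values; below is a minimal NumPy sketch on synthetic data, not the Pulmo-CMS implementation:

```python
import numpy as np

def pd15(hu):
    """15th percentile density: the HU value below which 15% of lung voxels lie."""
    return float(np.percentile(hu, 15))

def ra910(hu):
    """Relative area below -910 HU, as a percentage of lung voxels."""
    return float(100.0 * np.mean(hu < -910.0))

# synthetic lung histogram: normal-ish voxels plus an emphysematous tail
hu = np.concatenate([np.full(900, -850.0), np.full(100, -950.0)])
```

On this toy distribution 10% of voxels fall below -910 HU, so RA-910 is 10%, while PD-15 sits in the -850 HU population.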
Exact BPF and FBP algorithms for nonstandard saddle curves.
Yu, Hengyong; Zhao, Shiying; Ye, Yangbo; Wang, Ge
2005-11-01
A hot topic in cone-beam CT research is exact cone-beam reconstruction from a general scanning trajectory. In particular, a nonstandard saddle curve attracts attention, as this construct allows continuous periodic scanning of a volume-of-interest (VOI). Here we evaluate two algorithms for reconstruction from data collected along a nonstandard saddle curve, which are in the filtered backprojection (FBP) and backprojection filtration (BPF) formats, respectively. Both algorithms are implemented in a chord-based coordinate system. A rebinning procedure is then utilized to transform the reconstructed results into the natural coordinate system. The simulation results demonstrate that the FBP algorithm produces better image quality than the BPF algorithm, while both exhibit similar noise characteristics.
Use of the Hotelling observer to optimize image reconstruction in digital breast tomosynthesis
Sánchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2015-01-01
We propose an implementation of the Hotelling observer that can be applied to the optimization of linear image reconstruction algorithms in digital breast tomosynthesis. The method is based on considering information within a specific region of interest, and it is applied to the optimization of algorithms for detectability of microcalcifications. Several linear algorithms are considered: simple back-projection, filtered back-projection, back-projection filtration, and Λ-tomography. The optimized algorithms are then evaluated through the reconstruction of phantom data. The method appears robust across algorithms and parameters and leads to the generation of algorithm implementations which subjectively appear optimized for the task of interest. PMID:26702408
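The Hotelling observer itself reduces to a linear template w = S^-1 (mean difference) with detectability SNR^2 = dm^T S^-1 dm; below is a minimal sample-based sketch on toy 5-pixel "images", not the paper's ROI-restricted implementation:

```python
import numpy as np

def hotelling_snr(signal_imgs, noise_imgs):
    """Hotelling observer detectability from two sample image ensembles.
    Template w = S^-1 dm, with dm the class mean difference and S the
    average of the two class covariance matrices; returns SNR."""
    dm = signal_imgs.mean(axis=0) - noise_imgs.mean(axis=0)
    S = 0.5 * (np.cov(signal_imgs.T) + np.cov(noise_imgs.T))
    w = np.linalg.solve(S, dm)
    return float(np.sqrt(dm @ w))

# toy ensembles: unit-variance noise, signal adds 1.0 to the first pixel
rng = np.random.default_rng(3)
n, d = 2000, 5
noise = rng.normal(size=(n, d))
signal = rng.normal(size=(n, d)) + np.array([1.0, 0, 0, 0, 0])
snr = hotelling_snr(signal, noise)
```

For this toy model the exact SNR is 1; restricting the computation to an ROI, as the paper proposes, keeps S small enough to estimate and invert reliably.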
NASA Technical Reports Server (NTRS)
Whitmore, S. A.
1985-01-01
The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the space shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
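The innovations-based consistency check that motivates the adaptive criterion can be sketched with a scalar random-walk Kalman filter: the innovations v_k = z_k - x_k^- should be zero-mean with predicted variance S_k = P_k^- + R. This is a toy illustration with arbitrary model parameters, not the shuttle air-data filter:

```python
import numpy as np

def kalman_innovations(z, q, r):
    """Scalar random-walk Kalman filter; returns state estimates and the
    innovation sequence (with its predicted variances) used for adaptive tuning."""
    x, p = 0.0, 1.0
    xs, nus, ss = [], [], []
    for zk in z:
        p = p + q                 # predict (process noise q)
        nu = zk - x               # innovation (H = 1)
        s = p + r                 # predicted innovation variance
        k = p / s                 # Kalman gain
        x = x + k * nu            # measurement update
        p = (1.0 - k) * p
        xs.append(x); nus.append(nu); ss.append(s)
    return np.array(xs), np.array(nus), np.array(ss)

rng = np.random.default_rng(4)
z = 5.0 + rng.normal(0.0, 0.5, size=400)      # noisy measurements of a constant
xs, nus, ss = kalman_innovations(z, q=1e-4, r=0.25)
# consistency statistic used by adaptive schemes: normalized innovations ~ 1
nis = np.mean(nus**2 / ss)
```

When the assumed noise statistics match the data, the normalized innovation squared stays near 1; a sustained departure signals that Q or R should be adapted.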
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, T; UT Southwestern Medical Center, Dallas, TX; Yan, H
2014-06-15
Purpose: To develop a 3D dictionary learning based statistical reconstruction algorithm on graphics processing units (GPU) to improve the quality of low-dose cone beam CT (CBCT) imaging with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3x3x3 voxels was trained from a high quality volume image. During reconstruction, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find a sparse representation on this dictionary basis of each patch in the reconstructed image, in order to regularize the image quality. To accelerate the time-consuming sparse coding in the 3D case, we implemented our algorithm in a parallel fashion by taking advantage of the tremendous computational power of the GPU. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with a tight frame (TF) based one using a subset of 121 projections. The image qualities under different resolutions in the z-direction, with or without statistical weighting, are also studied. Results: Compared to TF-based CBCT reconstruction, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, to remove more streaking artifacts, and is less susceptible to blocky artifacts. It is also observed that the statistical reconstruction approach is sensitive to inconsistency between the forward and backward projection operations in parallel computing. Using a high spatial resolution along the z direction helps improve the algorithm's robustness. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to sense structural information while suppressing noise, and hence to achieve high quality reconstruction.
The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. A high z-resolution is preferred to stabilize statistical iterative reconstruction. This work was supported in part by NIH (1R01CA154747-01), NSFC (No. 61172163), the Research Fund for the Doctoral Program of Higher Education of China (No. 20110201110011), and the China Scholarship Council.
NASA Astrophysics Data System (ADS)
Choi, Doo-Won; Jeon, Min-Gyu; Cho, Gyeong-Rae; Kamimoto, Takahiro; Deguchi, Yoshihiro; Doh, Deog-Hee
2016-02-01
Performance improvement was attained in data reconstructions of 2-dimensional tunable diode laser absorption spectroscopy (TDLAS). The Multiplicative Algebraic Reconstruction Technique (MART) algorithm was adopted for data reconstruction. The data obtained in an experiment for the measurement of temperature and concentration fields of gas flows were used. The measurement theory is based upon the Beer-Lambert law, and the measurement system consists of a tunable laser, collimators, detectors, and an analyzer. Methane was used as a fuel for combustion with air in the Bunsen-type burner. The data used for the reconstruction are the optical signals of 8 laser beams passing through a cross-section of the methane flame. The performance of the MART algorithm in data reconstruction was validated and compared with that obtained by the Algebraic Reconstruction Technique (ART) algorithm.
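The MART update is a multiplicative correction applied ray by ray to the discretized field, with the measured path integrals (Beer-Lambert absorbances) as targets. Below is a minimal sketch on a consistent 2x2 toy system; the relaxation factor and geometry are arbitrary choices, not the paper's 8-beam setup:

```python
import numpy as np

def mart(W, p, iters=200, relax=0.5):
    """Multiplicative Algebraic Reconstruction Technique.
    W: (n_rays, n_cells) path-length weights; p: measured path integrals
    (e.g. -ln(I/I0) absorbances from the Beer-Lambert law)."""
    f = np.ones(W.shape[1])                       # positive initial field
    for _ in range(iters):
        for i in range(W.shape[0]):
            proj = W[i] @ f                       # current ray projection
            if proj > 0:
                # multiplicative correction of the cells along ray i
                f *= (p[i] / proj) ** (relax * W[i] / W[i].max())
    return f

# toy 2x2 field probed by 4 rays (2 horizontal + 2 vertical)
W = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
f_true = np.array([1.0, 2.0, 3.0, 4.0])
p = W @ f_true
f_rec = mart(W, p)
```

With only four rays the system is underdetermined, so MART converges to a positive solution consistent with the measurements rather than necessarily to f_true; this positivity constraint is one reason MART suits absorption tomography.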
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data is an ill-posed inverse problem: a sum of several exponentials leads to a Fredholm integral equation of the first kind, which is ill-posed. Therefore, the stability of the reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions of surviving fractions and doubling times that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed.
We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
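The stabilizing effect of regularization on exponential-sum fitting can be illustrated on a simplified linear sub-problem: fitting amplitudes for fixed rates with a Tikhonov penalty. This toy sketch is not the paper's two-level cell population model or its simulated-annealing reconstruction:

```python
import numpy as np

def fit_exp_amplitudes(t, y, rates, alpha=0.0):
    """Fit y(t) ~ sum_j c_j * exp(rates[j] * t) for the amplitudes c_j.
    The Tikhonov term alpha*||c||^2 stabilizes the ill-conditioned
    exponential-sum fit (normal equations with a ridge)."""
    E = np.exp(np.outer(t, rates))                 # design matrix
    A = E.T @ E + alpha * np.eye(len(rates))
    return np.linalg.solve(A, E.T @ y)

# synthetic "tumor response" curve: decaying plus slowly regrowing component
t = np.linspace(0.0, 10.0, 50)
y = 3.0 * np.exp(-0.5 * t) + 1.0 * np.exp(0.05 * t)
c = fit_exp_amplitudes(t, y, rates=np.array([-0.5, 0.05]), alpha=1e-8)
```

When the rates themselves are unknown, the problem becomes the nonlinear, severely ill-posed one the paper addresses, and the regularization acts on the full parameter set rather than on amplitudes alone.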
Stand-alone scattering optical device using holographic photopolymer (Conference Presentation)
NASA Astrophysics Data System (ADS)
Park, Jongchan; Lee, KyeoReh; Park, YongKeun
2016-03-01
When light propagates through a highly disordered medium, its optical parameters, such as amplitude, phase, and polarization state, are completely scrambled because of multiple scattering events. Since multiple scattering is a fundamental optical process with extremely many degrees of freedom, the optical information of the transmitted light is thoroughly mixed. Until recently, the presence of multiple scattering in an inhomogeneous medium was considered a major obstacle to manipulating light transmitted through the medium. However, the recent development of wavefront shaping techniques enables us to control the propagation of light through turbid media; light transmitted through a turbid medium can be effectively controlled by modulating the spatial profile of the incident light with a spatial light modulator. In this work, a stand-alone scattering optical device is proposed: a holographic photopolymer film, which is much more economical than digital spatial light modulators, is used to record and reconstruct a permanent wavefront that generates an optical field behind a scattering medium. With our method, an arbitrary optical field can be generated, since the scattering medium completely mixes all the optical parameters, which allows us to access all the optical information by modulating only the spatial phase profile of the impinging wavefront. The method is experimentally demonstrated in both the far-field and near-field regimes, where it shows promising fidelity and stability. The proposed stand-alone scattering optical device will open up new avenues for exploiting the randomness inherent in disordered media.
An Evaluation of Optical Aberrations in Underwater Hologrammetry
NASA Astrophysics Data System (ADS)
Kilpatrick, J. M.
Available from UMI in association with The British Library. An iterative ray-trace procedure is developed in conjunction with semi-analytic expressions for spherical aberration, coma, and astigmatism in the reconstructed holographic images of underwater objects. An exact expression for the astigmatic difference is obtained, based on the geometry of the caustic for refraction. The geometrical characteristics of the aberrated images associated with axial and non-axial field positions are represented by ray intersection diagrams. A third order expression for the wavefront aberration introduced at a planar air/water boundary is given. The associated third order aberration coefficients are used to obtain analytic expressions for the aberrations observed in underwater hologrammetry. The results of the third order treatment are shown to give good agreement with the results obtained by geometrical ray tracing and by direct measurement on the reconstructed real image. The third order aberration coefficients are employed to estimate the limit of resolution in the presence of the aberrations associated with reconstruction in air. In concurrence with practical observations it is found that the estimated resolution is primarily limited by astigmatism. The limitations of the planar window in underwater imaging applications are outlined and various schemes are considered to effect a reduction in the extent of aberration. The analogous problems encountered in underwater photography are examined in order to establish the grounds for a common solution based on a conventional optical corrector. The performance of one such system, the Ivanoff Corrector, is investigated. The spherical aberration associated with axial image formation is evaluated. 
The equivalence of the third order wavefront aberration introduced at a planar air/water boundary to that introduced upon reconstruction by an appropriate wavelength change is shown to provide a basis for the compensation of aberrations in underwater hologrammetry. The results of experimental trials which demonstrate the correction of astigmatism and field curvature are presented. Exact expressions are obtained for the aberrations in wavelength compensated holograms and are employed to determine the conditions for optimum compensation and the degree of residual aberration. (Abstract shortened by UMI.).
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms in the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. The theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2017-01-01
Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS)-based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence. Many speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up convergence, we applied the power factor and the fast iterative shrinkage thresholding algorithm (FISTA) to OSTR and TDM-STF, respectively. Results from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the added computation time (≤10%) was minor compared with the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
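The FISTA acceleration mentioned above can be sketched on a plain ℓ1-regularized least-squares problem. This is a minimal illustration of the momentum scheme, not the authors' OSTR + TDM-STF pipeline; `fista_l1` is an illustrative name.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=300):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    for _ in range(n_iter):
        # gradient step on the smooth term, then soft-threshold (shrinkage)
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)
        # momentum update that gives the O(1/k^2) convergence rate
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```

The same shrinkage-plus-momentum structure is what speeds up soft-threshold-filtering iterations in general.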
Comparison of SeaWinds Backscatter Imaging Algorithms
Long, David G.
2017-01-01
This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs of conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared with high-spatial-resolution imaging algorithms such as fine-resolution DIB (fDIB) and scatterometer image reconstruction (SIR), which generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including both linear-space and dB-space computation. The effects of sampling density and of reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR, as well as its limitations and those of DIB and fDIB. PMID:28828143
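A minimal ‘drop in the bucket’ gridding step might look like the sketch below, assuming a regular lat/lon grid and linear-space averaging. Function and parameter names are illustrative, not SeaWinds processing code.

```python
import numpy as np

def dib_grid(lats, lons, sigma0, grid_shape, extent):
    """'Drop in the bucket' gridding: average, in linear space, every
    measurement whose center falls inside each grid cell."""
    lat0, lat1, lon0, lon1 = extent
    ny, nx = grid_shape
    # map each measurement center to a cell index
    iy = np.clip(((lats - lat0) / (lat1 - lat0) * ny).astype(int), 0, ny - 1)
    ix = np.clip(((lons - lon0) / (lon1 - lon0) * nx).astype(int), 0, nx - 1)
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape)
    np.add.at(acc, (iy, ix), sigma0)   # unbuffered accumulation per cell
    np.add.at(cnt, (iy, ix), 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), np.nan)
```

Reconstruction methods such as SIR instead invert the overlapping antenna response functions, which is why they can exceed the intrinsic cell resolution.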
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes a collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed by range compression into a series of 2-D slice reconstruction problems. The slices are then reconstructed by a modified SL0 (smoothed l0 norm) algorithm, which uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and the Newton direction instead of the steepest-descent direction, speeding up the convergence of the SL0 algorithm. Finally, numerical simulation results demonstrate the effectiveness of the proposed algorithm: compared with the existing 3-D sparse imaging method, it performs better in both reconstruction quality and reconstruction time.
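The SL0 family of methods can be sketched as follows, using the tanh surrogate for the l0 norm described above but plain gradient steps rather than the paper's Newton direction; the step size and schedule are illustrative assumptions.

```python
import numpy as np

def sl0_tanh(A, b, sigma_min=1e-3, decay=0.7, mu=0.5, inner=5):
    """SL0-style sparse recovery: minimize sum_i tanh(x_i^2 / sigma^2)
    subject to Ax = b, annealing sigma from large to small."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                     # minimum-l2-norm feasible start
    sigma = 2 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            u = x * x / sigma ** 2
            # d/dx tanh(x^2/sigma^2) = sech^2(x^2/sigma^2) * 2x/sigma^2
            grad = (2 * x / sigma ** 2) / np.cosh(u) ** 2
            x = x - mu * sigma ** 2 * grad       # descent on the surrogate
            x = x - A_pinv @ (A @ x - b)         # project back onto Ax = b
        sigma *= decay
    return x
```

As sigma shrinks, the surrogate approaches the true l0 count, so small components are driven to zero while large ones are left untouched.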
GREIT: a unified approach to 2D linear EIT reconstruction of lung images.
Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K
2009-06-01
Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.
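The third step, optimizing a linear reconstruction matrix against desired outputs on training data, can be sketched in simplified form as a regularized least-squares fit. This is in the spirit of GREIT's framework, not the published figure-of-merit optimization; names and the regularization choice are illustrative.

```python
import numpy as np

def train_linear_reconstructor(V, X, noise_var=1e-2):
    """Fit a linear reconstruction matrix R minimizing ||R v - x||^2 over
    training pairs: columns of V are simulated measurement frames, columns
    of X the corresponding desired (target) images."""
    m = V.shape[0]
    # regularized normal equations; noise_var trades resolution for
    # noise amplification, echoing the figure-of-merit constraints
    return X @ V.T @ np.linalg.inv(V @ V.T + noise_var * np.eye(m))

# usage: one reconstructed image per measurement frame
# images = R @ measurements
```

Once trained, reconstruction is a single matrix-vector product per frame, which is what makes linear EIT algorithms fast enough for continuous bedside monitoring.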
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear, ill-posed inverse problem; therefore, some regularization is needed. We present a mixture-of-Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of the mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme using zeroth-order Tikhonov regularization with variable mean and variance, and an expectation-maximization scheme for estimating the mixture-model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
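The mixture-model estimation step can be sketched with a minimal 1-D EM loop over pixel values. This is an illustration only; the paper operates on reconstructed optical parameters inside the full Bayesian iteration, and the deterministic quantile initialization here is an assumption.

```python
import numpy as np

def gmm_em_1d(x, k=2, n_iter=50):
    """EM for a 1-D Gaussian mixture. The responsibilities r act as soft
    class labels, from which a per-pixel prior mean (r @ mu) and variance
    can be formed to regularize the next reconstruction step."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # deterministic init
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior class responsibilities for every sample
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters from the responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12
        pi = nk / x.size
    return mu, var, pi, r
```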
Reconstruction of a digital core containing clay minerals based on a clustering algorithm.
He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling
2017-10-01
It is difficult to obtain core samples and supporting information for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Meanwhile, the reconstruction and division of clay minerals play a vital role in digital core reconstruction, even though two-dimensional data-based reconstruction methods are well suited as microstructure simulation methods for sandstone reservoirs. Reconstructing the various clay minerals in a digital core, however, remains a research challenge. In the present work, the content of clay minerals was accounted for on the basis of two-dimensional information about the reservoir. After applying the hybrid method, the output was a digital core containing clay clusters without labels for cluster number, size, or texture; its statistics and geometry were similar to those of a reference model reconstructed by the process-based method. The Hoshen-Kopelman algorithm was then used to label the connected, unclassified clay clusters in this initial model, and the number and size of the clusters were recorded. The K-means clustering algorithm was subsequently applied to divide the labeled large connected clusters into smaller ones on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as type, texture, and distribution, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the resulting digital core were reasonable. The clustering algorithm thus improves digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
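The Hoshen-Kopelman labeling stage can be sketched with a small union-find implementation on a 2-D binary grid (a generic 4-connected version, not the authors' 3-D code):

```python
import numpy as np

def hoshen_kopelman(grid):
    """Label 4-connected clusters of nonzero cells (Hoshen-Kopelman
    union-find). Returns an int array; 0 marks background."""
    labels = np.zeros(grid.shape, dtype=int)
    parent = [0]                       # parent[i] == i means i is a root

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    nxt = 1
    ny, nx = grid.shape
    for y in range(ny):
        for x in range(nx):
            if not grid[y, x]:
                continue
            up = labels[y - 1, x] if y else 0
            left = labels[y, x - 1] if x else 0
            if up and left:
                a, b = find(up), find(left)
                parent[max(a, b)] = min(a, b)   # merge the two clusters
                labels[y, x] = min(a, b)
            elif up or left:
                labels[y, x] = find(up or left)
            else:
                parent.append(nxt)              # start a new cluster
                labels[y, x] = nxt
                nxt += 1
    # second pass: flatten every cell to its root label
    for y in range(ny):
        for x in range(nx):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
```

Cluster sizes can then be counted from the label array, and the large clusters passed on to a K-means step for subdivision.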