Sample records for Zernike polynomial expansion

  1. Zernike expansion of derivatives and Laplacians of the Zernike circle polynomials.

    PubMed

    Janssen, A J E M

    2014-07-01

The partial derivatives and Laplacians of the Zernike circle polynomials occur in various places in the literature on computational optics. In a number of cases, the expansion of these derivatives and Laplacians in the circle polynomials is required. For the first-order partial derivatives, analytic results are scattered in the literature. Results start as early as 1942 in Nijboer's thesis and continue to the present day, with some emphasis on recursive computation schemes. A brief historic account of these results is given in the present paper. By choosing the unnormalized version of the circle polynomials, with exponential rather than trigonometric azimuthal dependence, and by a proper combination of the two partial derivatives, a concise form of the expressions emerges. This form is appropriate for the formulation and solution of a model wavefront sensing problem of reconstructing a wavefront on the level of its expansion coefficients from (measurements of the expansion coefficients of) the partial derivatives. It turns out that the least-squares estimation problem arising here decouples per azimuthal order m, and per m the generalized inverse solution assumes a concise analytic form so that singular value decompositions are avoided. The preferred version of the circle polynomials, with proper combination of the partial derivatives, also leads to a concise analytic result for the Zernike expansion of the Laplacian of the circle polynomials. From these expansions, the properties of the Laplacian as a mapping from the space of circle polynomials of maximal degree N, as required in the study of the Neumann problem associated with the transport-of-intensity equation, can be read off at a single glance. Furthermore, the inverse of the Laplacian on this space is shown to have a concise analytic form.
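The sparsity of such derivative expansions can be checked numerically. The sketch below is not from the paper (it uses the real trigonometric basis rather than the exponential one the author prefers): it expands d/dx of the radial polynomial R_4^0 = 6r^4 - 6r^2 + 1 in low-order circle polynomials by least squares and recovers the known sparse result dR_4^0/dx = 4 Z(1,1) + 8 Z(3,1).

```python
# Sketch (not the paper's method): least-squares expansion of the x-derivative
# of R_4^0 in the real, unnormalized Zernike basis up to radial degree 3.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(20000, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]   # keep samples inside the unit disk
x, y = pts[:, 0], pts[:, 1]
r, th = np.hypot(x, y), np.arctan2(y, x)

# unnormalized real Zernike terms up to radial degree 3
basis = {
    "Z(0,0)":  np.ones_like(r),
    "Z(1,1)":  r * np.cos(th),
    "Z(1,-1)": r * np.sin(th),
    "Z(2,0)":  2 * r**2 - 1,
    "Z(2,2)":  r**2 * np.cos(2 * th),
    "Z(2,-2)": r**2 * np.sin(2 * th),
    "Z(3,1)":  (3 * r**3 - 2 * r) * np.cos(th),
    "Z(3,-1)": (3 * r**3 - 2 * r) * np.sin(th),
    "Z(3,3)":  r**3 * np.cos(3 * th),
    "Z(3,-3)": r**3 * np.sin(3 * th),
}
A = np.column_stack(list(basis.values()))
target = 24 * x * r**2 - 12 * x       # analytic d/dx of 6r^4 - 6r^2 + 1
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
fit = dict(zip(basis, coeffs))        # expect 4 on Z(1,1), 8 on Z(3,1), 0 elsewhere
```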

  2. Interbasis expansions in the Zernike system

    NASA Astrophysics Data System (ADS)

    Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by ₃F₂(⋯|1) polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by ₄F₃(⋯|1)'s and Racah polynomials that are related to the Wigner 6j coefficients.

  3. New separated polynomial solutions to the Zernike system on the unit disk and interbasis expansion.

    PubMed

    Pogosyan, George S; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation proposed by Frits Zernike to obtain a basis of polynomial orthogonal solutions on the unit disk to classify wavefront aberrations in circular pupils is shown to have a set of new orthonormal solution bases involving Legendre and Gegenbauer polynomials in nonorthogonal coordinates, close to Cartesian ones. We find the overlaps between the original Zernike basis and a representative of the new set, which turn out to be Clebsch-Gordan coefficients.

  4. Zernike Basis to Cartesian Transformations

    NASA Astrophysics Data System (ADS)

    Mathar, R. J.

    2009-12-01

    The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
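The tabulation of radial polynomials as powers of the radial distance follows from a standard closed-form sum; a minimal sketch of the 2D circular case (not the paper's tables, which also cover the 3D spherical case and the reciprocal expansion):

```python
# Monomial coefficients of the Zernike radial polynomial R_n^m via the
# standard finite sum; e.g. R_4^0 -> 6 r^4 - 6 r^2 + 1.
from math import factorial

def radial_coeffs(n, m):
    """Return {power: coefficient} for R_n^m(r)."""
    m = abs(m)
    if (n - m) % 2:
        return {}                      # R_n^m vanishes when n - m is odd
    out = {}
    for s in range((n - m) // 2 + 1):
        c = (-1) ** s * factorial(n - s) // (
            factorial(s)
            * factorial((n + m) // 2 - s)
            * factorial((n - m) // 2 - s)
        )
        out[n - 2 * s] = c
    return out
```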

  5. Non-axisymmetric Aberration Patterns from Wide-field Telescopes Using Spin-weighted Zernike Polynomials

    DOE PAGES

    Kent, Stephen M.

    2018-02-15

If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.

  6. Non-axisymmetric Aberration Patterns from Wide-field Telescopes Using Spin-weighted Zernike Polynomials

    NASA Astrophysics Data System (ADS)

    Kent, Stephen M.

    2018-04-01

    If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.

  7. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. The Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to produce a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.

  8. Mathematics of Zernike polynomials: a review.

    PubMed

    McAlinden, Colm; McCartney, Mark; Moore, Jonathan

    2011-11-01

Monochromatic aberrations of the eye principally originate from the cornea and the crystalline lens. Aberrometers operate via differing principles but function by either analysing the reflected wavefront from the retina or by analysing an image on the retina. Aberrations may be described as lower order or higher order aberrations, with Zernike polynomials being the most commonly employed fitting method. The complex mathematical aspects of the Zernike polynomial expansion series are detailed in this review. Refractive surgery has been a key clinical application of aberrometers; however, more recently aberrometers have been used in a range of other areas of ophthalmology, including corneal disease, cataract and retinal imaging. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.

  9. Uniform versus Gaussian Beams: A Comparison of the Effects of Diffraction, Obscuration, and Aberrations.

    DTIC Science & Technology

    1985-12-16

    balancing is discussed for the two types of beams. Zernike polynomials representing balanced primary aberrations for uniform and Gaussian annular beams... plotted on a logarithmic scale (Figs. 3c and 3d). The positions of maxima and minima and the corresponding irradiance and encircled-power values are... aberration (representing a term in the expansion of the aberration in terms of a set of "Zernike" polynomials which are orthonormal over the amplitude

  10. Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror

    NASA Astrophysics Data System (ADS)

    Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu

    2017-02-01

    For a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The correction of optical aberrations by the system is simulated, and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's ability to correct wave aberrations described by Zernike polynomials 3-20 is analyzed in the presence of different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the system's correction ability for Zernike aberrations 3-9 is higher than that for aberrations 10-20, and the correction ability for aberrations 3-20 does not change with changing misalignment error. As the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for aberrations 3-20 gradually decreases; as the translation error increases, the correction ability for aberrations 3-9 gradually decreases, while that for aberrations 10-20 fluctuates.

  11. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  12. Ligand Electron Density Shape Recognition Using 3D Zernike Descriptors

    NASA Astrophysics Data System (ADS)

    Gunasekaran, Prasad; Grandison, Scott; Cowtan, Kevin; Mak, Lora; Lawson, David M.; Morris, Richard J.

    We present a novel approach to crystallographic ligand density interpretation based on Zernike shape descriptors. Electron density for a bound ligand is expanded in an orthogonal polynomial series (3D Zernike polynomials) and the coefficients from this expansion are employed to construct rotation-invariant descriptors. These descriptors can be compared highly efficiently against large databases of descriptors computed from other molecules. In this manuscript we describe this process and show initial results from an electron density interpretation study on a dataset containing over a hundred OMIT maps. We could identify the correct ligand as the first hit in about 30% of the cases, within the top five in a further 30% of the cases, and giving rise to an 80% probability of getting the correct ligand within the top ten matches. In all but a few examples, the top hit was highly similar to the correct ligand in both shape and chemistry. Further extensions and intrinsic limitations of the method are discussed.
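The rotation invariance that makes these descriptors comparable across poses can be illustrated in the 2D analogue (the record itself concerns 3D Zernike moments): the magnitude of a Zernike moment is unchanged when the input function is rotated. A sketch, with an arbitrary test function and grid size:

```python
# 2D analogue of rotation-invariant Zernike descriptors: |A_{3,1}| of a
# function over the unit disk equals |A_{3,1}| of its rotated copy.
import numpy as np

def zernike_moment_31(f, K=600):
    """Moment A_{3,1} = integral of f * R_3^1(r) * exp(-i*theta) over the
    unit disk, midpoint rule on a K x K Cartesian grid."""
    h = 2.0 / K
    c = -1 + h * (np.arange(K) + 0.5)
    X, Y = np.meshgrid(c, c)
    r = np.hypot(X, Y)
    th = np.arctan2(Y, X)
    kernel = (3 * r**3 - 2 * r) * np.exp(-1j * th)
    return np.sum(f(X, Y) * kernel * (r <= 1.0)) * h * h

f = lambda x, y: x * (1 - x**2 - y**2)       # arbitrary test shape (zero at rim)
alpha = 0.7                                  # arbitrary rotation angle (radians)
f_rot = lambda x, y: f(np.cos(alpha) * x + np.sin(alpha) * y,
                       -np.sin(alpha) * x + np.cos(alpha) * y)
a0 = abs(zernike_moment_31(f))               # analytically pi/24 for this f
a1 = abs(zernike_moment_31(f_rot))           # same magnitude after rotation
```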

  13. Alignment-independent comparison of binding sites based on DrugScore potential fields encoded by 3D Zernike descriptors.

    PubMed

    Nisius, Britta; Gohlke, Holger

    2012-09-24

    Analyzing protein binding sites provides detailed insights into the biological processes proteins are involved in, e.g., into drug-target interactions, and so is of crucial importance in drug discovery. Herein, we present novel alignment-independent binding site descriptors based on DrugScore potential fields. The potential fields are transformed to a set of information-rich descriptors using a series expansion in 3D Zernike polynomials. The resulting Zernike descriptors show a promising performance in detecting similarities among proteins with low pairwise sequence identities that bind identical ligands, as well as within subfamilies of one target class. Furthermore, the Zernike descriptors are robust against structural variations among protein binding sites. Finally, the Zernike descriptors show a high data compression power, and computing similarities between binding sites based on these descriptors is highly efficient. Consequently, the Zernike descriptors are a useful tool for computational binding site analysis, e.g., to predict the function of novel proteins, off-targets for drug candidates, or novel targets for known drugs.

  14. A recursive algorithm for Zernike polynomials

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    The analysis of a function defined on a rotationally symmetric system, with either a circular or annular pupil, is discussed. In order to numerically analyze such systems it is typical to expand the given function in terms of a class of orthogonal polynomials. Because of their particular properties, the Zernike polynomials are especially suited for numerical calculations. A recursive algorithm that can be used to generate the Zernike polynomials up to a given order is developed. The algorithm is recursively defined over J, where R(J,N) is the Zernike polynomial of degree N obtained by orthogonalizing the sequence R(J), R(J+2), ..., R(J+2N) over (epsilon, 1). The terms in the preceding row - the (J-1) row - up to the N+1 term are needed for generating the (J,N)th term. Thus, the algorithm generates an upper left-triangular table. The algorithm was implemented, together with the necessary support program.
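The paper's algorithm orthogonalizes over (epsilon, 1) for annular pupils; as a simpler illustration of recursive generation in the circular case (epsilon = 0), the sketch below evaluates radial polynomials with the standard three-term recurrence R_n^m(r) = r[R_{n-1}^{|m-1|}(r) + R_{n-1}^{m+1}(r)] - R_{n-2}^m(r):

```python
# Recursive evaluation of Zernike radial polynomials R_n^m(r) for the
# circular pupil (a different recurrence than the paper's annular scheme).
from functools import lru_cache

@lru_cache(maxsize=None)
def R(n, m, r):
    m = abs(m)
    if m > n or (n - m) % 2:
        return 0.0            # R_n^m vanishes for these index combinations
    if n == m:
        return r ** n         # base case, includes R_0^0 = 1
    # three-term recurrence in the radial degree
    return r * (R(n - 1, abs(m - 1), r) + R(n - 1, m + 1, r)) - R(n - 2, m, r)
```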

  15. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relation database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector, with an axial-mode helical antenna as feed, is further conducted to verify the correctness of the proposed method. Finally, the influence of surface error distribution on electromagnetic performance is summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new approach to reflector shape adjustment in the manufacturing process.

  16. Determination of the paraxial focal length using Zernike polynomials over different apertures

    NASA Astrophysics Data System (ADS)

    Binkele, Tobias; Hilbig, David; Henning, Thomas; Fleischmann, Friedrich

    2017-02-01

    The paraxial focal length is still the most important parameter in the design of a lens. As presented at SPIE Optics + Photonics 2016, the measured focal length is a function of the aperture. The paraxial focal length can be found when the aperture approaches zero. In this work, we investigate the dependency of the Zernike polynomials on the aperture size with respect to 3D space. By this, conventional wavefront measurement systems that apply Zernike polynomial fitting (e.g. a Shack-Hartmann sensor) can be used to determine the paraxial focal length, too. Since the Zernike polynomials are orthogonal over a unit circle, the aperture used in the measurement has to be normalized. By shrinking the aperture and maintaining the normalization, the Zernike coefficients change. The relation between these changes and the paraxial focal length is investigated. The dependency of the focal length on the aperture size is derived analytically and evaluated by simulation and measurement of a strongly focusing lens. The measurements are performed using experimental ray tracing and a Shack-Hartmann sensor. With experimental ray tracing, the aperture can be chosen easily; with the Shack-Hartmann sensor, the aperture size is fixed, so the Zernike polynomials have to be adapted to different aperture sizes by the proposed method. In both cases, the paraxial focal length can then be determined from the measurements.
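How the coefficients change under aperture shrinking can be illustrated for the simplest rotationally symmetric case: pure defocus a4(2ρ² - 1), refit over a sub-aperture of radius fraction ε (with the coordinate renormalized to the unit disk), picks up a piston offset and a defocus coefficient scaled by ε². A sketch, not the paper's general method:

```python
# Refit a pure-defocus wavefront over a renormalized sub-aperture and
# observe the epsilon^2 scaling of the defocus coefficient.
import numpy as np

a4 = 0.8                     # full-aperture defocus coefficient (arbitrary units)
eps = 0.5                    # sub-aperture radius as a fraction of the full pupil
rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0, 1, 50000)) * eps   # area-uniform radii in sub-aperture
W = a4 * (2 * rho**2 - 1)    # wavefront evaluated on the sub-aperture

rho_n = rho / eps            # renormalize the sub-aperture to the unit disk
A = np.column_stack([np.ones_like(rho_n), 2 * rho_n**2 - 1])   # piston, defocus
(c_piston, c_defocus), *_ = np.linalg.lstsq(A, W, rcond=None)
# expected: c_defocus = a4 * eps**2, c_piston = a4 * (eps**2 - 1)
```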

  17. Zernike analysis of all-sky night brightness maps.

    PubMed

    Bará, Salvador; Nievas, Miguel; Sánchez de Miguel, Alejandro; Zamorano, Jaime

    2014-04-20

    All-sky night brightness maps (calibrated images of the night sky with hemispherical field-of-view (FOV) taken at standard photometric bands) provide useful data to assess the light pollution levels at any ground site. We show that these maps can be efficiently described and analyzed using Zernike circle polynomials. The relevant image information can be compressed into a low-dimensional coefficients vector, giving an analytical expression for the sky brightness and alleviating the effects of noise. Moreover, the Zernike expansions allow us to quantify in a straightforward way the average and zenithal sky brightness and its variation across the FOV, providing a convenient framework to study the time course of these magnitudes. We apply this framework to analyze the results of a one-year campaign of night sky brightness measurements made at the UCM observatory in Madrid.
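A minimal sketch of the compression step described above: a synthetic "sky brightness" map built from a few low-order Zernike terms plus noise is reduced to a short coefficient vector by least squares (the brightness values and noise level are invented for illustration):

```python
# Least-squares Zernike fit of a noisy synthetic all-sky map; the handful of
# recovered coefficients summarizes the whole map.
import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(-1, 1, (40000, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]   # hemispherical FOV -> unit disk
x, y = pts.T
r, th = np.hypot(x, y), np.arctan2(y, x)

basis = np.column_stack([
    np.ones_like(r),                    # piston: average brightness
    r * np.cos(th), r * np.sin(th),     # tilt: brightness gradient across the FOV
    2 * r**2 - 1,                       # defocus: zenith-to-horizon profile
])
coef_true = np.array([21.0, 0.3, -0.1, 0.5])        # made-up "brightness" terms
sky = basis @ coef_true + 0.05 * rng.standard_normal(len(r))  # noisy map samples

coef_fit, *_ = np.linalg.lstsq(basis, sky, rcond=None)   # compressed description
```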

  18. Method for Expressing Clinical and Statistical Significance of Ocular and Corneal Wavefront Error Aberrations

    PubMed Central

    Smolek, Michael K.

    2011-01-01

    Purpose The significance of ocular or corneal aberrations may be subject to misinterpretation whenever eyes with different pupil sizes or the application of different Zernike expansion orders are compared. A method is shown that uses simple mathematical interpolation techniques based on normal data to rapidly determine the clinical significance of aberrations, without concern for pupil and expansion order. Methods Corneal topography (Tomey, Inc.; Nagoya, Japan) from 30 normal corneas was collected and the corneal wavefront error analyzed by Zernike polynomial decomposition into specific aberration types for pupil diameters of 3, 5, 7, and 10 mm and Zernike expansion orders of 6, 8, 10 and 12. Using this 4×4 matrix of pupil sizes and fitting orders, best-fitting 3-dimensional functions were determined for the mean and standard deviation of the RMS error for specific aberrations. The functions were encoded into software to determine the significance of data acquired from non-normal cases. Results The best-fitting functions for 6 types of aberrations were determined: defocus, astigmatism, prism, coma, spherical aberration, and all higher-order aberrations. A clinical screening method of color-coding the significance of aberrations in normal, postoperative LASIK, and keratoconus cases having different pupil sizes and different expansion orders is demonstrated. Conclusions A method to calibrate wavefront aberrometry devices by using a standard sample of normal cases was devised. This method could be potentially useful in clinical studies involving patients with uncontrolled pupil sizes or in studies that compare data from aberrometers that use different Zernike fitting-order algorithms. PMID:22157570

  19. Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials.

    PubMed

    Zhao, Chunyu; Burge, James H

    2007-12-24

    Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
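The construction can be mimicked numerically: sample the Cartesian gradients of a few low-order Zernike terms on the disk, stack each vector field into one long column, and orthonormalize with a QR factorization (equivalent to Gram-Schmidt up to signs). A sketch, not the authors' analytic construction:

```python
# Numerical orthonormalization of Zernike-gradient vector fields on the disk.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, (30000, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]
x, y = pts.T

# Cartesian gradients (dZ/dx, dZ/dy) of low-order Zernike terms
grads = [
    (np.ones_like(x), np.zeros_like(x)),   # grad of x        (tilt)
    (np.zeros_like(x), np.ones_like(x)),   # grad of y        (tilt)
    (4 * x, 4 * y),                        # grad of 2r^2 - 1 (defocus)
    (2 * x, -2 * y),                       # grad of x^2 - y^2 (astigmatism)
    (2 * y, 2 * x),                        # grad of 2xy       (astigmatism)
]
# one long column per vector field: [u-samples; v-samples]
G = np.column_stack([np.concatenate(g) for g in grads]) / np.sqrt(len(x))
Q, _ = np.linalg.qr(G)        # columns of Q: orthonormal vector polynomials
gram = Q.T @ Q                # should be the 5 x 5 identity
```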

  20. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomials by theoretical analysis and numerical experiments in terms of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness, even in the case of a wavefront with incomplete data.

  1. Comparison of the Performance of Modal Control Schemes for an Adaptive Optics System and Analysis of the Effect of Actuator Limitations

    DTIC Science & Technology

    2012-06-01

    the open-loop path is established, the feedback system can be treated as a set of SISO feedback loops and a single SISO control law can be applied... Zernike polynomials are commonly referred to by names such as focus, coma, astigmatism, etc. Zernike polynomials can be transformed into

  2. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via exponentially or linearly weighted moving average in time, are then retrieved from the APC system to apply on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials have the property of being orthogonal in the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to POR and showed a 7% reduction in overlay variation, including a 22% reduction in terms variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in a 0.1% yield improvement.
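The collinearity argument can be illustrated with condition numbers: on points sampled over a unit disk, a design matrix of Zernike terms is far better conditioned than the Cartesian monomials spanning the same space. A sketch (illustrative only; it does not reproduce the APC setup):

```python
# Condition numbers of degree-3 monomial vs. Zernike design matrices on a disk.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(-1, 1, (20000, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]
x, y = pts.T
r, th = np.hypot(x, y), np.arctan2(y, x)

# plain Cartesian monomials up to 3rd order (collinear on the disk)
mono = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                        x**3, y**3, x**2 * y, x * y**2])
# real Zernike terms spanning the same polynomial space
zern = np.column_stack([
    np.ones_like(r),
    r * np.cos(th), r * np.sin(th),
    2 * r**2 - 1, r**2 * np.cos(2 * th), r**2 * np.sin(2 * th),
    (3 * r**3 - 2 * r) * np.cos(th), (3 * r**3 - 2 * r) * np.sin(th),
    r**3 * np.cos(3 * th), r**3 * np.sin(3 * th),
])
cond_mono = np.linalg.cond(mono)
cond_zern = np.linalg.cond(zern)   # near-orthogonal columns -> small condition number
```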

  3. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurement of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, which must then be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The criterion of optimization is the nearest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations; in particular, the values of the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients recalculated, and the value of RMSD computed. Optimization finishes at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.

  4. Quantum superintegrable Zernike system

    NASA Astrophysics Data System (ADS)

    Pogosyan, George S.; Salto-Alegre, Cristina; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-07-01

    We consider the differential equation that Zernike proposed to classify aberrations of wavefronts in a circular pupil, whose value at the boundary can be nonzero. On this account, the quantum Zernike system, where that differential equation is seen as a Schrödinger equation with a potential, is special in that it has a potential and a boundary condition that are not standard in quantum mechanics. We project the disk on a half-sphere and there we find that, in addition to polar coordinates, this system separates into two additional coordinate systems (non-orthogonal on the pupil disk), which lead to Schrödinger-type equations with Pöschl-Teller potentials, whose eigen-solutions involve Legendre, Gegenbauer, and Jacobi polynomials. This provides new expressions for separated polynomial solutions of the original Zernike system that are real. The operators which provide the separation constants are found to participate in a superintegrable cubic Higgs algebra.

  5. Spatial modeling and classification of corneal shape.

    PubMed

    Marsolo, Keith; Twa, Michael; Bullimore, Mark A; Parthasarathy, Srinivasan

    2007-03-01

    One of the most promising applications of data mining is in biomedical data used in patient diagnosis. Any method of data analysis intended to support the clinical decision-making process should meet several criteria: it should capture clinically relevant features, be computationally feasible, and provide easily interpretable results. In an initial study, we examined the feasibility of using Zernike polynomials to represent biomedical instrument data in conjunction with a decision tree classifier to distinguish between diseased and non-diseased eyes. Here, we provide a comprehensive follow-up to that work, examining a second representation, pseudo-Zernike polynomials, to determine whether they provide any increase in classification accuracy. We compare the fidelity of both methods using residual root-mean-square (rms) error and evaluate accuracy using several classifiers: neural networks, C4.5 decision trees, Voting Feature Intervals, and Naïve Bayes. We also examine the effect of several meta-learning strategies: boosting, bagging, and Random Forests (RFs). We present results comparing accuracy as it relates to dataset and transformation resolution over a larger, more challenging multi-class dataset. They show that classification accuracy is similar for both data transformations, but differs by classifier. We find that the Zernike polynomials provide better feature representation than the pseudo-Zernikes and that the decision trees yield the best balance of classification accuracy and interpretability.

  6. Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levinson, Zachary; Verduijn, Erik; Wood, Obert R.

    2016-04-01

    Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.

  7. Imaging characteristics of Zernike and annular polynomial aberrations.

    PubMed

    Mahajan, Virendra N; Díaz, José Antonio

    2013-04-01

    The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.

  8. Partial null astigmatism-compensated interferometry for a concave freeform Zernike mirror

    NASA Astrophysics Data System (ADS)

    Dou, Yimeng; Yuan, Qun; Gao, Zhishan; Yin, Huimin; Chen, Lu; Yao, Yanxia; Cheng, Jinlong

    2018-06-01

    Partial null interferometry without using any null optics is proposed to measure a concave freeform Zernike mirror. Oblique incidence on the freeform mirror is used to compensate for astigmatism as the main component in its figure, and to constrain the divergence of the test beam as well. The phase demodulated from the partial nulled interferograms is divided into low-frequency phase and high-frequency phase by Zernike polynomial fitting. The low-frequency surface figure error of the freeform mirror represented by the coefficients of Zernike polynomials is reconstructed from the low-frequency phase, applying the reverse optimization reconstruction technology in the accurate model of the interferometric system. The high-frequency surface figure error of the freeform mirror is retrieved from the high-frequency phase adopting back propagating technology, according to the updated model in which the low-frequency surface figure error has been superimposed on the sag of the freeform mirror. Simulations verified that this method is capable of testing a wide variety of astigmatism-dominated freeform mirrors due to the high dynamic range. The experimental result using our proposed method for a concave freeform Zernike mirror is consistent with the null test result employing the computer-generated hologram.

  9. Use Of Zernike Polynomials And Interferometry In The Optical Design And Assembly Of Large Carbon-Dioxide Laser Systems

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. K.

    1982-02-01

    This paper describes the need for non-raytracing schemes in the optical design and analysis of large carbon-dioxide lasers like the Gigawatt, Gemini, and Helios lasers currently operational at Los Alamos, and the Antares laser fusion system under construction. The scheme currently used at Los Alamos involves characterizing the various optical components with Zernike polynomial sets obtained by digitization of experimentally produced interferograms of the components. A Fast Fourier Transform code then propagates the complex amplitude and phase of the beam through the whole system and computes the optical parameters of interest. The analysis scheme is illustrated through examples of the Gigawatt, Gemini, and Helios systems. A possible way of using the Zernike polynomials in optical design problems of this type is discussed. Comparisons between the computed values and experimentally obtained results are made, and it is concluded that this appears to be a valid approach. As this is a review article, some previously published results are also used where relevant.
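
    The propagation step described here can be sketched with a generic angular-spectrum calculation (an illustration only, not the Los Alamos code; the grid size, beam width, and defocus amplitude are made-up values):

```python
import numpy as np

# Apply an aberration phase (as would come from a Zernike fit of an
# interferogram) to a Gaussian field, then propagate it a distance z by
# multiplying its spectrum by the free-space transfer function.
n, dx, wl, z = 128, 1e-4, 10.6e-6, 0.5       # grid, pitch (m), CO2 wavelength, distance (m)
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (3e-3) ** 2).astype(complex)   # Gaussian beam
rho2 = (X**2 + Y**2) / (4e-3) ** 2                             # normalized radius squared
field *= np.exp(1j * 2 * np.pi * 0.1 * (2 * rho2 - 1))         # 0.1-wave defocus term

fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(0.0, 1 / wl**2 - FX**2 - FY**2)))
out = np.fft.ifft2(np.fft.fft2(field) * H)   # propagated complex amplitude
```

    Since |H| = 1 for propagating frequencies, the transfer function only rearranges the phase; total beam power is conserved by the propagation.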

  10. Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.

    PubMed

    Schneider, Martin; Iskander, D Robert; Collins, Michael J

    2009-02-01

    High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.
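
    The rational-function idea can be illustrated with a toy one-dimensional version (hypothetical low-order radial terms; the paper's actual model uses full Zernike numerators and denominators and a Levenberg-Marquardt refinement). Multiplying through by the denominator turns it into a linear least-squares problem, a common way to get a first coefficient estimate:

```python
import numpy as np

# Linearized estimate of a Zernike-style rational model
#   z(rho) ~ (a0 + a1*rho + a2*(2*rho**2 - 1)) / (1 + b1*rho)
# Clearing the denominator gives z = a0 + a1*rho + a2*(2*rho**2-1) - b1*rho*z,
# which is linear in (a0, a1, a2, b1) and solvable by ordinary least squares.
rho = np.linspace(0, 1, 50)
z = (1 + 0.5 * rho) / (1 + 0.2 * rho)        # synthetic "corneal" profile

A = np.column_stack([np.ones_like(rho), rho, 2 * rho**2 - 1, -rho * z])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
a0, a1, a2, b1 = coeffs
z_fit = (a0 + a1 * rho + a2 * (2 * rho**2 - 1)) / (1 + b1 * rho)
rms = np.sqrt(np.mean((z_fit - z) ** 2))     # residual rms surface error
```

    In practice the linearized estimate would seed the nonlinear Levenberg-Marquardt iteration the authors describe.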

  11. Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao

    2017-10-01

    Recently, freeform surface widely using to the optical system; because it is have advance of optical image and freedom available to improve the optical performance. For freeform optical fabrication by integrating freeform optical design, precision freeform manufacture, metrology freeform optics and freeform compensate method, to modify the form deviation of surface, due to production process of freeform lens ,compared and provides more flexibilities and better performance. This paper focuses on the fabrication and correction of the free-form surface. In this study, optical freeform surface using multi-axis ultra-precision manufacturing could be upgrading the quality of freeform. It is a machine equipped with a positioning C-axis and has the CXZ machining function which is also called slow tool servo (STS) function. The freeform compensate method of Zernike polynomials results successfully verified; it is correction the form deviation of freeform surface. Finally, the freeform surface are measured experimentally by Ultrahigh Accurate 3D Profilometer (UA3P), compensate the freeform form error with Zernike polynomial fitting to improve the form accuracy of freeform.

  12. Zernike ultrasonic tomography for fluid velocity imaging based on pipeline intrusive time-of-flight measurements.

    PubMed

    Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy

    2014-11-01

    In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Having intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) at the input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series, relying on the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity, by employing SVD-based filtering in selecting the Zernike polynomials to be included in the series. The first approach, Tikhonov regularization without filtering, turns out to be the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show the relative residual norm and the error in flow estimation to be, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a flow physical model.
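
    The regularization step can be sketched generically (a synthetic ill-conditioned matrix stands in for the actual path-geometry operator, and λ is a made-up value):

```python
import numpy as np

# Tikhonov-regularized recovery of series coefficients from an
# ill-conditioned linear system A c = y.  The smallest singular modes
# amplify measurement noise in the naive least-squares solution; the
# regularizer damps them.
rng = np.random.default_rng(0)
U, _, Vt = np.linalg.svd(rng.standard_normal((40, 15)), full_matrices=False)
A = U @ np.diag(np.logspace(0, -8, 15)) @ Vt      # condition number ~1e8
c_true = rng.standard_normal(15)
y = A @ c_true + 1e-6 * U[:, -1]                  # noise along the weakest mode

lam = 1e-5                                        # regularization strength
c_tik = np.linalg.solve(A.T @ A + lam * np.eye(15), A.T @ y)
c_naive, *_ = np.linalg.lstsq(A, y, rcond=None)   # unregularized comparison
```

    Here the naive solution picks up an error of roughly 1e-6/1e-8 = 100 along the weakest singular direction, while the Tikhonov solution trades that amplification for a small bias on the poorly observed coefficients.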

  13. Smoothing optimization of supporting quadratic surfaces with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu

    2018-03-01

    A new optimization method for obtaining a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighboring quadratic surfaces and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by stitching the local solutions of the above algorithm over the whole initial surface. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.

  14. Optical alignment procedure utilizing neural networks combined with Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Adil, Fatime Zehra; Konukseven, Erhan İlhan; Balkan, Tuna; Adil, Ömer Faruk

    2017-05-01

    In the design of pilot helmets with night vision capability, a transparent visor is used so as not to limit or block the sight of the pilot. The reflected image from the coated part of the visor must coincide with the physical human sight image seen through the nonreflecting regions of the visor. This makes the alignment of the visor halves critical. In essence, this is an alignment problem of two optical parts that are assembled together during the manufacturing process. A Shack-Hartmann wavefront sensor is commonly used for the determination of the misalignments through wavefront measurements, which are quantified in terms of the Zernike polynomials. Although the Zernike polynomials provide very useful feedback about the misalignments, the corrective actions are basically ad hoc. This stems from the fact that there exists no easy inverse relation between the misalignment measurements and the physical causes of the misalignments. This study aims to construct this inverse relation by making use of the expressive power of neural networks in such complex relations. For this purpose, a neural network is designed and trained in MATLAB® regarding which types of misalignments result in which wavefront measurements, quantitatively given by Zernike polynomials. This way, manual and iterative alignment processes relying on trial and error will be replaced by the trained guesses of a neural network, so the alignment process is reduced to applying the counter-actions based on the misalignment causes. Such a training requires data containing misalignment and measurement sets in fine detail, which is hard to obtain manually on a physical setup. For that reason, the optical setup is completely modeled in Zemax® software, and Zernike polynomials are generated for misalignments applied in small steps. The performance of the neural network was tested on the actual physical setup and found promising.

  15. Model-based multi-fringe interferometry using Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed, based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least-squares method. Experiments show that the proposed method obtains satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost) and is easy to implement, without any phase unwrapping.

  16. Cylinder surface test with Chebyshev polynomial fitting method

    NASA Astrophysics Data System (ADS)

    Yu, Kui-bang; Guo, Pei-ji; Chen, Xi

    2017-10-01

    The Zernike polynomial fitting method is often applied in the testing of optical components and systems to represent the wavefront and surface error over a circular domain. Zernike polynomials are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures, such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over a rectangular area, as a substitute in the fitting method solves this problem. For a cylinder surface with a diameter of 50 mm and an F-number of 1/7, a measuring system based on Fizeau interferometry has been designed in Zemax. The expressions for the two-dimensional Chebyshev polynomials are given, and their relationship to the aberrations is presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data. The coefficients of the different terms are obtained from the test data by the method of least squares. Comparing the Chebyshev spectra under different misalignments shows that each misalignment is independent and has a definite relationship with certain Chebyshev terms. The simulation results show that the Chebyshev polynomial fitting method greatly improves the efficiency of detection and adjustment in cylinder surface testing.
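
    The least-squares fitting step over a rectangular aperture can be sketched with numpy's 2D Chebyshev design matrix (synthetic surface data; the actual interferometer model lives in Zemax):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit products T_i(x)*T_j(y) of Chebyshev polynomials to surface data on a
# rectangular aperture, with x and y normalized to [-1, 1].
x = np.linspace(-1, 1, 21)
y = np.linspace(-1, 1, 11)
X, Y = np.meshgrid(x, y)
Z = 0.3 * X + 0.1 * (2 * Y**2 - 1) + 0.05 * X * Y   # synthetic surface error

V = C.chebvander2d(X.ravel(), Y.ravel(), [3, 3])    # design matrix, T_i(x)T_j(y)
coeffs, *_ = np.linalg.lstsq(V, Z.ravel(), rcond=None)
coeffs = coeffs.reshape(4, 4)                       # coeffs[i, j] multiplies T_i(x)T_j(y)
```

    The recovered spectrum isolates each term: tilt in x appears in coeffs[1, 0], the y-focus term in coeffs[0, 2], and the cross term in coeffs[1, 1], which is the property that lets misalignments be read off from individual Chebyshev coefficients.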

  17. Characteristic of entire corneal topography and tomography for the detection of sub-clinical keratoconus with Zernike polynomials using Pentacam.

    PubMed

    Xu, Zhe; Li, Weibo; Jiang, Jun; Zhuang, Xiran; Chen, Wei; Peng, Mei; Wang, Jianhua; Lu, Fan; Shen, Meixiao; Wang, Yuanyuan

    2017-11-28

    The study aimed to characterize the entire corneal topography and tomography for the detection of sub-clinical keratoconus (KC) with a Zernike polynomial fitting method. Normal subjects (n = 147; 147 eyes), sub-clinical KC patients (n = 77; 77 eyes), and KC patients (n = 139; 139 eyes) were imaged with the Pentacam HR system. The entire corneal data of pachymetry and elevation of both the anterior and posterior surfaces were exported from the Pentacam HR software. Zernike polynomial fitting was used to quantify the 3D distribution of the corneal thickness and surface elevation. The root mean square (RMS) values for each order and the total high-order irregularity were calculated. Multimeric discriminant functions combined with individual indices were built using linear step discriminant analysis. Receiver operating characteristic curves determined the diagnostic accuracy (area under the curve, AUC). The 3rd-order RMS of the posterior surface (AUC: 0.928) obtained the highest discriminating capability in sub-clinical KC eyes. The multimeric function, which consisted of the Zernike fitting indices of corneal posterior elevation, showed the highest discriminant ability (AUC: 0.951). Indices generated from the elevation of the posterior surface and thickness measurements over the entire cornea using the Zernike method based on the Pentacam HR system were able to identify very early KC.
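
    With an orthonormal Zernike basis, the per-order RMS indices used above are just the root sum of squares of each order's coefficients; a small sketch (the coefficient grouping is assumed, not the Pentacam export format):

```python
import numpy as np

# RMS contribution of each radial order from orthonormal Zernike
# coefficients grouped by order n.
def rms_by_order(coeffs_by_order):
    return {n: float(np.sqrt(np.sum(np.square(c))))
            for n, c in coeffs_by_order.items()}

# hypothetical coefficients for orders 3 and 4 (microns)
example = {3: [0.1, -0.2, 0.05, 0.0], 4: [0.02, 0.0, -0.01, 0.0, 0.03]}
```

    The total high-order irregularity would then be the root sum of squares across all orders above 2.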

  18. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose from studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the use of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo-Zernike, and Zernike color moments, and their corresponding invariants, are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants.

  19. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. 
We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.

  20. Preliminary results of neural networks and zernike polynomials for classification of videokeratography maps.

    PubMed

    Carvalho, Luis Alberto

    2005-02-01

    Our main goal in this work was to develop an artificial neural network (NN) that could classify specific types of corneal shapes using Zernike coefficients as input. Other authors have implemented successful NN systems in the past and have demonstrated their efficiency using different parameters. Our claim is that, given the increasing popularity of Zernike polynomials among the eye care community, this may be an interesting choice to add complementing value and precision to existing methods. By using a simple and well-documented corneal surface representation scheme, which relies on corneal elevation information, one can generate simple NN input parameters that are independent of curvature definition and that are also efficient. We have used the Matlab Neural Network Toolbox (MathWorks, Natick, MA) to implement a three-layer feed-forward NN with 15 inputs and 5 outputs. A database from an EyeSys System 2000 (EyeSys Vision, Houston, TX) videokeratograph installed at the Escola Paulista de Medicina-Sao Paulo was used. This database contained an unknown number of corneal types. From this database, two specialists selected 80 corneas that could be clearly classified into five distinct categories: (1) normal, (2) with-the-rule astigmatism, (3) against-the-rule astigmatism, (4) keratoconus, and (5) post-laser-assisted in situ keratomileusis. The corneal height (SAG) information of the 80 data files was fit with the first 15 Vision Science and its Applications (VSIA) standard Zernike coefficients, which were individually used to feed the 15 neurons of the input layer. The five output neurons were associated with the five typical corneal shapes. A group of 40 cases was randomly selected from the larger group of 80 corneas and used as the training set.
    The NN responses were statistically analyzed in terms of sensitivity [true positive/(true positive + false negative)], specificity [true negative/(true negative + false positive)], and precision [(true positive + true negative)/total number of cases]. The mean values for these parameters were, respectively, 78.75%, 97.81%, and 94%. Although we have used a relatively small training and testing set, the results presented here should be considered promising. They are certainly an indication of the potential of Zernike polynomials as reliable parameters, at least in the cases presented here, as input data for artificial intelligence automation of the diagnosis process of videokeratography examinations. This technique should facilitate the implementation of, and add value to, the classification methods already available. We also briefly discuss certain special properties of Zernike polynomials that we think make them suitable as NN inputs for this type of application.
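
    The three figures of merit defined above map directly onto confusion-matrix counts (note that the "precision" defined here is the overall fraction of correct cases, often called accuracy):

```python
# Sensitivity, specificity, and precision exactly as defined in the text,
# from true/false positive and negative counts.
def metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)                   # true positive rate
    specificity = tn / (tn + fp)                   # true negative rate
    precision = (tp + tn) / (tp + tn + fp + fn)    # overall fraction correct
    return sensitivity, specificity, precision
```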

  1. Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.

    PubMed

    Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang

    2012-06-20

    Zernike functions are orthogonal within the unit circle, but they are not orthogonal over discrete point sets such as CCD arrays or finite element grids. This results in reconstruction errors due to the loss of orthogonality. By using roots of Legendre polynomials, a set of points within the unit circle can be constructed so that the Zernike functions are discretely orthogonal over the set. In addition, the location tolerances of the points are studied by perturbation analysis, and the requirements on positioning precision turn out not to be very strict. Computer simulations show that this approach provides a very accurate wavefront reconstruction with the proposed sampling set.
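
    The construction can be illustrated for the radial part: mapping Gauss-Legendre nodes to radii makes the Zernike radial inner products exact finite sums (a sketch of the idea only; the paper's full 2D point set and perturbation analysis go further):

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Zernike radial polynomial R_n^|m|(rho), n - |m| even."""
    m = abs(m)
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k) for k in range((n - m) // 2 + 1))

# Map Gauss-Legendre nodes x in [-1, 1] to radii via rho^2 = (x + 1)/2:
# integrals int_0^1 R R' rho d(rho) become polynomials in rho^2, which the
# quadrature integrates exactly, so orthogonality holds on the finite set.
x, w = np.polynomial.legendre.leggauss(8)
rho = np.sqrt((x + 1) / 2)

def radial_inner(n1, n2, m):
    return np.sum(w * zernike_radial(n1, m, rho) * zernike_radial(n2, m, rho)) / 4
```

    On these nodes R_2^0 and R_4^0 are exactly orthogonal, and the self-inner-products reproduce the continuous normalization 1/(2(n+1)).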

  2. Aberrated laser beams in terms of Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Alda, Javier; Alonso, Jose; Bernabeu, Eusebio

    1996-11-01

    The characterization of light beams has received a great deal of attention in the past decade. Several formalisms have been presented to treat the problem of parameter invariance and characterization in the propagation of light beams along ideal (ABCD) optical systems. Hard- and soft-apertured optical systems have been treated too, and some aberrations have been analyzed, but no formalism has appeared that is able to treat the problem as a whole. In this contribution we use a classical approach to describe the problem of aberrated, and therefore apertured, light beams. The wavefront aberration is included in a pure phase term expanded in terms of the Zernike polynomials. Then, we can use the relation between the lower-order Zernike polynomials and the Seidel or third-order aberrations. We analyze astigmatism, spherical aberration, and coma, and we show how higher-order aberrations can be taken into account. We have calculated the divergence and the radius of curvature of such aberrated beams and the influence of these aberrations on the quality of the light beam. Some numerical simulations have been done to illustrate the method.

  3. Characterization of bone collagen organization defects in murine hypophosphatasia using a Zernike model of optical aberrations

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Pendleton, Emily G.; Leitmann, Bobby; Barrow, Ruth; Mortensen, Luke J.

    2018-02-01

    Bone growth and strength are severely impacted by hypophosphatasia (HPP), a genetic disease that affects the mineralization of bone. We hypothesize that it impacts the overall organization, density, and porosity of collagen fibers: lower fiber density and higher porosity cause less absorption and scattering of light, and therefore a different regime of transport mean free path. To find a cure for this disease, a metric for the evaluation of bone is required. Here we present an evaluation method based on our Phase Accumulation Ray Tracing (PART) method. This method uses second harmonic generation (SHG) in bone collagen fibers to model bone indices of refraction, which are used to calculate the phase retardation along the propagation path of light in bone. The calculated phase is then expanded in Zernike polynomials up to 15th order to enable a quantitative analysis of tissue anomalies. Because the Zernike modes are a complete set of orthogonal polynomials, we can compare low- and high-order modes in HPP mice with those in healthy wild-type mice to identify the differences in their geometry and structure. Larger coefficients of low-order modes indicate more uniform fiber density and less porosity, whereas larger coefficients of higher-order modes indicate the opposite. Our analyses show significant differences between Zernike modes in different types of bone, as evidenced by principal components analysis (PCA).

  4. A parallel implementation of 3D Zernike moment analysis

    NASA Astrophysics Data System (ADS)

    Berjón, Daniel; Arnaldo, Sergio; Morán, Francisco

    2011-01-01

    Zernike polynomials are a well-known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant against translations, rotations, or scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also fit to describe volumetric data. They have been used less than their properties might suggest due to their high computational cost. We present a parallel implementation of 3D Zernike moments analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to deal with the high volume of input data so that it does not become a bottleneck for the system.

  5. Wavefront reconstruction for multi-lateral shearing interferometry using difference Zernike polynomials fitting

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Wang, Jiannian; Wang, Hai; Li, Yanqiu

    2018-07-01

    For multi-lateral shearing interferometers (multi-LSIs), the measurement accuracy can be enhanced by estimating the wavefront under test from the multidirectional phase information encoded in the shearing interferogram. Usually the multi-LSIs reconstruct the test wavefront from the phase derivatives in multiple directions using the discrete Fourier transform (DFT) method, which is only suitable for small shear ratios and relatively sensitive to noise. To improve the accuracy of multi-LSIs, wavefront reconstruction from the multidirectional phase differences using the difference Zernike polynomials fitting (DZPF) method is proposed in this paper. For the DZPF method applied in the quadriwave LSI, difference Zernike polynomials in only two orthogonal shear directions are required to represent the phase differences in multiple shear directions. In this way, the test wavefront can be reconstructed from the phase differences in multiple shear directions using a noise-variance-weighted least-squares method with almost no extra computational burden, compared with the usual recovery from the phase differences in two orthogonal directions. Numerical simulation results show that the DZPF method can maintain high reconstruction accuracy over a wider range of shear ratios and has much better anti-noise performance than the DFT method. A null test experiment on the quadriwave LSI has been conducted, and the experimental results show that the measurement accuracy of the quadriwave LSI can be improved from 0.0054 λ rms to 0.0029 λ rms (λ = 632.8 nm) by substituting the DFT method with the proposed DZPF method in the wavefront reconstruction process.
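
    The difference-fitting idea can be sketched in numpy with a few monomial stand-ins for Zernike terms (an illustration of the least-squares step only; the actual method uses difference Zernike polynomials and noise-variance weights):

```python
import numpy as np

# The measured quantity in a lateral shearing interferometer is
# W(x + s, y) - W(x, y), so the coefficients that expand W in a basis B
# also expand the sheared data in the "difference basis"
# B(x + s, y) - B(x, y); a least-squares fit recovers them directly.
def basis(x, y):
    # a few low-order terms (piston omitted: invisible to shearing)
    return np.column_stack([x, y, 2 * (x**2 + y**2) - 1, x**2 - y**2, 2 * x * y])

xv = np.linspace(-0.8, 0.8, 41)
X, Y = np.meshgrid(xv, xv)
x, y = X.ravel(), Y.ravel()
c_true = np.array([0.1, -0.05, 0.2, 0.03, 0.0])
s = 0.05                                              # shear in x and y

# simulated phase differences in two orthogonal shear directions
dx = basis(x + s, y) @ c_true - basis(x, y) @ c_true
dy = basis(x, y + s) @ c_true - basis(x, y) @ c_true

D = np.vstack([basis(x + s, y) - basis(x, y),
               basis(x, y + s) - basis(x, y)])
c_fit, *_ = np.linalg.lstsq(D, np.concatenate([dx, dy]), rcond=None)
```

    With noiseless data the fit recovers the coefficients exactly; the noise-variance weighting the paper adds changes only the weighting of the stacked rows.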

  6. An FPGA Architecture for Extracting Real-Time Zernike Coefficients from Measured Phase Gradients

    NASA Astrophysics Data System (ADS)

    Moser, Steven; Lee, Peter; Podoleanu, Adrian

    2015-04-01

    Zernike modes are commonly used in adaptive optics systems to represent optical wavefronts. However, real-time calculation of Zernike modes is time consuming due to two factors: the large factorial components in the radial polynomials used to define them and the large inverse matrix calculation needed for the linear fit. This paper presents an efficient parallel method for calculating Zernike coefficients from phase gradients produced by a Shack-Hartmann sensor and its real-time implementation using an FPGA by pre-calculation and storage of subsections of the large inverse matrix. The architecture exploits symmetries within the Zernike modes to achieve a significant reduction in memory requirements and a speed-up of 2.9 when compared to published results utilising a 2D-FFT method for a grid size of 8×8. Analysis of processor element internal word length requirements shows that 24-bit precision in precalculated values of the Zernike mode partial derivatives ensures less than 0.5% error per Zernike coefficient and an overall error of <1%. The design has been synthesized on a Xilinx Spartan-6 XC6SLX45 FPGA. The resource utilisation on this device is <3% of slice registers, <15% of slice LUTs, and approximately 48% of available DSP blocks independent of the Shack-Hartmann grid size. Block RAM usage is <16% for Shack-Hartmann grid sizes up to 32×32.
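
    The linear-fit stage can be prototyped in numpy before worrying about FPGA word lengths (illustrative low-order terms and grid, not the paper's full mode set):

```python
import numpy as np

# Modal fit of slope data: stack the analytic partial derivatives of a few
# low-order Zernike-like terms into one matrix, precompute its pseudoinverse
# once (as the FPGA design stores precomputed matrix sections), then recover
# coefficients from measured slopes with a single matrix multiply.
def grads(x, y):
    # d/dx and d/dy of [x, y, 2(x^2+y^2)-1, x^2-y^2, 2xy]
    gx = np.column_stack([np.ones_like(x), np.zeros_like(x), 4 * x,  2 * x, 2 * y])
    gy = np.column_stack([np.zeros_like(x), np.ones_like(x), 4 * y, -2 * y, 2 * x])
    return np.vstack([gx, gy])

xv = np.linspace(-1, 1, 8)                 # 8x8 Shack-Hartmann-style grid
X, Y = np.meshgrid(xv, xv)
G = grads(X.ravel(), Y.ravel())
G_pinv = np.linalg.pinv(G)                 # computed once, then stored

c_true = np.array([0.3, -0.1, 0.05, 0.02, -0.04])
slopes = G @ c_true                        # simulated slope measurements
c_est = G_pinv @ slopes                    # real-time step: one matrix multiply
```

    Quantizing the stored pseudoinverse entries, as the paper does at 24-bit precision, turns this multiply into fixed-point arithmetic with a bounded per-coefficient error.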

  7. Wave-aberration control with a liquid crystal on silicon (LCOS) spatial phase modulator.

    PubMed

    Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo

    2009-06-22

    Liquid crystal on silicon (LCOS) spatial phase modulators offer enhanced possibilities for adaptive optics applications in terms of response velocity and fidelity. Unlike deformable mirrors, they are capable of reproducing discontinuous phase profiles. This ability also allows an increase in the effective stroke of the device by means of phase wrapping, which is limited only by the diffraction-related effects that become noticeable as the number of phase cycles increases. In this work we estimated the generation ranges of the Zernike polynomials as a means of characterizing the performance of the device. Sets of images systematically degraded with the different Zernike polynomials generated using an LCOS phase modulator have been recorded and compared with their theoretical digital counterparts. For each Zernike mode, we have found that image degradation reaches a limit at a certain coefficient value; further increase in the aberration amount has no additional effect on image quality. This behavior is attributed to the intensification of the 0-order diffraction. These results have allowed us to determine the usable limits of the phase modulator, virtually free from diffraction artifacts. The results are particularly important for visual simulation and ophthalmic testing applications, although they are equally interesting for any adaptive optics application with liquid-crystal-based devices.
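
    The phase wrapping that extends the effective stroke is just a modulo-2π fold of the desired phase profile; a minimal numpy sketch:

```python
import numpy as np

# Fold a large-stroke phase profile into (-pi, pi] before sending it to a
# phase-only modulator: the wrapped and unwrapped profiles produce the same
# complex transmittance exp(i*phi).
phi = np.linspace(0, 20 * np.pi, 1000)      # unwrapped phase ramp, 10 cycles
wrapped = np.angle(np.exp(1j * phi))        # modulo-2*pi representation
```

    The discontinuities at the fold lines are what a deformable mirror cannot reproduce, and the density of those fold lines is where the diffraction effects discussed above originate.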

  8. Automated Decision Tree Classification of Corneal Shape

    PubMed Central

    Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.

    2011-01-01

    Purpose The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. 
This method of pattern classification is extendable to other classification problems. PMID:16357645
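The classifier induction described above can be illustrated with its simplest building block, a depth-1 decision stump chosen by training accuracy over Zernike-coefficient features. The data below are synthetic (keratoconic corneas are given an inflated first coefficient, loosely mimicking inferior elevation), not the study's 244 eyes, and a single stump stands in for a full C4.5 tree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the paper's data): keratoconic corneas tend to
# show larger inferior-elevation / vertical-coma-like Zernike terms.
n_normal, n_kc = 120, 110
normal = rng.normal(0.0, 0.1, size=(n_normal, 4))            # 4 Zernike features
kerato = rng.normal([0.6, 0.3, 0.2, 0.1], 0.15, size=(n_kc, 4))
X = np.vstack([normal, kerato])
y = np.array([0] * n_normal + [1] * n_kc)

def best_stump(X, y):
    """Pick the (feature, threshold) split with the highest training accuracy,
    i.e. a depth-1 decision tree -- the building block of C4.5-style induction."""
    best = (0, 0.0, 0.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            acc = max(np.mean((X[:, j] > t) == y),
                      np.mean((X[:, j] <= t) == y))
            if acc > best[2]:
                best = (j, t, acc)
    return best

feat, thr, acc = best_stump(X, y)
print(feat, round(thr, 3), round(acc, 3))    # one coefficient nearly separates the classes
```

A real induction algorithm recurses on the split partitions and uses information gain rather than raw accuracy, but the attribute-selection idea, which is how the paper reduces 36 coefficients to four, is the same.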

  9. Solar occultation images analysis using Zernike polynomials — an ALTIUS imaging spectrometer application

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Fussen, Didier; Loodts, Nicolas; Neefs, Eddy

    The ALTIUS (Atmospheric Limb Tracker for the Investigation of the Upcoming Stratosphere) instrument is a major project of the Belgian Institute for Space Aeronomy (BIRA-IASB) in Brussels, Belgium. It has been designed to profit from the benefits of the limb scattering geometry (vertical resolution, global coverage,...), while providing better accuracy on the tangent height knowledge than the classical "knee" methods used by scanning spectrometers. The optical concept is based on 3 AOTFs (UV-Vis-NIR) responsible for the instantaneous spectral filtering of the incoming image (complete FOV larger than 100km x 100km at tangent point), ranging from 250nm to 1800nm, with a moderate resolution of a few nm and a typical acquisition time of 1-10s per image. While the primary goal of the instrument is the measurement of ozone with a good vertical resolution, the ability to record full images of the limb can lead to other applications, like solar occultations. With a pixel FOV of 200 µrad, the full high-sun image is formed of 45x45 pixels, which is sufficient for pattern recognition using, for instance, moments analysis. The Zernike polynomials form a complete orthogonal set of functions over the unit circle, well suited to images showing a circular shape. Any such image can then be decomposed into a finite set of weighted polynomials, the weights being called moments. Due to atmospheric refraction, the shape of the sun is modified during apparent sunsets and sunrises. The sun appears more flattened, which leads to a modification of its Zernike moment description. A link between the pressure or temperature profile (equivalent to air density through the perfect gas law and the hydrostatic equation) and the Zernike moments of a given image can then be made and used to retrieve these atmospheric parameters, with the advantage that the whole sun is used and not only central or edge pixels. 
Some retrievals will be performed for different conditions and the feasibility of the method will be discussed.
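The moment decomposition described above amounts to discrete inner products between the image and Zernike basis functions over the unit disk. A minimal sketch with a few low-order real modes follows; the 45×45 grid matches the quoted sun-image size, while the mode set and normalisations are illustrative assumptions:

```python
import numpy as np

def zernike_moments_low_order(img):
    """Project a square image onto a few low-order real Zernike polynomials
    over the inscribed unit disk (discrete inner product; illustrative only)."""
    n = img.shape[0]
    c = (np.arange(n) + 0.5) / n * 2 - 1
    X, Y = np.meshgrid(c, c)
    R = np.hypot(X, Y)
    mask = R <= 1.0
    modes = {
        "piston":  np.ones_like(X),
        "tilt_x":  2 * X,                       # normalised Z_1^1
        "tilt_y":  2 * Y,                       # normalised Z_1^-1
        "defocus": np.sqrt(3) * (2 * R**2 - 1), # normalised Z_2^0
    }
    area = mask.sum()
    return {k: (img * v)[mask].sum() / area for k, v in modes.items()}

# A circularly symmetric "sun-like" disk image: the odd (tilt) moments vanish.
n = 45                                   # matches the 45x45 pixel high-sun image
c = (np.arange(n) + 0.5) / n * 2 - 1
X, Y = np.meshgrid(c, c)
sun = (np.hypot(X, Y) <= 0.8).astype(float)
m = zernike_moments_low_order(sun)
print(abs(m["tilt_x"]) < 1e-10, abs(m["tilt_y"]) < 1e-10)
```

A refraction-flattened sun breaks the circular symmetry, so its even non-symmetric moments (astigmatism-like terms) change, which is the signature the retrieval exploits.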

  10. Orthogonal basis with a conicoid first mode for shape specification of optical surfaces.

    PubMed

    Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez

    2016-03-07

    A rigorous and powerful theoretical framework is proposed to obtain systems of orthogonal functions (or shape modes) to represent optical surfaces. The method is general, so it can be applied to different initial shapes and different polynomials. Here we present results for surfaces with circular apertures when the first basis function (mode) is a conicoid. The system for aspheres with rotational symmetry is obtained by applying an appropriate change of variables to Legendre polynomials, whereas the system for the general freeform case is obtained by applying a similar procedure to spherical harmonics. Numerical comparisons with standard systems, such as Forbes and Zernike polynomials, are performed and discussed.

  11. Sound radiation quantities arising from a resilient circular radiator.

    PubMed

    Aarts, Ronald M; Janssen, Augustus J E M

    2009-10-01

    Power series expansions in ka are derived for the pressure at the edge of a radiator, the reaction force on the radiator, and the total radiated power arising from a harmonically excited, resilient, flat, circular radiator of radius a in an infinite baffle. The velocity profiles on the radiator are either Stenzel functions (1-(σ/a)^2)^n, with σ the radial coordinate on the radiator, or linear combinations of Zernike functions P_n(2(σ/a)^2-1), with P_n the Legendre polynomial of degree n. Both sets of functions give rise, via King's integral for the pressure, to integrals for the quantities of interest involving the product of two Bessel functions. These integrals have a power series expansion and allow an expression in terms of Bessel functions of the first kind and Struve functions. Consequently, many of the results in [M. Greenspan, J. Acoust. Soc. Am. 65, 608-621 (1979)] are generalized and treated in a unified manner. A foreseen application is for loudspeakers. The relation between the radiated power in the near field on one hand and in the far field on the other is highlighted.
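The two profile families above are simple to evaluate numerically, and for n = 1 they are related in closed form: since P_1(x) = x, the Zernike profile is 2(σ/a)^2 - 1 and the Stenzel profile 1 - (σ/a)^2 equals (1 - P_1)/2. A short check with NumPy's Legendre utilities (the radius value is an arbitrary assumption):

```python
import numpy as np
from numpy.polynomial import legendre

a = 0.1                                  # radiator radius in metres (illustrative)
sigma = np.linspace(0, a, 201)           # radial coordinate on the radiator

def stenzel(n, sigma, a):
    """Stenzel velocity profile (1 - (sigma/a)^2)^n."""
    return (1 - (sigma / a) ** 2) ** n

def zernike_profile(n, sigma, a):
    """Radially symmetric Zernike profile P_n(2 (sigma/a)^2 - 1)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return legendre.legval(2 * (sigma / a) ** 2 - 1, coeffs)

# n = 1: P_1(2u^2 - 1) = 2u^2 - 1, so stenzel(1) = (1 - zernike_profile(1)) / 2.
print(np.allclose(stenzel(1, sigma, a), (1 - zernike_profile(1, sigma, a)) / 2))
```

Higher-order Stenzel functions are likewise finite linear combinations of the Zernike profiles, which is what lets the paper treat both families in a unified way.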

  12. Statistical virtual eye model based on wavefront aberration

    PubMed Central

    Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie

    2012-01-01

    Wavefront aberration affects the quality of retinal image directly. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospect of virtual eye model is emphasized. PMID:23173112

  13. Zernike-like systems in polygons and polygonal facets.

    PubMed

    Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez

    2015-07-20

    Zernike polynomials are commonly used to represent the wavefront phase on circular optical apertures, since they form a complete and orthonormal basis on the unit disk. In [Opt. Lett. 32, 74 (2007)] we introduced a new Zernike basis for elliptic and annular optical apertures based on an appropriate diffeomorphism between the unit disk and the ellipse and the annulus. Here, we present a generalization of this Zernike basis for a variety of important optical apertures, paying special attention to polygons and the polygonal facets present in segmented mirror telescopes. In contrast to ad hoc solutions, most of them based on the Gram-Schmidt orthonormalization method, here we consider a piecewise diffeomorphism that transforms the unit disk into the polygon under consideration. We use this mapping to define a Zernike-like orthonormal system over the polygon. We also consider ensembles of polygonal facets that are essential in the design of segmented mirror telescopes. This generalization, based on in-plane warping of the basis functions, provides a unique solution and, more importantly, guarantees a reasonable level of invariance of the mathematical properties and the physical meaning of the initial basis functions. Both the general form and the explicit expressions for a typical example of telescope optical aperture are provided.

  14. Combining freeform optics and curved detectors for wide field imaging: a polynomial approach over squared aperture.

    PubMed

    Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe

    2017-06-26

    In recent years, significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They make it possible to build fast, wide-angle and high-resolution systems that are very compact and free of obscuration. However, the field of freeform surface design techniques still remains underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, developed earlier for the tasks of wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
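The Legendre case mentioned above uses the product basis L_i(x)·L_j(y), which is orthogonal over the square [-1, 1]². This property is easy to verify numerically with Gauss-Legendre quadrature, a small sketch rather than anything specific to the paper's telescope design:

```python
import numpy as np
from numpy.polynomial import legendre

# Gauss-Legendre nodes/weights; 16 points integrate our low-degree products exactly.
nodes, weights = legendre.leggauss(16)
X, Y = np.meshgrid(nodes, nodes)
W = np.outer(weights, weights)

def L(k, t):
    """Evaluate the Legendre polynomial of degree k at t."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legendre.legval(t, c)

def inner(i1, j1, i2, j2):
    """Inner product of L_i1(x)L_j1(y) and L_i2(x)L_j2(y) over [-1,1]^2."""
    return np.sum(W * L(i1, X) * L(j1, Y) * L(i2, X) * L(j2, Y))

# <L2 L1, L2 L1> = (2/5)*(2/3) = 4/15; the cross term <L2 L1, L1 L2> vanishes.
print(round(inner(2, 1, 2, 1), 6), round(inner(2, 1, 1, 2), 6))
```

Orthogonality is what keeps the fitted surface coefficients decoupled, so adding a higher-order term does not disturb the coefficients already found.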

  15. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients.

    PubMed

    Solano-Altamirano, Juan Manuel; Vázquez-Otero, Alejandro; Khikhlukha, Danila; Dormido, Raquel; Duro, Natividad

    2017-11-30

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of spherical harmonics. The experimental results indicate that this algorithm achieves the same accuracy as reconstruction based on classical Zernike polynomials while using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in the high-energy laser field, optical surface reconstruction, and, more generally, surface reconstruction from gradients. The library is a candidate for integration in control systems for optical devices, or similarly for use in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in high-energy laser production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features.

  17. Dynamic Bidirectional Reflectance Distribution Functions: Measurement and Representation

    DTIC Science & Technology

    2008-02-01

    be included in the harmonic fits. Other sets of orthogonal functions such as Zernike polynomials have also been used to characterize BRDF and could...reflectance spectra of 3D objects,” Proc. SPIE 4663, 370–378 2001. 13J. R. Shell II, C. Salvagio, and J. R. Schott, “A novel BRDF measurement technique

  18. Application of field dependent polynomial model

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Páta, Petr; Skala, Petr; Fliegel, Karel; Vítek, Stanislav; Bednář, Jan

    2016-09-01

    Extremely wide-field imaging systems have many advantages for imaging large scenes, whether in microscopy, all-sky cameras, or security technologies. The large viewing angle comes at the cost of the aberrations inherent in these imaging systems. Modeling wavefront aberrations using Zernike polynomials has been known for a long time and is widely used. Our method does not model system aberrations by modeling the wavefront, but by directly modeling the aberrated point spread function of the imaging system. This is a very complicated task, and with conventional methods it was difficult to achieve the desired accuracy. Our optimization technique, which searches for the coefficients of space-variant Zernike polynomials, can be described as a comprehensive model for ultra-wide-field imaging systems. The advantage of this model is that it describes the whole space-variant system, unlike the majority of models, which treat the system as partly invariant. A difficulty is that the modeled point spread function is comparable in size to the pixel size, so issues associated with sampling, pixel size, and the pixel sensitivity profile must be taken into account in the design. The model was verified on a series of laboratory test patterns, test images of laboratory light sources, and subsequently on real images obtained by the extremely wide-field imaging system WILLIAM. Results of modeling this system are presented in this article.
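A space-variant Zernike model of the kind described above can be represented by making each aberration coefficient a low-order polynomial in the field coordinates, so one small table describes the PSF everywhere in the field. The sketch below is a hypothetical illustration of that data structure; the mode names, polynomial form, and numbers are assumptions, not the WILLIAM system's fitted coefficients:

```python
import numpy as np

# Each Zernike coefficient varies over the field as c(fx, fy) = c0 + c_r2*(fx^2 + fy^2),
# a rotationally symmetric quadratic field dependence (an assumed, minimal form).
field_poly = {
    "defocus": [0.05, 0.40],   # defocus grows quadratically off-axis
    "coma_x":  [0.00, 0.25],
}

def coeff(mode, fx, fy):
    """Evaluate a space-variant Zernike coefficient at field position (fx, fy),
    with fx, fy normalised so that the field edge is at radius 1."""
    c0, c_r2 = field_poly[mode]
    return c0 + c_r2 * (fx**2 + fy**2)

# On-axis vs field-edge defocus: the off-axis term dominates at the edge.
print(round(coeff("defocus", 0.0, 0.0), 2), round(coeff("defocus", 1.0, 0.0), 2))
```

Rendering the local PSF at any pixel then means evaluating all coefficients at that field position and synthesising the corresponding aberrated spot, instead of storing one PSF per field point.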

  19. The Zernike expansion--an example of a merit function for 2D/3D registration based on orthogonal functions.

    PubMed

    Dong, Shuo; Kettenbach, Joachim; Hinterleitner, Isabella; Bergmann, Helmar; Birkfellner, Wolfgang

    2008-01-01

    Current merit functions for 2D/3D registration usually rely on comparing pixels or small regions of images using some sort of statistical measure. Problems connected to this paradigm include the sometimes problematic behaviour of the method if noise or artefacts (for instance a guide wire) are present in the projective image. We present a merit function for 2D/3D registration which utilizes the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments; the quality of the match is assessed by an iterative comparison of expansion coefficients. Results of an imaging study on a physical phantom show that, compared to standard cross-correlation, the Zernike moment based merit function shows better robustness if the histogram content of the images under comparison differs, and that time expenses are comparable if the merit function is constructed out of a few significant moments only.

  20. Fuzzy logic controller for the LOLA AO tip-tilt corrector system

    NASA Astrophysics Data System (ADS)

    Sotelo, Pablo D.; Flores, Ruben; Garfias, Fernando; Cuevas, Salvador

    1998-09-01

    At the Instituto de Astronomía we developed an adaptive optics system for the correction of the first two orders of the Zernike polynomials by measuring the image centroid. Here we discuss the two system modalities, based on two different control strategies, and we present simulations comparing the systems. For the classic control system we present telescope results.

  1. Application of Zernike polynomials towards accelerated adaptive focusing of transcranial high intensity focused ultrasound.

    PubMed

    Kaye, Elena A; Hertzberg, Yoni; Marx, Michael; Werner, Beat; Navon, Gil; Levoy, Marc; Pauly, Kim Butts

    2012-10-01

    To study the phase aberrations produced by human skulls during transcranial magnetic resonance imaging guided focused ultrasound surgery (MRgFUS), to demonstrate the potential of Zernike polynomials (ZPs) to accelerate the adaptive focusing process, and to investigate the benefits of using phase corrections obtained in previous studies to provide the initial guess for correction of a new data set. The five phase aberration data sets, analyzed here, were calculated based on preoperative computerized tomography (CT) images of the head obtained during previous transcranial MRgFUS treatments performed using a clinical prototype hemispherical transducer. The noniterative adaptive focusing algorithm [Larrat et al., "MR-guided adaptive focusing of ultrasound," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57(8), 1734-1747 (2010)] was modified by replacing Hadamard encoding with Zernike encoding. The algorithm was tested in simulations to correct the patients' phase aberrations. MR acoustic radiation force imaging (MR-ARFI) was used to visualize the effect of the phase aberration correction on the focusing of a hemispherical transducer. In addition, two methods for constructing initial phase correction estimate based on previous patient's data were investigated. The benefits of the initial estimates in the Zernike-based algorithm were analyzed by measuring their effect on the ultrasound intensity at the focus and on the number of ZP modes necessary to achieve 90% of the intensity of the nonaberrated case. Covariance of the pairs of the phase aberrations data sets showed high correlation between aberration data of several patients and suggested that subgroups can be based on level of correlation. Simulation of the Zernike-based algorithm demonstrated the overall greater correction effectiveness of the low modes of ZPs. The focal intensity achieves 90% of nonaberrated intensity using fewer than 170 modes of ZPs. 
The initial estimates based on using the average of the phase aberration data from the individual subgroups of subjects was shown to increase the intensity at the focal spot for the five subjects. The application of ZPs to phase aberration correction was shown to be beneficial for adaptive focusing of transcranial ultrasound. The skull-based phase aberrations were found to be well approximated by the number of ZP modes representing only a fraction of the number of elements in the hemispherical transducer. Implementing the initial phase aberration estimate together with Zernike-based algorithm can be used to improve the robustness and can potentially greatly increase the viability of MR-ARFI-based focusing for a clinical transcranial MRgFUS therapy.

  3. Segmented Mirror Telescope Model and Simulation

    DTIC Science & Technology

    2011-06-01

    mirror surface is treated as a grid of masses and springs. The actuators have surface normal forces applied to individual masses. The equation to...are not widely treated in the literature. The required modifications for the wavefront reconstruction algorithm of a circular aperture to correctly...Zernike polynomials, which are particularly suitable to describe the common optical characterizations of astigmatism, coma, defocus and others [9

  4. Wavefront analysis from its slope data

    NASA Astrophysics Data System (ADS)

    Mahajan, Virendra N.; Acosta, Eva

    2017-08-01

    In the aberration analysis of a wavefront over a certain domain, the polynomials that are orthogonal over this domain and represent balanced wave aberrations for it are used. For example, Zernike circle polynomials are used for the analysis of a circular wavefront. Similarly, the annular polynomials are used to analyze annular wavefronts for systems with annular pupils, as in a rotationally symmetric two-mirror system such as the Hubble Space Telescope. However, when the data available for analysis are the slopes of a wavefront, as, for example, in a Shack-Hartmann sensor, we can integrate the slope data to obtain the wavefront data, and then use the orthogonal polynomials to obtain the aberration coefficients. An alternative is to find vector functions that are orthogonal to the gradients of the wavefront polynomials, and obtain the aberration coefficients directly as the inner products of these functions with the slope data. In this paper, we show that an infinite number of vector functions can be obtained in this manner. We show further that the vector functions that are irrotational are unique and propagate minimum uncorrelated additive random noise from the slope data to the aberration coefficients.

  5. On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions

    NASA Astrophysics Data System (ADS)

    Becherer, Nico; Jödicke, Hanna; Schlosser, Gregor; Hesser, Jürgen; Zeilfelder, Frank; Männer, Reinhard

    2006-02-01

    Blur and noise originating from the physical imaging processes degrade the microscope data. Accurate deblurring techniques, however, require an accurate estimate of the underlying point-spread function (PSF). A good representation of PSFs can be achieved with Zernike polynomials, since they offer a compact representation where low-order coefficients represent typical aberrations of optical wavefronts while noise is represented in higher-order coefficients. A quantitative description of the noise distribution (Gaussian) over the Zernike moments of various orders is given, which is the basis for the new soft-clipping approach for denoising of PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. According to our experimental reconstructions, using the new improved PSF the correlation between reconstructed and original volume is raised by 15% in average cases and by up to 85% in the case of thin fibre structures, compared to reconstructions where an unimproved PSF was used. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results using a new volume rendering method that is almost artifact-free. The new approach is based on a shear-warp technique, wavelet data encoding techniques, and a recent approach to approximate the gray value distribution by a super-spline model.
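The soft-clipping idea, dampening noise-sensitive high-order moments smoothly instead of truncating at a hard cutoff, can be sketched with a sigmoid weight over radial order. The weighting function below is an assumed stand-in; the paper derives its weights from the measured noise distribution:

```python
import numpy as np

def soft_clip(moments, orders, cutoff, width):
    """Dampen Zernike moments smoothly with radial order instead of truncating
    at a hard cutoff. The sigmoid weight here is an illustrative assumption;
    the actual weighting should follow the measured noise distribution."""
    orders = np.asarray(orders, dtype=float)
    w = 1.0 / (1.0 + np.exp((orders - cutoff) / width))
    return np.asarray(moments) * w

moments = np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1])   # example moment magnitudes
orders  = np.array([0, 1, 2, 3, 4, 5])               # corresponding radial orders
damped  = soft_clip(moments, orders, cutoff=3.0, width=0.5)

# Low orders pass almost unchanged; high orders are strongly attenuated.
print(damped[0] > 0.99 * moments[0], damped[5] < 0.05 * moments[5])
```

Compared with a hard truncation, the smooth roll-off avoids the ringing that a sharp cutoff in the modal spectrum would introduce into the reconstructed PSF.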

  6. Enhanced Interferometry with Programmable Spatial Light Modulator

    DTIC Science & Technology

    2010-06-07

    metrolaserinc.com6-7-2010-Monday 6 Simulated by Zemax  Lenslet diameters, d, define spatial resolution over the wavefront being measured.  (sensitivity...MetroLaser Irvine, California Fitted Zernike Polynomials upto 36 terms, found and put into Zemax Simulated Cats’ eye wavefronts by ZEMAX Experimental...measurement Simulated Fringes Leftover < 0.1λ 23 Cat’s eye wavefronts by ZEMAX based on Experimental results Jtrolinger@metrolaserinc.com6-7-2010

  7. Adaptive optics with a magnetic deformable mirror: applications in the human eye

    NASA Astrophysics Data System (ADS)

    Fernandez, Enrique J.; Vabre, Laurent; Hermann, Boris; Unterhuber, Angelika; Povazay, Boris; Drexler, Wolfgang

    2006-10-01

    A novel deformable mirror using 52 independent magnetic actuators (MIRAO 52, Imagine Eyes) is presented and characterized for ophthalmic applications. The capabilities of the device to reproduce different surfaces, in particular Zernike polynomials up to the fifth order, are investigated in detail. The study of the influence functions of the deformable mirror reveals a significant linear response with the applied voltage. The correcting device also presents a high fidelity in the generation of surfaces. The ranges of production of Zernike polynomials fully cover those typically found in the human eye, even for the cases of highly aberrated eyes. Data from keratoconic eyes are confronted with the obtained ranges, showing that the deformable mirror is able to compensate for these strong aberrations. Ocular aberration correction with polychromatic light, using a near Gaussian spectrum of 130 nm full width at half maximum centered at 800 nm, in five subjects is accomplished by simultaneously using the deformable mirror and an achromatizing lens, in order to compensate for the monochromatic and chromatic aberrations, respectively. Results from living eyes, including one exhibiting 4.66 D of myopia and a near pathologic cornea with notable high order aberrations, show a practically perfect aberration correction. Benefits and applications of simultaneous monochromatic and chromatic aberration correction are finally discussed in the context of retinal imaging and vision.
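Since the abstract reports a significantly linear response of the mirror to applied voltage, the standard control computation is a least-squares fit of actuator commands against the measured influence functions. The sketch below uses a random matrix as a hypothetical influence matrix (52 actuators, matching the MIRAO 52, but 200 arbitrary sample points and random shapes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical influence matrix: column k is the surface produced by unit
# voltage on actuator k, sampled at 200 pupil points. Linearity in voltage
# is assumed, as reported for the MIRAO 52.
n_points, n_act = 200, 52
A = rng.normal(size=(n_points, n_act))

target = rng.normal(size=n_points)            # desired surface, e.g. a Zernike mode
v, *_ = np.linalg.lstsq(A, target, rcond=None)
residual = np.linalg.norm(A @ v - target) / np.linalg.norm(target)
print(residual)   # 52 actuators cannot fit 200 arbitrary points exactly
```

The relative residual measures the mirror's fidelity for that shape; in practice the targets are smooth Zernike modes, for which the residual is far smaller than for the random target used here.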

  8. RBCs as microlenses: wavefront analysis and applications

    NASA Astrophysics Data System (ADS)

    Merola, Francesco; Barroso, Álvaro; Miccio, Lisa; Memmolo, Pasquale; Mugnano, Martina; Ferraro, Pietro; Denz, Cornelia

    2017-06-01

    Developing the recently discovered concept of RBCs as microlenses, we demonstrate further applications in wavefront analysis and diagnostics. A correlation between an RBC's morphology and its behavior as a refractive optical element has been established. In fact, any deviation from healthy RBC morphology can be seen as an additional aberration in the optical wavefront passing through the cell. By this concept, accurate localization of the focal spots of RBCs can become very useful in the identification of blood disorders. Moreover, by modelling RBCs as bio-lenses through Zernike polynomials, it is possible to identify a series of orthogonal parameters able to recognise RBC shapes. The main improvement is the possibility of combining such parameters because of their independence, in contrast to standard image-based analysis, where morphological factors depend on each other. We investigate the three-dimensional positioning of such focal spots over time for samples with two different osmolarity conditions, i.e. discocytes and spherocytes. Finally, Zernike polynomial wavefront analysis allows us to study the optical behavior of RBCs under an optically induced mechanical stress. Detailed wavefront analysis provides comprehensive information about the aberrations induced by the deformation obtained using optical tweezers. This could open new routes for analyzing cell elasticity by examining optical parameters instead of direct but low-resolution strain analysis, thanks to the high sensitivity of the interferometric tool.

  9. On the coefficients of integrated expansions and integrals of ultraspherical polynomials and their applications for solving differential equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2002-02-01

    An analytical formula expressing the ultraspherical coefficients of an expansion of an infinitely differentiable function that has been integrated an arbitrary number of times, in terms of the coefficients of the original expansion of the function, is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given that explicitly expresses the integrals of ultraspherical polynomials of any degree, integrated an arbitrary number of times, in terms of ultraspherical polynomials. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.
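The ultraspherical (Gegenbauer) family contains the Legendre polynomials as the special case α = 1/2, for which the integrated-expansion relation takes the classical closed form ∫P_n dx = (P_{n+1} - P_{n-1})/(2n+1). That special case can be checked directly with NumPy's Legendre-series integration (the paper's general-α formulas are not reproduced here):

```python
import numpy as np
from numpy.polynomial import legendre

n = 5
x = np.linspace(-1, 1, 101)

def P(k, t):
    """Evaluate the Legendre polynomial of degree k at t."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return legendre.legval(t, c)

# Antiderivative of P_n computed by integrating its Legendre series...
c = np.zeros(n + 1)
c[n] = 1.0
anti = legendre.legval(x, legendre.legint(c))

# ...versus the closed form (P_{n+1} - P_{n-1}) / (2n + 1).
rhs = (P(n + 1, x) - P(n - 1, x)) / (2 * n + 1)

# The two agree up to a constant of integration.
print(np.allclose(anti - anti[0], rhs - rhs[0]))
```

Relations of exactly this shape (an integral of one basis member expressed as a short combination of other members) are what make spectral methods for differential equations efficient: integration becomes a banded operation on the coefficient vector.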

  10. Multi-optical-axis measurement of freeform progressive addition lenses using a Hartmann-Shack wavefront sensor

    NASA Astrophysics Data System (ADS)

    Xiang, Huazhong; Guo, Hang; Fu, Dongxiang; Zheng, Gang; Zhuang, Songlin; Chen, JiaBi; Wang, Cheng; Wu, Jie

    2018-05-01

    To precisely measure the whole-surface characteristics of freeform progressive addition lenses (PALs), considering multi-optical-axis conditions is becoming particularly important. Spherical power and astigmatism (cylinder) measurements of freeform PALs using a Hartmann-Shack wavefront sensor (HSWFS) are proposed herein. Conversion formulas for the optical performance results are provided in terms of HSWFS Zernike polynomial expansions. For each selected zone, the studied PALs were placed and tilted to simulate the multi-optical-axis conditions. The results for two tested PALs were analyzed using MATLAB programs and represented as contour plots of the spherical equivalent and cylinder over the whole surface. The proposed experimental setup provides high accuracy as well as the possibility of choosing 12 lines and the positions of 193 measurement zones on the entire surface. This approach to PAL analysis is potentially an efficient and useful method to objectively evaluate optical performance, in which the full lens surface is defined and expressed as contour plots of power in the different regions of the lens (i.e., the distance, progressive, and near regions) for regions of interest.
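    The conversion from second-order Zernike coefficients to sphere/cylinder power is commonly done with the power-vector formulas of Thibos et al.; the sketch below assumes that standard conversion (coefficients in micrometres, pupil radius in millimetres, powers in dioptres), not necessarily the exact formulas of this paper:

```python
import numpy as np

def zernike_to_power_vector(c20, c22, c2m2, r_mm):
    """ANSI-normalized 2nd-order Zernike coefficients (micrometres) over a pupil
    of radius r_mm (millimetres) to the power vector (M, J0, J45) in dioptres."""
    M = -c20 * 4 * np.sqrt(3) / r_mm ** 2
    J0 = -c22 * 2 * np.sqrt(6) / r_mm ** 2
    J45 = -c2m2 * 2 * np.sqrt(6) / r_mm ** 2
    return M, J0, J45

def power_vector_to_sphero_cyl(M, J0, J45):
    """Power vector to minus-cylinder sphere/cylinder/axis notation."""
    cyl = -2 * np.hypot(J0, J45)
    sph = M - cyl / 2
    axis = np.degrees(0.5 * np.arctan2(J45, J0)) % 180
    return sph, cyl, axis

# Pure defocus over a 3 mm-radius pupil: spherical equivalent only, no cylinder.
M, J0, J45 = zernike_to_power_vector(1.0, 0.0, 0.0, 3.0)
S, C, axis = power_vector_to_sphero_cyl(M, J0, J45)
```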

  11. Efficient numerical modeling of the cornea, and applications

    NASA Astrophysics Data System (ADS)

    Gonzalez, L.; Navarro, Rafael M.; Hdez-Matamoros, J. L.

    2004-10-01

    Corneal topography has been shown to be an essential tool in the ophthalmology clinic, both in diagnosis and in custom treatments (refractive surgery, keratoplasty), and it also has strong potential in optometry. Post-processing and analysis of corneal elevation, or local curvature data, is a necessary step to refine the data and to extract relevant information for the clinician. In this context a parametric cornea model is proposed, consisting of a surface described mathematically by two terms: a general ellipsoid corresponding to a regular base surface, expressed by a general quadric located at an arbitrary position and with free orientation in 3D space, and a second term, described by a Zernike polynomial expansion, which accounts for irregularities and departures from the basic geometry. The model has been validated, obtaining a better fit to experimental data than previous models. Among other potential applications, here we present the determination of the optical axis of the cornea by transforming the general quadric to its canonical form. This has permitted us to perform 3D registration of corneal topographic maps to improve the signal-to-noise ratio. Other basic and clinical applications are also explored.
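    The base-surface term of such a model, a general quadric at arbitrary position and orientation, can be fitted linearly; a sketch (not the authors' implementation) using the smallest singular vector of the design matrix, checked on synthetic points lying on a unit sphere:

```python
import numpy as np

def fit_quadric(pts):
    """Least-squares fit of a general quadric
    A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + I z + J = 0
    to 3-D points; coefficients are determined only up to scale, so we return
    the right singular vector with the smallest singular value."""
    x, y, z = pts.T
    D = np.column_stack([x * x, y * y, z * z, x * y, x * z, y * z,
                         x, y, z, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[-1]

# Synthetic check: points on the unit sphere x^2 + y^2 + z^2 - 1 = 0.
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True)
q = fit_quadric(pts)
q = q / q[0]          # normalize so the x^2 coefficient is 1
```

    In the paper's setting the residual between the measured elevation and this base quadric would then be expanded in Zernike polynomials.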

  12. Limitations of the paraxial Debye approximation.

    PubMed

    Sheppard, Colin J R

    2013-04-01

    In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.

  13. Optical Performance Modeling of FUSE Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument, designed and developed by the Johns Hopkins University and launched in June 1999, is an astrophysics satellite which provides high-resolution spectra (lambda/Delta(lambda) = 20,000 - 25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument comprises four co-aligned, normal-incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay-line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low-spatial-frequency deformations described by discrete polynomial terms, mid- and high-spatial-frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal-incidence (traditionally infrared, visible, and near-ultraviolet mirror systems) and grazing-incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low-frequency surface errors are described in OSAC using Zernike polynomials for normal-incidence mirrors and Legendre-Fourier polynomials for grazing-incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system point spread function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. It then fits the data using Zernike polynomials for normal-incidence systems or Legendre-Fourier polynomials for grazing-incidence systems. It removes low-order terms from the metrology data, calculates statistical ACV or PSD functions, and fits these data to OSAC models for the scatter analysis. In this paper we briefly describe the laboratory image testing of a FUSE spare mirror, performed in the near and vacuum ultraviolet at Johns Hopkins University, and OSAC modeling of the test setup performed at NASA/GSFC. The test setup is a double-pass configuration consisting of a Hg discharge source, the FUSE off-axis parabolic mirror under test, an autocollimating flat mirror, and a tomographic imaging detector. Two additional small fold flats are used in the optical train to accommodate the light source and the detector. The modeling is based on Zernike fitting and PSD analysis of surface metrology data measured by both the mirror vendor (Tinsley) and JHU. The results of our models agree well with the laboratory imaging data, thus validating our theoretical model. Finally, we predict the imaging performance of the FUSE mirrors in their flight configuration at far-ultraviolet wavelengths.
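    A PSD estimate of the kind OSAC consumes can be sketched from a single surface-height profile; this is an illustrative one-dimensional estimator only, with none of OSAC's windowing or unit conventions:

```python
import numpy as np

def profile_psd(z, dx):
    """One-dimensional power spectral density of a surface-height profile z
    sampled at uniform spacing dx (mean removed before transforming)."""
    n = len(z)
    Z = np.fft.rfft(z - z.mean())
    psd = (np.abs(Z) ** 2) * dx / n
    freqs = np.fft.rfftfreq(n, dx)
    return freqs, psd

# A profile with a single spatial frequency should produce a single PSD peak.
x = np.arange(1000) * 1e-3
z = np.sin(2 * np.pi * 50.0 * x)   # 50 cycles per unit length
freqs, psd = profile_psd(z, 1e-3)
```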

  14. A comparison between space-time video descriptors

    NASA Astrophysics Data System (ADS)

    Costantini, Luca; Capodiferro, Licia; Neri, Alessandro

    2013-02-01

    The description of space-time patches is a fundamental task in many applications such as video retrieval or classification. Each space-time patch can be described by using a set of orthogonal functions that represent a subspace, for example a sphere or a cylinder, within the patch. In this work, our aim is to investigate the differences between spherical descriptors and cylindrical descriptors. To compute the descriptors, the 3D spherical and cylindrical Zernike polynomials are employed. This is important because both functions are based on the same family of polynomials, and only the symmetry differs. Our experimental results show that the cylindrical descriptor outperforms the spherical descriptor, although the performances of the two descriptors are similar.

  15. Wavefront aberrations of x-ray dynamical diffraction beams.

    PubMed

    Liao, Keliang; Hong, Youli; Sheng, Weifan

    2014-10-01

    The effects of dynamical diffraction in x-ray diffractive optics with large numerical aperture render the wavefront aberrations difficult to describe using the aberration polynomials, yet knowledge of them plays an important role in a vast variety of scientific problems ranging from optical testing to adaptive optics. Although the diffraction theory of optical aberrations was established decades ago, its application in the area of x-ray dynamical diffraction theory (DDT) is still lacking. Here, we conduct a theoretical study on the aberration properties of x-ray dynamical diffraction beams. By treating the modulus of the complex envelope as the amplitude weight function in the orthogonalization procedure, we generalize the nonrecursive matrix method for the determination of orthonormal aberration polynomials, wherein Zernike DDT and Legendre DDT polynomials are proposed. As an example, we investigate the aberration evolution inside a tilted multilayer Laue lens. The corresponding Legendre DDT polynomials are obtained numerically, which represent balanced aberrations yielding minimum variance of the classical aberrations of an anamorphic optical system. The balancing of classical aberrations and their standard deviations are discussed. We also present the Strehl ratio of the primary and secondary balanced aberrations.

  16. Image based method for aberration measurement of lithographic tools

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information about the lens aberrations of lithographic tools is important, as they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for the mathematical description of lens aberrations. Owing to their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two intensity distributions. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.

  17. High quality adaptive optics zoom with adaptive lenses

    NASA Astrophysics Data System (ADS)

    Quintavalla, M.; Santiago, F.; Bonora, S.; Restaino, S.

    2018-02-01

    We present the combined use of a large-aperture adaptive lens with large optical-power modulation and a multi-actuator adaptive lens. The Multi-actuator Adaptive Lens (M-AL) can correct up to the 4th radial order of Zernike polynomials, without any obstructions (electrodes and actuators) placed inside its clear aperture. We demonstrate that the use of both lenses together can lead to better image quality and to the correction of aberrations of adaptive optics systems.

  18. Wavefront propagation from one plane to another with the use of Zernike polynomials and Taylor monomials.

    PubMed

    Dai, Guang-ming; Campbell, Charles E; Chen, Li; Zhao, Huawei; Chernyak, Dimitri

    2009-01-20

    In wavefront-driven vision correction, ocular aberrations are often measured on the pupil plane and the correction is applied on a different plane. The problem with this practice is that any changes undergone by the wavefront as it propagates between planes are not currently included in devising customized vision correction. With some valid approximations, we have developed an analytical foundation based on geometric optics in which Zernike polynomials are used to characterize the propagation of the wavefront from one plane to another. Both the boundary and the magnitude of the wavefront change after the propagation. Taylor monomials were used to realize the propagation because of their simple form for this purpose. The method we developed to identify changes in low-order aberrations was verified with the classical vertex correction formula. The method we developed to identify changes in high-order aberrations was verified with ZEMAX ray-tracing software. Although the method may not be valid for highly irregular wavefronts and it was only proven for wavefronts with low-order or high-order aberrations, our analysis showed that changes in the propagating wavefront are significant and should, therefore, be included in calculating vision correction. This new approach could be of major significance in calculating wavefront-driven vision correction whether by refractive surgery, contact lenses, intraocular lenses, or spectacles.
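    The classical vertex-correction formula used here to verify the low-order result has a one-line form; a sketch (sign convention assumed: d is the distance, in metres, by which the correction plane moves toward the eye):

```python
def vertex_corrected_power(F, d):
    """Classical vertex correction: dioptric power F referred to a plane a
    distance d (metres) closer to the eye along the propagation direction."""
    return F / (1 - d * F)

# A -10 D spectacle lens at a 12 mm vertex distance corresponds to about
# -8.93 D at the corneal plane.
corneal = vertex_corrected_power(-10.0, 0.012)
```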

  19. Polynomial expansions of single-mode motions around equilibrium points in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Lei, Hanlun; Xu, Bo; Circi, Christian

    2018-05-01

    In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, in which a suitable pair of independent variables is taken as modal coordinates and the remaining state variables are expressed as polynomial series in them. Based on the invariant manifold approach, the general procedure for constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around the collinear libration points, and up to orders eight and six for the planar and vertical periodic motions around the triangular libration point, respectively. The polynomial expansions constructed can be used to determine the initial states for the single-mode motions around the equilibrium points. To check their validity, the accuracy of the initial states determined by the polynomial expansions is evaluated.

  20. On the coefficients of differentiated expansions of ultraspherical polynomials

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1989-01-01

    A formula expressing the coefficients of an expansion of ultraspherical polynomials that has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion is proved. The particular examples of Chebyshev and Legendre polynomials are considered.
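    The Chebyshev and Legendre cases are special instances of the ultraspherical family, and the kind of identity the formula encodes can be checked numerically with NumPy's polynomial modules:

```python
import numpy as np
from numpy.polynomial import chebyshev as C, legendre as L

# d/dx T_2(x) = 4 T_1(x): the Chebyshev coefficient vector [0, 0, 1]
# differentiates to [0, 4].
dcheb = C.chebder([0.0, 0.0, 1.0])

# d/dx P_3(x) = P_0(x) + 5 P_2(x): the Legendre coefficient vector
# [0, 0, 0, 1] differentiates to [1, 0, 5].
dleg = L.legder([0.0, 0.0, 0.0, 1.0])
```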

  1. Quality factor analysis for aberrated laser beam

    NASA Astrophysics Data System (ADS)

    Ghafary, B.; Alavynejad, M.; Kashani, F. D.

    2006-12-01

    The quality factor of laser beams has attracted considerable attention, and several different approaches have been reported to treat the problem. In this paper we analyze the quality factor of a laser beam and compare the effect of different aberrations on beam quality by expanding the pure phase term of the wavefront in terms of Zernike polynomials. We also analyze experimentally the change of beam quality for different astigmatism aberrations and compare the theoretical results with the experimental ones. The experimental and theoretical results are in good agreement.

  2. Correlator optical wavefront sensor COWS

    NASA Astrophysics Data System (ADS)

    1991-02-01

    This report documents the significant upgrades and improvements made to the correlator optical wavefront sensor (COWS) optical bench during this phase of the program. Software for the experiment was reviewed and documented. Flowcharts showing the program flow are included, as well as documentation for programs written to calculate and display Zernike polynomials. The system was calibrated and aligned, and a series of experiments was conducted to determine the optimum settings for the input and output MOSLM polarizers. In addition, the design of a simple aberration generator is included.

  3. Partial-fraction expansion and inverse Laplace transform of a rational function with real coefficients

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.; Mott, H.

    1974-01-01

    This paper presents a technique for the partial-fraction expansion of functions which are ratios of polynomials with real coefficients. The expansion coefficients are determined by writing the polynomials as Taylor series and obtaining the Laurent series expansion of the function. The general formula for the inverse Laplace transform is also derived.
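    The idea can be illustrated symbolically with SymPy (an assumption of this sketch; the paper's own procedure is analytic, via Taylor and Laurent series, not symbolic software):

```python
import sympy as sp

s = sp.symbols('s')
F = (s + 3) / ((s + 1) * (s + 2))

# Partial-fraction expansion: residue 2 at s = -1 and residue -1 at s = -2,
# i.e. F(s) = 2/(s + 1) - 1/(s + 2).
pf = sp.apart(F, s)
```

    The inverse Laplace transform then follows term by term, each simple pole a/(s + p) contributing a*exp(-p*t).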

  4. Eye investigation with optical microradar techniques

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.; Kurashov, Vitalij N.; Chyzh, Igor H.

    1997-08-01

    Many problems exist in ophthalmology where accurate measurements of eye structure and its parameters can be provided using the optical radar concept of remote sensing. Coherent and non-coherent approaches are reviewed, aimed at corneal shape measurement and at measurement of the aberration distribution in the elements and media of the eye. Coherent radar techniques are analyzed taking into account the non-reciprocity of the eye media and the anisoplanatism of the fovea, which results in the exit image not being an autocorrelation of the single-pass point-spread function, even in the approximation of spatial invariance of the system. It is found that the aberrations of the cornea and lens are not additive and may not be reduced to summary aberrations at the entrance aperture of the lens. The anisoplanatism of the fovea and its roughness lead to a low degree of coherence in the scattered light. To estimate the measurement results, a methodology has been developed using Zernike polynomial expansions. Aberration distributions were obtained from measurements at 16 points of the eye situated on two concentric circles. The wave aberration functions were approximated using a least-squares criterion. Thus, all data necessary for corneal ablation with the PRK procedure were provided.

  5. Corneal aberrations in keratoconic eyes: influence of pupil size and centering

    NASA Astrophysics Data System (ADS)

    Comastri, S. A.; Perez, L. I.; Pérez, G. D.; Martin, G.; Bianchetti, A.

    2011-01-01

    Ocular aberrations vary among subjects and under different conditions and are commonly analyzed by expanding the wavefront aberration function in Zernike polynomials. In previous articles, explicit analytical formulas were obtained to transform Zernike coefficients of up to 7th order corresponding to an original pupil into those related to a contracted, displaced new pupil. In the present paper these formulas are applied to 20 keratoconic corneas of varying severity. Employing the SN CT1000 topographer, aberrations of the anterior corneal surface for a pupil of semi-diameter 3 mm centered on the keratometric axis are evaluated, the relation between the higher-order root-mean-square wavefront error and the KISA% index characterizing keratoconus is studied, and the size and centering of the ocular photopic natural pupil are determined. Using these data and the transformation formulas, new coefficients associated with the photopic pupil size are computed, and their variation when the coordinate origin is shifted from the keratometric axis to the ocular pupil centre is analyzed.
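    For the rotationally symmetric terms, the effect of contracting the pupil can be reproduced numerically by re-fitting the wavefront on the smaller pupil; this is a simplified stand-in for the analytical 7th-order transformation formulas the paper applies (mode definitions are the ANSI-normalized ones):

```python
import numpy as np

# ANSI-normalized rotationally symmetric Zernike modes.
Z00 = lambda r: np.ones_like(r)
Z20 = lambda r: np.sqrt(3) * (2 * r ** 2 - 1)
Z40 = lambda r: np.sqrt(5) * (6 * r ** 4 - 6 * r ** 2 + 1)

def rescale_spherical_terms(c2, c4, k, n=2000):
    """Re-fit piston/defocus/spherical coefficients after contracting the
    pupil radius by a factor k (numeric refit over the new unit pupil)."""
    r = np.linspace(0.0, 1.0, n)             # radius on the NEW (unit) pupil
    w = c2 * Z20(k * r) + c4 * Z40(k * r)    # original wavefront, contracted
    A = np.column_stack([Z00(r), Z20(r), Z40(r)])
    out, *_ = np.linalg.lstsq(A, w, rcond=None)
    return out                               # (piston, defocus, spherical)

# Pure spherical aberration seen through an 80% pupil: part of it re-fits
# as defocus, and the new spherical coefficient scales as k**4.
coeffs = rescale_spherical_terms(0.0, 0.1, 0.8)
```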

  6. Robust source and mask optimization compensating for mask topography effects in computational lithography.

    PubMed

    Li, Jia; Lam, Edmund Y

    2014-04-21

    Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.

  7. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distributions. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose coherence-optimal sampling: a Markov chain Monte Carlo sampling which directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
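    The natural-sampling baseline for the Hermite case can be sketched in a few lines; this uses ordinary least squares in place of the paper's ℓ1-minimization, so it only illustrates the sampling-and-recovery setup, not the sparse-recovery theory:

```python
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(1)
xi = rng.standard_normal(200)     # natural (Gaussian) sampling for Hermite PC
order = 5

# Design matrix of probabilists' Hermite polynomials He_0 .. He_5.
Phi = np.column_stack([He.hermeval(xi, np.eye(order + 1)[k])
                       for k in range(order + 1)])

# Sparse truth: c0 = 1.0, c2 = 0.3, all other coefficients zero.
u = 1.0 + 0.3 * He.hermeval(xi, [0.0, 0.0, 1.0])
c, *_ = np.linalg.lstsq(Phi, u, rcond=None)
```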

  8. On the coefficients of integrated expansions of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2006-03-01

    A new formula expressing explicitly the integrals of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another new explicit formula relating the Bessel coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times to the coefficients of the original expansion of the function is also established. An application of these formulae to solving ordinary differential equations with varying coefficients is discussed.

  9. Analysis of nodal aberration properties in off-axis freeform system design.

    PubMed

    Shi, Haodong; Jiang, Huilin; Zhang, Xin; Wang, Chao; Liu, Tao

    2016-08-20

    Freeform surfaces have the advantage of balancing off-axis aberrations. In this paper, based on the framework of nodal aberration theory (NAT) applied to coaxial systems, the third-order astigmatism and coma wave aberration expressions of an off-axis system with Zernike polynomial surfaces are derived. The relationship between the off-axis geometry and the surface shape acting on the nodal distributions is revealed. The nodal aberration properties of the off-axis freeform system are analyzed and validated using full-field displays (FFDs). It is demonstrated that adding Zernike terms, up to the ninth, to the off-axis system modifies the nodal locations, but the field dependence of the third-order aberrations does not change. On this basis, an off-axis two-mirror freeform system with a 500 mm effective focal length (EFL) and a 300 mm entrance pupil diameter (EPD) working in the long-wave infrared is designed. The field-constant aberrations induced by surface tilting are corrected by selecting specific Zernike terms. The design results show that the nodes of third-order astigmatism and coma move back into the field of view (FOV). The modulation transfer function (MTF) curves are above 0.4 at 20 line pairs per millimeter (lp/mm), which meets the infrared reconnaissance requirement. This work provides essential insight and guidance for aberration correction in off-axis freeform system design.

  10. Computerized lateral-shear interferometer

    NASA Astrophysics Data System (ADS)

    Hasegan, Sorin A.; Jianu, Angela; Vlad, Valentin I.

    1998-07-01

    A lateral-shear interferometer, coupled with a computer for laser wavefront analysis, is described. A CCD camera is used to transfer the fringe images through a frame-grabber into a PC. 3D phase maps are obtained by fringe pattern processing using a new algorithm for direct spatial reconstruction of the optical phase. The program describes phase maps by Zernike polynomials yielding an analytical description of the wavefront aberration. A compact lateral-shear interferometer has been built using a laser diode as light source, a CCD camera and a rechargeable battery supply, which allows measurements in-situ, if necessary.

  11. Efficient algorithms for construction of recurrence relations for the expansion and connection coefficients in series of Al-Salam Carlitz I polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2005-12-01

    Two formulae expressing explicitly the derivatives and moments of Al-Salam-Carlitz I polynomials of any degree and for any order in terms of Al-Salam-Carlitz I polynomials themselves are proved. Two other formulae for the expansion coefficients of general-order q-derivatives D_q^p f(x), and for the moments x^l D_q^p f(x), of an arbitrary function f(x) in terms of its original expansion coefficients are also obtained. The application of these formulae to solving q-difference equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) to build and solve recursively for the connection coefficients between Al-Salam-Carlitz I polynomials and any system of basic hypergeometric orthogonal polynomials belonging to the q-Hahn class is described.

  12. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to correct astigmatism errors in the absolute shape reconstruction of an optical flat using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by making use of the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. To correct the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomials on a circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotational differential data, and subsequently the astigmatism errors are corrected. Computer simulation proves the validity of the proposed method.
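    The rotational invariance exploited here has a compact coefficient-level form: rotating the wavefront by an angle theta mixes only the cos-type and sin-type coefficients of the same azimuthal order m. A sketch of that standard property (not the paper's full modification procedure):

```python
import numpy as np

def rotate_zernike_pair(a, b, m, theta):
    """Coefficients (a, b) of the cos(m*phi)- and sin(m*phi)-type Zernike
    modes of azimuthal order m after rotating the wavefront by theta radians."""
    c, s = np.cos(m * theta), np.sin(m * theta)
    return a * c - b * s, a * s + b * c

# Rotating pure 0-degree astigmatism (m = 2) by 45 degrees turns it into
# pure 45-degree (sin-type) astigmatism.
a2, b2 = rotate_zernike_pair(1.0, 0.0, 2, np.pi / 4)
```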

  13. Integrated thermal disturbance analysis of optical system of astronomical telescope

    NASA Astrophysics Data System (ADS)

    Yang, Dehua; Jiang, Zibo; Li, Xinnan

    2008-07-01

    During operation, an astronomical telescope undergoes thermal disturbance, which is especially serious in a solar telescope and may cause degradation of image quality. This drives careful investigation of thermal loads, and of the measures applied to assess their effect on final image quality, during the design phase. Integrated modeling analysis boosts the process of finding a comprehensively optimal design scheme by software simulation. In this paper, we focus on the finite element analysis (FEA) software ANSYS for thermal disturbance analysis and the optical design software ZEMAX for optical system design. The integrated model based on ANSYS and ZEMAX is first briefly described from an overview point of view. Afterwards, we discuss the establishment of the thermal model. A complete power-series polynomial in the spatial coordinates is introduced to represent the temperature field analytically. We also borrow the linear interpolation technique derived from shape functions in finite element theory to interface the thermal model and the structural model, and further to apply the temperatures to the structural model nodes. Thereby, the thermal loads are transferred with as high fidelity as possible. Data interface and communication between the two software packages are discussed, mainly for mirror surfaces and hence for the representation and transformation of the optical figure. We compare and comment on the two different methods, Zernike polynomials and power-series expansion, for representing a deformed optical surface and transferring it to ZEMAX. Additionally, the application of these methods to surfaces with non-circular apertures is discussed. Finally, an optical telescope with a parabolic primary mirror of 900 mm diameter is analyzed to illustrate the above discussion. A finite element model of the most relevant parts of the telescope is generated in ANSYS with the necessary structural simplifications and equivalences. Thermal analysis is performed, the resulting positions and figures of the optics are retrieved and transferred to ZEMAX, and the final image quality is evaluated under thermal disturbance.

  14. Ocular wavefront aberrations in the common marmoset Callithrix jacchus: effects of age and refractive error

    PubMed Central

    Coletta, Nancy J.; Marcos, Susana; Troilo, David

    2012-01-01

    The common marmoset, Callithrix jacchus, is a primate model for emmetropization studies. The refractive development of the marmoset eye depends on visual experience, so knowledge of the optical quality of the eye is valuable. We report on the wavefront aberrations of the marmoset eye, measured with a clinical Hartmann-Shack aberrometer (COAS, AMO Wavefront Sciences). Aberrations were measured on both eyes of 23 marmosets whose ages ranged from 18 to 452 days. Twenty-one of the subjects were members of studies of emmetropization and accommodation, and two were untreated normal subjects. Eleven of the 21 experimental subjects had worn monocular diffusers or occluders and ten had worn binocular spectacle lenses of equal power. Monocular deprivation or lens rearing began at about 45 days of age and ended at about 108 days of age. All refractions and aberration measures were performed while the eyes were cyclopleged; most aberration measures were made while subjects were awake, but some control measurements were performed under anesthesia. Wavefront error was expressed as a seventh-order Zernike polynomial expansion, using the Optical Society of America’s naming convention. Aberrations in young marmosets decreased up to about 100 days of age, after which the higher-order RMS aberration leveled off to about 0.10 micron over a 3 mm diameter pupil. Higher-order aberrations were 1.8 times greater when the subjects were under general anesthesia than when they were awake. Young marmoset eyes were characterized by negative spherical aberration. Visually deprived eyes of the monocular deprivation animals had greater wavefront aberrations than their fellow untreated eyes, particularly for asymmetric aberrations in the odd-numbered Zernike orders. Both lens-treated and deprived eyes showed similar significant increases in Z3-3 trefoil aberration, suggesting the increase in trefoil may be related to factors that do not involve visual feedback. PMID:20800078
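    The RMS figures quoted here follow directly from the coefficients: for an orthonormal (ANSI-normalized) Zernike expansion, the RMS wavefront error over the pupil is the root sum of squares of the coefficients. A small sketch with invented coefficient values:

```python
import numpy as np

def rms_from_zernike(coeffs, min_order=2):
    """Root sum of squares of orthonormal Zernike coefficients {(n, m): c},
    keeping radial orders n >= min_order (min_order=3 gives higher-order RMS)."""
    return float(np.sqrt(sum(c * c for (n, m), c in coeffs.items()
                             if n >= min_order)))

coeffs = {(2, 0): 0.5, (2, 2): 0.2, (3, -3): 0.1, (4, 0): 0.05}
total_rms = rms_from_zernike(coeffs, min_order=2)
hoa_rms = rms_from_zernike(coeffs, min_order=3)
```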

  15. Subjective and Quantitative Measurement of Wavefront Aberrations in Nuclear Cataracts – A Retrospective Case Controlled Study

    PubMed Central

    Wali, Upender K.; Bialasiewicz, Alexander A.; Al-Kharousi, Nadia; Rizvi, Syed G.; Baloushi, Habiba

    2009-01-01

    Purpose: To measure, quantify and compare ocular aberrations due to nuclear cataracts. Setting: Department of Ophthalmology and School for Ophthalmic Technicians, College of Medicine and Health Sciences, Sultan Qaboos University, Muscat, Oman. Design: Retrospective case-controlled study. Methods: 113 eyes of 77 patients with nuclear cataract (NC) were recruited from the outpatient clinic of a major tertiary referral center for ophthalmology. Patients having NC with no co-existing ocular pathologies were selected. All patients underwent wavefront aberrometry (make) using a Hartmann-Shack (HS) aberrometer. Consent was obtained from all patients. Higher-order aberrations (HOA) were calculated with Zernike polynomials up to the fourth order. For comparison, 28 eyes of 15 subjects with no lenticular opacities (control group) were recruited and evaluated in an identical manner. No pupillary mydriasis was performed in either group. Results: Total aberrations were almost six times higher in the NC group compared to control (normal) subjects. The HOA were 21 times higher in the NC group, and coma was significantly higher in NC eyes compared to the normal (control) group. The pupillary diameter was significantly larger in the control group (5.48 mm ± 1.0024, p<.001) compared to NC (3.05 mm ± 1.9145) subjects (probably due to the younger control age group). Amongst Zernike coefficients up to the fourth order, two polynomials, defocus (Z20) and spherical aberration (Z40), were found to be significantly greater in the NC group compared to the normal control group. Conclusion: Nuclear cataracts predominantly produce increased defocus and spherical aberration. This could explain visual symptoms such as image deterioration in spite of normal visual acuity. PMID:20142953

  16. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.

  17. Fast protein tertiary structure retrieval based on global surface shape similarity.

    PubMed

    Sael, Lee; Li, Bin; La, David; Fang, Yi; Ramani, Karthik; Rustamov, Raif; Kihara, Daisuke

    2008-09-01

    Characterization and identification of similar tertiary structures of proteins provide rich information for investigating function and evolution. The importance of structure similarity searches is increasing as structure databases continue to expand, partly due to the structural genomics projects. A crucial drawback of conventional protein structure comparison methods, which compare structures by their main-chain orientation or the spatial arrangement of secondary structure, is that a database search is too slow to be done in real time. Here we introduce a global surface shape representation by three-dimensional (3D) Zernike descriptors, which represent a protein structure compactly as a series expansion of 3D functions. With this simplified representation, a search against a few thousand structures takes less than a minute. To investigate the agreement between the surface representation defined by the 3D Zernike descriptor and the conventional main-chain based representation, a benchmark was performed against a protein classification generated by the combinatorial extension algorithm. Despite the different representation, the 3D Zernike descriptor retrieved proteins of the same conformation defined by combinatorial extension in 89.6% of the cases within the top five closest structures. Real-time protein structure search by 3D Zernike descriptors will open up new possibilities for large-scale global and local protein surface shape comparison.
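
The key property behind such descriptors is rotation invariance: the coefficients within each (n, l) group transform unitarily under rotation, so their Euclidean norms are unchanged. A toy sketch of that idea (hypothetical names; the actual 3D Zernike moment computation is omitted):

```python
from math import cos, sin, sqrt

def invariant_descriptor(coeffs_by_nl):
    """Collapse each per-(n, l) coefficient vector (indexed by m) into its
    Euclidean norm, mimicking the rotation invariance of 3D Zernike descriptors."""
    return {nl: sqrt(sum(c * c for c in vec)) for nl, vec in coeffs_by_nl.items()}

def descriptor_distance(d1, d2):
    """Euclidean distance between two descriptor vectors over all (n, l) keys."""
    keys = set(d1) | set(d2)
    return sqrt(sum((d1.get(k, 0.0) - d2.get(k, 0.0)) ** 2 for k in keys))

# A rotation mixes the m-components unitarily; a toy 2x2 rotation
# applied to one coefficient vector leaves its norm unchanged.
theta = 0.7
vec = [0.3, -1.2]
rotated = [cos(theta) * vec[0] - sin(theta) * vec[1],
           sin(theta) * vec[0] + cos(theta) * vec[1]]
d_before = invariant_descriptor({(2, 0): vec})
d_after = invariant_descriptor({(2, 0): rotated})
```

Comparing two structures then reduces to a Euclidean distance between fixed-length descriptor vectors, which is why a database scan can run in seconds.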

  18. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc., AMS... orthogonal polynomial functionals from the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection...1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations. It is worth

  19. Aberration influence and active compensation on laser mode properties for asymmetric folded resonators

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Hu, Zhiqiu; Yang, Wentao; Su, Likun

    2017-09-01

    We demonstrate the influence of typical intracavity perturbations on mode features, together with results of aberrated-wavefront compensation, in a folded-type unstable resonator used in high-energy lasers. The mode properties and aberration coefficients under intracavity misalignment are obtained by iterative calculation and Zernike polynomial fitting. Experimental results relating intracavity misalignment to mode characteristics are further obtained by means of S-H detection and modal wavefront reconstruction. The results indicate that intracavity phase perturbation has a significant influence on the properties of the out-coupled beam: the uniformity and symmetry of the mode are rapidly disrupted even by a slight misalignment of the resonator mirrors. Meanwhile, the far-field beam pattern degrades noticeably as the distance between the convex mirror and the phase-perturbation position increases, even when an equivalent disturbance is introduced into the resonator. A closed-loop device for compensating low-order intracavity aberration was successfully fabricated. Moreover, Zernike defocus aberration is effectively controlled by precisely adjusting the resonator length, and the beam quality is noticeably improved.

  20. Characterising a holographic modal phase mask for the detection of ocular aberrations

    NASA Astrophysics Data System (ADS)

    Corbett, A. D.; Leyva, D. Gil; Diaz-Santana, L.; Wilkinson, T. D.; Zhong, J. J.

    2005-12-01

    The accurate measurement of the double-pass ocular wave front has been shown to have a broad range of applications from LASIK surgery to adaptively corrected retinal imaging. The ocular wave front can be accurately described by a small number of Zernike circle polynomials. The modal wave front sensor was first proposed by Neil et al. and allows the coefficients of the individual Zernike modes to be measured directly. Typically the aberrations measured with the modal sensor are smaller than those seen in the ocular wave front. In this work, we investigated a technique for adapting a modal phase mask for the sensing of the ocular wave front. This involved extending the dynamic range of the sensor by increasing the pinhole size to 2.4 mm and optimising the mask bias to 0.75λ. This was found to decrease the RMS error by up to a factor of three for eye-like aberrations with amplitudes up to 0.2 μm. For aberrations taken from a sample of real-eye measurements a 20% decrease in the RMS error was observed.

  1. Design of space-borne imager with wide field of view based on freeform aberration theory

    NASA Astrophysics Data System (ADS)

    Shi, Haodong; Zhang, Jizhen; Wang, Lingjie; Zhang, Xin; Jiang, Huilin

    2016-10-01

    Freeform surfaces have advantages in balancing the asymmetric aberrations of unobscured push-broom imagers. However, since conventional paraxial aberration theory is no longer appropriate for freeform system design, designers lack insight into how the freeform aberration distribution affects imaging quality. In order to design freeform optical systems efficiently, the contribution and nodal behavior of the coma and astigmatism introduced by a Standard Zernike polynomial surface are discussed in detail. An unobscured three-mirror optical system with an 850 mm effective focal length and a 20° × 2° field of view (FOV) is designed. The coma and astigmatism nodal positions are moved into the real FOV by selecting and optimizing the Zernike terms accordingly, which balances the off-axis asymmetric aberration. The results show that the modulation transfer function (MTF) is close to the diffraction limit, and the distortion throughout the full FOV is less than 0.25%. Finally, a computer-generated hologram (CGH) for freeform surface testing is designed. The CGH design error RMS is lower than λ/1000 at 632.8 nm, which meets the requirements for measurement.

  2. A conforming spectral collocation strategy for Stokes flow through a channel contraction

    NASA Technical Reports Server (NTRS)

    Phillips, Timothy N.; Karageorghis, Andreas

    1989-01-01

    A formula expressing the coefficients of an expansion of ultraspherical polynomials which has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion is proved. The particular examples of Chebyshev and Legendre polynomials are considered.
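
In the Chebyshev special case, the relation between the coefficients of an expansion and those of its derivative is a short downward recurrence. A minimal sketch of that recurrence (my own illustration, not the paper's general ultraspherical formula):

```python
def cheb_derivative_coeffs(a):
    """Coefficients b_k of f'(x) = sum b_k T_k(x), given the Chebyshev
    coefficients a_k of f(x) = sum a_k T_k(x), via the downward recurrence
    b_{k-1} = b_{k+1} + 2*k*a_k, with b_0 halved at the end."""
    n = len(a) - 1
    if n <= 0:
        return [0.0]
    b = [0.0] * (n + 2)  # two guard entries so the recurrence needs no special cases
    for k in range(n, 0, -1):
        b[k - 1] = b[k + 1] + 2 * k * a[k]
    b[0] /= 2.0
    return b[:n]

# Example: T_3(x) = 4x^3 - 3x has derivative 12x^2 - 3 = 3*T_0 + 6*T_2.
```

Each further differentiation is another pass of the same O(n) recurrence, which is what makes such formulae attractive in spectral collocation methods.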

  3. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expansion coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.

  4. Uncertainty Quantification in CO2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms in the expansion small. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
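
Stripped to a single Gaussian input, a nonintrusive PCE is just a projection onto orthogonal polynomials, evaluated by quadrature. A deliberately tiny sketch (probabilists' Hermite basis and a hardcoded 3-point Gauss-Hermite rule; the paper's MIP basis selection and multivariate machinery are not shown):

```python
from math import sqrt

# Probabilists' Hermite polynomials He_0..He_2, orthogonal under N(0,1),
# with squared norms E[He_k^2] = k!.
HERMITE = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]
NORM_SQ = [1.0, 1.0, 2.0]

# 3-point Gauss-Hermite rule for N(0,1): exact for polynomials up to degree 5.
NODES = [-sqrt(3.0), 0.0, sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def pce_coeffs(model):
    """Project `model` onto He_0..He_2: c_k = E[model(X) He_k(X)] / E[He_k^2]."""
    return [
        sum(w * model(x) * HERMITE[k](x) for x, w in zip(NODES, WEIGHTS)) / NORM_SQ[k]
        for k in range(3)
    ]

def pce_eval(coeffs, x):
    """Cheap closed-form surrogate used in place of the expensive model."""
    return sum(c * h(x) for c, h in zip(coeffs, HERMITE))

# x^2 = He_0(x) + He_2(x), so the projection recovers [1, 0, 1].
c = pce_coeffs(lambda x: x * x)
```

Once the coefficients are in hand, `pce_eval` stands in for the simulator during Monte Carlo sampling, which is where the orders-of-magnitude speedup comes from.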

  5. The Gibbs Phenomenon for Series of Orthogonal Polynomials

    ERIC Educational Resources Information Center

    Fay, T. H.; Kloppers, P. Hendrik

    2006-01-01

    This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…

  6. Generic expansion of the Jastrow correlation factor in polynomials satisfying symmetry and cusp conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander

    2015-02-28

    Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital-based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in terms of fairly arbitrary scaling functions, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.

  7. Optical design of a novel instrument that uses the Hartmann-Shack sensor and Zernike polynomials to measure and simulate customized refraction correction surgery outcomes and patient satisfaction

    NASA Astrophysics Data System (ADS)

    Yasuoka, Fatima M. M.; Matos, Luciana; Cremasco, Antonio; Numajiri, Mirian; Marcato, Rafael; Oliveira, Otavio G.; Sabino, Luis G.; Castro N., Jarbas C.; Bagnato, Vanderlei S.; Carvalho, Luis A. V.

    2016-03-01

    An optical system that conjugates the patient's pupil to the plane of a Hartmann-Shack (HS) wavefront sensor has been simulated using optical design software, and an optical bench prototype was assembled using a mechanical eye device, beam splitter, illumination system, lenses, mirrors, a mirrored prism, a movable mirror, the wavefront sensor, and a CCD camera. The mechanical eye device is used to simulate aberrations of the eye. Rays emitted from this device travel through the beam splitter into the optical system; some fall on the CCD camera and others pass through the optical system and finally reach the sensor. Eye models based on typical in vivo eye aberrations were constructed using the optical design software Zemax. The computed HS images for each case were acquired and processed using customized techniques. The simulated and real images for low-order aberrations were compared using centroid coordinates to ensure that the optical bench matches the simulated system. Afterwards, a simulated version of the retinal image was constructed to show how these typical eyes would perceive an optotype positioned 20 ft away. Personalized corrections are specified by eye doctors as different Zernike polynomial values, and the optical images are rendered with the new parameters. Images of how the eye would see with or without correction of certain aberrations are generated, showing which aberrations can be corrected and to what degree. The patient can then "personalize" the correction to their own satisfaction. This new approach to wavefront sensing is a promising change of paradigm towards the betterment of the patient-physician relationship.

  8. Optical Imaging and Radiometric Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digital sampled versions of functions ranging from Zernike polynomials to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge diffusion modulation transfer function (MTF).

  9. Injection molding lens metrology using software configurable optical test system

    NASA Astrophysics Data System (ADS)

    Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian

    2016-10-01

    Optical plastic lenses produced by injection molding possess numerous advantages: light weight, impact resistance, low cost, etc. The measuring methods in the optical shop are mainly interferometry and profilometry. However, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, and can be used to measure large-diameter mirrors, aspheric, and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase-shifting method, we propose another data collection method, dot-matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting approach to mapping distortion is not only simple to operate but also offers high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot-matrix projection method has high accuracy and that SCOTS has important significance for on-line inspection in the optical shop.

  10. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technique to obtain an analytical expression for the object wavefront used in computer-generated holograms (CGHs). In many cases, to reduce the difficulty of optical fabrication, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, an analytical expression for the object wavefront must be fitted. Zernike polynomials are well suited to fitting the wavefront of centrosymmetric optical systems, but not of axisymmetrical optical systems. Although a high-degree polynomial fit achieves high precision at the fitting nodes, its greatest shortcoming is that any departure from the nodes can produce large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time when coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted with bicubic uniform B-splines as well as with high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is the more competitive method for fitting the analytical expression of the object wavefront used in an off-axis CGH, owing to its higher fitting precision and C2 continuity.
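
For reference, the C2 continuity the abstract appeals to comes from the cubic uniform B-spline blending functions, which are nonnegative and sum to one on every knot span. A minimal sketch (a bicubic wavefront patch would be the tensor product of two such bases):

```python
def cubic_bspline_basis(u):
    """The four uniform cubic B-spline blending functions at local
    parameter u in [0, 1]; nonnegative and summing to 1."""
    return (
        (1.0 - u) ** 3 / 6.0,
        (3.0 * u ** 3 - 6.0 * u ** 2 + 4.0) / 6.0,
        (-3.0 * u ** 3 + 3.0 * u ** 2 + 3.0 * u + 1.0) / 6.0,
        u ** 3 / 6.0,
    )

def bspline_point(ctrl, u):
    """Point on a single cubic B-spline span from its four control values."""
    return sum(b * p for b, p in zip(cubic_bspline_basis(u), ctrl))
```

Because each span only sees four local control values, perturbing one node stays local, avoiding the global "pulsation" a high-degree polynomial fit exhibits.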

  11. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.

  12. Long-term stable active mount for reflective optics

    NASA Astrophysics Data System (ADS)

    Reinlein, C.; Brady, A.; Damm, C.; Mohaupt, M.; Kamm, A.; Lange, N.; Goy, M.

    2016-07-01

    We report on the development of an active mount with an orthogonal actuator matrix offering stable shape optimization for gratings or mirrors. We introduce the actuator distribution and calculate the accessible Zernike polynomials from the actuator influence functions. Experimental tests show the capability of the device to compensate for aberrations of grating substrates, as we report measurements on a 110×105 mm² and a 220×210 mm² device. With these devices, we evaluate the position-dependent aberrations, long-term shape stability, and temperature-induced shape variations. We then discuss potential applications in space telescopes and Earth-based facilities where long-term stability is mandatory.

  13. Artificial neural network for the determination of Hubble Space Telescope aberration from stellar images

    NASA Technical Reports Server (NTRS)

    Barrett, Todd K.; Sandler, David G.

    1993-01-01

    An artificial-neural-network method, first developed for the measurement and control of atmospheric phase distortion, using stellar images, was used to estimate the optical aberration of the Hubble Space Telescope. A total of 26 estimates of distortion was obtained from 23 stellar images acquired at several secondary-mirror axial positions. The results were expressed as coefficients of eight orthogonal Zernike polynomials: focus through third-order spherical. For all modes other than spherical the measured aberration was small. The average spherical aberration of the estimates was -0.299 micron rms, which is in good agreement with predictions obtained when iterative phase-retrieval algorithms were used.

  14. High resolution particle tracking method by suppressing the wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Chang, Xinyu; Yang, Yuan; Kou, Li; Jin, Lei; Lu, Junsheng; Hu, Xiaodong

    2018-01-01

    Digital in-line holographic microscopy is one of the most efficient methods for particle tracking, as it can precisely measure the axial position of particles. However, imaging systems are often limited by detector noise, image distortions, and human operator misjudgment, making the particles hard to locate. A general method is used to solve this problem: the normalized holograms of the particles are reconstructed at the pupil plane and then fitted to a linear superposition of Zernike polynomial functions to suppress the aberrations. Corresponding experiments were carried out to validate the method, and the results show that nanometer-scale resolution is achieved even when the holograms are poorly recorded.
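
Fitting a pupil-plane phase map to a linear superposition of Zernike terms is an ordinary least-squares problem. A self-contained sketch with a few low-order, unnormalized modes and a plain normal-equations solve (an illustration under assumed conventions, not the paper's aberration model):

```python
# A few low-order (unnormalized) Zernike modes as functions of (x, y) on the unit disk.
MODES = [
    lambda x, y: 1.0,                          # piston
    lambda x, y: x,                            # tilt x
    lambda x, y: y,                            # tilt y
    lambda x, y: 2.0 * (x * x + y * y) - 1.0,  # defocus
]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the small normal system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_zernike(points, phase):
    """Least-squares Zernike coefficients via the normal equations A^T A c = A^T w."""
    A = [[m(x, y) for m in MODES] for x, y in points]
    n = len(MODES)
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
    Atb = [sum(row[i] * w for row, w in zip(A, phase)) for i in range(n)]
    return solve(AtA, Atb)

# Synthetic wavefront: 0.2 * tilt-x + 0.5 * defocus, sampled on a disk grid.
pts = [(i / 10.0, j / 10.0) for i in range(-10, 11) for j in range(-10, 11)
       if i * i + j * j <= 100]
w = [0.2 * x + 0.5 * (2.0 * (x * x + y * y) - 1.0) for x, y in pts]
c = fit_zernike(pts, w)
```

Subtracting the fitted terms from the measured phase leaves a residual with the modeled aberrations suppressed, which is the kind of correction the abstract describes.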

  15. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of a model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. A Zernike polynomial representation of the interfaces allows quantitative wave-aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the algorithm performance. The system allows us to measure intra-ocular distances.
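
The refraction correction rests on Snell's law applied at each detected interface. In scalar form (a sketch only; the system's full 3D vector algorithm is not reproduced here):

```python
from math import asin, degrees, radians, sin

def refract_angle(theta_deg, n1, n2):
    """Snell's law n1*sin(t1) = n2*sin(t2); returns the refracted angle in degrees.
    math.asin raises ValueError past the critical angle (argument > 1)."""
    return degrees(asin(n1 * sin(radians(theta_deg)) / n2))
```

Bending each A-scan ray by this rule at every refractive interface, and rescaling optical path length by the local index, is what converts raw OCT delay data into geometrically correct interface positions.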

  16. Selective corneal optical aberration (SCOA) for customized ablation

    NASA Astrophysics Data System (ADS)

    Jean, Benedikt J.; Bende, Thomas

    2001-06-01

    Wavefront analysis still has some technical problems, which may be solved within the next few years. There are limitations to using the wavefront alone as a diagnostic tool for customized ablation; an ideal combination would be wavefront plus topography. Meanwhile, selective corneal aberration (SCOA) is a method to visualize the optical quality of a measured corneal surface. It is based on true measured 3D elevation information from a video topometer. The values can thus be interpreted either using Zernike polynomials or visualized as a so-called color-coded surface quality map. This map gives a quality factor (corneal aberration) for each measured point of the cornea.

  17. Radius of curvature measurement of spherical smooth surfaces by multiple-beam interferometry in reflection

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk

    2010-06-01

    In this paper, a method is presented to accurately measure the radius of curvature of different types of curved surfaces, with radii of curvature of 38 000, 18 000, and 8 000 mm, using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by the flat-fielding method. The corrected images were analyzed and the form of the surfaces obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Sources of uncertainty in the measurement were calculated by means of ray-tracing simulations, and the uncertainty budget was estimated to be within λ/40.

  18. Analytical and numerical construction of vertical periodic orbits about triangular libration points based on polynomial expansion relations among directions

    NASA Astrophysics Data System (ADS)

    Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei

    2017-08-01

    Inspired by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-body Problem. The ζ-component motion is treated as the dominant motion, and the ξ- and η-component motions are treated as slave motions. The slave motions are related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity during one of the periodic orbital motions. By employing the relations among the three directions, the three-dimensional system can be reduced to a one-dimensional problem. The approximate three-dimensional vertical periodic solution can then be obtained analytically by solving for the dominant motion in the ζ-direction only. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one application, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view for exploring the overall dynamics of periodic orbits around libration points with general rules.

  19. Simulation of aspheric tolerance with polynomial fitting

    NASA Astrophysics Data System (ADS)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    The shape of an aspheric lens changes because of machining errors, altering the optical transfer function and thus degrading image quality. At present there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, polynomial-fit tolerances are allocated on the aspheric surface and imaging is simulated with optical design software. The analysis is based on a set of aspheric imaging systems. An error is generated within the range of a given PV (peak-to-valley) value, expressed in the form of a Zernike polynomial, and added to the aspheric surface as a tolerance term. The MTF of the optical system, obtained through the optical software, is used as the main evaluation index: whether the added error degrades the system MTF acceptably is evaluated at the current PV value, and the PV value is then changed and the procedure repeated until the maximum acceptable PV value is obtained. In line with the actual machining process, errors of various shapes are considered, such as M-type, W-type and random errors. The new method provides a useful reference for practical freeform surface machining.
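
    The central step described above (scaling a Zernike-shaped error to a target PV value before adding it to the surface) can be sketched in Python as follows; this is an illustrative fragment, not the authors' code, and the single spherical-aberration term and grid size are our own choices:

```python
import numpy as np

def zernike_sa(x, y):
    """Zernike spherical aberration Z(4,0) on the unit pupil (unnormalized)."""
    r2 = x**2 + y**2
    return 6 * r2**2 - 6 * r2 + 1

def surface_error(pv, n=201):
    """Zernike-shaped error map rescaled so its peak-to-valley equals `pv`."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    inside = x**2 + y**2 <= 1
    z = np.where(inside, zernike_sa(x, y), np.nan)
    z = z - np.nanmin(z)                 # floor at zero inside the pupil
    return z * (pv / np.nanmax(z))       # max - min now equals pv

err = surface_error(pv=0.5)   # e.g. a 0.5 um tolerance term, to be added
                              # to the nominal sag before the MTF analysis
```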

  20. Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Fei; Department of Mathematics, University of California, Berkeley; Morzfeld, Matthias, E-mail: mmo@math.lbl.gov

    2015-02-01

    Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos, but the cost may increase too fast for this to be cost effective compared to Monte Carlo sampling without a surrogate posterior.

  1. Conformal Galilei algebras, symmetric polynomials and singular vectors

    NASA Astrophysics Data System (ADS)

    Křižka, Libor; Somberg, Petr

    2018-01-01

    We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d,C) with d=1 for any integer value ℓ ∈ N. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.

  2. Translation of Bernstein Coefficients Under an Affine Mapping of the Unit Interval

    NASA Technical Reports Server (NTRS)

    Alford, John A., II

    2012-01-01

    We derive an expression connecting the coefficients of a polynomial expanded in the Bernstein basis to the coefficients of an equivalent expansion of the polynomial under an affine mapping of the domain. The expression may be useful in the calculation of bounds for multi-variate polynomials.
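
    A numerical version of such a conversion can be sketched by sampling: evaluate the polynomial at the affinely mapped nodes and solve for the coefficients in the Bernstein basis of the same degree. This Python fragment, with our own naming, is an illustration rather than the closed-form expression derived in the report:

```python
import numpy as np
from math import comb

def bernstein_matrix(n, t):
    """Columns: Bernstein basis B_{i,n} evaluated at the points t."""
    t = np.asarray(t, dtype=float)
    return np.stack([comb(n, i) * t**i * (1 - t)**(n - i)
                     for i in range(n + 1)], axis=-1)

def map_bernstein_coeffs(c, a, b):
    """Bernstein coefficients of q(t) = p(a + (b - a) t) on [0, 1],
    given the Bernstein coefficients c of p."""
    n = len(c) - 1
    t = np.linspace(0.0, 1.0, n + 1)   # n+1 nodes fix a degree-n polynomial
    vals = bernstein_matrix(n, a + (b - a) * t) @ np.asarray(c)
    return np.linalg.solve(bernstein_matrix(n, t), vals)
```

    A closed-form coefficient relation, as derived in the report, avoids the linear solve; the sampled version above is useful as an independent check.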

  3. Wilson polynomials/functions and intertwining operators for the generic quantum superintegrable system on the 2-sphere

    NASA Astrophysics Data System (ADS)

    Miller, W., Jr.; Li, Q.

    2015-04-01

    The Wilson and Racah polynomials can be characterized as basis functions for irreducible representations of the quadratic symmetry algebra of the quantum superintegrable system on the 2-sphere, HΨ = EΨ, with generic 3-parameter potential. Clearly, the polynomials are expansion coefficients for one eigenbasis of a symmetry operator L2 of H in terms of an eigenbasis of another symmetry operator L1, but the exact relationship appears not to have been made explicit. We work out the details of the expansion to show, explicitly, how the polynomials arise and how the principal properties of these functions - the measure, 3-term recurrence relation, 2nd-order difference equation, duality of these relations, permutation symmetry, intertwining operators and an alternate derivation of Wilson functions - follow from the symmetry of this quantum system. This paper is an exercise to show that quantum mechanical concepts and recurrence relations for Gaussian hypergeometric functions alone suffice to explain these properties; we make no assumptions about the structure of Wilson polynomials/functions, but derive them from quantum principles. There is active interest in the relation between multivariable Wilson polynomials and the quantum superintegrable system on the n-sphere with generic potential, and these results should aid in the generalization. By contracting function-space realizations of irreducible representations of this quadratic algebra to the other superintegrable systems, one can obtain the full Askey scheme of orthogonal hypergeometric polynomials. All of these contractions of superintegrable systems with potential are uniquely induced by Wigner Lie-algebra contractions of so(3, C) and e(2, C). All of the polynomials produced are interpretable as quantum expansion coefficients. It is important to extend this process to higher dimensions.

  4. A globally accurate theory for a class of binary mixture models

    NASA Astrophysics Data System (ADS)

    Dickman, Adriana G.; Stell, G.

    The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.

  5. Zernike expansion coefficients: rescaling and decentring for different pupils and evaluation of corneal aberrations

    NASA Astrophysics Data System (ADS)

    Comastri, Silvia A.; Perez, Liliana I.; Pérez, Gervasio D.; Martin, Gabriel; Bastida, Karina

    2007-03-01

    An analytical method to convert the set of Zernike coefficients that fits the wavefront aberration for a pupil into another corresponding to a contracted and horizontally translated pupil is proposed. The underlying selection rules are provided and the resulting conversion formulae for a seventh-order expansion are given. These formulae are applied to calculate corneal aberrations referred to a given pupil centre in terms of those referred to the keratometric vertex supplied by the SN CT1000 topographer. Four typical cases are considered: a sphere and three eyes—normal, keratoconic and post-LASIK. When the pupil centre is fixed and the pupil diameter decreases from 6 mm to the photopic natural one, leaving aside piston, tilt and defocus, the difference between the root mean square wavefront error computed with the formulae and the topographer is less than 0.04 µm. When the pupil diameter is kept equal to the natural one and the pupil centre is displaced, coefficients vary according to the eye. For a 0.3 mm pupil shift, the variation of coma is at most 0.35 µm and that of spherical aberration 0.01 µm.
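
    A conversion of this kind can also be carried out numerically, by refitting the wavefront on the contracted, displaced pupil. The sketch below uses a handful of low-order Cartesian Zernike terms (Noll Z1 to Z6) and plain least squares; it is not the authors' analytical formulae, and the term list, sampling and function names are our own choices:

```python
import numpy as np

# Low-order Zernike terms (Noll Z1..Z6) in Cartesian form on the unit pupil.
ZERNIKES = [
    lambda x, y: np.ones_like(x),                     # piston
    lambda x, y: 2 * x,                               # tilt x
    lambda x, y: 2 * y,                               # tilt y
    lambda x, y: np.sqrt(3) * (2*(x**2 + y**2) - 1),  # defocus
    lambda x, y: 2 * np.sqrt(6) * x * y,              # oblique astigmatism
    lambda x, y: np.sqrt(6) * (x**2 - y**2),          # vertical astigmatism
]

def convert_coeffs(c, scale, dx=0.0, dy=0.0, n=2000, seed=0):
    """Refit coefficients `c` onto a pupil contracted by `scale` and displaced
    by (dx, dy), all expressed in units of the original pupil radius."""
    rng = np.random.default_rng(seed)
    r = np.sqrt(rng.uniform(0, 1, n))
    th = rng.uniform(0, 2 * np.pi, n)
    x, y = r * np.cos(th), r * np.sin(th)    # sample points on the new pupil
    X, Y = scale * x + dx, scale * y + dy    # same physical points, old frame
    A = np.stack([Z(x, y) for Z in ZERNIKES], axis=1)
    wavefront = np.stack([Z(X, Y) for Z in ZERNIKES], axis=1) @ np.asarray(c)
    return np.linalg.lstsq(A, wavefront, rcond=None)[0]
```

    For example, pure defocus on a pupil contracted to half its radius refits to a quarter of the defocus plus a piston offset, the behavior the analytical selection rules encode.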

  6. Development of Nanotechnology for X-Ray Astronomy Instrumentation

    NASA Technical Reports Server (NTRS)

    Schattenburg, Mark L.

    2004-01-01

    This Research Grant provides support for development of nanotechnology for x-ray astronomy instrumentation. MIT has made significant progress in several development areas. In the last year we have made considerable progress in demonstrating the high-fidelity patterning and replication of x-ray reflection gratings. We developed a process for fabricating blazed gratings in silicon with extremely smooth and sharp sawtooth profiles, and developed a nanoimprint process for replication. We also developed sophisticated new fixturing for holding thin optics during metrology without causing distortion. We developed a new image processing algorithm for our Shack-Hartmann tool that uses Zernike polynomials. This has resulted in much more accurate and repeatable measurements on thin optics.

  7. Wavefront sensing with a thin diffuser

    NASA Astrophysics Data System (ADS)

    Berto, Pascal; Rigneault, Hervé; Guillon, Marc

    2017-12-01

    We propose and implement a broadband, compact, and low-cost wavefront sensing scheme that simply places a thin diffuser in close vicinity to a camera. The local wavefront gradient is determined from the local translation of the speckle pattern. The translation-vector map is computed with a fast diffeomorphic image registration algorithm and integrated to reconstruct the wavefront profile. The simple translation of speckle grains under local wavefront tip/tilt is ensured by the so-called "memory effect" of the diffuser. Quantitative wavefront measurements are demonstrated experimentally both for the first few Zernike polynomials and for phase-imaging applications requiring high resolution. Finally, we provide a theoretical description of the resolution limit, which is supported experimentally.
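
    The first step of such a scheme, estimating the local speckle translation between a reference frame and a distorted frame, can be sketched with FFT cross-correlation. This is an integer-pixel toy version in Python; the paper itself uses a diffeomorphic registration algorithm:

```python
import numpy as np

def speckle_shift(ref, img):
    """Integer-pixel translation of `img` relative to `ref`, assuming a
    cyclic shift model: the cross-correlation peak marks the displacement."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    return (iy - ny if iy > ny // 2 else iy,   # wrap to signed shifts
            ix - nx if ix > nx // 2 else ix)
```

    The local wavefront gradient is proportional to the measured shift divided by the diffuser-camera distance; integrating the resulting gradient map (by cumulative sums or a Poisson solver) then reconstructs the wavefront profile.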

  8. A new version of Stochastic-parallel-gradient-descent algorithm (SPGD) for phase correction of a distorted orbital angular momentum (OAM) beam

    NASA Astrophysics Data System (ADS)

    Jiao Ling, Lin; Xiaoli, Yin; Huan, Chang; Xiaozhou, Cui; Yi-Lin, Guo; Huan-Yu, Liao; Chun-Yu, Gao; Guohua, Wu; Guang-Yao, Liu; Jin-Kun, Jiang; Qing-Hua, Tian

    2018-02-01

    Atmospheric turbulence limits the performance of orbital-angular-momentum-based free-space optical communication (FSO-OAM) systems. To compensate for the phase distortion induced by atmospheric turbulence, wavefront-sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper a new version of SPGD called MZ-SPGD is proposed, combining the Z-SPGD, based on the deformable mirror influence function, and the M-SPGD, based on the Zernike polynomials. Numerical simulations show that the hybrid method converges in markedly fewer iterations while achieving the same compensation effect as Z-SPGD and M-SPGD.
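
    The basic SPGD update underlying these variants perturbs all control coefficients in parallel and steps along the estimated gradient of a performance metric. A minimal Python sketch follows; the gain, perturbation amplitude and quadratic test metric are illustrative choices of ours, not the MZ-SPGD of the paper:

```python
import numpy as np

def spgd(metric, n_modes, gain=0.5, sigma=0.1, iters=500, seed=0):
    """Minimal SPGD: perturb all control coefficients in parallel with random
    signs, estimate the metric gradient from two probes, and step uphill."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_modes)
    for _ in range(iters):
        du = sigma * rng.choice([-1.0, 1.0], n_modes)  # parallel perturbation
        dJ = metric(u + du) - metric(u - du)           # two-sided probe
        u = u + gain * dJ * du                         # gradient-ascent step
    return u
```

    With a quadratic metric such as J(u) = -||u - u*||², the iteration converges to the metric maximum u*; in the adaptive-optics setting, u would be the deformable-mirror or Zernike coefficients and J a sharpness or coupling metric.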

  9. Optical Design And Analysis Of Carbon Dioxide Laser Fusion Systems Using Interferometry And Fast Fourier Transform Techniques

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. K.

    1980-11-01

    The optical design and analysis of the LASL carbon dioxide laser fusion systems required techniques quite different from those currently used for conventional optical design problems. The necessity for this is explored, and the method that has been used successfully at Los Alamos to understand these systems is discussed with examples. The method characterizes the various optical components in their mounts by a Zernike polynomial set and uses fast-Fourier-transform techniques to propagate the beam, taking into account diffraction and the other nonlinear effects that occur in systems of this type. The programs used for the analysis are briefly discussed.

  10. MRI segmentation by active contours model, 3D reconstruction, and visualization

    NASA Astrophysics Data System (ADS)

    Lopez-Hernandez, Juan M.; Velasquez-Aguilar, J. Guadalupe

    2005-02-01

    Advances in 3D data-modelling methods are becoming increasingly popular in biology, chemistry and medical applications. The nuclear magnetic resonance imaging (NMRI) technique has progressed at a spectacular rate over the past few years, and its uses span many applications throughout the body, in both anatomical and functional investigations. In this paper we present the application of Zernike polynomials to a 3D mesh model of the head, using contours acquired from cross-sectional slices by active-contour-model extraction, and we propose visualization with OpenGL 3D graphics of the 2D-3D (slice-surface) information as a diagnostic aid in medical applications.

  11. Simple method based on intensity measurements for characterization of aberrations from micro-optical components.

    PubMed

    Perrin, Stephane; Baranski, Maciej; Froehly, Luc; Albero, Jorge; Passilly, Nicolas; Gorecki, Christophe

    2015-11-01

    We report a simple method, based on intensity measurements, for the characterization of the wavefront and aberrations produced by micro-optical focusing elements. This method employs the setup presented earlier in [Opt. Express 22, 13202 (2014)] for measurements of the 3D point spread function, on which a basic phase-retrieval algorithm is applied. This combination allows for retrieval of the wavefront generated by the micro-optical element and, in addition, quantification of the optical aberrations through the wavefront decomposition with Zernike polynomials. The optical setup requires only an in-motion imaging system. The technique, adapted for the optimization of micro-optical component fabrication, is demonstrated by characterizing a planoconvex microlens.

  12. Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights

    NASA Astrophysics Data System (ADS)

    Kwon, K. H.; Lee, D. W.

    2001-08-01

    Let Sn[f] be the nth partial sum of the expansion of f in orthonormal polynomials with respect to a Freud weight. We obtain sufficient conditions for the boundedness of Sn[f] and discuss the speed of convergence of Sn[f] in weighted Lp space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial Ln[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-x²/2) is the Hermite weight function, we obtain sufficient conditions for the corresponding weighted inequalities to hold, where k = 0, 1, 2, ..., r.
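
    For the Hermite special case mentioned above, the partial sum S_n[f] can be sketched by computing the expansion coefficients with Gauss-Hermite quadrature. The fragment below uses NumPy's probabilists' Hermite module; the 60-point rule and the function names are our own choices:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def partial_sum(f, n, x):
    """S_n[f](x): truncated expansion of f in probabilists' Hermite
    polynomials He_k, orthogonal with weight exp(-t^2 / 2)."""
    t, w = He.hermegauss(60)            # Gauss quadrature absorbs the weight
    c = [np.sum(w * f(t) * He.hermeval(t, [0] * k + [1]))
         / (np.sqrt(2 * np.pi) * math.factorial(k))   # ||He_k||^2
         for k in range(n + 1)]
    return He.hermeval(x, c)
```

    Since x² = He_0 + He_2, the partial sum S_2 already reproduces f(x) = x² exactly.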

  13. Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data

    NASA Astrophysics Data System (ADS)

    Bogdanova, Nina; Koleva, Mihaela

    2018-02-01

    Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion approximating the curve on a non-equidistant point grid. Corridors of the given data and criteria define the optimal behavior of the sought curve. The most important subinterval of the spectral data, where the minimum (surface plasmon resonance absorption) is sought, is investigated. The study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.

  14. State-vector formalism and the Legendre polynomial solution for modelling guided waves in anisotropic plates

    NASA Astrophysics Data System (ADS)

    Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin

    2018-01-01

    We present a numerical method for solving the phase dispersion curves of general anisotropic plates. The approach involves an exact solution to the problem in the form of Legendre polynomial expansions of multiple integrals, which we substitute into the state-vector formalism. To improve the efficiency of the proposed method, the analytical methodology is demonstrated in detail, and the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates are analyzed. The basic feature of the proposed method is the expansion of the field quantities in Legendre polynomials, which avoids solving the transcendental dispersion equation that can only be treated numerically. The state-vector formalism combined with the Legendre polynomial expansion distinguishes adjacent dispersion modes clearly, even when the modes are very close. We illustrate the theoretical dispersion curves obtained by this method for isotropic and anisotropic plates, and finally compare the proposed method with the global matrix method (GMM), finding excellent agreement.

  15. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  16. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  17. A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media

    DTIC Science & Technology

    2010-08-01

    ... applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC) ... represent the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic ... of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that global polynomial interpolation cannot resolve lo...

  18. Fitting relationship between the beam quality β factor of high-energy laser and the wavefront aberration of laser beam

    NASA Astrophysics Data System (ADS)

    Ji, Zhong-Ye; Zhang, Xiao-Fang

    2018-01-01

    The mathematical relation between the beam quality β factor of a high-energy laser and the wavefront aberration of the beam is important in the beam-quality control theory of high-energy laser weapon systems. To obtain this relation, numerical simulation is used. First, Zernike representations of typical distorted atmospheric wavefronts caused by Kolmogorov turbulence are generated. The corresponding beam quality β factors of the distorted wavefronts are then calculated numerically through fast Fourier transforms, and the statistical distribution relating the β factors to the wavefront aberrations is established from the calculated results. Finally, curve fitting is used to establish the mathematical relationship between the two parameters; the fit shows a quadratic relation between the beam quality β factor and the wavefront aberration. Three fitting curves, in which the wavefront aberrations consist of Zernike polynomials of 20, 36 and 60 orders respectively, are established to express the relationship between the β factor and atmospheric wavefront aberrations of different spatial frequencies.
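
    The numerical step, propagating an aberrated pupil to the far field with an FFT and comparing encircled-energy radii, can be sketched as below. Here β is taken as the ratio of the radii containing a fixed energy fraction, one common convention; the grid sizes, energy fraction and defocus test aberration are our own assumptions, not the paper's exact definition:

```python
import numpy as np

def bucket_radius(field, frac=0.838, pad=4):
    """Radius (in pixels) enclosing `frac` of the far-field energy."""
    n = field.shape[0]
    big = np.zeros((pad * n, pad * n), complex)
    big[:n, :n] = field                       # zero-pad before the FFT
    I = np.abs(np.fft.fftshift(np.fft.fft2(big)))**2
    yy, xx = np.mgrid[:pad * n, :pad * n]
    r = np.hypot(yy - pad * n // 2, xx - pad * n // 2).ravel()
    order = np.argsort(r)
    cum = np.cumsum(I.ravel()[order])
    return r[order][np.searchsorted(cum, frac * cum[-1])]

def beta_factor(phase, n=128):
    """Beam-quality beta: far-field bucket radius of the aberrated beam
    over that of the unaberrated beam (same pupil, zero phase)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pupil = (x**2 + y**2 <= 1).astype(float)
    return bucket_radius(pupil * np.exp(1j * phase)) / bucket_radius(pupil)
```

    An unaberrated pupil gives β = 1 by construction, and adding, say, a wave of defocus spreads the far field and pushes β above 1.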

  19. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, recent advances in high-performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, the procedure remains time-consuming, and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to building such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansion, with least-angle regression as the selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross-validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. LARS-Kriging-PC appears to perform better than the two other approaches, with a significant accuracy improvement over ordinary Kriging or sparse polynomial chaos depending on the studied case, and seems to be an optimal solution between the two classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  20. Wavefront measurements of phase plates combining a point-diffraction interferometer and a Hartmann-Shack sensor.

    PubMed

    Bueno, Juan M; Acosta, Eva; Schwarz, Christina; Artal, Pablo

    2010-01-20

    A dual setup composed of a point-diffraction interferometer (PDI) and a Hartmann-Shack (HS) wavefront sensor was built to compare the estimates of wavefront aberration provided by the two different and complementary techniques when applied to different phase plates. The results show that, under the same experimental and fitting conditions, both techniques provide similar information on the wavefront aberration map. When taking into account all Zernike terms up to 6th order, the maximum difference in root-mean-square wavefront error was 0.08 µm, and this reduced to 0.03 µm when lower-order terms were excluded. The effects of the pupil size and of the order of the Zernike expansion used to reconstruct the wavefront were evaluated. The combination of the two techniques can accurately measure complicated phase profiles, combining the robustness of the HS with the higher resolution and dynamic range of the PDI.

  1. Molecular surface representation using 3D Zernike descriptors for protein shape comparison and docking.

    PubMed

    Kihara, Daisuke; Sael, Lee; Chikhi, Rayan; Esquivel-Rodriguez, Juan

    2011-09-01

    The tertiary structures of proteins have been solved at an increasing pace in recent years. To capitalize on the enormous effort invested in accumulating structure data, efficient and effective computational methods need to be developed for comparing, searching, and investigating interactions of protein structures. We introduce the 3D Zernike descriptor (3DZD), an emerging technique for describing molecular surfaces. The 3DZD is a series expansion of a three-dimensional mathematical function, so a tertiary structure is represented compactly by the vector of coefficients of the terms in the series. A further strong advantage of the 3DZD is that it is invariant to rotation of the target object. These two characteristics of the 3DZD allow rapid comparison of surface shapes, sufficient for real-time structure database screening. In this article, we review various applications of the 3DZD that have been proposed recently.

  2. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
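
    For a single uniform variable on [-1, 1] (the same Legendre basis the paper uses for rectangular fuzzy variables), the PCE reduces to a projection, after which the response moments are read directly from the coefficients. This is a minimal sketch of the random case only, with our own function names:

```python
import numpy as np
from numpy.polynomial import legendre as L

def pce_coeffs(g, p):
    """Legendre-PCE coefficients of g(X), X ~ U(-1, 1), by Gauss quadrature."""
    x, w = L.leggauss(64)
    return np.array([np.sum(w * g(x) * L.legval(x, [0] * k + [1]))
                     * (2 * k + 1) / 2          # divide by ||P_k||^2 = 2/(2k+1)
                     for k in range(p + 1)])

def response_moments(c):
    """Mean and variance of the response, read off the PCE coefficients."""
    mean = c[0]                                 # E[P_0] = 1, E[P_k] = 0 for k > 0
    var = sum(c[k]**2 / (2 * k + 1) for k in range(1, len(c)))
    return mean, var
```

    For g(x) = x² this gives mean 1/3 and variance 4/45, matching E[X²] and Var[X²] for X ~ U(-1, 1).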

  3. Polynomial Similarity Transformation Theory: A smooth interpolation between coupled cluster doubles and projected BCS applied to the reduced BCS Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degroote, M.; Henderson, T. M.; Zhao, J.

    We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave-function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit, whereas the Bessel expansion remains well behaved. The model provides accurate wave functions, with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.

  4. Spectral likelihood expansions for Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nagel, Joseph B.; Sudret, Bruno

    2016-03-01

    A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
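
    The mechanics can be sketched in one dimension with a uniform prior on [-1, 1]: expand the likelihood in polynomials orthonormal with respect to the prior, and the evidence and posterior moments drop out of the expansion coefficients. The polynomial likelihood below is an invented illustration, not an example from the paper:

```python
import numpy as np
from numpy.polynomial import legendre as L

def sle_coeffs(likelihood, p):
    """Spectral likelihood expansion: coefficients of the likelihood in
    Legendre polynomials orthonormal w.r.t. the uniform prior 1/2 on [-1, 1]."""
    x, w = L.leggauss(64)
    return np.array([np.sum(0.5 * w * likelihood(x)
                            * np.sqrt(2 * k + 1) * L.legval(x, [0] * k + [1]))
                     for k in range(p + 1)])

c = sle_coeffs(lambda t: (1 + t)**2, 3)   # toy polynomial likelihood
Z = c[0]                                  # evidence: projection onto psi_0 = 1
post_mean = c[1] / (np.sqrt(3) * Z)       # since theta = psi_1 / sqrt(3)
```

    For this likelihood the exact values are an evidence of 4/3 and a posterior mean of 1/2, which the expansion reproduces; no Markov chain Monte Carlo is involved.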

  5. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an analysis-of-variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized owing to computational expense; effort is focused instead on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility; the mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.

  6. New generation of meteorology cameras

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. The development of this new generation of weather-monitoring cameras responds to the demand for detecting sudden weather changes. The new WILLIAM cameras are able to process acquired image data immediately, issue warnings against sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric-pressure sensors. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, together with a spatially variant model of the imaging system's aberrations based on Zernike polynomials.
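    The spatially variant aberration model mentioned above rests on evaluating Zernike circle polynomials. A minimal sketch of that evaluation using the standard textbook formula (not the WILLIAM code; names are ours):

    ```python
    from math import cos, factorial

    def zernike_radial(n, m, rho):
        # radial part R_n^m(rho); vanishes unless n - |m| is even
        m = abs(m)
        if (n - m) % 2:
            return 0.0
        return sum((-1)**k * factorial(n - k)
                   / (factorial(k)*factorial((n + m)//2 - k)*factorial((n - m)//2 - k))
                   * rho**(n - 2*k)
                   for k in range((n - m)//2 + 1))

    def zernike(n, m, rho, theta):
        # unnormalized circle polynomial with cosine azimuthal dependence (m >= 0)
        return zernike_radial(n, m, rho)*cos(m*theta)

    # defocus: R_2^0(rho) = 2 rho^2 - 1; every radial polynomial satisfies R_n^m(1) = 1
    ```

    A spatially variant model would let the coefficients multiplying such terms vary across the image field rather than holding one aberration set for the whole frame.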

  7. Integrating opto-thermo-mechanical design tools: open engineering's project presentation

    NASA Astrophysics Data System (ADS)

    De Vincenzo, P.; Klapka, Igor

    2017-11-01

    An integrated numerical simulation package dedicated to the analysis of the coupled interactions of optical devices is presented. To reduce human intervention during data transfers, it is based on in-memory communications between the structural analysis software OOFELIE and the optical design application ZEMAX. It allows the automated enhancement of the existing optical design with information related to the deformations of optical surfaces due to thermomechanical loads. From the knowledge of these deformations, a grid of points or a decomposition based on Zernike polynomials can be generated for each surface. These data are then applied to the optical design. Finally, indicators can be retrieved from ZEMAX in order to compare the optical performance with that of the system in its nominal configuration.
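    Decomposing a deformed surface into Zernike terms, as described above, can be illustrated by projecting a synthetic deformation onto one basis polynomial with a discrete inner product over the unit disk. This is a generic sketch (midpoint quadrature; names ours), not the OOFELIE/ZEMAX pipeline:

    ```python
    from math import pi, factorial

    def radial(n, m, rho):
        # Zernike radial polynomial R_n^m (n - |m| assumed even)
        m = abs(m)
        return sum((-1)**k * factorial(n - k)
                   / (factorial(k)*factorial((n + m)//2 - k)*factorial((n - m)//2 - k))
                   * rho**(n - 2*k)
                   for k in range((n - m)//2 + 1))

    def project(surface, n, m, nr=128, nt=64):
        # discrete <surface, Z>/<Z, Z> over the unit disk (area element rho drho dtheta);
        # using the same midpoint quadrature in both sums keeps the ratio consistent
        num = den = 0.0
        for i in range(nr):
            rho = (i + 0.5)/nr
            for j in range(nt):
                theta = 2*pi*(j + 0.5)/nt
                z = radial(n, m, rho)       # m = 0 terms carry no azimuthal factor
                num += surface(rho, theta)*z*rho
                den += z*z*rho
        return num/den

    # synthetic deformation: 0.3 waves of piston plus 0.5 waves of defocus
    surf = lambda rho, theta: 0.3 + 0.5*(2*rho**2 - 1)
    defocus = project(surf, 2, 0)
    piston = project(surf, 0, 0)
    ```

    Because piston and defocus are orthogonal over the disk, the projection recovers the 0.5-wave defocus coefficient regardless of the piston term present.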

  8. Beyond the usual mapping functions in GPS, VLBI and Deep Space tracking.

    NASA Astrophysics Data System (ADS)

    Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie

    2014-05-01

    We describe here a new algorithm to model the water content of the atmosphere (including ZWD) from GPS slant wet delays relative to a single receiver. We first assume that the water vapor content is mainly governed by a scale height (exponential law), and second that the departures from this decaying exponential can be mapped as a set of low-degree 3D Zernike functions (w.r.t. space) and Tchebyshev polynomials (w.r.t. time). We compare this new algorithm with previous algorithms known as mapping functions in GPS, VLBI and Deep Space tracking, and give an example with data acquired over a one-day time span at the Geodesy Observatory of Tahiti.

  9. Joint optimization of source, mask, and pupil in optical lithography

    NASA Astrophysics Data System (ADS)

    Li, Jia; Lam, Edmund Y.

    2014-03-01

    Mask topography effects need to be taken into consideration for more advanced resolution enhancement techniques in optical lithography. However, a rigorous 3D mask model achieves high accuracy only at a large computational cost. This work develops a combined source, mask and pupil optimization (SMPO) approach by taking advantage of the fact that pupil phase manipulation can partially compensate for mask topography effects. We first design the pupil wavefront function by incorporating primary and secondary spherical aberration through the coefficients of the Zernike polynomials, and achieve an optimal source-mask pair under the aberrated pupil. Evaluations against conventional source-mask optimization (SMO) without pupil aberrations show that SMPO provides improved performance in terms of pattern fidelity and process-window size.

  10. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. To show applicability, polynomial PDF approximations are obtained for the Normal, Log-Normal and Weibull distribution families, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with those of traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, to show an advanced application of the method, it is used to approximate solutions to the Smoluchowski equation. PMID:28394949

  11. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. To show applicability, polynomial PDF approximations are obtained for the Normal, Log-Normal and Weibull distribution families, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with those of traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, to show an advanced application of the method, it is used to approximate solutions to the Smoluchowski equation.
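    The moment-matching step behind such polynomial PDF estimates can be sketched as a small linear system: require that the candidate polynomial reproduce the given moments over [a, b]. A minimal sketch under that formulation (not the authors' published algorithm; names are ours):

    ```python
    def solve(M, b):
        # tiny Gaussian elimination with partial pivoting for the moment system
        n = len(b)
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p], b[c], b[p] = M[p], M[c], b[p], b[c]
            for r in range(c + 1, n):
                fac = M[r][c]/M[c][c]
                M[r] = [mr - fac*mc for mr, mc in zip(M[r], M[c])]
                b[r] -= fac*b[c]
        x = [0.0]*n
        for r in reversed(range(n)):
            x[r] = (b[r] - sum(M[r][k]*x[k] for k in range(r + 1, n)))/M[r][r]
        return x

    def polynomial_pdf(moments, a, b):
        # coefficients c_k of p(x) = sum_k c_k x^k on [a, b] such that
        # integral of x^j p(x) dx equals the j-th given moment (m_0 = 1 for a PDF)
        N = len(moments)
        M = [[(b**(j + k + 1) - a**(j + k + 1))/(j + k + 1) for k in range(N)]
             for j in range(N)]
        return solve(M, list(moments))

    # triangular density 2x on [0, 1] has moments m_j = 2/(j + 2)
    c = polynomial_pdf([2/(j + 2) for j in range(2)], 0.0, 1.0)
    ```

    With the two moments of the triangular density 2x on [0, 1] the recovered coefficients are approximately (0, 2), i.e. p(x) = 2x. Note that nothing in this linear system forces the polynomial to be nonnegative, which is one reason the paper's procedure is set up more carefully.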

  12. The time-fractional radiative transport equation—Continuous-time random walk, diffusion approximation, and Legendre-polynomial expansion

    NASA Astrophysics Data System (ADS)

    Machida, Manabu

    2017-01-01

    We consider the radiative transport equation in which the time derivative is replaced by the Caputo derivative. Such fractional-order derivatives are related to anomalous transport and anomalous diffusion. In this paper we describe how the time-fractional radiative transport equation is obtained from continuous-time random walk and see how the equation is related to the time-fractional diffusion equation in the asymptotic limit. Then we solve the equation with Legendre-polynomial expansion.
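    The Caputo time derivative appearing above is commonly approximated numerically with the standard L1 finite-difference scheme. A generic sketch of that scheme (not the paper's solver; names are ours):

    ```python
    from math import gamma

    def caputo_l1(f, alpha, t, n):
        # L1 scheme for the Caputo derivative of order alpha in (0, 1) at time t:
        # weights b_k = (k + 1)^(1 - alpha) - k^(1 - alpha) on n uniform steps
        dt = t/n
        acc = 0.0
        for k in range(n):
            w = (k + 1)**(1 - alpha) - k**(1 - alpha)
            acc += w*(f((n - k)*dt) - f((n - k - 1)*dt))
        return acc/(dt**alpha*gamma(2 - alpha))

    # for f(t) = t the Caputo derivative is t^(1 - alpha)/Gamma(2 - alpha),
    # and the L1 scheme reproduces it exactly for linear f
    val = caputo_l1(lambda t: t, 0.5, 2.0, 64)
    ```

    Coupling such a time discretization to an angular Legendre-polynomial expansion is one practical way to solve the time-fractional transport equation numerically.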

  13. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. Researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions.

  14. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. 
Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.

  15. Heat transfer of phase-change materials in two-dimensional cylindrical coordinates

    NASA Technical Reports Server (NTRS)

    Labdon, M. B.; Guceri, S. I.

    1981-01-01

    A two-dimensional phase-change problem is numerically solved in cylindrical coordinates (r and z) by utilizing two Taylor series expansions for the temperature distributions in the neighborhood of the interface location. These two expansions form two polynomials in the r and z directions. For regions sufficiently far from the interface, the temperature field equations are solved numerically in the usual way and the results are coupled with the polynomials. The main advantages of this efficient approach include the ability to accept arbitrarily time-dependent boundary conditions of all types and arbitrarily specified initial temperature distributions. A modified approach using a single Taylor series expansion in two variables is also suggested.

  16. Lattice Boltzmann method for bosons and fermions and the fourth-order Hermite polynomial expansion.

    PubMed

    Coelho, Rodrigo C V; Ilha, Anderson; Doria, Mauro M; Pereira, R M; Aibe, Valter Yoshihiko

    2014-04-01

    The Boltzmann equation with the Bhatnagar-Gross-Krook collision operator is considered for the Bose-Einstein and Fermi-Dirac equilibrium distribution functions. We show that the expansion of the microscopic velocity in terms of Hermite polynomials must be carried to the fourth order to correctly describe the energy equation. The viscosity and thermal coefficients, previously obtained by Yang et al. [Shi and Yang, J. Comput. Phys. 227, 9389 (2008); Yang and Hung, Phys. Rev. E 79, 056708 (2009)] through the Uehling-Uhlenbeck approach, are also derived here. Thus the construction of a lattice Boltzmann method for the quantum fluid is possible provided that the Bose-Einstein and Fermi-Dirac equilibrium distribution functions are expanded to fourth order in the Hermite polynomials.
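    Expanding a distribution in Hermite polynomials, as required above, amounts to computing projection coefficients against the Gaussian weight. A one-dimensional sketch using the physicists' Hermite polynomials and a generating-function test case (names are ours; this is not the authors' lattice construction):

    ```python
    from math import exp, factorial, sqrt, pi

    def hermite(n, x):
        # physicists' Hermite polynomials: H_{k+1} = 2x H_k - 2k H_{k-1}
        h0, h1 = 1.0, 2.0*x
        if n == 0:
            return h0
        for k in range(1, n):
            h0, h1 = h1, 2*x*h1 - 2*k*h0
        return h1

    def hermite_coeff(f, n, lo=-8.0, hi=8.0, steps=4000):
        # c_n = (2^n n! sqrt(pi))^(-1) * integral of f(x) H_n(x) exp(-x^2) dx,
        # computed with the trapezoidal rule (the Gaussian decay makes it very accurate)
        dx = (hi - lo)/steps
        s = 0.0
        for i in range(steps + 1):
            x = lo + i*dx
            w = 0.5 if i in (0, steps) else 1.0
            s += w*f(x)*hermite(n, x)*exp(-x*x)
        return s*dx/(2**n*factorial(n)*sqrt(pi))

    # generating function: exp(2xt - t^2) = sum_n H_n(x) t^n / n!
    t = 0.5
    g = lambda x: exp(2*x*t - t*t)
    c2 = hermite_coeff(g, 2)
    ```

    For the generating function the exact coefficients are tⁿ/n!, so c2 comes out near 0.125; the abstract's point is that for quantum equilibrium distributions such coefficients must be retained through n = 4 to recover the energy equation.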

  17. Towards a unification of the hierarchical reference theory and the self-consistent Ornstein-Zernike approximation.

    PubMed

    Reiner, A; Høye, J S

    2005-12-01

    The hierarchical reference theory and the self-consistent Ornstein-Zernike approximation are two liquid state theories that both furnish a largely satisfactory description of the critical region as well as phase coexistence and the equation of state in general. Furthermore, there are a number of similarities that suggest the possibility of a unification of both theories. As a first step towards this goal, we consider the problem of combining the lowest order gamma expansion result for the incorporation of a Fourier component of the interaction with the requirement of consistency between internal and free energies, leaving aside the compressibility relation. For simplicity, we restrict ourselves to a simplified lattice gas that is expected to display the same qualitative behavior as more elaborate models. It turns out that the analytically tractable mean spherical approximation is a solution to this problem, as are several of its generalizations. Analysis of the characteristic equations shows the potential for a practical scheme and yields necessary conditions that any closure to the Ornstein-Zernike relation must fulfill for the consistency problem to be well posed and to have a unique differentiable solution. These criteria are expected to remain valid for more general discrete and continuous systems, even if consistency with the compressibility route is also enforced, in which case explicit solutions will require numerical evaluation.

  18. Comparison of parametric methods for modeling corneal surfaces

    NASA Astrophysics Data System (ADS)

    Bouazizi, Hala; Brunette, Isabelle; Meunier, Jean

    2017-02-01

    Corneal topography is a medical imaging technique to get the 3D shape of the cornea as a set of 3D points of its anterior and posterior surfaces. From these data, topographic maps can be derived to assist the ophthalmologist in the diagnosis of disorders. In this paper, we compare three different mathematical parametric representations of the corneal surfaces least-squares fitted to the data provided by corneal topography. The parameters obtained from these models reduce the dimensionality of the data from several thousand 3D points to only a few parameters and could eventually be useful for diagnosis, biometry, implant design, etc. The first representation is based on Zernike polynomials that are commonly used in optics. A variant of these polynomials, named Bhatia-Wolf, will also be investigated. These two sets of polynomials are defined over a circular domain, which is convenient to model the elevation (height) of the corneal surface. The third representation uses Spherical Harmonics that are particularly well suited for nearly-spherical object modeling, which is the case for the cornea. We compared the three methods using the following three criteria: the root-mean-square error (RMSE), the number of parameters and the visual accuracy of the reconstructed topographic maps. A large dataset of more than 2000 corneal topographies was used. Our results showed that Spherical Harmonics were superior, with a mean RMSE lower than 2.5 microns with 36 coefficients (order 5) for normal corneas and lower than 5 microns for two diseases affecting the corneal shapes: keratoconus and Fuchs' dystrophy.

  19. The s-Ordered Fock Space Projectors Gained by the General Ordering Theorem

    NASA Astrophysics Data System (ADS)

    Farid, Shähandeh; Mohammad, Reza Bazrafkan; Mahmoud, Ashrafi

    2012-09-01

    Employing the general ordering theorem (GOT), operational methods and incomplete 2-D Hermite polynomials, we derive the t-ordered expansion of Fock space projectors. Using the result, the general ordered form of the coherent state projectors is obtained. This indeed gives a new integration formula regarding incomplete 2-D Hermite polynomials. In addition, the orthogonality relation of the incomplete 2-D Hermite polynomials is derived to resolve Dattoli's failure.

  20. Paraxial light distribution in the focal region of a lens: a comparison of several analytical solutions and a numerical result.

    PubMed

    Wu, Yang; Kelly, Damien P

    2014-12-12

    The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent analytical solutions, are presented that describe the 3D field distribution using infinite sums of [Formula: see text] and [Formula: see text] type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of [Formula: see text] and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by [Formula: see text], where [Formula: see text] is the replica order. 
These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.

  1. Paraxial light distribution in the focal region of a lens: a comparison of several analytical solutions and a numerical result

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Kelly, Damien P.

    2014-12-01

    The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent analytical solutions, are presented that describe the 3D field distribution using infinite sums of ? and ? type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of ? and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by ?, where ? is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.

  2. NFIRAOS beamsplitters subsystems optomechanical design

    NASA Astrophysics Data System (ADS)

    Lamontagne, Frédéric; Desnoyers, Nichola; Nash, Reston; Boucher, Marc-André; Martin, Olivier; Buteau-Vaillancourt, Louis; Châteauneuf, François; Atwood, Jenny; Hill, Alexis; Byrnes, Peter W. G.; Herriot, Glen; Véran, Jean-Pierre

    2016-07-01

    The early-light facility adaptive optics system for the Thirty Meter Telescope (TMT) is the Narrow-Field InfraRed Adaptive Optics System (NFIRAOS). The science beam splitter changer mechanism and the visible-light beam splitter are subsystems of NFIRAOS. This paper presents the opto-mechanical design of the NFIRAOS beam splitter subsystems (NBS). In addition to the modal and structural analyses, the beam splitter surface deformations are computed considering the environmental constraints during operation. Surface deformations are fit to Zernike polynomials using the SigFit software. Rigid-body motion as well as residual RMS and peak-to-valley surface deformations are calculated. Finally, the deformed surfaces are exported to Zemax to evaluate the transmitted and reflected wavefront error. The results of this integrated opto-mechanical analysis show compliance with all optical requirements.

  3. How to make thermodynamic perturbation theory to be suitable for low temperature?

    NASA Astrophysics Data System (ADS)

    Zhou, Shiqi

    2009-02-01

    Low-temperature unsuitability is a problem that has plagued thermodynamic perturbation theory (TPT) for years. The present investigation indicates that the low-temperature predicament can be overcome by employing as reference system a non-hard-sphere potential which incorporates part of the attractive ingredient of the potential function of interest. In combination with a recently proposed TPT [S. Zhou, J. Chem. Phys. 125, 144518 (2006)] based on a λ expansion (λ being a coupling parameter), the new perturbation strategy is used to make predictions for several model potentials. It is shown that the new perturbation strategy can very accurately predict various thermodynamic properties even if the potential range is extremely short, and hence the temperature of interest is very low, where current theoretical formalisms seriously deteriorate or fail critically, not even predicting the existence of the critical point. Extensive comparison with existing liquid-state theories and available computer simulation data discloses the superiority of the present TPT over two Ornstein-Zernike-type integral equation theories, i.e., the hierarchical reference theory and the self-consistent Ornstein-Zernike approximation.

  4. How to make thermodynamic perturbation theory to be suitable for low temperature?

    PubMed

    Zhou, Shiqi

    2009-02-07

    Low-temperature unsuitability is a problem that has plagued thermodynamic perturbation theory (TPT) for years. The present investigation indicates that the low-temperature predicament can be overcome by employing as reference system a non-hard-sphere potential which incorporates part of the attractive ingredient of the potential function of interest. In combination with a recently proposed TPT [S. Zhou, J. Chem. Phys. 125, 144518 (2006)] based on a lambda expansion (lambda being a coupling parameter), the new perturbation strategy is used to make predictions for several model potentials. It is shown that the new perturbation strategy can very accurately predict various thermodynamic properties even if the potential range is extremely short, and hence the temperature of interest is very low, where current theoretical formalisms seriously deteriorate or fail critically, not even predicting the existence of the critical point. Extensive comparison with existing liquid-state theories and available computer simulation data discloses the superiority of the present TPT over two Ornstein-Zernike-type integral equation theories, i.e., the hierarchical reference theory and the self-consistent Ornstein-Zernike approximation.

  5. Fitting Multimeric Protein Complexes into Electron Microscopy Maps Using 3D Zernike Descriptors

    PubMed Central

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2012-01-01

    A novel computational method for fitting high-resolution structures of multiple proteins into a cryoelectron microscopy map is presented. The method named EMLZerD generates a pool of candidate multiple protein docking conformations of component proteins, which are later compared with a provided electron microscopy (EM) density map to select the ones that fit well into the EM map. The comparison of docking conformations and the EM map is performed using the 3D Zernike descriptor (3DZD), a mathematical series expansion of three-dimensional functions. The 3DZD provides a unified representation of the surface shape of multimeric protein complex models and EM maps, which allows a convenient, fast quantitative comparison of the three dimensional structural data. Out of 19 multimeric complexes tested, near native complex structures with a root mean square deviation of less than 2.5 Å were obtained for 14 cases while medium range resolution structures with correct topology were computed for the additional 5 cases. PMID:22417139

  6. Fitting multimeric protein complexes into electron microscopy maps using 3D Zernike descriptors.

    PubMed

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2012-06-14

    A novel computational method for fitting high-resolution structures of multiple proteins into a cryoelectron microscopy map is presented. The method named EMLZerD generates a pool of candidate multiple protein docking conformations of component proteins, which are later compared with a provided electron microscopy (EM) density map to select the ones that fit well into the EM map. The comparison of docking conformations and the EM map is performed using the 3D Zernike descriptor (3DZD), a mathematical series expansion of three-dimensional functions. The 3DZD provides a unified representation of the surface shape of multimeric protein complex models and EM maps, which allows a convenient, fast quantitative comparison of the three-dimensional structural data. Out of 19 multimeric complexes tested, near native complex structures with a root-mean-square deviation of less than 2.5 Å were obtained for 14 cases while medium range resolution structures with correct topology were computed for the additional 5 cases.

  7. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
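    One of the classical connection formulas discussed above, expressing the derivative of a Laguerre polynomial as a combination of lower-degree Laguerre polynomials, can be checked numerically. A minimal sketch using standard identities (names are ours):

    ```python
    def laguerre(n, x):
        # three-term recurrence: (k + 1) L_{k+1} = (2k + 1 - x) L_k - k L_{k-1}
        l0, l1 = 1.0, 1.0 - x
        if n == 0:
            return l0
        for k in range(1, n):
            l0, l1 = l1, ((2*k + 1 - x)*l1 - k*l0)/(k + 1)
        return l1

    def laguerre_derivative(n, x):
        # classical connection formula: L_n'(x) = -(L_0 + L_1 + ... + L_{n-1})(x)
        return -sum(laguerre(k, x) for k in range(n))

    # cross-check against the independent identity x L_n'(x) = n (L_n(x) - L_{n-1}(x))
    n, x = 5, 1.7
    lhs = x*laguerre_derivative(n, x)
    rhs = n*(laguerre(n, x) - laguerre(n - 1, x))
    ```

    The two identities agree to machine precision; the paper generalizes this kind of connection to general-order derivatives, moments, and other polynomial families.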

  8. Fine Surface Control of Flexible Space Mirrors Using Adaptive Optics and Robust Control

    DTIC Science & Technology

    2009-03-01

    an AO system not only increases complexity but also lends itself to coupling between actuators. Whereas historically, control laws treated AO... the adaptive optic in large ground-based AO systems is treated as a static system with no dynamics. In the case of a deformable mirror, it is assumed... astigmatism, and so on. As with any series expansion, the more terms used, the more accurate the approximation will be. For this research, 21 Zernike

  9. Best uniform approximation to a class of rational functions

    NASA Astrophysics Data System (ADS)

    Zheng, Zhitong; Yong, Jun-Hai

    2007-10-01

    We explicitly determine the best uniform polynomial approximation to a class of rational functions of the form 1/(x-c)2+K(a,b,c,n)/(x-c) on [a,b] represented by their Chebyshev expansion, where a, b, and c are real numbers, n-1 denotes the degree of the best approximating polynomial, and K is a constant determined by a, b, c, and n. Our result is based on the explicit determination of a phase angle [eta] in the representation of the approximation error by a trigonometric function. Moreover, we formulate an ansatz which offers a heuristic strategy for determining the best approximating polynomial to a function represented by its Chebyshev expansion. Combined with the phase angle method, this ansatz can be used to find the best uniform approximation to some more functions.
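    Chebyshev expansions of the kind used above can be computed by sampling at Chebyshev points; truncating the resulting series gives a near-best uniform approximation on [-1, 1]. A generic sketch (not the paper's phase-angle construction; names are ours):

    ```python
    from math import cos, pi

    def chebyshev_coeffs(f, degree, nodes=64):
        # sample f at the Chebyshev points x_j = cos(theta_j); the discrete cosine
        # sums below give the coefficients of the Chebyshev interpolant
        thetas = [pi*(j + 0.5)/nodes for j in range(nodes)]
        ys = [f(cos(t)) for t in thetas]
        coeffs = []
        for k in range(degree + 1):
            ck = 2.0/nodes*sum(y*cos(k*t) for y, t in zip(ys, thetas))
            coeffs.append(ck/2 if k == 0 else ck)
        return coeffs

    def chebyshev_eval(coeffs, x):
        # evaluate via the recurrence T_{k+1} = 2x T_k - T_{k-1}
        t0, t1, s = 1.0, x, coeffs[0]
        if len(coeffs) > 1:
            s += coeffs[1]*x
        for k in range(2, len(coeffs)):
            t0, t1 = t1, 2*x*t1 - t0
            s += coeffs[k]*t1
        return s

    # x^2 = (T_0 + T_2)/2, so the coefficients come out near [0.5, 0, 0.5]
    c = chebyshev_coeffs(lambda x: x*x, 2)
    ```

    The truncated Chebyshev series is only near-best; the paper's contribution is to determine the truly best uniform approximation explicitly for its class of rational functions.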

  10. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  11. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. Researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions. The authors also point out weaknesses of the scheme and suggest areas for further investigation.

  12. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where traditional polynomial chaos methods, and most existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower-dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. We present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  13. A discrete method for modal analysis of overhead line conductor bundles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migdalovici, M.A.; Sireteanu, T.D.; Albrecht, A.A.

    The paper presents a mathematical model and a semi-analytical procedure to calculate the vibration modes and eigenfrequencies of single or bundled conductors with spacers, which are needed for evaluating wind-induced vibration of conductors and for optimizing spacer-damper placement. The method consists in decomposing the conductors into modules and expanding the unknown displacements on each module in polynomial series. A complete system of polynomials is deduced for this purpose from Legendre polynomials. For each module, either boundary conditions at its extremities or continuity conditions between modules are imposed, together with a number of projections of the module equilibrium equation onto the polynomials of the displacement expansion series. The global eigenmode and eigenfrequency system takes the matrix form A X + ω² M X = 0. The theoretical considerations are exemplified on a single conductor and on a bundle of two conductors with spacers. A method for forced-vibration calculation of single or bundled conductors is also presented.
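    The Legendre polynomials from which the complete polynomial system is built satisfy Bonnet's three-term recurrence; a minimal evaluation sketch:

    ```python
    def legendre(n, x):
        """Legendre polynomial P_n(x) via Bonnet's recurrence
        (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
        p0, p1 = 1.0, x
        if n == 0:
            return p0
        for k in range(1, n):
            p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
        return p1
    ```

    For example, P_2(x) = (3x² - 1)/2, and P_n(1) = 1 for every n.
    
    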

  14. A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer

    2018-04-01

    In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for the numerical solution of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by central finite differences; then the unknown function and its derivatives are expanded in Lucas series. With the help of these series expansions and Fibonacci polynomials, differentiation matrices are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into solving an algebraic system of equations. The Lucas series coefficients are acquired by solving this algebraic system. Plugging these coefficients into the Lucas series expansion then yields the numerical solutions. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. The efficiency and performance of the proposed method are assessed by computing L2 and L∞ error norms for several 1D and 2D test problems. The accurate results acquired confirm the applicability of the method.
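    A sketch of the Lucas and Fibonacci polynomial recurrences on which the method is built (the differentiation matrices themselves are not shown):

    ```python
    def lucas_poly(n, x):
        """Lucas polynomial L_n(x): L_0 = 2, L_1 = x,
        L_n(x) = x L_{n-1}(x) + L_{n-2}(x)."""
        a, b = 2.0, x
        if n == 0:
            return a
        for _ in range(1, n):
            a, b = b, x * b + a
        return b

    def fibonacci_poly(n, x):
        """Fibonacci polynomial F_n(x): F_1 = 1, F_2 = x,
        F_n(x) = x F_{n-1}(x) + F_{n-2}(x)."""
        a, b = 1.0, x
        if n == 1:
            return a
        for _ in range(2, n):
            a, b = b, x * b + a
        return b
    ```

    At x = 1 these reduce to the Lucas and Fibonacci numbers (e.g. L_4(1) = 7, F_4(1) = 3).
    
    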

  15. A Near to Far Transformation using Spherical Expansions Phase 1: Verification on Simulated Antennas

    DTIC Science & Technology

    2014-09-01

    Excerpted from the report's front matter: a notation table and a table of Legendre polynomials, followed by the definition of the associated Legendre functions of the first kind [3, Equation 12.84 and footnote]: P_n^m(x) := (-1)^m (1 - x²)^(m/2) (d^m/dx^m) P_n(x), where the P_n(x) are the Legendre polynomials, with P_n^m(x) = 0 for m > n. The report tabulates the initial Legendre polynomials and their derivatives and plots the first few.

  16. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  17. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the aPC algorithm and extends the method, which was previously only introduced as a tensor-product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach makes it possible to propagate continuous or discrete probability density functions, and also histograms (data sets), as long as their moments exist, are finite, and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points, and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge of statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is based solely on the moments of the input probability distributions. It is referred to as SAMBA (PC), short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher-order convergence and accuracy for a set of nonlinear test functions with 2, 5, and 10 different input distributions or histograms.
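    As an illustration of the moment-matrix condition stated above, a sketch that assembles the Hankel matrix of raw moments and checks that its determinant is strictly positive, using the moments of U(0, 1) as example input (exact arithmetic via `fractions`; this is not the SAMBA implementation):

    ```python
    from fractions import Fraction

    def hankel_moment_matrix(mu, n):
        """Hankel matrix H[i][j] = mu[i + j] built from raw moments
        mu_0, ..., mu_{2n-2}."""
        return [[mu[i + j] for j in range(n)] for i in range(n)]

    def det(m):
        """Determinant by Gaussian elimination with partial pivoting
        (exact when the entries are Fractions)."""
        m = [row[:] for row in m]
        n = len(m)
        d = Fraction(1)
        for c in range(n):
            piv = next((r for r in range(c, n) if m[r][c] != 0), None)
            if piv is None:
                return Fraction(0)
            if piv != c:
                m[c], m[piv] = m[piv], m[c]
                d = -d
            d *= m[c][c]
            for r in range(c + 1, n):
                f = m[r][c] / m[c][c]
                for k in range(c, n):
                    m[r][k] -= f * m[c][k]
        return d

    # raw moments of the uniform distribution on (0, 1): mu_k = 1/(k + 1);
    # the resulting Hankel matrix is the Hilbert matrix, which is positive definite
    mu = [Fraction(1, k + 1) for k in range(5)]
    H = hankel_moment_matrix(mu, 3)
    ```

    A strictly positive determinant (here 1/2160) is exactly the condition the abstract requires for the aPC construction to apply.
    
    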

  19. Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles

    NASA Astrophysics Data System (ADS)

    Dennis, S.

    2016-02-01

    Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to basin scales and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission-loss (TL) probability density functions (PDFs) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating sound speed distributions over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points, determining only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow determination of TL PDF normality and of its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF.
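    Non-intrusive PC coefficients are weighted integrals evaluated by Gauss-Hermite quadrature; a one-dimensional sketch using the standard tabulated 3-point rule for the weight e^(-x²) (the paper's multi-dimensional Smolyak sub-sampling is not shown):

    ```python
    import math

    # standard 3-point Gauss-Hermite rule for the weight exp(-x^2):
    # nodes 0, ±sqrt(3/2); weights 2*sqrt(pi)/3 and sqrt(pi)/6.
    # Exact for polynomial integrands up to degree 5.
    GH3 = [(-math.sqrt(1.5), math.sqrt(math.pi) / 6.0),
           (0.0,             2.0 * math.sqrt(math.pi) / 3.0),
           (math.sqrt(1.5),  math.sqrt(math.pi) / 6.0)]

    def gh_integral(f):
        """Approximate the integral of f(x) * exp(-x^2) over the real line."""
        return sum(w * f(x) for x, w in GH3)
    ```

    Sanity checks: the rule integrates 1 to √π and x² to √π/2, both exactly.
    
    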
This work was sponsored by the Office of Naval Research

  20. Four-zone varifocus mirrors with adaptive control of primary and higher-order spherical aberration

    PubMed Central

    Lukes, Sarah J.; Downey, Ryan D.; Kreitinger, Seth T.; Dickensheets, David L.

    2017-01-01

    Electrostatically actuated deformable mirrors with four concentric annular electrodes can exert independent control over defocus as well as primary, secondary, and tertiary spherical aberration. In this paper we use both numerical modeling and physical measurements to characterize recently developed deformable mirrors with respect to the amount of spherical aberration each can impart, and the dependence of that aberration control on the amount of defocus the mirror is providing. We find that a four-zone, 4 mm diameter mirror can generate surface shapes with arbitrary primary, secondary, and tertiary spherical aberration over ranges of ±0.4, ±0.2, and ±0.15 μm, respectively, referred to a non-normalized Zernike polynomial basis. We demonstrate the utility of this mirror for aberration-compensated focusing of a high NA optical system. PMID:27409212
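    For reference, the radial part R_n^m(ρ) of the Zernike polynomials referred to above (R_4^0 carries primary spherical aberration) can be evaluated from the standard finite sum; a minimal sketch:

    ```python
    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial Zernike polynomial R_n^m(rho) from the standard finite sum;
        zero when n - |m| is odd."""
        m = abs(m)
        if (n - m) % 2:
            return 0.0
        return sum((-1) ** k * factorial(n - k)
                   / (factorial(k)
                      * factorial((n + m) // 2 - k)
                      * factorial((n - m) // 2 - k))
                   * rho ** (n - 2 * k)
                   for k in range((n - m) // 2 + 1))
    ```

    For example, R_2^0(ρ) = 2ρ² - 1 (defocus) and R_4^0(ρ) = 6ρ⁴ - 6ρ² + 1 (primary spherical aberration), with R_n^0(1) = 1.
    
    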

  1. Optimization of wavefront coding imaging system using heuristic algorithms

    NASA Astrophysics Data System (ADS)

    González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.

    2017-08-01

    Wavefront Coding (WFC) systems make use of an aspheric phase mask (PM) and digital image processing to extend the depth of field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a nearly defocus-invariant point spread function (PSF). In this paper, the optimization of the phase deviation parameter is done by means of genetic algorithms (GAs). Here, the merit function minimizes the mean square error (MSE) between the diffraction-limited modulation transfer function (MTF) and the MTF of the wavefront-coded system under different amounts of misfocus. WFC systems were simulated using cubic, trefoil, and Zernike-polynomial phase masks. Numerical results show near defocus-invariance in all cases. Nevertheless, the best results are obtained using the trefoil phase mask, because the decoded image is almost free of artifacts.

  2. Reference hypernetted chain theory for ferrofluid bilayer: Distribution functions compared with Monte Carlo

    NASA Astrophysics Data System (ADS)

    Polyakov, Evgeny A.; Vorontsov-Velyaminov, Pavel N.

    2014-08-01

    Properties of a ferrofluid bilayer (modeled as a system of two planar layers separated by a distance h, each carrying a soft-sphere dipolar liquid) are calculated in the framework of inhomogeneous Ornstein-Zernike equations with the reference hypernetted chain closure (RHNC). The bridge functions are taken from a soft-sphere (1/r¹²) reference system in the pressure-consistent closure approximation. In order to make the RHNC problem tractable, the angular dependence of the correlation functions is expanded into special orthogonal polynomials according to Lado. The resulting equations are solved using the Newton-GMRES algorithm as implemented in the public-domain solver NITSOL. Orientational densities and pair distribution functions of dipoles are compared with Monte Carlo simulation results. A numerical algorithm for the Fourier-Hankel transform of any positive integer order on a uniform grid is presented.

  3. Testing large aspheric surfaces with complementary annular subaperture interferometric method

    NASA Astrophysics Data System (ADS)

    Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang

    2008-07-01

    Annular subaperture interferometry has provided an alternative solution for testing rotationally symmetric aspheric surfaces with low cost and flexibility. However, new challenges appear, particularly in the motion and algorithm components, when the method is applied to large aspheric surfaces with large departure in practical engineering. Building on our previously reported annular subaperture reconstruction algorithm using Zernike annular polynomials and a matrix method, and the experimental results for an approximately 130-mm-diameter f/2 parabolic mirror, an experimental investigation of testing an approximately 302-mm-diameter f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We focus on full-aperture reconstruction accuracy and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations about testing sector segments with complementary sector subapertures are provided.

  4. Uncertainty-enabled design of electromagnetic reflectors with integrated shape control

    NASA Astrophysics Data System (ADS)

    Haque, Samiul; Kindrat, Laszlo P.; Zhang, Li; Mikheev, Vikenty; Kim, Daewa; Liu, Sijing; Chung, Jooyeon; Kuian, Mykhailo; Massad, Jordan E.; Smith, Ralph C.

    2018-03-01

    We implemented a computationally efficient model for a corner-supported, thin, rectangular, orthotropic polyvinylidene fluoride (PVDF) laminate membrane, actuated by a two-dimensional array of segmented electrodes. The laminate can be used as shape-controlled electromagnetic reflector and the model estimates the reflector's shape given an array of control voltages. In this paper, we describe a model to determine the shape of the laminate for a given distribution of control voltages. Then, we investigate the surface shape error and its sensitivity to the model parameters. Subsequently, we analyze the simulated deflection of the actuated bimorph using a Zernike polynomial decomposition. Finally, we provide a probabilistic description of reflector performance using statistical methods to quantify uncertainty. We make design recommendations for nominal parameter values and their tolerances based on optimization under uncertainty using multiple methods.

  5. Study of the performance of image restoration under different wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Wang, Xinqiu; Hu, Xinqi

    2016-10-01

    Image restoration is an effective way to improve the quality of images degraded by wavefront aberrations, but if the wavefront aberration is too large, the restoration performs poorly. In this paper, the relationship between image restoration performance and the degree of wavefront aberration is studied. A set of different wavefront aberrations is constructed using Zernike polynomials, and the corresponding PSF under white-light illumination is calculated. A set of blurred images is then obtained through convolution. Next we recover the images with the regularized Richardson-Lucy algorithm and use the RMS difference between the original image and the corresponding deblurred image to evaluate the quality of restoration. Consequently, we determine the range of wavefront errors within which the recovered images are acceptable.

  6. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve adaptive optics (AO) image quality, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution problem. Image restoration experiments are performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) from our algorithm are increased by 36.92% and 27.44% respectively, the computation time is decreased by 7.2% and 3.4% respectively, and the estimation accuracy is significantly improved.

  7. Measuring large aspherics using a commercially available 3D-coordinate measuring machine

    NASA Astrophysics Data System (ADS)

    Otto, Wolfgang; Matthes, Axel; Schiehle, Heinz

    2000-07-01

    A CNC-controlled precision measuring machine is a very powerful tool in the optical shop, not only to determine the surface figure but also to qualify the radius of curvature and conic constant of aspherics. We used a commercially available 3D-coordinate measuring machine (CMM, ZEISS UPMC 850 CARAT S-ACC) to measure the shape of the GEMINI 1-m convex secondary mirrors at different lapping and polishing stages. To determine the measuring accuracy we compared the mechanical measurements with the results achieved by means of an interferometric test setup. The data obtained in an early stage of polishing were evaluated in Zernike polynomials, which show very good agreement. The deviation for long-wave rotationally symmetric errors was 20 nm rms, whereas the accuracy in measuring mid-spatial-frequency deviations was limited to about 100 nm rms.

  8. Adaptive compensation of aberrations in ultrafast 3D microscopy using a deformable mirror

    NASA Astrophysics Data System (ADS)

    Sherman, Leah R.; Albert, O.; Schmidt, Christoph F.; Vdovin, Gleb V.; Mourou, Gerard A.; Norris, Theodore B.

    2000-05-01

    3D imaging using a multiphoton scanning confocal microscope is ultimately limited by aberrations of the system. We describe a system to adaptively compensate the aberrations with a deformable mirror. We have increased the transverse scanning range of the microscope by a factor of three with compensation of off-axis aberrations. We have also significantly increased the longitudinal scanning depth with compensation of spherical aberrations from the penetration into the sample. Our correction is based on a genetic algorithm that uses the second-harmonic or two-photon fluorescence signal excited in the sample by femtosecond pulses as the enhancement parameter. This allows us to globally optimize the wavefront without a wavefront measurement. To improve the speed of the optimization we use Zernike polynomials as the basis for correction. Corrections can be stored in a database for look-up with future samples.

  9. Analysis investigation of supporting and restraint conditions on the surface deformation of a collimator primary mirror

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Yen; You, Zhen-Ting; Huang, Bo-Kai; Chen, Yi-Cheng; Huang, Ting-Ming

    2015-09-01

    To meet the requirements of high-precision telescopes, the design of the collimator is essential. The diameter of the collimator should be larger than that of the target for use in alignment. Special supporting structures are required to reduce gravity-induced deformation and to control the surface deformation induced by mounting forces when inspecting large-aperture primary mirrors. Using finite element analysis, a ZERODUR® mirror with a diameter of 620 mm is analyzed to obtain the deformation induced by the supporting structures. Zernike polynomials are also adopted to fit the optical surface and separate the corresponding aberrations. From studies under different boundary conditions and supporting positions of the inner ring, it is concluded that the optical performance is excellent given a sufficiently stiff support.

  10. The ABLE ACE wavefront sensor

    NASA Astrophysics Data System (ADS)

    Butts, Robert R.

    1997-08-01

    A low-noise, high-resolution Shack-Hartmann wavefront sensor was included in the ABLE-ACE instrument suite to obtain direct high-resolution phase measurements of the 0.53 μm pulsed laser beam propagated through high-altitude atmospheric turbulence. The wavefront sensor employed a Fried geometry using a lenslet array which provided approximately 17 sub-apertures across the pupil. The lenslets focused the light in each sub-aperture onto a 21 by 21 array of pixels in the camera focal plane, with 8 pixels across the central lobe of the diffraction-limited spot. The goal of the experiment was to measure the effects of turbulence in the free atmosphere on propagation, but the wavefront sensor also detected the aberrations induced by the aircraft boundary layer and the receiver aircraft's internal beam path. Data analysis methods used to extract the desired atmospheric contribution to the phase measurements from data corrupted by non-atmospheric aberrations are described. Approaches included reconstruction of the phase as a linear combination of Zernike polynomials coupled with optimal estimators, and computation of structure functions of the sub-aperture slopes. The theoretical basis for the data analysis techniques is presented. Results are described, and comparisons with theory and simulations are shown. Estimates of average turbulence strength along the propagation path from the wavefront sensor showed good agreement with other sensors. The Zernike spectra calculated from the wavefront sensor data were consistent with the standard Kolmogorov model of turbulence.

  11. Discovery Channel Telescope active optics system early integration and test

    NASA Astrophysics Data System (ADS)

    Venetiou, Alexander J.; Bida, Thomas A.

    2012-09-01

    The Discovery Channel Telescope (DCT) is a 4.3-meter telescope with a thin meniscus primary mirror (M1) and a honeycomb secondary mirror (M2). The optical design is an f/6.1 Ritchey-Chrétien (RC) with an unvignetted 0.5° Field of View (FoV) at the Cassegrain focus. We describe the design, implementation and performance of the DCT active optics system (AOS). The DCT AOS maintains collimation and controls the figure of the mirror to provide seeing-limited images across the focal plane. To minimize observing overhead, rapid settling times are achieved using a combination of feed-forward and low-bandwidth feedback control using a wavefront sensing system. In 2011, we mounted a Shack-Hartmann wavefront sensor at the prime focus of M1, the Prime Focus Test Assembly (PFTA), to test the AOS with the wavefront sensor, and the feedback loop. The incoming wavefront is decomposed using Zernike polynomials, and the mirror figure is corrected with a set of bending modes. Components of the system that we tested and tuned included the Zernike to Bending Mode transformations. We also started open-loop feed-forward coefficients determination. In early 2012, the PFTA was replaced by M2, and the wavefront sensor moved to its normal location on the Cassegrain instrument assembly. We present early open loop wavefront test results with the full optical system and instrument cube, along with refinements to the overall control loop operating at RC Cassegrain focus.

  12. DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. T. P. HUTAURUK, D. G. IRELAND, G. ROSNER

    2009-04-01

    Angular distributions of differential cross sections from the latest CLAS data sets [6] for the reaction γ + p → K⁺ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1, where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion in associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
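    The associated Legendre polynomials used in such expansions can be generated by the standard upward recurrence in l; a minimal sketch (Condon-Shortley phase convention assumed):

    ```python
    import math

    def assoc_legendre(l, m, x):
        """Associated Legendre function P_l^m(x), Condon-Shortley phase,
        via the seed P_m^m = (-1)^m (2m-1)!! (1-x^2)^{m/2} and the
        upward recurrence (l-m) P_l^m = x(2l-1) P_{l-1}^m - (l+m-1) P_{l-2}^m."""
        if m > l:
            return 0.0
        s = math.sqrt(max(0.0, 1.0 - x * x))
        pmm = 1.0
        for k in range(1, m + 1):
            pmm *= -(2 * k - 1) * s
        if l == m:
            return pmm
        prev, cur = pmm, x * (2 * m + 1) * pmm   # cur = P_{m+1}^m
        for ll in range(m + 2, l + 1):
            prev, cur = cur, (x * (2 * ll - 1) * cur - (ll + m - 1) * prev) / (ll - m)
        return cur
    ```

    For m = 0 this reduces to the ordinary Legendre polynomials, e.g. P_2^0(x) = (3x² - 1)/2, while P_2^1(x) = -3x√(1 - x²).
    
    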

  13. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method is using generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with that of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Beyond Euler angles: exploiting the angle-axis parametrization in a multipole expansion of the rotation operator.

    PubMed

    Siemens, Mark; Hancock, Jason; Siminovitch, David

    2007-02-01

    Euler angles (α, β, γ) are cumbersome from a computational point of view, and their link to experimental parameters is oblique. The angle-axis {Φ, n} parametrization, especially in the form of quaternions (or Euler-Rodrigues parameters), has served as the most promising alternative, and has enjoyed considerable success in rf pulse design and optimization. We focus on the benefits of angle-axis parameters by considering a multipole operator expansion of the rotation operator D(Φ, n), and a Clebsch-Gordan expansion of the rotation matrices D^J_{MM'}(Φ, n). Each of the coefficients in the Clebsch-Gordan expansion is proportional to the product of a spherical harmonic of the vector n specifying the axis of rotation, Y_{λμ}(n), with a fixed function of the rotation angle Φ, a Gegenbauer polynomial C^{λ+1}_{2J-λ}(cos Φ/2). Several application examples demonstrate that this Clebsch-Gordan expansion gives easy and direct access to many of the parameters of experimental interest, including coherence order changes (isolated in the Clebsch-Gordan coefficients) and rotation angle (isolated in the Gegenbauer polynomials).
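    The Gegenbauer (ultraspherical) polynomials appearing in the Clebsch-Gordan expansion satisfy a standard three-term recurrence; a minimal evaluation sketch:

    ```python
    def gegenbauer(n, alpha, x):
        """Gegenbauer polynomial C_n^alpha(x) via the recurrence
        k C_k = 2x(k + alpha - 1) C_{k-1} - (k + 2*alpha - 2) C_{k-2},
        with C_0 = 1 and C_1 = 2*alpha*x."""
        c0, c1 = 1.0, 2.0 * alpha * x
        if n == 0:
            return c0
        for k in range(2, n + 1):
            c0, c1 = c1, (2.0 * x * (k + alpha - 1) * c1
                          - (k + 2.0 * alpha - 2) * c0) / k
        return c1
    ```

    For α = 1 these reduce to the Chebyshev polynomials of the second kind, e.g. C_2^1(x) = 4x² - 1.
    
    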

  15. Determination of the expansion of the potential of the earth's normal gravitational field

    NASA Astrophysics Data System (ADS)

    Kochiev, A. A.

    The potential of the generalized problem of 2N fixed centers is expanded in a polynomial and Legendre function series. Formulas are derived for the expansion coefficients, and the disturbing function of the problem is constructed in an explicit form.

  16. Theory of adsorption in a polydisperse templated porous material: Hard sphere systems

    NASA Astrophysics Data System (ADS)

    Rżysko, Wojciech; Sokołowski, Stefan; Pizio, Orest

    2002-03-01

    A theoretical description of adsorption in a templated porous material, formed by an equilibrium quench of a polydisperse fluid composed of matrix and template particles and subsequent removal of the template particles is presented. The approach is based on the solution of the replica Ornstein-Zernike equations with Percus-Yevick and hypernetted chain closures. The method of solution uses expansions of size-dependent correlation functions into Fourier series, as described by Lado [J. Chem. Phys. 108, 6441 (1998)]. Specific calculations have been carried out for model systems, composed of hard spheres.

  17. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.

  18. Solution of the mean spherical approximation for polydisperse multi-Yukawa hard-sphere fluid mixture using orthogonal polynomial expansions

    NASA Astrophysics Data System (ADS)

    Kalyuzhnyi, Yurij V.; Cummings, Peter T.

    2006-03-01

    The Blum-Høye [J. Stat. Phys. 19, 317 (1978)] solution of the mean spherical approximation for a multicomponent multi-Yukawa hard-sphere fluid is extended to a polydisperse multi-Yukawa hard-sphere fluid. Our extension is based on the application of the orthogonal polynomial expansion method of Lado [Phys. Rev. E 54, 4411 (1996)]. Closed-form analytical expressions for the structural and thermodynamic properties of the model are presented. They are given in terms of the parameters that follow directly from the solution. By way of illustration, the method of solution is applied to describe the thermodynamic properties of the one- and two-Yukawa versions of the model.

  19. Precipitate shape fitting and reconstruction by means of 3D Zernike functions

    NASA Astrophysics Data System (ADS)

    Callahan, P. G.; De Graef, M.

    2012-01-01

    3D Zernike functions are defined and used for the reconstruction of precipitate shapes. These functions are orthogonal over the unit ball and allow for an arbitrary shape, scaled to fit inside an embedding sphere, to be decomposed into 3D harmonics. Explicit expressions are given for the general Zernike moments, correcting typographical errors in the literature. Explicit expressions of the Zernike moments for the ellipsoid and the cube are given. The 3D Zernike functions and moments are applied to the reconstruction of γ' precipitate shapes in two Ni-based superalloys, one with nearly cuboidal precipitate shapes, and one with more complex dendritic shapes.

  20. Using Taylor Expansions to Prepare Students for Calculus

    ERIC Educational Resources Information Center

    Lutzer, Carl V.

    2011-01-01

    We propose an alternative to the standard introduction to the derivative. Instead of using limits of difference quotients, students develop Taylor expansions of polynomials. This alternative allows students to develop many of the central ideas about the derivative at an intuitive level, using only skills and concepts from precalculus, and…

  1. Aberration corrections for free-space optical communications in atmosphere turbulence using orbital angular momentum states.

    PubMed

    Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y

    2012-01-02

    The effect of atmosphere turbulence on light's spatial structure compromises the information capacity of photons carrying the Orbital Angular Momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on the Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacities of the FSO communication link. At the same time, our experimental results show that the values of the participation function decrease with the phase correction method for OAM states, i.e., the correction method effectively mitigates the adverse effect of atmospheric turbulence.

  2. Astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens.

    PubMed

    Fu, Xiao; Duan, Fajie; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Lv, Changrong

    2017-10-01

    As a special kind of spectrometer with the Czerny-Turner structure, the echelle spectrometer features two-dimensional dispersion, which leads to a complex astigmatic condition. In this work, we propose an optical design of an astigmatism-corrected echelle spectrometer using an off-the-shelf cylindrical lens. A mathematical model considering the astigmatism introduced by the off-axis mirrors, the echelle grating, and the prism is established. Our solution features simplified calculation and low-cost construction, and is capable of overall compensation of the astigmatism in a wide spectral range (200-600 nm). An optical simulation utilizing ZEMAX software, an astigmatism assessment based on Zernike polynomials, and an instrument experiment are implemented to validate the effect of astigmatism correction. The results demonstrate that the astigmatism of the echelle spectrometer was corrected to a large extent, and a high spectral resolution, better than 0.1 nm, was achieved.

  3. Opto-mechanical design and gravity-deformation analysis on optical telescope in laser communication system

    NASA Astrophysics Data System (ADS)

    Fu, Sen; Du, Jindan; Song, Yiwei; Gao, Tianyu; Zhang, Daqing; Wang, Yongzhi

    2017-11-01

    In space laser communication, optical antennas are among the main components, and their required precision is very high. This paper, based on the R-C telescope, carries out the design and simulation of the optical lenses and supporting truss according to the system parameters. A finite element method (FEM) was used to analyze the deformation of the optical lenses. Finally, Zernike polynomials were introduced to fit the primary mirror, which has a diameter of 250 mm. The objective of this study is to determine whether the wavefront aberration of the primary mirror can meet the imaging-quality requirement. The results show that the deterioration of the imaging quality is caused by the gravity deformation of the primary and secondary mirrors, and that the optical deviation of the optical antenna increases with the diameter of the pupil.

  4. Non-overlap subaperture interferometric testing for large optics

    NASA Astrophysics Data System (ADS)

    Wu, Xin; Yu, Yingjie; Zeng, Wenhan; Qi, Te; Chen, Mingyi; Jiang, Xiangqian

    2017-08-01

    It has been shown that the number of subapertures and the amount of overlap have a significant influence on the stitching accuracy. In this paper, a non-overlap subaperture interferometric testing method (NOSAI) is proposed to inspect large optical components. This method greatly reduces the number of subapertures and the influence of environmental interference while maintaining the accuracy of reconstruction. A general subaperture distribution pattern of NOSAI is also proposed for large rectangular surfaces. The square Zernike polynomials are employed to fit such wavefronts. The effect of the minimum number of fitting terms on the accuracy of NOSAI, and the sensitivities of NOSAI to subaperture alignment errors, power systematic error, and random noise, are discussed. Experimental results validate the feasibility and accuracy of the proposed NOSAI in comparison with the wavefront obtained by a large-aperture interferometer and the surface stitched by the multi-aperture overlap-scanning technique (MAOST).

  5. Image recovery from defocused 2D fluorescent images in multimodal digital holographic microscopy.

    PubMed

    Quan, Xiangyu; Matoba, Osamu; Awatsuji, Yasuhiro

    2017-05-01

    A technique of three-dimensional (3D) intensity retrieval from defocused, two-dimensional (2D) fluorescent images in multimodal digital holographic microscopy (DHM) is proposed. In the multimodal DHM, 3D phase and 2D fluorescence distributions are obtained simultaneously by an integrated system of an off-axis DHM and a conventional epifluorescence microscopy, respectively. This gives us more information about the target; however, defocused fluorescent images are observed due to the short depth of field. In this Letter, we propose a method to recover the defocused images based on phase compensation and backpropagation from the defocused plane to the focused plane, using the distance information that is obtained from a 3D phase distribution. By applying Zernike polynomial phase correction, we brought back the fluorescence intensity to the focused imaging planes. An experimental demonstration using fluorescent beads is presented, and expected applications are suggested.

  6. Stability of corneal topography and wavefront aberrations in young Singaporeans.

    PubMed

    Zhu, Mingxia; Collins, Michael J; Yeo, Anna C H

    2013-09-01

    The aim was to investigate the differences between and variations across time in corneal topography and ocular wavefront aberrations in young Singaporean myopes and emmetropes. We used a videokeratoscope and wavefront sensor to measure the ocular surface topography and wavefront aberrations of the total-eye optics in the morning, midday and late afternoon on two separate days. Topographic data were used to derive the corneal surface wavefront aberrations. Both the corneal and total wavefronts were analysed up to the fourth radial order of the Zernike polynomial expansion and were centred on the entrance pupil (5.0 mm). The participants included 12 young progressing myopes, 13 young stable myopes and 15 young age-matched emmetropes. For all subjects considered together, there were significant changes in some of the aberrations across the day, such as spherical aberration (Z(4,0)) and vertical coma (Z(3,-1)) (repeated measures analysis of variance, p < 0.05). The magnitude of positive spherical aberration (Z(4,0)) was significantly lower in the progressing myopic group than in the stable myopic (p = 0.04) and emmetropic (p = 0.02) groups. There were also significant interactions between refractive group and time of day for with- and against-the-rule astigmatism (Z(2,2)). Significantly lower fourth-order root mean square of ocular wavefront aberrations was found in the progressing myopic group compared with the stable myopes and emmetropes (p < 0.01). These differences and variations in the corneal and total aberrations may have significance for our understanding of refractive error development and for clinical applications requiring accurate wavefront measurements. © 2013 The Authors. Clinical and Experimental Optometry © 2013 Optometrists Association Australia.
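
    The Zernike terms cited here (spherical aberration Z(4,0), vertical coma Z(3,-1)) are built from the radial polynomials R_n^m. A minimal, illustrative evaluation via the explicit factorial sum (our function name, not from the paper):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho):
        R_n^m(rho) = sum_s (-1)^s (n-s)! /
            (s! ((n+m)/2 - s)! ((n-m)/2 - s)!) * rho^(n-2s)
    The polynomial vanishes unless n - |m| is even and non-negative."""
    m = abs(m)
    if n < m or (n - m) % 2:
        return 0.0
    return sum((-1) ** s * factorial(n - s)
               / (factorial(s) * factorial((n + m) // 2 - s)
                  * factorial((n - m) // 2 - s))
               * rho ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))

# Spherical-aberration radial part: R_4^0(rho) = 6 rho^4 - 6 rho^2 + 1
```

    The full Zernike term multiplies this radial part by cos(m*theta) or sin(m*theta) and a normalization constant.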

  7. Spatially resolved wavefront aberrations of ophthalmic progressive-power lenses in normal viewing conditions.

    PubMed

    Villegas, Eloy A; Artal, Pablo

    2003-02-01

    To measure the wavefront aberration at different locations in progressive-power lenses (PPL's), isolated and in situ (PPL plus eye). A Hartmann-Shack wavefront sensor was used to measure progressive-power lenses and human eyes either independently or in combination. In each selected zone, the lens was placed and tilted so as to simulate natural viewing conditions. We measured 21 relevant locations across an isolated PPL (plano lens with a power addition of 2 D). In six of the locations, the wavefront aberration of the eye plus PPL was obtained in two ways: (1) by direct measurement of the system and (2) by adding the individual wavefront aberrations of the eye and the lens for each appropriate zone. In every case, we obtained the wavefront aberration as a Zernike polynomial expansion, the root mean square error, the point-spread function, and the Strehl ratio. Along the corridor of the PPL, third-order coma, trefoil, and astigmatism were the dominant aberrations. In areas of the PPL outside the corridor, astigmatism increased, whereas other aberrations remained similar to those at the lens center. Small differences were found between the direct and calculated methods used to obtain the wavefront aberration of the eye with the lens, and the possible sources of error are discussed. In some lens zones, the aberrations of the lens may be compensated by the particular aberrations of the eye, yielding improved optical performance over that present in the lens alone. We designed and built a wavefront sensor to perform spatially resolved aberration measurements in ophthalmic lenses, in particular in PPL's, either isolated or in combination with the eye. The aberrations appearing in the PPL were compared with those in normal aged eyes.

  8. Characterization of contour shapes achievable with a MEMS deformable mirror

    NASA Astrophysics Data System (ADS)

    Zhou, Yaopeng; Bifano, Thomas

    2006-01-01

    An important consideration in the design of an adaptive optics controller is the range of physical shapes required by the DM to compensate for the existing aberrations. Conversely, if the range of surface shapes achievable with a DM is known, its suitability for a particular AO application can be determined. In this paper, we characterize one MEMS DM that was recently developed for vision science applications. The device has 140 actuators supporting a continuous face-sheet deformable mirror with a 4 mm square aperture. The total range of actuation is about 4 μm, achieved using electrostatic actuation in an architecture that has been described previously. We incorporated the MEMS mirror into an adaptive optics (AO) testbed to measure its capacity to transform an initially planar wavefront into a wavefront having one of thirty-six orthogonal shapes corresponding to the first seven orders of Zernike polynomials. The testbed included a superluminescent diode source emitting light with a wavelength of 630 nm, a MEMS DM, and a Shack-Hartmann wavefront sensor (SHWS). The DM was positioned in a plane conjugate to the SHWS lenslets, using a pair of relay lenses. Wavefront slope measurements provided by the SHWS were used in an integral controller to regulate the DM shape. The control software used the difference between the wavefront measured by the SHWS and the desired (reference) wavefront as feedback for the DM. The DM is able to produce all 36 terms, with a wavefront height root mean square (RMS) from 1.35 μm for the lower-order Zernike shapes to 0.2 μm for the 7th order.
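
    The integral control loop described above (SHWS residual fed back into the DM command) can be caricatured in a few lines. A toy scalar sketch for a single mode, assuming a perfectly linear mirror response (not the actual testbed software):

```python
def integral_controller(measure, reference, gain=0.3, steps=50):
    """Toy integral AO loop: the DM command accumulates a fraction
    (the loop gain) of the residual wavefront error each iteration.
    'measure' maps the current command to the sensed wavefront."""
    cmd = 0.0
    for _ in range(steps):
        error = reference - measure(cmd)
        cmd += gain * error
    return cmd

# With an ideal linear plant (mirror response equals the command),
# the residual shrinks by (1 - gain) per step and the loop drives
# the measured wavefront to the reference shape.
cmd = integral_controller(lambda c: c, reference=1.0)
```

    In the real system the scalar is replaced by a vector of Zernike-mode amplitudes and the plant by the DM influence functions, but the feedback structure is the same.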

  9. Carrier and aberrations removal in interferometric fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Blain, P.; Michel, F.; Renotte, Y.; Habraken, S.

    2012-04-01

    A profilometer which takes advantage of a polarization-state splitting technique and a monochromatic light projection method as a way to overcome ambient lighting for in-situ measurement is under development [1, 2]. Because of the Savart plate, which refracts two off-axis beams, the device suffers from aberrations (mostly coma and astigmatism). These aberrations affect the quality of the sinusoidal fringe pattern. In fringe projection profilometry, the unwrapped phase distribution map contains the sum of the object's shape-related phase and the carrier-fringe-related phase. In order to extract the 3D shape of the object, the carrier phase has to be removed [3, 4]. An easy way to remove both the fringe carrier and the aberrations of the optical system is to measure the phase of the test object and the phase of a reference plane with the same setup, and to subtract the two phase maps. This time-consuming technique is suitable for the laboratory but not for industry. We propose a method to numerically remove both the fringe carrier and the aberrations. A first reference phase of a calibration plane is evaluated knowing the position of the different elements in the setup and the orientation of the fringes. Then a fit of the phase map by Zernike polynomials is computed [5]. As the triangulation parameters are known during calibration, the computation of the Zernike coefficients has to be made only once. The wavefront error can be adjusted by a scale factor which depends on the position of the test object.

  10. Computation of solar wind parameters from the OGO-5 plasma spectrometer data using Hermite polynomials

    NASA Technical Reports Server (NTRS)

    Neugebauer, M.

    1971-01-01

    The method used to calculate the velocity, temperature, and density of the solar wind plasma from spectra obtained by attitude-stabilized plasma detectors on the earth satellite OGO 5 is presented. The method, which uses expansions in terms of Hermite polynomials, is very inexpensive to implement on an electronic computer compared to the least-squares and other iterative methods often used for similar problems.
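
    The Hermite basis underlying such expansions is cheap to generate with the standard recurrence, which is part of why the method beats iterative least-squares in cost. A minimal sketch for the physicists' Hermite polynomials (illustrative, not the OGO-5 code):

```python
def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the recurrence
        H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x),
    with H_0 = 1 and H_1 = 2x."""
    h_prev, h_curr = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    return h_curr

# Expanding a measured spectrum in H_n (times a Gaussian) turns the
# leading coefficients directly into density-, velocity-, and
# temperature-like moments of the distribution.
```

    Each evaluation is O(n) with no matrix solves, which is the computational advantage the abstract alludes to.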

  11. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistently lower errors and faster computational times.
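
    All of the solvers named above minimize the unconstrained LASSO objective ||Ax - b||^2/2 + lam*||x||_1, whose key primitive is the soft-thresholding operator. A toy iterative shrinkage-thresholding (ISTA) sketch illustrates the idea at miniature scale; it is none of the cited solvers, and the function names are ours:

```python
def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, b, lam, step, iters=500):
    """Iterative shrinkage-thresholding for
        min_x ||Ax - b||^2 / 2 + lam * ||x||_1.
    A is a list of rows; pure-Python toy implementation."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = Ax - b, gradient g = A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x
```

    Small entries are driven exactly to zero, which is the sparsity mechanism the cross-validated choice of the regularization constant tunes.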

  12. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
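
    As a one-dimensional illustration of the polynomial chaos idea (not the adaptive sparse-grid machinery of the paper), the PC coefficients of f(xi) = xi^2 for a standard normal input follow from projection onto probabilists' Hermite polynomials He_k, computed here with a 3-point Gauss-Hermite rule:

```python
import math

# 3-point Gauss-Hermite rule for the standard normal weight:
# nodes -sqrt(3), 0, +sqrt(3) with weights 1/6, 2/3, 1/6 (exact to degree 5).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

# Probabilists' Hermite polynomials He_0, He_1, He_2
He = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]

def pc_coefficient(f, k):
    """c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!."""
    proj = sum(w * f(x) * He[k](x) for w, x in zip(weights, nodes))
    return proj / math.factorial(k)

coeffs = [pc_coefficient(lambda x: x * x, k) for k in range(3)]
# xi^2 = 1*He_0 + 0*He_1 + 1*He_2, so the coefficients come out [1, 0, 1];
# the output variance is then sum over k >= 1 of k! * c_k^2 = 2.
```

    The mean and variance of the model output are read directly off the coefficients, which is the appeal of PC over raw Monte Carlo sampling for smooth responses.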

  13. The acoustic power of a vibrating clamped circular plate revisited in the wide low frequency range using expansion into the radial polynomials.

    PubMed

    Rdzanek, Wojciech P

    2016-06-01

    This study deals with the classical problem of sound radiation of an excited clamped circular plate embedded in a flat rigid baffle. The system of two coupled differential equations is solved: one for the excited and damped vibrations of the plate, and the other the Helmholtz equation. An approach using the expansion into radial polynomials leads to results for the modal impedance coefficients useful for a comprehensive numerical analysis of sound radiation. The results obtained are accurate and efficient in a wide low-frequency range and can easily be adapted to a simply supported circular plate. The fluid loading is included, providing accurate results at resonance.

  14. Analysis of the impacts of horizontal translation and scaling on wavefront approximation coefficients with rectangular pupils for Chebyshev and Legendre polynomials.

    PubMed

    Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong

    2013-12-01

    Chebyshev and Legendre polynomials are frequently used in rectangular pupils for wavefront approximation. Ideally, the dataset completely fits with the polynomial basis, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, under horizontal translation and scaling, the terms in the original polynomials become linear combinations of the coefficients of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. With a small translation, a first-order Taylor expansion can be used to simplify the computation. Several representative terms can be selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the outcomes of the analytical solutions and the approximated values under discrete sampling are consistent. With the computation of a group of randomly generated coefficients, we contrasted the changes under different translation and scaling conditions. Larger ratios correspond to larger deviations of the approximated values from the original ones. Finally, we analyzed the peak-to-valley (PV) and root mean square (RMS) deviations from the uses of the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most deviated 5th term in the first-order 1D-Legendre expansion has a PV deviation of less than 7% and an RMS deviation of less than 2%. The analytical expressions and the computed results under discrete sampling given in this paper for the multiple typical function bases during translation and scaling in rectangular areas can be applied in wavefront approximation and analysis.
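
    The coefficient mixing under translation can be checked directly for a single low-order term: shifting the argument of P_2 re-expresses it exactly in the unshifted basis as P_2(x + t) = P_2(x) + 3t P_1(x) + (3t^2/2) P_0(x). A small numeric check (illustrative, not the paper's general formulas):

```python
def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence
        (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr
                                  - k * p_prev) / (k + 1)
    return p_curr

def p2_shifted(x, t):
    """P_2(x + t) expanded in the unshifted Legendre basis:
    the translation leaks the second-order coefficient into the
    first- and zeroth-order terms."""
    return (legendre(2, x) + 3.0 * t * legendre(1, x)
            + 1.5 * t * t * legendre(0, x))
```

    For a small t, the leaked first-order term 3t P_1 dominates the change, which is why the paper's first-order Taylor treatment works well for translations below a few percent.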

  15. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  16. Final Shape of Precision Molded Optics: Part 1 - Computational Approach, Material Definitions and the Effect of Lens Shape

    DTIC Science & Technology

    2012-05-15

    subroutine by adding time-dependence to the thermal expansion coefficient. The user subroutine was written in Intel Visual Fortran that is compatible...temperature history dependent expansion and contraction, and the molds were modeled as elastic taking into account both mechanical and thermal strain. In...behavior was approximated by assuming the thermal coefficient of expansion to be a fourth order polynomial function of temperature. The authors

  17. Polynomial modal analysis of slanted lamellar gratings.

    PubMed

    Granet, Gérard; Randriamihaja, Manjakavola Honore; Raniriharinosy, Karyl

    2017-06-01

    The problem of diffraction by slanted lamellar dielectric and metallic gratings in classical mounting is formulated as an eigenvalue eigenvector problem. The numerical solution is obtained by using the moment method with Legendre polynomials as expansion and test functions, which allows us to enforce in an exact manner the boundary conditions which determine the eigensolutions. Our method is successfully validated by comparison with other methods, including the case of highly slanted gratings.

  18. Diffraction Theory for Polygonal Apertures

    DTIC Science & Technology

    1988-07-01

    and utilized oblate spheroidal vector wave functions, and Nomura and Katsura (1955), who employed an expansion of the hypergeometric polynomial. [A displayed equation and a coefficient table are garbled in extraction; the surviving text states that the factor relates directly to the orthogonality relations for the Chebyshev polynomials, and discusses convergence.] 3.1.2.2 Gaussian Illuminated Corner. In the sample calculation just discussed we discovered some of the basic characteristics of the GBE

  19. A Phase-Shifting Zernike Wavefront Sensor for the Palomar P3K Adaptive Optics System

    NASA Technical Reports Server (NTRS)

    Wallace, J. Kent; Crawford, Sam; Loya, Frank; Moore, James

    2012-01-01

    A phase-shifting Zernike wavefront sensor has distinct advantages over other types of wavefront sensors. Chief among them are: 1) improved sensitivity to low-order aberrations and 2) efficient use of photons (hence reduced sensitivity to photon noise). We are in the process of deploying a phase-shifting Zernike wavefront sensor to be used with the real-time adaptive optics system for Palomar. Here we present the current state of the Zernike wavefront sensor to be integrated into the high-order adaptive optics system at Mount Palomar's Hale Telescope.

  20. Single particle analysis based on Zernike phase contrast transmission electron microscopy.

    PubMed

    Danev, Radostin; Nagayama, Kuniaki

    2008-02-01

    We present the first application of Zernike phase-contrast transmission electron microscopy to single-particle 3D reconstruction of a protein, using GroEL chaperonin as the test specimen. We evaluated the performance of the technique by comparing 3D models derived from Zernike phase-contrast imaging with models from conventional underfocus phase-contrast imaging. The same resolution, about 12 Å, was achieved by both imaging methods. The reconstruction based on Zernike phase-contrast data required about 30% fewer particles. The advantages and prospects of each technique are discussed.

  1. Theoretical study on the dispersion curves of Lamb waves in piezoelectric-semiconductor sandwich plates GaAs-FGPM-AlAs: Legendre polynomial series expansion

    NASA Astrophysics Data System (ADS)

    Othmani, Cherif; Takali, Farid; Njeh, Anouar

    2017-06-01

    In this paper, the propagation of Lamb waves in the GaAs-FGPM-AlAs sandwich plate is studied. Based on the orthogonal function, the Legendre polynomial series expansion is applied along the thickness direction to obtain the Lamb dispersion curves. The convergence and accuracy of this polynomial method are discussed. In addition, the influences of the volume fraction p and thickness hFGPM of the FGPM middle layer on the Lamb dispersion curves are developed. The numerical results also show differences between the characteristics of the Lamb dispersion curves in the sandwich plate for various gradient coefficients of the FGPM middle layer. In fact, as the volume fraction p increases, the phase velocity increases and the number of modes decreases in a given frequency range. All the developments performed in this paper were implemented in Matlab software. The corresponding results presented in this work may have important applications in several industrial areas and in developing novel acoustic devices such as sensors, electromechanical transducers, actuators and filters.

  2. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, we can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) no explicit mapping functions, which results in an additional computational burden when projecting a new sample into the low-dimensional subspace; and 3) an inability to obtain the optimal discriminant vectors that maximize the objective of DLPP. To overcome the weaknesses of KDLPP, in this paper a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  3. Particle identification with neural networks using a rotational invariant moment representation

    NASA Astrophysics Data System (ADS)

    Sinkus, R.; Voss, T.

    1997-02-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions, which are invariant under rotation. The multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.

  4. Linear precoding based on polynomial expansion: reducing complexity in massive MIMO.

    PubMed

    Mueller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane

    Massive multiple-input multiple-output (MIMO) techniques have the potential to bring tremendous improvements in spectral efficiency to future communication systems. Counterintuitively, the practical issues of having uncertain channel knowledge, high propagation losses, and implementing optimal non-linear precoding are solved more or less automatically by enlarging system dimensions. However, the computational precoding complexity grows with the system dimensions. For example, the close-to-optimal and relatively "antenna-efficient" regularized zero-forcing (RZF) precoding is very complicated to implement in practice, since it requires fast inversions of large matrices in every coherence period. Motivated by the high performance of RZF, we propose to replace the matrix inversion and multiplication by a truncated polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme, which is more suitable for real-time hardware implementation and significantly reduces the delay to the first transmitted symbol. The degree of the matrix polynomial can be adapted to the available hardware resources and enables a smooth transition between simple maximum ratio transmission and more advanced RZF. By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by TPE precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximize this SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
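
The core idea of replacing a matrix inverse by a low-degree matrix polynomial can be sketched in a few lines. The Neumann-series choice of polynomial coefficients below is one simple option (the paper instead optimizes the coefficients for SINR; the system sizes, the regularization value, and the scaling nu are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 32, 8                          # antennas, users -- illustrative sizes
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
alpha = 0.8                           # regularization parameter (assumed)
A = H.conj().T @ H + alpha * np.eye(K)

# Truncated polynomial (Neumann-type) approximation of A^{-1}:
#   A^{-1} ~= nu * sum_{k=0}^{J} (I - nu A)^k,
# convergent when the spectral radius of (I - nu A) is below one,
# which alpha > 0 guarantees for this scaling.
nu = 1.0 / np.linalg.norm(A, 2)

def tpe_inverse(A, J, nu):
    n = A.shape[0]
    B = np.eye(n) - nu * A
    term = np.eye(n, dtype=A.dtype)
    acc = np.eye(n, dtype=A.dtype)
    for _ in range(J):
        term = term @ B               # only matrix multiplications, no inversion
        acc = acc + term
    return nu * acc

# Approximation error versus polynomial degree J
err = [np.linalg.norm(tpe_inverse(A, J, nu) @ A - np.eye(K)) for J in (2, 8, 32)]
```

Raising the degree J trades extra multiply-accumulate hardware for accuracy, which is exactly the smooth transition between maximum ratio transmission (low degree) and RZF (high degree) described in the abstract.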

  5. On the modular structure of the genus-one Type II superstring low energy expansion

    NASA Astrophysics Data System (ADS)

    D'Hoker, Eric; Green, Michael B.; Vanhove, Pierre

    2015-08-01

    The analytic contribution to the low energy expansion of Type II string amplitudes at genus-one is a power series in space-time derivatives with coefficients that are determined by integrals of modular functions over the complex structure modulus of the world-sheet torus. These modular functions are associated with world-sheet vacuum Feynman diagrams and given by multiple sums over the discrete momenta on the torus. In this paper we exhibit exact differential and algebraic relations for a certain infinite class of such modular functions by showing that they satisfy Laplace eigenvalue equations with inhomogeneous terms that are polynomial in non-holomorphic Eisenstein series. Furthermore, we argue that the set of modular functions that contribute to the coefficients of interactions up to order are linear sums of functions in this class and quadratic polynomials in Eisenstein series and odd Riemann zeta values. Integration over the complex structure results in coefficients of the low energy expansion that are rational numbers multiplying monomials in odd Riemann zeta values.

  6. Uncertainty analysis for the steady-state flows in a dual throat nozzle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q.-Y.; Gottlieb, David; Hesthaven, Jan S.

    2005-03-20

    It is well known that the steady state of an isentropic flow in a dual-throat nozzle with equal throat areas is not unique. In particular there is a possibility that the flow contains a shock wave, whose location is determined solely by the initial condition. In this paper, we consider cases with uncertainty in this initial condition and use generalized polynomial chaos methods to study the steady-state solutions for stochastic initial conditions. Special interest is given to the statistics of the shock location. The polynomial chaos (PC) expansion modes are shown to be smooth functions of the spatial variable x, although each solution realization is discontinuous in the spatial variable x. When the variance of the initial condition is small, the probability density function of the shock location is computed with high accuracy. Otherwise, many terms are needed in the PC expansion to produce reasonable results due to the slow convergence of the PC expansion, caused by non-smoothness in random space.
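
Generalized polynomial chaos represents a random quantity as a series in polynomials orthogonal with respect to the input distribution. A minimal sketch for a Gaussian input using the probabilists' Hermite basis (the target function f and truncation level are illustrative and unrelated to the nozzle problem):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE
from math import factorial

# Expand f(xi) = exp(xi), xi ~ N(0,1), as sum_k c_k He_k(xi).
# Coefficients: c_k = E[f(xi) He_k(xi)] / k!, since E[He_j He_k] = k! delta_jk.
nodes, weights = hermegauss(40)               # Gauss quadrature for weight e^{-x^2/2}
weights = weights / np.sqrt(2.0 * np.pi)      # normalize to the N(0,1) density

def pc_coeff(f, k):
    return np.sum(weights * f(nodes) * HermiteE.basis(k)(nodes)) / factorial(k)

coeffs = [pc_coeff(np.exp, k) for k in range(8)]
# For this smooth f the exact coefficients are c_k = exp(1/2)/k!: rapid decay.
# It is precisely this smoothness in random space that is lost at a shock,
# which is why the abstract reports slow PC convergence for large variance.
```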

  7. Orthonormal aberration polynomials for anamorphic optical imaging systems with rectangular pupils.

    PubMed

    Mahajan, Virendra N

    2010-12-20

    The classical aberrations of an anamorphic optical imaging system, representing the terms of a power-series expansion of its aberration function, are separable in the Cartesian coordinates of a point on its pupil. We discuss the balancing of a classical aberration of a certain order with one or more such aberrations of lower order to minimize its variance across a rectangular pupil of such a system. We show that the balanced aberrations are the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point. The compound Legendre polynomials are orthogonal across a rectangular pupil and, like the classical aberrations, are inherently separable in the Cartesian coordinates of the pupil point. They are different from the balanced aberrations and the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil.
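
Because the balanced aberrations are separable in the Cartesian pupil coordinates, the orthogonality check over the rectangle factorizes into two 1-D Legendre integrals. A small numerical sketch (indices and normalization are generic, not the paper's aberration labels):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Products L_m(x) L_n(y) over the rectangle [-1,1] x [-1,1]:
#   <L_m1 L_n1, L_m2 L_n2> = <L_m1, L_m2>_x * <L_n1, L_n2>_y
x, w = leggauss(16)                  # exact for polynomial degree <= 31

def inner1d(m1, m2):
    return np.sum(w * Legendre.basis(m1)(x) * Legendre.basis(m2)(x))

def inner2d(m1, n1, m2, n2):
    return inner1d(m1, m2) * inner1d(n1, n2)
```

For instance, inner2d(2, 1, 2, 1) gives (2/5)(2/3), while any pair differing in either index integrates to zero, confirming the orthogonality of the compound Legendre polynomials over the rectangular pupil.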

  8. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.

  9. Influence of eye micromotions on spatially resolved refractometry

    NASA Astrophysics Data System (ADS)

    Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Osipova, Irina Y.

    2001-01-01

    The influence of eye micromotions on the accuracy of estimating Zernike coefficients from eye transverse aberration measurements was investigated. By computer modeling, the following eye aberrations were examined: defocus, primary astigmatism, spherical aberration of the 3rd and the 5th orders, as well as their combinations. It was determined that the standard deviation of estimated Zernike coefficients is proportional to the standard deviation of angular eye movements. Eye micromotions cause estimation errors in the Zernike coefficients of present aberrations and produce the appearance of Zernike coefficients of aberrations absent in the eye. When solely defocus is present, the biggest errors caused by eye micromotions are obtained for aberrations like coma and astigmatism. In comparison with other aberrations, spherical aberration of the 3rd and the 5th orders evokes the greatest increase in the standard deviation of other Zernike coefficients.

  10. Probing baryogenesis through the Higgs boson self-coupling

    NASA Astrophysics Data System (ADS)

    Reichert, M.; Eichhorn, A.; Gies, H.; Pawlowski, J. M.; Plehn, T.; Scherer, M. M.

    2018-04-01

    The link between a modified Higgs self-coupling and the strong first-order phase transition necessary for baryogenesis is well explored for polynomial extensions of the Higgs potential. We broaden this argument beyond leading polynomial expansions of the Higgs potential to higher polynomial terms and to nonpolynomial Higgs potentials. For our quantitative analysis we resort to the functional renormalization group, which allows us to evolve the full Higgs potential to higher scales and finite temperature. In all cases we find that a strong first-order phase transition manifests itself in an enhancement of the Higgs self-coupling by at least 50%, implying that such modified Higgs potentials should be accessible at the LHC.

  11. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there has been an increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and intends to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM from which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation, and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time, iteratively selecting the most contributing coefficients. The computational complexity of predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
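
The POD step mentioned above amounts to a singular value decomposition of a snapshot matrix: a few left singular vectors capture most of the field's variance. A toy sketch with synthetic rank-3 "snapshots" (the sizes and random data are arbitrary assumptions, standing in for the 3-D concentration fields):

```python
import numpy as np

# POD sketch: columns of `snapshots` play the role of flattened 3-D fields.
rng = np.random.default_rng(2)
modes_true = rng.standard_normal((500, 3))     # 3 latent spatial modes
amps = rng.standard_normal((3, 40))            # modal amplitudes per snapshot
snapshots = modes_true @ amps                  # 40 snapshots, exactly rank 3

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)    # fraction of variance captured
# The leading columns of U are the POD modes; truncating after the energy
# plateau reduces prediction of a full field to a handful of modal amplitudes.
```

Keeping only the leading modes is what lets the sparse PCE model a few modal amplitudes instead of every grid cell of the 3-D field.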

  12. Thermal/structural/optical integrated design for optical sensor mounted on unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Zhang, Gaopeng; Yang, Hongtao; Mei, Chao; Wu, Dengshan; Shi, Kui

    2016-01-01

    With the rapid development of science and technology and the demands of modern local conflicts, high-altitude optical sensors mounted on unmanned aerial vehicles are increasingly applied in airborne remote sensing, measurement, and detection. In order to obtain high-quality images from an aero optical remote sensor, it is important to analyze its thermal-optical performance under high-speed, high-altitude conditions. Especially for a key imaging assembly such as the optical window, temperature variation and temperature gradients can result in defocus and aberrations in the optical system, which degrade image quality. In order to improve the optical performance of a high-speed aerial camera optical window, a thermal/structural/optical integrated design method is developed. Firstly, the flight environment of the optical window is analyzed. Based on the theory of aerodynamics and heat transfer, the convection heat transfer coefficient is calculated. The temperature distribution of the optical window is simulated with finite element analysis software, and the maximum temperature difference between the inside and outside of the optical window is obtained. Then the deformation of the optical window under the boundary condition of this maximum temperature difference is calculated. The optical window surface deformation is fitted with Zernike polynomials as the interface, and the calculated Zernike fitting coefficients are imported into and analyzed with the CODE V optical software. At last, the transfer function diagrams of the optical system over the temperature field are comparatively analyzed. The comparison shows that the optical path difference caused by thermal deformation of the optical window is 138.2 nm, which satisfies the requirement PV ≤ 1/4 λ. The above study can be used as an important reference for other optical window designs.
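
The Zernike-fitting interface between the structural and optical models is, at heart, a linear least-squares problem: sample the deformed surface, evaluate a Zernike basis at the sample points, and solve for the coefficients. A minimal sketch with a handful of low-order modes in Cartesian form (the modes, sampling, and coefficient values are illustrative, not the window's actual deformation):

```python
import numpy as np

# Hypothetical low-order Zernike-style basis in Cartesian form on the unit disk
def zernike_basis(x, y):
    r2 = x ** 2 + y ** 2
    return np.column_stack([
        np.ones_like(x),       # piston
        x,                     # x tilt
        y,                     # y tilt
        2.0 * r2 - 1.0,        # defocus
        x ** 2 - y ** 2,       # astigmatism, 0 deg
        2.0 * x * y,           # astigmatism, 45 deg
    ])

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, (4000, 2))
pts = pts[pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0]   # keep points on the disk
x, y = pts[:, 0], pts[:, 1]

c_true = np.array([0.0, 0.1, -0.05, 0.3, 0.08, 0.02])   # assumed coefficients
w = zernike_basis(x, y) @ c_true                        # synthetic "deformation"
coeffs, *_ = np.linalg.lstsq(zernike_basis(x, y), w, rcond=None)
```

In the integrated workflow the fitted coefficients, not the raw node displacements, are what get handed to the optical design software.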

  13. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions; the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method, and make comparisons with others, on elliptic stochastic partial differential equations with 1D, 14D, and 40D random spaces.

  14. Polynomial equations for science orbits around Europa

    NASA Astrophysics Data System (ADS)

    Cinelli, Marco; Circi, Christian; Ortore, Emiliano

    2015-07-01

    In this paper, the design of science orbits for the observation of a celestial body has been carried out using polynomial equations. The effects related to the main zonal harmonics of the celestial body and the perturbation deriving from the presence of a third celestial body have been taken into account. The third body describes a circular and equatorial orbit with respect to the primary body and, for its disturbing potential, an expansion in Legendre polynomials up to the second order has been considered. These polynomial equations allow the determination of science orbits around Jupiter's satellite Europa, where the third body gravitational attraction represents one of the main forces influencing the motion of an orbiting probe. Thus, the retrieved relationships have been applied to this moon and periodic sun-synchronous and multi-sun-synchronous orbits have been determined. Finally, numerical simulations have been carried out to validate the analytical results.

  15. Automatic differentiation for Fourier series and the radii polynomial approach

    NASA Astrophysics Data System (ADS)

    Lessard, Jean-Philippe; Mireles James, J. D.; Ransford, Julian

    2016-11-01

    In this work we develop a computer-assisted technique for proving existence of periodic solutions of nonlinear differential equations with non-polynomial nonlinearities. We exploit ideas from the theory of automatic differentiation in order to formulate an augmented polynomial system. We compute a numerical Fourier expansion of the periodic orbit for the augmented system, and prove the existence of a true solution nearby using an a-posteriori validation scheme (the radii polynomial approach). The problems considered here are given in terms of locally analytic vector fields (i.e. the field is analytic in a neighborhood of the periodic orbit) hence the computer-assisted proofs are formulated in a Banach space of sequences satisfying a geometric decay condition. In order to illustrate the use and utility of these ideas we implement a number of computer-assisted existence proofs for periodic orbits of the Planar Circular Restricted Three-Body Problem (PCRTBP).
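
The augmentation idea can be made concrete on a textbook example: y' = sin(y) is non-polynomial, but adjoining s = sin(y) and c = cos(y) as unknowns yields the polynomial field y' = s, s' = cs, c' = -s². A sketch of this trick (plain RK4 integration stands in here for the paper's validated Fourier machinery, which it does not reproduce):

```python
import numpy as np

# y' = sin(y) becomes polynomial after augmenting with s = sin(y), c = cos(y):
#   y' = s,   s' = c*s,   c' = -s*s
def field(u):
    y, s, c = u
    return np.array([s, c * s, -s * s])

def rk4_step(u, h):
    k1 = field(u)
    k2 = field(u + 0.5 * h * k1)
    k3 = field(u + 0.5 * h * k2)
    k4 = field(u + h * k3)
    return u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y0 = 1.0
u = np.array([y0, np.sin(y0), np.cos(y0)])   # consistent initial data
for _ in range(1000):                        # integrate to t = 1
    u = rk4_step(u, 1.0e-3)

# Closed-form solution of y' = sin(y) for comparison
exact = 2.0 * np.arctan(np.tan(y0 / 2.0) * np.exp(1.0))
```

The augmented field is purely polynomial, so its Fourier coefficients obey polynomial recurrences, which is what makes the radii polynomial validation applicable.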

  16. Monograph on the use of the multivariate Gram Charlier series Type A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatayodom, T.; Heydt, G.

    1978-01-01

    The Gram-Charlier series is an infinite series expansion for a probability density function (pdf) in which the terms of the series are Hermite polynomials. There are several Gram-Charlier series; the best known is Type A. The Gram-Charlier series, Type A (GCA), exists for both univariate and multivariate random variables. This monograph introduces the multivariate GCA and illustrates its use through several examples. A brief bibliography and discussion of Hermite polynomials is also included. 9 figures, 2 tables.
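
In the univariate case, the Type A series corrects a normal density with Hermite-polynomial terms driven by skewness and excess kurtosis. A truncated sketch (a standardized variable is assumed, and the chosen moment values are arbitrary):

```python
import numpy as np
from numpy.polynomial.hermite_e import HermiteE

# Truncated Gram-Charlier Type A density for a standardized variable:
#   f(x) = phi(x) * [1 + (skew/6) He_3(x) + (ex_kurt/24) He_4(x)]
def gram_charlier_pdf(x, skew, ex_kurt):
    phi = np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    corr = (1.0
            + (skew / 6.0) * HermiteE.basis(3)(x)
            + (ex_kurt / 24.0) * HermiteE.basis(4)(x))
    return phi * corr

x = np.linspace(-6.0, 6.0, 4001)
f = gram_charlier_pdf(x, skew=0.3, ex_kurt=0.5)
total = np.sum(f) * (x[1] - x[0])   # Hermite corrections integrate to zero
```

Because each Hermite correction integrates to zero against the Gaussian, the truncated series still integrates to one, although for large skewness or kurtosis it can dip slightly negative in the tails, a known limitation of the Type A expansion.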

  17. Are there p-adic knot invariants?

    NASA Astrophysics Data System (ADS)

    Morozov, A. Yu.

    2016-04-01

    We suggest using the Hall-Littlewood version of the Rosso-Jones formula to define the germs of p-adic HOMFLY-PT polynomials for torus knots [ m, n] as coefficients of superpolynomials in a q-expansion. In this form, they have at least the [ m, n] ↔ [ n, m] topological invariance. This opens a new possibility to interpret superpolynomials as p-adic deformations of HOMFLY polynomials and poses a question of generalizing to other knot families, which is a substantial problem for several branches of modern theory.

  18. Precise calibration of spatial phase response nonuniformity arising in liquid crystal on silicon.

    PubMed

    Xu, Jingquan; Qin, SiYi; Liu, Chen; Fu, Songnian; Liu, Deming

    2018-06-15

    In order to calibrate the spatial phase response nonuniformity of liquid crystal on silicon (LCoS), we propose to use a Twyman-Green interferometer to characterize the wavefront distortion, due to the inherent curvature of the device. During the characterization, both the residual carrier frequency introduced by the Fourier transform evaluation method and the lens aberration are error sources. For the tilted phase error introduced by residual carrier frequency, the least mean square fitting method is used to obtain the tilted phase error. Meanwhile, we use Zernike polynomials fitting based on plane mirror calibration to mitigate the lens aberration. For a typical LCoS with 1×12,288 pixels after calibration, the peak-to-valley value of the inherent wavefront distortion is approximately 0.25λ at 1550 nm, leading to a half-suppression of wavefront distortion. All efforts can suppress the root mean squares value of the inherent wavefront distortion to approximately λ/34.

  19. Wavefront-aberration measurement and systematic-error analysis of a high numerical-aperture objective

    NASA Astrophysics Data System (ADS)

    Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin

    2018-02-01

    A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.

  20. Novel asymmetric cryptosystem based on distorted wavefront beam illumination and double-random phase encoding.

    PubMed

    Yu, Honghao; Chang, Jun; Liu, Xin; Wu, Chuhan; He, Yifan; Zhang, Yongjian

    2017-04-17

    Herein, we propose a new security-enhancing method that employs wavefront aberrations as optical keys to improve the resistance capabilities of conventional double-random phase encoding (DRPE) optical cryptosystems. This study has two main innovations. First, we exploit a special afocal reflecting beam expander to produce different types of aberrations, and the wavefront distortion can be altered by changing the shape of the afocal reflecting system using a deformable mirror. Then, we reconstruct the wavefront aberrations via surface fitting of Zernike polynomials and use the reconstructed aberrations as novel asymmetric vector keys. The ideal wavefront and the distorted wavefront obtained by wavefront sensing can be regarded as a pair of private and public keys. The wavelength and focal length of the Fourier lens can be used as additional keys to increase the number of degrees of freedom. This novel cryptosystem can enhance the resistance to various attacks aimed at DRPE systems. Finally, we conduct ZEMAX and MATLAB simulations to demonstrate the superiority of this method.

  1. Super-resolution pupil filtering for visual performance enhancement using adaptive optics

    NASA Astrophysics Data System (ADS)

    Zhao, Lina; Dai, Yun; Zhao, Junlei; Zhou, Xiaojun

    2018-05-01

    Ocular aberration correction can significantly improve visual function of the human eye. However, even under ideal aberration correction conditions, pupil diffraction restricts the resolution of retinal images. Pupil filtering is a simple super-resolution (SR) method that can overcome this diffraction barrier. In this study, a 145-element piezoelectric deformable mirror was used as a pupil phase filter because of its programmability and high fitting accuracy. Continuous phase-only filters were designed based on a Zernike polynomial series and fitted through closed-loop adaptive optics. SR results were validated using double-pass point spread function images. Contrast sensitivity functions (CSFs) were further assessed to verify the SR effect on visual function, and an F-test for nested models was conducted to statistically compare the different CSFs. The results indicated that CSFs for the proposed SR filter were significantly higher than for diffraction-limited correction (p < 0.05). As such, the proposed filter design could provide useful guidance for supernormal vision optical correction of the human eye.

  2. Analysis of a spaceborne mirror on a main plate with isostatic mounts

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Yen; Lien, Chun-Chieh; Huang, Po-Hsuan; Chang, Shenq-Tsong; Huang, Ting-Ming

    2014-09-01

    The paper is aimed at obtaining the deformation results and optical aberration configurations of a spaceborne mirror made of ZERODUR® glass on a main plate with three isostatic mounts for a space Cassegrain telescope. On the rear side of the main plate four screws will be locked to fix the focal plane assembly. The locking modes for the four screws will be simulated as push and pull motions in the Z axis for simplification. The finite element analysis and Zernike polynomial fitting are applied to the whole integrated optomechanical analysis process. Under the analysis, three isostatic mounts are bonded to the neutral plane of the mirror. The deformation results and optical aberration configurations under six types of push and pull motions as well as self-weight loading have been obtained. In addition, the comparison between the results under push and pull motions with 0.01 mm and 0.1 mm displacements in Z axis will be attained.

  3. Factorization of differential expansion for non-rectangular representations

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2018-04-01

    Factorization of the differential expansion (DE) coefficients for colored HOMFLY-PT polynomials of antiparallel double braids, originally discovered for rectangular representations R, is extended to the first non-rectangular representations R = [2, 1] and R = [3, 1]. This increases the chances that such factorization will take place for generic R, thus fixing the shape of the DE. We illustrate the power of the method by conjecturing the DE-induced expression for double-braid polynomials for all R = [r, 1]. At variance with the rectangular case, the knowledge for double braids is not fully sufficient to deduce the exclusive Racah matrix S¯ — the entries in the sectors with nontrivial multiplicities sum up and remain unseparated. Still, a considerable piece of the matrix is extracted directly, and its other elements can be found by solving the unitarity constraints.

  4. The accurate solution of Poisson's equation by expansion in Chebyshev polynomials

    NASA Technical Reports Server (NTRS)

    Haidvogel, D. B.; Zang, T.

    1979-01-01

    A Chebyshev expansion technique is applied to Poisson's equation on a square with homogeneous Dirichlet boundary conditions. The spectral equations are solved in two ways - by alternating direction and by matrix diagonalization methods. Solutions are sought to both oscillatory and mildly singular problems. The accuracy and efficiency of the Chebyshev approach compare favorably with those of standard second- and fourth-order finite-difference methods.
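
The favorable accuracy of the Chebyshev approach reflects spectral convergence for smooth functions: the interpolation error at Chebyshev nodes falls off roughly exponentially with the degree, in contrast to the algebraic rates of finite differences. A quick illustration of that rate (on a generic smooth function, unrelated to the Poisson solver itself):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate a smooth function at Chebyshev points of the first kind and
# measure the max error on [-1, 1] as the degree grows.
def cheb_interp_error(f, n):
    xk = np.cos(np.pi * (np.arange(n + 1) + 0.5) / (n + 1))  # Chebyshev nodes
    p = C.Chebyshev.fit(xk, f(xk), n, domain=[-1, 1])        # degree-n interpolant
    xs = np.linspace(-1.0, 1.0, 2001)
    return np.max(np.abs(p(xs) - f(xs)))

errs = [cheb_interp_error(np.exp, n) for n in (4, 8, 16)]
```

For an entire function like exp, doubling the degree roughly squares the error, which is why modest Chebyshev expansions compete with much finer second- and fourth-order finite-difference grids.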

  5. Theory of aberration fields for general optical systems with freeform surfaces.

    PubMed

    Fuerschbach, Kyle; Rolland, Jannick P; Thompson, Kevin P

    2014-11-03

    This paper utilizes the framework of nodal aberration theory to describe the aberration field behavior that emerges in optical systems with freeform optical surfaces, particularly φ-polynomial surfaces, including Zernike polynomial surfaces, that lie anywhere in the optical system. If the freeform surface is located at the stop or pupil, the net aberration contribution of the freeform surface is field constant. As the freeform optical surface is displaced longitudinally away from the stop or pupil of the optical system, the net aberration contribution becomes field dependent. It is demonstrated that there are no new aberration types when describing the aberration fields that arise with the introduction of freeform optical surfaces. Significantly it is shown that the aberration fields that emerge with the inclusion of freeform surfaces in an optical system are exactly those that have been described by nodal aberration theory for tilted and decentered optical systems. The key contribution here lies in establishing the field dependence and nodal behavior of each freeform term that is essential knowledge for effective application to optical system design. With this development, the nodes that are distributed throughout the field of view for each aberration type can be anticipated and targeted during optimization for the correction or control of the aberrations in an optical system with freeform surfaces. This work does not place any symmetry constraints on the optical system, which could be packaged in a fully three dimensional geometry, without fold mirrors.

  6. Mathematics of Computed Tomography

    NASA Astrophysics Data System (ADS)

    Hawkins, William Grant

    A review of the applications of the Radon transform is presented, with emphasis on emission computed tomography and transmission computed tomography. The theory of the 2D and 3D Radon transforms, and the effects of attenuation for emission computed tomography are presented. The algebraic iterative methods, their importance and limitations are reviewed. Analytic solutions of the 2D problem the convolution and frequency filtering methods based on linear shift invariant theory, and the solution of the circular harmonic decomposition by integral transform theory--are reviewed. The relation between the invisible kernels, the inverse circular harmonic transform, and the consistency conditions are demonstrated. The discussion and review are extended to the 3D problem-convolution, frequency filtering, spherical harmonic transform solutions, and consistency conditions. The Cormack algorithm based on reconstruction with Zernike polynomials is reviewed. An analogous algorithm and set of reconstruction polynomials is developed for the spherical harmonic transform. The relations between the consistency conditions, boundary conditions and orthogonal basis functions for the 2D projection harmonics are delineated and extended to the 3D case. The equivalence of the inverse circular harmonic transform, the inverse Radon transform, and the inverse Cormack transform is presented. The use of the number of nodes of a projection harmonic as a filter is discussed. Numerical methods for the efficient implementation of angular harmonic algorithms based on orthogonal functions and stable recursion are presented. The derivation of a lower bound for the signal-to-noise ratio of the Cormack algorithm is derived.

  7. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification of systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of under-predicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
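
The enclosure property underlying the framework is easy to state: on [0, 1], a polynomial's Bernstein coefficients bound its range. A minimal sketch of the power-to-Bernstein conversion and the resulting bounds (the example polynomial is arbitrary):

```python
import numpy as np
from math import comb

# Power-basis to Bernstein-basis conversion on [0, 1]:
#   b_i = sum_{j<=i} [C(i,j)/C(n,j)] a_j,  where p(x) = sum_j a_j x^j.
# The Bernstein coefficients enclose the range: min(b) <= p(x) <= max(b).
def bernstein_coeffs(a):
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

a = [1.0, -3.0, 2.0]                 # p(x) = 2x^2 - 3x + 1
b = bernstein_coeffs(a)              # [1.0, -0.5, 0.0]
xs = np.linspace(0.0, 1.0, 1001)
p = np.polyval(a[::-1], xs)          # sampled values of p on [0, 1]
```

Subdividing the interval and recomputing the coefficients tightens the enclosure, which is how the bounds are made arbitrarily sharp with additional computation.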

  8. Study on the mapping of dark matter clustering from real space to redshift space

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Song, Yong-Seon

    2016-08-01

    The mapping of dark matter clustering from real space to redshift space introduces the anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and the Finger-of-God effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the "one-point" FoG term, which is independent of the separation vector between two different points, and 2) the "correlated" FoG term, which appears as an indefinite polynomial and is expanded in the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that the Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space down to the smallest scales yet, up to k ~ 0.2 Mpc^-1, considering the resolution of future experiments.

  9. From Jack to Double Jack Polynomials via the Supersymmetric Bridge

    NASA Astrophysics Data System (ADS)

    Lapointe, Luc; Mathieu, Pierre

    2015-07-01

    The Calogero-Sutherland model occurs in a large number of physical contexts, either directly or via its eigenfunctions, the Jack polynomials. The supersymmetric counterpart of this model, although much less ubiquitous, has an equally rich structure. In particular, its eigenfunctions, the Jack superpolynomials, appear to share the very same remarkable combinatorial and structural properties as their non-supersymmetric version. These super-functions are parametrized by superpartitions with fixed bosonic and fermionic degrees. Now, a truly amazing feature pops out when the fermionic degree is sufficiently large: the Jack superpolynomials stabilize and factorize. Their stability is with respect to their expansion in terms of an elementary basis where, in the stable sector, the expansion coefficients become independent of the fermionic degree. Their factorization is seen when the fermionic variables are stripped off in a suitable way which results in a product of two ordinary Jack polynomials (somewhat modified by plethystic transformations), dubbed the double Jack polynomials. Here, in addition to spelling out these results, which were first obtained in the context of Macdonald superpolynomials, we provide a heuristic derivation of the Jack superpolynomial case by performing simple manipulations on the supersymmetric eigen-operators, rendering them independent of the number of particles and of the fermionic degree. In addition, we work out the expression of the Hamiltonian which characterizes the double Jacks. This Hamiltonian, which defines a new integrable system, involves not only the expected Calogero-Sutherland pieces but also combinations of the generators of an underlying affine \widehat{sl}_2 algebra.

  10. MULTIMODAL CLASSIFICATION OF DEMENTIA USING FUNCTIONAL DATA, ANATOMICAL FEATURES AND 3D INVARIANT SHAPE DESCRIPTORS

    PubMed Central

    Mikhno, Arthur; Nuevo, Pablo Martinez; Devanand, Davangere P.; Parsey, Ramin V.; Laine, Andrew F.

    2013-01-01

    Multimodality classification of Alzheimer’s disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), is of interest to the medical community. We improve on prior classification frameworks by incorporating multiple features from MRI and PET data obtained with multiple radioligands, fluorodeoxyglucose (FDG) and Pittsburgh compound B (PIB). We also introduce a new MRI feature, invariant shape descriptors based on 3D Zernike moments applied to the hippocampus region. Classification performance is evaluated on data from 17 healthy controls (CTR), 22 MCI, and 17 AD subjects. Zernike significantly outperforms volume, accuracy (Zernike to volume): CTR/AD (90.7% to 71.6%), CTR/MCI (76.2% to 60.0%), MCI/AD (84.3% to 65.5%). Zernike also provides comparable and complementary performance to PET. Optimal accuracy is achieved when Zernike and PET features are combined (accuracy, specificity, sensitivity), CTR/AD (98.8%, 99.5%, 98.1%), CTR/MCI (84.3%, 82.9%, 85.9%) and MCI/AD (93.3%, 93.6%, 93.3%). PMID:24576927

  11. MULTIMODAL CLASSIFICATION OF DEMENTIA USING FUNCTIONAL DATA, ANATOMICAL FEATURES AND 3D INVARIANT SHAPE DESCRIPTORS.

    PubMed

    Mikhno, Arthur; Nuevo, Pablo Martinez; Devanand, Davangere P; Parsey, Ramin V; Laine, Andrew F

    2012-01-01

    Multimodality classification of Alzheimer's disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), is of interest to the medical community. We improve on prior classification frameworks by incorporating multiple features from MRI and PET data obtained with multiple radioligands, fluorodeoxyglucose (FDG) and Pittsburgh compound B (PIB). We also introduce a new MRI feature, invariant shape descriptors based on 3D Zernike moments applied to the hippocampus region. Classification performance is evaluated on data from 17 healthy controls (CTR), 22 MCI, and 17 AD subjects. Zernike significantly outperforms volume, accuracy (Zernike to volume): CTR/AD (90.7% to 71.6%), CTR/MCI (76.2% to 60.0%), MCI/AD (84.3% to 65.5%). Zernike also provides comparable and complementary performance to PET. Optimal accuracy is achieved when Zernike and PET features are combined (accuracy, specificity, sensitivity), CTR/AD (98.8%, 99.5%, 98.1%), CTR/MCI (84.3%, 82.9%, 85.9%) and MCI/AD (93.3%, 93.6%, 93.3%).

  12. Structural deformation measurement via efficient tensor polynomial calibrated electro-active glass targets

    NASA Astrophysics Data System (ADS)

    Gugg, Christoph; Harker, Matthew; O'Leary, Paul

    2013-03-01

    This paper describes the physical setup and mathematical modelling of a device for the measurement of structural deformations over large scales, e.g., a mining shaft. Image processing techniques are used to determine the deformation by measuring the position of a target relative to a reference laser beam. A particular novelty is the incorporation of electro-active glass; the polymer-dispersed liquid crystal shutters enable the simultaneous calibration of any number of consecutive measurement units without manual intervention, i.e., the process is fully automatic. It is necessary to compensate for optical distortion if high accuracy is to be achieved in a compact hardware design where lenses with short focal lengths are used. Wide-angle lenses exhibit significant distortion, which is typically characterized using Zernike polynomials. Radial distortion models assume that the lens is rotationally symmetric; such models are insufficient in the application at hand. This paper presents a new coordinate mapping procedure based on a tensor product of discrete orthogonal polynomials. Both lens distortion and the projection are compensated by a single linear transformation. Once calibrated, to acquire the measurement data, it is necessary to localize a single laser spot in the image. For this purpose, complete interpolation and rectification of the image are not required; hence, we have developed a new hierarchical approach based on a quad-tree subdivision. Cross-validation tests demonstrate that the proposed method accurately models both the optical distortion and the projection. The achievable accuracy is e <= +/-0.01 [mm] in a field of view of 150 [mm] x 150 [mm] at a distance of 120 [m] from the laser source. Finally, a Kolmogorov-Smirnov test shows that the error distribution in localizing a laser spot is Gaussian. Consequently, due to the linearity of the proposed method, this also applies to the algorithm's output. Therefore, first-order covariance propagation provides an accurate estimate of the measurement uncertainty, which is essential for any measurement device.
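The calibration idea above, a single linear least-squares map built on a tensor product of orthogonal polynomials, can be sketched in a few lines. This is our own simplified illustration, not the authors' implementation; Chebyshev polynomials stand in for whatever discrete orthogonal family the device actually uses.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def tensor_basis(u, v, deg):
    """Tensor-product Chebyshev basis evaluated at image coordinates (u, v)."""
    Tu = C.chebvander(u, deg)                       # shape (npts, deg + 1)
    Tv = C.chebvander(v, deg)
    return np.einsum('ni,nj->nij', Tu, Tv).reshape(len(u), -1)

def fit_mapping(u, v, x, y, deg=2):
    """Least-squares map from distorted image coordinates (u, v) to calibrated
    target coordinates (x, y): distortion and projection are absorbed into one
    linear transformation, as in the paper's approach."""
    B = tensor_basis(u, v, deg)
    cx, *_ = np.linalg.lstsq(B, x, rcond=None)
    cy, *_ = np.linalg.lstsq(B, y, rcond=None)
    return cx, cy
```

On synthetic data whose distortion is polynomial of degree at most `deg` in each coordinate, this fit is exact up to rounding; real calibrations choose `deg` by cross-validation, as the paper does.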

  13. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  16. Are Khovanov-Rozansky polynomials consistent with evolution in the space of knots?

    NASA Astrophysics Data System (ADS)

    Anokhina, A.; Morozov, A.

    2018-04-01

    R-coloured knot polynomials for m-strand torus knots Torus[m, n] are described by the Rosso-Jones formula, which is an example of evolution in n with Lyapunov exponents, labelled by Young diagrams from R^{⊗m}. This means that they satisfy a finite-difference equation (recursion) of finite degree. For the gauge group SL(N) only diagrams with no more than N lines can contribute and the recursion degree is reduced. We claim that these properties (evolution/recursion and reduction) persist for Khovanov-Rozansky (KR) polynomials, obtained by additional factorization modulo 1 + t, which is not yet adequately described in quantum field theory. Also preserved is some weakened version of differential expansion, which is responsible at least for a simple relation between reduced and unreduced Khovanov polynomials. However, in the KR case evolution is incompatible with the mirror symmetry under the change n → -n, which may signal an ambiguity in the KR factorization even for torus knots.

  17. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansion. A Monte Carlo simulation was also performed to benchmark the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansion, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). The simulation with quadrature set 8 and the first order of the Legendre polynomial expansion proved to be the most efficient computation method in the authors' study. The single-thread computation time of this deterministic simulation was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.

  18. Biological applications of phase-contrast electron microscopy.

    PubMed

    Nagayama, Kuniaki

    2014-01-01

    Here, I review the principles and applications of phase-contrast electron microscopy using phase plates. First, I develop the principle of phase contrast based on a minimal model of microscopy, introducing a double Fourier-transform process to mathematically formulate the image formation. Next, I explain four phase-contrast (PC) schemes, defocus PC, Zernike PC, Hilbert differential contrast, and schlieren optics, as image-filtering processes in the context of the minimal model, with particular emphases on the Zernike PC and corresponding Zernike phase plates. Finally, I review applications of Zernike PC cryo-electron microscopy to biological systems such as protein molecules, virus particles, and cells, including single-particle analysis to delineate three-dimensional (3D) structures of protein and virus particles and cryo-electron tomography to reconstruct 3D images of complex protein systems and cells.

  19. Topology of Large-Scale Structures of Galaxies in two Dimensions—Systematic Effects

    NASA Astrophysics Data System (ADS)

    Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan

    2017-02-01

    We study the two-dimensional topology of galactic distribution when projected onto two-dimensional spherical shells. Using the latest Horizon Run 4 simulation data, we construct the genus of the two-dimensional field and consider how this statistic is affected by late-time nonlinear effects—principally gravitational collapse and redshift space distortion (RSD). We also consider systematic and numerical artifacts, such as shot noise, galaxy bias, and finite pixel effects. We model the systematics using a Hermite polynomial expansion and perform a comprehensive analysis of known effects on the two-dimensional genus, with a view toward using the statistic for cosmological parameter estimation. We find that the finite pixel effect is dominated by an amplitude drop and can be made less than 1% by adopting pixels smaller than 1/3 of the angular smoothing length. Nonlinear gravitational evolution introduces time-dependent coefficients of the zeroth, first, and second Hermite polynomials, but the genus amplitude changes by less than 1% between z = 1 and z = 0 for smoothing scales R_G > 9 Mpc/h. Non-zero terms are measured up to third order in the Hermite polynomial expansion when studying RSD. Differences in the shapes of the genus curves in real and redshift space are small when we adopt thick redshift shells, but the amplitude change remains a significant ~O(10%) effect. The combined effects of galaxy biasing and shot noise produce systematic effects up to the second Hermite polynomial. It is shown that, when sampling, the use of galaxy mass cuts significantly reduces the effect of shot noise relative to random sampling.
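The Hermite modeling of systematics can be illustrated with a small least-squares fit: expand a genus curve g(nu) in probabilists' Hermite polynomials weighted by the Gaussian factor, so that each systematic effect appears as a low-order coefficient. The following is a sketch under our own simplifying assumptions, not the authors' pipeline; for a Gaussian random field the genus curve is proportional to nu*exp(-nu^2/2), i.e., pure He_1.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def fit_genus_systematics(nu, genus, order=3):
    """Least-squares coefficients a_k of g(nu) ~ sum_k a_k He_k(nu) exp(-nu^2/2),
    using probabilists' Hermite polynomials He_k."""
    basis = np.column_stack(
        [He.hermeval(nu, np.eye(order + 1)[k]) * np.exp(-nu**2 / 2)
         for k in range(order + 1)])
    coef, *_ = np.linalg.lstsq(basis, genus, rcond=None)
    return coef

# Synthetic check: the Gaussian-field genus curve g(nu) = nu*exp(-nu^2/2)
# is He_1 alone, so only the k = 1 coefficient should be non-zero.
nu = np.linspace(-3, 3, 101)
coef = fit_genus_systematics(nu, nu * np.exp(-nu**2 / 2))
```

Deviations of the fitted coefficients from [0, 1, 0, 0] on measured genus curves would then quantify the nonlinear and observational systematics discussed in the abstract.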

  20. Density, Viscosity and Surface Tension of Binary Mixtures of 1-Butyl-1-Methylpyrrolidinium Tricyanomethanide with Benzothiophene.

    PubMed

    Domańska, Urszula; Królikowska, Marta; Walczak, Klaudia

    2014-01-01

    The effects of temperature and composition on the density and viscosity of pure benzothiophene and ionic liquid (IL), and those of the binary mixtures containing the IL 1-butyl-1-methylpyrrolidinium tricyanomethanide ([BMPYR][TCM] + benzothiophene), are reported at six temperatures (308.15, 318.15, 328.15, 338.15, 348.15 and 358.15) K and ambient pressure. The temperature dependences of the density and viscosity were represented by an empirical second-order polynomial and by the Vogel-Fulcher-Tammann equation, respectively. The density and viscosity variations with composition were described by polynomials. Excess molar volumes and viscosity deviations were calculated and correlated by Redlich-Kister polynomial expansions. The surface tensions of benzothiophene, pure IL and binary mixtures of ([BMPYR][TCM] + benzothiophene) were measured at atmospheric pressure at four temperatures (308.15, 318.15, 328.15 and 338.15) K. The surface tension deviations were calculated and correlated by a Redlich-Kister polynomial expansion. The temperature dependence of the interfacial tension was used to evaluate the surface entropy, the surface enthalpy, the critical temperature, the surface energy and the parachor for the pure IL. These measurements complete the information on the influence of temperature and composition on physicochemical properties for the selected IL, which was chosen as a possible new entrainer in the separation of sulfur compounds from fuels. A qualitative analysis of these quantities in terms of molecular interactions is reported. The obtained results indicate that IL interactions with benzothiophene are strongly dependent on packing effects and hydrogen bonding of this IL with the polar solvent.
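Correlating an excess property with a Redlich-Kister expansion is a small weighted-polynomial least-squares problem. The sketch below (our own illustration with synthetic coefficients, not the paper's data) fits and evaluates the standard form Y_E(x1) = x1(1 - x1) * sum_k A_k (2*x1 - 1)^k.

```python
import numpy as np

def redlich_kister_fit(x1, excess, order=2):
    """Least-squares Redlich-Kister coefficients A_k in
    Y_E(x1) = x1*(1 - x1) * sum_k A_k * (2*x1 - 1)**k."""
    x1 = np.asarray(x1, float)
    basis = np.column_stack([x1 * (1 - x1) * (2 * x1 - 1) ** k
                             for k in range(order + 1)])
    A, *_ = np.linalg.lstsq(basis, np.asarray(excess, float), rcond=None)
    return A

def redlich_kister_eval(A, x1):
    """Evaluate the Redlich-Kister expansion with coefficients A at mole fraction x1."""
    x1 = np.asarray(x1, float)
    return x1 * (1 - x1) * sum(a * (2 * x1 - 1) ** k for k, a in enumerate(A))
```

By construction Y_E vanishes at x1 = 0 and x1 = 1, as an excess property of a binary mixture must; the expansion order is usually chosen by inspecting the residual standard deviation.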

  1. Study on the mapping of dark matter clustering from real space to redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yi; Song, Yong-Seon, E-mail: yizheng@kasi.re.kr, E-mail: ysong@kasi.re.kr

    The mapping of dark matter clustering from real space to redshift space introduces the anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and the Finger-of-God effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the 'one-point' FoG term, which is independent of the separation vector between two different points, and 2) the 'correlated' FoG term, which appears as an indefinite polynomial and is expanded in the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that the Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space down to the smallest scales yet, up to k ∼ 0.2 Mpc^-1, considering the resolution of future experiments.

  2. An efficient algorithm for building locally refined hp - adaptive H-PCFE: Application to uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-12-01

    Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date it is only applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. The applicability of the proposed approach is illustrated with two academic and two industrial problems. To illustrate its superior performance, the results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, as compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required for obtaining results of similar accuracy.

  3. On Complicated Expansions of Solutions to ODES

    NASA Astrophysics Data System (ADS)

    Bruno, A. D.

    2018-03-01

    Polynomial ordinary differential equations are studied by asymptotic methods. The truncated equation associated with a vertex or a nonhorizontal edge of the polygon of the initial equation is assumed to have a solution containing the logarithm of the independent variable. It is shown that, under very weak constraints, this nonpower asymptotic form of solutions to the original equation can be extended to an asymptotic expansion of these solutions. This is an expansion in powers of the independent variable with coefficients being Laurent series in decreasing powers of the logarithm. Such expansions are sometimes called psi-series. Algorithms for such computations are described. Six examples are given, four of which concern Painlevé equations. An unexpected property of these expansions is revealed.

  4. A Polarimetric Extension of the van Cittert-Zernike Theorem for Use with Microwave Interferometers

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.; Simon, N. K.

    2004-01-01

    The van Cittert-Zernike theorem describes the Fourier-transform relationship between an extended source and its visibility function. Developments in classical optics texts use scalar field formulations for the theorem. Here, we develop a polarimetric extension to the van Cittert-Zernike theorem with applications to passive microwave Earth remote sensing. The development provides insight into the mechanics of two-dimensional interferometric imaging, particularly the effects of polarization basis differences between the scene and the observer.

  5. Frits Zernike--life and achievements

    NASA Astrophysics Data System (ADS)

    Ferwerda, Hedzer A.

    1993-12-01

    We present a review of the life and work of Frits Zernike (1888-1966), professor of mathematical and technical physics and theoretical mechanics at Groningen University, The Netherlands, inventor of phase contrast microscopy.

  6. Thermodynamically self-consistent theory for the Blume-Capel model.

    PubMed

    Grollau, S; Kierlik, E; Rosinberg, M L; Tarjus, G

    2001-04-01

    We use a self-consistent Ornstein-Zernike approximation to study the Blume-Capel ferromagnet on three-dimensional lattices. The correlation functions and the thermodynamics are obtained from the solution of two coupled partial differential equations. The theory provides a comprehensive and accurate description of the phase diagram in all regions, including the wing boundaries in a nonzero magnetic field. In particular, the coordinates of the tricritical point are in very good agreement with the best estimates from simulation or series expansion. Numerical and analytical analysis strongly suggest that the theory predicts a universal Ising-like critical behavior along the lambda line and the wing critical lines, and a tricritical behavior governed by mean-field exponents.

  7. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.

  8. Design of a Test Bench for Intraocular Lens Optical Characterization

    NASA Astrophysics Data System (ADS)

    Alba-Bueno, Francisco; Vega, Fidel; Millán, María S.

    2011-01-01

    The crystalline lens is responsible for focusing at different distances (accommodation) in the human eye. This organ grows throughout life, increasing in size and rigidity. Moreover, due to this growth it loses transparency and becomes gradually opacified, causing what is known as cataracts. Cataract is the most common cause of visual loss in the world. At present, this visual loss is recoverable by surgery in which the opacified lens is destroyed (phacoemulsification) and replaced by the implantation of an intraocular lens (IOL). If the implanted IOL is mono-focal, the patient loses the natural capacity of accommodation and, as a consequence, depends on an external optic correction to focus at different distances. In order to avoid this dependency, multifocal IOL designs have been developed. The multi-focality can be achieved either by using a refractive surface with different radii of curvature (refractive IOLs) or by incorporating a diffractive surface (diffractive IOLs). To analyze the optical quality of IOLs it is necessary to test them in an optical bench that complies with the ISO 11979-2:1999 standard (Ophthalmic implants. Intraocular lenses. Part 2: Optical properties and test methods). In addition to analyzing the IOLs according to the ISO standard, we have designed an optical bench that allows us to simulate the conditions of a real human eye. To do so, we will use artificial corneas with different amounts of optical aberrations and several illumination sources with different spectral distributions. Moreover, the design of the test bench includes the possibility of testing the IOLs under off-axis conditions as well as in the presence of decentration and/or tilt. Finally, the optical imaging quality of the IOLs is assessed by using common metrics like the Modulation Transfer Function (MTF), the Point Spread Function (PSF) and/or the Strehl ratio (SR), or via registration of the IOL's wavefront with a Hartmann-Shack sensor and its analysis through expansion in Zernike polynomials.
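Since several records in this collection expand measured wavefronts in Zernike circle polynomials, a minimal evaluation routine may help fix notation. This is a textbook-style sketch of the unnormalized real Zernike polynomials on the unit disk, written by us rather than taken from any of the cited papers.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^|m|(rho) of the Zernike circle polynomial."""
    m = abs(m)
    if (n - m) % 2:
        return np.zeros_like(rho)   # R vanishes when n - m is odd
    return sum((-1) ** s * factorial(n - s)
               / (factorial(s)
                  * factorial((n + m) // 2 - s)
                  * factorial((n - m) // 2 - s))
               * rho ** (n - 2 * s)
               for s in range((n - m) // 2 + 1))

def zernike(n, m, rho, theta):
    """Real Zernike polynomial Z_n^m on the unit disk (unnormalized):
    cosine azimuthal term for m >= 0, sine term for m < 0."""
    ang = np.cos(m * theta) if m >= 0 else np.sin(-m * theta)
    return zernike_radial(n, m, rho) * ang
```

For instance, defocus is Z_2^0 = 2*rho**2 - 1 and primary spherical aberration is Z_4^0 = 6*rho**4 - 6*rho**2 + 1; a wavefront fit such as the fifth-order expansion in the next record is a linear least-squares problem in these basis functions evaluated at the sensor's sample points.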

  9. In vivo chromatic aberration in eyes implanted with intraocular lenses.

    PubMed

    Pérez-Merino, Pablo; Dorronsoro, Carlos; Llorente, Lourdes; Durán, Sonia; Jiménez-Alfaro, Ignacio; Marcos, Susana

    2013-04-12

    To measure in vivo and objectively the monochromatic aberrations at different wavelengths, and the chromatic difference of focus between green and infrared wavelengths, in eyes implanted with two models of intraocular lenses (IOL). Eighteen eyes participated in this study: nine implanted with the Tecnis ZB99 1-Piece acrylic IOL and nine implanted with the AcrySof SN60WF IOL. A custom-developed laser ray tracing (LRT) aberrometer was used to measure the optical aberrations at 532 nm and 785 nm wavelengths. The monochromatic wave aberrations were described using a fifth-order Zernike polynomial expansion. The chromatic difference of focus was estimated as the difference between the equivalent spherical errors corresponding to each wavelength. Wave aberration measurements were highly reproducible. Except for the defocus term, no significant differences in high-order aberrations (HOA) were found between wavelengths. The average chromatic difference of focus was 0.46 ± 0.15 diopters (D) in the Tecnis group and 0.75 ± 0.12 D in the AcrySof group, and the difference was statistically significant (P < 0.05). The chromatic difference of focus in the AcrySof group was not statistically significantly different from the longitudinal chromatic aberration (LCA) previously reported in a phakic population (0.78 ± 0.16 D). The impact of LCA on retinal image quality (measured in terms of Strehl ratio) was drastically reduced when considering HOA and astigmatism in comparison with a diffraction-limited eye, rendering the differences in retinal image quality between the Tecnis and AcrySof IOLs insignificant. LRT aberrometry at different wavelengths is a reproducible technique to evaluate the chromatic difference of focus objectively in eyes implanted with IOLs. Replacement of the crystalline lens by the IOL did not increase the chromatic difference of focus above that of phakic eyes in either group. 
The AcrySof group showed chromatic difference of focus values very similar to physiological values in young eyes.

  10. PV O&M Cost Model and Cost Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andy

    This is a presentation on the PV O&M cost model and cost reduction for the annual Photovoltaic Reliability Workshop (2017), covering the estimation of PV O&M costs, polynomial expansion, and the implementation of Net Present Value (NPV) and reserve accounts in cost models.
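
    As a toy illustration of the NPV element mentioned above, the net present value of a stream of annual O&M costs can be computed as follows; the discount rate, annual cost, and project life are illustrative assumptions, not figures from the presentation:

```python
# Net present value of an annual O&M cost stream (illustrative numbers).
def npv(rate, cashflows):
    # cashflows[t] is assumed to occur at the end of year t+1
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

annual_om = [20_000.0] * 25      # $/year over a hypothetical 25-year project
value = npv(0.06, annual_om)     # discounted at an assumed 6% rate
```

    Because of discounting, `value` comes out well below the undiscounted total of the cost stream.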

  11. Rational approximation to e to the -x power with negative poles

    NASA Technical Reports Server (NTRS)

    Cuthill, E.

    1977-01-01

    MACSYMA was applied to the generation of an expansion in terms of Laguerre polynomials to obtain approximations to e^(-x) on [0, ∞). These approximations are compared with those developed by Saff, Schonhage, and Varga.
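
    The Laguerre expansion of e^(-x) referred to above has a simple closed form for its coefficients: from the Laplace transform ∫₀^∞ e^(-sx) L_n(x) dx = (s-1)^n / s^(n+1) with s = 2, one gets c_n = 2^(-(n+1)). A short numerical check of this expansion (a sketch, independent of the MACSYMA derivation):

```python
import numpy as np
from numpy.polynomial import laguerre

# Expand e^{-x} in Laguerre polynomials on [0, inf); the coefficients
# c_n = 2^{-(n+1)} follow from the Laplace transform of L_n at s = 2.
N = 20
coeffs = 0.5 ** np.arange(1, N + 2)          # c_0 .. c_N
x = np.linspace(0.0, 5.0, 50)
approx = laguerre.lagval(x, coeffs)
err = np.max(np.abs(approx - np.exp(-x)))    # truncation error on [0, 5]
```

    Since |L_n(x)| ≤ e^(x/2) on [0, ∞), the truncation error of the partial sum is bounded by 2^(-(N+1)) e^(x/2), so twenty terms already give several correct digits on this interval.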

  12. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)

    2015-01-01

    Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
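
    The rotation invariance the record relies on can be checked numerically: the magnitude of a Zernike moment is unchanged when the image is rotated. A minimal sketch with the standard radial-polynomial formula and a synthetic image (the pixel mapping and ordering are assumptions for illustration, not the patented method):

```python
import numpy as np
from math import factorial

# Zernike moment A_{nm} of an image on the unit disk, and a check that
# its magnitude is invariant under a 90-degree rotation of the image.
def zernike_moment(img, n, m):
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    x = (2 * x - w + 1) / (w - 1)        # map pixel centers to [-1, 1]
    y = (2 * y - h + 1) / (h - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    # Radial polynomial R_{n|m|}(rho)
    R = sum((-1) ** k * factorial(n - k)
            / (factorial(k) * factorial((n + abs(m)) // 2 - k)
               * factorial((n - abs(m)) // 2 - k))
            * rho ** (n - 2 * k)
            for k in range((n - abs(m)) // 2 + 1))
    V = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask])

rng = np.random.default_rng(1)
img = rng.random((65, 65))
a = abs(zernike_moment(img, 4, 2))
b = abs(zernike_moment(np.rot90(img), 4, 2))   # rotated image, same |A|
```

    Under rotation only the phase of the complex moment changes, which is why magnitude-based descriptors are rotation invariant.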

  13. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George

    2013-01-01

    Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  14. Polynomial chaos representation of databases on manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2017-04-15

    Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution, which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.

  15. Quantum Hurwitz numbers and Macdonald polynomials

    NASA Astrophysics Data System (ADS)

    Harnad, J.

    2016-11-01

    Parametric families in the center Z(C[Sn]) of the group algebra of the symmetric group are obtained by identifying the indeterminates in the generating function for Macdonald polynomials as commuting Jucys-Murphy elements. Their eigenvalues provide coefficients in the double Schur function expansion of 2D Toda τ-functions of hypergeometric type. Expressing these in the basis of products of power sum symmetric functions, the coefficients may be interpreted geometrically as parametric families of quantum Hurwitz numbers, enumerating weighted branched coverings of the Riemann sphere. Combinatorially, they give quantum weighted sums over paths in the Cayley graph of Sn generated by transpositions. Dual pairs of bases for the algebra of symmetric functions with respect to the scalar product in which the Macdonald polynomials are orthogonal provide both the geometrical and combinatorial significance of these quantum weighted enumerative invariants.

  16. Differential modal Zernike wavefront sensor employing a computer-generated hologram: a proposal.

    PubMed

    Mishra, Sanjay K; Bhatt, Rahul; Mohan, Devendra; Gupta, Arun Kumar; Sharma, Anurag

    2009-11-20

    The process of Zernike mode detection with a Shack-Hartmann wavefront sensor is computationally extensive. A holographic modal wavefront sensor has therefore evolved to process the data optically by use of the concept of equal and opposite phase bias. Recently, a multiplexed computer-generated hologram (CGH) technique was developed in which the output is in the form of bright dots that specify the presence and strength of a specific Zernike mode. We propose a wavefront sensor using the concept of phase biasing in the latter technique such that the output is a pair of bright dots for each mode to be sensed. A normalized difference signal between the intensities of the two dots is proportional to the amplitude of the sensed Zernike mode. In our method the number of holograms to be multiplexed is decreased, thereby reducing the modal cross talk significantly. We validated the proposed method through simulation studies for several cases. The simulation results demonstrate simultaneous wavefront detection of lower-order Zernike modes with a resolution better than λ/50 over the wide measurement range of ±3.5λ, with much reduced cross talk, at high speed.
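
    The proportionality claimed above can be reproduced in a toy simulation: bias one Zernike mode by equal and opposite amounts, compute the on-axis intensity for each, and form the normalized difference. Defocus as the sensed mode, the bias amplitude, and the grid are illustrative assumptions:

```python
import numpy as np

# Differential modal sensing sketch: on-axis intensity with bias +b and
# -b of one mode; the normalized difference encodes the mode amplitude.
n = 256
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
rho2 = X**2 + Y**2
pupil = rho2 <= 1.0
Z_defocus = 2 * rho2 - 1            # Zernike defocus over the unit pupil
bias = 0.8                          # bias amplitude in radians (assumed)

def on_axis_intensity(phase):
    field = np.exp(1j * phase)[pupil]
    return abs(field.mean()) ** 2   # far-field intensity on axis

def difference_signal(a):
    ip = on_axis_intensity((a + bias) * Z_defocus)
    im = on_axis_intensity((a - bias) * Z_defocus)
    return (ip - im) / (ip + im)

amps = np.linspace(-0.3, 0.3, 7)
signals = np.array([difference_signal(a) for a in amps])
```

    The signal is antisymmetric in the mode amplitude and approximately linear over a small range; the sign and slope of the proportionality depend on the chosen bias.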

  17. Influence of each Zernike aberration on the propagation of laser beams through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Azarian, Adrian; Gladysz, Szymon

    2014-10-01

    We study the influence of each Zernike mode on the propagation of a laser beam through the atmosphere by two different numerical methods. In the first method, an idealized adaptive optics system is modeled to subtract a certain number of Zernike modes from the beam. The effect of each aberration is quantified using the Strehl ratio of the long-term exposure in the target/receiver plane. In the second method, the strength of each Zernike mode is varied using a numerical space-filling design during the generation of the phase screens. The resulting central intensity for each point of the design is then studied by a linear discriminant analysis, which yields the importance of each Zernike mode. The results of the two methods are consistent. They indicate that, for a focused Gaussian beam and for certain geometries and turbulence strengths, the hypothesis of diminishing gains with correction of each new mode is not true. For such cases, we observe jumps in the calculated criteria, which indicate an increased importance of some particular modes, especially coma. The implications of these results for the design of adaptive optics systems are discussed.

  18. Large-field high-contrast hard x-ray Zernike phase-contrast nano-imaging beamline at Pohang Light Source.

    PubMed

    Lim, Jun; Park, So Yeong; Huang, Jung Yun; Han, Sung Mi; Kim, Hong-Tae

    2013-01-01

    We developed an off-axis-illuminated, zone-plate-based hard x-ray Zernike phase-contrast microscope beamline at Pohang Light Source. Owing to the condenser-optics-free, off-axis illumination, a large field of view was achieved. The pinhole-type Zernike phase plate affords high-contrast images of a cell with minimal artifacts such as the shade-off and halo effects. The setup, including the optics and the alignment, is simple and easy, and allows faster and easier imaging of large bio-samples.

  19. Improving Zernike moments comparison for optimal similarity and rotation angle retrieval.

    PubMed

    Revaud, Jérôme; Lavoué, Guillaume; Baskurt, Atilla

    2009-04-01

    Zernike moments constitute a powerful shape descriptor in terms of robustness and description capability. However, the classical way of comparing two Zernike descriptors takes into account only the magnitude of the moments and loses the phase information. The novelty of our approach is to take advantage of the phase information in the comparison process while still preserving the invariance to rotation. This new Zernike comparator provides a more accurate similarity measure together with the optimal rotation angle between the patterns, while keeping the same complexity as the classical approach. This angle information is of particular interest for many applications, including 3D scene understanding through images. Experiments demonstrate that our comparator outperforms the classical one in terms of similarity measure. In particular, the robustness of the retrieval against noise and geometric deformation is greatly improved. Moreover, the rotation angle estimation is also more accurate than state-of-the-art algorithms.
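
    The key transformation behind such a comparator is that a Zernike moment of azimuthal order m picks up a phase e^(-imθ) under rotation by θ, so magnitudes are invariant while phase differences encode the rotation angle. A sketch of a phase-based angle estimate on synthetic moments (the magnitude weighting is an assumption, not necessarily the authors' estimator):

```python
import numpy as np

# Under rotation by t, a Zernike moment of azimuthal order m transforms
# as A'_{nm} = A_{nm} exp(-i m t): magnitudes are invariant, and the
# phase differences give a least-squares estimate of t. The moments
# here are synthetic stand-ins, not computed from an image.
rng = np.random.default_rng(3)
m_orders = np.array([1, 2, 3, 1, 2])                  # azimuthal orders
A = rng.normal(size=5) + 1j * rng.normal(size=5)      # reference moments
t_true = 0.4                                          # rotation (radians)
B = A * np.exp(-1j * m_orders * t_true)               # rotated moments

dphi = np.angle(A * np.conj(B))                       # = m * t (mod 2*pi)
w = np.abs(A) ** 2                                    # magnitude weighting
t_hat = np.sum(w * m_orders * dphi) / np.sum(w * m_orders**2)
```

    The weighted least-squares combination keeps the estimate stable when some moments are small; phase wrapping limits the unambiguous range to |m·t| < π for the highest order used.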

  20. Wavefront measurement of plastic lenses for mobile-phone applications

    NASA Astrophysics Data System (ADS)

    Huang, Li-Ting; Cheng, Yuan-Chieh; Wang, Chung-Yen; Wang, Pei-Jen

    2016-08-01

    In camera lenses for mobile-phone applications, all lens elements are designed with aspheric surfaces because of the requirement of minimal total track length of the lens. Due to the diffraction-limited optics design and precision assembly procedures, element inspection and lens performance measurement have become cumbersome in the production of mobile-phone cameras. Recently, wavefront measurements based on Shack-Hartmann sensors have been successfully implemented on injection-molded plastic lenses with aspheric surfaces. However, the application of wavefront measurement to small-sized plastic lenses has yet to be studied both theoretically and experimentally. In this paper, both an in-house-built and a commercial wavefront measurement system, configured on two optics structures, have been investigated through measurement of the wavefront aberrations of two lens elements from a mobile-phone camera. First, the wet-cell method was employed to verify the aberrations due to residual birefringence in an injection-molded lens. Then, two lens elements of a mobile-phone camera, one with large positive and one with large negative power, were measured, with the aberrations expressed in Zernike polynomials, to illustrate the effectiveness of wavefront measurement for troubleshooting defects in optical performance.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Ke; Li Yanqiu; Wang Hai

    Characterization of the measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by a two-pinhole null test. In this procedure, the geometrical coma and detector tilt astigmatism systematic errors are one to two orders of magnitude higher than the desired accuracy of the PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector tilt astigmatism errors in the PS/PDI null test. Based on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonality and rotational symmetry properties of Zernike polynomials over the unit circle to calculate the systematic errors introduced in the null test of the PS/PDI. An experiment using a PS/PDI operated at visible light is performed to verify the method. The results show that the method is effective in isolating the systematic errors of the PS/PDI, and the measurement accuracy of the calibrated PS/PDI is 0.0088λ rms (λ = 632.8 nm).

  2. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first processes a spatial 2-D cross-correlation of the misaligned images, reducing the offset to 1 or 2 pixels and narrowing the search range for alignment. It then eliminates the need for subpixel fine alignment, achieving adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045

  3. Model-based phase-shifting interferometer

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditional complicated system structure, to achieve versatile, high-precision, quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as for figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data, in the form of Zernike polynomials, by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a ZYGO interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI has great potential in modern optical shop testing.

  4. Wavefront reconstruction using computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Schulze, Christian; Flamm, Daniel; Schmidt, Oliver A.; Duparré, Michael

    2012-02-01

    We propose a new method to determine the wavefront of a laser beam, based on modal decomposition using computer-generated holograms (CGHs). The beam under test illuminates the CGH with a specific, inscribed transmission function that enables the measurement of modal amplitudes and phases by evaluating the first diffraction order of the hologram. Since we use an angular multiplexing technique, our method is innately capable of real-time measurements of amplitude and phase, yielding the complete information about the optical field. A measurement of the Stokes parameters, and hence of the polarization state, provides the possibility to calculate the Poynting vector. Two wavefront reconstruction possibilities are outlined: reconstruction from the phase for scalar beams, and reconstruction from the Poynting vector for inhomogeneously polarized beams. To quantify single aberrations, the reconstructed wavefront is decomposed into Zernike polynomials. Our technique is applied to beams emerging from different kinds of multimode optical fibers, such as step-index, photonic crystal, and multicore fibers; in this work, results are shown for a step-index fiber as an example and compared to a Shack-Hartmann measurement that serves as a reference.
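
    The modal-decomposition principle the CGH implements optically can be sketched numerically: with an orthonormal mode basis, modal amplitudes and phases are inner products with the field, and the field is reconstructed from the expansion. A generic orthonormal basis stands in for the fiber modes here (an assumption for illustration):

```python
import numpy as np

# Modal decomposition sketch: amplitudes via inner products with an
# orthonormal basis, then reconstruction of the complex optical field.
rng = np.random.default_rng(7)
n_pix, n_modes = 128, 6
# Orthonormal "modes" from a QR factorization of a random complex matrix
Q, _ = np.linalg.qr(rng.normal(size=(n_pix, n_modes))
                    + 1j * rng.normal(size=(n_pix, n_modes)))
c_true = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
field = Q @ c_true                      # superposition of modes

c_hat = Q.conj().T @ field              # modal amplitudes and phases
field_rec = Q @ c_hat                   # reconstructed optical field
```

    In the experiment the inner products are evaluated optically in the first diffraction order of the hologram rather than numerically, but the algebra of the decomposition is the same.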

  5. Triangulation-based 3D surveying borescope

    NASA Astrophysics Data System (ADS)

    Pulwer, S.; Steglich, P.; Villringer, C.; Bauer, J.; Burger, M.; Franz, M.; Grieshober, K.; Wirth, F.; Blondeau, J.; Rautenberg, J.; Mouti, S.; Schrader, S.

    2016-04-01

    In this work, a measurement concept based on triangulation was developed for borescopic 3D surveying of surface defects. The integration of such a measurement system into a borescope environment requires excellent space utilization. The triangulation angle, the projected pattern, the numerical apertures of the optical system, and the viewing angle were calculated using partial coherence imaging and geometric optical ray-tracing methods. Additionally, optical aberrations and defocus were considered through the integration of Zernike polynomial coefficients. The measurement system is able to measure objects with a size of 50 μm in all dimensions with an accuracy of ±5 μm. To manage the issue of a low depth of field while using a high-resolution optical system, a wavelength-dependent aperture was integrated. Thereby, we are able to control the depth of field and resolution of the optical system, and can use the borescope in measurement mode, with high resolution and low depth of field, or in inspection mode, with low resolution and higher depth of field. First measurements with a demonstrator system are in good agreement with our simulations.

  6. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
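
    The least-squares PCE setting these sampling strategies target can be sketched in one dimension: sample the input, evaluate the model, and regress onto an orthogonal polynomial basis. A minimal sketch with a Legendre basis, plain Monte Carlo sampling, and a toy model (all assumptions for illustration):

```python
import numpy as np
from numpy.polynomial import legendre

# One-dimensional least-squares PCE: Legendre basis for a uniform input.
rng = np.random.default_rng(4)
p = 6                                   # polynomial order
n = 200                                 # oversampled design: n >> p + 1
xi = rng.uniform(-1.0, 1.0, n)          # Monte Carlo sampling of the input
y = np.exp(xi)                          # model evaluations (toy model)
Psi = legendre.legvander(xi, p)         # measurement matrix, n x (p+1)
c, *_ = np.linalg.lstsq(Psi, y, rcond=None)   # PCE coefficients

x_test = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(legendre.legval(x_test, c) - np.exp(x_test)))
```

    The sampling strategies reviewed in the record differ in how the design points `xi` are chosen; the regression step itself is unchanged.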

  7. Canonical partition functions: ideal quantum gases, interacting classical gases, and interacting quantum gases

    NASA Astrophysics Data System (ADS)

    Zhou, Chi-Chun; Dai, Wu-Sheng

    2018-02-01

    In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, strictly speaking, the thermodynamic quantity needs to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of the symmetric function, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases given by the classical and quantum cluster expansion methods in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than calculated from the grand canonical potential.
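
    The Bell-polynomial structure mentioned above is equivalent to the standard recursion for the canonical partition function of an ideal quantum gas, Z_N = (1/N) Σ_{k=1}^{N} (±1)^{k+1} z(kβ) Z_{N-k}, with + for bosons and - for fermions. A sketch on a toy equally spaced single-particle spectrum (the spectrum is an assumption for illustration):

```python
import math

# Canonical partition function of an ideal Bose or Fermi gas via the
# recursion Z_N = (1/N) * sum_k (sign)^{k+1} z(k*beta) Z_{N-k}, which
# is the Bell-polynomial expression in recursive form.
def single_particle_z(beta, levels=50):
    # Toy equally spaced spectrum E_j = j, j = 0..levels-1 (assumed)
    return sum(math.exp(-beta * j) for j in range(levels))

def canonical_Z(N, beta, sign):
    # sign = +1 for bosons, -1 for fermions
    Z = [1.0]
    for n in range(1, N + 1):
        s = sum(sign ** (k + 1) * single_particle_z(k * beta) * Z[n - k]
                for k in range(1, n + 1))
        Z.append(s / n)
    return Z[N]

zb = canonical_Z(3, 1.0, +1)    # three bosons
zf = canonical_Z(3, 1.0, -1)    # three fermions
```

    For two particles the recursion reproduces the familiar Z_2 = (z(β)² ± z(2β))/2, which is a quick consistency check on the signs.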

  8. Spectral/hp element methods: Recent developments, applications, and perspectives

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.

  9. Recursive formulas for the partial fraction expansion of a rational function with multiple poles.

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.

    1973-01-01

    The coefficients in the partial fraction expansion considered are given by Heaviside's formula. The evaluation of the coefficients involves differentiation of a quotient of two polynomials. A simplified approach for the evaluation of the coefficients is discussed. Leibniz's rule is applied and a recurrence formula is derived. A coefficient can also be determined from a system of simultaneous equations. Practical methods for performing the computational operations involved in both approaches are considered.
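
    Heaviside's formula for a pole p of multiplicity m can be sketched directly: with g(s) = (s-p)^m N(s)/D(s), the coefficient of (s-p)^(-k) is g^((m-k))(p)/(m-k)!, and each derivative of the quotient is taken with (n/d)' = (n'd - nd')/d², the step the paper simplifies. A numeric sketch (not the paper's recurrence):

```python
import numpy as np
from math import factorial

# Partial fraction coefficients at a pole p of multiplicity m, where
# den_rest(s) = D(s) / (s - p)^m, so g(s) = num(s) / den_rest(s).
def partial_fraction_coeffs(num, den_rest, p, m):
    n, d = np.poly1d(num), np.poly1d(den_rest)
    coeffs = {}
    for j in range(m, 0, -1):           # term c_j / (s - p)^j
        order = m - j                   # derivative order needed
        cn, cd = n, d
        for _ in range(order):          # repeated quotient differentiation
            cn, cd = cn.deriv() * cd - cn * cd.deriv(), cd * cd
        coeffs[j] = cn(p) / (cd(p) * factorial(order))
    return coeffs
```

    For F(s) = 1/((s+1)²(s+2)), the double pole at -1 (den_rest = s+2) gives coefficients 1 and -1, matching the expansion 1/(s+1)² - 1/(s+1) + 1/(s+2).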

  10. New analysis strategies for micro aspheric lens metrology

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon Abebe

    Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, with the convex geometry a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out both the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of the base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI, the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface, or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. 
Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and compare the individual contributions to the overall uncertainty.
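
    The Monte Carlo step described above can be sketched on a simplified surface model: repeatedly add measurement noise to the height data, refit, and take the spread of each fitted coefficient as its uncertainty. The even-polynomial sag model and noise level are illustrative assumptions, not the SWLI/PSI models of the thesis:

```python
import numpy as np

# Monte Carlo propagation of measurement noise into fitted surface
# coefficients: perturb, refit, and read off per-coefficient spread.
rng = np.random.default_rng(6)
r = np.linspace(0.0, 1.0, 200)                 # radial sample positions
true = np.array([0.0, 0.5, -0.05])             # piston, r^2, r^4 terms
basis = np.stack([np.ones_like(r), r**2, r**4], axis=1)
z_meas = basis @ true                          # noiseless "measurement"
sigma = 1e-3                                   # height noise (assumed)

fits = []
for _ in range(2000):
    z = z_meas + rng.normal(0.0, sigma, r.size)
    c, *_ = np.linalg.lstsq(basis, z, rcond=None)
    fits.append(c)
fits = np.array(fits)
coef_unc = fits.std(axis=0)                    # per-coefficient uncertainty
```

    The same loop applies unchanged to a Zernike-coefficient fit or, with a nonlinear solver inside, to the best-fit conic constant.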

  11. Terahertz wavefront assessment based on 2D electro-optic imaging

    NASA Astrophysics Data System (ADS)

    Cahyadi, Harsono; Ichikawa, Ryuji; Degert, Jérôme; Freysz, Eric; Yasui, Takeshi; Abraham, Emmanuel

    2015-03-01

    Complete characterization of terahertz (THz) radiation has been an interesting yet challenging problem for many years. In the visible optical region, wavefront assessment has proved to be a powerful tool for beam profiling and characterization, which consequently requires two-dimensional (2D) single-shot acquisition of the beam cross-section to provide the spatial profile in the time and frequency domains. In the THz region, the main problem is the lack of effective THz cameras to satisfy this need. In this communication, we propose a simple setup based on free-space collinear 2D electro-optic sampling in a ZnTe crystal for the characterization of THz wavefronts. In principle, we map the optically converted, time-resolved data of the THz pulse by changing the time delay between the probe pulse and the generated THz pulse. The temporal waveforms from different lens-ZnTe distances clearly indicate the evolution of the THz beam as it converges, focuses, or diverges. From the Fourier transform of the temporal waveforms, we can obtain the spectral profile of a broadband THz wave, which in this case lies within the 0.1-2 THz range. The spectral profile also provides the frequency dependence of the THz pulse amplitude. The comparison between experimental and theoretical results at selected frequencies (here 0.285 and 1.035 THz) is in good agreement, suggesting that our system is capable of THz wavefront characterization. Furthermore, the implementation of the Hartmann/Shack-Hartmann sensor principle enables the reconstruction of the THz wavefront. We demonstrate the reconstruction of THz wavefronts that change from planar to spherical upon insertion of a convex THz lens in the THz beam path. We apply and compare two different reconstruction methods: linear integration and Zernike polynomials. 
We conclude that the Zernike method provides a smoother wavefront shape that can later be elaborated into quantitative and qualitative analysis of the wavefront distortion.

  12. Comparison of progressive addition lenses by direct measurement of surface shape.

    PubMed

    Huang, Ching-Yao; Raasch, Thomas W; Yi, Allen Y; Bullimore, Mark A

    2013-06-01

    To compare the optical properties of five state-of-the-art progressive addition lenses (PALs) by direct physical measurement of surface shape. Five contemporary freeform PALs (Varilux Comfort Enhanced, Varilux Physio Enhanced, Hoya Lifestyle, Shamir Autograph, and Zeiss Individual) with plano distance power and a +2.00-diopter add were measured with a coordinate measuring machine. The front and back surface heights were physically measured, and the optical properties of each surface, and their combination, were calculated with custom MATLAB routines. Surface shape was described as the sum of Zernike polynomials. Progressive addition lenses were represented as contour plots of spherical equivalent power, cylindrical power, and higher order aberrations (HOAs). Maximum power rate, minimum 1.00-DC corridor width, percentage of lens area with less than 1.00 DC, and root mean square of HOAs were also compared. Comfort Enhanced and Physio Enhanced have freeform front surfaces, Shamir Autograph and Zeiss Individual have freeform back surfaces, and Hoya Lifestyle has freeform properties on both surfaces. However, the overall optical properties are similar, regardless of the lens design. The maximum power rate is between 0.08 and 0.12 diopters per millimeter and the minimum corridor width is between 8 and 11 mm. For a 40-mm lens diameter, the percentage of lens area with less than 1.00 DC is between 64 and 76%. The third-order Zernike terms are the dominant high-order terms in HOAs (78 to 93% of overall shape variance). Higher order aberrations are higher along the corridor area and around the near zone. The maximum root mean square of HOAs based on a 4.5-mm pupil size around the corridor area is between 0.05 and 0.06 µm. This nonoptical method using a coordinate measuring machine can be used to evaluate a PAL by surface height measurements, with the optical properties directly related to its front and back surface designs.

  13. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

    Due to aggressive environmental factors, variation of the dynamic load, degradation of material properties and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which must be constructed from a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties with limited information. Two methods are then presented for the dynamic response analysis of the structure under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently calculated, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, including a spring-mass-damper system and a shell structure.
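
A minimal sketch of the surrogate idea behind MCM-CPE: build a cheap Chebyshev approximation of an expensive response from a handful of solver calls, then Monte Carlo sample the interval variable on the surrogate to estimate the response range (the one-parameter model and interval bounds below are invented for illustration):

```python
import numpy as np

# Interval parameter: stiffness k known only to lie in [k_lo, k_hi] (hypothetical values).
k_lo, k_hi = 0.8, 1.2

def response(k):
    """Stand-in for an expensive dynamic solver: displacement u = F / k, unit load."""
    return 1.0 / k

# Chebyshev surrogate built from solver calls at Chebyshev nodes only.
deg = 8
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))  # on [-1, 1]
k_nodes = 0.5 * (k_hi - k_lo) * nodes + 0.5 * (k_hi + k_lo)
cheb = np.polynomial.chebyshev.Chebyshev.fit(k_nodes, response(k_nodes), deg,
                                             domain=[k_lo, k_hi])

# Monte Carlo over the interval: only the cheap surrogate is evaluated.
samples = np.random.default_rng(1).uniform(k_lo, k_hi, 100_000)
u = cheb(samples)
u_min, u_max = u.min(), u.max()   # estimated variational range of the response
```

The estimated range approaches [1/k_hi, 1/k_lo], the true bounds of this toy response, while the "solver" is called only at the nine Chebyshev nodes.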

  14. Convergence of moment expansions for expectation values with embedded random matrix ensembles and quantum chaos

    NASA Astrophysics Data System (ADS)

    Kota, V. K. B.

    2003-07-01

    Smoothed forms for expectation values ⟨K⟩_E of positive definite operators K follow from the K-density moments either directly or in many other ways, each giving a series expansion (involving polynomials in E). In large spectroscopic spaces one has to partition the many-particle spaces into subspaces. Partitioning leads to new expansions for expectation values. It is shown that all the expansions converge to compact forms depending on the nature of the operator K and on the operation of embedded random matrix ensembles and quantum chaos in many-particle spaces. Explicit results are given for occupancies ⟨n_i⟩_E, spin-cutoff factors ⟨J_z²⟩_E and strength sums ⟨O†O⟩_E, where O is a one-body transition operator.

  15. An Analysis of Polynomial Chaos Approximations for Modeling Single-Fluid-Phase Flow in Porous Medium Systems

    PubMed Central

    Rupert, C.P.; Miller, C.T.

    2008-01-01

    We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519

  16. Classical Dynamics of Fullerenes

    NASA Astrophysics Data System (ADS)

    Sławianowski, Jan J.; Kotowski, Romuald K.

    2017-06-01

    The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in classical dynamics of large molecules and fullerenes enable us to formulate their dynamics in terms of the polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.

  17. CLASSICAL AREAS OF PHENOMENOLOGY: Study on the design and Zernike aberrations of a segmented mirror telescope

    NASA Astrophysics Data System (ADS)

    Jiang, Zhen-Yu; Li, Lin; Huang, Yi-Fan

    2009-07-01

    Segmented mirror telescopes are widely used, and the aberrations of segmented mirror systems differ from those of single-mirror systems. This paper uses Fourier optics theory to analyse the Zernike aberrations of segmented mirror systems and concludes that they obey a linearity theorem. The design of a segmented space telescope and its segmentation schemes are discussed, and its optical model is constructed. A computer simulation experiment performed with this optical model verifies the suppositions, and the experimental results confirm the correctness of the model.

  18. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as are introduced by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem with interpolation is that the same pixel can take different values depending on which scheme is chosen, such as the "nearest," "linear," "cubic," and "spline" options in MATLAB.
The conventional, FFT-based spatial filtering method used to eliminate the surface measurement noise or measurement errors can also suffer from aliasing effects. During re-sampling of a surface map, this software preserves the low spatial-frequency characteristic of a given surface map through the use of Zernike-polynomial fit coefficients, and maintains mid- and high-spatial-frequency characteristics of the given surface map by the use of a PSD model derived from the two-dimensional PSD data of the mid- and high-spatial-frequency components of the original surface map. Because this new method creates the new surface map in the desired sampling format from analytical expressions only, it does not encounter any aliasing effects and does not cause any discontinuity in the resultant surface map.
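
The analytic treatment of the mid- and high-spatial-frequency content can be illustrated by synthesizing a surface realization directly from an assumed PSD model at any desired grid size, with no interpolation step (the power-law exponent and grid size are placeholders, not the software's actual model):

```python
import numpy as np

N = 256                                  # target grid: choose any sampling needed
rng = np.random.default_rng(2)

fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
f = np.hypot(FX, FY)                     # radial spatial frequency
f[0, 0] = f[0, 1]                        # avoid the DC singularity below

psd = f ** -2.5                          # assumed isotropic power-law PSD model
psd[0, 0] = 0.0                          # no DC power: zero-mean surface

# PSD-shaped amplitudes with random phases; inverse FFT gives a realization.
phase = np.exp(2j * np.pi * rng.uniform(size=(N, N)))
surface = np.fft.ifft2(np.sqrt(psd) * phase).real
surface -= surface.mean()
```

Because the map is generated from the analytic PSD at the output sampling, changing N re-samples the statistics without the aliasing of grid-to-grid interpolation.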

  19. On the Gibbs phenomenon 3: Recovering exponential accuracy in a sub-interval from a spectral partial sum of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1993-01-01

    The investigation of overcoming the Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points, including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L₂ function f(x) in terms of either the trigonometric polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
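
The phenomenon being overcome is easy to reproduce: Fourier partial sums of a square wave overshoot near the jump by a fixed amount no matter how many terms are kept, and converge only slowly in a sub-interval away from it (illustration only; the paper's re-expansion cure is not implemented here):

```python
import numpy as np

# Fourier partial sums S_N of the square wave sign(x) on [-pi, pi]:
#   S_N(x) = (4/pi) * sum over odd k <= N of sin(k x) / k.
def partial_sum(x, N):
    k = np.arange(1, N + 1, 2)
    return (4 / np.pi) * np.sin(np.outer(x, k)) @ (1.0 / k)

x_peak = np.linspace(1e-4, 0.5, 4001)        # region around the jump at x = 0
overshoot = [partial_sum(x_peak, N).max() for N in (32, 128, 512)]
# Gibbs: the overshoot does not decay with N; it tends to (2/pi) Si(pi) ~ 1.179.

x_mid = np.linspace(1.0, 2.0, 1001)          # closed sub-interval away from the jumps
err_mid = [np.abs(partial_sum(x_mid, N) - 1.0).max() for N in (32, 128, 512)]
# Away from the discontinuity the raw partial sum converges, but only
# algebraically; the re-expansion technique recovers exponential accuracy.
```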

  20. From r-spin intersection numbers to Hodge integrals

    NASA Astrophysics Data System (ADS)

    Ding, Xiang-Mao; Li, Yuping; Meng, Lingxian

    2016-01-01

    The Generalized Kontsevich Matrix Model (GKMM) with a certain given potential is the partition function of r-spin intersection numbers. We represent this GKMM in terms of fermions, expand it in terms of the Schur polynomials by the boson-fermion correspondence, and link it with a Hurwitz partition function and a Hodge partition function by operators in a \widehat{GL}(∞) group. Then, from a W_{1+∞} constraint on the partition function of r-spin intersection numbers, we obtain a W_{1+∞} constraint for the Hodge partition function. The W_{1+∞} constraint completely determines the Schur polynomial expansion of the Hodge partition function.

  1. Sparse polynomial space approach to dissipative quantum systems: application to the sub-ohmic spin-boson model.

    PubMed

    Alvermann, A; Fehske, H

    2009-04-17

    We propose a general numerical approach to open quantum systems with a coupling to bath degrees of freedom. The technique combines the methodology of polynomial expansions of spectral functions with the sparse grid concept from interpolation theory. Thereby we construct a Hilbert space of moderate dimension to represent the bath degrees of freedom, which allows us to perform highly accurate and efficient calculations of static, spectral, and dynamic quantities using standard exact diagonalization algorithms. The strength of the approach is demonstrated for the phase transition, critical behavior, and dissipative spin dynamics in the spin-boson model.

  2. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ²) and weakly nonlinear O(μ^N) type are presented, and the analytical and numerical properties of the O(μ²) and O(μ⁴) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise from the Boussinesq scaling. The optimal O(μ²) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ⁴) model attains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters, which can be used to optimize shoaling or nonlinear properties. The O(μ⁴) model shows excellent agreement with experimental data.
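
The quoted Padé [2,2] dispersion accuracy can be checked numerically: the classical [2,2] approximant of the linear dispersion factor tanh(μ)/μ, with μ = kh, is (1 + μ²/15)/(1 + 2μ²/5), and it tracks the exact factor closely from shallow to intermediate depth:

```python
import numpy as np

# Exact linear dispersion factor tanh(mu)/mu, mu = k h, versus the classical
# Pade [2,2] approximant (1 + mu^2/15) / (1 + 2 mu^2/5).
mu = np.linspace(1e-6, 2.0, 500)
exact = np.tanh(mu) / mu
pade22 = (1 + mu**2 / 15) / (1 + 2 * mu**2 / 5)
rel_err = np.abs(pade22 - exact) / exact
```

The two expressions agree through O(μ⁴) by construction, so the relative error stays below about 1% even at μ = 2.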

  3. A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.

    PubMed

    Langley, Jason; Zhao, Qun

    2009-09-07

    The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] is exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom, but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which provided an additional burden to the phase unwrapping algorithm. Gaussian noise was then added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev and Legendre implementations of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations compare well to PRELUDE 3D, a well-recognized 3D phase unwrapping package for functional MRI.
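
The coefficient-extraction step can be sketched in one dimension: project a smooth model function onto Legendre polynomials (a special case of Gegenbauer polynomials) using their orthogonality on [-1, 1] and Gauss-Legendre quadrature (the "phase" profile below is a stand-in, not MRI data):

```python
import numpy as np
from numpy.polynomial import legendre

# c_n = (2n + 1)/2 * integral_{-1}^{1} f(x) P_n(x) dx, by Gauss-Legendre quadrature.
f = lambda x: 3.0 * x**2 - 1.2 * x + 0.5      # stand-in for a smooth phase profile

deg = 6
nodes, weights = legendre.leggauss(32)
coeffs = np.array([
    (2 * n + 1) / 2 * np.sum(weights * f(nodes) * legendre.legval(nodes, np.eye(n + 1)[n]))
    for n in range(deg + 1)
])

x = np.linspace(-1, 1, 201)
recon = legendre.legval(x, coeffs)            # reconstruction from the coefficients
```

For this polynomial test function the projection is exact: f = 1.5 P0 - 1.2 P1 + 2 P2, and the reconstruction matches f everywhere on [-1, 1].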

  4. Application of the polynomial chaos expansion to approximate the homogenised response of the intervertebral disc.

    PubMed

    Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W

    2014-10-01

    A possibility to simulate the mechanical behaviour of the human spine is to model the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included in the MBS in two different ways. It can either be computed online in a so-called co-simulation of the MBS and the FEM, or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. Following this, the main challenge is to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as was used in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between.
In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.
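
A minimal single-variable PCE, in the spirit of (though far simpler than) the high-dimensional expansions used here: expand a response with one standard-normal input in probabilists' Hermite polynomials and read the mean and variance directly off the coefficients (the forward model is an invented stand-in):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# PCE of u(X), X ~ N(0, 1), in probabilists' Hermite polynomials He_n:
#   u(X) ~ sum_n c_n He_n(X),  c_n = E[u(X) He_n(X)] / n!   (since E[He_n^2] = n!).
u = lambda x: np.exp(0.3 * x)            # stand-in for an expensive forward model

deg = 6
nodes, weights = hermegauss(40)          # Gauss-Hermite(e) quadrature rule
weights = weights / weights.sum()        # normalize to the standard-normal measure

coeffs = np.array([
    np.sum(weights * u(nodes) * hermeval(nodes, np.eye(n + 1)[n])) / factorial(n)
    for n in range(deg + 1)
])

mean_pce = coeffs[0]                                                     # E[u]
var_pce = sum(factorial(n) * coeffs[n] ** 2 for n in range(1, deg + 1))  # Var[u]
```

For u = exp(0.3 X) the exact lognormal moments are known, so the truncation quality is easy to verify.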

  5. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.

  6. A stochastic approach to online vehicle state and parameter estimation, with application to inertia estimation for rollover prevention and battery charge/health estimation.

    DOT National Transportation Integrated Search

    2013-08-01

    This report summarizes research conducted at Penn State, Virginia Tech, and West Virginia University on the development of algorithms based on the generalized polynomial chaos (gpc) expansion for the online estimation of automotive and transportation...

  7. A range-free method to determine antoine vapor-pressure heat transfer-related equation coefficients using the Boubaker polynomial expansion scheme

    NASA Astrophysics Data System (ADS)

    Koçak, H.; Dahong, Z.; Yildirim, A.

    2011-05-01

    In this study, a range-free method is proposed in order to determine the Antoine constants for a given material (salicylic acid). The advantage of this method is mainly yielding analytical expressions which fit different temperature ranges.

  8. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alipour, N.; Safari, H.; Innes, D. E.

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
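
The Zernike-moment features can be sketched directly: compute moments A_nm of an image mapped onto the unit disk; their magnitudes are rotation invariant, which is what makes them useful as classifier features (the test image and normalization details below are illustrative, not the paper's pipeline):

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square image mapped onto the unit disk."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    R = np.zeros_like(rho)                     # radial polynomial R_n^|m|
    for k in range((n - abs(m)) // 2 + 1):
        R += ((-1) ** k * factorial(n - k)
              / (factorial(k) * factorial((n + abs(m)) // 2 - k)
                 * factorial((n - abs(m)) // 2 - k))) * rho ** (n - 2 * k)
    V = R * np.exp(-1j * m * theta)            # complex Zernike basis function
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask]) * (2.0 / (N - 1)) ** 2

img = np.zeros((65, 65))
img[20:30, 35:50] = 1.0                        # an off-center rectangular "event"
m00 = zernike_moment(img, 0, 0)
m22 = zernike_moment(img, 2, 2)
m31 = zernike_moment(img, 3, 1)
```

Rotating the image only changes the phase of A_nm, so |A_nm| is unchanged; a feature vector of such magnitudes is what an SVM can then separate.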

  9. Fock expansion of multimode pure Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cariolaro, Gianfranco; Pierobon, Gianfranco, E-mail: gianfranco.pierobon@unipd.it

    2015-12-15

    The Fock expansion of multimode pure Gaussian states is derived starting from their representation as displaced and squeezed multimode vacuum states. The approach is new and appears to be simpler and more general than previous ones starting from the phase-space representation given by the characteristic or Wigner function. Fock expansion is performed in terms of easily evaluable two-variable Hermite–Kampé de Fériet polynomials. A relatively simple and compact expression for the joint statistical distribution of the photon numbers in the different modes is obtained. In particular, this result enables one to give a simple characterization of separable and entangled states, as shown for two-mode and three-mode Gaussian states.

  10. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock (GWHF) equations. The discretization of the GWHF equations in this procedure is based on a mesh of points that is not equally distributed, in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations, and the maximum error found when compared to numerical values is only 0.788 mHartree, for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree of the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good agreement with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  11. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more... [1] ...Journal of Computer Vision, vol. 92, no. 1, pp. 1-31. [2] G. Bradski and A. Kaehler. 2008. Learning OpenCV: Computer Vision with the OpenCV Library.

  12. Compact representation of continuous energy surfaces for more efficient protein design

    PubMed Central

    Hallen, Mark A.; Gainza, Pablo; Donald, Bruce R.

    2015-01-01

    In macromolecular design, conformational energies are sensitive to small changes in atom coordinates, so modeling the small, continuous motions of atoms around low-energy wells confers a substantial advantage in structural accuracy; however, modeling these motions comes at the cost of a very large number of energy function calls, which form the bottleneck in the design calculation. In this work, we remove this bottleneck by consolidating all conformational energy evaluations into the precomputation of a local polynomial expansion of the energy about the “ideal” conformation for each low-energy, “rotameric” state of each residue pair. This expansion is called Energy as Polynomials in Internal Coordinates (EPIC), where the internal coordinates can be sidechain dihedrals, backrub angles, and/or any other continuous degrees of freedom of a macromolecule, and any energy function can be used without adding any asymptotic complexity to the design. We demonstrate that EPIC efficiently represents the energy surface for both molecular-mechanics and quantum-mechanical energy functions, and apply it specifically to protein design to model both sidechain and backbone degrees of freedom. PMID:26089744
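
A toy version of the precomputation idea: sample an energy well in two internal coordinates near the "ideal" conformation and fit a cubic polynomial that can later replace the expensive energy calls (the energy function and coordinates are invented for illustration; this is not the EPIC implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

def energy(phi, psi):
    """Stand-in for an expensive conformational energy function near a well."""
    return 2.0 * phi**2 + 1.5 * psi**2 + 0.8 * phi * psi + 0.1 * phi**3

# Sample the two dihedral-like coordinates near the well and fit a full cubic.
pts = rng.uniform(-0.5, 0.5, size=(400, 2))
phi, psi = pts[:, 0], pts[:, 1]
terms = [phi * 0 + 1, phi, psi, phi**2, phi * psi, psi**2,
         phi**3, phi**2 * psi, phi * psi**2, psi**3]
A = np.column_stack(terms)
c, *_ = np.linalg.lstsq(A, energy(phi, psi), rcond=None)
```

Because the toy energy lies in the cubic basis, the fit recovers its coefficients exactly; for a real energy function the residual measures how well the local polynomial represents the well.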

  13. An Efficient numerical method to calculate the conductivity tensor for disordered topological matter

    NASA Astrophysics Data System (ADS)

    Garcia, Jose H.; Covaci, Lucian; Rappoport, Tatiana G.

    2015-03-01

    We propose a new efficient numerical approach to calculate the conductivity tensor in solids. We use a real-space implementation of the Kubo formalism in which the diagonal and off-diagonal conductivities are treated on the same footing. We adopt a formulation of Kubo theory known as the Bastin formula and expand the Green's functions involved in terms of Chebyshev polynomials using the kernel polynomial method. Within this method, all the computational effort goes into the calculation of the expansion coefficients. The approach also has the advantage of obtaining both conductivities in a single calculation step and for various values of temperature and chemical potential, capturing the topology of the band structure. Our numerical technique is very general and is suitable for the calculation of transport properties of disordered systems. We analyze how the method's accuracy varies with the number of moments used in the expansion and illustrate our approach by calculating the transverse conductivity of different topological systems. T.G.R., J.H.G. and L.C. acknowledge the Brazilian agencies CNPq, FAPERJ and INCT de Nanoestruturas de Carbono, and the Flemish Science Foundation for financial support.
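
The moment-calculation core of the kernel polynomial method can be sketched on a small tight-binding chain: Chebyshev moments of the rescaled Hamiltonian, damped by the Jackson kernel, reconstruct the density of states (a scalar analogue of the Bastin-formula evaluation; the exact-trace loop below would be replaced by stochastic trace estimation for large systems):

```python
import numpy as np

L, M = 200, 128
H = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)   # 1-D chain, t = 1
Ht = H / 2.1                                # rescale spectrum [-2, 2] into (-1, 1)

# Chebyshev moments mu_k = Tr[T_k(Ht)] / L via the matrix recurrence
# T_k = 2 Ht T_{k-1} - T_{k-2}.
T_prev, T_curr = np.eye(L), Ht.copy()
mu = np.zeros(M)
mu[0], mu[1] = 1.0, np.trace(T_curr) / L
for k in range(2, M):
    T_prev, T_curr = T_curr, 2 * Ht @ T_curr - T_prev
    mu[k] = np.trace(T_curr) / L

# Jackson kernel coefficients damp the Gibbs oscillations of the truncated series.
n = np.arange(M)
g = ((M - n + 1) * np.cos(np.pi * n / (M + 1))
     + np.sin(np.pi * n / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)

x = np.linspace(-0.99, 0.99, 801)
series = g[0] * mu[0] + 2 * np.sum(
    (g[1:] * mu[1:])[:, None] * np.cos(np.outer(n[1:], np.arccos(x))), axis=0)
dos = series / (np.pi * np.sqrt(1 - x**2))   # KPM density of states on [-1, 1]
```

The Jackson kernel keeps the estimate nonnegative, and the reconstructed density integrates to one over the rescaled band.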

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beaucage, Timothy R; Beenfeldt, Eric P; Speakman, Scott A

    Among the langasite family of crystals (LGX), the three most popular materials are langasite (LGS, La3Ga5SiO14), langatate (LGT, La3Ga5.5Ta0.5O14) and langanite (LGN, La3Ga5.5Nb0.5O14). The LGX crystals have received significant attention for acoustic wave (AW) device applications due to several properties, which include: (1) piezoelectric constants about two and a half times those of quartz, thus allowing the design of larger bandwidth filters; (2) existence of temperature compensated orientations; (3) high density, with potential for reduced vibration and acceleration sensitivity; and (4) possibility of operation at high temperatures, since the LGX crystals do not present phase changes up to their melting point above 1400 °C. The LGX crystals' capability to operate at elevated temperatures calls for an investigation on the growth quality and the consistency of these materials' properties at high temperature. One of the fundamental crystal properties is the thermal expansion coefficients in the entire temperature range where the material is operational. This work focuses on the measurement of the LGT thermal expansion coefficients from room temperature (25 °C) to 1200 °C. Two methods of extracting the thermal expansion coefficients have been used and compared: (a) dual push-rod dilatometry, which provides the bulk expansion; and (b) x-ray powder diffraction, which provides the lattice expansion. Both methods were performed over the entire temperature range and considered multiple samples taken from <001> Czochralski grown LGT material. The thermal coefficients of expansion were extracted by approximating each expansion data set to a third-order polynomial fit over the three temperature ranges reported in this work: 25 °C to 400 °C, 400 °C to 900 °C, and 900 °C to 1200 °C. An accuracy of fit better than 35 ppm for the bulk expansion and better than 10 ppm for the lattice expansion has been obtained with the aforementioned polynomial fitting. The percentage difference between the bulk and the lattice fitted expansion responses over the entire temperature range of 25 °C to 1200 °C is less than 2% for the three crystalline axes, which indicates the high quality and growth consistency of the LGT crystal measured.
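
The third-order fitting procedure can be sketched as follows, with synthetic stand-in data in place of the measured LGT values (the coefficients below are invented): fit the relative expansion dL/L0(T) over one temperature range with a cubic, and obtain the instantaneous expansion coefficient as its derivative:

```python
import numpy as np

T = np.linspace(25, 400, 16)                               # temperature, degrees C
dLoverL = 5.0e-6 * (T - 25) + 2.0e-9 * (T - 25) ** 2       # synthetic expansion data

p = np.polynomial.Polynomial.fit(T, dLoverL, 3)            # third-order polynomial fit
alpha = p.deriv()                                          # instantaneous CTE alpha(T)
fit_err = np.abs(p(T) - dLoverL).max()
```

Repeating the fit on each of the three temperature ranges, and on both the dilatometry and diffraction data sets, mirrors the comparison described above.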

  15. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.

  16. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions, while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1, 14, and 40 random dimensions.
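
    The selection of dominating gPC bases can be illustrated with a toy sparse-recovery sketch. This is a crude least-squares stand-in for the paper's Bayesian MCMC machinery; the problem sizes, threshold, and "true" coefficients are invented for illustration:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)
P, N = 12, 40                       # 12 gPC basis terms, 40 model samples
xi = rng.standard_normal(N)         # standard Gaussian random input

# Design matrix of probabilists' Hermite polynomials He_0 .. He_11
Phi = np.column_stack([hermeval(xi, [0]*k + [1]) for k in range(P)])

c_true = np.zeros(P)
c_true[[0, 2, 5]] = [1.0, 0.5, -0.25]          # sparse "true" solution
y = Phi @ c_true                               # noiseless model outputs

c_full = np.linalg.lstsq(Phi, y, rcond=None)[0]
keep = np.abs(c_full) > 0.05                   # retain dominating bases only
c_sparse = np.zeros(P)
c_sparse[keep] = np.linalg.lstsq(Phi[:, keep], y, rcond=None)[0]
print(np.flatnonzero(keep))                    # indices of the retained bases
```

    Here the thresholded refit plays the role the median probability model plays in the paper: only bases whose coefficients survive the cut are retained in the sparse representation.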

  17. An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LI, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every respect except that it represents and propagates model uncertainty by a polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown that PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of bases is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm is tested with different examples and demonstrates great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
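
    The PCE representation that PCKF propagates in place of an ensemble can be sketched for a single Gaussian random dimension. This is a minimal illustration of building PCE coefficients by probabilistic collocation, not the PCKF update itself; the model function and order are invented:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coeffs(model, order):
    """PCE coefficients of model(xi), xi ~ N(0,1), by probabilistic
    collocation: Gauss-Hermite quadrature plus projection on He_k."""
    nodes, weights = hermegauss(order + 4)     # a few extra nodes for accuracy
    weights = weights / weights.sum()          # normalize to the N(0,1) measure
    f = model(nodes)
    return np.array([np.sum(weights * f * hermeval(nodes, [0]*k + [1]))
                     / math.factorial(k)       # E[He_k^2] = k!
                     for k in range(order + 1)])

c = pce_coeffs(lambda x: np.exp(0.3 * x), order=6)
pce_mean = c[0]                                # E[f] = c_0
pce_var = sum(math.factorial(k) * c[k]**2 for k in range(1, 7))
print(pce_mean, pce_var)
```

    The coefficients carry the same information as an ensemble's sample statistics: the mean is c_0 and the variance follows from the remaining coefficients, which is what the Kalman update acts on in PCKF.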

  18. Processing short-term and long-term information with a combination of polynomial approximation techniques and time-delay neural networks.

    PubMed

    Fuchs, Erich; Gruber, Christian; Reitmaier, Tobias; Sick, Bernhard

    2009-09-01

    Neural networks are often used to process temporal information, i.e., any kind of information related to time series. In many cases, time series contain short-term and long-term trends or behavior. This paper presents a new approach to capture temporal information with various reference periods simultaneously. A least squares approximation of the time series with orthogonal polynomials is used to describe short-term trends contained in a signal (average, increase, curvature, etc.). Long-term behavior is modeled with the tapped delay lines of a time-delay neural network (TDNN). This network takes the coefficients of the orthogonal expansion of the approximating polynomial as inputs, thus considering short-term and long-term information efficiently. The advantages of the method are demonstrated by means of artificial data and two real-world application examples: the prediction of the number of users in a computer network, and online tool-wear classification in turning.
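
    The short-term feature extraction step can be sketched as a least-squares fit of a signal window with orthogonal (here Legendre) polynomials; the window length and degree below are illustrative, not the paper's settings:

```python
import numpy as np

def short_term_features(window, degree=2):
    """Least-squares fit of a signal window with Legendre polynomials
    P_0..P_degree; the coefficients summarize average, trend, curvature."""
    t = np.linspace(-1.0, 1.0, len(window))
    V = np.polynomial.legendre.legvander(t, degree)   # basis matrix
    coeffs, *_ = np.linalg.lstsq(V, window, rcond=None)
    return coeffs

w = 0.5 + 2.0 * np.linspace(-1, 1, 21)     # window with a pure linear trend
c = short_term_features(w)
print(np.round(c, 6))                      # ≈ [0.5, 2.0, 0.0]
```

    The coefficient vector (rather than the raw samples) is what would be fed to the TDNN inputs.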

  19. An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients

    NASA Technical Reports Server (NTRS)

    Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas

    1994-01-01

    We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families the matrix representing multiplication by a polynomial (a convolution operator on the coefficients) is banded, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
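
    The banded structure of the integration operator is easy to exhibit for the Legendre family, whose antiderivatives obey the recurrence ∫P_n dx = (P_{n+1} − P_{n−1})/(2n+1) + C; NumPy's `legint` implements exactly this recurrence, so assembling its matrix column by column shows the band:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Integration operator in the Legendre coefficient basis: column j holds
# the coefficients of the antiderivative of P_j (with lower limit x = 0).
N = 8
B = np.column_stack([L.legint(np.eye(N)[:, j]) for j in range(N)])
print(np.round(B, 3))   # banded: entries only at rows j-1 and j+1
                        # (plus row 0, which carries the integration constant)
```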

  20. Bilateral Symmetry before and Six Months after Aberration-Free™ Correction with the SCHWIND AMARIS TotalTech Laser: Clinical Outcomes

    PubMed Central

    Arbelaez, Maria Clara; Vidal, Camila; Arba-Mosquera, Samuel

    2010-01-01

    Purpose To compare the preoperative and postoperative bilateral symmetry between OD and OS eyes that have undergone femto-LASIK using the Ziemer LDV femtosecond laser system, the SCHWIND AMARIS Excimer Laser and the Aberration-Free™ profiles implemented in the SCHWIND Custom Ablation Manager software. Methods A total of 25 LASIK patients were bilaterally evaluated at the six-month follow-up visit. In all cases, standard examinations and pre- and postoperative corneal wavefront topography analysis (OPTIKON Scout) were performed. Aberration-Free™ aspheric treatments were devised using the Custom Ablation Manager software and ablations were performed by means of the SCHWIND AMARIS flying-spot excimer laser system (both SCHWIND eye-tech-solutions). In all cases LASIK flaps were created using an LDV femtosecond laser (Ziemer Group). The OD/OS bilateral symmetry was evaluated in terms of corneal wavefront aberration. Results Preoperatively, 11 Zernike terms showed significant bilateral (OS-vs.-OD) symmetry, and only 6 Zernike terms were significantly different. Overall, 23 out of the 25 patients showed significant bilateral symmetry, and only 2 out of 25 patients showed significant differences. None of the aberration metrics changed from pre- to postoperative values by a clinically relevant amount. At the 6-month postoperative visit, 12 Zernike terms showed significant symmetry, and 8 terms were significantly different. Overall, 22 out of 25 patients showed significant bilateral symmetry (OS vs. OD), and only 3 out of 25 patients showed significant differences. Also, this postoperative examination revealed that 6 Zernike terms lost significant OS-vs.-OD symmetry, but 4 Zernike terms gained significant symmetry. Finally, 4 patients lost significant bilaterality, and 2 patients gained significant bilaterality: bilateral symmetry between eyes was better maintained in those patients with a clear preoperative bilateral symmetry. 
Conclusions Aberration-Free™ treatments with the SCHWIND AMARIS did not induce clinically significant aberrations and maintained the global OD-vs.-OS bilateral symmetry, as well as the bilateral symmetry between corresponding Zernike terms (which influences binocular summation).

  1. Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.

    PubMed

    Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R

    2008-02-14

    The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrödinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrödinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.
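
    A minimal short-iterative-Lanczos propagation step, the method whose errors CSL roughly tracks, can be sketched as follows. This omits the CSL conjugation trick, re-orthogonalization, and wavelet bases, and the Hamiltonian is a hypothetical tight-binding chain chosen only for illustration:

```python
import numpy as np

def lanczos_expm(H, psi, m=8, dt=0.1):
    """Approximate exp(-i*H*dt) @ psi in an m-dimensional Krylov space
    (short iterative Lanczos; no re-orthogonalization, sketch only)."""
    n = len(psi)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    Q[:, 0] = psi / np.linalg.norm(psi)
    for k in range(m):                       # Lanczos tridiagonalization
        w = H @ Q[:, k]
        alpha[k] = Q[:, k] @ w
        w = w - alpha[k] * Q[:, k]
        if k > 0:
            w = w - beta[k - 1] * Q[:, k - 1]
        if k < m - 1:
            beta[k] = np.linalg.norm(w)
            Q[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, S = np.linalg.eigh(T)             # exponentiate the small matrix
    coef = S @ (np.exp(-1j * evals * dt) * S[0, :])   # exp(-i*T*dt) @ e1
    return np.linalg.norm(psi) * (Q @ coef)

# Hypothetical test Hamiltonian: nearest-neighbour tight-binding chain
H = np.diag(np.ones(19), 1) + np.diag(np.ones(19), -1)
psi0 = np.zeros(20); psi0[10] = 1.0
psi1 = lanczos_expm(H, psi0)
print(round(np.linalg.norm(psi1), 12))       # unitary: the norm stays 1
```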

  2. Comparison of permutationally invariant polynomials, neural networks, and Gaussian approximation potentials in representing water interactions through many-body expansions

    NASA Astrophysics Data System (ADS)

    Nguyen, Thuong T.; Székely, Eszter; Imbalzano, Giulio; Behler, Jörg; Csányi, Gábor; Ceriotti, Michele; Götz, Andreas W.; Paesani, Francesco

    2018-06-01

    The accurate representation of multidimensional potential energy surfaces is a necessary requirement for realistic computer simulations of molecular systems. The continued increase in computer power accompanied by advances in correlated electronic structure methods nowadays enables routine calculations of accurate interaction energies for small systems, which can then be used as references for the development of analytical potential energy functions (PEFs) rigorously derived from many-body (MB) expansions. Building on the accuracy of the MB-pol many-body PEF, we investigate here the performance of permutationally invariant polynomials (PIPs), neural networks, and Gaussian approximation potentials (GAPs) in representing water two-body and three-body interaction energies, denoting the resulting potentials PIP-MB-pol, Behler-Parrinello neural network-MB-pol, and GAP-MB-pol, respectively. Our analysis shows that all three analytical representations exhibit similar levels of accuracy in reproducing both two-body and three-body reference data as well as interaction energies of small water clusters obtained from calculations carried out at the coupled cluster level of theory, the current gold standard for chemical accuracy. These results demonstrate the synergy between interatomic potentials formulated in terms of a many-body expansion, such as MB-pol, that are physically sound and transferable, and machine-learning techniques that provide a flexible framework to approximate the short-range interaction energy terms.

  3. OCT 3-D surface topography of isolated human crystalline lenses

    PubMed Central

    Sun, Mengchan; Birkenfeld, Judith; de Castro, Alberto; Ortiz, Sergio; Marcos, Susana

    2014-01-01

    Quantitative 3-D Optical Coherence Tomography was used to measure surface topography of 36 isolated human lenses, and to evaluate the relationship between anterior and posterior lens surface shape and their changes with age. All lens surfaces were fitted to 6th order Zernike polynomials. Astigmatism was the predominant surface aberration in anterior and posterior lens surfaces (accounting for ~55% and ~63% of the variance respectively), followed by spherical terms, coma, trefoil and tetrafoil. The amount of anterior and posterior surface astigmatism did not vary significantly with age. The relative angle between anterior and posterior surface astigmatism axes was on average 36.5 deg, tended to decrease with age, and was >45 deg in 36.1% of lenses. The anterior surface RMS spherical term, RMS coma and 3rd order RMS decreased significantly with age. In general, there was a statistically significant correlation between the 3rd and 4th order terms of the anterior and posterior surfaces. Understanding the coordination of anterior and posterior lens surface geometries and their topographical changes with age sheds light on the role of the lens in the optical properties of the eye and the lens aging mechanism. PMID:25360371

  4. OCT 3-D surface topography of isolated human crystalline lenses.

    PubMed

    Sun, Mengchan; Birkenfeld, Judith; de Castro, Alberto; Ortiz, Sergio; Marcos, Susana

    2014-10-01

    Quantitative 3-D Optical Coherence Tomography was used to measure surface topography of 36 isolated human lenses, and to evaluate the relationship between anterior and posterior lens surface shape and their changes with age. All lens surfaces were fitted to 6th order Zernike polynomials. Astigmatism was the predominant surface aberration in anterior and posterior lens surfaces (accounting for ~55% and ~63% of the variance respectively), followed by spherical terms, coma, trefoil and tetrafoil. The amount of anterior and posterior surface astigmatism did not vary significantly with age. The relative angle between anterior and posterior surface astigmatism axes was on average 36.5 deg, tended to decrease with age, and was >45 deg in 36.1% of lenses. The anterior surface RMS spherical term, RMS coma and 3rd order RMS decreased significantly with age. In general, there was a statistically significant correlation between the 3rd and 4th order terms of the anterior and posterior surfaces. Understanding the coordination of anterior and posterior lens surface geometries and their topographical changes with age sheds light on the role of the lens in the optical properties of the eye and the lens aging mechanism.
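
    Fitting surface elevation data with Zernike polynomials up to 6th order reduces to a linear least-squares problem. The sketch below uses an invented synthetic surface dominated by astigmatism (Z_2^2), mirroring the finding above; the sampling and coefficients are not from the paper:

```python
import math
import numpy as np

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m on the unit disk (unnormalized)."""
    R = sum((-1)**k * math.factorial(n - k)
            / (math.factorial(k) * math.factorial((n + abs(m)) // 2 - k)
               * math.factorial((n - abs(m)) // 2 - k)) * rho**(n - 2*k)
            for k in range((n - abs(m)) // 2 + 1))
    return R * (np.cos(m * theta) if m >= 0 else np.sin(-m * theta))

# Hypothetical elevation samples, uniform over the unit disk
rng = np.random.default_rng(1)
rho = np.sqrt(rng.uniform(0, 1, 500))
theta = rng.uniform(0, 2 * np.pi, 500)
modes = [(n, m) for n in range(7) for m in range(-n, n + 1, 2)]  # 6th order
A = np.column_stack([zernike(n, m, rho, theta) for n, m in modes])

surface = 0.8 * zernike(2, 2, rho, theta) + 0.1 * zernike(4, 0, rho, theta)
coeffs, *_ = np.linalg.lstsq(A, surface, rcond=None)
dominant = modes[int(np.argmax(np.abs(coeffs)))]
print(dominant)   # (2, 2): astigmatism dominates, as in the lens data
```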

  5. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection.

    PubMed

    Nguyen, Thanh; Bui, Vy; Lam, Van; Raub, Christopher B; Chang, Lin-Ching; Nehmetallah, George

    2017-06-26

    We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration phenomena. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion. However, it corrects spherical/elliptical aberration only and disregards the higher order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations. Ideally, automatic image segmentation techniques make real-time measurement possible. However, existing automatic unsupervised segmentation techniques have poor performance when applied to DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The deep learning CNN is implemented to perform automatic background region detection, which allows ZPF to compute the self-conjugated phase to compensate for most aberrations.

  6. Study of 3D printing method for GRIN micro-optics devices

    NASA Astrophysics Data System (ADS)

    Wang, P. J.; Yeh, J. A.; Hsu, W. Y.; Cheng, Y. C.; Lee, W.; Wu, N. H.; Wu, C. Y.

    2016-03-01

    Conventional optical elements are based on either refractive or reflective optics theory to fulfill design specifications expressed through optical performance data. In refractive optical lenses, the refractive index of the materials and the radius of curvature of the element surfaces determine the optical power and wavefront aberrations, so that optical performance can be further optimized iteratively. Although the gradient index (GRIN) phenomenon in optical materials has been studied for more than half a century, the optics theory for lens design with GRIN materials has yet to be comprehensively investigated before realistic GRIN lenses can be manufactured. In this paper, a 3D printing method for the manufacture of micro-optics devices with special features is studied based on methods reported in the literature. Due to the additive nature of the method, GRIN lenses in micro-optics devices seem readily achievable if a design methodology is available. First, ray-tracing formulae are derived for all possible structures in GRIN lenses. An optics simulation program is employed to characterize GRIN lenses, with performance data given by aberration coefficients of a Zernike polynomial expansion. Finally, a proposed structure for a 3D printing machine is described with a conceptual illustration.
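
    Ray tracing through a GRIN medium reduces, in the paraxial limit, to integrating d²r/dz² ≈ −g²r for a parabolic index profile n(r) = n0·(1 − (g·r)²/2), which predicts a sinusoidal ray path of pitch 2π/g. A minimal RK4 sketch with illustrative constants (not the paper's formulae):

```python
import numpy as np

g = 0.1                                    # gradient constant, 1/mm (invented)

def trace(r0, slope0, z_end, dz=1e-3):
    """Classic RK4 integration of the paraxial GRIN ray equation."""
    y = np.array([r0, slope0])             # state: (ray height, ray slope)
    f = lambda y: np.array([y[1], -g**2 * y[0]])
    for _ in range(int(z_end / dz)):
        k1 = f(y); k2 = f(y + 0.5*dz*k1)
        k3 = f(y + 0.5*dz*k2); k4 = f(y + dz*k3)
        y = y + (dz / 6) * (k1 + 2*k2 + 2*k3 + k4)
    return y

r, s = trace(r0=0.2, slope0=0.0, z_end=2*np.pi/g)   # one full pitch
print(round(float(r), 4), round(float(s), 4))       # ray returns to (0.2, 0.0)
```

    The full (non-paraxial) trace would integrate d/ds(n dr/ds) = ∇n along the arc length instead of the linearized equation.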

  7. Development of a 0.5m clear aperture Cassegrain type collimator telescope

    NASA Astrophysics Data System (ADS)

    Ekinci, Mustafa; Selimoǧlu, Özgür

    2016-07-01

    A collimator is an optical instrument used to evaluate the performance of high precision instruments, especially space-borne high resolution telescopes. The optical quality of the collimator telescope needs to be better than that of the instrument to be measured. This requirement makes the collimator telescope a very precise instrument with high quality mirrors and a stable structure that keeps it operational under specified conditions. In order to achieve the precision requirements and to ensure repeatability of the mounts for polishing and metrology, opto-mechanical principles are applied to the mirror mounts. The Finite Element Method is utilized to simulate gravity effects, integration errors and temperature variations. Finite element analysis results for the deformed optical surfaces are imported into the optical domain by using Zernike polynomials to evaluate the design against the specified WFE requirements. Both mirrors are aspheric and made from Zerodur for its stability and near zero CTE; M1 is further light-weighted. Optical quality measurements of the mirrors are achieved by using custom made CGHs on an interferometric test setup. The spider of the Cassegrain collimator telescope has a flexural adjustment mechanism driven by precise micrometers to overcome tilt errors originating from the finite stiffness of the structure and integration errors. The collimator telescope is assembled and alignment methods are proposed.

  8. Detection of Copy-Rotate-Move Forgery Using Zernike Moments

    NASA Astrophysics Data System (ADS)

    Ryu, Seung-Jin; Lee, Min-Jeong; Lee, Heung-Kyu

    As forgeries have become popular, the importance of forgery detection is much increased. Copy-move forgery, one of the most commonly used methods, copies a part of an image and pastes it into another part of the same image. In this paper, we propose a detection method for copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant against rotation, the proposed method can detect a forged region even when it has been rotated. Our scheme is also resilient to intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is appropriate for identifying regions forged by copy-rotate-move manipulation.
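
    The rotation invariance that the detector relies on can be verified directly: rotating an image block only multiplies the Zernike moment A_nm by a phase e^{−imφ}, leaving its magnitude unchanged. A minimal sketch with unnormalized moments on a square grid (an illustration, not the authors' implementation):

```python
import math
import numpy as np

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square image mapped onto the unit disk."""
    N = img.shape[0]
    c = np.linspace(-1, 1, N)
    x, y = np.meshgrid(c, c)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    R = sum((-1)**k * math.factorial(n - k)
            / (math.factorial(k) * math.factorial((n + abs(m)) // 2 - k)
               * math.factorial((n - abs(m)) // 2 - k)) * rho**(n - 2*k)
            for k in range((n - abs(m)) // 2 + 1))
    V = R * np.exp(-1j * m * theta)          # Zernike basis function V_nm
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask]) * (2.0 / N)**2

rng = np.random.default_rng(2)
block = rng.random((64, 64))                     # a candidate image region
m1 = abs(zernike_moment(block, 4, 2))
m2 = abs(zernike_moment(np.rot90(block), 4, 2))  # same region, rotated 90°
print(np.isclose(m1, m2))                        # magnitudes match
```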

  9. Combined invariants to similarity transformation and to blur using orthogonal Zernike moments

    PubMed Central

    Beijing, Chen; Shu, Huazhong; Zhang, Hui; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2011-01-01

    The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with a circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image, and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. A comparison of the proposed method with existing ones is also provided in terms of pattern recognition accuracy, template matching and robustness to noise. Experimental results show that the proposed descriptors perform better overall. PMID:20679028

  10. Range image segmentation using Zernike moment-based generalized edge detector

    NASA Technical Reports Server (NTRS)

    Ghosal, S.; Mehrotra, R.

    1992-01-01

    The authors proposed a novel Zernike moment-based generalized step edge detection method which can be used for segmenting range and intensity images. A generalized step edge detector is developed to identify different kinds of edges in range images. These edge maps are thinned and linked to provide the final segmentation. A generalized edge is modeled in terms of five parameters: orientation, two slopes, one step jump at the location of the edge, and the background gray level. Two complex and two real Zernike moment-based masks are required to determine all these parameters of the edge model. Theoretical noise analysis is performed to show that these operators are quite noise tolerant. Experimental results are included to demonstrate the edge-based segmentation technique.

  11. On computing the geoelastic response to a disk load

    NASA Astrophysics Data System (ADS)

    Bevis, M.; Melini, D.; Spada, G.

    2016-06-01

    We review the theory of the Earth's elastic and gravitational response to a surface disk load. The solutions for displacement of the surface and the geoid are developed using expansions of Legendre polynomials, their derivatives and the load Love numbers. We provide a MATLAB function called diskload that computes the solutions for both uncompensated and compensated disk loads. In order to numerically implement the Legendre expansions, it is necessary to choose a harmonic degree, nmax, at which to truncate the series used to construct the solutions. We present a rule of thumb (ROT) for choosing an appropriate value of nmax, describe the consequences of truncating the expansions prematurely and provide a means to judiciously violate the ROT when that becomes a practical necessity.
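
    The Legendre coefficients of a uniform disk (spherical-cap) load of half-angle α have the classical closed form c_n = (P_{n−1}(cos α) − P_{n+1}(cos α))/2, which the sketch below checks against direct quadrature. This illustrates only the load expansion that the solutions are built on; the paper's diskload function additionally folds in the load Love numbers:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

alpha = np.deg2rad(5.0)          # cap half-angle (illustrative value)
u = np.cos(alpha)

def c_closed(n):
    """Closed-form Legendre coefficient of the unit cap load."""
    if n == 0:
        return (1.0 - u) / 2.0
    P = lambda k, x: legval(x, [0]*k + [1])
    return (P(n - 1, u) - P(n + 1, u)) / 2.0

def c_quad(n):
    """Same coefficient by Gauss-Legendre quadrature over the cap."""
    t, w = leggauss(200)                       # nodes on [-1, 1]
    x = u + (t + 1) * (1 - u) / 2              # map onto [cos(alpha), 1]
    return (2*n + 1) / 2.0 * np.sum(w * legval(x, [0]*n + [1])) * (1 - u) / 2

for n in (0, 1, 10, 40):
    print(n, np.isclose(c_closed(n), c_quad(n), atol=1e-10))
```

    Truncating the sum over these coefficients at n_max is exactly the step the rule of thumb in the paper addresses: too small an n_max cannot resolve a cap of small half-angle.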

  12. Wavefront reconstruction from non-modulated pyramid wavefront sensor data using a singular value type expansion

    NASA Astrophysics Data System (ADS)

    Hutterer, Victoria; Ramlau, Ronny

    2018-03-01

    The new generation of extremely large telescopes includes adaptive optics systems to correct for atmospheric blurring. In this paper, we present a new method of wavefront reconstruction from non-modulated pyramid wavefront sensor data. The approach is based on a simplified sensor model represented as the finite Hilbert transform of the incoming phase. Due to the non-compactness of the finite Hilbert transform operator, the classical theory for singular systems is not applicable. Nevertheless, we can express the Moore-Penrose inverse as a singular value type expansion with weighted Chebyshev polynomials.

  13. Solution of Einstein's Equation for Deformation of a Magnetized Neutron Star

    NASA Astrophysics Data System (ADS)

    Rizaldy, R.; Sulaksono, A.

    2018-04-01

    We studied the effect of the very large and non-uniform magnetic field existing in a neutron star on the deformation of the star. In our analytical calculation we used a multipole expansion of the metric tensor and the energy-momentum tensor in Legendre polynomials up to quadrupole order. In this way we obtain the solutions of Einstein's equation with the correction factors due to the magnetic field taken into account. Our numerical calculation shows that the degree of deformation (ellipticity) increases as the mass decreases.

  14. Combinatorics of γ-structures.

    PubMed

    Han, Hillary S W; Li, Thomas J X; Reidys, Christian M

    2014-08-01

    In this article we study canonical γ-structures, a class of RNA pseudoknot structures that plays a key role in the context of polynomial time folding of RNA pseudoknot structures. A γ-structure is composed of specific building blocks that have topological genus less than or equal to γ, where composition means concatenation and nesting of such blocks. Our main result is the derivation of the generating function of γ-structures via symbolic enumeration using so-called irreducible shadows. We furthermore recursively compute the generating polynomials of irreducible shadows of genus ≤ γ. The γ-structures are constructed via γ-matchings. For 1 ≤ γ ≤ 10, we compute Puiseux expansions at the unique, dominant singularities, allowing us to derive simple asymptotic formulas for the number of γ-structures.

  15. Calculating Angle Lambda (λ) Using Zernike Tilt Measurements in Specular Reflection Corneal Topography

    PubMed Central

    Braaf, Boy; van de Watering, Thomas Christiaan; Spruijt, Kees; van der Heijde, Rob G.L.; Sicam, Victor Arni D.P.

    2010-01-01

    Purpose To develop a method to calculate the angle λ of the human eye using Zernike tilt measurements in specular reflection corneal topography. Methods The meaning of Zernike tilt in specular reflection corneal topography is demonstrated by measurements on translated artificial surfaces using the VU Topographer. The relationship derived from the translation experiments is used to determine the angle λ. Corneal surfaces are measured for a set of eight different fixation points, for which tilt angles ρ are obtained from the Zernike tilt coefficients. The angles ρ are used with respect to the fixation target angles to determine angle λ by fitting a geometrical model. This method is validated with Orbscan II's angle-κ measurements in 9 eyes. Results The translation experiments show that the Zernike tilt coefficient is directly related to an angle ρ, which describes a tilt orientation of the cornea and can therefore be used to derive a value for angle λ. A significant correlation exists between measured values for angle λ with the VU Topographer and the angle κ with the Orbscan II (r=0.95, P<0.001). A Bland-Altman plot indicates a mean difference of -0.52 degrees between the two instruments, but this is not statistically significant as indicated by a matched-pairs Wilcoxon signed-rank test (P≤0.1748). The mean precision for measuring angle λ using the VU topographer is 0.6±0.3 degrees. Conclusion The method described above to determine angle λ is sufficiently repeatable and performs similarly to the angle-κ measurements made with the Orbscan II.

  16. Particle identification with neural networks using a rotational invariant moment representation

    NASA Astrophysics Data System (ADS)

    Sinkus, Ralph; Voss, Thomas

    1997-02-01

    A feed-forward neural network is used to identify electromagnetic particles based upon their showering properties within a segmented calorimeter. A preprocessing procedure is applied to the spatial energy distribution of the particle shower in order to account for the varying geometry of the calorimeter. The novel feature is the expansion of the energy distribution in terms of moments of the so-called Zernike functions which are invariant under rotation. The distributions of moments exhibit very different scales, thus the multidimensional input distribution for the neural network is transformed via a principal component analysis and rescaled by its respective variances to ensure input values of the order of one. This increases the sensitivity of the network and thus results in better performance in identifying and separating electromagnetic from hadronic particles, especially at low energies.
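
    The preprocessing described, PCA followed by per-component variance rescaling, can be sketched in a few lines; the feature scales below are hypothetical, not the calorimeter data:

```python
import numpy as np

def pca_whiten(X):
    """Project features onto principal components and rescale each
    component to unit variance, so the network sees O(1) inputs."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)              # sample covariance
    evals, evecs = np.linalg.eigh(C)          # ascending eigenvalues
    Z = Xc @ evecs                            # decorrelated components
    return Z / np.sqrt(evals)                 # unit variance per component

rng = np.random.default_rng(3)
# four features on wildly different scales, as with raw moment values
X = rng.normal(size=(500, 4)) * np.array([100.0, 10.0, 1.0, 0.01])
W = pca_whiten(X)
print(np.round(np.std(W, axis=0, ddof=1), 3))   # all components ≈ 1
```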

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choun, Yoon Seok, E-mail: ychoun@gmail.com

    The Heun function generalizes all well-known special functions such as the Spheroidal Wave, Lamé, Mathieu, and hypergeometric ₂F₁, ₁F₁ and ₀F₁ functions. Heun functions are applicable to diverse areas such as the theory of black holes, lattice systems in statistical mechanics, solution of the Schrödinger equation of quantum mechanics, and addition of three quantum spins. In this paper I apply the three-term recurrence formula (Y.S. Choun, arXiv:1303.0806, 2013) to the power series expansion in closed forms of the Heun function (infinite series and polynomial), including all higher terms of the A_n's. Section 3 contains my analysis on applying the power series expansions of the Heun function to a recent paper (R.S. Maier, Math. Comp. 33 (2007) 811–843). Due to space restrictions, the final equations for the 192 Heun functions are not included in the paper, but feel free to contact me for the final solutions. Section 4 contains two additional examples using the power series expansions of the Heun function. This paper is the 3rd out of 10 in the series "Special functions and three-term recurrence formula (3TRF)"; see Section 5 for all the papers in the series. The previous paper in the series deals with the three-term recurrence formula (3TRF). The next paper in the series describes the integral forms of the Heun function and its asymptotic behaviors analytically. Highlights: •Power series expansion for the infinite series of the Heun function using the three-term recurrence formula. •Power series for the polynomial case of the Heun function, which makes the B_n term terminate. •Applicable to areas such as the Teukolsky equation in Kerr–Newman–de Sitter geometries.

  18. Evaluation of a global algorithm for wavefront reconstruction for Shack-Hartmann wave-front sensors and thick fundus reflectors.

    PubMed

    Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha

    2014-01-01

    Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover by two methods the known wavefront aberrations expected from a population of normally-aberrated human eyes and bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with standard deviations (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. 
The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, while the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
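    The conventional analysis described above (least-squares fitting of measured slopes with derivatives of Zernike polynomials) can be sketched with a deliberately reduced three-term basis of tip, tilt and defocus. The basis, lenslet grid and coefficients below are illustrative, not the paper's implementation:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= fac * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# Toy wavefront W(x, y) = a*x + b*y + c*(2*(x^2 + y^2) - 1): tip, tilt, defocus.
# Each lenslet yields two slope measurements:
#   dW/dx = a + 4*c*x,   dW/dy = b + 4*c*y
true = (0.3, -0.2, 0.15)
pts = [(x / 2.0, y / 2.0) for x in (-1, 0, 1) for y in (-1, 0, 1)]
rows, slopes = [], []
for x, y in pts:
    rows.append([1.0, 0.0, 4.0 * x]); slopes.append(true[0] + 4.0 * true[2] * x)
    rows.append([0.0, 1.0, 4.0 * y]); slopes.append(true[1] + 4.0 * true[2] * y)
# Normal equations of the least-squares slope fit: (A^T A) z = A^T s
AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Ats = [sum(r[i] * s for r, s in zip(rows, slopes)) for i in range(3)]
coeffs = solve3(AtA, Ats)   # recovers (a, b, c) exactly for noise-free slopes
```

    With noise-free slopes from a single reflecting layer this fit is exact; the paper's point is that a bi-layer reflection violates the single-wavefront assumption behind these slope equations.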

  19. Alternatives to the stochastic "noise vector" approach

    NASA Astrophysics Data System (ADS)

    de Forcrand, Philippe; Jäger, Benjamin

    2018-03-01

    Several important observables, like the quark condensate and the Taylor coefficients of the expansion of the QCD pressure with respect to the chemical potential, are based on the trace of the inverse Dirac operator and of its powers. Such traces are traditionally estimated with "noise vectors" sandwiching the operator. We explore alternative approaches based on polynomial approximations of the inverse Dirac operator.
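    One polynomial alternative to noise-vector estimation is to expand the inverse operator in a truncated series and take the trace of the polynomial terms directly. The sketch below uses a Neumann series on a toy 2×2 matrix purely to illustrate the idea; it is not the lattice-QCD setting of the paper:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_inverse_neumann(A, terms):
    """Approximate tr(A^-1) by the truncated Neumann series
    A^-1 = sum_k (I - A)^k, valid when the spectral radius of I - A is < 1."""
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    R = [[I[i][j] - A[i][j] for j in range(n)] for i in range(n)]  # R = I - A
    P = I               # current power R^k
    total = 0.0
    for _ in range(terms):
        total += sum(P[i][i] for i in range(n))   # accumulate tr(R^k)
        P = matmul(P, R)
    return total

A = [[0.8, 0.1], [0.0, 0.5]]    # triangular: eigenvalues 0.8 and 0.5
approx = trace_inverse_neumann(A, 60)
exact = 1 / 0.8 + 1 / 0.5       # tr(A^-1) = sum of inverse eigenvalues
```

    In practice more sophisticated polynomial families (e.g. Chebyshev approximations of 1/x over the operator's spectrum) converge much faster than this plain Neumann series.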

  20. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
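    The statistics of a response quantity follow directly from the PC coefficients once a surrogate has been fitted. A minimal one-dimensional sketch with probabilists' Hermite polynomials (an illustration only, not the sparse two-level scheme of the paper): the model Y = X² with X ~ N(0,1) has the exact expansion He_0 + He_2, so the fitted mean is c_0 = 1 and the variance is Σ c_k²·k! = 2.

```python
from math import factorial

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= fac * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def hermite_prob(n, x):
    """Probabilists' Hermite He_n via He_{k+1} = x*He_k - k*He_{k-1}."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

model = lambda x: x * x          # Y = X^2 with X ~ N(0,1)
pts = [-1.0, 0.0, 1.0]           # 3 collocation points for a 3-term basis
A = [[hermite_prob(deg, x) for deg in range(3)] for x in pts]
b = [model(x) for x in pts]
coeffs = solve3(A, b)            # expect [1, 0, 1]: x^2 = He_0 + He_2
mean = coeffs[0]                                            # PCE mean = c_0
var = sum(coeffs[k] ** 2 * factorial(k) for k in (1, 2))    # sum c_k^2 <He_k^2>
```

    The variance 2 matches Var(X²) for a standard normal X, which is the appeal of PCE post-processing: moments come for free from the coefficients.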

  1. A polynomial chaos expansion based molecular dynamics study for probabilistic strength analysis of nano-twinned copper

    NASA Astrophysics Data System (ADS)

    Mahata, Avik; Mukhopadhyay, Tanmoy; Adhikari, Sondipon

    2016-03-01

    Nano-twinned structures are mechanically stronger, more ductile and more stable than their non-twinned counterparts. We have investigated the effect of varying twin spacing and twin boundary width (TBW) on the yield strength of nano-twinned copper in a probabilistic framework. An efficient surrogate modelling approach based on polynomial chaos expansion has been proposed for the analysis. Effectively utilising 15 sets of expensive molecular dynamics simulations, thousands of outputs have been obtained corresponding to different sets of twin spacing and twin width using virtual experiments based on the surrogates. One of the major outcomes of this work is that there exists an optimal combination of twin boundary spacing and twin width up to which the strength increases; beyond that critical point the nanowires weaken. This study also reveals that the yield strength of nano-twinned copper is more sensitive to TBW than to twin spacing. Such robust inferences could be drawn only by applying the surrogate modelling approach, which makes it feasible to obtain results corresponding to 40 000 combinations of different twin boundary spacing and twin width in a computationally efficient framework.

  2. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  3. High-order regularization in lattice-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Mattila, Keijo K.; Philippi, Paulo C.; Hegele, Luiz A.

    2017-04-01

    A lattice-Boltzmann equation (LBE) is the discrete counterpart of a continuous kinetic model. It can be derived using a Hermite polynomial expansion for the velocity distribution function. Since LBEs are characterized by discrete, finite representations of the microscopic velocity space, the expansion must be truncated and the appropriate order of truncation depends on the hydrodynamic problem under investigation. Here we consider a particular truncation where the non-equilibrium distribution is expanded on a par with the equilibrium distribution, except that the diffusive parts of high-order non-equilibrium moments are filtered, i.e., only the corresponding advective parts are retained after a given rank. The decomposition of moments into diffusive and advective parts is based directly on analytical relations between Hermite polynomial tensors. The resulting, refined regularization procedure leads to recurrence relations where high-order non-equilibrium moments are expressed in terms of low-order ones. The procedure is appealing in the sense that stability can be enhanced without local variation of transport parameters, like viscosity, or without tuning the simulation parameters based on embedded optimization steps. The improved stability properties are here demonstrated using the perturbed double periodic shear layer flow and the Sod shock tube problem as benchmark cases.

  4. Analytic Evolution of Singular Distribution Amplitudes in QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandogan Kunkel, Asli

    2014-08-01

    Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.

  5. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, D. N.

    1973-01-01

    Partial results are presented of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront. The coarse range sensor and fine range sensors are used in the experimentation. The design of a wavefront error simulator is presented along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.

  6. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen‐Loeve decomposition‐based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen‐Loeve decomposition, polynomial expansion, and perturbation methods. The random log‐transformed hydraulic conductivity field (lnKS) is first expanded into a series in terms of orthogonal Gaussian standard random variables with their coefficients obtained as the eigenvalues and eigenfunctions of the covariance function of lnKS. Next, head h is decomposed as a perturbation expansion series Σh(m), where h(m) represents the mth‐order head term with respect to the standard deviation of lnKS. Then h(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients hi1,i2,...,im(m) are deterministic and solved sequentially from low to high expansion orders using MODFLOW‐2000. Finally, the statistics of head and flux are computed using simple algebraic operations on hi1,i2,...,im(m). A series of numerical test results in 2‐D and 3‐D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique.

  7. More Zernike modes' open-loop measurement in the sub-aperture of the Shack-Hartmann wavefront sensor.

    PubMed

    Zhu, Zhaoyi; Mu, Quanquan; Li, Dayu; Yang, Chengliang; Cao, Zhaoliang; Hu, Lifa; Xuan, Li

    2016-10-17

    The centroid-based Shack-Hartmann wavefront sensor (SHWFS) treats the sampled wavefronts in the sub-apertures as planes, and the slopes of the sub-wavefronts are used to reconstruct the whole pupil wavefront. The problem is that the centroid method may fail to sense the high-order modes for strong turbulence, decreasing the precision of the whole pupil wavefront reconstruction. To solve this problem, we propose a sub-wavefront estimation method for SHWFS based on the focal plane sensing technique, by which more Zernike modes than the two slopes can be sensed in each sub-aperture. In this paper, the effects of the relevant parameters, such as the spot size, the phase offset and its set amplitude, and the number of pixels in each sub-aperture, on the sub-wavefront estimation method are analyzed, and these parameters are optimized to achieve high efficiency. After the optimization, open-loop measurement is realized. For the sub-wavefront sensing, we achieve a large linearity range of 3.0 rad RMS for Zernike modes Z2 and Z3, and 2.0 rad RMS for Zernike modes Z4 to Z6 when the pixel number does not exceed 8 × 8 in each sub-aperture. The whole pupil wavefront reconstruction with the modified SHWFS is realized to analyze the improvements brought by the optimized sub-wavefront estimation method. Sixty-five Zernike modes can be reconstructed with a modified SHWFS containing only 7 × 7 sub-apertures, whereas the centroid method could reconstruct only 35 modes, and the mean RMS errors of the residual phases are less than 0.2 rad², lower than the 0.35 rad² of the centroid method.

  8. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
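    The mixed kernel idea is a convex combination of a global polynomial kernel and a local Gaussian RBF kernel. A hedged sketch follows: a plain (non-orthogonal) polynomial kernel stands in for the orthogonal-polynomials kernel of the paper, and the weight `w`, `degree` and `gamma` are illustrative choices.

```python
from math import exp

def mixed_kernel(x, y, w=0.5, degree=2, gamma=1.0):
    """Convex mix of a polynomial kernel (captures global trends) and a
    Gaussian RBF kernel (captures local behaviour), as in mixed-kernel SVR."""
    dot = sum(a * b for a, b in zip(x, y))
    poly = (dot + 1.0) ** degree            # inhomogeneous polynomial kernel
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    rbf = exp(-gamma * sq_dist)             # Gaussian radial basis kernel
    return w * poly + (1.0 - w) * rbf
```

    An SVR trained with such a kernel inherits both behaviours; in the paper the polynomial part is built from orthogonal polynomials specifically so that Sobol indices can be read off the fitted meta-model coefficients.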

  9. Pattern placement errors: application of in-situ interferometer-determined Zernike coefficients in determining printed image deviations

    NASA Astrophysics Data System (ADS)

    Roberts, William R.; Gould, Christopher J.; Smith, Adlai H.; Rebitz, Ken

    2000-08-01

    Several ideas have recently been presented which attempt to measure and predict lens aberrations for new low k1 imaging systems. Abbreviated sets of Zernike coefficients have been produced and used to predict Across Chip Linewidth Variation. Empirical use of the wavefront aberrations can now be used in commercially available lithography simulators to predict pattern distortion and placement errors. Measurement and Determination of Zernike coefficients has been a significant effort of many. However the use of this data has generally been limited to matching lenses or picking best fit lense pairs. We will use wavefront aberration data collected using the Litel InspecStep in-situ Interferometer as input data for Prolith/3D to model and predict pattern placement errors and intrafield overlay variation. Experiment data will be collected and compared to the simulated predictions.

  10. A weighted ℓ₁-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

    This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ₁-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ₁-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.

  11. Polynomial Chaos decomposition applied to stochastic dosimetry: study of the influence of the magnetic field orientation on the pregnant woman exposure at 50 Hz.

    PubMed

    Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P

    2014-01-01

    Polynomial Chaos (PC) is a decomposition method used to build a meta-model, which approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to changes in the orientation of the B-field vector with respect to the human body. In detail, the analysis of pregnant woman exposure at 7 months of gestational age is carried out to build a statistical meta-model of the induced electric field for each fetal tissue and for the fetal whole body by means of the PC expansion, as a function of the B-field orientation, considering uniform exposure at 50 Hz.

  12. pyZELDA: Python code for Zernike wavefront sensors

    NASA Astrophysics Data System (ADS)

    Vigan, A.; N'Diaye, M.

    2018-06-01

    pyZELDA analyzes data from Zernike wavefront sensors dedicated to high-contrast imaging applications. This modular software was originally designed to analyze data from the ZELDA wavefront sensor prototype installed in VLT/SPHERE; simple configuration files allow it to be extended to support several other instruments and testbeds. pyZELDA also includes simple simulation tools to measure the theoretical sensitivity of a sensor and to compare it to other sensors.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashkooli, Ali Ghorbani; Foreman, Evan; Farhad, Siamak

    In this study, synchrotron X-ray computed tomography has been utilized using two different imaging modes, absorption and Zernike phase contrast, to reconstruct the real three-dimensional (3D) morphology of nanostructured Li₄Ti₅O₁₂ (LTO) electrodes. The morphology of the high atomic number active material has been obtained using the absorption contrast mode, whereas the percolated solid network composed of active material and carbon-doped polymer binder domain (CBD) has been obtained using the Zernike phase contrast mode. The 3D absorption contrast image revealed that some LTO nano-particles tend to agglomerate and form secondary micro-sized particles with varying degrees of sphericity. The tortuosity of the electrode’s pore and solid phases were found to have directional dependence, different from Bruggeman’s tortuosity commonly used in macro-homogeneous models. The electrode’s heterogeneous structure was investigated by developing a numerical model to simulate the galvanostatic discharge process using the Zernike phase contrast mode. The inclusion of CBD in the Zernike phase contrast results in an integrated percolated network of active material and CBD that is highly suited for continuum modeling. As a result, the simulation results highlight the importance of using the real 3D geometry, since the spatial distribution of physical and electrochemical properties has a strong non-uniformity due to microstructural heterogeneities.

  14. HOMFLY for twist knots and exclusive Racah matrices in representation [333]

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2018-03-01

    The next step is reported in the program of Racah matrix extraction from the differential expansion of HOMFLY polynomials for twist knots: from the double-column rectangular representations R = [rr] to a triple-column and triple-hook R = [333]. The main new phenomenon is the deviation of the particular coefficient f[332][21] from the corresponding skew dimension, which opens a way to further generalizations.

  15. New template family for the detection of gravitational waves from comparable-mass black hole binaries

    NASA Astrophysics Data System (ADS)

    Porter, Edward K.

    2007-11-01

    In order to improve the phasing of the comparable-mass waveform as we approach the last stable orbit for a system, various resummation methods have been used to improve the standard post-Newtonian waveforms. In this work we present a new family of templates for the detection of gravitational waves from the inspiral of two comparable-mass black hole binaries. These new adiabatic templates are based on reexpressing the derivative of the binding energy and the gravitational wave flux functions in terms of shifted Chebyshev polynomials. The Chebyshev polynomials are a useful tool in numerical methods as they display the fastest convergence of any of the orthogonal polynomials. In this case they are also particularly useful as they eliminate one of the features that plagues the post-Newtonian expansion. The Chebyshev binding energy now has information at all post-Newtonian orders, compared to the post-Newtonian templates which only have information at full integer orders. In this work, we compare both the post-Newtonian and Chebyshev templates against a fiducially exact waveform. This waveform is constructed from a hybrid method of using the test-mass results combined with the mass dependent parts of the post-Newtonian expansions for the binding energy and flux functions. Our results show that the Chebyshev templates achieve extremely high fitting factors at all post-Newtonian orders and provide excellent parameter extraction. We also show that this new template family has a faster Cauchy convergence, gives a better prediction of the position of the last stable orbit and in general recovers higher Signal-to-Noise ratios than the post-Newtonian templates.
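    Shifted Chebyshev polynomials of the kind used for these templates satisfy T*_n(x) = T_n(2x - 1) on [0, 1], with T_n generated by the standard three-term recurrence. A minimal sketch (illustrative only, not the template construction itself):

```python
def chebyshev_T(n, x):
    """Chebyshev polynomial of the first kind, T_{k+1} = 2*x*T_k - T_{k-1}."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(1, n):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1

def chebyshev_T_shifted(n, x):
    """Shifted Chebyshev polynomial on [0, 1]: T*_n(x) = T_n(2x - 1)."""
    return chebyshev_T(n, 2.0 * x - 1.0)
```

    For example, chebyshev_T(3, x) reproduces 4x³ - 3x. The near-minimax property of these polynomials is what gives the Chebyshev-resummed binding energy and flux the fast (Cauchy) convergence cited in the abstract.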

  16. Hermite Polynomials and the Inverse Problem for Collisionless Equilibria

    NASA Astrophysics Data System (ADS)

    Allanson, O.; Neukirch, T.; Troscheit, S.; Wilson, F.

    2017-12-01

    It is long established that Hermite polynomial expansions in either velocity or momentum space can elegantly encode the non-Maxwellian velocity-space structure of a collisionless plasma distribution function (DF). In particular, Hermite polynomials in the canonical momenta naturally arise in the consideration of the 'inverse problem in collisionless equilibria' (IPCE): "for a given macroscopic/fluid equilibrium, what are the self-consistent Vlasov-Maxwell equilibrium DFs?". This question is of particular interest for the equilibrium and stability properties of a given macroscopic configuration, e.g. a current sheet. It can be relatively straightforward to construct a formal solution to IPCE by a Hermite expansion method, but several important questions remain regarding the use of this method. We present recent work that considers the necessary conditions of non-negativity, convergence, and the existence of all moments of an equilibrium DF solution found for IPCE. We also establish meaningful analogies between the equations that link the microscopic and macroscopic descriptions of the Vlasov-Maxwell equilibrium, and those that solve the initial value problem for the heat equation. In the language of the heat equation, IPCE poses the pressure tensor as the 'present' heat distribution over an infinite domain, and the non-Maxwellian features of the DF as the 'past' distribution. We find sufficient conditions for the convergence of the Hermite series representation of the DF, and prove that the non-negativity of the DF can be dependent on the magnetisation of the plasma. For DFs that decay at least as quickly as exp(-v^2/4), we show non-negativity is guaranteed for at least a finite range of magnetisation values, as parameterised by the ratio of the Larmor radius to the gradient length scale. References: 1. O. Allanson, T. Neukirch, S. Troscheit & F. Wilson, From one-dimensional fields to Vlasov equilibria: theory and application of Hermite polynomials, Journal of Plasma Physics, 82, 905820306, 2016. 2. O. Allanson, S. Troscheit & T. Neukirch, The inverse problem for collisionless plasma equilibria (invited paper for IMA Journal of Applied Mathematics, under review).

  17. 3D scene reconstruction based on multi-view distributed video coding in the Zernike domain for mobile applications

    NASA Astrophysics Data System (ADS)

    Palma, V.; Carli, M.; Neri, A.

    2011-02-01

    In this paper a Multi-view Distributed Video Coding scheme for mobile applications is presented. Specifically, a new fusion technique between temporal and spatial side information in the Zernike moments domain is proposed. Distributed video coding introduces a flexible architecture that enables the design of very low-complexity video encoders compared to their traditional counterparts. The main goal of our work is to generate at the decoder the side information that optimally blends temporal and inter-view data. Multi-view distributed coding performance strongly depends on the quality of the side information built at the decoder. To this end, a spatial view compensation/prediction in the Zernike moments domain is applied to improve its quality. Spatial and temporal motion activity have been fused together to obtain the overall side information. The proposed method has been evaluated by rate-distortion performance for different inter-view and temporal estimation quality conditions.

  18. Zernike phase-contrast electron cryotomography applied to marine cyanobacteria infected with cyanophages.

    PubMed

    Dai, Wei; Fu, Caroline; Khant, Htet A; Ludtke, Steven J; Schmid, Michael F; Chiu, Wah

    2014-11-01

    Advances in electron cryotomography have provided new opportunities to visualize the internal 3D structures of a bacterium. An electron microscope equipped with Zernike phase-contrast optics produces images with markedly increased contrast compared with images obtained by conventional electron microscopy. Here we describe a protocol to apply Zernike phase plate technology for acquiring electron tomographic tilt series of cyanophage-infected cyanobacterial cells embedded in ice, without staining or chemical fixation. We detail the procedures for aligning and assessing phase plates for data collection, and methods for obtaining 3D structures of cyanophage assembly intermediates in the host by subtomogram alignment, classification and averaging. Acquiring three or four tomographic tilt series takes ∼12 h on a JEM2200FS electron microscope. We expect this time requirement to decrease substantially as the technique matures. The time required for annotation and subtomogram averaging varies widely depending on the project goals and data volume.

  19. Non-null annular subaperture stitching interferometry for aspheric test

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A non-null annular subaperture stitching interferometry (NASSI), combining the subaperture stitching idea and the non-null test method, is proposed for steep asphere testing. Compared with standard annular subaperture stitching interferometry (ASSI), a partial null lens (PNL) is employed as an alternative to the transmission sphere, to generate different aspherical wavefronts as the references. The required subaperture number is thus greatly reduced because the aspherical wavefronts better match the local slope of aspheric surfaces. Instead of various mathematical stitching algorithms, a simultaneous reverse optimizing reconstruction (SROR) method based on system modeling and ray tracing is proposed for full-aperture figure error reconstruction. All the subaperture measurements are simulated simultaneously with a multi-configuration model in a ray-tracing program, including the interferometric system modeling and subaperture misalignment modeling. With the multi-configuration model, the full-aperture figure error is extracted in the form of Zernike polynomials from subaperture wavefront data by the SROR method. This method concurrently accomplishes subaperture retrace error and misalignment correction, requiring neither complex mathematical algorithms nor subaperture overlaps. A numerical simulation compares the performance of the NASSI and standard ASSI, demonstrating the high accuracy of the NASSI in testing steep aspheres. Experimental results of the NASSI are shown to be in good agreement with those of a Zygo® Verifire™ Asphere interferometer.

  20. Horizontal Line-of-Sight Turbulence Over Near-Ground Paths and Implications for Adaptive Optics Corrections in Laser Communications.

    PubMed

    Levine, B M; Martinsen, E A; Wirth, A; Jankevics, A; Toledo-Quinones, M; Landers, F; Bruno, T L

    1998-07-20

    Atmospheric turbulence over long horizontal paths perturbs phase and can also cause severe intensity scintillation in the pupil of an optical communications receiver, which limits the data rate over which intensity-based modulation schemes can operate. The feasibility of using low-order adaptive optics by applying phase-only corrections over horizontal propagation paths is investigated. A Shack-Hartmann wave-front sensor was built and data were gathered on paths 1 m above ground and between a 1- and 2.5-km range. Both intensity fluctuations and optical path fluctuation statistics were gathered within a single frame, and the wave-front reconstructor was modified to allow for scintillated data. The temporal power spectral density for various Zernike polynomial modes was used to determine the effects of the expected corrections by adaptive optics. The slopes of the inertial subrange of turbulence were found to be less than predicted by Kolmogorov theory with an infinite outer scale, and the distribution of variance explained by increasing order was also found to be different. Statistical analysis of these data in the 1-km range indicates that at communications wavelengths of 1.3 μm, a significant improvement in transmitted beam quality could be expected most of the time, to a performance of 10% Strehl ratio or better.

  1. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improved optical penetration of infrared light compared with linear excitation, owing to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a spatial light modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible adaptive optical correction. Adaptive optical correction, performed by means of Zernike polynomials, is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024-pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.

  2. Optimization of lightweight structure and supporting bipod flexure for a space mirror.

    PubMed

    Chen, Yi-Cheng; Huang, Bo-Kai; You, Zhen-Ting; Chan, Chia-Yen; Huang, Ting-Ming

    2016-12-20

    This article presents an optimization process for integrated optomechanical design. The proposed optimization process for integrated optomechanical design comprises computer-aided drafting, finite element analysis (FEA), optomechanical transfer codes, and an optimization solver. The FEA was conducted to determine mirror surface deformation; then, deformed surface nodal data were transferred into Zernike polynomials through MATLAB optomechanical transfer codes to calculate the resulting optical path difference (OPD) and optical aberrations. To achieve an optimum design, the optimization iterations of the FEA, optomechanical transfer codes, and optimization solver were automatically connected through a self-developed Tcl script. Two examples of optimization design were illustrated in this research, namely, an optimum lightweight design of a Zerodur primary mirror with an outer diameter of 566 mm that is used in a spaceborne telescope and an optimum bipod flexure design that supports the optimum lightweight primary mirror. Finally, optimum designs were successfully accomplished in both examples, achieving a minimum peak-to-valley (PV) value for the OPD of the deformed optical surface. The simulated optimization results showed that (1) the lightweight ratio of the primary mirror increased from 56% to 66%; and (2) the PV value of the mirror supported by optimum bipod flexures in the horizontal position effectively decreased from 228 to 61 nm.
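    The transfer from deformed-surface nodal data to Zernike coefficients described above is, at its core, a least-squares fit. Below is a minimal sketch of that step, a stand-in for the authors' MATLAB transfer codes, using synthetic nodal data and only the first few Zernike terms:

```python
import numpy as np

def zernike_basis(rho, theta):
    """First few Zernike terms (piston, tilts, defocus, astigmatism)."""
    return np.column_stack([
        np.ones_like(rho),               # piston
        rho * np.cos(theta),             # x tilt
        rho * np.sin(theta),             # y tilt
        2 * rho**2 - 1,                  # defocus
        rho**2 * np.cos(2 * theta),      # astigmatism 0/90
        rho**2 * np.sin(2 * theta),      # astigmatism 45
    ])

def fit_zernike(x, y, dz):
    """Least-squares Zernike coefficients for sag data dz at nodes (x, y)."""
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    A = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(A, dz, rcond=None)
    return coeffs, dz - A @ coeffs       # coefficients and residual OPD

# synthetic "FEA" nodes on the unit disk carrying a pure 50 nm defocus term
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 500))
mask = x**2 + y**2 <= 1.0
x, y = x[mask], y[mask]
dz = 50e-9 * (2 * (x**2 + y**2) - 1)
c, resid = fit_zernike(x, y, dz)
```

On real FEA output, the residual after removing the fitted terms is what feeds the OPD and PV evaluation.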

  3. Pepsi-SAXS: an adaptive method for rapid and accurate computation of small-angle X-ray scattering profiles.

    PubMed

    Grudinin, Sergei; Garkavenko, Maria; Kazennov, Andrei

    2017-05-01

    A new method called Pepsi-SAXS is presented that calculates small-angle X-ray scattering profiles from atomistic models. The method is based on the multipole expansion scheme and is significantly faster compared with other tested methods. In particular, using the Nyquist-Shannon-Kotelnikov sampling theorem, the multipole expansion order is adapted to the size of the model and the resolution of the experimental data. It is argued that by using the adaptive expansion order, this method has the same quadratic dependence on the number of atoms in the model as the Debye-based approach, but with a much smaller prefactor in the computational complexity. The method has been systematically validated on a large set of over 50 models collected from the BioIsis and SASBDB databases. Using a laptop, it was demonstrated that Pepsi-SAXS is about seven, 29 and 36 times faster compared with CRYSOL, FoXS and the three-dimensional Zernike method in SAStbx, respectively, when tested on data from the BioIsis database, and is about five, 21 and 25 times faster compared with CRYSOL, FoXS and SAStbx, respectively, when tested on data from SASBDB. On average, Pepsi-SAXS demonstrates comparable accuracy in terms of χ² to CRYSOL and FoXS when tested on BioIsis and SASBDB profiles. Together with a small allowed variation of adjustable parameters, this demonstrates the effectiveness of the method. Pepsi-SAXS is available at http://team.inria.fr/nano-d/software/pepsi-saxs.
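    The Debye-based approach that Pepsi-SAXS is benchmarked against sums over all atom pairs, which is where the quadratic dependence on the number of atoms comes from. A toy sketch with unit form factors and hypothetical coordinates (not the Pepsi-SAXS multipole scheme itself):

```python
import numpy as np

def debye_profile(coords, q_values):
    """I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij) with unit form factors.

    O(N^2) in the number of atoms -- the same asymptotic cost Pepsi-SAXS
    matches, but with a much smaller prefactor via the multipole expansion.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    intensity = []
    for q in q_values:
        qr = q * d
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(qr/pi) = sin(qr)/qr,
        # with the qr -> 0 limit correctly equal to 1
        intensity.append(np.sinc(qr / np.pi).sum())
    return np.array(intensity)

coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.8]])  # two "atoms", 3.8 Å apart
q = np.linspace(0.0, 0.5, 6)
I = debye_profile(coords, q)
```

At q = 0 the profile equals N², here 4, and decays as the phase factors decorrelate.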

  4. Transfer matrix computation of critical polynomials for two-dimensional Potts models

    DOE PAGES

    Jacobsen, Jesper Lykke; Scullard, Christian R.

    2013-02-04

    In our previous work, we showed that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial P_B(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and the manner in which B is tiled to construct the lattice. The real roots v = e^K − 1 of P_B(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, P_B(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of P_B(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8^2), kagome, and (3, 12^2) lattices for bases of up to respectively 96, 162, and 243 edges, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures v_c obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain v_c(4, 8^2) = 3.742 489 (4), v_c(kagome) = 1.876 459 7 (2), and v_c(3, 12^2) = 5.033 078 49 (4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.
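    For the smallest basis on the square lattice, the critical polynomial is known to reduce to P_B(q, v) = v² − q, whose positive root recovers the exact critical point v_c = √q. A minimal numerical check of that special case (not the transfer-matrix computation of the paper):

```python
import numpy as np

def v_critical(q):
    """Positive real root of the square-lattice critical polynomial
    P(q, v) = v^2 - q, i.e. the exact ferromagnetic critical point."""
    roots = np.roots([1.0, 0.0, -q])     # coefficients of v^2 - q
    return max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)

# q = 2 is the Ising model: v_c = e^K - 1 = sqrt(2), K_c = ln(1 + sqrt(2))
K_c = np.log(1.0 + v_critical(2.0))
```

The recovered K_c is Onsager's exact Ising critical coupling, ln(1 + √2) ≈ 0.8814.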

  5. Causal properties of nonlinear gravitational waves in modified gravity

    NASA Astrophysics Data System (ADS)

    Suvorov, Arthur George; Melatos, Andrew

    2017-09-01

    Some exact, nonlinear, vacuum gravitational wave solutions are derived for certain polynomial f (R ) gravities. We show that the boundaries of the gravitational domain of dependence, associated with events in polynomial f (R ) gravity, are not null as they are in general relativity. The implication is that electromagnetic and gravitational causality separate into distinct notions in modified gravity, which may have observable astrophysical consequences. The linear theory predicts that tachyonic instabilities occur when the quadratic coefficient a_2 of the Taylor expansion of f (R ) is negative, while the exact, nonlinear, cylindrical wave solutions presented here can be superluminal for all values of a_2. Anisotropic solutions are found, whose wave fronts trace out time- or spacelike hypersurfaces with complicated geometric properties. We show that the solutions exist in f (R ) theories that are consistent with Solar System and pulsar-timing experiments.

  6. A class of reduced-order models in the theory of waves and stability.

    PubMed

    Chapman, C J; Sorokin, S V

    2016-02-01

    This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.

  7. Accurate spectral solutions for the parabolic and elliptic partial differential equations by the ultraspherical tau method

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.

    2005-09-01

    We present double ultraspherical spectral methods that allow the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra, and are solved by the step-by-step method. Numerical applications of how to use these methods are described. Numerical results obtained compare favorably with those of the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than those based on ultraspherical polynomials.

  8. Optical analysis of thermal induced structural distortions

    NASA Technical Reports Server (NTRS)

    Weinswig, Shepard; Hookman, Robert A.

    1991-01-01

    The techniques used for the analysis of thermally induced structural distortions of optical components such as scanning mirrors and telescope optics are outlined. Particular attention is given to the methodology used in the thermal and structural analysis of the GOES scan mirror, the optical analysis using Zernike coefficients, and the optical system performance evaluation. It is pointed out that the use of Zernike coefficients allows an accurate, effective, and simple linkage between thermal/mechanical effects and the optical design.

  9. Astigmatism and defocus wavefront correction via Zernike modes produced with fluidic lenses

    PubMed Central

    Marks, Randall; Mathine, David L.; Schwiegerling, Jim; Peyman, Gholam; Peyghambarian, Nasser

    2010-01-01

    Fluidic lenses have been developed for ophthalmic applications with continuously varying optical powers for second order Zernike modes. Continuously varying corrections for both myopic and hyperopic defocus have been demonstrated over a range of three diopters using a fluidic lens with a circular retaining aperture. Likewise, a six diopter range of astigmatism has been continuously corrected using fluidic lenses with rectangular apertures. Imaging results have been characterized using a model eye. PMID:19571912

  10. Morphological and Electrochemical Characterization of Nanostructured Li4Ti5O12 Electrodes Using Multiple Imaging Mode Synchrotron X-ray Computed Tomography

    DOE PAGES

    Kashkooli, Ali Ghorbani; Foreman, Evan; Farhad, Siamak; ...

    2017-09-21

    In this study, synchrotron X-ray computed tomography has been utilized in two different imaging modes, absorption and Zernike phase contrast, to reconstruct the real three-dimensional (3D) morphology of nanostructured Li4Ti5O12 (LTO) electrodes. The morphology of the high-atomic-number active material has been obtained using the absorption contrast mode, whereas the percolated solid network composed of active material and carbon-doped polymer binder domain (CBD) has been obtained using the Zernike phase contrast mode. The 3D absorption contrast image revealed that some LTO nanoparticles tend to agglomerate and form secondary micro-sized particles with varying degrees of sphericity. The tortuosity of the electrode's pore and solid phases was found to have directional dependence, different from Bruggeman's tortuosity commonly used in macro-homogeneous models. The electrode's heterogeneous structure was investigated by developing a numerical model to simulate the galvanostatic discharge process using the Zernike phase contrast data. The inclusion of CBD in the Zernike phase contrast mode results in an integrated percolated network of active material and CBD that is highly suited for continuum modeling. As a result, the simulation results highlight the importance of using the real 3D geometry, since the spatial distribution of physical and electrochemical properties has a strong non-uniformity due to microstructural heterogeneities.

  11. Light field creating and imaging with different order intensity derivatives

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Jiang, Huan

    2014-10-01

    Microscopic image restoration and reconstruction is a challenging topic in image processing and computer vision, with wide applications in life science, biology, medicine, etc. A microscopic light field creation and three-dimensional (3D) reconstruction method is proposed for transparent or partially transparent microscopic samples, based on the Taylor expansion theorem and polynomial fitting. First, the image stack of the specimen is divided into several groups in an overlapping or non-overlapping way along the optical axis, and the first image of every group is regarded as the reference image. Then different-order intensity derivatives are calculated using all the images of every group and a polynomial fitting method, under the assumption that the specimen structure captured by the image stack varies smoothly and nearly linearly over a small range along the optical axis. Subsequently, new images located at any position a distance Δz from the reference image along the optical axis can be generated by means of the Taylor expansion theorem and the calculated different-order intensity derivatives. Finally, the microscopic specimen can be reconstructed in 3D form using deconvolution technology and all the images, including both the observed and the generated images. The experimental results show the effectiveness and feasibility of our method.
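    The derivative-fitting step can be sketched in a toy version: fit a polynomial in z at each pixel of the stack, then evaluate it a distance Δz from the reference plane, which is the Taylor-style extrapolation described above. This is a simplified stand-in for the authors' pipeline, with synthetic data whose intensity varies quadratically along the optical axis:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def extrapolate_stack(stack, z, dz, order=2):
    """Fit a degree-`order` polynomial in z at each pixel of the image
    stack, then evaluate it at z[0] + dz (Taylor-style extrapolation)."""
    n_img, h, w = stack.shape
    flat = stack.reshape(n_img, -1)
    coeffs = P.polyfit(z, flat, deg=order)   # per-pixel polynomial coefficients
    return P.polyval(z[0] + dz, coeffs).reshape(h, w)

# toy stack: intensity varies quadratically along the optical axis
z = np.linspace(0.0, 1.0, 5)
base = np.outer(np.arange(4.0), np.ones(4))
stack = np.array([base + 3.0 * zi + zi**2 for zi in z])
pred = extrapolate_stack(stack, z, dz=0.25)
```

Because the synthetic variation is exactly quadratic, the degree-2 fit reproduces the intermediate plane exactly; on real data the fit order trades noise robustness against the validity range of the expansion.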

  12. Precision measurement of the η → π+π-π0 Dalitz plot distribution with the KLOE detector

    NASA Astrophysics Data System (ADS)

    Anastasi, A.; Babusci, D.; Bencivenni, G.; Berlowski, M.; Bloise, C.; Bossi, F.; Branchini, P.; Budano, A.; Caldeira Balkeståhl, L.; Cao, B.; Ceradini, F.; Ciambrone, P.; Curciarello, F.; Czerwinski, E.; D'Agostini, G.; Danè, E.; De Leo, V.; De Lucia, E.; De Santis, A.; De Simone, P.; Di Cicco, A.; Di Domenico, A.; Di Salvo, R.; Domenici, D.; D'Uffizi, A.; Fantini, A.; Felici, G.; Fiore, S.; Gajos, A.; Gauzzi, P.; Giardina, G.; Giovannella, S.; Graziani, E.; Happacher, F.; Heijkenskjöld, L.; Ikegami Andersson, W.; Johansson, T.; Kaminska, D.; Krzemien, W.; Kupsc, A.; Loffredo, S.; Mandaglio, G.; Martini, M.; Mascolo, M.; Messi, R.; Miscetti, S.; Morello, G.; Moricciani, D.; Moskal, P.; Papenbrock, M.; Passeri, A.; Patera, V.; Perez del Rio, E.; Ranieri, A.; Santangelo, P.; Sarra, I.; Schioppa, M.; Silarski, M.; Sirghi, F.; Tortora, L.; Venanzoni, G.; Wislicki, W.; Wolke, M.

    2016-05-01

    Using 1.6 fb^-1 of e+e- → φ → ηγ data collected with the KLOE detector at DAΦNE, the Dalitz plot distribution for the η → π+π-π0 decay is studied with the world's largest sample of ~4.7 × 10^6 events. The Dalitz plot density is parametrized as a polynomial expansion up to cubic terms in the normalized dimensionless variables X and Y. The experiment is sensitive to all charge-conjugation-conserving terms of the expansion, including a gX^2Y term. The statistical uncertainty of all parameters is improved by a factor of two with respect to earlier measurements.
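    The cubic Dalitz-plot parametrization referred to above can be sketched directly: charge-conjugation conservation forbids the odd powers of X, leaving the terms below. The coefficient values here are illustrative placeholders, not the measured ones:

```python
def dalitz_density(X, Y, a, b, d, f, g):
    """C-conserving cubic expansion of the eta -> 3pi Dalitz density:
    |A|^2 ∝ 1 + aY + bY^2 + dX^2 + fY^3 + gX^2*Y.
    Odd powers of X are forbidden by charge-conjugation symmetry."""
    return 1.0 + a * Y + b * Y**2 + d * X**2 + f * Y**3 + g * X**2 * Y

# illustrative (not measured) coefficients at a sample Dalitz point
rho = dalitz_density(0.3, -0.2, a=-1.1, b=0.15, d=0.08, f=0.14, g=-0.04)
```

Dropping odd X powers makes the density symmetric under X → −X, which is the property the fit exploits.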

  13. Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics

    NASA Astrophysics Data System (ADS)

    Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane

    2014-10-01

    This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
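    The core idea behind the PEACH estimators, replacing a matrix inverse by a low-degree matrix polynomial applied through matrix-vector products, can be sketched with a truncated Neumann series. This is a simplified stand-in: the paper optimizes the polynomial coefficients for MSE rather than using the plain series.

```python
import numpy as np

def poly_inverse_apply(A, b, degree, alpha=None):
    """Approximate x = A^{-1} b by the truncated Neumann series
    A^{-1} ≈ alpha * sum_{l=0}^{L} (I - alpha*A)^l, which converges when
    the eigenvalues of alpha*A lie in (0, 2). Only matrix-vector products
    are needed, i.e. square rather than cubic complexity per evaluation."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(A, 2)   # safe step size for SPD A
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(degree + 1):
        x = x + alpha * term
        term = term - alpha * (A @ term)     # apply (I - alpha*A)
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((40, 40))
A = M @ M.T + 100.0 * np.eye(40)             # well-conditioned SPD "covariance"
b = rng.standard_normal(40)
x_hat = poly_inverse_apply(A, b, degree=30)
```

The degree needed for a target accuracy depends on the eigenvalue spread of A, not directly on its dimension, which mirrors the paper's observation that L need not scale with the system size.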

  14. Correlations between corneal and total wavefront aberrations

    NASA Astrophysics Data System (ADS)

    Mrochen, Michael; Jankov, Mirko; Bueeler, Michael; Seiler, Theo

    2002-06-01

    Purpose: Corneal topography data expressed as corneal aberrations are frequently used to report corneal laser surgery results. However, the optical image quality at the retina depends on all optical elements of the eye, such as the human lens. Thus, the aim of this study was to investigate the correlations between the corneal and total wavefront aberrations and to discuss the importance of corneal aberrations for representing corneal laser surgery results. Methods: Thirty-three eyes of 22 myopic subjects were measured with a corneal topography system and a Tscherning-type wavefront analyzer after the pupils were dilated to at least 6 mm in diameter. All measurements were centered with respect to the line of sight. Corneal and total wavefront aberrations were calculated up to the 6th Zernike order in the same reference plane. Results: Statistically significant correlations (p < 0.05) between the corneal and total wavefront aberrations were found for the astigmatism terms (C3, C5) and all 3rd-Zernike-order coefficients such as coma (C7, C8). No statistically significant correlations were found for the 4th- to 6th-order Zernike coefficients except for the 5th-order horizontal coma C18 (p = 0.003). On average, the Zernike coefficients for the corneal aberrations were found to be larger than the Zernike coefficients for the total wavefront aberrations. Conclusions: Corneal aberrations are of only limited use for representing the optical quality of the human eye after corneal laser surgery, owing to the lack of correlation between corneal and total wavefront aberrations in most of the higher-order aberrations. Besides this, the data presented in this study point toward an aberration balancing between the corneal aberrations and the other optical elements of the eye that reduces the aberration from the cornea by a certain degree. Consequently, ideal customized ablations have to take both corneal and total wavefront aberrations into consideration.

  15. PIZZA: a phase-induced zonal Zernike apodization designed for stellar coronagraphy

    NASA Astrophysics Data System (ADS)

    Martinache, Frantz

    2004-08-01

    I explore here the possibilities offered by the general formalism of coronagraphy for the very special case of phase contrast. This technique, invented by Zernike, is commonly used in microscopy, to see phase objects such as micro-organisms, and in strioscopy, to control the quality of optics polishing. It may find application in telescope pupil apodization with significant advantages over classical pupil apodization techniques, including high throughput and no off-axis resolution loss, which is essential for exoplanet imaging.

  16. Zernike Phase Contrast Electron Cryo-Tomography Applied to Marine Cyanobacteria Infected with Cyanophages

    PubMed Central

    Dai, Wei; Fu, Caroline; Khant, Htet A.; Ludtke, Steven J.; Schmid, Michael F.; Chiu, Wah

    2015-01-01

    Advances in electron cryo-tomography have provided a new opportunity to visualize the internal 3D structures of a bacterium. An electron microscope equipped with Zernike phase contrast optics produces images with dramatically increased contrast compared to images obtained by conventional electron microscopy. Here we describe a protocol to apply Zernike phase plate technology for acquiring electron tomographic tilt series of cyanophage-infected cyanobacterial cells embedded in ice, without staining or chemical fixation. We detail the procedures for aligning and assessing phase plates for data collection, and methods to obtain 3D structures of cyanophage assembly intermediates in the host, by subtomogram alignment, classification and averaging. Acquiring three to four tomographic tilt series takes approximately 12 h on a JEM2200FS electron microscope. We expect this time requirement to decrease substantially as the technique matures. Time required for annotation and subtomogram averaging varies widely depending on the project goals and data volume. PMID:25321408

  17. Hyperbolic Cross Truncations for Stochastic Fourier Cosine Series

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    Based on our decomposition of stochastic processes and our asymptotic representations of Fourier cosine coefficients, we deduce an asymptotic formula for the approximation errors of hyperbolic cross truncations of bivariate stochastic Fourier cosine series. Moreover, we propose a kind of Fourier cosine expansion with polynomial factors such that the corresponding Fourier cosine coefficients decay very fast. Although our research is in the setting of stochastic processes, our results are also new for deterministic functions. PMID:25147842
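    A hyperbolic cross truncation keeps only those bivariate modes whose index product is small, rather than a full tensor grid. A minimal sketch of the retained index set, with an illustrative threshold rule:

```python
def hyperbolic_cross(N):
    """Indices (j, k) of a bivariate cosine expansion retained by a
    hyperbolic cross truncation with the rule (1+j)(1+k) <= N."""
    return [(j, k) for j in range(N) for k in range(N)
            if (1 + j) * (1 + k) <= N]

modes = hyperbolic_cross(8)   # 20 modes instead of the full 8x8 = 64 grid
```

The payoff is that for smooth enough functions the truncation error decays almost as fast as for the full grid while the mode count grows like N log N instead of N².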

  18. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. 
The usually high computational costs of accurate filtering become affordable within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2], we overcame this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimate of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17 (4), 671-687, 2013.
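    A non-intrusive chaos-expansion surrogate of the kind discussed above can be sketched in a few lines: fit orthogonal-polynomial coefficients to a handful of model runs by regression, then evaluate the cheap surrogate instead of the expensive simulator. This sketch uses a fixed Legendre basis for a single uniform parameter and a stand-in model, not the data-driven aPC construction:

```python
import numpy as np
from numpy.polynomial import legendre

def fit_pce(samples, outputs, degree):
    """Regression-based chaos expansion: least-squares fit of
    Legendre-basis coefficients to (sample, output) pairs."""
    V = legendre.legvander(samples, degree)   # Vandermonde in Legendre basis
    coeffs, *_ = np.linalg.lstsq(V, outputs, rcond=None)
    return coeffs

model = lambda xi: np.exp(0.5 * xi)           # stand-in for the expensive simulator
xi_runs = np.linspace(-1.0, 1.0, 9)           # nine "model runs"
c = fit_pce(xi_runs, model(xi_runs), degree=4)
surrogate = lambda xi: legendre.legval(xi, c)
```

Once fitted, the surrogate is evaluated millions of times inside the Bootstrap filter at negligible cost, which is what makes the Bayesian updating feasible.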

  19. Computer-aided diagnosis of malignant mammograms using Zernike moments and SVM.

    PubMed

    Sharma, Shubhi; Khanna, Pritee

    2015-02-01

    This work is directed toward the development of a computer-aided diagnosis (CAD) system to detect abnormalities or suspicious areas in digital mammograms and classify them as malignant or nonmalignant. The original mammogram is preprocessed to separate the breast region from its background. To work on the suspicious area of the breast, region-of-interest (ROI) patches of a fixed size of 128×128 are extracted from the original large-sized digital mammograms. For training, patches are extracted manually from a preprocessed mammogram. For testing, patches are extracted from a highly dense area identified by a clustering technique. For all extracted patches corresponding to a mammogram, Zernike moments of different orders are computed and stored as a feature vector. A support vector machine (SVM) is used to classify the extracted ROI patches. The experimental study shows that the use of Zernike moments of order 20 with an SVM classifier gives better results than other studies. The proposed system is tested on the Image Retrieval In Medical Application (IRMA) reference dataset and the Digital Database for Screening Mammography (DDSM) mammogram database. On the IRMA reference dataset, it attains 99% sensitivity and 99% specificity, and on the DDSM mammogram database, it attains 97% sensitivity and 96% specificity. To verify the applicability of Zernike moments as a fitting texture descriptor, the performance of the proposed CAD system is compared with other well-known texture descriptors, namely the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT).
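    The Zernike-moment features used above are built from the radial polynomials R_n^m. A minimal sketch of computing them, and an unnormalized moment for a square patch mapped onto the unit disk (a simplified stand-in for the paper's feature-extraction pipeline):

```python
import math
import numpy as np

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) from its standard factorial formula."""
    m = abs(m)
    R = np.zeros_like(rho, dtype=float)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + m) // 2 - s)
                * math.factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Unnormalized Zernike moment A_nm of a square patch mapped onto
    the unit disk: A_nm = (n+1)/pi * sum f(x,y) conj(V_nm)."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    V = zernike_radial(n, m, rho) * np.exp(1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * np.conj(V[mask]))

patch = np.ones((64, 64))        # toy "ROI patch"
a00 = zernike_moment(patch, 0, 0)
```

The magnitudes |A_nm| are rotation invariant, which is what makes them attractive texture descriptors for mammogram patches.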

  20. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
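    The Bernstein expansion approach mentioned above exploits the fact that a polynomial on a box is enclosed by the range of its Bernstein coefficients. A minimal univariate sketch on [0, 1] (the paper works with multivariate requirement polynomials over hyper-rectangles):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum_j a[j] * x**j.
    The polynomial is enclosed by [min(b), max(b)] on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

# requirement polynomial p(x) = 1 - 3x + 2x^2, whose true range on [0,1]
# is [-0.125, 1]; the Bernstein enclosure is guaranteed to contain it
b = bernstein_coeffs([1.0, -3.0, 2.0])
lo, hi = min(b), max(b)
```

Subdividing the box tightens the enclosure, which is how the hyper-rectangular subsets of the failure and safe domains are sized.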

  1. Numerical study of the stress-strain state of reinforced plate on an elastic foundation by the Bubnov-Galerkin method

    NASA Astrophysics Data System (ADS)

    Beskopylny, Alexey; Kadomtseva, Elena; Strelnikov, Grigory

    2017-10-01

    The stress-strain state of a rectangular slab resting on an elastic foundation is considered. The slab material is isotropic. The slab has stiffening ribs directed parallel to both sides of the plate. Governing equations are obtained for determining the deflection for various mechanical and geometric characteristics of the stiffening ribs, which are parallel to different sides of the plate and have different rigidities in bending and torsion. The calculation scheme assumes an orthotropic slab having different cylindrical stiffnesses in two mutually perpendicular directions parallel to the reinforcing ribs. The elastic foundation is described by the Winkler model. To determine the deflection, the Bubnov-Galerkin method is used. The deflection is taken in the form of an expansion in a series with unknown coefficients in special polynomials, which are a combination of Legendre polynomials.

  2. Simultaneous stochastic inversion for geomagnetic main field and secular variation. I - A large-scale inverse problem

    NASA Technical Reports Server (NTRS)

    Bloxham, Jeremy

    1987-01-01

    The method of stochastic inversion is extended to the simultaneous inversion of both main field and secular variation. In the present method, the time dependency is represented by an expansion in Legendre polynomials, resulting in a simple diagonal form for the a priori covariance matrix. The efficient preconditioned Broyden-Fletcher-Goldfarb-Shanno algorithm is used to solve the large system of equations resulting from expansion of the field spatially to spherical harmonic degree 14 and temporally to degree 8. Application of the method to observatory data spanning the 1900-1980 period results in a data fit of better than 30 nT, while providing temporally and spatially smoothly varying models of the magnetic field at the core-mantle boundary.

  3. Limit Cycle Bifurcations by Perturbing a Piecewise Hamiltonian System with a Double Homoclinic Loop

    NASA Astrophysics Data System (ADS)

    Xiong, Yanqin

    2016-06-01

    This paper is concerned with the bifurcation of limit cycles obtained by perturbing a piecewise Hamiltonian system with a double homoclinic loop. First, the derivative of the first Melnikov function is provided. Then, we use it, together with the analytic method, to derive the asymptotic expansion of the first Melnikov function near the loop. Meanwhile, we present the first coefficients in the expansion, which can be applied to study the limit cycle bifurcation near the loop. We give sufficient conditions for this system to have 14 limit cycles in the neighborhood of the loop. As an application, a piecewise polynomial Liénard system is investigated, and six limit cycles are found using the method obtained.

  4. Scattering of electromagnetic plane wave from a perfect electric conducting strip placed at interface of topological insulator-chiral medium

    NASA Astrophysics Data System (ADS)

    Shoukat, Sobia; Naqvi, Qaisar A.

    2016-12-01

    In this manuscript, scattering from a perfect electric conducting strip located at the planar interface of a topological insulator (TI)-chiral medium is investigated using the Kobayashi Potential method. Longitudinal components of the electric and magnetic vector potentials in terms of unknown weighting functions are considered. Use of the related set of boundary conditions yields two algebraic equations and four dual integral equations (DIEs). The integrands of two DIEs are expanded in terms of characteristic functions with expansion coefficients that must simultaneously satisfy the discontinuous properties of the Weber-Schafheitlin integrals and the required edge and boundary conditions. The resulting expressions are then combined with the algebraic equations to express the weighting functions in terms of the expansion coefficients, which are then substituted into the remaining DIEs. The projection is applied using Jacobi polynomials. This treatment yields a matrix equation for the expansion coefficients, which is solved numerically. These expansion coefficients are used to find the scattered field. The far-zone scattering width is investigated with respect to different parameters of the geometry, i.e., the chirality of the chiral medium, the angle of incidence, and the size of the strip. Significant effects of different parameters, including the TI parameter, on the scattering width are noted.

  5. Eye micromotions influence on an error of Zernike coefficients reconstruction in the one-ray refractometry of an eye

    NASA Astrophysics Data System (ADS)

    Osipova, Irina Y.; Chyzh, Igor H.

    2001-06-01

    The influence of eye jumps on the accuracy of estimating Zernike coefficients from measurements of the eye's transverse aberration was investigated. Ametropia and astigmatism were examined by computer modeling. The standard deviation of the wave aberration function was calculated. It was determined that the standard deviation of the wave aberration function reaches its minimum value when the number of scanning points is equal to the number of eye jumps in the scanning period. Recommendations for the duration of measurement were worked out.

  6. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well-developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high-order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15-20), alleviating a well-known limitation of the traditional approach. The prospect of larger-scale applicability and the simplicity of implementation make such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
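
    The projection step underlying NISP can be illustrated in one dimension. The sketch below (not the adaptive FANISP algorithm itself) projects a response f(ξ) = exp(ξ), with ξ ~ N(0, 1), onto probabilists' Hermite polynomials He_k using Gauss-Hermite quadrature; for this f the exact PC coefficients are known to be sqrt(e)/k!, which lets us check the result.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss-Hermite (He) quadrature nodes/weights for weight exp(-x^2/2);
# dividing by sqrt(2*pi) normalizes to the standard Gaussian measure.
nodes, weights = He.hermegauss(20)
weights = weights / np.sqrt(2.0 * np.pi)

order = 6
coeffs = np.array([
    np.sum(weights * np.exp(nodes) * He.hermeval(nodes, [0.0] * k + [1.0]))
    / math.factorial(k)                       # c_k = E[f * He_k] / k!
    for k in range(order + 1)
])

# Exact coefficients of exp(xi) in the He_k basis: sqrt(e) / k!
exact = np.array([np.sqrt(np.e) / math.factorial(k) for k in range(order + 1)])
err = np.max(np.abs(coeffs - exact))
print(err)
```

    The adaptive methods in the paper address the multi-dimensional analogue of this projection, where the cost of full tensor grids grows rapidly with the number of inputs.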

  7. Chaotic Expansions of Elements of the Universal Enveloping Superalgebra Associated with a Z2-graded Quantum Stochastic Calculus

    NASA Astrophysics Data System (ADS)

    Eyre, T. M. W.

    Given a polynomial function f of classical stochastic integrator processes whose differentials satisfy a closed Ito multiplication table, the stochastic derivative of f admits a chaotic-expansion formula. We establish an analogue of this formula in the form of a chaotic decomposition for Z2-graded theories of quantum stochastic calculus, based on the natural coalgebra structure of the universal enveloping superalgebra.

  8. An algorithm for the numerical evaluation of the associated Legendre functions that runs in time independent of degree and order

    NASA Astrophysics Data System (ADS)

    Bremer, James

    2018-05-01

    We describe a method for the numerical evaluation of normalized versions of the associated Legendre functions Pν^-μ and Qν^-μ of degrees 0 ≤ ν ≤ 1,000,000 and orders -ν ≤ μ ≤ ν for arguments in the interval (-1, 1). Our algorithm, which runs in time independent of ν and μ, is based on the fact that while the associated Legendre functions themselves are extremely expensive to represent via polynomial expansions, the logarithms of certain solutions of the differential equation defining them are not. We exploit this by numerically precomputing the logarithms of carefully chosen solutions of the associated Legendre differential equation and representing them via piecewise trivariate Chebyshev expansions. These precomputed expansions, which allow for the rapid evaluation of the associated Legendre functions over a large swath of the parameter domain mentioned above, are supplemented with asymptotic and series expansions in order to cover it entirely. The results of numerical experiments demonstrating the efficacy of our approach are presented, and our code for evaluating the associated Legendre functions is publicly available.
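
    For small degree and order, a standard evaluator such as SciPy's lpmv can be cross-checked against closed forms (the algorithm above targets degrees up to 10^6, far beyond what naive recurrence-based routines handle efficiently):

```python
from scipy.special import lpmv

# Evaluate two associated Legendre functions at x = 0.5 and compare with
# their closed forms; lpmv includes the Condon-Shortley phase.
x = 0.5
p20 = lpmv(0, 2, x)   # P_2^0(x) = (3x^2 - 1) / 2
p11 = lpmv(1, 1, x)   # P_1^1(x) = -sqrt(1 - x^2)
print(p20, p11)
```
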

  9. A well-posed and stable stochastic Galerkin formulation of the incompressible Navier–Stokes equations with random data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2016-02-01

    We present a well-posed stochastic Galerkin formulation of the incompressible Navier-Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results: we obtain high-order accurate results for the solution variables, and the velocity divergence converges to zero as the mesh is refined.

  10. In vivo human crystalline lens topography.

    PubMed

    Ortiz, Sergio; Pérez-Merino, Pablo; Gambra, Enrique; de Castro, Alberto; Marcos, Susana

    2012-10-01

    Custom high-resolution high-speed anterior segment spectral domain optical coherence tomography (OCT) was used to characterize three-dimensionally (3-D) the human crystalline lens in vivo. The system was provided with custom algorithms for denoising and segmentation of the images, as well as for fan (scanning) and optical (refraction) distortion correction, to provide fully quantitative images of the anterior and posterior crystalline lens surfaces. The method was tested on an artificial eye with known surface geometry and on a human lens in vitro, and demonstrated on three human lenses in vivo. Not correcting for distortion overestimated the anterior lens radius by 25% and the posterior lens radius by more than 65%. In vivo lens surfaces were fitted by biconicoids and Zernike polynomials after distortion correction. The anterior lens radii of curvature ranged from 10.27 to 14.14 mm, and the posterior lens radii of curvature ranged from 6.12 to 7.54 mm. Surface asphericities ranged from -0.04 to -1.96. The lens surfaces were well fitted by quadrics (with variation smaller than 2%, for 5-mm pupils), with low amounts of high-order terms. Surface lens astigmatism was significant, with the anterior lens typically showing horizontal astigmatism (Z2^2 ranging from -11 to -1 µm) and the posterior lens showing vertical astigmatism (Z2^2 ranging from 6 to 10 µm).
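
    The modal Zernike fit used for the lens surfaces amounts to a linear least-squares projection onto a polynomial basis over the pupil. The sketch below is illustrative only (the basis is truncated to piston, defocus, and the two astigmatism terms, and the surface is synthetic, not the paper's data):

```python
import numpy as np

# Sample a synthetic surface over the unit disk and recover its Zernike
# coefficients by least squares.
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0.0, 1.0, 2000))     # uniform samples over the disk
theta = rng.uniform(0.0, 2.0 * np.pi, 2000)

# Unnormalized Zernike basis: piston, defocus Z2^0, astigmatism Z2^2, Z2^-2
A = np.column_stack([
    np.ones_like(r),
    2.0 * r**2 - 1.0,
    r**2 * np.cos(2.0 * theta),
    r**2 * np.sin(2.0 * theta),
])
true_c = np.array([0.1, 0.5, -0.2, 0.05])
surface = A @ true_c + 1e-4 * rng.standard_normal(r.size)   # add slight noise

fit_c, *_ = np.linalg.lstsq(A, surface, rcond=None)
print(np.round(fit_c, 3))
```
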

  11. Miniature electrically tunable rotary dual-focus lenses

    NASA Astrophysics Data System (ADS)

    Zou, Yongchao; Zhang, Wei; Lin, Tong; Chau, Fook Siong; Zhou, Guangya

    2016-03-01

    The emerging dual-focus lenses are drawing increasing attention recently due to their wide applications in both academia and industry, including laser cutting systems, microscopy systems, and interferometer-based surface profilers. In this paper, a miniature electrically tunable rotary dual-focus lens is developed. Such a lens consists of two optical elements, each having one optically flat surface and one freeform surface. The two freeform surfaces are initialized with the governing equation Ar²θ (A is a constant to be determined; r and θ denote the radius and angle in the polar coordinate system) and then optimized by a ray-tracing technique with additional Zernike polynomial terms for aberration correction. The freeform surfaces are achieved by a single-point diamond turning technique, and a PDMS-based replication process is then utilized to materialize the final lens elements. To drive the two coaxial elements to rotate independently, two MEMS thermal rotary actuators are developed and fabricated by a standard MUMPs process. The experimental results show that the MEMS thermal actuator provides a maximum rotation angle of about 8.2 degrees with an input DC voltage of 6.5 V, leading to a wide tuning range for both focal lengths of the lens. Specifically, one focal length can be tuned from about 30 mm to 20 mm while the other can be adjusted from about 30 mm to 60 mm.
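
    The tuning principle of the Ar²θ surfaces can be verified symbolically: when two complementary elements z = ±Ar²θ are mutually rotated by Δθ, the combined sag reduces to A·Δθ·r², a pure quadratic (defocus) term whose strength, and hence the optical power, scales linearly with the rotation. The constant below is illustrative, not a value from the paper.

```python
import numpy as np

A = 5.0e-3   # illustrative surface constant (assumed units: mm^-1 rad^-1)

def combined_sag(r, theta, dtheta):
    # Element 1 rotated by dtheta, minus the complementary element 2
    return A * r**2 * (theta + dtheta) - A * r**2 * theta

r = np.linspace(0.0, 1.0, 5)
for dtheta in (0.0, 0.05, 0.10):
    net = combined_sag(r, 0.3, dtheta)
    # The net sag is A*dtheta*r^2 regardless of theta: pure defocus
    assert np.allclose(net, A * dtheta * r**2)
print("net sag is a pure r^2 (power) term proportional to rotation")
```
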

  12. Measurement methods to build up the digital optical twin

    NASA Astrophysics Data System (ADS)

    Prochnau, Marcel; Holzbrink, Michael; Wang, Wenxin; Holters, Martin; Stollenwerk, Jochen; Loosen, Peter

    2018-02-01

    The realization of the Digital Optical Twin (DOT), in short the digital representation of the physical state of an optical system, is particularly useful in the context of an automated assembly process for optical systems. During the assembly process, the physical status of the optical system is continuously measured and compared with the digital model. In case of deviations between the physical state and the digital model, the latter is adapted to match the physical state. To reach this goal, measurement/characterization technologies first have to be identified and evaluated concerning their suitability to generate a precise digital twin of an existing optical system. This paper gives an overview of possible characterization methods and, finally, shows first results of the evaluated and compared methods (e.g. spot radius, MTF, Zernike polynomials) used to create a DOT. The focus initially lies on the unambiguity of the optimization results as well as on the computational time required for the optimization to reach the characterized system state. Possible sources of error are the measurement accuracy (to characterize the system), the execution time of the measurement, the time needed to map the digital to the physical world (optimization step), as well as the interface possibilities for integrating the measurement tool into an assembly cell. Moreover, it is discussed whether the used measurement methods are suitable for a 'seamless' integration into an assembly cell.

  13. In vivo human crystalline lens topography

    PubMed Central

    Ortiz, Sergio; Pérez-Merino, Pablo; Gambra, Enrique; de Castro, Alberto; Marcos, Susana

    2012-01-01

    Custom high-resolution high-speed anterior segment spectral domain optical coherence tomography (OCT) was used to characterize three-dimensionally (3-D) the human crystalline lens in vivo. The system was provided with custom algorithms for denoising and segmentation of the images, as well as for fan (scanning) and optical (refraction) distortion correction, to provide fully quantitative images of the anterior and posterior crystalline lens surfaces. The method was tested on an artificial eye with known surface geometry and on a human lens in vitro, and demonstrated on three human lenses in vivo. Not correcting for distortion overestimated the anterior lens radius by 25% and the posterior lens radius by more than 65%. In vivo lens surfaces were fitted by biconicoids and Zernike polynomials after distortion correction. The anterior lens radii of curvature ranged from 10.27 to 14.14 mm, and the posterior lens radii of curvature ranged from 6.12 to 7.54 mm. Surface asphericities ranged from −0.04 to −1.96. The lens surfaces were well fitted by quadrics (with variation smaller than 2%, for 5-mm pupils), with low amounts of high-order terms. Surface lens astigmatism was significant, with the anterior lens typically showing horizontal astigmatism (Z2^2 ranging from −11 to −1 µm) and the posterior lens showing vertical astigmatism (Z2^2 ranging from 6 to 10 µm). PMID:23082289

  14. Experimental study of an off-axis three mirror anastigmatic system with wavefront coding technology.

    PubMed

    Yan, Feng; Tao, Xiaoping

    2012-04-10

    Wavefront coding (WFC) is a kind of computational imaging technique that controls defocus and defocus-related aberrations of optical systems by introducing a specially designed phase distribution to the pupil function. This technology has been applied in many imaging systems to improve performance and/or reduce cost. The application of WFC technology in an off-axis three mirror anastigmatic (TMA) system has been proposed, and the design and optimization of the optics, the restoration of degraded images, and the manufacturing of wavefront coded elements have been researched in our previous work. In this paper, we describe the alignment, the imaging experiment, and the image restoration of the off-axis TMA system with WFC technology. The ideal wavefront map is set as the system error of the interferometer to simplify the assembly, and the coefficients of certain Zernike polynomials are monitored to verify the result in the alignment process. A pinhole of 20 μm diameter and the third plate of WT1005-62 resolution patterns are selected as the targets in the imaging experiment. A comparison of the tail lengths of the point spread functions is presented to show the invariance of the image quality in the extended depth of focus. The structural similarity is applied to estimate the relationship among the captured images with varying defocus. We conclude that the experimental results agree with the earlier theoretical analysis.

  15. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

    PubMed

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

    2015-08-24

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide-field microscope, using a Shack-Hartmann wavefront sensor for closed-loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.

  16. Opto-mechanical design of optical window for aero-optics effect simulation instruments

    NASA Astrophysics Data System (ADS)

    Wang, Guo-ming; Dong, Dengfeng; Zhou, Weihu; Ming, Xing; Zhang, Yan

    2016-10-01

    A complete theory is established in this paper for the opto-mechanical design of the window, which makes the design more rigorous. The design has three steps. First, a universal model of the aerodynamic environment is established based on the theory of Computational Fluid Dynamics, and the pneumatic pressure distribution and temperature data of the optical window surface are obtained when the aircraft flies at 5-30 km altitude, 0.5-3 Ma speed, and 0-30° angle of attack. The temperature and pressure distribution values for the maximum constraint are selected as the initial values of the external conditions on the optical window surface. Then, the optical window and mechanical structure are designed; this is also divided into two parts: first, a mechanical structure that meets the requirements of security and tightness is designed. Finally, rigorous analysis and evaluation of the designed opto-mechanical structure are given, in two parts. First, a Fluid-Solid-Heat Coupled Model is built based on finite element analysis, from which the deformation of the glass and structure is obtained; this assesses the feasibility of the designed optical window and ancillary structure. Second, the new optical surface is fitted with Zernike polynomials according to the deformation of the optical window surface, which allows evaluation of the impact of the window deformation on the imaging quality of the spectral camera.

  17. Design and simulation of the surface shape control system for membrane mirror

    NASA Astrophysics Data System (ADS)

    Zhang, Gengsheng; Tang, Minxue

    2009-11-01

    Surface shape control is one of the key technologies for the manufacture of membrane mirrors. This paper presents a design of a membrane mirror surface shape control system based on fuzzy logic control. The system contains function modules for surface shape design, surface shape control, surface shape analysis, etc. The system functions are realized using hybrid programming in Visual C# and MATLAB. The finite element method is adopted to simulate the surface shape control of the membrane mirror. The finite element analysis model is established through the ANSYS Parametric Design Language (APDL). The ANSYS software kernel is called by the system in background mode during the simulation. The controller is designed to control the sag of the mirror's central cross-section. The surface shape of the membrane mirror and its optical aberration are obtained by Zernike polynomial fitting. Surface shape control analysis and disturbance-response simulation are performed for a membrane mirror with a 300 mm aperture and F/2.7. The simulation result shows that with the designed control system the RMS wavefront error of the mirror can reach 142λ (λ=632.8 nm), consistent with the surface accuracy of the membrane mirror obtained from the large-deformation theory of membranes under the same conditions.

  18. Research on automatic Hartmann test of membrane mirror

    NASA Astrophysics Data System (ADS)

    Zhong, Xing; Jin, Guang; Liu, Chunyu; Zhang, Peng

    2010-10-01

    The electrostatic membrane mirror is ultra-lightweight and can reach large diameters more easily than traditional optical elements, so its development and usage is the trend for future large mirrors. In order to research the control method of the statically stretched membrane mirror, its surface configuration must be tested. However, the membrane mirror's shape is continually changed by the variable voltages on the electrodes, and the optical properties of the membrane materials used in our experiment are poor, so it is difficult to test the membrane mirror by interferometry and the null compensator method. To solve this problem, an automatic optical test procedure for the membrane mirror is designed based on the Hartmann screen method. The optical path includes a point light source, a CCD camera, a splitter, and a diffuse-transmittance screen. The spot positions on the diffuse-transmittance screen are imaged by the CCD camera connected to a computer, and image segmentation and centroid computation are processed automatically. The CCD camera's lens distortion is measured, and correction coefficients are given to eliminate the spot-position recording error caused by lens distortion. To process the low-sampling Hartmann test results, a Zernike polynomial fitting method is applied to smooth the wavefront, so that the low-frequency error of the membrane mirror can be measured. Errors affecting the test accuracy are also analyzed in this paper. The method proposed in this paper provides a reference for surface shape detection in membrane mirror research.
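
    The centroid step of such an automated Hartmann test is simply an intensity-weighted center of mass of each segmented spot. A minimal sketch, using a synthetic Gaussian spot rather than real camera data:

```python
import numpy as np

def centroid(img):
    # Intensity-weighted center of mass of a spot image (x, y in pixels)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian spot centered at (12.3, 7.8) on a 24 x 32 pixel patch
ys, xs = np.indices((24, 32))
spot = np.exp(-((xs - 12.3) ** 2 + (ys - 7.8) ** 2) / (2.0 * 2.0**2))
cx, cy = centroid(spot)
print(round(cx, 2), round(cy, 2))
```

    In practice, lens-distortion correction coefficients would be applied to these pixel coordinates before the wavefront reconstruction.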

  19. Power maps and wavefront for progressive addition lenses in eyeglass frames.

    PubMed

    Mejía, Yobani; Mora, David A; Díaz, Daniel E

    2014-10-01

    To evaluate a method for measuring the cylinder, sphere, and wavefront of progressive addition lenses (PALs) in eyeglass frames. We examine the contour maps of cylinder, sphere, and wavefront of a PAL assembled in an eyeglass frame using an optical system based on a Hartmann test. To reduce the data noise, particularly in the border of the eyeglass frame, we implement a method based on the Fourier analysis to extrapolate spots outside the eyeglass frame. The spots are extrapolated up to a circular pupil that circumscribes the eyeglass frame and compared with data obtained from a circular uncut PAL. By using the Fourier analysis to extrapolate spots outside the eyeglass frame, we can remove the edge artifacts of the PAL within its frame and implement the modal method to fit wavefront data with Zernike polynomials within a circular aperture that circumscribes the frame. The extrapolated modal maps from framed PALs accurately reflect maps obtained from uncut PALs and provide smoothed maps for the cylinder and sphere inside the eyeglass frame. The proposed method for extrapolating spots outside the eyeglass frame removes edge artifacts of the contour maps (wavefront, cylinder, and sphere), which may be useful to facilitate measurements such as the length and width of the progressive corridor for a PAL in its frame. The method can be applied to any shape of eyeglass frame.

  20. Image inversion analysis of the HST OTA (Hubble Space Telescope Optical Telescope Assembly), phase A

    NASA Technical Reports Server (NTRS)

    Litvak, M. M.

    1991-01-01

    Technical work during September-December 1990 consisted of: (1) analyzing HST point source images obtained from JPL; (2) retrieving phase information from the images by a direct (noniterative) technique; and (3) characterizing, in a preliminary manner, the wavefront aberration due to the errors in the Hubble Space Telescope (HST) mirrors. This work was in support of the JPL design of compensating optics for the next-generation wide-field planetary camera on HST. This digital technique for phase retrieval from pairs of defocused images is based on the energy transport equation between these image planes. In addition, an end-to-end wave optics routine, based on the JPL Code 5 prescription of the unaberrated HST and WFPC, was derived to output the reference phase front when mirror error is absent. Also, the Roddier routine unwrapped the retrieved phase by inserting the required jumps of ±2π radians for the sake of smoothness. A least-squares fitting routine, insensitive to phase unwrapping but nonlinear, was used to obtain estimates of the Zernike polynomial coefficients that describe the aberration. The phase results were close to, but higher than, the expected error in the conic constant of the primary mirror suggested by the fossil evidence. The aberration contributed by the camera itself could be responsible for the small discrepancy, but this was not verified by analysis.

  1. Sensitivity analysis for future space missions with segmented telescopes for high-contrast imaging

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; Pueyo, Laurent; Sauvage, Jean-François; Mazoyer, Johan; Soummer, Remi; Fusco, Thierry; Sivaramakrishnan, Anand

    2018-01-01

    The detection and analysis of biomarkers on Earth-like planets using direct imaging will require both high-contrast imaging and spectroscopy at very close angular separation (a 10^10 star-to-planet flux ratio at a few 0.1″). This goal can only be achieved with large telescopes in space to overcome atmospheric turbulence, often combined with a coronagraphic instrument with wavefront control. Large segmented space telescopes such as those studied for the LUVOIR mission will generate segment-level instabilities and cophasing errors in addition to local mirror surface errors and other aberrations of the overall optical system. These effects contribute directly to the degradation of the final image quality and contrast. We present an analytical model that produces coronagraphic images of a segmented-pupil telescope in the presence of segment phasing aberrations expressed as Zernike polynomials. This model relies on a pair-based projection of the segmented pupil and provides results that match an end-to-end simulation with an rms error on the final contrast of ~3%. This analytical model can be applied both to static and dynamic modes, and in either monochromatic or broadband light. It retires the need for the end-to-end Monte-Carlo simulations that are otherwise needed to build a rigorous error budget, by enabling quasi-instantaneous analytical evaluations. The ability to invert the analytical model directly provides constraints and tolerances on all segment-level phasing and aberrations.

  2. Corneal topography measurements for biometric applications

    NASA Astrophysics Data System (ADS)

    Lewis, Nathan D.

    The term biometrics is used to describe the process of analyzing biological and behavioral traits that are unique to an individual in order to confirm or determine his or her identity. Many biometric modalities are currently being researched and implemented, including fingerprints, hand and facial geometry, iris recognition, vein structure recognition, gait, voice recognition, etc. This project explores the possibility of using corneal topography measurements as a trait for biometric identification. Two new corneal topographers were developed for this study. The first was designed to function as an operator-free device that allows a user to approach the device and have his or her corneal topography measured. Human subject topography data were collected with this device and compared to measurements made with the commercially available Keratron Piccolo topographer (Optikon, Rome, Italy). A third topographer that departs from the standard Placido disk technology allows for arbitrary pattern illumination through the use of LCD monitors. This topographer was built and tested for use in future research studies. Topography data were collected from 59 subjects and modeled using Zernike polynomials, which provide a simple method of compressing topography data and comparing one topographical measurement with a database for biometric identification. The data were analyzed to determine the biometric error rates associated with corneal topography measurements. Reasonably accurate results, between three and eight percent simultaneous false match and false non-match rates, were achieved.

  3. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1, ..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M { w : X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much easier method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
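
    The identification w -> X(w) used above is concrete: a finite word over {0, ..., k-1} is read as a radix-k expansion of a number in [0, 1]. A minimal sketch with exact rational arithmetic:

```python
from fractions import Fraction

def X(word, k):
    # Read the word's letters as radix-k digits after the radix point
    return sum(Fraction(digit, k ** (i + 1)) for i, digit in enumerate(word))

assert X([0, 1], 2) == Fraction(1, 4)      # binary .01
assert X([1], 2) == Fraction(1, 2)         # binary .1
assert X([2, 1, 0], 3) == Fraction(7, 9)   # ternary .210
print(X([2, 1, 0], 3))
```
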

  4. A Demonstration of a Versatile Low-order Wavefront Sensor Tested on Multiple Coronagraphs

    NASA Astrophysics Data System (ADS)

    Singh, Garima; Lozi, Julien; Jovanovic, Nemanja; Guyon, Olivier; Baudoz, Pierre; Martinache, Frantz; Kudo, Tomoyuki

    2017-09-01

    Detecting faint companions in close proximity to stars is one of the major goals of current and planned ground- and space-based high-contrast imaging instruments. High-performance coronagraphs can suppress the diffraction features and give access to companions at small angular separations. However, uncontrolled pointing errors degrade coronagraphic performance by leaking starlight around the coronagraphic focal-plane mask, preventing the detection of companions at small separations. A Lyot-stop low-order wavefront sensor (LLOWFS) was therefore introduced to calibrate and measure these aberrations for focal-plane phase-mask coronagraphs. This sensor quantifies the variations in wavefront error, decomposed into a few Zernike modes, by reimaging the diffracted starlight rejected by a reflective Lyot stop. The technique was tested with several coronagraphs on the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system at the Subaru Telescope. The wavefront was decomposed into 15 and 35 Zernike modes with an occulting and a focal-plane phase-mask coronagraph, respectively, which were used to drive a closed-loop correction in the laboratory. Using a 2000-actuator deformable mirror, a closed-loop pointing stability between 10^-3 and 10^-4 λ/D was achieved in the laboratory in H-band, with sub-nanometer residuals for the other Zernike modes (Noll index > 4). On-sky, the low-order control of 10+ Zernike modes for the phase-induced amplitude apodization and the vector vortex coronagraphs was demonstrated, with a closed-loop pointing stability of 10^-4 λ/D readily achievable under good seeing and 10^-3 λ/D under moderate seeing conditions.

  5. Algorithms for the computation of solutions of the Ornstein-Zernike equation.

    PubMed

    Peplow, A T; Beardmore, R E; Bresme, F

    2006-10-01

    We introduce a robust and efficient methodology to solve the Ornstein-Zernike integral equation using the pseudo-arc-length (PAL) continuation method, which reformulates the integral equation in an equivalent but nonstandard form. This enables the computation of solutions in regions where the compressibility experiences large changes or where the existence of multiple solutions and so-called branch points prevents Newton's method from converging. We illustrate the use of the algorithm with a difficult problem that arises in the numerical solution of integral equations, namely the evaluation of the so-called no-solution line of the Ornstein-Zernike hypernetted chain (HNC) integral equation for the Lennard-Jones potential. We are able to use the PAL algorithm to solve the integral equation along this line and to connect physical and nonphysical solution branches (both isotherms and isochores) where appropriate. We also show that PAL continuation can compute solutions within the no-solution region that cannot be computed when Newton and Picard methods are applied directly to the integral equation. While many solutions that we find are new, some correspond to states with negative compressibility and consequently are not physical.
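    A minimal sketch of pseudo-arc-length continuation on a toy scalar equation with a fold (not the Ornstein-Zernike system itself) shows why the method passes branch points where Newton iteration in the bare parameter fails: the parameter is freed and an arclength constraint is appended, so the extended Jacobian stays nonsingular at the fold.

```python
import numpy as np

def pal_step(f, df_du, df_dl, u, lam, tu, tl, ds, tol=1e-12, maxit=50):
    """One pseudo-arc-length step for scalar f(u, lam) = 0.
    (tu, tl) is the unit tangent along the branch; ds is the arclength step."""
    u1, l1 = u + ds * tu, lam + ds * tl                 # Euler predictor
    for _ in range(maxit):
        F = np.array([f(u1, l1),
                      tu * (u1 - u) + tl * (l1 - lam) - ds])
        if np.max(np.abs(F)) < tol:
            break
        J = np.array([[df_du(u1, l1), df_dl(u1, l1)],
                      [tu, tl]])
        du, dl = np.linalg.solve(J, F)                  # Newton corrector
        u1, l1 = u1 - du, l1 - dl
    return u1, l1

# Toy problem with a fold at (u, lam) = (0, 0): u**2 + lam = 0.
f = lambda u, l: u * u + l
u, lam = 1.0, -1.0                                      # start on upper branch
tu, tl = -1 / np.sqrt(5), 2 / np.sqrt(5)                # tangent toward the fold
branch = []
for _ in range(40):
    u_new, l_new = pal_step(f, lambda u, l: 2 * u, lambda u, l: 1.0,
                            u, lam, tu, tl, ds=0.1)
    tu, tl = u_new - u, l_new - lam                     # secant tangent update
    norm = np.hypot(tu, tl)
    tu, tl = tu / norm, tl / norm
    u, lam = u_new, l_new
    branch.append((u, lam))
```

The trace passes smoothly through the fold onto the lower branch (u < 0), which is exactly the behavior exploited to connect physical and nonphysical solution branches.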

  6. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    PubMed

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on digital image processing of three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As region-based shape features of a grayscale image, Zernike moments, with their inherent invariance properties, were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained. The correlation coefficients (R^2) for the training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of the established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.
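    Zernike moments of a grayscale image can be computed by direct quadrature over the unit disk. The normalization and pixel grid below follow one common convention and are not necessarily the authors' implementation:

```python
import math
import numpy as np

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square grayscale image mapped onto the unit
    disk: A_nm = (n+1)/pi * sum f * conj(V_nm) * dA over in-disk pixels."""
    N = img.shape[0]
    xs = np.linspace(-1.0, 1.0, N)
    x, y = np.meshgrid(xs, xs)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    inside = rho <= 1.0
    ma = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - ma) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + ma) // 2 - k)
                * math.factorial((n - ma) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    V_conj = R * np.exp(-1j * m * theta)     # conjugate of V_nm = R * e^{i m theta}
    dA = (xs[1] - xs[0]) ** 2                # per-pixel area element
    return (n + 1) / np.pi * np.sum(img[inside] * V_conj[inside]) * dA

# For a uniform image the piston moment A_00 is 1 and higher moments vanish.
```

The magnitudes |A_nm| are rotation invariant, which is the property that makes them usable as regression features.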

  7. Low Order Wavefront Sensing and Control for WFIRST-AFTA Coronagraph

    NASA Technical Reports Server (NTRS)

    Shi, Fang; Balasubramanian, Kunjithapatha; Bartos, Randall; Hien, Randall; Kern, Brian; Krist, John; Lam, Raymond; Moore, Douglas; Moore, James; Patterson, Keith; hide

    2015-01-01

    To maintain the required WFIRST Coronagraph performance in a realistic space environment, a low-order wavefront sensing and control (LOWFS/C) subsystem is necessary. The LOWFS/C uses the rejected stellar light from the coronagraph to sense and suppress the telescope pointing drift and jitter, as well as the low-order wavefront errors due to changes in thermal loading of the telescope and the rest of the observatory. In this paper we present an overview of the low-order wavefront sensing and control subsystem for the WFIRST-AFTA Coronagraph. We describe LOWFS/C's Zernike wavefront sensor concept and the WFIRST LOWFS/C control design. We present an overview of our analysis and modeling results on the Zernike wavefront sensor, the line-of-sight jitter suppression loop performance, and the low-order wavefront error correction with the coronagraph's deformable mirror. We also report the LOWFS/C testbed design and the preliminary in-air test results, which show very promising performance of the Zernike wavefront sensor and the FSM feedback loop.

  8. Orientation-dependent integral equation theory for a two-dimensional model of water

    NASA Astrophysics Data System (ADS)

    Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.

    2003-03-01

    We develop an integral equation theory that applies to strongly associating orientation-dependent liquids, such as water. In an earlier treatment, we developed a Wertheim integral equation theory (IET) that we tested against NPT Monte Carlo simulations of the two-dimensional Mercedes Benz model of water. The main approximation in the earlier calculation was an orientational averaging in the multidensity Ornstein-Zernike equation. Here we improve the theory by explicit introduction of an orientation dependence in the IET, based upon expanding the two-particle angular correlation function in orthogonal basis functions. We find that the new orientation-dependent IET (ODIET) yields a considerable improvement of the predicted structure of water, when compared to the Monte Carlo simulations. In particular, ODIET predicts more long-range order than the original IET, with hexagonal symmetry, as expected for the hydrogen bonded ice in this model. The new theoretical approximation still errs in some subtle properties; for example, it does not predict liquid water's density maximum with temperature or the negative thermal expansion coefficient.

  9. A two-dimensional model of water: Theory and computer simulations

    NASA Astrophysics Data System (ADS)

    Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Southall, N. T.; Dill, K. A.

    2000-02-01

    We develop an analytical theory for a simple model of liquid water. We apply Wertheim's thermodynamic perturbation theory (TPT) and integral equation theory (IET) for associative liquids to the MB model, which is among the simplest models of water. Water molecules are modeled as 2-dimensional Lennard-Jones disks with three hydrogen bonding arms arranged symmetrically, resembling the Mercedes-Benz (MB) logo. The MB model qualitatively predicts both the anomalous properties of pure water and the anomalous solvation thermodynamics of nonpolar molecules. IET is based on the orientationally averaged version of the Ornstein-Zernike equation. This is one of the main approximations in the present work. IET correctly predicts the pair correlation function of the model water at high temperatures. Both TPT and IET are in semi-quantitative agreement with the Monte Carlo values of the molar volume, isothermal compressibility, thermal expansion coefficient, and heat capacity. A major advantage of these theories is that they require orders of magnitude less computer time than the Monte Carlo simulations.

  10. Structure of the BBGKY hierarchy near phase transition

    NASA Astrophysics Data System (ADS)

    Ramanathan, G. V.; Jedrzejek, C.

    1980-06-01

    A nonperturbative method, as opposed to diagrammatic expansions, is used to study critical phenomena in a fluid with a small hard core and a weak, long-ranged attractive potential. Using the natural small parameter related to the inverse of the range of the attractive potential, spatially uniformly valid asymptotic estimates are made for the magnitudes of all correlations (which are defined as the excess over the generalized superposition approximation) in a region near the phase transition in an arbitrary number of dimensions. It is shown that if the dimension of the space is larger than four, the correlation hierarchy truncates at the three-body level. The pair correlation satisfies a linear equation. The solution is precisely of Ornstein-Zernike form. For dimensions smaller than four, the hierarchy is still an infinite chain, but considerably simpler than the BBGKY hierarchy. In this case, at the critical point, the correlations are shown to satisfy a scaling law which is the same as that for S4 spin systems.

  11. Membrane triangles with corner drilling freedoms. I - The EFF element

    NASA Technical Reports Server (NTRS)

    Alvin, Ken; De La Fuente, Horacio M.; Haugen, Bjorn; Felippa, Carlos A.

    1992-01-01

    The formulation of 3-node 9-DOF membrane elements with normal-to-element-plane rotations (drilling freedoms) is examined in the context of parametrized variational principles. In particular, attention is given to the application of the extended free formulation (EFF) to the construction of a triangular membrane element with drilling freedoms that initially has complete quadratic polynomial expansions in each displacement component. The main advantage of the EFF over the free formulation triangle is that an explicit form is obtained for the higher-order stiffness.

  12. Switching probability of all-perpendicular spin valve nanopillars

    NASA Astrophysics Data System (ADS)

    Tzoufras, M.

    2018-05-01

    In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle, and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density in Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle, and it can be readily applied to fitting experimental data.
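    Projecting a function of the polar angle onto Legendre polynomials is a short computation with Gauss-Legendre quadrature. The density used here is hypothetical (exactly degree 2, so the series terminates), purely to illustrate the expansion step:

```python
import numpy as np

def legendre_coeffs(f, lmax, nquad=64):
    """Legendre projection on [-1, 1]: c_l = (2l+1)/2 * integral f(x) P_l(x) dx,
    evaluated by Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(nquad)
    return np.array([(2 * l + 1) / 2
                     * np.sum(w * f(x) * np.polynomial.legendre.Legendre.basis(l)(x))
                     for l in range(lmax + 1)])

# Hypothetical normalized density in x = cos(polar angle):
# p = 0.5*P0 + 0.75*P1 + 0.25*P2, so coefficients beyond l = 2 vanish.
p = lambda x: 3.0 * (1.0 + x) ** 2 / 8.0
c = legendre_coeffs(p, 4)
```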

  13. Aided target recognition processing of MUDSS sonar data

    NASA Astrophysics Data System (ADS)

    Lau, Brian; Chao, Tien-Hsin

    1998-09-01

    The Mobile Underwater Debris Survey System (MUDSS) is a collaborative effort by the Navy and the Jet Propulsion Lab to demonstrate multi-sensor, real-time survey of underwater sites for ordnance and explosive waste (OEW). We describe the sonar processing algorithm, a novel target recognition algorithm incorporating wavelets, morphological image processing, expansion in Hermite polynomials, and neural networks. This algorithm has found all planted targets in MUDSS tests and has achieved spectacular success with another Coastal Systems Station (CSS) sonar image database.

  14. KARHUNEN-LOÈVE Basis Functions of Kolmogorov Turbulence in the Sphere

    NASA Astrophysics Data System (ADS)

    Mathar, Richard J.

    In support of modeling atmospheric turbulence, the statistically independent Karhunen-Loève modes of refractive indices with an isotropic Kolmogorov spectrum of the covariance are calculated inside a sphere of fixed radius, rendered as a series of 3D Zernike functions. Many of the symmetry arguments of the well-known associated 2D problem for the circular input pupil remain valid. The technique of efficient diagonalization of the eigenvalue problem in wavenumber space is founded on the Fourier representation of the 3D Zernike basis, and is extensible to the von Kármán power spectrum.

  15. Neutron diffraction determination of the cell dimensions and thermal expansion of the fluoroperovskite KMgF3 from 293 to 3.6 K

    NASA Astrophysics Data System (ADS)

    Mitchell, Roger H.; Cranswick, Lachlan M. D.; Swainson, Ian

    2006-11-01

    The cell dimensions of the fluoroperovskite KMgF3 synthesized by solid-state methods have been determined by powder neutron diffraction and Rietveld refinement over the temperature range 293-3.6 K, using Pt metal as an internal standard for calibration of the neutron wavelength. These data demonstrate conclusively that cubic Pm-3m KMgF3 does not undergo any phase transitions to structures of lower symmetry with decreasing temperature. Cell dimensions range from 3.9924(2) Å at 293 K to 3.9800(2) Å at 3.6 K, and are essentially constant within experimental error from 50 to 3.6 K. The thermal expansion data are described using a fourth-order polynomial function.
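    A fourth-order polynomial description of a(T) is a standard least-squares fit. The data points below are illustrative values consistent with the quoted endpoints (a = 3.9800 Å at 3.6 K, 3.9924 Å at 293 K), not the paper's actual table:

```python
import numpy as np

# Hypothetical lattice-parameter data (Å) vs temperature (K), bracketing the
# published endpoints; invented for illustration only.
T = np.array([3.6, 50.0, 100.0, 150.0, 200.0, 250.0, 293.0])
a = np.array([3.9800, 3.9801, 3.9818, 3.9842, 3.9868, 3.9896, 3.9924])

# Fourth-order polynomial description of a(T), as in the record.
fit = np.polynomial.Polynomial.fit(T, a, deg=4)
```

`Polynomial.fit` works on a scaled domain internally, which keeps a quartic fit over a 3.6-293 K range numerically well conditioned.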

  16. Stress-strain state on non-thin plates and shells. Generalized theory (survey)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nemish, Yu.N.; Khoma, I.Yu.

    1994-05-01

    In the first part of this survey, we examined exact and approximate analytic solutions of specific problems for thick shells and plates obtained on the basis of three-dimensional equations of the mathematical theory of elasticity. The second part of the survey, presented here, is devoted to systematization and analysis of studies made in regard to a generalized theory of plates and shells based on expansion of the sought functions into Fourier series in Legendre polynomials of the thickness coordinate. Methods are described for constructing systems of differential equations in the coefficients of the expansions (as functions of two independent variables and time), along with the corresponding boundary and initial conditions. Matters relating to substantiation of the given approach and its generalizations are also discussed.

  17. Temperature dependent lattice constant of InSb above room temperature

    NASA Astrophysics Data System (ADS)

    Breivik, Magnus; Nilsen, Tron Arne; Fimland, Bjørn-Ove

    2013-10-01

    Using temperature-dependent X-ray diffraction on two InSb single-crystalline substrates, the bulk lattice constant of InSb was determined between 32 and 325 °C. A polynomial function was fitted to the data: a(T) = 6.4791 + 3.28×10^-5·T + 1.02×10^-8·T^2 Å (T in °C), which gives slightly higher values than previously published (which go up to 62 °C). From the fit, the thermal expansion of InSb was calculated to be α(T) = 5.062×10^-6 + 3.15×10^-9·T K^-1 (T in °C). We found that the thermal expansion coefficient is higher than previously published values above 100 °C (more than 10% higher at 325 °C).
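    As a consistency check, the quoted expansion coefficient follows from the fitted lattice parameter via α(T) = a'(T)/a(T); evaluating both over the measured range shows they agree to well under a percent:

```python
import numpy as np

# Fitted lattice parameter and quoted expansion coefficient from the record
# (T in °C, a in Å, alpha in 1/K).
a = np.polynomial.Polynomial([6.4791, 3.28e-5, 1.02e-8])
alpha_quoted = np.polynomial.Polynomial([5.062e-6, 3.15e-9])

# Linear thermal expansion coefficient derived from the fit.
T = np.linspace(32.0, 325.0, 200)
alpha = a.deriv()(T) / a(T)
```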

  18. Dynamic performance of MEMS deformable mirrors for use in an active/adaptive two-photon microscope

    NASA Astrophysics Data System (ADS)

    Zhang, Christian C.; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.

    2016-03-01

    Active optics can facilitate two-photon microscopic imaging deep in tissue. We are investigating fast focus control mirrors used in concert with an aberration correction mirror to control the axial position of focus and system aberrations dynamically during scanning. With an adaptive training step, sample-induced aberrations may be compensated as well. If sufficiently fast and precise, active optics may be able to compensate under-corrected imaging optics as well as sample aberrations to maintain diffraction-limited performance throughout the field of view. Toward this end we have measured a Boston Micromachines Corporation Multi-DM 140-element deformable mirror and a Revibro Optics electrostatic 4-zone focus control mirror to characterize dynamic performance. Tests for the Multi-DM included both step response and sinusoidal frequency sweeps of specific Zernike modes. For the step response we measured 10%-90% rise times for the target Zernike amplitude, and wavefront rms error settling times. Frequency sweeps identified the 3 dB bandwidth of the mirror when attempting to follow a sinusoidal amplitude trajectory for a specific Zernike mode. For five tested Zernike modes (defocus, spherical aberration, coma, astigmatism and trefoil) we find error settling times for mode amplitudes up to 400 nm to be less than 52 μs, and 3 dB frequencies range from 6.5 kHz to 10 kHz. The Revibro Optics mirror was tested for step response only, with an error settling time of 80 μs for a large 3 μm defocus step, and a settling time of only 18 μs for a 400 nm spherical aberration step. These response speeds are sufficient for intra-scan correction at scan rates typical of two-photon microscopy.

  19. Advantages of intermediate X-ray energies in Zernike phase contrast X-ray microscopy.

    PubMed

    Wang, Zhili; Gao, Kun; Chen, Jian; Hong, Youli; Ge, Xin; Wang, Dajiang; Pan, Zhiyun; Zhu, Peiping; Yun, Wenbing; Jacobsen, Chris; Wu, Ziyu

    2013-01-01

    Understanding the hierarchical organization of molecules and organelles within the interior of large eukaryotic cells is a challenge of fundamental interest in cell biology. While light microscopy is a powerful tool for observing the dynamics of live cells, its attainable resolution is limited and often insufficient. While electron microscopy can produce images with astonishing resolution and clarity of ultra-thin (<1 μm thick) sections of biological specimens, many questions involve the three-dimensional organization of a cell or the interconnectivity of cells. X-ray microscopy offers superior imaging resolution compared to light microscopy, and a unique capability for nondestructive three-dimensional imaging of hydrated unstained biological cells, complementary to existing light and electron microscopy. Until now, X-ray microscopes operating in the "water window" energy range between the carbon and oxygen K-shell absorption edges have produced outstanding 3D images of cryo-preserved cells. The relatively low X-ray energy (<540 eV) of the water window imposes two important limitations: limited penetration (<10 μm), not suitable for imaging larger cells or tissues, and a small depth of focus (DoF) for high-resolution 3D imaging (e.g., ~1 μm DoF for 20 nm resolution). An X-ray microscope operating at an intermediate energy around 2.5 keV using Zernike phase contrast can overcome the above limitations and reduces the radiation dose to the specimen. Using a hydrated model cell with an average chemical composition reported in the literature, we calculated the image contrast and the radiation dose for absorption and Zernike phase contrast, respectively. The results show that an X-ray microscope operating at ~2.5 keV using Zernike phase contrast offers substantial advantages in terms of specimen size, radiation dose and depth of focus. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Fourier-Legendre expansion of the one-electron density matrix of ground-state two-electron atoms.

    PubMed

    Ragot, Sébastien; Ruiz, María Belén

    2008-09-28

    The density matrix ρ(r,r′) of a spherically symmetric system can be expanded as a Fourier-Legendre series of Legendre polynomials P_l(cos θ), where cos θ = r·r′/(rr′). Application is here made to harmonically trapped electron pairs (i.e., Moshinsky's and Hooke's atoms), for which exact wavefunctions are known, and to the helium atom, using a near-exact wavefunction. In the present approach, generic closed-form expressions are derived for the series coefficients of ρ(r,r′). The series expansions are shown to converge rapidly in each case, with respect to both the electron number and the kinetic energy. In practice, a two-term expansion accounts for most of the correlation effects, so that the correlated density matrices of the atoms at issue are essentially linear functions of P_1(cos θ) = cos θ. For example, in the case of Hooke's atom, a two-term expansion takes in 99.9% of the electrons and 99.6% of the kinetic energy. The correlated density matrices obtained are finally compared to their determinantal counterparts, using a simplified representation of the density matrix ρ(r,r′), suggested by the Legendre expansion. Interestingly, two-particle correlation is shown to impact the angular delocalization of each electron, in the one-particle space spanned by the r and r′ variables.

  1. Convergence of the Light-Front Coupled-Cluster Method in Scalar Yukawa Theory

    NASA Astrophysics Data System (ADS)

    Usselman, Austin

    We use Fock-state expansions and the Light-Front Coupled-Cluster (LFCC) method to study mass eigenvalue problems in quantum field theory. Specifically, we study convergence of the method in scalar Yukawa theory. In this theory, a single charged particle is surrounded by a cloud of neutral particles. The charged particle can create or annihilate neutral particles, causing the n-particle state to depend on the (n+1)- and (n-1)-particle states. Fock-state expansion leads to an infinite set of coupled equations where truncation is required. The wave functions for the particle states are expanded in a basis of symmetric polynomials, and a generalized eigenvalue problem is solved for the mass eigenvalue. The mass eigenvalue problem is solved for multiple values of the coupling strength while the number of particle states and the polynomial basis order are increased. Convergence of the mass eigenvalue solutions is then obtained. Three mass ratios between the charged particle and the neutral particles were studied: a massive charged particle, equal masses, and massive neutral particles. Relative probabilities between states can also be explored for a more detailed understanding of the process of convergence with respect to the number of Fock sectors. The reliance on higher-order particle states depended on how large the mass of the charged particle was: the higher the mass of the charged particle, the more the system depended on higher-order particle states. The LFCC method solves this same mass eigenvalue problem using an exponential operator. This exponential operator can be truncated to form a finite system of equations that can be solved using a built-in system solver provided in most computational environments, such as MATLAB and Mathematica. The first approximation in the LFCC method allows for only one particle to be created by the new operator and proved not powerful enough to match the Fock-state expansion.
The second-order approximation allowed one and two particles to be created by the new operator and converged to the Fock-state expansion results. This showed the LFCC method to be a reliable alternative for solving quantum field theory problems.
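    The truncation step described above leads to a generalized eigenvalue problem of the form H c = M² N c for the basis coefficients. A toy numerical instance (the 3×3 matrices here are invented for illustration, not the actual LFCC system) can be solved as:

```python
import numpy as np

# Toy generalized eigenvalue problem H c = M^2 N c standing in for the
# truncated system; H is a Hamiltonian-like matrix, N a basis-overlap matrix.
H = np.array([[2.0, 0.3, 0.0],
              [0.3, 3.0, 0.2],
              [0.0, 0.2, 4.0]])
N = np.diag([1.0, 0.5, 0.25])

# Reduce to an ordinary eigenproblem for N^-1 H; eigenvalues are real here
# because both H and N are symmetric positive definite.
evals, evecs = np.linalg.eig(np.linalg.solve(N, H))
mass_squared = evals.real.min()       # lowest mass-squared eigenvalue
```

In practice a symmetric generalized solver (e.g., `scipy.linalg.eigh(H, N)`) is preferable for large bases; the plain reduction above is just the most self-contained form.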

  2. Nonlinear channel equalization for QAM signal constellation using artificial neural networks.

    PubMed

    Patra, J C; Pal, R N; Baliarsingh, R; Panda, G

    1999-01-01

    Application of artificial neural networks (ANNs) to adaptive channel equalization in a digital communication system with a 4-QAM signal constellation is reported in this paper. A novel, computationally efficient single-layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomials. Because of input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of the eigenvalue ratio (EVR) of the input correlation matrix on the equalizer performance has been studied. A comparison of the computational complexity involved for the three ANN structures is also provided.
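    The trigonometric functional expansion at the heart of a FLANN can be sketched directly. The expansion order and feature layout here are one common choice, not necessarily the paper's exact scheme:

```python
import numpy as np

def flann_expand(x, order=2):
    """Functional-link expansion: augment the input vector x with
    trigonometric terms sin(k*pi*x), cos(k*pi*x) for k = 1..order."""
    feats = [x]
    for k in range(1, order + 1):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.concatenate(feats)

# A single linear layer then operates on the expanded pattern, so the network
# stays linear in its weights while its decision boundary is nonlinear in x.
x = np.array([0.5, -0.25])
phi = flann_expand(x)        # 2 inputs -> 10 features for order = 2
```

Because the model is linear in the expanded features, the weights can be trained with the same LMS update as a conventional linear equalizer.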

  3. Free vibration of rectangular plates with a small initial curvature

    NASA Technical Reports Server (NTRS)

    Adeniji-Fashola, A. A.; Oyediran, A. A.

    1988-01-01

    The method of matched asymptotic expansions is used to solve the transverse free vibration of a slightly curved, thin rectangular plate. Analytical results for natural frequencies and mode shapes are presented in the limit when the dimensionless bending rigidity, epsilon, is small compared with the in-plane forces. Results for different boundary conditions are obtained when the initial deflection is: (1) a polynomial in both directions, (2) the product of a polynomial and a trigonometric function, and (3) arbitrary. For the arbitrary initial deflection case, the Fourier series technique is used to define the initial deflection. The results obtained show that the natural frequencies of vibration of slightly curved plates coincide with those of perfectly flat, prestressed rectangular plates. However, the eigenmodes are very different from those of initially flat prestressed rectangular plates. The total deflection is found to be the sum of the initial deflection, the deflection resulting from the solution of the flat plate problem, and the deflection resulting from the static problem.

  4. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first-degree polynomial) approximation of genotypic and environmental effects to a second-degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad-sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113

  5. Breathers, quasi-periodic and travelling waves for a generalized ?-dimensional Yu-Toda-Sasa-Fukayama equation in fluids

    NASA Astrophysics Data System (ADS)

    Hu, Wen-Qiang; Gao, Yi-Tian; Zhao, Chen; Jia, Shu-Liang; Lan, Zhong-Zhou

    2017-07-01

    Under investigation in this paper is a generalized ?-dimensional Yu-Toda-Sasa-Fukayama equation for the interfacial wave in a two-layer fluid or the elastic quasi-plane wave in a liquid lattice. By virtue of the binary Bell polynomials, bilinear form of this equation is obtained. With the help of the bilinear form, N-soliton solutions are obtained via the Hirota method, and a bilinear Bäcklund transformation is derived to verify the integrability. Homoclinic breather waves are obtained according to the homoclinic test approach, which is not only the space-periodic breather but also the time-periodic breather via the graphic analysis. Via the Riemann theta function, quasi one-periodic waves are constructed, which can be viewed as a superposition of the overlapping solitary waves, placed one period apart. Finally, soliton-like, periodical triangle-type, rational-type and solitary bell-type travelling waves are obtained by means of the polynomial expansion method.

  6. Galaxy halo expansions: a new biorthogonal family of potential-density pairs

    NASA Astrophysics Data System (ADS)

    Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn; Erkal, Denis

    2018-05-01

    Efficient expansions of the gravitational field of (dark) haloes have two main uses in the modelling of galaxies: first, they provide a compact representation of numerically constructed (or real) cosmological haloes, incorporating the effects of triaxiality, lopsidedness or other distortion. Secondly, they provide the basis functions for self-consistent field expansion algorithms used in the evolution of N-body systems. We present a new family of biorthogonal potential-density pairs constructed using the Hankel transform of the Laguerre polynomials. The lowest-order density basis functions are double-power-law profiles cusped like ρ ~ r^(-2+1/α) at small radii, with asymptotic density fall-off like ρ ~ r^(-3-1/(2α)). Here, α is a parameter satisfying α ≥ 1/2. The family therefore spans the range of inner density cusps found in numerical simulations, but has much shallower - and hence more realistic - outer slopes than the corresponding members of the only previously known family, deduced by Zhao and exemplified by Hernquist & Ostriker. When α = 1, the lowest-order density profile has an inner density cusp of ρ ~ r^-1 and an outer density slope of ρ ~ r^-3.5, similar to the famous Navarro, Frenk & White (NFW) model. For this reason, we demonstrate that our new expansion provides a more accurate representation of flattened NFW haloes than the competing Hernquist-Ostriker expansion. We utilize our new expansion by analysing a suite of numerically constructed haloes and providing the distributions of the expansion coefficients.

  7. Towards robust quantification and reduction of uncertainty in hydrologic predictions: Integration of particle Markov chain Monte Carlo and factorial polynomial chaos expansion

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.

    2017-05-01

    Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between data assimilation using the PMCMC and uncertainty propagation using the FPCE through a straightforward transformation of the posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter, with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to the temporal and spatial dynamics of hydrologic processes.

  8. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring the reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, but it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as those for ultrasonic NDT, the empirical information needed for POD methods can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings in MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat-bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
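    The OLS variant of non-intrusive polynomial chaos can be sketched for a single standard-normal input: regress model evaluations on probabilists' Hermite polynomials to obtain a cheap surrogate. The `model` function below is an assumed stand-in for the expensive simulation, not UTSim2:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(x):
    # assumed stand-in for an expensive physics-based simulation
    return np.exp(0.3 * x) + 0.1 * x**2

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)        # samples of the standard-normal input
y = model(xi)

degree = 5
Psi = hermevander(xi, degree)        # design matrix of He_0(xi) .. He_5(xi)
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)   # OLS estimate of PCE coefficients

def surrogate(x):
    # cheap polynomial evaluation in place of the simulation
    return hermevander(np.asarray(x, dtype=float).ravel(), degree) @ coef
```

    With a Hermite basis and a standard-normal input, the leading coefficient estimates the output mean, so output statistics come almost for free once the regression is done.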

  9. Response analysis of holography-based modal wavefront sensor.

    PubMed

    Dong, Shihao; Haist, Tobias; Osten, Wolfgang; Ruppel, Thomas; Sawodny, Oliver

    2012-03-20

    The crosstalk problem of holography-based modal wavefront sensing (HMWS) becomes more severe with increasing aberration. In this paper, crosstalk effects on the sensor response are analyzed statistically for typical aberrations due to atmospheric turbulence. For a specific turbulence strength, we optimized the sensor by adjusting the detector radius and the encoded phase bias for each Zernike mode. Calibrated response curves of low-order Zernike modes were further utilized to improve the sensor accuracy. The simulation results validated our strategy. The number of iterations for obtaining a residual RMS wavefront error of 0.1λ is reduced from 18 to 3. © 2012 Optical Society of America

  10. Corridor of existence of thermodynamically consistent solution of the Ornstein-Zernike equation.

    PubMed

    Vorob'ev, V S; Martynov, G A

    2007-07-14

    We obtain the exact equation for a correction to the Ornstein-Zernike (OZ) equation based on the assumption of the uniqueness of thermodynamic functions. We show that this equation reduces to a differential equation with one arbitrary parameter for the hard-sphere model. The compressibility factor within narrow limits of this parameter's variation can either coincide with one of the formulas obtained on the basis of analytical solutions of the OZ equation or assume any intermediate value lying in a corridor between these solutions. In particular, we find the value of this parameter for which the thermodynamically consistent compressibility factor corresponds to the Carnahan-Starling formula.
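    For reference, the Carnahan-Starling compressibility factor mentioned above has the closed form Z = (1 + η + η² − η³)/(1 − η)³ in the packing fraction η; a minimal implementation:

```python
def carnahan_starling_Z(eta):
    """Hard-sphere compressibility factor Z = PV/(NkT) at packing fraction eta."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3
```

    It reduces to the ideal-gas value Z = 1 as η → 0 and grows monotonically toward the dense-packing limit.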

  11. Image object recognition based on the Zernike moment and neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Jianwei; Wang, Ling; Huang, Fukan; Zhou, Liangzhu

    1998-03-01

    This paper first gives a comprehensive discussion of the concept of the artificial neural network, its research methods, and its relation to information processing. On the basis of this discussion, we expound the mathematical similarity between artificial neural networks and information processing. The paper then presents a new method of image recognition based on invariant features and a neural network, using the image Zernike transform. The method not only is invariant to rotation, shift, and scale of the image object, but also has good fault tolerance and robustness. It is also compared with a statistical classifier and the invariant-moments recognition method.
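    A minimal sketch of the Zernike moments underlying such invariant features, discretized on the unit disk inscribed in the image (the order (n, m) = (2, 2) is an arbitrary choice): the magnitude |Z_nm| is invariant under image rotation, which the 90-degree rotation check illustrates.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square image, unit disk inscribed.
    Requires |m| <= n and n - |m| even."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    mm = abs(m)
    R = np.zeros_like(rho)                 # radial polynomial R_n^|m|(rho)
    for k in range((n - mm) // 2 + 1):
        c = (-1)**k * factorial(n - k) / (
            factorial(k) * factorial((n + mm)//2 - k) * factorial((n - mm)//2 - k))
        R += c * rho**(n - 2*k)
    V = R * np.exp(-1j * m * theta)        # Zernike basis function V_nm
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask]) * (2.0 / N)**2

rng = np.random.default_rng(1)
img = rng.random((64, 64))
a = abs(zernike_moment(img, 2, 2))
b = abs(zernike_moment(np.rot90(img), 2, 2))   # rotation leaves |Z_nm| unchanged
```

    A rotation only multiplies Z_nm by a unit-modulus phase, so feeding magnitudes to a classifier yields the rotation invariance claimed above.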

  12. Alzheimer's Disease Detection by Pseudo Zernike Moment and Linear Regression Classification.

    PubMed

    Wang, Shui-Hua; Du, Sidan; Zhang, Yin; Phillips, Preetha; Wu, Le-Nan; Chen, Xian-Qing; Zhang, Yu-Dong

    2017-01-01

    This study presents an improved method based on "Gorji et al. Neuroscience. 2015" by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches.
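    Linear regression classification itself is simple: a test feature vector is regressed onto the training samples of each class, and the class with the smallest reconstruction residual wins. A sketch with synthetic data (the 256-dimensional features and class subspaces are assumptions for illustration, not the paper's pseudo Zernike features):

```python
import numpy as np

def lrc_predict(y, class_mats):
    """Linear regression classification: pick the class whose training
    samples reconstruct the test vector y with the smallest residual."""
    residuals = []
    for X in class_mats:               # X: (n_features, n_class_samples)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals.append(np.linalg.norm(y - X @ beta))
    return int(np.argmin(residuals))

rng = np.random.default_rng(2)
# two synthetic classes lying near different 3-dimensional subspaces of R^256
B0 = rng.normal(size=(256, 3))
B1 = rng.normal(size=(256, 3))
X0 = B0 @ rng.normal(size=(3, 10))
X1 = B1 @ rng.normal(size=(3, 10))
test_vec = B1 @ rng.normal(size=3)     # drawn from class 1's subspace
pred = lrc_predict(test_vec, [X0, X1])
```

    Because each class is scored by how well its own samples span the test vector, no iterative training is needed, which suits small medical datasets.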

  13. Effect of aberrations in human eye on contrast sensitivity function

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Wang, Feng-lin; Wang, Zhao-qi

    2011-06-01

    Quantitative analysis of the effect of aberrations in the human eye on vision has important clinical value for the correction of aberrations. The wavefront aberrations of human eyes were measured with a Hartmann-Shack wavefront sensor, and the modulation transfer function (MTF) was computed from the wavefront aberrations. The contrast sensitivity function (CSF) was obtained from the MTF and the retinal aerial image modulation (AIM). It is shown that the 2nd- through 6th-order Zernike aberrations degrade the contrast sensitivity function, and that a high contrast sensitivity function can be obtained when they are corrected.
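    The MTF-from-wavefront computation can be sketched with Fourier optics: form the pupil function with the aberration phase, take the squared modulus of its Fourier transform for the PSF, and the modulus of the PSF's transform for the MTF. The pupil sampling and the 0.25-wave defocus below are assumed values for illustration:

```python
import numpy as np

N = 256
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
rho = np.hypot(x, y)
pupil = (rho <= 1.0).astype(float)

# Zernike defocus Z_2^0 = sqrt(3) (2 rho^2 - 1); 0.25 waves RMS is assumed
W = 0.25 * np.sqrt(3) * (2 * rho**2 - 1) * pupil
P = pupil * np.exp(2j * np.pi * W)

def mtf_from_pupil(P):
    psf = np.abs(np.fft.fft2(P))**2          # point-spread function
    mtf = np.abs(np.fft.fft2(psf))           # modulus of the OTF
    return mtf / mtf[0, 0]

mtf_ab = mtf_from_pupil(P)       # aberrated MTF
mtf_dl = mtf_from_pupil(pupil)   # diffraction-limited bound
```

    The aberrated MTF never exceeds the diffraction-limited one, which is exactly why Zernike aberrations lower the contrast sensitivity function derived from it.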

  14. Instantaneous phase mapping deflectometry for dynamic deformable mirror characterization

    NASA Astrophysics Data System (ADS)

    Trumper, Isaac; Choi, Heejoo

    2017-09-01

    We present an instantaneous phase mapping deflectometry (PMD) system in the context of measuring a continuous surface deformable mirror (DM). Deflectometry has a high dynamic range, enabling the full range of surfaces generated by the DM to be measured. The recent development of an instantaneous PMD system leverages the simple setup of the PMD system to measure dynamic objects with accuracy similar to an interferometer. To demonstrate the capabilities of this technology, we perform a linearity measurement of the actuator motion in a continuous surface DM, which is critical for closed loop control in adaptive optics applications. We measure the entire set of actuators across the DM as they traverse their full range of motion with a Shack-Hartman wavefront sensor, thereby obtaining the influence function. Given the influence function of each actuator, the DM can produce specific Zernike terms on its surface. We then measure the linearity of the Zernike modes available in the DM software using the instantaneous PMD system. By obtaining the relationship between modes, we can more accurately generate surface profiles composed of Zernike terms. This ability is useful for other dynamic freeform metrology applications that utilize the DM as a null component.

  15. Effect of fringe-artifact correction on sub-tomogram averaging from Zernike phase-plate cryo-TEM

    PubMed Central

    Kishchenko, Gregory P.; Danev, Radostin; Fisher, Rebecca; He, Jie; Hsieh, Chyongere; Marko, Michael; Sui, Haixin

    2015-01-01

    Zernike phase-plate (ZPP) imaging greatly increases contrast in cryo-electron microscopy; however, fringe artifacts appear in the images. A computational de-fringing method has been proposed, but it has not been widely employed, perhaps because the importance of de-fringing has not been clearly demonstrated. For testing purposes, we employed Zernike phase-plate imaging in a cryo-electron tomographic study of radial-spoke complexes attached to microtubule doublets. We found that the contrast enhancement by ZPP imaging made nonlinear denoising insensitive to the filtering parameters, such that simple low-frequency band-pass filtering made the same improvement in map quality. We employed sub-tomogram averaging, which compensates for the effect of the “missing wedge” and considerably improves map quality. We found that fringes (caused by the abrupt cut-on of the central hole in the phase plate) can lead to incorrect representation of a structure that is well known from the literature. The expected structure was restored by amplitude scaling, as proposed in the literature. Our results show that de-fringing is an important part of image processing for cryo-electron tomography of macromolecular complexes with ZPP imaging. PMID:26210582

  16. Sum-of-squares of polynomials approach to nonlinear stability of fluid flows: an example of application

    PubMed Central

    Tutty, O.

    2015-01-01

    With the goal of providing the first example of application of a recently proposed method, and thus demonstrating its ability to give results in principle, the global stability of a version of the rotating Couette flow is examined. The flow depends on the Reynolds number and a parameter characterizing the magnitude of the Coriolis force. By converting the original Navier–Stokes equations to a finite-dimensional uncertain dynamical system using a partial Galerkin expansion, high-degree polynomial Lyapunov functionals were found by sum-of-squares-of-polynomials optimization. It is demonstrated that the proposed method allows obtaining the exact global stability limit for this flow in a range of values of the parameter characterizing the Coriolis force. Outside this range a lower bound for the global stability limit was obtained, which is still better than the energy stability limit. In the course of the study, several results meaningful in the context of the method used were also obtained. Overall, the results demonstrate the applicability of the recently proposed approach to the global stability of fluid flows. To the best of our knowledge, this is the first case in which the global stability of a fluid flow has been proved by a generic method for a value of the Reynolds number greater than that which could be achieved with the energy stability approach. PMID:26730219

  17. A polynomial chaos ensemble hydrologic prediction system for efficient parameter inference and robust uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.

    2015-11-01

    This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of running more than 10 times faster than the hydrologic model without compromising predictive accuracy.

  18. Rational approximations of f(R) cosmography through Padé polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of considering Taylor treatments, which turn out to be non-predictive as soon as z>1, we take into account the Padé rational approximations, which consist in performing expansions that converge at high-redshift domains. Particularly, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density, and equation of state, characterizing the universe evolution at redshifts much higher than standard cosmographic approaches. We therefore match these outcomes with small-redshift constraints obtained by framing the f(R) cosmology through Taylor series around z ≃ 0. This gives rise to a calibration procedure at small redshift that enables the definition of polynomial approximations up to z ≃ 10. Last but not least, we show discrepancies with the standard cosmological model which go towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.
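    The generic construction of a Padé approximant from Taylor coefficients reduces to a small linear solve; a sketch with a [2/2] approximant of e^z (not the paper's f(z) series), which beats the same-order Taylor polynomial at z = 1:

```python
import numpy as np
from math import factorial

def pade(a, m, n):
    """[m/n] Pade approximant from Taylor coefficients a[0..m+n].
    Returns numerator p (degree m) and denominator q (degree n, q[0] = 1)."""
    a = np.asarray(a, dtype=float)
    # linear system for the denominator coefficients q_1..q_n
    A = np.array([[a[m + i - j] if m + i - j >= 0 else 0.0
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -a[m + 1:m + n + 1])))
    # numerator coefficients follow by convolution
    p = np.array([sum(q[j] * a[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

a = [1.0 / factorial(k) for k in range(5)]     # Taylor coefficients of exp(z)
p, q = pade(a, 2, 2)

def eval_pade(z):
    return np.polyval(p[::-1], z) / np.polyval(q[::-1], z)

taylor_err = abs(sum(a[k] * 1.0**k for k in range(5)) - np.e)
pade_err = abs(eval_pade(1.0) - np.e)
```

    The rational form extends the radius of usefulness well beyond the Taylor series, which is the property the cosmographic reconstruction exploits at z > 1.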

  19. Removal of daytime thermal deformations in the GBT active surface via out-of-focus holography

    NASA Astrophysics Data System (ADS)

    Hunter, T. R.; Mello, M.; Nikolic, B.; Mason, B.; Schwab, F.; Ghigo, F.; Dicker, S.

    2009-01-01

    The 100-m diameter Green Bank Telescope (GBT) was built with an active surface of 2209 actuators in order to achieve and maintain an accurate paraboloidal shape. While much of the large-scale gravitational deformation of the surface can be described by a finite element model, a significant uncompensated gravitational deformation exists. In recent years, the elevation-dependence of this residual deformation has been successfully measured during benign nighttime conditions using the out-of-focus (OOF) holography technique (Nikolic et al, 2007, A&A 465, 685). Parametrized by a set of Zernike polynomials, the OOF model correction was implemented into the active surface and has been applied during all high-frequency observations since Fall 2006, yielding a consistent gain curve that is flat with elevation. However, large-scale thermal deformation of the surface has remained a problem for daytime high-frequency observations. OOF holography maps taken throughout a clear winter day indicate that surface deformations become significant whenever the Sun is above 10 degrees elevation, but that they change slowly while tracking a single source. In this paper, we describe a further improvement to the GBT active surface that allows an observer to measure and compensate for the thermal surface deformation using the OOF technique. In order to support high-frequency observers, "AutoOOF" is a new GBT Astrid procedure that acquires a quick set of in-focus and out-of-focus on-the-fly continuum maps on a quasar using the currently active receiver. Upon completion of the maps, the data analysis software is launched automatically which produces and displays the surface map along with a set of Zernike coefficients. These coefficients are then sent to the active surface manager which combines them with the existing gravitational Zernike terms and FEM in order to compute the total active surface correction. 
The end-to-end functionality has been tested on the sky at Q-band and Ka-band during several mornings and afternoons. The telescope beam profiles on a bright quasar typically change from slightly asymmetric to Gaussian, the peak antenna temperature increases, and significant sidelobes (when present) are eliminated. This technique has the potential to bring the daytime GBT aperture efficiency at high frequencies closer to its nighttime level. The total time to run the procedure and apply the corrections is about 20 minutes. The time interval over which the solutions remain valid and helpful will likely vary with the weather conditions and program of observations, and can be better evaluated once a larger dataset has been acquired. We are presently researching the OOF technique using MUSTANG, the first 90 GHz instrument on the GBT. MUSTANG is a 64-pixel bolometer camera, presently operating as a shared-risk science instrument. The use of multi-pixel MUSTANG maps has the potential to significantly speed the process of measuring and correcting thermal deformations to the surface during 90 GHz observations. Of course, the efficiency of 90 GHz observations with the GBT is also limited by the small-scale surface roughness due to errors in the initial setting of the actuator zero points and the individual panel corners. We are planning to measure these errors in detail with traditional holography in the near future.

  1. Spectral solver for multi-scale plasma physics simulations with dynamically adaptive number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec

    2015-06-01

    A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% savings of total simulation time in the Landau and two-stream instability test cases, respectively.
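    The Hermite representation at the heart of such spectral solvers can be sketched in one velocity dimension: project a drifting Maxwellian onto weighted probabilists' Hermite functions and reconstruct it from a truncated expansion (the drift and truncation order below are assumed values; the Vlasov-Poisson time advance is not shown):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

u = 0.5                                  # assumed drift velocity
v = np.linspace(-8.0, 8.0, 2001)
dv = v[1] - v[0]
f = np.exp(-(v - u)**2 / 2) / np.sqrt(2 * np.pi)   # drifting Maxwellian

NH = 12                                  # truncation order (assumed)
He = [hermeval(v, [0]*n + [1]) for n in range(NH)]   # He_n(v)

# projection: c_n = (1/n!) \int f(v) He_n(v) dv; analytically c_n = u^n / n!
c = np.array([dv * np.sum(f * He[n]) / factorial(n) for n in range(NH)])

# reconstruction from the truncated expansion
f_rec = sum(c[n] * He[n] for n in range(NH)) * np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)
```

    The low-order coefficients are the fluid moments (density, momentum, ...), which is the sense in which the expansion unifies the fluid and kinetic pictures: truncating at small NH recovers a fluid model, while raising NH adds kinetic detail.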

  2. Measurement of the n-p elastic scattering angular distribution at E_n = 14.9 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boukharouba, N.; Bateman, F. B.; Carlson, A. D.

    2010-07-15

    The relative differential cross section for the elastic scattering of neutrons by protons was measured at an incident neutron energy E_n = 14.9 MeV and for center-of-mass scattering angles ranging from about 60 deg. to 180 deg. Angular distribution values were obtained from the normalization of the integrated data to the n-p total elastic scattering cross section. Comparisons of the normalized data to the predictions of the Arndt et al. phase-shift analysis, those of the Nijmegen group, and with the ENDF/B-VII.0 evaluation are sensitive to the value of the total elastic scattering cross section used to normalize the data. The results of a fit to a first-order Legendre polynomial expansion are in good agreement in the backward scattering hemisphere with the predictions of the Arndt et al. phase-shift analysis, those of the Nijmegen group, and to a lesser extent, with the ENDF/B-VII.0 evaluation. A fit to a second-order expansion is in better agreement with the ENDF/B-VII.0 evaluation than with the other predictions, in particular when the total elastic scattering cross section given by Arndt et al. and the Nijmegen group is used to normalize the data. A Legendre polynomial fit to the existing n-p scattering data in the 14 MeV energy region, excluding the present measurement, showed that a best fit is obtained for a second-order expansion. Furthermore, the Kolmogorov-Smirnov test confirms the general agreement in the backward scattering hemisphere and shows that significant differences between the database and the predictions occur in the angular range between 60 deg. and 120 deg. and below 20 deg. Although there is good overall agreement in the backward scattering hemisphere, more precise small-angle scattering data and a better definition of the total elastic cross section are needed for an accurate determination of the shape and magnitude of the angular distribution.
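    The Legendre fits described above reduce to standard linear least squares in mu = cos(theta). A sketch with a synthetic second-order distribution (the coefficients are assumed for illustration, not the measured data) shows why a first-order expansion leaves a residual that the second-order fit removes:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# synthetic relative angular distribution over the measured range (60-180 deg)
mu = np.cos(np.deg2rad(np.arange(60.0, 181.0, 5.0)))   # mu = cos(theta_cm)
true_coef = [1.0, 0.05, 0.08]                          # a0 + a1 P1 + a2 P2 (assumed)
sigma = leg.legval(mu, true_coef)

fit1 = leg.legfit(mu, sigma, 1)        # first-order Legendre expansion
fit2 = leg.legfit(mu, sigma, 2)        # second-order Legendre expansion

res1 = np.linalg.norm(sigma - leg.legval(mu, fit1))
res2 = np.linalg.norm(sigma - leg.legval(mu, fit2))    # essentially zero
```

    Comparing residuals of fits of increasing order is the same model-selection logic the abstract describes for deciding between the first- and second-order expansions.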

  3. Unconditionally stable WLP-FDTD method for the modeling of electromagnetic wave propagation in gyrotropic materials.

    PubMed

    Li, Zheng-Wei; Xi, Xiao-Li; Zhang, Jin-Sheng; Liu, Jiang-fan

    2015-12-14

    The unconditionally stable finite-difference time-domain (FDTD) method based on field expansion with weighted Laguerre polynomials (WLPs) is applied to model electromagnetic wave propagation in gyrotropic materials. The conventional Yee cell is modified to have the tightly coupled current density components located at the same spatial position. The perfectly matched layer (PML) is formulated in a stretched-coordinate (SC) system with the complex-frequency-shifted (CFS) factor to achieve good absorption performance. Numerical examples are shown to validate the accuracy and efficiency of the proposed method.
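    The field expansion underlying WLP-FDTD uses weighted Laguerre basis functions, which are orthonormal on [0, ∞). A minimal sketch of expanding and reconstructing a transient signal in that basis (the marching-on-in-degree FDTD update itself is not shown; the test signal is an assumed example):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def phi(n, t):
    """Weighted Laguerre basis e^(-t/2) L_n(t), orthonormal on [0, inf)."""
    return np.exp(-t / 2) * lagval(t, [0]*n + [1])

t = np.linspace(0.0, 60.0, 6001)
dt = t[1] - t[0]
f = t * np.exp(-t / 2)                  # transient signal; equals phi_0 - phi_1

NL = 6
c = np.array([dt * np.sum(f * phi(n, t)) for n in range(NL)])   # projections
f_rec = sum(c[n] * phi(n, t) for n in range(NL))                # reconstruction
```

    Because time dependence is carried entirely by these basis functions, the field update becomes a recursion over polynomial degree rather than a time-stepping loop, which is what removes the CFL stability limit.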

  4. Effect of internal lubricating agents of disposable soft contact lenses on higher-order aberrations after blinking.

    PubMed

    Koh, Shizuka; Maeda, Naoyuki; Hamano, Takashi; Hirohara, Yoko; Mihashi, Toshifumi; Hori, Yuichi; Hosohata, Jun; Fujikado, Takashi; Tano, Yasuo

    2008-03-01

    To investigate whether the polymer composition of disposable soft contact lenses affects sequential changes in higher-order aberrations (HOAs). Fifteen subjects who wore disposable soft contact lenses with dryness-related symptoms and 15 non-contact lens wearers were enrolled in this study. Ocular HOAs were measured for 60 seconds in each subject wearing a disposable etafilcon A lens (conventional lens) or a disposable etafilcon A lens with polyvinyl pyrrolidone (PVP) (lens with PVP) after 1 hour of contact lens wear. During the measurement, subjects were forced to blink every 10 seconds. The aberration data were analyzed in the central 4-mm diameter up to the sixth-order Zernike polynomials. Total HOAs, the fluctuation index (FI), and the stability index (SI) of the total HOAs over time were compared between the two groups. The subjective ocular dryness also was scored. In symptomatic wearers of disposable soft contact lenses, the total HOAs, the FI, and the SI with the lens with PVP were significantly (P=0.013, P=0.014, P=0.019, respectively) lower than with the conventional lens, whereas a significant (P=0.018) difference between the two lenses was observed only in the FI in non-contact lens wearers. Subjective ocular dryness with the lens with PVP significantly decreased compared with the conventional lens in both groups. Sequential measurement of HOAs may be a useful objective method to evaluate the effect of internal lubricating agents of disposable soft contact lenses on optical quality.

  5. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens

    PubMed Central

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.

    2015-01-01

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169

  6. Wavefront metrology for coherent hard X-rays by scanning a microsphere.

    PubMed

    Skjønsfjell, Eirik Torbjørn Bakken; Chushkin, Yuriy; Zontone, Federico; Patil, Nilesh; Gibaud, Alain; Breiby, Dag W

    2016-05-16

    Characterization of the wavefront of an X-ray beam is of primary importance for all applications where coherence plays a major role. Imaging techniques based on numerically retrieving the phase from interference patterns often rely on an a priori assumption about the wavefront shape. In Coherent X-ray Diffraction Imaging (CXDI), a planar incoming wave field is often assumed for the inversion of the measured diffraction pattern, which allows retrieving the real-space image via simple Fourier transformation. It is therefore important to know how reliable the plane wave approximation is for describing the real wavefront. Here, we demonstrate that the quantitative wavefront shape and flux distribution of an X-ray beam used for CXDI can be measured by using a micrometer-size metal-coated polymer sphere serving a role similar to that of the hole array in a Hartmann wavefront sensor. The method relies on monitoring the shape and center of the scattered intensity distribution in the far field using a 2D area detector while raster-scanning the microsphere with respect to the incoming beam. The reconstructed X-ray wavefront was found to have a well-defined central region of approximately 16 µm diameter and a weaker, asymmetric intensity distribution extending 30 µm from the beam center. The phase front distortion was primarily spherical, with an effective radius of 0.55 m which matches the distance to the last upstream beam-defining slit, and could be accurately represented by Zernike polynomials.
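    Representing a measured phase front by Zernike polynomials amounts to a least-squares fit of sampled wavefront values against Zernike modes. A sketch with a defocus-dominated (spherical) wavefront, whose coefficients are assumed for illustration:

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike modes (Noll normalization)."""
    return np.stack([
        np.ones_like(rho),                        # piston
        2 * rho * np.cos(theta),                  # tip
        2 * rho * np.sin(theta),                  # tilt
        np.sqrt(3) * (2 * rho**2 - 1),            # defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta),  # astigmatism
    ])

N = 128
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
mask = rho <= 1.0

true_c = np.array([0.0, 0.1, -0.05, 0.8, 0.02])   # defocus-dominated wavefront
basis = zernike_basis(rho, theta)
W = np.tensordot(true_c, basis, axes=1)           # sampled wavefront

# least-squares fit of Zernike coefficients to the wavefront samples
A = basis[:, mask].T                              # (n_pixels, n_modes)
c_fit, *_ = np.linalg.lstsq(A, W[mask], rcond=None)
```

    For a primarily spherical phase front, the defocus coefficient dominates the fit, mirroring the situation reported for the reconstructed X-ray wavefront.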

  7. Large field of view quantitative phase imaging of induced pluripotent stem cells and optical pathlength reference materials

    NASA Astrophysics Data System (ADS)

    Kwee, Edward; Peterson, Alexander; Stinson, Jeffrey; Halter, Michael; Yu, Liya; Majurski, Michael; Chalfoun, Joe; Bajcsy, Peter; Elliott, John

    2018-02-01

    Induced pluripotent stem cells (iPSCs) are reprogrammed cells that can have heterogeneous biological potential. Quality assurance metrics of reprogrammed iPSCs will be critical to ensure reliable use in cell therapies and personalized diagnostic tests. We present a quantitative phase imaging (QPI) workflow that includes acquiring, processing, and stitching multiple adjacent image tiles across a large field of view (LFOV) of a culture vessel. Low-magnification image tiles (10x) were acquired with a Phasics SID4BIO camera on a Zeiss microscope. iPSC cultures were maintained using a custom stage incubator on an automated stage. We implement an image acquisition strategy that compensates for non-flat illumination wavefronts to enable imaging of an entire well plate, including the meniscus region normally obscured in Zernike phase contrast imaging. Polynomial fitting and background mode correction were implemented to enable comparability and stitching between multiple tiles. LFOV imaging of reference materials indicated that the image acquisition and processing strategies did not affect quantitative phase measurements across the LFOV. Analysis of iPSC colony images demonstrated that the mass doubling time was significantly different from the area doubling time. These measurements were benchmarked with prototype microsphere beads and etched-glass gratings with specified spatial dimensions, designed to be QPI reference materials with optical pathlength shifts suitable for cell microscopy. This QPI workflow and the use of reference materials can provide a non-destructive, traceable imaging method for novel iPSC heterogeneity characterization.
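    The polynomial fitting and background correction step can be sketched as a least-squares fit of a smooth background over a phase tile, which is then subtracted so adjacent tiles become comparable. The bilinear background model and the synthetic "cell" signal below are assumptions for illustration:

```python
import numpy as np

N = 100
y, x = np.mgrid[0:1:N*1j, 0:1:N*1j]

# simulated phase tile: a compact "cell" signal on a smooth bilinear background
background = 0.8 + 0.5 * x - 0.3 * y + 0.2 * x * y
signal = np.zeros_like(x)
signal[40:50, 40:50] = 1.0
tile = background + signal

# least-squares fit of the bilinear background model over the whole tile
A = np.stack([np.ones_like(x), x, y, x * y], axis=-1).reshape(-1, 4)
coef, *_ = np.linalg.lstsq(A, tile.ravel(), rcond=None)
flattened = tile - (A @ coef).reshape(N, N)       # background-corrected tile
```

    After subtraction, the slowly varying illumination term is nearly flat while the compact signal survives, which is the property that makes stitched tiles quantitatively consistent.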

  8. Computational adaptive optics for broadband optical interferometric tomography of biological tissue

    NASA Astrophysics Data System (ADS)

    Boppart, Stephen A.

    2015-03-01

    High-resolution real-time tomography of biological tissues is important for many areas of biological investigations and medical applications. Cellular level optical tomography, however, has been challenging because of the compromise between transverse imaging resolution and depth-of-field, the system and sample aberrations that may be present, and the low imaging sensitivity deep in scattering tissues. The use of computed optical imaging techniques has the potential to address several of these long-standing limitations and challenges. Two related techniques are interferometric synthetic aperture microscopy (ISAM) and computational adaptive optics (CAO). Through three-dimensional Fourier-domain resampling, in combination with high-speed OCT, ISAM can be used to achieve high-resolution in vivo tomography with enhanced depth sensitivity over a depth-of-field extended by more than an order-of-magnitude, in real time. Subsequently, aberration correction with CAO can be performed in a tomogram, rather than to the optical beam of a broadband optical interferometry system. Based on principles of Fourier optics, aberration correction with CAO is performed on a virtual pupil using Zernike polynomials, offering the potential to augment or even replace the more complicated and expensive adaptive optics hardware with algorithms implemented on a standard desktop computer. Interferometric tomographic reconstructions are characterized with tissue phantoms containing sub-resolution scattering particles, and in both ex vivo and in vivo biological tissue. This review will collectively establish the foundation for high-speed volumetric cellular-level optical interferometric tomography in living tissues.
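The virtual-pupil correction that CAO performs can be illustrated with a toy example. This is a hedged sketch, not the authors' algorithm: it assumes a square-sampled complex field, models the aberration with a single Zernike term (defocus, Z_2^0), and applies the conjugate phase on the Fourier-domain pupil.

```python
import numpy as np

def zernike_defocus(n):
    """Zernike defocus Z_2^0 = sqrt(3) * (2*rho^2 - 1) on an n x n unit pupil."""
    c = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(c, c)
    rho2 = X**2 + Y**2
    Z = np.sqrt(3) * (2 * rho2 - 1)
    Z[rho2 > 1] = 0.0          # no phase outside the pupil
    return Z

def correct_aberration(field, coeff):
    """Apply a phase correction of -coeff * Z_2^0 on the virtual pupil."""
    pupil = np.fft.fftshift(np.fft.fft2(field))
    pupil *= np.exp(-1j * coeff * zernike_defocus(field.shape[0]))
    return np.fft.ifft2(np.fft.ifftshift(pupil))
```

Because the correction is a pure phase conjugation in the pupil, a field aberrated with a known coefficient is recovered exactly when that coefficient is applied with opposite sign.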

  9. Computational adaptive optics for broadband interferometric tomography of tissues and cells

    NASA Astrophysics Data System (ADS)

    Adie, Steven G.; Mulligan, Jeffrey A.

    2016-03-01

    Adaptive optics (AO) can shape aberrated optical wavefronts to physically restore the constructive interference needed for high-resolution imaging. With access to the complex optical field, however, many functions of optical hardware can be achieved computationally, including focusing and the compensation of optical aberrations to restore the constructive interference required for diffraction-limited imaging performance. Holography, which employs interferometric detection of the complex optical field, was developed based on this connection between hardware and computational image formation, although this link has only recently been exploited for 3D tomographic imaging in scattering biological tissues. This talk will present the underlying imaging science behind computational image formation with optical coherence tomography (OCT) -- a beam-scanned version of broadband digital holography. Analogous to hardware AO (HAO), we demonstrate computational adaptive optics (CAO) and optimization of the computed pupil correction in 'sensorless mode' (Zernike polynomial corrections with feedback from image metrics) or with the use of 'guide-stars' in the sample. We discuss the concept of an 'isotomic volume' as the volumetric extension of the 'isoplanatic patch' introduced in astronomical AO. Recent CAO results and ongoing work are highlighted to point to the potential biomedical impact of computed broadband interferometric tomography. We also discuss the advantages and disadvantages of HAO vs. CAO for the effective shaping of optical wavefronts, and highlight opportunities for hybrid approaches that synergistically combine the unique advantages of hardware and computational methods for rapid volumetric tomography with cellular resolution.

  10. 3D refraction correction and extraction of clinical parameters from spectral domain optical coherence tomography of the cornea.

    PubMed

    Zhao, Mingtao; Kuo, Anthony N; Izatt, Joseph A

    2010-04-26

    Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral domain-optical coherence tomography (SDOCT) offers potential advantages over Placido ring and Scheimpflug photography based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correction of image data artifacts resulting from 3D refraction of SDOCT light in the cornea and from non-idealities of the scanning system geometry performed as a pre-requisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central cornea optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained using a commercial 859nm SDOCT retinal imaging system with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target. Extraction of these parameters was performed in vivo in 3 patients and compared to commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 Diopters, and the difference observed between the systems averaged 0.1 Diopters between SDOCT and Scheimpflug photography, and 0.6 Diopters between SDOCT and Placido topography.
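The central corneal power extracted from the anterior and posterior radii of curvature can be computed with the standard thick-lens formula. The function below is an illustrative sketch, not the paper's RHSA implementation; the refractive indices are conventional textbook values (air 1.000, cornea 1.376, aqueous 1.336), stated here as assumptions.

```python
def corneal_power(r_ant_mm, r_post_mm, cct_um,
                  n_air=1.000, n_cornea=1.376, n_aqueous=1.336):
    """Central corneal power (diopters) via the thick-lens formula.

    P = P_a + P_p - (d / n_cornea) * P_a * P_p, with surface radii in mm
    and central corneal thickness (cct) in micrometers.
    """
    p_a = (n_cornea - n_air) / (r_ant_mm * 1e-3)       # anterior surface power
    p_p = (n_aqueous - n_cornea) / (r_post_mm * 1e-3)  # posterior surface power
    d = cct_um * 1e-6                                  # thickness in meters
    return p_a + p_p - (d / n_cornea) * p_a * p_p
```

For typical values (anterior radius 7.8 mm, posterior radius 6.5 mm, thickness 550 µm) this yields a power near the expected ~42 D.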

  11. The development of a revised version of multi-center molecular Ornstein-Zernike equation

    NASA Astrophysics Data System (ADS)

    Kido, Kentaro; Yokogawa, Daisuke; Sato, Hirofumi

    2012-04-01

    Ornstein-Zernike (OZ)-type theory is a powerful tool to obtain the 3-dimensional solvent distribution around a solute molecule. Recently, we proposed the multi-center molecular OZ method, which is suitable for parallel computing of 3D solvation structure. The distribution function in this method consists of two components, namely reference and residue parts. Several types of the function were examined as the reference part to investigate the numerical robustness of the method. As a benchmark, the method is applied to water, benzene in aqueous solution, and a single-walled carbon nanotube in chloroform solution. The results indicate that full parallelization is achieved by utilizing the newly proposed reference functions.

  12. 2D/3D registration using a rotation-invariant cost function based on Zernike moments

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Yang, Xinhui; Burgstaller, Wolfgang; Baumann, Bernard; Jacob, Augustinus L.; Niederer, Peter F.; Regazzoni, Pietro; Messmer, Peter

    2004-05-01

    We present a novel in-plane rotation-invariant cost function for 2D/3D registration utilizing projection-invariant transformation properties and the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments. As a result, only five dof have to be optimized, and the number of iterations necessary for registration can be significantly reduced. Results in a phantom study show that an accuracy of approximately 0.7° and 2 mm can be achieved using this method. We conclude that reduction of coupled dof and usage of linearly independent coefficients for cost function evaluation provide interesting new perspectives for the field of 2D/3D registration.
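The rotation invariance that motivates this cost function comes from a basic property of Zernike moments: rotating the image only multiplies the moment A_nm by a phase factor e^(-im*alpha), so |A_nm| is unchanged. The sketch below is an assumption-laden illustration (polar-grid quadrature of a single moment for an analytically defined image), not the paper's implementation.

```python
import numpy as np
from math import factorial

def zernike_moment(f, n, m, nr=200, nt=256):
    """Zernike moment A_nm of f(rho, theta) over the unit disc, computed by
    midpoint quadrature on a polar grid (requires n - |m| even, as for any
    valid Zernike index pair)."""
    rho = (np.arange(nr) + 0.5) / nr
    theta = 2 * np.pi * np.arange(nt) / nt
    R, T = np.meshgrid(rho, theta, indexing="ij")
    # Radial polynomial R_n^m as the standard finite sum.
    Rnm = sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s)
           * factorial((n + abs(m)) // 2 - s)
           * factorial((n - abs(m)) // 2 - s))
        * R ** (n - 2 * s)
        for s in range((n - abs(m)) // 2 + 1)
    )
    kernel = Rnm * np.exp(-1j * m * T) * R   # trailing R is the area element
    dA = (1.0 / nr) * (2 * np.pi / nt)
    return (n + 1) / np.pi * np.sum(f(R, T) * kernel) * dA
```

Evaluating A_22 for an image and for the same image rotated by an arbitrary angle gives identical magnitudes, which is what lets the registration drop the in-plane rotation from the optimized degrees of freedom.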

  13. Calculation of the second term of the exact Green's function of the diffusion equation for diffusion-controlled chemical reactions

    NASA Astrophysics Data System (ADS)

    Plante, Ianik

    2016-01-01

    The exact Green's function of the diffusion equation (GFDE) is often considered to be the gold standard for the simulation of partially diffusion-controlled reactions. As the GFDE with angular dependency is quite complex, the radial GFDE is more often used. Indeed, the exact GFDE is expressed as a Legendre expansion, the coefficients of which are given in terms of an integral comprising Bessel functions. This integral does not seem to have been evaluated analytically in existing literature. While the integral can be evaluated numerically, the Bessel functions make the integral oscillate and convergence is difficult to obtain. Therefore it would be of great interest to evaluate the integral analytically. The first term was evaluated previously, and was found to be equal to the radial GFDE. In this work, the second term of this expansion was evaluated. As this work has shown that the first two terms of the Legendre polynomial expansion can be calculated analytically, it raises the question of the possibility that an analytical solution exists for the other terms.

  14. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. 
We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
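The core of SCgPC can be illustrated in one dimension: evaluate the model at Gauss quadrature nodes and project onto the orthogonal polynomials matched to the input's distribution. The sketch below is a minimal example, not the RAVEN implementation; it assumes a single standard-normal input and uses probabilists' Hermite polynomials, whose squared norms are k!.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(model, order, nquad=None):
    """Project model(xi), xi ~ N(0, 1), onto probabilists' Hermite
    polynomials He_k by Gauss-HermiteE collocation:

        c_k = E[model(xi) * He_k(xi)] / k!
    """
    nquad = nquad or order + 1
    x, w = hermegauss(nquad)       # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2 * np.pi)     # normalize to a probability measure
    coeffs = []
    for k in range(order + 1):
        e = np.zeros(k + 1)
        e[k] = 1.0                 # coefficient vector selecting He_k
        coeffs.append(np.sum(w * model(x) * hermeval(x, e)) / factorial(k))
    return np.array(coeffs)
```

For the quadratic model u(xi) = 1 + 2*xi + xi^2 the exact expansion is 2*He_0 + 2*He_1 + He_2, so the mean is c_0 = 2 and the variance is the weighted sum of squared higher coefficients, c_1^2 * 1! + c_2^2 * 2! = 6.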

  15. Application of the aberration ring test (ARTEMIS) to determine lens quality and predict its lithographic performance

    NASA Astrophysics Data System (ADS)

    Moers, Marco H. P.; van der Laan, Hans; Zellenrath, Mark; de Boeij, Wim; Beaudry, Neil A.; Cummings, Kevin D.; van Zwol, Adriaan; Brecht, Arthur; Willekers, Rob

    2001-09-01

    ARTEMIS™ (Aberration Ring Test Exposed at Multiple Illumination Settings) is a technique to determine in-situ, full-field, low- and high-order lens aberrations. In this paper we analyze the ARTEMIS™ data of PAS5500/750™ DUV Step & Scan systems and its use as a lithographic prediction tool. ARTEMIS™ is capable of determining Zernike coefficients up to Z25 with a 3σ reproducibility ranging from 1.5 to 4.5 nm depending on the aberration type. 3D electric field simulations, which take the extended geometry of the phase-shift feature into account, have been used for an improved treatment of the extraction of the spherical Zernike coefficients. Knowledge of the extracted Zernike coefficients allows an accurate prediction of the lithographic performance of the scanner system. This ability is demonstrated for a two-bar pattern and an isolation pattern. The RMS difference between the ARTEMIS™-based lithographic prediction and the lithographic measurement is 2.5 nm for the two-bar pattern and 3 nm for the isolation pattern. The 3σ reproducibility of the prediction is 2.5 nm for the two-bar pattern and 1 nm for the isolation pattern. This is better than the reproducibility of the lithographic measurements themselves.

  16. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single line trace profilometer which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
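Fitting low-order Zernike terms to a handful of radial traces is a small least-squares problem. The sketch below is a simplified illustration of the idea, not the authors' algorithm: it uses six explicit low-order terms rather than the full 36 from the paper, and assumes each trace samples the surface along one azimuth of the unit disc.

```python
import numpy as np

# A few low-order Zernike terms, defined on the unit disc (rho, theta).
ZERNIKE = [
    lambda r, t: np.ones_like(r),           # piston
    lambda r, t: r * np.cos(t),             # tilt x
    lambda r, t: r * np.sin(t),             # tilt y
    lambda r, t: 2 * r**2 - 1,              # defocus
    lambda r, t: r**2 * np.cos(2 * t),      # astigmatism 0/90
    lambda r, t: r**2 * np.sin(2 * t),      # astigmatism 45
]

def fit_traces(heights, rho, angles_deg):
    """Least-squares Zernike coefficients from radial profilometer traces.

    heights: (n_traces, n_samples) sag values; rho: (n_samples,) radii in
    [0, 1]; angles_deg: one azimuth per trace. Returns one coefficient
    per term in ZERNIKE.
    """
    rows, rhs = [], []
    for h, a in zip(heights, np.deg2rad(angles_deg)):
        t = np.full_like(rho, a)
        rows.append(np.stack([z(rho, t) for z in ZERNIKE], axis=1))
        rhs.append(h)
    A = np.vstack(rows)
    c, *_ = np.linalg.lstsq(A, np.concatenate(rhs), rcond=None)
    return c
```

With six azimuths spread over 0-150° the system is well-determined for these six terms, so coefficients of a synthetic surface are recovered exactly.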

  17. Comparison of organs' shapes with geometric and Zernike 3D moments.

    PubMed

    Broggio, D; Moignier, A; Ben Brahim, K; Gardumi, A; Grandgirard, N; Pierrat, N; Chea, M; Derreumaux, S; Desbrée, A; Boisserie, G; Aubert, B; Mazeron, J-J; Franck, D

    2013-09-01

    The morphological similarity of organs is studied with feature vectors based on geometric and Zernike 3D moments. It is particularly investigated if outliers and average models can be identified. For this purpose, the relative proximity to the mean feature vector is defined; principal coordinate and clustering analyses are also performed. To study the consistency and usefulness of this approach, 17 liver and 76 heart voxel models from several sources are considered. In the liver case, models with similar morphological features are identified. For the limited number of studied cases, the liver of the ICRP male voxel model is identified as a better surrogate than the female one. For hearts, the clustering analysis shows that three heart shapes represent about 80% of the morphological variations. The relative proximity and clustering analysis rather consistently identify outliers and average models. For the two cases, identification of outliers and surrogates of average models is rather robust. However, deeper classification of morphological features is subject to caution and can only be performed after cross analysis of at least two kinds of feature vectors. Finally, the Zernike moments contain all the information needed to reconstruct the studied objects and thus appear as a promising tool to derive statistical organ shapes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
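The relative proximity to the mean feature vector used above to flag outliers and average models reduces to a normalized distance. A minimal sketch under that reading (the exact normalization in the paper may differ), assuming feature vectors stacked as rows:

```python
import numpy as np

def relative_proximity(features):
    """Distance of each feature vector to the mean vector, normalized by
    the mean of those distances. Values well above 1 flag outliers; the
    smallest value marks a candidate 'average' model."""
    d = np.linalg.norm(features - features.mean(axis=0), axis=1)
    return d / d.mean()
```

On a tight cluster with one displaced vector, the displaced one gets by far the largest score.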

  18. PSF and MTF comparison of two different surface ablation techniques for laser visual correction

    NASA Astrophysics Data System (ADS)

    Cruz Félix, Angel Sinue; López Olazagasti, Estela; Rosales, Marco A.; Ibarra, Jorge; Tepichín Rodríguez, Eduardo

    2009-08-01

    It is well known that the Zernike expansion of the wavefront aberrations has been extensively used to evaluate the performance of image forming optical systems. Recently, these techniques were adopted in the field of Ophthalmology to evaluate the objective performance of the human ocular system. We have been working in the characterization and evaluation of the performance of normal human eyes; i.e., eyes which do not require any refractive correction (20/20 visual acuity). These data provide us a reference model to analyze Pre- and Post-Operated results from eyes that have been subjected to laser refractive surgery. Two different ablation techniques are analyzed in this work. These techniques were designed to correct the typical refractive errors known as myopia, hyperopia, and presbyopia. When applied to the corneal surface, these techniques provide a focal shift and, in principle, an improvement of the visual performance. These features can be suitably described in terms of the PSF and MTF of the corresponding Pre- and Post-Operated wavefront aberrations. We show the preliminary results of our comparison.
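The PSF and MTF used for the Pre-/Post-Operative comparison follow from standard Fourier optics: the PSF is the squared modulus of the Fourier transform of the pupil function carrying the wavefront aberration, and the MTF is the normalized modulus of the PSF's Fourier transform. A minimal sketch under those textbook definitions (not the authors' evaluation software):

```python
import numpy as np

def psf_mtf(wavefront_waves, pupil_mask):
    """PSF and MTF from a wavefront error map (in waves) over a pupil.

    PSF = |FFT(pupil * exp(2*pi*i*W))|^2, normalized to unit energy;
    MTF = |FFT(PSF)|, normalized to 1 at zero frequency.
    """
    field = pupil_mask * np.exp(2j * np.pi * wavefront_waves)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    psf /= psf.sum()
    mtf = np.abs(np.fft.fft2(psf))
    return psf, mtf / mtf.flat[0]
```

Adding half a wave of defocus lowers the PSF peak relative to the unaberrated pupil (a reduced Strehl ratio), which is exactly the kind of degradation the Pre-/Post-Operated comparison quantifies.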

  19. The determination of the elastodynamic fields of an ellipsoidal inhomogeneity

    NASA Technical Reports Server (NTRS)

    Fu, L. S.; Mura, T.

    1983-01-01

    The determination of the elastodynamic fields of an ellipsoidal inhomogeneity is studied in detail via the eigenstrain approach. A complete formulation and a treatment of both types of eigenstrains for equivalence between the inhomogeneity problem and the inclusion problem are given. This approach is shown to be mathematically identical to other approaches such as the direct volume integral formulation. Expanding the eigenstrains and applied strains in the polynomial form in the position vector and satisfying the equivalence conditions at every point, the governing simultaneous algebraic equations for the unknown coefficients in the eigenstrain expansion are derived. The elastodynamic field outside an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the induced displacement field, as well as the differential and total cross sections are formally given in series expansion form for the case of uniformly distributed eigenstrains.

  20. Atomic Gaussian type orbitals and their Fourier transforms via the Rayleigh expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yükçü, Niyazi

    Gaussian type orbitals (GTOs), which are one of the types of exponential type orbitals (ETOs), are usually used as basis functions in multi-center atomic and molecular integrals to better understand physical and chemical properties of matter. In the Fourier transform method (FTM), the basis functions themselves are not simple to manipulate mathematically, but their Fourier transforms are easier to use. In this work, with the help of the FTM, the Rayleigh expansion, and some properties of unnormalized GTOs, we present new mathematical results for the Fourier transform of GTOs in terms of Laguerre polynomials, hypergeometric and Whittaker functions. Physical and analytical properties of GTOs are discussed and some numerical results are given in a table. Finally, we compare our mathematical results with other known literature results by using a computer program, and details of the evaluation are presented.

  1. On a quadrature formula of Gori and Micchelli

    NASA Astrophysics Data System (ADS)

    Yang, Shijun

    2005-04-01

    Sparked by Bojanov (J. Comput. Appl. Math. 70 (1996) 349), we provide an alternate approach to quadrature formulas based on the zeros of the Chebyshev polynomial of the first kind for any weight function w, introduced and studied in Gori and Micchelli (Math. Comp. 65 (1996) 1567), thereby improving on their observations. Upon expansion of the divided differences, we obtain explicit expressions for the corresponding Cotes coefficients in Gauss-Turan quadrature formulas for I(f;w) and I(fT_n;w) for a Gori-Micchelli weight function. It is also interesting to mention a fact neglected by the literature for about 30 years: as a consequence of the expansion of the divided differences in a special case, the solution of the famous Turan's Problem 26, raised in 1980, was in fact implied by a result of Micchelli and Rivlin (IBM J. Res. Develop. 16 (1972) 372) in 1972. Some concluding comments are made in the final section.

  2. Quasi-periodic Solutions of the Kaup-Kupershmidt Hierarchy

    NASA Astrophysics Data System (ADS)

    Geng, Xianguo; Wu, Lihua; He, Guoliang

    2013-08-01

    Based on solving the Lenard recursion equations and the zero-curvature equation, we derive the Kaup-Kupershmidt hierarchy associated with a 3×3 matrix spectral problem. Resorting to the characteristic polynomial of the Lax matrix for the Kaup-Kupershmidt hierarchy, we introduce a trigonal curve {K}_{m-1} and present the corresponding Baker-Akhiezer function and meromorphic function on it. The Abel map is introduced to straighten out the Kaup-Kupershmidt flows. With the aid of the properties of the Baker-Akhiezer function and the meromorphic function and their asymptotic expansions, we arrive at their explicit Riemann theta function representations. The Riemann-Jacobi inversion problem is achieved by comparing the asymptotic expansion of the Baker-Akhiezer function and its Riemann theta function representation, from which quasi-periodic solutions of the entire Kaup-Kupershmidt hierarchy are obtained in terms of the Riemann theta functions.

  3. Nonperturbative Series Expansion of Green's Functions: The Anatomy of Resonant Inelastic X-Ray Scattering in the Doped Hubbard Model

    NASA Astrophysics Data System (ADS)

    Lu, Yi; Haverkort, Maurits W.

    2017-12-01

    We present a nonperturbative, divergence-free series expansion of Green's functions using effective operators. The method is especially suited for computing correlators of complex operators as a series of correlation functions of simpler forms. We apply the method to study low-energy excitations in resonant inelastic x-ray scattering (RIXS) in doped one- and two-dimensional single-band Hubbard models. The RIXS operator is expanded into polynomials of spin, density, and current operators weighted by fundamental x-ray spectral functions. These operators couple to different polarization channels resulting in simple selection rules. The incident photon energy dependent coefficients help to pinpoint main RIXS contributions from different degrees of freedom. We show in particular that, with parameters pertaining to cuprate superconductors, local spin excitation dominates the RIXS spectral weight over a wide doping range in the cross-polarization channel.

  4. Expansion into lattice harmonics in cubic symmetries

    NASA Astrophysics Data System (ADS)

    Kontrym-Sznajd, G.

    2018-05-01

    On the example of a few sets of sampling directions in the Brillouin zone, this work shows how important the choice of the cubic harmonics is for the quality of approximation of some quantities by a series of such harmonics. These studies led to the following questions: (1) In the case that for a given l there are several independent harmonics, can one use in the expansion only one harmonic with a given l?; (2) How should harmonics be ordered: according to l or, after writing them in terms of (x^4 + y^4 + z^4)^n (x^2 y^2 z^2)^m, according to their degree q = n + m? To enable practical applications of such harmonics, they are constructed in terms of the associated Legendre polynomials up to l = 26. It is shown that electron momentum densities, reconstructed from experimental data for ErGa3 and InGa3, are described much better by harmonics ordered with q.

  5. Efficient modeling of photonic crystals with local Hermite polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucher, C. R.; Li, Zehao; Albrecht, J. D.

    2014-04-21

    Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits.

  6. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.

  7. Protein-protein docking using region-based 3D Zernike descriptors

    PubMed Central

    2009-01-01

    Background Protein-protein interactions are a pivotal component of many biological processes and mediate a variety of functions. Knowing the tertiary structure of a protein complex is therefore essential for understanding the interaction mechanism. However, experimental techniques to solve the structure of the complex are often found to be difficult. To this end, computational protein-protein docking approaches can provide a useful alternative to address this issue. Prediction of docking conformations relies on methods that effectively capture shape features of the participating proteins while giving due consideration to conformational changes that may occur. Results We present a novel protein docking algorithm based on the use of 3D Zernike descriptors as regional features of molecular shape. The key motivation of using these descriptors is their invariance to transformation, in addition to a compact representation of local surface shape characteristics. Docking decoys are generated using geometric hashing, which are then ranked by a scoring function that incorporates a buried surface area and a novel geometric complementarity term based on normals associated with the 3D Zernike shape description. Our docking algorithm was tested on both bound and unbound cases in the ZDOCK benchmark 2.0 dataset. In 74% of the bound docking predictions, our method was able to find a near-native solution (interface C-αRMSD ≤ 2.5 Å) within the top 1000 ranks. For unbound docking, among the 60 complexes for which our algorithm returned at least one hit, 60% of the cases were ranked within the top 2000. Comparison with existing shape-based docking algorithms shows that our method has a better performance than the others in unbound docking while remaining competitive for bound docking cases. Conclusion We show for the first time that the 3D Zernike descriptors are adept in capturing shape complementarity at the protein-protein interface and useful for protein docking prediction. 
Rigorous benchmark studies show that our docking approach has a superior performance compared to existing methods. PMID:20003235

  8. Protein-protein docking using region-based 3D Zernike descriptors.

    PubMed

    Venkatraman, Vishwesh; Yang, Yifeng D; Sael, Lee; Kihara, Daisuke

    2009-12-09

    Protein-protein interactions are a pivotal component of many biological processes and mediate a variety of functions. Knowing the tertiary structure of a protein complex is therefore essential for understanding the interaction mechanism. However, experimental techniques to solve the structure of the complex are often found to be difficult. To this end, computational protein-protein docking approaches can provide a useful alternative to address this issue. Prediction of docking conformations relies on methods that effectively capture shape features of the participating proteins while giving due consideration to conformational changes that may occur. We present a novel protein docking algorithm based on the use of 3D Zernike descriptors as regional features of molecular shape. The key motivation of using these descriptors is their invariance to transformation, in addition to a compact representation of local surface shape characteristics. Docking decoys are generated using geometric hashing, which are then ranked by a scoring function that incorporates a buried surface area and a novel geometric complementarity term based on normals associated with the 3D Zernike shape description. Our docking algorithm was tested on both bound and unbound cases in the ZDOCK benchmark 2.0 dataset. In 74% of the bound docking predictions, our method was able to find a near-native solution (interface C-α RMSD ≤ 2.5 Å) within the top 1000 ranks. For unbound docking, among the 60 complexes for which our algorithm returned at least one hit, 60% of the cases were ranked within the top 2000. Comparison with existing shape-based docking algorithms shows that our method has a better performance than the others in unbound docking while remaining competitive for bound docking cases. We show for the first time that the 3D Zernike descriptors are adept in capturing shape complementarity at the protein-protein interface and useful for protein docking prediction. 
Rigorous benchmark studies show that our docking approach has a superior performance compared to existing methods.

  9. Dynamics of one-dimensional self-gravitating systems using Hermite-Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Barnes, Eric I.; Ragan, Robert J.

    2014-01-01

    The current paradigm for understanding galaxy formation in the Universe depends on the existence of self-gravitating collisionless dark matter. Modelling such dark matter systems has been a major focus of astrophysicists, with much of that effort directed at computational techniques. Not surprisingly, a comprehensive understanding of the evolution of these self-gravitating systems still eludes us, since it involves the collective non-linear dynamics of many-particle systems interacting via long-range forces described by the Vlasov equation. As a step towards developing a clearer picture of collisionless self-gravitating relaxation, we analyse the linearized dynamics of isolated one-dimensional systems near thermal equilibrium by expanding their phase-space distribution functions f(x, v) in terms of Hermite functions in the velocity variable and Legendre functions in the position variable. This approach produces a picture of phase-space evolution in terms of expansion coefficients, rather than spatial and velocity variables. We obtain equations of motion for the expansion coefficients for both test-particle distributions and self-gravitating linear perturbations of thermal equilibrium. N-body simulations of perturbed equilibria are performed and found to be in excellent agreement with the expansion coefficient approach over a duration that depends on the size of the expansion series used.
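
    As a minimal numerical sketch of the projection described above (with illustrative basis sizes and a synthetic test distribution, not the authors' code), one can expand a function f(x, v) in orthonormal Legendre polynomials in x on [-1, 1] and orthonormal Hermite functions in v, and recover a known expansion coefficient by Gauss quadrature:

```python
import math
import numpy as np
from numpy.polynomial import legendre as Leg, hermite as Herm

def psi(n, v):
    """Orthonormal Hermite function psi_n(v) = H_n(v) exp(-v^2/2) / norm."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return Herm.hermval(v, c) * np.exp(-v**2 / 2.0) / norm

def phat(n, x):
    """Orthonormal Legendre polynomial on [-1, 1]."""
    c = np.zeros(n + 1); c[n] = 1.0
    return Leg.legval(x, c) * math.sqrt((2 * n + 1) / 2.0)

xg, wx = Leg.leggauss(32)    # Gauss-Legendre nodes for x on [-1, 1]
vg, wv = Herm.hermgauss(20)  # Gauss-Hermite nodes, weight exp(-v^2)

# synthetic distribution: a single basis mode with a known coefficient
c_true = 0.7
f = c_true * np.outer(phat(2, xg), psi(3, vg))   # f[i, j] = f(x_i, v_j)

def coeff(n, m):
    """Project f onto phat_n(x) psi_m(v) by Gauss quadrature."""
    gv = psi(m, vg) * wv * np.exp(vg**2)   # undo the exp(-v^2) weight
    return np.einsum('i,j,ij->', phat(n, xg) * wx, gv, f)

print(coeff(2, 3))   # recovers 0.7
```

The time evolution of such a system then reduces to ordinary differential equations for the coefficients, exactly as the abstract describes.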

  10. Efficient Computation of Sparse Matrix Functions for Large-Scale Electronic Structure Calculations: The CheSS Library.

    PubMed

    Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi

    2017-10-10

    We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of arbitrary matrix powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
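
    The core idea, expanding a matrix function in Chebyshev polynomials and evaluating it through the three-term recurrence, can be sketched as follows (a dense toy example with the spectrum scaled into [-1, 1]; the library itself works on sparse matrices and is not used here):

```python
import numpy as np

def cheb_coeffs(f, deg):
    """Chebyshev coefficients of f on [-1, 1] via Chebyshev-node interpolation."""
    j = np.arange(deg + 1)
    theta = np.pi * (j + 0.5) / (deg + 1)
    fx = f(np.cos(theta))
    c = np.array([2.0 / (deg + 1) * np.sum(fx * np.cos(k * theta))
                  for k in range(deg + 1)])
    c[0] /= 2.0
    return c

def cheb_matfunc_apply(A, v, coeffs):
    """Approximate f(A) v with the recurrence T_{k+1} = 2 A T_k - T_{k-1}.
    Assumes the spectrum of A lies in [-1, 1]."""
    t_prev, t_curr = v, A @ v
    out = coeffs[0] * t_prev + coeffs[1] * t_curr
    for ck in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * (A @ t_curr) - t_prev
        out += ck * t_curr
    return out

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50)); A = (B + B.T) / 2.0
A /= 1.1 * np.max(np.abs(np.linalg.eigvalsh(A)))  # scale spectrum into [-1, 1]

v = rng.standard_normal(50)
approx = cheb_matfunc_apply(A, v, cheb_coeffs(np.exp, 30))

# reference via dense eigendecomposition: f(A) = U f(L) U^T
w, U = np.linalg.eigh(A)
exact = U @ (np.exp(w) * (U.T @ v))
print(np.max(np.abs(approx - exact)))   # tiny: the expansion has converged
```

Only matrix-vector products with A are needed, which is why sparsity translates directly into linear scaling.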

  11. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme to multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive sampling methods such as Monte Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
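
    A minimal sketch of an intrusive (stochastic Galerkin) polynomial chaos computation, using a scalar decay ODE du/dt = -(k0 + k1 ξ)u with ξ ~ N(0, 1) in place of the Navier-Stokes system; all parameters are illustrative:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

P = 10                      # highest retained Hermite chaos mode
xg, wg = He.hermegauss(40)  # quadrature with weight exp(-x^2/2)
wg = wg / math.sqrt(2.0 * math.pi)   # normalize to the standard Gaussian

# orthonormal probabilists' Hermite polynomials phi_n = He_n / sqrt(n!)
Phi = np.empty((P + 1, xg.size))
for n in range(P + 1):
    c = np.zeros(n + 1); c[n] = 1.0
    Phi[n] = He.hermeval(xg, c) / math.sqrt(math.factorial(n))

# Galerkin coupling matrix M_ij = E[xi * phi_i * phi_j]
M = (Phi * (wg * xg)) @ Phi.T

k0, k1, u0, t = 1.0, 0.3, 1.0, 1.0
K = k0 * np.eye(P + 1) + k1 * M      # truncated system: du/dt = -K u
w, U = np.linalg.eigh(K)
uvec0 = np.zeros(P + 1); uvec0[0] = u0   # deterministic initial condition
uvec = U @ (np.exp(-w * t) * (U.T @ uvec0))

mean_pc = uvec[0]                    # coefficient of phi_0 is the mean
mean_exact = u0 * math.exp(-k0 * t + 0.5 * (k1 * t) ** 2)
print(mean_pc, mean_exact)           # agree to high accuracy
```

Substituting the chaos expansion into the governing equation and projecting onto each basis polynomial is exactly the "weak form" step the abstract refers to; here the resulting coupled system is small enough to solve exactly.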

  12. Periodic wave, breather wave and travelling wave solutions of a (2 + 1)-dimensional B-type Kadomtsev-Petviashvili equation in fluids or plasmas

    NASA Astrophysics Data System (ADS)

    Hu, Wen-Qiang; Gao, Yi-Tian; Jia, Shu-Liang; Huang, Qian-Min; Lan, Zhong-Zhou

    2016-11-01

    In this paper, a (2 + 1)-dimensional B-type Kadomtsev-Petviashvili equation is investigated, which has been presented as a model for the shallow water wave in fluids or the electrostatic wave potential in plasmas. By virtue of the binary Bell polynomials, the bilinear form of this equation is obtained. With the aid of the bilinear form, N -soliton solutions are obtained by the Hirota method, periodic wave solutions are constructed via the Riemann theta function, and breather wave solutions are obtained according to the extended homoclinic test approach. Travelling waves are constructed by the polynomial expansion method as well. Then, the relations between soliton solutions and periodic wave solutions are strictly established, which implies the asymptotic behaviors of the periodic waves under a limited procedure. Furthermore, we obtain some new solutions of this equation by the standard extended homoclinic test approach. Finally, we give a generalized form of this equation, and find that similar analytical solutions can be obtained from the generalized equation with arbitrary coefficients.

  13. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
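
    The filtering step can be illustrated on a small dense toy problem: the indicator of a spectral window is expanded in Chebyshev polynomials, damped with the Jackson kernel, and applied to a random vector through the three-term recurrence (illustrative sizes and window; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n)); A = (B + B.T) / 2.0
ev = np.linalg.eigvalsh(A)
c, e = (ev[-1] + ev[0]) / 2.0, (ev[-1] - ev[0]) / 2.0
As = (A - c * np.eye(n)) / e            # spectrum mapped into [-1, 1]

a, b = -0.1, 0.1                        # target window (scaled units)
deg = 200

# Chebyshev coefficients of the window indicator function
k = np.arange(1, deg + 1)
coef = np.empty(deg + 1)
coef[0] = (np.arccos(a) - np.arccos(b)) / np.pi
coef[1:] = 2.0 / (np.pi * k) * (np.sin(k * np.arccos(a)) - np.sin(k * np.arccos(b)))

# Jackson damping kernel suppresses Gibbs oscillations
alpha = np.pi / (deg + 2)
j = np.arange(deg + 1)
coef *= ((deg + 2 - j) * np.cos(j * alpha)
         + np.sin(j * alpha) / np.tan(alpha)) / (deg + 2)

def apply_filter(M, v, coef):
    """Evaluate p(M) v with the recurrence T_{k+1} = 2 M T_k - T_{k-1}."""
    t0, t1 = v, M @ v
    y = coef[0] * t0 + coef[1] * t1
    for ck in coef[2:]:
        t0, t1 = t1, 2.0 * (M @ t1) - t0
        y += ck * t1
    return y

y = apply_filter(As, rng.standard_normal(n), coef)

# spectral weight of the filtered vector inside the window
w, U = np.linalg.eigh(As)
proj = U.T @ y
inside = (w >= a) & (w <= b)
weight = np.sum(proj[inside]**2) / np.sum(proj**2)
print(weight)   # close to 1: the filter concentrates weight in the window
```

Repeating this on a block of vectors and orthogonalizing yields a search space rich in the wanted interior eigenvectors, with only matrix-vector products required.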

  14. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cieplak, Agnieszka M.; Slosar, Anze

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.
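
    The central identity, that the n-th Legendre coefficient of the PDF is a linear combination of the first n moments, can be checked numerically: the coefficient estimated as the sample mean of P_n(x) equals the same combination of sample moments (a synthetic "flux" sample is assumed here):

```python
import numpy as np
from numpy.polynomial import legendre as Leg

rng = np.random.default_rng(2)
# mock "flux" sample in [0, 1], rescaled to x in [-1, 1]
F = rng.beta(2.0, 5.0, size=200_000)
x = 2.0 * F - 1.0

n = 4
# (a) direct estimate: for p(x) = sum_n (2n+1)/2 <P_n(X)> P_n(x),
#     the n-th expansion coefficient is proportional to <P_n(X)>
cn = np.zeros(n + 1); cn[n] = 1.0
direct = Leg.legval(x, cn).mean()

# (b) the same coefficient as a linear combination of the first n moments,
#     using the monomial coefficients of P_n
mono = Leg.leg2poly(cn)                   # P_n expressed in powers of x
moments = np.array([np.mean(x**k) for k in range(n + 1)])
from_moments = mono @ moments

print(direct, from_moments)   # identical up to floating point
```

Since both estimates come from the same sample, their agreement is exact; noise in the data therefore enters only through the moments, which is what makes the marginalisation described above straightforward.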

  15. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cieplak, Agnieszka M.; Slosar, Anže, E-mail: acieplak@bnl.gov, E-mail: anze@bnl.gov

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  16. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    DOE PAGES

    Cieplak, Agnieszka M.; Slosar, Anze

    2017-10-12

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. In conclusion, we find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  17. Modified homotopy perturbation method for solving hypersingular integral equations of the first kind.

    PubMed

    Eshkuvatov, Z K; Zulkarnain, F S; Nik Long, N M A; Muminov, Z

    2016-01-01

    A modified homotopy perturbation method (HPM) was used to solve hypersingular integral equations (HSIEs) of the first kind on the interval [-1,1], with the assumption that the kernel of the hypersingular integral is constant on the diagonal of the domain. Existence of the inverse of the hypersingular integral operator leads to the convergence of HPM in certain cases. The modified HPM and its norm convergence are obtained in Hilbert space. Comparisons between the modified HPM, the standard HPM, the Bernstein polynomial approach of Mandal and Bhattacharya (Appl Math Comput 190:1707-1716, 2007), the Chebyshev expansion method of Mahiub et al. (Int J Pure Appl Math 69(3):265-274, 2011), and the reproducing kernel method of Chen and Zhou (Appl Math Lett 24:636-641, 2011) are made by solving five examples. Theoretical and practical examples revealed that the modified HPM outperforms the standard HPM and the other methods. Finally, it is found that the modified HPM is exact if the solution of the problem is a product of weights and polynomial functions. For rational solutions the absolute error decreases very fast as the number of collocation points increases.

  18. Characterizing the Lyα forest flux probability distribution function using Legendre polynomials

    NASA Astrophysics Data System (ADS)

    Cieplak, Agnieszka M.; Slosar, Anže

    2017-10-01

    The Lyman-α forest is a highly non-linear field with considerable information available in the data beyond the power spectrum. The flux probability distribution function (PDF) has been used as a successful probe of small-scale physics. In this paper we argue that measuring coefficients of the Legendre polynomial expansion of the PDF offers several advantages over measuring the binned values as is commonly done. In particular, the n-th Legendre coefficient can be expressed as a linear combination of the first n moments, allowing these coefficients to be measured in the presence of noise and allowing a clear route for marginalisation over mean flux. Moreover, in the presence of noise, our numerical work shows that a finite number of coefficients are well measured with a very sharp transition into noise dominance. This compresses the available information into a small number of well-measured quantities. We find that the amount of recoverable information is a very non-linear function of spectral noise that strongly favors fewer quasars measured at better signal to noise.

  19. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.

  20. Path-integral and Ornstein-Zernike study of quantum fluid structures on the crystallization line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesé, Luis M., E-mail: msese@ccia.uned.es

    2016-03-07

    Liquid neon, liquid para-hydrogen, and the quantum hard-sphere fluid are studied with path integral Monte Carlo simulations and the Ornstein-Zernike pair equation on their respective crystallization lines. The results cover the whole sets of structures in the r-space and the k-space and, for completeness, the internal energies, pressures and isothermal compressibilities. Comparison with experiment is made wherever possible, and the possibilities of establishing k-space criteria for quantum crystallization based on the path-integral centroids are discussed. In this regard, the results show that the centroid structure factor contains two significant parameters related to its main peak features (amplitude and shape) that can be useful to characterize freezing.

  1. On performing of interference technique based on self-adjusting Zernike filters (SA-AVT method) to investigate flows and validate 3D flow numerical simulations

    NASA Astrophysics Data System (ADS)

    Pavlov, Al. A.; Shevchenko, A. M.; Khotyanovsky, D. V.; Pavlov, A. A.; Shmakov, A. S.; Golubev, M. P.

    2017-10-01

    We present a method for determining the field of integral density in the flow structure corresponding to the Mach interaction of shock waves at Mach number M = 3, together with the results obtained. The optical diagnostics of the flow were performed using an interference technique based on self-adjusting Zernike filters (SA-AVT method). Numerical simulations were carried out using the CFS3D program package for solving the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density along the path of probing radiation in one direction of 3D flow transillumination in the region of Mach interaction of shock waves were obtained for the first time.

  2. Zernike phase contrast cryo-electron tomography of whole bacterial cells

    PubMed Central

    Guerrero-Ferreira, Ricardo C.; Wright, Elizabeth R.

    2014-01-01

    Cryo-electron tomography (cryo-ET) provides three-dimensional (3D) structural information of bacteria preserved in a native, frozen-hydrated state. The typical low contrast of tilt-series images, a result of both the need for a low electron dose and the use of conventional defocus phase-contrast imaging, is a challenge for high-quality tomograms. We show that Zernike phase-contrast imaging allows the electron dose to be reduced. This limits movement of gold fiducials during the tilt series, which leads to better alignment and a higher-resolution reconstruction. Contrast is also enhanced, improving visibility of weak features. The reduced electron dose also means that more images at more tilt angles could be recorded, further increasing resolution. PMID:24075950

  3. Nonclassical models of the theory of plates and shells

    NASA Astrophysics Data System (ADS)

    Annin, Boris D.; Volchkov, Yuri M.

    2017-11-01

    Publications dealing with the study of methods of reducing a three-dimensional problem of the elasticity theory to a two-dimensional problem of the theory of plates and shells are reviewed. Two approaches are considered: the use of kinematic and force hypotheses and expansion of solutions of the three-dimensional elasticity theory in terms of the complete system of functions. Papers where a three-dimensional problem is reduced to a two-dimensional problem with the use of several approximations of each of the unknown functions (stresses and displacements) by segments of the Legendre polynomials are also reviewed.

  4. An atlas of Rapp's 180-th order geopotential.

    NASA Astrophysics Data System (ADS)

    Melvin, P. J.

    1986-08-01

    Deprit's 1979 approach to the summation of the spherical harmonic expansion of the geopotential has been modified to spherical components and normalized Legendre polynomials. An algorithm has been developed which produces ten fields at the user's option: the undulations of the geoid, three anomalous components of the gravity vector, or six components of the Hessian of the geopotential (gravity gradient). The algorithm is stable to high orders in single precision and does not treat the polar regions as a special case. Eleven contour maps of components of the anomalous geopotential on the surface of the ellipsoid are presented to validate the algorithm.

  5. Quantum calculus of classical vortex images, integrable models and quantum states

    NASA Astrophysics Data System (ADS)

    Pashaev, Oktay K.

    2016-10-01

    From the two-circle theorem described in terms of q-periodic functions, in the limit q→1 we derive the strip theorem and the stream function for the N-vortex problem. For a regular N-vortex polygon we find a compact expression for the velocity of uniform rotation and show that it represents a nonlinear oscillator. We describe q-dispersive extensions of the linear and nonlinear Schrödinger equations, as well as the q-semiclassical expansions in terms of Bernoulli and Euler polynomials. Different kinds of q-analytic functions are introduced, including the pq-analytic and the golden analytic functions.

  6. A MULTISCALE FRAMEWORK FOR THE STOCHASTIC ASSIMILATION AND MODELING OF UNCERTAINTY ASSOCIATED NCF COMPOSITE MATERIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin

    A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, has been used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, software for composite materials, has been used for the upscaling process. The framework is demonstrated using non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.

  7. Effect of load introduction on graphite epoxy compression specimens

    NASA Technical Reports Server (NTRS)

    Reiss, R.; Yao, T. M.

    1981-01-01

    Compression testing of modern composite materials is affected by the manner in which the compressive load is introduced. Two such effects are investigated: (1) the constrained edge effect which prevents transverse expansion and is common to all compression testing in which the specimen is gripped in the fixture; and (2) nonuniform gripping which induces bending into the specimen. An analytical model capable of quantifying these foregoing effects was developed which is based upon the principle of minimum complementary energy. For pure compression, the stresses are approximated by Fourier series. For pure bending, the stresses are approximated by Legendre polynomials.

  8. A State Event Detection Algorithm for Numerically Simulating Hybrid Systems with Model Singularities

    DTIC Science & Technology

    2007-01-01

    the case of non-constant step sizes. Therefore the event dynamics after the predictor and corrector phases are, respectively, g^p_{k+1} = g(x_k + h_{k+1}{ m... the Extrapolation Polynomial. Using a Taylor series expansion of the predicted event function, eq. (6), g^p_{k+1} = g_k + h_{k+1} (dg^p/dt)|_{(x,t)=(x_k,t_k)} + (h_{k+1}^2/2!) (d^2 g^p/dt^2)|_{(x,t)=(x_k,t_k)} + ..., (8) we can determine the value of g^p_{k+1} as a function of the, yet undetermined, step size h_{k+1}. Recalling

  9. Biophysical applications of neutron Compton scattering

    NASA Astrophysics Data System (ADS)

    Wanderlingh, U. N.; Albergamo, F.; Hayward, R. L.; Middendorf, H. D.

    Neutron Compton scattering (NCS) can be applied to measure nuclear momentum distributions and potential parameters in molecules of biophysical interest. We discuss the analysis of NCS spectra from peptide models, focusing on the characterisation of the amide proton dynamics in terms of the width of the H-bond potential well, its Laplacian, and the mean kinetic energy of the proton. The Sears expansion is used to quantify deviations from the high-Q limit (impulse approximation), and line-shape asymmetry parameters are evaluated in terms of Hermite polynomials. Results on NCS from selectively deuterated acetanilide are used to illustrate this approach.

  10. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    NASA Astrophysics Data System (ADS)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2) which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second order expansions of Matsubara to arbitrary order in the standard deviation σ0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground corrected masked Planck 2015 maps.

  11. Myopic aberrations: Simulation based comparison of curvature and Hartmann Shack wavefront sensors

    NASA Astrophysics Data System (ADS)

    Basavaraju, Roopashree M.; Akondi, Vyas; Weddell, Stephen J.; Budihal, Raghavendra Prasad

    2014-02-01

    In comparison with a Hartmann Shack wavefront sensor, the curvature wavefront sensor is known for its higher sensitivity and greater dynamic range. The aim of this study is to numerically investigate the merits of using a curvature wavefront sensor, in comparison with a Hartmann Shack (HS) wavefront sensor, to analyze aberrations of the myopic eye. Aberrations were statistically generated using Zernike coefficient data of 41 myopic subjects obtained from the literature. The curvature sensor is relatively simple to implement, and the processing of extra- and intra-focal images was linearly resolved using the Radon transform to provide Zernike modes corresponding to statistically generated aberrations. Simulations of the HS wavefront sensor involve the evaluation of the focal spot pattern from simulated aberrations. Optical wavefronts were reconstructed using the slope geometry of Southwell. Monte Carlo simulation was used to find critical parameters for accurate wavefront sensing and to investigate the performance of HS and curvature sensors. The performance of the HS sensor is highly dependent on the number of subapertures and the curvature sensor is largely dependent on the number of Zernike modes used to represent the aberration and the effective propagation distance. It is shown that in order to achieve high wavefront sensing accuracy while measuring aberrations of the myopic eye, a simpler and cost effective curvature wavefront sensor is a reliable alternative to a high resolution HS wavefront sensor with a large number of subapertures.

  12. Simple analysis of scattering data with the Ornstein-Zernike equation

    NASA Astrophysics Data System (ADS)

    Kats, E. I.; Muratov, A. R.

    2018-01-01

    In this paper we propose and explore a method of analysis of scattering experimental data for uniform liquidlike systems. In our pragmatic approach we do not try to introduce an artificial small parameter by hand in order to build a perturbation theory around known results, e.g., for hard spheres or sticky hard spheres (all the more so since, in agreement with the well-known Landau statement, there is no physical small parameter for liquids). Instead, guided by the experimental data, we solve the Ornstein-Zernike equation with a trial (variational) form of the interparticle interaction potential. To find all needed correlation functions, this variational input is iterated numerically until it satisfies the Ornstein-Zernike equation supplemented by a closure relation. Our method is developed for spherically symmetric scattering objects, and our numerical code is written for this case; however, it can be extended (at the expense of more involved computations and a larger amount of required experimental input) to nonspherical particles. What is important for our approach is that it suffices to know the experimental data in a relatively narrow range of scattering wave vectors q to compute the static structure factor in a much broader range of q. We illustrate how the approach works with a few model examples and real x-ray and neutron scattering data.
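
    A minimal sketch of the Picard iteration of the Ornstein-Zernike equation with a closure, here the HNC closure and a fixed Gaussian-core model potential rather than the authors' variational potential driven by scattering data; grid and state-point parameters are illustrative:

```python
import numpy as np

N, dr = 512, 0.05
r = dr * np.arange(1, N + 1)
dk = np.pi / (N * dr)                 # conjugate grid: sin(k_j r_i) is a DST
k = dk * np.arange(1, N + 1)
S = np.sin(np.outer(k, r))

def ft(f):   # 3D radial Fourier transform, f(r) -> fhat(k)
    return 4.0 * np.pi * dr * (S @ (r * f)) / k

def ift(fh):  # inverse transform, fhat(k) -> f(r)
    return dk * (S.T @ (k * fh)) / (2.0 * np.pi**2 * r)

beta_u = 2.0 * np.exp(-r**2)          # Gaussian-core pair potential (beta*u)
rho, mix = 0.2, 0.5                   # density, Picard mixing parameter
gamma = np.zeros(N)                   # indirect correlation h - c

for it in range(500):
    c = np.exp(-beta_u + gamma) - 1.0 - gamma          # HNC closure
    ch = ft(c)
    gamma_new = ift(rho * ch**2 / (1.0 - rho * ch))    # OZ relation
    res = np.max(np.abs(gamma_new - gamma))
    gamma = (1.0 - mix) * gamma + mix * gamma_new
    if res < 1e-10:
        break

h = c + gamma                          # total correlation function
Sk = 1.0 + rho * ft(h)                 # static structure factor
print(it, res, Sk.min())               # converged; S(k) stays positive
```

Once the iteration converges, S(k) is available on the full k grid, which mirrors the point above that a narrow measured q range can be extended to a much broader one through the OZ structure.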

  13. Phase-Shifting Zernike Interferometer Wavefront Sensor

    NASA Technical Reports Server (NTRS)

    Wallace, J. Kent; Rao, Shanti; Jensen-Clemb, Rebecca M.; Serabyn, Gene

    2011-01-01

    The canonical Zernike phase-contrast technique [1-4] transforms a phase object in one plane into an intensity object in the conjugate plane. This is done by applying a static pi/2 phase shift to the central core (approx. lambda/D) of the PSF which is intermediate between the input and output planes. Here we present a new architecture for this sensor. First, the optical system is simple and all reflective. Second, the phase shift in the central core of the PSF is dynamic and of arbitrary size. This common-path, all-reflective design makes it minimally sensitive to vibration, polarization and wavelength. We review the theory of operation, describe the optical system, summarize numerical simulations and sensitivities, and review results from a laboratory demonstration of this novel instrument.
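
    The underlying phase-contrast principle, shifting the central core of the PSF by pi/2 so that a weak phase object becomes an intensity modulation, can be illustrated numerically (grid size, core radius and phase amplitude are arbitrary choices):

```python
import numpy as np

N = 256
yy, xx = np.mgrid[0:N, 0:N]
# weak Gaussian phase bump: max phase 0.1 rad
phi = 0.1 * np.exp(-((xx - 140.0) ** 2 + (yy - 120.0) ** 2) / (2.0 * 8.0**2))

E = np.exp(1j * phi)                  # pure phase object, |E| = 1 everywhere
F = np.fft.fft2(E)

# apply a pi/2 phase shift to the central core (lowest spatial frequencies)
fx = np.fft.fftfreq(N)
core = np.add.outer(fx**2, fx**2) < (1.5 / N) ** 2
F[core] *= np.exp(1j * np.pi / 2.0)

I = np.abs(np.fft.ifft2(F)) ** 2      # Zernike phase-contrast image

# for small phi the intensity variation tracks the phase: I - 1 ~ 2*phi
a = (I - I.mean()).ravel()
b = (phi - phi.mean()).ravel()
corr = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(corr)   # close to 1
```

Without the core shift the intensity would be uniform to first order in phi; the shift is what converts the invisible phase into measurable contrast.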

  14. Phase-Shifting Zernike Interferometer Wavefront Sensor

    NASA Technical Reports Server (NTRS)

    Wallace, J. Kent; Rao, Shanti; Jensen-Clem, Rebecca M.

    2011-01-01

    The canonical Zernike phase-contrast technique transforms a phase object in one plane into an intensity object in the conjugate plane. This is done by applying a static pi/2 phase shift to the central core (approx. lambda/diameter) of the PSF which is intermediate between the input and output planes. Here we present a new architecture for this sensor. First, the optical system is simple and all reflective, and second, the phase shift in the central core of the PSF is dynamic and can be made arbitrarily large. This common-path, all-reflective design makes it minimally sensitive to vibration, polarization and wavelength. We review the theory of operation, describe the optical system, summarize numerical simulations and sensitivities and review results from a laboratory demonstration of this novel instrument.

  15. Target recognition of ladar range images using even-order Zernike moments.

    PubMed

    Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi

    2012-11-01

    Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target of the range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target of the range image. It is found that the rotation invariance and classified performance of the even-order ZMs are both better than for odd-order moments and for moments compressed by principal component analysis. The experimental results demonstrate that combining the even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
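
    The rotation invariance of Zernike moment magnitudes exploited above can be checked directly: rotating an image leaves |Z_nm| unchanged (a from-scratch sketch with a random test image and an exact 90-degree rotation; normalization conventions vary in the literature):

```python
import math
import numpy as np

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square image mapped onto the unit disk."""
    N = img.shape[0]
    c = (N - 1) / 2.0
    y, x = np.mgrid[0:N, 0:N]
    xr, yr = (x - c) / c, (y - c) / c
    rho = np.hypot(xr, yr)
    theta = np.arctan2(yr, xr)
    mask = rho <= 1.0

    # radial polynomial R_nm(rho) from its standard factorial formula
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        coef = ((-1) ** s * math.factorial(n - s)
                / (math.factorial(s)
                   * math.factorial((n + abs(m)) // 2 - s)
                   * math.factorial((n - abs(m)) // 2 - s)))
        R += coef * rho ** (n - 2 * s)

    V = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask]) / c**2

rng = np.random.default_rng(3)
img = rng.random((65, 65))                 # arbitrary test pattern
z1 = zernike_moment(img, 4, 2)
z2 = zernike_moment(np.rot90(img), 4, 2)   # exact 90-degree rotation

print(abs(z1), abs(z2))   # magnitudes agree: rotation invariance
```

Under rotation by angle a the moment picks up only a phase e^{-ima}, so feeding the magnitudes |Z_nm| to a classifier, as in the paper, yields azimuth-independent features.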

  16. Zernike phase contrast cryo-electron tomography of whole bacterial cells.

    PubMed

    Guerrero-Ferreira, Ricardo C; Wright, Elizabeth R

    2014-01-01

    Cryo-electron tomography (cryo-ET) provides three-dimensional (3D) structural information of bacteria preserved in a native, frozen-hydrated state. The typical low contrast of tilt-series images, a result of both the need for a low electron dose and the use of conventional defocus phase-contrast imaging, is a challenge for high-quality tomograms. We show that Zernike phase-contrast imaging allows the electron dose to be reduced. This limits movement of gold fiducials during the tilt series, which leads to better alignment and a higher-resolution reconstruction. Contrast is also enhanced, improving visibility of weak features. The reduced electron dose also means that more images at more tilt angles could be recorded, further increasing resolution. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.

    PubMed

    Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A

    2018-02-01

    A new algorithm is proposed for evaluating the integral line intensity used to infer the correct temperature of a hot zone in combustion diagnostics by diode laser absorption spectroscopy. Instead of fitting the baseline (BL), the algorithm expands both the experimental and simulated spectra in a series of orthogonal polynomials, subtracts the first three components of the expansion from each, and fits the spectra thus modified. The algorithm is tested in a numerical experiment: absorption spectra are simulated from a spectroscopic database, with white noise and a parabolic BL added, and these constructed spectra are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to those used for simulating the experimental data. Both spectra were then expanded in the series of orthogonal polynomials, and the first components were subtracted from each. Fitting the modified experimental and simulated spectra yields the correct integral line intensities and hence a correct temperature evaluation. The dependence of the mean and standard deviation of the integral line intensity estimate on the linewidth and on the number of subtracted components (the first two or three) was examined. The proposed algorithm estimates temperature with a standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm⁻¹, and allows the parameters of a hot zone to be obtained without fitting the usually unknown BL.
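    The key trick, subtracting the same low-order polynomial components from both spectra so that the unknown baseline cancels, can be sketched in a few lines. This is a minimal numpy illustration, not the authors' code: the Gaussian line shape and parabolic baseline are made up, and Legendre polynomials stand in for whatever orthogonal family the paper uses.

```python
import numpy as np

def strip_low_orders(y, x, n_drop=3):
    """Subtract the first n_drop Legendre components of y(x) on [-1, 1]."""
    coef = np.polynomial.legendre.legfit(x, y, n_drop - 1)
    return y - np.polynomial.legendre.legval(x, coef)

x = np.linspace(-1, 1, 501)                  # normalized frequency axis
line = np.exp(-((x - 0.1) / 0.05) ** 2)      # absorption line (simulated spectrum)
baseline = 0.3 + 0.2 * x + 0.4 * x ** 2      # unknown parabolic BL
measured = line + baseline                   # "experimental" spectrum

# Dropping the first three components from both spectra cancels any
# baseline lying in their span, so the modified spectra agree exactly.
exp_mod = strip_low_orders(measured, x)
sim_mod = strip_low_orders(line, x)
residual = np.max(np.abs(exp_mod - sim_mod))
```

Because the parabolic baseline lies entirely in the span of the first three Legendre polynomials, the least-squares projection removes it exactly, and the two modified spectra can be fit against each other without ever modeling the BL.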

  18. Weighted Iterative Bayesian Compressive Sensing (WIBCS) for High Dimensional Polynomial Surrogate Construction

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2016-12-01

    Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, PC surrogate construction suffers strongly from the curse of dimensionality: with a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate built from very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
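    The central premise, that a sparse PC expansion can be recovered from far fewer model evaluations than basis terms, can be demonstrated with any compressive-sensing solver. The sketch below uses plain orthogonal matching pursuit as a simple stand-in for the Bayesian WIBCS machinery, on a hypothetical 1D Legendre basis with a 3-term-sparse "true" model; all sizes and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_samples = 45, 60
x = rng.uniform(-1.0, 1.0, n_samples)        # random input samples

# Design matrix: column k holds Legendre polynomial P_k at the samples
A = np.stack([np.polynomial.legendre.legval(x, np.eye(n_basis)[k])
              for k in range(n_basis)], axis=1)

c_true = np.zeros(n_basis)
c_true[[1, 4, 9]] = [2.0, -1.0, 0.5]         # sparse "true" surrogate
y = A @ c_true                               # noiseless model evaluations

# Greedy OMP: pick the column most correlated with the residual,
# then refit by least squares on the selected support.
unit = A / np.linalg.norm(A, axis=0)
support, resid = [], y.copy()
for _ in range(3):
    j = int(np.argmax(np.abs(unit.T @ resid)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

c_hat = np.zeros(n_basis)
c_hat[support] = coef
err = np.abs(c_hat - c_true).max()           # recovery error
```

With 60 evaluations against 45 candidate terms, the sparse coefficient vector is recovered essentially exactly; a Bayesian approach such as WIBCS additionally weights and grows the basis adaptively, which matters once the dimension reaches the 65-parameter regime of the abstract.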

  19. Impact of Sequential Ammonia Fiber Expansion (AFEX) Pretreatment and Pelletization on the Moisture Sorption Properties of Corn Stover

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonner, Ian J.; Thompson, David N.; Teymouri, Farzaneh

    Combining ammonia fiber expansion (AFEX™) pretreatment with a depot processing facility is a promising option for delivering high-value densified biomass to the emerging bioenergy industry. However, because the pretreatment process results in a high moisture material unsuitable for pelleting or storage (40% wet basis), the biomass must be immediately dried. If AFEX pretreatment results in a material that is difficult to dry, the economics of this already costly operation would be at risk. This work tests the nature of moisture sorption isotherms and thin-layer drying behavior of corn (Zea mays L.) stover at 20°C to 60°C before and after sequential AFEX pretreatment and pelletization to determine whether any negative impacts to material drying or storage may result from the AFEX process. The equilibrium moisture content to equilibrium relative humidity relationship for each of the materials was determined using dynamic vapor sorption isotherms and modeled with modified Chung-Pfost, modified Halsey, and modified Henderson temperature-dependent models as well as the Double Log Polynomial (DLP), Peleg, and Guggenheim Anderson de Boer (GAB) temperature-independent models. Drying kinetics were quantified under thin-layer laboratory testing and modeled using the Modified Page's equation. Water activity isotherms for non-pelleted biomass were best modeled with the Peleg temperature-independent equation, while isotherms for the pelleted biomass were best modeled with the Double Log Polynomial equation. Thin-layer drying results were accurately modeled with the Modified Page's equation. The results of this work indicate that AFEX pretreatment results in drying properties more favorable than or equal to those of raw corn stover, and pellets of superior physical stability in storage.
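    The Modified Page thin-layer drying model used above, MR(t) = exp(-(k*t)^n), is linear in log-log form, ln(-ln MR) = n*ln k + n*ln t, which is one common way to fit it. The sketch below generates synthetic drying data with hypothetical rate and exponent values (not the paper's fitted parameters) and recovers them by a linearized least-squares fit.

```python
import numpy as np

# Hypothetical Modified Page parameters: rate k (1/min), exponent n
k_true, n_true = 0.05, 1.3
t = np.linspace(5, 300, 60)                    # drying time, minutes
mr = np.exp(-(k_true * t) ** n_true)           # moisture ratio, 1 -> 0

# Linearize: ln(-ln MR) = n*ln(k) + n*ln(t), then fit a straight line
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
n_fit = slope
k_fit = np.exp(intercept / n_fit)
```

In practice the measured MR is noisy and a nonlinear fit of the original exponential form is usually preferred, but the linearized version makes the structure of the model easy to see.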

  20. Computational adaptive optics for broadband optical interferometric tomography of biological tissue.

    PubMed

    Adie, Steven G; Graf, Benedikt W; Ahmad, Adeel; Carney, P Scott; Boppart, Stephen A

    2012-05-08

    Aberrations in optical microscopy reduce image resolution and contrast, and can limit imaging depth when focusing into biological samples. Static correction of aberrations may be achieved through appropriate lens design, but this approach offers neither the flexibility of simultaneously correcting aberrations for all imaging depths nor the adaptability to correct for sample-specific aberrations for high-quality tomographic optical imaging. Incorporation of adaptive optics (AO) methods has demonstrated considerable improvement in optical image contrast and resolution in noninterferometric microscopy techniques, as well as in optical coherence tomography. Here we present a method to correct aberrations in a tomogram rather than in the beam of a broadband optical interferometry system. Based on Fourier optics principles, we correct aberrations of a virtual pupil using Zernike polynomials. When used in conjunction with the computed imaging method interferometric synthetic aperture microscopy, this computational AO enables object reconstruction (within the single scattering limit) with ideal focal-plane resolution at all depths. Tomographic reconstructions of tissue phantoms containing subresolution titanium-dioxide particles and of ex vivo rat lung tissue demonstrate aberration correction in datasets acquired with a highly astigmatic illumination beam. These results also demonstrate that imaging with an aberrated astigmatic beam provides the advantage of a more uniform depth-dependent signal compared to imaging with a standard Gaussian beam. With further work, computational AO could enable the replacement of complicated and expensive optical hardware components with algorithms implemented on a standard desktop computer, making high-resolution 3D interferometric tomography accessible to a wider group of users and nonspecialists.
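    The virtual-pupil idea can be illustrated with a minimal Fourier-optics sketch: a Zernike astigmatism phase applied at the pupil blurs a point image, and multiplying the pupil field by the conjugate Zernike phase restores the diffraction-limited focus purely computationally. Grid size and the 3 rad astigmatism amplitude are illustrative; the real method applies such corrections to interferometric tomogram data rather than to a simulated pupil.

```python
import numpy as np

N = 256
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
rho2, theta = x ** 2 + y ** 2, np.arctan2(y, x)
pupil = (rho2 <= 1.0).astype(float)              # circular virtual pupil

astig = 3.0 * rho2 * np.cos(2 * theta)           # Zernike Z_2^2 astigmatism, 3 rad
aberrated = np.fft.fft2(pupil * np.exp(1j * astig))
# Computational correction: conjugate Zernike phase on the virtual pupil
corrected = np.fft.fft2(pupil * np.exp(1j * astig) * np.exp(-1j * astig))

peak_aberrated = np.max(np.abs(aberrated)) ** 2  # blurred focal peak
peak_corrected = np.max(np.abs(corrected)) ** 2  # diffraction-limited peak
```

The ratio of the two peaks is a Strehl-like measure of the correction: the conjugate phase cancels the aberration exactly, so the corrected peak returns to the unaberrated value set by the pupil alone.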
