Sample records for iterative phase-space explicit

  1. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    NASA Astrophysics Data System (ADS)

    de Almeida, Valmor F.

    2017-07-01

    A phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.
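
    As a rough illustration of the iteration strategy described in this abstract (freeze the scattering integral as a source term, then sweep along characteristics direction by direction), the following Python sketch applies classical source iteration with diamond-differenced sweeps to a 1-D slab problem. It is not de Almeida's spherical phase-space discontinuous Galerkin discretization; geometry, cross sections, and boundary conditions are illustrative assumptions.

    ```python
    import numpy as np

    # Toy 1-D slab source iteration: mu dI/dx + sig_t I = (sig_s/2) * phi(x) + q.
    # The scattering integral is frozen as a source, then each direction is swept
    # along its characteristic; this is only a simplified analogue of the method above.
    nx, nmu = 200, 16                              # spatial cells and discrete ordinates (assumed)
    L, sig_t, sig_s, q = 1.0, 5.0, 4.0, 1.0
    dx = L / nx
    mu, w = np.polynomial.legendre.leggauss(nmu)   # Gauss-Legendre ordinates and weights

    I = np.zeros((nmu, nx))                        # angular intensity
    phi = np.zeros(nx)                             # scalar flux (angle-integrated intensity)

    for it in range(200):
        S = 0.5 * sig_s * phi + q                  # integral term treated as a frozen source
        I_new = np.zeros_like(I)
        for m in range(nmu):
            if mu[m] > 0:                          # sweep left -> right along the characteristic
                Iin = 0.0                          # vacuum boundary (assumption)
                for i in range(nx):
                    Iout = (Iin * (mu[m]/dx - 0.5*sig_t) + S[i]) / (mu[m]/dx + 0.5*sig_t)
                    I_new[m, i] = 0.5 * (Iin + Iout)
                    Iin = Iout
            else:                                  # sweep right -> left
                Iin = 0.0
                for i in reversed(range(nx)):
                    Iout = (Iin * (-mu[m]/dx - 0.5*sig_t) + S[i]) / (-mu[m]/dx + 0.5*sig_t)
                    I_new[m, i] = 0.5 * (Iin + Iout)
                    Iin = Iout
        phi_new = w @ I_new
        if np.max(np.abs(phi_new - phi)) < 1e-10:
            I, phi = I_new, phi_new
            break
        I, phi = I_new, phi_new

    print(f"converged in {it + 1} iterations, max scalar flux = {phi.max():.4f}")
    ```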

  2. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, Valmor F.

    In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  3. An iterative phase-space explicit discontinuous Galerkin method for stellar radiative transfer in extended atmospheres

    DOE PAGES

    de Almeida, Valmor F.

    2017-04-19

    In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.

  4. Gyrokinetic equations and full f solution method based on Dirac's constrained Hamiltonian and inverse Kruskal iteration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heikkinen, J. A.; Nora, M.

    2011-02-15

    Gyrokinetic equations of motion, Poisson equation, and energy and momentum conservation laws are derived based on the reduced-phase-space Lagrangian and inverse Kruskal iteration introduced by Pfirsch and Correa-Restrepo [J. Plasma Phys. 70, 719 (2004)]. This formalism, together with the choice of the adiabatic invariant J as one of the averaging coordinates in phase space, provides an alternative to the standard gyrokinetics. Within second order in the gyrokinetic parameter, the new equations do not show explicit ponderomotive-like or polarization-like terms. Pullback of particle information with an iterated gyrophase and field-dependent gyroradius function from the gyrocenter position defined by gyroaveraged coordinates allows direct numerical integration of the gyrokinetic equations in particle simulation of the field and particles with full distribution function. As an example, gyrokinetic systems with polarization drift either present or absent in the equations of motion are considered.

  5. A subtraction scheme for computing QCD jet cross sections at NNLO: integrating the iterated singly-unresolved subtraction terms

    NASA Astrophysics Data System (ADS)

    Bolzoni, Paolo; Somogyi, Gábor; Trócsányi, Zoltán

    2011-01-01

    We perform the integration of all iterated singly-unresolved subtraction terms, as defined in ref. [1], over the two-particle factorized phase space. We also sum over the unresolved parton flavours. The final result can be written as a convolution (in colour space) of the Born cross section and an insertion operator. We spell out the insertion operator in terms of 24 basic integrals that are defined explicitly. We compute the coefficients of the Laurent expansion of these integrals in two different ways: with the method of Mellin-Barnes representations and with sector decomposition. Finally, we present the Laurent expansion of the full insertion operator for the specific examples of electron-positron annihilation into two and three jets.

  6. Strong disorder real-space renormalization for the many-body-localized phase of random Majorana models

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile

    2018-03-01

    For the many-body-localized phase of random Majorana models, a general strong disorder real-space renormalization procedure known as RSRG-X (Pekker et al 2014 Phys. Rev. X 4 011052) is described to produce the whole set of excited states, via the iterative construction of the local integrals of motion (LIOMs). The RG rules are then explicitly derived for arbitrary quadratic Hamiltonians (free-fermion models) and for the Kitaev chain with local interactions involving even numbers of consecutive Majorana fermions. The emphasis is put on the advantages of the Majorana language over the usual quantum spin language to formulate unified RSRG-X rules.

  7. An implicit-iterative solution of the heat conduction equation with a radiation boundary condition

    NASA Technical Reports Server (NTRS)

    Williams, S. D.; Curry, D. M.

    1977-01-01

    For the problem of predicting one-dimensional heat transfer between conducting and radiating media by an implicit finite difference method, four different formulations were used to approximate the surface radiation boundary condition while retaining an implicit formulation for the interior temperature nodes. These formulations are an explicit boundary condition, a linearized boundary condition, an iterative boundary condition, and a semi-iterative boundary condition. The results of these methods in predicting surface temperature on the space shuttle orbiter thermal protection system model under a variety of heating rates were compared. The iterative technique caused the surface temperature to be bounded at each step. While the linearized and explicit methods were generally more efficient, the iterative and semi-iterative techniques provided a realistic surface temperature response without requiring step size control techniques.
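
    A hedged sketch of two of the boundary treatments named in this abstract, applied to one backward-Euler step of a toy 1-D conduction problem with a radiating surface: a linearized T^4 term folded into the implicit system, and an iterative treatment that lags the nonlinear radiative loss and re-solves until the surface temperature settles. Material properties, grid, and heating rate are invented for illustration, not taken from the shuttle TPS model.

    ```python
    import numpy as np

    # One backward-Euler step of 1-D conduction with a radiating surface at node 0,
    #   net surface flux q_net = qdot - eps*sig*(T0^4 - Tenv^4),
    # comparing (a) the T^4 term linearized about the old surface temperature with
    # (b) an iterative treatment that lags the nonlinear loss and re-solves.
    n, dx, dt = 21, 1e-3, 0.1
    k, rho, cp = 0.05, 300.0, 1000.0
    eps, sig, Tenv, qdot = 0.8, 5.67e-8, 300.0, 2.0e4
    r = (k / (rho * cp)) * dt / dx**2              # Fourier number of the step
    c = 2.0 * r * dx / k                           # converts a surface flux to a node-0 source
    T_old = np.full(n, 300.0)

    def solve_step(a00_extra, b0_extra):
        """Implicit step; the node-0 row receives an extra diagonal and RHS term."""
        A, b = np.zeros((n, n)), T_old.copy()
        for i in range(1, n - 1):                  # interior backward-Euler rows
            A[i, i-1], A[i, i], A[i, i+1] = -r, 1 + 2*r, -r
        A[-1, -1] = 1.0                            # far side held at its old value (assumption)
        A[0, 0], A[0, 1] = 1 + 2*r + a00_extra, -2*r
        b[0] += b0_extra
        return np.linalg.solve(A, b)

    # (a) linearized: eps*sig*T^4 ~= eps*sig*(T_old^4 + 4*T_old^3*(T - T_old))
    h_rad = 4.0 * eps * sig * T_old[0]**3
    T_lin = solve_step(c * h_rad,
                       c * (qdot - eps*sig*(T_old[0]**4 - Tenv**4) + h_rad*T_old[0]))

    # (b) iterative: evaluate the full nonlinear loss at the current iterate and re-solve
    T_it = T_old.copy()
    for _ in range(50):
        T_new = solve_step(0.0, c * (qdot - eps*sig*(T_it[0]**4 - Tenv**4)))
        if abs(T_new[0] - T_it[0]) < 1e-6:
            T_it = T_new
            break
        T_it = T_new

    print(f"linearized surface T = {T_lin[0]:.2f} K, iterative surface T = {T_it[0]:.2f} K")
    ```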

  8. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to solve phase space evolution equations, mixing particle and continuum techniques in an adaptive manner. The multi-scale approach is achieved using wavelet decompositions which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step-size control in explicit versus implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former, and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a grant from the AFOSR.

  9. Higher order explicit symmetric integrators for inseparable forms of coordinates and momenta

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Wu, Xin; Huang, Guoqing; Liu, Fuyao

    2016-06-01

    Pihajoki proposed the extended phase-space second-order explicit symmetric leapfrog methods for inseparable Hamiltonian systems. On the basis of this work, we study the critical problem of how to mix the variables in the extended phase space. Numerical tests show that sequent permutations of coordinates and momenta can make the leapfrog-like methods yield the most accurate results and the optimal long-term stabilized error behaviour. We also present a novel method to construct many fourth-order extended phase-space explicit symmetric integration schemes. Each scheme represents the symmetric product of six usual second-order leapfrogs without any permutations. This construction consists of four segments: the permuted coordinates, the triple product of the usual second-order leapfrog without permutations, the permuted momenta, and the triple product of the usual second-order leapfrog without permutations. Similarly, extended phase-space sixth, eighth and other higher order explicit symmetric algorithms are available. We used several inseparable Hamiltonian examples, such as the post-Newtonian approach of non-spinning compact binaries, to show that one of the proposed fourth-order methods is more efficient than existing methods such as the fourth-order explicit symplectic integrators of Chin and the fourth-order explicit and implicit mixed symplectic integrators of Zhong et al. Given a moderate choice for the related mixing and projection maps, the extended phase-space explicit symplectic-like methods are well suited for various inseparable Hamiltonian problems. Samples of these problems involve the algorithmic regularization of gravitational systems with velocity-dependent perturbations in the Solar system and post-Newtonian Hamiltonian formulations of spinning compact objects.

  10. A Gauge Invariant Description for the General Conic Constrained Particle from the FJBW Iteration Algorithm

    NASA Astrophysics Data System (ADS)

    Barbosa, Gabriel D.; Thibes, Ronaldo

    2018-06-01

    We consider a second-degree algebraic curve describing a general conic constraint imposed on the motion of a massive spinless particle. The problem is trivial at the classical level but becomes involved and interesting concerning its quantum counterpart with subtleties in its symplectic structure and symmetries. We start with a second-class version of the general conic constrained particle, which encompasses previous versions of circular and elliptical paths discussed in the literature. By applying the symplectic FJBW iteration program, we proceed to show how a gauge invariant version for the model can be achieved from the originally second-class system. We pursue the complete constraint analysis in phase space and perform the Faddeev-Jackiw symplectic quantization following the Barcelos-Wotzasek iteration program to unravel the essential aspects of the constraint structure. While in the standard Dirac-Bergmann approach there are four second-class constraints, in the FJBW they reduce to two. By using the symplectic potential obtained in the last step of the FJBW iteration process, we construct a gauge invariant model exhibiting explicitly its BRST symmetry. We obtain the quantum BRST charge and write the Green functions generator for the gauge invariant version. Our results reproduce and neatly generalize the known BRST symmetry of the rigid rotor, clearly showing that this last one constitutes a particular case of a broader class of theories.

  11. Harmonic Fourier beads method for studying rare events on rugged energy surfaces.

    PubMed

    Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L

    2006-11-07

    We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points (beads) to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in the gas phase.

  12. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on the optimization of iteration convergence to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.

  13. An explicit solution to the exoatmospheric powered flight guidance and trajectory optimization problem for rocket propelled vehicles

    NASA Technical Reports Server (NTRS)

    Jaggers, R. F.

    1977-01-01

    A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting, predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first guess values, and converges rapidly if physically possible. A form of this algorithm has been chosen for onboard guidance, as well as real time and preflight ground targeting and trajectory shaping for the NASA Space Shuttle Program.

  14. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
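
    The core construction in this abstract can be shown in a few lines: double the phase space, define H~(q, p, x, y) = H(q, y) + H(x, p), and alternate the exact flows of the two (now separable) halves in a leapfrog. The test Hamiltonian, step size, and the per-step averaging used below as the mixing/projection map are assumptions for illustration, not the paper's choices.

    ```python
    import numpy as np

    # Extended-phase-space leapfrog for the inseparable H = p^2/2 + q^2/2 + q^2 p^2.
    def dHdq(q, p):
        return q + 2.0*q*p*p

    def dHdp(q, p):
        return p + 2.0*q*q*p

    def flow_A(q, p, x, y, h):      # H_A = H(q, y): q, y frozen; p, x advance exactly
        return q, p - h*dHdq(q, y), x + h*dHdp(q, y), y

    def flow_B(q, p, x, y, h):      # H_B = H(x, p): x, p frozen; q, y advance exactly
        return q + h*dHdp(x, p), p, x, y - h*dHdq(x, p)

    def step(q, p, x, y, h):
        q, p, x, y = flow_A(q, p, x, y, 0.5*h)
        q, p, x, y = flow_B(q, p, x, y, h)
        q, p, x, y = flow_A(q, p, x, y, 0.5*h)
        qm, pm = 0.5*(q + x), 0.5*(p + y)   # one possible projection back to (q, p)
        return qm, pm, qm, pm

    q = x = 1.0
    p = y = 0.0
    h = 1.0e-3
    H0 = 0.5*p*p + 0.5*q*q + q*q*p*p
    for _ in range(20000):
        q, p, x, y = step(q, p, x, y, h)
    H = 0.5*p*p + 0.5*q*q + q*q*p*p
    print(f"relative energy error after 20000 steps: {abs(H - H0)/H0:.2e}")
    ```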

  15. HEATING 7.1 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1991-07-01

    HEATING is a FORTRAN program designed to solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat-generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-boundary or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General graybody radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING is variably dimensioned and utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct-solution (for one-dimensional or two-dimensional problems), and conjugate gradient. Transient problems may be solved using one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method (which for some circumstances allows a time step greater than the CEP stability criterion). The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.
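
    Of the steady-state techniques listed above, point-successive-overrelaxation is the simplest to sketch; the toy below runs plain SOR with a fixed relaxation factor on a 2-D Laplace problem with fixed edge temperatures. HEATING additionally estimates the optimum acceleration parameter, which is omitted here; the grid size and omega are assumptions.

    ```python
    import numpy as np

    # Point-successive-overrelaxation (SOR) for steady 2-D conduction (Laplace
    # equation) with Dirichlet boundaries: one hot edge, the others held at zero.
    n, omega = 50, 1.9
    T = np.zeros((n, n))
    T[0, :] = 100.0                    # hot edge (assumption)

    for sweep in range(5000):
        max_change = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (T[i+1, j] + T[i-1, j] + T[i, j+1] + T[i, j-1])
                change = omega * (gs - T[i, j])    # over-relaxed Gauss-Seidel update
                T[i, j] += change
                max_change = max(max_change, abs(change))
        if max_change < 1e-6:
            break

    print(f"SOR converged after {sweep + 1} sweeps, center temperature = {T[n//2, n//2]:.3f}")
    ```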

  16. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    NASA Astrophysics Data System (ADS)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.

  17. Using an iterative eigensolver to compute vibrational energies with phase-space localized basis functions.

    PubMed

    Brown, James; Carrington, Tucker

    2015-07-28

    Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
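
    The practical point of this abstract, that a regular symmetric eigenvalue problem can be handed to an iterative (Lanczos-type) eigensolver, can be illustrated with a generic sparse Hamiltonian; the sketch below uses SciPy's eigsh in shift-invert mode on a finite-difference harmonic oscillator, which stands in for (and is far simpler than) the Gaussian-contracted basis of the paper.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # Lowest levels of a 1-D harmonic oscillator on a finite-difference grid,
    # computed with an iterative shift-invert eigensolver. Grid size and range
    # are assumptions.
    n = 2000
    x = np.linspace(-10.0, 10.0, n)
    dx = x[1] - x[0]
    main = 1.0/dx**2 + 0.5*x**2                    # kinetic diagonal + potential
    off = -0.5/dx**2 * np.ones(n - 1)
    H = diags([off, main, off], [-1, 0, 1], format='csc')
    vals = eigsh(H, k=6, sigma=0.0, which='LM', return_eigenvectors=False)
    print(np.sort(vals))                           # approaches 0.5, 1.5, 2.5, ... (oscillator units)
    ```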

  18. Adjustment technique without explicit formation of normal equations /conjugate gradient method/

    NASA Technical Reports Server (NTRS)

    Saxena, N. K.

    1974-01-01

    For a simultaneous adjustment of a large geodetic triangulation system, a semi-iterative technique is modified and used successfully. In this semi-iterative technique, known as the conjugate gradient (CG) method, the original observation equations are used, and thus the explicit formation of normal equations is avoided, saving 'huge' computer storage space in the case of triangulation systems. This method is suitable even for very poorly conditioned systems, where the solution is obtained only after a larger number of iterations. A detailed study of the CG method for its application to large geodetic triangulation systems was carried out, also considering constraint equations together with observation equations. It was programmed and tested on systems as small as two unknowns and three equations up to those as large as 804 unknowns and 1397 equations. When real data (573 unknowns, 965 equations) from a 1858-km-long triangulation system were used, a solution vector accurate to four decimal places was obtained in 2.96 min after 1171 iterations (i.e., 2.0 times the number of unknowns).
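
    The key idea above, iterating directly on the observation equations so that the normal equations are never formed, is what the CGLS variant of conjugate gradients does: only products with A and A^T are required. The sketch below uses random data with the problem dimensions quoted in the abstract; a real adjustment would supply the geodetic design matrix and weights.

    ```python
    import numpy as np

    # CGLS: conjugate gradients for min ||A x - b|| using only products with A
    # and A^T, so A^T A is never formed explicitly.
    def cgls(A, b, tol=1e-10, maxit=2000):
        x = np.zeros(A.shape[1])
        r = b.copy()                     # residual b - A x (x starts at zero)
        s = A.T @ r                      # negative gradient of the LS functional
        p, gamma = s.copy(), s @ s
        for k in range(maxit):
            q = A @ p
            alpha = gamma / (q @ q)
            x += alpha * p
            r -= alpha * q
            s = A.T @ r
            gamma_new = s @ s
            if np.sqrt(gamma_new) < tol:
                return x, k + 1
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x, maxit

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1397, 804))   # sizes borrowed from the test case above
    b = rng.standard_normal(1397)
    x, iters = cgls(A, b)
    print(f"{iters} iterations, normal-equation residual "
          f"{np.linalg.norm(A.T @ (A @ x - b)):.2e}")
    ```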

  19. Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide

    NASA Astrophysics Data System (ADS)

    Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.

    2011-04-01

    Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
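
    For readers unfamiliar with the diffusion map itself, the sketch below builds the basic kernel, density-normalizes it, and extracts the leading nontrivial eigenvector on synthetic data. The reweighting that corrects for the umbrella bias, which is the paper's actual contribution, is deliberately omitted, and the bandwidth and data set are assumptions.

    ```python
    import numpy as np

    # Bare-bones diffusion map on points along a noisy circular arc (stand-ins for
    # simulation frames); the first nontrivial eigenvector should recover the arc
    # parameter.
    rng = np.random.default_rng(1)
    t = rng.uniform(0.0, 1.5*np.pi, 500)                    # hidden arc parameter
    X = np.column_stack([np.cos(t), np.sin(t)]) + 0.05*rng.standard_normal((500, 2))

    eps = 0.1                                               # kernel bandwidth (assumed)
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-d2 / (2.0*eps))
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                                  # density normalization (alpha = 1)
    M = K / K.sum(axis=1, keepdims=True)                    # row-stochastic Markov matrix
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(-evals.real)
    psi1 = evecs[:, order[1]].real                          # first nontrivial diffusion coordinate
    print("leading eigenvalues:", np.round(evals.real[order[:4]], 4))
    print("|corr(psi1, arc parameter)| =", round(abs(np.corrcoef(psi1, t)[0, 1]), 3))
    ```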

  20. Anharmonic quantum mechanical systems do not feature phase space trajectories

    NASA Astrophysics Data System (ADS)

    Oliva, Maxime; Kakofengitis, Dimitris; Steuernagel, Ole

    2018-07-01

    Phase space dynamics in classical mechanics is described by transport along trajectories. Anharmonic quantum mechanical systems do not allow for a trajectory-based description of their phase space dynamics. This invalidates some approaches to quantum phase space studies. We first demonstrate the absence of trajectories in general terms. We then give an explicit proof for all quantum phase space distributions with negative values: we show that the generation of coherences in anharmonic quantum mechanical systems is responsible for the occurrence of singularities in their phase space velocity fields, and vice versa. This explains numerical problems repeatedly reported in the literature, and provides deeper insight into the nature of quantum phase space dynamics.

  1. Heating 7.2 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1993-02-01

    HEATING is a general-purpose conduction heat transfer program written in Fortran 77. HEATING can solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may also be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat-generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-environment or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General gray-body radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING uses a runtime memory allocation scheme to avoid having to recompile to match memory requirements for each specific problem. HEATING utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct-solution, and conjugate gradient. Transient problems may be solved using any one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method. The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.

  2. Heating 7.2 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Childs, K.W.

    1993-02-01

    HEATING is a general-purpose conduction heat transfer program written in Fortran 77. HEATING can solve steady-state and/or transient heat conduction problems in one-, two-, or three-dimensional Cartesian, cylindrical, or spherical coordinates. A model may include multiple materials, and the thermal conductivity, density, and specific heat of each material may be both time- and temperature-dependent. The thermal conductivity may also be anisotropic. Materials may undergo change of phase. Thermal properties of materials may be input or may be extracted from a material properties library. Heat-generation rates may be dependent on time, temperature, and position, and boundary temperatures may be time- and position-dependent. The boundary conditions, which may be surface-to-environment or surface-to-surface, may be specified temperatures or any combination of prescribed heat flux, forced convection, natural convection, and radiation. The boundary condition parameters may be time- and/or temperature-dependent. General gray-body radiation problems may be modeled with user-defined factors for radiant exchange. The mesh spacing may be variable along each axis. HEATING uses a runtime memory allocation scheme to avoid having to recompile to match memory requirements for each specific problem. HEATING utilizes free-form input. Three steady-state solution techniques are available: point-successive-overrelaxation iterative method with extrapolation, direct-solution, and conjugate gradient. Transient problems may be solved using any one of several finite-difference schemes: Crank-Nicolson implicit, Classical Implicit Procedure (CIP), Classical Explicit Procedure (CEP), or Levy explicit method. The solution of the system of equations arising from the implicit techniques is accomplished by point-successive-overrelaxation iteration and includes procedures to estimate the optimum acceleration parameter.

  3. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1992-01-01

    The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and over non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach, based on the classical conjugate gradient method and known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arises in unsteady Navier-Stokes solvers at each time step.
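
    A minimal illustration of solving the nonlinear system arising at each implicit time step with a Krylov method: one backward-Euler step of 1-D viscous Burgers handed to SciPy's Jacobian-free Newton-Krylov solver with GMRES as the inner iteration. This is only a stand-in for the strategy discussed above, not the authors' compressible Navier-Stokes solver; the model equation, grid, and tolerances are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # One backward-Euler step of 1-D viscous Burgers on a periodic grid,
    # solved with Jacobian-free Newton-Krylov (GMRES inside).
    n, nu, dt = 200, 0.01, 0.01
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = x[1] - x[0]
    u_old = np.sin(2.0*np.pi*x)

    def residual(u):
        # periodic central differences for u*u_x and nu*u_xx
        dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0*dx)
        d2u = (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2
        return (u - u_old)/dt + u*dudx - nu*d2u

    u_new = newton_krylov(residual, u_old, method='gmres', f_tol=1e-8)
    print("max |residual| after the implicit step:", np.max(np.abs(residual(u_new))))
    ```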

  4. Integrated Targeting and Guidance for Powered Planetary Descent

    NASA Astrophysics Data System (ADS)

    Azimov, Dilmurat M.; Bishop, Robert H.

    2018-02-01

    This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.

  5. Integrated Targeting and Guidance for Powered Planetary Descent

    NASA Astrophysics Data System (ADS)

    Azimov, Dilmurat M.; Bishop, Robert H.

    2018-06-01

    This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.

  6. Status on Iterative Transform Phase Retrieval Applied to the GBT Data

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Scott; Shiri, Ron; Hollis, Jan M.; Lyons, Richard; Prestage, Richard; Hunter, Todd; Ghigo, Frank; Nikolic, Bojan

    2007-01-01

    This slide presentation reviews the use of iterative transform phase retrieval in the analysis of the Green Bank Radio Telescope (GBT) data. It reviews the NASA projects that have used phase retrieval, and the testbed for the algorithm to be used for the James Webb Space Telescope. It shows the comparison of phase retrieval with an interferometer, and reviews the two approaches used for phase retrieval, either iterative transform (ITA) or parametric (non-linear least-squares model fitting). The concept of ITA phase retrieval is reviewed, as is its application to radio antennas. The presentation also examines the National Radio Astronomy Observatory (NRAO) data from the GBT, and the Fourier model that NRAO uses to analyze the data. The challenge for ITA phase retrieval is reviewed, and the coherent approximation for incoherent data is shown. The validity of the approximation is good for a large tilt. There is a review of the proof-of-concept phase retrieval simulation using the input wavefront, and of the initial sampling-parameter estimate from the focused GBT data.

  7. Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics

    NASA Astrophysics Data System (ADS)

    Gomoll, Andrea; Hmelo-Silver, Cindy E.; Šabanović, Selma; Francisco, Matthew

    2016-12-01

    Early experiences in science, technology, engineering, and math (STEM) are important for getting youth interested in STEM fields, particularly for girls. Here, we explore how an after-school robotics club can provide informal STEM experiences that inspire students to engage with STEM in the future. Human-centered robotics, with its emphasis on the social aspects of science and technology, may be especially important for bringing girls into the STEM pipeline. Using a problem-based approach, we designed two robotics challenges. We focus here on the more extended second challenge, in which participants were asked to imagine and build a telepresence robot that would allow others to explore their space from a distance. This research follows four girls as they engage with human-centered telepresence robotics design. We constructed case studies of these target participants to explore their different forms of engagement and phases of interest development—considering facets of behavioral, social, cognitive, and conceptual-to-consequential engagement as well as stages of interest ranging from triggered interest to well-developed individual interest. The results demonstrated that opportunities to personalize their robots and feedback from peers and facilitators were important motivators. We found both explicit and vicarious engagement and varied interest phases in our group of four focus participants. This first iteration of our project demonstrated that human-centered robotics is a promising approach to getting girls interested and engaged in STEM practices. As we design future iterations of our robotics club environment, we must consider how to harness multiple forms of leadership and engagement without marginalizing students with different working preferences.

  8. Compensation for the phase-type spatial periodic modulation of the near-field beam at 1053 nm

    NASA Astrophysics Data System (ADS)

    Gao, Yaru; Liu, Dean; Yang, Aihua; Tang, Ruyu; Zhu, Jianqiang

    2017-10-01

    A phase-only spatial light modulator is used to provide and compensate for the spatial periodic modulation (SPM) of the near-field beam in the near infrared, at a wavelength of 1053 nm, with an improved iterative weight-based method. The transmission characteristics of the incident beam are changed by a spatial light modulator (SLM) to shape the spatial intensity of the output beam. The propagation and reverse propagation of the light in free space are two important processes in the iterative procedure. The underlying theory is the beam angular spectrum transmit formula (ASTF) and the principle of the iterative weight-based method. We have made two improvements to the originally proposed iterative weight-based method: we select the appropriate parameter by choosing the minimum value of the output-beam contrast degree, and we use the MATLAB built-in angle function to acquire the corresponding phase of the light wave function. The phase required to compensate for the intensity distribution of the incident SPM beam is iterated by this algorithm, which can decrease the magnitude of the SPM of the intensity on the observation plane. The experimental results show that the phase-type SPM of the near-field beam is subject to a certain restriction. We have also analyzed some factors that make the results imperfect. The experimental results verify the possible applicability of this iterative weight-based method to compensate for the SPM of the near-field beam.
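
    The sketch below shows the two ingredients this abstract leans on, angular-spectrum propagation between the SLM and observation planes and an iterative weighted phase-retrieval loop, in a Gerchberg-Saxton-style arrangement. The weight-update rule, beam, grid, and propagation distance are all assumptions; the paper's iterative weight-based method differs in its details.

    ```python
    import numpy as np

    # Angular-spectrum propagation plus a weighted Gerchberg-Saxton-style loop to
    # find an SLM phase that smooths a periodically modulated near field (toy setup).
    n, dx, wl, z = 256, 10e-6, 1053e-9, 0.05          # grid, pixel pitch, wavelength, distance
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2*np.pi*np.sqrt(np.maximum(0.0, 1.0/wl**2 - FX**2 - FY**2))
    H = np.exp(1j*kz*z)                               # angular-spectrum transfer function

    def prop(u, forward=True):
        return np.fft.ifft2(np.fft.fft2(u) * (H if forward else np.conj(H)))

    xg = (np.arange(n) - n/2) * dx
    X, Y = np.meshgrid(xg, xg)
    amp_in = np.exp(-(X**2 + Y**2) / (0.4e-3)**2)     # incident Gaussian beam (assumed)
    amp_in *= 1.0 + 0.1*np.cos(2*np.pi*X/1e-4)        # spatial periodic modulation to remove
    target = np.exp(-(X**2 + Y**2) / (0.4e-3)**2)     # desired smooth output amplitude
    w = target.copy()
    phase = np.zeros((n, n))

    for _ in range(50):
        u_out = prop(amp_in * np.exp(1j*phase))                   # SLM plane -> observation plane
        w = w * target / np.maximum(np.abs(u_out), 1e-12)         # weight update (assumption)
        u_back = prop(w * np.exp(1j*np.angle(u_out)), forward=False)
        phase = np.angle(u_back)                                  # phase to display on the SLM

    patch = np.abs(u_out)[n//2 - 20:n//2 + 20, n//2 - 20:n//2 + 20]
    print(f"intensity contrast over the central patch: {patch.std()/patch.mean():.3f}")
    ```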

  9. A phase space model of Fourier ptychographic microscopy

    PubMed Central

    Horstmeyer, Roarke; Yang, Changhuei

    2014-01-01

    A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography’s and FPM’s captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment. PMID:24514995

  10. Canonical quantization of classical mechanics in curvilinear coordinates. Invariant quantization procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl

    In this paper, an invariant quantization procedure of classical mechanics on the phase space over a flat configuration space is presented. Then, the passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. An explicit form of position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism onto the non-flat case and the related ambiguities of the process of quantization are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over configuration space is derived. •Explicit form of position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of presented formalism onto non-flat case and related ambiguities of the quantization process are discussed.

  11. Advanced nodal neutron diffusion method with space-dependent cross sections: ILLICO-VX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajic, H.L.; Ougouag, A.M.

    1987-01-01

    Advanced transverse integrated nodal methods for neutron diffusion developed since the 1970s require that node- or assembly-homogenized cross sections be known. The underlying structural heterogeneity can be accurately accounted for in homogenization procedures by the use of heterogeneity or discontinuity factors. Other (milder) types of heterogeneity, burnup-induced or due to thermal-hydraulic feedback, can be resolved by explicitly accounting for the spatial variations of material properties. This can be done during the nodal computations via nonlinear iterations. The new method has been implemented in the code ILLICO-VX (ILLICO variable cross-section method). Numerous numerical tests were performed. As expected, the convergence rate of ILLICO-VX is lower than that of ILLICO, requiring approx. 30% more outer iterations per k_eff computation. The methodology has also been implemented as the NOMAD-VX option of the NOMAD, multicycle, multigroup, two- and three-dimensional nodal diffusion depletion code. The burnup-induced heterogeneities (space dependence of cross sections) are calculated during the burnup steps.

  12. Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic

    NASA Astrophysics Data System (ADS)

    Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.

    2015-11-01

    Electron temperature (Te) measurements and consequent electron thermal transport inferences will be critical to the non-active phases of ITER operation and will take on added importance during the alpha heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high-frequency instabilities (e.g. ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements will be described component by component with special emphasis on the integration to form a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.

  13. Existence and amplitude bounds for irrotational water waves in finite depth

    NASA Astrophysics Data System (ADS)

    Kogelbauer, Florian

    2017-12-01

    We prove the existence of solutions to the irrotational water-wave problem in finite depth and derive an explicit upper bound on the amplitude of the nonlinear solutions in terms of the wavenumber, the total hydraulic head, the wave speed and the relative mass flux. Our approach relies upon a reformulation of the water-wave problem as a one-dimensional pseudo-differential equation and the Newton-Kantorovich iteration for Banach spaces. This article is part of the theme issue 'Nonlinear water waves'.

  14. Probabilistic Structures Analysis Methods (PSAM) for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The basic formulation for probabilistic finite element analysis is described and demonstrated on a few sample problems. This formulation is based on iterative perturbation that uses the factorized stiffness of the unperturbed system as the iteration preconditioner for obtaining the solution to the perturbed problem. This approach eliminates the need to compute, store, and manipulate explicit partial derivatives of the element matrices and force vector, which not only reduces memory usage considerably, but also greatly simplifies the coding and validation tasks. All aspects of the proposed formulation were combined in a demonstration problem using a simplified model of a curved turbine blade discretized with 48 shell elements, and having random pressure and temperature fields with partial correlation, random uniform thickness, and random stiffness at the root.
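
    The iterative-perturbation idea is easy to show in isolation: factor the unperturbed stiffness K0 once and reuse that factorization as the iteration operator for the perturbed system (K0 + dK) u = f, so dK is never factorized and no derivative matrices are formed. The matrices below are random SPD stand-ins, not a turbine-blade model.

    ```python
    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    # Iterative perturbation: reuse the factorization of K0 for (K0 + dK) u = f.
    rng = np.random.default_rng(2)
    n = 300
    A = rng.standard_normal((n, n))
    K0 = A @ A.T + n*np.eye(n)               # unperturbed stiffness (SPD)
    B = rng.standard_normal((n, n))
    dK = 0.05 * (B + B.T)                    # small symmetric perturbation (assumed magnitude)
    f = rng.standard_normal(n)

    c0 = cho_factor(K0)                      # factorize once
    u = cho_solve(c0, f)                     # unperturbed solution as the starting guess
    for k in range(100):
        r = f - (K0 @ u + dK @ u)            # residual of the perturbed system
        du = cho_solve(c0, r)                # reuse the factorization every iteration
        u += du
        if np.linalg.norm(du) < 1e-12 * np.linalg.norm(u):
            break
    print(f"{k + 1} iterations, final residual {np.linalg.norm(f - (K0 + dK) @ u):.2e}")
    ```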

  15. Asymmetrical booster ascent guidance and control system design study. Volume 5: Space shuttle powered explicit guidance. [space shuttle development

    NASA Technical Reports Server (NTRS)

    Jaggers, R. F.

    1974-01-01

    An optimum powered explicit guidance algorithm capable of handling all space shuttle exoatmospheric maneuvers is presented. The theoretical and practical basis for the currently baselined space shuttle powered flight guidance equations and logic is documented. Detailed flow diagrams for implementing the steering computations for all shuttle phases, including powered return to launch site (RTLS) abort, are also presented. Derivation of the powered RTLS algorithm is provided, as well as detailed flow diagrams for implementing the option. The flow diagrams and equations are compatible with the current powered flight documentation.

  16. Discrete Fourier Transform Analysis in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2009-01-01

    Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations, and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations when compared to the usual DFT counterpart. The software decreases the run time of the DFT in certain applications such as phase retrieval that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations versus N log(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.

  17. Phase-space evolution of x-ray coherence in phase-sensitive imaging.

    PubMed

    Wu, Xizeng; Liu, Hong

    2008-08-01

    X-ray coherence evolution in the imaging process plays a key role for x-ray phase-sensitive imaging. In this work we present a phase-space formulation for the phase-sensitive imaging. The theory is reformulated in terms of the cross-spectral density and associated Wigner distribution. The phase-space formulation enables an explicit and quantitative account of partial coherence effects on phase-sensitive imaging. The presented formulas for x-ray spectral density at the detector can be used for performing accurate phase retrieval and optimizing the phase-contrast visibility. The concept of phase-space shearing length derived from this phase-space formulation clarifies the spatial coherence requirement for phase-sensitive imaging with incoherent sources. The theory has been applied to x-ray Talbot interferometric imaging as well. The peak coherence condition derived reveals new insights into three-grating-based Talbot-interferometric imaging and gratings-based x-ray dark-field imaging.

  18. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    To date, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
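
    A compact sketch of the explicit approximate-inverse idea: each column m_j of the preconditioner M, approximating A^(-1), solves an independent small least-squares problem min ||A m_j - e_j|| restricted to a sparsity pattern, and applying M is a single sparse matrix-vector product. The matrix, its density, and the pattern choice (that of the corresponding column of A) are illustrative assumptions, and the column problems are solved serially here rather than in parallel.

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random, identity, csc_matrix
    from scipy.sparse.linalg import gmres

    # Build an explicit approximate inverse column by column, then use it as the
    # preconditioner for GMRES.
    rng = np.random.default_rng(3)
    n = 400
    A = (sparse_random(n, n, density=0.01, random_state=rng) + 4.0*identity(n)).tocsc()

    cols = []
    for j in range(n):
        pattern = A[:, j].indices                      # allowed nonzero rows of m_j
        rows = np.unique(A[:, pattern].indices)        # rows reachable through that pattern
        Asub = A[rows, :][:, pattern].toarray()
        rhs = (rows == j).astype(float)                # restriction of the unit vector e_j
        mj, *_ = np.linalg.lstsq(Asub, rhs, rcond=None)
        col = np.zeros(n)
        col[pattern] = mj
        cols.append(col)
    M = csc_matrix(np.column_stack(cols))              # the explicit approximate inverse

    b = rng.standard_normal(n)
    x, info = gmres(A, b, M=M)
    print(f"GMRES info = {info}, final residual = {np.linalg.norm(b - A @ x):.2e}")
    ```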

  19. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard nonLMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard nonLMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
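
    The two iterative procedures compared in this abstract can be seen on a single implicit-Euler step (the simplest implicit LMM): simple fixed-point iteration versus Newton iteration for the nonlinear algebraic equation y_new = y_old + h f(y_new). The test ODE, step size, and tolerances are assumptions.

    ```python
    import numpy as np

    # Simple (fixed-point) iteration versus Newton iteration for one implicit-Euler
    # step applied to y' = y*(1 - y).
    f = lambda y: y*(1.0 - y)
    dfdy = lambda y: 1.0 - 2.0*y
    y_old, h = 0.2, 0.5

    # simple iteration: y <- y_old + h*f(y)
    y_fp = y_old
    for fp_iters in range(1, 101):
        y_next = y_old + h*f(y_fp)
        if abs(y_next - y_fp) < 1e-14:
            y_fp = y_next
            break
        y_fp = y_next

    # Newton iteration on g(y) = y - y_old - h*f(y) = 0
    y_nw = y_old
    for nw_iters in range(1, 101):
        g = y_nw - y_old - h*f(y_nw)
        y_next = y_nw - g / (1.0 - h*dfdy(y_nw))
        if abs(y_next - y_nw) < 1e-14:
            y_nw = y_next
            break
        y_nw = y_next

    print(f"fixed point: {y_fp:.12f} in {fp_iters} iterations; "
          f"Newton: {y_nw:.12f} in {nw_iters} iterations")
    ```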

  20. Group iterative methods for the solution of two-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.

    2016-06-01

    A variety of problems in science and engineering may be described by fractional partial differential equations (FPDE) involving space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, their application to the time fractional diffusion counterpart is still yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods for solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formula. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number.

  1. On the development of OpenFOAM solvers based on explicit and implicit high-order Runge-Kutta schemes for incompressible flows with heat transfer

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato

    2018-01-01

    Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.

  2. F-8C adaptive control law refinement and software development

    NASA Technical Reports Server (NTRS)

    Hartmann, G. L.; Stein, G.

    1981-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.

  3. Four-level conservative finite-difference schemes for Boussinesq paradigm equation

    NASA Astrophysics Data System (ADS)

    Kolkovska, N.

    2013-10-01

    In this paper a two-parameter family of four-level conservative finite-difference schemes is constructed for the multidimensional Boussinesq paradigm equation. The schemes are explicit in the sense that no inner iterations are needed for evaluation of the numerical solution. The preservation of the discrete energy with this method is proved. The schemes have been numerically tested on a one-soliton propagation model and a two-soliton interaction model. The numerical experiments demonstrate that the proposed family of schemes has second-order convergence in the space and time steps in the discrete maximal norm.

  4. Computational trigonometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, K.

    1994-12-31

    By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to gradient descent, conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
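
    As a small numerical illustration of the trigonometric quantities involved, the sketch below evaluates the first antieigenvalue (the cosine of the operator angle) of a symmetric positive definite matrix in two ways: from the closed-form expression 2*sqrt(lambda_min*lambda_max)/(lambda_min + lambda_max) and by directly minimizing <Ax,x>/(||Ax|| ||x||) over the span of the extreme eigenvectors, where the first antieigenvector is known to lie. The test matrix is an arbitrary assumption.

    ```python
    # Hedged illustration: first antieigenvalue (operator angle) of an SPD matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 5))
    A = B @ B.T + 5.0 * np.eye(5)              # symmetric positive definite test matrix

    lam, V = np.linalg.eigh(A)                  # eigenvalues ascending, eigenvectors in columns
    mu_closed = 2.0 * np.sqrt(lam[0] * lam[-1]) / (lam[0] + lam[-1])   # closed-form antieigenvalue

    def quotient(x):
        Ax = A @ x
        return (x @ Ax) / (np.linalg.norm(Ax) * np.linalg.norm(x))

    # one-parameter scan over the span of the extreme eigenvectors
    ts = np.linspace(0.0, np.pi / 2, 100001)
    mu_numeric = min(quotient(np.cos(t) * V[:, 0] + np.sin(t) * V[:, -1]) for t in ts)

    print("closed form  :", mu_closed)
    print("numerical min:", mu_numeric)         # the two values agree closely
    ```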

  5. Iterative method of construction of a bifurcation diagram of autorotation motions for a system with one degree of freedom

    NASA Astrophysics Data System (ADS)

    Klimina, L. A.

    2018-05-01

    A modification of the Picard approach is suggested that is targeted at the construction of a bifurcation diagram of 2π-periodic motions of a mechanical system with a cylindrical phase space. Each iterative step is based on principles of averaging and energy balance similar to the Poincaré-Pontryagin approach. If the iterative procedure converges, it provides the periodic trajectory of the system depending on the bifurcation parameter of the model. The method is applied to describe self-sustained rotations in the model of an aerodynamic pendulum.

  6. High-Order Space-Time Methods for Conservation Laws

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2013-01-01

    Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.

  7. Discrete Fourier Transform in a Complex Vector Space

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2015-01-01

    An image-based phase retrieval technique has been developed that can be used on board a space-based iterative transformation system. Image-based wavefront sensing is computationally demanding due to the floating-point nature of the process. The discrete Fourier transform (DFT) calculation is presented in "diagonal" form. By diagonal we mean that a transformation of basis is introduced by an application of the similarity transform of linear algebra. The current method exploits the diagonal structure of the DFT in a special way, particularly when parts of the calculation do not have to be repeated at each iteration to converge to an acceptable solution in order to focus an image.

  8. Visibility graphs and symbolic dynamics

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas; Just, Wolfram

    2018-07-01

    Visibility algorithms are a family of geometric and ordering criteria by which a real-valued time series of N data is mapped into a graph of N nodes. This graph has been shown to often inherit in its topology nontrivial properties of the series structure, and can thus be seen as a combinatorial representation of a dynamical system. Here we explore in some detail the relation between visibility graphs and symbolic dynamics. To do that, we consider the degree sequence of horizontal visibility graphs generated by the one-parameter logistic map, for a range of values of the parameter for which the map shows chaotic behaviour. Numerically, we observe that in the chaotic region the block entropies of these sequences systematically converge to the Lyapunov exponent of the time series. Hence, Pesin's identity suggests that these block entropies are converging to the Kolmogorov-Sinai entropy of the physical measure, which ultimately suggests that the algorithm is implicitly and adaptively constructing phase space partitions which might have the generating property. To give analytical insight, we explore the relation k(x), x ∈ [0, 1] that, for a given datum with value x, assigns in graph space a node with degree k. In the case of the out-degree sequence, such relation is indeed a piecewise constant function. By making use of explicit methods and tools from symbolic dynamics we are able to analytically show that the algorithm indeed performs an effective partition of the phase space and that such partition is naturally expressed as a countable union of subintervals, where the endpoints of each subinterval are related to the fixed point structure of the iterates of the map and the subinterval enumeration is associated with particular ordering structures that we called motifs.
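
    A minimal sketch of the numerical experiment described here, under illustrative assumptions (map parameter r = 4, series length, block sizes, and a naive quadratic visibility test): it builds the horizontal visibility graph of a chaotic logistic-map series and estimates block-entropy increments of the degree sequence.

    ```python
    # Hedged illustration: horizontal visibility graph (HVG) of a logistic-map series
    # and block entropies of its degree sequence.  All parameters are illustrative.
    import numpy as np
    from collections import Counter

    r, N = 4.0, 20000                      # fully chaotic logistic map, series length
    x = np.empty(N)
    x[0] = 0.4
    for i in range(N - 1):
        x[i + 1] = r * x[i] * (1.0 - x[i])

    # HVG: i and j > i are linked iff every datum strictly between them lies below
    # both x[i] and x[j]; the running maximum m lets us stop early for each i
    deg = np.zeros(N, dtype=int)
    for i in range(N):
        m = -np.inf
        for j in range(i + 1, N):
            if x[j] > m:                   # nothing between i and j blocks the view
                deg[i] += 1
                deg[j] += 1
            m = max(m, x[j])
            if m >= x[i]:                  # view from i is blocked from now on
                break

    def block_entropy(seq, n):
        """Shannon entropy (nats) of blocks of length n in an integer sequence."""
        blocks = Counter(tuple(seq[k:k + n]) for k in range(len(seq) - n + 1))
        total = sum(blocks.values())
        p = np.array([c / total for c in blocks.values()])
        return -np.sum(p * np.log(p))

    increments = [block_entropy(deg, n + 1) - block_entropy(deg, n) for n in range(1, 5)]
    # increments trend toward the Lyapunov exponent ln 2 ≈ 0.69 for long series;
    # finite samples bias the estimates for the larger block sizes
    print("block entropy increments:", np.round(increments, 3))
    ```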

  9. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    NASA Technical Reports Server (NTRS)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

    Low frequency electromagnetic tomography such as the capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of a gas-liquid two-phase system under microgravity condition in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM simulated and actual measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, it enforces the known permittivity of the two phases on the unknown pixels which exceed the reasonable range of permittivity in each iteration. This strategy not only stabilizes the convergence process, but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
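
    The sketch below illustrates the general structure of such a constrained, Tikhonov-regularized iteration on a toy linear forward model (a random matrix standing in for the FEM capacitance model): each sweep takes a regularized least-squares update to reduce the data misfit and then clamps pixels back into the admissible permittivity range of the two phases. All sizes, the regularization weight, and the forward model are illustrative assumptions, not the INTAC implementation.

    ```python
    # Hedged illustration: Tikhonov-regularized iteration with a two-phase permittivity
    # constraint, on a toy linear forward model standing in for the FEM problem.
    import numpy as np

    rng = np.random.default_rng(1)
    n_pix, n_meas = 100, 60
    A = rng.standard_normal((n_meas, n_pix))               # toy forward model
    eps_lo, eps_hi = 1.0, 3.0                               # permittivities of the two phases

    x_true = np.where(rng.random(n_pix) < 0.3, eps_hi, eps_lo)
    d = A @ x_true + 0.01 * rng.standard_normal(n_meas)    # simulated capacitance data

    lam = 1.0                                               # Tikhonov regularization weight
    x = np.full(n_pix, eps_lo)                              # start from the background phase
    for it in range(50):
        # regularized least-squares update of the image (exact step for this linear toy model)
        r = d - A @ x
        dx = np.linalg.solve(A.T @ A + lam * np.eye(n_pix),
                             A.T @ r - lam * (x - eps_lo))
        x = x + dx
        # enforce the known two-phase permittivity range on out-of-range pixels
        x = np.clip(x, eps_lo, eps_hi)

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```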

  10. Denoised Wigner distribution deconvolution via low-rank matrix completion

    DOE PAGES

    Lee, Justin; Barbastathis, George

    2016-08-23

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
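
    A minimal sketch of the low-rank denoising idea in isolation (not the full WDD pipeline): a noisy low-rank matrix is denoised by soft-thresholding of its singular values, the proximal step that underlies nuclear-norm-based matrix completion. Matrix size, rank, noise level, and threshold are illustrative assumptions.

    ```python
    # Hedged illustration: low-rank denoising by singular value soft-thresholding.
    import numpy as np

    rng = np.random.default_rng(2)
    m, n, rank, sigma = 200, 200, 5, 0.5
    X = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))   # low-rank "phase space" matrix
    Y = X + sigma * rng.standard_normal((m, n))                           # noisy observation

    def svt(M, tau):
        """Singular value soft-thresholding: proximal operator of tau * nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    tau = sigma * (np.sqrt(m) + np.sqrt(n))     # threshold of the order of the largest noise singular value
    X_hat = svt(Y, tau)

    print("noisy error   :", np.linalg.norm(Y - X) / np.linalg.norm(X))
    print("denoised error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
    ```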

  11. Denoised Wigner distribution deconvolution via low-rank matrix completion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Justin; Barbastathis, George

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.

  12. Improving the iterative Linear Interaction Energy approach using automated recognition of configurational transitions.

    PubMed

    Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P

    2016-01-01

    Recently an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of error prediction by combining interaction energies of simulations starting from different conformations. Thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space, and do not show transitions to other parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw data for the interaction energies, transitions during simulation between different parts of phase space are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained from monitoring simulations using the proposed filtering method and by prematurely terminating simulations accordingly.

  13. STS safety approval process for small self-contained payloads

    NASA Technical Reports Server (NTRS)

    Gum, Mary A.

    1988-01-01

    The safety approval process established by the National Aeronautics and Space Administration for Get Away Special (GAS) payloads is described. Although the designing organization is ultimately responsible for the safe operation of its payload, the Get Away Special team at the Goddard Space Flight Center will act as advisors while iterative safety analyses are performed and the Safety Data Package inputs are submitted. This four-phase communications process will ultimately give NASA confidence that the GAS payload is safe, and successful completion of the Phase 3 package and review will clear the way for flight aboard the Space Transportation System orbiter.

  14. Multi-Shot Sensitivity-Encoded Diffusion Data Recovery Using Structured Low-Rank Matrix Completion (MUSSELS)

    PubMed Central

    Mani, Merry; Jacob, Mathews; Kelley, Douglas; Magnotta, Vincent

    2017-01-01

    Purpose: To introduce a novel method for the recovery of multi-shot diffusion weighted (MS-DW) images from echo-planar imaging (EPI) acquisitions. Methods: Current EPI-based MS-DW reconstruction methods rely on the explicit estimation of the motion-induced phase maps to recover artifact-free images. In the new formulation, the k-space data of the artifact-free DWI is recovered using a structured low-rank matrix completion scheme, which does not require explicit estimation of the phase maps. The structured matrix is obtained as the lifting of the multi-shot data. The smooth phase-modulations between shots manifest as null-space vectors of this matrix, which implies that the structured matrix is low-rank. The missing entries of the structured matrix are filled in using a nuclear-norm minimization algorithm subject to the data-consistency. The formulation enables the natural introduction of smoothness regularization, thus enabling implicit motion-compensated recovery of the MS-DW data. Results: Our experiments on in-vivo data show effective removal of artifacts arising from inter-shot motion using the proposed method. The method is shown to achieve better reconstruction than the conventional phase-based methods. Conclusion: We demonstrate the utility of the proposed method to effectively recover artifact-free images from Cartesian fully/under-sampled and partial Fourier acquired data without the use of explicit phase estimates. PMID:27550212

  15. Performance assessment of the antenna setup for the ITER plasma position reflectometry in-vessel systems.

    PubMed

    Varela, P; Belo, J H; Quental, P B

    2016-11-01

    The design of the in-vessel antennas for the ITER plasma position reflectometry diagnostic is very challenging due to the need to cope both with the space restrictions inside the vacuum vessel and with the high mechanical and thermal loads during ITER operation. Here, we present the work carried out to assess and optimise the design of the antenna. We show that the blanket modules surrounding the antenna strongly modify its characteristics and need to be considered from the early phases of the design. We also show that it is possible to optimise the antenna performance, within the design restrictions.

  16. Some new results on the central overlap problem in astrometry

    NASA Astrophysics Data System (ADS)

    Rapaport, M.

    1998-07-01

    The central overlap problem in astrometry has been revisited in recent years by Eichhorn (1988), who explicitly inverted the matrix of a constrained least squares problem. In this paper, the general explicit solution of the unconstrained central overlap problem is given. We also give the explicit solution for another set of constraints; this result is a confirmation of a conjecture expressed by Eichhorn (1988). We also consider the use of iterative methods to solve the central overlap problem. A surprising result is obtained when the classical Gauss-Seidel method is used: the iterations converge immediately to the general solution of the equations. We explain this property by writing the central overlap problem in a new set of variables.
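
    For readers unfamiliar with the iterative remark, a generic Gauss-Seidel sweep looks as follows; the small diagonally dominant test system is an arbitrary stand-in for the central-overlap normal equations.

    ```python
    # Hedged illustration: classical Gauss-Seidel iteration for a linear system Ax = b.
    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-12, max_sweeps=1000):
        """Gauss-Seidel sweeps until the update is below tol (in the max norm)."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for sweep in range(max_sweeps):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                return x, sweep + 1
        return x, max_sweeps

    # arbitrary, strictly diagonally dominant test system
    rng = np.random.default_rng(3)
    A = rng.uniform(-1.0, 1.0, (6, 6))
    np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1.0)
    b = rng.uniform(-1.0, 1.0, 6)

    x, sweeps = gauss_seidel(A, b)
    print("sweeps:", sweeps, "  residual:", np.linalg.norm(A @ x - b))
    ```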

  17. Evolutionary Software Development (Developpement Evolutionnaire de Logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO /IEC... 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  18. Evolutionary Software Development (Developpement evolutionnaire de logiciels)

    DTIC Science & Technology

    2008-08-01

    development processes. While this may be true, frequently it is not. MIL-STD-498 was explicitly introduced to encourage iterative development; ISO /IEC... 12207 was carefully worded not to prohibit iterative development. Yet both standards were widely interpreted as requiring waterfall development, as

  19. Phase-space reaction network on a multisaddle energy landscape: HCN isomerization.

    PubMed

    Li, Chun-Biu; Matsunaga, Yasuhiro; Toda, Mikito; Komatsuzaki, Tamiki

    2005-11-08

    By using the HCN/CNH isomerization reaction as an illustrative vehicle of chemical reactions on multisaddle energy landscapes, we give explicit visualizations of molecular motions associated with a straight-through reaction tube in the phase space inside which all reactive trajectories pass from one basin to another, while eliminating recrossing trajectories in the configuration space. This visualization provides us with a chemical intuition of how chemical species "walk along" the reaction-rate slope in the multidimensional phase space compared with the intrinsic reaction path in the configuration space. The distinct nonergodic features in the two different HCN and CNH wells can be easily demonstrated by a Poincaré surface of section in those potential minima, which predicts a priori the pattern of trajectories residing in the potential well. We elucidate the global phase-space structure which gives rise to the non-Markovian dynamics or the dynamical correlation of sequential multisaddle chemical reactions. The phase-space structure relevant to the controllability of the product state in chemical reactions is also discussed.

  20. Explicit formulation of second and third order optical nonlinearity in the FDTD framework

    NASA Astrophysics Data System (ADS)

    Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas

    2018-01-01

    The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software requires solving iteratively an implicit form of Maxwell's equations over the entire numerical space and at each time step. Reaching numerical convergence demands significant computational resources and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is also provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.

  1. An in-depth stability analysis of nonuniform FDTD combined with novel local implicitization techniques

    NASA Astrophysics Data System (ADS)

    Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries

    2017-08-01

    This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, in the manner of Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid, have immediate practical applications.

  2. Nonlinear dynamic theory for photorefractive phase hologram formation

    NASA Technical Reports Server (NTRS)

    Kim, D. M.; Shah, R. R.; Rabson, T. A.; Tittle, F. K.

    1976-01-01

    A nonlinear dynamic theory is developed for the formation of photorefractive volume phase holograms. A feedback mechanism existing between the photogenerated field and free-electron density, treated explicitly, yields the growth and saturation of the space-charge field in a time scale characterized by the coupling strength between them. The expression for the field reduces in the short-time limit to previous theories and approaches in the long-time limit the internal or photovoltaic field. Additionally, the phase of the space charge field is shown to be time-dependent.

  3. Numerical simulation of phase transition problems with explicit interface tracking

    DOE PAGES

    Hu, Yijing; Shi, Qiangqiang; de Almeida, Valmor F.; ...

    2015-12-19

    Phase change is ubiquitous in nature and industrial processes. Starting from the Stefan problem, it is a topic with a long history in applied mathematics and sciences and continues to generate outstanding mathematical problems. For instance, the explicit tracking of the Gibbs dividing surface between phases is still a grand challenge. Our work has been motivated by such a challenge and here we report on progress made in solving the governing equations of continuum transport in the presence of a moving interface by the front tracking method. The most pressing issue is the accounting of topological changes suffered by the interface between phases wherein break up and/or merge takes place. The underlying physics of topological changes require the incorporation of space-time subscales not within reach at the moment. Therefore we use heuristic geometrical arguments to reconnect phases in space. This heuristic approach provides new insight in various applications and it is extensible to include subscale physics and chemistry in the future. We demonstrate the method on applications such as simulating freezing, melting, dissolution, and precipitation. The latter examples also include the coupling of the phase transition solution with the Navier-Stokes equations for the effect of flow convection.

  4. Adaptive implicit-explicit and parallel element-by-element iteration schemes

    NASA Technical Reports Server (NTRS)

    Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.

    1989-01-01

    Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, which is a way of rewriting the EBE formulation to make its parallel processing potential and implementation more clear, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests performed demonstrate the savings in the CPU time and memory.

  5. An adaptively refined phase-space element method for cosmological simulations and collisionless dynamics

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Angulo, Raul E.

    2016-01-01

    N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refineable `Lagrangian phase-space elements'. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase-space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: (I) explicit tracking of the fine-grained distribution function, (II) natural representation of caustics, (III) intrinsically smooth gravitational potential fields, thus (IV) eliminating the need for any type of ad hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows us to explicitly follow the fine-grained evolution in six-dimensional phase-space.

  6. Sparse magnetic resonance imaging reconstruction using the bregman iteration

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo

    2013-01-01

    Magnetic resonance imaging (MRI) reconstruction needs many samples that are acquired sequentially by using phase encoding gradients in an MRI system. This is directly connected to the scan time and makes acquisitions long. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which enables reconstruction of sparse images from fewer samples when the k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The image was obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparse sampling showed good results compared with the original images. Moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction methods using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
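
    The sketch below shows the generic Bregman iteration ("adding back the residual") for l1-regularized recovery of a 1-D sparse signal from randomly undersampled Fourier data, with the inner subproblem solved approximately by a few soft-thresholding (ISTA) steps. Signal length, sparsity, sampling ratio, and the weight mu are illustrative assumptions; the paper applies the idea to 2-D k-space data.

    ```python
    # Hedged 1-D illustration: Bregman iteration for l1-regularized recovery from
    # randomly undersampled Fourier samples.  All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(4)
    N, K, n_samp = 256, 10, 80
    u_true = np.zeros(N)
    u_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # sparse "image"

    mask = np.sort(rng.choice(N, n_samp, replace=False))               # random k-space locations
    def A(u):                                                          # undersampled unitary DFT
        return np.fft.fft(u, norm="ortho")[mask]
    def AH(y):                                                         # its adjoint (zero-filled inverse DFT)
        full = np.zeros(N, dtype=complex)
        full[mask] = y
        return np.fft.ifft(full, norm="ortho")

    b = A(u_true)                                                      # measured k-space data

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    mu = 0.02
    u, fk = np.zeros(N), np.zeros(n_samp, dtype=complex)
    for outer in range(30):                  # Bregman loop: add the residual back to the data
        fk = fk + (b - A(u))
        for inner in range(50):              # approximate inner solve by soft thresholding (ISTA)
            u = soft(u + np.real(AH(fk - A(u))), mu)

    print("relative error:", np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
    ```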

  7. A finite element solver for 3-D compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Reddy, K. C.; Reddy, J. N.; Nayani, S.

    1990-01-01

    Computation of the flow field inside a space shuttle main engine (SSME) requires the application of state-of-the-art computational fluid dynamics (CFD) technology. Several computer codes are under development to solve 3-D flow through the hot gas manifold. Some algorithms were designed to solve the unsteady compressible Navier-Stokes equations, either by implicit or explicit factorization methods, using several hundred or thousands of time steps to reach a steady state solution. A new iterative algorithm is being developed for the solution of the implicit finite element equations without assembling global matrices. It is an efficient iteration scheme based on a modified nonlinear Gauss-Seidel iteration with symmetric sweeps. The algorithm is analyzed for a model equation and is shown to be unconditionally stable. Results from a series of test problems are presented. The finite element code was tested for Couette flow, which is flow under a pressure gradient between two parallel plates in relative motion. Another problem that was solved is viscous laminar flow over a flat plate. The general 3-D finite element code was used to compute the flow in an axisymmetric turnaround duct at low Mach numbers.

  8. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    NASA Astrophysics Data System (ADS)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration which greatly minimizes the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.

  9. From phase space to integrable representations and level-rank duality

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Arghya; Dutta, Parikshit; Dutta, Suvankar

    2018-05-01

    We explicitly find representations for different large N phases of Chern-Simons matter theory on S^2 × S^1. These representations are characterised by Young diagrams. We show that the no-gap and lower-gap phases of Chern-Simons-matter theory correspond to integrable representations of the SU(N)_k affine Lie algebra, whereas the upper-cap phase corresponds to integrable representations of the SU(k - N)_k affine Lie algebra. We use the phase space description of [1] to obtain these representations and argue how putting a cap on the eigenvalue distribution forces the corresponding representations to be integrable. We also prove that the Young diagrams corresponding to lower-gap and upper-cap representations are related to each other by transposition under level-rank duality. Finally we draw phase space droplets for these phases and show how information about the eigenvalue and Young diagram descriptions can be captured in the topologies of these droplets in a unified way.

  10. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging, or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint enables superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint is reported. Its seamless combination with parallel imaging and compressed sensing enables use of greater acceleration in 3D MR imaging.

  11. Torus as phase space: Weyl quantization, dequantization, and Wigner formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ligabò, Marilena, E-mail: marilena.ligabo@uniba.it

    2016-08-15

    The Weyl quantization of classical observables on the torus (as phase space) without regularity assumptions is explicitly computed. The equivalence class of symbols yielding the same Weyl operator is characterized. The Heisenberg equation for the dynamics of general quantum observables is written through the Moyal brackets on the torus and the support of the Wigner transform is characterized. Finally, a dequantization procedure is introduced that applies, for instance, to the Pauli matrices. As a result we obtain the corresponding classical symbols.

  12. Improved Diffuse Foreground Subtraction with the ILC Method: CMB Map and Angular Power Spectrum Using Planck and WMAP Observations

    NASA Astrophysics Data System (ADS)

    Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun

    2017-06-01

    We report an improved technique for diffuse foreground minimization from Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the old and usual iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; however, now, to nullify the leakage, during each iteration, some of the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which in a strict mathematical sense, are not well defined. Using WMAP 9 year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2 ≤ ℓ ≤ 2500. Our power spectrum matches the published Planck results with some differences at different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
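
    At the core of any ILC-type method are the weights that minimize the variance of the linearly combined maps subject to unit response to the CMB, w = C^{-1}e / (e^T C^{-1} e). The sketch below computes these weights for a purely synthetic set of frequency channels; the foreground model, noise level, and channel count are illustrative assumptions, and no multiphase or masking logic is included.

    ```python
    # Hedged illustration: internal linear combination (ILC) weights for synthetic channels.
    import numpy as np

    rng = np.random.default_rng(5)
    n_freq, n_modes = 5, 20000

    cmb   = rng.standard_normal(n_modes)                  # common CMB signal, unit response in every channel
    sed   = np.linspace(1.0, 5.0, n_freq)                  # synthetic foreground frequency scaling
    fg    = np.outer(sed, rng.standard_normal(n_modes))    # one rank-one foreground component
    noise = 0.3 * rng.standard_normal((n_freq, n_modes))
    maps  = cmb + fg + noise                                # simulated multi-frequency harmonic coefficients

    e = np.ones(n_freq)
    C = np.cov(maps)                                        # empirical cross-frequency covariance
    Cinv_e = np.linalg.solve(C, e)
    w = Cinv_e / (e @ Cinv_e)                               # minimum variance, unit CMB response

    clean = w @ maps
    print("weights:", np.round(w, 3), " (sum = %.3f)" % w.sum())
    print("residual rms:", np.std(clean - cmb))             # well below the foreground level of any single channel
    ```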

  13. Nonlinear Network Description for Many-Body Quantum Systems in Continuous Space

    NASA Astrophysics Data System (ADS)

    Ruggeri, Michele; Moroni, Saverio; Holzmann, Markus

    2018-05-01

    We show that the recently introduced iterative backflow wave function can be interpreted as a general neural network in continuum space with nonlinear functions in the hidden units. Using this wave function in variational Monte Carlo simulations of liquid 4He in two and three dimensions, we typically find a tenfold increase in accuracy over currently used wave functions. Furthermore, subsequent stages of the iteration procedure define a set of increasingly good wave functions, each with its own variational energy and variance of the local energy: extrapolation to zero variance gives energies in close agreement with the exact values. For two dimensional 4He, we also show that the iterative backflow wave function can describe both the liquid and the solid phase with the same functional form—a feature shared with the shadow wave function, but now joined by much higher accuracy. We also achieve significant progress for liquid 3He in three dimensions, improving previous variational and fixed-node energies.

  14. Group Chaos Theory: A Metaphor and Model for Group Work

    ERIC Educational Resources Information Center

    Rivera, Edil Torres; Wilbur, Michael; Frank-Saraceni, James; Roberts-Wilbur, Janice; Phan, Loan T.; Garrett, Michael T.

    2005-01-01

    Group phenomena and interactions are described through the use of the chaos theory constructs and characteristics of sensitive dependence on initial conditions, phase space, turbulence, emergence, self-organization, dissipation, iteration, bifurcation, and attractors and fractals. These constructs and theoretical tenets are presented as applicable…

  15. Existence and exponential stability of traveling waves for delayed reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Hsu, Cheng-Hsiung; Yang, Tzi-Sheng; Yu, Zhixian

    2018-03-01

    The purpose of this work is to investigate the existence and exponential stability of traveling wave solutions for general delayed multi-component reaction-diffusion systems. Following the monotone iteration scheme via an explicit construction of a pair of upper and lower solutions, we first obtain the existence of monostable traveling wave solutions connecting two different equilibria. Then, applying the techniques of weighted energy method and comparison principle, we show that all solutions of the Cauchy problem for the considered systems converge exponentially to traveling wave solutions provided that the initial perturbations around the traveling wave fronts belong to a suitable weighted Sobolev space.

  16. 15-digit accuracy calculations of Ambartsumian-Chandrasekhar's H-functions for four-term phase functions with the double-exponential formula

    NASA Astrophysics Data System (ADS)

    Kawabata, Kiyoshi

    2018-01-01

    We have established an iterative scheme to calculate with 15-digit accuracy the numerical values of Ambartsumian-Chandrasekhar's H-functions for anisotropic scattering characterized by the four-term phase function: the method incorporates some advantageous features of the iterative procedure of Kawabata (Astrophys. Space Sci. 358:32, 2015) and the double-exponential integration formula (DE-formula) of Takahashi and Mori (Publ. Res. Inst. Math. Sci. Kyoto Univ. 9:721, 1974), which proved highly effective in Kawabata (Astrophys. Space Sci. 361:373, 2016). Actual calculations of the H-functions have been carried out employing 27 selected cases of the phase function, 56 values of the single scattering albedo π0, and 36 values of an angular variable μ(= cosθ), with θ being the zenith angle specifying the direction of incidence and/or emergence of radiation. Partial results obtained for conservative isotropic scattering, Rayleigh scattering, and anisotropic scattering due to a full four-term phase function are presented. They indicate that it is important to simultaneously verify accuracy of the numerical values of the H-functions for μ<0.05, the domain often neglected in tabulation. As a sample application of the isotropic scattering H-function, an attempt is made in Appendix to simulate by iteratively solving the Ambartsumian equation the values of the plane and spherical albedos of a semi-infinite, homogeneous atmosphere calculated by Rogovtsov and Borovik (J. Quant. Spectrosc. Radiat. Transf. 183:128, 2016), who employed their analytical representations for these quantities and the single-term and two-term Henyey-Greenstein phase functions of appreciably high degrees of anisotropy. While our results are in satisfactory agreement with theirs, our procedure is in need of a faster algorithm to routinely deal with problems involving highly anisotropic phase functions giving rise to near-conservative scattering.
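
    For the simplest, isotropic-scattering case the H-function obeys the alternative form 1/H(mu) = sqrt(1 - w0) + (w0/2) * int_0^1 mu' H(mu')/(mu + mu') dmu', which admits a stable iteration. The sketch below implements that iteration with ordinary Gauss-Legendre quadrature rather than the double-exponential formula of the paper, and only to modest accuracy; the single-scattering albedo and node counts are illustrative assumptions.

    ```python
    # Hedged illustration: iterative solution of the isotropic-scattering H-function
    # using the stable alternative form and Gauss-Legendre quadrature on [0, 1].
    import numpy as np

    def h_function(omega0, n_nodes=64, n_grid=201, tol=1e-12, max_it=1000):
        t, w = np.polynomial.legendre.leggauss(n_nodes)
        mu_q, w_q = 0.5 * (t + 1.0), 0.5 * w               # nodes/weights mapped to [0, 1]

        Hq = np.ones(n_nodes)                               # H at the quadrature nodes
        for _ in range(max_it):
            integral = ((w_q * mu_q * Hq) / (mu_q[:, None] + mu_q[None, :])).sum(axis=1)
            Hq_new = 1.0 / (np.sqrt(1.0 - omega0) + 0.5 * omega0 * integral)
            if np.max(np.abs(Hq_new - Hq)) < tol:
                Hq = Hq_new
                break
            Hq = Hq_new

        mu = np.linspace(0.0, 1.0, n_grid)                  # evaluate H on an output grid
        integral = ((w_q * mu_q * Hq) / (mu[:, None] + mu_q[None, :])).sum(axis=1)
        return mu, 1.0 / (np.sqrt(1.0 - omega0) + 0.5 * omega0 * integral)

    omega0 = 0.9
    mu, H = h_function(omega0)
    print("H(0) =", H[0], "  H(1) =", H[-1])                # H(0) = 1 by construction of this form
    # zeroth-moment check: int_0^1 H dmu = (2/w0) (1 - sqrt(1 - w0)); trapezoid vs analytic value
    print("moment check:", np.trapz(H, mu), "vs", 2.0 / omega0 * (1.0 - np.sqrt(1.0 - omega0)))
    ```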

  17. A numerical scheme to solve unstable boundary value problems

    NASA Technical Reports Server (NTRS)

    Kalnay Derivas, E.

    1975-01-01

    A new iterative scheme for solving boundary value problems is presented. It consists of the introduction of an artificial time dependence into a modified version of the system of equations. Then explicit forward integrations in time are followed by explicit integrations backwards in time. The method converges under much more general conditions than schemes based on forward time integrations (false transient schemes). In particular, it can attain a steady state solution of an elliptic system of equations even if the solution is unstable, in which case other iterative schemes fail to converge. The simplicity of its use makes it attractive for solving large systems of nonlinear equations.

  18. On the performance of joint iterative detection and decoding in coherent optical channels with laser frequency fluctuations

    NASA Astrophysics Data System (ADS)

    Castrillón, Mario A.; Morero, Damián A.; Agazzi, Oscar E.; Hueda, Mario R.

    2015-08-01

    The joint iterative detection and decoding (JIDD) technique has been proposed by Barbieri et al. (2007) with the objective of compensating the time-varying phase noise and constant frequency offset experienced in satellite communication systems. The application of JIDD to optical coherent receivers in the presence of laser frequency fluctuations has not been reported in prior literature. Laser frequency fluctuations are caused by mechanical vibrations, power supply noise, and other mechanisms. They significantly degrade the performance of the carrier phase estimator in high-speed intradyne coherent optical receivers. This work investigates the performance of the JIDD algorithm in multi-gigabit optical coherent receivers. We present simulation results of bit error rate (BER) for non-differential polarization division multiplexing (PDM)-16QAM modulation in a 200 Gb/s coherent optical system that includes an LDPC code with 20% overhead and net coding gain of 11.3 dB at BER = 10-15. Our study shows that JIDD with a pilot rate ⩽ 5 % compensates for both laser phase noise and laser frequency fluctuation. Furthermore, since JIDD is used with non-differential modulation formats, we find that gains in excess of 1 dB can be achieved over existing solutions based on an explicit carrier phase estimator with differential modulation. The impact of the fiber nonlinearities in dense wavelength division multiplexing (DWDM) systems is also investigated. Our results demonstrate that JIDD is an excellent candidate for application in next generation high-speed optical coherent receivers.

  19. Universal single level implicit algorithm for gasdynamics

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.; Venkatapthy, E.

    1984-01-01

    A single-level, effectively explicit, implicit algorithm for gasdynamics is presented. The method meets all the requirements for unconditionally stable global iteration over flows with mixed subsonic and supersonic zones, including blunt body flow and boundary layer flows with strong interaction and streamwise separation. For hyperbolic (supersonic flow) regions the method is automatically equivalent to contemporary space marching methods. For elliptic (subsonic flow) regions, rapid convergence is facilitated by alternating direction solution sweeps which bring both sets of eigenvectors and the influence of both boundaries of a coordinate line equally into play. Point-by-point updating of the data with local iteration on the solution procedure at each spatial step as the sweeps progress not only renders the method single level in storage but also improves nonlinear accuracy to accelerate convergence by an order of magnitude over related two-level linearized implicit methods. The method derives robust stability from the combination of an eigenvector split upwind difference method (CSCM) with diagonally dominant ADI (DDADI) approximate factorization and computed characteristic boundary approximations.

  20. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.

  1. Multiple-image authentication with a cascaded multilevel architecture based on amplitude field random sampling and phase information multiplexing.

    PubMed

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2015-04-10

    A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.

  2. Anisotropic elastic moduli reconstruction in transversely isotropic model using MRE

    NASA Astrophysics Data System (ADS)

    Song, Jiah; In Kwon, Oh; Seo, Jin Keun

    2012-11-01

    Magnetic resonance elastography (MRE) is an elastic tissue property imaging modality in which a phase-contrast-based MRI technique is used to measure internal displacement induced by a harmonically oscillating mechanical vibration. MRE has made rapid technological progress in the past decade and has now reached the stage of clinical use. Most of the research outcomes are based on the assumption of isotropy. Since soft tissues like skeletal muscles show anisotropic behavior, the MRE technique should be extended to anisotropic elastic property imaging. This paper considers reconstruction in a transversely isotropic model, which is the simplest case of anisotropy, and develops a new non-iterative reconstruction method for visualizing the elastic moduli distribution. This new method is based on an explicit representation formula using the Newtonian potential of measured displacement. Hence, the proposed method does not require iterations since it directly recovers the anisotropic elastic moduli. We perform numerical simulations in order to demonstrate the feasibility of the proposed method in recovering a two-dimensional anisotropic tensor.

  3. Approximate solution of space and time fractional higher order phase field equation

    NASA Astrophysics Data System (ADS)

    Shamseldeen, S.

    2018-03-01

    This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.

  4. Motion-induced phase error estimation and correction in 3D diffusion tensor imaging.

    PubMed

    Van, Anh T; Hernando, Diego; Sutton, Bradley P

    2011-11-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and T2∗ blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot diffusion tensor imaging. The proposed error estimation is robust, unbiased, and approaches the Cramer-Rao lower bound. For rigid body motion, the proposed correction effectively removes motion-induced phase errors regardless of the k-space trajectory used and gives comparable performance to the more computationally expensive 3D iterative nonlinear phase error correction method. The method has been extended to handle multichannel data collected using phased-array coils. Simulation and in vivo data are shown to demonstrate the performance of the method.

  5. Coherent states for the quantum complete rigid rotor

    NASA Astrophysics Data System (ADS)

    Fontanari, Daniele; Sadovskií, Dmitrií A.

    2018-07-01

    Motivated by the possibility to describe orientations of quantum triaxial rigid rotors, such as molecules, with respect to both internal (body-fixed) and external (laboratory) frames, we go through the theory of coherent states and design the appropriate family of coherent states on T∗ SO(3) , the classical phase space of the freely rotating rigid body (the Euler top). We pay particular attention to the resolution of identity property in order to establish the explicit relation between the parameters of the coherent states and classical phase-space variables, actions and angles.

  6. Online Pairwise Learning Algorithms.

    PubMed

    Ying, Yiming; Zhou, Ding-Xuan

    2016-04-01

    Pairwise learning usually refers to a learning task that involves a loss function depending on pairs of examples, among which the most notable ones are bipartite ranking, metric learning, and AUC maximization. In this letter we study an online algorithm for pairwise learning with a least-square loss function in an unconstrained setting of a reproducing kernel Hilbert space (RKHS) that we refer to as the Online Pairwise lEaRning Algorithm (OPERA). In contrast to existing works (Kar, Sriperumbudur, Jain, & Karnick, 2013; Wang, Khardon, Pechyony, & Jones, 2012), which require that the iterates are restricted to a bounded domain or the loss function is strongly convex, OPERA is associated with a non-strongly convex objective function and learns the target function in an unconstrained RKHS. Specifically, we establish a general theorem that guarantees the almost sure convergence for the last iterate of OPERA without any assumptions on the underlying distribution. Explicit convergence rates are derived under the condition of polynomially decaying step sizes. We also establish an interesting property for a family of widely used kernels in the setting of pairwise learning and illustrate the convergence results using such kernels. Our methodology mainly depends on the characterization of RKHSs using its associated integral operators and probability inequalities for random variables with values in a Hilbert space.
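
    The sketch below is a simplified online pairwise least-squares update in an RKHS, in the spirit of (but not identical to) the algorithm studied here: at each step the new example is compared against all previously seen examples, a stochastic-gradient step is taken on the pairwise squared loss, and the function is kept as a kernel expansion. The kernel, target function, step-size schedule, and evaluation are illustrative assumptions.

    ```python
    # Hedged illustration: online pairwise least-squares learning with a Gaussian kernel.
    import numpy as np

    rng = np.random.default_rng(6)

    def K(a, b, s=0.5):
        """Gaussian kernel matrix between 1-D sample arrays a and b."""
        return np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * s**2))

    g = lambda x: np.sin(3.0 * x)          # target; pairwise losses only see differences g(x) - g(x')
    T = 400
    X = rng.uniform(-1.0, 1.0, T)
    Y = g(X) + 0.05 * rng.standard_normal(T)

    alpha = np.zeros(T)                     # coefficients of f_t = sum_i alpha_i K(X_i, .)
    for t in range(1, T):
        f_vals = alpha[:t] @ K(X[:t], X[:t + 1])       # current f at X[0..t]
        e = f_vals[t] - f_vals[:t] - (Y[t] - Y[:t])    # pairwise residuals against all past examples
        gamma = 0.5 / np.sqrt(t)                        # decaying step size (illustrative choice)
        alpha[t]  -= gamma * e.mean()
        alpha[:t] += gamma * e / t

    # pairwise learning identifies f only up to an additive constant, so compare centred values
    Xtest = np.linspace(-1.0, 1.0, 200)
    f_test = alpha @ K(X, Xtest)
    err = np.std((f_test - f_test.mean()) - (g(Xtest) - g(Xtest).mean()))
    print("centred RMS error:", err)
    ```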

  7. Gauging Spatial Symmetries and the Classification of Topological Crystalline Phases

    NASA Astrophysics Data System (ADS)

    Thorngren, Ryan; Else, Dominic V.

    2018-01-01

    We put the theory of interacting topological crystalline phases on a systematic footing. These are topological phases protected by space-group symmetries. Our central tool is an elucidation of what it means to "gauge" such symmetries. We introduce the notion of a crystalline topological liquid and argue that most (and perhaps all) phases of interest are likely to satisfy this criterion. We prove a crystalline equivalence principle, which states that in Euclidean space, crystalline topological liquids with symmetry group G are in one-to-one correspondence with topological phases protected by the same symmetry G, but acting internally, where if an element of G is orientation reversing, it is realized as an antiunitary symmetry in the internal symmetry group. As an example, we explicitly compute, using group cohomology, a partial classification of bosonic symmetry-protected topological phases protected by crystalline symmetries in (3+1) dimensions for 227 of the 230 space groups. For the 65 space groups not containing orientation-reversing elements (Sohncke groups), there are no cobordism invariants that may contribute phases beyond group cohomology, so we conjecture that our classification is complete.

  8. Artificial neural network for the determination of Hubble Space Telescope aberration from stellar images

    NASA Technical Reports Server (NTRS)

    Barrett, Todd K.; Sandler, David G.

    1993-01-01

    An artificial-neural-network method, first developed for the measurement and control of atmospheric phase distortion, using stellar images, was used to estimate the optical aberration of the Hubble Space Telescope. A total of 26 estimates of distortion was obtained from 23 stellar images acquired at several secondary-mirror axial positions. The results were expressed as coefficients of eight orthogonal Zernike polynomials: focus through third-order spherical. For all modes other than spherical the measured aberration was small. The average spherical aberration of the estimates was -0.299 micron rms, which is in good agreement with predictions obtained when iterative phase-retrieval algorithms were used.

  9. The Nosé–Hoover looped chain thermostat for low temperature thawed Gaussian wave-packet dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coughtrie, David J.; Tew, David P.

    2014-05-21

    We have used a generalised coherent state resolution of the identity to map the quantum canonical statistical average for a general system onto a phase-space average over the centre and width parameters of a thawed Gaussian wave packet. We also propose an artificial phase-space density that has the same behaviour as the canonical phase-space density in the low-temperature limit, and have constructed a novel Nosé–Hoover looped chain thermostat that generates this density in conjunction with variational thawed Gaussian wave-packet dynamics. This forms a new platform for evaluating statistical properties of quantum condensed-phase systems that has an explicit connection to the time-dependent Schrödinger equation, whilst retaining many of the appealing features of path-integral molecular dynamics.

  10. Periodic orbits of the integrable swinging Atwood's machine

    NASA Astrophysics Data System (ADS)

    Nunes, Ana; Casasayas, Josefina; Tufillaro, Nicholas

    1995-02-01

    We identify all the periodic orbits of the integrable swinging Atwood's machine by calculating the rotation number of each orbit on its invariant tori in phase space, and also providing explicit formulas for the initial conditions needed to generate each orbit.

  11. Capture zones for simple aquifers

    USGS Publications Warehouse

    McElwee, Carl D.

    1991-01-01

    Capture zones showing the area influenced by a well within a certain time are useful for both aquifer protection and cleanup. If hydrodynamic dispersion is neglected, a deterministic curve defines the capture zone. Analytical expressions for the capture zones can be derived for simple aquifers. However, the capture zone equations are transcendental and cannot be explicitly solved for the coordinates of the capture zone boundary. Fortunately, an iterative scheme allows the solution to proceed quickly and efficiently even on a modest personal computer. Three forms of the analytical solution must be used in an iterative scheme to cover the entire region of interest, after the extreme values of the x coordinate are determined by an iterative solution. The resulting solution is a discrete one, and usually 100-1000 intervals along the x-axis are necessary for a smooth definition of the capture zone. The presented program is written in FORTRAN and has been used in a variety of computing environments. No graphics capability is included with the program; it is assumed the user has access to a commercial package. The superposition of capture zones for multiple wells is expected to be satisfactory if the spacing is not too close. Because this program deals with simple aquifers, the results rarely will be the final word in a real application.
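
    Because the boundary coordinates satisfy transcendental equations, each point of the curve must be located by root finding. The published program uses three forms of the time-dependent analytical solution; as a simpler stand-in, the sketch below assumes a single fully penetrating well in uniform regional flow and solves the familiar steady-state envelope relation x = -y / tan(2πBUy/Q) for y at each x by bracketing and Brent's method.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def capture_zone_half_width(x, Q, U, B):
        """Half-width y(x) of the steady-state capture-zone envelope of one well.

        Q: pumping rate, U: regional specific discharge, B: aquifer thickness.
        The boundary satisfies x = -y / tan(2*pi*B*U*y / Q), so y must be found
        iteratively.  The zone extends toward +x; x must exceed the
        stagnation-point coordinate -Q / (2*pi*B*U).
        """
        y_inf = Q / (2.0 * B * U)                        # asymptotic half-width far upstream
        g = lambda y: x + y / np.tan(2.0 * np.pi * B * U * y / Q)
        if x >= 0.0:                                     # upstream limb of the envelope
            lo, hi = 0.25 * y_inf * (1.0 + 1e-9), y_inf * (1.0 - 1e-9)
        else:                                            # limb between stagnation point and well
            lo, hi = 1e-9 * y_inf, 0.25 * y_inf * (1.0 - 1e-9)
        return brentq(g, lo, hi)

    # Example: trace the boundary at 200 stations along the x axis.
    Q, U, B = 500.0, 0.1, 10.0
    xs = np.linspace(-0.99 * Q / (2.0 * np.pi * B * U), 2000.0, 200)
    ys = [capture_zone_half_width(xi, Q, U, B) for xi in xs]
    ```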

  12. Correlation dimension and phase space contraction via extreme value theory

    NASA Astrophysics Data System (ADS)

    Faranda, Davide; Vaienti, Sandro

    2018-04-01

    We show how to obtain theoretical and numerical estimates of correlation dimension and phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which, in dimensions 1 and 2, is closely related to the positive Lyapunov exponent and in higher dimensions is related to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain as they imply just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Hénon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
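
    A numerical estimate in this spirit needs only a trajectory, a distance observable, and a univariate tail fit. The sketch below is an illustration rather than the authors' code: it iterates the Hénon map, forms the observable g(x) = -log ||x - ζ|| about a reference point ζ on the attractor, and estimates the local dimension as the reciprocal of the mean excess above a high threshold (the scale of an exponential tail); averaging over many reference points approximates the correlation dimension.

    ```python
    import numpy as np

    def henon_orbit(n, a=1.4, b=0.3, x0=(0.1, 0.1), discard=1000):
        """Trajectory of the Henon map, after discarding a transient."""
        x, y = x0
        pts = np.empty((n + discard, 2))
        for k in range(n + discard):
            x, y = 1.0 - a * x * x + y, b * x
            pts[k] = (x, y)
        return pts[discard:]

    def local_dimension_evt(orbit, zeta, quantile=0.99):
        """EVT estimate of the local dimension at reference point zeta.

        Observable: g(x) = -log ||x - zeta||.  Exceedances of g above a high
        threshold are asymptotically exponential with scale 1/D, so D is
        estimated as the reciprocal of the mean excess.
        """
        d = np.linalg.norm(orbit - zeta, axis=1)
        g = -np.log(d[d > 0])                 # drop exact recurrences (zero distance)
        u = np.quantile(g, quantile)
        excess = g[g > u] - u
        return 1.0 / excess.mean()

    orbit = henon_orbit(200_000)
    zeta = orbit[12_345]                      # a point on the attractor
    print(local_dimension_evt(orbit, zeta))
    ```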

  13. A geometric viewpoint on generalized hydrodynamics

    NASA Astrophysics Data System (ADS)

    Doyon, Benjamin; Spohn, Herbert; Yoshimura, Takato

    2018-01-01

    Generalized hydrodynamics (GHD) is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective ("dressed") velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.

  14. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, and surface position) are telemetered to a ground computer, which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.

  15. Interior tomography from differential phase contrast data via Hilbert transform based on spline functions

    NASA Astrophysics Data System (ADS)

    Yang, Qingsong; Cong, Wenxiang; Wang, Ge

    2016-10-01

    X-ray phase contrast imaging is an important mode due to its sensitivity to subtle features of soft biological tissues. Grating-based differential phase contrast (DPC) imaging is one of the most promising phase imaging techniques because it works with a normal x-ray tube of a large focal spot at a high flux rate. However, a main obstacle to this paradigm shift is the fabrication of large-area gratings of a small period and a high aspect ratio. Imaging large objects with a size-limited grating results in data truncation, which is a new type of interior problem. While the interior problem was solved for conventional x-ray CT through analytic extension, compressed sensing, and iterative reconstruction, the difficulty for interior reconstruction from DPC data lies in the fact that the implementation of the system matrix requires the differential operation on the detector array, which is often inaccurate and unstable in the case of noisy data. Here, we propose an iterative method based on spline functions. The differential data are first back-projected to the image space. Then, a system matrix is calculated whose components are the Hilbert transforms of the spline bases. The system matrix takes the whole image as an input and outputs the back-projected interior data. Prior information normally assumed for compressed sensing is enforced to iteratively solve this inverse problem. Our results demonstrate that the proposed algorithm can successfully reconstruct an interior region of interest (ROI) from the differential phase data through the ROI.

  16. Robust control design with real parameter uncertainty using absolute stability theory. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    How, Jonathan P.; Hall, Steven R.

    1993-01-01

    The purpose of this thesis is to investigate an extension of mu theory for robust control design by considering systems with linear and nonlinear real parameter uncertainties. In the process, explicit connections are made between mixed mu and absolute stability theory. In particular, it is shown that the upper bounds for mixed mu are a generalization of results from absolute stability theory. Both state space and frequency domain criteria are developed for several nonlinearities and stability multipliers using the wealth of literature on absolute stability theory and the concepts of supply rates and storage functions. The state space conditions are expressed in terms of Riccati equations and parameter-dependent Lyapunov functions. For controller synthesis, these stability conditions are used to form an overbound of the H2 performance objective. A geometric interpretation of the equivalent frequency domain criteria in terms of off-axis circles clarifies the important role of the multiplier and shows that both the magnitude and phase of the uncertainty are considered. A numerical algorithm is developed to design robust controllers that minimize the bound on an H2 cost functional and satisfy an analysis test based on the Popov stability multiplier. The controller and multiplier coefficients are optimized simultaneously, which avoids the iteration and curve-fitting procedures required by the D-K procedure of mu synthesis. Several benchmark problems and experiments on the Middeck Active Control Experiment at M.I.T. demonstrate that these controllers achieve good robust performance and guaranteed stability bounds.

  17. Exactly energy conserving semi-implicit particle in cell formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that unlike any of its semi-implicit predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of the explicit PIC, only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that unlike any of its predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementation of PIC.

  18. Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels

    NASA Astrophysics Data System (ADS)

    Li, Husheng; Betz, Sharon M.; Poor, H. Vincent

    2007-05-01

    This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.

  19. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    NASA Astrophysics Data System (ADS)

    Lee, Woochan

    Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution slow to converge. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct the non-positive-definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures that an iterative solution converges in a small number of iterations. We then add back the non-positive-definite component to synthesize the original solution with negligible cost.
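
    The second contribution can be illustrated schematically. Assuming a leapfrog (central-difference) update with mass matrix M and stiffness matrix S, modes of the generalized eigenproblem S v = λ M v with λ > 4/dt² are the ones that violate the explicit stability limit for a chosen time step dt; deducting their rank-k contribution from S leaves all resolvable modes untouched and removes the stability restriction. The sketch below uses a dense eigensolver for clarity; in practice only the few largest eigenpairs would be computed.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def deflate_unstable_modes(S, M, dt):
        """Remove eigenmodes that violate the explicit stability limit.

        For the update  M (u^{n+1} - 2 u^n + u^{n-1}) / dt^2 + S u^n = f,
        generalized modes with  lam > 4 / dt^2  are unstable.  Zeroing their
        contribution to S (a rank-k deduction) makes the explicit scheme
        stable for the chosen dt, irrespective of the mesh size.
        """
        lam, V = eigh(S, M)                     # M-orthonormal eigenvectors, ascending lam
        unstable = lam > 4.0 / dt ** 2
        MV = M @ V[:, unstable]
        # S = sum_i lam_i (M v_i)(M v_i)^T  =>  subtract the unstable part of the sum
        S_stable = S - MV @ np.diag(lam[unstable]) @ MV.T
        return S_stable, int(unstable.sum())
    ```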

  20. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of the ability to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial frequency aberrations. The present phase-diverse iterative transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero during the recovery process. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
    However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate resembles that of a reference flat.
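
    A highly simplified sketch of the inner loop is given below; the adaptive outer-loop update of the diversity defocus values and the aberration-fitting step are omitted, and the uniform weights, array shapes, and FFT sampling are illustrative assumptions rather than details of the algorithm described above.

    ```python
    import numpy as np

    def focus_diverse_inner_loop(images, pupil_amp, defocus_phases, n_inner=50):
        """Inner loop of a focus-diverse iterative-transform phase retrieval.

        images         : measured PSF intensities, one per defocus setting
        pupil_amp      : known pupil amplitude (aperture mask)
        defocus_phases : known diversity (defocus) phase maps, same shape as pupil_amp
        Each image yields an individual phase estimate via a Gerchberg-Saxton-type
        transform pair; the estimates are combined in a (here uniform) weighted
        average that seeds the next pass.
        """
        phase = np.zeros_like(pupil_amp)                   # initial pupil-phase estimate
        support = pupil_amp > 0
        for _ in range(n_inner):
            estimates = []
            for img, div in zip(images, defocus_phases):
                field = pupil_amp * np.exp(1j * (phase + div))
                focal = np.fft.fft2(field)
                focal = np.sqrt(np.maximum(img, 0.0)) * np.exp(1j * np.angle(focal))
                back = np.fft.ifft2(focal)
                est = np.angle(back) - div                 # remove the known diversity
                estimates.append(np.where(support, est, 0.0))
            phase = np.mean(estimates, axis=0)             # weighted average (uniform weights)
        return phase
    ```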

  1. Discrete-Time Local Value Iteration Adaptive Dynamic Programming: Admissibility and Termination Analysis.

    PubMed

    Wei, Qinglai; Liu, Derong; Lin, Qiao

    In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
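
    The local update can be illustrated on a finite-state caricature of the problem. The sketch below is not the authors' ADP scheme; it assumes the control problem has been discretized to a finite Markov decision process with per-action transition matrices and stage costs, and it updates the value function and greedy control law only on the prescribed subset of states in each iteration.

    ```python
    import numpy as np

    def local_value_iteration(P, R, subsets, gamma=0.95, v0=None):
        """Local value iteration on a finite-MDP approximation of the control problem.

        P[a] : (n, n) transition matrix under action a
        R[a] : (n,) stage cost under action a
        In iteration k, the values and the greedy control law are updated only
        on the subset of states subsets[k], not on the whole state space.
        """
        n_states = P[0].shape[0]
        n_actions = len(P)
        V = np.zeros(n_states) if v0 is None else v0.copy()
        policy = np.zeros(n_states, dtype=int)
        for subset in subsets:                        # one (local) iteration per subset
            Q = np.stack([R[a] + gamma * P[a] @ V for a in range(n_actions)])
            V[subset] = Q[:, subset].min(axis=0)      # update values only on the subset
            policy[subset] = Q[:, subset].argmin(axis=0)
        return V, policy
    ```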

  2. Convergence Time towards Periodic Orbits in Discrete Dynamical Systems

    PubMed Central

    San Martín, Jesús; Porter, Mason A.

    2014-01-01

    We investigate the convergence towards periodic orbits in discrete dynamical systems. We examine the probability that a randomly chosen point converges to a particular neighborhood of a periodic orbit in a fixed number of iterations, and we use linearized equations to examine the evolution near that neighborhood. The underlying idea is that points of a stable periodic orbit are associated with intervals. We state and prove a theorem that details which regions of phase space are mapped into these intervals (once they are known) and how many iterations are required to get there. We also construct algorithms that allow our theoretical results to be implemented successfully in practice. PMID:24736594

  3. Nonassociative differential geometry and gravity with non-geometric fluxes

    NASA Astrophysics Data System (ADS)

    Aschieri, Paolo; Ćirić, Marija Dimitrijević; Szabo, Richard J.

    2018-02-01

    We systematically develop the metric aspects of nonassociative differential geometry tailored to the parabolic phase space model of constant locally non-geometric closed string vacua, and use it to construct preliminary steps towards a nonassociative theory of gravity on spacetime. We obtain explicit expressions for the torsion, curvature, Ricci tensor and Levi-Civita connection in nonassociative Riemannian geometry on phase space, and write down Einstein field equations. We apply this formalism to construct R-flux corrections to the Ricci tensor on spacetime, and comment on the potential implications of these structures in non-geometric string theory and double field theory.

  4. Time-Dependent Parabolic Finite Difference Formulation for Harmonic Sound Propagation in a Two-Dimensional Duct with Flow

    NASA Technical Reports Server (NTRS)

    Kreider, Kevin L.; Baumeister, Kenneth J.

    1996-01-01

    An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.

  5. A Nonlinear Gyrokinetic Vlasov-Maxwell System for High-frequency Simulation in Toroidal Geometry

    NASA Astrophysics Data System (ADS)

    Liu, Pengfei; Zhang, Wenlu; Lin, Jingbo; Li, Ding; Dong, Chao

    2016-10-01

    A nonlinear gyrokinetic Vlasov equation is derived through the Lie-perturbation method applied to the Lagrangian and Hamiltonian systems in extended phase space. The gyrokinetic Maxwell equations are derived in terms of the moments of the gyrocenter phase-space distribution through the push-forward and pull-back representations, where the polarization and magnetization effects of the gyrocenter are retained. The goal of this work is to construct a global nonlinear gyrokinetic Vlasov-Maxwell system for high-frequency simulation in toroidal geometry relevant for ion cyclotron range of frequencies (ICRF) wave heating and lower hybrid wave current drive (LHCD). Supported by the National Special Research Program of China for ITER and the National Natural Science Foundation of China.

  6. Explicit modeling of volatile organic compounds partitioning in the atmospheric aqueous phase

    NASA Astrophysics Data System (ADS)

    Mouchel-Vallon, C.; Bräuer, P.; Camredon, M.; Valorso, R.; Madronich, S.; Herrmann, H.; Aumont, B.

    2012-09-01

    The gas phase oxidation of organic species is a multigenerational process involving a large number of secondary compounds. Most secondary organic species are water-soluble multifunctional oxygenated molecules. The fully explicit chemical mechanism GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to describe the oxidation of organics in the gas phase and their mass transfer to the aqueous phase. The oxidation of three hydrocarbons of atmospheric interest (isoprene, octane and α-pinene) is investigated for various NOx conditions. The simulated oxidative trajectories are examined in a new two dimensional space defined by the mean oxidation state and the solubility. The amount of dissolved organic matter was found to be very low (<2%) under a water content typical of deliquescent aerosols. For cloud water content, 50% (isoprene oxidation) to 70% (octane oxidation) of the carbon atoms are found in the aqueous phase after the removal of the parent hydrocarbons for low NOx conditions. For high NOx conditions, this ratio is only 5% in the isoprene oxidation case, but remains large for α-pinene and octane oxidation cases (40% and 60%, respectively). Although the model does not yet include chemical reactions in the aqueous phase, much of this dissolved organic matter should be processed in cloud drops and modify both oxidation rates and the speciation of organic species.

  7. Explicit modeling of volatile organic compounds partitioning in the atmospheric aqueous phase

    NASA Astrophysics Data System (ADS)

    Mouchel-Vallon, C.; Bräuer, P.; Camredon, M.; Valorso, R.; Madronich, S.; Herrmann, H.; Aumont, B.

    2013-01-01

    The gas phase oxidation of organic species is a multigenerational process involving a large number of secondary compounds. Most secondary organic species are water-soluble multifunctional oxygenated molecules. The fully explicit chemical mechanism GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere) is used to describe the oxidation of organics in the gas phase and their mass transfer to the aqueous phase. The oxidation of three hydrocarbons of atmospheric interest (isoprene, octane and α-pinene) is investigated for various NOx conditions. The simulated oxidative trajectories are examined in a new two dimensional space defined by the mean oxidation state and the solubility. The amount of dissolved organic matter was found to be very low (yield less than 2% on carbon atom basis) under a water content typical of deliquescent aerosols. For cloud water content, 50% (isoprene oxidation) to 70% (octane oxidation) of the carbon atoms are found in the aqueous phase after the removal of the parent hydrocarbons for low NOx conditions. For high NOx conditions, this ratio is only 5% in the isoprene oxidation case, but remains large for α-pinene and octane oxidation cases (40% and 60%, respectively). Although the model does not yet include chemical reactions in the aqueous phase, much of this dissolved organic matter should be processed in cloud drops and modify both oxidation rates and the speciation of organic species.

  8. Moving walls and geometric phases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Facchi, Paolo, E-mail: paolo.facchi@ba.infn.it; INFN, Sezione di Bari, I-70126 Bari; Garnero, Giancarlo, E-mail: giancarlo.garnero@uniba.it

    2016-09-15

    We unveil the existence of a non-trivial Berry phase associated with the dynamics of a quantum particle in a one-dimensional box with moving walls. It is shown that a suitable choice of boundary conditions has to be made in order to preserve unitarity. For these boundary conditions we compute explicitly the geometric phase two-form on the parameter space. The unboundedness of the Hamiltonian describing the system leads to a natural prescription of renormalization for divergent contributions arising from the boundary.

  9. Space Station: The next iteration

    NASA Astrophysics Data System (ADS)

    Foley, Theresa M.

    1995-01-01

    NASA's international space station is nearing the completion stage of its troublesome 10-year design phase. With a revised design and new management team, NASA is tasked to deliver the station on time at a budget acceptable to both Congress and the White House. For the next three years, NASA is using tried-and-tested Russian hardware as the technical centerpiece of the station. The new station configuration consists of eight pressurized modules in which the crew can live and work; a long metal truss to connect the pieces; a robot arm for exterior jobs; a solar power system; and a system for propelling the facility in space.

  10. Fusion energy

    NASA Astrophysics Data System (ADS)

    1990-09-01

    The main purpose of the International Thermonuclear Experimental Reactor (ITER) is to develop an experimental fusion reactor through the united efforts of many technologically advanced countries. The ITER terms of reference, issued jointly by the European Community, Japan, the USSR, and the United States, call for an integrated international design activity and constitute the basis of current activities. Joint work on ITER is carried out under the auspices of the International Atomic Energy Agency (IAEA), according to the terms of a quadripartite agreement reached between the European Community, Japan, the USSR, and the United States. The site for joint technical work sessions is the Max Planck Institute of Plasma Physics, Garching, Federal Republic of Germany. The ITER activities have two phases: a definition phase performed in 1988 and the present design phase (1989 to 1990). During the definition phase, a set of ITER technical characteristics and supporting research and development (R and D) activities were developed and reported. The present conceptual design phase of ITER lasts until the end of 1990. The objectives of this phase are to develop the design of ITER, perform a safety and environmental analysis, develop site requirements, define future R and D needs, and estimate cost, manpower, and schedule for construction and operation. A final report will be submitted at the end of 1990. This paper summarizes progress in the ITER program during the 1989 design phase.

  11. Quantitative phase and amplitude imaging using Differential-Interference Contrast (DIC) microscopy

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; O'Sullivan, Joseph A.

    2009-02-01

    We present an extension of the development of an alternating minimization (AM) method for the computation of a specimen's complex transmittance function (magnitude and phase) from DIC images. The ability to extract both quantitative phase and amplitude information from two rotationally-diverse DIC images (i.e., acquired by rotating the sample) extends previous efforts in computational DIC microscopy that have focused on quantitative phase imaging only. Simulation results show that the inverse problem at hand is sensitive to noise as well as to the choice of the AM algorithm parameters. The AM framework allows constraints and penalties on the magnitude and phase estimates to be incorporated in a principled manner. Towards this end, Green and De Pierro's "log-cosh" regularization penalty is applied to the magnitude of differences of neighboring values of the complex-valued function of the specimen during the AM iterations. The penalty is shown to be convex in the complex space. A procedure to approximate the penalty within the iterations is presented. In addition, a methodology to pre-compute AM parameters that are optimal with respect to the convergence rate of the AM algorithm is also presented. Both extensions of the AM method are investigated with simulations.
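
    The penalty itself is simple to state. The sketch below, with an assumed weight beta, evaluates the log-cosh regularizer on the magnitudes of differences between horizontally and vertically adjacent values of a complex-valued specimen estimate; a numerically stable form of log(cosh) is used to avoid overflow.

    ```python
    import numpy as np

    def log_cosh(t):
        """Numerically stable log(cosh(t))."""
        t = np.abs(t)
        return t + np.log1p(np.exp(-2.0 * t)) - np.log(2.0)

    def neighbor_log_cosh_penalty(f, beta=1.0):
        """Convex roughness penalty on a complex-valued specimen estimate f.

        Sums log(cosh(beta * |f_p - f_q|)) over horizontally and vertically
        adjacent pixel pairs; quadratic for small differences, linear for
        large ones, so sharp transitions are penalized only mildly.
        """
        dx = np.abs(f[:, 1:] - f[:, :-1])
        dy = np.abs(f[1:, :] - f[:-1, :])
        return log_cosh(beta * dx).sum() + log_cosh(beta * dy).sum()
    ```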

  12. Free energies from dynamic weighted histogram analysis using unbiased Markov state model.

    PubMed

    Rosta, Edina; Hummer, Gerhard

    2015-01-13

    The weighted histogram analysis method (WHAM) is widely used to obtain accurate free energies from biased molecular simulations. However, WHAM free energies can exhibit significant errors if some of the biasing windows are not fully equilibrated. To account for the lack of full equilibration, we develop the dynamic histogram analysis method (DHAM). DHAM uses a global Markov state model to obtain the free energy along the reaction coordinate. A maximum likelihood estimate of the Markov transition matrix is constructed by joint unbiasing of the transition counts from multiple umbrella-sampling simulations along discretized reaction coordinates. The free energy profile is the stationary distribution of the resulting Markov matrix. For this matrix, we derive an explicit approximation that does not require the usual iterative solution of WHAM. We apply DHAM to model systems, a chemical reaction in water treated using quantum-mechanics/molecular-mechanics (QM/MM) simulations, and the Na(+) ion passage through the membrane-embedded ion channel GLIC. We find that DHAM gives accurate free energies even in cases where WHAM fails. In addition, DHAM provides kinetic information, which we here use to assess the extent of convergence in each of the simulation windows. DHAM may also prove useful in the construction of Markov state models from biased simulations in phase-space regions with otherwise low population.
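
    The construction of the jointly unbiased transition matrix is specific to DHAM and is not reproduced here. The sketch below shows only the final step described above: given a row-stochastic Markov matrix over the discretized reaction coordinate, the free-energy profile follows from its stationary distribution.

    ```python
    import numpy as np

    def free_energy_from_markov(M, kT=1.0):
        """Free-energy profile from the stationary distribution of a Markov matrix.

        M is the (row-stochastic) transition matrix over the discretized reaction
        coordinate.  The stationary distribution is the left eigenvector of M with
        eigenvalue 1, and F_i = -kT ln pi_i, shifted so the minimum is zero.
        """
        w, v = np.linalg.eig(M.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
        pi = np.abs(pi) / np.abs(pi).sum()         # normalize to a probability vector
        F = -kT * np.log(pi)
        return F - F.min()
    ```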

  13. Berry phases for Landau Hamiltonians on deformed tori

    NASA Astrophysics Data System (ADS)

    Lévay, Péter

    1995-06-01

    Parametrized families of Landau Hamiltonians are introduced, where the parameter space is the Teichmüller space (topologically the complex upper half plane) corresponding to deformations of tori. The underlying SO(2,1) symmetry of the families enables an explicit calculation of the Berry phases picked up by the eigenstates when the torus is slowly deformed. It is also shown that apart from these phases that are local in origin, there are global non-Abelian ones too, related to the hidden discrete symmetry group Γϑ (the theta group, which is a subgroup of the modular group) of the families. The induced Riemannian structure on the parameter space is the usual Poincare metric on the upper half plane of constant negative curvature. Due to the discrete symmetry Γϑ the geodesic motion restricted to the fundamental domain of this group is chaotic.

  14. Development of a Ground Test and Analysis Protocol to Support NASA's NextSTEP Phase 2 Habitation Concepts

    NASA Technical Reports Server (NTRS)

    Beaton, Kara H.; Chappell, Steven P.; Bekdash, Omar S.; Gernhardt, Michael L.

    2018-01-01

    The NASA Next Space Technologies for Exploration Partnerships (NextSTEP) program is a public-private partnership model that seeks commercial development of deep space exploration capabilities to support extensive human spaceflight missions around and beyond cislunar space. NASA first issued the Phase 1 NextSTEP Broad Agency Announcement to U.S. industries in 2014, which called for innovative cislunar habitation concepts that leveraged commercialization plans for low Earth orbit. These habitats will be part of the Deep Space Gateway (DSG), the cislunar space station planned by NASA for construction in the 2020s. In 2016, Phase 2 of the NextSTEP program selected five commercial partners to develop ground prototypes. A team of NASA research engineers and subject matter experts have been tasked with developing the ground test protocol that will serve as the primary means by which these Phase 2 prototype habitats will be evaluated. Since 2008, this core test team has successfully conducted multiple spaceflight analog mission evaluations utilizing a consistent set of operational products, tools, methods, and metrics to enable the iterative development, testing, analysis, and validation of evolving exploration architectures, operations concepts, and vehicle designs. The purpose of implementing a similar evaluation process for the NextSTEP Phase 2 Habitation Concepts is to consistently evaluate the different commercial partner ground prototypes to provide data-driven, actionable recommendations for Phase 3.

  15. Brownian motion with adaptive drift for remaining useful life prediction: Revisited

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2018-01-01

    Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed the Gaussian distribution, and it was iteratively estimated by using Kalman filtering once a new measurement was available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption used in the state space modelling was that in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which caused a contradiction with the predicted drift coefficient evolution driven by an additive Gaussian process noise. In this paper, to alleviate such an underlying assumption, a new state space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains the main reasons why the constructed state space model can result in high remaining useful life prediction accuracies is provided. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
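
    A minimal sketch of the conventional filtering scheme that the paper revisits is given below (it does not incorporate the paper's corrected state-space model). The drift is treated as a scalar hidden state following a random walk, and each observed degradation increment updates its posterior mean and variance; the noise variances q and sigma_b are illustrative assumptions.

    ```python
    import numpy as np

    def adaptive_drift_kalman(x, dt, q=1e-4, sigma_b=0.1, eta0=0.0, p0=1.0):
        """Scalar Kalman filter for the adaptive drift of a Brownian degradation path.

        Hidden state : drift eta_k = eta_{k-1} + w_k,          w_k ~ N(0, q)
        Measurement  : x_k - x_{k-1} = eta_k * dt + B-noise,   var = sigma_b^2 * dt
        Each new measurement updates the posterior mean and variance of the drift,
        which then feed the first-hitting-time (remaining useful life) density.
        """
        eta, p = eta0, p0
        history = []
        for dx in np.diff(np.asarray(x, dtype=float)):
            p_pred = p + q                              # predict: random-walk drift
            innov = dx - eta * dt                       # innovation
            s = dt * dt * p_pred + sigma_b ** 2 * dt    # innovation variance
            k_gain = p_pred * dt / s                    # Kalman gain
            eta = eta + k_gain * innov
            p = (1.0 - k_gain * dt) * p_pred
            history.append((eta, p))
        return history
    ```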

  16. Multi-criteria Integrated Resource Assessment (MIRA)

    EPA Pesticide Factsheets

    MIRA is an approach that facilitates stakeholder engagement for collaborative multi-objective decision making. MIRA is designed to facilitate and support an inclusive, explicit, transparent, iterative learning-based decision process.

  17. An iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring

    NASA Astrophysics Data System (ADS)

    Li, J. Y.; Kitanidis, P. K.

    2013-12-01

    Reservoir forecasting and management are increasingly relying on an integrated reservoir monitoring approach, which involves data assimilation to calibrate the complex process of multi-phase flow and transport in the porous medium. The numbers of unknowns and measurements arising in such joint inversion problems are usually very large. The ensemble Kalman filter and other ensemble-based techniques are popular because they circumvent the computational barriers of computing Jacobian matrices and covariance matrices explicitly and allow nonlinear error propagation. These algorithms are very useful but their performance is not well understood and it is not clear how many realizations are needed for satisfactory results. In this presentation we introduce an iterative ensemble quasi-linear data assimilation approach for integrated reservoir monitoring. It is intended for problems for which the posterior or conditional probability density function is not too different from a Gaussian, despite nonlinearity in the state transition and observation equations. The algorithm generates realizations that have the potential to adequately represent the conditional probability density function (pdf). Theoretical analysis sheds light on the conditions under which this algorithm should work well and explains why some applications require very few realizations while others require many. This algorithm is compared with the classical ensemble Kalman filter (Evensen, 2003) and with Gu and Oliver's (2007) iterative ensemble Kalman filter on a synthetic problem of monitoring a reservoir using wellbore pressure and flux data.
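
    For comparison, the classical (stochastic) ensemble Kalman filter analysis step is sketched below; the iterative quasi-linear scheme of the abstract differs in how it iterates, but shares the feature that cross-covariances are estimated from the ensemble, so no Jacobian of the multiphase-flow simulator and no full covariance matrix is ever formed explicitly.

    ```python
    import numpy as np

    def enkf_analysis(states, pred_obs, obs, obs_cov, rng=np.random.default_rng(0)):
        """Stochastic ensemble Kalman filter analysis step.

        states   : (m, n_state) prior realizations of the parameter/state vector
        pred_obs : (m, n_obs) forward-model predictions for each realization
        obs      : (n_obs,) measured data (e.g. wellbore pressure and flux)
        obs_cov  : (n_obs, n_obs) observation-error covariance
        """
        X = np.asarray(states, dtype=float)
        Y = np.asarray(pred_obs, dtype=float)
        m = X.shape[0]
        Xa, Ya = X - X.mean(axis=0), Y - Y.mean(axis=0)
        Pyy = Ya.T @ Ya / (m - 1) + obs_cov             # predicted-obs covariance
        Pxy = Xa.T @ Ya / (m - 1)                       # state/obs cross-covariance
        K = np.linalg.solve(Pyy, Pxy.T).T               # Kalman gain (n_state, n_obs)
        perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=m)
        return X + (perturbed - Y) @ K.T                # updated realizations
    ```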

  18. Imaging the wave functions of adsorbed molecules

    PubMed Central

    Lüftner, Daniel; Ules, Thomas; Reinisch, Eva Maria; Koller, Georg; Soubatch, Serguei; Tautz, F. Stefan; Ramsey, Michael G.; Puschnig, Peter

    2014-01-01

    The basis for a quantum-mechanical description of matter is electron wave functions. For atoms and molecules, their spatial distributions and phases are known as orbitals. Although orbitals are very powerful concepts, experimentally only the electron densities and energy levels are directly observable. Regardless of whether orbitals are observed in real space with scanning probe experiments, or in reciprocal space by photoemission, the phase information of the orbital is lost. Here, we show that the experimental momentum maps of angle-resolved photoemission from molecular orbitals can be transformed to real-space orbitals via an iterative procedure which also retrieves the lost phase information. This is demonstrated with images obtained of a number of orbitals of the molecules pentacene (C22H14) and perylene-3,4,9,10-tetracarboxylic dianhydride (C24H8O6), adsorbed on silver, which are in excellent agreement with ab initio calculations. The procedure requires no a priori knowledge of the orbitals and is shown to be simple and robust. PMID:24344291

  19. Developing a taxonomy for mission architecture definition

    NASA Technical Reports Server (NTRS)

    Neubek, Deborah J.

    1990-01-01

    The Lunar and Mars Exploration Program Office (LMEPO) was tasked to define candidate architectures for the Space Exploration Initiative to submit to NASA senior management and an externally constituted Outreach Synthesis Group. A systematic, structured process for developing, characterizing, and describing the alternate mission architectures, and applying this process to future studies was developed. The work was done in two phases: (1) national needs were identified and categorized into objectives achievable by the Space Exploration Initiative; and (2) a program development process was created which both hierarchically and iteratively describes the program planning process.

  20. Finite Nilpotent BRST Transformations in Hamiltonian Formulation

    NASA Astrophysics Data System (ADS)

    Rai, Sumit Kumar; Mandal, Bhabani Prasad

    2013-10-01

    We consider the finite field dependent BRST (FFBRST) transformations in the context of Hamiltonian formulation using the Batalin-Fradkin-Vilkovisky method. The non-trivial Jacobian of such transformations is calculated in extended phase space. The contribution from the Jacobian can be written as the exponential of some local functional of fields which can be added to the effective Hamiltonian of the system. Thus, FFBRST in Hamiltonian formulation with extended phase space also connects different effective theories. We establish this result with the help of two explicit examples. We also show that the FFBRST transformation is similar to the canonical transformations in the sector of the Lagrange multiplier and its corresponding momenta.

  1. Retrieval of the atomic displacements in the crystal from the coherent X-ray diffraction pattern.

    PubMed

    Minkevich, A A; Köhl, M; Escoubas, S; Thomas, O; Baumbach, T

    2014-07-01

    The retrieval of spatially resolved atomic displacements is investigated via the phases of the direct(real)-space image reconstructed from the strained crystal's coherent X-ray diffraction pattern. It is demonstrated that limiting the spatial variation of the first- and second-order spatial displacement derivatives improves convergence of the iterative phase-retrieval algorithm for displacement reconstruction to the true solution. This approach is exploited to retrieve the displacement in a periodic array of silicon lines isolated by silicon dioxide filled trenches.

  2. Fast non-interferometric iterative phase retrieval for holographic data storage.

    PubMed

    Lin, Xiao; Huang, Yong; Shimura, Tsutomu; Fujimura, Ryushi; Tanaka, Yoshito; Endo, Masao; Nishimoto, Hajimu; Liu, Jinpeng; Li, Yang; Liu, Ying; Tan, Xiaodi

    2017-12-11

    Fast non-interferometric phase retrieval is a very important technique for phase-encoded holographic data storage and other phase-based applications due to its advantages of easy implementation, a simple system setup, and robust noise tolerance. Here we present an iterative non-interferometric phase retrieval method for 4-level phase-encoded holographic data storage based on an iterative Fourier transform algorithm and a known portion of the encoded data, which increases the storage code rate to twice that of an amplitude-based method. Only a single image at the Fourier plane of the beam is captured for the iterative reconstruction. Since the beam intensity at the Fourier plane of the reconstructed beam is more concentrated than the reconstructed beam itself, the required diffraction efficiency of the recording medium is reduced, which will improve the dynamic range of the recording medium significantly. The phase retrieval requires only 10 iterations to achieve a phase data error rate of less than 5%, which is successfully demonstrated by recording and reconstructing a test image experimentally. We believe our method will further advance the holographic data storage technique in the era of big data.
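
    A minimal sketch of an iterative Fourier-transform loop of this kind is given below. It assumes a uniform-amplitude phase page, a single measured Fourier-plane intensity, and a known embedded portion of the data; quantizing to the 4-level phase alphabet inside every iteration is an illustrative choice, not necessarily the authors' exact constraint schedule.

    ```python
    import numpy as np

    def retrieve_phase_page(fourier_intensity, known_mask, known_phase,
                            levels=4, n_iter=10):
        """Iterative Fourier-transform phase retrieval for a phase-encoded data page.

        fourier_intensity : captured intensity image at the Fourier plane
        known_mask        : boolean array marking the embedded (known) pixels
        known_phase       : phase values on those known pixels (full-size array)
        """
        phase_levels = 2.0 * np.pi * np.arange(levels) / levels
        meas_amp = np.sqrt(np.maximum(fourier_intensity, 0.0))
        phase = np.zeros_like(meas_amp)
        for _ in range(n_iter):
            obj = np.exp(1j * phase)                           # uniform-amplitude page
            F = np.fft.fft2(obj)
            F = meas_amp * np.exp(1j * np.angle(F))            # Fourier-magnitude constraint
            phase = np.angle(np.fft.ifft2(F))
            # snap each pixel to the nearest of the encoding levels
            diffs = np.angle(np.exp(1j * (phase[..., None] - phase_levels)))
            phase = phase_levels[np.abs(diffs).argmin(axis=-1)]
            phase[known_mask] = known_phase[known_mask]        # enforce the embedded data
        return phase
    ```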

  3. Activation Product Inverse Calculations with NDI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, Mark Girard

    NDI based forward calculations of activation product concentrations can be systematically used to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production and approximately, in the least-squares sense, for the full production-depletion chain with explicit activation product production and nosub production-depletion chain. The algorithm is suitable for automation.
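
    A schematic version of such an iterative inversion is sketched below. It is not the NDI algorithm itself: it assumes each measured activation product is attributed to one parent element and that a forward routine is available, and it simply rescales each inferred concentration by the ratio of measured to predicted product until the iteration settles.

    ```python
    import numpy as np

    def infer_concentrations(measured, forward, c0, tol=1e-8, max_iter=100):
        """Iteratively infer parent-element concentrations from activation products.

        forward(c) runs a forward activation calculation and returns the predicted
        activation-product concentrations for element concentrations c.  The
        multiplicative update rescales each parent by its measured/predicted ratio,
        a simple fixed-point scheme for a chain that is roughly linear in the parents.
        """
        c = np.asarray(c0, dtype=float)
        for _ in range(max_iter):
            predicted = forward(c)
            c_new = c * measured / np.maximum(predicted, 1e-300)
            if np.max(np.abs(c_new - c) / np.maximum(c, 1e-300)) < tol:
                return c_new
            c = c_new
        return c
    ```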

  4. Kinetic field theory: exact free evolution of Gaussian phase-space correlations

    NASA Astrophysics Data System (ADS)

    Fabis, Felix; Kozlikin, Elena; Lilow, Robert; Bartelmann, Matthias

    2018-04-01

    In recent work we developed a description of cosmic large-scale structure formation in terms of non-equilibrium ensembles of classical particles, with time evolution obtained in the framework of a statistical field theory. In these works, the initial correlations between particles sampled from random Gaussian density and velocity fields have so far been treated perturbatively or restricted to pure momentum correlations. Here we treat the correlations between all phase-space coordinates exactly by adopting a diagrammatic language for the different forms of correlations, directly inspired by the Mayer cluster expansion. We will demonstrate that explicit expressions for phase-space density cumulants of arbitrary n-point order, which fully capture the non-linear coupling of free streaming kinematics due to initial correlations, can be obtained from a simple set of Feynman rules. These cumulants will be the foundation for future investigations of perturbation theory in particle interactions.

  5. A numerical model of two-phase flow at the micro-scale using the volume-of-fluid method

    NASA Astrophysics Data System (ADS)

    Shams, Mosayeb; Raeini, Ali Q.; Blunt, Martin J.; Bijeljic, Branko

    2018-03-01

    This study presents a simple and robust numerical scheme to model two-phase flow in porous media where capillary forces dominate over viscous effects. The volume-of-fluid method is employed to capture the fluid-fluid interface whose dynamics is explicitly described based on a finite volume discretization of the Navier-Stokes equations. Interfacial forces are calculated directly on reconstructed interface elements such that the total curvature is preserved. The computed interfacial forces are explicitly added to the Navier-Stokes equations using a sharp formulation which effectively eliminates spurious currents. The stability and accuracy of the implemented scheme is validated on several two- and three-dimensional test cases, which indicate the capability of the method to model two-phase flow processes at the micro-scale. In particular we show how the co-current flow of two viscous fluids leads to greatly enhanced flow conductance for the wetting phase in corners of the pore space, compared to a case where the non-wetting phase is an inviscid gas.

  6. Reconstruction from limited single-particle diffraction data via simultaneous determination of state, orientation, intensity, and phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donatelli, Jeffrey J.; Sethian, James A.; Zwart, Peter H.

    Free-electron lasers now have the ability to collect X-ray diffraction patterns from individual molecules; however, each sample is delivered at unknown orientation and may be in one of several conformational states, each with a different molecular structure. Hit rates are often low, typically around 0.1%, limiting the number of useful images that can be collected. Determining accurate structural information requires classifying and orienting each image, accurately assembling them into a 3D diffraction intensity function, and determining missing phase information. Additionally, single particles typically scatter very few photons, leading to high image noise levels. We develop a multitiered iterative phasing algorithm to reconstruct structural information from single-particle diffraction data by simultaneously determining the states, orientations, intensities, phases, and underlying structure in a single iterative procedure. We leverage real-space constraints on the structure to help guide optimization and reconstruct underlying structure from very few images with excellent global convergence properties. We show that this approach can determine structural resolution beyond what is suggested by standard Shannon sampling arguments for ideal images and is also robust to noise.

  7. Reconstruction from limited single-particle diffraction data via simultaneous determination of state, orientation, intensity, and phase

    DOE PAGES

    Donatelli, Jeffrey J.; Sethian, James A.; Zwart, Peter H.

    2017-06-26

    Free-electron lasers now have the ability to collect X-ray diffraction patterns from individual molecules; however, each sample is delivered at unknown orientation and may be in one of several conformational states, each with a different molecular structure. Hit rates are often low, typically around 0.1%, limiting the number of useful images that can be collected. Determining accurate structural information requires classifying and orienting each image, accurately assembling them into a 3D diffraction intensity function, and determining missing phase information. Additionally, single particles typically scatter very few photons, leading to high image noise levels. We develop a multitiered iterative phasing algorithm to reconstruct structural information from single-particle diffraction data by simultaneously determining the states, orientations, intensities, phases, and underlying structure in a single iterative procedure. We leverage real-space constraints on the structure to help guide optimization and reconstruct underlying structure from very few images with excellent global convergence properties. We show that this approach can determine structural resolution beyond what is suggested by standard Shannon sampling arguments for ideal images and is also robust to noise.

  8. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, the extra work required by iterative schemes can be designed to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can be vectorized and parallelized efficiently.
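
    The role GMRES plays inside one implicit time step can be sketched with a matrix-free Jacobian-vector product, as below; the residual function, the finite-difference epsilon, and the single-correction structure are illustrative assumptions rather than the solver described above.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def advance_one_step(residual, u_n, n_krylov=20, eps=1e-7):
        """Advance an implicit unsteady solver one time step with matrix-free GMRES.

        residual(u) is the nonlinear discrete residual (flattened vector) whose
        root defines u^{n+1}.  The Jacobian-vector product is approximated by
        finite differences, so the Jacobian is never formed; GMRES searches for
        the update in a Krylov subspace of small dimension (here 20 vectors).
        """
        u = np.asarray(u_n, dtype=float).copy()
        r = residual(u)

        def jv(v):                                   # finite-difference Jacobian action
            return (residual(u + eps * v) - r) / eps

        A = LinearOperator((u.size, u.size), matvec=jv)
        du, info = gmres(A, -r, restart=n_krylov, maxiter=1)
        return u + du, info
    ```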

  9. Diffeomorphisms as symplectomorphisms in history phase space: Bosonic string model

    NASA Astrophysics Data System (ADS)

    Kouletsis, I.; Kuchař, K. V.

    2002-06-01

    The structure of the history phase space G of a covariant field system and its history group (in the sense of Isham and Linden) is analyzed on an example of a bosonic string. The history space G includes the time map T from the spacetime manifold (the two-sheet) Y to a one-dimensional time manifold T as one of its configuration variables. A canonical history action is posited on G such that its restriction to the configuration history space yields the familiar Polyakov action. The standard Dirac-ADM action is shown to be identical with the canonical history action, the only difference being that the underlying action is expressed in two different coordinate charts on G. The canonical history action encompasses all individual Dirac-ADM actions corresponding to different choices T of foliating Y. The history Poisson brackets of spacetime fields on G induce the ordinary Poisson brackets of spatial fields in the instantaneous phase space G0 of the Dirac-ADM formalism. The canonical history action is manifestly invariant both under spacetime diffeomorphisms Diff Y and temporal diffeomorphisms Diff T. Both of these diffeomorphisms are explicitly represented by symplectomorphisms on the history phase space G. The resulting classical history phase space formalism is offered as a starting point for projection operator quantization and consistent histories interpretation of the bosonic string model.

  10. Pivot methods for global optimization

    NASA Astrophysics Data System (ADS)

    Stanton, Aaron Fletcher

    A new algorithm is presented for the location of the global minimum of a multiple minima problem. It begins with a series of randomly placed probes in phase space, and then uses an iterative redistribution of the worst probes into better regions of phase space until a chosen convergence criterion is fulfilled. The method quickly converges, does not require derivatives, and is resistant to becoming trapped in local minima. Comparison of this algorithm with others using a standard test suite demonstrates that the number of function calls has been decreased conservatively by a factor of about three with the same degree of accuracy. Two major variations of the method are presented, differing primarily in the method of choosing the probes that act as the basis for the new probes. The first variation, termed the lowest energy pivot method, ranks all probes by their energy and keeps the best probes. The probes being discarded select from those being kept as the basis for the new cycle. In the second variation, the nearest neighbor pivot method, all probes are paired with their nearest neighbor. The member of each pair with the higher energy is relocated in the vicinity of its neighbor. Both methods are tested against a standard test suite of functions to determine their relative efficiency, and the nearest neighbor pivot method is found to be the more efficient. A series of Lennard-Jones clusters is optimized with the nearest neighbor method, and a scaling law is found for CPU time versus the number of particles in the system. The two methods are then compared more explicitly, and finally a study in the use of the pivot method for solving the Schroedinger equation is presented. The nearest neighbor method is found to be able to solve the ground state of the quantum harmonic oscillator from a pure random initialization of the wavefunction.
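
    The following toy sketch is an assumption-laden reading of the nearest neighbor pivot variant, not the dissertation's code: each probe is paired with its nearest neighbor, the higher-energy member of each pair is relocated near the lower-energy one, and the relocation scale shrinks over cycles. The test function and all parameter values are illustrative.

      import numpy as np

      def nearest_neighbor_pivot(f, bounds, n_probes=50, n_cycles=200, shrink=0.95, seed=0):
          """Toy nearest-neighbor pivot search: relocate the worse probe of each
          nearest-neighbor pair into the vicinity of the better one."""
          rng = np.random.default_rng(seed)
          lo, hi = bounds
          probes = rng.uniform(lo, hi, size=(n_probes, len(lo)))
          scale = (np.asarray(hi) - np.asarray(lo)) * 0.1
          for _ in range(n_cycles):
              energies = np.array([f(p) for p in probes])
              # Pair each probe with its nearest neighbor (excluding itself).
              d = np.linalg.norm(probes[:, None, :] - probes[None, :, :], axis=-1)
              np.fill_diagonal(d, np.inf)
              nn = d.argmin(axis=1)
              for i, j in enumerate(nn):
                  if energies[i] > energies[j]:      # relocate the worse member of the pair
                      probes[i] = probes[j] + rng.normal(0.0, scale)
              scale *= shrink                        # gradually focus the search
          best = min(range(n_probes), key=lambda i: f(probes[i]))
          return probes[best], f(probes[best])

      # Example: Rastrigin-like multiple-minima test function in 2D.
      f = lambda x: np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
      x_best, f_best = nearest_neighbor_pivot(f, (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))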

  11. QCD axion dark matter from long-lived domain walls during matter domination

    NASA Astrophysics Data System (ADS)

    Harigaya, Keisuke; Kawasaki, Masahiro

    2018-07-01

    The domain wall problem of the Peccei-Quinn mechanism can be solved if the Peccei-Quinn symmetry is explicitly broken by a small amount. Domain walls decay into axions, which may account for dark matter of the universe. This scheme is however strongly constrained by overproduction of axions unless the phase of the explicit breaking term is tuned. We investigate the case where the universe is matter-dominated around the temperature of the MeV scale and domain walls decay during this matter dominated epoch. We show how the viable parameter space is expanded.

  12. Explicitly computing geodetic coordinates from Cartesian coordinates

    NASA Astrophysics Data System (ADS)

    Zeng, Huaien

    2013-04-01

    This paper presents a new form of quartic equation based on Lagrange's extremum law and a Groebner basis under the constraint that the geodetic height is the shortest distance between a given point and the reference ellipsoid. A very explicit and concise formula for the quartic equation by Ferrari's line is found, which avoids the need for a good starting guess for iterative methods. A new explicit algorithm is then proposed to compute geodetic coordinates from Cartesian coordinates. The convergence region of the algorithm is investigated and the corresponding correct solution is given. Lastly, the algorithm is validated with numerical experiments.
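
    For contrast with the explicit algorithm, the sketch below shows the conventional fixed-point iteration for geodetic latitude and height that does require a starting guess. It is a generic textbook procedure with WGS-84 parameters assumed; the point coordinates are illustrative and this is not the paper's quartic-based method.

      import numpy as np

      def cartesian_to_geodetic_iterative(x, y, z, a=6378137.0, f=1/298.257223563, tol=1e-12):
          """Conventional fixed-point iteration for geodetic latitude/height;
          the explicit quartic-based algorithm avoids this starting-guess loop."""
          e2 = f * (2.0 - f)                      # first eccentricity squared
          p = np.hypot(x, y)
          lon = np.arctan2(y, x)
          lat = np.arctan2(z, p * (1.0 - e2))     # initial guess
          for _ in range(100):
              N = a / np.sqrt(1.0 - e2 * np.sin(lat)**2)   # prime vertical radius
              h = p / np.cos(lat) - N
              lat_new = np.arctan2(z, p * (1.0 - e2 * N / (N + h)))
              if abs(lat_new - lat) < tol:
                  lat = lat_new
                  break
              lat = lat_new
          return lat, lon, h

      lat, lon, h = cartesian_to_geodetic_iterative(4_200_000.0, 170_000.0, 4_780_000.0)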

  13. Implicit methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Yoon, S.; Kwak, D.

    1990-01-01

    Numerical solutions of the Navier-Stokes equations using explicit schemes can be obtained at the expense of efficiency. Conventional implicit methods which often achieve fast convergence rates suffer high cost per iteration. A new implicit scheme based on lower-upper factorization and symmetric Gauss-Seidel relaxation offers very low cost per iteration as well as fast convergence. High efficiency is achieved by accomplishing the complete vectorizability of the algorithm on oblique planes of sweep in three dimensions.
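
    A minimal dense-matrix sketch of the symmetric Gauss-Seidel relaxation used in such lower-upper schemes is given below: one forward sweep followed by one backward sweep per iteration. It is a generic illustration on a model diagonally dominant system, not the LU-SGS flow solver itself.

      import numpy as np

      def symmetric_gauss_seidel(A, b, x0=None, sweeps=50):
          """Symmetric Gauss-Seidel relaxation for A x = b: forward then backward sweep."""
          n = len(b)
          x = np.zeros(n) if x0 is None else x0.copy()
          for _ in range(sweeps):
              for i in range(n):                       # forward sweep
                  x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
              for i in reversed(range(n)):             # backward sweep
                  x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
          return x

      # Diagonally dominant test system.
      A = np.diag(4.0 * np.ones(10)) + np.diag(-np.ones(9), 1) + np.diag(-np.ones(9), -1)
      b = np.ones(10)
      x = symmetric_gauss_seidel(A, b)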

  14. Implementation of a Curriculum-Integrated Computer Game for Introducing Scientific Argumentation

    NASA Astrophysics Data System (ADS)

    Wallon, Robert C.; Jasti, Chandana; Lauren, Hillary Z. G.; Hug, Barbara

    2017-11-01

    Argumentation has been emphasized in recent US science education reform efforts (NGSS Lead States 2013; NRC 2012), and while existing studies have investigated approaches to introducing and supporting argumentation (e.g., McNeill and Krajcik in Journal of Research in Science Teaching, 45(1), 53-78, 2008; Kang et al. in Science Education, 98(4), 674-704, 2014), few studies have investigated how game-based approaches may be used to introduce argumentation to students. In this paper, we report findings from a design-based study of a teacher's use of a computer game intended to introduce the claim, evidence, reasoning (CER) framework (McNeill and Krajcik 2012) for scientific argumentation. We studied the implementation of the game over two iterations of development in a high school biology teacher's classes. The results of this study include aspects of enactment of the activities and student argument scores. We found the teacher used the game in aspects of explicit instruction of argumentation during both iterations, although the ways in which the game was used differed. Also, students' scores in the second iteration were significantly higher than the first iteration. These findings support the notion that students can learn argumentation through a game, especially when used in conjunction with explicit instruction and support in student materials. These findings also highlight the importance of analyzing classroom implementation in studies of game-based learning.

  15. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
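
    The toy sketch below shows a plain Fisher scoring update for an assumed Poisson measurement model y ~ Poisson(A*lam); the Jacobi and Gauss-Seidel variants discussed in the paper replace the full information-matrix solve with diagonal or sequential component-wise updates. The model, the positivity clamp, and all values are illustrative, not the authors' smoothed-projection algorithm.

      import numpy as np

      def fisher_scoring(A, y, n_iter=20, eps=1e-9):
          """Toy Fisher scoring for y_i ~ Poisson((A lam)_i): lam <- lam + I(lam)^(-1) U(lam).
          Jacobi or Gauss-Seidel schemes would approximate the solve with diagonal
          or sweep-wise updates."""
          m, n = A.shape
          lam = np.full(n, y.mean())
          for _ in range(n_iter):
              mean = A @ lam + eps
              U = A.T @ (y / mean - 1.0)             # score (gradient of the log-likelihood)
              I = A.T @ (A / mean[:, None])          # expected Fisher information
              lam = np.maximum(lam + np.linalg.solve(I, U), eps)  # keep intensities positive
          return lam

      # Tiny synthetic example: 3 detector bins, 2 image pixels.
      A = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
      y = np.array([4.0, 6.0, 4.0])
      print(fisher_scoring(A, y))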

  16. Using Minimum-Surface Bodies for Iteration Space Partitioning

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A number of known techniques for improving cache performance in scientific computations involve the reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. We study coverings of iteration spaces represented by structured and unstructured grids. For structured grids we introduce a covering based on successive minima tiles of the interference lattice of the grid. We show that the covering has a good surface-to-volume ratio and present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For unstructured grids no cache efficient covering can be guaranteed. We present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.
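
    The sketch below illustrates the general idea of covering an iteration space with small tiles so that a local operator reuses cached data; it uses a plain square tiling for a 5-point stencil, not the successive-minima tiles of the interference lattice discussed in the paper.

      import numpy as np

      def five_point_stencil_tiled(u, tile=64):
          """Apply a 5-point Laplacian over a 2D grid, visiting the interior in square
          tiles so that each tile's working set stays cache resident (a covering with
          a good surface-to-volume ratio)."""
          n, m = u.shape
          out = np.zeros_like(u)
          for i0 in range(1, n - 1, tile):
              for j0 in range(1, m - 1, tile):
                  i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
                  for i in range(i0, i1):
                      for j in range(j0, j1):
                          out[i, j] = (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                       - 4.0 * u[i, j])
                  # In a compiled implementation the tile bounds, not Python loops,
                  # are what control cache reuse; this only shows the traversal order.
          return out

      u = np.random.rand(256, 256)
      lap = five_point_stencil_tiled(u)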

  17. Adaptive matching of the iota ring linear optics for space charge compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, A.; Bruhwiler, D. L.; Cook, N.

    Many present and future accelerators must operate with high intensity beams when distortions induced by space charge forces are among the major limiting factors. Betatron tune depression above approximately 0.1 per cell leads to significant distortions of linear optics. Many aspects of machine operation depend on proper relations between lattice functions and phase advances, and can be improved with proper treatment of space charge effects. We implement an adaptive algorithm for linear lattice re-matching with full account of space charge in the linear approximation for the case of Fermilab’s IOTA ring. The method is based on a search for initial second moments that give a closed solution and, at the same time, satisfy a predefined set of goals for emittances, beta functions, dispersions and phase advances at and between points of interest. An iterative singular value decomposition based technique is used to search for the optimum by varying a wide array of model parameters.
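
    The following is a generic sketch of an iterative, SVD-based matching loop of the kind described: a finite-difference response matrix of the lattice-function goals with respect to the knobs is built, then a truncated pseudo-inverse step is taken. The function simulate(knobs) and the toy example are hypothetical stand-ins for the space-charge lattice model, not the IOTA code.

      import numpy as np

      def svd_rematch(goal, simulate, knobs, n_iter=20, delta=1e-4, rcond=1e-3):
          """Gauss-Newton-style matching: build a numerical response matrix and take a
          truncated-SVD pseudo-inverse step to reduce the goal residuals."""
          knobs = np.asarray(knobs, dtype=float).copy()
          for _ in range(n_iter):
              residual = simulate(knobs) - goal
              J = np.empty((goal.size, knobs.size))
              for k in range(knobs.size):
                  step = np.zeros_like(knobs)
                  step[k] = delta
                  J[:, k] = (simulate(knobs + step) - simulate(knobs - step)) / (2.0 * delta)
              # The truncated pseudo-inverse suppresses poorly determined directions.
              knobs -= np.linalg.pinv(J, rcond=rcond) @ residual
          return knobs

      # Toy model: two "quadrupole-like" knobs controlling two lattice-function goals.
      goal = np.array([4.0, 1.0])
      simulate = lambda k: np.array([k[0]**2 + 0.1 * k[1], k[1]**2 + 0.1 * k[0]])
      print(svd_rematch(goal, simulate, knobs=[1.0, 0.5]))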

  18. DAVIS: A direct algorithm for velocity-map imaging system

    NASA Astrophysics Data System (ADS)

    Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.

    2018-05-01

    In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
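
    A small illustration of the Legendre-expansion step is sketched below for a synthetic angular distribution; the distribution, the fit degree, and the beta2 naming are assumptions for illustration, not the DAVIS fitting model.

      import numpy as np
      from numpy.polynomial import legendre as leg

      # Hypothetical angular distribution sampled from an image slice: intensity vs. cos(theta).
      theta = np.linspace(0.0, np.pi, 181)
      x = np.cos(theta)
      intensity = 1.0 + 2.0 * 0.5 * (3.0 * x**2 - 1.0)   # e.g. I ~ 1 + beta2 * P2(cos theta)

      # Expand in Legendre polynomials; the low-order even coefficients carry the
      # angle-correlated information for a one-photon process.
      coeffs = leg.legfit(x, intensity, deg=4)
      reconstructed = leg.legval(x, coeffs)
      beta2 = coeffs[2] / coeffs[0]                      # anisotropy parameter relative to c0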

  19. Stability of iterative procedures with errors for approximating common fixed points of a couple of q-contractive-like mappings in Banach spaces

    NASA Astrophysics Data System (ADS)

    Zeng, Lu-Chuan; Yao, Jen-Chih

    2006-09-01

    Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
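
    For orientation, the sketch below implements the plain Ishikawa-type iteration (it reduces to the Mann iteration when the second parameter sequence is zero) for a simple contraction on the real line; the error terms and the Banach-space setting of the paper are omitted, and the mapping and parameter sequences are illustrative.

      import numpy as np

      def ishikawa(T, x0, alphas, betas, n_iter=100):
          """Ishikawa-type iteration:
          y_n = (1 - b_n) x_n + b_n T(x_n),  x_{n+1} = (1 - a_n) x_n + a_n T(y_n)."""
          x = x0
          for n in range(n_iter):
              y = (1.0 - betas(n)) * x + betas(n) * T(x)
              x = (1.0 - alphas(n)) * x + alphas(n) * T(y)
          return x

      # Example: contraction T(x) = 0.5*x + 1 on the real line, fixed point x* = 2;
      # the iterates drift toward 2 as the sum of the step sizes diverges.
      T = lambda x: 0.5 * x + 1.0
      x_approx = ishikawa(T, x0=10.0, alphas=lambda n: 1.0 / (n + 2), betas=lambda n: 1.0 / (n + 2))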

  20. Stability and dynamic analysis of a slender column with curved longitudinal stiffeners

    NASA Technical Reports Server (NTRS)

    Lake, Mark S.

    1989-01-01

    The results of a stability design study are presented for a slender column with curved longitudinal stiffeners for large space structure applications. Linear stability analyses are performed using a link-plate representation of the stiffeners to determine stiffener local buckling stresses. Results from a set of parametric analyses are used to determine an approximate explicit expression for stiffener local buckling in terms of its geometric parameters. This expression along with other equations governing column stability and mass are assembled into a determinate system describing minimum mass stiffened column design. An iterative solution is determined to solve this system and a computer program incorporating this routine is presented. Example design problems are presented which verify the solution accuracy and illustrate the implementation of the solution routine. Also, observations are made which lead to a greatly simplified first iteration design equation relating the percent increase in column mass to the percent increase in column buckling load. From this, generalizations are drawn as to the mass savings offered by the stiffened column concept. Finally, the percent increase in fundamental column vibration frequency due to the addition of deployable stiffeners is studied.

  1. Mass gap in the weak coupling limit of (2 +1 )-dimensional SU(2) lattice gauge theory

    NASA Astrophysics Data System (ADS)

    Anishetty, Ramesh; Sreeraj, T. P.

    2018-04-01

    We develop the dual description of (2 +1 )-dimensional SU(2) lattice gauge theory as interacting "Abelian-like" electric loops by using Schwinger bosons. "Point splitting" of the lattice enables us to construct explicit Hilbert space for the gauge invariant theory which in turn makes dynamics more transparent. Using path integral representation in phase space, the interacting closed loop dynamics is analyzed in the weak coupling limit to get the mass gap.

  2. Position, scale, and rotation invariant holographic associative memory

    NASA Astrophysics Data System (ADS)

    Fielding, Kenneth H.; Rogers, Steven K.; Kabrisky, Matthew; Mills, James P.

    1989-08-01

    This paper describes the development and characterization of a holographic associative memory (HAM) system that is able to recall stored objects whose inputs were changed in position, scale, and rotation. The HAM is based on the single iteration model described by Owechko et al. (1987); however, the system described here uses a self-pumped BaTiO3 phase conjugate mirror rather than the degenerate four-wave mixing proposed by Owechko and his coworkers. The HAM system can store objects in a position, scale, and rotation invariant feature space. The angularly multiplexed diffuse Fourier transform holograms of the HAM feature space are characterized as the memory unit; distorted input objects are correlated with the hologram, and the nonlinear phase conjugate mirror reduces cross-correlation noise and provides object discrimination. Applications of the HAM system are presented.

  3. Measurement of the velocity of a quantum object: A role of phase and group velocities

    NASA Astrophysics Data System (ADS)

    Lapinski, Mikaila; Rostovtsev, Yuri V.

    2017-08-01

    We consider the motion of a quantum particle in a free space. Introducing an explicit measurement procedure for velocity, we demonstrate that the measured velocity is related to the group and phase velocities of the corresponding matter waves. We show that for long distances the measured velocity coincides with the matter wave group velocity. We discuss the possibilities to demonstrate these effects for the optical pulses in coherently driven media or for radiation propagating in waveguides.
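
    For a free non-relativistic matter wave the textbook dispersion relation makes the distinction explicit (this is standard background, not the paper's measurement procedure): the group velocity equals the classical particle velocity and is twice the phase velocity.

      \omega(k) = \frac{\hbar k^{2}}{2m}, \qquad
      v_{\mathrm{ph}} = \frac{\omega}{k} = \frac{\hbar k}{2m}, \qquad
      v_{\mathrm{g}} = \frac{d\omega}{dk} = \frac{\hbar k}{m} = \frac{p}{m} = 2\, v_{\mathrm{ph}}.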

  4. Higher Order First Integrals of Motion in a Gauge Covariant Hamiltonian Framework

    NASA Astrophysics Data System (ADS)

    Visinescu, Mihai

    The higher order symmetries are investigated in a covariant Hamiltonian formulation. The covariant phase-space approach is extended to include the presence of external gauge fields and scalar potentials. The special role of the Killing-Yano tensors is pointed out. Some nontrivial examples involving Runge-Lenz type conserved quantities are explicitly worked out.

  5. p-Forms and diffeomorphisms: Hamiltonian formulation

    NASA Astrophysics Data System (ADS)

    Baulieu, Laurent; Henneaux, Marc

    1987-07-01

    The BRST charges corresponding to various (equivalent) ways of writing the action of the diffeomorphism group on p-form gauge fields are related by a canonical transformation in the extended phase space, which is explicitly constructed. The occurrence of higher order structure functions is pointed out.

  6. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

    A fresh phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust Bayesian methods in non-linear signal processing to date, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which simplifies, and can even eliminate, the pre-filtering procedure that normally precedes phase unwrapping. The robust phase gradient estimator is used to efficiently and accurately obtain phase gradient information from interferometric fringes, which is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps pixels along the path from the high-quality area to the low-quality area of wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic data and real data show that the proposed method can obtain better solutions with an acceptable time consumption, with respect to some of the most used algorithms.

  7. Considerations for preparing a randomized population health intervention trial: lessons from a South African–Canadian partnership to improve the health of health workers

    PubMed Central

    Yassi, Annalee; O’Hara, Lyndsay Michelle; Engelbrecht, Michelle C.; Uebel, Kerry; Nophale, Letshego Elizabeth; Bryce, Elizabeth Ann; Buxton, Jane A; Siegel, Jacob; Spiegel, Jerry Malcolm

    2014-01-01

    Background Community-based cluster-randomized controlled trials (RCTs) are increasingly being conducted to address pressing global health concerns. Preparations for clinical trials are well-described, as are the steps for multi-component health service trials. However, guidance is lacking for addressing the ethical and logistic challenges in (cluster) RCTs of population health interventions in low- and middle-income countries. Objective We aimed to identify the factors that population health researchers must explicitly consider when planning RCTs within North–South partnerships. Design We reviewed our experiences and identified key ethical and logistic issues encountered during the pre-trial phase of a recently implemented RCT. This trial aimed to improve tuberculosis (TB) and Human Immunodeficiency Virus (HIV) prevention and care for health workers by enhancing workplace assessment capability, addressing concerns about confidentiality and stigma, and providing onsite counseling, testing, and treatment. An iterative framework was used to synthesize this analysis with lessons taken from other studies. Results The checklist of critical factors was grouped into eight categories: 1) Building trust and shared ownership; 2) Conducting feasibility studies throughout the process; 3) Building capacity; 4) Creating an appropriate information system; 5) Conducting pilot studies; 6) Securing stakeholder support, with a view to scale-up; 7) Continuously refining methodological rigor; and 8) Explicitly addressing all ethical issues both at the start and continuously as they arise. Conclusion Researchers should allow for the significant investment of time and resources required for successful implementation of population health RCTs within North–South collaborations, recognize the iterative nature of the process, and be prepared to revise protocols as challenges emerge. PMID:24802561

  8. Phase-only asymmetric optical cryptosystem based on random modulus decomposition

    NASA Astrophysics Data System (ADS)

    Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan

    2018-06-01

    We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.

  9. Plasma facing materials performance under ITER-relevant mitigated disruption photonic heat loads

    NASA Astrophysics Data System (ADS)

    Klimov, N. S.; Putrik, A. B.; Linke, J.; Pitts, R. A.; Zhitlukhin, A. M.; Kuprianov, I. B.; Spitsyn, A. V.; Ogorodnikova, O. V.; Podkovyrov, V. L.; Muzichenko, A. D.; Ivanov, B. V.; Sergeecheva, Ya. V.; Lesina, I. G.; Kovalenko, D. V.; Barsuk, V. A.; Danilina, N. A.; Bazylev, B. N.; Giniyatulin, R. N.

    2015-08-01

    PFMs (Plasma-facing materials: ITER grade stainless steel, beryllium, and ferritic-martensitic steels) as well as deposited erosion products of PFCs (Be-like, tungsten, and carbon based) were tested in QSPA under photonic heat loads relevant to those expected from photon radiation during disruptions mitigated by massive gas injection in ITER. Repeated pulses slightly above the melting threshold on the bulk materials eventually lead to a regular, "corrugated" surface, with hills and valleys spaced by 0.2-2 mm. The results indicate that hill growth (growth rate of ∼1 μm per pulse) and sample thinning in the valleys is a result of melt-layer redistribution. The measurements on the 316L(N)-IG indicate that the amount of tritium absorbed by the sample from the gas phase significantly increases with pulse number as well as the modified layer thickness. Repeated pulses significantly below the melting threshold on the deposited erosion products lead to a decrease of hydrogen isotopes trapped during the deposition of the eroded material.

  10. Finite Difference Time Marching in the Frequency Domain: A Parabolic Formulation for the Convective Wave Equation

    NASA Technical Reports Server (NTRS)

    Baumeister, K. J.; Kreider, K. L.

    1996-01-01

    An explicit finite difference iteration scheme is developed to study harmonic sound propagation in ducts. To reduce storage requirements for large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable, time is introduced into the Fourier transformed (steady-state) acoustic potential field as a parameter. Under a suitable transformation, the time dependent governing equation in frequency space is simplified to yield a parabolic partial differential equation, which is then marched through time to attain the steady-state solution. The input to the system is the amplitude of an incident harmonic sound source entering a quiescent duct at the input boundary, with standard impedance boundary conditions on the duct walls and duct exit. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
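
    The sketch below shows the generic pattern of explicit pseudo-time marching of a parabolic problem to its steady state, which is the structural idea behind the scheme described; the 1D model equation, source term, boundary conditions, and tolerance are illustrative, not the duct-acoustics formulation.

      import numpy as np

      # Explicit pseudo-time marching of a 1D model parabolic problem u_t = u_xx + s(x)
      # until du/dt ~ 0; the steady state plays the role of the frequency-domain solution.
      nx, L = 201, 1.0
      dx = L / (nx - 1)
      dt = 0.4 * dx**2                       # explicit stability limit is ~ 0.5 dx^2
      x = np.linspace(0.0, L, nx)
      s = np.sin(np.pi * x)                  # stand-in for the incident forcing
      u = np.zeros(nx)                       # quiescent initial field, fixed ends u = 0

      for step in range(200000):
          du = np.zeros_like(u)
          du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + s[1:-1]
          u += dt * du
          if np.max(np.abs(du)) < 1e-8:      # steady state reached
              break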

  11. Finite Difference Time Marching in the Frequency Domain: A Parabolic Formulation for Aircraft Acoustic Nacelle Design

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.; Kreider, Kevin L.

    1996-01-01

    An explicit finite difference iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable, time is introduced into the Fourier transformed (steady-state) acoustic potential field as a parameter. Under a suitable transformation, the time dependent governing equation in frequency space is simplified to yield a parabolic partial differential equation, which is then marched through time to attain the steady-state solution. The input to the system is the amplitude of an incident harmonic sound source entering a quiescent duct at the input boundary, with standard impedance boundary conditions on the duct walls and duct exit. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.

  12. Léon Rosenfeld's general theory of constrained Hamiltonian dynamics

    NASA Astrophysics Data System (ADS)

    Salisbury, Donald; Sundermeyer, Kurt

    2017-04-01

    This commentary reflects on the 1930 general theory of Léon Rosenfeld dealing with phase-space constraints. We start with a short biography of Rosenfeld and his motivation for this article in the context of ideas pursued by W. Pauli, F. Klein, E. Noether. We then comment on Rosenfeld's General Theory dealing with symmetries and constraints, symmetry generators, conservation laws and the construction of a Hamiltonian in the case of phase-space constraints. It is remarkable that he was able to derive expressions for all phase space symmetry generators without making explicit reference to the generator of time evolution. In his Applications, Rosenfeld treated the general relativistic example of Einstein-Maxwell-Dirac theory. We show, that although Rosenfeld refrained from fully applying his general findings to this example, he could have obtained the Hamiltonian. Many of Rosenfeld's discoveries were re-developed or re-discovered by others two decades later, yet as we show there remain additional firsts that are still not recognized in the community.

  13. Surface Wave Propagation on a Laterally Heterogeneous Earth

    NASA Astrophysics Data System (ADS)

    Tromp, Jeroen

    1992-01-01

    Love and Rayleigh waves propagating on the surface of the Earth exhibit path, phase and amplitude anomalies as a result of the lateral heterogeneity of the mantle. In the JWKB approximation, these anomalies can be determined by tracing surface wave trajectories, and calculating phase and amplitude anomalies along them. A time- or frequency-domain JWKB analysis yields local eigenfunctions, local dispersion relations, and conservation laws for the surface wave energy. The local dispersion relations determine the surface wave trajectories, and the energy equations determine the surface wave amplitudes. On an anisotropic Earth model the local dispersion relation and the local vertical eigenfunctions depend explicitly on the direction of the local wavevector. Apart from the usual dynamical phase, which is the integral of the local wavevector along a raypath, there is an additional variation in phase. This additional phase, which is an analogue of the Berry phase in adiabatic quantum mechanics, vanishes in a waveguide with a local vertical two-fold symmetry axis or a local horizontal mirror plane. JWKB theory breaks down in the vicinity of caustics, where neighboring rays merge and the surface wave amplitude diverges. Based upon a potential representation of the surface wave field, a uniformly valid Maslov theory can be obtained. Surface wave trajectories are determined by a system of four ordinary differential equations which define a three-dimensional manifold in four-dimensional phase space (theta, phi, k_theta, k_phi), where theta is colatitude, phi is longitude, and k_theta and k_phi are the covariant components of the wavevector. There are no caustics in phase space; it is only when the rays in phase space are projected onto configuration space (theta, phi), the mixed spaces (k_theta, phi) and (theta, k_phi), or onto momentum space (k_theta, k_phi), that caustics occur. The essential strategy is to employ a mixed or momentum space representation of the wavefield in the vicinity of a configuration space caustic.
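
    As a hedged, Cartesian toy version of the ray-tracing step (not the spherical (theta, phi, k_theta, k_phi) system of the paper), the sketch below integrates Hamilton's equations dx/dt = d(omega)/dk and dk/dt = -d(omega)/dx for an assumed local dispersion relation omega = c(x, y)|k| with a hypothetical smooth phase-speed map.

      import numpy as np
      from scipy.integrate import solve_ivp

      def c(x, y):
          return 1.0 + 0.1 * np.sin(x) * np.cos(y)          # hypothetical phase-speed map

      def rays(t, state, h=1e-6):
          x, y, kx, ky = state
          kmag = np.hypot(kx, ky)
          dcdx = (c(x + h, y) - c(x - h, y)) / (2.0 * h)
          dcdy = (c(x, y + h) - c(x, y - h)) / (2.0 * h)
          return [c(x, y) * kx / kmag, c(x, y) * ky / kmag,  # group-velocity direction
                  -dcdx * kmag, -dcdy * kmag]                # refraction of the wavevector

      sol = solve_ivp(rays, (0.0, 20.0), [0.0, 0.0, 1.0, 0.2], max_step=0.05)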

  14. Space tug geosynchronous mission simulation

    NASA Technical Reports Server (NTRS)

    Lang, T. J.

    1973-01-01

    Near-optimal three dimensional trajectories from a low earth park orbit inclined at 28.5 deg to a synchronous-equatorial mission orbit were developed for both the storable (thrust = 28,912 N (6,500 lbs), Isp = 339 sec) and cryogenic (thrust = 44,480 N (10,000 lbs), Isp = 470 sec) space tug using the iterative cost function minimization technique contained within the modularized vehicle simulation (MVS) program. The finite burn times, due to low thrust-to-weight ratios, and the associated gravity losses are accounted for in the trajectory simulation and optimization. The use of an ascent phasing orbit to achieve burnout in synchronous orbit at any longitude is investigated. The ascent phasing orbit is found to offer the additional advantage of significantly reducing the overall delta velocity by splitting the low altitude burn into two parts and thereby reducing gravity losses.

  15. Searching for substructures in fragment spaces.

    PubMed

    Ehrlich, Hans-Christian; Volkamer, Andrea; Rarey, Matthias

    2012-12-21

    A common task in drug development is the selection of compounds fulfilling specific structural features from a large data pool. While several methods that iteratively search through such data sets exist, their application is limited compared to the infinite character of molecular space. The introduction of the concept of fragment spaces (FSs), which are composed of molecular fragments and their connection rules, made the representation of large combinatorial data sets feasible. At the same time, search algorithms face the problem of structural features spanning over multiple fragments. Due to the combinatorial nature of FSs, an enumeration of all products is impossible. In order to overcome these time and storage issues, we present a method that is able to find substructures in FSs without explicit product enumeration. This is accomplished by splitting substructures into subsubstructures and mapping them onto fragments with respect to fragment connectivity rules. The method has been evaluated on three different drug discovery scenarios considering the exploration of a molecule class, the elaboration of decoration patterns for a molecular core, and the exhaustive query for peptides in FSs. FSs can be searched in seconds, and found products contain novel compounds not present in the PubChem database which may serve as hints for new lead structures.

  16. Real-space decoupling transformation for quantum many-body systems.

    PubMed

    Evenbly, G; Vidal, G

    2014-06-06

    We propose a real-space renormalization group method to explicitly decouple into independent components a many-body system that, as in the phenomenon of spin-charge separation, exhibits separation of degrees of freedom at low energies. Our approach produces a branching holographic description of such systems that opens the path to the efficient simulation of the most entangled phases of quantum matter, such as those whose ground state violates a boundary law for entanglement entropy. As in the coarse-graining transformation of Vidal [Phys. Rev. Lett. 99, 220405 (2007)].

  17. The ITER project construction status

    NASA Astrophysics Data System (ADS)

    Motojima, O.

    2015-10-01

    The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage their procurements of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues including disruption load characterization, prediction, avoidance, and mitigation, first wall and divertor shaping, edge pedestal and SOL plasma stability, fuelling and plasma behaviour during confinement transients and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for 1st plasma and subsequent phases of ITER operation, the major plasma commissioning activities, and the needs of the R&D program to be carried out by the ITER parties in parallel with ITER construction.

  18. Metric Optimization for Surface Analysis in the Laplace-Beltrami Embedding Space

    PubMed Central

    Lai, Rongjie; Wang, Danny J.J.; Pelletier, Daniel; Mohr, David; Sicotte, Nancy; Toga, Arthur W.

    2014-01-01

    In this paper we present a novel approach for the intrinsic mapping of anatomical surfaces and its application in brain mapping research. Using the Laplace-Beltrami eigen-system, we represent each surface with an isometry invariant embedding in a high dimensional space. The key idea in our system is that we realize surface deformation in the embedding space via the iterative optimization of a conformal metric without explicitly perturbing the surface or its embedding. By minimizing a distance measure in the embedding space with metric optimization, our method generates a conformal map directly between surfaces with highly uniform metric distortion and the ability of aligning salient geometric features. Besides pairwise surface maps, we also extend the metric optimization approach for group-wise atlas construction and multi-atlas cortical label fusion. In experimental results, we demonstrate the robustness and generality of our method by applying it to map both cortical and hippocampal surfaces in population studies. For cortical labeling, our method achieves excellent performance in a cross-validation experiment with 40 manually labeled surfaces, and successfully models localized brain development in a pediatric study of 80 subjects. For hippocampal mapping, our method produces much more significant results than two popular tools on a multiple sclerosis study of 109 subjects. PMID:24686245

  19. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure of discrete-time iterative adaptive dynamic programming algorithms, by which most of the discrete-time reinforcement learning algorithms can be described using the GPI structure. For the first time, approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
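
    A tabular sketch of generalized policy iteration is given below: a tunable number of evaluation sweeps per improvement step spans the range from value iteration (one sweep) to full policy iteration (many sweeps). The tiny MDP and the exact tabular setting are illustrative; the paper's algorithm uses function approximation and explicitly tracks approximation errors.

      import numpy as np

      def policy_iteration(P, R, gamma=0.9, eval_sweeps=5, n_iter=50):
          """Generalized policy iteration for a small tabular MDP.
          P[a, s, t] are transition probabilities, R[a, s] expected rewards."""
          n_actions, n_states, _ = P.shape
          V = np.zeros(n_states)
          policy = np.zeros(n_states, dtype=int)
          for _ in range(n_iter):
              for _ in range(eval_sweeps):                        # (approximate) policy evaluation
                  V = np.array([R[policy[s], s] + gamma * P[policy[s], s] @ V
                                for s in range(n_states)])
              Q = R + gamma * np.einsum("ast,t->as", P, V)        # action values
              policy = Q.argmax(axis=0)                           # policy improvement
          return V, policy

      # Tiny 2-state, 2-action example.
      P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                    [[0.5, 0.5], [0.6, 0.4]]])
      R = np.array([[1.0, 0.0],
                    [0.5, 2.0]])
      V, pi = policy_iteration(P, R)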

  20. How to calculate H3 better.

    PubMed

    Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik

    2009-11-14

    Efficient optimization of the basis set is key to achieving a very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in the variational calculations of H(3) where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 Hartree) and the binding energy (-15.74 cm(-1)) obtained in the calculation with 1000 Gaussians are the most accurate results to date.

  1. Iterative projection algorithms for ab initio phasing in virus crystallography.

    PubMed

    Lo, Victor L; Kingston, Richard L; Millane, Rick P

    2016-12-01

    Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allow high resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information.

  2. Statistical characterization of the standard map

    NASA Astrophysics Data System (ADS)

    Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino

    2017-06-01

    The standard map, a paradigmatic conservative system in the (x, p) phase space, has recently been shown (Tirnakli and Borges 2016 Sci. Rep. 6 23644) to exhibit interesting statistical behaviors directly related to the value of the standard map external parameter K. A comprehensive statistical numerical description is achieved in the present paper. More precisely, for large values of K (e.g. K = 10), where the Lyapunov exponents are neatly positive over virtually the entire phase space consistently with Boltzmann-Gibbs (BG) statistics, we verify that the q-generalized indices related to the entropy production q_ent, the sensitivity to initial conditions q_sen, the distribution of a time-averaged (over successive iterations) phase-space coordinate q_stat, and the relaxation to the equilibrium final state q_rel, collapse onto a fixed point, i.e. q_ent = q_sen = q_stat = q_rel = 1. In remarkable contrast, for small values of K (e.g. K = 0.2), where the Lyapunov exponents are virtually zero over the entire phase space, we verify q_ent = q_sen = 0, q_stat ≃ 1.935, and q_rel ≃ 1.4. The situation corresponding to intermediate values of K, where both stable orbits and a chaotic sea are present, is discussed as well. The present results transparently illustrate when BG behavior and/or q-statistical behavior are observed.
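
    For readers who want to reproduce the qualitative picture, the sketch below iterates the Chirikov standard map and estimates the largest Lyapunov exponent from the tangent map; the initial conditions and iteration counts are illustrative, and single-orbit estimates depend on whether the chosen orbit is regular or chaotic.

      import numpy as np

      def standard_map_lyapunov(K, x0=0.5, p0=0.2, n_iter=100000):
          """Iterate p' = p + K sin(x), x' = x + p' (mod 2*pi) and estimate the
          largest Lyapunov exponent from the linearized (tangent) map."""
          x, p = x0, p0
          v = np.array([1.0, 0.0])
          lyap_sum = 0.0
          for _ in range(n_iter):
              cosx = np.cos(x)
              jac = np.array([[1.0 + K * cosx, 1.0],   # d(x', p')/d(x, p) at the current point
                              [K * cosx, 1.0]])
              v = jac @ v
              norm = np.linalg.norm(v)
              lyap_sum += np.log(norm)
              v /= norm
              p = (p + K * np.sin(x)) % (2.0 * np.pi)
              x = (x + p) % (2.0 * np.pi)
          return lyap_sum / n_iter

      print(standard_map_lyapunov(10.0))   # strongly chaotic: clearly positive exponent
      print(standard_map_lyapunov(0.2))    # near-integrable: close to zero for regular orbits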

  3. 3D Diffraction Microscope Provides a First Deep View

    NASA Astrophysics Data System (ADS)

    Miao, Jianwei

    2005-03-01

    When a coherent diffraction pattern is sampled at a spacing sufficiently finer than the Bragg peak frequency (i.e. the inverse of the sample size), the phase information is in principle encoded inside the diffraction pattern, and can be directly retrieved by using an iterative process. By combining this oversampling phasing method with either coherent X-rays or electrons, a novel form of diffraction microscopy has recently been developed to image nanoscale materials and biological structures. In this talk, I will present the principle of the oversampling method, discuss the first experimental demonstration of this microscope, and illustrate some applications in nanoscience and biology.

  4. Radiation from a space charge dominated linear electron beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biswas, Debabrata

    2008-01-15

    It is commonly known that radiation loss in linear beam transport is largely unimportant. For a space charge dominated linear beam, however, radiation power loss can be an appreciable fraction of the injected beam power [Biswas, Kumar, and Puri, Phys. Plasmas 14, 094702 (2007)]. Exploring this further, the electromagnetic nature of radiation due to the passage of a space charge dominated electron beam in a 'closed' drift tube is explicitly demonstrated by identifying the cavity modes where none existed prior to beam injection. It is further shown that even in an 'open' drift tube from which radiation may leak, the modes that escape contribute to the time variation of the electric and magnetic fields in the transient phase. As the window opening increases, the oscillatory transient phase disappears altogether. However, the 'bouncing ball' modes survive and can be observed between the injection and collection plates.

  5. A Novel Hyperbolization Procedure for The Two-Phase Six-Equation Flow Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samet Y. Kadioglu; Robert Nourgaliev; Nam Dinh

    2011-10-01

    We introduce a novel approach for the hyperbolization of the well-known two-phase six-equation flow model. The six-equation model has been frequently used in many two-phase flow applications such as bubbly fluid flows in nuclear reactors. One major drawback of this model is that it can be arbitrarily non-hyperbolic, resulting in difficulties such as numerical instability issues. Non-hyperbolic behavior can be associated with complex eigenvalues of the characteristic matrix of the system. Complex eigenvalues are often due to certain flow parameter choices such as the definition of the interfacial pressure terms. In our method, we prevent the characteristic matrix from acquiring complex eigenvalues by fine tuning the interfacial pressure terms with an iterative procedure. In this way, the characteristic matrix possesses all real eigenvalues, meaning that the characteristic wave speeds are all real and the overall two-phase flow model therefore becomes hyperbolic. The main advantage of this is that one can apply less diffusive, highly accurate, high resolution numerical schemes that often rely on explicit calculations of real eigenvalues. We note that existing non-hyperbolic models are discretized mainly with low order, highly dissipative numerical techniques in order to avoid stability issues.
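
    The tuning idea can be caricatured as follows: adjust an interfacial-pressure-like parameter until the characteristic matrix has only real eigenvalues. The 2x2 matrix, the simple parameter scan, and the tolerance below are purely illustrative stand-ins for the six-equation flux Jacobian and the authors' iterative procedure.

      import numpy as np

      def hyperbolize(build_matrix, delta0=0.0, step=0.01, max_steps=1000):
          """Increase a parameter delta until build_matrix(delta) has only real
          eigenvalues, i.e. until the toy system becomes hyperbolic."""
          delta = delta0
          for _ in range(max_steps):
              eigvals = np.linalg.eigvals(build_matrix(delta))
              if np.all(np.abs(eigvals.imag) < 1e-12):
                  return delta, eigvals.real        # real characteristic wave speeds
              delta += step
          raise RuntimeError("no real-eigenvalue point found in the scanned range")

      # Illustrative 2x2 system: complex eigenvalues for small delta, real for delta >= 1.
      build = lambda d: np.array([[0.0, 1.0], [d - 1.0, 0.0]])
      delta, waves = hyperbolize(build)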

  6. Solving the Sea-Level Equation in an Explicit Time Differencing Scheme

    NASA Astrophysics Data System (ADS)

    Klemann, V.; Hagedoorn, J. M.; Thomas, M.

    2016-12-01

    In preparation for coupling the solid-earth to an ice-sheet compartment in an earth-system model, the dependency of the initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. Lit.: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure Appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int., 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x

  7. A SEMI-LAGRANGIAN TWO-LEVEL PRECONDITIONED NEWTON-KRYLOV SOLVER FOR CONSTRAINED DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Biros, George

    2017-01-01

    We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and second-order accurate semi-Lagrangian time stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20 × speedup for a two dimensional, real world multi-subject medical image registration problem.

  8. Multi-Regge kinematics and the moduli space of Riemann spheres with marked points

    DOE PAGES

    Del Duca, Vittorio; Druc, Stefan; Drummond, James; ...

    2016-08-25

    We show that scattering amplitudes in planar N = 4 Super Yang-Mills in multi-Regge kinematics can naturally be expressed in terms of single-valued iterated integrals on the moduli space of Riemann spheres with marked points. As a consequence, scattering amplitudes in this limit can be expressed as convolutions that can easily be computed using Stokes’ theorem. We apply this framework to MHV amplitudes to leading-logarithmic accuracy (LLA), and we prove that at L loops all MHV amplitudes are determined by amplitudes with up to L + 4 external legs. We also investigate non-MHV amplitudes, and we show that they can be obtained by convoluting the MHV results with a certain helicity flip kernel. We classify all leading singularities that appear at LLA in the Regge limit for arbitrary helicity configurations and any number of external legs. In conclusion, we use our new framework to obtain explicit analytic results at LLA for all MHV amplitudes up to five loops and all non-MHV amplitudes with up to eight external legs and four loops.

  9. Extension of the KLI approximation toward the exact optimized effective potential.

    PubMed

    Iafrate, G J; Krieger, J B

    2013-03-07

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first order perturbation theory wavefunction by use of Dalgarno functions which are determined from well known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  10. Extension of the KLI approximation toward the exact optimized effective potential

    NASA Astrophysics Data System (ADS)

    Iafrate, G. J.; Krieger, J. B.

    2013-03-01

    The integral equation for the optimized effective potential (OEP) is utilized in a compact form from which an accurate OEP solution for the spin-unrestricted exchange-correlation potential, Vxcσ, is obtained for any assumed orbital-dependent exchange-correlation energy functional. The method extends beyond the Krieger-Li-Iafrate (KLI) approximation toward the exact OEP result. The compact nature of the OEP equation arises by replacing the integrals involving the Green's function terms in the traditional OEP equation by an equivalent first-order perturbation theory wavefunction often referred to as the "orbital shift" function. Significant progress is then obtained by solving the equation for the first order perturbation theory wavefunction by use of Dalgarno functions which are determined from well known methods of partial differential equations. The use of Dalgarno functions circumvents the need to explicitly address the Green's functions and the associated problems with "sum over states" numerics; as well, the Dalgarno functions provide ease in dealing with inherent singularities arising from the origin and the zeros of the occupied orbital wavefunctions. The Dalgarno approach for finding a solution to the OEP equation is described herein, and a detailed illustrative example is presented for the special case of a spherically symmetric exchange-correlation potential. For the case of spherical symmetry, the relevant Dalgarno function is derived by direct integration of the appropriate radial equation while utilizing a user friendly method which explicitly treats the singular behavior at the origin and at the nodal singularities arising from the zeros of the occupied states. The derived Dalgarno function is shown to be an explicit integral functional of the exact OEP Vxcσ, thus allowing for the reduction of the OEP equation to a self-consistent integral equation for the exact exchange-correlation potential; the exact solution to this integral equation can be determined by iteration with the natural zeroth order correction given by the KLI exchange-correlation potential. Explicit analytic results are provided to illustrate the first order iterative correction beyond the KLI approximation. The derived correction term to the KLI potential explicitly involves spatially weighted products of occupied orbital densities in any assumed orbital-dependent exchange-correlation energy functional; as well, the correction term is obtained with no adjustable parameters. Moreover, if the equation for the exact optimized effective potential is further iterated, one can obtain the OEP as accurately as desired.

  11. Stability of the iterative solutions of integral equations as one phase freezing criterion.

    PubMed

    Fantoni, R; Pastore, G

    2003-10-01

    A recently proposed connection between the threshold for the stability of the iterative solution of integral equations for the pair correlation functions of a classical fluid and the structural instability of the corresponding real fluid is carefully analyzed. Direct calculation of the Lyapunov exponent of the standard iterative solution of hypernetted chain and Percus-Yevick integral equations for the one-dimensional (1D) hard rods fluid shows the same behavior observed in 3D systems. Since no phase transition is allowed in such a 1D system, our analysis shows that the proposed one-phase criterion, at least in this case, fails. We argue that the observed proximity between the numerical and the structural instability in 3D originates from the enhanced structure present in the fluid but, in view of the arbitrary dependence on the iteration scheme, it seems difficult to relate the numerical stability analysis to a robust one-phase criterion for predicting a thermodynamic phase transition.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, O. L.; Fonseca, T. L., E-mail: tertius@ufg.br; Sabino, J. R.

    We present theoretical results for the dipole moment, linear polarizability, and first hyperpolarizability of the urea and thiourea molecules in solid phase. The in-crystal electric properties were determined by applying a supermolecule approach in combination with an iterative electrostatic scheme, in which the surrounding molecules are represented by point charges. It is found for both urea and thiourea molecules that the influence of the polarization effects is mild for the linear polarizability, but it is marked for the dipole moment and first hyperpolarizability. The replacement of oxygen atoms by sulfur atoms increases, in general, the electric responses. Our second-order Møller–Plessetmore » perturbation theory based iterative scheme predicts for the in-crystal dipole moment of urea and thiourea the values of 7.54 and 9.19 D which are, respectively, increased by 61% and 58%, in comparison with the corresponding isolated values. The result for urea is in agreement with the available experimental result of 6.56 D. In addition, we present an estimate of macroscopic quantities considering explicit unit cells of urea and thiourea crystals including environment polarization effects. These supermolecule calculations take into account partially the exchange and dispersion effects. The results illustrate the role played by the electrostatic interactions on the static second-order nonlinear susceptibility of the urea crystal.« less

  13. Adaptive restoration of river terrace vegetation through iterative experiments

    USGS Publications Warehouse

    Dela Cruz, Michelle P.; Beauchamp, Vanessa B.; Shafroth, Patrick B.; Decker, Cheryl E.; O’Neil, Aviva

    2014-01-01

    Restoration projects can involve a high degree of uncertainty and risk, which can ultimately result in failure. An adaptive restoration approach can reduce uncertainty through controlled, replicated experiments designed to test specific hypotheses and alternative management approaches. Key components of adaptive restoration include willingness of project managers to accept the risk inherent in experimentation, interest of researchers, availability of funding for experimentation and monitoring, and ability to restore sites as iterative experiments where results from early efforts can inform the design of later phases. This paper highlights an ongoing adaptive restoration project at Zion National Park (ZNP), aimed at reducing the cover of exotic annual Bromus on riparian terraces, and revegetating these areas with native plant species. Rather than using a trial-and-error approach, ZNP staff partnered with academic, government, and private-sector collaborators to conduct small-scale experiments to explicitly address uncertainties concerning biomass removal of annual bromes, herbicide application rates and timing, and effective seeding methods for native species. Adaptive restoration has succeeded at ZNP because managers accept the risk inherent in experimentation and ZNP personnel are committed to continue these projects over a several-year period. Techniques that result in exotic annual Bromus removal and restoration of native plant species at ZNP can be used as a starting point for adaptive restoration projects elsewhere in the region.

  14. Constraints on Martian Aerosol Particles Using MGS/TES and HST Data: Shapes

    NASA Astrophysics Data System (ADS)

    Wolff, M. J.; Clancy, R. T.; Pitman, K. M.; Bell, J. F.; James, P. B.

    2001-12-01

    In order to constrain the shape of water ice and dust aerosols, we have combined a numerical approach for axisymmetric particle shapes, i.e., cylinders, disks, spheroids (Waterman's T-Matrix approach as improved by Mishchenko and collaborators; cf., Mishchenko et al. 1997, JGR, 102, D14, 16,831), with a multiple-scattering radiative transfer algorithm. We utilize a two-stage iterative process. First, we empirically derive a scattering phase function for each aerosol component from radiative transfer models of Mars Global Surveyor Thermal Emission Spectrometer Emission Phase Function (EPF) sequences. Next, we perform a series of scattering calculations, adjusting our parameters to arrive at a ``best-fit'' theoretical phase function. It is important to note that in addition to randomly-oriented particles, we explicitly consider the possibility of (partially) aligned aerosol particles as well. Thus far, we have been analyzing the three empirically-derived phase functions presented by Clancy et al. (this meeting): dust, Type I ice particles (effective radii ~ 1-2 microns), and Type II ice particles (effective radii ~ 3-4 microns). We find that the ``dust'' phase function is best fit by randomly-oriented cylinders with an axial ratio (D/L = diameter-to-length) of either 2.3 or 0.6. Similarly, the shape of the Type II ice curve is reasonably reproduced by randomly-oriented spheroids with an axial ratio of either 0.7 or 1.4. However, neither of the two shapes (nor that of spheres or randomly-oriented hexagonal prisms) can reproduce the phase function derived for the Type I ice. This led to the direct consideration of oriented or aligned particles, which, at least qualitatively, have the ability to account for the phase function shapes for both Type I and II ice particles. The difference between these two phase functions may represent the degree of alignment, with the Type II particles being much less-aligned. The calculations for partially aligned particles are quite numerically intensive, and this avenue of research is currently in progress. Additional work is also being done to further constrain the dust aerosol properties using both TES visible/IR and Hubble Space Telescope UV-NIR spectroscopy/imaging data of the recent (and ongoing) Martian global dust storm. Our work has been supported through NASA (MDAP) grant NAG5-9820, (MED) JPL contract 961471, STScI GO programs #8577 and #9052.

  15. An iterative method for near-field Fresnel region polychromatic phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2017-07-01

    We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.

  16. Implementation of the pyramid wavefront sensor as a direct phase detector for large amplitude aberrations

    NASA Astrophysics Data System (ADS)

    Kupke, Renate; Gavel, Don; Johnson, Jess; Reinig, Marc

    2008-07-01

    We investigate the non-modulating pyramid wave-front sensor's (P-WFS) implementation in the context of Lick Observatory's Villages visible light AO system on the Nickel 1-meter telescope. A complete adaptive optics correction, using a non-modulated P-WFS in slope-sensing mode as a bootstrap to a regime in which the P-WFS can act as a direct phase sensor, is explored. An iterative approach to reconstructing the wave-front phase, given the pyramid wave-front sensor's non-linear signal, is developed. Using Monte Carlo simulations, the iterative reconstruction method's photon noise propagation behavior is compared to both the pyramid sensor used in slope-sensing mode, and the traditional Shack Hartmann sensor's theoretical performance limits. We determine that bootstrapping using the P-WFS as a slope sensor does not offer enough correction to bring the phase residuals into a regime in which the iterative algorithm can provide much improvement in phase measurement. It is found that both the iterative phase reconstructor and the slope reconstruction methods offer an advantage in noise propagation over Shack Hartmann sensors.

  17. Ultrametric properties of the attractor spaces for random iterated linear function systems

    NASA Astrophysics Data System (ADS)

    Buchovets, A. G.; Moskalev, P. V.

    2018-03-01

    We investigate attractors of random iterated linear function systems as independent spaces embedded in the ordinary Euclidean space. Introducing, on the set of attractor points, a metric that satisfies the strengthened triangle inequality makes this space ultrametric. The properties of disconnectedness and hierarchical self-similarity inherent in ultrametric spaces then make it possible to define an attractor as a fractal. We note that a rigorous proof of these properties in the case of an ordinary Euclidean space is very difficult.

  18. LBQ2D, Extending the Line Broadened Quasilinear Model to TAE-EP Interaction

    NASA Astrophysics Data System (ADS)

    Ghantous, Katy; Gorelenkov, Nikolai; Berk, Herbert

    2012-10-01

    The line broadened quasilinear model was proposed and tested on the one-dimensional electrostatic case of the bump-on-tail instability [H. L. Berk, B. Breizman and J. Fitzpatrick, Nucl. Fusion, 35:1661, 1995] to study the wave-particle interaction. In conventional quasilinear theory, the sea of overlapping modes evolves with time as the particle distribution function self-consistently undergoes diffusion in phase space. The line broadened quasilinear model is an extension to the conventional theory in a way that allows treatment of isolated modes as well as overlapping modes by broadening the resonant line in phase space. This makes it possible to treat the evolution of modes self-consistently from onset to saturation in either case. We describe here the model, denoted LBQ2D, which is an extension of the proposed one-dimensional line broadened quasilinear model to the case of TAEs interacting with energetic particles in two-dimensional phase space, energy as well as canonical angular momentum. We study the saturation of isolated modes in various regimes and present the analytical derivation and numerical results. Finally, we present, using ITER parameters, the case where multiple modes overlap and describe the techniques used for the numerical treatment.

  19. Coherent States for Kronecker Products of Non Compact Groups: Formulation and Applications

    NASA Technical Reports Server (NTRS)

    Bambah, Bindu A.; Agarwal, Girish S.

    1996-01-01

    We introduce and study the properties of a class of coherent states for the group SU(1,1) X SU(1,1) and derive explicit expressions for these using the Clebsch-Gordan algebra for the SU(1,1) group. We restrict ourselves to the discrete series representations of SU(1,1). These are the generalization of the 'Barut Girardello' coherent states to the Kronecker Product of two non-compact groups. The resolution of the identity and the analytic phase space representation of these states is presented. This phase space representation is based on the basis of products of 'pair coherent states' rather than the standard number state canonical basis. We discuss the utility of the resulting 'bi-pair coherent states' in the context of four-mode interactions in quantum optics.

  20. Statistical Mechanics of Combinatorial Auctions

    NASA Astrophysics Data System (ADS)

    Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo

    2006-09-01

    Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.

  1. On the breakdown modes and parameter space of Ohmic Tokamak startup

    NASA Astrophysics Data System (ADS)

    Peng, Yanli; Jiang, Wei; Zhang, Ya; Hu, Xiwei; Zhuang, Ge; Innocenti, Maria; Lapenta, Giovanni

    2017-10-01

    Tokamak plasma has to be hot. The process of turning the initial dilute neutral hydrogen gas at room temperature into fully ionized plasma is called tokamak startup. Even with over 40 years of research, the parameter ranges for successful startup are still determined not by numerical simulations but by trial and error. However, in recent years it has drawn much attention due to one of the challenges faced by ITER: the maximum electric field for startup can't exceed 0.3 V/m, which makes the parameter range for successful startup narrower. Besides, this physical mechanism is far from being understood either theoretically or numerically. In this work, we have simulated the plasma breakdown phase driven by pure Ohmic heating using a particle-in-cell/Monte Carlo code, with the aim of giving a predictive parameter range for most tokamaks, even for ITER. We have found three situations during the discharge, as a function of the initial parameters: no breakdown, breakdown and runaway. Moreover, breakdown delay and volt-second consumption under different initial conditions are evaluated. In addition, we have simulated breakdown on ITER and confirmed that when the electric field is 0.3 V/m, the optimal pre-filling pressure is 0.001 Pa, which is in good agreement with ITER's design.

  2. Optimization of image quality and acquisition time for lab-based X-ray microtomography using an iterative reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko

    2018-05-01

    Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
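
    The record above contrasts iterative reconstruction with filtered back-projection but does not name the specific algorithm used. As a point of reference, the sketch below shows a generic SIRT (Simultaneous Iterative Reconstruction Technique) loop in Python; the projection matrix A, the sinogram b, and the relaxation factor are placeholders and are not taken from the paper.

    ```python
    # Minimal SIRT sketch: repeatedly back-project the normalized residual.
    # A: hypothetical projection matrix (rays x voxels); b: measured sinogram.
    import numpy as np

    def sirt(A, b, n_iter=50, relax=1.0):
        row_sum = A.sum(axis=1); row_sum[row_sum == 0] = 1.0   # ray normalization
        col_sum = A.sum(axis=0); col_sum[col_sum == 0] = 1.0   # voxel normalization
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            residual = (b - A @ x) / row_sum          # weight each ray's mismatch
            x += relax * (A.T @ residual) / col_sum   # distribute correction over voxels
            np.clip(x, 0.0, None, out=x)              # non-negativity of attenuation
        return x
    ```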

  3. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  4. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) provide a faster integration process. Fixed stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods) and sequential explicit RK codes DOPRI5 and DOP853 available from the literature.

  5. Element fracture technique for hypervelocity impact simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao-tian; Li, Xiao-gang; Liu, Tao; Jia, Guang-hui

    2015-05-01

    Hypervelocity impact dynamics provides the theoretical basis for spacecraft shielding against space debris. Numerical simulation has become an important approach for obtaining the ballistic limits of spacecraft shields. Currently, the most widely used algorithm for hypervelocity impact is smoothed particle hydrodynamics (SPH). Although the finite element method (FEM) is widely used in fracture mechanics and low-velocity impacts, the standard FEM can hardly simulate the debris cloud generated by hypervelocity impact. This paper presents a successful application of the node-separation technique for hypervelocity impact debris cloud simulation. The node-separation technique assigns individual/coincident nodes for the adjacent elements, and it applies constraints to the coincident node sets in the modeling step. In the explicit iteration, the cracks are generated by releasing the constrained node sets that meet the fracture criterion. Additionally, the distorted elements are identified from two aspects - self-piercing and phase change - and are deleted so that the constitutive computation can continue. FEM with the node-separation technique is used for thin-wall hypervelocity impact simulations. The internal structures of the debris cloud in the simulation output are compared with those in the test X-ray graphs under different material fracture criteria. It shows that the pressure criterion is more appropriate for hypervelocity impact. The internal structures of the debris cloud are also simulated and compared under different thickness-to-diameter ratios (t/D). The simulation outputs show the same spall pattern as the tests. Finally, the triple-plate impact case is simulated with node-separation FEM.

  6. Cosmic time and reduced phase space of general relativity

    NASA Astrophysics Data System (ADS)

    Ita, Eyo Eyo; Soo, Chopin; Yu, Hoi-Lai

    2018-05-01

    In an ever-expanding spatially closed universe, the fractional change of the volume is the preeminent intrinsic time interval to describe evolution in general relativity. The expansion of the universe serves as a subsidiary condition which transforms Einstein's theory from a first class to a second class constrained system when the physical degrees of freedom (d.o.f.) are identified with transverse traceless excitations. The super-Hamiltonian constraint is solved by eliminating the trace of the momentum in terms of the other variables, and spatial diffeomorphism symmetry is tackled explicitly by imposing transversality. The theorems of Maskawa-Nishijima appositely relate the reduced phase space to the physical variables in canonical functional integral and Dirac's criterion for second class constraints to nonvanishing Faddeev-Popov determinants in the phase space measures. A reduced physical Hamiltonian for intrinsic time evolution of the two physical d.o.f. emerges. Freed from the first class Dirac algebra, deformation of the Hamiltonian constraint is permitted, and natural extension of the Hamiltonian while maintaining spatial diffeomorphism invariance leads to a theory with Cotton-York term as the ultraviolet completion of Einstein's theory.

  7. The CFS-PML in numerical simulation of ATEM

    NASA Astrophysics Data System (ADS)

    Zhao, Xuejiao; Ji, Yanju; Qiu, Shuo; Guan, Shanshan; Wu, Yanqi

    2017-01-01

    In the simulation of the airborne transient electromagnetic method (ATEM) in the time domain, the truncated boundary reflection can introduce a large error into the results. The complex frequency shifted perfectly matched layer (CFS-PML) absorbing boundary condition has been shown to better absorb low-frequency incident waves and can greatly reduce the late-time reflection. In this paper, we apply the CFS-PML to three-dimensional numerical simulation of ATEM in the time domain to achieve high precision. The expression of the divergence equation in the CFS-PML is confirmed and its explicit iteration format based on the finite difference method and the recursive convolution technique is deduced. Finally, we use a uniform half-space model and an anomalous model to test the validity of this method. Results show that the CFS-PML can reduce the average relative error to 2.87% and increase the accuracy of the anomaly recognition.
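
    For readers unfamiliar with the recursive-convolution technique mentioned above, the following Python sketch shows a commonly used (Roden-and-Gedney-style) update of a CFS-PML auxiliary convolution variable in one dimension. The coefficient expressions and the update form are a generic textbook version under that assumption, not necessarily the exact formulation used in the paper.

    ```python
    # Sketch of the recursive-convolution update for one CFS-PML auxiliary variable.
    # sigma, kappa, alpha are the complex-frequency-shifted stretching parameters of a
    # PML layer; dt is the time step. Values and geometry are illustrative only.
    import numpy as np

    EPS0 = 8.854187817e-12

    def cfs_pml_coeffs(sigma, kappa, alpha, dt):
        b = np.exp(-(sigma / kappa + alpha) * dt / EPS0)
        a = sigma * (b - 1.0) / (kappa * (sigma + kappa * alpha))
        return a, b

    def update_psi(psi, dfield_dx, a, b):
        # psi carries the convolution memory; dfield_dx is the spatial derivative of the
        # field component entering the stretched-coordinate curl equation.
        return b * psi + a * dfield_dx
    ```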

  8. Markov Chain Monte Carlo from Lagrangian Dynamics.

    PubMed

    Lan, Shiwei; Stathopoulos, Vasileios; Shahbaba, Babak; Girolami, Mark

    2015-04-01

    Hamiltonian Monte Carlo (HMC) improves the computational efficiency of the Metropolis-Hastings algorithm by reducing its random walk behavior. Riemannian HMC (RHMC) further improves the performance of HMC by exploiting the geometric properties of the parameter space. However, the geometric integrator used for RHMC involves implicit equations that require fixed-point iterations. In some cases, the computational overhead for solving implicit equations undermines RHMC's benefits. In an attempt to circumvent this problem, we propose an explicit integrator that replaces the momentum variable in RHMC by velocity. We show that the resulting transformation is equivalent to transforming Riemannian Hamiltonian dynamics to Lagrangian dynamics. Experimental results suggest that our method improves RHMC's overall computational efficiency in the cases considered. All computer programs and data sets are available online (http://www.ics.uci.edu/~babaks/Site/Codes.html) in order to allow replication of the results reported in this paper.

  9. Electrode effects in dielectric spectroscopy of colloidal suspensions

    NASA Astrophysics Data System (ADS)

    Cirkel, P. A.; van der Ploeg, J. P. M.; Koper, G. J. M.

    1997-02-01

    We present a simple model to account for electrode polarization in colloidal suspensions. Apart from correctly predicting the ω^{-3/2} dependence for the dielectric permittivity at low frequencies ω, the model provides an explicit dependence of the effect on electrode spacing. The predictions are tested for the sodium bis(2-ethylhexyl) sulfosuccinate (AOT) water-in-oil microemulsion with iso-octane as continuous phase. In particular, the dependence of electrode polarization effects on electrode spacing has been measured and is found to be in accordance with the model prediction. Methods to reduce or account for electrode polarization are briefly discussed.

  10. An implicit iterative algorithm with a tuning parameter for Itô Lyapunov matrix equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Wu, Ai-Guo; Sun, Hui-Jie

    2018-01-01

    In this paper, an implicit iterative algorithm is proposed for solving a class of Lyapunov matrix equations arising in Itô stochastic linear systems. A tuning parameter is introduced in this algorithm, and thus the convergence rate of the algorithm can be changed. Some conditions are presented such that the developed algorithm is convergent. In addition, an explicit expression is also derived for the optimal tuning parameter, which guarantees that the obtained algorithm achieves its fastest convergence rate. Finally, numerical examples are employed to illustrate the effectiveness of the given algorithm.
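
    As an illustration of how a tuning parameter can enter such an iteration, the sketch below relaxes a simple fixed-point scheme for a stochastic (Itô-type) Lyapunov equation A'X + XA + C'XC + Q = 0, treating the Itô term explicitly and solving a deterministic Lyapunov equation in each sweep. The equation form, the relaxation with omega, and the SciPy call are illustrative assumptions, not the authors' algorithm.

    ```python
    # Hedged sketch: relaxed fixed-point iteration for A'X + XA + C'XC + Q = 0.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def ito_lyapunov(A, C, Q, omega=1.0, n_iter=100, tol=1e-10):
        X = np.zeros_like(Q)
        for _ in range(n_iter):
            rhs = -(Q + C.T @ X @ C)                     # treat the Ito term explicitly
            X_new = solve_continuous_lyapunov(A.T, rhs)  # solves A'X + XA = rhs
            X_next = (1.0 - omega) * X + omega * X_new   # omega tunes the convergence rate
            if np.linalg.norm(X_next - X) < tol:
                return X_next
            X = X_next
        return X
    ```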

  11. Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2015-09-01

    We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical-momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(m_i/m_e) c/v_eT) larger than the explicit CFL. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. Support from the LANL LDRD program and the DOE-SC ASCR office.

  12. Solutions for the diurnally forced advection-diffusion equation to estimate bulk fluid velocity and diffusivity in streambeds from temperature time series

    NASA Astrophysics Data System (ADS)

    Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.

    2012-12-01

    Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (i.e., no numerical root-finding is necessary) and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it somewhat faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validation of measurements. Finally, the closed-form explicit solution allows for direct calculation of propagation of uncertainties in error measurements and parameter estimates, providing insight about error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes. The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions supported by examples with data for illustration and validation.
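
    A minimal sketch of the measurement side of this approach: extract the diurnal amplitude ratio and phase delay between a shallow and a deep temperature sensor, from which the paper's closed-form expressions yield the Peclet number and thermal diffusivity (those expressions are not reproduced here). The variable names, sampling convention, and the assumption of detrended records spanning whole diurnal periods are illustrative.

    ```python
    # Extract amplitude ratio and phase delay at the diurnal frequency from two
    # temperature time series sampled every dt_seconds (assumed detrended and
    # covering an integer number of days).
    import numpy as np

    def diurnal_amplitude_and_phase(T, dt_seconds, period_s=86400.0):
        t = np.arange(len(T)) * dt_seconds
        w = 2.0 * np.pi / period_s
        c = 2.0 * np.mean(T * np.exp(-1j * w * t))   # complex Fourier coefficient
        return np.abs(c), np.angle(c)

    def amplitude_ratio_and_delay(T_shallow, T_deep, dt_seconds):
        A1, p1 = diurnal_amplitude_and_phase(T_shallow, dt_seconds)
        A2, p2 = diurnal_amplitude_and_phase(T_deep, dt_seconds)
        Ar = A2 / A1                        # amplitude decay with depth
        dphi = (p1 - p2) % (2.0 * np.pi)    # phase delay of the deep signal
        return Ar, dphi
    ```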

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  14. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae allow one to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin’s problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also allow one to calculate these phase space Feynman path integrals and to connect them with some functional integrals with respect to probability measures.

  15. Metaheuristics-Assisted Combinatorial Screening of Eu2+-Doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N Compositional Space in Search of a Narrow-Band Green Emitting Phosphor and Density Functional Theory Calculations.

    PubMed

    Lee, Jin-Woong; Singh, Satendra Pal; Kim, Minseuk; Hong, Sung Un; Park, Woon Bae; Sohn, Kee-Sun

    2017-08-21

    A metaheuristics-based design would be of great help in relieving the enormous experimental burdens faced during the combinatorial screening of a huge, multidimensional search space, while providing the same effect as total enumeration. In order to tackle the high-throughput powder processing complications and to secure practical phosphors, metaheuristics, an elitism-reinforced nondominated sorting genetic algorithm (NSGA-II), was employed in this study. The NSGA-II iteration targeted two objective functions. The first was to search for a higher emission efficacy. The second was to search for narrow-band green color emissions. The NSGA-II iteration finally converged on BaLi2Al2Si2N6:Eu2+ phosphors in the Eu2+-doped Ca-Sr-Ba-Li-Mg-Al-Si-Ge-N compositional search space. The BaLi2Al2Si2N6:Eu2+ phosphor, which was synthesized with no human intervention via the assistance of NSGA-II, was a clear single phase and gave an acceptable luminescence. The BaLi2Al2Si2N6:Eu2+ phosphor as well as all other phosphors that appeared during the NSGA-II iterations were examined in detail by employing powder X-ray diffraction-based Rietveld refinement, X-ray absorption near edge structure, density functional theory calculation, and time-resolved photoluminescence. The thermodynamic stability and the band structure plausibility were confirmed, and more importantly a novel approach to the energy transfer analysis was also introduced for BaLi2Al2Si2N6:Eu2+ phosphors.

  16. A Structured Decision Approach for Integrating and Analyzing Community Perspectives in Re-Use Planning of Vacant Properties in Cleveland, Ohio

    EPA Science Inventory

    An integrated GIS-based, multi-attribute decision model deployed in a web-based platform is presented enabling an iterative, spatially explicit and collaborative analysis of relevant and available information for repurposing vacant land. The process incorporated traditional and ...

  17. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    PubMed

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
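
    The following sketch illustrates the general subspace-completion idea described above: a temporal subspace is estimated from k-space rows that are fully sampled along time, and the remaining rows are completed by alternating projection onto that subspace with re-insertion of the measured samples. Array shapes, the calibration-row selection, and the stopping rule are illustrative, not the paper's exact implementation.

    ```python
    # Hedged sketch of k-space subspace completion for undersampled MRF-like data.
    import numpy as np

    def subspace_completion(D, mask, calib_rows, rank, n_iter=50):
        # D: (n_kspace_locations, n_timepoints), zeros where not acquired.
        # mask: boolean array of the same shape, True where a sample was measured.
        # calib_rows: indices of rows fully sampled in the temporal direction.
        _, _, Vh = np.linalg.svd(D[calib_rows], full_matrices=False)
        V = Vh[:rank].conj().T               # temporal subspace basis (n_timepoints, rank)
        X = D.copy()
        for _ in range(n_iter):
            X = (X @ V) @ V.conj().T         # project every row onto the subspace
            X[mask] = D[mask]                # data consistency: keep measured samples
        return X
    ```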

  18. Real time groove characterization combining partial least squares and SVR strategies: application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Salucci, M.; Miorelli, R.; Anselmi, N.; Oliveri, G.; Calmon, P.; Reboud, C.; Massa, A.

    2017-10-01

    A quasi real-time inversion strategy is presented for groove characterization of a conductive non-ferromagnetic tube structure by exploiting the eddy current testing (ECT) signal. The inversion problem is formulated within a non-iterative Learning-by-Examples (LBE) strategy. Within the framework of LBE, an efficient training strategy has been adopted, combining feature extraction with a customized version of output space filling (OSF) adaptive sampling, in order to obtain an optimal training set during the offline phase. Partial Least Squares (PLS) and Support Vector Regression (SVR) are exploited as the feature extraction and prediction techniques, respectively, to achieve robust and accurate real-time inversion during the online phase.
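
    A minimal sketch of the offline/online split described above, assuming a training set of simulated ECT signals X and groove parameters y is available: PLS compresses the raw signals into a few latent features offline, and an SVR trained on those features performs the quasi real-time prediction online. The kernel choice and hyperparameters are placeholders.

    ```python
    # Offline: fit PLS feature extractor + SVR regressor. Online: transform and predict.
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR

    def train_inverse_model(X, y, n_components=5):
        pls = PLSRegression(n_components=n_components).fit(X, y)
        features = pls.transform(X)                        # feature extraction (offline)
        svr = SVR(kernel="rbf", C=10.0).fit(features, y)   # regression on latent features
        return pls, svr

    def predict(pls, svr, x_new):
        return svr.predict(pls.transform(x_new))           # quasi real-time inversion (online)
    ```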

  19. Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2018-03-01

    Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.

  20. Flexible CDOCKER: Development and application of a pseudo-explicit structure-based docking method within CHARMM

    PubMed Central

    Gagnon, Jessica K.; Law, Sean M.; Brooks, Charles L.

    2016-01-01

    Protein-ligand docking is a commonly used method for lead identification and refinement. While traditional structure-based docking methods represent the receptor as a rigid body, recent developments have been moving toward the inclusion of protein flexibility. Proteins exist in an inter-converting ensemble of conformational states, but effectively and efficiently searching the conformational space available to both the receptor and ligand remains a well-appreciated computational challenge. To this end, we have developed the Flexible CDOCKER method as an extension of the family of complete docking solutions available within CHARMM. This method integrates atomically detailed side chain flexibility with grid-based docking methods, maintaining efficiency while allowing the protein and ligand configurations to explore their conformational space simultaneously. This is in contrast to existing approaches that use induced-fit like sampling, such as Glide or Autodock, where the protein or the ligand space is sampled independently in an iterative fashion. Presented here are developments to the CHARMM docking methodology to incorporate receptor flexibility and improvements to the sampling protocol as demonstrated with re-docking trials on a subset of the CCDC/Astex set. These developments within CDOCKER achieve docking accuracy competitive with or exceeding the performance of other widely utilized docking programs. PMID:26691274

  1. Flexible CDOCKER: Development and application of a pseudo-explicit structure-based docking method within CHARMM.

    PubMed

    Gagnon, Jessica K; Law, Sean M; Brooks, Charles L

    2016-03-30

    Protein-ligand docking is a commonly used method for lead identification and refinement. While traditional structure-based docking methods represent the receptor as a rigid body, recent developments have been moving toward the inclusion of protein flexibility. Proteins exist in an interconverting ensemble of conformational states, but effectively and efficiently searching the conformational space available to both the receptor and ligand remains a well-appreciated computational challenge. To this end, we have developed the Flexible CDOCKER method as an extension of the family of complete docking solutions available within CHARMM. This method integrates atomically detailed side chain flexibility with grid-based docking methods, maintaining efficiency while allowing the protein and ligand configurations to explore their conformational space simultaneously. This is in contrast to existing approaches that use induced-fit like sampling, such as Glide or Autodock, where the protein or the ligand space is sampled independently in an iterative fashion. Presented here are developments to the CHARMM docking methodology to incorporate receptor flexibility and improvements to the sampling protocol as demonstrated with re-docking trials on a subset of the CCDC/Astex set. These developments within CDOCKER achieve docking accuracy competitive with or exceeding the performance of other widely utilized docking programs. © 2015 Wiley Periodicals, Inc.

  2. Powered Explicit Guidance Modifications and Enhancements for Space Launch System Block-1 and Block-1B Vehicles

    NASA Technical Reports Server (NTRS)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt; Fill, Thomas

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. NASA is also currently designing the next evolution of SLS, the Block-1B. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm (of Space Shuttle heritage) for closed loop guidance. To accommodate vehicle capabilities and design for future evolutions of SLS, modifications were made to PEG for Block-1 to handle multi-phase burns, provide PEG updated propulsion information, and react to a core stage engine out. In addition, due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS) and EUS carrying out Lunar Vicinity and Earth Escape missions, certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions to account for long burn arcs and target translunar and hyperbolic orbits. This paper describes the design and implementation of modifications to the Block-1 PEG algorithm as compared to Space Shuttle. Furthermore, this paper illustrates challenges posed by the Block-1B vehicle and the required PEG enhancements. These improvements make PEG capable for use on the SLS Block-1B vehicle as part of the Guidance, Navigation, and Control (GN&C) System.

  3. Conservative tightly-coupled simulations of stochastic multiscale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2016-05-15

    Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged (“implicit”) coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration (“explicit”) Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.

  4. Iteration and superposition encryption scheme for image sequences based on multi-dimensional keys

    NASA Astrophysics Data System (ADS)

    Han, Chao; Shen, Yuzhen; Ma, Wenlin

    2017-12-01

    An iteration and superposition encryption scheme for image sequences based on multi-dimensional keys is proposed for high-security, large-capacity, and low-noise information transmission. Multiple images to be encrypted are transformed into phase-only images with the iterative algorithm and then are encrypted by different random phases, respectively. The encrypted phase-only images are then inverse Fourier transformed, respectively, thus generating new object functions. The new functions are located in different blocks and zero-padded for a sparse distribution; they then propagate to a specific region over different distances by angular spectrum diffraction, respectively, and are superposed to form a single image. The single image is multiplied with a random phase in the frequency domain and then the phase part of the frequency spectra is truncated and the amplitude information is retained. The random phases, propagation distances, and truncated phase information in the frequency domain are employed as multi-dimensional keys. The iteration processing and sparse distribution greatly reduce the crosstalk among the multiple encryption images. The superposition of image sequences greatly improves the capacity of encrypted information. Several numerical experiments based on a designed optical system demonstrate that the proposed scheme can enhance the encrypted information capacity and enable image transmission at a highly desired security level.
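
    The phase-only transformation mentioned above is typically obtained with a Gerchberg-Saxton-style iteration; the sketch below shows one generic variant that seeks a pure-phase field whose Fourier magnitude approximates a target image. It illustrates the idea only and is not the specific algorithm or optical geometry of the paper.

    ```python
    # Generic Gerchberg-Saxton-style loop: alternate between imposing the target
    # Fourier magnitude and forcing the field to be phase-only.
    import numpy as np

    def phase_only_encode(target_amplitude, n_iter=100):
        field = np.exp(2j * np.pi * np.random.rand(*target_amplitude.shape))
        for _ in range(n_iter):
            F = np.fft.fft2(field)
            F = target_amplitude * np.exp(1j * np.angle(F))   # impose target magnitude
            field = np.fft.ifft2(F)
            field = np.exp(1j * np.angle(field))              # keep the field phase-only
        return field
    ```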

  5. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.

  6. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE PAGES

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke; ...

    2017-06-29

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.

  7. Closed-form solution for the Wigner phase-space distribution function for diffuse reflection and small-angle scattering in a random medium.

    PubMed

    Yura, H T; Thrane, L; Andersen, P E

    2000-12-01

    Within the paraxial approximation, a closed-form solution for the Wigner phase-space distribution function is derived for diffuse reflection and small-angle scattering in a random medium. This solution is based on the extended Huygens-Fresnel principle for the optical field, which is widely used in studies of wave propagation through random media. The results are general in that they apply to both an arbitrary small-angle volume scattering function, and arbitrary (real) ABCD optical systems. Furthermore, they are valid in both the single- and multiple-scattering regimes. Some general features of the Wigner phase-space distribution function are discussed, and analytic results are obtained for various types of scattering functions in the asymptotic limit s > 1, where s is the optical depth. In particular, explicit results are presented for optical coherence tomography (OCT) systems. On this basis, a novel way of creating OCT images based on measurements of the momentum width of the Wigner phase-space distribution is suggested, and the advantage over conventional OCT images is discussed. Because all previous published studies regarding the Wigner function are carried out in the transmission geometry, it is important to note that the extended Huygens-Fresnel principle and the ABCD matrix formalism may be used successfully to describe this geometry (within the paraxial approximation). Therefore for completeness we present in an appendix the general closed-form solution for the Wigner phase-space distribution function in ABCD paraxial optical systems for direct propagation through random media, and in a second appendix absorption effects are included.

  8. Incorrect support and missing center tolerances of phasing algorithms

    DOE PAGES

    Huang, Xiaojing; Nelson, Johanna; Steinbrener, Jan; ...

    2010-01-01

    In x-ray diffraction microscopy, iterative algorithms retrieve reciprocal space phase information, and a real space image, from an object's coherent diffraction intensities through the use of a priori information such as a finite support constraint. In many experiments, the object's shape or support is not well known, and the diffraction pattern is incompletely measured. We describe here computer simulations to look at the effects of both of these possible errors when using several common reconstruction algorithms. Overly tight object supports prevent successful convergence; however, we show that this can often be recognized through pathological behavior of the phase retrieval transfer function. Dynamic range limitations often make it difficult to record the central speckles of the diffraction pattern. We show that this leads to increasing artifacts in the image when the number of missing central speckles exceeds about 10, and that the removal of unconstrained modes from the reconstructed image is helpful only when the number of missing central speckles is less than about 50. In conclusion, this simulation study helps in judging the reconstructability of experimentally recorded coherent diffraction patterns.

  9. Free-Space Time-Domain Method for Measuring Thin Film Dielectric Properties

    DOEpatents

    Li, Ming; Zhang, Xi-Cheng; Cho, Gyu Cheon

    2000-05-02

    A non-contact method for determining the index of refraction or dielectric constant of a thin film on a substrate at a desired frequency in the GHz to THz range having a corresponding wavelength larger than the thickness of the thin film (which may be only a few microns). The method comprises impinging the desired-frequency beam in free space upon the thin film on the substrate and measuring the measured phase change and the measured field reflectance from the reflected beam for a plurality of incident angles over a range of angles that includes the Brewster's angle for the thin film. The index of refraction for the thin film is determined by applying Fresnel equations to iteratively calculate a calculated phase change and a calculated field reflectance at each of the plurality of incident angles, and selecting the index of refraction that provides the best mathematical curve fit with both the dataset of measured phase changes and the dataset of measured field reflectances for each incident angle. The dielectric constant for the thin film can be calculated as the index of refraction squared.
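
    The fitting idea can be illustrated with the standard Airy/Fresnel expression for p-polarized reflection from a single film on a substrate: compute the complex reflection coefficient over the measured angles and select the film index that best matches the measured reflectance and phase. The indices, thickness, and the brute-force search below are placeholders for the patent's iterative curve fit, and phase unwrapping is ignored for brevity.

    ```python
    # Airy (thin-film) reflection for p-polarization and a simple index search.
    import numpy as np

    def r_p(n1, n2, th1, th2):
        return (n2 * np.cos(th1) - n1 * np.cos(th2)) / (n2 * np.cos(th1) + n1 * np.cos(th2))

    def film_reflection(n_film, n_sub, d, wavelength, theta_inc):
        th1 = theta_inc
        th2 = np.arcsin(np.sin(th1) / n_film)          # Snell's law, air -> film
        th3 = np.arcsin(n_film * np.sin(th2) / n_sub)  # film -> substrate
        r12, r23 = r_p(1.0, n_film, th1, th2), r_p(n_film, n_sub, th2, th3)
        beta = 2.0 * np.pi * n_film * d * np.cos(th2) / wavelength
        return (r12 + r23 * np.exp(2j * beta)) / (1.0 + r12 * r23 * np.exp(2j * beta))

    def fit_index(angles, meas_r, meas_phase, n_sub, d, wavelength, trial_n):
        def cost(n):
            r = film_reflection(n, n_sub, d, wavelength, angles)
            return np.sum((np.abs(r) - meas_r) ** 2 + (np.angle(r) - meas_phase) ** 2)
        return min(trial_n, key=cost)   # index giving the best joint curve fit
    ```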

  10. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
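
    A minimal sketch of the iterative ray/surface intersection that such a virtual system needs under nonparallel (diverging) illumination: start from the ray's hit point on the reference plane z = 0 and repeatedly re-aim at the surface height beneath the current point until the estimate stops moving. The height-field representation h(x, y) is an assumption made for illustration.

    ```python
    # Fixed-point intersection of a projector ray with a height-field surface z = h(x, y).
    import numpy as np

    def intersect_ray_with_surface(origin, direction, h, n_iter=20, tol=1e-9):
        d = direction / np.linalg.norm(direction)
        t = -origin[2] / d[2]                           # first guess: hit the plane z = 0
        for _ in range(n_iter):
            p = origin + t * d
            t_new = (h(p[0], p[1]) - origin[2]) / d[2]  # aim at the height under (x, y)
            if abs(t_new - t) < tol:
                break
            t = t_new
        return origin + t * d
    ```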

  11. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs that are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems. In this case, we compare the proposed mutation against classical Evolutionary Programming approaches, with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
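
    A minimal sketch of the idea is given below (Python). The specific affine maps, the number of chaos-game steps, and the step scaling are illustrative choices rather than those of the paper: a short chaos-game walk over an IFS whose attractor is a Sierpinski-type fractal produces a structured two-dimensional offset that is used in place of a Gaussian random mutation.

      import numpy as np

      # three affine contractions whose attractor is a Sierpinski-type fractal
      IFS_MAPS = [
          (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.0, 0.0])),
          (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.5, 0.0])),
          (np.array([[0.5, 0.0], [0.0, 0.5]]), np.array([0.25, 0.5])),
      ]

      def ifs_offset(rng, n_steps=8):
          """Run a short chaos-game walk and return the (roughly centred) end point."""
          p = rng.random(2)
          for _ in range(n_steps):
              A, b = IFS_MAPS[rng.integers(len(IFS_MAPS))]
              p = A @ p + b
          return p - np.array([0.5, 1.0 / 3.0])  # centroid of the attractor's bounding triangle

      def ifs_mutate(individual, rng, step=0.5):
          """Mutate a 2-D individual with a fractal-structured offset instead of Gaussian noise."""
          return individual + step * ifs_offset(rng)

      rng = np.random.default_rng(1)
      child = ifs_mutate(np.array([0.2, -1.3]), rng)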

  12. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution was determined for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.

  13. Bayesian History Matching of Complex Infectious Disease Models Using Emulation: A Tutorial and a Case Study on HIV in Uganda

    PubMed Central

    Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.

    2015-01-01

    Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22-input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was substantially smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
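
    The decision rule at the heart of each history-matching wave is an implausibility test: a candidate input is discarded when the emulator's prediction is too many standard deviations away from any observed output. The sketch below (Python) is illustrative only; the emulator callable, the variance terms, and the cutoff of 3 are stand-ins rather than the settings used in the case study.

      import numpy as np

      def implausibility(emul_mean, emul_var, obs, obs_var, disc_var):
          """Implausibility of one input point, per output.

          emul_mean/emul_var : emulator prediction and its variance for each output
          obs/obs_var        : observed values and observation-error variances
          disc_var           : assumed model-discrepancy variances
          """
          return np.abs(emul_mean - obs) / np.sqrt(emul_var + obs_var + disc_var)

      def non_implausible(candidates, emulator, obs, obs_var, disc_var, cutoff=3.0):
          """Keep candidate inputs whose worst-output implausibility is below the cutoff."""
          keep = []
          for x in candidates:
              mean, var = emulator(x)  # stand-in emulator: returns (mean, variance) arrays
              if implausibility(mean, var, obs, obs_var, disc_var).max() < cutoff:
                  keep.append(x)
          return np.array(keep)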

  14. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  15. Evolutionary squeaky wheel optimization: a new framework for analysis.

    PubMed

    Li, Jingpeng; Parkes, Andrew J; Burke, Edmund K

    2011-01-01

    Squeaky wheel optimization (SWO) is a relatively new metaheuristic that has been shown to be effective for many real-world problems. At each iteration, SWO does a complete construction of a solution starting from the empty assignment. Although the construction uses information from previous iterations, the complete rebuilding does mean that SWO is generally effective at diversification but can suffer from a relatively weak intensification. Evolutionary SWO (ESWO) is a recent extension to SWO that is designed to improve the intensification by keeping the good components of solutions and only using SWO to reconstruct other, poorer components of the solution. In such algorithms a standard challenge is to understand how the various parameters affect the search process. In order to support the future study of such issues, we propose a formal framework for the analysis of ESWO. The framework is based on Markov chains, and the main novelty arises because ESWO moves through the space of partial assignments. This makes it significantly different from the analyses used in local search (such as simulated annealing), which only move through complete assignments. Generally, the exact details of ESWO will depend on various heuristics, so we focus our approach on a case of ESWO that we call ESWO-II and that has probabilistic as opposed to heuristic selection and construction operators. For ESWO-II, we study a simple problem instance and explicitly compute the stationary distribution probability over the states of the search space. We find interesting properties of the distribution. In particular, we find that the probabilities of states generally, but not always, increase with their fitness. This nonmonotonicity is quite different from the monotonicity expected in algorithms such as simulated annealing.

  16. Inner Space Perturbation Theory in Matrix Product States: Replacing Expensive Iterative Diagonalization.

    PubMed

    Ren, Jiajun; Yi, Yuanping; Shuai, Zhigang

    2016-10-11

    We propose an inner space perturbation theory (isPT) to replace the expensive iterative diagonalization in the standard density matrix renormalization group theory (DMRG). The retained reduced density matrix eigenstates are partitioned into an active and a secondary space. The first-order wave function and the second- and third-order energies are easily computed by using a one-step Davidson iteration. Our formulation has several advantages, including (i) keeping a balance between efficiency and accuracy, (ii) capturing more entanglement with the same amount of computational time, and (iii) recovery of the standard DMRG when all the basis states belong to the active space. Numerical examples for the polyacenes and periacene show that the efficiency gain is considerable and the accuracy loss due to the perturbation treatment is very small when half of the total basis states belong to the active space. Moreover, the perturbation calculations converge in all our numerical examples.

  17. Community Based Learning and Civic Engagement: Informal Learning among Adult Volunteers in Community Organizations

    ERIC Educational Resources Information Center

    Mundel, Karsten; Schugurensky, Daniel

    2008-01-01

    Many iterations of community based learning employ models, such as consciousness raising groups, cultural circles, and participatory action research. In all of them, learning is a deliberate part of an explicit educational activity. This article explores another realm of community learning: the informal learning that results from volunteering in…

  18. James Webb Space Telescope segment phasing using differential optical transfer functions

    PubMed Central

    Codona, Johanan L.; Doble, Nathan

    2015-01-01

    Differential optical transfer function (dOTF) is an image-based, noniterative wavefront sensing method that uses two star images with a single small change in the pupil. We describe two possible methods for introducing the required pupil modification to the James Webb Space Telescope, one using a small (<λ/4) displacement of a single segment's actuator and another that uses small misalignments of the NIRCam's filter wheel. While both methods should work with NIRCam, the actuator method will allow both MIRI and NIRISS to be used for segment phasing, which is a new functionality. Since the actuator method requires only small displacements, it should provide a fast and safe phasing alternative that reduces the mission risk and can be performed frequently for alignment monitoring and maintenance. Since a single actuator modification can be seen by all three cameras, it should be possible to calibrate the non-common-path aberrations between them. Large segment discontinuities can be measured using dOTFs in two filter bands. Using two images of a star field, aberrations along multiple lines of sight through the telescope can be measured simultaneously. Also, since dOTF gives the pupil field amplitude as well as the phase, it could provide a first approximation or constraint to the planned iterative phase retrieval algorithms. PMID:27042684
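
    In essence, the dOTF is the difference of the optical transfer functions computed from two star images taken before and after the small pupil change; because the unmodified part of the pupil cancels in the difference, the complex pupil field (amplitude and phase) appears directly, as two overlapping conjugated copies. A minimal sketch, assuming two registered, well-sampled PSF images:

      import numpy as np

      def dotf(psf_ref, psf_mod):
          """Differential OTF from two point-spread-function images.

          psf_ref : PSF with the nominal pupil
          psf_mod : PSF after a small, localized pupil change (e.g. one segment actuator poked)
          """
          otf_ref = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf_ref)))
          otf_mod = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf_mod)))
          return otf_mod - otf_ref

      # a wavefront estimate (up to a constant) over the non-overlapping part of the dOTF:
      # phase_map = np.angle(dotf(psf_ref, psf_mod))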

  19. Improving the efficiency of molecular replacement by utilizing a new iterative transform phasing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Hongxing; Fang, Hengrui; Miller, Mitchell D.

    2016-07-15

    An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.

  20. A Declarative Design Approach to Modeling Traditional and Non-Traditional Space Systems

    NASA Astrophysics Data System (ADS)

    Hoag, Lucy M.

    The space system design process is known to be laborious, complex, and computationally demanding. It is highly multi-disciplinary, involving several interdependent subsystems that must be both highly optimized and reliable due to the high cost of launch. Satellites must also be capable of operating in harsh and unpredictable environments, so integrating high-fidelity analysis is important. To address each of these concerns, a holistic design approach is necessary. However, while the sophistication of space systems has evolved significantly in the last 60 years, improvements in the design process have been comparatively stagnant. Space systems continue to be designed using a procedural, subsystem-by-subsystem approach. This method is inadequate since it generally requires extensive iteration and limited or heuristic-based search, which can be slow, labor-intensive, and inaccurate. The use of a declarative design approach can potentially address these inadequacies. In the declarative programming style, the focus of a problem is placed on what the objective is, and not necessarily how it should be achieved. In the context of design, this entails knowledge expressed as a declaration of statements that are true about the desired artifact instead of explicit instructions on how to implement it. A well-known technique is through constraint-based reasoning, where a design problem is represented as a network of rules and constraints that are reasoned across by a solver to dynamically discover the optimal candidate(s). This enables implicit instantiation of the tradespace and allows for automatic generation of all feasible design candidates. As such, this approach also appears to be well-suited to modeling adaptable space systems, which generally have large tradespaces and possess configurations that are not well-known a priori. This research applied a declarative design approach to holistic satellite design and to tradespace exploration for adaptable space systems. The approach was tested during the design of USC's Aeneas nanosatellite project, and a case study was performed to assess the advantages of the new approach over past procedural approaches. It was found that use of the declarative approach improved design accuracy through exhaustive tradespace search and provable optimality; decreased design time through improved model generation, faster run time, and reduction in time and number of iteration cycles; and enabled modular and extensible code. Observed weaknesses included non-intuitive model abstraction; increased debugging time; and difficulty of data extrapolation and analysis.

  1. Simple iterative construction of the optimized effective potential for orbital functionals, including exact exchange.

    PubMed

    Kümmel, Stephan; Perdew, John P

    2003-01-31

    For exchange-correlation functionals that depend explicitly on the Kohn-Sham orbitals, the potential V(xcsigma)(r) must be obtained as the solution of the optimized effective potential (OEP) integral equation. This is very demanding and has limited the use of orbital functionals. We demonstrate that instead the OEP can be obtained iteratively by solving the partial differential equations for the orbital shifts that exactify the Krieger-Li-Iafrate approximation. Unoccupied orbitals do not need to be calculated. Accuracy and efficiency of the method are shown for atoms and clusters using the exact-exchange energy. Counterintuitive asymptotic limits of the exact OEP are presented.

  2. A scheme for lensless X-ray microscopy combining coherent diffraction imaging and differential corner holography.

    PubMed

    Capotondi, F; Pedersoli, E; Kiskinova, M; Martin, A V; Barthelmess, M; Chapman, H N

    2012-10-22

    We successfully use the corners of a common silicon nitride supporting window in lensless X-ray microscopy as extended references in differential holography to obtain a real space hologram of the illuminated object. Moreover, we combine this method with the iterative phasing techniques of coherent diffraction imaging to enhance the spatial resolution on the reconstructed object, and overcome the problem of missing areas in the collected data due to the presence of a beam stop, achieving a resolution close to 85 nm.

  3. Environmental Media Phase-Tracking Units in the Classroom

    ERIC Educational Resources Information Center

    Langseth, David E.

    2009-01-01

    When teaching phase partitioning concepts for solutes in porous media, and other multi-phase environmental systems, explicitly tracking the environmental media phase with which a substance of interest (S0I) is associated can enhance the students' understanding of the fundamental concepts and derivations. It is common to explicitly track the…

  4. Nonglobal correlations in collider physics

    DOE PAGES

    Moult, Ian; Larkoski, Andrew J.

    2016-01-13

    Despite their importance for precision QCD calculations, correlations between in- and out-of-jet regions of phase space have never directly been observed. These so-called non-global effects are present generically whenever a collider physics measurement is not explicitly dependent on radiation throughout the entire phase space. In this paper, we introduce a novel procedure based on mutual information, which allows us to isolate these non-global correlations between measurements made in different regions of phase space. We study this procedure both analytically and in Monte Carlo simulations in the context of observables measured on hadronic final states produced in e+e- collisions, though it is more widely applicable. The procedure exploits the sensitivity of soft radiation at large angles to non-global correlations, and we calculate these correlations through next-to-leading logarithmic accuracy. The bulk of these non-global correlations are found to be described in Monte Carlo simulation. They are increased by the inclusion of non-perturbative effects, which we show can be incorporated in our calculation through the use of a model shape function. As a result, this procedure illuminates the source of non-global correlations and has connections more broadly to fundamental quantities in quantum field theory.
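
    As a sketch of the basic quantity involved, the snippet below (Python; illustrative, not the authors' analysis code) estimates the mutual information between two event-by-event observables measured in different phase-space regions from a sample of simulated events, using a simple two-dimensional histogram.

      import numpy as np

      def mutual_information(x, y, bins=30):
          """Histogram estimate of I(X;Y) in nats for paired samples x, y."""
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          mask = pxy > 0
          return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

      # e.g. x = a measurement made inside the jet region, y = soft activity outside the jet,
      # evaluated event by event; a nonzero value signals non-global correlations.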

  5. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single variable examples including an extension to vector spaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
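
    The mathematical backbone here is the Banach fixed-point theorem: an iteration x_{k+1} = T(x_k) converges whenever T is a contraction. The toy sketch below (Python, scalar case, illustrative only) estimates a contraction constant numerically and iterates to the fixed point; the contraction corrector method described above goes further by modifying the operator when the contraction condition fails.

      import numpy as np

      def fixed_point(T, x0, tol=1e-12, max_iter=1000):
          """Iterate x <- T(x) until successive changes fall below tol."""
          x = x0
          for _ in range(max_iter):
              x_new = T(x)
              if abs(x_new - x) < tol:
                  return x_new
              x = x_new
          raise RuntimeError("iteration did not converge")

      T = lambda x: np.cos(x)  # a contraction on [0, 1.5]
      # crude Lipschitz estimate: max |T(a) - T(b)| / |a - b| over sampled pairs
      s = np.linspace(0.0, 1.5, 200)
      L = max(abs(T(a) - T(b)) / abs(a - b) for a in s for b in s if a != b)
      print(f"estimated contraction constant ~ {L:.3f}")  # below 1, so the iteration converges
      print("fixed point:", fixed_point(T, 1.0))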

  6. Evolutionary-inspired probabilistic search for enhancing sampling of local minima in the protein energy surface

    PubMed Central

    2012-01-01

    Background Despite computational challenges, elucidating conformations that a protein system assumes under physiologic conditions for the purpose of biological activity is a central problem in computational structural biology. While these conformations are associated with low energies in the energy surface that underlies the protein conformational space, few existing conformational search algorithms focus on explicitly sampling low-energy local minima in the protein energy surface. Methods This work proposes a novel probabilistic search framework, PLOW, that explicitly samples low-energy local minima in the protein energy surface. The framework combines algorithmic ingredients from evolutionary computation and computational structural biology to effectively explore the subspace of local minima. A greedy local search maps a conformation sampled in conformational space to a nearby local minimum. A perturbation move jumps out of a local minimum to obtain a new starting conformation for the greedy local search. The process repeats in an iterative fashion, resulting in a trajectory-based exploration of the subspace of local minima. Results and conclusions The analysis of PLOW's performance shows that, by navigating only the subspace of local minima, PLOW is able to sample conformations near a protein's native structure, either more effectively than or as well as state-of-the-art methods that focus on reproducing the native structure for a protein system. Analysis of the actual subspace of local minima shows that PLOW samples this subspace more effectively than a naive sampling approach. Additional theoretical analysis reveals that the perturbation function employed by PLOW is key to its ability to sample a diverse set of low-energy conformations. This analysis also suggests directions for further research and novel applications for the proposed framework. PMID:22759582
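
    The search template described in the Methods, a greedy descent to a local minimum followed by a perturbation jump, is essentially basin hopping. Below is a minimal sketch of such a loop on a toy energy function (Python with SciPy; illustrative only, not the PLOW implementation, which operates on protein conformations and structure-aware moves).

      import numpy as np
      from scipy.optimize import minimize

      def energy(x):
          """Toy rugged energy surface with many local minima."""
          return np.sum(x ** 2) + 2.0 * np.sum(np.sin(3.0 * x) ** 2)

      def sample_local_minima(n_jumps=50, dim=4, jump_size=1.0, seed=0):
          """Alternate greedy local minimization and random perturbation jumps."""
          rng = np.random.default_rng(seed)
          x = rng.normal(size=dim)
          minima = []
          for _ in range(n_jumps):
              res = minimize(energy, x, method="L-BFGS-B")  # map the current point to a local minimum
              minima.append((res.fun, res.x))
              x = res.x + jump_size * rng.normal(size=dim)  # perturb to escape the basin
          return minima

      lowest_minima = sorted(sample_local_minima(), key=lambda m: m[0])[:5]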

  7. US NDC Modernization Iteration E2 Prototyping Report: User Interface Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Jennifer E.; Palmer, Melanie A.; Vickers, James Wallace

    2014-12-01

    During the second iteration of the US NDC Modernization Elaboration phase (E2), the SNL US NDC Modernization project team completed follow-on Rich Client Platform (RCP) exploratory prototyping related to the User Interface Framework (UIF). The team also developed a survey of browser-based User Interface solutions and completed exploratory prototyping for selected solutions. This report presents the results of the browser-based UI survey, summarizes the E2 browser-based UI and RCP prototyping work, and outlines a path forward for the third iteration of the Elaboration phase (E3).

  8. Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization

    PubMed Central

    2012-01-01

    Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104

  9. Bounded-Angle Iterative Decoding of LDPC Codes

    NASA Technical Reports Server (NTRS)

    Dolinar, Samuel; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2009-01-01

    Bounded-angle iterative decoding is a modified version of conventional iterative decoding, conceived as a means of reducing undetected-error rates for short low-density parity-check (LDPC) codes. For a given code, bounded-angle iterative decoding can be implemented by means of a simple modification of the decoder algorithm, without redesigning the code. Bounded-angle iterative decoding is based on a representation of received words and code words as vectors in an n-dimensional Euclidean space (where n is an integer).

  10. Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard

    2014-06-01

    Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code in order to assign to each volume the temperature according to the result of the TH calculation, and if the volume contains coolant, also the density of the coolant. This leads to huge input files even for small systems. In this paper a methodology for dynamical assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again, all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.

  11. Using conceptual work products of health care to design health IT.

    PubMed

    Berry, Andrew B L; Butler, Keith A; Harrington, Craig; Braxton, Melissa O; Walker, Amy J; Pete, Nikki; Johnson, Trevor; Oberle, Mark W; Haselkorn, Jodie; Paul Nichol, W; Haselkorn, Mark

    2016-02-01

    This paper introduces a new, model-based design method for interactive health information technology (IT) systems. This method extends workflow models with models of conceptual work products. When the health care work being modeled is substantially cognitive, tacit, and complex in nature, graphical workflow models can become too complex to be useful to designers. Conceptual models complement and simplify workflows by providing an explicit specification for the information product they must produce. We illustrate how conceptual work products can be modeled using standard software modeling language, which allows them to provide fundamental requirements for what the workflow must accomplish and the information that a new system should provide. Developers can use these specifications to envision how health IT could enable an effective cognitive strategy as a workflow with precise information requirements. We illustrate the new method with a study conducted in an outpatient multiple sclerosis (MS) clinic. This study shows specifically how the different phases of the method can be carried out, how the method allows for iteration across phases, and how the method generated a health IT design for case management of MS that is efficient and easy to use. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Phases of New Physics in the BAO Spectrum

    NASA Astrophysics Data System (ADS)

    Baumann, Daniel; Green, Daniel; Zaldarriaga, Matias

    2017-11-01

    We show that the phase of the spectrum of baryon acoustic oscillations (BAO) is immune to the effects of nonlinear evolution. This suggests that any new physics that contributes to the initial phase of the BAO spectrum, such as extra light species in the early universe, can be extracted reliably at late times. We provide three arguments in support of our claim: first, we point out that a phase shift of the BAO spectrum maps to a characteristic sign change in the real space correlation function and that this feature cannot be generated or modified by nonlinear dynamics. Second, we confirm this intuition through an explicit computation, valid to all orders in cosmological perturbation theory. Finally, we provide a nonperturbative argument using general analytic properties of the linear response to the initial oscillations. Our result motivates measuring the phase of the BAO spectrum as a robust probe of new physics.

  13. Single-shot dual-wavelength in-line and off-axis hybrid digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2018-02-01

    We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths, and they are recorded using a color camera with a single shot. The reconstruction is carried out using an iterative algorithm in which the initial input is designed to include the intensity of the in-line hologram and the approximate phase distributions obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure can produce higher-quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring the amplitude and phase images of a green lacewing's wing and a living moon jellyfish.

  14. Correction of phase velocity bias caused by strong directional noise sources in high-frequency ambient noise tomography: a case study in Karamay, China

    NASA Astrophysics Data System (ADS)

    Wang, K.; Luo, Y.; Yang, Y.

    2016-12-01

    We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and compute cross-correlations of the noise data between all station pairs. Array beamforming analysis of the ambient noise data shows that ambient noise sources are unevenly distributed and the most energetic ambient noise mainly comes from azimuths of 40°-70°. As a consequence of the strong directional noise sources, surface wave waveforms of the cross-correlations at 1-5 Hz show a clear azimuthal dependence, and direct dispersion measurements from cross-correlations are strongly biased by the dominant noise energy. Because of this bias, the dispersion measurements from cross-correlations do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not retrieve Empirical Green's Functions accurately. To correct the bias caused by unevenly distributed noise sources, we adopt an iterative inversion procedure. The iterative inversion procedure, based on plane-wave modeling, includes three steps: (1) surface wave tomography, (2) estimation of ambient noise energy and (3) phase velocity correction. First, we use synthesized data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The testing results show that: (1) the amplitudes of phase velocity bias caused by directional noise sources are significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; (2) the phase velocity bias can be corrected by the iterative inversion procedure, and the convergence of the inversion depends on the starting phase velocity map and the complexity of the media. By applying the iterative approach to the real data from Karamay, we further show that the phase velocity maps converge after ten iterations and that the phase velocity map based on corrected interstation dispersion measurements is more consistent with results from geological surveys than the one based on uncorrected measurements. As ambient noise in the high-frequency band (>1 Hz) is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ANT in imaging shallow earth structures.

  15. A polygon-based modeling approach to assess exposure of resources and assets to wildfire

    Treesearch

    Matthew P. Thompson; Joe Scott; Jeffrey D. Kaiden; Julie W. Gilbertson-Day

    2013-01-01

    Spatially explicit burn probability modeling is increasingly applied to assess wildfire risk and inform mitigation strategy development. Burn probabilities are typically expressed on a per-pixel basis, calculated as the number of times a pixel burns divided by the number of simulation iterations. Spatial intersection of highly valued resources and assets (HVRAs) with...
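
    In code form, the per-pixel burn probability is simply the fraction of simulation iterations in which each pixel burned, and the exposure of an HVRA polygon can then be summarized over the pixels it covers. A minimal sketch (Python with NumPy; the example data are synthetic, and rasterizing the HVRA polygon into a boolean mask is assumed to happen elsewhere):

      import numpy as np

      def burn_probability(burn_stack):
          """burn_stack: (n_iterations, ny, nx) boolean array, True where a pixel burned."""
          return burn_stack.mean(axis=0)

      def hvra_exposure(burn_prob, hvra_mask):
          """Summarize exposure of one HVRA given a boolean raster mask of its footprint."""
          p = burn_prob[hvra_mask]
          return {"mean_burn_prob": float(p.mean()), "max_burn_prob": float(p.max())}

      # illustrative synthetic data: 1000 simulated fire seasons on a 200 x 300 pixel landscape
      rng = np.random.default_rng(0)
      stack = rng.random((1000, 200, 300)) < 0.02
      bp = burn_probability(stack)
      mask = np.zeros((200, 300), dtype=bool)
      mask[50:80, 100:140] = True
      print(hvra_exposure(bp, mask))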

  16. FBILI method for multi-level line transfer

    NASA Astrophysics Data System (ADS)

    Kuzmanovska, O.; Atanacković, O.; Faurobert, M.

    2017-07-01

    Efficient non-LTE multilevel radiative transfer calculations are needed for a proper interpretation of astrophysical spectra. In particular, realistic simulations of time-dependent processes or multi-dimensional phenomena require that the iterative method used to solve such non-linear and non-local problem is as fast as possible. There are several multilevel codes based on efficient iterative schemes that provide a very high convergence rate, especially when combined with mathematical acceleration techniques. The Forth-and-Back Implicit Lambda Iteration (FBILI) developed by Atanacković-Vukmanović et al. [1] is a Gauss-Seidel-type iterative scheme that is characterized by a very high convergence rate without the need of complementing it with additional acceleration techniques. In this paper we make the implementation of the FBILI method to the multilevel atom line transfer in 1D more explicit. We also consider some of its variants and investigate their convergence properties by solving the benchmark problem of CaII line formation in the solar atmosphere. Finally, we compare our solutions with results obtained with the well known code MULTI.

  17. Flexible binding simulation by a novel and improved version of virtual-system coupled adaptive umbrella sampling

    NASA Astrophysics Data System (ADS)

    Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi

    2016-10-01

    Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation; parameterization of biasing forces by iterative polynomial fitting; and force scaling. These improvements, when applied to the study of Ala-pentapeptide dimerization in explicit solvent, showed an advantage over regular AUS. With the improved VAUS, larger biological systems become amenable to such simulations.

  18. Improved quantitative visualization of hypervelocity flow through wavefront estimation based on shadow casting of sinusoidal gratings.

    PubMed

    Medhi, Biswajit; Hegde, Gopalakrishna M; Gorthi, Sai Siva; Reddy, Kalidevapura Jagannath; Roy, Debasish; Vasu, Ram Mohan

    2016-08-01

    A simple noninterferometric optical probe is developed to estimate the wavefront distortion suffered by a plane wave in its passage through density variations in a hypersonic flow obstructed by a test model in a typical shock tunnel. The probe has a plane light wave trans-illuminating the flow and casting a shadow of a continuous-tone sinusoidal grating. Through a geometrical-optics (eikonal) approximation, a bilinear approximation to the distorted wavefront is related to the location-dependent shift (distortion) suffered by the grating, which can be read out space-continuously from the projected grating image. The processing of the grating shadow is done through an efficient Fourier fringe analysis scheme, with either a windowed or a global Fourier transform (WFT and FT). For comparison, wavefront slopes are also estimated from shadows of random-dot patterns, processed through cross correlation. The measured slopes are suitably unwrapped by using a discrete cosine transform (DCT)-based phase unwrapping procedure, and also through iterative procedures. The unwrapped phase information is used in an iterative scheme for a full quantitative recovery of the density distribution in the shock around the model through refraction tomographic inversion. Hypersonic flow field parameters around a missile-shaped body at a free-stream Mach number of ∼8 measured using this technique are compared with the numerically estimated values. It is shown that, while processing a wavefront with a small space-bandwidth product (SBP), the FT inversion gave accurate results with computational efficiency; the computation-intensive WFT was needed for similar results when dealing with larger-SBP wavefronts.
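
    The Fourier fringe analysis step follows the classic approach of transforming the grating image, isolating one carrier sideband, shifting it to zero frequency, and taking the phase of the inverse transform. A minimal global-FT sketch is given below (Python); the band half-width and the carrier location are illustrative parameters, and simple row-wise unwrapping stands in for the DCT-based two-dimensional unwrapper mentioned above.

      import numpy as np

      def fringe_phase(image, carrier_col):
          """Wrapped phase map from a sinusoidal-grating shadow image.

          image       : 2-D array containing the grating shadow
          carrier_col : column index of the carrier peak in the (fftshifted) spectrum
          """
          ny, nx = image.shape
          F = np.fft.fftshift(np.fft.fft2(image), axes=1)
          band = np.zeros_like(F)
          half = nx // 8  # illustrative half-width of the retained sideband
          band[:, carrier_col - half:carrier_col + half] = F[:, carrier_col - half:carrier_col + half]
          band = np.roll(band, nx // 2 - carrier_col, axis=1)  # move the carrier to zero frequency
          analytic = np.fft.ifft2(np.fft.ifftshift(band, axes=1))
          return np.angle(analytic)  # wrapped phase in (-pi, pi]

      # simple stand-in for 2-D DCT-based unwrapping:
      # unwrapped = np.unwrap(fringe_phase(img, carrier_col), axis=1)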

  19. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted

    1990-01-01

    Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.

  20. Parallel transmission pulse design with explicit control for the specific absorption rate in the presence of radiofrequency errors.

    PubMed

    Martin, Adrian; Schiavi, Emanuele; Eryaman, Yigitcan; Herraiz, Joaquin L; Gagoski, Borjan; Adalsteinsson, Elfar; Wald, Lawrence L; Guerin, Bastien

    2016-06-01

    A new framework for the design of parallel transmit (pTx) pulses is presented introducing constraints for local and global specific absorption rate (SAR) in the presence of errors in the radiofrequency (RF) transmit chain. The first step is the design of a pTx RF pulse with explicit constraints for global and local SAR. Then, the worst possible SAR associated with that pulse due to RF transmission errors ("worst-case SAR") is calculated. Finally, this information is used to re-calculate the pulse with lower SAR constraints, iterating this procedure until its worst-case SAR is within safety limits. Analysis of an actual pTx RF transmit chain revealed amplitude errors as high as 8% (20%) and phase errors above 3° (15°) for spokes (spiral) pulses. Simulations show that using the proposed framework, pulses can be designed with controlled "worst-case SAR" in the presence of errors of this magnitude at minor cost of the excitation profile quality. Our worst-case SAR-constrained pTx design strategy yields pulses with local and global SAR within the safety limits even in the presence of RF transmission errors. This strategy is a natural way to incorporate SAR safety factors in the design of pTx pulses. Magn Reson Med 75:2493-2504, 2016. © 2015 Wiley Periodicals, Inc.

  1. Animating streamlines with repeated asymmetric patterns for steady flow visualization

    NASA Astrophysics Data System (ADS)

    Yeh, Chih-Kuo; Liu, Zhanping; Lee, Tong-Yee

    2012-01-01

    Animation provides intuitive cueing for revealing essential spatial-temporal features of data in scientific visualization. This paper explores the design of Repeated Asymmetric Patterns (RAPs) in animating evenly-spaced color-mapped streamlines for dense accurate visualization of complex steady flows. We present a smooth cyclic variable-speed RAP animation model that performs velocity (magnitude) integral luminance transition on streamlines. This model is extended with inter-streamline synchronization in luminance varying along the tangential direction to emulate orthogonal advancing waves from a geometry-based flow representation, and then with evenly-spaced hue differing in the orthogonal direction to construct tangential flow streaks. To weave these two mutually dual sets of patterns, we propose an energy-decreasing strategy that adopts an iterative yet efficient procedure for determining the luminance phase and hue of each streamline in HSL color space. We also employ adaptive luminance interleaving in the direction perpendicular to the flow to increase the contrast between streamlines.

  2. Minimizing Cache Misses Using Minimum-Surface Bodies

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob; Biegel, Bryan (Technical Monitor)

    2002-01-01

    A number of known techniques for improving cache performance in scientific computations involve reordering of the iteration space. Some of these reorderings can be considered as coverings of the iteration space with sets having a good surface-to-volume ratio. Use of such sets reduces the number of cache misses in computations of local operators having the iteration space as a domain. First, we derive lower bounds on the cache misses that any algorithm must suffer while computing a local operator on a grid. Then we explore coverings of iteration spaces represented by structured and unstructured grids which allow us to approach these lower bounds. For structured grids we introduce a covering by successive-minima tiles of the interference lattice of the grid. We show that the covering has a low surface-to-volume ratio and present a computer experiment showing the actual reduction in cache misses achieved by using these tiles. For planar unstructured grids we show the existence of a covering which reduces the number of cache misses to the level of structured grids. On the other hand, we present a triangulation of a 3-dimensional cube such that any local operator on the corresponding grid has a significantly larger number of cache misses than a similar operator on a structured grid.

  3. Comment on "Nonlinear gyrokinetic theory with polarization drift" [Phys. Plasmas 17, 082304 (2010)]

    NASA Astrophysics Data System (ADS)

    Leerink, S.; Parra, F. I.; Heikkinen, J. A.

    2010-12-01

    In this comment, we show that by using the discrete particle distribution function the changes of the phase-space volume of gyrocenter coordinates due to the fluctuating E ×B velocity do not explicitly appear in the Poisson equation and the [Sosenko et al., Phys. Scr. 64, 264 (2001)] result is recovered. It is demonstrated that there is no contradiction between the work presented by Sosenko et al. and the work presented by [Wang et al., Phys. Plasmas 17, 082304 (2010)].

  4. Explicitly Stochastic Parameterization of Nonorographic Gravity-Wave Drag

    DTIC Science & Technology

    2010-01-01

    The parameterized gravity-wave momentum flux varies with phase speed c as τ_b exp[−(c − c_off)²/c_w²], with τ_b = τ_b* F(φ, t) and a phase-speed width c_w = 30 m s⁻¹; τ_b is the "background" momentum flux.

  5. Orbital Tori Construction Using Trajectory Following Spectral Methods

    DTIC Science & Technology

    2010-09-01

    a Walker delta pattern scheme of 18/6/2. Explicitly, this means the 18 satellites were equally spaced in six planes, each inclined at 55 degrees, with... a relative phasing angle parameter of 2 [65]. The planes' inclinations were reduced from the original specification of 63 degrees to 55 degrees due... navigation performance specification for the SPS was ≤ 100 meters in the horizontal plane, 95 percent of the time, and ≤ 156 meters in the vertical plane

  6. A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Lv, Wen; Zhang, Tongtong

    2018-05-01

    We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner is in a form of a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
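
    The structural fact exploited by such methods is that a matrix of the form A = I ⊗ B1 + B2 ⊗ I never has to be formed explicitly: its action on a vector reduces to two small matrix products. The sketch below (Python with SciPy) verifies this identity with ordinary tridiagonal stand-ins for the one-dimensional operators; the fractional-diffusion discretizations and the Kronecker product splitting preconditioner of the paper are not reproduced here.

      import numpy as np
      from scipy.sparse import diags, kron, identity

      n = 64  # grid points per direction
      B1 = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()  # 1-D stand-in operator (x)
      B2 = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()  # 1-D stand-in operator (y)

      # full 2-D operator as a sum of two Kronecker products (size n**2 x n**2)
      A = kron(identity(n), B1) + kron(B2, identity(n))

      def matvec(x):
          """Apply A without forming it: A vec(X) = vec(X B1^T + B2 X) for row-major vec."""
          X = x.reshape(n, n)
          return ((B1 @ X.T).T + B2 @ X).ravel()

      x = np.random.default_rng(0).random(n * n)
      assert np.allclose(A @ x, matvec(x))
      # matvec can be wrapped in a scipy.sparse.linalg.LinearOperator and handed to GMRES;
      # a splitting preconditioner additionally needs fast solves with the 1-D factors.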

  7. Solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators.

    PubMed

    Zhao, Jing; Zong, Haili

    2018-01-01

    In this paper, we propose parallel and cyclic iterative algorithms for solving the multiple-set split equality common fixed-point problem of firmly quasi-nonexpansive operators. We also combine the process of cyclic and parallel iterative methods and propose two mixed iterative algorithms. Our several algorithms do not need any prior information about the operator norms. Under mild assumptions, we prove weak convergence of the proposed iterative sequences in Hilbert spaces. As applications, we obtain several iterative algorithms to solve the multiple-set split equality problem.

  8. Performing prototype distortion tasks requires no contribution from the explicit memory systems: evidence from amnesic MCI patients in a new experimental paradigm.

    PubMed

    Zannino, Gian Daniele; Perri, Roberta; Zabberoni, Silvia; Caltagirone, Carlo; Marra, Camillo; Carlesimo, Giovanni A

    2012-10-01

    Evidence shows that amnesic patients are able to categorize new exemplars drawn from the same prototype as previously encountered items. It is still unclear, however, whether this ability is due to a spared implicit learning system or to residual explicit memory and/or working memory resources. In this study, we used a new paradigm devised expressly to rule out any possible contribution of episodic and working memory in performing a prototype distortion task. We enrolled patients with amnesic MCI and normal controls. Our paradigm consisted of a study phase and a test phase; two-thirds of the participants performed the study phase and all participants performed the test phase. In the study phase, participants had to judge how pleasant morphed faces, drawn from a single prototype, seemed to them. Half of the participants were shown faces drawn from the A-prototype and half from the B-prototype. A- and B-faces were opposite in a morphing space with a neutral human face at the center. In the test phase, participants had to judge the regularity of faces they had never seen before. Three different types of faces were shown in the test phase, that is, A-, B-, or neutral faces. We expected that implicit learning of the category boundaries would lead to a category-specific increase in perceived regularity. The results confirmed our predictions. In fact, trained subjects (compared with subjects who did not undergo the study phase) assigned higher regularity scores to new faces drawn from the same prototype as the faces seen during training, and they gave lower regularity scores to new faces drawn from the opposite prototype. This effect was superimposable across subject groups. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Temperature-Dependent Implicit-Solvent Model of Polyethylene Glycol in Aqueous Solution.

    PubMed

    Chudoba, Richard; Heyda, Jan; Dzubiella, Joachim

    2017-12-12

    A temperature (T)-dependent coarse-grained (CG) Hamiltonian of polyethylene glycol/oxide (PEG/PEO) in aqueous solution is reported to be used in implicit-solvent material models in a wide temperature (i.e., solvent quality) range. The T-dependent nonbonded CG interactions are derived from a combined "bottom-up" and "top-down" approach. The pair potentials calculated from atomistic replica-exchange molecular dynamics simulations in combination with the iterative Boltzmann inversion are postrefined by benchmarking to experimental data of the radius of gyration. For better handling and a fully continuous transferability in T-space, the pair potentials are conveniently truncated and mapped to an analytic formula with three structural parameters expressed as explicit continuous functions of T. It is then demonstrated that this model without further adjustments successfully reproduces other experimentally known key thermodynamic properties of semidilute PEG solutions such as the full equation of state (i.e., T-dependent osmotic pressure) for various chain lengths as well as their cloud point (or collapse) temperature.
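
    The "bottom-up" ingredient named above, iterative Boltzmann inversion, updates the pair potential by the rule V_{i+1}(r) = V_i(r) + k_B T ln[g_i(r)/g_target(r)] until the coarse-grained simulation reproduces the target radial distribution function. The loop below is a generic sketch (Python); run_cg_simulation is a hypothetical stand-in for the CG simulation plus RDF measurement, and the damping factor is an illustrative stabilization choice, not a value from the paper.

      import numpy as np

      KB = 0.0083145  # Boltzmann constant in kJ/(mol K)

      def ibi_refine(r, V0, g_target, run_cg_simulation, T=300.0, n_iter=20, damping=0.5):
          """Iterative Boltzmann inversion of a pair potential toward a target RDF.

          r                 : radial grid
          V0                : initial potential on r (e.g. -KB*T*log(g_target))
          g_target          : target radial distribution function on r
          run_cg_simulation : stand-in callable; runs a CG simulation with potential V
                              and returns the measured RDF on r
          """
          V = V0.copy()
          for _ in range(n_iter):
              g = run_cg_simulation(r, V)
              ratio = np.clip(g, 1e-6, None) / np.clip(g_target, 1e-6, None)
              V = V + damping * KB * T * np.log(ratio)  # damped IBI update
          return V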

  10. Recursion equations in predicting band width under gradient elution.

    PubMed

    Liang, Heng; Liu, Ying

    2004-06-18

    The evolution of a solute zone under gradient elution is a typical non-linear continuity-equation problem, since the local diffusion coefficient and local migration velocity of the mass cells of the solute zone are functions of position and time due to the space- and time-variable mobile phase composition. In this paper, based on mesoscopic approaches (a Lagrangian description, continuity theory and the local equilibrium assumption), the evolution of solute zones in space- and time-dependent fields is described by the iterative addition of the local probability density of the mass cells of the solute zones. Furthermore, at the macroscopic level, recursion equations are proposed to simulate zone migration and spreading in reversed-phase high-performance liquid chromatography (RP-HPLC) by directly relating the local retention factor and local diffusion coefficient to the local mobile phase concentration. This new approach differs entirely from traditional plate-concept theories with an Eulerian description, since the band width recursion equation is actually an accumulation of the local diffusion coefficients of the solute zones over discrete-time slices. The recursion equations and literature equations were applied to the same experimental data in RP-HPLC, and the comparison shows that the recursion equations can accurately predict band width under gradient elution.

  11. Quantum dressing orbits on compact groups

    NASA Astrophysics Data System (ADS)

    Jurčo, Branislav; Šťovíček, Pavel

    1993-02-01

    The quantum double is shown to imply the dressing transformation on quantum compact groups and the quantum Iwasawa decomposition in the general case. Quantum dressing orbits are described explicitly as *-algebras. The dual coalgebras consisting of differential operators are related to the quantum Weyl elements. In addition, the differential geometry on a quantum leaf allows a remarkably simple construction of irreducible *-representations of the algebras of quantum functions. Representation spaces then consist of analytic functions on classical phase spaces. These representations are also interpreted in the framework of quantization in the spirit of Berezin applied to symplectic leaves on classical compact groups. Convenient "coherent states" are introduced and a correspondence between classical and quantum observables is given.

  12. Hybrid propulsion technology program: Phase 1. Volume 3: Thiokol Corporation Space Operations

    NASA Technical Reports Server (NTRS)

    Schuler, A. L.; Wiley, D. R.

    1989-01-01

    Three candidate hybrid propulsion (HP) concepts were identified, optimized, evaluated, and refined through an iterative process that continually forced improvement to the systems with respect to safety, reliability, cost, and performance criteria. A full scale booster meeting Advanced Solid Rocket Motor (ASRM) thrust-time constraints and a booster application for 1/4 ASRM thrust were evaluated. Trade studies and analyses were performed for each of the motor elements related to SRM technology. Based on trade study results, the optimum HP concept for both full and quarter sized systems was defined. The three candidate hybrid concepts evaluated are illustrated.

  13. X-ray simulations method for the large field of view

    NASA Astrophysics Data System (ADS)

    Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.

    2018-03-01

    In the standard approach, X-ray simulation is usually limited by the spatial sampling step used to calculate Fresnel-type convolution integrals. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of the integral convolution calculations and is not connected with the space resolution of the optical scheme. In the developed approach, the convolution in normal space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the space resolution of the optical scheme. The sampling step can differ in various directions because of the source anisotropy. The approach was used to simulate original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.

  14. Nonholonomic relativistic diffusion and exact solutions for stochastic Einstein spaces

    NASA Astrophysics Data System (ADS)

    Vacaru, S. I.

    2012-03-01

    We develop an approach to the theory of nonholonomic relativistic stochastic processes in curved spaces. The Itô and Stratonovich calculus are formulated for spaces with conventional horizontal (holonomic) and vertical (nonholonomic) splitting defined by nonlinear connection structures. Geometric models of the relativistic diffusion theory are elaborated for nonholonomic (pseudo) Riemannian manifolds and phase velocity spaces. Applying the anholonomic deformation method, the field equations in Einstein's gravity and various modifications are formally integrated in general forms, with generic off-diagonal metrics depending on some classes of generating and integration functions. Choosing random generating functions we can construct various classes of stochastic Einstein manifolds. We show how stochastic gravitational interactions with mixed holonomic/nonholonomic and random variables can be modelled in explicit form and study their main geometric and stochastic properties. Finally, the conditions when non-random classical gravitational processes transform into stochastic ones and inversely are analyzed.

  15. A stable and accurate partitioned algorithm for conjugate heat transfer

    NASA Astrophysics Data System (ADS)

    Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2017-09-01

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
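
    To make the coupling idea concrete, the following minimal sketch applies a CHAMP-like partitioned update to a 1D two-slab model problem: each slab is advanced with an implicit (backward-Euler) solve while the interface is coupled explicitly through a generalized Robin condition built from the neighbouring slab's previous values. The Robin weight alpha, material parameters and grids are illustrative assumptions and not the optimized weights derived in the paper.

      import numpy as np

      # Two slabs [0, L] and [L, 2L] with different conductivities k and
      # diffusivities kappa meet at x = L.  Interface conditions are imposed
      # in generalized Robin form, with data lagged from the other domain:
      #   alpha*T1 + k1*dT1/dx = alpha*T2_old + k2*dT2_old/dx   (domain 1)
      #   alpha*T2 - k2*dT2/dx = alpha*T1_old - k1*dT1_old/dx   (domain 2)
      n, L, dt, steps = 41, 1.0, 2e-3, 5000
      dx = L / (n - 1)
      k1, k2 = 1.0, 5.0             # conductivities (assumed)
      kap1, kap2 = 1.0, 0.2         # diffusivities (assumed)
      alpha = 0.5 * (k1 + k2) / dx  # simple assumed Robin weight, not optimized

      T1 = np.zeros(n)              # domain 1, outer boundary T = 0 at x = 0
      T2 = np.ones(n)               # domain 2, outer boundary T = 1 at x = 2L

      def be_matrix(kappa):
          """Backward-Euler matrix for interior nodes; boundary rows set later."""
          r = kappa * dt / dx**2
          A = np.eye(n)
          for i in range(1, n - 1):
              A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
          return A

      for _ in range(steps):
          # explicit interface data built from the other domain's current values
          g1 = alpha * T2[0] + k2 * (T2[1] - T2[0]) / dx
          g2 = alpha * T1[-1] - k1 * (T1[-1] - T1[-2]) / dx

          A1, b1 = be_matrix(kap1), T1.copy()
          A1[0, :] = 0.0; A1[0, 0] = 1.0; b1[0] = 0.0          # Dirichlet at x = 0
          A1[-1, :] = 0.0                                      # Robin at interface
          A1[-1, -1], A1[-1, -2] = alpha + k1 / dx, -k1 / dx
          b1[-1] = g1

          A2, b2 = be_matrix(kap2), T2.copy()
          A2[-1, :] = 0.0; A2[-1, -1] = 1.0; b2[-1] = 1.0      # Dirichlet at x = 2L
          A2[0, :] = 0.0                                       # Robin at interface
          A2[0, 0], A2[0, 1] = alpha + k2 / dx, -k2 / dx
          b2[0] = g2

          T1, T2 = np.linalg.solve(A1, b1), np.linalg.solve(A2, b2)

      # at steady state the interface temperature should approach k2/(k1 + k2) ~ 0.83
      print("interface temperatures:", T1[-1], T2[0])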

  16. A stable and accurate partitioned algorithm for conjugate heat transfer

    DOE PAGES

    Meng, F.; Banks, J. W.; Henshaw, W. D.; ...

    2017-04-25

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  17. Sierra/Solid Mechanics 4.48 User's Guide.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merewether, Mark Thomas; Crane, Nathan K; de Frias, Gabriel Jose

    Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.

  18. A local chaotic quasi-attractor in a kicked rotator

    NASA Astrophysics Data System (ADS)

    Jiang, Yu-Mei; Lu, Yun-Qing; Zhao, Jin-Gang; Wang, Xu-Ming; Chen, He-Sheng; He, Da-Ren

    2002-03-01

    Recently, Hu et al. reported a diffusion in a special kind of stochastic web observed in a kicked rotator described by a discontinuous but invertible two-dimensional area-preserving map [1]. We modified the functional form of the system so that the period of the kicking force becomes different in two parts of the space, and the conservative map becomes both discontinuous and noninvertible. It is found that when the ratio between the two periods becomes smaller or larger than (but near to) 1, the chaotic diffusion in the web transforms into chaotic transients, which are attracted to the elliptic islands that existed inside the holes of the web when the ratio was equal to 1. As soon as the islands are reached, the iteration follows the conservative laws exactly. Therefore we refer to these elliptic islands as "regular quasi-attractors" [2]. When the ratio increases further and becomes far from 1, all the elliptic islands disappear and a local chaotic quasi-attractor appears instead. It attracts the iterations starting from most initial points in the phase space. This behavior may be considered as a kind of "confinement" of the chaotic motion of a particle. [1] B. Hu et al., Phys. Rev. Lett. 82, 4224 (1999). [2] J. Wang et al., Phys. Rev. E 64, 026202 (2001).

  19. Vectorized and multitasked solution of the few-group neutron diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zee, S.K.; Turinsky, P.J.; Shayer, Z.

    1989-03-01

    A numerical algorithm with parallelism was used to solve the two-group, multidimensional neutron diffusion equations on computers characterized by shared memory, vector pipeline, and multi-CPU architecture features. Specifically, solutions were obtained on the Cray X/MP-48, the IBM-3090 with vector facilities, and the FPS-164. The material-centered mesh finite difference method approximation and outer-inner iteration method were employed. Parallelism was introduced in the inner iterations using the cyclic line successive overrelaxation iterative method and solving in parallel across lines. The outer iterations were completed using the Chebyshev semi-iterative method that allows parallelism to be introduced in both space and energy groups. For the three-dimensional model, power, soluble boron, and transient fission product feedbacks were included. Concentrating on the pressurized water reactor (PWR), the thermal-hydraulic calculation of moderator density assumed single-phase flow and a closed flow channel, allowing parallelism to be introduced in the solution across the radial plane. Using a pinwise detail, quarter-core model of a typical PWR in cycle 1, for the two-dimensional model without feedback the measured million floating point operations per second (MFLOPS)/vector speedups were 83/11.7, 18/2.2, and 2.4/5.6 on the Cray, IBM, and FPS without multitasking, respectively. Lower performance was observed with a coarser mesh, i.e., shorter vector length, due to vector pipeline start-up. For an 18 x 18 x 30 (x-y-z) three-dimensional model with feedback of the same core, MFLOPS/vector speedups of about 61/6.7 and an execution time of 0.8 CPU seconds on the Cray without multitasking were measured. Finally, using two CPUs and the vector pipelines of the Cray, a multitasking efficiency of 81% was noted for the three-dimensional model.

  20. The quantum-field renormalization group in the problem of a growing phase boundary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonov, N.V.; Vasil`ev, A.N.

    1995-09-01

    Within the quantum-field renormalization-group approach we examine the stochastic equation discussed by S.I. Pavlik in describing a randomly growing phase boundary. We show that, in contrast to Pavlik's assertion, the model is not multiplicatively renormalizable and that its consistent renormalization-group analysis requires introducing an infinite number of counterterms and the respective coupling constants ("charges"). An explicit calculation in the one-loop approximation shows that a two-dimensional surface of renormalization-group points exists in the infinite-dimensional charge space. If the surface contains an infrared stability region, the problem allows for scaling with the nonuniversal critical dimensionalities of the height of the phase boundary and time, Δ_h and Δ_t, which satisfy the exact relationship 2Δ_h = Δ_t + d, where d is the dimensionality of the phase boundary. 23 refs., 1 tab.

  1. Devil's staircases, quantum dimer models, and stripe formation in strong coupling models of quantum frustration.

    NASA Astrophysics Data System (ADS)

    Raman, Kumar; Papanikolaou, Stefanos; Fradkin, Eduardo

    2007-03-01

    We construct a two-dimensional microscopic model of interacting quantum dimers that displays an infinite number of periodic striped phases in its T=0 phase diagram. The phases form an incomplete devil's staircase and the period becomes arbitrarily large as the staircase is traversed. The Hamiltonian has purely short-range interactions, does not break any symmetries, and is generic in that it does not involve the fine tuning of a large number of parameters. Our model, a quantum mechanical analog of the Pokrovsky-Talapov model of fluctuating domain walls in two dimensional classical statistical mechanics, provides a mechanism by which striped phases with periods large compared to the lattice spacing can, in principle, form in frustrated quantum magnetic systems with only short-ranged interactions and no explicitly broken symmetries. Please see cond-mat/0611390 for more details.

  2. Kinetics of binary nucleation of vapors in size and composition space.

    PubMed

    Fisenko, Sergey P; Wilemski, Gerald

    2004-11-01

    We reformulate the kinetic description of binary nucleation in the gas phase using two natural independent variables: the total number of molecules g and the molar composition x of the cluster. The resulting kinetic equation can be viewed as a two-dimensional Fokker-Planck equation describing the simultaneous Brownian motion of the clusters in size and composition space. Explicit expressions for the Brownian diffusion coefficients in cluster size and composition space are obtained. For characterization of binary nucleation in gases three criteria are established. These criteria establish the relative importance of the rate processes in cluster size and composition space for different gas phase conditions and types of liquid mixtures. The equilibrium distribution function of the clusters is determined in terms of the variables g and x. We obtain an approximate analytical solution for the steady-state binary nucleation rate that has the correct limit in the transition to unary nucleation. To further illustrate our description, the nonequilibrium steady-state cluster concentrations are found by numerically solving the reformulated kinetic equation. For the reformulated transient problem, the relaxation or induction time for binary nucleation was calculated using Galerkin's method. This relaxation time is affected by processes in both size and composition space, but the contributions from each process can be separated only approximately.

  3. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
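
    The following sketch illustrates the least-squares alternation at the heart of such an algorithm in a simplified, single-direction form (plain phase-shifted fringes rather than the five-frame crossed fringes of the paper): with the phase shifts fixed, the phase map is obtained pixel-wise by least squares, and with the phase map fixed, the shifts are re-estimated frame-wise; the two steps are repeated until the shifts stop changing. The fringe model and synthetic data are assumptions for illustration only.

      import numpy as np

      # Frames are modelled as I_k = A + B*cos(phase + delta_k); the phase map
      # and the unknown, non-uniform shifts delta_k are refined in alternation.
      rng = np.random.default_rng(0)
      ny, nx, K = 64, 64, 5
      y, x = np.mgrid[0:ny, 0:nx]
      true_phase = 2 * np.pi * x / 20.0 + 0.3 * np.sin(2 * np.pi * y / 32.0)
      true_delta = np.array([0.0, 1.9, 3.2, 4.5, 5.4])        # non-uniform shifts
      frames = 1.0 + 0.8 * np.cos(true_phase[None] + true_delta[:, None, None])
      frames += 0.01 * rng.standard_normal(frames.shape)      # detector noise

      delta = np.linspace(0, 2 * np.pi, K, endpoint=False)    # initial guess
      I = frames.reshape(K, -1)
      for it in range(30):
          # step 1: with delta fixed, per-pixel least squares for the phase map
          M = np.column_stack([np.ones(K), np.cos(delta), np.sin(delta)])
          a = np.linalg.lstsq(M, I, rcond=None)[0]             # 3 x Npix
          phase = np.arctan2(-a[2], a[1])
          # step 2: with the phase fixed, per-frame least squares for the shifts
          N = np.column_stack([np.ones(phase.size), np.cos(phase), np.sin(phase)])
          b = np.linalg.lstsq(N, I.T, rcond=None)[0]           # 3 x K
          new_delta = np.arctan2(-b[2], b[1])
          new_delta -= new_delta[0]                            # fix the global offset
          change = np.max(np.abs(np.angle(np.exp(1j * (new_delta - delta)))))
          delta = new_delta
          if change < 1e-6:
              break

      print("recovered shifts:", np.round(delta % (2 * np.pi), 3))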

  4. Software Estimates Costs of Testing Rocket Engines

    NASA Technical Reports Server (NTRS)

    Smith, C. L.

    2003-01-01

    Simulation-Based Cost Model (SiCM), a discrete event simulation developed in Extend, simulates pertinent aspects of the testing of rocket propulsion test articles for the purpose of estimating the costs of such testing during time intervals specified by its users. A user enters input data for control of simulations; information on the nature of, and activity in, a given testing project; and information on resources. Simulation objects are created on the basis of this input. Costs of the engineering-design, construction, and testing phases of a given project are estimated from the numbers and labor rates of engineers and technicians employed in each phase; the duration of each phase; the costs of materials used in each phase; and, for the testing phase, the rate of maintenance of the testing facility. The three main outputs of SiCM are (1) a curve, updated at each iteration of the simulation, that shows overall expenditures vs. time during the interval specified by the user; (2) a histogram of the total costs from all iterations of the simulation; and (3) a table displaying means and variances of cumulative costs for each phase from all iterations. Other outputs include spending curves for each phase.

  5. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamic consistent multi-component model in multi-dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from efficiency and robustness resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further we introduce a novel iterative method to treat the mass transfer for a three component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation procedures avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three component mixture providing a unique and admissible equilibrium state.

  6. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residuals (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.

  7. Saturation: An efficient iteration strategy for symbolic state-space generation

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Luettgen, Gerald; Siminiceanu, Radu; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper presents a novel algorithm for generating state spaces of asynchronous systems using Multi-valued Decision Diagrams. In contrast to related work, the next-state function of a system is not encoded as a single Boolean function, but as cross-products of integer functions. This permits the application of various iteration strategies to build a system's state space. In particular, this paper introduces a new elegant strategy, called saturation, and implements it in the tool SMART. On top of usually performing several orders of magnitude faster than existing BDD-based state-space generators, the algorithm's required peak memory is often close to the final memory needed for storing the overall state spaces.

  8. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    PubMed

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.

  9. An iterative reconstruction of cosmological initial density fields

    NASA Astrophysics Data System (ADS)

    Hada, Ryuichiro; Eisenstein, Daniel J.

    2018-05-01

    We present an iterative method to reconstruct the linear-theory initial conditions from the late-time cosmological matter density field, with the intent of improving the recovery of the cosmic distance scale from the baryon acoustic oscillations (BAOs). We present tests using the dark matter density field in both real and redshift space generated from an N-body simulation. In redshift space at z = 0.5, we find that the displacement field reconstructed with our iterative method is more than 80% correlated with the true displacement field of the dark matter particles on scales k < 0.10h Mpc-1. Furthermore, we show that the two-point correlation function of our reconstructed density field matches that of the initial density field substantially better, especially on small scales (<40h-1 Mpc). Our redshift-space results are improved if we use an anisotropic smoothing so as to account for the reduced small-scale information along the line of sight in redshift space.
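
    A minimal sketch of the first step of such a reconstruction is given below: the late-time density contrast is smoothed and a Zel'dovich-type displacement field is estimated from it with FFTs, psi(k) = i k delta_s(k) / k^2; the full method of the paper iterates this estimate and additionally treats redshift-space distortions. The grid, smoothing scale and the synthetic density field are illustrative assumptions.

      import numpy as np

      n, box = 64, 200.0                      # grid cells per side, box size (Mpc/h)
      rng = np.random.default_rng(1)
      delta = rng.standard_normal((n, n, n))  # stand-in for a measured density field
      delta -= delta.mean()

      k1d = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
      kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
      k2 = kx**2 + ky**2 + kz**2
      k2[0, 0, 0] = 1.0                       # avoid division by zero at k = 0

      R = 10.0                                # Gaussian smoothing radius (Mpc/h)
      dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * R**2)
      dk[0, 0, 0] = 0.0

      # Zel'dovich displacement estimate, component by component
      psi = [np.real(np.fft.ifftn(1j * ki * dk / k2)) for ki in (kx, ky, kz)]
      # particles (or the density field) would be shifted by -psi to undo the
      # large-scale motions, and the estimate then refined iteratively
      print("rms displacement per component:",
            [float(np.sqrt((p**2).mean())) for p in psi])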

  10. Experimental verification of free-space singular boundary conditions in an invisibility cloak

    NASA Astrophysics Data System (ADS)

    Wu, Qiannan; Gao, Fei; Song, Zhengyong; Lin, Xiao; Zhang, Youming; Chen, Huanyang; Zhang, Baile

    2016-04-01

    A major issue in invisibility cloaking, which caused intense mathematical discussions in the past few years but still remains physically elusive, is the plausible singular boundary conditions associated with the singular metamaterials at the inner boundary of an invisibility cloak. The perfect cloaking phenomenon, as originally proposed by Pendry et al for electromagnetic waves, cannot be treated as physical before a realistic inner boundary of a cloak is demonstrated. Although a recent demonstration has been done in a waveguide environment, the exotic singular boundary conditions should apply to a general environment as in free space. Here we fabricate a metamaterial surface that exhibits the singular boundary conditions and demonstrate its performance in free space. Particularly, the phase information of waves reflected from this metamaterial surface is explicitly measured, confirming the singular responses of boundary conditions for an invisibility cloak.

  11. Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments

    NASA Astrophysics Data System (ADS)

    Makri, Nancy

    2017-04-01

    The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.

  12. Numerical evaluation of mobile robot navigation in static indoor environment via EGAOR Iteration

    NASA Astrophysics Data System (ADS)

    Dahalan, A. A.; Saudi, A.; Sulaiman, J.; Din, W. R. W.

    2017-09-01

    One of the key issues in mobile robot navigation is the ability of the robot to move from an arbitrary start location to a specified goal location without colliding with any obstacles while traveling, also known as the mobile robot path planning problem. In this paper we examine the performance of a robust searching algorithm that relies on the use of harmonic potentials of the environment to generate a smooth and safe path for mobile robot navigation in a static, known indoor environment. The harmonic potential is discretized using the Laplace operator to form a system of algebraic approximation equations. This linear algebraic system is then solved via the 4-Point Explicit Group Accelerated Over-Relaxation (4-EGAOR) iterative method for rapid computation. The performance of the proposed algorithm is then compared and analyzed against existing algorithms in terms of the number of iterations and the execution time. The results show that the proposed algorithm performs better than the existing methods.
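
    The sketch below illustrates the harmonic-potential idea on a small grid map: the potential is relaxed to a discrete harmonic function (here with ordinary point SOR rather than the 4-EGAOR scheme of the paper) and the path is then extracted by steepest descent on the converged potential. The map, boundary values and relaxation factor are illustrative assumptions.

      import numpy as np

      n = 30
      occ = np.zeros((n, n), dtype=bool)
      occ[0, :] = occ[-1, :] = occ[:, 0] = occ[:, -1] = True   # walls
      occ[10:20, 12] = True                                    # an internal obstacle
      goal, start = (25, 25), (3, 3)

      u = np.ones((n, n))            # obstacles/walls held at the high potential 1
      u[goal] = 0.0                  # goal held at the low potential 0

      omega = 1.8                    # SOR relaxation factor (assumed)
      for sweep in range(3000):
          diff = 0.0
          for i in range(1, n - 1):
              for j in range(1, n - 1):
                  if occ[i, j] or (i, j) == goal:
                      continue
                  new = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                  new = u[i, j] + omega * (new - u[i, j])
                  diff = max(diff, abs(new - u[i, j]))
                  u[i, j] = new
          if diff < 1e-10:
              break

      # steepest descent: always step to the neighbouring cell of lowest potential
      path, cell = [start], start
      while cell != goal and len(path) < n * n:
          i, j = cell
          nbrs = [(i+1, j), (i-1, j), (i, j+1), (i, j-1)]
          cell = min(nbrs, key=lambda c: u[c])
          path.append(cell)
      print("path length:", len(path), "reaches goal:", path[-1] == goal)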

  13. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) retrospectively generates a cone-beam CT image of a pre-selected motion phase from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung-patient data. Methods: The goal of the ISA algorithm is to extract the motion and no-motion components from the fully reconstructed CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion-deblurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into the various phases and, from all phases, a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired-phase projection angles; the subtraction of the measured projections and the forward projections reconstructs CTSub1, which diminishes the desired phase component. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung-patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to the reconstructions from unprocessed phase-sorted projections only. Conclusion: A CBCT motion-deblurring algorithm, ISA, has been developed and evaluated with lung-patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.

  14. The Role and Reprocessing of Attitudes in Fostering Employee Work Happiness: An Intervention Study.

    PubMed

    Williams, Paige; Kern, Margaret L; Waters, Lea

    2017-01-01

    This intervention study examines the iterative reprocessing of explicit and implicit attitudes as the process underlying associations between positive employee attitudes (PsyCap), perception of positive organization culture (organizational virtuousness, OV), and work happiness. Using a quasi-experimental design, a group of school staff ( N = 69) completed surveys at three time points. After the first assessment, the treatment group ( n = 51) completed a positive psychology training intervention. Results suggest that employee PsyCap, OV, and work happiness are associated with one another through both implicit and explicit attitudes. Further, the Iterative-Reprocessing Model of attitudes (IRM) provides some insights into the processes underlying these associations. By examining the role and processes through which explicit and implicit attitudes relate to wellbeing at work, the study integrates theories on attitudes, positive organizational scholarship, positive organizational behavior and positive education. It is one of the first studies to apply the theory of the IRM to explain associations amongst PsyCap, OV and work happiness, and to test the IRM theory in a field-based setting. In applying attitude theory to wellbeing research, this study provides insights to mechanisms underlying workplace wellbeing that have not been previously examined and in doing so responds to calls for researchers to learn more about the mechanisms underlying wellbeing interventions. Further, it highlights the need to understand subconscious processes in future wellbeing research and to include implicit measures in positive psychology interventions measurement programs. Practically, this research calls attention to the importance of developing both the positive attitudes of employees and the organizational culture in developing employee work happiness.

  15. The Role and Reprocessing of Attitudes in Fostering Employee Work Happiness: An Intervention Study

    PubMed Central

    Williams, Paige; Kern, Margaret L.; Waters, Lea

    2017-01-01

    This intervention study examines the iterative reprocessing of explicit and implicit attitudes as the process underlying associations between positive employee attitudes (PsyCap), perception of positive organization culture (organizational virtuousness, OV), and work happiness. Using a quasi-experimental design, a group of school staff (N = 69) completed surveys at three time points. After the first assessment, the treatment group (n = 51) completed a positive psychology training intervention. Results suggest that employee PsyCap, OV, and work happiness are associated with one another through both implicit and explicit attitudes. Further, the Iterative-Reprocessing Model of attitudes (IRM) provides some insights into the processes underlying these associations. By examining the role and processes through which explicit and implicit attitudes relate to wellbeing at work, the study integrates theories on attitudes, positive organizational scholarship, positive organizational behavior and positive education. It is one of the first studies to apply the theory of the IRM to explain associations amongst PsyCap, OV and work happiness, and to test the IRM theory in a field-based setting. In applying attitude theory to wellbeing research, this study provides insights to mechanisms underlying workplace wellbeing that have not been previously examined and in doing so responds to calls for researchers to learn more about the mechanisms underlying wellbeing interventions. Further, it highlights the need to understand subconscious processes in future wellbeing research and to include implicit measures in positive psychology interventions measurement programs. Practically, this research calls attention to the importance of developing both the positive attitudes of employees and the organizational culture in developing employee work happiness. PMID:28154546

  16. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
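
    A small numerical illustration of the dispersion-relation idea is given below: for a suitably analytic ("minimum-phase") complex amplitude, the phase along the rod follows directly from the logarithm of its magnitude through a Hilbert transform, with no iteration. The synthetic amplitude is constructed so that the relation holds exactly; the sign and analyticity conventions appropriate to a real crystal truncation rod are those of the paper, not of this toy.

      import numpy as np
      from scipy.signal import hilbert

      q = np.linspace(-5, 5, 2048)
      log_mag = np.exp(-(q - 1.0)**2) - 0.5 * np.exp(-(q + 2.0)**2 / 0.5)  # ln|F|
      analytic = hilbert(log_mag)                 # log_mag + i * H[log_mag]
      F_true = np.exp(analytic)                   # |F| = exp(log_mag), known phase

      # retrieval: only the magnitude of F_true is used
      phase_rec = np.imag(hilbert(np.log(np.abs(F_true))))
      phase_true = np.imag(analytic)
      print("max phase error:", np.max(np.abs(phase_rec - phase_true)))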

  17. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE PAGES

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...

    2018-04-20

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  18. Predictions for the Dirac CP-violating phase from sum rules

    NASA Astrophysics Data System (ADS)

    Delgadillo, Luis A.; Everett, Lisa L.; Ramos, Raymundo; Stuart, Alexander J.

    2018-05-01

    We explore the implications of recent results relating the Dirac CP-violating phase to predicted and measured leptonic mixing angles within a standard set of theoretical scenarios in which charged lepton corrections are responsible for generating a nonzero value of the reactor mixing angle. We employ a full set of leptonic sum rules as required by the unitarity of the lepton mixing matrix, which can be reduced to predictions for the observable mixing angles and the Dirac CP-violating phase in terms of model parameters. These sum rules are investigated within a given set of theoretical scenarios for the neutrino sector diagonalization matrix for several known classes of charged lepton corrections. The results provide explicit maps of the allowed model parameter space within each given scenario and assumed form of charged lepton perturbations.

  19. Optimal spiral phase modulation in Gerchberg-Saxton algorithm for wavefront reconstruction and correction

    NASA Astrophysics Data System (ADS)

    Baránek, M.; Běhal, J.; Bouchal, Z.

    2018-01-01

    In the phase retrieval applications, the Gerchberg-Saxton (GS) algorithm is widely used for the simplicity of implementation. This iterative process can advantageously be deployed in the combination with a spatial light modulator (SLM) enabling simultaneous correction of optical aberrations. As recently demonstrated, the accuracy and efficiency of the aberration correction using the GS algorithm can be significantly enhanced by a vortex image spot used as the target intensity pattern in the iterative process. Here we present an optimization of the spiral phase modulation incorporated into the GS algorithm.
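
    A compact sketch of a Gerchberg-Saxton loop of this kind is shown below: the field is iterated between the SLM (pupil) plane, where the illumination amplitude is enforced, and the focal plane, where the target intensity is enforced; here the target is a ring-shaped ("vortex") spot and the initial guess carries a spiral phase of charge m. The charge, aperture, grid and target shape are illustrative assumptions rather than the optimized modulation studied in the paper.

      import numpy as np

      n, m = 256, 2
      y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
      r, theta = np.hypot(x, y), np.arctan2(y, x)

      pupil_amp = (r < n // 4).astype(float)            # uniform illumination aperture
      target_amp = np.exp(-(r - 6.0)**2 / 4.0)          # ring-shaped focal intensity

      field = pupil_amp * np.exp(1j * m * theta)        # spiral-phase initial guess
      for _ in range(100):
          focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
          focal = target_amp * np.exp(1j * np.angle(focal))   # enforce target intensity
          field = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(focal)))
          field = pupil_amp * np.exp(1j * np.angle(field))    # enforce illumination

      slm_phase = np.angle(field)   # phase pattern to be displayed on the SLM
      print("phase mask:", slm_phase.shape, slm_phase.min(), slm_phase.max())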

  20. Quantitative phase measurement for wafer-level optics

    NASA Astrophysics Data System (ADS)

    Qu, Weijuan; Wen, Yongfu; Wang, Zhaomin; Yang, Fang; Huang, Lei; Zuo, Chao

    2015-07-01

    Wafer-level optics is now widely used in smartphone cameras, mobile video conferencing, and medical equipment that requires tiny cameras. Extracting quantitative phase information has received increased interest in order to quantify the quality of manufactured wafer-level optics, detect defective devices before packaging, and provide feedback for manufacturing process control, all at the wafer level for high-throughput microfabrication. We demonstrate two phase imaging methods, digital holographic microscopy (DHM) and the Transport-of-Intensity Equation (TIE), to measure the phase of wafer-level lenses. DHM is a laser-based interferometric method based on the interference of two wavefronts and can perform a phase measurement in a single shot. In contrast, the TIE approach requires a minimum of two measurements of the spatial intensity of the optical wave in closely spaced planes perpendicular to the direction of propagation; the phase is then retrieved directly, with a non-iterative deterministic algorithm, by solving a second-order differential equation, the Transport-of-Intensity Equation. Because TIE is a non-interferometric method, it can also be applied to partially coherent light. We demonstrate the capabilities and limitations of the two phase measurement methods for wafer-level optics inspection.
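
    For the TIE branch, a minimal sketch of the deterministic retrieval is given below: two intensities simulated in closely spaced planes give dI/dz and, in the nearly uniform-intensity approximation, the phase follows from a single FFT-based Poisson solve of lap(phi) = -k dI/dz. The synthetic phase object, wavelength, pixel size and defocus distance are illustrative assumptions; measured wafer-level data would take their place.

      import numpy as np

      n, lam, pix, dz = 256, 0.633e-6, 2e-6, 50e-6
      k = 2 * np.pi / lam
      y, x = np.mgrid[-n//2:n//2, -n//2:n//2] * pix
      phi_true = 2.0 * np.exp(-(x**2 + y**2) / (2 * (80e-6)**2))   # smooth phase object
      field0 = np.exp(1j * phi_true)                               # unit intensity

      fx = np.fft.fftfreq(n, d=pix)
      FX, FY = np.meshgrid(fx, fx, indexing="xy")
      def propagate(u, z):
          """Angular-spectrum propagation (paraxial transfer function)."""
          H = np.exp(-1j * np.pi * lam * z * (FX**2 + FY**2))
          return np.fft.ifft2(np.fft.fft2(u) * H)

      I_plus = np.abs(propagate(field0, +dz))**2
      I_minus = np.abs(propagate(field0, -dz))**2
      dIdz = (I_plus - I_minus) / (2 * dz)

      # FFT Poisson solve of lap(phi) = -k dI/dz (intensity taken as ~1)
      q2 = (2 * np.pi)**2 * (FX**2 + FY**2)
      q2[0, 0] = 1.0
      phi = np.real(np.fft.ifft2(np.fft.fft2(k * dIdz) / q2))
      phi -= phi.mean(); phi_ref = phi_true - phi_true.mean()
      print("rms phase error [rad]:", np.sqrt(np.mean((phi - phi_ref)**2)))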

  1. Modulator Dynamics Shape the Design Space for Stepwise-Elution Simulated Moving Bed Chromatographic Separations.

    PubMed

    Wayne, Chris J; Velayudhan, Ajoy

    2018-03-31

    For proteins and other biological macromolecules, SMB chromatography is best operated non-isocratically. However, traditional modes of non-isocratic SMB operation generate significant mobile-phase modulator dynamics. The mechanisms by which these modulator dynamics affect a separation's success, and thus frame the design space, have yet to be explained quantitatively. Here, the dynamics of the modulator (e.g., salts in ion exchange and hydrophobic interaction chromatography) are explicitly accounted for. This leads to the elucidation of two new design constraints, presented as dimensionless numbers, which quantify the effects of the modulator phenomena and thus predict the success of a non-isocratic SMB separation. Consequently, these two new design constraints re-define the SMB design space. Computational and experimental studies at the boundaries of this design space corroborate the theoretical predictions. The design of efficient and robust operating conditions through use of the new design space is also demonstrated. © 2018 The Authors. Biotechnology Journal Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Fast higher-order MR image reconstruction using singular-vector separation.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pruessmann, Klaas P

    2012-07-01

    Magnetic resonance imaging (MRI) conventionally relies on spatially linear gradient fields for image encoding. However, in practice various sources of nonlinear fields can perturb the encoding process and give rise to artifacts unless they are suitably addressed at the reconstruction level. Accounting for field perturbations that are neither linear in space nor constant over time, i.e., dynamic higher-order fields, is particularly challenging. It was previously shown to be feasible with conjugate-gradient iteration. However, so far this approach has been relatively slow due to the need to carry out explicit matrix-vector multiplications in each cycle. In this work, it is proposed to accelerate higher-order reconstruction by expanding the encoding matrix such that fast Fourier transform can be employed for more efficient matrix-vector computation. The underlying principle is to represent the perturbing terms as sums of separable functions of space and time. Compact representations with this property are found by singular-vector analysis of the perturbing matrix. Guidelines for balancing the accuracy and speed of the resulting algorithm are derived by error propagation analysis. The proposed technique is demonstrated for the case of higher-order field perturbations due to eddy currents caused by diffusion weighting. In this example, image reconstruction was accelerated by two orders of magnitude.
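
    The separation idea can be illustrated with a toy calculation: a synthetic higher-order phase perturbation exp(i*phi(r,t)) is assembled into a time-by-space matrix and a truncated SVD provides a short sum of separable time and space factors, each of which is compatible with FFT-based evaluation. The assumed eddy-current-like perturbation and all parameters below are for illustration only.

      import numpy as np

      nt, nx = 512, 256
      t = np.linspace(0, 30e-3, nt)                    # readout time [s]
      x = np.linspace(-0.1, 0.1, nx)                   # 1D position [m]
      # assumed second-order (x^2) field perturbation with two decay constants
      b = 2e-4 * np.exp(-t / 8e-3) + 1e-4 * np.exp(-t / 1e-3)
      phi = 2 * np.pi * 42.58e6 * np.cumsum(b)[:, None] * (t[1] - t[0]) * x[None]**2

      E = np.exp(1j * phi)                             # perturbing matrix (time x space)
      U, s, Vh = np.linalg.svd(E, full_matrices=False)
      for L in (1, 2, 4, 8):
          E_L = (U[:, :L] * s[:L]) @ Vh[:L]            # rank-L separable approximation
          err = np.linalg.norm(E - E_L) / np.linalg.norm(E)
          print(f"rank {L}: relative error {err:.2e}")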

  3. Higher order reconstruction for MRI in the presence of spatiotemporal field perturbations.

    PubMed

    Wilm, Bertram J; Barmet, Christoph; Pavan, Matteo; Pruessmann, Klaas P

    2011-06-01

    Despite continuous hardware advances, MRI is frequently subject to field perturbations that are of higher than first order in space and thus violate the traditional k-space picture of spatial encoding. Sources of higher order perturbations include eddy currents, concomitant fields, thermal drifts, and imperfections of higher order shim systems. In conventional MRI with Fourier reconstruction, they give rise to geometric distortions, blurring, artifacts, and error in quantitative data. This work describes an alternative approach in which the entire field evolution, including higher order effects, is accounted for by viewing image reconstruction as a generic inverse problem. The relevant field evolutions are measured with a third-order NMR field camera. Algebraic reconstruction is then formulated such as to jointly minimize artifacts and noise in the resulting image. It is solved by an iterative conjugate-gradient algorithm that uses explicit matrix-vector multiplication to accommodate arbitrary net encoding. The feasibility and benefits of this approach are demonstrated by examples of diffusion imaging. In a phantom study, it is shown that higher order reconstruction largely overcomes variable image distortions that diffusion gradients induce in EPI data. In vivo experiments then demonstrate that the resulting geometric consistency permits straightforward tensor analysis without coregistration. Copyright © 2011 Wiley-Liss, Inc.

  4. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361

  5. The thermodynamic scale of inorganic crystalline metastability

    PubMed Central

    Sun, Wenhao; Dacek, Stephen T.; Ong, Shyue Ping; Hautier, Geoffroy; Jain, Anubhav; Richards, William D.; Gamst, Anthony C.; Persson, Kristin A.; Ceder, Gerbrand

    2016-01-01

    The space of metastable materials offers promising new design opportunities for next-generation technological materials, such as complex oxides, semiconductors, pharmaceuticals, steels, and beyond. Although metastable phases are ubiquitous in both nature and technology, only a heuristic understanding of their underlying thermodynamics exists. We report a large-scale data-mining study of the Materials Project, a high-throughput database of density functional theory–calculated energetics of Inorganic Crystal Structure Database structures, to explicitly quantify the thermodynamic scale of metastability for 29,902 observed inorganic crystalline phases. We reveal the influence of chemistry and composition on the accessible thermodynamic range of crystalline metastability for polymorphic and phase-separating compounds, yielding new physical insights that can guide the design of novel metastable materials. We further assert that not all low-energy metastable compounds can necessarily be synthesized, and propose a principle of ‘remnant metastability’—that observable metastable crystalline phases are generally remnants of thermodynamic conditions where they were once the lowest free-energy phase. PMID:28138514

  6. Multishot cartesian turbo spin-echo diffusion imaging using iterative POCSMUSE Reconstruction.

    PubMed

    Zhang, Zhe; Zhang, Bing; Li, Ming; Liang, Xue; Chen, Xiaodong; Liu, Renyuan; Zhang, Xin; Guo, Hua

    2017-07-01

    To report a diffusion imaging technique insensitive to off-resonance artifacts and motion-induced ghost artifacts using multishot Cartesian turbo spin-echo (TSE) acquisition and iterative POCS-based reconstruction of multiplexed sensitivity encoded magnetic resonance imaging (MRI) (POCSMUSE) for phase correction. Phase insensitive diffusion preparation was used to deal with the violation of the Carr-Purcell-Meiboom-Gill (CPMG) conditions of TSE diffusion-weighted imaging (DWI), followed by a multishot Cartesian TSE readout for data acquisition. An iterative diffusion phase correction method, iterative POCSMUSE, was developed and implemented to eliminate the ghost artifacts in multishot TSE DWI. The in vivo human brain diffusion images (from one healthy volunteer and 10 patients) using multishot Cartesian TSE were acquired at 3T and reconstructed using iterative POCSMUSE, and compared with single-shot and multishot echo-planar imaging (EPI) results. These images were evaluated by two radiologists using visual scores (considering both image quality and distortion levels) from 1 to 5. The proposed iterative POCSMUSE reconstruction was able to correct the ghost artifacts in multishot DWI. The ghost-to-signal ratio of TSE DWI using iterative POCSMUSE (0.0174 ± 0.0024) was significantly (P < 0.0005) smaller than using POCSMUSE (0.0253 ± 0.0040). The image scores of multishot TSE DWI were significantly higher than single-shot (P = 0.004 and 0.006 from two reviewers) and multishot (P = 0.008 and 0.004 from two reviewers) EPI-based methods. The proposed multishot Cartesian TSE DWI using iterative POCSMUSE reconstruction can provide high-quality diffusion images insensitive to motion-induced ghost artifacts and off-resonance related artifacts such as chemical shifts and susceptibility-induced image distortions. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:167-174. © 2016 International Society for Magnetic Resonance in Medicine.

  7. The free energy landscape of small peptides as obtained from metadynamics with umbrella sampling corrections

    PubMed Central

    Babin, Volodymyr; Roland, Christopher; Darden, Thomas A.; Sagui, Celeste

    2007-01-01

    There is considerable interest in developing methodologies for the accurate evaluation of free energies, especially in the context of biomolecular simulations. Here, we report on a reexamination of the recently developed metadynamics method, which is explicitly designed to probe “rare events” and areas of phase space that are typically difficult to access with a molecular dynamics simulation. Specifically, we show that the accuracy of the free energy landscape calculated with the metadynamics method may be considerably improved when combined with umbrella sampling techniques. As test cases, we have studied the folding free energy landscape of two prototypical peptides: Ace-(Gly)2-Pro-(Gly)3-Nme in vacuo and trialanine solvated by both implicit and explicit water. The method has been implemented in the classical biomolecular code AMBER and is to be distributed in the next scheduled release of the code. © 2006 American Institute of Physics. PMID:17144742
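
    As a reminder of the basic bias-accumulation mechanism that the study builds on (the umbrella-sampling correction itself is not reproduced here), the sketch below runs plain 1D metadynamics on a double-well potential: Gaussian hills are deposited along the trajectory of the collective variable, and the accumulated bias approaches the negative of the free-energy profile. The potential, temperature and hill parameters are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      beta, dt, gamma = 1.0 / 0.6, 1e-3, 1.0
      w, sigma, stride = 0.05, 0.1, 500            # hill height, width, deposition stride

      def dU(s):                                   # force from U(s) = (s^2 - 1)^2
          return 4.0 * s * (s * s - 1.0)

      centers = []                                 # deposited hill centres
      def bias_grad(s):                            # derivative of the accumulated bias
          if not centers:
              return 0.0
          c = np.asarray(centers)
          return np.sum(-w * (s - c) / sigma**2 * np.exp(-(s - c)**2 / (2 * sigma**2)))

      s = -1.0
      for step in range(200_000):                  # overdamped Langevin dynamics
          force = -dU(s) - bias_grad(s)
          s += force * dt / gamma + np.sqrt(2 * dt / (beta * gamma)) * rng.standard_normal()
          if step % stride == 0:
              centers.append(s)

      grid = np.linspace(-1.8, 1.8, 200)
      c = np.asarray(centers)
      V = np.sum(w * np.exp(-(grid[:, None] - c[None]) ** 2 / (2 * sigma**2)), axis=1)
      F_est = -(V - V.max())                       # free-energy estimate up to a constant
      print("estimated barrier height:",
            round(float(F_est[np.abs(grid) < 0.05].min()), 2))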

  8. Investigation of the Iterative Phase Retrieval Algorithm for Interferometric Applications

    NASA Astrophysics Data System (ADS)

    Gombkötő, Balázs; Kornis, János

    2010-04-01

    Sequentially recorded intensity patterns reflected from a coherently illuminated diffuse object can be used to reconstruct the complex amplitude of the scattered beam. Several iterative phase retrieval algorithms are known in the literature to obtain the initially unknown phase from these longitudinally displaced intensity patterns. When two sequences are recorded in two different states of a centimeter-sized object, in optical setups similar to digital holographic interferometry but omitting the reference wave, displacement, deformation, or shape measurement is theoretically possible. To do this, the retrieved phase pattern should contain information not only about the intensities and locations of the point sources of the object surface, but about their relative phase as well. Not only do experiments require strict mechanical precision to record useful data, but even in simulations several parameters influence the capabilities of iterative phase retrieval, such as the object-to-camera distance range, uniform or varying camera step sequences, speckle field characteristics, and sampling. Experiments were also performed to demonstrate this principle with a deformable object as large as 5×5 cm. Good initial results were obtained in an imaging setup, where the intensity pattern sequences were recorded near the image plane.

  9. MPL-A program for computations with iterated integrals on moduli spaces of curves of genus zero

    NASA Astrophysics Data System (ADS)

    Bogner, Christian

    2016-06-01

    We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.

  10. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
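
    The sketch below illustrates the non-iterative density-peak idea on synthetic 2D data standing in for Stokes-space samples: each sample gets a local density rho and a distance delta to the nearest sample of higher density; cluster centres stand out by having both large rho and large delta, and the number of detected centres identifies the constellation. The cutoff distance and thresholds are illustrative assumptions, not the parameters of the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      centres = np.array([[0, 0], [3, 0], [0, 3], [3, 3]])      # e.g. 4 clusters
      X = np.concatenate([c + 0.25 * rng.standard_normal((200, 2)) for c in centres])

      D = np.linalg.norm(X[:, None] - X[None], axis=-1)          # pairwise distances
      d_c = 0.5                                                  # cutoff distance
      rho = np.exp(-(D / d_c) ** 2).sum(axis=1) - 1.0            # local density

      delta = np.empty(len(X))
      for i in range(len(X)):
          higher = rho > rho[i]
          delta[i] = D[i, higher].min() if higher.any() else D[i].max()

      is_centre = (rho > 0.5 * rho.max()) & (delta > 3 * d_c)
      print("detected cluster centres:", int(is_centre.sum()))   # expect 4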

  11. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ...sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero

  12. Lagrangian Descriptors: A Method for Revealing Phase Space Structures of General Time Dependent Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mancho, Ana M.; Wiggins, Stephen; Curbelo, Jezabel; Mendoza, Carolina

    2013-11-01

    Lagrangian descriptors are a recent technique that reveals geometrical structures in phase space and is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them and a "heuristic argument" that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors against both finite-time Lyapunov exponents (FTLEs) and finite-time averages of certain components of the vector field ("time averages"). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We thank CESGA for computing facilities. This research was supported by MINECO grants: MTM2011-26696, I-Math C3-0104, ICMAT Severo Ochoa project SEV-2011-0087, and CSIC grant OCEANTECH. SW acknowledges the support of the ONR (Grant No. N00014-01-1-0769).
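
    A small numerical sketch of the arc-length Lagrangian descriptor M(x0) = integral of ||v(x(t))|| dt, computed forward and backward in time on a grid of initial conditions, is given below for the autonomous Duffing field xdot = y, ydot = x - x^3, whose separatrix appears as a ridge of abrupt change in M. The grid, integration time and step size are illustrative assumptions.

      import numpy as np

      def v(state):
          x, y = state
          return np.array([y, x - x**3])

      def rk4_descriptor(state, h, nsteps, sign):
          """Fixed-step RK4 over all grid points at once; accumulates integral of ||v|| dt."""
          M = np.zeros(state.shape[1:])
          for _ in range(nsteps):
              M += np.linalg.norm(v(state), axis=0) * h
              k1 = sign * v(state)
              k2 = sign * v(state + 0.5 * h * k1)
              k3 = sign * v(state + 0.5 * h * k2)
              k4 = sign * v(state + h * k3)
              state = state + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
          return M

      n, tau, h = 201, 10.0, 0.01
      x0, y0 = np.meshgrid(np.linspace(-1.8, 1.8, n), np.linspace(-1.2, 1.2, n))
      grid = np.array([x0, y0])
      M = rk4_descriptor(grid, h, int(tau / h), +1.0) + rk4_descriptor(grid, h, int(tau / h), -1.0)
      print("M range:", float(M.min()), float(M.max()))   # ridges of M trace the separatrix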

  13. Optimal bounds and extremal trajectories for time averages in dynamical systems

    NASA Astrophysics Data System (ADS)

    Tobasco, Ian; Goluskin, David; Doering, Charles

    2017-11-01

    For systems governed by differential equations it is natural to seek extremal solution trajectories, maximizing or minimizing the long-time average of a given quantity of interest. A priori bounds on optima can be proved by constructing auxiliary functions satisfying certain point-wise inequalities, the verification of which does not require solving the underlying equations. We prove that for any bounded autonomous ODE, the problems of finding extremal trajectories on the one hand and optimal auxiliary functions on the other are strongly dual in the sense of convex duality. As a result, auxiliary functions provide arbitrarily sharp bounds on optimal time averages. Furthermore, nearly optimal auxiliary functions provide volumes in phase space where maximal and nearly maximal trajectories must lie. For polynomial systems, such functions can be constructed by semidefinite programming. We illustrate these ideas using the Lorenz system, producing explicit volumes in phase space where extremal trajectories are guaranteed to reside. Supported by NSF Award DMS-1515161, Van Loo Postdoctoral Fellowships, and the John Simon Guggenheim Foundation.
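
    The bounding mechanism can be stated schematically as follows (a sketch of the general principle, not the paper's precise theorem): for a bounded trajectory of the system dx/dt = f(x) and any continuously differentiable auxiliary function V,

      \[
        \limsup_{T\to\infty}\frac{1}{T}\int_0^T g\bigl(x(t)\bigr)\,dt
        \;\le\; \sup_{x}\Bigl[g(x) + f(x)\cdot\nabla V(x)\Bigr],
      \]

    since the long-time average of f\cdot\nabla V = \tfrac{d}{dt}V(x(t)) vanishes along bounded trajectories. Minimizing the right-hand side over a family of auxiliary functions V (for polynomial systems, over polynomial V via semidefinite programming) then yields arbitrarily sharp upper bounds, and the analogous construction with an infimum gives lower bounds.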

  14. Implications of Network Topology on Stability

    PubMed Central

    Kinkhabwala, Ali

    2015-01-01

    In analogy to chemical reaction networks, I demonstrate the utility of expressing the governing equations of an arbitrary dynamical system (interaction network) as sums of real functions (generalized reactions) multiplied by real scalars (generalized stoichiometries) for analysis of its stability. The reaction stoichiometries and first derivatives define the network’s “influence topology”, a signed directed bipartite graph. Parameter reduction of the influence topology permits simplified expression of the principal minors (sums of products of non-overlapping bipartite cycles) and Hurwitz determinants (sums of products of the principal minors or the bipartite cycles directly) for assessing the network’s steady state stability. Visualization of the Hurwitz determinants over the reduced parameters defines the network’s stability phase space, delimiting the range of its dynamics (specifically, the possible numbers of unstable roots at each steady state solution). Any further explicit algebraic specification of the network will project onto this stability phase space. Stability analysis via this hierarchical approach is demonstrated on classical networks from multiple fields. PMID:25826219
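
    For readers unfamiliar with Hurwitz determinants, the sketch below applies the textbook Routh–Hurwitz construction: build the Hurwitz matrix from the characteristic-polynomial coefficients of a steady-state Jacobian and check that its leading principal minors are positive. The example Jacobian is hypothetical, and the paper's cycle-based parameter reduction is not reproduced here.

      # Minimal sketch of the textbook Routh-Hurwitz test; the generic criterion
      # that the abstract's cycle-based expressions reproduce.
      import numpy as np

      def hurwitz_determinants(coeffs):
          """coeffs = [a0, a1, ..., an] for a0*s^n + a1*s^(n-1) + ... + an, a0 > 0."""
          a = np.asarray(coeffs, dtype=float)
          n = len(a) - 1
          H = np.zeros((n, n))
          for i in range(n):
              for j in range(n):
                  k = 2 * (j + 1) - (i + 1)          # index of coefficient a_k in row i, column j
                  if 0 <= k <= n:
                      H[i, j] = a[k]
          return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

      # Hypothetical Jacobian of some steady state; stability <=> all determinants > 0.
      J = np.array([[-2.0, 1.0, 0.0],
                    [ 0.5, -1.0, 0.3],
                    [ 0.0, 0.4, -0.8]])
      coeffs = np.poly(J)                             # characteristic polynomial, leading coeff 1
      dets = hurwitz_determinants(coeffs)
      print(dets, "stable" if all(d > 0 for d in dets) else "unstable")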

  15. Concept of contrast transfer function for edge illumination x-ray phase-contrast imaging and its comparison with the free-space propagation technique.

    PubMed

    Diemoz, Paul C; Vittoria, Fabio A; Olivo, Alessandro

    2016-05-16

    Previous studies on edge illumination (EI) X-ray phase-contrast imaging (XPCi) have investigated the nature and amplitude of the signal provided by this technique. However, the response of the imaging system to different object spatial frequencies was never explicitly considered and studied. This is required in order to predict the performance of a given EI setup for different classes of objects. To this end, in the present work we derive analytical expressions for the contrast transfer function of an EI imaging system under the near-field approximation, and study its dependence upon the main experimental parameters. We then use these results to compare the frequency response of an EI system with that of a free-space propagation XPCi system. The results achieved in this work can be useful for predicting the signals obtainable for different types of objects and also as a basis for new retrieval methods.

  16. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.

    2014-08-21

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of ITER diagnostic study results relevant to DEMO are discussed, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  17. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of ITER diagnostic study results relevant to DEMO are discussed, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements.

  18. Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds

    NASA Astrophysics Data System (ADS)

    Mitra, Arpita

    2017-12-01

    The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.

  19. An analysis of iterated local search for job-shop scheduling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul

    2003-08-01

    Iterated local search, or ILS, is among the most straightforward meta-heuristics for local search. ILS employs both small-step and large-step move operators. Search proceeds via iterative modifications to a single solution, in distinct alternating phases. In the first phase, local neighborhood search (typically greedy descent) is used in conjunction with the small-step operator to transform solutions into local optima. In the second phase, the large-step operator is applied to generate perturbations to the local optima obtained in the first phase. Ideally, when local neighborhood search is applied to the resulting solution, search will terminate at a different local optimum, i.e., the large-step perturbations should be sufficiently large to enable escape from the attractor basins of local optima. ILS has proven capable of delivering excellent performance on numerous NP-hard optimization problems [LMS03]. However, despite its simplicity, very little is known about why ILS can be so effective, and under what conditions. The goal of this paper is to advance the state-of-the-art in the analysis of meta-heuristics by providing answers to this research question. They focus on characterizing both the relationship between the structure of the underlying search space and ILS performance, and the dynamic behavior of ILS. The analysis proceeds in the context of the job-shop scheduling problem (JSP) [Tai94]. They begin by demonstrating that the attractor basins of local optima in the JSP are surprisingly weak, and can be escaped with high probability by accepting a short random sequence of less-fit neighbors. This result is used to develop a new ILS algorithm for the JSP, I-JAR, whose performance is competitive with tabu search on difficult benchmark instances. They conclude by developing a very accurate behavioral model of I-JAR, which yields significant insights into the dynamics of search. The analysis is based on a set of 100 random 10 x 10 problem instances, in addition to some widely used benchmark instances. Both I-JAR and the tabu search algorithm they consider are based on the N1 move operator introduced by van Laarhoven et al. [vLAL92]. The N1 operator induces a connected search space, such that it is always possible to move from an arbitrary solution to an optimal solution; this property is integral to the development of a behavioral model of I-JAR. However, much of the analysis generalizes to other move operators, including that of Nowicki and Smutnicki [NS96]. Finally, the models are based on the distance between two solutions, which they take as the well-known disjunctive graph distance [MBK99].
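
    The overall ILS skeleton described above is easy to state in code. The sketch below runs greedy descent with a small-step swap operator, then applies a random large-step perturbation to escape the attractor basin, on a toy travelling-salesman instance; it is a generic illustration under those assumptions and does not reproduce I-JAR or the N1 move operator.

      # Minimal sketch of generic iterated local search: greedy descent (small-step
      # swaps) alternating with a random perturbation (large-step operator).
      import random

      def tour_length(tour, dist):
          return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

      def local_search(tour, dist):
          """Greedy descent with pairwise swaps until no improving move exists."""
          improved = True
          while improved:
              improved = False
              for i in range(len(tour) - 1):
                  for j in range(i + 1, len(tour)):
                      cand = tour[:]
                      cand[i], cand[j] = cand[j], cand[i]
                      if tour_length(cand, dist) < tour_length(tour, dist):
                          tour, improved = cand, True
          return tour

      def perturb(tour, strength=3):
          """Large-step operator: a few random swaps."""
          tour = tour[:]
          for _ in range(strength):
              i, j = random.sample(range(len(tour)), 2)
              tour[i], tour[j] = tour[j], tour[i]
          return tour

      def iterated_local_search(dist, iterations=50):
          best = local_search(list(range(len(dist))), dist)
          for _ in range(iterations):
              cand = local_search(perturb(best), dist)
              if tour_length(cand, dist) < tour_length(best, dist):   # accept only improvements
                  best = cand
          return best

      random.seed(1)
      n = 12
      dist = [[abs(i - j) + random.random() for j in range(n)] for i in range(n)]
      best = iterated_local_search(dist)
      print(round(tour_length(best, dist), 3), best)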

  20. Strong imploding shock - The representative curve

    NASA Astrophysics Data System (ADS)

    Mishkin, E. A.; Alejaldre, C.

    1981-02-01

    The representative curve of the ideal gas behind the front of a spherically or cylindrically symmetric strong imploding shock is derived. The partial differential equations of mass, momentum and energy conservation are reduced to a set of ordinary differential equations by the method of quasi-separation of variables; the reduced pressure and density, as functions of radius measured from the shock front, are then explicit functions of the coordinates defining the phase plane of the self-similar solution. The curve in phase space representing the state of the imploded gas behind the shock front is shown to pass through the point where the reduced pressure is maximum, which is located somewhat behind the shock front and ahead of the tail of the shock.

  1. Topological Band Theory for Non-Hermitian Hamiltonians

    NASA Astrophysics Data System (ADS)

    Shen, Huitao; Zhen, Bo; Fu, Liang

    2018-04-01

    We develop the topological band theory for systems described by non-Hermitian Hamiltonians, whose energy spectra are generally complex. After generalizing the notion of gapped band structures to the non-Hermitian case, we classify "gapped" bands in one and two dimensions by explicitly finding their topological invariants. We find nontrivial generalizations of the Chern number in two dimensions, and a new classification in one dimension, whose topology is determined by the energy dispersion rather than the energy eigenstates. We then study the bulk-edge correspondence and the topological phase transition in two dimensions. Different from the Hermitian case, the transition generically involves an extended intermediate phase with complex-energy band degeneracies at isolated "exceptional points" in momentum space. We also systematically classify all types of band degeneracies.
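
    In one dimension, the dispersion-determined invariant can be pictured as the winding of the complex band energy around a base point. The sketch below computes that winding number numerically for a Hatano–Nelson-like single-band dispersion with asymmetric hoppings; the model and the base energies are illustrative assumptions, not taken from the paper.

      # Minimal sketch of the energy-winding invariant for a 1-D non-Hermitian band:
      # count how many times E(k) encircles a base energy as k sweeps the Brillouin zone.
      import numpy as np

      def winding_number(E_of_k, E_base=0.0, nk=2001):
          k = np.linspace(-np.pi, np.pi, nk)
          phase = np.unwrap(np.angle(E_of_k(k) - E_base))
          return (phase[-1] - phase[0]) / (2 * np.pi)

      # Hatano-Nelson-like dispersion with asymmetric hoppings t_R != t_L.
      t_R, t_L = 1.0, 0.5
      E = lambda k: t_R * np.exp(1j * k) + t_L * np.exp(-1j * k)

      print(round(winding_number(E), 3))              # ~1: base energy enclosed by the spectral loop
      print(round(winding_number(E, E_base=3.0), 3))  # ~0: base energy outside the loop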

  2. Cooperative scattering and radiation pressure force in dense atomic clouds

    NASA Astrophysics Data System (ADS)

    Bachelard, R.; Piovella, N.; Courteille, Ph. W.

    2011-07-01

    Atomic clouds prepared in “timed Dicke” states, i.e. states where the phase of the oscillating atomic dipole moments linearly varies along one direction of space, are efficient sources of superradiant light emission [Scully et al., Phys. Rev. Lett. 96, 010501 (2006)]. Here, we show that, in contrast to previous assertions, timed Dicke states are not the states automatically generated by incident laser light. In reality, the atoms act back on the driving field because of the finite refraction of the cloud. This leads to nonuniform phase shifts, which, at higher optical densities, dramatically alter the cooperative scattering properties, as we show by explicit calculation of macroscopic observables, such as the radiation pressure force.

  3. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    NASA Astrophysics Data System (ADS)

    Schunke, B.; Bora, D.; Hemsworth, R.; Tanga, A.

    2009-03-01

    The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D- and capable of delivering 16.5 MW of D0 to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option [1]. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation [2, 3]. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start-up imposed specific requirements for operating the HNB's at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  4. Status of the Negative Ion Based Heating and Diagnostic Neutral Beams for ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunke, B.; Bora, D.; Hemsworth, R.

    2009-03-12

    The current baseline of ITER foresees 2 Heating Neutral Beam (HNB's) systems based on negative ion technology, each accelerating to 1 MeV 40 A of D- and capable of delivering 16.5 MW of D0 to the ITER plasma, with a 3rd HNB injector foreseen as an upgrade option. In addition a dedicated Diagnostic Neutral Beam (DNB) accelerating 60 A of H- to 100 keV will inject ≈15 A equivalent of H0 for charge exchange recombination spectroscopy and other diagnostics. Recently the RF driven negative ion source developed by IPP Garching has replaced the filamented ion source as the reference ITER design. The RF source developed at IPP, which is approximately a quarter scale of the source needed for ITER, is expected to have reduced caesium consumption compared to the filamented arc driven ion source. The RF driven source has demonstrated adequate accelerated D- and H- current densities as well as long-pulse operation. It is foreseen that the HNB's and the DNB will use the same negative ion source. Experiments with a half ITER-size ion source are on-going at IPP and the operation of a full-scale ion source will be demonstrated, at full power and pulse length, in the dedicated Ion Source Test Bed (ISTF), which will be part of the Neutral Beam Test Facility (NBTF), in Padua, Italy. This facility will carry out the necessary R&D for the HNB's for ITER and demonstrate operation of the full-scale HNB beamline. An overview of the current status of the neutral beam (NB) systems and the chosen configuration will be given and the ongoing integration effort into the ITER plant will be highlighted. It will be demonstrated how installation and maintenance logistics have influenced the design, notably the top access scheme facilitating access for maintenance and installation. The impact of the ITER Design Review and recent design change requests (DCRs) will be briefly discussed, including start-up and commissioning issues. The low current hydrogen phase now envisaged for start-up imposed specific requirements for operating the HNB's at full beam power. It has been decided to address the shinethrough issue by installing wall armour protection, which increases the operational space in all scenarios. Other NB related issues identified by the Design Review process will be discussed and the possible changes to the ITER baseline indicated.

  5. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
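
    The "likelihood-free" idea is easiest to see in its simplest form. The sketch below is a plain rejection-ABC sampler for a toy Gaussian model: parameters are drawn from the prior, data are forward-simulated, and draws are kept when a summary statistic falls within a tolerance of the observed one. It only illustrates the principle and does not use astroABC's Sequential Monte Carlo machinery or its API; the model, prior and tolerance are assumptions for the example.

      # Minimal sketch of rejection ABC: no likelihood evaluation, only forward
      # simulation plus a distance on summary statistics.
      import numpy as np

      rng = np.random.default_rng(42)

      def simulate(mu, n=100):
          """Forward model of the data (noise included in the simulation)."""
          return rng.normal(mu, 1.0, n)

      observed = simulate(mu=0.7)
      obs_summary = observed.mean()

      def abc_rejection(n_draws=20000, tol=0.05):
          accepted = []
          for _ in range(n_draws):
              mu = rng.uniform(-2.0, 2.0)                   # prior draw
              dist = abs(simulate(mu).mean() - obs_summary) # distance metric on summaries
              if dist < tol:
                  accepted.append(mu)
          return np.array(accepted)

      post = abc_rejection()
      print(f"posterior mean ~ {post.mean():.3f}  (n accepted = {post.size})")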

  6. Development of ITER non-activation phase operation scenarios

    DOE PAGES

    Kim, S. H.; Poli, F. M.; Koechl, F.; ...

    2017-06-29

    Non-activation phase operations in ITER in hydrogen (H) and helium (He) will be important for commissioning of tokamak systems, such as diagnostics, heating and current drive (HCD) systems, coils and plasma control systems, and for validation of techniques necessary for establishing operations in DT. The assessment of feasible HCD schemes at various toroidal fields (2.65–5.3 T) has revealed that the previously applied assumptions need to be refined for the ITER non-activation phase H/He operations. A study of the ranges of plasma density and profile shape using the JINTRAC suite of codes has indicated that the hydrogen pellet fuelling into He plasmas should be utilized taking the optimization of IC power absorption, neutral beam shine-through density limit and H-mode access into account. The EPED1 estimation of the edge pedestal parameters has been extended to various H operation conditions, and the combined EPED1 and SOLPS estimation has provided guidance for modelling the edge pedestal in H/He operations. The availability of ITER HCD schemes, ranges of achievable plasma density and profile shape, and estimation of the edge pedestal parameters for H/He plasmas have been integrated into various time-dependent tokamak discharge simulations. In this paper, various H/He scenarios at a wide range of plasma current (7.5–15 MA) and field (2.65–5.3 T) have been developed for the ITER non-activation phase operation, and the sensitivity of the developed scenarios to the used assumptions has been investigated to provide guidance for further development.

  7. Development of ITER non-activation phase operation scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S. H.; Poli, F. M.; Koechl, F.

    Non-activation phase operations in ITER in hydrogen (H) and helium (He) will be important for commissioning of tokamak systems, such as diagnostics, heating and current drive (HCD) systems, coils and plasma control systems, and for validation of techniques necessary for establishing operations in DT. The assessment of feasible HCD schemes at various toroidal fields (2.65–5.3 T) has revealed that the previously applied assumptions need to be refined for the ITER non-activation phase H/He operations. A study of the ranges of plasma density and profile shape using the JINTRAC suite of codes has indicated that the hydrogen pellet fuelling into He plasmas should be utilized taking the optimization of IC power absorption, neutral beam shine-through density limit and H-mode access into account. The EPED1 estimation of the edge pedestal parameters has been extended to various H operation conditions, and the combined EPED1 and SOLPS estimation has provided guidance for modelling the edge pedestal in H/He operations. The availability of ITER HCD schemes, ranges of achievable plasma density and profile shape, and estimation of the edge pedestal parameters for H/He plasmas have been integrated into various time-dependent tokamak discharge simulations. In this paper, various H/He scenarios at a wide range of plasma current (7.5–15 MA) and field (2.65–5.3 T) have been developed for the ITER non-activation phase operation, and the sensitivity of the developed scenarios to the used assumptions has been investigated to provide guidance for further development.

  8. Magnetic flux density reconstruction using interleaved partial Fourier acquisitions in MREIT.

    PubMed

    Park, Hee Myung; Nam, Hyun Soo; Kwon, Oh In

    2011-04-07

    Magnetic resonance electrical impedance tomography (MREIT) has been introduced as a non-invasive modality to visualize the internal conductivity and/or current density of an electrically conductive object by the injection of current. In order to measure a magnetic flux density signal in MREIT, the phase difference approach in an interleaved encoding scheme cancels the systematic artifacts accumulated in phase signals and also reduces the random noise effect. However, it is important to reduce scan duration maintaining spatial resolution and sufficient contrast, in order to allow for practical in vivo implementation of MREIT. The purpose of this paper is to develop a coupled partial Fourier strategy in the interleaved sampling in order to reduce the total imaging time for an MREIT acquisition, whilst maintaining an SNR of the measured magnetic flux density comparable to what is achieved with complete k-space data. The proposed method uses two key steps: one is to update the magnetic flux density by updating the complex densities using the partially interleaved k-space data and the other is to fill in the missing k-space data iteratively using the updated background field inhomogeneity and magnetic flux density data. Results from numerical simulations and animal experiments demonstrate that the proposed method reduces considerably the scanning time and provides resolution of the recovered B(z) comparable to what is obtained from complete k-space data.
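
    A minimal way to see how missing k-space samples can be filled in iteratively is the classic POCS-style partial-Fourier scheme sketched below: a low-resolution phase estimate is imposed in image space while the measured samples are re-imposed in k-space at every pass. This is a generic 1-D illustration under a smooth-phase assumption, not the authors' interleaved MREIT reconstruction, and the acquisition fraction and test object are hypothetical.

      # Minimal 1-D sketch of POCS-style partial-Fourier completion.
      import numpy as np

      n = 256
      x = np.linspace(-1, 1, n)
      img = (np.abs(x) < 0.4).astype(float) * np.exp(1j * 0.5 * np.pi * x)   # object with smooth phase

      k_full = np.fft.fft(img)
      freqs = np.fft.fftfreq(n)
      measured = freqs >= -0.125                            # 5/8 partial acquisition of k-space
      k_low = np.where(np.abs(freqs) <= 0.125, k_full, 0)
      phase = np.exp(1j * np.angle(np.fft.ifft(k_low)))     # low-resolution phase estimate

      k = np.where(measured, k_full, 0)                     # zero-filled start
      for _ in range(20):
          im = np.fft.ifft(k)
          im = np.abs(im) * phase                           # image-space constraint: impose the smooth phase
          k = np.fft.fft(im)
          k[measured] = k_full[measured]                    # data consistency: keep the measured samples

      err = np.linalg.norm(np.fft.ifft(k) - img) / np.linalg.norm(img)
      print(f"relative reconstruction error: {err:.3f}")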

  9. Associative memory in an analog iterated-map neural network

    NASA Astrophysics Data System (ADS)

    Marcus, C. M.; Waugh, F. R.; Westervelt, R. M.

    1990-03-01

    The behavior of an analog neural network with parallel dynamics is studied analytically and numerically for two associative-memory learning algorithms, the Hebb rule and the pseudoinverse rule. Phase diagrams in the parameter space of analog gain β and storage ratio α are presented. For both learning rules, the networks have large ``recall'' phases in which retrieval states exist and convergence to a fixed point is guaranteed by a global stability criterion. We also demonstrate numerically that using a reduced analog gain increases the probability of recall starting from a random initial state. This phenomenon is comparable to thermal annealing used to escape local minima but has the advantage of being deterministic, and therefore easily implemented in electronic hardware. Similarities and differences between analog neural networks and networks with two-state neurons at finite temperature are also discussed.
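
    The parallel analog dynamics is a simple iterated map. The sketch below builds Hebb-rule couplings from random patterns, iterates x → tanh(βWx), and measures the overlap with a stored pattern when started from a corrupted version of it; the network size, storage ratio, gain and noise level are illustrative values, not points from the paper's phase diagrams.

      # Minimal sketch of the analog iterated-map network with Hebb-rule couplings.
      import numpy as np

      rng = np.random.default_rng(3)
      N, P = 500, 10                       # neurons and stored patterns (alpha = P/N = 0.02)
      beta = 4.0                           # analog gain

      patterns = rng.choice([-1.0, 1.0], size=(P, N))
      W = patterns.T @ patterns / N        # Hebb rule
      np.fill_diagonal(W, 0.0)

      def recall(x, steps=50):
          for _ in range(steps):           # parallel (synchronous) dynamics
              x = np.tanh(beta * (W @ x))
          return x

      # Start from a noisy version of pattern 0 and measure the overlap after recall.
      x0 = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)   # flip 20% of the bits
      m = patterns[0] @ recall(x0) / N
      print(f"overlap with stored pattern: {m:.3f}")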

  10. Analytic Modeling of Pressurization and Cryogenic Propellant

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy H.

    2010-01-01

    An analytic model for pressurization and cryogenic propellant conditions during all mission phases of any liquid rocket based vehicle has been developed and validated. The model assumes the propellant tanks to be divided into five nodes and also implements an empirical correlation for liquid stratification if desired. The five nodes include a tank wall node exposed to ullage gas, an ullage gas node, a saturated propellant vapor node at the liquid-vapor interface, a liquid node, and a tank wall node exposed to liquid. The conservation equations of mass and energy are then applied across all the node boundaries and, with the use of perfect gas assumptions, explicit solutions for ullage and liquid conditions are derived. All fluid properties are updated real time using NIST Refprop.1 Further, mass transfer at the liquid-vapor interface is included in the form of evaporation, bulk boiling of liquid propellant, and condensation given the appropriate conditions for each. Model validation has proven highly successful against previous analytic models and various Saturn era test data and reasonably successful against more recent LH2 tank self pressurization ground test data. Finally, this model has been applied to numerous design iterations for the Altair Lunar Lander, Ares V Core Stage, and Ares V Earth Departure Stage in order to characterize Helium and autogenous pressurant requirements, propellant lost to evaporation and thermodynamic venting to maintain propellant conditions, and non-uniform tank draining in configurations utilizing multiple LH2 or LO2 propellant tanks. In conclusion, this model provides an accurate and efficient means of analyzing multiple design configurations for any cryogenic propellant tank in launch, low-acceleration coast, or in-space maneuvering and supplies the user with pressurization requirements, unusable propellants from evaporation and liquid stratification, and general ullage gas, liquid, and tank wall conditions as functions of time.
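
    A heavily stripped-down version of the ullage balance conveys the structure of such a model. The sketch below integrates mass and energy conservation for a single constant-volume ullage node pressurized by warm helium under the perfect-gas assumption; wall heat transfer, liquid drain and interface mass transfer, all present in the five-node model, are deliberately omitted, and every number is illustrative rather than taken from the paper.

      # Minimal single-node ullage pressurization sketch: open-system mass/energy
      # balance with ideal-gas closure P V = m R T and a bang-bang pressurant valve.
      import numpy as np
      from scipy.integrate import solve_ivp

      R, gamma = 2077.0, 1.667              # helium gas constant [J/kg/K] and heat-capacity ratio
      cv, cp = R / (gamma - 1), gamma * R / (gamma - 1)
      V = 5.0                               # ullage volume [m^3], held fixed here
      mdot, T_in = 0.02, 300.0              # pressurant inflow [kg/s] at supply temperature [K]
      P_target = 2.0e5                      # set-point pressure [Pa]

      def rhs(t, y):
          m, U = y                          # gas mass and internal energy
          P = (gamma - 1.0) * U / V         # ideal gas: P = (gamma-1) U / V
          on = 1.0 if P < P_target else 0.0 # simple bang-bang pressurization to the set point
          return [on * mdot, on * mdot * cp * T_in]   # inflow adds mass and enthalpy

      m0, T0 = 0.5, 90.0
      sol = solve_ivp(rhs, (0.0, 60.0), [m0, m0 * cv * T0], max_step=0.5)
      m, U = sol.y
      P = (gamma - 1.0) * U / V
      print(f"ullage pressure: {P[0]/1e5:.2f} bar -> {P[-1]/1e5:.2f} bar over {sol.t[-1]:.0f} s")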

  11. Functional renormalization group approach to SU(N ) Heisenberg models: Real-space renormalization group at arbitrary N

    NASA Astrophysics Data System (ADS)

    Buessen, Finn Lasse; Roscher, Dietrich; Diehl, Sebastian; Trebst, Simon

    2018-02-01

    The pseudofermion functional renormalization group (pf-FRG) is one of the few numerical approaches that has been demonstrated to quantitatively determine the ordering tendencies of frustrated quantum magnets in two and three spatial dimensions. The approach, however, relies on a number of presumptions and approximations, in particular the choice of pseudofermion decomposition and the truncation of an infinite number of flow equations to a finite set. Here we generalize the pf-FRG approach to SU (N )-spin systems with arbitrary N and demonstrate that the scheme becomes exact in the large-N limit. Numerically solving the generalized real-space renormalization group equations for arbitrary N , we can make a stringent connection between the physically most significant case of SU(2) spins and more accessible SU (N ) models. In a case study of the square-lattice SU (N ) Heisenberg antiferromagnet, we explicitly demonstrate that the generalized pf-FRG approach is capable of identifying the instability indicating the transition into a staggered flux spin liquid ground state in these models for large, but finite, values of N . In a companion paper [Roscher et al., Phys. Rev. B 97, 064416 (2018), 10.1103/PhysRevB.97.064416] we formulate a momentum-space pf-FRG approach for SU (N ) spin models that allows us to explicitly study the large-N limit and access the low-temperature spin liquid phase.

  12. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. We recommend that real and imaginary data with errors larger than 10 per cent of the complex amplitude be withheld from a log-amplitude and phase inversion rather than retaining them with large error-bars.
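
    The data transformation itself, and the first-order error propagation that goes with it, is compact; the sketch below shows it for a single complex datum with an isotropic complex error. The example values are hypothetical, and, in line with the recommendation above, data whose error exceeds roughly 10 per cent of the amplitude would be withheld from a log-amplitude and phase inversion.

      # Minimal sketch of recasting a complex EM datum into log-amplitude and phase,
      # with first-order error propagation from an isotropic complex error sigma.
      import numpy as np

      def to_log_amp_phase(d, sigma):
          """Return (log|d|, phase) and their propagated standard errors."""
          amp = np.abs(d)
          log_amp = np.log(amp)
          phase = np.angle(d)
          err = sigma / amp                 # first-order: d(ln|d|) ~ d(phase) ~ sigma/|d|
          return (log_amp, phase), (err, err)

      d = 0.021 - 0.013j                    # a complex impedance-like datum (hypothetical)
      sigma = 0.002                         # standard error on each complex component
      vals, errs = to_log_amp_phase(d, sigma)
      print("log-amplitude, phase:", np.round(vals, 3))
      print("propagated errors   :", np.round(errs, 3))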

  13. Prospects of light sterile neutrino oscillation and C P violation searches at the Fermilab Short Baseline Neutrino Facility

    NASA Astrophysics Data System (ADS)

    Cianci, D.; Furmanski, A.; Karagiorgi, G.; Ross-Lonergan, M.

    2017-09-01

    We investigate the ability of the short baseline neutrino (SBN) experimental program at Fermilab to test the globally-allowed (3+N) sterile neutrino oscillation parameter space. We explicitly consider the globally-allowed parameter space for the (3+1), (3+2), and (3+3) sterile neutrino oscillation scenarios. We find that SBN can probe with 5σ sensitivity more than 85%, 95% and 55% of the parameter space currently allowed at 99% confidence level for the (3+1), (3+2) and (3+3) scenarios, respectively, with the (3+N) allowed space used in these studies closely resembling that of previous studies [J. M. Conrad, C. M. Ignarra, G. Karagiorgi, M. H. Shaevitz, and J. Spitz, Adv. High Energy Phys. 2013, 1 (2013), 10.1155/2013/163897], calculated using the same methodology. In the case of the (3+2) and (3+3) scenarios, CP-violating phases appear in the oscillation probability terms, leading to observable differences in the appearance probabilities of neutrinos and antineutrinos. We explore SBN's sensitivity to those phases for the (3+2) scenario through the currently planned neutrino beam running, and investigate potential improvements through additional antineutrino beam running. We show that, if antineutrino exposure is considered, for maximal values of the (3+2) CP-violating phase ϕ54, SBN could be the first experiment to directly observe ∼2σ hints of CP violation associated with an extended lepton sector.

  14. Image grating metrology using phase-stepping interferometry in scanning beam interference lithography

    NASA Astrophysics Data System (ADS)

    Li, Minkang; Zhou, Changhe; Wei, Chunlong; Jia, Wei; Lu, Yancong; Xiang, Changcheng; Xiang, XianSong

    2016-10-01

    Large-sized gratings are essential optical elements in laser fusion and space astronomy facilities. Scanning beam interference lithography is an effective method to fabricate large-sized gratings. To minimize the nonlinear phase written into the photo-resist, the image grating must be measured to adjust the left and right beams to interfere at their waists. In this paper, we propose a new method to conduct wavefront metrology based on phase-stepping interferometry. Firstly, a transmission grating is used to combine the two beams to form an interferogram, which is recorded by a charge coupled device (CCD). Phase steps are introduced by moving the grating with a linear stage monitored by a laser interferometer. A series of interferograms are recorded as the displacement is measured by the laser interferometer. Secondly, to eliminate the tilt and piston errors introduced during the phase stepping, the iterative least square phase shift method is implemented to obtain the wrapped phase. Thirdly, we use the discrete cosine transform least square method to unwrap the phase map. Experimental results indicate that the measured wavefront has a nonlinear phase of around 0.05λ @ 404.7 nm. Finally, as the image grating is acquired, we simulate the print error written into the photo-resist.
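
    The core of the least-squares phase-shift step can be written compactly: with the phase steps known from the displacement interferometer, each pixel's intensities are fit to a three-parameter cosine model and the wrapped phase follows from an arctangent. The sketch below demonstrates this on a synthetic tilted wavefront; the iterative refinement of tilt and piston errors described in the paper is not included, and the frame data are assumptions for the example.

      # Minimal sketch of least-squares phase-shifting interferometry with known steps:
      # I_n = a + b*cos(phi + delta_n) is fit linearly per pixel.
      import numpy as np

      def lsq_phase(frames, deltas):
          """frames: (n_steps, H, W) intensities; deltas: (n_steps,) phase steps [rad]."""
          A = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
          coeffs, *_ = np.linalg.lstsq(A, frames.reshape(len(deltas), -1), rcond=None)
          a, c, s = coeffs                      # I = a + c*cos(delta) + s*sin(delta)
          return np.arctan2(-s, c).reshape(frames.shape[1:])   # wrapped phase phi

      # Synthetic test: a tilted wavefront sampled with five slightly uneven steps.
      H = W = 64
      y, x = np.mgrid[0:H, 0:W]
      phi_true = 0.15 * x + 0.05 * y
      deltas = np.array([0.0, 1.52, 3.10, 4.66, 6.21])
      frames = 1.0 + 0.8 * np.cos(phi_true[None] + deltas[:, None, None])

      phi = lsq_phase(frames, deltas)
      err = np.angle(np.exp(1j * (phi - phi_true)))            # compare modulo 2*pi
      print("max wrapped-phase error:", float(np.abs(err).max()))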

  15. Autocratic strategies for iterated games with arbitrary action spaces.

    PubMed

    McAvoy, Alex; Hauert, Christoph

    2016-03-29

    The recent discovery of zero-determinant strategies for the iterated prisoner's dilemma sparked a surge of interest in the surprising fact that a player can exert unilateral control over iterated interactions. These remarkable strategies, however, are known to exist only in games in which players choose between two alternative actions such as "cooperate" and "defect." Here we introduce a broader class of autocratic strategies by extending zero-determinant strategies to iterated games with more general action spaces. We use the continuous donation game as an example, which represents an instance of the prisoner's dilemma that intuitively extends to a continuous range of cooperation levels. Surprisingly, despite the fact that the opponent has infinitely many donation levels from which to choose, a player can devise an autocratic strategy to enforce a linear relationship between his or her payoff and that of the opponent even when restricting his or her actions to merely two discrete levels of cooperation. In particular, a player can use such a strategy to extort an unfair share of the payoffs from the opponent. Therefore, although the action space of the continuous donation game dwarfs that of the classic prisoner's dilemma, players can still devise relatively simple autocratic and, in particular, extortionate strategies.

  16. IDC Re-Engineering Phase 2 Iteration E2 Use Case Realizations Version 1.2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamlet, Benjamin R.; Harris, James M.; Burns, John F.

    2016-12-01

    This document contains 4 use case realizations generated from the model contained in Rational Software Architect. These use case realizations are the current versions of the realizations originally delivered in Elaboration Iteration 2.

  17. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  18. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction.

    PubMed

    Fahimian, Benjamin P; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J; Osher, Stanley J; McNitt-Gray, Michael F; Miao, Jianwei

    2013-03-01

    A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.

  19. Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction

    PubMed Central

    Fahimian, Benjamin P.; Zhao, Yunzhe; Huang, Zhifeng; Fung, Russell; Mao, Yu; Zhu, Chun; Khatonabadi, Maryam; DeMarco, John J.; Osher, Stanley J.; McNitt-Gray, Michael F.; Miao, Jianwei

    2013-01-01

    Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp Davis and Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method. PMID:23464329

  20. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    NASA Astrophysics Data System (ADS)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in plenty of research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first long imaging distance and the sequential interval. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or priori knowledge. It gets rid of measuring the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.

  1. Determination of angle of light deflection in higher-derivative gravity theories

    NASA Astrophysics Data System (ADS)

    Xu, Chenmei; Yang, Yisong

    2018-03-01

    Gravitational light deflection is known as one of three classical tests of general relativity and the angle of deflection may be computed explicitly using approximate or exact solutions describing the gravitational force generated from a point mass. In various generalized gravity theories, however, such explicit determination is often impossible due to the difficulty in obtaining an exact expression for the deflection angle. In this work, we present some highly effective globally convergent iterative methods to determine the angle of semiclassical gravitational deflection in higher- and infinite-derivative formalisms of quantum gravity theories. We also establish the universal properties that the deflection angle always stays below the classical Einstein angle and is a strictly decreasing function of the incident photon energy, in these formalisms.

  2. On the application of multilevel modeling in environmental and ecological studies

    USGS Publications Warehouse

    Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.

    2010-01-01

    This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
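
    The partial pooling mentioned above has a simple closed form when the variance components are treated as known: each group mean is shrunk toward the grand mean by a factor that grows with group size. The sketch below illustrates this with synthetic groups; in a real multilevel analysis the variance components would be estimated from the data rather than fixed, and the numbers here are purely illustrative.

      # Minimal sketch of partial pooling across groups with known variance components.
      import numpy as np

      rng = np.random.default_rng(7)
      tau2, sigma2 = 1.0, 4.0                      # between-group and within-group variances
      true_means = rng.normal(0.0, np.sqrt(tau2), 8)
      sizes = [3, 5, 40, 8, 4, 60, 6, 10]
      groups = [rng.normal(mu, np.sqrt(sigma2), n) for mu, n in zip(true_means, sizes)]

      grand_mean = np.mean(np.concatenate(groups))
      for g, y in enumerate(groups):
          n = len(y)
          weight = tau2 / (tau2 + sigma2 / n)      # small groups borrow more strength
          partial = weight * y.mean() + (1 - weight) * grand_mean
          print(f"group {g}: n={n:2d}  no pooling={y.mean():6.2f}  partial pooling={partial:6.2f}")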

  3. Implicit Kalman filtering

    NASA Technical Reports Server (NTRS)

    Skliar, M.; Ramirez, W. F.

    1997-01-01

    For an implicitly defined discrete system, a new algorithm for Kalman filtering is developed and an efficient numerical implementation scheme is proposed. Unlike the traditional explicit approach, the implicit filter can be readily applied to ill-conditioned systems and allows for generalization to descriptor systems. The implementation of the implicit filter depends on the solution of the congruence matrix equation A1 Px A1^T = Py. We develop a general iterative method for the solution of this equation, and prove necessary and sufficient conditions for convergence. It is shown that when the system matrices of an implicit system are sparse, the implicit Kalman filter requires significantly less computer time and storage to implement as compared to the traditional explicit Kalman filter. Simulation results are presented to illustrate and substantiate the theoretical developments.

  4. Multi-Attribute Tradespace Exploration in Space System Design

    NASA Astrophysics Data System (ADS)

    Ross, A. M.; Hastings, D. E.

    2002-01-01

    The complexity inherent in space systems necessarily requires intense expenditures of resources both human and monetary. The high level of ambiguity present in the early design phases of these systems causes long, highly iterative, and costly design cycles. This paper looks at incorporating decision theory methods into the early design processes to streamline communication of wants and needs among stakeholders and between levels of design. Communication channeled through formal utility interviews and analysis enables engineers to better understand the key drivers for the system and allows a more thorough exploration of the design tradespace. Multi-Attribute Tradespace Exploration (MATE), an evolving process incorporating decision theory into model and simulation- based design, has been applied to several space system case studies at MIT. Preliminary results indicate that this process can improve the quality of communication to more quickly resolve project ambiguity, and enable the engineer to discover better value designs for multiple stakeholders. MATE is also being integrated into a concurrent design environment to facilitate the transfer knowledge of important drivers into higher fidelity design phases. Formal utility theory provides a mechanism to bridge the language barrier between experts of different backgrounds and differing needs (e.g. scientists, engineers, managers, etc). MATE with concurrent design couples decision makers more closely to the design, and most importantly, maintains their presence between formal reviews.

  5. TRUST84. Sat-Unsat Flow in Deformable Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narasimhan, T.N.

    1984-11-01

    TRUST84 solves for transient and steady-state flow in variably saturated deformable media in one, two, or three dimensions. It can handle porous media, fractured media, or fractured-porous media. Boundary conditions may be an arbitrary function of time. Sources or sinks may be a function of time or of potential. The theoretical model considers a general three-dimensional field of flow in conjunction with a one-dimensional vertical deformation field. The governing equation expresses the conservation of fluid mass in an elemental volume that has a constant volume of solids. Deformation of the porous medium may be nonelastic. Permeability and the compressibility coefficients may be nonlinearly related to effective stress. Relationships between permeability and saturation with pore water pressure in the unsaturated zone may be characterized by hysteresis. The relation between pore pressure change and effective stress change may be a function of saturation. The basic calculational model of the conductive heat transfer code TRUMP is applied in TRUST84 to the flow of fluids in porous media. The model combines an integrated finite difference algorithm for numerically solving the governing equation with a mixed explicit-implicit iterative scheme in which the explicit changes in potential are first computed for all elements in the system, after which implicit corrections are made only for those elements for which the stable time-step is less than the time-step being used. Time-step sizes are automatically controlled to optimize the number of iterations, to control maximum change to potential during a time-step, and to obtain desired output information. Time derivatives, estimated on the basis of system behavior during the two previous time-steps, are used to start the iteration process and to evaluate nonlinear coefficients. Both heterogeneity and anisotropy can be handled.

  6. Implicit unified gas-kinetic scheme for steady state solutions in all flow regimes

    NASA Astrophysics Data System (ADS)

    Zhu, Yajun; Zhong, Chengwen; Xu, Kun

    2016-06-01

    This paper presents an implicit unified gas-kinetic scheme (UGKS) for non-equilibrium steady state flow computation. The UGKS is a direct modeling method for flow simulation in all regimes with the updates of both macroscopic flow variables and microscopic gas distribution function. By solving the macroscopic equations implicitly, a predicted equilibrium state can be obtained first through iterations. With the newly predicted equilibrium state, the evolution equation of the gas distribution function and the corresponding collision term can be discretized in a fully implicit way for fast convergence through iterations as well. The lower-upper symmetric Gauss-Seidel (LU-SGS) factorization method is implemented to solve both macroscopic and microscopic equations, which improves the efficiency of the scheme. Since the UGKS is a direct modeling method and its physical solution depends on the mesh resolution and the local time step, a physical time step needs to be fixed before using an implicit iterative technique with a pseudo-time marching step. Therefore, the physical time step in the current implicit scheme is determined by the same way as that in the explicit UGKS for capturing the physical solution in all flow regimes, but the convergence to a steady state speeds up through the adoption of a numerical time step with large CFL number. Many numerical test cases in different flow regimes from low speed to hypersonic ones, such as the Couette flow, cavity flow, and the flow passing over a cylinder, are computed to validate the current implicit method. The overall efficiency of the implicit UGKS can be improved by one or two orders of magnitude in comparison with the explicit one.

  7. Optimal Design of Calibration Signals in Space-Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Ferroni, Valerio

    2016-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterisation of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals, in terms of minimum parameter uncertainty, to be injected into these instruments during their calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.
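
    The kind of optimal-input calculation described here can be sketched on a toy system. The snippet below assumes a hypothetical one-pole response H = g/(1 + iωτ), builds the Fisher information matrix for unit sinusoidal injections, and picks the pair of injection frequencies maximizing log det F (a D-optimal design, i.e. minimum parameter uncertainty). The model, noise level, and frequency grid are illustrative assumptions, not the LISA Pathfinder dynamics.

        import numpy as np
        from itertools import combinations

        def fisher_matrix(freqs, g=1.0, tau=1.0, sigma=0.01):
            """Fisher information for injecting unit sinusoids at the given angular
            frequencies into a hypothetical one-pole system H = g / (1 + i w tau)."""
            w = np.asarray(freqs, dtype=float)
            denom = 1.0 + 1j * w * tau
            dH_dg = 1.0 / denom                      # sensitivity to the gain g
            dH_dtau = -g * 1j * w / denom**2         # sensitivity to the time constant tau
            J = np.stack([dH_dg, dH_dtau], axis=1)   # (n_freq, n_param)
            return 2.0 / sigma**2 * np.real(J.conj().T @ J)

        # D-optimal design: pick the pair of injection frequencies (from a grid) that
        # maximizes log det of the Fisher matrix, i.e. minimizes parameter uncertainty.
        grid = np.linspace(0.05, 5.0, 100)
        best = max(combinations(grid, 2),
                   key=lambda pair: np.linalg.slogdet(fisher_matrix(pair))[1])
        print("D-optimal injection frequencies:", best)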

  8. Optimal Design of Calibration Signals in Space Borne Gravitational Wave Detectors

    NASA Technical Reports Server (NTRS)

    Nofrarias, Miquel; Karnesis, Nikolaos; Gibert, Ferran; Armano, Michele; Audley, Heather; Danzmann, Karsten; Diepholz, Ingo; Dolesi, Rita; Ferraioli, Luigi; Thorpe, James I.

    2014-01-01

    Future space borne gravitational wave detectors will require a precise definition of calibration signals to ensure the achievement of their design sensitivity. The careful design of the test signals plays a key role in the correct understanding and characterization of these instruments. In that sense, methods achieving optimal experiment designs must be considered as complementary to the parameter estimation methods being used to determine the parameters describing the system. The relevance of experiment design is particularly significant for the LISA Pathfinder mission, which will spend most of its operation time performing experiments to characterize key technologies for future space borne gravitational wave observatories. Here we propose a framework to derive the optimal signals, in terms of minimum parameter uncertainty, to be injected into these instruments during their calibration phase. We compare our results with an alternative numerical algorithm which achieves an optimal input signal by iteratively improving an initial guess. We show agreement of both approaches when applied to the LISA Pathfinder case.

  9. Multidisciplinary Thermal Analysis of Hot Aerospace Structures

    DTIC Science & Technology

    2010-05-02

    Seidel iteration. Such a strategy simplifies explicit/implicit treatment, subcycling, load balancing, software modularity, and replacements as better... Stefan-Boltzmann constant, E is the emissivity of the surface, f is the form factor from the surface to the reference surface, Br is the temperature of... Stokes equations using Gauss-Seidel line relaxation, Computers and Fluids, 17, pp. 135-150, 1989. [22] Hung C.M. and MacCormack R.W., Numerical

  10. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
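
    A minimal sketch of one weighted-binary-matrix-sampling iteration, under simplifying assumptions (ordinary least-squares sub-models and a single random hold-out split instead of the PLS/cross-validation used for NIR calibration), is given below; the function name and parameters are illustrative, not the published VISSA code.

        import numpy as np

        def wbms_iteration(X, y, weights, n_models=500, top_frac=0.1, rng=None):
            """One iteration of weighted binary matrix sampling (illustrative sketch).

            Each row of the binary matrix selects a random variable subset, variables
            being drawn with probabilities given by `weights`. Sub-models are scored on
            a random hold-out split, and the new weights are the frequencies of each
            variable among the best-performing sub-models."""
            rng = np.random.default_rng(rng)
            n_samples, n_vars = X.shape
            B = rng.random((n_models, n_vars)) < weights     # weighted binary matrix
            B[:, weights >= 1.0] = True                      # frozen variables stay in
            errors = np.full(n_models, np.inf)
            idx = rng.permutation(n_samples)
            train, test = idx[: n_samples // 2], idx[n_samples // 2 :]
            for m in range(n_models):
                cols = np.where(B[m])[0]
                if cols.size == 0:
                    continue
                coef, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
                resid = y[test] - X[test][:, cols] @ coef
                errors[m] = np.sqrt(np.mean(resid**2))       # RMSE of the sub-model
            best = np.argsort(errors)[: max(1, int(top_frac * n_models))]
            return B[best].mean(axis=0)                      # frequency in best models

    Iterating this step drives the weights toward 0 or 1; variables whose weight reaches 0 leave the space (the shrinkage rule), and keeping only the best-performing sub-models is what makes each new variable space outperform the previous one.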

  11. The Triangle of the Space Launch System Operations

    NASA Astrophysics Data System (ADS)

    Fayolle, Eric

    2010-09-01

    Firemen know it as the “fire triangle”, mathematicians know it as the “golden triangle”, sailors know it as the “Bermuda triangle”, politicians know it as the “Weimar triangle”… This article presents a new instance of that geometry in the space launch system world: “the triangle of space launch system operations”. This triangle is composed of the following three topics, which must be taken into account in the processing of any space launch system operation: design, safety, and operational use. Design performance is of course considered from the early preliminary phase of system development. It is matured throughout the development phases through successive iterations, in order to respect the financial and schedule constraints imposed on the development of the system. This process leads to a detailed and precise design that achieves the required performance. The operational use phase then brings its own set of constraints during the use of the system. This phase is conducted through specific procedures for each operation. Each procedure contains sequences for each sub-system, which have to be executed in a very precise chronological order. These procedures can be processed automatically or manually, with or without the involvement of operators, and in a defined environment. Safeguard aims to verify compliance with the specific constraints imposed to guarantee the safety of persons and property, the protection of public health, and the environment. Safeguard has to be considered above the operational constraints of any space operation, without neglecting the highest safety level for the operators of the space operation, and of course without damaging the facilities or disturbing the external environment. All space operations are the result of a “win-win” compromise between these three topics. Contrary to the fire triangle, where one of the elements has to be removed in order to prevent combustion, none of the topics should be suppressed in the triangle of space launch system operations. Indeed, if safeguard is not considered from the beginning of the development phase, the development will not take safeguard constraints into account. The operational phase then becomes very difficult, because it is impossible to respect the safety rules required for the operational use of the system. Taking safeguard constraints into account in late project phases leads to very heavy operational constraints, sometimes quite disruptive for the operator, or even prevents the operational use phase from being considered mature and optimized. Conversely, if design performance is sacrificed in order to favor the safeguard aspect in the operational use phase, the system design will not be optimized, which leads to significant schedule and timing impacts. The examples detailed in this article illustrate the compromise that each designer must confront during the development of any system dealing with the safety of persons and property, the protection of public health, and the environment.

  12. Cluster Free Energies from Simple Simulations of Small Numbers of Aggregants: Nucleation of Liquid MTBE from Vapor and Aqueous Phases.

    PubMed

    Patel, Lara A; Kindt, James T

    2017-03-14

    We introduce a global fitting analysis method to obtain free energies of association of noncovalent molecular clusters using equilibrated cluster size distributions from unbiased constant-temperature molecular dynamics (MD) simulations. Because the systems simulated are small enough that the law of mass action does not describe the aggregation statistics, the method relies on iteratively determining a set of cluster free energies that, using appropriately weighted sums over all possible partitions of N monomers into clusters, produces the best-fit size distribution. The quality of these fits can be used as an objective measure of self-consistency to optimize the cutoff distance that determines how clusters are defined. To showcase the method, we have simulated a united-atom model of methyl tert-butyl ether (MTBE) in the vapor phase and in explicit water solution over a range of system sizes (up to 95 MTBE in the vapor phase and 60 MTBE in the aqueous phase) and concentrations at 273 K. The resulting size-dependent cluster free energy functions follow a form derived from classical nucleation theory (CNT) quite well over the full range of cluster sizes, although deviations are more pronounced for small cluster sizes. The CNT fit to cluster free energies yielded surface tensions that were in both cases lower than those for the simulated planar interfaces. We use a simple model to derive a condition for minimizing non-ideal effects on cluster size distributions and show that the cutoff distance that yields the best global fit is consistent with this condition.

  13. Development of a clinical definition for acute respiratory distress syndrome using the Delphi technique.

    PubMed

    Ferguson, Niall D; Davis, Aileen M; Slutsky, Arthur S; Stewart, Thomas E

    2005-06-01

    The objective of this study is to describe the implementation of formal consensus techniques in the development of a clinical definition for acute respiratory distress syndrome. A Delphi consensus process was conducted using e-mail. Sixteen panelists who were both researchers and opinion leaders were systematically recruited. The Delphi technique was performed over 4 rounds on the background of an explicit definition framework. Item generation was performed in round 1, item reduction in rounds 2 and 3, and definition evaluation in round 4. Explicit consensus thresholds were used throughout. Of the 16 panelists, 11 actually participated in developing a definition that met a priori consensus rules on the third iteration. New incorporations in the Delphi definition include the use of a standardized oxygenation assessment and the documentation of either a predisposing factor or decreased thoracic compliance. The panelists rated the Delphi definition as acceptable to highly acceptable (median score, 6; range, 5-7 on a 7-point Likert scale). We conclude that it is feasible to consider using formal consensus in the development of future definitions of acute respiratory distress syndrome. Testing of sensibility, reliability, and validity are needed for this preliminary definition; these test results should be incorporated into future iterations of this definition.

  14. Explicit Finite Element Techniques Used to Characterize Splashdown of the Space Shuttle Solid Rocket Booster Aft Skirt

    NASA Technical Reports Server (NTRS)

    Melis, Matthew E.

    2003-01-01

    NASA Glenn Research Center's Structural Mechanics Branch has years of expertise in using explicit finite element methods to predict the outcome of ballistic impact events. Shuttle engineers from the NASA Marshall Space Flight Center and the NASA Kennedy Space Center required assistance in assessing the structural loads that a newly proposed thrust vector control system for the space shuttle solid rocket booster (SRB) aft skirt would experience during its recovery splashdown.

  15. Multidirectional hybrid algorithm for the split common fixed point problem and application to the split common null point problem.

    PubMed

    Li, Xia; Guo, Meifang; Su, Yongfu

    2016-01-01

    In this article, a new multidirectional monotone hybrid iteration algorithm for finding a solution to the split common fixed point problem is presented for two countable families of quasi-nonexpansive mappings in Banach spaces. Strong convergence theorems are proved. As an application, the result is applied to the split common null point problem of maximal monotone operators in Banach spaces. Strong convergence theorems for finding a solution of the split common null point problem are derived. This iteration algorithm can accelerate the convergence speed of the iterative sequence. The results of this paper improve and extend the recent results of Takahashi and Yao (Fixed Point Theory Appl 2015:87, 2015) and many others.

  16. Illustrating dynamical symmetries in classical mechanics: The Laplace-Runge-Lenz vector revisited

    NASA Astrophysics Data System (ADS)

    O'Connell, Ross C.; Jagannathan, Kannan

    2003-03-01

    The inverse square force law admits a conserved vector that lies in the plane of motion. This vector has been associated with the names of Laplace, Runge, and Lenz, among others. Many workers have explored aspects of the symmetry and degeneracy associated with this vector and with analogous dynamical symmetries. We define a conserved dynamical variable α that characterizes the orientation of the orbit in two-dimensional configuration space for the Kepler problem and an analogous variable β for the isotropic harmonic oscillator. This orbit orientation variable is canonically conjugate to the angular momentum component normal to the plane of motion. We explore the canonical one-parameter group of transformations generated by α(β). Because we have an obvious pair of conserved canonically conjugate variables, it is desirable to use them as a coordinate-momentum pair. In terms of these phase space coordinates, the form of the Hamiltonian is nearly trivial because neither member of the pair can occur explicitly in the Hamiltonian. From these considerations we gain a simple picture of dynamics in phase space. The procedure we use is in the spirit of the Hamilton-Jacobi method.
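
    For reference, the conserved vector in question has the standard form for the attractive potential V = -k/r, and writing it down makes the orbit-orientation variable concrete:

        \mathbf{A} \;=\; \mathbf{p}\times\mathbf{L} \;-\; m k\,\hat{\mathbf{r}},
        \qquad \frac{d\mathbf{A}}{dt} = 0,
        \qquad \alpha \;=\; \operatorname{atan2}\!\left(A_y, A_x\right).

    The angle α fixes the direction of the major axis (toward perihelion) and, as stated in the abstract, is canonically conjugate (up to sign convention) to the angular-momentum component normal to the orbital plane.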

  17. String modular phases in Calabi-Yau families

    NASA Astrophysics Data System (ADS)

    Kadir, Shabnam; Lynker, Monika; Schimmrigk, Rolf

    2011-12-01

    We investigate the structure of singular Calabi-Yau varieties in moduli spaces that contain a Brieskorn-Pham point. Our main tool is a construction of families of deformed motives over the parameter space. We analyze these motives for general fibers and explicitly compute the L-series for singular fibers for several families. We find that the resulting motivic L-functions agree with the L-series of modular forms whose weight depends both on the rank of the motive and the degree of the degeneration of the variety. Surprisingly, these motivic L-functions are identical in several cases to L-series derived from weighted Fermat hypersurfaces. This shows that singular Calabi-Yau spaces of non-conifold type can admit a string worldsheet interpretation, much like rational theories, and that the corresponding irrational conformal field theories inherit information from the Gepner conformal field theory of the weighted Fermat fiber of the family. These results suggest that phase transitions via non-conifold configurations are physically plausible. In the case of severe degenerations we find a dimensional transmutation of the motives. This suggests further that singular configurations with non-conifold singularities may facilitate transitions between Calabi-Yau varieties of different dimensions.

  18. A Navier-Stokes solution of the three-dimensional viscous compressible flow in a centrifugal compressor impeller

    NASA Technical Reports Server (NTRS)

    Harp, J. L., Jr.

    1977-01-01

    A two-dimensional time-dependent computer code was utilized to calculate the three-dimensional steady flow within the impeller blading. The numerical method is an explicit time marching scheme in two spatial dimensions. Initially, an inviscid solution is generated on the hub blade-to-blade surface by the method of Katsanis and McNally (1973). Starting with the known inviscid solution, the viscous effects are calculated through iteration. The approach makes it possible to take into account principal impeller fluid-mechanical effects. It is pointed out that the second iterate provides a complete solution to the three-dimensional, compressible, Navier-Stokes equations for flow in a centrifugal impeller. The problems investigated are related to the study of a radial impeller and a backswept impeller.

  19. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover the fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
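
    The structure of such an algorithm can be sketched compactly. The code below is a generic iterative-thresholding reconstruction from retrospectively under-sampled k-space data: a gradient step on the data-consistency term followed by a generalized p-shrinkage of the image (the Chartrand-style rule shown reduces to ordinary soft thresholding for p = 1). Thresholding the image directly, rather than wavelet or other transform coefficients, and all parameter values are simplifying assumptions, not the exact operators used in the paper.

        import numpy as np

        def p_shrink(x, lam, p):
            """Generalized p-shrinkage; reduces to soft thresholding for p = 1."""
            mag = np.abs(x)
            shrunk = np.maximum(mag - lam**(2.0 - p) * np.maximum(mag, 1e-12)**(p - 1.0), 0.0)
            return np.exp(1j * np.angle(x)) * shrunk       # keep the phase, shrink the magnitude

        def iterative_p_threshold_mri(y, mask, lam=0.01, p=0.5, n_iter=100):
            """Reconstruct an image from under-sampled k-space data y (zero-filled
            outside `mask`), assuming the image itself is sparse."""
            x = np.fft.ifft2(y, norm="ortho")              # zero-filled starting image
            for _ in range(n_iter):
                k = np.fft.fft2(x, norm="ortho")
                grad = np.fft.ifft2(mask * (k - y), norm="ortho")   # data-consistency gradient
                x = p_shrink(x - grad, lam, p)
            return x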

  20. The phase-space structure of nearby dark matter as constrained by the SDSS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, Florent; Percival, Will; Jasche, Jens

    Previous studies using numerical simulations have demonstrated that the shape of the cosmic web can be described by studying the Lagrangian displacement field. We extend these analyses, showing that it is now possible to perform a Lagrangian description of cosmic structure in the nearby Universe based on large-scale structure observations. Building upon recent Bayesian large-scale inference of initial conditions, we present a cosmographic analysis of the dark matter distribution and its evolution, referred to as the dark matter phase-space sheet, in the nearby universe as probed by the Sloan Digital Sky Survey main galaxy sample. We consider its stretchings and foldings using a tetrahedral tessellation of the Lagrangian lattice. The method provides extremely accurate estimates of nearby density and velocity fields, even in regions of low galaxy density. It also measures the number of matter streams, and the deformation and parity reversals of fluid elements, which were previously thought inaccessible using observations. We illustrate the approach by showing the phase-space structure of known objects of the nearby Universe such as the Sloan Great Wall, the Coma cluster and the Boötes void. We dissect cosmic structures into four distinct components (voids, sheets, filaments, and clusters), using the Lagrangian classifiers DIVA, ORIGAMI, and a new scheme which we introduce and call LICH. Because these classifiers use information other than the sheer local density, identified structures explicitly carry physical information about their formation history. Accessing the phase-space structure of dark matter in galaxy surveys opens the way for new confrontations of observational data and theoretical models. We have made our data products publicly available.

  1. The phase-space structure of nearby dark matter as constrained by the SDSS

    NASA Astrophysics Data System (ADS)

    Leclercq, Florent; Jasche, Jens; Lavaux, Guilhem; Wandelt, Benjamin; Percival, Will

    2017-06-01

    Previous studies using numerical simulations have demonstrated that the shape of the cosmic web can be described by studying the Lagrangian displacement field. We extend these analyses, showing that it is now possible to perform a Lagrangian description of cosmic structure in the nearby Universe based on large-scale structure observations. Building upon recent Bayesian large-scale inference of initial conditions, we present a cosmographic analysis of the dark matter distribution and its evolution, referred to as the dark matter phase-space sheet, in the nearby universe as probed by the Sloan Digital Sky Survey main galaxy sample. We consider its stretchings and foldings using a tetrahedral tessellation of the Lagrangian lattice. The method provides extremely accurate estimates of nearby density and velocity fields, even in regions of low galaxy density. It also measures the number of matter streams, and the deformation and parity reversals of fluid elements, which were previously thought inaccessible using observations. We illustrate the approach by showing the phase-space structure of known objects of the nearby Universe such as the Sloan Great Wall, the Coma cluster and the Boötes void. We dissect cosmic structures into four distinct components (voids, sheets, filaments, and clusters), using the Lagrangian classifiers DIVA, ORIGAMI, and a new scheme which we introduce and call LICH. Because these classifiers use information other than the sheer local density, identified structures explicitly carry physical information about their formation history. Accessing the phase-space structure of dark matter in galaxy surveys opens the way for new confrontations of observational data and theoretical models. We have made our data products publicly available.

  2. 2009 Space Shuttle Probabilistic Risk Assessment Overview

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri L.; Canga, Michael A.; Boyer, Roger L.; Thigpen, Eric B.

    2010-01-01

    Loss of a Space Shuttle during flight has severe consequences, including loss of a significant national asset; loss of national confidence and pride; and, most importantly, loss of human life. The Shuttle Probabilistic Risk Assessment (SPRA) is used to identify risk contributors and their significance; thus, assisting management in determining how to reduce risk. In 2006, an overview of the SPRA Iteration 2.1 was presented at PSAM 8 [1]. Like all successful PRAs, the SPRA is a living PRA and has undergone revisions since PSAM 8. The latest revision to the SPRA is Iteration 3.1, and it will not be the last as the Shuttle program progresses and more is learned. This paper discusses the SPRA scope, overall methodology, and results, as well as provides risk insights. The scope, assumptions, uncertainties, and limitations of this assessment provide a risk-informed perspective to aid management's decision-making process. In addition, this paper compares the Iteration 3.1 analysis and results to the Iteration 2.1 analysis and results presented at PSAM 8.

  3. Floquet topological phases with symmetry in all dimensions

    NASA Astrophysics Data System (ADS)

    Roy, Rahul; Harper, Fenner

    2017-05-01

    Dynamical systems may host a number of remarkable symmetry-protected phases that are qualitatively different from their static analogs. In this work, we consider the phase space of symmetry-respecting unitary evolutions in detail and identify several distinct classes of evolution that host dynamical order. Using ideas from group cohomology, we construct a set of interacting Floquet drives that generate dynamical symmetry-protected topological order for each nontrivial cohomology class in every dimension, illustrating our construction with explicit two-dimensional examples. We also identify a set of symmetry-protected Floquet drives that lie outside of the group cohomology construction, and a further class of symmetry-respecting topological drives which host chiral edge modes. We use these special drives to define a notion of phase (stable to a class of local perturbations in the bulk) and the concepts of relative and absolute topological order, which can be applied to many different classes of unitary evolutions. These include fully many-body localized unitary evolutions and time crystals.

  4. Re-examining the effects of verbal instructional type on early stage motor learning.

    PubMed

    Bobrownicki, Ray; MacPherson, Alan C; Coleman, Simon G S; Collins, Dave; Sproule, John

    2015-12-01

    The present study investigated the differential effects of analogy and explicit instructions on early stage motor learning and movement in a modified high jump task. Participants were randomly assigned to one of three experimental conditions: analogy, explicit light (reduced informational load), or traditional explicit (large informational load). During the two-day learning phase, participants learned a novel high jump technique based on the 'scissors' style using the instructions for their respective conditions. For the single-day testing phase, participants completed both a retention test and task-relevant pressure test, the latter of which featured a rising high-jump-bar pressure manipulation. Although analogy learners demonstrated slightly more efficient technique and reported fewer technical rules on average, the differences between the conditions were not statistically significant. There were, however, significant differences in joint variability with respect to instructional type, as variability was lowest for the analogy condition during both the learning and testing phases, and as a function of block, as joint variability decreased for all conditions during the learning phase. Findings suggest that reducing the informational volume of explicit instructions may mitigate the deleterious effects on performance previously associated with explicit learning in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun

    2017-02-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is a model-free reinforcement learning strategy in essence, is used for performing a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) enables fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids becoming trapped in local optima, thereby enabling success probabilities to approach one.
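
    The iteration/success-probability trade-off that the Q-learning policy balances follows, for the standard Grover operator (rotation phase π), the well-known relation P(k) = sin²((2k+1) arcsin √λ); the fixed-phase generalization modifies the rotation per step, but the snippet below shows the baseline dependence on the iteration count k and the marked fraction λ.

        import numpy as np

        def grover_success(k, lam):
            """Success probability of standard Grover search after k iterations,
            for a fraction lam of marked items: P = sin^2((2k+1) * arcsin(sqrt(lam)))."""
            theta = np.arcsin(np.sqrt(lam))
            return np.sin((2 * k + 1) * theta) ** 2

        lam = 1 / 64                                  # e.g. one marked item out of 64
        ks = np.arange(0, 15)
        best_k = ks[np.argmax(grover_success(ks, lam))]
        print(best_k, grover_success(best_k, lam))    # ~6 iterations, P close to 1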

  6. Iterative cross section sequence graph for handwritten character segmentation.

    PubMed

    Dawoud, Amer

    2007-08-01

    The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.

  7. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

    Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
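
    A minimal example of the random-probing idea: with only a Hessian-vector product available (e.g. from second-order adjoints), Rademacher probes give an unbiased estimate of the Hessian diagonal, which serves as a cheap proxy for the point-spread-function volume at each model parameter. The estimator below is a standard Hutchinson-style diagonal probe and only a simplified stand-in for the auto-correlation analysis described in the abstract.

        import numpy as np

        def probe_hessian_diagonal(hessian_vector_product, n_model, n_probes=32, rng=None):
            """Estimate diag(H) for an implicit Hessian, given only a function that
            returns H @ v. Uses Rademacher probes: E[v * (H v)] = diag(H)."""
            rng = np.random.default_rng(rng)
            acc = np.zeros(n_model)
            for _ in range(n_probes):
                v = rng.choice([-1.0, 1.0], size=n_model)
                acc += v * hessian_vector_product(v)
            return acc / n_probes

        # Toy check with an explicit matrix standing in for the Hessian:
        H = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
        print(probe_hessian_diagonal(lambda v: H @ v, 3, n_probes=2000, rng=0))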

  8. Regularized iterative integration combined with non-linear diffusion filtering for phase-contrast x-ray computed tomography.

    PubMed

    Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter

    2014-12-29

    Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter to phase-contrast data either before or after filtered-backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection, as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at a considerably lower dose.
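
    The Perona-Malik filter itself is compact enough to sketch. The version below uses the common exponential edge-stopping function and periodic boundaries for brevity; the conduction parameter kappa, step size, and iteration count are illustrative and would be tuned to the phase-contrast data in practice.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
            """Perona-Malik non-linear diffusion: smooths noise while preserving edges
            by weighting diffusion with g(|grad|) = exp(-(|grad|/kappa)^2)."""
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # one-sided differences to the four neighbours (periodic boundaries)
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # edge-stopping conduction coefficients
                cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
                ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
                u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
            return u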

  9. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state from the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  10. Fragmentation functions beyond fixed order accuracy

    NASA Astrophysics Data System (ADS)

    Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix

    2017-03-01

    We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.

  11. A finite difference solution for the propagation of sound in near sonic flows

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Lester, H. C.

    1983-01-01

    An explicit time/space finite difference procedure is used to model the propagation of sound in a quasi one-dimensional duct containing high Mach number subsonic flow. Nonlinear acoustic equations are derived by perturbing the time-dependent Euler equations about a steady, compressible mean flow. The governing difference relations are based on a fourth-order, two-step (predictor-corrector) MacCormack scheme. The solution algorithm functions by switching on a time harmonic source and allowing the difference equations to iterate to a steady state. The principal effect of the non-linearities was to shift acoustical energy to higher harmonics. With increased source strengths, wave steepening was observed. This phenomenon suggests that the acoustical response may approach shock-like behavior at higher sound pressure levels as the throat Mach number approaches unity. On a peak level basis, good agreement between the nonlinear finite difference and linear finite element solutions was observed, even though a peak sound pressure level of about 150 dB occurred in the throat region. Nonlinear steady state waveform solutions are shown to be in excellent agreement with a nonlinear asymptotic theory.
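
    The predictor-corrector structure of the MacCormack scheme is easy to show on the linear advection equation; the sketch below is the classic second-order two-step form (forward difference in the predictor, backward difference in the corrector, then averaging), whereas the study cited uses a fourth-order variant of the same idea.

        import numpy as np

        def maccormack_advection(u, c, dx, dt, n_steps):
            """Classic two-step (predictor-corrector) MacCormack scheme for the linear
            advection equation u_t + c u_x = 0 with periodic boundaries."""
            for _ in range(n_steps):
                # predictor: forward difference
                u_pred = u - c * dt / dx * (np.roll(u, -1) - u)
                # corrector: backward difference on the predicted field, then average
                u = 0.5 * (u + u_pred - c * dt / dx * (u_pred - np.roll(u_pred, 1)))
            return u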

  12. Development of the ITER magnetic diagnostic set and specification.

    PubMed

    Vayakis, G; Arshad, S; Delhom, D; Encheva, A; Giacomin, T; Jones, L; Patel, K M; Pérez-Lasala, M; Portales, M; Prieto, D; Sartori, F; Simrock, S; Snipes, J A; Udintsev, V S; Watts, C; Winter, A; Zabeo, L

    2012-10-01

    ITER magnetic diagnostics are now in their detailed design and R&D phase. They have passed their conceptual design reviews and a working diagnostic specification has been prepared aimed at the ITER project requirements. This paper highlights specific design progress, in particular, for the in-vessel coils, steady state sensors, saddle loops and divertor sensors. Key changes in the measurement specifications, and a working concept of software and electronics are also outlined.

  13. Face-name association learning in early Alzheimer's disease: a comparison of learning methods and their underlying mechanisms.

    PubMed

    Bier, Nathalie; Van Der Linden, Martial; Gagnon, Lise; Desrosiers, Johanne; Adam, Stephane; Louveaux, Stephanie; Saint-Mleux, Julie

    2008-06-01

    This study compared the efficacy of five learning methods in the acquisition of face-name associations in early dementia of Alzheimer type (AD). The contribution of error production and implicit memory to the efficacy of each method was also examined. Fifteen participants with early AD and 15 matched controls were exposed to five learning methods: spaced retrieval, vanishing cues, errorless, and two trial-and-error methods, one with explicit and one with implicit memory task instructions. Under each method, participants had to learn a list of five face-name associations, followed by free recall, cued recall and recognition. Delayed recall was also assessed. For AD, results showed that all methods were efficient but there were no significant differences between them. The number of errors produced during the learning phases varied between the five methods but did not influence learning. There were no significant differences between implicit and explicit memory task instructions on test performances. For the control group, there were no differences between the five methods. Finally, no significant correlations were found between the performance of the AD participants in free recall and their cognitive profile, but generally, the best performers had better remaining episodic memory. Also, case study analyses showed that spaced retrieval was the method for which the greatest number of participants (four) obtained results as good as the controls. This study suggests that the five methods are effective for new learning of face-name associations in AD. It appears that early AD patients can learn, even in the context of error production and explicit memory conditions.

  14. A hybrid formalism of aerosol gas phase interaction for 3-D global models

    NASA Astrophysics Data System (ADS)

    Benduhn, F.

    2009-04-01

    Aerosol chemical composition is a relevant factor in the global climate system with respect to both atmospheric chemistry and the aerosol direct and indirect effects. Aerosol chemical composition determines the capacity of aerosol particles to act as cloud condensation nuclei, both explicitly via particle size and implicitly via the aerosol hygroscopic property. Due to the primary role of clouds in the climate system and the sensitivity of cloud formation and radiative properties to the cloud droplet number, it is necessary to determine the chemical composition of the aerosol with accuracy. Dissolution, although a formally fairly well known process, may be subject to numerically prohibitive properties that result from the chemical interaction of the species involved. Approaches to date for modeling the dissolution of inorganics into the aerosol liquid phase within a 3-D global model have been based on an equilibrium, transient or hybrid equilibrium-transient treatment. All of these methods have the disadvantage of a priori assumptions about the mechanism and/or are numerically unmanageable in the context of a global climate system model. In this paper a new hybrid formalism for aerosol gas phase interaction is presented within the framework of the H2SO4/HNO3/HCl/NH3 system and a modal approach to aerosol size discretisation. The formalism is distinct from prior hybrid approaches in that no a priori assumption is made about the nature of the regime a particular aerosol mode is in. Whether a particular mode is set to be in the equilibrium or the transitory regime is continuously determined during each time increment against relevant criteria, considering the estimated equilibration time interval and the interdependence of the aerosol modes relative to the partitioning of the dissolving species. In this way, the range of aerosol compositions in which species interaction makes transient dissolution numerically stiff is effectively avoided, and the numerical expense of dissolution in the transient regime is reduced by minimising the number of modes in this regime and by using a larger time step. Containment of the numerical expense of the modes in the equilibrium regime is ensured through the use of either an analytical equilibrium solver that requires iteration among the equilibrium modes, or a simple numerical solver based on a differential approach that requires iteration among the chemical species. Both equilibrium solvers require iteration over the water content and the activity coefficients. The decision to use one or the other solver is made by considering the actual equilibrating mechanism, either chemical interaction or gas-phase partial-pressure variation, respectively. The formalism should thus combine appropriate process simplification, resulting in reasonable computation time, with a high degree of conformity to the real process, as ensured by a transient representation of dissolution. The resulting effectiveness and limits of the formalism are illustrated with numerical examples.

  15. Representations of spacetime diffeomorphisms. I. Canonical parametrized field theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isham, C.J.; Kuchar, K.V.

    The super-Hamiltonian and supermomentum in canonical geometrodynamics or in a parametrized field theory on a given Riemannian background have Poisson brackets which obey the Dirac relations. By smearing the supermomentum with vector fields V ∈ LDiff Σ on the space manifold Σ, the Lie algebra LDiff Σ of the spatial diffeomorphism group Diff Σ can be mapped antihomomorphically into the Poisson bracket algebra on the phase space of the system. The explicit dependence of the Poisson brackets between two super-Hamiltonians on canonical coordinates (spatial metrics in geometrodynamics and embedding variables in parametrized theories) is usually regarded as an indication that the Dirac relations cannot be connected with a representation of the complete Lie algebra LDiff M of spacetime diffeomorphisms.

  16. NNLO jet cross sections by subtraction

    NASA Astrophysics Data System (ADS)

    Somogyi, G.; Bolzoni, P.; Trócsányi, Z.

    2010-08-01

    We report on the computation of a class of integrals that appear when integrating the so-called iterated singly-unresolved approximate cross section of the NNLO subtraction scheme of Refs. [G. Somogyi, Z. Trócsányi, and V. Del Duca, JHEP 06, 024 (2005), arXiv:hep-ph/0502226; G. Somogyi and Z. Trócsányi, (2006), arXiv:hep-ph/0609041; G. Somogyi, Z. Trócsányi, and V. Del Duca, JHEP 01, 070 (2007), arXiv:hep-ph/0609042; G. Somogyi and Z. Trócsányi, JHEP 01, 052 (2007), arXiv:hep-ph/0609043] over the factorised phase space of unresolved partons. The integrated approximate cross section itself can be written as the product of an insertion operator (in colour space) times the Born cross section. We give selected results for the insertion operator for processes with two and three hard partons in the final state.

  17. A PRESTO-SENSE sequence with alternating partial-Fourier encoding for rapid susceptibility-weighted 3D MRI time series.

    PubMed

    Klarhöfer, Markus; Dilharreguy, Bixente; van Gelderen, Peter; Moonen, Chrit T W

    2003-10-01

    A 3D sequence for dynamic susceptibility imaging is proposed which combines echo-shifting principles (such as PRESTO), sensitivity encoding (SENSE), and partial-Fourier acquisition. The method uses a moderate SENSE factor of 2 and takes advantage of an alternating partial k-space acquisition in the "slow" phase encode direction allowing an iterative reconstruction using high-resolution phase estimates. Offering an isotropic spatial resolution of 4 x 4 x 4 mm(3), the novel sequence covers the whole brain including parts of the cerebellum in 0.5 sec. Its temporal signal stability is comparable to that of a full-Fourier, full-FOV EPI sequence having the same dynamic scan time but much less brain coverage. Initial functional MRI experiments showed consistent activation in the motor cortex with an average signal change slightly less than that of EPI. Copyright 2003 Wiley-Liss, Inc.

  18. A resolution-enhancing image reconstruction method for few-view differential phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Guan, Huifeng; Anastasio, Mark A.

    2017-03-01

    It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.

  19. Human Engineering of Space Vehicle Displays and Controls

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Holden, Kritina L.; Boyer, Jennifer; Stephens, John-Paul; Ezer, Neta; Sandor, Aniko

    2010-01-01

    Proper attention to the integration of human needs into the vehicle displays and controls design process creates a safe and productive environment for the crew. Although this integration is critical for all phases of flight, for crew interfaces that are used during dynamic phases (e.g., ascent and entry), the integration is particularly important because of demanding environmental conditions. This panel addresses how human engineering involvement ensures that human-system integration occurs early in the design and development process and continues throughout the lifecycle of a vehicle. This process includes the development of requirements and quantitative metrics to measure design success, research on fundamental design questions, human-in-the-loop evaluations, and iterative design. Processes and results from research on displays and controls; the creation and validation of usability, workload, and consistency metrics; and the design and evaluation of crew interfaces for NASA's Crew Exploration Vehicle are used as case studies.

  20. 3D coherent X-ray diffractive imaging of an Individual colloidal crystal grain

    NASA Astrophysics Data System (ADS)

    Shabalin, A.; Meijer, J.-M.; Sprung, M.; Petukhov, A. V.; Vartanyants, I. A.

    Self-assembled colloidal crystals represent an important model system to study nucleation phenomena and solid-solid phase transitions. They are attractive for applications in photonics and sensorics. We present results of a coherent x-ray diffractive imaging experiment performed on a single colloidal crystal grain. The full three-dimensional (3D) reciprocal space map measured by an azimuthal rotational scan contained several orders of Bragg reflections together with the coherent interference signal between them. Applying the iterative phase retrieval approach, the 3D structure of the crystal grain was reconstructed and positions of individual colloidal particles were resolved. We identified an exact stacking sequence of hexagonal close-packed layers including planar and linear defects. Our results open up a breakthrough in applications of coherent x-ray diffraction for visualization of the inner 3D structure of different mesoscopic materials, such as photonic crystals.

  1. An Iterative Information-Reduced Quadriphase-Shift-Keyed Carrier Synchronization Scheme Using Decision Feedback for Low Signal-to-Noise Ratio Applications

    NASA Technical Reports Server (NTRS)

    Simon, M.; Tkacenko, A.

    2006-01-01

    In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.

  2. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated by modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott's technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced into the aperture distribution by modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, the array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. A separable Taylor distribution with nbar=4 and a 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was chosen to provide the required beamwidth and to be close to a null in the E-plane end-fire direction. Because of the alternating slot offsets, grating lobes called butterfly lobes are produced in non-principal planes close to the H-plane. An attempt to reduce the influence of such grating lobes resulted in a symmetric design.
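
    The Monte Carlo tolerance step can be illustrated with a simple linear-array model: perturb the nominal aperture amplitudes and phases with random errors and average the resulting far-out sidelobe level over many realizations. The Hamming taper, error magnitudes, element spacing, and the definition of "average sidelobe level" below are placeholders for the actual Taylor distribution and slot-array model.

        import numpy as np

        def array_factor(amps, phases, d, theta, wavelength=1.0):
            """Array factor of a uniformly spaced linear array (spacing d in wavelengths
            when wavelength = 1), evaluated at the angles theta from broadside."""
            n = np.arange(amps.size)
            k = 2 * np.pi / wavelength
            u = np.sin(theta)
            return np.sum(amps[:, None]
                          * np.exp(1j * (phases[:, None] + k * d * n[:, None] * u[None, :])),
                          axis=0)

        def monte_carlo_sidelobes(amps, d, amp_err_db=0.3, phase_err_deg=3.0,
                                  n_trials=200, rng=0):
            """Average far-out sidelobe level (beyond 30 deg from broadside) over random
            amplitude and phase errors, relative to the unperturbed broadside peak."""
            rng = np.random.default_rng(rng)
            theta = np.deg2rad(np.linspace(30, 90, 400))
            peak = np.abs(array_factor(amps, np.zeros_like(amps), d, np.array([0.0])))[0]
            levels = []
            for _ in range(n_trials):
                a = amps * 10 ** (rng.normal(0, amp_err_db / 20, amps.size))
                p = np.deg2rad(rng.normal(0, phase_err_deg, amps.size))
                sll = np.abs(array_factor(a, p, d, theta)) / peak
                levels.append(np.mean(20 * np.log10(sll + 1e-12)))
            return np.mean(levels)

        taper = np.hamming(32)                      # stand-in for a Taylor distribution
        print(monte_carlo_sidelobes(taper, d=0.7))  # average sidelobe level in dB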

  3. Love-type waves in functionally graded piezoelectric material (FGPM) sandwiched between initially stressed layer and elastic substrate

    NASA Astrophysics Data System (ADS)

    Saroj, Pradeep K.; Sahu, S. A.; Chaudhary, S.; Chattopadhyay, A.

    2015-10-01

    This paper investigates the propagation behavior of Love-type surface waves in a three-layered composite structure with initial stress. The composite structure has been taken in such a way that a functionally graded piezoelectric material (FGPM) layer is bonded between an initially stressed piezoelectric upper layer and an elastic substrate. Using the method of separation of variables, the frequency equation for the considered wave has been established in determinant form for the electrically open and short cases on the free surface. The bisection method iteration technique has been used to find the roots of the dispersion relations, which give the modes for the electrically open and short cases. The effects of the gradient variation of the material constants and of the initial stress on the phase velocity of the surface waves are discussed. The dependence on thickness of each parameter of the study is shown explicitly. A study has also been carried out to show the existence of a cut-off frequency. The findings are exhibited graphically. The obtained results are significant for the investigation and characterization of Love-type waves in FGPM-layered media.
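
    The root-finding step is ordinary bisection on the dispersion determinant. The sketch below shows the generic routine together with a hypothetical stand-in dispersion function; in the actual problem, D would be the open- or short-circuit frequency-equation determinant evaluated at a fixed wavenumber, and the bracketing interval would be scanned along each mode branch.

        import numpy as np

        def bisect_root(f, a, b, tol=1e-10, max_iter=200):
            """Bisection: brackets a root of f between a and b (f(a) and f(b) must have
            opposite signs) and halves the interval until it is shorter than tol."""
            fa, fb = f(a), f(b)
            if fa * fb > 0:
                raise ValueError("root not bracketed")
            for _ in range(max_iter):
                m = 0.5 * (a + b)
                fm = f(m)
                if abs(b - a) < tol or fm == 0.0:
                    return m
                if fa * fm < 0:
                    b, fb = m, fm
                else:
                    a, fa = m, fm
            return 0.5 * (a + b)

        # Hypothetical use: solve the dispersion determinant D(c) = 0 for the phase
        # velocity c at a fixed wavenumber (D below is only a stand-in function).
        D = lambda c: np.cos(c) - 0.3 * c
        print(bisect_root(D, 0.0, 3.0))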

  4. Phase retrieval on broadband and under-sampled images for the JWST testbed telescope

    NASA Astrophysics Data System (ADS)

    Smith, J. Scott; Aronstein, David L.; Dean, Bruce H.; Acton, D. Scott

    2009-08-01

    The James Webb Space Telescope (JWST) consists of an optical telescope element (OTE) that sends light to five science instruments. The initial steps for commissioning the telescope are performed with the Near-Infrared Camera (NIRCam) instrument, but low-order optical aberrations in the remaining science instruments must be determined (using phase retrieval) in order to ensure good performance across the entire field of view. These remaining instruments were designed to collect science data, and not to serve as wavefront sensors. Thus, the science cameras are not ideal phase-retrieval imagers for several reasons: they record under-sampled data and have a limited range of diversity defocus, and only one instrument has an internal, narrowband filter. To address these issues, we developed the capability of sensing these aberrations using an extension of image-based iterative-transform phase retrieval called Variable Sampling Mapping (VSM). The results show that VSM-based phase retrieval is capable of sensing low-order aberrations to a few nm RMS from images that are consistent with the non-ideal conditions expected during JWST multi-field commissioning. The algorithm is validated using data collected from the JWST Testbed Telescope (TBT).

  5. RF Pulse Design using Nonlinear Gradient Magnetic Fields

    PubMed Central

    Kopanoglu, Emre; Constable, R. Todd

    2014-01-01

    Purpose An iterative k-space trajectory and radio-frequency (RF) pulse design method is proposed for Excitation using Nonlinear Gradient Magnetic fields (ENiGMa). Theory and Methods The spatial encoding functions (SEFs) generated by nonlinear gradient fields (NLGFs) are linearly dependent in Cartesian-coordinates. Left uncorrected, this may lead to flip-angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a Matching-Pursuit algorithm, and the RF pulse is designed using a Conjugate-Gradient algorithm. Three variants of the proposed approach are given: the full-algorithm, a computationally-cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. Results The method is compared to other iterative (Matching-Pursuit and Conjugate Gradient) and non-iterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity significantly. Conclusion An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. PMID:25203286

  6. Autocratic strategies for iterated games with arbitrary action spaces

    PubMed Central

    2016-01-01

    The recent discovery of zero-determinant strategies for the iterated prisoner’s dilemma sparked a surge of interest in the surprising fact that a player can exert unilateral control over iterated interactions. These remarkable strategies, however, are known to exist only in games in which players choose between two alternative actions such as “cooperate” and “defect.” Here we introduce a broader class of autocratic strategies by extending zero-determinant strategies to iterated games with more general action spaces. We use the continuous donation game as an example, which represents an instance of the prisoner’s dilemma that intuitively extends to a continuous range of cooperation levels. Surprisingly, despite the fact that the opponent has infinitely many donation levels from which to choose, a player can devise an autocratic strategy to enforce a linear relationship between his or her payoff and that of the opponent even when restricting his or her actions to merely two discrete levels of cooperation. In particular, a player can use such a strategy to extort an unfair share of the payoffs from the opponent. Therefore, although the action space of the continuous donation game dwarfs that of the classic prisoner’s dilemma, players can still devise relatively simple autocratic and, in particular, extortionate strategies. PMID:26976578

  7. Using sparsity information for iterative phase retrieval in x-ray propagation imaging.

    PubMed

    Pein, A; Loock, S; Plonka, G; Salditt, T

    2016-04-18

    For iterative phase retrieval algorithms in near field x-ray propagation imaging experiments with a single distance measurement, it is indispensable to have a strong constraint based on a priori information about the specimen; for example, information about the specimen's support. Recently, Loock and Plonka proposed to use the a priori information that the exit wave is sparsely represented in a certain directional representation system, a so-called shearlet system. In this work, we extend this approach to complex-valued signals by applying the new shearlet constraint to amplitude and phase separately. Further, we demonstrate its applicability to experimental data.

  8. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry.

    PubMed

    Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, namely the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.

  9. Explicit Filtering Based Low-Dose Differential Phase Reconstruction Algorithm with the Grating Interferometry

    PubMed Central

    Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang

    2015-01-01

    X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, namely the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scan, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refractive indices rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971

  10. Transient analysis of a thermal storage unit involving a phase change material

    NASA Technical Reports Server (NTRS)

    Griggs, E. I.; Pitts, D. R.; Humphries, W. R.

    1974-01-01

    The transient response of a single cell of a typical phase change material type thermal capacitor has been modeled using numerical conductive heat transfer techniques. The cell consists of a base plate, an insulated top, and two vertical walls (fins) forming a two-dimensional cavity filled with a phase change material. Both explicit and implicit numerical formulations are outlined. A mixed explicit-implicit scheme which treats the fin implicitly while treating the phase change material explicitly is discussed. A band algorithmic scheme is used to reduce computer storage requirements for the implicit approach while retaining a relatively fine grid. All formulations are presented in dimensionless form, thereby enabling application to geometrically similar problems. Typical parametric results are graphically presented for the case of melting with constant heat input to the base of the cell.
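
    As a minimal illustration of the explicit formulation mentioned above, the sketch below advances a one-dimensional, dimensionless conduction problem with a simple forward-in-time, centered-in-space update. It omits the phase-change (latent heat) treatment and the fin/cavity geometry of the study; the grid size, Fourier number, and boundary conditions are assumptions chosen only to show the update structure.

```python
import numpy as np

# Minimal explicit finite-difference sketch for 1-D transient conduction
# (dimensionless form); the latent-heat/phase-change treatment is omitted.
nx, fo = 41, 0.25            # grid points and Fourier number (Fo <= 0.5 for stability)
T = np.zeros(nx)             # initial dimensionless temperature
T[0] = 1.0                   # constant heat input proxy: fixed temperature at the base

for step in range(500):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + fo * (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
    T[-1] = T[-2]            # insulated top: zero-gradient boundary

print("temperature profile after 500 steps:", np.round(T[::10], 3))
```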

  11. Explicit symplectic orbit and spin tracking method for electric storage ring

    DOE PAGES

    Hwang, Kilean; Lee, S. Y.

    2016-08-18

    We develop a symplectic charged-particle tracking method for phase space coordinates and polarization in all-electric storage rings. Near the magic energy, the spin precession tune is proportional to the fractional momentum deviation δm from the magic energy, and the amplitude of the radial and longitudinal spin precession is proportional to η/δm, where η is the electric dipole moment, for an initially vertically polarized beam. As a result, the method can be used to extract the electric dipole moment of a charged particle by employing narrow-band frequency analysis of the polarization around the magic energy.

  12. Clock synchronization by accelerated observers - Metric construction for arbitrary congruences of world lines

    NASA Technical Reports Server (NTRS)

    Henriksen, R. N.; Nelson, L. A.

    1985-01-01

    Clock synchronization in an arbitrarily accelerated observer congruence is considered. A general solution is obtained that maintains the isotropy and coordinate independence of the one-way speed of light. Attention is also given to various particular cases, including the rotating disk (ring) congruence. An explicit, congruence-based spacetime metric is constructed according to Einstein's clock synchronization procedure, and the equation for the geodesics of the space-time is derived using the Hamilton-Jacobi method. The application of interferometric techniques (absolute phase radio interferometry, VLBI) to the detection of the 'global Sagnac effect' is also discussed.

  13. Simulation and Analysis of Launch Teams (SALT)

    NASA Technical Reports Server (NTRS)

    2008-01-01

    A SALT effort was initiated in late 2005 with seed funding from the Office of Safety and Mission Assurance Human Factors organization. Its objectives included demonstrating human behavior and performance modeling and simulation technologies for launch team analysis, training, and evaluation. The goal of the research is to improve future NASA operations and training. The project employed an iterative approach, with the first iteration focusing on the last 70 minutes of a nominal-case Space Shuttle countdown, the second iteration focusing on aborts and launch commit criteria violations, the third iteration focusing on Ares I-X communications, and the fourth iteration focusing on Ares I-X Firing Room configurations. SALT applied new commercial off-the-shelf technologies from industry and the Department of Defense in the spaceport domain.

  14. Recurrence Quantification of Fractal Structures

    PubMed Central

    Webber, Charles L.

    2012-01-01

    By definition, fractal structures possess recurrent patterns. At different levels repeating patterns can be visualized at higher magnifications. The purpose of this chapter is threefold. First, general characteristics of dynamical systems are addressed from a theoretical mathematical perspective. Second, qualitative and quantitative recurrence analyses are reviewed in brief, but the reader is directed to other sources for explicit details. Third, example mathematical systems that generate strange attractors are explicitly defined, giving the reader the ability to reproduce the rich dynamics of continuous chaotic flows or discrete chaotic iterations. The challenge is then posited for the reader to study for themselves the recurrent structuring of these different dynamics. With a firm appreciation of the power of recurrence analysis, the reader will be prepared to turn their sights on real-world systems (physiological, psychological, mechanical, etc.). PMID:23060808
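
    For readers who want a concrete starting point, the sketch below iterates one of the classic discrete chaotic maps (the logistic map, chosen here as an assumed example) and builds the thresholded recurrence matrix that underlies recurrence plots and recurrence quantification; the threshold choice is an assumption for illustration.

```python
import numpy as np

# Iterate a discrete chaotic map (logistic map, as an example) and build the
# thresholded recurrence matrix that underlies recurrence plots and RQA.
def logistic(x0=0.2, r=4.0, n=500):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

x = logistic()
eps = 0.1 * np.std(x)                           # recurrence threshold (assumed)
R = np.abs(x[:, None] - x[None, :]) < eps       # recurrence matrix R[i, j]

# Recurrence rate: fraction of recurrent point pairs, one simple RQA measure.
print("recurrence rate: %.3f" % R.mean())
```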

  15. New ab initio adiabatic potential energy surfaces and bound state calculations for the singlet ground X˜ 1A1 and excited C˜ 1B2(21A') states of SO2

    NASA Astrophysics Data System (ADS)

    Kłos, Jacek; Alexander, Millard H.; Kumar, Praveen; Poirier, Bill; Jiang, Bin; Guo, Hua

    2016-05-01

    We report new and more accurate adiabatic potential energy surfaces (PESs) for the ground X˜ 1A1 and electronically excited C˜ 1B2(21A') states of the SO2 molecule. Ab initio points are calculated using the explicitly correlated internally contracted multi-reference configuration interaction (icMRCI-F12) method. A second less accurate PES for the ground X ˜ state is also calculated using an explicitly correlated single-reference coupled-cluster method with single, double, and non-iterative triple excitations [CCSD(T)-F12]. With these new three-dimensional PESs, we determine energies of the vibrational bound states and compare these values to existing literature data and experiment.

  16. Fast and Epsilon-Optimal Discretized Pursuit Learning Automata.

    PubMed

    Zhang, JunQi; Wang, Cheng; Zhou, MengChu

    2015-10-01

    Learning automata (LA) are powerful tools for reinforcement learning, and the discretized pursuit LA is the most popular among them. During an iteration its operation consists of three basic phases: 1) selecting the next action; 2) finding the optimal estimated action; and 3) updating the state probability. However, when the number of actions is large, learning becomes extremely slow because there are too many updates to be made at each iteration; the increased updates come mostly from phases 1 and 3. A new fast discretized pursuit LA with assured ε-optimality is proposed that performs both phases 1 and 3 with computational complexity independent of the number of actions. Apart from its low computational complexity, it achieves faster convergence than the classical algorithm when operating in stationary environments. This work promotes the application of LA to large-scale-action problems that require efficient reinforcement learning tools with assured ε-optimality, fast convergence, and low computational complexity per iteration.
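
    A minimal sketch of a classical discretized pursuit automaton is given below to make the three phases concrete; it is not the accelerated algorithm of the paper, and the reward probabilities, resolution parameter, and the omission of an explicit estimate-initialization phase are assumptions made for illustration.

```python
import numpy as np

# Sketch of a classical discretized pursuit learning automaton (DP_RI style).
# Reward probabilities and resolution are assumed; the explicit estimate
# initialization phase of the standard algorithm is omitted for brevity.
rng = np.random.default_rng(1)
true_rewards = np.array([0.2, 0.45, 0.8, 0.6])    # unknown to the automaton
r = len(true_rewards)
delta = 1.0 / (r * 100)                           # discretized probability step

p = np.full(r, 1.0 / r)                           # action probabilities
d_hat = np.zeros(r)                               # reward estimates
counts = np.zeros(r)

for _ in range(20000):
    a = rng.choice(r, p=p)                        # phase 1: select the next action
    reward = rng.random() < true_rewards[a]
    counts[a] += 1
    d_hat[a] += (reward - d_hat[a]) / counts[a]   # running-mean reward estimate
    m = int(np.argmax(d_hat))                     # phase 2: optimal estimated action
    if reward:                                    # phase 3: pursuit probability update
        p = np.maximum(p - delta, 0.0)
        p[m] = 0.0
        p[m] = 1.0 - p.sum()

print("learned probabilities:", np.round(p, 3), " best action:", int(np.argmax(p)))
```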

  17. Implicit and Explicit Number-Space Associations Differentially Relate to Interference Control in Young Adults With ADHD

    PubMed Central

    Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine

    2018-01-01

    Behavioral evidence for the link between numerical and spatial representations comes from the spatial-numerical association of response codes (SNARC) effect, consisting in faster reaction times to small/large numbers with the left/right hand respectively. The SNARC effect is, however, characterized by considerable intra- and inter-individual variability. It depends not only on the explicit or implicit nature of the numerical task, but also relates to interference control. To determine whether the prevalence of the latter relation in the elderly could be ascribed to younger individuals’ ceiling performances on executive control tasks, we determined whether the SNARC effect related to Stroop and/or Flanker effects in 26 young adults with ADHD. We observed a divergent pattern of correlation depending on the type of numerical task used to assess the SNARC effect and the type of interference control measure involved in number-space associations. Namely, stronger number-space associations during parity judgments involving implicit magnitude processing related to weaker interference control in the Stroop but not Flanker task. Conversely, stronger number-space associations during explicit magnitude classifications tended to be associated with better interference control in the Flanker but not Stroop paradigm. The association of stronger parity and magnitude SNARC effects with weaker and better interference control respectively indicates that different mechanisms underlie these relations. Activation of the magnitude-associated spatial code is irrelevant and potentially interferes with parity judgments, but in contrast assists explicit magnitude classifications. Altogether, the present study confirms the contribution of interference control to number-space associations also in young adults. It suggests that magnitude-associated spatial codes in implicit and explicit tasks are monitored by different interference control mechanisms, thereby explaining task-related intra-individual differences in number-space associations. PMID:29881363

  18. Lie symmetry analysis, explicit solutions and conservation laws for the space-time fractional nonlinear evolution equations

    NASA Astrophysics Data System (ADS)

    Inc, Mustafa; Yusuf, Abdullahi; Aliyu, Aliyu Isa; Baleanu, Dumitru

    2018-04-01

    This paper studies the symmetry analysis, explicit solutions, convergence analysis, and conservation laws (Cls) for two different space-time fractional nonlinear evolution equations with the Riemann-Liouville (RL) derivative. The governing equations are reduced to nonlinear ordinary differential equations (ODEs) of fractional order using their Lie point symmetries. In the reduced equations the derivative is in the Erdelyi-Kober (EK) sense, and the power series technique is applied to derive explicit solutions of the reduced fractional ODEs. The convergence of the obtained power series solutions is also presented. Moreover, the new conservation theorem and the generalization of the Noether operators are developed to construct nonlocal Cls for the equations. Some interesting figures for the obtained explicit solutions are presented.

  19. Dynamic phasing of multichannel cw laser radiation by means of a stochastic gradient algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, V A; Volkov, M V; Garanin, S G

    2013-09-30

    The phasing of a multichannel laser beam by means of an iterative stochastic parallel gradient (SPG) algorithm has been numerically and experimentally investigated. The operation of the SPG algorithm is simulated, the acceptable range of amplitudes of probe phase shifts is found, and the algorithm parameters at which the desired Strehl number can be obtained with a minimum number of iterations are determined. An experimental bench with phase modulators based on lithium niobate, which are controlled by a multichannel electronic unit with a real-time microcontroller, has been designed. Phasing of 16 cw laser beams at a system response bandwidth of 3.7 kHz and phase thermal distortions in a frequency band of about 10 Hz is experimentally demonstrated. The experimental data are in complete agreement with the calculation results. (control of laser radiation parameters)
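
    The core of such an SPG (often called SPGD) loop is compact enough to sketch: apply small random parallel probe phase shifts, measure the change in a sharpness metric, and step the controls along the estimated gradient. The sketch below uses an idealized on-axis intensity metric and assumed gain, probe amplitude, and channel count; it is not the experimental implementation described above.

```python
import numpy as np

# Minimal SPGD sketch for phasing N channels.  The "metric" is the normalized
# on-axis intensity of a coherent sum, standing in for the experimental
# Strehl-like signal; gain, probe amplitude, and channel count are assumed.
rng = np.random.default_rng(2)
N = 16
phase = rng.uniform(-np.pi, np.pi, N)    # unknown piston errors to be corrected
control = np.zeros(N)                    # phase-modulator commands

def metric(u):
    field = np.exp(1j * (phase + u)).sum()
    return np.abs(field) ** 2 / N ** 2   # equals 1.0 when perfectly phased

gain, probe = 50.0, 0.1
for _ in range(3000):
    d = probe * rng.choice([-1.0, 1.0], size=N)      # parallel probe phase shifts
    dJ = metric(control + d) - metric(control - d)   # two-sided metric difference
    control += gain * dJ * d                         # stochastic parallel gradient step

print("final phasing metric (1.0 is ideal): %.3f" % metric(control))
```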

  20. Cardiac phase-synchronized myocardial thallium-201 single-photon emission tomography using list mode data acquisition and iterative tomographic reconstruction.

    PubMed

    Vemmer, T; Steinbüchel, C; Bertram, J; Eschner, W; Kögler, A; Luig, H

    1997-03-01

    The purpose of this study was to determine whether data acquisition in list mode and iterative tomographic reconstruction would render feasible cardiac phase-synchronized thallium-201 single-photon emission tomography (SPET) of the myocardium under routine conditions without modifications in tracer dose, acquisition time, or number of steps of the gamma camera. Seventy non-selected patients underwent 201Tl SPET imaging according to a routine protocol (74 MBq/2 mCi 201Tl, 180 degrees rotation of the gamma camera, 32 steps, 30 min). Gamma camera data, ECG, and a time signal were recorded in list mode. The cardiac cycle was divided into eight phases, the end-diastolic phase encompassing the QRS complex and the end-systolic phase the T wave. Both phase- and non-phase-synchronized tomograms based on the same list mode data were reconstructed iteratively, and phase-synchronized and non-synchronized images were compared. Patients were divided into two groups depending on whether or not coronary artery disease had been definitely diagnosed prior to SPET imaging, and the numbers of patients in both groups demonstrating defects visible on the phase-synchronized but not on the non-synchronized images were compared. It was found that both postexercise and redistribution phase tomograms were suited for interpretation. The changes from end-diastolic to end-systolic images allowed a comparative assessment of regional wall motility and tracer uptake. End-diastolic tomograms provided the best definition of defects. Additional defects not apparent on non-synchronized images were visible in 40 patients, six of whom did not show any defect on the non-synchronized images. Of 42 patients in whom coronary artery disease had been definitely diagnosed, 19 had additional defects not visible on the non-synchronized images, in comparison to 21 of 28 in whom coronary artery disease was suspected (P < 0.02; chi-squared test). It is concluded that cardiac phase-synchronized 201Tl SPET of the myocardium was made feasible by list mode data acquisition and iterative reconstruction. The additional findings on the phase-synchronized tomograms, not visible on the non-synchronized ones, represented genuine defects. Cardiac phase-synchronized 201Tl SPET is advantageous in allowing simultaneous assessment of regional wall motion and tracer uptake, and in visualizing smaller defects.

  1. Hybrid diversity method utilizing adaptive diversity function for recovering unknown aberrations in an optical system

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2009-01-01

    A method of recovering unknown aberrations in an optical system includes collecting intensity data produced by the optical system, generating an initial estimate of a phase of the optical system, iteratively performing a phase retrieval on the intensity data to generate a phase estimate using an initial diversity function corresponding to the intensity data, generating a phase map from the phase retrieval phase estimate, decomposing the phase map to generate a decomposition vector, generating an updated diversity function by combining the initial diversity function with the decomposition vector, generating an updated estimate of the phase of the optical system by removing the initial diversity function from the phase map. The method may further include repeating the process beginning with iteratively performing a phase retrieval on the intensity data using the updated estimate of the phase of the optical system in place of the initial estimate of the phase of the optical system, and using the updated diversity function in place of the initial diversity function, until a predetermined convergence is achieved.

  2. Experiments and Simulations of ITER-like Plasmas in Alcator C-Mod

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    .R. Wilson, C.E. Kessel, S. Wolfe, I.H. Hutchinson, P. Bonoli, C. Fiore, A.E. Hubbard, J. Hughes, Y. Lin, Y. Ma, D. Mikkelsen, M. Reinke, S. Scott, A.C.C. Sips, S. Wukitch and the C-Mod Team

    Alcator C-Mod is performing ITER-like experiments to benchmark and verify projections to 15 MA ELMy H-mode inductive ITER discharges. The main focus has been on the transient ramp phases. The plasma current in C-Mod is 1.3 MA and the toroidal field is 5.4 T. Both Ohmic and ion cyclotron (ICRF) heated discharges are examined. Plasma current rampup experiments have demonstrated that (ICRF and LH) heating in the rise phase can save volt-seconds (V-s), as was predicted for ITER by simulations, but showed that the ICRF had no effect on the current profile versus Ohmic discharges. Rampdown experiments show an overcurrent in the Ohmic coil (OH) at the H to L transition, which can be mitigated by remaining in H-mode into the rampdown. Experiments have shown that when the EDA H-mode is preserved well into the rampdown phase, the density and temperature pedestal heights decrease during the plasma current rampdown. Simulations of the full C-Mod discharges have been done with the Tokamak Simulation Code (TSC), and the Coppi-Tang energy transport model is used with modified settings to provide the best fit to the experimental electron temperature profile. Other transport models have been examined also.

  3. Revealing Asymmetries in the HD181327 Debris Disk: A Recent Massive Collision or Interstellar Medium Warping

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Schneider, Glenn; Weinberger, Alycia J.; Debes, John H.; Grady, Carol A.; Jang-Condell, Hannah; Kuchner, Marc J.

    2014-01-01

    New multi-roll coronagraphic images of the HD181327 debris disk obtained using the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope reveal the debris ring in its entirety at high signal-to-noise ratio and unprecedented spatial resolution. We present and apply a new multi-roll image processing routine to identify and further remove quasi-static point-spread function-subtraction residuals and quantify systematic uncertainties. We also use a new iterative image deprojection technique to constrain the true disk geometry and aggressively remove any surface brightness asymmetries that can be explained without invoking dust density enhancements/deficits. The measured empirical scattering phase function for the disk is more forward scattering than previously thought and is not well-fit by a Henyey-Greenstein function. The empirical scattering phase function varies with stellocentric distance, consistent with the expected radiation pressure-induced size segregation exterior to the belt. Within the belt, the empirical scattering phase function contradicts unperturbed debris ring models, suggesting the presence of an unseen planet. The radial profile of the flux density is degenerate with a radially varying scattering phase function; therefore estimates of the ring's true width and edge slope may be highly uncertain. We detect large scale asymmetries in the disk, consistent with either the recent catastrophic disruption of a body with mass greater than 1% the mass of Pluto, or disk warping due to strong interactions with the interstellar medium.

  4. It's Only a Phase: Applying the 5 Phases of Clinical Trials to the NSCR Model Improvement Process

    NASA Technical Reports Server (NTRS)

    Elgart, S. R.; Milder, C. M.; Chappell, L. J.; Semones, E. J.

    2017-01-01

    NASA limits astronaut radiation exposures to a 3% risk of exposure-induced death from cancer (REID) at the upper 95% confidence level. Since astronauts approach this limit, it is important that the estimate of REID be as accurate as possible. The NASA Space Cancer Risk 2012 (NSCR-2012) model has been the standard for NASA's space radiation protection guidelines since its publication in 2013. The model incorporates elements from U.S. baseline statistics, Japanese atomic bomb survivor research, animal models, cellular studies, and radiation transport to calculate astronaut baseline risk of cancer and REID. The NSCR model is under constant revision to ensure emerging research is incorporated into radiation protection standards. It is important to develop guidelines, however, to determine what new research is appropriate for integration. Certain standards of transparency are necessary in order to assess data quality, statistical quality, and analytical quality. To this effect, all original source code and any raw data used to develop the code are required to confirm there are no errors which significantly change reported outcomes. It is possible to apply a clinical trials approach to select and assess the improvement concepts that will be incorporated into future iterations of NSCR. This poster describes the five phases of clinical trials research, pre-clinical research, and clinical research phases I-IV, explaining how each step can be translated into an appropriate NSCR model selection guideline.

  5. Revealing asymmetries in the HD 181327 debris disk: A recent massive collision or interstellar medium warping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stark, Christopher C.; Kuchner, Marc J.; Schneider, Glenn

    New multi-roll coronagraphic images of the HD 181327 debris disk obtained using the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope reveal the debris ring in its entirety at high signal-to-noise ratio and unprecedented spatial resolution. We present and apply a new multi-roll image processing routine to identify and further remove quasi-static point-spread function-subtraction residuals and quantify systematic uncertainties. We also use a new iterative image deprojection technique to constrain the true disk geometry and aggressively remove any surface brightness asymmetries that can be explained without invoking dust density enhancements/deficits. The measured empirical scattering phase function for the disk is more forward scattering than previously thought and is not well-fit by a Henyey-Greenstein function. The empirical scattering phase function varies with stellocentric distance, consistent with the expected radiation pressure-induced size segregation exterior to the belt. Within the belt, the empirical scattering phase function contradicts unperturbed debris ring models, suggesting the presence of an unseen planet. The radial profile of the flux density is degenerate with a radially varying scattering phase function; therefore estimates of the ring's true width and edge slope may be highly uncertain. We detect large scale asymmetries in the disk, consistent with either the recent catastrophic disruption of a body with mass >1% the mass of Pluto, or disk warping due to strong interactions with the interstellar medium.

  6. Revealing Asymmetries in the HD 181327 Debris Disk: A Recent Massive Collision or Interstellar Medium Warping

    NASA Astrophysics Data System (ADS)

    Stark, Christopher C.; Schneider, Glenn; Weinberger, Alycia J.; Debes, John H.; Grady, Carol A.; Jang-Condell, Hannah; Kuchner, Marc J.

    2014-07-01

    New multi-roll coronagraphic images of the HD 181327 debris disk obtained using the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope reveal the debris ring in its entirety at high signal-to-noise ratio and unprecedented spatial resolution. We present and apply a new multi-roll image processing routine to identify and further remove quasi-static point-spread function-subtraction residuals and quantify systematic uncertainties. We also use a new iterative image deprojection technique to constrain the true disk geometry and aggressively remove any surface brightness asymmetries that can be explained without invoking dust density enhancements/deficits. The measured empirical scattering phase function for the disk is more forward scattering than previously thought and is not well-fit by a Henyey-Greenstein function. The empirical scattering phase function varies with stellocentric distance, consistent with the expected radiation pressure-induced size segregation exterior to the belt. Within the belt, the empirical scattering phase function contradicts unperturbed debris ring models, suggesting the presence of an unseen planet. The radial profile of the flux density is degenerate with a radially varying scattering phase function; therefore estimates of the ring's true width and edge slope may be highly uncertain. We detect large scale asymmetries in the disk, consistent with either the recent catastrophic disruption of a body with mass >1% the mass of Pluto, or disk warping due to strong interactions with the interstellar medium.

  7. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, C. Kristopher; Hauck, Cory D.

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  8. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE PAGES

    Garrett, C. Kristopher; Hauck, Cory D.

    2018-04-05

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
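
    The central trick, eliminating subdomain unknowns with fast local solves and applying a Krylov method only to the much smaller boundary (Schur-complement) system, can be sketched generically as below. The block matrices are random sparse stand-ins, not a Vlasov discretization, and the two-subdomain structure, sizes, and use of sparse LU for the local solves are assumptions for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, splu

# Generic two-subdomain Schur-complement sketch: eliminate interior unknowns
# with local sparse LU solves and run GMRES only on the small boundary system.
# Matrices are random sparse stand-ins, not a Vlasov discretization.
ni, nb = 200, 20                                   # interior and boundary sizes (assumed)

def spd_block(n, seed):
    A = sp.random(n, n, density=0.05, random_state=seed).tocsr()
    return (A @ A.T + n * sp.identity(n)).tocsc()  # well-conditioned SPD block

A1, A2, D = spd_block(ni, 1), spd_block(ni, 2), spd_block(nb, 3)
B1 = sp.random(ni, nb, density=0.1, random_state=4).tocsc()
B2 = sp.random(ni, nb, density=0.1, random_state=5).tocsc()
lu1, lu2 = splu(A1), splu(A2)                      # local subdomain solves

rng = np.random.default_rng(6)
f1, f2, fb = rng.standard_normal(ni), rng.standard_normal(ni), rng.standard_normal(nb)

def schur_matvec(xb):
    # S xb = (D - B1^T A1^{-1} B1 - B2^T A2^{-1} B2) xb
    return D @ xb - B1.T @ lu1.solve(B1 @ xb) - B2.T @ lu2.solve(B2 @ xb)

S = LinearOperator((nb, nb), matvec=schur_matvec)
rhs = fb - B1.T @ lu1.solve(f1) - B2.T @ lu2.solve(f2)
xb, info = gmres(S, rhs)                           # Krylov solve on the boundary only
x1, x2 = lu1.solve(f1 - B1 @ xb), lu2.solve(f2 - B2 @ xb)   # back-substitution
print("GMRES exit flag:", info, " boundary residual:",
      np.linalg.norm(schur_matvec(xb) - rhs))
```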

  9. A FFT-based formulation for discrete dislocation dynamics in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Bertin, N.; Capolungo, L.

    2018-02-01

    In this paper, an extension of the DDD-FFT approach presented in [1] is developed for heterogeneous elasticity. For such a purpose, an iterative spectral formulation in which convolutions are calculated in the Fourier space is developed to solve for the mechanical state associated with the discrete eigenstrain-based microstructural representation. With this, the heterogeneous DDD-FFT approach is capable of treating anisotropic and heterogeneous elasticity in a computationally efficient manner. In addition, a GPU implementation is presented to allow for further acceleration. As a first example, the approach is used to investigate the interaction between dislocations and second-phase particles, thereby demonstrating its ability to inherently incorporate image forces arising from elastic inhomogeneities.
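
    The basic mechanism, evaluating a periodic convolution as a pointwise product in Fourier space, can be shown in a few lines. The sketch below uses a generic one-dimensional field and Gaussian kernel purely for illustration; it is not the elastic Green-operator convolution of the DDD-FFT formulation.

```python
import numpy as np

# Periodic convolution via FFT: a pointwise product in Fourier space.  The
# kernel here is a generic Gaussian, not the elastic Green operator.
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
field = np.zeros(n)
field[n // 3] = 1.0                           # localized "source"
kernel = np.exp(-((x - 0.5) ** 2) / 0.005)    # generic smoothing kernel

conv_fft = np.real(np.fft.ifft(np.fft.fft(field) * np.fft.fft(kernel)))

# Direct periodic (circular) convolution for comparison.
conv_direct = np.array([np.sum(field * np.roll(kernel[::-1], i + 1)) for i in range(n)])
print("max difference FFT vs direct:", np.max(np.abs(conv_fft - conv_direct)))
```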

  10. Ambient Assisted Living spaces validation by services and devices simulation.

    PubMed

    Fernández-Llatas, Carlos; Mocholí, Juan Bautista; Sala, Pilar; Naranjo, Juan Carlos; Pileggi, Salvatore F; Guillén, Sergio; Traver, Vicente

    2011-01-01

    The design of Ambient Assisted Living (AAL) products is a very demanding challenge. AAL product creation is a complex iterative process which must satisfy exhaustive prerequisites for accessibility and usability. In this process the early detection of errors is crucial to creating cost-effective systems. Computer-assisted tools can provide vital help to usability designers in avoiding design errors. Specifically, computer simulation of products in AAL environments can be used in all design phases to support validation. In this paper, a computer simulation tool for supporting usability designers in the creation of innovative AAL products is presented. This application will benefit their work, saving time and improving the final system's functionality.

  11. Extended Quantum Field Theory, Index Theory, and the Parity Anomaly

    NASA Astrophysics Data System (ADS)

    Müller, Lukas; Szabo, Richard J.

    2018-06-01

    We use techniques from functorial quantum field theory to provide a geometric description of the parity anomaly in fermionic systems coupled to background gauge and gravitational fields on odd-dimensional spacetimes. We give an explicit construction of a geometric cobordism bicategory which incorporates general background fields in a stack, and together with the theory of symmetric monoidal bicategories we use it to provide the concrete forms of invertible extended quantum field theories which capture anomalies in both the path integral and Hamiltonian frameworks. Specialising this situation by using the extension of the Atiyah-Patodi-Singer index theorem to manifolds with corners due to Loya and Melrose, we obtain a new Hamiltonian perspective on the parity anomaly. We compute explicitly the 2-cocycle of the projective representation of the gauge symmetry on the quantum state space, which is defined in a parity-symmetric way by suitably augmenting the standard chiral fermionic Fock spaces with Lagrangian subspaces of zero modes of the Dirac Hamiltonian that naturally appear in the index theorem. We describe the significance of our constructions for the bulk-boundary correspondence in a large class of time-reversal invariant gauge-gravity symmetry-protected topological phases of quantum matter with gapless charged boundary fermions, including the standard topological insulator in 3 + 1 dimensions.

  12. Thermodynamic and kinetic characterization of a beta-hairpin peptide in solution: an extended phase space sampling by molecular dynamics simulations in explicit water.

    PubMed

    Daidone, Isabella; Amadei, Andrea; Di Nola, Alfredo

    2005-05-15

    The folding of the amyloidogenic H1 peptide MKHMAGAAAAGAVV, taken from the Syrian hamster prion protein, is explored in explicit aqueous solution at 300 K using long time scale all-atom molecular dynamics simulations for a total simulation time of 1.1 μs. The system, initially modeled as an alpha-helix, preferentially adopts a beta-hairpin structure, and several unfolding/refolding events are observed, yielding a very short average beta-hairpin folding time of approximately 200 ns. The long time scale accessed by our simulations and the reversibility of the folding allow us to properly explore the configurational space of the peptide in solution. The free energy profile, as a function of the principal components (essential eigenvectors) of motion describing the main conformational transitions, shows the characteristic features of a funneled landscape, with a downhill surface toward the beta-hairpin folded basin. However, the analysis of the peptide's thermodynamic stability reveals that the beta-hairpin in solution is rather unstable. These results are in good agreement with several experimental observations, according to which the isolated H1 peptide very rapidly adopts a beta-sheet structure in water, leading to amyloid fibril precipitates [Nguyen et al., Biochemistry 1995;34:4186-4192; Inouye et al., J Struct Biol 1998;122:247-255]. Moreover, in this article we also characterize the diffusion behavior in conformational space, investigating its relation to folding/unfolding conditions.

  13. Fractal nematic colloids

    NASA Astrophysics Data System (ADS)

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity, where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with a local environment that exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometres to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter.
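
    To make the notion of successive fractal iterations concrete, the sketch below applies the standard Koch generator to a line segment and counts segments per iteration; their exponential growth mirrors the exponential-law growth of defect numbers reported above. The construction is generic and not specific to the colloidal prisms of the study.

```python
import numpy as np

# Successive Koch iterations: the number of boundary segments (and hence
# surface features available to nucleate defects) grows as 4**iteration.
def koch_iterate(points):
    """Replace each segment of a polyline by the 4-segment Koch generator."""
    new_pts = []
    for p, q in zip(points[:-1], points[1:]):
        v = q - p
        a, b = p + v / 3.0, p + 2.0 * v / 3.0
        rot = np.array([[0.5, -np.sqrt(3) / 2.0], [np.sqrt(3) / 2.0, 0.5]])  # +60 deg
        tip = a + rot @ (v / 3.0)
        new_pts.extend([p, a, tip, b])
    new_pts.append(points[-1])
    return np.array(new_pts)

pts = np.array([[0.0, 0.0], [1.0, 0.0]])
for it in range(1, 5):
    pts = koch_iterate(pts)
    print("iteration %d: %d segments" % (it, len(pts) - 1))
```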

  14. Contact stresses in pin-loaded orthotropic plates

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Klang, E. C.

    1984-01-01

    The effects of pin elasticity, friction, and clearance on the stresses near the hole in a pin-loaded orthotropic plate are described. The problem is modeled as a contact elasticity problem using complex variable theory, the pin and the plate being two elastic bodies interacting through contact. This modeling is in contrast to previous works, which assumed that the pin is rigid or that it exerts a known cosinusoidal radial traction on the hole boundary; neither of these approaches explicitly involves a pin. A collocation procedure and iteration were used to obtain numerical results for a variety of plate and pin elastic properties and various levels of friction and clearance. Collocation was used to enforce the boundary conditions, and iteration was used to find the contact and no-slip regions on the boundary. Details of the numerical scheme are discussed.

  15. A gauged finite-element potential formulation for accurate inductive and galvanic modelling of 3-D electromagnetic problems

    NASA Astrophysics Data System (ADS)

    Ansari, S. M.; Farquharson, C. G.; MacLachlan, S. P.

    2017-07-01

    In this paper, a new finite-element solution to the potential formulation of the geophysical electromagnetic (EM) problem that explicitly implements the Coulomb gauge, and that accurately computes the potentials and hence inductive and galvanic components, is proposed. The modelling scheme is based on using unstructured tetrahedral meshes for domain subdivision, which enables both realistic Earth models of complex geometries to be considered and efficient spatially variable refinement of the mesh to be done. For the finite-element discretization edge and nodal elements are used for approximating the vector and scalar potentials respectively. The issue of non-unique, incorrect potentials from the numerical solution of the usual incomplete-gauged potential system is demonstrated for a benchmark model from the literature that uses an electric-type EM source, through investigating the interface continuity conditions for both the normal and tangential components of the potential vectors, and by showing inconsistent results obtained from iterative and direct linear equation solvers. By explicitly introducing the Coulomb gauge condition as an extra equation, and by augmenting the Helmholtz equation with the gradient of a Lagrange multiplier, an explicitly gauged system for the potential formulation is formed. The solution to the discretized form of this system is validated for the above-mentioned example and for another classic example that uses a magnetic EM source. In order to stabilize the iterative solution of the gauged system, a block diagonal pre-conditioning scheme that is based upon the Schur complement of the potential system is used. For all examples, both the iterative and direct solvers produce the same responses for the potentials, demonstrating the uniqueness of the numerical solution for the potentials and fixing the problems with the interface conditions between cells observed for the incomplete-gauged system. These solutions of the gauged system also produce the physically anticipated behaviours for the inductive and galvanic components of the electric field. For a realistic geophysical scenario, the gauged scheme is also used to synthesize the magnetic field response of a model of the Ovoid ore deposit at Voisey's Bay, Labrador, Canada. The results are in good agreement with the helicopter-borne EM data from the real survey, and the inductive and galvanic parts of the current density show expected behaviours.

  16. Potential energy surface and rate coefficients of protonated cyanogen (HNCCN+) induced by collision with helium (He) at low temperature

    NASA Astrophysics Data System (ADS)

    Bop, Cheikh T.; Faye, N. AB; Hammami, K.

    2018-05-01

    Nitriles have been identified in space, and accurately modeling their abundance requires calculations of collisional rate coefficients. These data are obtained by first computing potential energy surfaces (PES) and cross-sections using highly accurate quantum methods. In this paper, we report the first interaction potential of the HNCCN+-He collisional system along with downward rate coefficients among the 11 lowest rotational levels of HNCCN+. The PES was calculated using the explicitly correlated coupled cluster approach with single, double, and non-iterative triple excitations (CCSD(T)-F12) in conjunction with the augmented correlation-consistent polarized valence triple zeta (aug-cc-pVTZ) Gaussian basis set. It presents two local minima of ~283 and ~136 cm-1; the deeper one is located at R = 9 a0 towards the H end (He···HNCCN+). Using the so-computed PES, we calculated rotational cross-sections of HNCCN+ induced by collision with He for energies up to 500 cm-1 with the exact quantum mechanical close coupling (CC) method. Downward rate coefficients were then worked out by thermally averaging the cross-sections at low temperature (T ≤ 100 K). The discussion of propensity rules showed that odd Δj transitions are favored. The results obtained in this work may be crucially needed to accurately model the abundance of cyanogen and its protonated form in space.

  17. Efficient solution of the simplified P N equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
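
    The gap between simple and more sophisticated eigensolvers can be illustrated on a toy symmetric matrix: plain power iteration needs many matrix applications, while Rayleigh quotient iteration, warm-started from a few power steps, converges in a handful of shifted solves. The sketch below only illustrates that contrast; the matrix is random and unrelated to the multigroup SPN operator, and the warm-start and iteration counts are assumptions.

```python
import numpy as np

# Toy contrast between power iteration and Rayleigh quotient iteration (RQI)
# on a random symmetric matrix; this is not the multigroup SPN operator.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 50))
A = A + A.T

def power_iteration(A, x, iters):
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

def rayleigh_quotient_iteration(A, x, iters):
    # Converges cubically to the eigenpair nearest its starting Rayleigh quotient.
    for _ in range(iters):
        mu = x @ A @ x
        x = np.linalg.solve(A - mu * np.eye(len(x)), x)   # shifted solve
        x /= np.linalg.norm(x)
    return x @ A @ x

x0 = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
x_pi = power_iteration(A, x0, 200)                 # many cheap matrix applications
lam_pi = x_pi @ A @ x_pi
lam_rqi = rayleigh_quotient_iteration(A, power_iteration(A, x0, 10), 4)
evals = np.linalg.eigvalsh(A)
print("power iteration estimate (200 steps):", round(lam_pi, 6))
print("RQI estimate (10 + 4 steps):         ", round(lam_rqi, 6))
print("closest exact eigenvalue to RQI:     ",
      round(evals[np.argmin(np.abs(evals - lam_rqi))], 6))
```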

  18. A Locally Optimal Algorithm for Estimating a Generating Partition from an Observed Time Series and Its Application to Anomaly Detection.

    PubMed

    Ghalyan, Najah F; Miller, David J; Ray, Asok

    2018-06-12

    Estimation of a generating partition is critical for symbolization of measurements from discrete-time dynamical systems, where a sequence of symbols from a (finite-cardinality) alphabet may uniquely specify the underlying time series. Such symbolization is useful for computing measures (e.g., Kolmogorov-Sinai entropy) to identify or characterize the (possibly unknown) dynamical system. It is also useful for time series classification and anomaly detection. The seminal work of Hirata, Judd, and Kilminster (2004) derives a novel objective function, akin to a clustering objective, that measures the discrepancy between a set of reconstruction values and the points from the time series, and casts estimation of a generating partition as the minimization of this objective function. Unfortunately, their proposed algorithm is nonconvergent, with no guarantee of finding even locally optimal solutions with respect to their objective; the difficulty is a heuristic nearest-neighbor symbol assignment step. Alternatively, we develop a novel, locally optimal algorithm for their objective. We apply iterative nearest-neighbor symbol assignments with guaranteed discrepancy descent, by which joint, locally optimal symbolization of the entire time series is achieved. While most previous approaches frame generating partition estimation as a state-space partitioning problem, we recognize that minimizing the Hirata et al. (2004) objective function does not induce an explicit partitioning of the state space, but rather of the space consisting of the entire time series (effectively, clustering in a (countably) infinite-dimensional space). Our approach also amounts to a novel type of sliding block lossy source coding. Improvement, with respect to several measures, is demonstrated over popular methods for symbolizing chaotic maps. We also apply our approach to time-series anomaly detection, considering both chaotic maps and a failure application in a polycrystalline alloy material.
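
    A much-simplified analogue of the assignment/update alternation can be sketched as below: each sample of a chaotic time series is assigned to its nearest reconstruction value, and the reconstruction values are then re-estimated, in the spirit of a Lloyd-style descent. This is only an illustrative stand-in; it minimizes a plain squared-discrepancy clustering objective, not the Hirata et al. objective or the locally optimal algorithm of the paper, and the map, alphabet size, and iteration count are assumptions.

```python
import numpy as np

# Simplified analogue of iterative nearest-neighbor symbolization: alternate
# between assigning samples to their nearest reconstruction value and updating
# the reconstruction values (a Lloyd/k-means-style descent on a scalar series).
rng = np.random.default_rng(5)
x = np.empty(2000)
x[0] = 0.3
for i in range(1, len(x)):                      # logistic-map time series
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

k = 4                                           # alphabet size (assumed)
recon = rng.choice(x, size=k, replace=False)    # initial reconstruction values

for _ in range(50):
    symbols = np.argmin(np.abs(x[:, None] - recon[None, :]), axis=1)  # assignment
    for s in range(k):                                                # update step
        if np.any(symbols == s):
            recon[s] = x[symbols == s].mean()

discrepancy = np.mean((x - recon[symbols]) ** 2)
print("reconstruction values:", np.round(np.sort(recon), 3))
print("mean squared discrepancy:", round(float(discrepancy), 5))
```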

  19. Brain activation for spontaneous and explicit false belief tasks overlaps: new fMRI evidence on belief processing and violation of expectation.

    PubMed

    Bardi, Lara; Desmet, Charlotte; Nijhof, Annabel; Wiersema, Jan R; Brass, Marcel

    2017-03-01

    There is extensive discussion on whether spontaneous and explicit forms of ToM are based on the same cognitive/neural mechanisms or rather reflect qualitatively different processes. For the first time, we analyzed the BOLD signal for false belief processing by directly comparing spontaneous and explicit ToM task versions. In both versions, participants watched videos of a scene including an agent who acquires a true or false belief about the location of an object (belief formation phase). At the end of the movies (outcome phase), participants had to react to the presence of the object. During the belief formation phase, greater activity was found for false vs true belief trials in the right posterior parietal cortex. The ROI analysis of the right temporo-parietal junction (TPJ) confirmed this observation. Moreover, the anterior medial prefrontal cortex (aMPFC) was active during the outcome phase, being sensitive to violation of both the participant's and agent's expectations about the location of the object. Activity in the TPJ and aMPFC was not modulated by the spontaneous/explicit task. Overall, these data show that neural mechanisms for spontaneous and explicit ToM overlap. Interestingly, a dissociation between TPJ and aMPFC for belief tracking and outcome evaluation, respectively, was also found.

  20. Brain activation for spontaneous and explicit false belief tasks overlaps: new fMRI evidence on belief processing and violation of expectation

    PubMed Central

    Desmet, Charlotte; Nijhof, Annabel; Wiersema, Jan R.; Brass, Marcel

    2017-01-01

    There is extensive discussion on whether spontaneous and explicit forms of ToM are based on the same cognitive/neural mechanisms or rather reflect qualitatively different processes. For the first time, we analyzed the BOLD signal for false belief processing by directly comparing spontaneous and explicit ToM task versions. In both versions, participants watched videos of a scene including an agent who acquires a true or false belief about the location of an object (belief formation phase). At the end of the movies (outcome phase), participants had to react to the presence of the object. During the belief formation phase, greater activity was found for false vs true belief trials in the right posterior parietal cortex. The ROI analysis of the right temporo-parietal junction (TPJ) confirmed this observation. Moreover, the anterior medial prefrontal cortex (aMPFC) was active during the outcome phase, being sensitive to violation of both the participant’s and agent’s expectations about the location of the object. Activity in the TPJ and aMPFC was not modulated by the spontaneous/explicit task. Overall, these data show that neural mechanisms for spontaneous and explicit ToM overlap. Interestingly, a dissociation between TPJ and aMPFC for belief tracking and outcome evaluation, respectively, was also found. PMID:27683425

  1. BRST Quantization of the Proca Model Based on the BFT and the BFV Formalism

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.

    The BRST quantization of the Abelian Proca model is performed using the Batalin-Fradkin-Tyutin (BFT) and the Batalin-Fradkin-Vilkovisky (BFV) formalisms. First, the BFT Hamiltonian method is applied in order to systematically convert the second class constraint system of the model into an effectively first class one by introducing new fields. In finding the involutive Hamiltonian we adopt a new approach which is simpler than the usual one. We also show that in our model the Dirac brackets of the phase space variables in the original second class constraint system are exactly the same as the Poisson brackets of the corresponding modified fields in the extended phase space, due to the linear character of the constraints, when compared with the Dirac or Faddeev-Jackiw formalisms. Then, following the BFV formalism, we show that the resulting Lagrangian, which preserves BRST symmetry in the standard local gauge-fixing procedure, naturally includes the Stückelberg scalar related to the explicit breaking of gauge symmetry by the mass term. We also analyze a nonstandard nonlocal gauge-fixing procedure.

  2. The ZpiM algorithm: a method for interferometric image reconstruction in SAR/SAS.

    PubMed

    Dias, José M B; Leitao, José M N

    2002-01-01

    This paper presents an effective algorithm for absolute phase (not simply modulo-2pi) estimation from incomplete, noisy and modulo-2pi observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2pi-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modeled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) absolute phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques, and an iterated conditional modes (ICM) step (pi-step). Accordingly, the algorithm is termed ZpiM, where the letter M stands for maximization. An important contribution of the paper is the simultaneous implementation of phase unwrapping (inference of the 2pi multiples) and smoothing (denoising of the observations). This improves considerably the accuracy of the absolute phase estimates compared to methods in which the data are low-pass filtered prior to unwrapping. A set of experimental results, comparing the proposed algorithm with alternative methods, illustrates the effectiveness of our approach.

  3. Phase-space dependent critical gradient behavior of fast-ion transport due to Alfvén eigenmodes

    DOE PAGES

    Collins, C. S.; Heidbrink, W. W.; Podestà, M.; ...

    2017-06-09

    Experiments in the DIII-D tokamak show that many overlapping small-amplitude Alfvén eigenmodes (AEs) cause fast-ion transport to sharply increase above a critical threshold, leading to fast-ion density profile resilience and reduced fusion performance. The threshold is above the AE linear stability limit and varies between diagnostics that are sensitive to different parts of fast-ion phase-space. A comparison with theoretical analysis using the nova and orbit codes shows that, for the neutral particle diagnostic, the threshold corresponds to the onset of stochastic particle orbits due to wave-particle resonances with AEs in the measured region of phase space. We manipulated the bulk fast-ion distribution and instability behavior through variations in beam deposition geometry, and no significant differences in the onset threshold outside of measurement uncertainties were found, in agreement with the theoretical stochastic threshold analysis. Simulations using the 'kick model' produce beam ion density gradients consistent with the empirically measured radial critical gradient and highlight the importance of including the energy and pitch dependence of the fast-ion distribution function in critical gradient models. The addition of electron cyclotron heating changes the types of AEs present in the experiment, comparatively increasing the measured fast-ion density and radial gradient. Our studies provide the basis for understanding how to avoid AE transport that can undesirably redistribute current and cause fast-ion losses, and the measurements are being used to validate AE-induced transport models that use the critical gradient paradigm, giving greater confidence when applied to ITER.

  4. Phase-space dependent critical gradient behavior of fast-ion transport due to Alfvén eigenmodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, C. S.; Heidbrink, W. W.; Podestà, M.

    Experiments in the DIII-D tokamak show that many overlapping small-amplitude Alfvén eigenmodes (AEs) cause fast-ion transport to sharply increase above a critical threshold, leading to fast-ion density profile resilience and reduced fusion performance. The threshold is above the AE linear stability limit and varies between diagnostics that are sensitive to different parts of fast-ion phase space. A comparison with theoretical analysis using the NOVA and ORBIT codes shows that, for the neutral particle diagnostic, the threshold corresponds to the onset of stochastic particle orbits due to wave-particle resonances with AEs in the measured region of phase space. We manipulated the bulk fast-ion distribution and instability behavior through variations in beam deposition geometry, and no significant differences in the onset threshold outside of measurement uncertainties were found, in agreement with the theoretical stochastic threshold analysis. Simulations using the 'kick model' produce beam-ion density gradients consistent with the empirically measured radial critical gradient and highlight the importance of including the energy and pitch dependence of the fast-ion distribution function in critical gradient models. The addition of electron cyclotron heating changes the types of AEs present in the experiment, comparatively increasing the measured fast-ion density and radial gradient. Our studies provide the basis for understanding how to avoid AE transport that can undesirably redistribute current and cause fast-ion losses, and the measurements are being used to validate AE-induced transport models that use the critical gradient paradigm, giving greater confidence when applied to ITER.

  5. Quantum Monte Carlo studies of solvated systems

    NASA Astrophysics Data System (ADS)

    Schwarz, Kathleen; Letchworth Weaver, Kendra; Arias, T. A.; Hennig, Richard G.

    2011-03-01

    Solvation qualitatively alters the energetics of diverse processes from protein folding to reactions on catalytic surfaces. An explicit description of the solvent in quantum-mechanical calculations requires both a large number of electrons and exploration of a large number of configurations in the phase space of the solvent. These problems can be circumvented by including the effects of solvent through a rigorous classical density-functional description of the liquid environment, thereby yielding free energies and thermodynamic averages directly, while eliminating the need for explicit consideration of the solvent electrons. We have implemented and tested this approach within the CASINO Quantum Monte Carlo code. Our method is suitable for calculations in any basis within CASINO, including b-spline and plane wave trial wavefunctions, and is equally applicable to molecules, surfaces, and crystals. For our preliminary test calculations, we use a simplified description of the solvent in terms of an isodensity continuum dielectric solvation approach, though the method is fully compatible with more reliable descriptions of the solvent we shall employ in the future.

  6. Computer-Based Tools for Evaluating Graphical User Interfaces

    NASA Technical Reports Server (NTRS)

    Moore, Loretta A.

    1997-01-01

    The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; rather, it should be part of an iterative design cycle, with the output of evaluation being fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research, an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.

  7. Model-free iterative control of repetitive dynamics for high-speed scanning in atomic force microscopy.

    PubMed

    Li, Yang; Bechhoefer, John

    2009-01-01

    We introduce an algorithm for calculating, offline or in real time and with no explicit system characterization, the feedforward input required for repetitive motions of a system. The algorithm is based on the secant method of numerical analysis and gives accurate motion at frequencies limited only by the signal-to-noise ratio and the actuator power and range. We illustrate the secant-solver algorithm on a stage used for atomic force microscopy.
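
    A minimal sketch of a secant-style feedforward update of the kind described above, under the assumption that the repetitive input and measured output are compared per Fourier component over one period; the callback `apply_input`, the guard `eps`, and the starting guess are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def secant_feedforward(apply_input, reference, n_iter=10, eps=1e-12):
    """Model-free secant update of a periodic feedforward waveform.

    `apply_input` is a hypothetical callback: it drives the stage with one
    period of input samples and returns the measured output samples.
    """
    n = len(reference)
    R = np.fft.rfft(reference)
    U_prev = np.zeros_like(R)
    U = R.copy()                       # naive first guess: assume unit plant gain
    Y_prev = np.fft.rfft(apply_input(np.fft.irfft(U_prev, n)))
    for _ in range(n_iter):
        Y = np.fft.rfft(apply_input(np.fft.irfft(U, n)))
        dY = Y - Y_prev
        # secant step per frequency component; skip components with tiny change
        denom = np.where(np.abs(dY) > eps, dY, 1.0)
        step = np.where(np.abs(dY) > eps, (R - Y) * (U - U_prev) / denom, 0.0)
        U_prev, Y_prev = U, Y
        U = U + step
    return np.fft.irfft(U, n)          # feedforward waveform for the next period
```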

  8. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    NASA Astrophysics Data System (ADS)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation of a multivariate conditional probability distribution. Up to now, no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists; therefore, a wide range of approximate solutions, especially statistical linearization, has been developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively with a small number of iterations and without instabilities for specific parameter sets.
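
    As a point of comparison for closure-based moments, the sketch below integrates a Bouc-Wen-type hysteretic oscillator under non-zero-mean white noise with a plain Euler-Maruyama Monte Carlo scheme. The Bouc-Wen form of the hysteresis law and all parameter values are illustrative assumptions; this is a generic reference simulation, not the paper's Gaussian closure solution.

```python
import numpy as np

def bouc_wen_monte_carlo(T=10.0, dt=1e-3, n_paths=2000, mean_force=0.5,
                         noise_psd=0.1, A=1.0, beta=0.5, gamma=0.5, n=1.0,
                         m=1.0, c=0.1, k=1.0, alpha=0.5, seed=0):
    """Euler-Maruyama paths of a Bouc-Wen oscillator with non-zero-mean
    white noise; returns sample moments of displacement x and hysteresis z."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x = np.zeros(n_paths)
    v = np.zeros(n_paths)
    z = np.zeros(n_paths)
    for _ in range(steps):
        # discretized white noise with a constant mean component
        w = mean_force + rng.normal(0.0, np.sqrt(noise_psd / dt), n_paths)
        dz = A * v - beta * np.abs(v) * np.abs(z) ** (n - 1) * z \
             - gamma * v * np.abs(z) ** n
        dv = (w - c * v - k * (alpha * x + (1.0 - alpha) * z)) / m
        x, v, z = x + v * dt, v + dv * dt, z + dz * dt
    return x.mean(), x.var(), z.mean(), z.var()
```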

  9. 76 FR 56848 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Order Granting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-14

    ... Rule Change To Amend FINRA Rule 9251 to Explicitly Protect From Discovery Those Documents That Federal... explicitly protect from discovery those documents that federal law prohibits FINRA from disclosing. The... the discovery phase of a disciplinary proceeding. The rule also explicitly shields certain types of...

  10. Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.

    PubMed

    Pinton, Gianmarco F

    2017-03-01

    Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking. This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.

  11. Programmable Iterative Optical Image And Data Processing

    NASA Technical Reports Server (NTRS)

    Jackson, Deborah J.

    1995-01-01

    Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.

  12. Circuit model of the ITER-like antenna for JET and simulation of its control algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durodié, Frédéric, E-mail: frederic.durodie@rma.ac.be; Křivská, Alena; Dumortier, Pierre

    2015-12-10

    The ITER-like Antenna (ILA) for JET [1] is a 2 toroidal by 2 poloidal array of Resonant Double Loops (RDL) featuring in-vessel matching capacitors feeding RF current straps in a conjugate-T manner, a low-impedance quarter-wave impedance transformer, a service stub allowing hydraulic actuator and water cooling services to reach the aforementioned capacitors, and a 2nd-stage phase-shifter-stub matching circuit allowing the conjugate-T working impedance to be corrected/chosen. Toroidally adjacent RDLs are fed from a 3 dB hybrid splitter. It has been operated at 33, 42 and 47 MHz on plasma (2008-2009), while its presently estimated frequency range is from 29 to 49 MHz. At the time of the design (2001-2004), as well as of the experiments, the circuit models of the ILA were quite basic. The ILA front face and strap array TOPICA model was relatively crude and failed to correctly represent the poloidal central septum, the Faraday screen attachment, and the segmented antenna central septum limiter. The ILA matching capacitors, T-junction, Vacuum Transmission Line (VTL) and service stubs were represented by lumped circuit elements and simple transmission line models. The assessment of the ILA results, carried out to decide on the repair of the ILA, identified that achieving routine full-array operation requires a better understanding of the RF circuit, a feedback control algorithm for the 2nd-stage matching, and tighter calibrations of the RF measurements. The paper presents the progress in modelling of the ILA, comprising a more detailed TOPICA model of the front face for various plasma scrape-off layer profiles, a comprehensive HFSS model of the matching capacitors including internal bellows and electrode cylinders, 3D-EM models of the VTL including the vacuum ceramic window and service stub, a transmission line model of the 2nd-stage matching circuit, and main transmission lines including the 3 dB hybrid splitters. A time-evolving simulation using the improved circuit model allowed the design and simulation of the effectiveness of a feedback control algorithm for the 2nd-stage matching and demonstrates the simultaneous matching and control of the 4 RDLs: 11 feedback loops control 21 actuators (8 capacitors, 4 phase shifters and 4 stubs for the 2nd-stage matching, 4 main phase shifters controlling the toroidal phasing, and the electronically controlled phase between the RF sources feeding the top and bottom parts of the array, which determines the poloidal phasing of the array and is solved explicitly at each time step) on (simulated) ELMy plasmas.

  13. Preliminary consideration of CFETR ITER-like case diagnostic system.

    PubMed

    Li, G S; Yang, Y; Wang, Y M; Ming, T F; Han, X; Liu, S C; Wang, E H; Liu, Y K; Yang, W J; Li, G Q; Hu, Q S; Gao, X

    2016-11-01

    Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary consideration of the ITER-like case will be presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Both version B and version C are mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  14. Preliminary consideration of CFETR ITER-like case diagnostic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G. S.; Liu, Y. K.; Gao, X.

    2016-11-15

    Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary consideration of the ITER-like case will be presented. Based on the ITER diagnostic system, three versions of increased complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Both version B and version C are mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.

  15. Strong Convergence of Iteration Processes for Infinite Family of General Extended Mappings

    NASA Astrophysics Data System (ADS)

    Hussein Maibed, Zena

    2018-05-01

    In this paper, we introduce the concept of a general extended mapping, which is independent of nonexpansive mappings, and give an iteration process for families of quasi-nonexpansive and general extended mappings. The existence of common fixed points is also studied for these processes in Hilbert spaces.

  16. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.

    1997-01-01

    Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coding modulation solution: (1) Energy efficient with relatively simple SOVA decoding and small packet lengths, depending on BEP required; (2) Low number of decoding iterations required; and (3) Robustness in fading with channel interleaving.

  17. The key to success in elite athletes? Explicit and implicit motor learning in youth elite and non-elite soccer players.

    PubMed

    Verburgh, L; Scherder, E J A; van Lange, P A M; Oosterlaan, J

    2016-09-01

    In sports, fast and accurate execution of movements is required. It has been shown that implicitly learned movements might be less vulnerable than explicitly learned movements to the stressful and fast-changing circumstances that exist at the elite sports level. The present study provides insight into explicit and implicit motor learning in youth soccer players with different expertise levels. Twenty-seven youth elite soccer players and 25 non-elite soccer players (aged 10-12) performed a serial reaction time task (SRTT). In the SRTT, one of the sequences had to be learned explicitly, the other implicitly. No main effect of group was found for implicit and explicit learning on mean reaction time (MRT) and accuracy. However, for MRT, an interaction was found between learning condition, learning phase and group. Analyses showed no group effects for the explicit learning condition, but youth elite soccer players showed better learning in the implicit learning condition. In particular, during implicit motor learning youth elite soccer players showed faster MRTs in the early learning phase and reached asymptote performance in terms of MRT earlier. The present findings may be important for sports because children with superior implicit learning abilities in early learning phases may be able to learn more (durable) motor skills in a shorter time period as compared to other children.

  18. Endocavitary thermal therapy by MRI-guided phased-array contact ultrasound: experimental and numerical studies on the multi-input single-output PID temperature controller's convergence and stability.

    PubMed

    Salomir, Rares; Rata, Mihaela; Cadis, Daniela; Petrusca, Lorena; Auboiroux, Vincent; Cotton, François

    2009-10-01

    Endocavitary high intensity contact ultrasound (HICU) may offer interesting therapeutic potential for fighting localized cancer in esophageal or rectal wall. On-line MR guidance of the thermotherapy permits both excellent targeting of the pathological volume and accurate preoperatory monitoring of the temperature elevation. In this article, the authors address the issue of the automatic temperature control for endocavitary phased-array HICU and propose a tailor-made thermal model for this specific application. The convergence and stability of the feedback loop were investigated against tuning errors in the controller's parameters and against input noise, through ex vivo experimental studies and through numerical simulations in which nonlinear response of tissue was considered as expected in vivo. An MR-compatible, 64-element, cooled-tip, endorectal cylindrical phased-array applicator of contact ultrasound was integrated with fast MR thermometry to provide automatic feedback control of the temperature evolution. An appropriate phase law was applied per set of eight adjacent transducers to generate a quasiplanar wave, or a slightly convergent one (over the circular dimension). A 2D physical model, compatible with on-line numerical implementation, took into account (1) the ultrasound-mediated energy deposition, (2) the heat diffusion in tissue, and (3) the heat sink effect in the tissue adjacent to the tip-cooling balloon. This linear model was coupled to a PID compensation algorithm to obtain a multi-input single-output static-tuning temperature controller. Either the temperature at one static point in space (situated on the symmetry axis of the beam) or the maximum temperature in a user-defined ROI was tracked according to a predefined target curve. The convergence domain in the space of controller's parameters was experimentally explored ex vivo. The behavior of the static-tuning PID controller was numerically simulated based on a discrete-time iterative solution of the bioheat transfer equation in 3D and considering temperature-dependent ultrasound absorption and blood perfusion. The intrinsic accuracy of the implemented controller was approximately 1% in ex vivo trials when providing correct estimates for energy deposition and heat diffusivity. Moreover, the feedback loop demonstrated excellent convergence and stability over a wide range of the controller's parameters, deliberately set to erroneous values. In the extreme case of strong underestimation of the ultrasound energy deposition in tissue, the temperature tracking curve alone, at the initial stage of the MR-controlled HICU treatment, was not a sufficient indicator for a globally stable behavior of the feedback loop. Our simulations predicted that the controller would be able to compensate for tissue perfusion and for temperature-dependent ultrasound absorption, although these effects were not included in the controller's equation. The explicit pattern of acoustic field was not required as input information for the controller, avoiding time-consuming numerical operations. The study demonstrated the potential advantages of PID-based automatic temperature control adapted to phased-array MR-guided HICU therapy. Further studies will address the integration of this ultrasound device with a miniature RF coil for high resolution MRI and, subsequently, the experimental behavior of the controller in vivo.
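
    The feedback principle described above can be illustrated with a minimal multi-input single-output PID step in which the many MR-thermometry voxels of the region of interest are reduced to a single tracked maximum temperature. The gains, power limit, and function names below are hypothetical illustrations, not the controller reported in the study.

```python
import numpy as np

def pid_power_update(roi_temps, target, state, kp=2.0, ki=0.5, kd=0.2,
                     dt=1.0, p_max=20.0):
    """One step of a multi-input single-output PID temperature controller.

    Many temperature samples (multi-input) are reduced to the ROI maximum,
    and a single acoustic power command (single output) is returned.
    """
    temp = float(np.max(roi_temps))            # reduce the ROI to one scalar
    error = target - temp
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    power = kp * error + ki * state["integral"] + kd * derivative
    return float(np.clip(power, 0.0, p_max))   # saturate to the allowed power

# usage (hypothetical): state = {"integral": 0.0, "prev_error": 0.0}
# power = pid_power_update(measured_roi, target_curve_value, state)
```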

  19. Sequence analysis by iterated maps, a review.

    PubMed

    Almeida, Jonas S

    2014-05-01

    Among alignment-free methods, Iterated Maps (IMs) are at a particular extreme: they are also scale free (order free). The use of IMs for sequence analysis is also distinct from other alignment-free methodologies in being rooted in statistical mechanics instead of computational linguistics. Both of these roots go back over two decades to the use of fractal geometry in the characterization of phase-space representations. The time series analysis origin of the field is betrayed by the title of the manuscript that started this alignment-free subdomain in 1990, 'Chaos Game Representation'. The clash between the analysis of sequences as continuous series and the better established use of Markovian approaches to discrete series was almost immediate, with a defining critique published in the same journal 2 years later. The rest of that decade would go by before the scale-free nature of the IM space was uncovered. The ensuing decade saw this scalability generalized for non-genomic alphabets, as well as an interest in its use for graphic representation of biological sequences. Finally, in the past couple of years, in step with the emergence of Big Data and MapReduce as a new computational paradigm, there is a surprising third act in the IM story. Multiple reports have described gains in computational efficiency of multiple orders of magnitude over more conventional sequence analysis methodologies. The stage now appears to be set for a recasting of IMs with a central role in processing next-generation sequencing results.
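
    The iterated-map idea behind Chaos Game Representation can be sketched in a few lines: each successive symbol pulls the current point halfway toward the corner of the unit square assigned to that symbol. The corner assignment below follows the usual convention and the helper name is chosen purely for illustration.

```python
import numpy as np

# Unit-square corners for the four nucleotides (the usual CGR convention).
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def chaos_game_representation(sequence):
    """Chaos Game Representation of a nucleotide sequence.

    Each point is the midpoint between the previous point and the corner
    assigned to the next symbol: a contractive iterated map on the square.
    """
    point = np.array([0.5, 0.5])
    points = []
    for symbol in sequence.upper():
        corner = np.asarray(CORNERS[symbol])
        point = (point + corner) / 2.0      # the iterated (contractive) map
        points.append(point.copy())
    return np.array(points)

# usage: coords = chaos_game_representation("ACGTTGCA")
```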

  20. Variable aperture-based ptychographical iterative engine method

    NASA Astrophysics Data System (ADS)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the proposed technique can therefore potentially be applied to a variety of scientific research.
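
    For orientation, a generic PIE-style update for a single diffraction pattern is sketched below; in vaPIE the probe (the aperture illumination) changes in size between recordings rather than being translated. The plain-FFT propagator, the step size, and the regularizing constant are simplifying assumptions, not the authors' exact algorithm.

```python
import numpy as np

def pie_update(obj, probe, measured_intensity, alpha=1.0):
    """One generic PIE-style object update for a single diffraction pattern.

    `obj` and `probe` are complex 2-D arrays of the same shape;
    `measured_intensity` is the recorded far-field intensity.
    """
    exit_wave = probe * obj
    far_field = np.fft.fft2(exit_wave)
    # enforce the measured modulus while keeping the current phase estimate
    corrected = np.sqrt(measured_intensity) * np.exp(1j * np.angle(far_field))
    revised_exit = np.fft.ifft2(corrected)
    # ePIE-style object update driven by the exit-wave correction
    obj = obj + alpha * np.conj(probe) / (np.abs(probe).max() ** 2 + 1e-12) \
              * (revised_exit - exit_wave)
    return obj
```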

  1. The scale invariant power spectrum of the primordial curvature perturbations from the coupled scalar tachyon bounce cosmos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Changhong; Cheung, Yeuk-Kwan E., E-mail: chellifegood@gmail.com, E-mail: cheung@nju.edu.cn

    2014-07-01

    We investigate the spectrum of cosmological perturbations in a bounce cosmos modeled by a scalar field coupled to the string tachyon field (CSTB cosmos). By explicit computation of its primordial spectral index we show the power spectrum of curvature perturbations, generated during the tachyon matter dominated contraction phase, to be nearly scale invariant. We propose a unified parameter space for a systematic study of inflationary and bounce cosmologies. The CSTB cosmos is dual, in Wands' sense, to slow-roll inflation, as can be visualized with the aid of this parameter space. Guaranteed by the dynamical attractor behavior of the CSTB cosmos, the scale invariance of its power spectrum is free of the fine-tuning problem, in contrast to the slow-roll inflation model.

  2. Morphological similarities between DBM and a microeconomic model of sprawl

    NASA Astrophysics Data System (ADS)

    Caruso, Geoffrey; Vuidel, Gilles; Cavailhès, Jean; Frankhauser, Pierre; Peeters, Dominique; Thomas, Isabelle

    2011-03-01

    We present a model that simulates the growth of a metropolitan area on a 2D lattice. The model is dynamic and based on microeconomics. Households show preferences for nearby open spaces and neighbourhood density. They compete on the land market. They travel along a road network to access the CBD. A planner ensures the connectedness and maintenance of the road network. The spatial pattern of houses, green spaces and road network self-organises, emerging from agents' individualistic decisions. We perform several simulations and vary residential preferences. Our results show morphologies and transition phases that are similar to Dielectric Breakdown Models (DBM). Such similarities were observed earlier by other authors, but we show here that it can be deduced from the functioning of the land market and thus explicitly connected to urban economic theory.

  3. Fitness in time-dependent environments includes a geometric phase contribution

    PubMed Central

    Tănase-Nicola, Sorin; Nemenman, Ilya

    2012-01-01

    Phenotypic evolution implies sequential rise in frequency of new genomic sequences. The speed of the rise depends, in part, on the relative fitness (selection coefficient) of the mutant versus the ancestor. Using a simple population dynamics model, we show that the relative fitness in dynamical environments is not equal to the geometric average of the fitness over individual environments. Instead, it includes a term that explicitly depends on the sequence of the environments. For slowly varying environments, this term depends only on the oriented area enclosed by the trajectory taken by the system in the environment state space. It is closely related to the well-studied geometric phases in classical and quantum physical systems. We discuss possible biological implications of these observations, focusing on evolution of novel metabolic or stress-resistant functions. PMID:22112653

  4. Calculation of the angular radiance distribution for a coupled atmosphere and canopy

    NASA Technical Reports Server (NTRS)

    Liang, Shunlin; Strahler, Alan H.

    1993-01-01

    The radiative transfer equations for a coupled atmosphere and canopy are solved numerically by an improved Gauss-Seidel iteration algorithm. The radiation field is decomposed into three components: unscattered sunlight, single scattering, and multiple scattering radiance, for which the corresponding equations and boundary conditions are set up and their analytical or iterative solutions are explicitly derived. The classic Gauss-Seidel algorithm has been widely applied in atmospheric research. This is its first application to calculating the multiple scattering radiance of a coupled atmosphere and canopy. This algorithm enables us to obtain the internal radiation field as well as radiances at the boundaries. Any form of bidirectional reflectance distribution function (BRDF) as a boundary condition can be easily incorporated into the iteration procedure. The hotspot effect of the canopy is accommodated by means of the modification of the extinction coefficients of upward single scattering radiation and unscattered sunlight using the formulation of Nilson and Kuusk. To reduce the computation for the case of large optical thickness, an improved iteration formula is derived to speed convergence. The upwelling radiances have been evaluated for different atmospheric conditions, leaf area index (LAI), leaf angle distribution (LAD), leaf size, and so on. The formulation presented in this paper is also well suited to analyzing the relative magnitude of multiple scattering radiance and single scattering radiance in both the visible and near infrared regions.
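
    The Gauss-Seidel idea referred to above is shown below for a plain linear system as a schematic analogue; the radiative-transfer version sweeps the multiple-scattering source term over finite elements ordered along the characteristics rather than a dense matrix.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Classic Gauss-Seidel iteration for A x = b.

    Each unknown is updated in place using the newest available values,
    and the sweep is repeated until the largest update falls below `tol`.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i] and old entries x[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# usage: x = gauss_seidel(np.array([[4.0, 1.0], [2.0, 3.0]]), np.array([1.0, 2.0]))
```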

  5. Construction of non-Abelian gauge theories on noncommutative spaces

    NASA Astrophysics Data System (ADS)

    Jurčo, B.; Möller, L.; Schraml, S.; Schupp, P.; Wess, J.

    We present a formalism to explicitly construct non-Abelian gauge theories on noncommutative spaces (induced via a star product with a constant Poisson tensor) from a consistency relation. This results in an expansion of the gauge parameter, the noncommutative gauge potential and fields in the fundamental representation, in powers of a parameter of the noncommutativity. This allows the explicit construction of actions for these gauge theories.

  6. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Zuo, Chao; Idir, Mourad

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed by putting an arbitrarily-shaped aperture into the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illuminations and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulation with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verifies the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurement. Compared to existing methods, the proposed method is applicable for any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which enables the technique of TIE with a hard aperture to become a more flexible phase retrieval tool in practical measurements.

  7. Phase retrieval by coherent modulation imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R.

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.

  8. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE PAGES

    Huang, Lei; Zuo, Chao; Idir, Mourad; ...

    2015-04-21

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed by putting an arbitrarily-shaped aperture into the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illuminations and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulation with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verifies the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurement. Compared to existing methods, the proposed method is applicable for any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which enables the technique of TIE with a hard aperture to become a more flexible phase retrieval tool in practical measurements.

  9. Phase retrieval by coherent modulation imaging

    DOE PAGES

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R.; ...

    2016-11-18

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.

  10. Phase retrieval by coherent modulation imaging.

    PubMed

    Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K

    2016-11-18

    Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.

  11. Usability-driven evolution of a space instrument

    NASA Astrophysics Data System (ADS)

    McCalden, Alec

    2012-09-01

    The use of resources in the cradle-to-grave timeline of a space instrument might be significantly improved by considering the concept of usability from the start of the mission. The methodology proposed here includes giving early priority in a programme to the iterative development of a simulator that models instrument operation, and allowing this to evolve ahead of the actual instrument specification and fabrication. The advantages include reduction of risk in software development by shifting much of it to earlier in a programme than is typical, plus a test programme that uses and thereby proves the same support systems that may be used for flight. A new development flow for an instrument is suggested, showing how the system engineering phases used by the space agencies could be reworked in line with these ideas. This methodology is also likely to contribute to a better understanding between the various disciplines involved in the creation of a new instrument. The result should better capture the science needs, implement them more accurately with less wasted effort, and more fully allow the best ideas from all team members to be considered.

  12. Experiments on sparsity assisted phase retrieval of phase objects

    NASA Astrophysics Data System (ADS)

    Gaur, Charu; Lochab, Priyanka; Khare, Kedar

    2017-05-01

    Iterative phase retrieval algorithms such as the Gerchberg-Saxton method and the Fienup hybrid input-output method are known to suffer from the twin image stagnation problem, particularly when the solution to be recovered is complex valued and has centrosymmetric support. Recently we showed that the twin image stagnation problem can be addressed using image sparsity ideas (Gaur et al 2015 J. Opt. Soc. Am. A 32 1922). In this work we test this sparsity assisted phase retrieval method with experimental single shot Fourier transform intensity data frames corresponding to phase objects displayed on a spatial light modulator. The standard iterative phase retrieval algorithms are combined with an image sparsity based penalty in an adaptive manner. Illustrations for both binary and continuous phase objects are provided. It is observed that image sparsity constraint has an important role to play in obtaining meaningful phase recovery without encountering the well-known stagnation problems. The results are valuable for enabling single shot coherent diffraction imaging of phase objects for applications involving illumination wavelengths over a wide range of electromagnetic spectrum.

  13. 76 FR 44645 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-26

    ... Change To Amend FINRA Rule 9251 to Explicitly Protect From Discovery Those Documents That Federal Law... to amend FINRA Rule 9251 to explicitly protect from discovery those documents that federal law... produce to respondents during the discovery phase of a disciplinary proceeding. The rule also explicitly...

  14. Multi-dimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations in curvilinear geometry

    NASA Astrophysics Data System (ADS)

    Chen, Guangye; Chacon, Luis

    2015-11-01

    We discuss a new, conservative, fully implicit 2D3V Vlasov-Darwin particle-in-cell algorithm in curvilinear geometry for non-radiative, electromagnetic kinetic plasma simulations. Unlike standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. Here, we extend these algorithms to curvilinear geometry. The algorithm retains its exact conservation properties in curvilinear grids. The nonlinear iteration is effectively accelerated with a fluid preconditioner for weakly to modestly magnetized plasmas, which allows the efficient use of large timesteps, O(√(mi/me) c/vTe) larger than the explicit CFL limit. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D (slow shock) and 2D (island coalescence).

  15. Explicit Reflective Nature of Science Instruction: Evolution, Intelligent Design, and Umbrellaology

    NASA Astrophysics Data System (ADS)

    Scharmann, Lawrence C.; Smith, Mike U.; James, Mark C.; Jensen, Murray

    2005-02-01

    The investigators sought to design an instructional unit to enhance an understanding of the nature of science (NOS) by taking into account both instructional best practices and suggestions made by the noted science philosopher Thomas Kuhn. Preservice secondary science teachers enrolled in a course, Laboratory Techniques in the Teaching of Science, served as participants in action research. Sources of data used to inform instructional decisions included students' written reaction papers to the assigned readings, transcribed verbal comments made during class discussions and other in-class activities, and final reflection essays. Three iterative implementations of the instructional unit were attempted. The objectives of the study were essentially met: the instructional unit was able to provoke preservice teachers into wrestling with many substantive issues associated with the NOS. Implications concerning the design of explicit reflective NOS instruction are included.

  16. Promise-based management: the essence of execution.

    PubMed

    Sull, Donald N; Spinosa, Charles

    2007-04-01

    Critical initiatives stall for a variety of reasons: employee disengagement, a lack of coordination between functions, complex organizational structures that obscure accountability, and so on. To overcome such obstacles, managers must fundamentally rethink how work gets done. Most of the challenges stem from broken or poorly crafted commitments. That's because every company is, at its heart, a dynamic network of promises made between employees and colleagues, customers, outsourcing partners, or other stakeholders. Executives can overcome many problems in the short term and foster productive, reliable workforces for the long term by practicing what the authors call "promise-based management," which involves cultivating and coordinating commitments in a systematic way. Good promises share five qualities: they are public, active, voluntary, explicit, and mission based. To develop and execute an effective promise, the "provider" and the "customer" in the deal should go through three phases of conversation. The first, achieving a meeting of minds, entails exploring the fundamental questions of coordinated effort: What do you mean? Do you understand what I mean? What should I do? What will you do? Who else should we talk to? In the next phase, making it happen, the provider executes on the promise. In the final phase, closing the loop, the customer publicly declares that the provider has either delivered the goods or failed to do so. Leaders must weave and manage their webs of promises with great care, encouraging iterative conversation and making sure commitments are fulfilled reliably. If they do, they can enhance coordination and cooperation among colleagues, build the organizational agility required to seize new business opportunities, and tap employees' entrepreneurial energies.

  17. Exploring connections between statistical mechanics and Green's functions for realistic systems: Temperature dependent electronic entropy and internal energy from a self-consistent second-order Green's function

    NASA Astrophysics Data System (ADS)

    Welden, Alicia Rae; Rusakov, Alexander A.; Zgid, Dominika

    2016-11-01

    Including finite-temperature effects from the electronic degrees of freedom in electronic structure calculations of semiconductors and metals is desired; however, in practice it remains exceedingly difficult when using zero-temperature methods, since these methods require an explicit evaluation of multiple excited states in order to account for any finite-temperature effects. Using a Matsubara Green's function formalism remains a viable alternative, since in this formalism it is easier to include thermal effects and to connect the dynamic quantities such as the self-energy with static thermodynamic quantities such as the Helmholtz energy, entropy, and internal energy. However, despite the promising properties of this formalism, little is known about the multiple solutions of the non-linear equations present in the self-consistent Matsubara formalism and only a few cases involving a full Coulomb Hamiltonian were investigated in the past. Here, to shed some light onto the iterative nature of the Green's function solutions, we self-consistently evaluate the thermodynamic quantities for a one-dimensional (1D) hydrogen solid at various interatomic separations and temperatures using the self-energy approximated to second-order (GF2). At many points in the phase diagram of this system, multiple phases such as a metal and an insulator exist, and we are able to determine the most stable phase from the analysis of Helmholtz energies. Additionally, we show the evolution of the spectrum of 1D boron nitride to demonstrate that GF2 is capable of qualitatively describing the temperature effects influencing the size of the band gap.

  18. The application of contraction theory to an iterative formulation of electromagnetic scattering

    NASA Technical Reports Server (NTRS)

    Brand, J. C.; Kauffman, J. F.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.
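
    A hedged illustration of enforcing contraction in a fixed-point iteration is sketched below: a relaxed update whose relaxation factor is damped whenever successive steps appear to grow. This is a generic stand-in for the idea, not the contraction corrector method itself.

```python
import numpy as np

def relaxed_fixed_point(T, x0, tol=1e-10, max_iter=1000, relax=0.5):
    """Relaxed fixed-point iteration x_{k+1} = (1 - w) x_k + w T(x_k).

    The empirical contraction ratio of successive updates is monitored,
    and the relaxation factor w is halved whenever the map looks expansive,
    pushing the iteration back toward a contractive regime.
    """
    x = np.asarray(x0, dtype=float)
    prev_step = None
    for _ in range(max_iter):
        x_new = (1.0 - relax) * x + relax * np.asarray(T(x), dtype=float)
        step = np.linalg.norm(x_new - x)
        if prev_step is not None and prev_step > 0 and step / prev_step >= 1.0:
            relax *= 0.5                    # apparent expansion: damp the update
        if step < tol:
            return x_new
        prev_step, x = step, x_new
    return x

# usage: solves x = cos(x); relaxed_fixed_point(np.cos, 1.0) is close to 0.739
```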

  19. The Iterative Design Process in Research and Development: A Work Experience Paper

    NASA Technical Reports Server (NTRS)

    Sullivan, George F. III

    2013-01-01

    The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.

  20. A theory of phase singularities for image representation and its applications to object tracking and image matching.

    PubMed

    Qiao, Yu; Wang, Wei; Minematsu, Nobuaki; Liu, Jianzhuang; Takeda, Mitsuo; Tang, Xiaoou

    2009-10-01

    This paper studies phase singularities (PSs) for image representation. We show that PSs calculated with Laguerre-Gauss filters contain important information and provide a useful tool for image analysis. PSs are invariant to image translation and rotation. We introduce several invariant features to characterize the core structures around PSs and analyze the stability of PSs to noise addition and scale change. We also study the characteristics of PSs in a scale space, which lead to a method to select key scales along phase singularity curves. We demonstrate two applications of PSs: object tracking and image matching. In object tracking, we use the iterative closest point algorithm to determine the correspondences of PSs between two adjacent frames. The use of PSs allows us to precisely determine the motions of tracked objects. In image matching, we combine PSs and scale-invariant feature transform (SIFT) descriptor to deal with the variations between two images and examine the proposed method on a benchmark database. The results indicate that our method can find more correct matching pairs with higher repeatability rates than some well-known methods.
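
    A minimal 2-D iterative-closest-point sketch, of the kind used to match point sets such as phase-singularity locations in two adjacent frames, is given below. The nearest-neighbour matching and the Kabsch rigid fit are standard ingredients; the routine is illustrative rather than the paper's tracker, which additionally uses invariant descriptors around each singularity.

```python
import numpy as np

def icp_2d(source, target, n_iter=20):
    """Align a 2-D point set `source` to `target` with plain ICP.

    Each pass finds nearest-neighbour correspondences and then the best
    rigid (rotation + translation) transform in the least-squares sense.
    """
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    for _ in range(n_iter):
        # nearest target point for every source point
        d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
        matched = tgt[np.argmin(d, axis=1)]
        # best rigid transform via the Kabsch / Procrustes solution
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # avoid a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src
```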

  1. Image encryption based on fractal-structured phase mask in fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Zhao, Meng-Dan; Gao, Xu-Zhen; Pan, Yue; Zhang, Guan-Lin; Tu, Chenghou; Li, Yongnan; Wang, Hui-Tian

    2018-04-01

    We present an optical encryption approach based on the combination of a fractal Fresnel lens (FFL) and the fractional Fourier transform (FrFT). Our encryption approach is in fact a four-fold encryption scheme, including the random phase encoding produced by the Gerchberg-Saxton algorithm, a FFL, and two FrFTs. A FFL is composed of a Sierpinski carpet fractal plate and a Fresnel zone plate. In our encryption approach, the security is enhanced due to the more expandable key space, and the use of the FFL overcomes the alignment problem of the optical axis in the optical system. Only with perfectly matched parameters of the FFL and the FrFT can the plaintext be recovered well. We also present an image encryption algorithm in which two original images can be recovered from the ciphertext by the FrFT with two different phase distribution keys, obtained by performing 100 iterations between each plaintext and the ciphertext, respectively. We test the sensitivity of our approach to various parameters such as the wavelength of light, the focal length of the FFL, and the fractional orders of the FrFT. Our approach can resist various attacks.
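
    The random phase encoding step mentioned above can be illustrated with a basic Gerchberg-Saxton loop that alternates between two amplitude constraints. The plain-FFT propagation, the random start, and the iteration count below are simplifying assumptions for illustration, not the exact encryption pipeline of the paper.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=0):
    """Basic Gerchberg-Saxton iteration between two amplitude constraints.

    Returns the retrieved phase in the source plane such that propagating
    source_amp * exp(i*phase) approximately reproduces target_amp in the
    Fourier (target) plane.
    """
    rng = np.random.default_rng(seed)
    phase = np.exp(1j * 2 * np.pi * rng.random(source_amp.shape))
    field = source_amp * phase
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # target-plane amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))   # source-plane amplitude
    return np.angle(field)
```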

  2. Coupling fluid-structure interaction with phase-field fracture

    NASA Astrophysics Data System (ADS)

    Wick, Thomas

    2016-12-01

    In this work, a concept for coupling fluid-structure interaction with brittle fracture in elasticity is proposed. The fluid-structure interaction problem is modeled in terms of the arbitrary Lagrangian-Eulerian technique and couples the isothermal, incompressible Navier-Stokes equations with nonlinear elastodynamics using the Saint-Venant Kirchhoff solid model. The brittle fracture model is based on a phase-field approach for cracks in elasticity and pressurized elastic solids. In order to derive a common framework, the phase-field approach is re-formulated in Lagrangian coordinates to combine it with fluid-structure interaction. A crack irreversibility condition, that is mathematically characterized as an inequality constraint in time, is enforced with the help of an augmented Lagrangian iteration. The resulting problem is highly nonlinear and solved with a modified Newton method (e.g., error-oriented) that specifically allows for a temporary increase of the residuals. The proposed framework is substantiated with several numerical tests. In these examples, computational stability in space and time is shown for several goal functionals, which demonstrates reliability of numerical modeling and algorithmic techniques. But also current limitations such as the necessity of using solid damping are addressed.

  3. Scientific and technical challenges on the road towards fusion electricity

    NASA Astrophysics Data System (ADS)

    Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.

    2017-10-01

    The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of the DT operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.

  4. The influence of vertical motor responses on explicit and incidental processing of power words.

    PubMed

    Jiang, Tianjiao; Sun, Lining; Zhu, Lei

    2015-07-01

    There is increasing evidence demonstrating that power judgment is affected by vertical information. Such interaction between vertical space and power (i.e., response facilitation under space-power congruent conditions) is generally elicited in paradigms that require participants to explicitly evaluate the power of the presented words. The current research explored the possibility that explicit evaluative processing is not a prerequisite for the emergence of this effect. Here we compared the influence of vertical information on a standard explicit power evaluation task with its influence on a task that linked power with stimuli in a more incidental manner, requiring participants to report whether the words represented people or animals, or the font of the words. The results revealed that although the effect is more modest, the interaction between responses and power is also evident in an incidental task. Furthermore, we also found that explicit semantic processing is a prerequisite to ensure such an effect.

  5. Conformational space comparison of GnRH and lGnRH-III using molecular dynamics, cluster analysis and Monte Carlo thermodynamic integration.

    PubMed

    Watts, C R; Mezei, M; Murphy, R F; Lovas, S

    2001-04-01

    The conformational space available to GnRH and lGnRH-III was compared using 5.2 ns constant-temperature and constant-pressure molecular dynamics simulations with explicit TIP3P solvation and the AMBER v. 5.0 force field. Cluster analysis of both trajectories resulted in two groups of conformations. Results of free energy calculations, in agreement with previous experimental data, indicate that a conformation with a turn from residues 5 through 8 is preferred for GnRH in an aqueous environment. By contrast, a conformation with a helix from residues 2 through 7 and a bend from residues 6 through 10 is preferred for lGnRH-III in an aqueous environment. The side chains of His2 and Trp3 in lGnRH-III occupy different regions of phase space and participate in weakly polar interactions different from those in GnRH. The unique conformational properties of lGnRH-III may account for its specific anticancer activity.

  6. An iterative bidirectional heuristic placement algorithm for solving the two-dimensional knapsack packing problem

    NASA Astrophysics Data System (ADS)

    Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae

    2018-02-01

    This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.

  7. Penalized maximum likelihood reconstruction for x-ray differential phase-contrast tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brendel, Bernhard, E-mail: bernhard.brendel@philips.com; Teuffenbach, Maximilian von; Noël, Peter B.

    2016-01-15

    Purpose: The purpose of this work is to propose a cost function with regularization to iteratively reconstruct attenuation, phase, and scatter images simultaneously from differential phase contrast (DPC) acquisitions, without the need of phase retrieval, and examine its properties. Furthermore this reconstruction method is applied to an acquisition pattern that is suitable for a DPC tomographic system with continuously rotating gantry (sliding window acquisition), overcoming the severe smearing in noniterative reconstruction. Methods: We derive a penalized maximum likelihood reconstruction algorithm to directly reconstruct attenuation, phase, and scatter image from the measured detector values of a DPC acquisition. The proposed penalty comprises, for each of the three images, an independent smoothing prior. Image quality of the proposed reconstruction is compared to images generated with FBP and iterative reconstruction after phase retrieval. Furthermore, the influence between the priors is analyzed. Finally, the proposed reconstruction algorithm is applied to experimental sliding window data acquired at a synchrotron and results are compared to reconstructions based on phase retrieval. Results: The results show that the proposed algorithm significantly increases image quality in comparison to reconstructions based on phase retrieval. No significant mutual influence between the proposed independent priors could be observed. Further it could be illustrated that the iterative reconstruction of a sliding window acquisition results in images with substantially reduced smearing artifacts. Conclusions: Although the proposed cost function is inherently nonconvex, it can be used to reconstruct images with less aliasing artifacts and less streak artifacts than reconstruction methods based on phase retrieval. Furthermore, the proposed method can be used to reconstruct images of sliding window acquisitions with negligible smearing artifacts.

  8. Isomonodromy for the Degenerate Fifth Painlevé Equation

    NASA Astrophysics Data System (ADS)

    Acosta-Humánez, Primitivo B.; van der Put, Marius; Top, Jaap

    2017-05-01

    This is a sequel to papers by the last two authors making the Riemann-Hilbert correspondence and isomonodromy explicit. For the degenerate fifth Painlevé equation, the moduli spaces for connections and for monodromy are explicitly computed. It is proven that the extended Riemann-Hilbert morphism is an isomorphism. As a consequence these equations have the Painlevé property and the Okamoto-Painlevé space is identified with a moduli space of connections. Using MAPLE computations, one obtains formulas for the degenerate fifth Painlevé equation and for the Bäcklund transformations.

  9. 3D modelling of non-linear visco-elasto-plastic crustal and lithospheric processes using LaMEM

    NASA Astrophysics Data System (ADS)

    Popov, Anton; Kaus, Boris

    2016-04-01

    LaMEM (Lithosphere and Mantle Evolution Model) is a three-dimensional thermo-mechanical numerical code to simulate crustal and lithospheric deformation. The code is based on a staggered finite difference (FDSTAG) discretization in space, which is a stable and very efficient technique to solve the (nearly) incompressible Stokes equations that does not suffer from spurious pressure modes or artificial compressibility (a typical feature of low-order finite element techniques). Higher order finite element methods are more accurate than FDSTAG methods under idealized test cases where the jump in viscosity is exactly aligned with the boundaries of the elements. Yet, geodynamically more realistic cases involve evolving subduction zones, nonlinear rheologies or localized plastic shear bands. In these cases, the viscosity pattern evolves spontaneously during a simulation or even during nonlinear iterations, the advantages of higher order methods disappear, and they all converge with approximately first-order accuracy, similar to that of FDSTAG [1]. Yet, since FDSTAG methods have considerably fewer degrees of freedom than quadratic finite element methods, they require about an order of magnitude less memory for the same number of nodes in 3D, which also implies that every matrix-vector multiplication is significantly faster. LaMEM is built on top of the PETSc library and uses the particle-in-cell technique to track material properties and history variables, which makes it straightforward to incorporate effects like phase changes or chemistry. An internal free surface is present, together with (simple) erosion and sedimentation processes, and a number of methods are available to import complex geometries into the code (e.g., http://geomio.bitbucket.org). Customized Galerkin coupled geometric multigrid preconditioners are implemented, which resulted in good parallel scalability of the code (we have tested LaMEM on 458'752 cores [2]). Yet, the drawback of using FDSTAG discretizations is that the Jacobian, which is a key component for fast and robust convergence of Newton-Raphson nonlinear iterative solvers, is more difficult to implement than in FE codes and actually results in a larger stencil. Rather than forming it explicitly, we therefore developed a matrix-free analytical Jacobian implementation for the coupled sets of momentum, mass, and energy conservation equations, combined with visco-elasto-plastic rheologies. Tests show that for simple nonlinear viscous rheologies there is little advantage of the MF approach over the standard MFFD PETSc approach, but that iterations converge slightly faster if plasticity is present. Results also show that the Newton solver usually converges in a quadratic manner even for pressure-dependent Drucker-Prager rheologies and without harmonic viscosity averaging of plastic and viscous rheologies. Yet, if the timestep is too large (and the model becomes effectively viscoplastic), or if the shear band pattern changes dramatically, stagnation of iterations might occur. This can be remedied with an appropriate regularization, which we discuss. LaMEM is available as open source software. [1] Thielmann, M., May, D.A., and Kaus, B., 2014, Discretization Errors in the Hybrid Finite Element Particle-in-cell Method: Pure and Applied Geophysics, doi: 10.1007/s00024-014-0808-9. [2] Kaus B.J.P., Popov A.A., Baumann T.S., Püsök A.E., Bauville A., Fernandez N., Collignon M. (2015) Forward and inverse modelling of lithospheric deformation on geological timescales. NIC Symposium 2016 - Proceedings. NIC Series. Vol. 48.
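    As an illustration of the matrix-free idea mentioned above (the standard MFFD approach against which the analytical Jacobian is compared), the sketch below approximates the Jacobian-vector product by a finite difference of the residual, so the Jacobian matrix is never assembled. The toy residual, tolerances, and solver settings are assumptions for illustration only and bear no relation to the LaMEM Stokes solver.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        # Toy nonlinear residual F(u) = 0, standing in for the discretized
        # momentum/mass/energy equations.
        return np.array([u[0]**2 + u[1] - 3.0,
                         u[0] + u[1]**3 - 5.0])

    def mffd_jacobian(u, F, eps=1e-7):
        # Matrix-free finite-difference Jacobian: J v ~ (F(u + eps*v) - F(u)) / eps.
        F0 = F(u)
        matvec = lambda v: (F(u + eps * v) - F0) / eps
        return LinearOperator((u.size, u.size), matvec=matvec)

    u = np.array([1.0, 1.0])
    for _ in range(20):                            # Newton-Krylov iteration
        F0 = residual(u)
        if np.linalg.norm(F0) < 1e-10:
            break
        J = mffd_jacobian(u, residual)
        du, _ = gmres(J, -F0, atol=1e-12)          # Krylov solve, Jacobian never formed
        u = u + du
    print(u)                                       # root of the toy residual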

  10. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.

  11. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    NASA Astrophysics Data System (ADS)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable operation of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control that ensures the steady-state tracking error converges rapidly. The application to an injection molding process displays the effectiveness and superiority of the proposed strategy.

  12. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
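    The repair loop described above can be sketched in a few lines. The example below is a generic illustration of constraint-based iterative repair, not the authors' Space Shuttle ground-processing scheduler; the toy schedule, constraint, and repair move are hypothetical.

    import random

    def iterative_repair(schedule, constraints, repair_move, max_iters=1000):
        # Start from a flawed schedule and repeatedly repair one violated
        # constraint until a conflict-free schedule is found (or we give up).
        for _ in range(max_iters):
            violations = [c for c in constraints if not c(schedule)]
            if not violations:
                return schedule                      # conflict-free schedule
            schedule = repair_move(schedule, random.choice(violations))
        return schedule                              # best effort within the bound

    # Hypothetical toy problem: two tasks must not start in the same slot.
    tasks = {"A": 0, "B": 0}                         # task -> start slot

    def no_overlap(s):
        return s["A"] != s["B"]

    def shift_later(s, _violated_constraint):
        s = dict(s)
        s["B"] += 1                                  # minimal perturbation: delay one task
        return s

    print(iterative_repair(tasks, [no_overlap], shift_later))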

  13. Adaptive management: Chapter 1

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.; Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  14. Adaptive management

    USGS Publications Warehouse

    Allen, Craig R.; Garmestani, Ahjond S.

    2015-01-01

    Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.

  15. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.

  16. Fractal nematic colloids

    PubMed Central

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometres to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter. PMID:28117325

  17. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics, requiring less stringent stability bounds than the explicit schemes. Each iteration in an implicit scheme involves global data dependencies in the form of second and higher order recurrences. Efficient parallel implementations of such iterative methods are considerably more difficult and non-intuitive. The parallelization of the implicit schemes that are used for solving the Euler and the thin layer Navier-Stokes equations and that require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices is discussed. Three-dimensional cases are emphasized and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described. An analysis of the communication and the computation aspects of these methods is presented. The effect of the boundary conditions on the parallel schemes is also discussed.
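    The recurrences referred to above are of the kind that appear in a tridiagonal (Thomas) solve, where each unknown depends on the previously eliminated one. The sketch below shows the scalar version for a simple implicit 1D diffusion step; it only illustrates the serial data dependency and is not the block tri-/penta-diagonal solvers or the partitioning schemes discussed in the paper.

    import numpy as np

    def thomas_solve(a, b, c, d):
        # Tridiagonal solve with sub-diagonal a, diagonal b, super-diagonal c,
        # right-hand side d. The forward and backward sweeps are the serial
        # recurrences that make implicit schemes hard to parallelize.
        n = d.size
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                        # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):               # backward substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Illustrative implicit 1D diffusion step: (I - r*L) u_new = u_old.
    n, r = 100, 0.5
    a = np.full(n, -r); a[0] = 0.0
    b = np.full(n, 1.0 + 2.0 * r)
    c = np.full(n, -r); c[-1] = 0.0
    u_old = np.sin(np.linspace(0.0, np.pi, n))
    u_new = thomas_solve(a, b, c, u_old)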

  18. Application of the Feynman-tree theorem together with BCFW recursion relations

    NASA Astrophysics Data System (ADS)

    Maniatis, M.

    2018-03-01

    Recently, it has been shown that on-shell scattering amplitudes can be constructed by the Feynman-tree theorem combined with the BCFW recursion relations. Since the BCFW relations are restricted to tree diagrams, the preceding application of the Feynman-tree theorem is essential. In this way, amplitudes can be constructed by on-shell and gauge-invariant tree amplitudes. Here, we want to apply this method to the electron-photon vertex correction. We present all the single, double, and triple phase-space tensor integrals explicitly and show that the sum of amplitudes coincides with the result of the conventional calculation of a virtual loop correction.

  19. Distinguishability notion based on Wootters statistical distance: Application to discrete maps

    NASA Astrophysics Data System (ADS)

    Gomez, Ignacio S.; Portesi, M.; Lamberti, P. W.

    2017-08-01

    We study the distinguishability notion given by Wootters for states represented by probability density functions. A particular feature of this notion is that it can also be used to define a statistical distance in chaotic one-dimensional maps. Based on that definition, we provide a metric d̄ for an arbitrary discrete map. Moreover, from d̄ we associate a metric space with each invariant density of a given map, which turns out to be the set of all distinguished points when the number of iterations of the map tends to infinity. We also give a characterization of the wandering set of a map in terms of the metric d̄, which allows us to identify the dissipative regions in phase space. We illustrate the results for the logistic and circle maps, numerically and analytically, obtaining d̄ and the wandering set for some characteristic values of their parameters. Finally, an extension of the associated metric space to arbitrary probability distributions (not necessarily invariant densities) is given along with some consequences. The statistical properties of distributions given by histograms are characterized in terms of the cardinality of the associated metric space. For two conjugate variables, the uncertainty principle is expressed in terms of the diameters of the metric spaces associated with those variables.
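    For discrete distributions estimated from histograms of map iterates, the Wootters distinguishability reduces to the angle arccos(Σ√(pᵢqᵢ)) between the square-root vectors of the two distributions. The short sketch below computes this quantity for two logistic-map orbits; the bin number, burn-in, and parameter values are illustrative assumptions, not the construction used in the paper.

    import numpy as np

    def wootters_distance(p, q):
        # Wootters statistical distance between two discrete distributions:
        # d(p, q) = arccos( sum_i sqrt(p_i * q_i) ).
        p = np.asarray(p, dtype=float); p = p / p.sum()
        q = np.asarray(q, dtype=float); q = q / q.sum()
        overlap = np.sqrt(p * q).sum()
        return np.arccos(np.clip(overlap, 0.0, 1.0))

    def logistic_histogram(r, x0=0.2, n_iter=50000, burn_in=1000, bins=100):
        # Histogram of a logistic-map orbit x_{n+1} = r * x_n * (1 - x_n).
        x, samples = x0, []
        for i in range(n_iter):
            x = r * x * (1.0 - x)
            if i >= burn_in:
                samples.append(x)
        hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
        return hist

    # Distance between the estimated invariant densities at two parameter values.
    p = logistic_histogram(r=3.9)
    q = logistic_histogram(r=3.7)
    print(wootters_distance(p, q))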

  20. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE PAGES

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    2016-12-07

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs could supply unbalanced loads. The case study also validates the DG-ES coordination.

  1. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations. The unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs could supply unbalanced loads. The case study also validates the DG-ES coordination.

  2. Principal components and iterative regression analysis of geophysical series: Application to Sunspot number (1750 2004)

    NASA Astrophysics Data System (ADS)

    Nordemann, D. J. R.; Rigozo, N. R.; de Souza Echer, M. P.; Echer, E.

    2008-11-01

    We present here an implementation of a least-squares iterative regression method applied to the sine functions embedded in the principal components extracted from geophysical time series. This method seems to represent a useful improvement for the quantitative analysis of periodicities in non-stationary time series. The principal component determination followed by the least-squares iterative regression method was implemented in an algorithm written in the Scilab (2006) language. The main result of the method is the set of sine functions embedded in the analyzed series, in decreasing order of significance, from the most important ones, likely to represent the physical processes involved in the generation of the series, to the less important ones that represent noise components. Taking into account the need for deeper knowledge of the Sun's past history and its implications for global climate change, the method was applied to the Sunspot Number series (1750-2004). With the threshold and parameter values used here, the application of the method leads to a total of 441 explicit sine functions, among which 65 were considered significant and were used for a reconstruction that gave a normalized mean squared error of 0.146.
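    The core step, fitting one sine function at a time by least squares and removing it from the residual before searching for the next, can be sketched as follows. The frequency grid, number of components, and synthetic test series are assumptions for illustration and do not reproduce the authors' Scilab implementation or its principal-component preprocessing.

    import numpy as np

    def fit_best_sine(t, y, periods):
        # Least-squares fit of a*sin(2*pi*t/T) + b*cos(2*pi*t/T) over a grid of
        # trial periods T; return the best period and the fitted component.
        best = None
        for T in periods:
            A = np.column_stack([np.sin(2 * np.pi * t / T),
                                 np.cos(2 * np.pi * t / T)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = np.sum((y - A @ coef) ** 2)
            if best is None or sse < best[0]:
                best = (sse, T, A @ coef)
        return best[1], best[2]

    def iterative_sine_regression(t, y, periods, n_components=3):
        # Extract sine components in decreasing order of significance by
        # iteratively fitting the best-fitting component and subtracting it.
        residual, components = y.copy(), []
        for _ in range(n_components):
            T, comp = fit_best_sine(t, residual, periods)
            components.append((T, comp))
            residual = residual - comp
        return components, residual

    # Illustrative synthetic series with an ~11-year cycle plus noise.
    t = np.arange(0.0, 255.0)                        # yearly samples, 1750-2004
    y = 60.0 * np.sin(2 * np.pi * t / 11.0) + 10.0 * np.random.randn(t.size)
    comps, resid = iterative_sine_regression(t, y, np.arange(2.0, 100.0, 0.5))
    print([round(T, 1) for T, _ in comps])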

  3. Comparison of Knowledge-based Iterative Model Reconstruction and Hybrid Reconstruction Techniques for Liver CT Evaluation of Hypervascular Hepatocellular Carcinoma.

    PubMed

    Park, Hyun Jeong; Lee, Jeong Min; Park, Sung Bin; Lee, Jong Beum; Jeong, Yoong Ki; Yoon, Jeong Hee

    The purpose of this work was to evaluate the image quality, lesion conspicuity, and dose reduction provided by knowledge-based iterative model reconstruction (IMR) in computed tomography (CT) of the liver compared with hybrid iterative reconstruction (IR) and filtered back projection (FBP) in patients with hepatocellular carcinoma (HCC). Fifty-six patients with 61 HCCs who underwent multiphasic reduced-dose CT (RDCT; n = 33) or standard-dose CT (SDCT; n = 28) were retrospectively evaluated. Images reconstructed with FBP, hybrid IR (iDose), and IMR were evaluated for image quality using CT attenuation and image noise. Objective and subjective image quality of RDCT and SDCT sets were independently assessed by 2 observers in a blinded manner. Image quality and lesion conspicuity were better with IMR for both RDCT and SDCT than with either FBP or IR (P < 0.001). The contrast-to-noise ratio of HCCs in IMR-RDCT was significantly higher than with IR-SDCT on the delayed phase (DP) (P < 0.001), and comparable on the arterial phase (P = 0.501). IMR-RDCT was significantly superior to FBP-SDCT (P < 0.001). Compared with IR-SDCT, IMR-RDCT was comparable in image sharpness and tumor conspicuity on the arterial phase, and superior in image quality, noise, and lesion conspicuity on DP. With the use of IMR, a 27% reduction of effective dose was achieved with RDCT (12.7 ± 0.6 mSv) compared with SDCT (17.4 ± 1.1 mSv) without loss of image quality (P < 0.001). Iterative model reconstruction provides better image quality and tumor conspicuity than FBP and IR with considerable noise reduction. In addition, IMR-RDCT achieved results at least comparable to IR-SDCT for the evaluation of HCCs.

  4. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about a 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  5. A self-adapting system for the automated detection of inter-ictal epileptiform discharges.

    PubMed

    Lodder, Shaun S; van Putten, Michel J A M

    2014-01-01

    Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer-assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form "IED nominations", each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with the highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update the certainty values of the remaining nominations, and another iteration is performed in which the ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations, respectively. Reviewing fifteen iterations for the 20-30 min recordings took approximately 5 min. The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents.

  6. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
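    Of the three inversion approaches listed, inverse filtering with a truncated filter is the simplest to sketch: divide the field map by the convolution kernel in k-space wherever the kernel magnitude exceeds a threshold, and zero the rest. The smooth test kernel, threshold, and point source below are generic assumptions, not the dipole kernel or settings of the article.

    import numpy as np

    def truncated_inverse_filter(fieldmap, kernel_k, threshold=0.05):
        # Deconvolve fieldmap = kernel * source by k-space division, truncating
        # the inverse filter where |kernel| is small (the ill-posed region).
        F = np.fft.fftn(fieldmap)
        inv = np.zeros_like(kernel_k)
        mask = np.abs(kernel_k) > threshold
        inv[mask] = 1.0 / kernel_k[mask]
        return np.real(np.fft.ifftn(F * inv))

    # Illustrative 3D test with a smooth stand-in kernel and a point source.
    shape = (32, 32, 32)
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(s) for s in shape], indexing="ij")
    kernel_k = np.exp(-50.0 * (kx**2 + ky**2 + kz**2))       # assumed kernel (k-space)
    source = np.zeros(shape); source[16, 16, 16] = 1.0        # point susceptibility source
    fieldmap = np.real(np.fft.ifftn(np.fft.fftn(source) * kernel_k))
    recon = truncated_inverse_filter(fieldmap, kernel_k)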

  7. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  8. On the use of the energy probability distribution zeros in the study of phase transitions

    NASA Astrophysics Data System (ADS)

    Mól, L. A. S.; Rodrigues, R. G. M.; Stancioli, R. A.; Rocha, J. C. S.; Costa, B. V.

    2018-04-01

    This contribution is devoted to covering some technical aspects related to the use of the recently proposed energy probability distribution zeros in the study of phase transitions. This method is based on partial knowledge of the partition function zeros and has been shown to be extremely efficient at precisely locating phase transition temperatures. It is based on an iterative method such that the transition temperature can be approached at will. The iterative method will be detailed, and some convergence issues that have been observed in its application to the 2D Ising model and to an artificial spin ice model will be shown, together with ways to circumvent them.

  9. Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis.

    PubMed

    Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo

    2017-05-01

    In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all the state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion of the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
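    The distinguishing feature described above, updating the iterative Q function over the entire discrete state and control space in every iteration, is sketched below for a hypothetical toy system. The ring dynamics, stage cost, discount factor, and convergence tolerance are illustrative assumptions unrelated to the systems analyzed in the paper, and no neural-network approximation is used here.

    import numpy as np

    # Hypothetical discrete system: states 0..N-1 on a ring, controls {-1, 0, +1}.
    N_STATES = 10
    CONTROLS = [-1, 0, 1]
    GAMMA = 0.95                                     # discount factor

    def f(x, u):
        # Deterministic dynamics x_{k+1} = f(x_k, u_k).
        return (x + u) % N_STATES

    def U(x, u):
        # Stage cost: distance to state 0 plus a small control penalty.
        return min(x, N_STATES - x) + 0.1 * abs(u)

    Q = np.zeros((N_STATES, len(CONTROLS)))          # initial iterative Q function

    for _ in range(500):
        Q_next = np.empty_like(Q)
        # Update Q for ALL states and ALL controls in each iteration, rather
        # than for the single visited state-control pair.
        for x in range(N_STATES):
            for j, u in enumerate(CONTROLS):
                Q_next[x, j] = U(x, u) + GAMMA * Q[f(x, u)].min()
        converged = np.max(np.abs(Q_next - Q)) < 1e-8
        Q = Q_next
        if converged:
            break

    policy = [CONTROLS[j] for j in Q.argmin(axis=1)]  # greedy iterative control law
    print(policy)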

  10. Efficient entanglement distillation without quantum memory.

    PubMed

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J; Fiurášek, Jaromír; Schnabel, Roman

    2016-05-31

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution.

  11. Efficient entanglement distillation without quantum memory

    PubMed Central

    Abdelkhalek, Daniela; Syllwasschy, Mareike; Cerf, Nicolas J.; Fiurášek, Jaromír; Schnabel, Roman

    2016-01-01

    Entanglement distribution between distant parties is an essential component to most quantum communication protocols. Unfortunately, decoherence effects such as phase noise in optical fibres are known to demolish entanglement. Iterative (multistep) entanglement distillation protocols have long been proposed to overcome decoherence, but their probabilistic nature makes them inefficient since the success probability decays exponentially with the number of steps. Quantum memories have been contemplated to make entanglement distillation practical, but suitable quantum memories are not realised to date. Here, we present the theory for an efficient iterative entanglement distillation protocol without quantum memories and provide a proof-of-principle experimental demonstration. The scheme is applied to phase-diffused two-mode-squeezed states and proven to distil entanglement for up to three iteration steps. The data are indistinguishable from those that an efficient scheme using quantum memories would produce. Since our protocol includes the final measurement it is particularly promising for enhancing continuous-variable quantum key distribution. PMID:27241946

  12. Iterative simulated quenching for designing irregular-spot-array generators.

    PubMed

    Gillet, J N; Sheng, Y

    2000-07-10

    We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves much lower cost function and reconstruction error and higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule and is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed heights.
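    A minimal sketch of the idea of repeated quenching with a rescaled temperature is given below. The one-dimensional cost function, the Metropolis moves, and in particular the rescaling rule (here simply the spread of the energies visited in the previous quench) are simplified assumptions, not the ensemble-statistics rule or the diffractive-element cost function used in the paper.

    import math
    import random
    import statistics

    def quench(x, cost, T0, cooling=0.9, steps_per_T=200, t_min=1e-3):
        # One fast simulated-quenching run with exponential cooling from T0.
        energies, T = [], T0
        while T > t_min:
            for _ in range(steps_per_T):
                x_new = x + random.gauss(0.0, 0.5)           # random trial move
                dE = cost(x_new) - cost(x)
                if dE < 0 or random.random() < math.exp(-dE / T):
                    x = x_new                                # Metropolis acceptance
                energies.append(cost(x))
            T *= cooling
        return x, energies

    def iterative_simulated_quenching(cost, x0, n_cycles=5, T0=5.0):
        # After each quench, rescale the temperature from statistics of the
        # visited energies and quench again, bringing the frozen state back
        # into a dynamic state.
        x, T = x0, T0
        for _ in range(n_cycles):
            x, energies = quench(x, cost, T)
            T = max(statistics.pstdev(energies), 0.05)       # assumed rescaling rule
        return x

    # Illustrative one-dimensional multimodal cost with many local minima.
    cost = lambda x: 0.1 * x * x + math.sin(5.0 * x)
    print(round(iterative_simulated_quenching(cost, x0=4.0), 3))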

  13. Variable aperture-based ptychographical iterative engine method.

    PubMed

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE and the shape, the size, and the position of the aperture need not be known exactly, this proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various scientific fields. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  14. Design of a dispersion interferometer combined with a polarimeter to increase the electron density measurement reliability on ITER

    NASA Astrophysics Data System (ADS)

    Akiyama, T.; Sirinelli, A.; Watts, C.; Shigin, P.; Vayakis, G.; Walsh, M.

    2016-11-01

    A dispersion interferometer is a reliable density measurement system and is being designed as a complementary density diagnostic on ITER. The dispersion interferometer is inherently insensitive to mechanical vibrations, and a combined polarimeter with the same line of sight can correct fringe jump errors. A proof-of-principle test of the CO2 laser dispersion interferometer combined with the PEM polarimeter was recently conducted, in which the phase shift and the polarization angle were successfully measured simultaneously. Standard deviations of the line-average density and the polarization angle measurements over 1 s are 9 × 10¹⁶ m⁻² and 0.19°, respectively, with a time constant of 100 μs. Drifts of the zero point, which determine the resolution in steady-state operation, correspond to 0.25% and 1% of the phase shift and the Faraday rotation angle expected on ITER.

  15. Design of a dispersion interferometer combined with a polarimeter to increase the electron density measurement reliability on ITER.

    PubMed

    Akiyama, T; Sirinelli, A; Watts, C; Shigin, P; Vayakis, G; Walsh, M

    2016-11-01

    A dispersion interferometer is a reliable density measurement system and is being designed as a complementary density diagnostic on ITER. The dispersion interferometer is inherently insensitive to mechanical vibrations, and a combined polarimeter with the same line of sight can correct fringe jump errors. A proof-of-principle test of the CO2 laser dispersion interferometer combined with the PEM polarimeter was recently conducted, in which the phase shift and the polarization angle were successfully measured simultaneously. Standard deviations of the line-average density and the polarization angle measurements over 1 s are 9 × 10¹⁶ m⁻² and 0.19°, respectively, with a time constant of 100 μs. Drifts of the zero point, which determine the resolution in steady-state operation, correspond to 0.25% and 1% of the phase shift and the Faraday rotation angle expected on ITER.

  16. US NDC Modernization Iteration E1 Prototyping Report: User Interface Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lober, Randall R.

    2014-12-01

    During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team completed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the User Interface Framework (UIF) in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.

  17. US NDC Modernization Iteration E1 Prototyping Report: Common Object Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Jennifer E.; Hess, Michael M.

    2014-12-01

    During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team completed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the Common Object Interface (COI) in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.

  18. US NDC Modernization Iteration E1 Prototyping Report: Processing Control Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Ryan; Hamlet, Benjamin R.

    2014-12-01

    During the first iteration of the US NDC Modernization Elaboration phase (E1), the SNL US NDC modernization project team developed an initial survey of applicable COTS solutions, and established exploratory prototyping related to the processing control framework in support of system architecture definition. This report summarizes these activities and discusses planned follow-on work.

  19. Measurement of the complex transmittance of large optical elements with Ptychographical Iterative Engine.

    PubMed

    Wang, Hai-Yan; Liu, Cheng; Veetil, Suhas P; Pan, Xing-Chen; Zhu, Jian-Qiang

    2014-01-27

    Wavefront control is a significant parameter in inertial confinement fusion (ICF). The complex transmittance of large optical elements, which are often used in ICF, is obtained by computing the phase difference of the illuminating and transmitted fields using the Ptychographical Iterative Engine (PIE). This can accurately and effectively measure the transmittance of large optical elements with irregular surface profiles, which are otherwise not measurable using commonly used interferometric techniques due to the lack of a standard reference plate. Experiments are done with a Continuous Phase Plate (CPP) to illustrate the feasibility of this method.
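    The object-update step at the heart of a PIE-type reconstruction is compact enough to sketch. The update below follows the widely used ePIE form rather than the exact algorithm of the paper, and the flat probe, single scan position, step size, and synthetic pure-phase object are illustrative assumptions only.

    import numpy as np

    def pie_object_update(obj, probe, measured_amplitude, alpha=1.0):
        # One ePIE-style update of the complex object transmittance at a single
        # probe position, given the measured far-field diffraction amplitude.
        exit_wave = probe * obj                        # exit wave behind the object
        far_field = np.fft.fft2(exit_wave)             # propagate to the detector
        # Keep the computed phase, replace the modulus by the measurement.
        corrected = measured_amplitude * np.exp(1j * np.angle(far_field))
        exit_new = np.fft.ifft2(corrected)             # propagate back
        return obj + alpha * np.conj(probe) / (np.abs(probe) ** 2).max() * (exit_new - exit_wave)

    # Toy run exercising the update (a real reconstruction scans many positions).
    n = 64
    probe = np.ones((n, n), dtype=complex)              # assumed flat illumination
    true_obj = np.exp(1j * 0.5 * np.random.rand(n, n))  # synthetic pure-phase object
    data = np.abs(np.fft.fft2(probe * true_obj))        # measured diffraction amplitudes
    obj = np.ones((n, n), dtype=complex)
    for _ in range(50):
        obj = pie_object_update(obj, probe, data)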

  20. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Du; Yang, Weitao

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to a significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV of the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization are even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K⁴), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  1. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator of the complex field variable complicates the adjoint sensitivity, causing the originally real-valued design variable to become complex during the iterative solution procedure. The adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and keep the design variable real-valued. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on the self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The self-consistent adjoint analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts, substituting the split variables into the wave equations, and deriving coupled equations equivalent to the original wave equations, where the infinite free space is truncated by perfectly matched layers. The topology optimization problems of electromagnetic waves are then transformed into forms defined on real functional spaces instead of complex functional spaces; the adjoint analysis is implemented on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived, and the phase-dependence problem is avoided for the derived structural topology. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.

  2. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE PAGES

    Zhang, Du; Yang, Weitao

    2016-10-13

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to a significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV of the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization are even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K⁴), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  3. Space-based solar power conversion and delivery systems study. Volume 2: Engineering analysis

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The technical and economic feasibility of Satellite Solar Power Systems was studied with emphasis on the analysis and definition of an integrated strawman configuration concept, from which credible cost data could be estimated. Specifically, system concepts for each of the major subprogram areas were formulated, analyzed, and iterated to the degree necessary for establishing an overall, workable baseline system design. Cost data were estimated for the baseline and used to conduct economic analyses. The baseline concept selected was a 5-GW crystal silicon truss-type photovoltaic configuration, which represented the most mature concept available. The overall results and major findings, and the results of technical analyses performed during the final phase of the study efforts are reported.

  4. Uncertainties in SOA Formation from the Photooxidation of α-pinene

    NASA Astrophysics Data System (ADS)

    McVay, R.; Zhang, X.; Aumont, B.; Valorso, R.; Camredon, M.; La, S.; Seinfeld, J.

    2015-12-01

    Explicit chemical models such as GECKO-A (the Generator for Explicit Chemistry and Kinetics of Organics in the Atmosphere) enable detailed modeling of gas-phase photooxidation and secondary organic aerosol (SOA) formation. Comparison between these explicit models and chamber experiments can provide insight into processes that are missing or unknown in these models. GECKO-A is used to model seven SOA formation experiments from α-pinene photooxidation conducted at varying seed particle concentrations with varying oxidation rates. We investigate various physical and chemical processes to evaluate the extent of agreement between the experiments and the model predictions. We examine the effect of vapor wall loss on SOA formation and how the importance of this effect changes at different oxidation rates. Proposed gas-phase autoxidation mechanisms are shown to significantly affect SOA predictions. The potential effects of particle-phase dimerization and condensed-phase photolysis are investigated. We demonstrate the extent to which SOA predictions in the α-pinene photooxidation system depend on uncertainties in the chemical mechanism.

  5. A Predictive Model for Toxicity Effects Assessment of Biotransformed Hepatic Drugs Using Iterative Sampling Method.

    PubMed

    Tharwat, Alaa; Moemen, Yasmine S; Hassanien, Aboul Ella

    2016-12-09

    Measuring toxicity is one of the main steps in drug development. Hence, there is a high demand for computational models to predict the toxicity effects of potential drugs. In this study, we used a dataset that covers four toxicity effects: mutagenic, tumorigenic, irritant and reproductive effects. The proposed model consists of three phases. In the first phase, rough set-based methods are used to select the most discriminative features for reducing the classification time and improving the classification performance. Due to the imbalanced class distribution, in the second phase, different sampling methods such as Random Under-Sampling, Random Over-Sampling and the Synthetic Minority Oversampling Technique are used to address the problem of imbalanced datasets. The ITerative Sampling (ITS) method is proposed to avoid the limitations of those methods. The ITS method has two steps. The first step (sampling step) iteratively modifies the prior distribution of the minority and majority classes. In the second step, a data cleaning method is used to remove the overlapping that is produced by the first step. In the third phase, a Bagging classifier is used to classify an unknown drug as toxic or non-toxic. The experimental results showed that the proposed model performed well in classifying unknown samples according to all toxic effects in the imbalanced datasets.

  6. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    PubMed

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

    Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
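
    The structural point of the framework, pre-gridding once so that every subsequent iteration uses only Cartesian FFTs, can be sketched as follows. The nearest-grid-point deposit and image-domain soft-thresholding below are crude placeholders for the coil-calibrated GROG shift and the GRASP temporal-sparsity model, and are illustrative only.

```python
# Conceptual sketch: one-time gridding, then an iterative Cartesian
# reconstruction that needs only FFTs and a sampling mask per iteration.
# The nearest-grid-point deposit is a placeholder for the coil-calibrated
# GROG shift; the image-domain soft-threshold stands in for the sparsity model.
import numpy as np

def pregrid(kx, ky, data, n):
    """One-time 'gridding': deposit non-Cartesian samples on an n x n grid."""
    grid = np.zeros((n, n), dtype=complex)
    mask = np.zeros((n, n), dtype=bool)
    ix = np.clip(np.round(kx).astype(int) + n // 2, 0, n - 1)
    iy = np.clip(np.round(ky).astype(int) + n // 2, 0, n - 1)
    grid[iy, ix], mask[iy, ix] = data, True
    return grid, mask

def recon(kgrid, mask, n_iter=50, lam=0.01):
    """Iterations alternate FFT data consistency and a soft-threshold prox."""
    img = np.fft.ifft2(np.fft.ifftshift(kgrid))
    for _ in range(n_iter):
        k = np.fft.fftshift(np.fft.fft2(img))
        k[mask] = kgrid[mask]                       # keep the measured samples
        img = np.fft.ifft2(np.fft.ifftshift(k))
        mag = np.abs(img)
        img = img * np.maximum(1 - lam / (mag + 1e-12), 0)   # complex soft-threshold
    return img

# Usage with synthetic radial-like samples on a 128 x 128 grid.
rng = np.random.default_rng(1)
kx, ky = rng.uniform(-64, 64, 2000), rng.uniform(-64, 64, 2000)
vals = rng.normal(size=2000) + 1j * rng.normal(size=2000)
kgrid, kmask = pregrid(kx, ky, vals, 128)
image = recon(kgrid, kmask)
```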

  7. The PRIMA Test Facility: SPIDER and MITICA test-beds for ITER neutral beam injectors

    NASA Astrophysics Data System (ADS)

    Toigo, V.; Piovan, R.; Dal Bello, S.; Gaio, E.; Luchetta, A.; Pasqualotto, R.; Zaccaria, P.; Bigi, M.; Chitarin, G.; Marcuzzi, D.; Pomaro, N.; Serianni, G.; Agostinetti, P.; Agostini, M.; Antoni, V.; Aprile, D.; Baltador, C.; Barbisan, M.; Battistella, M.; Boldrin, M.; Brombin, M.; Dalla Palma, M.; De Lorenzi, A.; Delogu, R.; De Muri, M.; Fellin, F.; Ferro, A.; Fiorentin, A.; Gambetta, G.; Gnesotto, F.; Grando, L.; Jain, P.; Maistrello, A.; Manduchi, G.; Marconato, N.; Moresco, M.; Ocello, E.; Pavei, M.; Peruzzo, S.; Pilan, N.; Pimazzoni, A.; Recchia, M.; Rizzolo, A.; Rostagni, G.; Sartori, E.; Siragusa, M.; Sonato, P.; Sottocornola, A.; Spada, E.; Spagnolo, S.; Spolaore, M.; Taliercio, C.; Valente, M.; Veltri, P.; Zamengo, A.; Zaniol, B.; Zanotto, L.; Zaupa, M.; Boilson, D.; Graceffa, J.; Svensson, L.; Schunke, B.; Decamps, H.; Urbani, M.; Kushwah, M.; Chareyre, J.; Singh, M.; Bonicelli, T.; Agarici, G.; Garbuglia, A.; Masiello, A.; Paolucci, F.; Simon, M.; Bailly-Maitre, L.; Bragulat, E.; Gomez, G.; Gutierrez, D.; Mico, G.; Moreno, J.-F.; Pilard, V.; Kashiwagi, M.; Hanada, M.; Tobari, H.; Watanabe, K.; Maejima, T.; Kojima, A.; Umeda, N.; Yamanaka, H.; Chakraborty, A.; Baruah, U.; Rotti, C.; Patel, H.; Nagaraju, M. V.; Singh, N. P.; Patel, A.; Dhola, H.; Raval, B.; Fantz, U.; Heinemann, B.; Kraus, W.; Hanke, S.; Hauer, V.; Ochoa, S.; Blatchford, P.; Chuilon, B.; Xue, Y.; De Esch, H. P. L.; Hemsworth, R.; Croci, G.; Gorini, G.; Rebai, M.; Muraro, A.; Tardocchi, M.; Cavenago, M.; D'Arienzo, M.; Sandri, S.; Tonti, A.

    2017-08-01

    The ITER Neutral Beam Test Facility (NBTF), called PRIMA (Padova Research on ITER Megavolt Accelerator), is hosted in Padova, Italy and includes two experiments: MITICA, the full-scale prototype of the ITER heating neutral beam injector, and SPIDER, the full-size radio-frequency negative-ion source. The NBTF realization and the exploitation of SPIDER and MITICA have been recognized as necessary to make the future operation of the ITER heating neutral beam injectors efficient and reliable, which is fundamental to achieving thermonuclear-relevant plasma parameters in ITER. This paper reports on the design and R&D carried out to construct PRIMA, SPIDER and MITICA, and highlights the major progress made in just a few years, from the signature of the agreement for the NBTF realization in 2011 up to now, when the buildings and relevant infrastructures have been completed, SPIDER is entering the integrated commissioning phase, and the procurement of several MITICA components is at a well-advanced stage.

  8. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm poses image reconstruction as a standard optimization problem comprising an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in the proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
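
    The core loop, alternating a gradient step on the data-fidelity term with a shrinkage step whose threshold adapts to the noise level, can be sketched as below; the MAD-based universal threshold and the image-domain shrinkage are generic stand-ins, and the edge-correlation prior matrix is omitted.

```python
# Minimal iterative shrinkage/thresholding sketch for undersampled MRI.
# The MAD-based universal threshold is a generic noise-adaptive stand-in for
# the paper's estimator; the edge-correlation prior weighting is omitted and
# shrinkage is applied in the image domain rather than a wavelet domain.
import numpy as np

def soft(x, t):
    """Complex soft-thresholding (proximal operator of the l1 penalty)."""
    mag = np.abs(x)
    return x * np.maximum(1 - t / (mag + 1e-12), 0)

def ista_recon(kdata, mask, n_iter=100, scale=0.1):
    img = np.fft.ifft2(kdata * mask)
    for _ in range(n_iter):
        # Gradient step on the l2 data-fidelity term ||mask*F(img) - kdata||^2.
        resid = (np.fft.fft2(img) - kdata) * mask
        img = img - np.fft.ifft2(resid)
        # Noise-adaptive threshold: crude MAD-style scale times universal factor.
        sigma = np.median(np.abs(img)) / 0.6745
        img = soft(img, scale * sigma * np.sqrt(2 * np.log(img.size)))
    return img

# Usage: retrospectively undersample a toy k-space and reconstruct.
rng = np.random.default_rng(2)
truth = np.zeros((64, 64))
truth[20:40, 20:40] = 1.0
kfull = np.fft.fft2(truth) + 0.5 * rng.normal(size=(64, 64))
mask = rng.random((64, 64)) < 0.4                 # keep ~40% of k-space
rec = ista_recon(kfull * mask, mask)
```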

  9. Elliptic polylogarithms and iterated integrals on elliptic curves. Part I: general formalism

    NASA Astrophysics Data System (ADS)

    Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo

    2018-05-01

    We introduce a class of iterated integrals, defined through a set of linearly independent integration kernels on elliptic curves. As a direct generalisation of multiple polylogarithms, we construct our set of integration kernels so that they have at most simple poles, implying that the iterated integrals have at most logarithmic singularities. We study the properties of our iterated integrals and their relationship to the multiple elliptic polylogarithms from the mathematics literature. On the one hand, we find that our iterated integrals span essentially the same space of functions as the multiple elliptic polylogarithms. On the other hand, our formulation lends itself more directly to solving a large variety of problems in high-energy physics. We demonstrate the use of our functions in the evaluation of the Laurent expansion of some hypergeometric functions for values of the indices close to half integers.
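
    For orientation, the recursive definition underlying any such class of iterated integrals (generic form, not the specific elliptic kernels constructed in the paper) is:

```latex
% Generic iterated integral over integration kernels f_1, ..., f_n; in the
% elliptic case the kernels are defined on an elliptic curve and chosen to
% have at most simple poles, so the integrals have at most log singularities.
I(f_1, f_2, \ldots, f_n; x) \;=\; \int_0^x \mathrm{d}t \, f_1(t)\, I(f_2, \ldots, f_n; t),
\qquad I(;x) \;=\; 1 .
```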

  10. Efficient design of nanoplasmonic waveguide devices using the space mapping algorithm.

    PubMed

    Dastmalchi, Pouya; Veronis, Georgios

    2013-12-30

    We show that the space mapping algorithm, originally developed for microwave circuit optimization, can enable the efficient design of nanoplasmonic waveguide devices which satisfy a set of desired specifications. Space mapping utilizes a physics-based coarse model to approximate a fine model accurately describing a device. Here the fine model is a full-wave finite-difference frequency-domain (FDFD) simulation of the device, while the coarse model is based on transmission line theory. We demonstrate that simply optimizing the transmission line model of the device is not enough to obtain a device which satisfies all the required design specifications. On the other hand, when the iterative space mapping algorithm is used, it converges quickly to a design which meets all the specifications. In addition, full-wave FDFD simulations of only a few candidate structures are required before the iterative process is terminated. Use of the space mapping algorithm therefore results in large reductions in the required computation time compared to direct optimization of the fine FDFD model.
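
    A toy version of the loop, with simple algebraic stand-ins for the transmission-line (coarse) and FDFD (fine) responses, illustrates the iteration; the functions and target values below are hypothetical.

```python
# Toy aggressive-space-mapping loop: a cheap "coarse" model guides the design
# of an expensive "fine" model. Both responses here are simple algebraic
# stand-ins; in the paper the fine model is a full-wave FDFD solver and the
# coarse model a transmission-line circuit.
import numpy as np
from scipy.optimize import minimize

def coarse(x):        # cheap surrogate response
    return np.array([x[0] + 0.5 * x[1], x[1] ** 2])

def fine(x):          # "expensive" model with slightly shifted physics
    return np.array([1.1 * x[0] + 0.5 * x[1] + 0.05, 0.9 * x[1] ** 2 + 0.02])

target = np.array([1.0, 0.25])

# Step 1: optimize the coarse model only (cheap).
x_coarse = minimize(lambda x: np.sum((coarse(x) - target) ** 2), [0.0, 0.0]).x

# Step 2: space-mapping iterations, each needing one fine-model evaluation.
x_fine = x_coarse.copy()
for it in range(10):
    r_fine = fine(x_fine)
    if np.linalg.norm(r_fine - target) < 1e-8:
        break
    # Parameter extraction: which coarse parameters reproduce the fine response?
    z = minimize(lambda x: np.sum((coarse(x) - r_fine) ** 2), x_fine).x
    # Aggressive space-mapping update with an identity mapping approximation.
    x_fine = x_fine + (x_coarse - z)

print(it, x_fine, fine(x_fine))
```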

  11. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions test and apply alternate methods, repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high-accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.
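
    The sketch below illustrates the multi-stage idea with generic scikit-learn pieces: a first classifier is applied everywhere, feature-space regions where validation accuracy falls below a threshold are identified, and only those regions are handed to an alternative classifier. The clustering-based regions and the plain accuracy threshold are stand-ins for the paper's geographic regions and expert-based evaluation.

```python
# Two-stage adaptive classification sketch: a second classifier is trained and
# applied only in regions where the first one performs poorly. K-means regions
# and an accuracy threshold stand in for geographic regions and expert review.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

def two_stage_predict(X_tr, y_tr, X_val, y_val, X_te, n_regions=5, threshold=0.85):
    stage1 = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
    regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X_tr)
    tr_r, val_r, te_r = (regions.predict(A) for A in (X_tr, X_val, X_te))
    y_pred = stage1.predict(X_te)
    for r in range(n_regions):
        v = val_r == r
        if v.any() and (stage1.predict(X_val[v]) == y_val[v]).mean() < threshold:
            t = tr_r == r
            m_te = te_r == r
            if np.unique(y_tr[t]).size > 1 and m_te.any():
                # Low-accuracy region: train and apply an alternative classifier.
                stage2 = MLPClassifier(max_iter=500, random_state=0).fit(X_tr[t], y_tr[t])
                y_pred[m_te] = stage2.predict(X_te[m_te])
    return y_pred

# Usage with synthetic data split into train / validation / test.
rng = np.random.default_rng(4)
X = rng.normal(size=(900, 6))
y = (X[:, 0] * X[:, 1] > 0).astype(int)               # nonlinear decision rule
pred = two_stage_predict(X[:500], y[:500], X[500:700], y[500:700], X[700:])
print((pred == y[700:]).mean())
```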

  12. Conservative supra-characteristics method for splitting the hyperbolic systems of gasdynamics for real and perfect gases

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.

    1982-01-01

    A conservative flux difference splitting is presented for the hyperbolic systems of gasdynamics. The stable, robust method is suitable for wide application in a variety of schemes, explicit or implicit, iterative or direct, for marching in either time or space. The splitting is modeled on the local quasi-one-dimensional characteristics system for multi-dimensional flow, similar to Chakravarthy's nonconservative split-coefficient-matrix method; but, as a result of maintaining global conservation, the method is able to capture sharp shocks correctly. The embedded characteristics formulation is cast in a primitive variable, the volumetric internal energy (rather than the pressure), which is effective for treating real as well as perfect gases. Finally, the relationship of the splitting to characteristics boundary conditions is discussed, and the associated conservative matrix formulation for a computed blown-wall boundary condition is developed as an example. The theoretical development employs and extends Roe's notion of constructing stable upwind difference formulae by sending split, simple, one-sided flux difference pieces to the appropriate mesh sites. The developments are also believed to have the potential for aiding in the analysis of both existing and new conservative difference schemes.
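
    A scalar analogue of the idea, with the flux difference across each cell interface sent upwind according to the sign of a Roe-averaged wave speed, is sketched below for a 1-D conservation law; it illustrates the splitting itself, not the paper's full gasdynamic scheme.

```python
# Scalar illustration of conservative flux-difference splitting: the flux
# difference at each interface is split by the sign of the Roe-averaged speed
# and sent to the appropriate neighbouring cell. This is the scalar analogue
# of the system-level splitting, not the paper's full gasdynamic scheme.
import numpy as np

def step_fds(u, dt, dx, flux=lambda q: 0.5 * q ** 2):
    """One explicit step for u_t + f(u)_x = 0 (default: Burgers' equation)."""
    du = np.diff(u)
    df = np.diff(flux(u))
    du_safe = np.where(np.abs(du) > 1e-12, du, 1.0)
    a = np.where(np.abs(du) > 1e-12, df / du_safe, u[:-1])   # Roe speed (f'(u)=u here)
    dfp = np.where(a > 0, df, 0.0)       # pieces sent to the cell on the right
    dfm = np.where(a < 0, df, 0.0)       # pieces sent to the cell on the left
    un = u.copy()
    un[1:] -= dt / dx * dfp
    un[:-1] -= dt / dx * dfm
    return un

# Usage: a right-moving shock captured sharply by the split scheme.
x = np.linspace(0.0, 1.0, 200)
u = np.where(x < 0.3, 1.0, 0.0)
dx = x[1] - x[0]
for _ in range(150):
    u = step_fds(u, 0.4 * dx, dx)
```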

  13. Block correlated second order perturbation theory with a generalized valence bond reference function.

    PubMed

    Xu, Enhua; Li, Shuhua

    2013-11-07

    The block correlated second-order perturbation theory with a generalized valence bond (GVB) reference (GVB-BCPT2) is proposed. In this approach, each geminal in the GVB reference is considered as a "multi-orbital" block (a subset of spin orbitals), and each occupied or virtual spin orbital is also taken as a single block. The zeroth-order Hamiltonian is set to be the summation of the individual Hamiltonians of all blocks (with explicit two-electron operators within each geminal) so that the GVB reference function and all excited configuration functions are its eigenfunctions. The GVB-BCPT2 energy can be obtained directly without iteration, just like the second-order Møller-Plesset perturbation method (MP2); both methods are size consistent. We have applied the GVB-BCPT2 method to investigate the equilibrium distances and spectroscopic constants of 7 diatomic molecules, conformational energy differences of 8 small molecules, and bond-breaking potential energy profiles in 3 systems. GVB-BCPT2 is demonstrated to have noticeably better performance than MP2 for systems with significant multi-reference character, and provides reasonably accurate results for some systems with large active spaces, which are beyond the capability of all CASSCF-based methods.
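
    For reference, the non-iterative second-order energy has the familiar Rayleigh-Schrödinger form (written generically here; in GVB-BCPT2 the zeroth-order Hamiltonian is the sum of the block Hamiltonians and the sum runs over excited block-configuration functions):

```latex
% Generic second-order Rayleigh-Schrodinger correction. In GVB-BCPT2, H_0 is
% the sum of the individual block Hamiltonians and |Phi_k> are the excited
% configuration functions built from the blocks, all eigenfunctions of H_0.
E^{(2)} \;=\; \sum_{k \neq 0}
  \frac{\bigl|\langle \Phi_k \mid \hat{H} - \hat{H}_0 \mid \Psi_0 \rangle\bigr|^{2}}
       {E_0^{(0)} - E_k^{(0)}}
```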

  14. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information, which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point at which the partial-order information is safe and the whole state space is explored.
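
    The alternation can be written schematically as the loop below; `static_partial_order` and `model_check` are abstract placeholders standing in for the actual analyzer and checker, and the toy program exists only to make the loop executable.

```python
# Schematic of the alternation described above. The two analysis functions are
# placeholders, not the real analyzer/checker; the loop stops when the
# information exchanged between them stops changing (a fixed point).
def static_partial_order(program, aliasing):
    # Placeholder: the real analyzer computes partial-order (independence)
    # facts, refined by aliasing information received from the model checker.
    return frozenset((stmt, len(aliasing)) for stmt in program)

def model_check(program, partial_order):
    # Placeholder: the real checker explores the reduced state space and
    # reports the aliasing facts it discovers along the way.
    return frozenset(stmt for stmt, _ in partial_order if "alias" in stmt)

def verify(program):
    """Alternate the two analyses until the exchanged information stabilises."""
    aliasing, partial_order = frozenset(), None
    while True:
        new_po = static_partial_order(program, aliasing)
        new_alias = model_check(program, new_po)
        if new_po == partial_order and new_alias == aliasing:
            return new_po            # fixed point: the reduction is now safe
        partial_order, aliasing = new_po, new_alias

print(verify(["x = y", "alias p q", "send(m)"]))
```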

  15. A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2017-02-01

    Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
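
    A toy illustration of the tight (implicit) coupling, using SciPy's Jacobian-free Newton-Krylov solver with a GMRES inner iteration on two algebraic "subdomain" equations joined through an interface value; the equations are hypothetical stand-ins, not the stochastic diffusion components of the paper.

```python
# Toy tight coupling of two nonlinear "subdomain" equations through a shared
# interface value, solved monolithically with Jacobian-free Newton-Krylov and
# a GMRES linear solver. The algebraic equations are illustrative stand-ins.
import numpy as np
from scipy.optimize import newton_krylov

def coupled_residual(u):
    """Residual of the coupled system; u = [left state, interface, right state]."""
    left, iface, right = u
    return np.array([
        left ** 3 + left - 2.0 - iface,          # left "subdomain" equation
        (left - iface) - (iface - right),        # state/flux matching at interface
        right ** 3 + right - 1.0 + 0.5 * iface,  # right "subdomain" equation
    ])

# Implicit (tight) coupling: all equations are converged simultaneously.
u = newton_krylov(coupled_residual, np.zeros(3), method="gmres", f_tol=1e-9)
print(u, np.linalg.norm(coupled_residual(u)))
# An explicit coupling would instead take a single Newton-type pass per step,
# leaving an interface residual that the implicit coupling drives to zero.
```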

  16. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute-force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold applied while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but is also more efficient than brute-force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
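
    The objective being searched can be made concrete with a small sketch: each candidate experiment is scored by the Shannon entropy of the outcomes predicted by a set of probable models. Brute-force scoring is used here only to expose that objective; nested entropy sampling searches the same space far more efficiently. The linear models and experiment grid are hypothetical.

```python
# Sketch of the entropy objective for experiment selection: score each
# candidate experiment by the Shannon entropy of the outcome distribution
# predicted by a set of probable models. Brute-force scoring is shown only to
# make the objective concrete; nested entropy sampling searches it efficiently.
import numpy as np

def outcome_entropy(predictions, bins=10):
    """Shannon entropy of the histogram of model-predicted outcomes."""
    counts, _ = np.histogram(predictions, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

# Probable models: straight lines y = a*x + b with uncertain parameters.
rng = np.random.default_rng(3)
models = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(200, 2))   # sampled (a, b)

candidate_x = np.linspace(0.0, 10.0, 101)                        # experiment space
scores = [outcome_entropy(models[:, 0] * x + models[:, 1]) for x in candidate_x]
best = candidate_x[int(np.argmax(scores))]
print(f"most informative measurement location: x = {best:.2f}")
```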

  17. Analysis of correlation structures in the Synechocystis PCC6803 genome.

    PubMed

    Wu, Zuo-Bing

    2014-12-01

    Transfer of nucleotide strings in the Synechocystis sp. PCC6803 genome is investigated to exhibit periodic and non-periodic correlation structures by using the recurrence plot method and the phase space reconstruction technique. The periodic correlation structures are generated by periodic transfer of several substrings in long periodic or non-periodic nucleotide strings embedded in the coding regions of genes. The non-periodic correlation structures are generated by non-periodic transfer of several substrings covering or overlapping with the coding regions of genes. In both the periodic and non-periodic transfer, some gaps divide the long nucleotide strings into the substrings and prevent their global transfer. Most of the gaps are either the replacement of one base or the insertion/deletion of one base. In the reconstructed phase space, the points generated from two or three steps of the continuous iterative transfer via the second maximal distance can be fitted by two lines, which partly reveals an intrinsic dynamics in the transfer of nucleotide strings. Comparison of relative positions and lengths shows that the substrings involved in the non-periodic correlation structures are almost identical to the mobile elements annotated in the genome. The basic results on the correlation structures thus carry over to the mobile elements. Copyright © 2014 Elsevier Ltd. All rights reserved.
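
    The two techniques named above can be sketched in a few lines: a numeric encoding of the nucleotide string is embedded by time delays, and recurrences are the thresholded pairwise distances between the reconstructed points. The encoding, delay, embedding dimension, and threshold below are illustrative choices, not the parameters of the study.

```python
# Minimal time-delay phase-space reconstruction and recurrence plot for a
# numerically encoded nucleotide string. The A/C/G/T encoding, delay,
# embedding dimension, and threshold are illustrative choices only.
import numpy as np

def embed(series, dim=3, delay=2):
    """Time-delay embedding: rows are reconstructed phase-space points."""
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay:i * delay + n] for i in range(dim)])

def recurrence_matrix(points, eps=1.0):
    """Thresholded pairwise-distance (recurrence) matrix."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d < eps).astype(int)

sequence = "ATGCATGCATGGTTACGATGCATGC"
encoding = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}
series = np.array([encoding[b] for b in sequence])
R = recurrence_matrix(embed(series))
print(R.shape, R.sum())   # diagonal lines in R mark repeated (transferred) substrings
```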

  18. van der Waals criticality in AdS black holes: A phenomenological study

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Krishnakanta; Majhi, Bibhas Ranjan; Samanta, Saurav

    2017-10-01

    Anti-de Sitter black holes exhibit a van der Waals-type phase transition. In the extended phase-space formalism, the critical exponents for any spacetime metric are identical to the standard ones. Motivated by this fact, we give a general expression for the Helmholtz free energy near the critical point, which correctly reproduces these exponents. The idea is similar to the Landau model, which gives a phenomenological description of the usual second-order phase transition. Two main inputs are taken into account for the analysis: (a) black holes should have van der Waals-like isotherms, and (b) the free energy can be expressed solely as a function of thermodynamic volume and horizon temperature. The resulting analysis shows that the form of the Helmholtz free energy correctly encapsulates the features of the Landau function. We also discuss the isolated critical point accompanied by nonstandard values of the critical exponents. The whole formalism is then extended to two other criticalities, namely Y-X and T-S (based on the standard, i.e., nonextended, phase space), where X and Y are the generalized force and displacement, whereas T and S are the horizon temperature and entropy. We observe that in the former case the Gibbs free energy plays the role of the Landau function, whereas in the latter case that role is played by the internal energy (here, the black hole mass). Our analysis shows that, although the existence of a van der Waals phase transition depends on the explicit form of the black hole metric, the values of the critical exponents are universal in nature.
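
    The Landau-type construction invoked above can be summarized by the standard quartic mean-field expansion near the critical point (generic form, not the black-hole free energy derived in the paper), which already yields the quoted universal exponents:

```latex
% Generic Landau expansion near the critical point, with reduced temperature
% t = (T - T_c)/T_c and order parameter \psi; minimizing over \psi gives the
% mean-field exponents \alpha = 0, \beta = 1/2, \gamma = 1, \delta = 3 of the
% van der Waals class.
F(T, \psi) \;=\; F_0(T) \;+\; a\, t\, \psi^{2} \;+\; b\, \psi^{4},
\qquad a, b > 0 .
```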

  19. Admissible perturbations and false instabilities in PT -symmetric quantum systems

    NASA Astrophysics Data System (ADS)

    Znojil, Miloslav

    2018-03-01

    One of the most characteristic mathematical features of PT-symmetric quantum mechanics is the explicit Hamiltonian dependence of its physical Hilbert space of states, H = H(H). Some of the most important physical consequences are discussed, with emphasis on the dynamical regime in which the system is close to phase transition. A consistent perturbation treatment of such a regime is proposed. An illustrative application of the innovated perturbation theory to a non-Hermitian but PT-symmetric, user-friendly family of J-parametric "discrete anharmonic" quantum Hamiltonians H = H(λ⃗) is provided. The models are shown to admit the standard probabilistic interpretation if and only if the parameters remain compatible with the reality of the spectrum, λ⃗ ∈ D(physical). In contradiction to conventional wisdom, the systems are then shown to be stable with respect to admissible perturbations, inside the domain D(physical), even in the immediate vicinity of the phase-transition boundaries ∂D(physical).

  20. Reflections on Gibbs: From Critical Phenomena to the Amistad

    NASA Astrophysics Data System (ADS)

    Kadanoff, Leo P.

    2003-03-01

    J. Willard Gibbs the younger was the first American theorist. He was one of the inventors of statistical physics. His introduction and development of the concepts of phase space, phase transitions, and thermodynamic surfaces was remarkably correct and elegant. These three concepts form the basis of different but related areas of physics. The connection among these areas has been a subject of deep reflection from Gibbs' time to our own. I shall talk about these connections by using concepts suggested by the work of Michael Berry and explicitly put forward by the philosopher Robert Batterman. This viewpoint relates theory-connection to the applied-mathematics concepts of asymptotic analysis and singular perturbations. J. Willard Gibbs the younger had all his achievements concentrated in science. His father, also named J. Willard Gibbs and also a professor at Yale, had one great achievement that remains unmatched in our day. I shall describe it.
