Sample records for iterative self-consistent procedure

  1. Self-consistent hybrid functionals for solids: a fully-automated implementation

    NASA Astrophysics Data System (ADS)

    Erba, A.

    2017-08-01

    A fully-automated algorithm for the determination of the system-specific optimal fraction of exact exchange in self-consistent hybrid functionals of density functional theory is illustrated, as implemented in the public Crystal program. The exchange fraction of this new class of functionals is self-consistently updated in proportion to the inverse of the dielectric response of the system within an iterative procedure (Skone et al 2014 Phys. Rev. B 89, 195112). Each iteration of the present scheme, in turn, implies convergence of a self-consistent-field (SCF) and a coupled-perturbed-Hartree-Fock/Kohn-Sham (CPHF/KS) procedure. The present implementation, besides improving the user-friendliness of self-consistent hybrids, exploits the unperturbed and electric-field-perturbed density matrices from previous iterations as guesses for subsequent SCF and CPHF/KS iterations, which is documented to reduce the overall computational cost of the whole process by a factor of 2.
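The update loop described in this record can be sketched in a few lines. This is a toy model, not the Crystal implementation: `eps_of_alpha` is a hypothetical stand-in for the full SCF + CPHF/KS calculation of the static dielectric constant at a given exchange fraction.

```python
def eps_of_alpha(alpha):
    # Hypothetical monotone model of the dielectric response: a larger
    # exact-exchange fraction gives a smaller dielectric constant.
    return 4.0 / (1.0 + alpha)

def self_consistent_alpha(alpha0=0.25, tol=1e-8, max_iter=100):
    """Iterate alpha_{n+1} = 1 / eps_inf(alpha_n) to self-consistency."""
    alpha = alpha0
    for _ in range(max_iter):
        new_alpha = 1.0 / eps_of_alpha(alpha)
        if abs(new_alpha - alpha) < tol:
            return new_alpha
        alpha = new_alpha
    raise RuntimeError("exchange fraction did not converge")
```

For the model response above, the fixed point alpha = 1/eps_inf(alpha) is alpha = 1/3, and the iteration reaches it in a handful of steps; in the real scheme each evaluation of the dielectric constant is itself a converged SCF + CPHF/KS calculation, which is why reusing density matrices between outer iterations pays off.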

  2. Communication: A difference density picture for the self-consistent field ansatz.

    PubMed

    Parrish, Robert M; Liu, Fang; Martínez, Todd J

    2016-04-07

    We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.
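The mixed-precision idea can be illustrated with a linear stand-in for the Coulomb build (a toy sketch, not TeraChem: `M` models the linear map D → J[D], `d_sad` the large atomic-superposition part, `d_diff` the small bonding deformation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = rng.standard_normal((n, n))         # linear stand-in for the Coulomb build D -> J[D]
d_sad = 10.0 * rng.standard_normal(n)   # superposition-of-atomic-densities part (large)
d_diff = 0.01 * rng.standard_normal(n)  # bonding deformation (small)

# Reference: double-precision build on the total density.
j_ref = M @ (d_sad + d_diff)

# dSCF-style build: J on the atomic part is computed once in double precision
# and reused; only the small difference part is built in single precision.
j_sad = M @ d_sad
j_diff = (M.astype(np.float32) @ d_diff.astype(np.float32)).astype(np.float64)
j_dscf = j_sad + j_diff

rel_err = np.linalg.norm(j_dscf - j_ref) / np.linalg.norm(j_ref)
```

Because the single-precision roundoff acts only on the small deformation part, the relative error in the total build is orders of magnitude below what a fully single-precision build would incur.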

  3. Communication: A difference density picture for the self-consistent field ansatz

    NASA Astrophysics Data System (ADS)

    Parrish, Robert M.; Liu, Fang; Martínez, Todd J.

    2016-04-01

    We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this "difference self-consistent field (dSCF)" picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TeraChem SCF implementation.

  4. Assessing the performance of self-consistent hybrid functional for band gap calculation in oxide semiconductors

    NASA Astrophysics Data System (ADS)

    He, Jiangang; Franchini, Cesare

    2017-11-01

    In this paper we assess the predictive power of the self-consistent hybrid functional scPBE0 in calculating the band gap of oxide semiconductors. The computational procedure is based on the self-consistent evaluation of the mixing parameter α by means of an iterative calculation of the static dielectric constant using the perturbation expansion after discretization method, making use of the relation between α and the inverse of the macroscopic static dielectric constant.

  5. Communication: A difference density picture for the self-consistent field ansatz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parrish, Robert M.; Liu, Fang; Martínez, Todd J., E-mail: toddjmartinez@gmail.com

    We formulate self-consistent field (SCF) theory in terms of an interaction picture where the working variable is the difference density matrix between the true system and a corresponding superposition of atomic densities. As the difference density matrix directly represents the electronic deformations inherent in chemical bonding, this “difference self-consistent field (dSCF)” picture provides a number of significant conceptual and computational advantages. We show that this allows for a stable and efficient dSCF iterative procedure with wholly single-precision Coulomb and exchange matrix builds. We also show that the dSCF iterative procedure can be performed with aggressive screening of the pair space. These approximations are tested and found to be accurate for systems with up to 1860 atoms and >10 000 basis functions, providing for immediate overall speedups of up to 70% in the heavily optimized TERACHEM SCF implementation.

  6. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. We demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
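In outline, the proposed scheme alternates linear mixing with occasional Pulay extrapolation. A minimal sketch on a generic fixed-point problem x = g(x) follows; the parameters `beta`, `m`, and `k` are illustrative, not the paper's recommended settings.

```python
import numpy as np

def periodic_pulay(g, x0, beta=0.3, m=4, k=3, tol=1e-10, max_iter=1000):
    """Solve x = g(x): linear mixing on every step, plus a Pulay (DIIS)
    extrapolation over the last m residuals on every k-th step."""
    x = np.asarray(x0, dtype=float)
    X, R = [], []                              # histories of iterates and residuals
    for it in range(1, max_iter + 1):
        r = g(x) - x
        if np.linalg.norm(r) < tol:
            return x, it
        X.append(x.copy()); R.append(r.copy())
        X, R = X[-m:], R[-m:]
        if it % k == 0 and len(R) > 1:
            p = len(R)
            B = np.zeros((p + 1, p + 1))       # Pulay system with sum-to-one constraint
            B[:p, :p] = np.array(R) @ np.array(R).T
            B[p, :p] = B[:p, p] = 1.0
            rhs = np.zeros(p + 1); rhs[p] = 1.0
            try:
                c = np.linalg.solve(B, rhs)[:p]
                x = sum(ci * (xi + beta * ri) for ci, xi, ri in zip(c, X, R))
                continue
            except np.linalg.LinAlgError:
                pass                           # singular history: fall back to mixing
        x = x + beta * r                       # plain linear mixing
    raise RuntimeError("fixed-point iteration did not converge")
```

On a linear model problem g(x) = Ax + b with spectral radius below one, the periodic extrapolation converges in far fewer iterations than linear mixing alone.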

  7. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    NASA Astrophysics Data System (ADS)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-01

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
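The time-reversible guess propagation at the heart of these schemes can be illustrated with a scalar toy model. Here `s(t)` stands in for the exactly converged quantity (electronic density or induced dipoles) along the trajectory, and the auxiliary variable `p` is advanced by a Verlet-like recursion; this is a sketch of the idea, not the AMOEBA or Onetep integrators.

```python
import math

def propagate(dt=0.01, steps=2000, kappa=2.0):
    """Time-reversible extended-Lagrangian guess propagation (toy model).
    The auxiliary variable p tracks the converged solution s(t) without ever
    feeding optimized values directly back into its own history."""
    s = lambda t: math.sin(t)       # stand-in for the converged SCF/dipole solution
    p_prev, p = s(0.0), s(dt)       # initialize from the exact solution
    max_err = 0.0
    for n in range(1, steps):
        t = n * dt
        # Verlet-like, time-reversible update harmonically tethered to s(t)
        p_next = 2.0 * p - p_prev + kappa * (s(t) - p)
        p_prev, p = p, p_next
        max_err = max(max_err, abs(p - s(t + dt)))
    return max_err
```

Because the recursion is symmetric under time reversal, the guess stays within O(dt^2) of the converged solution without the systematic drift that naively reusing the previous converged value introduces.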

  8. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory.

    PubMed

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M N; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  9. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE PAGES

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; ...

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  10. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  11. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression.

    PubMed

    Meng, Yilin; Roux, Benoît

    2015-08-11

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
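For contrast with the regression approach described above, the standard self-consistent WHAM iteration that it avoids can be sketched directly. This is a minimal 1-D implementation: `counts` holds per-window histogram counts and `bias` the reduced bias energies in units of kT.

```python
import numpy as np

def wham(counts, bias, tol=1e-10, max_iter=200000):
    """Self-consistent WHAM iteration.
    counts: (S, B) histogram counts from S umbrella windows over B bins.
    bias:   (S, B) reduced bias energies u_i(b) in units of kT."""
    N = counts.sum(axis=1)                # samples per window
    f = np.zeros(counts.shape[0])         # window free energies (reduced)
    for _ in range(max_iter):
        # unbiased probability per bin from the WHAM equations
        denom = (N[:, None] * np.exp(f[:, None] - bias)).sum(axis=0)
        p = counts.sum(axis=0) / denom
        p /= p.sum()
        # update the window free energies from the new unbiased distribution
        f_new = -np.log((p[None, :] * np.exp(-bias)).sum(axis=1))
        f_new -= f_new[0]                 # fix the arbitrary gauge
        if np.max(np.abs(f_new - f)) < tol:
            return p, f_new
        f = f_new
    raise RuntimeError("WHAM iteration did not converge")
```

With histogram counts generated from a known distribution under harmonic biases, the iteration recovers that distribution; for many order parameters the bin count, and hence the cost per iteration, grows exponentially, which is the bottleneck the regression method sidesteps.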

  12. Efficient Determination of Free Energy Landscapes in Multiple Dimensions from Biased Umbrella Sampling Simulations Using Linear Regression

    PubMed Central

    2015-01-01

    The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of states is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimensions. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost. PMID:26574437

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining the consistent maximum-likelihood estimates of the parameters for a mixture of normal distributions. In addition, local maxima of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.
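The kind of iterative maximum-likelihood procedure discussed in this record is exemplified by the familiar EM iteration for a two-component 1-D Gaussian mixture (a generic textbook sketch, not the authors' exact scheme):

```python
import math, random

def em_two_gaussians(xs, iters=200):
    """EM iteration for a two-component 1-D Gaussian mixture."""
    # crude initialization from the data spread
    mu1, mu2 = min(xs), max(xs)
    s1 = s2 = (max(xs) - min(xs)) / 4 or 1.0
    w = 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        resp = []
        for x in xs:
            a = w * math.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
            b = (1 - w) * math.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
            resp.append(a / (a + b))
        # M-step: responsibility-weighted means, spreads, and mixing weight
        n1 = sum(resp); n2 = len(xs) - n1
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / n2
        s1 = math.sqrt(sum(r * (x - mu1) ** 2 for r, x in zip(resp, xs)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - r) * (x - mu2) ** 2 for r, x in zip(resp, xs)) / n2) or 1e-6
        w = n1 / len(xs)
    return w, (mu1, s1), (mu2, s2)
```

Each cycle increases the likelihood, so the procedure converges to a local maximum of the log-likelihood, which is exactly the local-convergence behavior analyzed in the record above.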

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finzel, Kati, E-mail: kati.finzel@liu.se

    The local conditions for the Pauli potential that are necessary in order to yield self-consistent electron densities from orbital-free calculations are investigated for approximations that are expressed with the help of a local position variable. It is shown that those local conditions also apply when the Pauli potential is given in terms of the electron density. An explicit formula for the Ne atom is given, preserving the local conditions during the iterative procedure. The resulting orbital-free electron density exhibits proper shell structure behavior and is in close agreement with the Kohn-Sham electron density. This study demonstrates that it is possible to obtain self-consistent orbital-free electron densities with proper atomic shell structure from simple one-point approximations for the Pauli potential at the local density level.

  15. Numerical modelling of instantaneous plate tectonics

    NASA Technical Reports Server (NTRS)

    Minster, J. B.; Haines, E.; Jordan, T. H.; Molnar, P.

    1974-01-01

    Assuming lithospheric plates to be rigid, 68 spreading rates, 62 fracture zone trends, and 106 earthquake slip vectors are systematically inverted to obtain a self-consistent model of instantaneous relative motions for eleven major plates. The inverse problem is linearized and solved iteratively by a maximum-likelihood procedure. Because the uncertainties in the data are small, Gaussian statistics are shown to be adequate. The use of a linear theory permits (1) the calculation of the uncertainties in the various angular velocity vectors caused by uncertainties in the data, and (2) quantitative examination of the distribution of information within the data set. The existence of a self-consistent model satisfying all the data is strong justification of the rigid plate assumption. Slow movement between North and South America is shown to be resolvable.
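At each relinearization, the maximum-likelihood machinery referred to above reduces to weighted least squares with Gaussian data errors, whose normal equations also deliver the parameter covariance used to propagate data uncertainties into the angular-velocity vectors. A minimal sketch, where the design matrix `A` and uncertainties `sigma` are generic placeholders for the linearized plate-motion problem:

```python
import numpy as np

def weighted_lsq(A, y, sigma):
    """Gaussian maximum-likelihood (weighted least squares) step.
    Returns the parameter estimates and their covariance matrix."""
    W = np.diag(1.0 / sigma**2)          # inverse data covariance (diagonal)
    cov = np.linalg.inv(A.T @ W @ A)     # parameter covariance
    m = cov @ A.T @ W @ y                # maximum-likelihood estimate
    return m, cov
```

In the nonlinear setting of the record, this step is repeated with `A` re-derived at the current model until the iteration converges.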

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.

  17. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator to the complex field variable complicates the adjoint sensitivity, which evolves the originally real-valued design variable into a complex one during the iterative solution procedure; the adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and keep the design variable real-valued. However, this enforced self-consistency causes the derived structural topology to depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on the self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The self-consistent adjoint analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts, substituting the split variables into the wave equations, and deriving coupled equations equivalent to the original wave equations, where the infinite free space is truncated by perfectly matched layers. The topology optimization problems of electromagnetic waves are thereby transformed into forms defined on real instead of complex functional spaces; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem is avoided for the derived structural topology. Several numerical examples demonstrate the robustness of the derived self-consistent adjoint analysis.
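The core device here, rewriting complex-valued equations as coupled real equations, can be shown on a plain linear system (a generic sketch, not the wave-equation implementation):

```python
import numpy as np

def solve_complex_as_real(A, b):
    """Solve the complex system A z = b on a real space by splitting into
    real and imaginary parts: (Ar + i Ai)(xr + i xi) = br + i bi becomes
    a coupled real block system."""
    Ar, Ai = A.real, A.imag
    br, bi = b.real, b.imag
    n = A.shape[0]
    M = np.block([[Ar, -Ai],
                  [Ai,  Ar]])            # coupled real equations
    sol = np.linalg.solve(M, np.concatenate([br, bi]))
    return sol[:n] + 1j * sol[n:]
```

Once the problem lives on a real space, differentiation never touches a conjugate operator, which is the property the article exploits to obtain a self-consistent adjoint sensitivity.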

  18. Applicability of Kerker preconditioning scheme to the self-consistent density functional theory calculations of inhomogeneous systems

    NASA Astrophysics Data System (ADS)

    Zhou, Yuzhi; Wang, Han; Liu, Yu; Gao, Xingyu; Song, Haifeng

    2018-03-01

    The Kerker preconditioner, based on the dielectric function of the homogeneous electron gas, is designed to accelerate the self-consistent field (SCF) iteration in density functional theory calculations. However, a question still remains regarding its applicability to inhomogeneous systems. We develop a modified Kerker preconditioning scheme which captures the long-range screening behavior of inhomogeneous systems and thus improves the SCF convergence. The effectiveness and efficiency are shown by tests on long-z slabs of metals, insulators, and metal-insulator contacts. For situations without a priori knowledge of the system, we design an a posteriori indicator to monitor whether the preconditioner has suppressed charge sloshing during the iterations. Based on the a posteriori indicator, we demonstrate two schemes of self-adaptive configuration for the SCF iteration.
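A minimal 1-D sketch of the standard Kerker preconditioning this record builds on: the density residual is filtered in reciprocal space by G²/(G² + q0²), which suppresses the long-wavelength components responsible for charge sloshing (the grid and the screening parameter `q0` are illustrative):

```python
import numpy as np

def kerker_precondition(residual, L, q0=1.0):
    """Apply the Kerker filter G^2/(G^2 + q0^2) to a real-space density
    residual on a 1-D periodic grid of physical length L."""
    n = residual.size
    G = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # reciprocal-lattice vectors
    r_hat = np.fft.fft(residual)
    factor = G**2 / (G**2 + q0**2)                 # -> 0 as G -> 0: damp sloshing modes
    return np.real(np.fft.ifft(factor * r_hat))
```

Long-wavelength modes (small G) are strongly damped while short-wavelength modes pass almost unchanged; the modified scheme of the record adjusts this homogeneous-gas filter to the screening of inhomogeneous systems.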

  19. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity.

    PubMed

    Dummer, Benjamin; Wieland, Stefan; Lindner, Benjamin

    2014-01-01

    A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells is in most cases far from being Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study the self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input with statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide an excellent approximation to the autocorrelation of spike trains in the recurrent network.

  20. Constructing Integrable Full-pressure Full-current Free-boundary Stellarator Magnetohydrodynamic Equilibria

    NASA Astrophysics Data System (ADS)

    Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.

    2003-06-01

    For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands are guaranteed to exist. Magnetic islands break the smooth topology of nested flux surfaces, and chaotic field lines result when magnetic islands overlap. An analogous case occurs with 1½-dimensional Hamiltonian systems, where resonant perturbations cause singularities in the transformation to action-angle coordinates and destroy integrability. The suppression of magnetic islands is a critical issue for stellarator design, particularly for small-aspect-ratio devices. Techniques for "healing" vacuum fields and fixed-boundary plasma equilibria have been developed, but what is ultimately required is a procedure for designing stellarators such that the self-consistent plasma equilibrium currents and the coil currents combine to produce an integrable magnetic field, and such a procedure is presented here for the first time. Magnetic islands in free-boundary full-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver [A. H. Reiman and H. S. Greenside, Comput. Phys. Commun. 43, 157 (1986)], which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment [G. H. Neilson et al., Phys. Plasmas 7, 1911 (2000)].

  21. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
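For orientation, a commutator-driven DIIS loop of the kind LCIIS extends can be written down for a toy mean-field model. The "Fock" operator F(D) = H + c·diag(D) is an invented stand-in for a real electronic-structure problem, and this sketch performs conventional DIIS on the commutator error vector, not the quartic LCIIS minimization:

```python
import numpy as np

def diis_scf(H, c=0.1, k=3, m=6, tol=1e-10, max_iter=500):
    """Toy SCF with commutator-based DIIS acceleration.
    Model Fock operator: F(D) = H + c*diag(D); the density D is the
    projector onto the k lowest eigenvectors of F (aufbau filling)."""
    w, V = np.linalg.eigh(H)
    D = V[:, :k] @ V[:, :k].T                  # initial guess from the core problem
    Fs, errs = [], []
    for _ in range(max_iter):
        F = H + c * np.diag(np.diag(D))
        e = F @ D - D @ F                      # commutator error vector [F, D]
        if np.linalg.norm(e) < tol:
            return D
        Fs.append(F.copy()); errs.append(e.ravel())
        Fs, errs = Fs[-m:], errs[-m:]
        if len(errs) > 1:                      # DIIS extrapolation of F
            p = len(errs)
            E = np.array(errs)
            B = np.zeros((p + 1, p + 1))
            B[:p, :p] = E @ E.T                # overlaps of error vectors
            B[p, :p] = B[:p, p] = 1.0          # sum-to-one constraint
            rhs = np.zeros(p + 1); rhs[p] = 1.0
            try:
                coef = np.linalg.solve(B, rhs)[:p]
                F = sum(ci * Fi for ci, Fi in zip(coef, Fs))
            except np.linalg.LinAlgError:
                pass                           # singular history: keep latest F
        w, V = np.linalg.eigh(F)
        D = V[:, :k] @ V[:, :k].T              # occupy the k lowest levels
    raise RuntimeError("SCF did not converge")
```

At self-consistency the commutator vanishes, so its norm serves simultaneously as the error vector and the convergence measure; LCIIS sharpens this by minimizing the commutator of the *extrapolated* F and D directly, at the cost of a quartic minimization.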

  22. Development of the Nuclear-Electronic Orbital Approach and Applications to Ionic Liquids and Tunneling Processes

    DTIC Science & Technology

    2010-02-24

    ...electronic Schrödinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF), configuration interaction... electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self-consistency... directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency. The density matrix representation...

  23. Analytic approximation for random muffin-tin alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mills, R.; Gray, L.J.; Kaplan, T.

    1983-03-15

    The methods introduced in a previous paper under the name of "traveling-cluster approximation" (TCA) are applied, in a multiple-scattering approach, to the case of a random muffin-tin substitutional alloy. This permits the iterative part of a self-consistent calculation to be carried out entirely in terms of on-the-energy-shell scattering amplitudes. Off-shell components of the mean resolvent, needed for the calculation of spectral functions, are obtained by standard methods involving single-site scattering wave functions. The single-site TCA is just the usual coherent-potential approximation, expressed in a form particularly suited for iteration. A fixed-point theorem is proved for the general t-matrix TCA, ensuring convergence upon iteration to a unique self-consistent solution with the physically essential Herglotz properties.

  24. Convergence of an iterative procedure for large-scale static analysis of structural components

    NASA Technical Reports Server (NTRS)

    Austin, F.; Ojalvo, I. U.

    1976-01-01

    The paper proves convergence of an iterative procedure for calculating the deflections of built-up component structures which can be represented as consisting of a dominant, relatively stiff primary structure and a less stiff secondary structure, which may be composed of one or more substructures that are not connected to one another but are all connected to the primary structure. The iteration consists in estimating the deformation of the primary structure in the absence of the secondary structure on the assumption that all mechanical loads are applied directly to the primary structure. The j-th iterate primary structure deflections at the interface are imposed on the secondary structure, and the boundary loads required to produce these deflections are computed. The cycle is completed by applying the interface reaction to the primary structure and computing its updated deflections. It is shown that the mathematical condition for convergence of this procedure is that the maximum eigenvalue of the equation relating primary-structure deflection to imposed secondary-structure deflection be less than unity, which is shown to correspond with the physical requirement that the secondary structure be more flexible at the interface boundary.
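The convergence condition stated above (the largest eigenvalue of the deflection-coupling operator below unity) is the standard contraction criterion for a linear stationary iteration. A minimal numerical illustration, where the matrix `T` is a generic stand-in for the primary/secondary deflection coupling:

```python
import numpy as np

rng = np.random.default_rng(7)
T = rng.standard_normal((6, 6))
T *= 0.8 / max(abs(np.linalg.eigvals(T)))    # scale so the spectral radius is 0.8 < 1
b = rng.standard_normal(6)                   # stand-in for the applied mechanical loads

u = np.zeros(6)
for _ in range(200):
    u = T @ u + b                            # one load-transfer cycle between structures

u_exact = np.linalg.solve(np.eye(6) - T, b)  # direct solution for comparison
```

With spectral radius 0.8, the error contracts by at least that factor per cycle (asymptotically), so 200 cycles leave the iterate indistinguishable from the direct solution; a spectral radius above one would make the same loop diverge, mirroring the physical requirement that the secondary structure be the more flexible one at the interface.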

  25. Steady-State Electrodiffusion from the Nernst-Planck Equation Coupled to Local Equilibrium Monte Carlo Simulations.

    PubMed

    Boda, Dezső; Gillespie, Dirk

    2012-03-13

    We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
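The Nernst-Planck half of the scheme admits a closed-form steady-state sketch in 1-D once a potential profile is prescribed. This is a toy model, not the NP+LEMC coupling: the prescribed reduced potential `phi` replaces the self-consistent LEMC closure.

```python
import numpy as np

def np_steady_flux(phi, cL, cR, D=1.0, L=1.0, n=2000):
    """Steady-state Nernst-Planck flux J = -D (c' + c phi') of a +1 ion
    across [0, L], for a reduced potential profile phi(x) (in kT/e) and
    fixed boundary concentrations cL, cR.  Integrating (c e^phi)' = -J e^phi / D
    gives the closed form  J = D (cL e^{phi(0)} - cR e^{phi(L)}) / int_0^L e^phi dx."""
    x = np.linspace(0.0, L, n + 1)
    w = np.exp(phi(x))
    integral = (L / n) * (0.5 * w[0] + w[1:-1].sum() + 0.5 * w[-1])  # trapezoid rule
    return D * (cL * w[0] - cR * w[-1]) / integral
```

For phi = 0 this reduces to Fick's law, J = D(cL - cR)/L, and for a linear profile it reproduces the constant-field (Goldman-type) flux; in the record's method the potential and concentrations are instead iterated self-consistently against the LEMC simulations.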

  26. Validation of a coupled core-transport, pedestal-structure, current-profile and equilibrium model

    NASA Astrophysics Data System (ADS)

    Meneghini, O.

    2015-11-01

    The first workflow capable of predicting the self-consistent solution to the coupled core-transport, pedestal structure, and equilibrium problems from first-principles and its experimental tests are presented. Validation with DIII-D discharges in high confinement regimes shows that the workflow is capable of robustly predicting the kinetic profiles from on axis to the separatrix and matching the experimental measurements to within their uncertainty, with no prior knowledge of the pedestal height nor of any measurement of the temperature or pressure. Self-consistent coupling has proven to be essential to match the experimental results, and capture the non-linear physics that governs the core and pedestal solutions. In particular, clear stabilization of the pedestal peeling ballooning instabilities by the global Shafranov shift and destabilization by additional edge bootstrap current, and subsequent effect on the core plasma profiles, have been clearly observed and documented. In our model, self-consistency is achieved by iterating between the TGYRO core transport solver (with NEO and TGLF for neoclassical and turbulent flux), and the pedestal structure predicted by the EPED model. A self-consistent equilibrium is calculated by EFIT, while the ONETWO transport package evolves the current profile and calculates the particle and energy sources. The capabilities of such workflow are shown to be critical for the design of future experiments such as ITER and FNSF, which operate in a regime where the equilibrium, the pedestal, and the core transport problems are strongly coupled, and for which none of these quantities can be assumed to be known. Self-consistent core-pedestal predictions for ITER, as well as initial optimizations, will be presented. Supported by the US Department of Energy under DE-FC02-04ER54698, DE-SC0012652.

  7. Analysis of energy states in modulation doped multiquantum well heterostructures

    NASA Technical Reports Server (NTRS)

    Ji, G.; Henderson, T.; Peng, C. K.; Huang, D.; Morkoc, H.

    1990-01-01

    A precise and effective numerical procedure to model the band diagram of modulation-doped multiquantum-well heterostructures is presented. This method is based on a self-consistent iterative solution of the Schrödinger and Poisson equations. It can be applied rather easily to any arbitrary modulation-doped structure. In addition to the confined energy subbands, the unconfined states can be calculated as well. Examples of realistic device structures are given to demonstrate the capabilities of this procedure, and the numerical results are in good agreement with experiments. With the aid of this method, the transitions involving both the confined and unconfined conduction subbands in a modulation-doped AlGaAs/GaAs superlattice and in a strained-layer InGaAs/GaAs superlattice are identified. These results represent the first observation of unconfined transitions in modulation doped multiquantum well structures.
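
    The structure of such a Schrödinger-Poisson self-consistent loop can be sketched in one dimension. This is a minimal toy model in dimensionless units (well depth, box size, filling and mixing factor are all illustrative, not the paper's heterostructure parameters):

```python
import numpy as np

# Minimal 1D Schrodinger-Poisson self-consistent loop (dimensionless toy
# units; well depth, box size and subband filling are illustrative).
N, L = 200, 20.0
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
v_well = np.where(np.abs(x - L / 2) < 3.0, -1.0, 0.0)  # fixed confining well

def solve_schrodinger(v):
    """Ground state of -1/2 d^2/dx^2 + v with hard-wall boundaries."""
    h = (np.diag(1.0 / dx**2 + v)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))
    e, psi = np.linalg.eigh(h)
    phi = psi[:, 0] / np.sqrt(np.sum(psi[:, 0] ** 2) * dx)  # normalized
    return e[0], phi

def solve_poisson(rho):
    """Hartree potential from -v'' = rho with v = 0 at both walls."""
    a = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2
    return np.linalg.solve(a, rho)

v_h, alpha = np.zeros(N), 0.3  # simple linear mixing stabilizes the loop
for it in range(100):
    e0, phi = solve_schrodinger(v_well + v_h)
    rho = 0.05 * phi**2                      # one weakly filled subband (toy)
    v_new = solve_poisson(rho)
    if np.max(np.abs(v_new - v_h)) < 1e-8:
        break                                # potentials are self-consistent
    v_h = (1 - alpha) * v_h + alpha * v_new
```

    The loop alternates the two equations until the Hartree potential stops changing, which is the self-consistency criterion described in the abstract.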

  8. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, Brendan; Polizzi, Eric

    2013-03-01

    The self-consistent iterative procedure in density functional theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by the subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.

  9. Iterative Methods for the Non-LTE Transfer of Polarized Radiation: Resonance Line Polarization in One-dimensional Atmospheres

    NASA Astrophysics Data System (ADS)

    Trujillo Bueno, Javier; Manso Sainz, Rafael

    1999-05-01

    This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin by demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the ``exact'' solution corresponding to the unpolarized case. We then show how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they do not require either the construction or the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
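
    The relative speed of the three splitting schemes can be illustrated on a toy linear system. A 1D Laplacian stands in for the transfer operator, and the over-relaxation parameter is chosen near its optimum for this matrix, not the paper's problem:

```python
import numpy as np

# Jacobi vs. Gauss-Seidel vs. SOR iteration counts on a toy 1D Laplacian
# system; this illustrates only the relative convergence rates, not the
# polarized-transfer operators of the paper.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_ref = np.linalg.solve(A, b)

def iterate(method, omega=1.0, tol=1e-10, maxit=10000):
    x = np.zeros(n)
    d = np.diag(A)
    for k in range(1, maxit + 1):
        if method == "jacobi":
            x = x + (b - A @ x) / d          # simultaneous (Jacobi) update
        else:
            for i in range(n):               # in-place sweep: GS (omega=1) / SOR
                x[i] += omega * (b[i] - A[i] @ x) / d[i]
        if np.linalg.norm(x - x_ref) < tol:
            return k
    return maxit

k_jacobi = iterate("jacobi")
k_gs = iterate("sweep")                      # Gauss-Seidel
k_sor = iterate("sweep", omega=1.74)         # near-optimal omega for this matrix
```

    Gauss-Seidel roughly halves the Jacobi count (its spectral radius is the square of Jacobi's for this matrix), and well-tuned SOR gains another order of magnitude, mirroring the speedups reported above.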

  10. Unsupervised iterative detection of land mines in highly cluttered environments.

    PubMed

    Batman, Sinan; Goutsias, John

    2003-01-01

    An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.

  11. Iterative method of construction of a bifurcation diagram of autorotation motions for a system with one degree of freedom

    NASA Astrophysics Data System (ADS)

    Klimina, L. A.

    2018-05-01

    A modification of the Picard approach is suggested, targeted at the construction of a bifurcation diagram of 2π-periodic motions of a mechanical system with a cylindrical phase space. Each iterative step is based on principles of averaging and energy balance similar to the Poincaré-Pontryagin approach. If the iterative procedure converges, it provides the periodic trajectory of the system as a function of the bifurcation parameter of the model. The method is applied to describe self-sustained rotations in the model of an aerodynamic pendulum.

  12. Prefixation of Simplex Pairs in Czech: An Analysis of Spatial Semantics, Distributive Verbs, and Procedural Meanings

    ERIC Educational Resources Information Center

    Hilchey, Christian Thomas

    2014-01-01

    This dissertation examines prefixation of simplex pairs. A simplex pair consists of an iterative imperfective and a semelfactive perfective verb. When prefixed, both of these verbs are perfective. The prefixed forms derived from semelfactives are labeled single act verbs, while the prefixed forms derived from iterative imperfective simplex verbs…

  13. Complex wet-environments in electronic-structure calculations

    NASA Astrophysics Data System (ADS)

    Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, including the complex electrostatic screening coming from the solvent. In the present work we present a solver that handles both the Generalized Poisson and the Poisson-Boltzmann equation. A preconditioned conjugate gradient (PCG) method has been implemented for the Generalized Poisson equation and for the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in some ten iterations. The full Poisson-Boltzmann problem is solved by a self-consistent procedure. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow for different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. We present test calculations on large proteins to demonstrate efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG acknowledges also support from the EXTMOS EU project.

  14. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector-type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  15. Convergence of quasiparticle self-consistent GW calculations of transition metal monoxides

    NASA Astrophysics Data System (ADS)

    Das, Suvadip; Coulter, John E.; Manousakis, Efstratios

    2015-03-01

    We have investigated the electronic structure of the transition metal monoxides MnO, CoO, and NiO in their undistorted rock-salt structure within a fully iterated quasiparticle self-consistent GW (QPscGW) scheme. We have studied the convergence of the QPscGW method, i.e., how the quasiparticle energy eigenvalues and wavefunctions converge as a function of the QPscGW iterations, and compared the converged outputs obtained from different starting wavefunctions. We found that the convergence is slow and that a one-shot G0W0 calculation does not significantly improve the initial eigenvalues and states. In some cases the ``path'' to convergence may go through energy band reordering which cannot be captured by the simple initial unperturbed Hamiltonian. When a fully iterated solution is reached, the converged density of states, band-gaps and magnetic moments of these oxides are found to be only weakly dependent on the choice of the starting wavefunctions and in reasonable agreement with the experiment. National High Magnetic Field Laboratory.

  16. A multi-scale homogenization model for fine-grained porous viscoplastic polycrystals: I - Finite-strain theory

    NASA Astrophysics Data System (ADS)

    Song, Dawei; Ponte Castañeda, P.

    2018-06-01

    We make use of the recently developed iterated second-order homogenization method to obtain finite-strain constitutive models for the macroscopic response of porous polycrystals consisting of large pores randomly distributed in a fine-grained polycrystalline matrix. The porous polycrystal is modeled as a three-scale composite, where the grains are described by single-crystal viscoplasticity and the pores are assumed to be large compared to the grain size. The method makes use of a linear comparison composite (LCC) with the same substructure as the actual nonlinear composite, but whose local properties are chosen optimally via a suitably designed variational statement. In turn, the effective properties of the resulting three-scale LCC are determined by means of a sequential homogenization procedure, utilizing the self-consistent estimates for the effective behavior of the polycrystalline matrix, and the Willis estimates for the effective behavior of the porous composite. The iterated homogenization procedure allows for a more accurate characterization of the properties of the matrix by means of a finer "discretization" of the properties of the LCC to obtain improved estimates, especially at low porosities, high nonlinearities and high triaxialities. In addition, consistent homogenization estimates for the average strain rate and spin fields in the pores and grains are used to develop evolution laws for the substructural variables, including the porosity, pore shape and orientation, as well as the "crystallographic" and "morphological" textures of the underlying matrix. In Part II of this work, which has appeared as Song and Ponte Castañeda (2018b), the model is used to generate estimates for both the instantaneous effective response and the evolution of the microstructure for porous FCC and HCP polycrystals under various loading conditions.

  17. Linear-scaling implementation of molecular response theory in self-consistent field electronic-structure theory.

    PubMed

    Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł

    2007-04-21

    A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.

  18. Chemical Compositions of Kinematically Selected Outer Halo Stars

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Ishigaki, Miho; Aoki, Wako; Zhao, Gang; Chiba, Masashi

    2009-12-01

    Chemical abundances of 26 metal-poor dwarfs and giants are determined from high-resolution and high signal-to-noise ratio spectra obtained with the Subaru/High Dispersion Spectrograph. The sample is selected so that most of the objects have outer-halo kinematics. Self-consistent atmospheric parameters were determined by an iterative procedure based on spectroscopic analysis. Abundances of 13 elements, including α-elements (Mg, Si, Ca, Ti), odd-Z light elements (Na, Sc), iron-peak elements (Cr, Mn, Fe, Ni, Zn), and neutron-capture elements (Y, Ba), are determined by two independent data reduction and local thermodynamic equilibrium analysis procedures, confirming the consistency of the stellar parameters and abundance results. We find a decreasing trend of [α/Fe] with increasing [Fe/H] for the range -3.5 < [Fe/H] < -1, as found by Stephens & Boesgaard. [Zn/Fe] values of most objects in our sample are slightly lower than the bulk of halo stars previously studied. These results are discussed as possible chemical properties of the outer halo in the Galaxy. Based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.

  19. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space

    PubMed Central

    Lustig, Michael; Pauly, John M.

    2010-01-01

    A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection over convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790

  20. Communication: Hilbert-space partitioning of the molecular one-electron density matrix with orthogonal projectors

    NASA Astrophysics Data System (ADS)

    Vanfleteren, Diederik; Van Neck, Dimitri; Bultinck, Patrick; Ayers, Paul W.; Waroquier, Michel

    2010-12-01

    A double-atom partitioning of the molecular one-electron density matrix is used to describe atoms and bonds. All calculations are performed in Hilbert space. The concept of atomic weight functions (familiar from Hirshfeld analysis of the electron density) is extended to atomic weight matrices. These are constructed to be orthogonal projection operators on atomic subspaces, which has significant advantages in the interpretation of the bond contributions. In close analogy to the iterative Hirshfeld procedure, self-consistency is built in at the level of atomic charges and occupancies. The method is applied to a test set of about 67 molecules, representing various types of chemical binding. A close correlation is observed between the atomic charges and the Hirshfeld-I atomic charges.

  1. Numerical Solution of the Gyrokinetic Poisson Equation in TEMPEST

    NASA Astrophysics Data System (ADS)

    Dorr, Milo; Cohen, Bruce; Cohen, Ronald; Dimits, Andris; Hittinger, Jeffrey; Kerbel, Gary; Nevins, William; Rognlien, Thomas; Umansky, Maxim; Xiong, Andrew; Xu, Xueqiao

    2006-10-01

    The gyrokinetic Poisson (GKP) model in the TEMPEST continuum gyrokinetic edge plasma code yields the electrostatic potential due to the charge density of electrons and an arbitrary number of ion species, including the effects of gyroaveraging in the limit kρ ≪ 1. The TEMPEST equations are integrated as a differential algebraic system involving a nonlinear system solve via Newton-Krylov iteration. The GKP preconditioner block is inverted using a multigrid-preconditioned conjugate gradient (CG) algorithm. Electrons are treated as kinetic or adiabatic. The Boltzmann relation in the adiabatic option employs flux surface averaging to maintain neutrality within field lines and is solved self-consistently with the GKP equation. A decomposition procedure circumvents the near singularity of the GKP Jacobian block that otherwise degrades CG convergence.

  2. Preconditioned conjugate residual methods for the solution of spectral equations

    NASA Technical Reports Server (NTRS)

    Wong, Y. S.; Zang, T. A.; Hussaini, M. Y.

    1986-01-01

    Conjugate residual methods for the solution of spectral equations are described. An inexact finite-difference operator is introduced as a preconditioner in the iterative procedures. Application of these techniques is limited to problems for which the symmetric part of the coefficient matrix is positive definite. Although the spectral equation is a very ill-conditioned and full matrix problem, the computational effort of the present iterative methods for solving such a system is comparable to that for the sparse matrix equations obtained from the application of either finite-difference or finite-element methods to the same problems. Numerical experiments are shown for a self-adjoint elliptic partial differential equation with Dirichlet boundary conditions, and comparison with other solution procedures for spectral equations is presented.
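
    The payoff of a cheap low-order preconditioner can be sketched with a preconditioned conjugate-gradient solver. CG stands in here for the conjugate-residual variant, and the matrices are synthetic: a finite-difference Laplacian preconditions a denser perturbation of itself:

```python
import numpy as np

# Preconditioned CG sketch: a finite-difference operator preconditions a
# denser matrix. CG stands in for the conjugate-residual method of the
# paper; the operators are synthetic test matrices.
n = 100
lap = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) * n**2  # FD Laplacian
u = np.ones(n) / np.sqrt(n)
A = lap + 50.0 * np.outer(u, u)      # "full" operator: FD part + dense coupling
rng = np.random.default_rng(2)
b = rng.standard_normal(n)

def pcg(A, b, msolve, tol=1e-8, maxit=1000):
    """Standard preconditioned conjugate gradients; msolve applies M^-1."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = msolve(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = msolve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

x_plain, k_plain = pcg(A, b, lambda r: r)                      # no preconditioner
x_prec, k_prec = pcg(A, b, lambda r: np.linalg.solve(lap, r))  # FD preconditioner
```

    Because the preconditioned operator differs from the identity only by a low-rank coupling, the preconditioned solve converges in a handful of iterations, while the plain solve needs on the order of the matrix dimension.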

  3. Efficient mixing scheme for self-consistent all-electron charge density

    NASA Astrophysics Data System (ADS)

    Shishidou, Tatsuya; Weinert, Michael

    2015-03-01

    In standard ab initio density-functional theory calculations, the charge density ρ is gradually updated using the ``input'' and ``output'' densities of the current and previous iteration steps. To accelerate the convergence, Pulay mixing has been widely used with great success. It expresses an ``optimal'' input density ρopt and its ``residual'' Ropt by a linear combination of the densities of the iteration sequences. In large-scale metallic systems, however, the long range nature of Coulomb interaction often causes the ``charge sloshing'' phenomenon and significantly impacts the convergence. Two treatments, represented in reciprocal space, are known to suppress the sloshing: (i) the inverse Kerker metric for Pulay optimization and (ii) Kerker-type preconditioning in mixing Ropt. In all-electron methods, where the charge density does not have a converging Fourier representation, treatments equivalent or similar to (i) and (ii) have not been described so far. In this work, we show that, by going through the calculation of Hartree potential, one can accomplish the procedures (i) and (ii) without entering the reciprocal space. Test calculations are done with a FLAPW method.

  4. Improved Savitzky-Golay-method-based fluorescence subtraction algorithm for rapid recovery of Raman spectra.

    PubMed

    Chen, Kun; Zhang, Hongyuan; Wei, Haoyun; Li, Yan

    2014-08-20

    In this paper, we propose an improved subtraction algorithm for rapid recovery of Raman spectra that can substantially reduce the computation time. This algorithm is based on an improved Savitzky-Golay (SG) iterative smoothing method, which involves two key novel approaches: (a) the use of the Gauss-Seidel method and (b) the introduction of a relaxation factor into the iterative procedure. By applying a novel successive-relaxation (SG-SR) iterative method to the relaxation factor, additional improvement in the convergence speed over the standard Savitzky-Golay procedure is realized. The proposed improved algorithm (the RIA-SG-SR algorithm), which uses SG-SR-based iteration instead of Savitzky-Golay iteration, has been optimized and validated with a mathematically simulated Raman spectrum, as well as experimentally measured Raman spectra from non-biological and biological samples. The method results in a significant reduction in computing cost while yielding consistent rejection of fluorescence and noise for spectra with low signal-to-fluorescence ratios and varied baselines. In the simulation, RIA-SG-SR achieved 1 order of magnitude improvement in iteration number and 2 orders of magnitude improvement in computation time compared with the range-independent background-subtraction algorithm (RIA). Furthermore, the processing time for an experimentally measured raw Raman spectrum from skin tissue decreased from 6.72 to 0.094 s. In general, the processing of the SG-SR method can be conducted within dozens of milliseconds, which can provide a real-time procedure in practical situations.
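
    The basic iterative Savitzky-Golay background-removal idea that the RIA-SG-SR method accelerates can be sketched as repeated smooth-and-clip passes. The spectrum, window size and iteration count below are synthetic and illustrative:

```python
import numpy as np
from scipy.signal import savgol_filter

# Plain iterative SG fluorescence removal (no SR acceleration): smooth,
# then clip the estimate down to the smoothed curve, so sharp Raman peaks
# erode while the broad background survives. All parameters synthetic.
x = np.linspace(0.0, 1000.0, 1000)
baseline = 50.0 * np.exp(-x / 400.0)                         # broad fluorescence
peaks = (20.0 * np.exp(-0.5 * ((x - 300) / 4.0) ** 2)
         + 15.0 * np.exp(-0.5 * ((x - 600) / 5.0) ** 2))     # narrow Raman lines
spectrum = baseline + peaks

est = spectrum.copy()
for _ in range(200):                     # fixed iteration count
    smooth = savgol_filter(est, window_length=101, polyorder=3)
    est = np.minimum(est, smooth)        # clip peaks toward the background

raman = spectrum - est                   # background-subtracted spectrum
```

    Narrow peaks are flattened by each smoothing pass and clipped away, while the slowly varying fluorescence is nearly invariant under the filter; the RIA-SG-SR contribution described above is to reach this fixed point in far fewer passes.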

  5. A self-adapting system for the automated detection of inter-ictal epileptiform discharges.

    PubMed

    Lodder, Shaun S; van Putten, Michel J A M

    2014-01-01

    Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer-assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form "IED nominations", each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update certainty values of the remaining nominations, and another iteration is performed where the ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations respectively. Reviewing fifteen iterations for the 20-30 min recordings took approximately 5 min. The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents.

  6. On the dual equivalence of the self-dual and topologically massive B∧F models coupled to dynamical fermionic matter

    NASA Astrophysics Data System (ADS)

    Menezes, R.; Nascimento, J. R. S.; Ribeiro, R. F.; Wotzasek, C.

    2002-06-01

    We study the equivalence between the B∧F self-dual (SDB∧F) and the B∧F topologically massive (TMB∧F) models, including the coupling to dynamical, U(1)-charged fermionic matter. This is done through an iterative procedure of gauge embedding that produces the dual mapping. In the interacting cases, the minimal coupling adopted for both vector and tensor fields in the self-dual representation is transformed into a non-minimal, magnetic-like coupling in the topologically massive representation, but with the currents swapped. It is known that to establish this equivalence a current-current interaction term is needed to render the matter sector unchanged. We show that both terms arise naturally from the embedding procedure.

  7. On the wing behaviour of the overtones of self-localized modes

    NASA Astrophysics Data System (ADS)

    Dusi, R.; Wagner, M.

    1998-08-01

    In this paper the solutions for self-localized modes in a nonlinear chain are investigated. We present a convergent iteration procedure, which is based on analytical information about the wings and which takes into account higher overtones of the solitonic oscillations. The accuracy is controlled step by step by means of a Gaussian error analysis. Our numerical procedure allows for highly accurate solutions, in all anharmonicity regimes, and beyond the rotating-wave approximation (RWA). It is found that the overtone wings change their analytical behaviour at certain critical values of the energy of the self-localized mode: there is a turnover in the exponent of descent. The results are shown for a Fermi-Pasta-Ulam (FPU) chain with quartic anharmonicity.

  8. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
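
    The step-size-1 special case of the procedure above is the familiar EM fixed-point iteration for a normal mixture. A toy 1D two-component version on synthetic data (all initial values illustrative):

```python
import numpy as np

# Step-size-1 successive-approximation (EM) iteration for a two-component
# 1D normal mixture on synthetic data; initial values are illustrative.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.5, 500)])

w = np.array([0.5, 0.5])          # mixing proportions
mu = np.array([-1.0, 1.0])        # component means (rough initial guess)
sig = np.array([1.0, 1.0])        # component standard deviations

for _ in range(200):
    # E-step: posterior responsibilities under the current iterate
    pdf = (w * np.exp(-0.5 * ((data[:, None] - mu) / sig) ** 2)
           / (sig * np.sqrt(2 * np.pi)))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: solve the likelihood equations (the step-size-1 update)
    nk = resp.sum(axis=0)
    w = nk / len(data)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sig = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
```

    Each sweep is one application of the successive-approximations map derived from the likelihood equations; the paper's generalization replaces the implicit unit step with a tunable step-size between 0 and 2.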

  9. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  10. Iterating between lessons on concepts and procedures can improve mathematics knowledge.

    PubMed

    Rittle-Johnson, Bethany; Koedinger, Kenneth

    2009-09-01

    Knowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning. The purpose of the current study was to evaluate the instructional benefits of an iterative lesson sequence compared to a concepts-before-procedures sequence for students learning decimal place-value concepts and arithmetic procedures. In two classroom experiments, sixth-grade students from two schools participated (N=77 and 26). Students completed six decimal lessons on an intelligent tutoring system. In the iterative condition, lessons cycled between concept and procedure lessons. In the concepts-first condition, all concept lessons were presented before introducing the procedure lessons. In both experiments, students in the iterative condition gained more knowledge of arithmetic procedures, including the ability to transfer the procedures to problems with novel features. Knowledge of concepts was fairly comparable across conditions. Finally, pre-test knowledge of one type predicted gains in knowledge of the other type across experiments. An iterative sequencing of lessons seems to facilitate learning and transfer, particularly of mathematical procedures. The findings support an iterative perspective for the development of knowledge of concepts and procedures.

  11. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Ee, E-mail: eehan@math.uni-bremen.de; Hantke, Maren, E-mail: maren.hantke@ovgu.de; Müller, Siegfried, E-mail: mueller@igpm.rwth-aachen.de

    We consider a thermodynamically consistent multi-component model in multiple dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from a lack of efficiency and robustness, resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further, we introduce a novel iterative method to treat the mass transfer for a three-component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three-component mixture, providing a unique and admissible equilibrium state.

  12. Enhancement of event related potentials by iterative restoration algorithms

    NASA Astrophysics Data System (ADS)

    Pomalaza-Raez, Carlos A.; McGillem, Clare D.

    1986-12-01

    An iterative procedure for the restoration of event related potentials (ERPs) is proposed and implemented. The method makes use of assumed or measured statistical information about latency variations in the individual ERP components. The signal model used for the restoration algorithm consists of a time-varying linear distortion and a positivity/negativity constraint. Additional preprocessing in the form of low-pass filtering is needed in order to mitigate the effects of additive noise. Numerical results obtained with real data clearly show the presence of enhanced and regenerated components in the restored ERPs. The procedure is easy to implement, which makes it attractive compared with other proposed techniques for the restoration of ERP signals.

  13. Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.

    PubMed

    Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo

    2013-06-20

    A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by a dynamic PER threshold coupling intensity (TCI) and a nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and the original coupling intensity distribution, the TCI and ISL can be made self-adaptive to determine the contributing coupling points inside polarizing devices. The distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers are tested, and PERs are obtained after merely 3-5 iterations using the proposed method. Comparison experiments with a commercial Thorlabs instrument are also conducted, and the results show high consistency. In addition, an optimum preset PER calculation accuracy of 0.05 dB is obtained through repeated experiments.

  14. Self consistent MHD modeling of the solar wind from coronal holes with distinct geometries

    NASA Technical Reports Server (NTRS)

    Stewart, G. A.; Bravo, S.

    1995-01-01

    Utilizing an iterative scheme, a self-consistent axisymmetric MHD model for the solar wind has been developed. We use this model to evaluate the properties of the solar wind issuing from the open polar coronal hole regions of the Sun, during solar minimum. We explore the variation of solar wind parameters across the extent of the hole and we investigate how these variations are affected by the geometry of the hole and the strength of the field at the coronal base.

  15. Indirect (source-free) integration method. II. Self-force consistent radial fall

    NASA Astrophysics Data System (ADS)

    Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane

    2016-12-01

    We apply our method of indirect integration, described in Part I, at fourth order, to the radial fall affected by the self-force (SF). The Mode-Sum regularization is performed in the Regge-Wheeler gauge using the equivalence with the harmonic gauge for this orbit. We consider also the motion subjected to a self-consistent and iterative correction determined by the SF through osculating stretches of geodesics. The convergence of the results confirms the validity of the integration method. This work complements and justifies the analysis and the results appeared in [Int. J. Geom. Meth. Mod. Phys. 11 (2014) 1450090].

  16. The CLASSY clustering algorithm: Description, evaluation, and comparison with the iterative self-organizing clustering system (ISOCLS). [used for LACIE data]

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Malek, H.

    1978-01-01

    A clustering method, CLASSY, was developed which alternates maximum-likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model which is the basis for CLASSY and the actual operation of the algorithm are described. Data comparing the performance of CLASSY and ISOCLS on simulated and actual LACIE data are presented.

  17. A pseudoinverse deformation vector field generator and its applications

    PubMed Central

    Yan, C.; Zhong, H.; Murphy, M.; Weiss, E.; Siebers, J. V.

    2010-01-01

    Purpose: To present, implement, and test a self-consistent pseudoinverse displacement vector field (PIDVF) generator, which preserves the location of information mapped back-and-forth between image sets. Methods: The algorithm is an iterative scheme based on nearest neighbor interpolation and a subsequent iterative search. Performance of the algorithm is benchmarked using a lung 4DCT data set with six CT images from different breathing phases and eight CT images of a single prostate patient acquired on different days. A diffeomorphic deformable image registration is used to validate our PIDVFs. Additionally, the PIDVF is used to measure the self-consistency of two nondiffeomorphic algorithms which do not use a self-consistency constraint: the ITK Demons algorithm for the lung patient images and an in-house B-Spline algorithm for the prostate patient images. Both Demons and B-Spline have been QAed through contour comparison. Self-consistency is determined by using a DIR to generate a displacement vector field (DVF) between reference image R and study image S (DVFR–S). The same DIR is used to generate DVFS–R. Additionally, our PIDVF generator is used to create PIDVFS–R. Back-and-forth mapping of a set of points (used as surrogates of contours) using DVFR–S and DVFS–R is compared to back-and-forth mapping performed with DVFR–S and PIDVFS–R. The Euclidean distances between the original unmapped points and the mapped points are used as a self-consistency measure. Results: Test results demonstrate that the consistency error observed in back-and-forth mappings can be reduced two to nine times in point mapping and 1.5 to three times in dose mapping when the PIDVF is used in place of the B-Spline algorithm. These self-consistency improvements are not affected by exchanging R and S. It is also demonstrated that differences between DVFS–R and PIDVFS–R can be used as a criterion to check the quality of the DVF.
Conclusions: Use of DVF and its PIDVF will improve the self-consistency of points, contour, and dose mappings in image guided adaptive therapy. PMID:20384247
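
    The back-and-forth consistency idea can be illustrated by building an approximate inverse of a displacement field with a simple fixed-point iteration. The 1-D sketch below uses a toy smooth field and plain interpolation (a stand-in for the paper's nearest-neighbor-plus-search scheme, not its actual algorithm); the inverse field restores points that were mapped forward by the DVF:

```python
import numpy as np

# Forward DVF on a 1-D grid: maps reference x to study position x + u(x).
x = np.linspace(0.0, 1.0, 201)
u = 0.05 * np.sin(2 * np.pi * x)           # smooth toy displacement field

def inverse_dvf(u, x, iters=50):
    """Fixed-point iteration v(x) = -u(x + v(x)) for the inverse field
    (a simple stand-in for the nearest-neighbor-plus-search scheme)."""
    v = -u.copy()                           # initial guess
    for _ in range(iters):
        v = -np.interp(x + v, x, u)         # evaluate u at the displaced points
    return v

v = inverse_dvf(u, x)
# Self-consistency check: mapping forward with u, then back with v, is ~identity.
y = x + u                                   # forward-mapped points
roundtrip = y + np.interp(y, x, v)          # apply the pseudoinverse field
err = np.max(np.abs(roundtrip - x))
print(err)                                  # small residual, limited by interpolation
```

The iteration converges because the toy displacement gradient is well below 1; the residual left after the round trip is exactly the self-consistency measure the abstract describes.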

  18. Control software for two dimensional airfoil tests using a self-streamlining flexible walled transonic test section

    NASA Technical Reports Server (NTRS)

    Wolf, S. W. D.; Goodyer, M. J.

    1982-01-01

    Operation of the Transonic Self-Streamlining Wind Tunnel (TSWT) involved on-line data acquisition with automatic wall adjustment. A tunnel run consisted of streamlining the walls from known starting contours in iterative steps and acquiring model data. Each run performed what is described as a streamlining cycle. The associated software is presented.

  19. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments.

    PubMed

    Fisicaro, G; Genovese, L; Andreussi, O; Marzari, N; Goedecker, S

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
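
    The preconditioned conjugate-gradient core of such a solver can be illustrated on a 1-D discrete Poisson problem. The sketch below (standard 3-point stencil with a plain Jacobi preconditioner, not the BigDFT implementation) recovers the analytic solution of -u'' = 1 with homogeneous Dirichlet boundaries:

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2       # diagonal of the 3-point Laplacian
f = np.ones(n)                       # right-hand side

def apply_A(u):
    """Matrix-free application of the 1-D discrete Laplacian."""
    Au = main * u
    Au[:-1] -= u[1:] / h**2
    Au[1:] -= u[:-1] / h**2
    return Au

def pcg(b, tol=1e-8, maxit=500):
    """Preconditioned conjugate gradients with a Jacobi preconditioner."""
    u = np.zeros_like(b)
    r = b - apply_A(u)
    z = r / main                     # preconditioner solve M z = r
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        u += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return u, it
        z = r / main
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return u, maxit

u, iters = pcg(f)
# Exact solution of -u'' = 1 on (0,1) with u(0)=u(1)=0 is x(1-x)/2.
xg = np.linspace(h, 1 - h, n)
err = np.max(np.abs(u - 0.5 * xg * (1 - xg)))
print(iters, err)
```

For this stencil the discrete solution coincides with the analytic one at the nodes, so the remaining error reflects only the solver tolerance.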

  20. A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S.; Genovese, L.

    2016-01-07

    The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.

  1. Fourier transform-based scattering-rate method for self-consistent simulations of carrier transport in semiconductor heterostructures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schrottke, L., E-mail: lutz@pdi-berlin.de; Lü, X.; Grahn, H. T.

    We present a self-consistent model for carrier transport in periodic semiconductor heterostructures completely formulated in the Fourier domain. In addition to the Hamiltonian for the layer system, all expressions for the scattering rates, the applied electric field, and the carrier distribution are treated in reciprocal space. In particular, for slowly converging cases of the self-consistent solution of the Schrödinger and Poisson equations, numerous transformations between real and reciprocal space during the iterations can be avoided by using the presented method, which results in a significant reduction of computation time. Therefore, it is a promising tool for the simulation and efficient design of complex heterostructures such as terahertz quantum-cascade lasers.

  2. Self-consistent field for fragmented quantum mechanical model of large molecular systems.

    PubMed

    Jin, Yingdi; Su, Neil Qiang; Xu, Xin; Hu, Hao

    2016-01-30

    Fragment-based linear scaling quantum chemistry methods are a promising tool for the accurate simulation of chemical and biomolecular systems. Because of the coupled inter-fragment electrostatic interactions, a dual-layer iterative scheme is often employed to compute the fragment electronic structure and the total energy. In the dual-layer scheme, the self-consistent field (SCF) of the electronic structure of a fragment must be solved first, followed by the updating of the inter-fragment electrostatic interactions. The two steps are carried out sequentially and repeated; as a result, a significant total number of fragment SCF iterations is required to converge the total energy, which becomes the computational bottleneck in many fragment quantum chemistry methods. To reduce the number of fragment SCF iterations and speed up the convergence of the total energy, we develop here a new SCF scheme in which the inter-fragment interactions can be updated concurrently without first converging the fragment electronic structure. By constructing the global, block-wise Fock matrix and density matrix, we prove that the commutation of the two global matrices guarantees the commutation of the corresponding matrices in each fragment. Therefore, many highly efficient numerical techniques, such as the direct inversion of the iterative subspace (DIIS) method, can be employed to converge the electronic structure of all fragments simultaneously, significantly reducing the computational cost. Numerical examples for water clusters of different sizes suggest that the method should be very useful in improving the scalability of fragment quantum chemistry methods. © 2015 Wiley Periodicals, Inc.
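
    The role of DIIS-type acceleration in such an SCF scheme can be sketched on a toy linear self-consistency problem, x = Ax + b. The code below is a generic Pulay/DIIS extrapolation applied to a hypothetical symmetric contraction, not the authors' fragment method; it shows the history-based extrapolation that the global commutation property makes applicable to all fragments at once:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy self-consistency problem: fixed point of g(x) = A x + b, with A symmetric
# and scaled so that the plain fixed-point iteration converges only slowly.
M = rng.standard_normal((10, 10))
A = 0.9 * (M + M.T) / np.linalg.norm(M + M.T, 2)
b = rng.standard_normal(10)
x_star = np.linalg.solve(np.eye(10) - A, b)   # reference solution

def diis_solve(g, x0, m=15, iters=25, tol=1e-10):
    """DIIS (Pulay) extrapolation over a history of fixed-point iterates."""
    xs, es, x = [], [], x0
    for _ in range(iters):
        gx = g(x)
        xs.append(gx); es.append(gx - x)      # iterate and its residual
        xs, es = xs[-m:], es[-m:]             # keep a finite history
        n = len(es)
        # Minimize ||sum_i c_i e_i|| subject to sum_i c_i = 1 (Lagrange system).
        B = np.zeros((n + 1, n + 1))
        B[:n, :n] = [[ei @ ej for ej in es] for ei in es]
        B[:n, n] = B[n, :n] = 1.0
        rhs = np.zeros(n + 1); rhs[n] = 1.0
        c = np.linalg.lstsq(B, rhs, rcond=None)[0][:n]
        x = sum(ci * xi for ci, xi in zip(c, xs))
        if np.linalg.norm(g(x) - x) < tol:
            break
    return x

x = diis_solve(lambda v: A @ v + b, np.zeros(10))
print(np.linalg.norm(x - x_star))   # far below what 25 plain iterations achieve
```

On a linear problem the DIIS combination reaches the fixed point in roughly as many steps as the problem dimension, whereas the plain iteration would contract by only a factor of 0.9 per step.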

  3. Numerical methods for solving moment equations in kinetic theory of neuronal network dynamics

    NASA Astrophysics Data System (ADS)

    Rangan, Aaditya V.; Cai, David; Tao, Louis

    2007-02-01

    Recently developed kinetic theory and related closures for neuronal network dynamics have been demonstrated to be a powerful theoretical framework for investigating coarse-grained dynamical properties of neuronal networks. The moment equations arising from the kinetic theory are a system of (1 + 1)-dimensional nonlinear partial differential equations (PDE) on a bounded domain with nonlinear boundary conditions. The PDEs themselves are self-consistently specified by parameters which are functions of the boundary values of the solution. The moment equations can be stiff in space and time. Numerical methods are presented here for efficiently and accurately solving these moment equations. The essential ingredients in our numerical methods include: (i) the system is discretized in time with an implicit Euler method within a spectral deferred correction framework, therefore, the PDEs of the kinetic theory are reduced to a sequence, in time, of boundary value problems (BVPs) with nonlinear boundary conditions; (ii) a set of auxiliary parameters is introduced to recast the original BVP with nonlinear boundary conditions as BVPs with linear boundary conditions - with additional algebraic constraints on the auxiliary parameters; (iii) a careful combination of two Newton's iterates for the nonlinear BVP with linear boundary condition, interlaced with a Newton's iterate for solving the associated algebraic constraints is constructed to achieve quadratic convergence for obtaining the solutions with self-consistent parameters. It is shown that a simple fixed-point iteration can only achieve a linear convergence for the self-consistent parameters. The practicability and efficiency of our numerical methods for solving the moment equations of the kinetic theory are illustrated with numerical examples. 
It is further demonstrated that the moment equations derived from the kinetic theory of neuronal network dynamics can very well capture the coarse-grained dynamical properties of integrate-and-fire neuronal networks.
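
    The contrast drawn above between fixed-point and Newton iterations for self-consistent parameters can be seen on the scalar toy problem x = cos(x), where the fixed-point map converges linearly and Newton's method converges quadratically:

```python
import math

def fixed_point(x=1.0, tol=1e-12):
    """Plain self-consistent iteration x <- cos(x): linear convergence."""
    n = 0
    while abs(math.cos(x) - x) > tol:
        x = math.cos(x)
        n += 1
    return x, n

def newton(x=1.0, tol=1e-12):
    """Newton iteration on f(x) = x - cos(x): quadratic convergence."""
    n = 0
    while abs(math.cos(x) - x) > tol:
        x = x - (x - math.cos(x)) / (1.0 + math.sin(x))
        n += 1
    return x, n

xf, nf = fixed_point()
xn, nn = newton()
print(nf, nn)   # the fixed-point count dwarfs the Newton count
```

Both converge to the same self-consistent value, but the iteration counts differ by an order of magnitude, mirroring the linear-versus-quadratic distinction made in the abstract.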

  4. Deblurring in digital tomosynthesis by iterative self-layer subtraction

    NASA Astrophysics Data System (ADS)

    Youn, Hanbean; Kim, Jee Young; Jang, SunYoung; Cho, Min Kook; Cho, Seungryong; Kim, Ho Kyung

    2010-04-01

    Recent developments in large-area flat-panel detectors have led to renewed interest in tomosynthesis for multiplanar x-ray imaging. However, the typical shift-and-add (SAA) or backprojection reconstruction method suffers from a lack of sharpness in the reconstructed images because of blur artifacts, i.e., the superposition of objects that lie in out-of-focus planes. In this study, we have devised an intuitive, simple method to reduce the blur artifact based on an iterative approach. This method repeats a forward and backward projection procedure to determine the blur artifact affecting the plane of interest (POI), and then subtracts it from the POI. The proposed method does not include any Fourier-domain operations, hence excluding Fourier-domain-originated artifacts. We describe the concept of self-layer subtractive tomosynthesis and demonstrate its performance with numerical simulation and experiments. A comparative analysis with conventional methods, such as the SAA and filtered backprojection methods, is also presented.
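
    The forward/backward subtraction loop can be sketched in 1-D with circular shifts. The toy below (two point-like layers and an illustrative shift set, not the authors' implementation) shows the out-of-plane blur left by shift-and-add being iteratively estimated from the other layer and removed:

```python
import numpy as np

N, shifts = 64, range(-3, 4)
# Out-of-plane blur: average over the parallax shifts (circular, for simplicity).
B = lambda f: np.mean([np.roll(f, -2 * s) for s in shifts], axis=0)

# Two-layer phantom: a point feature in each plane.
f1 = np.zeros(N); f1[20] = 1.0
f2 = np.zeros(N); f2[40] = 1.0

# Shift-and-add (SAA) reconstructions: in-focus layer plus blurred other layer.
r1, r2 = f1 + B(f2), f2 + B(f1)

# Iterative self-layer subtraction: estimate the cross-layer blur and remove it.
f1e, f2e = r1.copy(), np.zeros(N)
for _ in range(30):
    f2e = r2 - B(f1e)      # blur affecting layer 2, estimated from layer 1
    f1e = r1 - B(f2e)      # and vice versa, subtracted from the POI

err_saa = np.linalg.norm(r1 - f1)     # blur left by plain SAA
err_iter = np.linalg.norm(f1e - f1)   # after iterative subtraction
print(err_saa, err_iter)
```

The residual error that remains sits in the few spatial frequencies the shift set cannot separate; everywhere else the cross-layer blur is driven out geometrically, all without Fourier-domain operations.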

  5. Extended Lagrangian Excited State Molecular Dynamics

    DOE PAGES

    Bjorgaard, Josiah August; Sheppard, Daniel Glen; Tretiak, Sergei; ...

    2018-01-09

    In this work, an extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born–Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. In conclusion, the XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree–Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).

  6. Extended Lagrangian Excited State Molecular Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bjorgaard, Josiah August; Sheppard, Daniel Glen; Tretiak, Sergei

    In this work, an extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born–Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. In conclusion, the XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree–Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).

  7. Extended Lagrangian Excited State Molecular Dynamics.

    PubMed

    Bjorgaard, J A; Sheppard, D; Tretiak, S; Niklasson, A M N

    2018-02-13

    An extended Lagrangian framework for excited state molecular dynamics (XL-ESMD) using time-dependent self-consistent field theory is proposed. The formulation is a generalization of the extended Lagrangian formulations for ground state Born-Oppenheimer molecular dynamics [Phys. Rev. Lett. 2008 100, 123004]. The theory is implemented, demonstrated, and evaluated using a time-dependent semiempirical model, though it should be generally applicable to ab initio theory. The simulations show enhanced energy stability and a significantly reduced computational cost associated with the iterative solutions of both the ground state and the electronically excited states. Relaxed convergence criteria can therefore be used both for the self-consistent ground state optimization and for the iterative subspace diagonalization of the random phase approximation matrix used to calculate the excited state transitions. The XL-ESMD approach is expected to enable numerically efficient excited state molecular dynamics for such methods as time-dependent Hartree-Fock (TD-HF), Configuration Interactions Singles (CIS), and time-dependent density functional theory (TD-DFT).

  8. A Self-Adapting System for the Automated Detection of Inter-Ictal Epileptiform Discharges

    PubMed Central

    Lodder, Shaun S.; van Putten, Michel J. A. M.

    2014-01-01

    Purpose Scalp EEG remains the standard clinical procedure for the diagnosis of epilepsy. Manual detection of inter-ictal epileptiform discharges (IEDs) is slow and cumbersome, and few automated methods are used to assist in practice. This is mostly due to low sensitivities, high false positive rates, or a lack of trust in the automated method. In this study we aim to find a solution that will make computer assisted detection more efficient than conventional methods, while preserving the detection certainty of a manual search. Methods Our solution consists of two phases. First, a detection phase finds all events similar to epileptiform activity by using a large database of template waveforms. Individual template detections are combined to form “IED nominations”, each with a corresponding certainty value based on the reliability of their contributing templates. The second phase uses the ten nominations with highest certainty and presents them to the reviewer one by one for confirmation. Confirmations are used to update certainty values of the remaining nominations, and another iteration is performed where ten nominations with the highest certainty are presented. This continues until the reviewer is satisfied with what has been seen. Reviewer feedback is also used to update template accuracies globally and improve future detections. Key Findings Using the described method and fifteen evaluation EEGs (241 IEDs), one third of all inter-ictal events were shown after one iteration, half after two iterations, and 74%, 90%, and 95% after 5, 10 and 15 iterations respectively. Reviewing fifteen iterations for the 20–30 min recordings took approximately 5 min. Significance The proposed method shows a practical approach for combining automated detection with visual searching for inter-ictal epileptiform activity. Further evaluation is needed to verify its clinical feasibility and measure the added value it presents. PMID:24454813
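
    The certainty-update loop can be sketched schematically. In the toy below, every count, reliability value, and nomination is hypothetical (this is not the authors' detector): nominations inherit certainty from the running accuracy of their contributing templates, and each batch of ten reviewer decisions updates those accuracies before the next batch is ranked:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: 50 templates, each nomination triggered by 3 of them.
n_templates = 50
hits = np.ones(n_templates)                 # pseudo-counts of confirmed detections
trials = np.full(n_templates, 2.0)
reliability = hits / trials                 # running template accuracy (prior 0.5)

nominations = [rng.choice(n_templates, size=3, replace=False) for _ in range(30)]
truth = rng.random(30) < 0.5                # simulated reviewer verdicts

reviewed = set()
for _ in range(3):                          # three review iterations of ten each
    certainty = np.array([reliability[t].mean() if i not in reviewed else -1.0
                          for i, t in enumerate(nominations)])
    top = np.argsort(certainty)[::-1][:10]  # ten most certain unreviewed
    for i in top:                           # reviewer confirms or rejects each
        reviewed.add(i)
        for t in nominations[i]:
            trials[t] += 1
            hits[t] += truth[i]
        reliability = hits / trials         # feedback updates template accuracies
print(len(reviewed))
```

The pseudo-counts keep every reliability strictly between 0 and 1, so templates that repeatedly trigger rejected nominations sink in the ranking of future batches.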

  9. Parabolized Navier-Stokes solutions of separation and trailing-edge flows

    NASA Technical Reports Server (NTRS)

    Brown, J. L.

    1983-01-01

    A robust, iterative solution procedure is presented for the parabolized Navier-Stokes or higher order boundary layer equations as applied to subsonic viscous-inviscid interaction flows. The robustness of the present procedure is due, in part, to an improved algorithmic formulation. The present formulation is based on a reinterpretation of stability requirements for this class of algorithms and requires only second order accurate backward or central differences for all streamwise derivatives. Upstream influence is provided for through the algorithmic formulation and iterative sweeps in x. The primary contribution to robustness, however, is the boundary condition treatment, which imposes global constraints to control the convergence path. Discussed are successful calculations of subsonic, strong viscous-inviscid interactions, including separation. These results are consistent with Navier-Stokes solutions and triple deck theory.

  10. Differential dynamic microscopy microrheology of soft materials: A tracking-free determination of the frequency-dependent loss and storage moduli

    NASA Astrophysics Data System (ADS)

    Edera, Paolo; Bergamini, Davide; Trappe, Véronique; Giavazzi, Fabio; Cerbino, Roberto

    2017-12-01

    Particle-tracking microrheology (PT-μr) exploits the thermal motion of embedded particles to probe the local mechanical properties of soft materials. Despite its appealing conceptual simplicity, PT-μr requires calibration procedures and operating assumptions that constitute a practical barrier to its wider application. Here we demonstrate differential dynamic microscopy microrheology (DDM-μr), a tracking-free approach based on the multiscale, temporal correlation study of the image intensity fluctuations that are observed in microscopy experiments as a consequence of the translational and rotational motion of the tracers. We show that the mechanical moduli of an arbitrary sample are determined correctly over a wide frequency range provided that the standard DDM analysis is reinforced with an iterative, self-consistent procedure that fully exploits the multiscale information made available by DDM. Our approach to DDM-μr does not require any prior calibration, is in agreement with both traditional rheology and diffusing wave spectroscopy microrheology, and works in conditions where PT-μr fails, providing thus an operationally simple, calibration-free probe of soft materials.

  11. Resolution enhancement in digital holography by self-extrapolation of holograms.

    PubMed

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2013-03-25

    It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
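
    The pad-and-iterate idea is closely related to Papoulis-Gerchberg band-limited extrapolation. In the 1-D sketch below, a toy band-limited signal stands in for the hologram; the iteration alternates between re-imposing the measured samples and enforcing the known support constraint, and the extrapolation error outside the recorded area shrinks monotonically (full recovery can take many iterations):

```python
import numpy as np

N = 256
n = np.arange(N)
k = np.fft.fftfreq(N)
band = np.abs(k) <= 0.05                     # known band limit (support constraint)

# Toy band-limited "hologram": two in-band cosines; only the centre is recorded.
signal = np.cos(2 * np.pi * 5 * n / N) + 0.5 * np.cos(2 * np.pi * 9 * n / N)
mask = np.zeros(N, bool); mask[64:192] = True

est = np.where(mask, signal, 0.0)            # pad the surroundings with zeros
err0 = np.linalg.norm(est - signal)          # error of the zero-padded record
for _ in range(2000):
    s = np.fft.fft(est)
    s[~band] = 0.0                           # enforce the support constraint
    est = np.real(np.fft.ifft(s))
    est[mask] = signal[mask]                 # re-impose the measured samples

err = np.linalg.norm(est - signal)
print(err0, err)                             # the extrapolation error decreases
```

Both constraint sets contain the true signal, so each projection is non-expansive toward it; this is the mechanism by which information beyond the captured area is gradually retrieved.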

  12. Developing Conceptual Understanding and Procedural Skill in Mathematics: An Iterative Process.

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Siegler, Robert S.; Alibali, Martha Wagner

    2001-01-01

    Proposes that conceptual and procedural knowledge develop in an iterative fashion and improved problem representation is one mechanism underlying the relations between them. Two experiments were conducted with 5th and 6th grade students learning about decimal fractions. Results indicate conceptual and procedural knowledge do develop, iteratively,…

  13. Self-consistent collective coordinate for reaction path and inertial mass

    NASA Astrophysics Data System (ADS)

    Wen, Kai; Nakatsukasa, Takashi

    2016-11-01

    We propose a numerical method to determine the optimal collective reaction path for a nucleus-nucleus collision, based on the adiabatic self-consistent collective coordinate (ASCC) method. We use an iterative method, combining the imaginary-time evolution and the finite amplitude method, for the solution of the ASCC coupled equations. It is applied to the simplest case, α-α scattering. We determine the collective path, the potential, and the inertial mass. The results are compared with other methods, such as the constrained Hartree-Fock method, Inglis's cranking formula, and the adiabatic time-dependent Hartree-Fock (ATDHF) method.

  14. Self-organised criticality in the evolution of a thermodynamic model of rodent thermoregulatory huddling

    PubMed Central

    2017-01-01

    A thermodynamic model of thermoregulatory huddling interactions between endotherms is developed. The model is presented as a Monte Carlo algorithm in which animals are iteratively exchanged between groups, with a probability of exchanging groups defined in terms of the temperature of the environment and the body temperatures of the animals. The temperature-dependent exchange of animals between groups is shown to reproduce a second-order critical phase transition, i.e., a smooth switch to huddling when the environment gets colder, as measured in recent experiments. A peak in the rate at which group sizes change, referred to as pup flow, is predicted at the critical temperature of the phase transition, consistent with a thermodynamic description of huddling, and with a description of the huddle as a self-organising system. The model was subjected to a simple evolutionary procedure, by iteratively substituting the physiologies of individuals that fail to balance the costs of thermoregulation (by huddling in groups) with the costs of thermogenesis (by contributing heat). The resulting tension between cooperative and competitive interactions was found to generate a phenomenon called self-organised criticality, as evidenced by the emergence of avalanches in fitness that propagate across many generations. The emergence of avalanches reveals how huddling can introduce correlations in fitness between individuals and thereby constrain evolutionary dynamics. Finally, a full agent-based model of huddling interactions is also shown to generate criticality when subjected to the same evolutionary pressures. The agent-based model is related to the Monte Carlo model in the way that a Vicsek model is related to an Ising model in statistical physics. Huddling therefore presents an opportunity to use thermodynamic theory to study an emergent adaptive animal behaviour. 
In more general terms, huddling is proposed as an ideal system for investigating the interaction between self-organisation and natural selection empirically. PMID:28141809
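
    The iterative exchange of animals between groups can be sketched as a Metropolis-style Monte Carlo. In the toy below, the discomfort function, gain constants, and group counts are all illustrative (this is not the paper's exact exchange rule); colder environments drive the system toward larger huddles, the qualitative signature of the phase transition described above:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(T_env, N=60, K=12, T_pref=35.0, k=1.0, steps=20000, beta=2.0):
    """Monte Carlo exchange of N animals between K groups; a proposed move is
    accepted with a Boltzmann probability in the total thermal discomfort."""
    group = rng.integers(0, K, N)             # group label of each animal
    size = np.bincount(group, minlength=K)

    def discomfort(sz):
        # each member of a group of size s has body temperature T_env + k*s
        return np.sum(sz * np.abs(T_pref - (T_env + k * sz)))

    E = discomfort(size)
    for _ in range(steps):
        i, new = rng.integers(N), rng.integers(K)
        old = group[i]
        if new == old:
            continue
        trial = size.copy()
        trial[old] -= 1; trial[new] += 1
        E_new = discomfort(trial)
        if E_new < E or rng.random() < np.exp(beta * (E - E_new)):
            group[i], size, E = new, trial, E_new
    occupied = size[size > 0]
    return occupied.mean()                    # mean size of occupied huddles

cold, warm = simulate(T_env=10.0), simulate(T_env=30.0)
print(cold, warm)   # colder environments favour markedly larger huddles
```

With these illustrative numbers, the comfortable group size is (T_pref - T_env)/k, so the cold run coarsens into a few large huddles while the warm run stays dispersed.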

  15. Precision tuning of InAs quantum dot emission wavelength by iterative laser annealing

    NASA Astrophysics Data System (ADS)

    Dubowski, Jan J.; Stanowski, Radoslaw; Dalacu, Dan; Poole, Philip J.

    2018-07-01

    Controlling the emission wavelength of quantum dots (QDs) over large-area wafers is challenging to achieve directly through epitaxial growth methods. We have investigated an innovative post-growth laser-based procedure for tuning the emission of self-assembled InAs QDs grown epitaxially on InP (001). A targeted blue shift of the emission is achieved in a series of iterative steps, with photoluminescence diagnostics employed between the steps to monitor the result of intermixing. We demonstrate tuning of the emission wavelength of QD ensembles to within approximately ±1 nm, while potentially better precision should be achievable for tuning the emission of individual QDs.

  16. On Study of Air/Space-borne Dual-Wavelength Radar for Estimates of Rain Profiles

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert

    2004-01-01

    In this study, a framework is discussed for applying air/space-borne dual-wavelength radar to the estimation of characteristic parameters of hydrometeors. The focus of our study is the Global Precipitation Measurement (GPM) precipitation radar, a dual-wavelength radar that operates at Ku (13.8 GHz) and Ka (35 GHz) bands. With the raindrop size distribution (DSD) expressed as a Gamma function, a procedure is described to derive the median volume diameter (D(sub 0)) and particle number concentration (N(sub T)) of rain. The correspondence of an important dual-wavelength radar quantity, the differential frequency ratio (DFR), to D(sub 0) in the melting region is given as a function of the distance from the 0 C isotherm. A self-consistent iterative algorithm that shows promise in accounting for rain attenuation and inferring the DSD without use of the surface reference technique (SRT) is examined by applying it to apparent radar reflectivity profiles simulated from the DSD model and then comparing the estimates with the model (true) results. For light to moderate rain, the self-consistent rain profiling approach converges to unique and correct solutions only if the same shape factor of the Gamma function is used both to generate and to retrieve the rain profiles; it does not converge to the true solutions if the DSD form is not chosen correctly. To further examine the dual-wavelength techniques, the self-consistent algorithm, along with forward and backward rain profiling algorithms, is then applied to measurements taken by the 2nd-generation Precipitation Radar (PR-2) built by the Jet Propulsion Laboratory. It is found that rain profiles estimated from the forward and backward approaches are not sensitive to the shape factor of the Gamma DSD, but the self-consistent method is.
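    A self-consistent attenuation-correction loop of the general kind described (alternating between an estimated true reflectivity profile and the path attenuation it implies) can be sketched as follows. The k-Z power law and its coefficients are invented for the demo; this is not the GPM or PR-2 algorithm:

```python
import numpy as np

def correct_attenuation(z_measured_dbz, dr_km, a=3e-4, b=0.8, n_iter=20):
    """Iteratively correct a measured (attenuated) reflectivity profile.

    Illustrative self-consistent loop: assume a power law k = a * Z**b
    between specific attenuation k (dB/km) and linear reflectivity Z,
    and alternate between estimating the true profile and the two-way
    path-integrated attenuation until the estimates settle.
    """
    z_true_dbz = z_measured_dbz.copy()
    for _ in range(n_iter):
        z_lin = 10.0 ** (z_true_dbz / 10.0)
        k = a * z_lin ** b                        # dB/km at each gate
        # two-way path-integrated attenuation down to each gate
        pia = 2.0 * dr_km * np.concatenate(([0.0], np.cumsum(k)[:-1]))
        z_true_dbz = z_measured_dbz + pia
    return z_true_dbz

# attenuated profile from a synthetic ~30 dBZ rain layer, 0.25 km gates
z_meas = np.full(50, 30.0) - 0.05 * np.arange(50)
z_est = correct_attenuation(z_meas, dr_km=0.25)
```

For light to moderate rain the correction at each gate is small and the loop converges quickly; for heavy rain such forward schemes are known to become unstable, which is part of why constrained methods like the SRT exist.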

  17. Solving Boltzmann and Fokker-Planck Equations Using Sparse Representation

    DTIC Science & Technology

    2011-05-31

    material science. We have computed the electronic structure of a 2D quantum dot system and compared the efficiency with the benchmark software OCTOPUS. For... one self-consistent iteration step with 512 electrons, OCTOPUS costs 1091 sec, and selected inversion costs 9.76 sec. The algorithm exhibits

  18. Iterating between Lessons on Concepts and Procedures Can Improve Mathematics Knowledge

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Koedinger, Kenneth

    2009-01-01

    Background: Knowledge of concepts and procedures seems to develop in an iterative fashion, with increases in one type of knowledge leading to increases in the other type of knowledge. This suggests that iterating between lessons on concepts and procedures may improve learning. Aims: The purpose of the current study was to evaluate the…

  19. Correction of phase velocity bias caused by strong directional noise sources in high-frequency ambient noise tomography: a case study in Karamay, China

    NASA Astrophysics Data System (ADS)

    Wang, K.; Luo, Y.; Yang, Y.

    2016-12-01

    We collect two months of ambient noise data recorded by 35 broadband seismic stations in a 9×11 km area near Karamay, China, and compute cross-correlations of the noise data between all station pairs. Array beamforming analysis shows that the ambient noise sources are unevenly distributed and that the most energetic noise mainly comes from azimuths of 40°-70°. As a consequence of the strong directional noise sources, surface wave waveforms of the cross-correlations at 1-5 Hz show clear azimuthal dependence, and direct dispersion measurements from the cross-correlations are strongly biased by the dominant noise energy. Because of this bias, the dispersion measurements do not accurately reflect the interstation velocities of surface waves propagating directly from one station to the other; that is, the cross-correlation functions do not accurately retrieve the Empirical Green's Functions. To correct the bias caused by unevenly distributed noise sources, we adopt an iterative inversion procedure. The iterative inversion procedure, based on plane-wave modeling, includes three steps: (1) surface wave tomography, (2) estimation of ambient noise energy and (3) phase velocity correction. First, we use synthetic data to test the efficiency and stability of the iterative procedure for both homogeneous and heterogeneous media. The test results show that: (1) the phase velocity bias caused by directional noise sources is significant, reaching 2% and 10% for homogeneous and heterogeneous media, respectively; (2) the bias can be corrected by the iterative inversion procedure, and the convergence of the inversion depends on the starting phase velocity map and the complexity of the medium. 
By applying the iterative approach to the real data from Karamay, we further show that the phase velocity maps converge after ten iterations, and that the map based on corrected interstation dispersion measurements is more consistent with results from geological surveys than that based on uncorrected measurements. As high-frequency (>1 Hz) ambient noise is mostly related to human activities or climate events, both of which have strong directivity, the iterative approach demonstrated here helps improve the accuracy and resolution of ANT in imaging shallow earth structures.

  20. Improving cluster-based missing value estimation of DNA microarray data.

    PubMed

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
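    The iterative reuse of freshly estimated values can be sketched in a few lines of NumPy. This is a hypothetical minimal version of the IKNNimpute idea (row-wise Euclidean neighbours, column-wise neighbour means), not the authors' implementation:

```python
import numpy as np

def iknn_impute(x, k=3, n_iter=10, tol=1e-6):
    """Iterative KNN imputation (illustrative sketch of the IKNN idea).

    Missing entries (NaN) are first filled with column means; each missing
    value is then repeatedly re-estimated as the mean of that column over
    the k rows nearest (Euclidean distance) to its row, so freshly imputed
    values are reused in later iterations.
    """
    x = np.asarray(x, dtype=float)
    miss = np.isnan(x)
    filled = x.copy()
    col_means = np.nanmean(x, axis=0)
    filled[miss] = np.take(col_means, np.nonzero(miss)[1])
    for _ in range(n_iter):
        prev = filled.copy()
        for i, j in zip(*np.nonzero(miss)):
            d = np.sqrt(((filled - filled[i]) ** 2).sum(axis=1))
            d[i] = np.inf                    # exclude the row itself
            neighbours = np.argsort(d)[:k]
            filled[i, j] = filled[neighbours, j].mean()
        if np.abs(filled - prev).max() < tol:
            break
    return filled

data = np.array([[1.0, 2.0, 3.0],
                 [1.1, np.nan, 3.2],
                 [0.9, 2.1, np.nan],
                 [5.0, 6.0, 7.0]])
completed = iknn_impute(data)
```

Observed entries are never touched; only the NaN positions are re-estimated, which is what lets the refined estimates propagate between iterations.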

  1. Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Watson, Willie R.; Mani, Ramani

    2007-01-01

    A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.

  2. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.

    PubMed

    Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.

  3. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: Exactly solvable two-site Hubbard model

    DOE PAGES

    Kutepov, A. L.

    2015-07-22

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ₁ from first-order perturbation theory, and the exact vertex Γ E). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when it is not. Results obtained with the exact vertex are directly related to the present open question of which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to those of the exact vertex combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  4. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    PubMed

    Kutepov, A L

    2015-08-12

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when it is not. The results obtained with the exact vertex are directly related to the present open question of which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows problems similar to those of the exact vertex combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  5. CRISM Hyperspectral Data Filtering with Application to MSL Landing Site Selection

    NASA Astrophysics Data System (ADS)

    Seelos, F. P.; Parente, M.; Clark, T.; Morgan, F.; Barnouin-Jha, O. S.; McGovern, A.; Murchie, S. L.; Taylor, H.

    2009-12-01

    We report on the development and implementation of a custom filtering procedure for Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) IR hyperspectral data that is suitable for incorporation into the CRISM Reduced Data Record (RDR) calibration pipeline. Over the course of the Mars Reconnaissance Orbiter (MRO) Primary Science Phase (PSP) and the ongoing Extended Science Phase (ESP) CRISM has operated with an IR detector temperature between ~107 K and ~127 K. This ~20 K range in operational temperature has resulted in variable data quality, with observations acquired at higher detector temperatures exhibiting a marked increase in both systematic and stochastic noise. The CRISM filtering procedure consists of two main data processing capabilities. The primary systematic noise component in CRISM IR data appears as along-track or column-oriented striping. This is addressed by the robust derivation and application of an inter-column ratio correction frame. The correction frame is developed through the serial evaluation of band-specific column ratio statistics and so does not compromise the spectral fidelity of the image cube. The dominant CRISM IR stochastic noise components appear as isolated data spikes or column-oriented segments of variable length with erroneous data values. The non-systematic noise is identified and corrected through the application of an iterative-recursive kernel modeling procedure which employs a formal statistical outlier test as the iteration control and recursion termination criterion. This allows the filtering procedure to make a statistically supported determination between high-frequency (spatial/spectral) signal and high-frequency noise based on the information content of a given multidimensional data kernel. The governing statistical test also allows the kernel filtering procedure to be self-regulating and adaptive to the intrinsic noise level in the data. 
The CRISM IR filtering procedure is scheduled to be incorporated into the next augmentation of the CRISM IR calibration (version 3). The filtering algorithm will be applied to the I/F data (IF) delivered to the Planetary Data System (PDS), but the radiance on sensor data (RA) will remain unfiltered. The development of CRISM hyperspectral analysis products in support of the Mars Science Laboratory (MSL) landing site selection process has motivated the advance of CRISM-specific data processing techniques. The quantitative results of the CRISM IR filtering procedure as applied to CRISM observations acquired in support of MSL landing site selection will be presented.

  6. Stability of iterative procedures with errors for approximating common fixed points of a couple of q-contractive-like mappings in Banach spaces

    NASA Astrophysics Data System (ADS)

    Zeng, Lu-Chuan; Yao, Jen-Chih

    2006-09-01

    Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.

  7. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure considers a set of IFSs able to generate fractal structures in a two-dimensional phase space, and uses them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared with the alternative classical mutation operators.
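    A minimal sketch of what an IFS-based mutation might look like, assuming a standard Sierpinski-triangle IFS sampled by the chaos game. The operator, its recentring and its scale are illustrative inventions, not the operator from the paper:

```python
import random

# Sierpinski-triangle IFS: three contractions of the unit square.
IFS_MAPS = [
    lambda x, y: (0.5 * x, 0.5 * y),
    lambda x, y: (0.5 * x + 0.5, 0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

def ifs_point(rng, n_warmup=20):
    """Draw one point near the attractor via the chaos game."""
    x, y = rng.random(), rng.random()
    for _ in range(n_warmup):
        x, y = rng.choice(IFS_MAPS)(x, y)
    return x, y

def ifs_mutate(individual, rng, scale=0.1):
    """Mutate a 2D individual with a fractal offset instead of Gaussian noise.

    Hypothetical sketch in the spirit of the paper: the offset is a
    chaos-game sample recentred around the attractor's rough centre, so
    mutations are bounded and fractally structured rather than smooth.
    """
    dx, dy = ifs_point(rng)
    return (individual[0] + scale * (dx - 0.5),
            individual[1] + scale * (dy - 0.5))

rng = random.Random(0)
child = ifs_mutate((1.0, 2.0), rng)
```

Since every IFS map is a contraction into the unit square, the offset magnitude is bounded by `0.5 * scale` in each coordinate, unlike Gaussian or Cauchy mutations whose tails are unbounded.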

  8. Integrated fusion simulation with self-consistent core-pedestal coupling

    DOE PAGES

    Meneghini, O.; Snyder, P. B.; Smith, S. P.; ...

    2016-04-20

    In this study, accurate prediction of fusion performance in present and future tokamaks requires taking into account the strong interplay between core transport, pedestal structure, current profile and plasma equilibrium. An integrated modeling workflow capable of calculating the steady-state self-consistent solution to this strongly coupled problem has been developed. The workflow leverages state-of-the-art components for collisional and turbulent core transport, equilibrium and pedestal stability. Validation against DIII-D discharges shows that the workflow is capable of robustly predicting the kinetic profiles (electron and ion temperature and electron density) from the axis to the separatrix in good agreement with the experiments. An example application is presented, showing self-consistent optimization of the fusion performance of the 15 MA D-T ITER baseline scenario as a function of the pedestal density and ion effective charge Zeff.

  9. Communication: The description of strong correlation within self-consistent Green's function second-order perturbation theory

    NASA Astrophysics Data System (ADS)

    Phillips, Jordan J.; Zgid, Dominika

    2014-06-01

    We report an implementation of self-consistent Green's function many-body theory within a second-order approximation (GF2) for application with molecular systems. This is done by iterative solution of the Dyson equation expressed in matrix form in an atomic orbital basis, where the Green's function and self-energy are built on the imaginary frequency and imaginary time domain, respectively, and fast Fourier transform is used to efficiently transform these quantities as needed. We apply this method to several archetypical examples of strong correlation, such as a H32 finite lattice that displays a highly multireference electronic ground state even at equilibrium lattice spacing. In all cases, GF2 gives a physically meaningful description of the metal to insulator transition in these systems, without resorting to spin-symmetry breaking. Our results show that self-consistent Green's function many-body theory offers a viable route to describing strong correlations while remaining within a computationally tractable single-particle formalism.
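    The self-consistency at the heart of such schemes can be illustrated with a deliberately tiny toy: a single-pole Green's function at one imaginary frequency with a schematic Sigma = U²·G self-energy (not the actual GF2 self-energy, which involves frequency convolutions) and linear mixing for stability:

```python
def dyson_self_consistent(eps=0.0, u=1.0, omega=0.5,
                          n_iter=200, mix=0.5, tol=1e-12):
    """Toy self-consistent Dyson loop at a single imaginary frequency.

    Schematic illustration only: iterate
        G = 1 / (i*omega - eps - Sigma[G]),   Sigma = u**2 * G,
    with linear mixing of old and new Green's functions, until the
    update falls below tol.
    """
    g = 1.0 / (1j * omega - eps)          # non-interacting starting guess
    for _ in range(n_iter):
        sigma = u ** 2 * g                # made-up "second-order-like" Sigma
        g_new = 1.0 / (1j * omega - eps - sigma)
        if abs(g_new - g) < tol:
            return g_new
        g = mix * g_new + (1.0 - mix) * g # damped update for stability
    return g

g = dyson_self_consistent()
```

At convergence the returned value satisfies the scalar Dyson equation G = 1/(iω − ε − U²G) to high accuracy; the real GF2 loop has the same fixed-point structure but over full matrices on imaginary-time and imaginary-frequency grids.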

  10. Sensitivity calculations for iteratively solved problems

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1985-01-01

    The calculation of sensitivity derivatives of solutions of iteratively solved systems of algebraic equations is investigated. A modified finite difference procedure is presented which improves the accuracy of the calculated derivatives. The procedure is demonstrated for a simple algebraic example as well as an element-by-element preconditioned conjugate gradient iterative solution technique applied to truss examples.
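    The core difficulty (finite-difference derivatives of a solution that is itself only iteratively converged) can be sketched as follows. The Jacobi solver, the 2×2 test system, and the warm-start choice are illustrative assumptions, not Haftka's exact modified procedure:

```python
import numpy as np

def jacobi_solve(a, b, x0, tol=1e-12, max_iter=10000):
    """Solve a @ x = b by Jacobi iteration starting from x0."""
    d = np.diag(a)
    r = a - np.diag(d)
    x = x0.copy()
    for _ in range(max_iter):
        x_new = (b - r @ x) / d
        if np.abs(x_new - x).max() < tol:
            return x_new
        x = x_new
    return x

def sensitivity_fd(p, h=1e-6):
    """Central finite-difference sensitivity dx/dp of an iterative solve.

    The converged baseline solution is reused as the starting guess for
    both perturbed solves, and a tight tolerance keeps the iteration
    error well below the finite-difference signal.
    """
    def system(p):
        a = np.array([[4.0 + p, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        return a, b
    a0, b0 = system(p)
    x0 = jacobi_solve(a0, b0, np.zeros(2))
    xp = jacobi_solve(*system(p + h), x0)   # warm start from baseline
    xm = jacobi_solve(*system(p - h), x0)
    return (xp - xm) / (2.0 * h)

dx_dp = sensitivity_fd(0.5)
```

For this system the derivative can be checked analytically via dx/dp = -A⁻¹(dA/dp)x, which gives (-0.0192, 0.0064) at p = 0.5; loose solver tolerances would contaminate the difference xp - xm, which is precisely the accuracy issue the abstract's modified procedure addresses.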

  11. Convergence of quasiparticle self-consistent GW calculations of transition-metal monoxides

    NASA Astrophysics Data System (ADS)

    Das, Suvadip; Coulter, John E.; Manousakis, Efstratios

    2015-03-01

    Finding an accurate ab initio approach for calculating the electronic properties of transition-metal oxides has been a problem for several decades. In this paper, we investigate the electronic structure of the transition-metal monoxides MnO, CoO, and NiO in their undistorted rocksalt structure within a fully iterated quasiparticle self-consistent GW (QPscGW) scheme. We study the convergence of the QPscGW method, i.e., how the quasiparticle energy eigenvalues and wave functions converge as a function of the QPscGW iterations, and we compare the converged outputs obtained from different starting wave functions. We find that the convergence is slow and that a one-shot G0W0 calculation does not significantly improve the initial eigenvalues and states. It is important to notice that in some cases the "path" to convergence may go through energy band reordering which cannot be captured by the simple initial unperturbed Hamiltonian. When we reach a fully iterated solution, the converged density of states, band gaps, and magnetic moments of these oxides are found to be only weakly dependent on the choice of the starting wave functions and in reasonably good agreement with the experiment. Finally, this approach provides a clear picture of the interplay between the various orbitals near the Fermi level of these simple transition-metal monoxides. The results of these accurate ab initio calculations can provide input for models aiming at describing the low-energy physics in these materials.

  12. Global Asymptotic Behavior of Iterative Implicit Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sweby, P. K.

    1994-01-01

    The global asymptotic nonlinear behavior of some standard iterative procedures in solving nonlinear systems of algebraic equations arising from four implicit linear multistep methods (LMMs) in discretizing three models of 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed using the theory of dynamical systems. The iterative procedures include simple iteration and full and modified Newton iterations. The results are compared with standard Runge-Kutta explicit methods, a noniterative implicit procedure, and the Newton method of solving the steady part of the ODEs. Studies showed that aside from exhibiting spurious asymptotes, all of the four implicit LMMs can change the type and stability of the steady states of the differential equations (DEs). They also exhibit a drastic distortion but less shrinkage of the basin of attraction of the true solution than standard non-LMM explicit methods. The simple iteration procedure exhibits behavior which is similar to standard non-LMM explicit methods except that spurious steady-state numerical solutions cannot occur. The numerical basins of attraction of the noniterative implicit procedure mimic more closely the basins of attraction of the DEs and are more efficient than the three iterative implicit procedures for the four implicit LMMs. Contrary to popular belief, the initial data using the Newton method of solving the steady part of the DEs may not have to be close to the exact steady state for convergence. These results can be used as an explanation for possible causes and cures of slow convergence and nonconvergence of steady-state numerical solutions when using an implicit LMM time-dependent approach in computational fluid dynamics.
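    The two iterative procedures named above (simple iteration and Newton iteration) can be contrasted on a single implicit step of the simplest implicit LMM, backward Euler. The scalar model problem y' = -y³ is chosen for illustration only:

```python
def backward_euler_step(y, dt, max_iter=100, tol=1e-12):
    """One backward-Euler step for y' = -y**3, solved two ways.

    The implicit update y_new = y - dt * y_new**3 is solved either by
    simple (fixed-point) iteration or by Newton's method applied to
    g(z) = z + dt*z**3 - y, whose derivative is g'(z) = 1 + 3*dt*z**2.
    """
    # simple iteration: z <- y - dt * z**3
    z = y
    for _ in range(max_iter):
        z_next = y - dt * z ** 3
        if abs(z_next - z) < tol:
            break
        z = z_next
    y_simple = z_next
    # Newton iteration on g(z) = 0
    z = y
    for _ in range(max_iter):
        step = (z + dt * z ** 3 - y) / (1.0 + 3.0 * dt * z ** 2)
        z -= step
        if abs(step) < tol:
            break
    y_newton = z
    return y_simple, y_newton

ys, yn = backward_euler_step(1.0, 0.1)
```

For this small time step both procedures converge to the same root of the implicit equation; the dynamical-systems point of the abstract is that for less benign steps and stiffer problems the two procedures can have very different basins of attraction and spurious behavior.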

  13. On a self-consistent representation of earth models, with an application to the computing of internal flattening

    NASA Astrophysics Data System (ADS)

    Denis, C.; Ibrahim, A.

    Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and values of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm is developed for estimating the geometric flattening of any equidense surface identified by its fractional radius. The program can also be applied in studies of planetary and stellar models.

  14. Electronic excitation spectra of molecules in solution calculated using the symmetry-adapted cluster-configuration interaction method in the polarizable continuum model with perturbative approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fukuda, Ryoichi, E-mail: fukuda@ims.ac.jp; Ehara, Masahiro; Elements Strategy Initiative for Catalysts and Batteries

    A perturbative approximation of the state-specific polarizable continuum model (PCM) symmetry-adapted cluster-configuration interaction (SAC-CI) method is proposed for efficient calculations of the electronic excitations and absorption spectra of molecules in solution. This first-order PCM SAC-CI method considers the solvent effects on the energies of excited states up to first order using the zeroth-order wavefunctions, thereby avoiding the costly iterative procedure of self-consistent reaction field calculations. The first-order PCM SAC-CI calculations reproduce well the results obtained by the iterative method for various types of excitations of molecules in polar and nonpolar solvents. The first-order contribution is significant for the excitation energies. The results obtained by the zeroth-order PCM SAC-CI, which uses the fixed ground-state reaction field for the excited-state calculations, deviate from the iterative results by about 0.1 eV, and the zeroth-order PCM SAC-CI cannot predict even the direction of the solvent shifts in n-hexane in many cases. The first-order PCM SAC-CI is applied to studying the solvatochromism of (2,2′-bipyridine)tetracarbonyltungsten [W(CO)4(bpy), bpy = 2,2′-bipyridine] and bis(pentacarbonyltungsten)pyrazine [(OC)5W(pyz)W(CO)5, pyz = pyrazine]. The SAC-CI calculations reveal the detailed character of the excited states and the mechanisms of the solvent shifts. The energies of metal-to-ligand charge-transfer states are significantly sensitive to solvents. The first-order PCM SAC-CI reproduces well the observed absorption spectra of the tungsten carbonyl complexes in several solvents.

  15. New type of a generalized variable-coefficient Kadomtsev-Petviashvili equation with self-consistent sources and its Grammian-type solutions

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Xu, Yue; Ma, Kun

    2016-08-01

    In this paper, the variable-coefficient Kadomtsev-Petviashvili (vcKP) equation with self-consistent sources is presented by two different methods, one is the source generation procedure, the other is the Pfaffianization procedure, and the solutions for the two new coupled systems are given through Grammian-type Pfaffian determinants.

  16. A new solution procedure for a nonlinear infinite beam equation of motion

    NASA Astrophysics Data System (ADS)

    Jang, T. S.

    2016-10-01

    This paper addresses a purely theoretical question that is nevertheless fundamental in computational partial differential equations: can a linear solution structure for the equation of motion of an infinite nonlinear beam be manipulated directly to construct its nonlinear solution? Here, the equation of motion is modeled mathematically as a fourth-order nonlinear partial differential equation. To answer the question, a pseudo-parameter is first introduced to modify the equation of motion. An integral formalism for the modified equation is then found, which serves as the linear solution structure. It enables us to formulate a nonlinear integral equation of the second kind that is equivalent to the original equation of motion. Applying the fixed-point approach to this integral equation yields a new iterative procedure for constructing the nonlinear solution of the original beam equation of motion; conveniently, its iterative process requires only simple regular numerical integration, so it is straightforward to apply. A mathematical analysis establishes both convergence and uniqueness of the iterative procedure by proving the contractive character of a nonlinear operator. The method therefore offers a useful nonlinear strategy for integrating the equation of motion of a nonlinear infinite beam, answering the question above. In addition, the pseudo-parameter introduced here plays two roles: first, it connects the original beam equation of motion with the integral equation; second, it is related to the convergence of the proposed iterative method.
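    The contraction-mapping strategy underlying such procedures can be illustrated with a generic Picard iteration. The scalar test map x → cos x stands in for the (much richer) nonlinear integral operator of the paper:

```python
import math

def picard(f, x0, tol=1e-12, max_iter=1000):
    """Generic fixed-point (Picard) iteration x_{n+1} = f(x_n).

    If f is a contraction on a suitable set, the Banach fixed-point
    theorem guarantees convergence to the unique fixed point from any
    starting guess in that set; returns the estimate and the number of
    iterations used.
    """
    x = x0
    for n in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    return x, max_iter

root, n_used = picard(math.cos, 1.0)
```

Here |cos'(x)| = |sin x| ≈ 0.67 near the fixed point, so each iteration shrinks the error by roughly a third; the beam paper's analysis plays the same role of bounding the contraction constant of its integral operator.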

  17. On iterative processes in the Krylov-Sonneveld subspaces

    NASA Astrophysics Data System (ADS)

    Ilin, Valery P.

    2016-10-01

    The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical Krylov-type processes. The key idea of the IDR algorithms is the construction of nested Sonneveld subspaces of decreasing dimension, using orthogonalization against some fixed subspace. Other, independent approaches to analyzing and optimizing the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR methods in Sonneveld subspaces provide an original interpretation of the modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant to parallel algebraic domain decomposition approaches.
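
    As a point of comparison for the Krylov-type processes discussed above, a minimal full-memory GMRES built on the Arnoldi process can be sketched as follows; this is a textbook illustration under simplifying assumptions (no restarts, no preconditioning), not the IDR algorithm itself:

```python
import numpy as np

def gmres_minimal(A, b, m=30):
    """Minimal full-memory GMRES sketch: build an m-dimensional Krylov
    basis with Arnoldi (modified Gram-Schmidt), then minimize the
    residual norm over that subspace via a small least-squares solve."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):              # orthogonalize against basis
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:             # happy breakdown: exact solution
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

# demo on a small, well-conditioned nonsymmetric system (illustrative only)
rng = np.random.default_rng(0)
A = 5.0 * np.eye(20) + 0.1 * rng.standard_normal((20, 20))
b = rng.standard_normal(20)
x = gmres_minimal(A, b, m=20)
residual = np.linalg.norm(A @ x - b)
```

    IDR(s) differs precisely in replacing this single growing Krylov basis with the nested Sonneveld subspaces of shrinking dimension.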

  18. Self-Motivated Personal Career Planning Program. Planner's Guide. [Student Edition].

    ERIC Educational Resources Information Center

    Walter, Verne

    The Self-Motivated Personal Career Planning guide for students presents a process of self-assessment and goal-setting. An overview and rationale of the program and instructions and procedures are discussed in Chapters 1 and 2. The remainder of the guide consists of procedural steps for (1) self-assessment and (2) review and planning.…

  19. Integrated modeling of high βN steady state scenario on DIII-D

    DOE PAGES

    Park, Jin Myung; Ferron, J. R.; Holcomb, Christopher T.; ...

    2018-01-10

    Theory-based integrated modeling validated against DIII-D experiments predicts that fully non-inductive DIII-D operation with β N > 4.5 is possible with certain upgrades. IPS-FASTRAN is a new iterative numerical procedure that integrates models of core transport, edge pedestal, equilibrium, stability, heating, and current drive self-consistently to find steady-state ( d/dt = 0) solutions and reproduces most features of DIII-D high β N discharges with a stationary current profile. Projecting forward to scenarios possible on DIII-D with future upgrades, the high q min > 2 scenario achieves stable operation at β N as high as 5 by using a very broad current density profile to improve the ideal-wall stabilization of low- n instabilities along with confinement enhancement from low magnetic shear. This modeling guides the necessary upgrades of the heating and current drive system to realize reactor-relevant high β N steady-state scenarios on DIII-D by simultaneous optimization of the current and pressure profiles.

  20. Integrated modeling of high βN steady state scenario on DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jin Myung; Ferron, J. R.; Holcomb, Christopher T.

    Theory-based integrated modeling validated against DIII-D experiments predicts that fully non-inductive DIII-D operation with β N > 4.5 is possible with certain upgrades. IPS-FASTRAN is a new iterative numerical procedure that integrates models of core transport, edge pedestal, equilibrium, stability, heating, and current drive self-consistently to find steady-state ( d/dt = 0) solutions and reproduces most features of DIII-D high β N discharges with a stationary current profile. Projecting forward to scenarios possible on DIII-D with future upgrades, the high q min > 2 scenario achieves stable operation at β N as high as 5 by using a very broad current density profile to improve the ideal-wall stabilization of low- n instabilities along with confinement enhancement from low magnetic shear. This modeling guides the necessary upgrades of the heating and current drive system to realize reactor-relevant high β N steady-state scenarios on DIII-D by simultaneous optimization of the current and pressure profiles.

  1. Integrated modeling of high βN steady state scenario on DIII-D

    NASA Astrophysics Data System (ADS)

    Park, J. M.; Ferron, J. R.; Holcomb, C. T.; Buttery, R. J.; Solomon, W. M.; Batchelor, D. B.; Elwasif, W.; Green, D. L.; Kim, K.; Meneghini, O.; Murakami, M.; Snyder, P. B.

    2018-01-01

    Theory-based integrated modeling validated against DIII-D experiments predicts that fully non-inductive DIII-D operation with βN > 4.5 is possible with certain upgrades. IPS-FASTRAN is a new iterative numerical procedure that integrates models of core transport, edge pedestal, equilibrium, stability, heating, and current drive self-consistently to find steady-state (d/dt = 0) solutions and reproduces most features of DIII-D high βN discharges with a stationary current profile. Projecting forward to scenarios possible on DIII-D with future upgrades, the high qmin > 2 scenario achieves stable operation at βN as high as 5 by using a very broad current density profile to improve the ideal-wall stabilization of low-n instabilities along with confinement enhancement from low magnetic shear. This modeling guides the necessary upgrades of the heating and current drive system to realize reactor-relevant high βN steady-state scenarios on DIII-D by simultaneous optimization of the current and pressure profiles.

  2. Coupled dynamics in gluon mass generation and the impact of the three-gluon vertex

    NASA Astrophysics Data System (ADS)

    Binosi, Daniele; Papavassiliou, Joannis

    2018-03-01

    We present a detailed study of the subtle interplay transpiring at the level of two integral equations that are instrumental for the dynamical generation of a gluon mass in pure Yang-Mills theories. The main novelty is the joint treatment of the Schwinger-Dyson equation governing the infrared behavior of the gluon propagator and of the integral equation that controls the formation of massless bound-state excitations, whose inclusion is essential for obtaining massive solutions from the former equation. The self-consistency of the entire approach imposes the requirement of using a single value for the gauge coupling entering in the two key equations; its fulfilment depends crucially on the details of the three-gluon vertex, which contributes to both of them, but with different weight. In particular, the characteristic suppression of this vertex at intermediate and low energies enables the convergence of the iteration procedure to a single gauge coupling, whose value is reasonably close to that extracted from related lattice simulations.

  3. Effect of contrast enhancement prior to iteration procedure on image correction for soft x-ray projection microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamsranjav, Erdenetogtokh, E-mail: ja.erdenetogtokh@gmail.com; Shiina, Tatsuo, E-mail: shiina@faculity.chiba-u.jp; Kuge, Kenichi

    2016-01-28

    Soft X-ray microscopy is well recognized as a powerful tool for high-resolution imaging of hydrated biological specimens. The projection type offers an easy zooming function, a simple optical layout, and other advantages. However, the image is blurred by the diffraction of X-rays, which degrades the spatial resolution. In this study, the blurred images have been corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. This method was confirmed by earlier studies to be effective; nevertheless, it was insufficient for some images showing very low contrast, especially at high magnification. In the present study, we tried a contrast enhancement method to make the diffraction fringes clearer prior to the iteration procedure. The method was effective in improving images that could not be corrected by the iteration procedure alone.
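
    The Fresnel / inverse Fresnel transformation pair at the heart of such an iteration can be sketched with an angular-spectrum propagator; this is a hedged illustration of the transform pair only (grid size, wavelength, and pixel pitch below are arbitrary assumptions, not the experimental parameters):

```python
import numpy as np

def fresnel_propagate(field, dist, wavelength, dx):
    """Fresnel (angular-spectrum, paraxial) propagation of a 2-D
    complex field over a distance dist; dx is the pixel pitch."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dist * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# propagation over -dist inverts propagation over +dist, which is the
# transform / inverse-transform pair the iteration repeats (all numbers
# below are illustrative, not the experimental geometry)
img = np.random.default_rng(1).random((64, 64))
blurred = fresnel_propagate(img, 0.01, 5e-9, 1e-7)
restored = fresnel_propagate(blurred, -0.01, 5e-9, 1e-7)
```

    In the actual correction procedure, constraints are applied between the forward and inverse steps, so the loop converges to a de-blurred image rather than trivially returning the input.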

  4. Self-consistent one dimension in space and three dimension in velocity kinetic trajectory simulation model of magnetized plasma-wall transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chalise, Roshan, E-mail: plasma.roshan@gmail.com; Khanal, Raju

    2015-11-15

    We have developed a self-consistent 1d3v (one dimension in space and three dimensions in velocity) Kinetic Trajectory Simulation (KTS) model, which can be used for modeling various situations of interest and yields results of high accuracy. Exact ion trajectories are followed to calculate the ion distribution function along them, assuming an arbitrary injection ion distribution. The electrons, on the other hand, are assumed to have a cut-off Maxwellian velocity distribution at injection, and their density distribution is obtained analytically. Starting from an initial guess, the potential profile is iterated towards the final time-independent self-consistent state. We have used the model to study the plasma sheath region formed in the presence of an oblique magnetic field. Our results agree well with previous work from other models, and hence we expect our 1d3v KTS model to provide a basis for studying all types of magnetized plasmas, yielding more accurate results.

  5. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks

    PubMed Central

    Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968

  6. Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions

    PubMed Central

    Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.

    2010-01-01

    Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment by a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. 
The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256

  7. Electron beam charging of insulators: A self-consistent flight-drift model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Touzin, M.; Goeuriot, D.; Guerret-Piecourt, C.

    2006-06-01

    Electron beam irradiation and the self-consistent charge transport in bulk insulating samples are described by means of a new flight-drift model and an iterative computer simulation. Ballistic secondary electron and hole transport is followed by electron and hole drifts, their possible recombination and/or trapping in shallow and deep traps. The trap capture cross sections are temperature and field dependent, of the Poole-Frenkel type. As a main result, the spatial distributions of the currents j(x,t), charges ρ(x,t), the field F(x,t), and the potential V(x,t) are obtained in a self-consistent procedure, as well as the time-dependent secondary electron emission rate σ(t) and the surface potential V0(t). For bulk insulating samples the time-dependent distributions approach the final stationary state with j(x,t) = const = 0 and σ = 1. Especially for low electron beam energies E0 < 4 keV, the incorporation of mainly positive charges can be controlled by the potential VG of a vacuum grid in front of the target surface. For high beam energies E0 = 10, 20, and 30 keV, high negative surface potentials V0 = -4, -14, and -24 kV are obtained, respectively. Besides open nonconductive samples, positive ion-covered samples and targets with a conducting, grounded surface layer (metal or carbon) have also been considered, as used in environmental scanning electron microscopy and common SEM in order to prevent charging. Indeed, the potential distributions V(x) are then considerably small in magnitude and do not affect the incident electron beam, either by retarding field effects in front of the surface or within the bulk insulating sample. Thus the spatial scattering and excitation distributions are almost unaffected.

  8. An iterative transformation procedure for numerical solution of flutter and similar characteristics-value problems

    NASA Technical Reports Server (NTRS)

    Gossard, Myron L

    1952-01-01

    An iterative transformation procedure suggested by H. Wielandt for numerical solution of flutter and similar characteristic-value problems is presented. Application of this procedure to ordinary natural-vibration problems and to flutter problems is shown by numerical examples. Comparisons of computed results with experimental values and with results obtained by other methods of analysis are made.
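
    Wielandt's ideas survive in modern eigenvalue practice as power iteration combined with deflation; the following sketch illustrates that general pattern (the matrix and the symmetric-deflation shortcut are illustrative assumptions, not the report's flutter formulation):

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=2000):
    """Dominant eigenpair of A by power iteration, with a Rayleigh
    quotient estimate of the eigenvalue."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v

# Wielandt-style deflation of the found eigenpair, then iterate again
# (symmetric shortcut; the matrix values are illustrative)
A = np.diag([5.0, 3.0, 1.0])
lam1, v1 = power_iteration(A)
lam2, _ = power_iteration(A - lam1 * np.outer(v1, v1))
```

    In a flutter analysis the matrix would come from the discretized characteristic-value problem, and the deflation step is what allows successive modes to be extracted.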

  9. Contact stresses in gear teeth: A new method of analysis

    NASA Technical Reports Server (NTRS)

    Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.

    1991-01-01

    A new, innovative procedure called point load superposition is presented for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and existing applications of the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms, along with several examples. Results are consistent with those of the classical theories. Applications to spur gears are discussed.

  10. Multigrid techniques for nonlinear eigenvalue problems: Solutions of a nonlinear Schroedinger eigenvalue problem in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1994-01-01

    This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EP) and emphasizes an MG algorithm for a nonlinear Schrodinger EP. The algorithm overcomes difficulties commonly encountered in such problems by combining the following techniques: an MG projection coupled with backrotations for separation of solutions and treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques for treatment of the nonlinearity; and an MG simultaneous treatment of the eigenvectors together with the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few (or even one) MG simultaneous iteration and keep the solutions in a neighborhood where the algorithm converges fast.

  11. Static shape of an acoustically levitated drop with wave-drop interaction

    NASA Astrophysics Data System (ADS)

    Lee, C. P.; Anilkumar, A. V.; Wang, T. G.

    1994-11-01

    The static shape of a drop levitated and flattened by an acoustic standing wave field in air is calculated, requiring self-consistency between the drop shape and the wave. The wave is calculated for a given shape using the boundary integral method. From the resulting radiation stress on the drop surface, the shape is determined by solving the Young-Laplace equation, completing an iteration cycle. The iteration is continued until both the shape and the wave converge. Of particular interest are the shapes of large drops that sustain equilibrium, beyond a certain degree of flattening, by becoming more flattened at a decreasing sound pressure level. The predictions for flattening versus acoustic radiation stress, for drops of different sizes, compare favorably with experimental data.

  12. Toroidal Ampere-Faraday Equations Solved Consistently with the CQL3D Fokker-Planck Time-Evolution

    NASA Astrophysics Data System (ADS)

    Harvey, R. W.; Petrov, Yu. V.

    2013-10-01

    A self-consistent, time-dependent toroidal electric field calculation is a key feature of a complete 3D Fokker-Planck kinetic distribution radial transport code for f(v,theta,rho,t). In the present CQL3D finite-difference model, the electric field E(rho,t) is either prescribed, or iteratively adjusted to obtain prescribed toroidal or parallel currents. We discuss first results of an implementation of the Ampere-Faraday equation for the self-consistent toroidal electric field, as applied to the runaway electron production in tokamaks due to rapid reduction of the plasma temperature as occurs in a plasma disruption. Our previous results assuming a constant current density (Lenz' Law) model showed that prompt ``hot-tail runaways'' dominated ``knock-on'' and Dreicer ``drizzle'' runaways; we will examine modifications due to the more complete Ampere-Faraday solution. Work supported by US DOE under DE-FG02-ER54744.

  13. Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography

    NASA Astrophysics Data System (ADS)

    Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.

    1985-06-01

    Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
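
    A minimal SIRT update, in the commonly used row/column-weighted form, can be sketched as follows; this is a generic illustration of the technique (the weighting and relaxation choices are textbook assumptions, not necessarily those used by the authors):

```python
import numpy as np

def sirt(A, b, n_iter=200, relax=1.0):
    """Simultaneous Iterative Reconstruction Technique sketch:
    x <- x + relax * C * A^T * R * (b - A x), where R and C are the
    inverse row- and column-sum weightings of the system matrix."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * C * (A.T @ (R * (b - A @ x)))
    return x

# demo on a tiny invertible "projection" system (values are illustrative)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x = sirt(A, A @ x_true)
```

    Because every ray (row) contributes to every update simultaneously, SIRT converges more smoothly than row-by-row schemes such as ART, at the cost of slower initial progress.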

  14. Discrete Self-Similarity in Interfacial Hydrodynamics and the Formation of Iterated Structures.

    PubMed

    Dallaston, Michael C; Fontelos, Marco A; Tseluiko, Dmitri; Kalliadasis, Serafim

    2018-01-19

    The formation of iterated structures, such as satellite and subsatellite drops, filaments, and bubbles, is a common feature in interfacial hydrodynamics. Here we undertake a computational and theoretical study of their origin in the case of thin films of viscous fluids that are destabilized by long-range molecular or other forces. We demonstrate that iterated structures appear as a consequence of discrete self-similarity, where certain patterns repeat themselves, subject to rescaling, periodically in a logarithmic time scale. The result is an infinite sequence of ridges and filaments with similarity properties. The character of these discretely self-similar solutions as the result of a Hopf bifurcation from ordinarily self-similar solutions is also described.

  15. Electrostatics of proteins in dielectric solvent continua. II. Hamiltonian reaction field dynamics

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Tavan, Paul; Mathias, Gerald

    2014-03-01

    In Paper I of this work [S. Bauer, G. Mathias, and P. Tavan, J. Chem. Phys. 140, 104102 (2014)] we have presented a reaction field (RF) method, which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of polarizable molecular mechanics (MM) force fields. Building upon these results, here we suggest a method for linearly scaling Hamiltonian RF/MM molecular dynamics (MD) simulations, which we call "Hamiltonian dielectric solvent" (HADES). First, we derive analytical expressions for the RF forces acting on the solute atoms. These forces properly account for all those conditions, which have to be self-consistently fulfilled by RF quantities introduced in Paper I. Next we provide details on the implementation, i.e., we show how our RF approach is combined with a fast multipole method and how the self-consistency iterations are accelerated by the use of the so-called direct inversion in the iterative subspace. Finally we demonstrate that the method and its implementation enable Hamiltonian, i.e., energy and momentum conserving HADES-MD, and compare in a sample application on Ac-Ala-NHMe the HADES-MD free energy landscape at 300 K with that obtained in Paper I by scanning of configurations and with one obtained from an explicit solvent simulation.

  16. Forward marching procedure for separated boundary-layer flows

    NASA Technical Reports Server (NTRS)

    Carter, J. E.; Wornom, S. F.

    1975-01-01

    A forward-marching procedure for separated boundary-layer flows which permits the rapid and accurate solution of flows of limited extent is presented. The streamwise convection of vorticity in the reversed flow region is neglected, and this approximation is incorporated into a previously developed (Carter, 1974) inverse boundary-layer procedure. The equations are solved by the Crank-Nicolson finite-difference scheme in which column iteration is carried out at each streamwise station. Instabilities encountered in the column iterations are removed by introducing timelike terms in the finite-difference equations. This provides both unconditional diagonal dominance and a column iterative scheme, found to be stable using the von Neumann stability analysis.
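
    The Crank-Nicolson scheme named above can be illustrated on the simplest parabolic model problem, the 1-D heat equation; this sketch shows the scheme itself, not the paper's inverse boundary-layer procedure (all grid parameters are illustrative assumptions):

```python
import numpy as np

def crank_nicolson_heat(u0, alpha, dx, dt, n_steps):
    """Crank-Nicolson for u_t = alpha * u_xx with fixed (Dirichlet) ends:
    averages the explicit and implicit Laplacians, giving second-order
    accuracy in time and unconditional stability."""
    n = len(u0)
    r = alpha * dt / (2.0 * dx**2)
    A = np.diag(np.full(n, 1 + 2 * r)) + np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    B = np.diag(np.full(n, 1 - 2 * r)) + np.diag(np.full(n - 1, r), 1) + np.diag(np.full(n - 1, r), -1)
    for M in (A, B):                      # pin boundary values
        M[0, :] = 0.0
        M[-1, :] = 0.0
        M[0, 0] = M[-1, -1] = 1.0
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(A, B @ u)
    return u

# demo: decay of the lowest sine mode on [0, 1]
x = np.linspace(0.0, 1.0, 51)
u = crank_nicolson_heat(np.sin(np.pi * x), alpha=1.0, dx=x[1] - x[0], dt=1e-3, n_steps=100)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 0.1)
```

    In the boundary-layer setting, the analogue of the added timelike terms is a diagonal contribution that restores diagonal dominance of the column system in the reversed-flow region.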

  17. Analysis of aircraft tires via semianalytic finite elements

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Kyun O.; Tanner, John A.

    1990-01-01

    A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell.

  18. SNL-SAND-IV v. 0.9 (beta)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Patrick J.

    2016-10-05

    The code is used to provide an unfolded/adjusted energy-dependent fission reactor neutron spectrum based upon an input trial spectrum and a set of measured activities. This is part of a neutron environment characterization that supports testing in a given reactor environment. An iterative perturbation method is used to obtain a "best fit" neutron flux spectrum for a given input set of infinitely dilute foil activities. The calculational procedure consists of the selection of a trial flux spectrum to serve as the initial approximation to the solution, and subsequent iteration to a form acceptable as an appropriate solution. The solution is specified either as a time-integrated flux (fluence) for a pulsed environment or as a flux for a steady-state neutron environment.
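
    The iterative perturbation idea can be sketched in the style of the classic SAND-II multiplicative update; this is a simplified illustration under an assumed response matrix, not the SNL-SAND-IV code itself:

```python
import numpy as np

def sand_unfold(sigma, a_meas, phi0, n_iter=50):
    """SAND-II-style multiplicative spectrum adjustment sketch: each
    group flux is scaled by a response-weighted mean of the log ratios
    of measured to calculated foil activities."""
    phi = phi0.copy()
    for _ in range(n_iter):
        a_calc = sigma @ phi                      # predicted activities
        W = sigma * phi / a_calc[:, None]         # fractional responses
        log_ratio = np.log(a_meas / a_calc)
        phi = phi * np.exp((W * log_ratio[:, None]).sum(axis=0) / W.sum(axis=0))
    return phi

# demo: an assumed 3-foil x 4-group response matrix; the trial spectrum is
# a uniformly scaled version of the true one, which this update corrects
# exactly in a single pass
sigma = np.array([[1.0, 0.5, 0.2, 0.1],
                  [0.1, 1.0, 0.5, 0.2],
                  [0.05, 0.1, 1.0, 0.5]])
phi_true = np.array([1.0, 2.0, 0.5, 1.5])
a_meas = sigma @ phi_true
phi = sand_unfold(sigma, a_meas, phi0=2.0 * phi_true)
```

    The multiplicative form keeps the flux positive by construction, which is one reason this family of unfolding algorithms has remained in dosimetry practice.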

  19. Self-consistent Non-LTE Model of Infrared Molecular Emissions and Oxygen Dayglows in the Mesosphere and Lower Thermosphere

    NASA Technical Reports Server (NTRS)

    Feofilov, Artem G.; Yankovsky, Valentine A.; Pesnell, William D.; Kutepov, Alexander A.; Goldberg, Richard A.; Mauilova, Rada O.

    2007-01-01

    We present the new version of the ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) model. The model allows simultaneous, self-consistent calculation of the non-LTE populations of the electronic-vibrational levels of the O3 and O2 photolysis products and the vibrational level populations of CO2, N2, O2, O3, H2O, CO and other molecules, with detailed accounting for the variety of electronic-vibrational, vibrational-vibrational and vibrational-translational energy exchange processes. The model was used as the reference for modeling the O2 dayglows and infrared molecular emissions for self-consistent diagnostics of the multi-channel space observations of the MLT in the SABER experiment. It also allows reevaluating the thermalization efficiency of the absorbed solar ultraviolet energy and the infrared radiative cooling/heating of the MLT by detailed accounting of the electronic-vibrational relaxation of excited photolysis products via the complex chain of collisional energy conversion processes down to the vibrational energy of optically active trace gas molecules.

  20. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    DOE PAGES

    de Vries, Peter C.; Luce, Timothy C.; Bae, Young-soon; ...

    2017-11-22

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience base. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequent increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. Here, the results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  1. Multi-machine analysis of termination scenarios with comparison to simulations of controlled shutdown of ITER discharges

    NASA Astrophysics Data System (ADS)

    de Vries, P. C.; Luce, T. C.; Bae, Y. S.; Gerhardt, S.; Gong, X.; Gribov, Y.; Humphreys, D.; Kavin, A.; Khayrutdinov, R. R.; Kessel, C.; Kim, S. H.; Loarte, A.; Lukash, V. E.; de la Luna, E.; Nunes, I.; Poli, F.; Qian, J.; Reinke, M.; Sauter, O.; Sips, A. C. C.; Snipes, J. A.; Stober, J.; Treutterer, W.; Teplukhina, A. A.; Voitsekhovitch, I.; Woo, M. H.; Wolfe, S.; Zabeo, L.; the Alcator C-MOD Team; the ASDEX Upgrade Team; the DIII-D Team; the EAST Team; contributors, JET; the KSTAR Team; the NSTX-U Team; the TCV Team; IOS members, ITPA; experts

    2018-02-01

    To improve our understanding of the dynamics and control of ITER terminations, a study has been carried out on data from existing tokamaks. The aim of this joint analysis is to compare the assumptions for ITER terminations with the present experience base. The study examined the parameter ranges in which present-day devices operated during their terminations, as well as the dynamics of these parameters. The analysis of a database, built using a selected set of experimental termination cases, showed that the H-mode density decays more slowly than the plasma current ramp-down. The consequent increase in fGW limits the duration of the H-mode phase or results in disruptions. The lower temperatures after the drop out of H-mode allow the plasma internal inductance to increase, but vertical stability control remains manageable in ITER at high internal inductance when accompanied by a strong elongation reduction. This will result in ITER terminations remaining longer at low q (q95 ~ 3) than most present-day devices during the current ramp-down. A fast power ramp-down leads to a larger change in βp at the H-L transition, but the experimental data showed that these changes are manageable for the ITER radial position control. The analysis of JET data shows that radiation and impurity levels significantly alter the H-L transition dynamics. Self-consistent calculations of the impurity content and resulting radiation should be taken into account when modelling ITER termination scenarios. The results from this analysis can be used to better prescribe the inputs for the detailed modelling and preparation of ITER termination scenarios.

  2. CORSICA modelling of ITER hybrid operation scenarios

    NASA Astrophysics Data System (ADS)

    Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.

    2016-12-01

    The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magneto-hydro-dynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.

  3. Fuel Burn Estimation Using Real Track Data

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.

    2011-01-01

    A procedure for estimating fuel burned based on actual flight track data, and on drag and fuel-flow models, is described. The procedure consists of estimating aircraft and wind states, lift, drag and thrust. Fuel-flow for jet aircraft is determined in terms of thrust, true airspeed and altitude as prescribed by the Base of Aircraft Data fuel-flow model. This paper provides a theoretical foundation for computing fuel-flow with most of the information derived from actual flight data. The procedure does not require explicit thrust and calibrated airspeed/Mach profile models, which are typically needed for trajectory synthesis. To validate the fuel computation method, flight test data provided by the Federal Aviation Administration were processed. Results from this method show that fuel consumed can be estimated within 1% of the actual fuel consumed in the flight test. Next, fuel consumption was estimated with simplified lift and thrust models. Results show negligible difference with respect to the full model without simplifications. An iterative takeoff weight estimation procedure is described for estimating fuel consumption, when takeoff weight is unavailable, and for establishing fuel consumption uncertainty bounds. Finally, the suitability of using radar-based position information for fuel estimation is examined. It is shown that fuel usage could be estimated within 5.4% of the actual value using positions reported in the Airline Situation Display to Industry data with simplified models and iterative takeoff weight computation.
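
The iterative takeoff-weight step lends itself to a compact fixed-point sketch. The fuel model below is a deliberately crude, hypothetical stand-in (burn proportional to weight times distance), not the BADA fuel-flow model used in the paper; the constant `k` and all inputs are invented for illustration.

```python
# Hypothetical fixed-point iteration for takeoff weight, in the spirit of the
# procedure described above.  Assumption: landing (zero-fuel) weight is known
# and fuel burned grows with takeoff weight.

def fuel_burned(takeoff_weight_kg, range_km, k=2.0e-5):
    """Toy model (assumption): fuel burn proportional to weight * distance."""
    return k * takeoff_weight_kg * range_km

def estimate_takeoff_weight(landing_weight_kg, range_km, tol=1e-6, max_iter=100):
    w = landing_weight_kg  # initial guess: carry no fuel at all
    for _ in range(max_iter):
        # Takeoff weight must equal landing weight plus the fuel burned
        # when starting at that takeoff weight: a fixed-point condition.
        w_new = landing_weight_kg + fuel_burned(w, range_km)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```

Because the fuel fraction per iteration is small, the map is a contraction and the loop converges in a handful of steps; the same structure accommodates richer fuel models.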

  4. Full-wave Moment Tensor and Tomographic Inversions Based on 3D Strain Green Tensor

    DTIC Science & Technology

    2010-01-31

    propagation in three-dimensional (3D) earth, linearizes the inverse problem by iteratively updating the earth model, and provides an accurate way to...self-consistent FD-SGT databases constructed from finite-difference simulations of wave propagation in full-wave tomographic models can be used to...determine the moment tensors within minutes after a seismic event, making it possible for real-time monitoring using 3D models.

  5. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.

  6. Preclinical studies on the reinforcing effects of cannabinoids. A tribute to the scientific research of Dr. Steve Goldberg

    PubMed Central

    Tanda, Gianluigi

    2016-01-01

    Rationale The reinforcing effects of most abused drugs have been consistently demonstrated and studied in animal models, although those of marijuana were not, until the demonstration fifteen years ago that THC could serve as a reinforcer in self-administration (SA) procedures in squirrel monkeys. Until then, those effects were inferred using indirect assessments. Objectives The aim of this manuscript is to review the primary preclinical procedures used to indirectly and directly infer reinforcing effects of cannabinoid drugs. Methods Results will be reviewed from studies of cannabinoid discrimination, intracranial self-stimulation (ICSS), conditioned place preference (CPP), as well as changes in levels of dopamine assessed in brain areas related to reinforcement, and finally from self-administration procedures. For each procedure, an evaluation will be made of the predictive validity in detecting the potential abuse liability of cannabinoids based on seminal papers, with the addition of selected reports from more recent years, especially those from Dr. Goldberg's research group. Results and Conclusions ICSS and CPP do not provide consistent results for the assessment of potential for abuse of cannabinoids. However, drug-discrimination and neurochemistry procedures appear to detect potential for abuse of cannabinoids, as well as several novel "designer cannabinoid drugs." Though after 15 years it remains somewhat problematic to transfer the self-administration model of marijuana abuse from squirrel monkeys to other species, studies with the former species have substantially advanced the field, and several reports have been published with consistent self-administration of cannabinoid agonists in rodents. PMID:27026633

  7. Towards a fully self-consistent inversion combining historical and paleomagnetic data for geomagnetic field reconstructions

    NASA Astrophysics Data System (ADS)

    Arneitz, P.; Leonhardt, R.; Fabian, K.; Egli, R.

    2017-12-01

    Historical and paleomagnetic data are the two main sources of information about the long-term geomagnetic field evolution. Historical observations extend to the late Middle Ages, and prior to the 19th century, they consisted mainly of pure declination measurements from navigation and orientation logs. Field reconstructions going back further in time rely solely on magnetization acquired by rocks, sediments, and archaeological artefacts. The combined dataset is characterized by a strongly inhomogeneous spatio-temporal distribution and highly variable data reliability and quality. Therefore, an adequate weighting of the data that correctly accounts for data density, type, and realistic error estimates represents the major challenge for an inversion approach. Until now, there has not been a fully self-consistent geomagnetic model that correctly recovers the variation of the geomagnetic dipole together with the higher-order spherical harmonics. Here we present a new geomagnetic field model for the last 4 kyrs based on historical, archeomagnetic and volcanic records. The iterative Bayesian inversion approach targets the implementation of reliable error treatment, which allows different record types to be combined in a fully self-consistent way. Modelling results will be presented along with a thorough analysis of model limitations, validity and sensitivity.

  8. irGPU.proton.Net: Irregular strong charge interaction networks of protonatable groups in protein molecules--a GPU solver using the fast multipole method and statistical thermodynamics.

    PubMed

    Kantardjiev, Alexander A

    2015-04-05

    A cluster of strongly interacting ionization groups in protein molecules with irregular ionization behavior is suggestive of a specific structure-function relationship. However, their computational treatment is unconventional (e.g., lack of convergence in the naive self-consistent iterative algorithm). A rigorous treatment requires evaluation of Boltzmann-averaged statistical-mechanics sums and estimation of the electrostatic energy of each microstate. irGPU: Irregular strong interactions in proteins--a GPU solver is a novel solution to a versatile problem in protein biophysics--atypical protonation behavior of coupled groups. The computational severity of the problem is alleviated by parallelization (via GPU kernels), which is applied to the electrostatic interaction evaluation (including explicit electrostatics via the fast multipole method) as well as to estimation of the statistical-mechanics sums (partition function). Special attention is given to ease of use and encapsulation of theoretical details without sacrificing the rigor of the computational procedures. irGPU is not just a solution-in-principle but a promising practical application with the potential to entice the community into a deeper understanding of the principles governing biomolecular mechanisms. © 2015 Wiley Periodicals, Inc.
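
The Boltzmann-averaged treatment that replaces the non-convergent self-consistent iteration can be illustrated by brute-force enumeration for a small cluster of protonatable sites. This is a hedged sketch with arbitrary microstate energies, not the irGPU implementation (which obtains the energies from fast-multipole electrostatics on the GPU):

```python
import numpy as np
from itertools import product

# Exact Boltzmann average over all protonation microstates of a small
# cluster.  Energies are illustrative inputs; a real calculation would
# derive them from electrostatics.

def site_protonation_probs(intrinsic_e, coupling, beta=1.0):
    """intrinsic_e: per-site protonation energies; coupling: symmetric
    site-site interaction matrix (zero diagonal assumed)."""
    n = len(intrinsic_e)
    states = np.array(list(product([0, 1], repeat=n)))     # all 2**n microstates
    energies = states @ intrinsic_e \
        + 0.5 * np.einsum('si,ij,sj->s', states, coupling, states)
    weights = np.exp(-beta * (energies - energies.min()))  # stabilised weights
    z = weights.sum()                                      # partition function
    return (weights[:, None] * states).sum(axis=0) / z     # <s_i> per site
```

The 2**n enumeration is exactly what makes GPU parallelization attractive for clusters of more than a few dozen coupled groups.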

  9. A Monte Carlo Study of an Iterative Wald Test Procedure for DIF Analysis

    ERIC Educational Resources Information Center

    Cao, Mengyang; Tay, Louis; Liu, Yaowu

    2017-01-01

    This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…

  10. Fast divide-and-conquer algorithm for evaluating polarization in classical force fields

    NASA Astrophysics Data System (ADS)

    Nocito, Dominique; Beran, Gregory J. O.

    2017-03-01

    Evaluation of the self-consistent polarization energy forms a major computational bottleneck in polarizable force fields. In large systems, the linear polarization equations are typically solved iteratively with techniques based on Jacobi iterations (JI) or preconditioned conjugate gradients (PCG). Two new variants of JI are proposed here that exploit domain decomposition to accelerate the convergence of the induced dipoles. The first, divide-and-conquer JI (DC-JI), is a block Jacobi algorithm which solves the polarization equations within non-overlapping sub-clusters of atoms directly via Cholesky decomposition, and iterates to capture interactions between sub-clusters. The second, fuzzy DC-JI, achieves further acceleration by employing overlapping blocks. Fuzzy DC-JI is analogous to an additive Schwarz method, but with distance-based weighting when averaging the fuzzy dipoles from different blocks. Key to the success of these algorithms is the use of K-means clustering to identify natural atomic sub-clusters automatically for both algorithms and to determine the appropriate weights in fuzzy DC-JI. The algorithm employs knowledge of the 3-D spatial interactions to group important elements in the 2-D polarization matrix. When coupled with direct inversion in the iterative subspace (DIIS) extrapolation, fuzzy DC-JI/DIIS in particular converges in a comparable number of iterations as PCG, but with lower computational cost per iteration. In the end, the new algorithms demonstrated here accelerate the evaluation of the polarization energy by 2-3 fold compared to existing implementations of PCG or JI/DIIS.
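
A minimal sketch of the block-Jacobi idea behind DC-JI follows, assuming contiguous index blocks in place of the paper's K-means atomic sub-clusters and omitting the fuzzy overlaps and DIIS extrapolation:

```python
import numpy as np

# Block Jacobi iteration for a linear system A x = b (A symmetric positive
# definite): solve each diagonal block directly via Cholesky, treat
# couplings between blocks with the previous iterate.

def block_jacobi(A, b, block_size=4, tol=1e-10, max_iter=200):
    n = len(b)
    blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    # Pre-factor each diagonal block once (reused every iteration).
    factors = [np.linalg.cholesky(A[np.ix_(blk, blk)]) for blk in blocks]
    x = np.zeros(n)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for blk, L in zip(blocks, factors):
            # Right-hand side with off-block couplings taken from the old
            # iterate (Jacobi, not Gauss-Seidel): b_blk - sum_offblock A x.
            r = b[blk] - A[blk, :] @ x + A[np.ix_(blk, blk)] @ x[blk]
            y = np.linalg.solve(L, r)             # forward substitution
            x_new[blk] = np.linalg.solve(L.T, y)  # back substitution
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

With a block-diagonally dominant matrix the iteration converges; the DIIS extrapolation and overlapping "fuzzy" blocks described above are what make the scheme competitive with PCG in practice.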

  11. Self-Regulated Learning Procedure for University Students: The "Meaningful Text-Reading" Strategy

    ERIC Educational Resources Information Center

    Roman Sanchez, Jose Maria

    2004-01-01

    Introduction: Experimental validation of a self-regulated learning procedure for university students, i.e. the "meaningful text-reading" strategy, is reported in this paper. The strategy's theoretical framework is the "ACRA Model" of learning strategies. The strategy consists of a flexible, recurring sequence of five mental operations of written…

  12. Iterative pass optimization of sequence data

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  13. Improved evaluation of optical depth components from Langley plot data

    NASA Technical Reports Server (NTRS)

    Biggar, S. F.; Gellman, D. I.; Slater, P. N.

    1990-01-01

    A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.
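
The Langley step underlying the procedure can be sketched as follows. The regression of ln(V) against airmass and subtraction of a known Rayleigh contribution are standard; the numbers, and the assumption that the remainder is purely aerosol, are illustrative only:

```python
import numpy as np

# Langley plot: V = V0 * exp(-m * tau), so ln(V) is linear in airmass m
# with slope -tau and intercept ln(V0).

def langley_fit(airmass, voltage):
    """Return (ln V0, total optical depth) from a Langley regression."""
    slope, intercept = np.polyfit(airmass, np.log(voltage), 1)
    return intercept, -slope

# Synthetic, noise-free morning of radiometer data (illustrative values).
tau_true, v0_true, tau_rayleigh = 0.25, 1.6, 0.10
m = np.linspace(1.5, 5.0, 20)
v = v0_true * np.exp(-m * tau_true)

ln_v0, tau_total = langley_fit(m, v)
tau_aerosol = tau_total - tau_rayleigh   # component separation by subtraction
```

The iterative refinement described above then feeds the aerosol component at several wavelengths back into the Junge-exponent fit, rather than relying on only two points of the extinction data.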

  14. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N sub 0 approaches infinity (regardless of the relative sizes of N sub 0 and N sub i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
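
As a concrete instance of such successive-approximation schemes, the EM fixed-point update for a two-component normal mixture can be sketched as below. This is the step-size-1 member of the family and, for brevity, ignores the partially identified (labelled) samples treated in the paper:

```python
import numpy as np

# EM for a two-component 1-D Gaussian mixture from unlabelled data only.
# Each pass applies the successive-approximation (fixed-point) update to
# the mixture weights, means and standard deviations.

def em_two_gaussians(x, n_iter=200):
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialisation
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (the 1/sqrt(2*pi) factor cancels in the ratio)
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```

The over-relaxed variants with step size between 1 and 2 analysed in the paper accelerate exactly this kind of fixed-point update.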

  15. In-vessel tritium retention and removal in ITER

    NASA Astrophysics Data System (ADS)

    Federici, G.; Anderl, R. A.; Andrew, P.; Brooks, J. N.; Causey, R. A.; Coad, J. P.; Cowgill, D.; Doerner, R. P.; Haasz, A. A.; Janeschitz, G.; Jacob, W.; Longhurst, G. R.; Nygren, R.; Peacock, A.; Pick, M. A.; Philipps, V.; Roth, J.; Skinner, C. H.; Wampler, W. R.

    Tritium retention inside the vacuum vessel has emerged as a potentially serious constraint in the operation of the International Thermonuclear Experimental Reactor (ITER). In this paper we review recent tokamak and laboratory data on hydrogen, deuterium and tritium retention for materials and conditions which are of direct relevance to the design of ITER. These data, together with significant advances in understanding the underlying physics, provide the basis for modelling predictions of the tritium inventory in ITER. We present the derivation, and discuss the results, of current predictions both in terms of implantation and codeposition rates, and critically discuss their uncertainties and sensitivity to important design and operation parameters such as the plasma edge conditions, the surface temperature, the presence of mixed materials, etc. These analyses are consistent with recent tokamak findings and show that codeposition of tritium occurs on the divertor surfaces primarily with carbon eroded from a limited area of the divertor near the strike zones. This issue remains an area of serious concern for ITER. The calculated codeposition rates for ITER are relatively high and the in-vessel tritium inventory limit could be reached, under worst assumptions, in approximately a week of continuous operation. We discuss the implications of these estimates on the design, operation and safety of ITER and present a strategy for resolving the issues. We conclude that as long as carbon is used in ITER - and more generically in any other next-step experimental fusion facility fuelled with tritium - the efficient control and removal of the codeposited tritium is essential. There is a critical need to develop and test in situ cleaning techniques and procedures that are beyond the current experience of present-day tokamaks. We review some of the principal methods that are being investigated and tested, in conjunction with the R&D work still required to extrapolate their applicability to ITER. Finally, unresolved issues are identified and recommendations are made on potential R&D avenues for their resolution.

  16. An Iterative Inference Procedure Applying Conditional Random Fields for Simultaneous Classification of Land Cover and Land Use

    NASA Astrophysics Data System (ADS)

    Albert, L.; Rottensteiner, F.; Heipke, C.

    2015-08-01

    Land cover and land use exhibit strong contextual dependencies. We propose a novel approach for the simultaneous classification of land cover and land use, where semantic and spatial context is considered. The image sites for land cover and land use classification form a hierarchy consisting of two layers: a land cover layer and a land use layer. We apply Conditional Random Fields (CRF) at both layers. The layers differ with respect to the image entities corresponding to the nodes, the employed features and the classes to be distinguished. In the land cover layer, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Both CRFs model spatial dependencies between neighbouring image sites. The complex semantic relations between land cover and land use are integrated in the classification process by using contextual features. We propose a new iterative inference procedure for the simultaneous classification of land cover and land use, in which the two classification tasks mutually influence each other. This helps to improve the classification accuracy for certain classes. The main idea of this approach is that semantic context helps to refine the class predictions, which, in turn, leads to more expressive context information. Thus, potentially wrong decisions can be reversed at later stages. The approach is designed for input data based on aerial images. Experiments are carried out on a test site to evaluate the performance of the proposed method. We show the effectiveness of the iterative inference procedure and demonstrate that a smaller size of the super-pixels has a positive influence on the classification result.

  17. Orbital-Dependent Density Functionals for Chemical Catalysis

    DTIC Science & Technology

    2011-02-16

    E2 and SN2 Reactions: Effects of the Choice of Density Functional, Basis Set, and Self-Consistent Iterations," Y. Zhao and D. G. Truhlar, Journal...for the anti-E2, syn-E2, and SN2 pathways of the reactions of F- and Cl- with CH3CH2F and

  18. Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094

    NASA Technical Reports Server (NTRS)

    Minter, R. T. (Principal Investigator)

    1972-01-01

    The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or he may begin with the assumption that all the data belongs to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data is assigned to cluster 1. The means and standard deviations are calculated for each cluster.
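
The assign-and-update cycle described above can be sketched as below, reading "absolute distance" as the L1 distance to each cluster mean (an assumption) and omitting ISOCLS's split and merge logic:

```python
import numpy as np

# One assignment/update pass of an ISOCLS-style clustering step:
# assign each point to its nearest cluster mean, then recompute the
# per-cluster means and standard deviations.

def isocls_pass(data, means):
    """data: (n_points, n_dims); means: (n_clusters, n_dims)."""
    # L1 distance from every point to every current cluster mean.
    dist = np.abs(data[:, None, :] - means[None, :, :]).sum(axis=2)
    labels = dist.argmin(axis=1)
    new_means = np.array([data[labels == k].mean(axis=0) for k in range(len(means))])
    stds = np.array([data[labels == k].std(axis=0) for k in range(len(means))])
    return labels, new_means, stds
```

Repeating this pass until the assignments stop changing gives the basic iterative self-organizing behaviour; the standard deviations are what a full implementation would use to decide when to split or merge clusters.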

  19. Solution of Heliospheric Propagation: Unveiling the Local Interstellar Spectra of Cosmic-ray Species

    NASA Astrophysics Data System (ADS)

    Boschini, M. J.; Della Torre, S.; Gervasi, M.; Grandi, D.; Jóhannesson, G.; Kachelriess, M.; La Vacca, G.; Masi, N.; Moskalenko, I. V.; Orlando, E.; Ostapchenko, S. S.; Pensotti, S.; Porter, T. A.; Quadrani, L.; Rancoita, P. G.; Rozza, D.; Tacconi, M.

    2017-05-01

    Local interstellar spectra (LIS) for protons, helium, and antiprotons are built using the most recent experimental results combined with state-of-the-art models for propagation in the Galaxy and heliosphere. Two propagation packages, GALPROP and HelMod, are combined to provide a single framework that is run to reproduce direct measurements of cosmic-ray (CR) species at different modulation levels and at both polarities of the solar magnetic field. To do so in a self-consistent way, an iterative procedure was developed, where the GALPROP LIS output is fed into HelMod, providing modulated spectra for specific time periods of selected experiments to compare with the data; the HelMod parameter optimization is performed at this stage and looped back to adjust the LIS using the new GALPROP run. The parameters were tuned with the maximum likelihood procedure using an extensive data set of proton spectra from 1997 to 2015. The proposed LIS accommodate both the low-energy interstellar CR spectra measured by Voyager 1 and the high-energy observations by BESS, Pamela, AMS-01, and AMS-02 made from balloons and near-Earth payloads; they also account for Ulysses counting rate features measured out of the ecliptic plane. The solution found is in good agreement with proton, helium, and antiproton data by AMS-02, BESS, and PAMELA in the whole energy range.
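
The GALPROP-HelMod loop can be caricatured as a toy alternating fit, in which the textbook force-field approximation stands in for HelMod and a single power law J(E) = c E^-gamma stands in for the GALPROP LIS. Every number and both "stages" are drastic simplifications of the actual packages:

```python
import numpy as np

M_P = 0.938  # proton rest energy, GeV

def modulate(lis, energy, phi):
    """Force-field approximation: modulate a LIS (function of kinetic
    energy in GeV) by a modulation potential phi."""
    e_is = energy + phi
    return lis(e_is) * energy * (energy + 2 * M_P) / (e_is * (e_is + 2 * M_P))

def fit_lis_and_phi(energy, data, n_outer=20):
    c, gamma, phi = 1.0, 2.7, 0.3          # deliberately wrong starting guesses
    for _ in range(n_outer):
        lis = lambda e, c=c, g=gamma: c * e ** -g
        # "HelMod" stage: grid-search the modulation potential for this LIS.
        grid = np.linspace(0.0, 2.0, 2001)
        cost = [np.sum((np.log(modulate(lis, energy, p)) - np.log(data)) ** 2)
                for p in grid]
        phi = float(grid[int(np.argmin(cost))])
        # "GALPROP" stage: refit the LIS power law to the demodulated data.
        demod = data * (energy + phi) * (energy + phi + 2 * M_P) \
            / (energy * (energy + 2 * M_P))
        slope, ln_c = np.polyfit(np.log(energy + phi), np.log(demod), 1)
        c, gamma = float(np.exp(ln_c)), float(-slope)
    return c, gamma, phi
```

Each outer pass minimizes the same log-space misfit exactly in one block of parameters, so the loop mirrors the feed-forward/loop-back structure described above on a problem small enough to inspect.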

  20. VizieR Online Data Catalog: Local interstellar spectra of cosmic-ray species (Boschini+, 2017)

    NASA Astrophysics Data System (ADS)

    Boschini, M. J.; Torre, S. D.; Gervasi, M.; Grandi, D.; Johannesson, G.; Kachelriess, M.; La Vacca, G.; Masi, N.; Moskalenko, I. V.; Orlando, E.; Ostapchenko, S. S.; Pensotti, S.; Porter, T. A.; Quadrani, L.; Rancoita, P. G.; Rozza, D.; Tacconi, M.

    2017-11-01

    Local interstellar spectra (LIS) for protons, helium, and antiprotons are built using the most recent experimental results combined with state-of-the-art models for propagation in the Galaxy and heliosphere. Two propagation packages, GALPROP and HelMod, are combined to provide a single framework that is run to reproduce direct measurements of cosmic-ray (CR) species at different modulation levels and at both polarities of the solar magnetic field. To do so in a self-consistent way, an iterative procedure was developed, where the GALPROP LIS output is fed into HelMod, providing modulated spectra for specific time periods of selected experiments to compare with the data; the HelMod parameter optimization is performed at this stage and looped back to adjust the LIS using the new GALPROP run. The parameters were tuned with the maximum likelihood procedure using an extensive data set of proton spectra from 1997 to 2015. The proposed LIS accommodate both the low-energy interstellar CR spectra measured by Voyager 1 and the high-energy observations by BESS, Pamela, AMS-01, and AMS-02 made from balloons and near-Earth payloads; they also account for Ulysses counting rate features measured out of the ecliptic plane. The solution found is in good agreement with proton, helium, and antiproton data by AMS-02, BESS, and PAMELA in the whole energy range. (3 data files).

  1. Solution of Heliospheric Propagation: Unveiling the Local Interstellar Spectra of Cosmic-ray Species

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschini, M. J.; Torre, S. Della; Gervasi, M.

    2017-05-10

    Local interstellar spectra (LIS) for protons, helium, and antiprotons are built using the most recent experimental results combined with state-of-the-art models for propagation in the Galaxy and heliosphere. Two propagation packages, GALPROP and HelMod, are combined to provide a single framework that is run to reproduce direct measurements of cosmic-ray (CR) species at different modulation levels and at both polarities of the solar magnetic field. To do so in a self-consistent way, an iterative procedure was developed, where the GALPROP LIS output is fed into HelMod, providing modulated spectra for specific time periods of selected experiments to compare with the data; the HelMod parameter optimization is performed at this stage and looped back to adjust the LIS using the new GALPROP run. The parameters were tuned with the maximum likelihood procedure using an extensive data set of proton spectra from 1997 to 2015. The proposed LIS accommodate both the low-energy interstellar CR spectra measured by Voyager 1 and the high-energy observations by BESS, Pamela, AMS-01, and AMS-02 made from balloons and near-Earth payloads; they also account for Ulysses counting rate features measured out of the ecliptic plane. The solution found is in good agreement with proton, helium, and antiproton data by AMS-02, BESS, and PAMELA in the whole energy range.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliaga, José I., E-mail: aliaga@uji.es; Alonso, Pedro; Badía, José M.

    We introduce a new iterative Krylov subspace-based eigensolver for the simulation of macromolecular motions on desktop multithreaded platforms equipped with multicore processors and, possibly, a graphics accelerator (GPU). The method consists of two stages, with the original problem first reduced into a simpler band-structured form by means of a high-performance compute-intensive procedure. This is followed by a memory-intensive but low-cost Krylov iteration, which is off-loaded to be computed on the GPU by means of an efficient data-parallel kernel. The experimental results reveal the performance of the new eigensolver. Concretely, when applied to the simulation of macromolecules with a few thousand degrees of freedom, and when the number of eigenpairs to be computed is small to moderate, the new solver outperforms other methods implemented as part of high-performance numerical linear algebra packages for multithreaded architectures.

  3. A method for the dynamic and thermal stress analysis of space shuttle surface insulation

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Levy, A.; Austin, F.

    1975-01-01

    The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.

  4. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    PubMed

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.

  5. Self-consistent gyrokinetic modeling of neoclassical and turbulent impurity transport

    NASA Astrophysics Data System (ADS)

    Estève, D.; Sarazin, Y.; Garbet, X.; Grandgirard, V.; Breton, S.; Donnel, P.; Asahi, Y.; Bourdelle, C.; Dif-Pradalier, G.; Ehrlacher, C.; Emeriau, C.; Ghendrih, Ph.; Gillot, C.; Latu, G.; Passeron, C.

    2018-03-01

    Trace impurity transport is studied with the flux-driven gyrokinetic GYSELA code (Grandgirard et al 2016 Comput. Phys. Commun. 207 35). A reduced and linearized multi-species collision operator has been recently implemented, so that both neoclassical and turbulent transport channels can be treated self-consistently on an equal footing. In the Pfirsch-Schlüter regime that is probably relevant for tungsten, the standard expression for the neoclassical impurity flux is shown to be recovered from gyrokinetics with the employed collision operator. Purely neoclassical simulations of deuterium plasma with trace impurities of helium, carbon and tungsten lead to impurity diffusion coefficients, inward pinch velocities due to density peaking, and thermo-diffusion terms which quantitatively agree with neoclassical predictions and NEO simulations (Belli et al 2012 Plasma Phys. Control. Fusion 54 015015). The thermal screening factor appears to be less than predicted analytically in the Pfirsch-Schlüter regime, which can be detrimental to fusion performance. Finally, self-consistent nonlinear simulations have revealed that the tungsten impurity flux is not the sum of turbulent and neoclassical fluxes computed separately, as is usually assumed. The synergy partly results from the turbulence-driven in-out poloidal asymmetry of tungsten density. This result suggests the need for self-consistent simulations of impurity transport, i.e. including both turbulence and neoclassical physics, in view of quantitative predictions for ITER.

  6. A semi-automatic method for analysis of landscape elements using Shuttle Radar Topography Mission and Landsat ETM+ data

    NASA Astrophysics Data System (ADS)

    Ehsani, Amir Houshang; Quiel, Friedrich

    2009-02-01

    In this paper, we demonstrate artificial neural networks—self-organizing map (SOM)—as a semi-automatic method for extraction and analysis of landscape elements in the man and biosphere reserve "Eastern Carpathians". The Shuttle Radar Topography Mission (SRTM) collected data to produce generally available digital elevation models (DEMs). Together with Landsat Thematic Mapper data, this provides a unique, consistent and nearly worldwide data set. To integrate the DEM with Landsat data, it was re-projected from geographic coordinates to UTM with 28.5 m spatial resolution using cubic convolution interpolation. To provide quantitative morphometric parameters, first-order (slope) and second-order derivatives of the DEM—minimum curvature, maximum curvature and cross-sectional curvature—were calculated by fitting a bivariate quadratic surface with a window size of 9×9 pixels. These surface curvatures are strongly related to landform features and geomorphological processes. Four morphometric parameters and seven Landsat Enhanced Thematic Mapper (ETM+) bands were used as input for the SOM algorithm. After the network weights were randomly initialized, different learning parameter sets, e.g. initial radius, final radius and number of iterations, were investigated. An optimal SOM with 20 classes using 1000 iterations and a final neighborhood radius of 0.05 provided a low average quantization error of 0.3394 and was used for further analysis. The effect of randomization of initial weights for the optimal SOM was also studied. Feature space analysis, three-dimensional inspection and auxiliary data facilitated the assignment of semantic meaning to the output classes in terms of landform, based on morphometric analysis, and land use, based on spectral properties. Results were displayed as a thematic map of landscape elements according to form, cover and slope.
Spectral and morphometric signature analysis with corresponding zoom samples superimposed by contour lines were compared in detail to clarify the role of morphometric parameters to separate landscape elements. The results revealed the efficiency of SOM to integrate SRTM and Landsat data in landscape analysis. Despite the stochastic nature of SOM, the results in this particular study are not sensitive to randomization of initial weight vectors if many iterations are used. This procedure is reproducible for the same application with consistent results.
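The SOM training loop behind this record can be illustrated with a minimal NumPy sketch: a generic SOM with units on a 1-D lattice, a shrinking neighborhood radius and a decaying learning rate, plus the average quantization error used to compare parameter sets. Parameter values and function names here are illustrative, not those of the study.

```python
import numpy as np

def train_som(data, n_units=20, n_iter=1000, r0=2.0, r_final=0.05, seed=0):
    """Minimal self-organizing map with units arranged on a line."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))      # random initial weights
    pos = np.arange(n_units, dtype=float)              # unit positions on the map
    for t in range(n_iter):
        x = data[rng.integers(len(data))]              # random training sample
        bmu = np.argmin(np.linalg.norm(w - x, axis=1)) # best-matching unit
        frac = t / n_iter
        radius = r0 * (r_final / r0) ** frac           # shrinking neighborhood
        lr = 0.5 * (0.01 / 0.5) ** frac                # decaying learning rate
        h = np.exp(-((pos - bmu) ** 2) / (2.0 * radius ** 2))
        w += lr * h[:, None] * (x - w)                 # pull units toward the sample
    return w

def avg_quantization_error(data, w):
    """Mean distance from each sample to its best-matching unit."""
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

As in the study, one would rerun the training with different radii and iteration counts and keep the configuration with the lowest average quantization error.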

  7. Delphi Method Validation of a Procedural Performance Checklist for Insertion of an Ultrasound-Guided Internal Jugular Central Line.

    PubMed

    Hartman, Nicholas; Wittler, Mary; Askew, Kim; Manthey, David

    2016-01-01

    Placement of ultrasound-guided central lines is a critical skill for physicians in several specialties. Improving the quality of care delivered surrounding this procedure demands rigorous measurement of competency, and validated tools to assess performance are essential. Using the iterative, modified Delphi technique and experts in multiple disciplines across the United States, the study team created a 30-item checklist designed to assess competency in the placement of ultrasound-guided internal jugular central lines. Cronbach α was .94, indicating an excellent degree of internal consistency. Further validation of this checklist will require its implementation in simulated and clinical environments. © The Author(s) 2014.

  8. Measuring leader perceptions of school readiness for reforms: use of an iterative model combining classical and Rasch methods.

    PubMed

    Chatterji, Madhabi

    2002-01-01

    This study examines validity of data generated by the School Readiness for Reforms: Leader Questionnaire (SRR-LQ) using an iterative procedure that combines classical and Rasch rating scale analysis. Following content-validation and pilot-testing, principal axis factor extraction and promax rotation of factors yielded a five factor structure consistent with the content-validated subscales of the original instrument. Factors were identified based on inspection of pattern and structure coefficients. The rotated factor pattern, inter-factor correlations, convergent validity coefficients, and Cronbach's alpha reliability estimates supported the hypothesized construct properties. To further examine unidimensionality and efficacy of the rating scale structures, item-level data from each factor-defined subscale were subjected to analysis with the Rasch rating scale model. Data-to-model fit statistics and separation reliability for items and persons met acceptable criteria. Rating scale results suggested consistency of expected and observed step difficulties in rating categories, and correspondence of step calibrations with increases in the underlying variables. The combined approach yielded more comprehensive diagnostic information on the quality of the five SRR-LQ subscales; further research is continuing.
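Of the classical indices reported in this record, Cronbach's alpha is simple enough to compute directly. A minimal sketch of the standard formula (generic, not tied to the SRR-LQ data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of per-item score lists,
    one inner list per item, aligned across the same respondents."""
    k = len(items)
    item_vars = sum(pvariance(it) for it in items)          # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]        # per-respondent totals
    return k / (k - 1) * (1 - item_vars / pvariance(totals))
```

Items that covary strongly push alpha toward 1; uncorrelated items push it toward 0, which is why alpha is read as an internal-consistency estimate for a subscale.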

  9. 3D unstructured-mesh radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations, and is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
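The within-group source iteration these codes accelerate can be illustrated on the simplest possible case: an infinite homogeneous medium, where each sweep multiplies the error by the scattering ratio c = sigma_s/sigma_t. This toy fixed-point model (the real codes sweep a spatial-angular mesh) shows why diffusion-synthetic acceleration matters as c approaches 1.

```python
def source_iteration(S, sigma_t, sigma_s, tol=1e-10, max_iter=10_000):
    """Unaccelerated source iteration for an infinite homogeneous medium:
    phi_{k+1} = (S + sigma_s * phi_k) / sigma_t,
    converging to phi* = S / (sigma_t - sigma_s).
    The error shrinks by c = sigma_s / sigma_t per sweep."""
    phi, k = 0.0, 0
    while k < max_iter:
        phi_new = (S + sigma_s * phi) / sigma_t
        k += 1
        if abs(phi_new - phi) < tol:
            return phi_new, k
        phi = phi_new
    return phi, k
```

For c = 0.5 convergence takes a few dozen sweeps; for c = 0.9 it takes hundreds, which is the regime where synthetic acceleration pays off.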

  10. The role of simulation in the design of a neural network chip

    NASA Technical Reports Server (NTRS)

    Desai, Utpal; Roppel, Thaddeus A.; Padgett, Mary L.

    1993-01-01

    An iterative, simulation-based design procedure for a neural network chip is introduced. For this design procedure, the goal is to produce a chip layout for a neural network in which the weights are determined by transistor gate width-to-length ratios. In a given iteration, the current layout is simulated using the circuit simulator SPICE, and layout adjustments are made based on conventional gradient-descent methods. After the iteration converges, the chip is fabricated. Monte Carlo analysis is used to predict the effect of statistical fabrication process variations on the overall performance of the neural network chip.

  11. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortiz-Rodriguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), is based on traditional iterative procedures: it uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural-network technology. The artificial-intelligence approach of a neural network does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural network, the code is able to unfold the neutron spectrum and simultaneously calculate 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. The codes differ in that NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate the readings of 7 IAEA survey instruments using fluence-to-dose conversion coefficients. NSDann instead uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained network.
Contrary to iterative procedures, the neural-network approach makes it possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called the Neutron Spectrometry and Dosimetry computer tool was designed; the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
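The role of the initial guess in an iterative unfolding code can be sketched with a generic multiplicative update of the MLEM/SAND-II flavour. This is a stand-in, since the abstract does not specify the SPUNIT update rule; `R` is the detector response matrix and `phi0` plays the part of the compendium-selected guess.

```python
import numpy as np

def unfold(R, counts, phi0, n_iter=500):
    """Generic multiplicative iterative unfolding.
    R: (n_detectors, n_bins) response matrix
    counts: measured detector rates
    phi0: initial guess spectrum (kept strictly positive)."""
    phi = phi0.astype(float).copy()
    norm = R.sum(axis=0)                           # per-bin total sensitivity
    for _ in range(n_iter):
        pred = R @ phi                             # predicted detector rates
        phi *= (R.T @ (counts / pred)) / norm      # multiplicative correction
    return phi
```

Because the update is multiplicative, the spectrum stays nonnegative and retains structure from the initial guess, which is why an automated compendium-based guess (as in NSDUAZ) is attractive for this ill-conditioned problem.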

  12. Evaluating the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2013-07-01

    In this work the performance of two neutron spectrum unfolding codes based on iterative procedures and artificial neural networks is evaluated. The first code, Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), is based on traditional iterative procedures: it uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and the readings of 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural-network technology. The artificial-intelligence approach of a neural network does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural network, the code is able to unfold the neutron spectrum and simultaneously calculate 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. The codes differ in that NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate the readings of 7 IAEA survey instruments using fluence-to-dose conversion coefficients. NSDann instead uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained network.
Contrary to iterative procedures, the neural-network approach makes it possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer package called the Neutron Spectrometry and Dosimetry computer tool was designed; the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.

  13. The modified semi-discrete two-dimensional Toda lattice with self-consistent sources

    NASA Astrophysics Data System (ADS)

    Gegenhasi

    2017-07-01

    In this paper, we derive the Grammian determinant solutions to the modified semi-discrete two-dimensional Toda lattice equation, and then construct the modified semi-discrete two-dimensional Toda lattice equation with self-consistent sources via the source generation procedure. The algebraic structure of the resulting coupled modified differential-difference equation is clarified by presenting its Grammian determinant and Casorati determinant solutions. As an application of these determinant solutions, the explicit one-soliton and two-soliton solutions of the modified semi-discrete two-dimensional Toda lattice equation with self-consistent sources are given. We also construct another form of the modified semi-discrete two-dimensional Toda lattice equation with self-consistent sources, which is the Bäcklund transformation for the semi-discrete two-dimensional Toda lattice equation with self-consistent sources.

  14. Method for computing self-consistent solution in a gun code

    DOEpatents

    Nelson, Eric M

    2014-09-23

    Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
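The idea described in this patent record, pairing relaxation values with error eigenvalues so that each value damps one error mode even when one of the values would be unstable on its own, can be seen on a toy linear system (purely illustrative; the actual gun-code residuals and eigenvalue analysis are more involved).

```python
import numpy as np

def alternating_relaxation(A, b, omegas, n_cycles=1, x0=None):
    """Richardson iteration x <- x + omega * (b - A x), cycling through
    relaxation values chosen as reciprocals of the error eigenvalues."""
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    for _ in range(n_cycles):
        for w in omegas:
            x = x + w * (b - A @ x)
    return x

A = np.diag([1.0, 10.0])   # toy operator with error eigenvalues 1 and 10
b = np.array([3.0, 5.0])

# omega = 1/1 used alone amplifies the lambda = 10 error mode by |1 - 10| = 9
# (divergent), but alternating 1/1 and 1/10 annihilates both modes in one cycle.
x = alternating_relaxation(A, b, omegas=[1.0, 0.1])
```

Each factor (I - omega_k A) zeroes the error component belonging to the eigenvalue 1/omega_k, so successive iterations alternately reduce the two residual modes, which is the mechanism the patent exploits.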

  15. Exact Exchange calculations for periodic systems: a real space approach

    NASA Astrophysics Data System (ADS)

    Natan, Amir; Marom, Noa; Makmal, Adi; Kronik, Leeor; Kuemmel, Stephan

    2011-03-01

    We present a real-space method for exact-exchange Kohn-Sham calculations of periodic systems. The method is based on self-consistent solutions of the optimized effective potential (OEP) equation on a three-dimensional non-orthogonal grid, using norm conserving pseudopotentials. These solutions can be either exact, using the S-iteration approach, or approximate, using the Krieger, Li, and Iafrate (KLI) approach. We demonstrate, using a variety of systems, the importance of singularity corrections and use of appropriate pseudopotentials.

  16. Self-attitude awareness training: An aid to effective performance in microgravity and virtual environments

    NASA Technical Reports Server (NTRS)

    Parker, Donald E.; Harm, D. L.; Florer, Faith L.

    1993-01-01

    This paper describes ongoing development of training procedures to enhance self-attitude awareness in astronaut trainees. The procedures are based on observations regarding self-attitude (perceived self-orientation and self-motion) reported by astronauts. Self-attitude awareness training is implemented on a personal computer system and consists of lesson stacks programmed using Hypertalk with Macromind Director movie imports. Training evaluation will be accomplished by an active search task using the virtual Spacelab environment produced by the Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME-PAT) as well as by assessment of astronauts' performance and sense of well-being during orbital flight. The general purpose of self-attitude awareness training is to use as efficiently as possible the limited DOME-PAT training time available to astronauts prior to a space mission. We suggest that similar training procedures may enhance the performance of virtual environment operators.

  17. Determination of the effective sample thickness via radiative capture

    DOE PAGES

    Hurst, A. M.; Summers, N. C.; Szentmiklosi, L.; ...

    2015-09-14

    Our procedure for determining the effective thickness of non-uniform irregular-shaped samples via radiative capture is described. In this technique, partial γ-ray production cross sections of a compound nucleus produced in a neutron-capture reaction are measured using Prompt Gamma Activation Analysis and compared to their corresponding standardized absolute values. For the low-energy transitions, the measured cross sections are lower than their standard values due to significant photoelectric absorption of the γ rays within the bulk-sample volume itself. Using standard theoretical techniques, the amount of γ-ray self-absorption and neutron self-shielding can then be calculated by iteratively varying the sample thickness until the observed cross sections converge with the known standards. The overall attenuation provides a measure of the effective sample thickness illuminated by the neutron beam. This procedure is illustrated through radiative neutron capture using powdered oxide samples comprising enriched 186W and 182W, from which their tungsten-equivalent effective thicknesses are deduced to be 0.077(3) mm and 0.042(8) mm, respectively.
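The iterative thickness adjustment can be mimicked with a standard uniform-slab self-absorption factor and a bisection search on thickness. This is a simplified stand-in for the full γ-ray attenuation plus neutron self-shielding calculation; the attenuation coefficient `mu` and the slab formula are illustrative assumptions, not values from the paper.

```python
import math

def self_absorption(mu, t):
    """Average gamma-ray survival fraction for a uniform slab of thickness t
    with linear attenuation coefficient mu: (1 - exp(-mu t)) / (mu t)."""
    x = mu * t
    return (1.0 - math.exp(-x)) / x

def effective_thickness(ratio, mu, t_lo=1e-6, t_hi=10.0, tol=1e-10):
    """Bisect on thickness until the predicted attenuation matches the
    observed (measured / standard) cross-section ratio."""
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if self_absorption(mu, t_mid) > ratio:   # too little absorption: go thicker
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```

Because the survival fraction decreases monotonically with thickness, the bisection converges to a unique effective thickness for any observed ratio below 1.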

  18. Application of a self-consistent NEGF procedure to study the coherent transport with phase breaking scattering in low dimensional systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratap, Surender, E-mail: surender.pratap@pilani.bits-pilani.ac.in; Sarkar, Niladri, E-mail: niladri@pilani.bits-pilani.ac.in

    2016-04-13

    We have studied quantum transport with dephasing in low-dimensional systems. We apply a self-consistent NEGF procedure to study the transport mechanism in low-dimensional systems with phase-breaking scatterers. Using this procedure, we determine the transmission coefficient of a very small multi-moded nanowire under a small bias potential of a few meV. The transmission of the device is calculated first without scatterers, and then with scatterers introduced into the device.

  19. Web-based assessments of physical activity in youth: considerations for design and scale calibration.

    PubMed

    Saint-Maurice, Pedro F; Welk, Gregory J

    2014-12-01

    This paper describes the design and methods involved in calibrating a Web-based self-report instrument to estimate physical activity behavior. The limitations of self-report measures are well known, but calibration methods enable the reported information to be equated to estimates obtained from objective data. This paper summarizes design considerations for effective development and calibration of physical activity self-report measures. Each of the design considerations is put into context and followed by a practical application based on our ongoing calibration research with a promising online self-report tool called the Youth Activity Profile (YAP). We first describe the overall concept of calibration and how this influences the selection of appropriate self-report tools for this population. We point out the advantages and disadvantages of different monitoring devices since the choice of the criterion measure and the strategies used to minimize error in the measure can dramatically improve the quality of the data. We summarize strategies to ensure quality control in data collection and discuss analytical considerations involved in group- vs individual-level inference. For cross-validation procedures, we describe the advantages of equivalence testing procedures that directly test and quantify agreement. Lastly, we introduce the unique challenges encountered when transitioning from paper to a Web-based tool. The Web offers considerable potential for broad adoption but an iterative calibration approach focused on continued refinement is needed to ensure that estimates are generalizable across individuals, regions, seasons and countries.

  20. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2011-06-15

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  1. Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions

    NASA Astrophysics Data System (ADS)

    Cinal, M.; Holas, A.

    2011-06-01

    The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kümmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.

  2. Identification of Scoliosis Research Society-22r Health-Related Quality of Life questionnaire domains using factor analysis methodology.

    PubMed

    Lai, Sue-Min; Asher, Marc A; Burton, Douglas C; Carlson, Brandon B

    2010-05-20

    Study design: Cross-sectional mail questionnaire. Objective: Examination of the underlying construct validity of the Scoliosis Research Society-22r (SRS-22r) Health-Related Quality of Life (HRQoL) Questionnaire using factor analysis. The original SRS-24 HRQoL questionnaire has undergone a series of modifications in an effort to further improve its psychometric properties and validate its use in patients from 10 years of age until well into adulthood. The SRS-22r questionnaire is the result of this effort. To date, the underlying construct validity of the original English version has not been analyzed by factor analysis. A questionnaire including all questions on the SRS-24, -23, -22, and -22r questionnaires (49 total questions) was mailed to a consecutive series of 235 patients who had received primary posterior or anterior instrumentation and arthrodesis. Domain structure of the SRS-22r questions was analyzed using iterated principal factor analysis with orthogonal rotation. One hundred twenty-one (51%) of the patients, age 23.34 +/- 4.52 years (range, 14.16-34.57 years), returned the questionnaire at 8.63 +/- 4.00 years (range, 2.32-15.94 years) following surgery. Factor analysis using all 22 questions resulted in 3 factors with many shared items because of significant collinearity of the satisfaction/dissatisfaction with management questions with the others. After 18 iterations, factor analysis using the 20 nonmanagement questions revealed 4 factors that explained 98% of the variance. These factors parallel the assigned domains of the SRS-22r questionnaire. Three questions (2 self-image and 1 function) were identified that had high loading in 2 factors. However, internal consistency was best when 2 of the questions (1 self-image and 1 function) were retained in their assigned SRS-22r domains and the third decreased self-image internal consistency by only 0.01%.
The internal consistencies (Cronbach alpha) of the assigned SRS-22r nonmanagement domains were excellent or very good: function 0.83, pain 0.87, self-image 0.80, and mental health 0.90. For the management domain it was good: 0.73. Factor analysis of the SRS-22r HRQoL confirms placement of the 20 nonmanagement domain questions in the assigned 4 domains, all with excellent or very good internal consistency.

  3. Studies on Flat Sandwich-type Self-Powered Detectors for Flux Measurements in ITER Test Blanket Modules

    NASA Astrophysics Data System (ADS)

    Raj, Prasoon; Angelone, Maurizio; Döring, Toralf; Eberhardt, Klaus; Fischer, Ulrich; Klix, Axel; Schwengner, Ronald

    2018-01-01

    Neutron and gamma flux measurements in designated positions in the test blanket modules (TBM) of ITER will be important tasks during ITER's campaigns. As part of the ongoing task on development of nuclear instrumentation for application in European ITER TBMs, experimental investigations on self-powered detectors (SPD) are undertaken. This paper reports the findings of neutron and photon irradiation tests performed with a test SPD in flat sandwich-like geometry. Whereas both neutrons and gammas can be detected with appropriate optimization of geometries, materials and sizes of the components, the present sandwich-like design is more sensitive to gammas than to 14 MeV neutrons. The range of SPD current signals achievable under TBM conditions is predicted based on the SPD sensitivities measured in this work.

  4. Optimal Averages for Nonlinear Signal Decompositions - Another Alternative for Empirical Mode Decomposition

    DTIC Science & Technology

    2014-10-01

    Empirical mode decomposition is a method for analyzing nonlinear and non-stationary signals. It aims at decomposing a signal, via an iterative sifting procedure, into several intrinsic mode functions (IMFs). Keywords: intrinsic mode function, optimization. It is well known that nonlinear and non-stationary signal analysis is important and difficult.

  5. Self-prior strategy for organ reconstruction in fluorescence molecular tomography

    PubMed Central

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-01-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy. PMID:29082094
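The way a self-prior can reshape the cost function between iterations can be sketched generically: reweight a Tikhonov-style penalty so that regions the current estimate deems bright are penalized less on the next pass. This is only a schematic analogue of the paper's STFT-based construction; the matrix sizes, the diagonal weighting, and all parameter values are assumptions for illustration.

```python
import numpy as np

def prior_weighted_reconstruction(A, y, lam=1e-2, n_iter=5):
    """Iteratively reweighted regularized reconstruction:
    each pass builds a diagonal 'self-prior' from the current estimate,
    weakening the penalty where the reconstructed signal is strong."""
    n = A.shape[1]
    W = np.eye(n)                                    # start with a plain Tikhonov penalty
    x = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(A.T @ A + lam * W, A.T @ y)
        x = np.clip(x, 0.0, None)                    # fluorescence concentration >= 0
        W = np.diag(1.0 / (x + 1e-3))                # weaker penalty where signal is strong
    return x
```

The loop mirrors the iterative architecture in the abstract: reconstruct, extract prior information from the estimate itself, modify the cost function, and solve again.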

  6. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    PubMed

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short-time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, under the assumption that regions with higher fluorescence concentration have larger energy intensity; the cost function of the inverse problem is then modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy.

  7. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter that can be computed numerically using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it is of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
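For a Tikhonov-type penalty, the multiplicative idea admits a simple fixed-point sketch: minimizing the product ||Ax - b||^2 * ||x||^2 makes the effective regularization parameter lam = ||Ax - b||^2 / ||x||^2 at each iterate, so no parameter has to be chosen beforehand. This is a schematic illustration only; the paper's actual functional and solver details differ.

```python
import numpy as np

def multiplicative_tikhonov(A, b, n_iter=50, lam0=1.0):
    # Fixed-point scheme for min ||A x - b||^2 * ||x||^2: the effective
    # Tikhonov parameter lam_k = ||A x_k - b||^2 / ||x_k||^2 is updated
    # at each step, so the amount of regularization adapts automatically.
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    lam = lam0
    x = np.linalg.solve(AtA + lam * np.eye(n), Atb)
    for _ in range(n_iter):
        lam = np.linalg.norm(A @ x - b) ** 2 / np.linalg.norm(x) ** 2
        x = np.linalg.solve(AtA + lam * np.eye(n), Atb)
    return x, lam

# Hypothetical noisy demo problem, not a structural force identification.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_rec, lam = multiplicative_tikhonov(A, b)
```

As the residual shrinks, so does the effective parameter, which is the self-adjusting behavior the abstract describes.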

  8. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
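The algebraic view described here, source iteration as a preconditioned stationary Richardson iteration, can be illustrated generically. The diagonally dominant matrix below is a hypothetical stand-in for the transport operator, with a Jacobi preconditioner playing the role of the low-order approximation.

```python
import numpy as np

def richardson(A, b, P_inv, n_iter=200):
    # Preconditioned stationary Richardson: x <- x + P^{-1} (b - A x).
    # Source iteration corresponds to a weak choice of P; synthetic
    # acceleration improves P and hence the convergence rate.
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + P_inv @ (b - A @ x)
    return x

# Diagonally dominant stand-in for the transport operator (hypothetical).
rng = np.random.default_rng(1)
n = 50
A = 0.05 * rng.standard_normal((n, n))
A += np.diag(5.0 + np.abs(rng.standard_normal(n)))
b = rng.standard_normal(n)
x = richardson(A, b, np.diag(1.0 / np.diag(A)))  # Jacobi-preconditioned
```

The better P approximates A, the smaller the spectral radius of I - P^{-1}A and the faster the iteration converges, which is exactly the preconditioning argument made in the abstract.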

  9. Strategies for the coupling of global and local crystal growth models

    NASA Astrophysics Data System (ADS)

    Derby, Jeffrey J.; Lun, Lisa; Yeckel, Andrew

    2007-05-01

    The modular coupling of existing numerical codes to model crystal growth processes will provide for maximum effectiveness, capability, and flexibility. However, significant challenges are posed to make these coupled models mathematically self-consistent and algorithmically robust. This paper presents sample results from a coupling of the CrysVUn code, used here to compute furnace-scale heat transfer, and Cats2D, used to calculate melt fluid dynamics and phase-change phenomena, to form a global model for a Bridgman crystal growth system. However, the strategy used to implement the CrysVUn-Cats2D coupling is unreliable and inefficient. The implementation of under-relaxation within a block Gauss-Seidel iteration is shown to be ineffective for improving the coupling performance in a model one-dimensional problem representative of a melt crystal growth model. Ideas to overcome current convergence limitations using approximations to a full Newton iteration method are discussed.
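For a scalar model problem, the under-relaxed block Gauss-Seidel coupling discussed above reduces to x_{k+1} = (1 - omega) x_k + omega g(x_k), where g represents one outer pass through both codes. The sketch below, with a hypothetical coupling map g, shows why under-relaxation tames an oscillatory divergence (slope -1.5 at the fixed point) yet can never stabilize a monotone divergence with slope greater than +1, consistent with the convergence limitations reported here.

```python
def relaxed_fixed_point(g, x0, omega, n_iter=60):
    # Under-relaxed fixed-point (block Gauss-Seidel) coupling:
    # x_{k+1} = (1 - omega) * x_k + omega * g(x_k).
    x = x0
    for _ in range(n_iter):
        x = (1.0 - omega) * x + omega * g(x)
    return x

# Hypothetical coupling map with slope -1.5 at the fixed point x* = 2.
g = lambda x: -1.5 * (x - 2.0) + 2.0
# Unrelaxed iteration (omega = 1) diverges; omega = 0.5 gives an
# effective slope of -0.25, a contraction.
x = relaxed_fixed_point(g, x0=0.0, omega=0.5)
```

For a slope s > 1, the relaxed map has slope 1 + omega*(s - 1) > 1 for every omega in (0, 1], so no amount of under-relaxation yields a contraction; that is why the abstract turns to Newton-type approximations instead.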

  10. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, C. Kristopher; Hauck, Cory D.

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  11. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE PAGES

    Garrett, C. Kristopher; Hauck, Cory D.

    2018-04-05

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  12. Optimal Damping Behavior of a Composite Sandwich Beam Reinforced with Coated Fibers

    NASA Astrophysics Data System (ADS)

    Lurie, S.; Solyaev, Y.; Ustenko, A.

    2018-04-01

    In the present paper, the effective damping properties of a symmetric foam-core sandwich beam with composite face plates reinforced with coated fibers are studied. A glass fiber-epoxy composite with additional rubber-toughened epoxy coatings on the fibers is considered as the material of the face plates. A micromechanical analysis of the effective properties of the unidirectional lamina is conducted based on the generalized self-consistent method and the viscoelastic correspondence principle. The effective complex moduli of composite face plates with a symmetric angle-ply structure are evaluated based on classical lamination theory. A modified Mead-Markus model is utilized to evaluate the fundamental modal loss factor of a simply supported sandwich beam with a polyurethane core. The viscoelastic frequency-dependent behaviors of the core and face plate materials are both considered. The properties of the face plates are evaluated based on a micromechanical analysis and found to depend implicitly on frequency; thus, an iterative procedure is applied to find the natural frequencies of the lateral vibrations of the beam. The optimal values of the coating thickness, lamination angle and core thickness for the best multi-scale damping behavior of the beam are found.
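The iterative search for natural frequencies with frequency-dependent (viscoelastic) properties can be sketched as a scalar self-consistency loop: guess a frequency, evaluate the frequency-dependent stiffness, and recompute the frequency until it stops changing. The stiffness law below is hypothetical and purely illustrative.

```python
import numpy as np

def natural_frequency(k_of_w, m, w0, tol=1e-10, max_iter=200):
    # Fixed-point iteration for a frequency-dependent stiffness:
    # repeat w <- sqrt(k(w) / m) until the frequency is self-consistent.
    w = w0
    for _ in range(max_iter):
        w_new = np.sqrt(k_of_w(w) / m)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

# Hypothetical mildly frequency-dependent stiffness (viscoelastic-like).
k = lambda w: 100.0 * (1.0 + 0.05 * np.log1p(w))
w = natural_frequency(k, m=1.0, w0=10.0)
```

Because the stiffness varies slowly with frequency, the map is a contraction and the loop converges in a handful of iterations.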

  13. The two-phase method for finding a great number of eigenpairs of the symmetric or weakly non-symmetric large eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dul, F.A.; Arczewski, K.

    1994-03-01

    Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability to solve extremely large eigenproblems, N = 216,000, for example, and to find a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10³), as well as large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A − σB)x = By are solved by various iterative methods; the conjugate gradient method can be used without danger of breaking down owing to a property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
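The Rayleigh quotient iteration at the heart of the second phase can be sketched for a small dense symmetric matrix. Note that in the paper the nearly singular shifted systems are solved iteratively by conjugate gradients, whereas the sketch below uses a direct solve for brevity.

```python
import numpy as np

def rayleigh_quotient_iteration(A, x0, n_iter=12):
    # Shift by the current Rayleigh quotient and solve the (nearly
    # singular) shifted system; convergence is cubic for symmetric A.
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        sigma = x @ A @ x
        try:
            y = np.linalg.solve(A - sigma * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break  # the shift hit an eigenvalue exactly; keep current x
        x = y / np.linalg.norm(y)
    return x @ A @ x, x

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2  # arbitrary symmetric test matrix
lam, v = rayleigh_quotient_iteration(A, rng.standard_normal(8))
```

The near-singularity of the shifted system is harmless here: the solution vector is dominated by the sought eigenvector, and normalization removes the blow-up, which is the same reason an iterative inner solver can succeed.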

  14. Self-adaptive difference method for the effective solution of computationally complex problems of boundary layer theory

    NASA Technical Reports Server (NTRS)

    Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.

    1986-01-01

    An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.

  15. Generalization of a model of hysteresis for dynamical systems.

    PubMed

    Piquette, Jean C; McLaughlin, Elizabeth A; Ren, Wei; Mukherjee, Binu K

    2002-06-01

    A previously described model of hysteresis [J. C. Piquette and S. E. Forsythe, J. Acoust. Soc. Am. 106, 3317-3327 (1999); 106, 3328-3334 (1999)] is generalized to apply to a dynamical system. The original model produces theoretical hysteresis loops that agree well with laboratory measurements acquired under quasi-static conditions. The loops are produced using three-dimensional rotation matrices. An iterative procedure, which allows the model to be applied to a dynamical system, is introduced here. It is shown that, unlike the quasi-static case, self-crossing of the loops is a realistic possibility when inertia and viscous friction are taken into account.

  16. Variational study on the vibrational level structure and IVR behavior of highly vibrationally excited S0 formaldehyde.

    PubMed

    Rashev, Svetoslav; Moule, David C

    2012-02-15

    We perform large-scale converged variational vibrational calculations on S0 formaldehyde up to very high excess vibrational energies Ev, Ev ∼ 17,000 cm-1, using our vibrational method, consisting of a specific search/selection/Lanczos iteration procedure. Using the same method we investigate the vibrational level structure and intramolecular vibrational redistribution (IVR) characteristics for various vibrational levels in this energy range in order to assess the onset of IVR. Copyright © 2011 Elsevier B.V. All rights reserved.
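The Lanczos iteration component of such a procedure can be sketched generically: m steps build an orthonormal Krylov basis whose tridiagonal projection has Ritz values converging rapidly to well-separated extremal eigenvalues. The diagonal test matrix below is an arbitrary example, not a vibrational Hamiltonian.

```python
import numpy as np

def lanczos(A, v0, m):
    # m-step Lanczos with full reorthogonalisation: the tridiagonal
    # projection T has Ritz values approximating extremal eigenvalues.
    n = len(v0)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q, q_prev, bj = v0 / np.linalg.norm(v0), np.zeros(n), 0.0
    for j in range(m):
        Q[:, j] = q
        w = A @ q - bj * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalisation
        if j < m - 1:
            bj = beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / bj
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)  # Ritz values, ascending

# Arbitrary diagonal test matrix with one well-separated top eigenvalue.
rng = np.random.default_rng(3)
A = np.diag(np.concatenate([np.arange(1.0, 100.0), [150.0]]))
ritz = lanczos(A, rng.standard_normal(100), m=30)
```

With only 30 of 100 dimensions explored, the isolated extremal eigenvalue is already resolved to high accuracy, which is why Lanczos-type steps are effective inside search/selection schemes.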

  17. Ocean tides from Seasat-A

    NASA Technical Reports Server (NTRS)

    Hendershott, M. C.; Munk, W. H.; Zetler, B. D.

    1974-01-01

    Two procedures for the evaluation of global tides from SEASAT-A altimetry data are elaborated: an empirical method leading to the response functions for a grid of about 500 points, from which the tide can be predicted for any point in the oceans, and a dynamic method which consists of iteratively modifying the parameters in a numerical solution of the Laplace tidal equations. It is assumed that the shape of the received altimeter signal can be interpreted for sea state and that orbit calculations are available so that absolute sea levels can be obtained.

  18. Upwind relaxation methods for the Navier-Stokes equations using inner iterations

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.

    1992-01-01

    A subsonic and a supersonic problem are treated by an upwind line-relaxation algorithm for the Navier-Stokes equations, using inner iterations to accelerate steady-state convergence and thereby minimize CPU time. While the ability of the inner iterative procedure to mimic the quadratic convergence of the direct solver method is attested to in both test problems, some of the nonquadratic inner iterative results are noted to have been more efficient than the quadratic ones. In the more successful, supersonic test case, inner iteration required only about 65 percent of the CPU time entailed by the line-relaxation method.

  19. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
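The core projection step, a Fisher-type linear discriminant between the two ensembles, can be sketched as follows. This shows only the classic single-shot LDA direction, not the full LDA-ITER update, and the two "trajectories" are synthetic Gaussian clouds.

```python
import numpy as np

def fisher_direction(X1, X2, reg=1e-6):
    # Fisher discriminant: direction w maximising between-class over
    # within-class scatter, w proportional to Sw^{-1} (mu1 - mu2).
    # (This is the single-shot LDA step; the LDA-ITER update iterates
    # on such projections and is more involved.)
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = np.cov(X1.T) + np.cov(X2.T) + reg * np.eye(X1.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(4)
# Two synthetic "trajectories" separated along the first coordinate.
X1 = rng.standard_normal((500, 3)) + np.array([2.0, 0.0, 0.0])
X2 = rng.standard_normal((500, 3))
w = fisher_direction(X1, X2)
```

Projecting both ensembles onto w maximally separates them, which is the sense in which the method finds the "difference" between two simulations.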

  20. Further investigation on "A multiplicative regularization for force reconstruction"

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    We have recently proposed a multiplicative regularization to reconstruct mechanical forces acting on a structure from vibration measurements. This method does not require any selection procedure for choosing the regularization parameter, since the amount of regularization is automatically adjusted throughout an iterative resolution process. The proposed iterative algorithm has been developed with performance and efficiency in mind, but it is actually a simplified version of a full iterative procedure not described in the original paper. The present paper aims at introducing the full resolution algorithm and comparing it with its simplified version in terms of computational efficiency and solution accuracy. In particular, it is shown that both algorithms lead to very similar identified solutions.

  1. RMP ELM Suppression in DIII-D Plasmas with ITER Similar Shapes and Collisionalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, T.E.; Fenstermacher, M. E.; Moyer, R.A.

    2008-01-01

    Large Type-I edge localized modes (ELMs) are completely eliminated with small n = 3 resonant magnetic perturbations (RMP) in low average triangularity, = 0.26, plasmas and in ITER similar shaped (ISS) plasmas, = 0.53, with ITER relevant collisionalities ve 0.2. Significant differences in the RMP requirements and in the properties of the ELM suppressed plasmas are found when comparing the two triangularities. In ISS plasmas, the current required to suppress ELMs is approximately 25% higher than in low average triangularity plasmas. It is also found that the width of the resonant q95 window required for ELM suppression is smaller inmore » ISS plasmas than in low average triangularity plasmas. An analysis of the positions and widths of resonant magnetic islands across the pedestal region, in the absence of resonant field screening or a self-consistent plasma response, indicates that differences in the shape of the q profile may explain the need for higher RMP coil currents during ELM suppression in ISS plasmas. Changes in the pedestal profiles are compared for each plasma shape as well as with changes in the injected neutral beam power and the RMP amplitude. Implications of these results are discussed in terms of requirements for optimal ELM control coil designs and for establishing the physics basis needed in order to scale this approach to future burning plasma devices such as ITER.« less

  2. Anderson acceleration and application to the three-temperature energy equations

    NASA Astrophysics Data System (ADS)

    An, Hengbin; Jia, Xiaowei; Walker, Homer F.

    2017-10-01

    The Anderson acceleration method is an algorithm for accelerating the convergence of fixed-point iterations, including the Picard method. Anderson acceleration was first proposed in 1965 and, for some years, has been used successfully to accelerate the convergence of self-consistent field iterations in electronic-structure computations. Recently, the method has attracted growing attention in other application areas and among numerical analysts. Compared with a Newton-like method, an advantage of Anderson acceleration is that there is no need to form the Jacobian matrix. Thus the method is easy to implement. In this paper, an Anderson-accelerated Picard method is employed to solve the three-temperature energy equations, which are a type of strongly nonlinear radiation-diffusion equations. Two strategies are used to improve the robustness of the Anderson acceleration method. One strategy is to adjust the iterates when necessary to satisfy the physical constraint. The other is to monitor and, if necessary, reduce the matrix condition number of the least-squares problem in the Anderson-acceleration implementation so that numerical stability can be guaranteed. Numerical results show that the Anderson-accelerated Picard method can solve the three-temperature energy equations efficiently. Compared with the Picard method without acceleration, Anderson acceleration can reduce the number of iterations by at least half. A comparison between a Jacobian-free Newton-Krylov method, the Picard method, and the Anderson-accelerated Picard method is conducted in this paper.
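A minimal Anderson acceleration sketch for a generic fixed-point map x <- g(x) is given below; the constrained least-squares combination of the last m residuals is solved in the standard unconstrained difference form. The contraction g used in the demo is hypothetical, not the three-temperature system.

```python
import numpy as np

def anderson(g, x0, m=5, n_iter=50, tol=1e-12):
    # Anderson acceleration: combine the last m+1 iterates so that the
    # averaged residual is least-squares minimal, with weights summing
    # to one (solved via the unconstrained difference formulation).
    xs, gs = [x0], [g(x0)]
    x = gs[0]
    for _ in range(n_iter):
        xs.append(x)
        gs.append(g(x))
        xs, gs = xs[-(m + 1):], gs[-(m + 1):]
        F = np.array([gi - xi for xi, gi in zip(xs, gs)]).T  # residual history
        dF = F[:, 1:] - F[:, :-1]
        gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
        alpha = np.zeros(F.shape[1])  # recover weights: sum(alpha) == 1
        alpha[-1] = 1.0
        alpha[1:] -= gamma
        alpha[:-1] += gamma
        x = np.array(gs).T @ alpha
        if np.linalg.norm(g(x) - x) < tol:
            break
    return x

# Hypothetical smooth contraction as the fixed-point map.
s = np.linspace(0.0, 1.0, 10)
g = lambda x: 0.5 * np.cos(x + s)
x = anderson(g, np.zeros(10))
```

No Jacobian is ever formed; only evaluations of g and a small least-squares solve per step are needed, which is the advantage the abstract emphasizes.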

  3. Modeling and analysis of the space shuttle nose-gear tire with semianalytic finite elements

    NASA Technical Reports Server (NTRS)

    Kim, Kyun O.; Noor, Ahmed K.; Tanner, John A.

    1990-01-01

    A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The Space Shuttle Orbiter nose gear tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynominals in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell. Numerical results of the Space Shuttle Orbiter nose gear tire model are compared with experimental measurements of the tire subjected to inflation loading.

  4. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT.

    PubMed

    Visser, R; Godart, J; Wauben, D J L; Langendijk, J A; Van't Veld, A A; Korevaar, E W

    2016-05-21

    The objective of this study was to introduce a new iterative method to reconstruct multi-leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a detector model and an optimizer. Expected detector response was calculated using a radiotherapy treatment plan in combination with the fluence model and detector model. MLC leaf positions were reconstructed by minimizing differences between expected and measured detector response. The iterative reconstruction method was evaluated for an Elekta SLi with 10.0 mm MLC leaves in combination with the COMPASS system and the MatriXX Evolution (IBA Dosimetry) detector with a spacing of 7.62 mm. The detector was positioned in such a way that each leaf pair of the MLC was aligned with one row of ionization chambers. Known leaf displacements were introduced in various field geometries ranging from -10.0 mm to 10.0 mm. Error detection performance was tested for MLC leaf position dependency relative to the detector position, gantry angle dependency, monitor unit dependency, and for ten clinical intensity modulated radiotherapy (IMRT) treatment beams. For one clinical head and neck IMRT treatment beam, the influence of the iterative reconstruction method on existing 3D dose reconstruction artifacts was evaluated. The described iterative reconstruction method was capable of individual MLC leaf position reconstruction with millimeter accuracy, independent of the relative detector position, within the range of clinically applied MUs for IMRT. Dose reconstruction artifacts in a clinical IMRT treatment beam were considerably reduced compared with the current dose verification procedure. The iterative reconstruction method allows high accuracy 3D dose verification by including actual MLC leaf positions reconstructed from low resolution 2D measurements.
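The fluence-model/optimizer idea can be illustrated with a one-dimensional toy version: detector cells at 7.62 mm spacing (the spacing quoted in the abstract) record the partial coverage of an open field, and the two field edges are refit to the measurement with sub-detector resolution. The geometry, grid, and brute-force optimizer below are illustrative stand-ins, not the actual COMPASS models.

```python
import numpy as np

def detector_response(left, right, centers, width):
    # Fraction of each detector cell covered by the open field [left, right].
    lo = np.clip(left, centers - width / 2, centers + width / 2)
    hi = np.clip(right, centers - width / 2, centers + width / 2)
    return np.maximum(hi - lo, 0.0) / width

def reconstruct_edges(measured, centers, width, grid):
    # Brute-force least-squares fit of the two field edges to the
    # low-resolution measurement (a toy stand-in for the paper's
    # fluence-model plus optimizer loop).
    best, best_err = None, np.inf
    for left in grid:
        for right in grid:
            if right <= left:
                continue
            resp = detector_response(left, right, centers, width)
            err = np.sum((resp - measured) ** 2)
            if err < best_err:
                best, best_err = (left, right), err
    return best

centers = np.arange(-40.0, 41.0, 7.62)  # 7.62 mm chamber spacing
measured = detector_response(-12.3, 17.8, centers, 7.62)
grid = np.arange(-20.0, 20.01, 0.1)
left, right = reconstruct_edges(measured, centers, 7.62, grid)
```

The partial-volume signal in the edge cells is what carries the sub-resolution information, which is how millimeter accuracy can be recovered from centimeter-scale detectors.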

  5. Self-monitoring Lifestyle Behavior in Overweight and Obese Pregnant Women: Qualitative Findings.

    PubMed

    Shieh, Carol; Draucker, Claire Burke

    Excessive maternal gestational weight gain increases pregnancy and infant complications. Self-monitoring has been shown to be an effective strategy in weight management. Literature, however, is limited in describing pregnant women's engagement in self-monitoring. This qualitative study explored the experiences of overweight and obese pregnant women who self-monitored their eating, walking, and weight as participants in an intervention for excessive gestational weight gain prevention. Thirteen overweight and obese pregnant women participated in semistructured interviews. Reflexive iteration data analysis was conducted. Five themes were identified: making self-monitoring a habit, strategies for self-monitoring, barriers to self-monitoring, benefits of self-monitoring, and drawbacks of self-monitoring. The women viewed self-monitoring as a "habit" that could foster a sense of self-control and mindfulness. Visual or tracing aids were used to maintain the self-monitoring habit. Forgetting, defective tracking aids, complexities of food monitoring, and life events could impede self-monitoring. Being unable to keep up with self-monitoring or to achieve goals created stress. Self-monitoring is a promising approach to weight management for overweight and obese pregnant women. However, healthcare providers should be aware that, although women may identify several benefits to self-monitoring, for some women, consistently trying to track their behaviors is stressful.

  6. Iterated unscented Kalman filter for phase unwrapping of interferometric fringes.

    PubMed

    Xie, Xianming

    2016-08-22

    A fresh phase unwrapping algorithm based on an iterated unscented Kalman filter is proposed to estimate the unambiguous unwrapped phase of interferometric fringes. The method combines an iterated unscented Kalman filter with a robust phase gradient estimator based on an amended matrix pencil model and an efficient quality-guided strategy based on heap sort. The iterated unscented Kalman filter, one of the most robust methods to date within the Bayesian framework for non-linear signal processing, is applied for the first time to perform noise suppression and phase unwrapping of interferometric fringes simultaneously, which reduces the complexity and difficulty of the pre-filtering procedure that normally precedes phase unwrapping and can even remove the pre-filtering procedure altogether. The robust phase gradient estimator is used to efficiently and accurately obtain the phase gradient information from interferometric fringes that is needed by the iterated unscented Kalman filtering phase unwrapping model. The efficient quality-guided strategy ensures that the proposed method quickly unwraps wrapped pixels along the path from high-quality to low-quality areas of wrapped phase images, which greatly improves the efficiency of phase unwrapping. Results obtained from synthetic and real data show that the proposed method obtains better solutions, with an acceptable time consumption, than some of the most widely used algorithms.
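The heap-sort quality guidance can be sketched independently of the Kalman filtering: pixels are unwrapped in descending quality order, each relative to an already-unwrapped neighbour. This minimal version (uniform quality map, noiseless synthetic ramp) omits the iterated-unscented-Kalman-filter state estimation entirely.

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    # Quality-guided path following: a max-heap (negated keys) orders
    # pixels by quality; each pixel is unwrapped once, relative to the
    # unwrapped neighbour that enqueued it.
    wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi
    rows, cols = wrapped.shape
    unwrapped = np.zeros_like(wrapped)
    done = np.zeros(wrapped.shape, dtype=bool)
    r0, c0 = np.unravel_index(np.argmax(quality), quality.shape)
    unwrapped[r0, c0] = wrapped[r0, c0]
    done[r0, c0] = True
    heap = [(-quality[r0, c0], r0, c0)]
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and not done[rr, cc]:
                unwrapped[rr, cc] = unwrapped[r, c] + wrap(
                    wrapped[rr, cc] - wrapped[r, c])
                done[rr, cc] = True
                heapq.heappush(heap, (-quality[rr, cc], rr, cc))
    return unwrapped

# Noiseless synthetic ramp: unwrapping recovers it up to a constant.
y, x = np.mgrid[0:32, 0:32]
phase = 0.4 * x + 0.2 * y
wrapped = (phase + np.pi) % (2 * np.pi) - np.pi
result = quality_guided_unwrap(wrapped, np.ones_like(wrapped))
```

With real, noisy fringes a meaningful quality map (e.g. inverse gradient variance) and noise suppression, which is where the Kalman filter enters, become essential.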

  7. Computing eigenfunctions and eigenvalues of boundary-value problems with the orthogonal spectral renormalization method

    NASA Astrophysics Data System (ADS)

    Cartarius, Holger; Musslimani, Ziad H.; Schwarz, Lukas; Wunner, Günter

    2018-03-01

    The spectral renormalization method was introduced in 2005 as an effective way to compute ground states of nonlinear Schrödinger and Gross-Pitaevskii type equations. In this paper, we introduce an orthogonal spectral renormalization (OSR) method to compute ground and excited states (and their respective eigenvalues) of linear and nonlinear eigenvalue problems. The implementation of the algorithm follows four simple steps: (i) reformulate the underlying eigenvalue problem as a fixed-point equation, (ii) introduce a renormalization factor that controls the convergence properties of the iteration, (iii) perform a Gram-Schmidt orthogonalization process in order to prevent the iteration from converging to an unwanted mode, and (iv) compute the solution sought using a fixed-point iteration. The advantages of the OSR scheme over other known methods (such as Newton's and self-consistency) are (i) it allows the flexibility to choose large varieties of initial guesses without diverging, (ii) it is easy to implement especially at higher dimensions, and (iii) it can easily handle problems with complex and random potentials. The OSR method is implemented on benchmark Hermitian linear and nonlinear eigenvalue problems as well as linear and nonlinear non-Hermitian PT -symmetric models.
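For a symmetric matrix, the four-step recipe above can be sketched as a renormalised fixed-point iteration: shifting makes the lowest mode dominant, normalisation plays the role of the renormalisation factor, and Gram-Schmidt deflation against converged modes (step (iii)) yields excited states. This is a schematic linear-algebra analogue, not the authors' implementation.

```python
import numpy as np

def osr_modes(A, k, n_iter=500, shift=None):
    # Renormalised fixed-point iteration v <- (c*I - A) v / ||...||
    # targets the lowest eigenvalue of symmetric A; Gram-Schmidt
    # deflation against already-converged modes gives excited states.
    n = A.shape[0]
    c = shift if shift is not None else np.linalg.norm(A, 1)
    M = c * np.eye(n) - A  # the lowest eigenvalue of A becomes dominant
    modes, vals = [], []
    rng = np.random.default_rng(6)
    for _ in range(k):
        v = rng.standard_normal(n)
        for _ in range(n_iter):
            for u in modes:
                v -= (u @ v) * u  # Gram-Schmidt deflation, step (iii)
            v = M @ v
            v /= np.linalg.norm(v)  # renormalisation, step (ii)
        modes.append(v)
        vals.append(v @ A @ v)
    return np.array(vals), np.array(modes)

A = np.diag([1.0, 2.0, 3.0, 4.0])  # toy linear eigenproblem
vals, modes = osr_modes(A, k=2)
```

The deflation is what prevents the iteration from collapsing onto the ground state when an excited mode is sought, mirroring step (iii) of the OSR algorithm.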

  8. Non-linear quantum-classical scheme to simulate non-equilibrium strongly correlated fermionic many-body dynamics

    PubMed Central

    Kreula, J. M.; Clark, S. R.; Jaksch, D.

    2016-01-01

    We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673

  9. Self-Management and Transition Readiness Assessment: Development, Reliability, and Factor Structure of the STARx Questionnaire.

    PubMed

    Ferris, M; Cohen, S; Haberman, C; Javalkar, K; Massengill, S; Mahan, J D; Kim, S; Bickford, K; Cantu, G; Medeiros, M; Phillips, A; Ferris, M T; Hooper, S R

    2015-01-01

    The Self-Management and Transition to Adulthood with Rx=Treatment (STARx) Questionnaire was developed to collect information on self-management and health care transition (HCT) skills, via self-report, in a broad population of adolescents and young adults (AYAs) with chronic conditions. Over several iterations, the STARx questionnaire was created with AYA, family, and health provider input. The development and pilot testing of the STARx Questionnaire took place with the assistance of 1219 AYAs with different chronic health conditions, in multiple institutions and settings over three phases: item development, pilot testing, reliability and factor structuring. The three development phases resulted in a final version of the STARx Questionnaire. The exploratory factor analysis of the third version of the 18-item STARx identified six factors that accounted for about 65% of the variance: Medication management, Provider communication, Engagement during appointments, Disease knowledge, Adult health responsibilities, and Resource utilization. Reliability estimates revealed good internal consistency and temporal stability, with the alpha coefficient for the overall scale being .80. The STARx was developmentally sensitive, with older patients scoring significantly higher on nearly every factor than younger patients. The STARx Questionnaire is a reliable, self-report tool with adequate internal consistency, temporal stability, and a strong, multidimensional factor structure. It provides another assessment strategy to measure self-management and transition skills in AYAs with chronic conditions. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Refinement procedure for the image alignment in high-resolution electron tomography.

    PubMed

    Houben, L; Bar Sadan, M

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction based and marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive to the reference of marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images

    NASA Astrophysics Data System (ADS)

    Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.

    2015-07-01

    Radiographic images, like any experimentally acquired images, are affected by spoiling agents that degrade their final quality. The degradation caused by agents of a systematic character can be reduced by some kind of treatment, such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection with a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, incorporating a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
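    The stopping rule can be sketched in Python (hedged: a toy stand-in for the authors' Fortran program; the histogram binning, tolerance, and circular-FFT convolution are illustrative choices, not the published implementation):

```python
import numpy as np

def convolve_same(a, kernel):
    # circular FFT convolution with the kernel centred at the origin
    k = np.zeros(a.shape)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(k)))

def richardson_lucy(image, psf, max_iters=30, tol=1e-4):
    # Richardson-Lucy updates, halted when the change ("1st derivative")
    # of the global histogram difference G between two subsequent
    # iterates falls below tol.
    est = np.full(image.shape, image.mean())
    psf_flip = psf[::-1, ::-1]
    prev_G = None
    for it in range(1, max_iters + 1):
        prev = est
        ratio = image / np.maximum(convolve_same(est, psf), 1e-12)
        est = est * convolve_same(ratio, psf_flip)
        h1, edges = np.histogram(prev, bins=64)
        h2, _ = np.histogram(est, bins=edges)
        G = np.abs(h1 - h2).sum() / image.size
        if prev_G is not None and abs(G - prev_G) < tol:
            break
        prev_G = G
    return est, it

# demo: blur a point source, then deconvolve the blurred image
img = np.zeros((32, 32)); img[16, 16] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = convolve_same(img, psf)
restored, n_used = richardson_lucy(blurred, psf)
```

    The comparison of numbers replaces visual inspection: the loop halts itself once the histogram difference G stops changing between iterations.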

  12. Electric field effect on the second-order nonlinear optical properties of parabolic and semiparabolic quantum wells

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Xie, Hong-Jing

    2003-12-01

    By using the compact-density-matrix approach and an iterative procedure, a detailed procedure for the calculation of the second-harmonic generation (SHG) susceptibility tensor is given for electric-field-biased parabolic and semiparabolic quantum wells (QW’s). A simple analytical formula for the SHG susceptibility in these systems is also deduced. By adopting the methods of the envelope wave function and displaced harmonic oscillation, the electronic states in parabolic and semiparabolic QW’s with applied electric fields are exactly solved. Numerical results on typical AlxGa1-xAs/GaAs materials show that, for the same effective widths, the SHG susceptibility in the semiparabolic QW is larger than that in the parabolic QW due to the inherent asymmetry of the semiparabolic QW, and that the applied electric field remarkably enhances the SHG susceptibilities in both systems. Moreover, the SHG susceptibility also depends sensitively on the relaxation rate of the systems.

  13. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks

    PubMed Central

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-01-01

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned through a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained to have a sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary-based data gathering methods. PMID:27669250

  14. An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks.

    PubMed

    Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang

    2016-09-22

    To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned through a two-stage iterative procedure, alternating between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained to have a sparse structure. It's theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound on the necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary-based data gathering methods.
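    The two-stage procedure can be sketched in a few lines (a hedged illustration, not the authors' ODL-CDG algorithm: ISTA soft-thresholding stands in for the sparse coding step, a normalized gradient step with a ‖DᵀD − I‖² penalty stands in for the self-coherence-regularised dictionary update, and all data and parameters are synthetic):

```python
import numpy as np

def sparse_code(Y, D, lam=0.1, n_iter=50):
    # Sparse coding stage: ISTA soft-thresholding on ||Y - DX||^2 + lam*|X|_1
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        Z = X - D.T @ (D @ X - Y) / L
        X = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
    return X

def dict_update(Y, D, X, mu=0.05, step=0.1):
    # Dictionary update stage with self-coherence penalty mu*||D^T D - I||^2
    # (a plain normalized gradient step, not the authors' update rule).
    grad = (D @ X - Y) @ X.T + 4.0 * mu * D @ (D.T @ D - np.eye(D.shape[1]))
    D = D - step * grad / np.linalg.norm(grad)
    return D / np.linalg.norm(D, axis=0)   # keep unit-norm atoms

# two-stage iterative procedure on synthetic data
rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 40))
D = rng.standard_normal((16, 24))
D /= np.linalg.norm(D, axis=0)
err = []
for _ in range(20):
    X = sparse_code(Y, D)
    D = dict_update(Y, D, X)
    err.append(np.linalg.norm(Y - D @ X))
```

    Each outer pass alternates the two stages, with the coherence penalty nudging the atoms of D away from one another.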

  15. Design Thinking for mHealth Application Co-Design to Support Heart Failure Self-Management.

    PubMed

    Woods, Leanna; Cummings, Elizabeth; Duff, Jed; Walker, Kim

    2017-01-01

    Heart failure is a prevalent, progressive chronic disease costing in excess of $1 billion per year in Australia alone. Disease self-management has positive implications for the patient and decreases healthcare usage. However, adherence to recommended guidelines is challenging, and the existing literature reports sub-optimal adherence. mHealth applications in chronic disease education have the potential to facilitate patient enablement for disease self-management. To the best of our knowledge, no heart failure self-management application is available for safe use by our patients. In this paper, we present the process established to co-design an mHealth application in support of heart failure self-management. For this development, an interdisciplinary team systematically proceeds through the phases of Stanford University's Design Thinking process (empathise, define, ideate, prototype and test) with a user-centred philosophy. Using this clinician-led heart failure app research as a case study, we describe a sequence of procedures to engage with local patients, carers, software developers, eHealth experts and clinical colleagues to foster rigorously developed and locally relevant patient-facing mHealth solutions. Importantly, patients are engaged at each stage through ethnographic interviews, a series of workshops and multiple re-design iterations.

  16. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.

    2014-08-21

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, tritium retention measurements, etc., are discussed.

  17. Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER

    NASA Astrophysics Data System (ADS)

    Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.

    2014-08-01

    In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are carried out during the development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, tritium retention measurements, etc., are discussed.

  18. Quantum-Inspired Multidirectional Associative Memory With a Self-Convergent Iterative Learning.

    PubMed

    Masuyama, Naoki; Loo, Chu Kiong; Seera, Manjeevan; Kubota, Naoyuki

    2018-04-01

    Quantum-inspired computing is an emerging research area that has significantly improved the capabilities of conventional algorithms. In general, the quantum-inspired Hopfield associative memory (QHAM) has demonstrated quantum information processing in neural structures. This has resulted in an exponential increase in storage capacity while explaining the extensive memory, and it has the potential to illustrate the dynamics of neurons in the human brain when viewed from a quantum-mechanics perspective, although the application of QHAM is limited to autoassociation. In this paper, we introduce a quantum-inspired multidirectional associative memory (QMAM) with a one-shot learning model, and a QMAM with a self-convergent iterative learning model (IQMAM), both based on QHAM. The self-convergent iterative learning enables the network to progressively develop a resonance state from inputs to outputs. The simulation experiments demonstrate the advantages of QMAM and IQMAM, especially their stability and recall reliability.

  19. A Test of Multisession Automatic Action Tendency Retraining to Reduce Alcohol Consumption Among Young Adults in the Context of a Human Laboratory Paradigm.

    PubMed

    Leeman, Robert F; Nogueira, Christine; Wiers, Reinout W; Cousijn, Janna; Serafini, Kelly; DeMartini, Kelly S; Bargh, John A; O'Malley, Stephanie S

    2018-04-01

    Young adult heavy drinking is an important public health concern. Current interventions have efficacy but with only modest effects, and thus, novel interventions are needed. In prior studies, heavy drinkers, including young adults, have demonstrated stronger automatically triggered approach tendencies to alcohol-related stimuli than lighter drinkers. Automatic action tendency retraining has been developed to correct this tendency and consequently reduce alcohol consumption. This study is the first to test multiple iterations of automatic action tendency retraining, followed by laboratory alcohol self-administration. A total of 72 nontreatment-seeking, heavy drinking young adults ages 21 to 25 were randomized to automatic action tendency retraining or a control condition (i.e., "sham training"). Of these, 69 (54% male) completed 4 iterations of retraining or the control condition over 5 days with an alcohol drinking session on Day 5. Self-administration was conducted according to a human laboratory paradigm designed to model individual differences in impaired control (i.e., difficulty adhering to limits on alcohol consumption). Automatic action tendency retraining was not associated with greater reduction in alcohol approach tendency or less alcohol self-administration than the control condition. The laboratory paradigm was probably sufficiently sensitive to detect an effect of an experimental manipulation given the range of self-administration behavior observed, both in terms of number of alcoholic and nonalcoholic drinks and measures of drinking topography. Automatic action tendency retraining was ineffective among heavy drinking young adults without motivation to change their drinking. Details of the retraining procedure may have contributed to the lack of a significant effect. 
Despite null primary findings, the impaired control laboratory paradigm is a valid laboratory-based measure of young adult alcohol consumption that provides the opportunity to observe drinking topography and self-administration of nonalcoholic beverages (i.e., protective behavioral strategies directly related to alcohol use). Copyright © 2018 by the Research Society on Alcoholism.

  20. The effects of self-management in general education classrooms on the organizational skills of adolescents with ADHD.

    PubMed

    Gureasko-Moore, Sammi; Dupaul, George J; White, George P

    2006-03-01

    Self-management procedures have been used in school settings to successfully reduce problem behaviors, as well as to reinforce appropriate behavior. A multiple-baseline across-participants design was applied in this study to evaluate the effects of using a self-management procedure to enhance the classroom preparation skills of secondary school students with attention-deficit/hyperactivity disorder (ADHD). Three male students enrolled in a public secondary school were selected for this study because teacher reports suggested that these students were insufficiently prepared for class and inconsistently completed assignments. The intervention involved training in self-management procedures focusing on the improvement of classroom preparation skills. Following the intervention, the training process was systematically faded. Results were consistent across the 3 participants in enhancing classroom preparation behaviors. Implications for practice and future research are discussed.

  1. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    PubMed

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that, with an appropriate combination of two optical simulation techniques (classical ray-tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  2. Aircraft Environmental Systems Mechanic. Part 2.

    ERIC Educational Resources Information Center

    Chanute AFB Technical Training Center, IL.

    This packet contains learning modules designed for a self-paced course in aircraft environmental systems mechanics that was developed for the Air Force. Learning modules consist of some or all of the following materials: objectives, instructions, equipment, procedures, information sheets, handouts, workbooks, self-tests with answers, review…

  3. On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN

    NASA Astrophysics Data System (ADS)

    Patriarchi, P.; Perinotto, M.

    The Complete Linearization Method (Mihalas, 1978) consists in the determination of the radiation field (at a set of frequency points), atomic level populations, temperature, electron density etc., by resolving the system of radiative transfer, thermal equilibrium, statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved by iteration after linearization, using a perturbative method, starting from an initial guess solution. Of course the Complete Linearization Method is more time consuming than the previous one. But how great can this disadvantage be in the age of supercomputers? It is possible to approximately evaluate the CPU time needed to run a model by computing the number of multiplications necessary to solve the system.

  4. An iterative technique to stabilize a linear time invariant multivariable system with output feedback

    NASA Technical Reports Server (NTRS)

    Sankaran, V.

    1974-01-01

    An iterative procedure is described for determining the constant gain matrix that will stabilize a linear, constant, multivariable system using output feedback. This procedure avoids the transformation of variables required by other procedures. For the case in which the product of the output and input vector dimensions is greater than the number of states of the plant, a general solution is given. For the case in which the number of states exceeds the product of the input and output vector dimensions, a least-squares solution, which may not be stable in all cases, is presented. The results are illustrated with examples.
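    The idea of iterating directly on an output-feedback gain can be sketched as follows (hedged: a toy random-improvement search on the spectral abscissa of A + BKC, not Sankaran's procedure; all parameters are illustrative):

```python
import numpy as np

def stabilize_output_feedback(A, B, C, iters=500, seed=0):
    # Iterative search for a constant output-feedback gain K such that
    # A + B K C is Hurwitz (all eigenvalues in the open left half-plane).
    # Accepts a random candidate only if it lowers the spectral abscissa.
    rng = np.random.default_rng(seed)
    m, p = B.shape[1], C.shape[0]
    abscissa = lambda K: np.linalg.eigvals(A + B @ K @ C).real.max()
    K = np.zeros((m, p))
    best = abscissa(K)
    for _ in range(iters):
        cand = K + 0.5 * rng.standard_normal((m, p))
        a = abscissa(cand)
        if a < best:
            K, best = cand, a
    return K, best

# demo: double integrator with full-state output
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
K, alpha = stabilize_output_feedback(A, B, C)
```

    No transformation of variables is needed: the search works directly on the gain matrix, as in the abstract's motivation.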

  5. A general framework for regularized, similarity-based image restoration.

    PubMed

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in the kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desirable spectral properties for the normalized Laplacian, such as being symmetric and positive semidefinite and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations: in each outer iteration, the similarity weights are recomputed using the previous estimate, and the updated objective function is minimized using inner conjugate-gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
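    The outer/inner structure can be sketched in one dimension (hedged: chain-graph weights and an unnormalised Laplacian stand in for the paper's kernel-similarity and normalised-Laplacian machinery; function names and parameters are illustrative):

```python
import numpy as np

def graph_denoise(y, n_outer=3, lam=2.0, sigma=1.0):
    # Each outer pass recomputes similarity weights from the previous
    # estimate, then solves (I + lam*L) x = y with inner conjugate-gradient
    # steps, where L is the weighted chain-graph Laplacian.
    x = y.astype(float).copy()
    for _ in range(n_outer):
        w = np.exp(-((x[1:] - x[:-1]) ** 2) / (2.0 * sigma ** 2))
        def A(v):  # v + lam * L v
            d = w * (v[1:] - v[:-1])
            out = v.copy()
            out[:-1] -= lam * d
            out[1:] += lam * d
            return out
        # inner conjugate-gradient iterations on A x = y
        x_new = np.zeros_like(x)
        r = y - A(x_new)
        p = r.copy()
        rs = r @ r
        for _ in range(200):
            Ap = A(p)
            step = rs / (p @ Ap)
            x_new += step * p
            r -= step * Ap
            rs_new = r @ r
            if rs_new < 1e-12:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        x = x_new
    return x

# demo: a noisy step signal
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(25), np.ones(25)])
noisy = clean + 0.3 * rng.standard_normal(50)
denoised = graph_denoise(noisy)
```

    Because the Laplacian has zero row sums, the solve preserves the signal mean, while the data-adaptive weights smooth within flat regions more than across the step.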

  6. MCSCF wave functions for excited states of polar molecules - Application to BeO. [Multi-Configuration Self-Consistent Field

    NASA Technical Reports Server (NTRS)

    Bauschlicher, C. W., Jr.; Yarkony, D. R.

    1980-01-01

    A previously reported multi-configuration self-consistent field (MCSCF) algorithm based on the generalized Brillouin theorem is extended in order to treat the excited states of polar molecules. In particular, the algorithm takes into account the proper treatment of nonorthogonality in the space of single excitations and invokes, when necessary, a constrained optimization procedure to prevent the variational collapse of excited states. In addition, a configuration selection scheme (suitable for use in conjunction with extended configuration interaction methods) is proposed for the MCSCF procedure. The algorithm is used to study the low-lying singlet states of BeO, a system which has not previously been studied using an MCSCF procedure. MCSCF wave functions are obtained for three 1 Sigma + and two 1 Pi states. The 1 Sigma + results are juxtaposed with comparable results for MgO in order to assess the generality of the description presented here.

  7. Non-iterative distance constraints enforcement for cloth drapes simulation

    NASA Astrophysics Data System (ADS)

    Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno

    2016-03-01

    Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or even garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance-constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position-correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model which is initially in a horizontal position with one point fixed, and is allowed to drape under its own weight. Our simulation achieves a plausible cloth drape, as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of constraint enforcement, since the iterative procedure is eliminated.
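    The position-correction step can be sketched as follows (hedged: an illustration of the general idea rather than the authors' code; the free end of each spring is projected back to the rest length in a single sweep, with a gravity-driven explicit integration and illustrative parameters):

```python
import numpy as np

def enforce_distance(p_anchor, p_free, rest_len):
    # Non-iterative constraint enforcement: project the free end of a
    # spring back onto the rest-length sphere around the other end.
    d = p_free - p_anchor
    dist = np.linalg.norm(d)
    if dist < 1e-12:
        return p_free.copy()
    return p_anchor + d * (rest_len / dist)

# demo: a hanging chain with one fixed point; each link is corrected
# once per step -- a single sweep, no iteration.
n, rest = 5, 1.0
pts = np.array([[i * rest, 0.0] for i in range(n)])
vel = np.zeros_like(pts)
g, dt = np.array([0.0, -9.8]), 0.01
for _ in range(100):
    vel[1:] += g * dt                     # explicit gravity step
    pts[1:] += vel[1:] * dt
    for i in range(1, n):
        pts[i] = enforce_distance(pts[i - 1], pts[i], rest)
```

    Because the sweep runs outward from the fixed point and each correction moves only the free end, every link is restored to exactly its rest length after one pass, which is what prevents overstretching without iteration.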

  8. Exploring connections between statistical mechanics and Green's functions for realistic systems: Temperature dependent electronic entropy and internal energy from a self-consistent second-order Green's function

    NASA Astrophysics Data System (ADS)

    Welden, Alicia Rae; Rusakov, Alexander A.; Zgid, Dominika

    2016-11-01

    Including finite-temperature effects from the electronic degrees of freedom in electronic structure calculations of semiconductors and metals is desired; however, in practice it remains exceedingly difficult when using zero-temperature methods, since these methods require an explicit evaluation of multiple excited states in order to account for any finite-temperature effects. Using a Matsubara Green's function formalism remains a viable alternative, since in this formalism it is easier to include thermal effects and to connect dynamic quantities such as the self-energy with static thermodynamic quantities such as the Helmholtz energy, entropy, and internal energy. However, despite the promising properties of this formalism, little is known about the multiple solutions of the non-linear equations present in the self-consistent Matsubara formalism, and only a few cases involving a full Coulomb Hamiltonian were investigated in the past. Here, to shed some light on the iterative nature of the Green's function solutions, we self-consistently evaluate the thermodynamic quantities for a one-dimensional (1D) hydrogen solid at various interatomic separations and temperatures using the self-energy approximated to second order (GF2). At many points in the phase diagram of this system, multiple phases such as a metal and an insulator exist, and we are able to determine the most stable phase from an analysis of Helmholtz energies. Additionally, we show the evolution of the spectrum of 1D boron nitride to demonstrate that GF2 is capable of qualitatively describing the temperature effects influencing the size of the band gap.

  9. Aircraft Environmental Systems Mechanic. Part 1.

    ERIC Educational Resources Information Center

    Chanute AFB Technical Training Center, IL.

    This packet contains learning modules for a self-paced course in aircraft environmental systems mechanics that was developed for the Air Force. Each learning module consists of some or all of the following: objectives, instructions, equipment, procedures, information sheets, handouts, self-tests with answers, review section, tests, and response…

  10. Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.

    PubMed

    Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo

    2017-03-03

    Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens owing to the availability of the water-window region. In particular, projection-type microscopy has advantages in its wide viewing area, easy zooming function and easy extensibility to computed tomography (CT). The blur of the projection image due to the Fresnel diffraction of X-rays, which eventually reduces spatial resolution, can be corrected by an iteration procedure, i.e., repetition of Fresnel and inverse Fresnel transformations. However, it was found that the correction is not effective for all images, especially for images with low contrast. In order to improve the effectiveness of image correction by computer processing, in this study we evaluated the influence of background noise on the iteration procedure through a simulation study. Images of a model specimen with known morphology were used as a substitute for chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two different parameters to evaluate noise effects according to each situation in which the iteration procedure was not successful, and proposed an upper limit on the noise within which an effective iteration procedure for the chromosome images is possible. The study indicated that applying the new simulation and noise-evaluation method is useful for image processing where background noise cannot be ignored relative to the specimen image.

  11. Advances in the steady-state hybrid regime in DIII-D – a fully non-inductive, ELM-suppressed scenario for ITER

    DOE PAGES

    Petty, Craig C.; Nazikian, Raffi; Park, Jin Myung; ...

    2017-07-19

    Here, the hybrid regime with beta, collisionality, safety factor and plasma shape relevant to the ITER steady-state mission has been successfully integrated with ELM suppression by applying an odd-parity n=3 resonant magnetic perturbation (RMP). Fully non-inductive hybrids in the DIII-D tokamak with high beta (β ≤ 2.8%) and high confinement (H98y2 ≤ 1.4) in the ITER similar shape have achieved zero surface loop voltage for up to two current relaxation times using efficient central current drive from ECCD and NBCD. The n=3 RMP causes surprisingly little increase in thermal transport during ELM suppression. Poloidal magnetic flux pumping in hybrid plasmas maintains q above 1 without loss of current-drive efficiency, except that experiments show that extremely peaked ECCD profiles can create sawteeth. During ECCD, Alfvén eigenmode (AE) activity is replaced by a more benign fishbone-like mode, reducing anomalous beam-ion diffusion by a factor of 2. While the electron and ion thermal diffusivities substantially increase with higher ECCD power, the loss of confinement can be offset by the decreased fast-ion transport resulting from AE suppression. Extrapolations from DIII-D along a dimensionless parameter scaling path, as well as those using self-consistent theory-based modeling, show that these ELM-suppressed, fully non-inductive hybrids can achieve the Q = 5 ITER steady-state mission.

  12. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1991-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures that perform execution-time preprocessing, and executors, i.e., transformed versions of the source-code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indexes can have a significant impact on performance.
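    The inspector's wavefront computation can be sketched as follows (Python in place of the paper's setting; a simplified, hedged version that tracks only flow dependences through a last-writer table, with all names illustrative):

```python
def inspector(reads, writes, n):
    # Inspector sketch: assign each loop iteration to a wavefront one level
    # after the iteration that last wrote any location it reads.
    # Iterations sharing a wavefront can execute concurrently.
    last_writer_wave = {}
    wave = [0] * n
    for i in range(n):
        w = 0
        for loc in reads[i]:
            if loc in last_writer_wave:
                w = max(w, last_writer_wave[loc] + 1)
        wave[i] = w
        for loc in writes[i]:
            last_writer_wave[loc] = max(last_writer_wave.get(loc, -1), w)
    return wave

# a[i] = a[i-1] + 1 forces a serial chain; independent iterations share wave 0
serial = inspector([set(), {0}, {1}, {2}], [{0}, {1}, {2}, {3}], 4)
parallel = inspector([set()] * 4, [{0}, {1}, {2}, {3}], 4)
```

    An executor would then run the iterations wavefront by wavefront, which is the reordering for increased parallelism described above.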

  13. Two-Level Chebyshev Filter Based Complementary Subspace Method: Pushing the Envelope of Large-Scale Electronic Structure Calculations.

    PubMed

    Banerjee, Amartya S; Lin, Lin; Suryanarayana, Phanish; Yang, Chao; Pask, John E

    2018-06-12

    We describe a novel iterative strategy for Kohn-Sham density functional theory calculations aimed at large systems (>1,000 electrons), applicable to metals and insulators alike. In lieu of explicit diagonalization of the Kohn-Sham Hamiltonian on every self-consistent field (SCF) iteration, we employ a two-level Chebyshev polynomial filter based complementary subspace strategy to (1) compute a set of vectors that span the occupied subspace of the Hamiltonian; (2) reduce subspace diagonalization to just partially occupied states; and (3) obtain those states in an efficient, scalable manner via an inner Chebyshev filter iteration. By reducing the necessary computation to just partially occupied states and obtaining these through an inner Chebyshev iteration, our approach reduces the cost of large metallic calculations significantly, while eliminating subspace diagonalization for insulating systems altogether. We describe the implementation of the method within the framework of the discontinuous Galerkin (DG) electronic structure method and show that this results in a computational scheme that can effectively tackle bulk and nano systems containing tens of thousands of electrons, with chemical accuracy, within a few minutes or less of wall clock time per SCF iteration on large-scale computing platforms. We anticipate that our method will be instrumental in pushing the envelope of large-scale ab initio molecular dynamics. As a demonstration of this, we simulate a bulk silicon system containing 8,000 atoms at finite temperature, and obtain an average SCF step wall time of 51 s on 34,560 processors; thus allowing us to carry out 1.0 ps of ab initio molecular dynamics in approximately 28 h (of wall time).
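    The core ingredient, a Chebyshev polynomial filter that damps the unwanted part of the spectrum, can be sketched on a toy Hamiltonian (hedged: a dense diagonal matrix and plain NumPy stand in for the paper's DG framework and two-level strategy; only the standard three-term recurrence is shown):

```python
import numpy as np

def chebyshev_filter(H, X, m, a, b):
    # Degree-m Chebyshev filter: damps eigencomponents of H inside [a, b]
    # so that the wanted states below `a` dominate the filtered block X.
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y_prev, Y = X, (H @ X - c * X) / e
    for _ in range(2, m + 1):
        Y_prev, Y = Y, 2.0 * (H @ Y - c * Y) / e - Y_prev
    return Y

# toy Hamiltonian with a known spectrum; keep the 3 lowest states
rng = np.random.default_rng(1)
H = np.diag(np.linspace(0.0, 10.0, 50))
X = rng.standard_normal((50, 3))
Y = chebyshev_filter(H, X, m=50, a=0.5, b=10.0)
Q, _ = np.linalg.qr(Y)                           # orthonormalise the block
ritz = np.sort(np.linalg.eigvalsh(Q.T @ H @ Q))  # Rayleigh-Ritz values
```

    Because Chebyshev polynomials stay bounded on [a, b] but grow rapidly outside it, a few matrix-vector products amplify the occupied subspace without any explicit diagonalization of H.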

  14. Solving Differential Equations Using Modified Picard Iteration

    ERIC Educational Resources Information Center

    Robin, W. A.

    2010-01-01

    Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
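    A direct-integration Picard scheme of the kind described can be sketched numerically (hedged: a plain trapezoidal-quadrature version for an initial value problem, not the article's analytical procedure):

```python
import numpy as np

def picard(f, t, y0, n_iter=20):
    # Picard iteration sketch: y_{k+1}(t) = y0 + integral_0^t f(s, y_k(s)) ds,
    # with the integral approximated by a cumulative trapezoidal rule.
    y = np.full_like(t, y0)
    for _ in range(n_iter):
        fy = f(t, y)
        steps = np.diff(t) * (fy[1:] + fy[:-1]) / 2.0
        y = y0 + np.concatenate(([0.0], np.cumsum(steps)))
    return y

# demo: y' = y, y(0) = 1 on [0, 1]; the exact solution is e^t
t = np.linspace(0.0, 1.0, 201)
y = picard(lambda s, u: u, t, 1.0)
```

    Each pass integrates the previous approximation, so the iterates build up the Taylor series of the solution term by term; on a bounded interval the map is a contraction and the loop converges quickly.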

  15. Development of a peer-supported, self-management intervention for people following mental health crisis.

    PubMed

    Milton, Alyssa; Lloyd-Evans, Brynmor; Fullarton, Kate; Morant, Nicola; Paterson, Bethan; Hindle, David; Kelly, Kathleen; Mason, Oliver; Lambert, Marissa; Johnson, Sonia

    2017-11-09

    A documented gap in support exists for service users following discharge from acute mental health services, and structured interventions to reduce relapse are rarely provided. Peer-facilitated self-management interventions have potential to meet this need, but evidence for their effectiveness is limited. This paper describes the development of a peer-provided self-management intervention for mental health service users following discharge from crisis resolution teams (CRTs). A five-stage iterative mixed-methods approach of sequential data collection and intervention development was adopted, following the development and piloting stages of the MRC framework for developing and evaluating complex interventions. Evidence review (stage 1) included systematic reviews of both peer support and self-management literature. Interviews with CRT service users (n = 41) regarding needs and priorities for support following CRT discharge were conducted (stage 2). Focus group consultations (n = 12) were held with CRT service-users, staff and carers to assess the acceptability and feasibility of a proposed intervention, and to refine intervention organisation and content (stage 3). Qualitative evaluation of a refined, peer-provided, self-management intervention involved qualitative interviews with CRT service user participants (n = 9; n = 18) in feasibility testing (stage 4) and a pilot trial (stage 5), and a focus group at each stage with the peer worker providers (n = 4). Existing evidence suggests self-management interventions can reduce relapse and improve recovery. Initial interviews and focus groups indicated support for the overall purpose and planned content of a recovery-focused self-management intervention for people leaving CRT care adapted from an existing resource: The personal recovery plan (developed by Repper and Perkins), and for peer support workers (PSWs) as providers. 
Participant feedback after feasibility testing was positive regarding facilitation of the intervention by PSWs; however, the structured self-management booklet was underutilised. Modifications to the self-management intervention manual and PSWs' training were made before piloting, which confirmed the acceptability and feasibility of the intervention for testing in a future, definitive trial. A manualised intervention and operating procedures, focusing on the needs and priorities of the target client group, have been developed through iterative stages of intervention development and feedback for testing in a trial context. Trial Registration ISRCTN01027104 date of registration: 11/10/2012.

  16. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.

    PubMed

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2014-12-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
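A minimal 1-D sketch of the consistency property described above, assuming an invented threshold transfer function and a four-voxel block (the paper's sparse 4-D Gaussian-mixture representation is not reproduced here). Applying the transfer function to a per-block pdf is a simple dot product, and it reproduces the average of the fine-level classified values exactly, whereas classifying the averaged intensity does not:

```python
def transfer(v):
    # Nonlinear transfer function (illustrative): highlight bright material.
    return 1.0 if v >= 0.5 else 0.0

fine = [0.1, 0.9, 0.2, 0.8]               # fine-resolution voxel intensities

# Standard down-sampling: average first, classify second.
avg = sum(fine) / len(fine)               # 0.5
standard = transfer(avg)

# Ground truth at the coarse level: average of the classified fine voxels.
fine_classified = [transfer(v) for v in fine]
consistent = sum(fine_classified) / len(fine_classified)

# pdf representation of the coarse voxel: an intensity histogram with weights.
pdf = {}
for v in fine:
    pdf[v] = pdf.get(v, 0.0) + 1.0 / len(fine)

# Applying the transfer function to the pdf is a dot product over the bins.
pdf_based = sum(p * transfer(v) for v, p in pdf.items())
```

Here `standard` misclassifies the whole block as bright, while `pdf_based` equals `consistent` independent of the resolution level.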

  17. Embedded sparse representation of fMRI data via group-wise dictionary optimization

    NASA Astrophysics Data System (ADS)

    Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.

    2016-03-01

Sparse learning enables dimension reduction and efficient modeling of high-dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR) to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduce additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, offering a new, efficient fMRI analysis method.

  18. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with adaptive iterative methods to reduce linearization errors, improving the consistency and accuracy of SEIF while retaining its scalability advantage. Simulations and practical experiments were carried out with both a land-car benchmark and an autonomous underwater vehicle. Comparisons between ISEIF, the standard EKF, and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over the EKF.
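The core idea, relinearizing the measurement model inside the information-form update to cut linearization error, can be sketched on a scalar toy problem. Everything here (the 1-D state, the measurement z = x², the prior and noise values) is illustrative, not the paper's ISEIF:

```python
def h(x):  return x * x            # nonlinear measurement model (toy)
def H(x):  return 2.0 * x          # its Jacobian

def update(omega0, xi0, z, R, n_iter):
    """Information-form measurement update; n_iter=1 is the standard
    (non-iterated) step, n_iter>1 relinearizes about the improved estimate."""
    x = xi0 / omega0               # mean recovered from information form
    for _ in range(n_iter):
        Hx = H(x)
        omega = omega0 + Hx * Hx / R                    # information matrix
        xi = xi0 + (Hx / R) * (z - h(x) + Hx * x)       # information vector
        x = xi / omega             # relinearization point for the next pass
    return x

z_true = h(2.0)                    # noise-free measurement of true state x = 2
x_single = update(1.0, 1.8, z_true, 0.1, 1)   # one linearization at the prior
x_iter   = update(1.0, 1.8, z_true, 0.1, 5)   # iterated relinearization
```

The iterated estimate lands closer to the true state than the single-pass update, which is the mechanism the abstract credits for improved consistency.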

  19. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
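A toy sketch of the algorithm family analyzed above, under the simplest choices: Euclidean mirror map Ψ(w) = ||w||²/2 (for which the composite mirror descent step reduces to a soft-thresholding gradient step), an l1 regularizer, and polynomially decaying step sizes η_t = t^(-θ). The 1-D data stream and all constants are invented for illustration:

```python
def soft_threshold(v, tau):
    # Proximal map of tau * |w|.
    if v > tau:  return v - tau
    if v < -tau: return v + tau
    return 0.0

def ocmd(data, lam=0.1, w0=0.0, theta=0.5):
    """Online composite mirror descent, Euclidean mirror map, last iterate."""
    w = w0
    for t, (x, y) in enumerate(data, start=1):
        eta = t ** (-theta)                # polynomially decaying step size
        grad = (w * x - y) * x             # gradient of the loss 0.5*(w*x - y)**2
        w = soft_threshold(w - eta * grad, eta * lam)   # mirror/prox step
    return w

# Stream of examples with y = 2*x: the last iterate should approach 2
# (slightly shrunk toward zero by the l1 regularizer).
stream = [((i % 7) / 7.0 + 0.5, 2.0 * ((i % 7) / 7.0 + 0.5)) for i in range(200)]
w_last = ocmd(stream)
```

The paper's contribution is the rate analysis of exactly this kind of last iterate, without averaging or boundedness assumptions; the sketch only shows the update being analyzed.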

  20. Examining the Effects of a Behavioural Self-Control Package on the Behaviour of the Distance Learner. REDEAL Research Report #8. Project REDEAL. Research and Evaluation of Distance Education for the Adult Learner.

    ERIC Educational Resources Information Center

    Powell, Russell; Coldeway, Dan O.

    An unsuccessful attempt was made to facilitate study behavior of Athabasca University learners through instruction in behavioral methods of self-control. The general procedure consisted of providing each student with a package containing instructions and materials for the self-application of the strategies of self-monitoring and standard setting.…

  1. Determination of the clean-up efficiency of the solid-phase extraction of rosemary extracts: Application of full-factorial design in hyphenation with Gaussian peak fit function.

    PubMed

    Meischl, Florian; Kirchler, Christian Günter; Jäger, Michael Andreas; Huck, Christian Wolfgang; Rainer, Matthias

    2018-02-01

We present a novel method for the quantitative determination of the clean-up efficiency to provide a calculated parameter for peak purity through iterative fitting in conjunction with design of experiments. Rosemary extracts were used and analyzed before and after solid-phase extraction using a self-fabricated mixed-mode sorbent based on poly(N-vinylimidazole/ethylene glycol dimethacrylate). Optimization was performed by variation of washing steps using a full three-level factorial design and response surface methodology. Separation efficiency of rosmarinic acid from interfering compounds was calculated using an iterative fit of Gaussian-like signals, and quantifications were performed by the separate integration of the two interfering peak areas. Results and recoveries were analyzed using Design-Expert® software and revealed significant differences between the washing steps. Optimized parameters were considered and used for all further experiments. Furthermore, the solid-phase extraction procedure was tested and compared with commercially available sorbents. In contrast to the generic protocols of the manufacturers, the optimized procedure showed excellent recoveries and clean-up rates for the polymer with ion-exchange properties. Finally, rosemary extracts from different manufacturing areas and application types were studied to verify the applicability of the developed method. The cleaned-up extracts were analyzed by liquid chromatography with tandem mass spectrometry for detailed compound evaluation to exclude any interference from coeluting molecules. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
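The "separate integration of the two interfering peak areas" step can be sketched once an iterative fit has resolved the overlapping signal into Gaussian components: each analyte's area then follows from its own component alone (analytically, area = amplitude · σ · √(2π)). The two parameter sets below are invented stand-ins, not fitted values from the paper:

```python
import math

def gauss(x, amp, mu, sigma):
    return amp * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Stand-in "fitted" parameters (amplitude, center, sigma) for the analyte
# and one interfering compound.
peaks = {"rosmarinic_acid": (1.0, 5.0, 0.30),
         "interferent":     (0.6, 5.6, 0.25)}

def area_numeric(amp, mu, sigma, lo, hi, n=4000):
    # Trapezoidal integration of a single fitted component.
    h = (hi - lo) / n
    s = 0.5 * (gauss(lo, amp, mu, sigma) + gauss(hi, amp, mu, sigma))
    s += sum(gauss(lo + i * h, amp, mu, sigma) for i in range(1, n))
    return s * h

# Integrate each component separately over +/- 8 sigma around its center.
areas = {name: area_numeric(a, m, s, m - 8 * s, m + 8 * s)
         for name, (a, m, s) in peaks.items()}
```

The numeric area of each component matches the closed form amp·σ·√(2π), so peak purity can be quantified per analyte even where the raw signals overlap.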

  2. Establishing Factor Validity Using Variable Reduction in Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Hofmann, Rich

    1995-01-01

    Using a 21-statement attitude-type instrument, an iterative procedure for improving confirmatory model fit is demonstrated within the context of the EQS program of P. M. Bentler and maximum likelihood factor analysis. Each iteration systematically eliminates the poorest fitting statement as identified by a variable fit index. (SLD)
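The iterative elimination described above is a backward-elimination loop: each pass drops the poorest-fitting statement until the fit criterion is met. The sketch below uses a hypothetical `fit_index` stand-in (higher is better, with two arbitrarily designated "bad" items), not EQS's actual fit statistic:

```python
def fit_index(items):
    # Hypothetical model fit for a set of items; items 3 and 7 are assumed
    # to fit poorly for illustration.
    bad = {3: 0.30, 7: 0.20}
    return 1.0 - sum(bad.get(i, 0.0) for i in items)

def fit_without(items, i):
    # Fit of the model with item i removed; the item whose removal helps
    # most is identified as the poorest-fitting statement.
    return fit_index([j for j in items if j != i])

def reduce_items(items, target=0.95):
    items = list(items)
    while fit_index(items) < target and len(items) > 1:
        worst = max(items, key=lambda i: fit_without(items, i))
        items.remove(worst)          # eliminate the poorest-fitting statement
    return items

kept = reduce_items(range(1, 22))    # 21-statement instrument
```

Two passes suffice here: the loop drops items 3 and 7 and stops once the fit criterion is satisfied.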

  3. A computer program to calculate the longitudinal aerodynamic characteristics of wing-flap configurations with externally blown flaps

    NASA Technical Reports Server (NTRS)

    Mendenhall, M. R.; Goodwin, F. K.; Spangler, S. B.

    1976-01-01

A vortex lattice lifting-surface method is used to model the wing and multiple flaps. Each lifting surface may be of arbitrary planform having camber and twist, and the multiple-slotted trailing-edge flap system may consist of up to ten flaps with different spans and deflection angles. The engine wakes model consists of a series of closely spaced vortex rings with circular or elliptic cross sections. The rings are normal to a wake centerline which is free to move vertically and laterally to accommodate the local flow field beneath the wing and flaps. The two potential flow models are used in an iterative fashion to calculate the wing-flap loading distribution including the influence of the wakes from up to two turbofan engines on the semispan. The method is limited to the condition where the flow and geometry of the configurations are symmetric about the vertical plane containing the wing root chord. The calculation procedure starts with arbitrarily positioned wake centerlines and the iterative calculation continues until the total configuration loading converges within a prescribed tolerance. Program results include total configuration forces and moments, individual lifting-surface load distributions, including pressure distributions, individual flap hinge moments, and flow field calculations at arbitrary field points.
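The iterate-until-converged coupling described above can be shown in skeleton form: reposition the wake centerline from the current loading, recompute the loading, and stop when the change falls below a tolerance. The two update functions are stand-in scalar models, not the actual vortex-lattice and vortex-ring computations:

```python
def wake_position(loading):
    # Stand-in: the wake centerline moves with the local flow it induces.
    return 0.5 * loading

def surface_loading(wake):
    # Stand-in: loading on the wing-flap system induced by the wake position.
    return 1.0 + 0.3 * wake

loading, tol = 1.0, 1e-10
for it in range(100):
    new_loading = surface_loading(wake_position(loading))
    if abs(new_loading - loading) < tol:   # prescribed convergence tolerance
        break
    loading = new_loading
```

Because the stand-in coupling is a contraction, the loop converges in a handful of passes to the fixed point loading = 1/0.85; the real procedure iterates the two potential-flow models in the same pattern.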

  4. Modal analysis and dynamic stresses for acoustically excited Shuttle insulation tiles

    NASA Technical Reports Server (NTRS)

    Ojalvo, I. U.; Ogilvie, P. I.

    1976-01-01

    The thermal protection system of the Space Shuttle consists of thousands of separate insulation tiles, of varying thicknesses, bonded to the orbiter's surface through a soft strain-isolation pad which is bonded, in turn, to the vehicle's stiffened metallic skin. A modal procedure for obtaining the acoustically induced RMS stress in these comparatively thick tiles is described. The modes employed are generated by a previously developed iterative procedure which converges rapidly for the combined system of tiles and primary structure considered. Each tile is idealized by several hundred three-dimensional finite elements and all tiles on a given panel interact dynamically. Acoustic response results from the present analyses are presented. Comparisons with other analytical results and measured modal data for a typical Shuttle panel, both with and without tiles, are made, and the agreement is good.

  5. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    NASA Astrophysics Data System (ADS)

    Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; Govind, Niranjan; Yang, Chao

    2017-12-01

We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
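The key structural fact, that MK is self-adjoint in the K-inner product ⟨u, v⟩_K = uᵀKv when M and K are symmetric positive definite, already makes a plain power iteration work if norms and Rayleigh quotients are taken in the K-inner product. The 2×2 matrices below are toy stand-ins, not the response matrices, and this sketches neither the Davidson nor the LOBPCG variant, only the underlying K-inner-product mechanics:

```python
def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def k_inner(K, u, v):
    # K-inner product <u, v>_K = u^T K v.
    Kv = matvec(K, v)
    return u[0]*Kv[0] + u[1]*Kv[1]

M = [[2.0, 0.3], [0.3, 1.0]]       # toy symmetric positive definite matrices
K = [[1.5, 0.2], [0.2, 0.8]]

x = [1.0, 0.0]
lam = 0.0
for _ in range(100):
    y = matvec(M, matvec(K, x))                  # apply the product MK
    lam = k_inner(K, x, y) / k_inner(K, x, x)    # Rayleigh quotient in <.,.>_K
    nrm = k_inner(K, y, y) ** 0.5                # K-norm normalization
    x = [y[0] / nrm, y[1] / nrm]
```

The converged `lam` equals the largest eigenvalue of the (nonsymmetric) product MK, recovered here entirely with symmetric-style iterations thanks to the K-inner product.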

  6. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is explicit iterative and is based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on optimizing the convergence of the iterations to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of the intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
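The Chebyshev-parameter idea can be sketched for the model task of solving A x = b by explicit (Richardson) iteration: the step sizes are the reciprocals of the Chebyshev roots mapped onto the operator's spectral interval, which minimizes the worst-case error after a fixed number of steps. A diagonal toy operator (invented here) makes the spectral bounds exact:

```python
import math

def cheb_solve(diag, b, lam_min, lam_max, n):
    """n explicit iteration steps with Chebyshev-optimal parameters."""
    x = [0.0] * len(b)
    for k in range(n):
        # k-th Chebyshev root mapped from [-1, 1] onto [lam_min, lam_max].
        root = 0.5 * (lam_max + lam_min) + 0.5 * (lam_max - lam_min) * math.cos(
            math.pi * (2 * k + 1) / (2 * n))
        tau = 1.0 / root                       # iteration parameter
        x = [xi + tau * (bi - di * xi) for xi, di, bi in zip(x, diag, b)]
    return x

diag = [0.5, 1.0, 2.0, 4.0]        # eigenvalues of the (diagonal) toy operator
b = [1.0, 2.0, 3.0, 4.0]
x = cheb_solve(diag, b, 0.5, 4.0, 16)
exact = [bi / di for bi, di in zip(b, diag)]
```

After 16 steps the error is driven below the Chebyshev minimax bound for the interval [0.5, 4]; in the paper's setting the step count and parameters are instead fixed by approximation and stability requirements.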

  7. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

To address the issue that existing fusion methods cannot self-adaptively adjust the fusion rules according to the subsequent processing requirements of Remote Sensing (RS) imagery, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of genetic algorithms with those of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast-pyramid transform as the observation operator. It then designs an objective function as a weighted sum of evaluation indices and optimizes this objective function with GSDA so as to obtain an RS image of higher resolution. The main points are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• The article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
• The text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  8. Numerical modeling and optimization of the Iguassu gas centrifuge

    NASA Astrophysics Data System (ADS)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.

    2017-07-01

The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is described. The procedure consists of a few steps. In the first step, the problem of the hydrodynamic flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved, after which the separation power of the gas centrifuge is calculated. In the last step, the time-consuming procedure of optimizing the GC is performed, yielding the maximum of the separation power. The optimization is based on the BOBYQA method and exploits the results of numerical simulations of the hydrodynamics and diffusion of the isotope mixture. Fast convergence is achieved owing to the use of a direct solver in the solution of the hydrodynamic and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations taking 811 minutes.

  9. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hanming; Wang, Linyuan; Li, Lei

    2016-06-15

Purpose: Metal artifact reduction (MAR) is a major problem and a challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from sinograms unaffected by metals shows promising potential in detail recovery and has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of incomplete data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants for the iterative MAR scheme. Methods: The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the back-projection procedure to restrain the inconsistency components in low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is also incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP and an alternating direction minimization (ADM) scheme, respectively. The developed algorithms are referred to as "iFBP-TV" and "TV-FADM," respectively. Two projection-completion-based MAR methods and three iterative MAR methods are performed simultaneously for comparison. Results: The proposed method performs reasonably on both simulated and real CT-scanned datasets. This approach reduces streak metal artifacts effectively and avoids the mentioned effects in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square errors, normalized mean absolute distance, and universal quality index metrics of the images. Both the iFBP-TV and TV-FADM methods outperform their counterparts in all cases. Unlike conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and prevention of the introduction of new artifacts. Conclusions: Qualitative and quantitative evaluations of experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered because of the reduction of artifacts caused by inconsistency effects.
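The unmatched-pair idea can be illustrated with a tiny linear toy: a forward projector A and a backprojector B that is deliberately not Aᵀ (standing in for the ramp-filtered backprojection). The update x ← x + B(y − Ax) converges when the spectral radius of (I − BA) is below one, and the limit solves the system despite the mismatch. All matrices below are invented 2×2 stand-ins, not a CT geometry:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1.0, 0.2],
     [0.1, 1.0]]                  # toy forward projector (system matrix)
B = [[0.9, -0.15],
     [-0.05, 0.9]]                # approximate inverse, deliberately != A^T

x_true = [2.0, -1.0]
y = matvec(A, x_true)             # toy "sinogram"

x = [0.0, 0.0]
for _ in range(200):
    r = [yi - pi for yi, pi in zip(y, matvec(A, x))]   # residual in data space
    c = matvec(B, r)                                   # unmatched backprojection
    x = [xi + ci for xi, ci in zip(x, c)]              # additive correction
```

Here the spectral radius of (I − BA) is about 0.15, so the iterates recover `x_true` quickly; the paper additionally constrains each step with TV minimization, which this sketch omits.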

  10. Projecting High Beta Steady-State Scenarios from DIII-D Advanced Tokamak Discharges

    NASA Astrophysics Data System (ADS)

    Park, J. M.

    2013-10-01

Fusion power plant studies based on steady-state tokamak operation suggest that a normalized beta in the range of 4-6 is needed for economic viability. DIII-D is exploring a range of candidate high beta scenarios guided by FASTRAN modeling in a repeated cycle of experiment and modeling validation. FASTRAN is a new iterative numerical procedure coupled to the Integrated Plasma Simulator (IPS) that self-consistently integrates models of core transport, heating and current drive, equilibrium, and stability to find steady-state (d/dt = 0) solutions, and it reproduces most features of DIII-D high beta discharges with a stationary current profile. Separately, modeling components such as core transport (TGLF) and off-axis neutral beam current drive (NUBEAM) show reasonable agreement with experiment. Projecting forward to scenarios possible on DIII-D with future upgrades, two self-consistent noninductive scenarios at βN > 4 are found: high qmin and high internal inductance li. Both have bootstrap current fraction fBS > 0.5 and rely on the planned addition of a second off-axis neutral beamline and increased electron cyclotron heating. The high qmin > 2 scenario achieves stable operation at βN as high as 5 through a very broad current density profile, which improves the ideal-wall stabilization of low-n instabilities, along with confinement enhancement from low magnetic shear. The li near 1 scenario does not depend on ideal-wall stabilization; improved confinement from strong magnetic shear makes up for the lower pedestal needed to maintain high li. The tradeoff between increasing li and a reduced edge pedestal determines the achievable βN (near 4) and fBS (near 0.5). This modeling identifies the upgrades necessary to achieve the target scenarios and clarifies the pros and cons of particular scenarios to better inform the development of steady-state fusion. Supported by the US Department of Energy under DE-AC05-00OR22725 & DE-FC02-04ER54698.

  11. Excitation spectra of aromatic molecules within a real-space G W -BSE formalism: Role of self-consistency and vertex corrections

    DOE PAGES

    Hung, Linda; da Jornada, Felipe H.; Souto-Casares, Jaime; ...

    2016-08-15

Here, we present first-principles calculations of the vertical ionization potentials (IPs), electron affinities (EAs), and singlet excitation energies of an aromatic-molecule test set (benzene, thiophene, 1,2,5-thiadiazole, naphthalene, benzothiazole, and tetrathiafulvalene) within the GW and Bethe-Salpeter equation (BSE) formalisms. Our computational framework, which employs a real-space basis for ground-state and a transition-space basis for excited-state calculations, is well suited for high-accuracy calculations on molecules, as we show by comparing against G0W0 calculations within a plane-wave-basis formalism. We then generalize our framework to test variants of the GW approximation that include a local density approximation (LDA)-derived vertex function (ΓLDA) and quasiparticle-self-consistent (QS) iterations. We find that ΓLDA and quasiparticle self-consistency shift IPs and EAs by roughly the same magnitude, but with opposite sign for IPs and the same sign for EAs. G0W0 and QSGWΓLDA are more accurate for IPs, while G0W0ΓLDA and QSGW are best for EAs. For optical excitations, we find that perturbative GW-BSE underestimates the singlet excitation energy, while self-consistent GW-BSE results in good agreement with previous best-estimate values for both valence and Rydberg excitations. Finally, our work suggests that a hybrid approach, in which G0W0 energies are used for occupied orbitals and G0W0ΓLDA for unoccupied orbitals, also yields optical excitation energies in good agreement with experiment but at a smaller computational cost.

  13. Vortex breakdown simulation

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.

    1987-01-01

In this paper, steady, axisymmetric, inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as part of the output. The second is based on direct Newton iterations, where the linearized equations for all the unknowns are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first-order upwind differences are replaced by second-order schemes, the line relaxation procedure (with a linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.

  14. The Battlefield Environment Division Modeling Framework (BMF). Part 1: Optimizing the Atmospheric Boundary Layer Environment Model for Cluster Computing

    DTIC Science & Technology

    2014-02-01

idle waiting for the wavefront to reach it. To overcome this, Reeve et al. (2001) developed a scheme in analogy to the red-black Gauss-Seidel iterative ...understandable procedure calls. Parallelization of the SIMPLE iterative scheme with SIP used a red-black scheme similar to the red-black Gauss-Seidel ...scheme, the SIMPLE method, for pressure-velocity coupling. The result is a slowing convergence of the outer iterations. The red-black scheme excites a 2

  16. Self Modeling: Expanding the Theories of Learning

    ERIC Educational Resources Information Center

    Dowrick, Peter W.

    2012-01-01

    Self modeling (SM) offers a unique expansion of learning theory. For several decades, a steady trickle of empirical studies has reported consistent evidence for the efficacy of SM as a procedure for positive behavior change across physical, social, educational, and diagnostic variations. SM became accepted as an extreme case of model similarity;…

  17. Influence of Self-Regulation on the Development of Children's Number Sense

    ERIC Educational Resources Information Center

    Ivrendi, Asiye

    2011-01-01

    The present study examined predictive power of behavioral self-regulation, family and child characteristics on children's number sense. The participants consisted of 101 kindergarten children. A subsample of 30 children was randomly chosen for the reliability procedures of Assessing Number Sense and Head, Toes, Knees and Shoulders instruments.…

  18. Self-Regulation for Students with Emotional and Behavioral Disorders: Preliminary Effects of the "I Control" Curriculum

    ERIC Educational Resources Information Center

    Smith, Stephen W.; Daunic, Ann P.; Algina, James; Pitts, Donna L.; Merrill, Kristen L.; Cumming, Michelle M.; Allen, Courtney

    2017-01-01

    Maladaptive adolescent behavior patterns often create escalating conflict with adults and peers, leading to poor long-term social trajectories. To address this, school-based behavior management often consists of contingent reinforcement for appropriate behavior, behavior reduction procedures, and placement in self-contained or alternative…

  19. The interaction of criminal procedure and outcome.

    PubMed

    Laxminarayan, Malini; Pemberton, Antony

    2014-01-01

    Procedural quality is an important aspect of crime victims' experiences in criminal proceedings and consists of different dimensions. Two of these dimensions are procedural justice (voice) and interpersonal justice (respectful treatment). Social psychological research has suggested that both voice and respectful treatment are moderated by the impact of outcomes of justice procedures on individuals' reactions. To add to this research, we extend this assertion to the criminal justice context, examining the interaction between the assessment of procedural quality and outcome favorability with victim's trust in the legal system and self-esteem. Hierarchical regression analyses reveal that voice, respectful treatment and outcome favorability are predictive of trust in the legal system and self-esteem. Further investigation reveals that being treated with respect is only related to trust in the legal system when outcome favorability is high. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Becoming an expert carer: the process of family carers learning to manage technical health procedures at home.

    PubMed

    McDonald, Janet; McKinlay, Eileen; Keeling, Sally; Levack, William

    2016-09-01

    To describe the learning process of family carers who manage technical health procedures (such as enteral tube feeding, intravenous therapy, dialysis or tracheostomy care) at home. Increasingly, complex procedures are being undertaken at home but little attention has been paid to the experiences of family carers who manage such procedures. Grounded theory, following Charmaz's constructivist approach. Interviews with 26 family carers who managed technical health procedures and 15 health professionals who taught carers such procedures. Data collection took place in New Zealand over 19 months during 2011-2013. Grounded theory procedures of iterative data collection, coding and analysis were followed, with the gradual development of theoretical ideas. The learning journey comprised three phases: (1) an initial, concentrated period of training; (2) novice carers taking responsibility for day-to-day care of procedures while continuing their learning; and (3) with time, experience and ongoing self-directed learning, the development of expertise. Teaching and support by health professionals (predominantly nurses) was focussed on the initial phase, but carers' learning continued throughout, developed through their own experience and using additional sources of information (notably the Internet and other carers). Further work is needed to determine the best educational process for carers, including where to locate training, who should teach them, optimal teaching methods and how structured or individualized teaching should be. Supporting carers well also benefits patient care. © 2016 John Wiley & Sons Ltd.

  1. Simultaneous multigrid techniques for nonlinear eigenvalue problems: Solutions of the nonlinear Schrödinger-Poisson eigenvalue problem in two and three dimensions

    NASA Astrophysics Data System (ADS)

    Costiner, Sorin; Ta'asan, Shlomo

    1995-07-01

    Algorithms for nonlinear eigenvalue problems (EP's) often require solving a large number of EP's self-consistently. Convergence difficulties may occur if the solution is not sought in an appropriate region, if global constraints have to be satisfied, or if close or equal eigenvalues are present. Multigrid (MG) algorithms for nonlinear problems and for EP's obtained from discretizations of partial differential EP's have often been shown to be more efficient than single-level algorithms. This paper presents MG techniques and an MG algorithm for nonlinear Schrödinger-Poisson EP's. The algorithm overcomes the above-mentioned difficulties by combining the following techniques: an MG simultaneous treatment of the eigenvectors, the nonlinearity, and the global constraints; MG stable subspace continuation techniques for the treatment of the nonlinearity; and an MG projection coupled with backrotations for the separation of solutions. These techniques keep the solutions in an appropriate region, where the algorithm converges fast, and reduce the large number of self-consistent iterations to only a few, or even one, MG simultaneous iteration. The MG projection makes it possible to efficiently overcome difficulties related to clusters of close and equal eigenvalues. Computational examples are presented for the nonlinear Schrödinger-Poisson EP in two and three dimensions, exhibiting special computational difficulties due to the nonlinearity and to equal and closely clustered eigenvalues. For these cases, the algorithm requires O(qN) operations for the calculation of q eigenvectors of size N and for the corresponding eigenvalues. One MG simultaneous cycle per fine level was performed. The total computational cost is equivalent to only a few Gauss-Seidel relaxations per eigenvector. An asymptotic convergence rate of 0.15 per MG cycle is attained.
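
The simultaneous treatment of several eigenvectors can be illustrated, at a single grid level, by plain subspace iteration with a Rayleigh-Ritz projection for separating close eigenpairs. This is only a sketch of the separation idea on a linear toy problem; the paper's MG version adds coarse grids, continuation and backrotations on top of it:

```python
import numpy as np

def simultaneous_iteration(A, q, iters=500, seed=0):
    # Single-level sketch: all q eigenvectors are iterated together and
    # separated each sweep by a Rayleigh-Ritz projection.
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    sigma = np.linalg.norm(A, 1)            # shift: makes the smallest
    B = sigma * np.eye(n) - A               # eigenvalues of A dominant in B
    V = rng.standard_normal((n, q))
    for _ in range(iters):
        V, _ = np.linalg.qr(B @ V)          # power step + orthonormalization
        w, S = np.linalg.eigh(V.T @ A @ V)  # Rayleigh-Ritz: separates
        V = V @ S                           # close/clustered Ritz pairs
    return w, V

# Toy stand-in for the discretized EP: a 1D discrete Laplacian.
n, q = 20, 3
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
vals, vecs = simultaneous_iteration(A, q)
```
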

  2. Truly self-consistent solution of Kohn-Sham equations for extended systems with inhomogeneous electron gas

    NASA Astrophysics Data System (ADS)

    Shul'man, A. Ya; Posvyanskii, D. V.

    2014-05-01

    The density functional approach in the Kohn-Sham approximation is widely used to study properties of many-electron systems. Due to the nonlinearity of the Kohn-Sham equations, the general self-consistent solution method for infinite systems involves iterations with alternate solutions of the Poisson and Schrödinger equations. One of the problems with such an approach is that the charge distribution, updated by solving the Schrödinger equation, may be incompatible with the boundary conditions of the Poisson equation for the Coulomb potential. The resulting instability or divergence manifests itself most appreciably in the case of infinitely extended systems, because the corresponding boundary-value problem becomes singular. In this work, a stable iterative scheme for solving the Kohn-Sham equations for infinite systems with an inhomogeneous electron gas is described, based on eliminating the long-range character of the Coulomb interaction, which causes the tight coupling of the charge distribution with the boundary conditions. This algorithm has previously been implemented successfully in the calculation of the work function and surface energy of simple metals in the jellium model. Here it is used to calculate the energy spectrum of the quasi-two-dimensional electron gas in the accumulation layer at the n-InAs semiconductor surface. The electrons in such a structure occupy states that belong to both the discrete and the continuous parts of the energy spectrum. This causes convergence problems in the usual approaches; these problems do not arise in our case. Because of the narrow bandgap of InAs, it is necessary to take the nonparabolicity of the conduction band into account; this is done by means of a new effective mass method. The calculated quasi-two-dimensional energy bands correspond well to experimental data measured by the angle-resolved photoelectron spectroscopy technique.
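
The alternating Poisson/Schrödinger iteration, and the damping typically needed to keep it stable, can be sketched on a toy 1D model. The coupling constant and mixing factor below are hypothetical, and this is not the authors' boundary-condition-eliminating scheme, only the generic alternation it improves upon:

```python
import numpy as np

# Toy 1D self-consistency loop: the lowest orbital generates a charge
# density, the Poisson equation turns it into a potential, and linear
# density mixing damps the oscillations of the bare alternation.
n = 200
h = 1.0 / (n + 1)
lap = (np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)) / h**2
coupling = 20.0                          # hypothetical interaction strength

rho = np.full(n, 1.0)                    # roughly normalized flat guess
for it in range(300):
    # Poisson step: -V'' = coupling * rho, with V = 0 at both ends.
    V = np.linalg.solve(-lap, coupling * rho)
    # Schrodinger step: lowest orbital of H = -(1/2) Lap + diag(V).
    E, psi = np.linalg.eigh(-0.5 * lap + np.diag(V))
    rho_new = psi[:, 0]**2 / h           # density, normalized so sum*h = 1
    resid = np.max(np.abs(rho_new - rho))
    if resid < 1e-12:                    # self-consistency reached
        break
    rho = 0.7 * rho + 0.3 * rho_new      # linear mixing (damped update)
```
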

  3. The wide-angle equation and its solution through the short-time iterative Lanczos method.

    PubMed

    Campos-Martínez, José; Coalson, Rob D

    2003-03-20

    Properties of the wide-angle equation (WAEQ), a nonparaxial scalar wave equation used to propagate light through media characterized by inhomogeneous refractive-index profiles, are studied. In particular, it is shown that the WAEQ is not equivalent to the more complicated but more fundamental Helmholtz equation (HEQ) when the index of refraction profile depends on the position along the propagation axis. This includes all nonstraight waveguides. To study the quality of the WAEQ approximation, we present a novel method for computing solutions to the WAEQ. This method, based on a short-time iterative Lanczos (SIL) algorithm, can be applied directly to the full three-dimensional case, i.e., systems consisting of the propagation axis coordinate and two transverse coordinates. Furthermore, the SIL method avoids series-expansion procedures (e.g., Padé approximants) and thus convergence problems associated with such procedures. Detailed comparisons of solutions to the HEQ, WAEQ, and the paraxial equation (PEQ) are presented for two cases in which numerically exact solutions to the HEQ can be obtained by independent analysis, namely, (i) propagation in a uniform dielectric medium and (ii) propagation along a straight waveguide that has been tilted at an angle to the propagation axis. The quality of WAEQ and PEQ, compared with exact HEQ results, is investigated. Cases are found for which the WAEQ actually performs worse than the PEQ.
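
The heart of a short-time iterative Lanczos step is to build a small Krylov subspace from the current field and exponentiate the resulting tridiagonal projection of the operator. A minimal sketch for a generic Hermitian matrix (a stand-in, not the WAEQ propagation operator itself) is:

```python
import numpy as np
from scipy.linalg import expm, eigh_tridiagonal

def sil_step(A, v, dz, m=12):
    # Approximate expm(-1j*dz*A) @ v in an m-dimensional Krylov space of
    # the Hermitian matrix A: one short-time Lanczos propagation step.
    V = np.zeros((v.size, m), dtype=complex)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    nv = np.linalg.norm(v)
    V[:, 0] = v / nv
    for j in range(m):                       # Lanczos three-term recurrence
        w = A @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    theta, S = eigh_tridiagonal(alpha, beta)  # spectrum of the projection
    y = S @ (np.exp(-1j * dz * theta) * S[0, :])  # expm(T) acting on e1
    return nv * (V @ y)

# Check one step against the dense matrix exponential.
rng = np.random.default_rng(1)
B = rng.standard_normal((30, 30))
A = (B + B.T) / 2
v = rng.standard_normal(30).astype(complex)
out = sil_step(A, v, dz=0.02)
```

Because the Krylov projection is exact through degree m-1, no series expansion or Padé approximant is needed, which is the convergence advantage the abstract refers to.
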

  4. A Model For Selecting An Environmentally Responsive Trait: Evaluating Micro-scale Fitness Through UV-C Resistance and Exposure in Escherichia coli.

    NASA Astrophysics Data System (ADS)

    Schenone, D. J.; Igama, S.; Marash-Whitman, D.; Sloan, C.; Okansinski, A.; Moffet, A.; Grace, J. M.; Gentry, D.

    2015-12-01

    Experimental evolution of microorganisms in controlled microenvironments serves as a powerful tool for understanding the relationship between micro-scale microbial interactions and local- to global-scale environmental factors. In response to iterative and targeted environmental pressures, mutagenesis drives the emergence of novel phenotypes. Current methods to induce expression of these phenotypes require repetitive and time-intensive procedures and do not allow for the continuous monitoring of conditions such as optical density, pH and temperature. To address this shortcoming, an Automated Dynamic Directed Evolution Chamber is being developed. It will initially produce Escherichia coli cells with an elevated UV-C resistance phenotype, and it will ultimately be adapted for different organisms as well as for studying environmental effects. A useful phenotype and environmental factor for examining this relationship is UV-C resistance and exposure. In order to build a baseline for the device's operational parameters, a UV-C assay was performed on six E. coli replicates with three exposure fluxes across seven iterations. The fluxes included a 0 second exposure (control), 6 seconds at 3.3 J/m2/s and 40 seconds at 0.5 J/m2/s. After each iteration the cells were regrown and tested for UV-C resistance. We sought to quantify the increase and variability of UV-C resistance among the different fluxes and to observe changes in each replicate at each iteration in terms of variance. We observed that the 0 s control showed no significant increase in resistance, while the 6 s and 40 s fluxes showed increased resistance as the number of iterations increased. A one-million-fold increase in survivability was observed after seven iterations. Through statistical analysis using Spearman's rank correlation, the 40 s exposure showed signs of more consistently increased resistance, but seven iterations were insufficient to demonstrate statistical significance; to test this further, our experiments will include more iterations. Furthermore, we plan to sequence all the replicates. As adaptation dynamics under intense UV exposure lead to a high rate of change, it would be useful to observe differences in tolerance-related and non-tolerance-related genes between the original and UV-resistant strains.
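
A quick check on the exposure regimes quoted above shows that the two non-zero fluxes deliver almost the same total fluence, so the comparison isolates dose rate rather than total dose:

```python
# Total UV-C fluence (dose) = exposure time * flux.
dose_fast = 6 * 3.3     # 6 s at 3.3 J/m^2/s  -> ~19.8 J/m^2
dose_slow = 40 * 0.5    # 40 s at 0.5 J/m^2/s -> 20.0 J/m^2
assert abs(dose_fast - dose_slow) < 0.5   # nearly matched total dose
```
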

  5. "Ask Ernö": a self-learning tool for assignment and prediction of nuclear magnetic resonance spectra.

    PubMed

    Castillo, Andrés M; Bernal, Andrés; Dieden, Reiner; Patiny, Luc; Wist, Julien

    2016-01-01

    We present "Ask Ernö", a self-learning system for the automatic analysis of NMR spectra, consisting of integrated chemical shift assignment and prediction tools. The output of the automatic assignment component initializes and improves a database of assigned protons that is used by the chemical shift predictor. In turn, the predictions provided by the latter facilitate improvement of the assignment process. Iterating on these steps allows Ask Ernö to improve its ability to assign and predict spectra without any prior knowledge or assistance from human experts. This concept was tested by training such a system with a dataset of 2341 molecules and their (1)H-NMR spectra, and evaluating the accuracy of chemical shift predictions on a test set of 298 partially assigned molecules (2007 assigned protons). After 10 iterations, Ask Ernö was able to decrease its prediction error by 17 %, reaching an average error of 0.265 ppm. Over 60 % of the test chemical shifts were predicted within 0.2 ppm, while only 5 % still presented a prediction error of more than 1 ppm. Ask Ernö introduces an innovative approach to automatic NMR analysis that constantly learns and improves when provided with new data. Furthermore, it completely avoids the need for manually assigned spectra. This system has the potential to be turned into a fully autonomous tool able to compete with the best alternatives currently available. Graphical abstract: self-learning loop, in which any progress in the prediction (forward problem) improves the assignment ability (reverse problem) and vice versa.

  6. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search (GLS) algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the specific characteristics of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms that of other multiuser detectors in all cases discussed. The computation time is polynomial in the number of users.
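
The local-improvement idea can be sketched as a greedy one-bit-flip ascent on the standard OMD log-likelihood metric f(b) = 2 yᵀb - bᵀRb over b in {-1, +1}ᴷ, wrapped in a perturb-and-reoptimize loop so that only locally optimal solutions are visited. This is a generic iterated local search on a synthetic instance, not the paper's fast proprietary variant:

```python
import numpy as np

def omd_objective(b, y, R):
    # Standard OMD likelihood metric: f(b) = 2 y.b - b.R.b.
    return 2 * y @ b - b @ R @ b

def local_search(b, y, R):
    # Greedy one-bit-flip ascent to a locally optimal bit vector.
    b = b.copy()
    best = omd_objective(b, y, R)
    improved = True
    while improved:
        improved = False
        for k in range(b.size):
            b[k] = -b[k]
            val = omd_objective(b, y, R)
            if val > best:
                best, improved = val, True
            else:
                b[k] = -b[k]              # undo the flip
    return b, best

def iterated_local_search(y, R, kicks=20, seed=0):
    # Search only the space of locally optimal solutions: perturb the
    # incumbent, re-optimize locally, keep the better of the two.
    rng = np.random.default_rng(seed)
    K = y.size
    b, best = local_search(rng.choice([-1, 1], K), y, R)
    for _ in range(kicks):
        trial = b.copy()
        flip = rng.choice(K, size=max(1, K // 4), replace=False)
        trial[flip] *= -1                 # random multi-bit kick
        trial, val = local_search(trial, y, R)
        if val > best:
            b, best = trial, val
    return b, best

# Synthetic noiseless DS-CDMA instance: K users, N chips.
rng = np.random.default_rng(3)
K, N = 8, 32
S = rng.choice([-1, 1], (K, N)) / np.sqrt(N)   # spreading codes
R = S @ S.T                                    # code correlation matrix
b_true = rng.choice([-1, 1], K)
y = R @ b_true                                 # matched-filter outputs
b_hat, f_hat = iterated_local_search(y, R)
```

With R positive semidefinite, f(b) = f(b_true) - (b - b_true)ᵀR(b - b_true), so the transmitted bits are the maximum-likelihood optimum in the noiseless case.
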

  7. Unfolding of Proteins: Thermal and Mechanical Unfolding

    NASA Technical Reports Server (NTRS)

    Hur, Joe S.; Darve, Eric

    2004-01-01

    We have employed a Hamiltonian model based on a self-consistent Gaussian approximation to examine the unfolding process of proteins in external force fields, both mechanical and thermal. The motivation was to investigate the unfolding pathways of proteins by including only the essence of the important interactions of the native-state topology. Furthermore, if such a model can indeed correctly predict the physics of protein unfolding, it can complement more computationally expensive simulations and theoretical work. The self-consistent Gaussian approximation by Micheletti et al. has been incorporated in our model to make it mathematically tractable by significantly reducing the computational cost. All thermodynamic properties and pair contact probabilities are calculated by simply evaluating a series of incomplete Gamma functions in an iterative manner. We have compared our results to previous molecular dynamics simulations and experimental data for the mechanical unfolding of the giant muscle protein titin (1TIT). Our model, especially in light of its simplicity and excellent agreement with experiment and simulation, demonstrates the basic physical elements necessary to capture the mechanism of protein unfolding in an external force field.

  8. Polarized atomic orbitals for self-consistent field electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Lee, Michael S.; Head-Gordon, Martin

    1997-12-01

    We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have been distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.

  9. Graph-based analysis of kinetics on multidimensional potential-energy surfaces.

    PubMed

    Okushima, T; Niiyama, T; Ikeda, K S; Shimizu, Y

    2009-09-01

    The aim of this paper is twofold: one is to give a detailed description of an alternative graph-based analysis method, which we call saddle connectivity graph, for analyzing the global topography and the dynamical properties of many-dimensional potential-energy landscapes and the other is to give examples of applications of this method in the analysis of the kinetics of realistic systems. A Dijkstra-type shortest path algorithm is proposed to extract dynamically dominant transition pathways by kinetically defining transition costs. The applicability of this approach is first confirmed by an illustrative example of a low-dimensional random potential. We then show that a coarse-graining procedure tailored for saddle connectivity graphs can be used to obtain the kinetic properties of 13- and 38-atom Lennard-Jones clusters. The coarse-graining method not only reduces the complexity of the graphs, but also, with iterative use, reveals a self-similar hierarchical structure in these clusters. We also propose that the self-similarity is common to many-atom Lennard-Jones clusters.
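
The Dijkstra-type extraction of dynamically dominant pathways reduces, in the simplest setting, to a shortest-path search over minima with kinetically defined edge costs. A minimal sketch on a hypothetical four-minimum graph (the costs are illustrative numbers only, not data from the paper):

```python
import heapq

def dijkstra(graph, source):
    # Standard Dijkstra shortest paths; here edge weights play the role
    # of kinetic transition costs between minima via connecting saddles.
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical saddle connectivity graph: minima A..D, illustrative costs.
g = {
    'A': [('B', 1.0), ('C', 4.0)],
    'B': [('A', 1.0), ('C', 2.0), ('D', 6.0)],
    'C': [('A', 4.0), ('B', 2.0), ('D', 1.5)],
    'D': [('B', 6.0), ('C', 1.5)],
}
d = dijkstra(g, 'A')
```

Here the dominant pathway from A to D goes A-B-C-D (total cost 4.5) rather than over the direct high-cost saddles.
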

  10. Assessment of the importance of neutron multiplication for tritium production

    NASA Astrophysics Data System (ADS)

    Chiovaro, P.; Di Maio, P. A.

    2017-01-01

    One of the major requirements for a future fusion power plant is tritium self-sufficiency. For this reason the scientific community has dedicated considerable effort to research on reactor tritium breeding blankets. In the framework of the international DEMO project, many breeding blanket concepts have been taken into account and some of them will be tested in the experimental reactor ITER by means of appropriate test blanket modules (TBMs). All the breeding blanket concepts rely on the adoption of binary systems composed of a material acting as a neutron multiplier and another acting as a breeder. This paper addresses a neutronic feature of these kinds of systems. In particular, attention has been focused on the assessment of the importance of neutrons coming from multiplication reactions for the production of tritium. A theoretical framework has been set up, and a procedure to evaluate the performance of multiplier-breeder systems from the aforementioned point of view has been developed. Moreover, the model has been applied to helium cooled lithium lead and helium cooled pebble bed TBMs under irradiation in ITER, and the results have been critically discussed.

  11. Chebyshev polynomial filtered subspace iteration in the discontinuous Galerkin method for large-scale electronic structure calculations

    DOE PAGES

    Banerjee, Amartya S.; Lin, Lin; Hu, Wei; ...

    2016-10-21

    The Discontinuous Galerkin (DG) electronic structure method employs an adaptive local basis (ALB) set to solve the Kohn-Sham equations of density functional theory in a discontinuous Galerkin framework. The adaptive local basis is generated on-the-fly to capture the local material physics and can systematically attain chemical accuracy with only a few tens of degrees of freedom per atom. A central issue for large-scale calculations, however, is the computation of the electron density (and subsequently, ground state properties) from the discretized Hamiltonian in an efficient and scalable manner. We show in this work how Chebyshev polynomial filtered subspace iteration (CheFSI) can be used to address this issue and push the envelope in large-scale materials simulations in a discontinuous Galerkin framework. We describe how the subspace filtering steps can be performed in an efficient and scalable manner using a two-dimensional parallelization scheme, thanks to the orthogonality of the DG basis set and the block-sparse structure of the DG Hamiltonian matrix. The on-the-fly nature of the ALB functions requires additional care in carrying out the subspace iterations. We demonstrate the parallel scalability of the DG-CheFSI approach in calculations of large-scale two-dimensional graphene sheets and bulk three-dimensional lithium-ion electrolyte systems. Employing 55 296 computational cores, the time per self-consistent field iteration for a sample of the bulk 3D electrolyte containing 8586 atoms is 90 s, and the time for a graphene sheet containing 11 520 atoms is 75 s.
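
The filter-orthogonalize-project cycle at the core of CheFSI can be sketched in a few lines of serial dense linear algebra on a toy Hamiltonian; the paper's contribution is the 2D-parallel, block-sparse DG realization of exactly these steps:

```python
import numpy as np

def cheb_filter(A, V, deg, a, b):
    # Chebyshev recurrence with the spectrum mapped so that [a, b] goes
    # to [-1, 1]: components in [a, b] are damped, those below a are
    # strongly amplified (|T_deg| grows fast outside [-1, 1]).
    e, c = (b - a) / 2, (b + a) / 2
    Yp, Y = V, (A @ V - c * V) / e
    for _ in range(2, deg + 1):
        Yp, Y = Y, 2 * (A @ Y - c * Y) / e - Yp
    return Y

def chefsi(A, q, extra=4, deg=12, sweeps=30, seed=0):
    # Filter a block of q+extra vectors, re-orthonormalize, then do a
    # Rayleigh-Ritz step; repeat until the lowest q eigenpairs settle.
    n, p = A.shape[0], q + extra
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((n, p)))
    ub = np.max(np.sum(np.abs(A), axis=1))     # Gershgorin upper bound
    ritz = np.sort(np.diag(V.T @ A @ V))
    for _ in range(sweeps):
        V = cheb_filter(A, V, deg, ritz[q], ub)
        V, _ = np.linalg.qr(V)                 # re-orthonormalize the block
        ritz, S = np.linalg.eigh(V.T @ A @ V)  # Rayleigh-Ritz step
        V = V @ S
    return ritz[:q], V[:, :q]

# Toy Hamiltonian: a 1D discrete Laplacian.
n, q = 60, 4
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
vals, vecs = chefsi(A, q)
```

Filtering a slightly larger block than the q wanted states (the `extra` vectors) keeps the filter's lower bound safely above the wanted spectrum, which is what makes the sweep-to-sweep convergence geometric.
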

  12. Advances in the steady-state hybrid regime in DIII-D—a fully non-inductive, ELM-suppressed scenario for ITER

    NASA Astrophysics Data System (ADS)

    Petty, C. C.; Nazikian, R.; Park, J. M.; Turco, F.; Chen, Xi; Cui, L.; Evans, T. E.; Ferraro, N. M.; Ferron, J. R.; Garofalo, A. M.; Grierson, B. A.; Holcomb, C. T.; Hyatt, A. W.; Kolemen, E.; La Haye, R. J.; Lasnier, C.; Logan, N.; Luce, T. C.; McKee, G. R.; Orlov, D.; Osborne, T. H.; Pace, D. C.; Paz-Soldan, C.; Petrie, T. W.; Snyder, P. B.; Solomon, W. M.; Taylor, N. Z.; Thome, K. E.; Van Zeeland, M. A.; Zhu, Y.

    2017-11-01

    The hybrid regime with beta, collisionality, safety factor and plasma shape relevant to the ITER steady-state mission has been successfully integrated with ELM suppression by applying an odd parity n = 3 resonant magnetic perturbation (RMP). Fully non-inductive hybrids in the DIII-D tokamak with high beta (⟨β⟩ ⩽ 2.8%) and high confinement (H98y2 ⩽ 1.4) in the ITER similar shape have achieved zero surface loop voltage for up to two current relaxation times using efficient central current drive from ECCD and NBCD. The n = 3 RMP causes surprisingly little increase in thermal transport during ELM suppression. Poloidal magnetic flux pumping in hybrid plasmas maintains q above 1 without loss of current drive efficiency, except that experiments show that extremely peaked ECCD profiles can create sawteeth. During ECCD, Alfvén eigenmode (AE) activity is replaced by a more benign fishbone-like mode, reducing anomalous beam ion diffusion by a factor of 2. While the electron and ion thermal diffusivities substantially increase with higher ECCD power, the loss of confinement can be offset by the decreased fast ion transport resulting from AE suppression. Extrapolations from DIII-D along a dimensionless parameter scaling path, as well as those using self-consistent theory-based modeling, show that these ELM-suppressed, fully non-inductive hybrids can achieve the Q_fus = 5 ITER steady-state mission.

  13. The Effect of a Multiple Treatment Program and Maintenance Procedures on Smoking Cessation.

    ERIC Educational Resources Information Center

    Powell, Don R.

    The efficacy of a multiple treatment smoking cessation program and three maintenance strategies was evaluated. Phases I and II of the study involved 51 subjects who participated in a five-day smoking cessation project consisting of lectures, demonstrations, practice exercises, negative smoking, and the teaching of self-control procedures. At the…

  14. On Nonequivalence of Several Procedures of Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Chan, Wai

    2005-01-01

    The normal theory based maximum likelihood procedure is widely used in structural equation modeling. Three alternatives are: the normal theory based generalized least squares, the normal theory based iteratively reweighted least squares, and the asymptotically distribution-free procedure. When data are normally distributed and the model structure…

  15. Comparing Instructional Strategies for Integrating Conceptual and Procedural Knowledge.

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Koedinger, Kenneth R.

    We compared alternative instructional strategies for integrating knowledge of decimal place value and regrouping concepts with procedures for adding and subtracting decimals. The first condition was based on recent research suggesting that conceptual and procedural knowledge develop in an iterative, hand over hand fashion. In this iterative…

  16. Run-time parallelization and scheduling of loops

    NASA Technical Reports Server (NTRS)

    Saltz, Joel H.; Mirchandaney, Ravi; Crowley, Kay

    1990-01-01

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile time, these methods set up the framework for performing a loop dependency analysis. At run time, wavefronts of concurrently executable loop iterations are identified. Using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform execution-time preprocessing, and executors, which are transformed versions of the source code loop structures. These transformed loop structures carry out the calculations planned in the inspector procedures. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
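
The inspector/executor split can be illustrated as follows: the inspector walks the dependency information once and groups loop iterations into wavefronts; the executor then runs the wavefronts in order, with every iteration inside a wavefront free to run concurrently. This is a generic sketch, not the paper's symbolic transformation rules:

```python
def inspector(deps, n):
    # Wavefront of iteration i = 1 + max wavefront of the iterations it
    # depends on (wavefront 0 if it depends on nothing).
    wave = [0] * n
    for i in range(n):
        wave[i] = max((wave[j] + 1 for j in deps.get(i, ())), default=0)
    fronts = {}
    for i, w in enumerate(wave):
        fronts.setdefault(w, []).append(i)
    return [fronts[w] for w in sorted(fronts)]

def executor(fronts, body):
    # Iterations inside one wavefront are mutually independent and may
    # run concurrently; successive wavefronts must run in order.
    for front in fronts:
        for i in front:            # parallelizable inner loop
            body(i)

# Example loop: a[i] = a[i-2] + 1, i.e. iteration i depends on i-2.
n = 6
deps = {i: [i - 2] for i in range(2, n)}
fronts = inspector(deps, n)

a = [1, 1, 0, 0, 0, 0]
def body(i):
    if i >= 2:
        a[i] = a[i - 2] + 1
executor(fronts, body)
```

Running the inspector once and reusing `fronts` across repeated executions of the loop is what amortizes the preprocessing overhead mentioned above.
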

  17. Development and Validation of the Diabetes Adolescent Problem Solving Questionnaire

    PubMed Central

    Mulvaney, Shelagh A.; Jaser, Sarah S.; Rothman, Russell L.; Russell, William; Pittel, Eric J.; Lybarger, Cindy; Wallston, Kenneth A.

    2014-01-01

    Objective: Problem solving is a critical diabetes self-management skill. Because of a lack of clinically feasible measures, our aim was to develop and validate a self-report self-management problem solving questionnaire for adolescents with type 1 diabetes (T1D). Methods: A multidisciplinary team of diabetes experts generated questionnaire items that addressed diabetes self-management problem solving. Iterative feedback from parents and adolescents resulted in 27 items. Adolescents from two studies (N=156) aged 13–17 were recruited through a pediatric diabetes clinic and completed measures through an online survey. Glycemic control was measured by HbA1c recorded in the medical record. Results: Empirical elimination of items using Principal Components Analyses resulted in a 13-item unidimensional measure, the Diabetes Adolescent Problem Solving Questionnaire (DAPSQ), that explained 57% of the variance. The DAPSQ demonstrated internal consistency (Cronbach's alpha = 0.92) and was correlated with diabetes self-management (r=0.53, p<.001), self-efficacy (r=0.54, p<.001), and glycemic control (r= −0.24, p<.01). Conclusion: The DAPSQ is a brief instrument for assessment of diabetes self-management problem solving in youth with T1D and is associated with better self-management behaviors and glycemic control. Practice Implications: The DAPSQ is a clinically feasible self-report measure that can provide valuable information regarding level of self-management problem solving and guide patient education. PMID:25063715

  18. Development and validation of the diabetes adolescent problem solving questionnaire.

    PubMed

    Mulvaney, Shelagh A; Jaser, Sarah S; Rothman, Russell L; Russell, William E; Pittel, Eric J; Lybarger, Cindy; Wallston, Kenneth A

    2014-10-01

    Problem solving is a critical diabetes self-management skill. Because of a lack of clinically feasible measures, our aim was to develop and validate a self-report self-management problem solving questionnaire for adolescents with type 1 diabetes (T1D). A multidisciplinary team of diabetes experts generated questionnaire items that addressed diabetes self-management problem solving. Iterative feedback from parents and adolescents resulted in 27 items. Adolescents from two studies (N=156) aged 13-17 were recruited through a pediatric diabetes clinic and completed measures through an online survey. Glycemic control was measured by HbA1c recorded in the medical record. Empirical elimination of items using principal components analyses resulted in a 13-item unidimensional measure, the diabetes adolescent problem solving questionnaire (DAPSQ) that explained 56% of the variance. The DAPSQ demonstrated internal consistency (Cronbach's alpha=0.92) and was correlated with diabetes self-management (r=0.53, p<.001), self-efficacy (r=0.54, p<.001), and glycemic control (r=-0.24, p<.01). The DAPSQ is a brief instrument for assessment of diabetes self-management problem solving in youth with T1D and is associated with better self-management behaviors and glycemic control. The DAPSQ is a clinically feasible self-report measure that can provide valuable information regarding level of self-management problem solving and guide patient education. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
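
The internal-consistency statistic reported above (Cronbach's alpha = 0.92) has a simple closed form. A generic implementation, not tied to the DAPSQ data:

```python
import numpy as np

def cronbach_alpha(scores):
    # scores: (n_respondents, k_items) matrix of item scores.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Sanity check: perfectly parallel items give alpha = 1.
perfect = np.outer([1, 2, 3, 4, 5], np.ones(3))
alpha = cronbach_alpha(perfect)
```
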

  19. Effects of Test-Taking Instruction on a Health Professional Certifying Examination: An Evaluation.

    ERIC Educational Resources Information Center

    Frierson, Henry T., Jr.

    The intervention in this study focused upon effective test taking, defined as the capacity to use acquired subject matter knowledge to achieve test scores consistent with an individual's knowledge level. This approach also emphasized self-assessment and self-directed learning. The procedure was employed in efforts to enhance a class of medical…

  20. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method has been demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  1. Parabolized Navier-Stokes Code for Computing Magneto-Hydrodynamic Flowfields

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Technical Monitor); Tannehill, J. C.

    2003-01-01

    This report consists of two published papers, 'Computation of Magnetohydrodynamic Flows Using an Iterative PNS Algorithm' and 'Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm'.

  2. Adapting Poisson-Boltzmann to the self-consistent mean field theory: Application to protein side-chain modeling

    NASA Astrophysics Data System (ADS)

    Koehl, Patrice; Orland, Henri; Delarue, Marc

    2011-08-01

    We present an extension of the self-consistent mean field theory for protein side-chain modeling in which solvation effects are included based on the Poisson-Boltzmann (PB) theory. In this approach, the protein is represented with multiple copies of its side chains. Each copy is assigned a weight that is refined iteratively based on the mean field energy generated by the rest of the protein, until self-consistency is reached. At each cycle, the variational free energy of the multi-copy system is computed; this free energy includes the internal energy of the protein that accounts for vdW and electrostatics interactions and a solvation free energy term that is computed using the PB equation. The method converges in only a few cycles and takes only minutes of central processing unit time on a commodity personal computer. The predicted conformation of each residue is then set to be its copy with the highest weight after convergence. We have tested this method on a database of one hundred highly refined NMR structures to circumvent the problems of crystal packing inherent to x-ray structures. The use of the PB-derived solvation free energy significantly improves prediction accuracy for surface side chains. For example, the prediction accuracies for χ1 for surface cysteine, serine, and threonine residues improve from 68%, 35%, and 43% to 80%, 53%, and 57%, respectively. A comparison with other side-chain prediction algorithms demonstrates that our approach is consistently better in predicting the conformations of exposed side chains.
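
The multi-copy weight refinement can be sketched with a toy random energy table. The paper's energies combine vdW, electrostatics and the PB solvation term; here a random symmetric table stands in for them, and the inverse temperature and damping factor are hypothetical:

```python
import numpy as np

# U[r, c, s, d] = pairwise energy between copy c of residue r and copy
# d of residue s; a random symmetric table stands in for the real terms.
rng = np.random.default_rng(0)
R, C = 5, 4                                   # residues, copies per residue
U = rng.standard_normal((R, C, R, C))
U = (U + U.transpose(2, 3, 0, 1)) / 2         # symmetrize: U[r,c,s,d] = U[s,d,r,c]
for r in range(R):
    U[r, :, r, :] = 0.0                       # no intra-residue interactions

beta = 1.0                                    # hypothetical inverse temperature
w = np.full((R, C), 1.0 / C)                  # start from uniform weights
for _ in range(500):
    # Mean-field energy of each copy in the weighted field of all others.
    E = np.einsum('rcsd,sd->rc', U, w)
    bz = np.exp(-beta * (E - E.min(axis=1, keepdims=True)))
    w_new = bz / bz.sum(axis=1, keepdims=True)  # Boltzmann reweighting
    if np.max(np.abs(w_new - w)) < 1e-12:     # self-consistency reached
        w = w_new
        break
    w = 0.5 * w + 0.5 * w_new                 # damped update for stability

predicted = w.argmax(axis=1)                  # highest-weight copy per residue
```
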

  3. The self-consistent calculation of pseudo-molecule energy levels, construction of energy level correlation diagrams and an automated computation system for SCF-X(Alpha)-SW calculations

    NASA Technical Reports Server (NTRS)

    Schlosser, H.

    1981-01-01

    The self-consistent calculation of the electronic energy levels of noble-gas pseudomolecules formed when a metal surface is bombarded by noble-gas ions is discussed, along with the construction of energy level correlation diagrams as a function of interatomic spacing. The self-consistent-field X-alpha scattered-wave (SCF-Xalpha-SW) method is utilized. Preliminary results on the Ne-Mg system are given. An interactive X-alpha programming system, implemented on the LeRC IBM 370 computer, is described in detail. This automated system makes use of special PROCDEFS (procedure definitions) to minimize the data to be entered manually at a remote terminal. Listings of the special PROCDEFS and of typical input data are given.

  4. Quantum confined stark effect on the binding energy of exciton in type II quantum heterostructure

    NASA Astrophysics Data System (ADS)

    Suseel, Rahul K.; Mathew, Vincent

    2018-05-01

    In this work, we have investigated the effect of an external electric field on the strongly confined excitonic properties of CdTe/CdSe/CdTe/CdSe type-II quantum dot heterostructures. Within the effective mass approximation, we solved the Poisson-Schrödinger equations of the exciton in the nanostructure using a relaxation method in a self-consistent iterative manner. We varied both the external electric field and the core radius of the quantum dot to study the behavior of the exciton binding energy. Our studies show that the external electric field destroys the position-flipped state of the exciton by modifying the confining potentials of the electron and hole.

  5. Quasiparticle self-consistent GW method for the spectral properties of complex materials.

    PubMed

    Bruneval, Fabien; Gatti, Matteo

    2014-01-01

    The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: this is the so-called G0W0 scheme. However, the G0W0 approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G0W0 at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.

  6. On the self-similar solution to the Euler equations for an incompressible fluid in three dimensions

    NASA Astrophysics Data System (ADS)

    Pomeau, Yves

    2018-03-01

    The equations for a self-similar solution to an inviscid incompressible fluid are mapped into an integral equation that hopefully can be solved by iteration. It is argued that the exponents of the similarity are ruled by Kelvin's theorem of conservation of circulation. The end result is an iteration with a nonlinear term entering a kernel given by a 3D integral for a swirling flow, likely within reach of present-day computational power. Because of the slow decay of the similarity solution at large distances, its kinetic energy diverges, and some mathematical results excluding non-trivial solutions of the Euler equations in the self-similar case do not apply.

  7. A consistent and uniform research earthquake catalog for the AlpArray region: preliminary results.

    NASA Astrophysics Data System (ADS)

    Molinari, I.; Bagagli, M.; Kissling, E. H.; Diehl, T.; Clinton, J. F.; Giardini, D.; Wiemer, S.

    2017-12-01

    The AlpArray initiative (www.alparray.ethz.ch) is a large-scale European collaboration (~50 institutes involved) to study the entire Alpine orogen at high resolution with a variety of geoscientific methods. AlpArray provides unprecedentedly uniform station coverage for the region with more than 650 broadband seismic stations, 300 of which are temporary. The AlpArray Seismic Network (AASN) is a joint effort of 25 institutes from 10 nations; it has operated since January 2016 and is expected to continue until the end of 2018. In this study, we establish a uniform earthquake catalog for the Greater Alpine region during the operation period of the AASN, with a target completeness of M2.5. The catalog has two main goals: 1) to provide consistent and precise hypocenter locations, and 2) to provide preliminary but uniform magnitude calculations across the region. The procedure is based on automatic high-quality P- and S-wave pickers, providing consistent phase arrival times in combination with a picking quality assessment. First, we detect all events in the region in 2016/2017 using an STA/LTA-based detector. Among the detected events, we select 50 geographically homogeneously distributed events with magnitudes ≥2.5 that are representative of the entire catalog. We manually pick the selected events to establish a consistent P- and S-phase reference data set, including arrival-time uncertainties. The reference data are used to adjust the automatic pickers and to assess their performance. In a first iteration, a simple P-picker algorithm is applied to the entire dataset, providing initial picks for the advanced MannekenPix (MPX) algorithm. In a second iteration, the MPX picker provides consistent and reliable automatic first-arrival P picks together with a pick-quality estimate. The derived automatic P picks are then used as initial values for a multi-component S-phase picking algorithm.
Subsequently, automatic picks of all well-locatable earthquakes will be considered to calculate final minimum 1D P and S velocity models for the region with appropriate station corrections. Finally, all the events are relocated with the NonLinLoc algorithm in combination with the updated 1D models. The proposed procedure represents the first step towards a uniform earthquake catalog for the entire Greater Alpine region using the AASN.

  8. Numerical solution of quadratic matrix equations for free vibration analysis of structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1975-01-01

    This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.

  9. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for autocalibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
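
    The cross-channel joint sparsity is enforced by a group soft-thresholding step inside the iteration. A minimal sketch of that operator follows; the coefficient values and threshold are illustrative, and the real reconstruction applies this to the wavelet coefficients of all coils:

```python
import math

def joint_soft_threshold(groups, lam):
    """Shrink each group of per-coil coefficients by its joint l2 norm.

    A group whose joint magnitude is below lam is zeroed entirely;
    otherwise every channel is scaled by the same factor, which is what
    couples the sparsity pattern across channels.
    """
    result = []
    for g in groups:
        norm = math.sqrt(sum(c * c for c in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        result.append([scale * c for c in g])
    return result
```

    Thresholding the joint norm, rather than each channel separately, keeps a coefficient alive in all coils whenever it is strong in any of them.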

  11. Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.

    NASA Astrophysics Data System (ADS)

    Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.

    1997-08-01

    A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho 1995, ApJ 455, 646), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to the current operator splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is, thus, particularly attractive in complicated multilevel transfer problems where small grid-sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.
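
    The Gauss-Seidel idea at the heart of the multilevel scheme can be sketched on a generic linear system (the actual solver applies the same relaxation level by level to the discretized transfer equations; the matrix below is only an illustration):

```python
def gauss_seidel(A, b, sweeps=50):
    """Gauss-Seidel sweeps for A x = b: each unknown is updated in
    place using the newest available values, which is what gives the
    method its fast smoothing of high-frequency error components and
    makes it an effective building block for multigrid."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

    Multigrid then accelerates this relaxation by correcting the slowly converging smooth error components on coarser grids.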

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, J. C.; Bonoli, P. T.; Schmidt, A. E.

    Lower hybrid (LH) waves (ω_ci ≪ ω ≪ ω_ce, where ω_{i,e} ≡ Z_{i,e}eB/m_{i,e}c) have the attractive property of damping strongly via electron Landau resonance on relatively fast tail electrons and consequently are well-suited to driving current. Established modeling techniques use Wentzel-Kramers-Brillouin (WKB) expansions with self-consistent non-Maxwellian distributions. Higher order WKB expansions have shown some effects on the parallel wave number evolution and consequently on the damping due to diffraction [G. Pereverzev, Nucl. Fusion 32, 1091 (1991)]. A massively parallel version of the TORIC full wave electromagnetic field solver valid in the LH range of frequencies has been developed [J. C. Wright et al., Comm. Comp. Phys. 4, 545 (2008)] and coupled to the electron Fokker-Planck solver CQL3D [R. W. Harvey and M. G. McCoy, in Proceedings of the IAEA Technical Committee Meeting, Montreal, 1992 (IAEA Institute of Physics Publishing, Vienna, 1993), USDOC/NTIS Document No. DE93002962, pp. 489-526] in order to self-consistently evolve nonthermal electron distributions characteristic of LH current drive (LHCD) experiments in devices such as Alcator C-Mod and ITER (B_0 ≈ 5 T, n_e0 ≈ 1×10^20 m^-3). These simulations represent the first ever self-consistent simulations of LHCD utilizing both a full wave and Fokker-Planck calculation in toroidal geometry.

  13. Monitor design with multiple self-loops for maximally permissive supervisors.

    PubMed

    Chen, YuFeng; Li, ZhiWu; Barkaoui, Kamel; Uzam, Murat

    2016-03-01

    In this paper, we improve the previous work by considering that a control place can have multiple self-loops. Then, two integer linear programming problems (ILPPs) are formulated. Based on the first ILPP, an iterative deadlock control policy is developed, where a control place is computed at each iteration to implement as many marking/transition separation instances (MTSIs) as possible. The second ILPP can find a set of control places to implement all MTSIs and the objective function is used to minimize the number of control places. It is a non-iterative deadlock control strategy since we need to solve the ILPP only once. Both ILPPs can make all legal markings reachable in the controlled system, i.e., the obtained supervisor is behaviorally optimal. Finally, we provide examples to illustrate the proposed approaches. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  14. AUTOMOTIVE DIESEL MAINTENANCE 1. UNIT VII, ENGINE TUNE-UP--DETROIT DIESEL ENGINE.

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 30-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF TUNE-UP PROCEDURES FOR DIESEL ENGINES. TOPICS ARE SCHEDULING TUNE-UPS, AND TUNE-UP PROCEDURES. THE MODULE CONSISTS OF A SELF-INSTRUCTIONAL BRANCH PROGRAMED TRAINING FILM "ENGINE TUNE-UP--DETROIT DIESEL ENGINE" AND OTHER MATERIALS. SEE VT 005 655 FOR FURTHER INFORMATION.…

  15. The Differential Effects of Two Self-Managed Math Instruction Procedures: Cover, Copy, and Compare versus Copy, Cover, and Compare

    ERIC Educational Resources Information Center

    Grafman, Joel M.; Cates, Gary L.

    2010-01-01

    This study compared the fluency and error rates produced when using the Cover, Copy, and Compare (CCC) and a modified CCC procedure (MCCC) called Copy, Cover, and Compare to complete subtraction math problems. Two second-grade classrooms consisting of 47 total students participated in the study. The following items were administered to…

  16. Obtaining Self-Report Data from Cognitively Impaired Elders: Methodological Issues and Clinical Implications for Nursing Home Pain Assessment

    ERIC Educational Resources Information Center

    Fisher, Susan E.; Burgio, Louis D.; Thorn, Beverly E.; Hardin, J. Michael

    2006-01-01

    Purpose: We developed and evaluated an explicit procedure for obtaining self-report pain data from nursing home residents across a broad range of cognitive status, and we evaluated the consistency, stability, and concurrent validity of resident responses. Design and Methods: Using a modification of the Geriatric Pain Measure (GPM-M2), we…

  17. A new approach for solving the three-dimensional steady Euler equations. I - General theory

    NASA Technical Reports Server (NTRS)

    Chang, S.-C.; Adamczyk, J. J.

    1986-01-01

    The present iterative procedure combines the Clebsch potentials and the Munk-Prim (1947) substitution principle with an extension of a semidirect Cauchy-Riemann solver to three dimensions, in order to solve steady, inviscid three-dimensional rotational flow problems in either subsonic or incompressible flow regimes. This solution procedure can be used, upon discretization, to obtain inviscid subsonic flow solutions in a 180-deg turning channel. In addition to accurately predicting the behavior of weak secondary flows, the algorithm can generate solutions for strong secondary flows and will yield acceptable flow solutions after only 10-20 outer loop iterations.

  19. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust-region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested, and favorable results based on this algorithm are presented.
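
    The safeguarding idea can be illustrated with a one-variable Newton-Raphson iteration whose step is clipped to a maximum length; this is a much-simplified stand-in for the modified trust-region method used in TIP, with an arbitrary test function:

```python
def newton_clipped(f, fprime, x0, max_step=1.0, tol=1e-12, max_iter=100):
    """Newton-Raphson with a step-length limit: the raw step -f/f' is
    clipped to [-max_step, max_step] so that iterates starting far from
    the balance point cannot overshoot and diverge."""
    x = x0
    for _ in range(max_iter):
        step = -f(x) / fprime(x)
        step = max(-max_step, min(max_step, step))
        x += step
        if abs(f(x)) < tol:
            return x
    raise RuntimeError("no convergence")
```

    Far from the root the iteration takes safe fixed-length steps; near the root the clip becomes inactive and the usual quadratic Newton convergence takes over.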

  20. Non-iterative volumetric particle reconstruction near moving bodies

    NASA Astrophysics Data System (ADS)

    Mendelson, Leah; Techet, Alexandra

    2017-11-01

    When multi-camera 3D PIV experiments are performed around a moving body, the body often obscures visibility of regions of interest in the flow field in a subset of cameras. We evaluate the performance of non-iterative particle reconstruction algorithms used for synthetic aperture PIV (SAPIV) in these partially-occluded regions. We show that when partial occlusions are present, the quality and availability of 3D tracer particle information depends on the number of cameras and reconstruction procedure used. Based on these findings, we introduce an improved non-iterative reconstruction routine for SAPIV around bodies. The reconstruction procedure combines binary masks, already required for reconstruction of the body's 3D visual hull, and a minimum line-of-sight algorithm. This approach accounts for partial occlusions without performing separate processing for each possible subset of cameras. We combine this reconstruction procedure with three-dimensional imaging on both sides of the free surface to reveal multi-fin wake interactions generated by a jumping archer fish. Sufficient particle reconstruction in near-body regions is crucial to resolving the wake structures of upstream fins (i.e., dorsal and anal fins) before and during interactions with the caudal tail.
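
    The per-voxel rule combining the visual-hull masks with minimum line-of-sight reconstruction can be sketched as follows; the camera intensities and mask values are illustrative, and the real algorithm operates on full reprojected images:

```python
def mlos_voxel(reprojected, visible):
    """Minimum line-of-sight value of one voxel: take the minimum of
    its reprojections over the cameras whose view of this voxel is not
    blocked by the body's visual hull (visible[i] is False when camera
    i is occluded); a voxel no camera sees is set to zero."""
    seen = [v for v, ok in zip(reprojected, visible) if ok]
    return min(seen) if seen else 0.0
```

    Excluding occluded cameras keeps a genuine particle from being vetoed by a camera whose line of sight passes through the body.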

  1. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475

  2. Perl Modules for Constructing Iterators

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2009-01-01

    The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern where a description of a series of values is used in a constructor. Subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values. It is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
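
    The nested-iteration idea behind Iterator::Hash can be sketched in Python (this is an analogue of the concept, not the Perl module's API): a mapping whose values may themselves be iterable yields one concrete mapping per combination.

```python
from itertools import product

def hash_iterator(spec):
    """Yield dicts covering every combination of the iterable values in
    spec; scalar values pass through unchanged, mirroring how a Perl
    hash may mix fixed values with embedded iterators."""
    keys = list(spec)
    pools = [v if isinstance(v, (list, tuple, range)) else [v]
             for v in spec.values()]
    for combo in product(*pools):
        yield dict(zip(keys, combo))
```

    For example, a spec with one two-valued key and one fixed key yields two dicts, exactly the nested permutation behavior the abstract describes.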

  3. IRMHD: an implicit radiative and magnetohydrodynamical solver for self-gravitating systems

    NASA Astrophysics Data System (ADS)

    Hujeirat, A.

    1998-07-01

    The 2D implicit hydrodynamical solver developed by Hujeirat & Rannacher is now modified to include the effects of radiation, magnetic fields and self-gravity in different geometries. The underlying numerical concept is based on the operator splitting approach, and the resulting 2D matrices are inverted using different efficient preconditionings such as ADI (alternating direction implicit), the approximate factorization method and Line-Gauss-Seidel or similar iteration procedures. Second-order finite volume with third-order upwinding and second-order time discretization is used. To speed up convergence and enhance efficiency we have incorporated an adaptive time-step control and monotonic multilevel grid distributions as well as vectorizing the code. Test calculations have shown that it requires only 38 per cent more computational effort than its explicit counterpart, whereas its range of application to astrophysical problems is much larger. For example, strongly time-dependent, quasi-stationary and steady-state solutions for the set of Euler and Navier-Stokes equations can now be sought on a non-linearly distributed and strongly stretched mesh. As most of the numerical techniques used to build up this algorithm have been described by Hujeirat & Rannacher in an earlier paper, we focus in this paper on the inclusion of self-gravity, radiation and magnetic fields. Strategies for satisfying the condition ∇·B = 0 in the implicit evolution of MHD flows are given. A new discretization strategy for the vector potential which allows alternating use of the direct method is prescribed. We investigate the efficiencies of several 2D solvers for a Poisson-like equation and compare their convergence rates. We provide a splitting approach for the radiative flux within the FLD (flux-limited diffusion) approximation to enhance consistency and accuracy between regions of different optical depths. 
The results of some test problems are presented to demonstrate the accuracy and robustness of the code.

  4. Kernel-based least squares policy iteration for reinforcement learning.

    PubMed

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared with previous work on approximate RL methods, KLSPI makes two advances that eliminate the main difficulties of existing approaches. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. 
It is also demonstrated that KLSPI can be applied to online learning control by incorporating an initial controller to ensure online performance.
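
    The policy-iteration skeleton that KLSPI kernelizes can be sketched on a toy deterministic chain MDP; the states, rewards, and discount below are illustrative, and the paper's method replaces the tabular evaluation step with KLSTD-Q:

```python
def policy_iteration(n_states, reward, gamma=0.9, eval_sweeps=500):
    """Tabular policy iteration on a chain: actions -1/+1 move left or
    right (clamped at the ends). Evaluation sweeps estimate the value
    of the current policy, then a greedy step improves it, until the
    policy no longer changes."""
    actions = (-1, 1)
    step = lambda s, a: min(max(s + a, 0), n_states - 1)
    policy = [actions[0]] * n_states
    while True:
        # policy evaluation: repeated Bellman backups under the fixed policy
        value = [0.0] * n_states
        for _ in range(eval_sweeps):
            value = [reward[step(s, policy[s])] + gamma * value[step(s, policy[s])]
                     for s in range(n_states)]
        # greedy policy improvement
        improved = [max(actions,
                        key=lambda a, s=s: reward[step(s, a)] + gamma * value[step(s, a)])
                    for s in range(n_states)]
        if improved == policy:
            return policy, value
        policy = improved
```

    On a chain whose only reward sits at the right end, the iteration settles on the "always move right" policy after a few evaluation/improvement rounds.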

  5. Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng

    2017-05-01

    Self-focusing is observed in nonlinear materials owing to the interaction between laser light and matter as the beam propagates. Several numerical simulation strategies, such as the beam propagation method (BPM) based on the nonlinear Schrödinger equation and ray tracing based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many slices, as in the existing approaches, but instead of the paraxial approximation and split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The key characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, which resolves the problem of unknown material parameters caused by the mutual dependence of laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications, with lower time complexity, and can numerically simulate the self-focusing process in systems containing both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and optical paths or phases, it will be possible to simulate the superposition effects of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.
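
    The per-slice causal loop (local intensity depends on the index, which depends on the intensity) can be sketched with a toy Kerr-type model; all constants and the transmission formula here are hypothetical illustrations, not values from the paper:

```python
def slice_index(intensity_in, n0=1.5, n2=1e-3, loss=0.05, tol=1e-12, max_iter=100):
    """Iterate the local refractive index and intensity of one slice to
    mutual consistency: transmission through the slice depends on the
    index (toy model), and the Kerr-type index shift n = n0 + n2 * I
    depends on the transmitted intensity."""
    n = n0
    for _ in range(max_iter):
        intensity = intensity_in * (1.0 - loss / n)  # toy index-dependent transmission
        n_next = n0 + n2 * intensity                 # Kerr-type index change
        if abs(n_next - n) < tol:
            return n_next, intensity
        n = n_next
    raise RuntimeError("slice iteration did not converge")
```

    Because the coupling is weak, the fixed point is reached in a handful of iterations; the full method repeats this per point, per slice, as the rays are traced.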

  6. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    NASA Astrophysics Data System (ADS)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. 
We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling approach that computes self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives, such as increasing the lift and alleviating the fluctuating drag and lift, is also discussed.

  7. Saturated Widths of Magnetic Islands in Tokamak Discharges

    NASA Astrophysics Data System (ADS)

    Halpern, F.; Pankin, A. Y.

    2005-10-01

    The new ISLAND module described in reference [1] implements a quasi-linear model to compute the widths of multiple magnetic islands driven by saturated tearing modes in toroidal plasmas of arbitrary aspect ratio and cross-sectional shape. The distortion of the island shape caused by the radial variation in the perturbation is computed in the new module. In transport simulations, the enhanced transport caused by the magnetic islands has the effect of flattening the pressure and current density profiles. This self-consistent treatment of the magnetic islands alters the development of the plasma profiles. In addition, it is found that islands closer to the magnetic axis influence the evolution of islands further out in the plasma. In order to investigate such phenomena, the ISLAND module is used within the BALDUR predictive modeling code to compute the widths of multiple magnetic islands in tokamak discharges. The interaction between the islands and sawtooth crashes is examined in simulations of DIII-D and JET discharges. The module is used to compute saturated neoclassical tearing mode island widths for multiple modes in ITER. Preliminary results for island widths in ITER are consistent with those presented [2] by Hegna. [1] F.D. Halpern, G. Bateman, A.H. Kritz and A.Y. Pankin, ``The ISLAND Module for Computing Magnetic Island Widths in Tokamaks,'' submitted to J. Plasma Physics (2005). [2] C.C. Hegna, 2002 Fusion Snowmass Meeting.

  8. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
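    As an illustration of the kind of psychometric-function fitting discussed here, the sketch below fits a cumulative Gaussian to hypothetical yes/no data by a crude grid search over (mu, sigma); it is a generic maximum-likelihood fit, not the authors' bias-reduced estimator, and the data and grids are invented:

```python
import math

def psi(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def neg_log_likelihood(data, mu, sigma):
    """data: (stimulus, binary response) pairs."""
    nll = 0.0
    for x, r in data:
        p = min(max(psi(x, mu, sigma), 1e-9), 1.0 - 1e-9)  # avoid log(0)
        nll -= math.log(p) if r == 1 else math.log(1.0 - p)
    return nll

def grid_fit(data, mus, sigmas):
    """Crude grid-search maximum-likelihood fit over (mu, sigma)."""
    return min((neg_log_likelihood(data, m, s), m, s)
               for m in mus for s in sigmas)[1:]

# Hypothetical yes/no responses, as might come from a staircase run.
data = [(-2, 0), (-1, 0), (-0.5, 0), (0, 1), (0.5, 1), (1, 1), (2, 1)]
mu_hat, sigma_hat = grid_fit(data,
                             [i / 10 for i in range((-10), 11)],
                             [i / 10 for i in range(2, 21)])
```

In practice a numeric optimizer (e.g. Nelder-Mead, as the abstract notes) would replace the grid, but the likelihood being maximized is the same.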

  9. Bidirectional iterative parcellation of diffusion weighted imaging data: Separating cortical regions connected by the arcuate fasciculus and extreme capsule

    PubMed Central

    Patterson, Dianne K.; Van Petten, Cyma; Beeson, Pélagie M.; Rapcsak, Steven Z.; Plante, Elena

    2014-01-01

    This paper introduces a Bidirectional Iterative Parcellation (BIP) procedure designed to identify the location and size of connected cortical regions (parcellations) at both ends of a white matter tract in diffusion weighted images. The procedure applies the FSL option “probabilistic tracking with classification targets” in a bidirectional and iterative manner. To assess the utility of BIP, we applied the procedure to the problem of parcellating a limited set of well-established gray matter seed regions associated with the dorsal (arcuate fasciculus/superior longitudinal fasciculus) and ventral (extreme capsule fiber system) white matter tracts in the language networks of 97 participants. These left hemisphere seed regions and the two white matter tracts, along with their right hemisphere homologues, provided an excellent test case for BIP because the resulting parcellations overlap and their connectivity via the arcuate fasciculi and extreme capsule fiber systems is well studied. The procedure yielded both confirmatory and novel findings. Specifically, BIP confirmed that each tract connects within the seed regions in unique but expected ways. Novel findings included increasingly left-lateralized parcellations associated with the arcuate fasciculus/superior longitudinal fasciculus as a function of age and education. These results demonstrate that BIP is an easily implemented technique that successfully confirmed cortical connectivity patterns predicted in the literature, and has the potential to provide new insights regarding the architecture of the brain. PMID:25173414

  10. Railway track geometry degradation due to differential settlement of ballast/subgrade - Numerical prediction by an iterative procedure

    NASA Astrophysics Data System (ADS)

    Nielsen, Jens C. O.; Li, Xin

    2018-01-01

    An iterative procedure for numerical prediction of long-term degradation of railway track geometry (longitudinal level) due to accumulated differential settlement of ballast/subgrade is presented. The procedure is based on a time-domain model of dynamic vehicle-track interaction to calculate the contact loads between sleepers and ballast in the short-term, which are then used in an empirical model to determine the settlement of ballast/subgrade below each sleeper in the long-term. The number of load cycles (wheel passages) accounted for in each iteration step is determined by an adaptive step length given by a maximum settlement increment. To reduce the computational effort for the simulations of dynamic vehicle-track interaction, complex-valued modal synthesis with a truncated modal set is applied for the linear subset of the discretely supported track model with non-proportional spatial distribution of viscous damping. Gravity loads and state-dependent vehicle, track and wheel-rail contact conditions are accounted for as external loads on the modal model, including situations involving loss of (and recovered) wheel-rail contact, impact between hanging sleeper and ballast, and/or a prescribed variation of non-linear track support stiffness properties along the track model. The procedure is demonstrated by calculating the degradation of longitudinal level over time as initiated by a prescribed initial local rail irregularity (dipped welded rail joint).
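    The adaptive step-length idea can be sketched as follows: each iteration advances by as many wheel passages as keeps the largest per-sleeper settlement increment below a cap. The settlement law and loads here are hypothetical placeholders, not the paper's empirical model or its dynamic vehicle-track simulation:

```python
def settlement_rate(load_kN):
    """Hypothetical per-cycle settlement rate (mm/cycle) vs. sleeper load."""
    return 1e-6 * (load_kN / 100.0) ** 3

def simulate(loads, total_cycles, max_increment_mm=0.05):
    """Accumulate settlement with an adaptive number of cycles per step."""
    settlements = [0.0] * len(loads)
    cycles_done, steps = 0, 0
    while cycles_done < total_cycles:
        worst = max(settlement_rate(q) for q in loads)
        # Step length: as many cycles as the increment cap allows.
        step = max(1, min(int(max_increment_mm / worst),
                          total_cycles - cycles_done))
        for i, q in enumerate(loads):
            settlements[i] += settlement_rate(q) * step
        cycles_done += step
        steps += 1
        # In the full procedure, the short-term dynamic model would now be
        # re-run on the settled geometry, updating the sleeper loads.
    return settlements, steps

s, n_steps = simulate([180.0, 150.0, 120.0], 2_000_000)
```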

  11. Reliability of a rating procedure to monitor industry self-regulation codes governing alcohol advertising content.

    PubMed

    Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne

    2008-03-01

    The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
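    Cohen's kappa, one of the agreement statistics used above, can be computed directly; the sketch below uses invented test-retest violation codes, not the study's data:

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's kappa for two raters (or test vs. retest) on nominal items."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    observed = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    c1, c2 = Counter(ratings1), Counter(ratings2)
    # Chance agreement from the marginal category frequencies.
    expected = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical codes: does an ad violate a guideline (1) or not (0)?
test   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
retest = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
kappa = cohens_kappa(test, retest)
```

Here observed agreement is 0.9 and chance agreement 0.5, giving kappa = 0.8, which would count as good test-retest agreement.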

  12. A methodology for finding the optimal iteration number of the SIRT algorithm for quantitative Electron Tomography.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-02-01

    The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in Electron Tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edge of the objects in the reconstructed volume. A phantom which resembles a carbon black aggregate has been created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) have been used.
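    A minimal SIRT update on a toy consistent system A x = b illustrates the iteration whose stopping point the methodology selects (this sketch is not tied to the TOMOJ or TOMO3D implementations, which differ in detail):

```python
import numpy as np

# Toy system standing in for the projection operator of a tomography problem.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
x_true = np.array([2.0, 1.0])
b = A @ x_true

R = 1.0 / A.sum(axis=1)   # inverse row sums
C = 1.0 / A.sum(axis=0)   # inverse column sums

def sirt(n_iter):
    """SIRT: x <- x + C A^T R (b - A x), starting from zero."""
    x = np.zeros(A.shape[1])
    errors = []
    for _ in range(n_iter):
        x = x + C * (A.T @ (R * (b - A @ x)))
        errors.append(float(np.linalg.norm(A @ x - b)))
    return x, errors

x_hat, errs = sirt(200)
```

On noisy data the residual keeps shrinking while the reconstruction itself can degrade, which is why choosing the iteration number from edge-profile statistics, rather than from the residual, matters.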

  13. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    PubMed Central

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
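    The greedy selection idea can be sketched under simplifying assumptions (an orthonormal dictionary and a toy projected-gradient nonnegative least-squares inner solver); this illustrates joint support selection with nonnegativity, not the authors' exact algorithm:

```python
import numpy as np

def nnls_pg(A, b, n_iter=200):
    """Tiny projected-gradient nonnegative least squares (illustrative)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)
    for _ in range(n_iter):
        x = np.maximum(0.0, x - step * (A.T @ (A @ x - b)))
    return x

def simultaneous_nn_omp(A, B, k):
    """Greedy joint-support selection under nonnegativity; B holds the
    measurement vectors as columns, all sharing one support of size k."""
    support = []
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(k):
        R = B - A @ X
        # Score atoms by summed *positive* correlations across all vectors,
        # so atoms inconsistent with nonnegativity are not favored.
        scores = np.maximum(0.0, A.T @ R).sum(axis=1)
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        for j in range(B.shape[1]):
            X[support, j] = nnls_pg(A[:, support], B[:, j])
    return X, sorted(support)

rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((20, 8)))        # orthonormal dictionary
X_true = np.zeros((8, 3))
X_true[[1, 4], :] = rng.uniform(0.5, 2.0, size=(2, 3))   # shared support {1, 4}
B = A @ X_true
X_hat, support = simultaneous_nn_omp(A, B, 2)
```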

  14. Rotating and binary relativistic stars with magnetic field

    NASA Astrophysics Data System (ADS)

    Markakis, Charalampos

    We develop a geometrical treatment of general relativistic magnetohydrodynamics for perfectly conducting fluids in Einstein-Maxwell-Euler spacetimes. The theory is applied to describe a neutron star that is rotating or is orbiting a black hole or another neutron star. Under the hypotheses of stationarity and axisymmetry, we obtain the equations governing magnetohydrodynamic equilibria of rotating neutron stars with poloidal, toroidal or mixed magnetic fields. Under the hypothesis of an approximate helical symmetry, we obtain the first law of thermodynamics governing magnetized equilibria of double neutron star or black hole-neutron star systems in close circular orbits. The first law is written as a relation between the change in the asymptotic Noether charge δQ and the changes in the area and electric charge of black holes, and in the vorticity, baryon rest mass, entropy, charge and magnetic flux of the magnetofluid. In an attempt to provide a better theoretical understanding of the methods used to construct models of isolated rotating stars and corotating or irrotational binaries and their unexplained convergence properties, we analytically examine the behavior of different iterative schemes near a static solution. We find the spectrum of the linearized iteration operator and show for self-consistent field methods that iterative instability corresponds to unstable modes of this operator. On the other hand, we show that the success of iteratively stable methods is due to (quasi-)nilpotency of this operator. Finally, we examine the integrability of motion of test particles in a stationary axisymmetric gravitational field. We use a direct approach to seek nontrivial constants of motion polynomial in the momenta, in addition to energy and angular momentum about the symmetry axis. We establish the existence and uniqueness of quadratic constants and the nonexistence of quartic constants for stationary axisymmetric Newtonian potentials with equatorial symmetry and elucidate their relativistic analogues.

  15. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue

    Within this paper, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. Additionally, the solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
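    The structural fact the method rests on, that MK is self-adjoint in the K-inner product ⟨x, y⟩_K = xᵀKy whenever M and K are symmetric (since K(MK) = KMK is symmetric), is easy to verify numerically; the power iteration below is a minimal stand-in for the modified Davidson/LOBPCG machinery, not the authors' solvers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def spd(n):
    """Random symmetric positive-definite matrix."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

M, K = spd(n), spd(n)
MK = M @ K
assert np.allclose(K @ MK, (K @ MK).T)   # self-adjointness in <., .>_K

# Power iteration normalized in the K-inner product.
x = rng.standard_normal(n)
for _ in range(1000):
    x = MK @ x
    x /= np.sqrt(x @ K @ x)              # enforce <x, x>_K = 1
lam = x @ K @ (MK @ x)                   # K-Rayleigh quotient
lam_max = max(np.linalg.eigvals(MK).real)
```

Although MK is nonsymmetric in the ordinary sense, its eigenvalues are real and the K-orthogonality of its eigenvectors is what symmetric-style solvers exploit.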

  16. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue

    In this article, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.

  17. Use of 13Cα Chemical-Shifts in Protein Structure Determination

    PubMed Central

    Vila, Jorge A.; Ripoll, Daniel R.; Scheraga, Harold A.

    2008-01-01

    A physics-based method, aimed at determining protein structures by using NOE-derived distances together with observed and computed 13C chemical shifts, is proposed. The approach makes use of 13Cα chemical shifts, computed at the density functional level of theory, to obtain torsional constraints for all backbone and side-chain torsional angles without making a priori use of the occupancy of any region of the Ramachandran map by the amino acid residues. The torsional constraints are not fixed but are changed dynamically in each step of the procedure, following an iterative self-consistent approach intended to identify a set of conformations for which the computed 13Cα chemical shifts match the experimental ones. A test is carried out on a 76-amino acid all-α-helical protein, namely the B. subtilis acyl carrier protein. It is shown that, starting from randomly generated conformations, the final protein models are more accurate than an existing NMR-derived structure model of this protein, in terms of both the agreement between predicted and observed 13Cα chemical shifts and some stereochemical quality indicators, and of similar accuracy as one of the protein models solved at a high level of resolution. The results provide evidence that this methodology can be used not only for structure determination but also for additional protein structure refinement of NMR-derived models deposited in the Protein Data Bank. PMID:17516673

  18. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    DOE PAGES

    Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; ...

    2017-12-01

    In this article, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.

  19. Efficient block preconditioned eigensolvers for linear response time-dependent density functional theory

    DOE PAGES

    Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; ...

    2017-08-24

    Within this paper, we present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from time-dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. Additionally, the solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.

  20. Twenty Years of Research on the Alcator C-Mod Tokamak

    NASA Astrophysics Data System (ADS)

    Greenwald, Martin

    2013-10-01

    Alcator C-Mod is a compact, high-field tokamak, whose unique design and operating parameters have produced a wealth of new and important results since its start in 1993, contributing data that extended tests of critical physical models into new parameter ranges and into new regimes. Using only RF for heating and current drive with innovative launching structures, C-Mod operates routinely at very high power densities. Research highlights include direct experimental observation of ICRF mode-conversion, ICRF flow drive, demonstration of Lower-Hybrid current drive at ITER-like densities and fields and, using a set of powerful new diagnostics, extensive validation of advanced RF codes. C-Mod spearheaded the development of the vertical-target divertor and has always operated with high-Z metal plasma facing components--an approach adopted for ITER. C-Mod has made ground-breaking discoveries in divertor physics and plasma-material interactions at reactor-like power and particle fluxes and elucidated the critical role of cross-field transport in divertor operation, edge flows and the tokamak density limit. C-Mod developed the I-mode and EDA H-mode regimes which have high performance without large ELMs and with pedestal transport self-regulated by short-wavelength electromagnetic waves. C-Mod has carried out pioneering studies of intrinsic rotation and found that self-generated flow shear can be strong enough to significantly modify transport. C-Mod made the first quantitative link between pedestal temperature and H-mode performance, showing that the observed self-similar temperature profiles were consistent with critical-gradient-length theories and followed up with quantitative tests of nonlinear gyrokinetic models. Disruption studies on C-Mod provided the first observation of non-axisymmetric halo currents and non-axisymmetric radiation in mitigated disruptions. Work supported by U.S. DoE

  1. [Psychometric properties of a self-efficacy scale for physical activity in Brazilian adults].

    PubMed

    Rech, Cassiano Ricardo; Sarabia, Tais Taiana; Fermino, Rogério César; Hallal, Pedro Curi; Reis, Rodrigo Siqueira

    2011-04-01

    To test the validity and reliability of a self-efficacy scale for physical activity (PA) in Brazilian adults. A self-efficacy scale was applied jointly with a multidimensional questionnaire through face-to-face interviews with 1,418 individuals (63.4% women) aged ≥ 18 years. The scale was submitted to validity (factorial and construct) and reliability analysis (internal consistency and temporal stability). A test-retest procedure was conducted with 74 individuals to evaluate temporal stability. Exploratory factor analyses revealed two independent factors: self-efficacy for walking and self-efficacy for moderate and vigorous PA (MVPA). Together, these two factors explained 65.4% of the total variance of the scale (20.9% and 44.5% for walking and MVPA, respectively). Cronbach's alpha values were 0.83 for walking and 0.90 for MVPA, indicating high internal consistency. Both factors were significantly and positively correlated (rho ≥ 0.17, P < 0.001) with quality of life indicators (health perception, self-satisfaction, and energy for daily activities), indicating an adequate construct validity. The scale's validity, internal consistency, and reliability were adequate to evaluate self-efficacy for PA in Brazilian adults.

  2. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ...sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero

  3. Detangling the Interrelationships between Self- Regulation and Ill-Structured Problem Solving in Problem-Based Learning

    ERIC Educational Resources Information Center

    Ge, Xun; Law, Victor; Huang, Kun

    2016-01-01

    One of the goals for problem-based learning (PBL) is to promote self-regulation. Although self-regulation has been studied extensively, its interrelationships with ill-structured problem solving have been unclear. In order to clarify the interrelationships, this article proposes a conceptual framework illustrating the iterative processes among…

  4. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
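    The core of the simulation-based E-step, drawing latent paths by rejection sampling until the simulated end state matches the observed one, can be sketched on a toy two-state process (hypothetical exponential holding-time rates for brevity, rather than a general semi-Markov law):

```python
import random

random.seed(42)

# Hypothetical exit rates for two transient states 0 <-> 1.
RATES = {0: 0.7, 1: 1.2}

def simulate_path(state, t_end):
    """Forward-simulate the process from `state` up to time t_end."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += random.expovariate(RATES[state])
        if t >= t_end:
            return path, state           # state occupied at t_end
        state = 1 - state                # back-and-forth transitions only
        path.append((t, state))

def rejection_sample(start, end, t_end, max_tries=10_000):
    """Draw a latent path consistent with the observed start and end states."""
    for _ in range(max_tries):
        path, final = simulate_path(start, t_end)
        if final == end:
            return path                  # accept
    raise RuntimeError("acceptance rate too low")

# Panel observation: state 0 at time 0, state 1 at time 2.
paths = [rejection_sample(0, 1, 2.0) for _ in range(200)]
```

The accepted paths would feed the E-step's expected sufficient statistics; the M-step then re-estimates the transition parameters.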

  5. Optimization of Large-Scale Daily Hydrothermal System Operations With Multiple Objectives

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cheng, Chuntian; Shen, Jianjian; Cao, Rui; Yeh, William W.-G.

    2018-04-01

    This paper proposes a practical procedure for optimizing the daily operation of a large-scale hydrothermal system. The overall procedure optimizes a monthly model over a period of 1 year and a daily model over a period of up to 1 month. The outputs from the monthly model are used as inputs and boundary conditions for the daily model. The models iterate and update when new information becomes available. The monthly hydrothermal model uses nonlinear programming (NLP) to minimize fuel costs, while maximizing hydropower production. The daily model consists of a hydro model, a thermal model, and a combined hydrothermal model. The hydro model and thermal model generate the initial feasible solutions for the hydrothermal model. The two competing objectives considered in the daily hydrothermal model are minimizing fuel costs and minimizing thermal emissions. We use the constraint method to develop the trade-off curve (Pareto front) between these two objectives. We apply the proposed methodology to the Yunnan hydrothermal system in China. The system consists of 163 individual hydropower plants with an installed capacity of 48,477 MW and 11 individual thermal plants with an installed capacity of 12,400 MW. We use historical operational records to verify the correctness of the model and to test the robustness of the methodology. The results demonstrate the practicability and validity of the proposed procedure.
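    The constraint method for tracing a Pareto front, minimizing one objective subject to a swept bound on the other, can be illustrated on toy objective functions (hypothetical stand-ins, not the paper's hydrothermal system model):

```python
def f1(x):            # first objective (e.g. fuel cost), decreasing in x
    return (1.0 - x) ** 2

def f2(x):            # second objective (e.g. emissions), increasing in x
    return x ** 2

xs = [i / 1000 for i in range(1001)]        # discretized decision space
pareto = []
for eps in [0.1, 0.3, 0.5, 0.7, 0.9]:       # sweep the constraint bound
    feasible = [x for x in xs if f2(x) <= eps]
    best = min(feasible, key=f1)            # minimize f1 subject to f2 <= eps
    pareto.append((f1(best), f2(best)))
```

Each sweep value yields one point on the trade-off curve: loosening the bound on the second objective buys a lower value of the first.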

  6. Gas Flows in Rocket Motors. Volume 2. Appendix C. Time Iterative Solution of Viscous Supersonic Flow

    DTIC Science & Technology

    1989-08-01

    Keywords: nozzle analysis, Navier-Stokes, turbulent flow, equilibrium chemistry. ...quasi-conservative formulations lead to unacceptably large mass conservation errors. Along with the investigations of Navier-Stokes algorithms... Characteristics Splitting; 4.2.3 Non-Iterative PNS Procedure; 4.2.4 Comparisons of

  7. Cross-cultural Adaptation of a Questionnaire on Self-perceived Level of Skills, Abilities and Competencies of Family Physicians in Albania.

    PubMed

    Alla, Arben; Czabanowska, Katarzyna; Kijowska, Violetta; Roshi, Enver; Burazeri, Genc

    2012-01-01

    Our aim was to validate an international instrument measuring the self-perceived competency level of family physicians in Albania. A representative sample of 57 family physicians operating in primary health care services was interviewed twice in March-April 2012 in Tirana (26 men and 31 women; median age: 46 years, inter-quartile range: 38-56 years). A structured questionnaire was administered [and subsequently re-administered after two weeks (test-retest)] to all family physicians aiming to self-assess physicians' level of abilities, skills and competencies regarding different domains of quality of health care. The questionnaire included 37 items organized into 6 subscales/domains. Answers for each item of the tool ranged from 1 ("novice" physicians) to 5 ("expert" physicians). An overall summary score (range: 37-185) and a subscale summary score for each domain were calculated for the test and retest procedures. Cronbach's alpha was used to assess the internal consistency for both the test and the retest procedures, whereas Spearman's rho was employed to assess the stability over time (test-retest reliability) of the instrument. Cronbach's alpha was 0.87 for the test and 0.86 for the retest procedure. Overall, Spearman's rho was 0.84 (P<0.001). The overall summary score for the 37 items of the instrument was 96.3±10.0 for the test and 97.3±10.1 for the retest. All the subscale summary scores were very similar for the test and the retest procedure. This study provides evidence on cross-cultural adaptation of an international instrument tapping the self-perceived level of competencies of family physicians in Albania. The questionnaire displayed a satisfactory internal consistency for both test and retest procedures in this sample of family physicians in Albania. Furthermore, the high test-retest reliability (stability over time) of the instrument suggests a good potential for wide-scale application to nationally representative samples of family physicians in Albanian populations.
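    Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute; the item scores below are invented for illustration, not the study's data:

```python
def cronbach_alpha(items):
    """items: one score list per item, all over the same respondents."""
    k, n = len(items), len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Per-respondent total scores across all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1.0 - sum(var(item) for item in items) / var(totals))

# Three hypothetical 1-5 items answered by six physicians (one occasion).
items = [[3, 4, 5, 2, 4, 3],
         [3, 5, 4, 2, 4, 3],
         [4, 4, 5, 3, 5, 3]]
alpha = cronbach_alpha(items)
```

For these invented scores alpha is about 0.91, in the same range as the 0.86-0.87 values reported for the 37-item instrument.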

  8. Acoustic scattering by arbitrary distributions of disjoint, homogeneous cylinders or spheres.

    PubMed

    Hesford, Andrew J; Astheimer, Jeffrey P; Waag, Robert C

    2010-05-01

    A T-matrix formulation is presented to compute acoustic scattering from arbitrary, disjoint distributions of cylinders or spheres, each with arbitrary, uniform acoustic properties. The generalized approach exploits the similarities in these scattering problems to present a single system of equations that is easily specialized to cylindrical or spherical scatterers. By employing field expansions based on orthogonal harmonic functions, continuity of pressure and normal particle velocity are directly enforced at each scatterer using diagonal, analytic expressions to eliminate the need for integral equations. The effect of a cylinder or sphere that encloses all other scatterers is simulated with an outer iterative procedure that decouples the inner-object solution from the effect of the enclosing object to improve computational efficiency when interactions among the interior objects are significant. Numerical results establish the validity and efficiency of the outer iteration procedure for nested objects. Two- and three-dimensional methods that employ this outer iteration are used to measure and characterize the accuracy of two-dimensional approximations to three-dimensional scattering of elevation-focused beams.

  9. Artificial Neural Network Based Fault Diagnostics of Rolling Element Bearings Using Time-Domain Features

    NASA Astrophysics Data System (ADS)

    Samanta, B.; Al-Balushi, K. R.

    2003-03-01

    A procedure is presented for fault diagnosis of rolling element bearings using an artificial neural network (ANN). Characteristic features of the time-domain vibration signals of rotating machinery with normal and defective bearings are used as inputs to the ANN, which consists of input, hidden and output layers. The features are obtained by direct processing of the signal segments using very simple preprocessing. The input layer consists of five nodes, one each for the root mean square, variance, skewness, kurtosis and normalised sixth central moment of the time-domain vibration signals. The inputs are normalised in the range of 0.0 to 1.0, except for the skewness, which is normalised between -1.0 and 1.0. The output layer consists of two binary nodes indicating the status of the machine—normal or defective bearings. Two hidden layers with different numbers of neurons have been used. The ANN is trained using the backpropagation algorithm with a subset of the experimental data for known machine conditions, and tested using the remaining data. The effects of some preprocessing techniques, such as high-pass and band-pass filtering, envelope detection (demodulation) and the wavelet transform of the vibration signals prior to feature extraction, are also studied. The results show the effectiveness of the ANN in diagnosing the machine condition. The proposed procedure requires only a few features extracted from the measured vibration data, either directly or with simple preprocessing. The reduced number of inputs leads to faster training, requiring far fewer iterations, which makes the procedure suitable for on-line condition monitoring and diagnostics of machines.
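The five time-domain inputs named above are simple moment statistics of a signal segment. A minimal illustrative reimplementation (not the authors' code; population 1/n moments are assumed here):

```python
import math

def bearing_features(signal):
    """RMS, variance, skewness, kurtosis and the normalised sixth
    central moment of one vibration segment (population moments)."""
    n = len(signal)
    mean = sum(signal) / n
    def cmoment(p):  # p-th central moment
        return sum((x - mean) ** p for x in signal) / n
    var = cmoment(2)
    sd = math.sqrt(var)
    return {
        "rms": math.sqrt(sum(x * x for x in signal) / n),
        "variance": var,
        "skewness": cmoment(3) / sd ** 3,
        "kurtosis": cmoment(4) / sd ** 4,
        "m6": cmoment(6) / sd ** 6,
    }

def scale_unit(values):
    """Min-max scaling of one feature across segments to [0, 1]
    (the paper scales skewness to [-1, 1] instead)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```

Each segment yields one five-element feature vector; scaling is applied per feature across all segments before the vectors are fed to the network.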

  10. Comparison of self-efficacy and its improvement after artificial simulator or live animal model emergency procedure training.

    PubMed

    Hall, Andrew B; Riojas, Ramon; Sharon, Danny

    2014-03-01

    The objective of this study is to compare post-training self-efficacy between artificial simulator and live animal training for the performance of emergency medical procedures. Volunteer airmen of the 81st Medical Group, without prior medical procedure training, were randomly assigned to two experimental arms consisting of identical lectures and training of diagnostic peritoneal lavage, thoracostomy (chest tube), and cricothyroidotomy on either the TraumaMan (Simulab Corp., Seattle, Washington) artificial simulator or a live pig (Sus scrofa domestica) model. Volunteers were given a postlecture and postskills training assessment of self-efficacy. Twenty-seven volunteers who initially performed artificial simulator training subsequently underwent live animal training and provided assessments comparing both modalities. The results were as follows. First, postskills training self-efficacy scores were significantly higher than postlecture scores for either training mode and for all procedures (p < 0.0001). Second, post-training self-efficacy scores were not statistically different between live animal and artificial simulator training for diagnostic peritoneal lavage (p = 0.555), chest tube (p = 0.486), and cricothyroidotomy (p = 0.329). Finally, volunteers undergoing both training modalities indicated a preference for live animal training (p < 0.0001). We conclude that artificial simulator and live animal training produce equivalent levels of self-efficacy after initial training, but there is a preference for using a live animal model to achieve those skills. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.

  11. The Role of an Electric Field in the Formation of a Detached Regime in Tokamak Plasma

    NASA Astrophysics Data System (ADS)

    Senichenkov, I.; Kaveeva, E.; Rozhansky, V.; Sytova, E.; Veselova, I.; Voskoboynikov, S.; Coster, D.

    2018-03-01

    Modeling of the transition to the detachment of ASDEX Upgrade tokamak plasma with increasing density is performed using the SOLPS-ITER numerical code with a self-consistent account of drifts and currents. Their role in plasma redistribution both in the confinement region and in the scrape-off layer (SOL) is investigated. The mechanism of high field side high-density formation in the SOL in the course of detachment is suggested. In the full detachment regime, when the cold plasma region expands above the X-point and reaches closed magnetic-flux surfaces, plasma perturbation in a confined region may lead to a change in the confinement regime.

  12. Application of Four-Point Newton-EGSOR iteration for the numerical solution of 2D Porous Medium Equations

    NASA Astrophysics Data System (ADS)

    Chew, J. V. L.; Sulaiman, J.

    2017-09-01

    Partial differential equations that describe nonlinear heat and mass transfer phenomena are difficult to solve. When the exact solution is difficult to obtain, it is necessary to use a numerical procedure such as the finite difference method to solve a particular partial differential equation. In terms of numerical procedures, a particular method can be considered efficient if it gives an approximate solution within the specified error with the least computational complexity. Throughout this paper, the two-dimensional Porous Medium Equation (2D PME) is discretized by using the implicit finite difference scheme to construct the corresponding approximation equation. This approximation equation yields a large and sparse nonlinear system. By using the Newton method to linearize the nonlinear system, this paper deals with the application of the Four-Point Newton-EGSOR (4NEGSOR) iterative method for solving the 2D PMEs. In addition, the efficiency of the 4NEGSOR iterative method is studied by solving three example problems. For the comparative analysis, the Newton-Gauss-Seidel (NGS) and the Newton-SOR (NSOR) iterative methods are also considered. The numerical findings show that the 4NEGSOR method is superior to the NGS and NSOR methods in terms of the number of iterations needed to reach converged solutions, the computation time, and the maximum absolute errors produced by the methods.
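The outer-Newton / inner-iterative structure common to all three methods can be illustrated with the NGS baseline: an outer Newton iteration whose linear correction is solved by Gauss-Seidel sweeps. A minimal sketch on a toy two-unknown system (not the 4NEGSOR scheme or the PME discretization itself):

```python
def gauss_seidel(A, b, x0, sweeps=100, tol=1e-12):
    """Inner linear solver: plain Gauss-Seidel sweeps for A x = b."""
    n = len(b)
    x = list(x0)
    for _ in range(sweeps):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            break
    return x

def newton_gs(F, J, x0, iters=20, tol=1e-10):
    """Outer Newton iteration: each correction d solves J(x) d = -F(x)
    approximately by Gauss-Seidel, then x <- x + d."""
    x = list(x0)
    for _ in range(iters):
        f = F(x)
        if max(abs(v) for v in f) < tol:
            break
        d = gauss_seidel(J(x), [-v for v in f], [0.0] * len(x))
        x = [xi + di for xi, di in zip(x, d)]
    return x
```

The 4NEGSOR method replaces the inner sweep with a four-point explicit-group SOR iteration, which is what buys the reported reduction in iteration counts.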

  13. The impact of functional analysis methodology on treatment choice for self-injurious and aggressive behavior.

    PubMed Central

    Pelios, L; Morren, J; Tesch, D; Axelrod, S

    1999-01-01

    Self-injurious behavior (SIB) and aggression have been the concern of researchers because of the serious impact these behaviors have on individuals' lives. Despite the plethora of research on the treatment of SIB and aggressive behavior, the reported findings have been inconsistent regarding the effectiveness of reinforcement-based versus punishment-based procedures. We conducted a literature review to determine whether a trend could be detected in researchers' selection of reinforcement-based procedures versus punishment-based procedures, particularly since the introduction of functional analysis to behavioral assessment. The data are consistent with predictions made in the past regarding the potential impact of functional analysis methodology. Specifically, the findings indicate that, once maintaining variables for problem behavior are identified, experimenters tend to choose reinforcement-based procedures rather than punishment-based procedures as treatment for both SIB and aggressive behavior. Results indicated an increased interest in studies on the treatment of SIB and aggressive behavior, particularly since 1988. PMID:10396771

  14. Doppler effects on 3-D non-LTE radiation transport and emission spectra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuliani, J. L.; Davis, J.; DasGupta, A.

    2010-10-01

    Spatially and temporally resolved X-ray emission lines contain information about temperatures, densities, velocities, and the gradients in a plasma. Extracting this information from optically thick lines emitted from complex ions in dynamic, three-dimensional, non-LTE plasmas requires self-consistent accounting for both non-LTE atomic physics and non-local radiative transfer. We present a brief description of a hybrid-structure spectroscopic atomic model coupled to an iterative tabular on-the-spot treatment of radiative transfer that can be applied to plasmas of arbitrary material composition, conditions, and geometries. The effects of Doppler line shifts on the self-consistent radiative transfer within the plasma and the emergent emission and absorption spectra are included in the model. Sample calculations for a two-level atom in a uniform cylindrical plasma are given, showing reasonable agreement with more sophisticated transport models and illustrating the potential complexity - or richness - of radially resolved emission lines from an imploding cylindrical plasma. Also presented is a comparison of modeled L- and K-shell spectra to temporally and radially resolved emission data from a Cu:Ni plasma. Finally, some shortcomings of the model and possible paths for improvement are discussed.

  15. The trust-region self-consistent field method in Kohn-Sham density-functional theory.

    PubMed

    Thøgersen, Lea; Olsen, Jeppe; Köhn, Andreas; Jørgensen, Poul; Sałek, Paweł; Helgaker, Trygve

    2005-08-15

    The trust-region self-consistent field (TRSCF) method is extended to the optimization of the Kohn-Sham energy. In the TRSCF method, both the Roothaan-Hall step and the density-subspace minimization step are replaced by trust-region optimizations of local approximations to the Kohn-Sham energy, leading to a controlled, monotonic convergence towards the optimized energy. Previously the TRSCF method has been developed for optimization of the Hartree-Fock energy, which is a simple quadratic function in the density matrix. However, since the Kohn-Sham energy is a nonquadratic function of the density matrix, the local energy functions must be generalized for use with the Kohn-Sham model. Such a generalization, which contains the Hartree-Fock model as a special case, is presented here. For comparison, a rederivation of the popular direct inversion in the iterative subspace (DIIS) algorithm is performed, demonstrating that the DIIS method may be viewed as a quasi-Newton method, explaining its fast local convergence. In the global region the convergence behavior of DIIS is less predictable. The related energy DIIS technique is also discussed and shown to be inappropriate for the optimization of the Kohn-Sham energy.
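The DIIS procedure rederived in this paper has a compact generic form: keep a short history of iterates and error vectors, then take the combination of iterates whose combined error has minimal norm, subject to the coefficients summing to one (a bordered Lagrange system). A minimal sketch for accelerating a generic fixed-point iteration, illustrative and not tied to the Kohn-Sham implementation:

```python
def solve(A, b):
    """Tiny dense linear solve: Gaussian elimination, partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c]
                              for c in range(k + 1, n))) / M[k][k]
    return x

def diis_fixed_point(g, x0, iters=50, hist=5, tol=1e-12):
    """DIIS-accelerated fixed point x <- g(x), errors e_i = g(x_i) - x_i."""
    xs, es = [], []
    x = list(x0)
    for _ in range(iters):
        gx = g(x)
        e = [a - b for a, b in zip(gx, x)]
        if max(abs(v) for v in e) < tol:
            return gx
        xs.append(gx); es.append(e)
        xs, es = xs[-hist:], es[-hist:]
        m = len(es)
        # Bordered system: minimise |sum c_i e_i|^2 with sum c_i = 1.
        B = [[sum(a * b for a, b in zip(es[i], es[j])) for j in range(m)]
             + [-1.0] for i in range(m)]
        B.append([-1.0] * m + [0.0])
        c = solve(B, [0.0] * m + [-1.0])[:m]
        x = [sum(c[i] * xs[i][k] for i in range(m)) for k in range(len(x))]
    return x
```

Viewed this way, the quasi-Newton character noted in the abstract is visible: the extrapolation uses only stored iterate/error pairs, never an explicit Hessian.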

  16. Development of FullWave : Hot Plasma RF Simulation Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei

    2017-10-01

    A full-wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasma is being developed. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximation of derivatives on an adaptive cloud of computational points; a model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full-wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.

  17. LBQ2D, Extending the Line Broadened Quasilinear Model to TAE-EP Interaction

    NASA Astrophysics Data System (ADS)

    Ghantous, Katy; Gorelenkov, Nikolai; Berk, Herbert

    2012-10-01

    The line broadened quasilinear model was proposed and tested on the one-dimensional electrostatic case of the bump-on-tail instability (H.L. Berk, B. Breizman and J. Fitzpatrick, Nucl. Fusion 35:1661, 1995) to study the wave-particle interaction. In conventional quasilinear theory, the sea of overlapping modes evolves with time as the particle distribution function self-consistently undergoes diffusion in phase space. The line broadened quasilinear model is an extension of the conventional theory that allows treatment of isolated modes as well as overlapping modes by broadening the resonant line in phase space. This makes it possible to treat the evolution of modes self-consistently from onset to saturation in either case. We describe here the model, denoted LBQ2D, which is an extension of the proposed one-dimensional line broadened quasilinear model to the case of TAEs interacting with energetic particles in two-dimensional phase space, energy as well as canonical angular momentum. We study the saturation of isolated modes in various regimes and present the analytical derivation and numerical results. Finally, we present, using ITER parameters, the case where multiple modes overlap and describe the techniques used for the numerical treatment.

  18. Generalization of the Hartree-Fock approach to collision processes

    NASA Astrophysics Data System (ADS)

    Hahn, Yukap

    1997-06-01

    The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square-integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable, and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows the strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applications of this extension are discussed, including the treatment of ionization and shake-off processes.

  19. Self-assembly behavior of tail-to-tail superstructure formed by mono-6-O-(4-carbamoylmethoxy-benzoyl)-β-cyclodextrin in solution and the solid state.

    PubMed

    Xu, Zhe; Chen, Xin; Liu, Jing; Yan, Dong-Qing; Diao, Chun-Hua; Guo, Min-Jie; Fan, Zhi

    2014-07-01

    A novel mono-modified β-cyclodextrin (β-CD) bearing a 4-carbamoylmethoxy-benzoyl unit at the primary side was synthesized and its self-assembly behavior was determined by X-ray crystallography and NMR spectroscopy. The crystal structure shows a 'Yin-Yang'-like packing mode, in which the modified β-CD exhibits a channel superstructure formed by a tail-to-tail dimer as the repeating motif, with the substituted group embedded within the hydrophobic cavity of the facing β-CD. The geometry of the substituted group is determined by the inclusion within the cavity and is further stabilized by two intermolecular hydrogen bonds between the carbonyl O atom and the phenyl group. Furthermore, NMR ROESY investigation indicates that the self-assembly behavior of the substituted group within the β-CD cavity is retained in aqueous solution, and the effective binding constant Ka was calculated to be 1330 M(-1) by iterative fitting of (1)H NMR titration data. Copyright © 2014 Elsevier Ltd. All rights reserved.
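Iterative determination of a binding constant from titration shifts can be sketched under the standard 1:1 isotherm; the paper's exact fitting protocol is not given, so this is an illustrative reconstruction in which `fit_ka` scans a Ka grid and solves the limiting shift Δδmax linearly at each grid point:

```python
import math

def complex_frac(h0, g0, ka):
    """Bound fraction of host for a 1:1 complex, exact mass-balance root:
    [HG] = (s - sqrt(s^2 - 4 h0 g0)) / 2 with s = h0 + g0 + 1/ka."""
    s = h0 + g0 + 1.0 / ka
    hg = 0.5 * (s - math.sqrt(s * s - 4.0 * h0 * g0))
    return hg / h0

def fit_ka(h0, g0s, dds, ka_grid):
    """Least-squares fit of observed shifts dds = dmax * bound_fraction.
    For each trial ka, dmax has a closed-form linear solution; the grid
    point with the smallest residual wins."""
    best = None
    for ka in ka_grid:
        f = [complex_frac(h0, g0, ka) for g0 in g0s]
        dmax = sum(fi * di for fi, di in zip(f, dds)) / sum(fi * fi for fi in f)
        sse = sum((di - dmax * fi) ** 2 for fi, di in zip(f, dds))
        if best is None or sse < best[0]:
            best = (sse, ka, dmax)
    return best[1], best[2]
```

The host concentration `h0`, guest series `g0s` and grid bounds below are hypothetical; in practice the grid search would be refined (or replaced by a gradient method) around the minimum.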

  20. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, were considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. The procedures, which are generalized steepest ascent (deflected gradient) procedures, contain those of Hosmer as a special case.
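Successive-approximation procedures of this kind are ancestors of the EM algorithm for mixtures. A minimal EM sketch for a two-component 1D normal mixture with a partially identified (labeled) subsample, illustrative rather than Walker's exact scheme:

```python
import math

def normpdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def em_partially_labeled(labeled, unlabeled, iters=200):
    """EM for a two-component 1D normal mixture.
    labeled: list of (x, k) with known component k in {0, 1};
    unlabeled: list of x with unknown origin."""
    groups = [[x for x, k in labeled if k == j] for j in (0, 1)]
    # initialise parameters from the identified subsample
    mu = [sum(g) / len(g) for g in groups]
    var = [max(sum((x - m) ** 2 for x in g) / len(g), 1e-6)
           for g, m in zip(groups, mu)]
    pi = [len(groups[0]) / len(labeled), len(groups[1]) / len(labeled)]
    for _ in range(iters):
        # E-step: soft responsibilities; labeled points keep hard labels
        resp = []
        for x in unlabeled:
            w = [pi[j] * normpdf(x, mu[j], var[j]) for j in (0, 1)]
            t = w[0] + w[1]
            resp.append((w[0] / t, w[1] / t))
        # M-step over labeled (weight 1) plus unlabeled (soft weights)
        for j in (0, 1):
            wsum = len(groups[j]) + sum(r[j] for r in resp)
            mu[j] = (sum(groups[j]) +
                     sum(r[j] * x for r, x in zip(resp, unlabeled))) / wsum
            sq = sum((x - mu[j]) ** 2 for x in groups[j]) + \
                 sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, unlabeled))
            var[j] = max(sq / wsum, 1e-6)
            pi[j] = wsum / (len(labeled) + len(unlabeled))
    return mu, var, pi
```

Like the steepest-ascent procedures of the paper, each pass can only increase the likelihood, and the identified subsample anchors the component labels.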

  1. [Urologic surgical procedures in patients with uterus neoplasm and colon-rectal cancer].

    PubMed

    Marino, G; Laudi, M; Capussotti, L; Zola, P

    2008-01-01

    INTRODUCTION. During the last 30 years, multidisciplinary treatment of uterine and colorectal neoplasms has increased overall survival rates, and has therefore increased the number of cases with regional relapse involving the urinary tract. In these cases iterative surgery can be performed, in the absence of secondary disease, or as a palliative procedure (pelvic pain relief, haemostasis or debulking), and must be considered and discussed with the patient according to his/her general status. MATERIALS AND METHODS. From 1997 to August 2007 we performed 43 pelvic iterative surgeries with a simultaneous urologic surgical procedure for pelvic tumor relapse in patients with uterine neoplasm or colorectal cancer. In 4 cases of anal cancer, the urological procedures were: one radical prostatectomy with continent vesicostomy and, in the other 3 cases, radical pelvectomy with double-barrelled uretero-cutaneostomy. In 23 cases of colon cancer, the urologic procedures were: 9 cases of radical cystectomy with double-barrelled uretero-cutaneostomy; 4 cases of radical cystectomy with uretero-ileo-cutaneostomy according to the Bricker-Wallace II procedure; and 9 cases of partial cystectomy with pelvic ureterectomy and ureterocystoneostomy according to the Lich-Gregoire technique (7 cases) and the Lembo-Boari procedure (2 cases). In 16 cases of uterine cancer, the urological procedures were: 7 cases of partial cystectomy with pelvic ureterectomy and uretero-cystoneostomy according to the Lich-Gregoire procedure; 3 cases of radical cystectomy with continent cutaneous urinary diversion according to the Ileal T-pouch procedure; 2 cases of total pelvectomy with double uretero-cutaneostomy; and 4 cases of bilateral uretero-cutaneostomy. RESULTS. No patient died in the perioperative period; early systemic complications were: 2 cases of esophageal candidiasis and 1 case of venous thrombosis. CONCLUSIONS. 
Iterative pelvic surgery for oncological relapse involving the urinary tract aims to achieve the best quality of life with the utmost oncological radicality. Eradication of the pelvic neoplasm together with urinary tract reconstruction and an acceptable quality of life will be the future target; nevertheless, it is not possible to establish guidelines beforehand, and the therapy must be adapted to each single case.

  2. Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities

    DTIC Science & Technology

    2009-02-01

    levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability ... research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows ... (FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making ...

  3. Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

    NASA Astrophysics Data System (ADS)

    Williamson, Daniel B.; Blaker, Adam T.; Sinha, Bablu

    2017-04-01

    In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO (Nucleus for European Modelling of the Ocean) ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations away from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low-resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
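The core of iterative refocussing is an implausibility measure: a parameter choice x is ruled out when the standardized distance between the observation and the (possibly emulated) model output exceeds a cutoff, commonly 3. A minimal sketch with made-up variance components (not the paper's NEMO setup):

```python
import math

def implausibility(z, fx, obs_var, disc_var, emu_var):
    """I(x) = |z - E[f(x)]| / sqrt(observation error variance
    + model discrepancy variance + emulator variance)."""
    return abs(z - fx) / math.sqrt(obs_var + disc_var + emu_var)

def refocus(candidates, simulator, z,
            obs_var=0.1, disc_var=0.1, emu_var=0.0, cut=3.0):
    """One wave: keep only parameter choices not ruled out by observation z.
    Successive waves re-run the simulator only in the retained region."""
    return [x for x in candidates if
            implausibility(z, simulator(x), obs_var, disc_var, emu_var) <= cut]
```

Note the contrast with cost-function optimisation: nothing here picks a "best" parameter; the retained set simply shrinks wave by wave, which is what protects against over-tuning to partial observations.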

  4. Image transmission system using adaptive joint source and channel decoding

    NASA Astrophysics Data System (ADS)

    Liu, Weiliang; Daut, David G.

    2005-03-01

    In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel decoded bits are then sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder. The log-likelihood ratios (LLRs) of these bits are then modified by a weighting factor for the next iteration. By observing the statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: for lower channel SNR, a larger factor is assigned, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method always outperforms the non-source-controlled decoding method by up to 5 dB in terms of PSNR for various reconstructed images.

  5. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancement denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
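The POCS idea underlying the algorithm, alternately projecting onto convex constraint sets until a point in their intersection is reached, can be sketched on a toy problem (a hyperplane standing in for data fidelity and a box standing in for the image constraint; not the paper's CT operators):

```python
def project_box(x, lo, hi):
    """Projection onto the box lo <= x_i <= hi (a convex set)."""
    return [min(max(v, lo), hi) for v in x]

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : a.x = b}
    (a data-fidelity-style linear constraint)."""
    ax = sum(ai * xi for ai, xi in zip(a, x))
    aa = sum(ai * ai for ai in a)
    t = (ax - b) / aa
    return [xi - t * ai for xi, ai in zip(x, a)]

def pocs(x, steps=100):
    """Alternate projections; for nonempty intersections this converges
    to a point satisfying both constraints."""
    for _ in range(steps):
        x = project_hyperplane(x, [1.0, 1.0], 2.0)
        x = project_box(x, 0.0, 1.5)
    return x
```

In the paper's setting the hyperplane projection is replaced by a windowed-FBP fidelity step and the box projection by the edge-enhancing nonlinear filter, which is how the full reconstruction avoids repeated forward projections.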

  6. The development of Drink Less: an alcohol reduction smartphone app for excessive drinkers.

    PubMed

    Garnett, Claire; Crane, David; West, Robert; Brown, Jamie; Michie, Susan

    2018-05-04

    Excessive alcohol consumption poses a serious problem for public health. Digital behavior change interventions have the potential to help users reduce their drinking. In accordance with Open Science principles, this paper describes the development of a smartphone app to help individuals who drink excessively to reduce their alcohol consumption. Following the UK Medical Research Council's guidance and the Multiphase Optimization Strategy, development consisted of two phases: (i) selection of intervention components and (ii) design and development work to implement the chosen components into modules to be evaluated further for inclusion in the app. Phase 1 involved a scoping literature review, expert consensus study and content analysis of existing alcohol apps. Findings were integrated within a broad model of behavior change (Capability, Opportunity, Motivation-Behavior). Phase 2 involved a highly iterative process and used the "Person-Based" approach to promote engagement. From Phase 1, five intervention components were selected: (i) Normative Feedback, (ii) Cognitive Bias Re-training, (iii) Self-monitoring and Feedback, (iv) Action Planning, and (v) Identity Change. Phase 2 indicated that each of these components presented different challenges for implementation as app modules; all required multiple iterations and design changes to arrive at versions that would be suitable for inclusion in a subsequent evaluation study. The development of the Drink Less app involved a thorough process of component identification with a scoping literature review, expert consensus, and review of other apps. Translation of the components into app modules required a highly iterative process involving user testing and design modification.

  7. Efficient stabilization and acceleration of numerical simulation of fluid flows by residual recombination

    NASA Astrophysics Data System (ADS)

    Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.

    2017-09-01

    The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix-based methods, such as the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs that are too large in terms of both memory and execution time. We present a novel iterative algorithm, Boostconv, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step, with a projection basis updated at each iteration rather than at periodic restarts as in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. 
We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.

  8. Assessment of self-perception of transsexual persons: pilot study of 15 patients.

    PubMed

    Barišić, Jasmina; Milosavljević, Marija; Duišin, Dragana; Batinić, Borjanka; Vujović, Svetlana; Milovanović, Srdjan

    2014-01-01

    There have been few studies in the area of Self-Perception in transsexual persons, except for the population of transsexual adolescents. Bearing in mind its importance not only in the assessment of personality but also in predicting adaptive capacity, the goal of our research is based on the examination of Self-Perception of adult transsexual persons. The study was conducted using a Rorschach test, which provides an insight into various aspects of Self-Perception. The sample consisted of 15 transsexual persons, who passed the standard diagnostic procedure. The results suggest that transsexual persons manage to maintain Adequate Self-Esteem. Hypervigilance Index and Obsessive Style Index are negative, while the values showing a negative quality of Self-Regard and the capacity for introspection tend to increase. In the process of Self-Introspection, negative and painful emotional states are often perceived. The estimation of Self-Perception in adult transsexual persons indicates a trend of subjective perception of a personal imperfection or inadequacy. This is probably the result of experiencing discomfort for a number of years due to gender incongruence and dysphoria, in particular in persons who enter the sex reassignment procedure later in their adulthood.

  9. Upper wide-angle viewing system for ITER.

    PubMed

    Lasnier, C J; McLean, A G; Gattuso, A; O'Neill, R; Smiley, M; Vasquez, J; Feder, R; Smith, M; Stratton, B; Johnson, D; Verlaan, A L; Heijmans, J A C

    2016-11-01

    The Upper Wide Angle Viewing System (UWAVS) will be installed on five upper ports of ITER. This paper presents the major requirements, gives an overview of the preliminary design with reasons for some design choices, examines self-emitted IR light from the UWAVS optics and its effect on accuracy, and shows calculations of signal-to-noise ratios for the two-color temperature output as a function of integration time and divertor temperature. Accurate temperature output requires correction for vacuum window absorption vs. wavelength and for self-emitted IR, which requires good measurement of the temperature of the optical components. The anticipated signal-to-noise ratio using presently available IR cameras is adequate for the required 500 Hz frame rate.
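A two-color (ratio) temperature output can be sketched under the Wien approximation, assuming equal emissivity at both wavelengths; the wavelengths below are illustrative, not the UWAVS design values:

```python
import math

C2 = 1.4388e-2  # second radiation constant hc/k, in m*K

def wien_intensity(lam, t):
    """Wien-approximation spectral radiance at wavelength lam (m) and
    temperature t (K), on an arbitrary scale, for a grey emitter."""
    return lam ** -5 * math.exp(-C2 / (lam * t))

def two_color_temperature(i1, i2, lam1, lam2):
    """Invert the ratio of two spectral intensities for temperature:
    T = C2 (1/lam2 - 1/lam1) / (ln(i1/i2) - 5 ln(lam2/lam1)),
    valid when emissivity cancels in the ratio (grey-body assumption)."""
    num = C2 * (1.0 / lam2 - 1.0 / lam1)
    den = math.log(i1 / i2) - 5.0 * math.log(lam2 / lam1)
    return num / den
```

Because the emissivity cancels in the ratio, the two-color output is less sensitive to surface-condition changes than single-band thermography; the residual error terms treated in the paper (window absorption and self-emitted IR) enter as wavelength-dependent corrections to `i1` and `i2`.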

  10. Pellet injection into H-mode ITER plasma with the presence of internal transport barriers

    NASA Astrophysics Data System (ADS)

    Leekhaphan, P.; Onjun, T.

    2011-04-01

    The impacts of pellet injection into ITER type-I ELMy H-mode plasma with the presence of internal transport barriers (ITBs) are investigated using self-consistent core-edge simulations with the 1.5D BALDUR integrated predictive modeling code. In these simulations, the plasma core transport is predicted using a combination of a semi-empirical Mixed B/gB anomalous transport model, which can self-consistently predict the formation of ITBs, and the NCLASS neoclassical model. For simplicity, it is assumed that the toroidal velocity for the ω_E×B calculation is proportional to the local ion temperature. In addition, the boundary conditions are predicted using the pedestal temperature model based on magnetic and flow shear stabilization width scaling, while the density of each plasma species at the boundary, including both hydrogenic and impurity species, is assumed to be a large fraction of its line-averaged density. For the pellet's behavior in the hot plasma, the Neutral Gas Shielding (NGS) model by Milora-Foster is used. It was found that pellet injection could further improve the fusion performance beyond that obtained from ITB formation alone, although the impact of pellet injection is quite complicated. The pellets cannot penetrate deep into the plasma core; their injection results in the formation of a density peak in the region close to the plasma edge. Pellet injection can improve nuclear fusion performance depending on the pellet properties (e.g., an increase of up to 5% with a speed of 1 km/s and a radius of 2 mm). A sensitivity analysis is carried out to determine the impact of the pellet parameters: the pellet radius, the pellet velocity, and the frequency of injection. Increases in the pellet radius and frequency were found to greatly improve the performance and effectiveness of fuelling, whereas changing the velocity is observed to exert only a small impact.

  11. An adaptive moving mesh method for two-dimensional ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Han, Jianqiang; Tang, Huazhong

    2007-01-01

    This paper presents an adaptive moving mesh algorithm for two-dimensional (2D) ideal magnetohydrodynamics (MHD) that utilizes a staggered constrained transport technique to keep the magnetic field divergence-free. The algorithm consists of two independent parts: MHD evolution and mesh redistribution. The first part is a high-resolution, divergence-free, shock-capturing scheme on a fixed quadrangular mesh, while the second part is an iterative procedure. In each iteration, mesh points are first redistributed, and then a conservative-interpolation formula is used to calculate the remapped cell-averages of the mass, momentum, and total energy on the resulting new mesh; the magnetic potential is remapped to the new mesh in a non-conservative way and is reconstructed to give a divergence-free magnetic field on the new mesh. Several numerical examples are given to demonstrate that the proposed method can achieve high numerical accuracy, track and resolve strong shock waves in ideal MHD problems, and preserve the divergence-free property of the magnetic field. Numerical examples include the smooth Alfvén wave problem, 2D and 2.5D shock tube problems, two rotor problems, the stringent blast problem, and the cloud-shock interaction problem.
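
    The mesh-redistribution step described above (points moved, then the solution remapped) rests on iterative equidistribution of a monitor function. A minimal 1D sketch of such an iteration, using a de Boor-style equidistribution of an arc-length monitor for a steep tanh profile (an illustration, not the authors' 2D scheme; the profile and monitor are assumptions):

```python
import math

def monitor(x):
    # arc-length monitor for u(x) = tanh(20*(x - 0.5)): w = sqrt(1 + u'(x)^2)
    du = 20.0 / math.cosh(20.0 * (x - 0.5)) ** 2
    return math.sqrt(1.0 + du * du)

def redistribute(mesh):
    # one equidistribution iteration: integrate the monitor over the current
    # mesh, then invert the cumulative integral at equal increments
    n = len(mesh)
    c = [0.0]
    for i in range(n - 1):
        w = monitor(0.5 * (mesh[i] + mesh[i + 1]))   # midpoint rule per cell
        c.append(c[-1] + w * (mesh[i + 1] - mesh[i]))
    total = c[-1]
    new = [mesh[0]]
    for j in range(1, n - 1):
        target = total * j / (n - 1)
        k = max(i for i in range(n) if c[i] <= target)
        frac = (target - c[k]) / (c[k + 1] - c[k])
        new.append(mesh[k] + frac * (mesh[k + 1] - mesh[k]))
    new.append(mesh[-1])
    return new

mesh = [i / 40 for i in range(41)]      # uniform initial mesh on [0, 1]
for _ in range(30):                     # iterate until mesh movement is small
    new = redistribute(mesh)
    if max(abs(a - b) for a, b in zip(new, mesh)) < 1e-10:
        break
    mesh = new

widths = [b - a for a, b in zip(mesh, mesh[1:])]
print(min(widths), max(widths))         # cells cluster near the layer at x = 0.5
```

    In the full 2D method the remap after each such redistribution is done conservatively for the flow variables, which this sketch omits.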

  12. Subpixel edge estimation with lens aberrations compensation based on the iterative image approximation for high-precision thermal expansion measurements of solids

    NASA Astrophysics Data System (ADS)

    Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.

    2017-06-01

    A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy, continued until a simulated image is found that matches the acquired image. A numerical image model is presented consisting of three parts: an edge model; a model of the object and background brightness distributions; and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for evaluating the merit function, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point locations. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. An experimental verification of the method, using a digital micromirror device to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range 1000 °C to 2400 °C are presented.
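
    The core idea, fitting a parametric edge model to pixel data by minimizing an L2 merit function, can be sketched in 1D with an erf (PSF-blurred step) edge model. All numbers here are hypothetical, and a grid search with parabolic refinement stands in for the paper's conjugate-gradient optimizer:

```python
import math

def edge_profile(x, e, s):
    # smooth step: ideal edge blurred by a Gaussian PSF (erf model)
    return 0.5 * (1.0 + math.erf((x - e) / (s * math.sqrt(2.0))))

# synthetic 1-D scan line: edge at x = 10.37 pixels, blur width s = 1.2
TRUE_E, S = 10.37, 1.2
pixels = [20.0 + 80.0 * edge_profile(i, TRUE_E, S) for i in range(21)]

def sse(e):
    # for a fixed edge location the brightness scale/offset enter linearly,
    # so solve the 2x2 least-squares for (a, b) and return the L2 merit
    t = [edge_profile(i, e, S) for i in range(21)]
    n = len(t)
    st, stt = sum(t), sum(v * v for v in t)
    sy, sty = sum(pixels), sum(v * y for v, y in zip(t, pixels))
    det = n * stt - st * st
    a = (n * sty - st * sy) / det
    b = (sy - a * st) / n
    return sum((b + a * v - y) ** 2 for v, y in zip(t, pixels))

# coarse grid search, then 3-point parabolic refinement of the minimum
grid = [8.0 + 0.05 * k for k in range(81)]
k = min(range(1, len(grid) - 1), key=lambda i: sse(grid[i]))
f0, f1, f2 = sse(grid[k - 1]), sse(grid[k]), sse(grid[k + 1])
e_hat = grid[k] + 0.05 * 0.5 * (f0 - f2) / (f0 - 2 * f1 + f2)
print(abs(e_hat - TRUE_E))   # subpixel-accurate edge location
```

    The paper's method does this jointly for a full 2D edge contour and folds the lens aberration model into the simulated image; the linear-in-brightness trick above is one common way to keep the nonlinear search low-dimensional.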

  13. Coarse mesh and one-cell block inversion based diffusion synthetic acceleration

    NASA Astrophysics Data System (ADS)

    Kim, Kang-Seog

    DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-cell block inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation.
Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high aspect ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.
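
    The convergence degradation that motivates DSA can be seen in a scalar caricature of source iteration: the iteration error shrinks by the scattering ratio c per sweep, so the sweep count blows up as c → 1. This is a sketch of that scaling only (hypothetical tolerance; not a transport solve, and no actual diffusion acceleration is performed):

```python
def source_iteration(c, q=1.0, tol=1e-8):
    # unaccelerated fixed point phi <- c*phi + q; exact answer is q/(1 - c);
    # the error contracts by exactly c per sweep
    phi, exact, n = 0.0, q / (1.0 - c), 0
    while abs(phi - exact) > tol * exact:
        phi = c * phi + q
        n += 1
    return n

fast = source_iteration(0.5)    # low scattering ratio: rapid convergence
slow = source_iteration(0.99)   # c -> 1: error shrinks only by 0.99 per sweep
print(fast, slow)               # sweep counts differ by nearly two orders
```

    DSA restores a small, c-independent effective spectral radius by solving a cheap diffusion problem for the slowly-converging error component after each sweep, which is why its stability and rapid convergence for c near 1 is the central concern of the analyses above.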

  14. Iterative combining rules for the van der Waals potentials of mixed rare gas systems

    NASA Astrophysics Data System (ADS)

    Wei, L. M.; Li, P.; Tang, K. T.

    2017-05-01

    An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of the mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge. The converged results can be substantially different from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one or even no iteration is necessary for the results to converge. In either case, the converged results are accurate descriptions of the interaction potentials of the hetero-nuclear dimers.

  15. Accelerating the weighted histogram analysis method by direct inversion in the iterative subspace.

    PubMed

    Zhang, Cheng; Lai, Chun-Liang; Pettitt, B Montgomery

    The weighted histogram analysis method (WHAM) for free energy calculations is a valuable tool for producing free energy differences with minimal error. Given multiple simulations, WHAM obtains from the distribution overlaps the optimal statistical estimator of the density of states, from which the free energy differences can be computed. The WHAM equations are often solved by an iterative procedure. In this work, we use a well-known linear algebra algorithm which allows for more rapid convergence to the solution. We find that the computational complexity of the iterative solution to WHAM and the closely-related multiple Bennett acceptance ratio (MBAR) method can be improved by using the method of direct inversion in the iterative subspace. We give examples from a lattice model, a simple liquid and an aqueous protein solution.
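
    Direct inversion in the iterative subspace (DIIS) can be sketched on a generic slowly-converging fixed point: each new iterate is the constrained linear combination of stored trial vectors that minimizes the combined residual. The 2-variable linear map below is a stand-in for one pass of the WHAM self-consistency update (all numbers hypothetical; not the WHAM equations themselves):

```python
def solve_dense(A, rhs):
    # small dense solve by Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(n):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def g(x):
    # slowly contracting fixed-point map (spectral radius 0.95)
    G = [[0.90, 0.05], [0.05, 0.90]]
    c = [0.0, 0.15]
    return [sum(Gi * xi for Gi, xi in zip(row, x)) + ci
            for row, ci in zip(G, c)]

def plain_iterate(x, tol=1e-10):
    for it in range(1, 100000):
        nx = g(x)
        if max(abs(a - b) for a, b in zip(nx, x)) < tol:
            return nx, it
        x = nx

def diis_iterate(x, tol=1e-10, hist=2):
    xs, rs = [], []
    for it in range(1, 100000):
        fx = g(x)
        r = [a - b for a, b in zip(fx, x)]        # residual g(x) - x
        if max(map(abs, r)) < tol:
            return fx, it
        xs.append(fx); rs.append(r)
        xs, rs = xs[-hist:], rs[-hist:]
        m = len(rs)
        # minimize |sum_i c_i r_i|^2 subject to sum_i c_i = 1 (Lagrange row)
        B = [[sum(u * v for u, v in zip(rs[i], rs[j])) for j in range(m)] + [1.0]
             for i in range(m)]
        B.append([1.0] * m + [0.0])
        coef = solve_dense(B, [0.0] * m + [1.0])
        x = [sum(coef[i] * xs[i][k] for i in range(m)) for k in range(len(x))]

x0 = [0.0, 0.0]
_, n_plain = plain_iterate(x0)
sol, n_diis = diis_iterate(x0)
print(n_plain, n_diis)   # DIIS needs far fewer passes than plain iteration
```

    For the actual WHAM/MBAR equations the update is nonlinear, but the extrapolation machinery is identical: only the stored trial vectors and residuals change.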

  16. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-01

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ɛ(r) is close to one everywhere inside the protein. The Gaussian widths σi of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σi. 
A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by comparison with so-called generalized Born methods. A follow-up paper describes how the method enables Hamiltonian, efficient, and accurate MM molecular dynamics simulations of proteins in dielectric solvent continua.

  17. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description.

    PubMed

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σi of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σi. 
A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by comparison with so-called generalized Born methods. A follow-up paper describes how the method enables Hamiltonian, efficient, and accurate MM molecular dynamics simulations of proteins in dielectric solvent continua.

  18. Achievements in the development of the Water Cooled Solid Breeder Test Blanket Module of Japan to the milestones for installation in ITER

    NASA Astrophysics Data System (ADS)

    Tsuru, Daigo; Tanigawa, Hisashi; Hirose, Takanori; Mohri, Kensuke; Seki, Yohji; Enoeda, Mikio; Ezato, Koichiro; Suzuki, Satoshi; Nishi, Hiroshi; Akiba, Masato

    2009-06-01

    As the primary candidate ITER Test Blanket Module (TBM) to be tested under the leadership of Japan, a water cooled solid breeder (WCSB) TBM is being developed. This paper shows the recent achievements towards the milestones of ITER TBMs prior to installation, which consist of design integration in ITER, module qualification and safety assessment. With respect to the design integration, targeting the detailed design final report in 2012, structural designs of the WCSB TBM and the interfacing components (common frame and backside shielding) that are placed in a test port of ITER, together with the layout of the cooling system, are presented. As for the module qualification, a real-scale first wall mock-up, fabricated by hot isostatic pressing from the reduced-activation martensitic/ferritic steel F82H, is presented along with flow and irradiation tests of the mock-up. As for safety milestones, the contents of the preliminary safety report in 2008, consisting of source term identification, failure mode and effect analysis (FMEA), identification of postulated initiating events (PIEs), and safety analyses, are presented.

  19. A Centered Projective Algorithm for Linear Programming

    DTIC Science & Technology

    1988-02-01

    Karmarkar’s algorithm iterates this procedure. An alternative method, the so-called affine variant (first proposed by Dikin [6] in 1967)… trajectories, II. Legendre transform coordinates… central trajectories," manuscripts, to appear in Transactions of the American… [6] I.I. Dikin, "Iterative solution of problems of linear and quadratic programming," Soviet Mathematics Doklady 8 (1967), 674-675. [7] I.I. Dikin, "On the speed of an…

  20. MPL-A program for computations with iterated integrals on moduli spaces of curves of genus zero

    NASA Astrophysics Data System (ADS)

    Bogner, Christian

    2016-06-01

    We introduce the Maple program MPL for computations with multiple polylogarithms. The program is based on homotopy invariant iterated integrals on moduli spaces M0,n of curves of genus 0 with n ordered marked points. It includes the symbol map and procedures for the analytic computation of period integrals on M0,n. It supports the automated computation of a certain class of Feynman integrals.

  1. New Manning System Field Evaluation

    DTIC Science & Technology

    1985-11-01

    absence of welcoming attention has deleterious effects on family member stability and adaptability. Decreased self-esteem and negative attitudes toward unit… through TCATA and their BDM on-station data collection agents, is conducting self-administered attitudinal surveys among members (80% or more) of… soldier-unit performance. Data collection will involve three iterations of a self-administered mailed survey over an 18-month period. (3) Battalion

  2. Self-Regulated Learning: The Continuous-Change Conceptual Framework and a Vision of New Paradigm, Technology System, and Pedagogical Support

    ERIC Educational Resources Information Center

    Huh, Yeol; Reigeluth, Charles M.

    2017-01-01

    A modified conceptual framework called the Continuous-Change Framework for self-regulated learning (SRL) is presented. Common elements and limitations among the past frameworks are discussed in relation to the modified conceptual framework. The iterative nature of the goal setting process and overarching presence of self-efficacy and motivational…

  3. Student Perceptions of Instructional Choices in Middle School Physical Education

    ERIC Educational Resources Information Center

    Agbuga, Bulent; Xiang, Ping; McBride, Ron E.; Su, Xiaoxia

    2016-01-01

    Purpose: Framed within self-determination theory, this study examined relationships among perceived instructional choices (cognitive, organizational, and procedural), autonomy need satisfaction, and engagement (behavioral, cognitive, and emotional) among Turkish students in middle school physical education. Methods: Participants consisted of 246…

  4. The Self-Stigma of Depression Scale (SSDS): development and psychometric evaluation of a new instrument.

    PubMed

    Barney, Lisa J; Griffiths, Kathleen M; Christensen, Helen; Jorm, Anthony F

    2010-12-01

    Self-stigma may feature strongly and be detrimental for people with depression, but the understanding of its nature and prevalence is limited by the lack of psychometrically-validated measures. This study aimed to develop and validate a measure of self-stigma about depression. Items assessing self-stigma were developed from focus group discussions, and were tested and refined over three studies using surveys of 408 university students, 330 members of a depression Internet network, and 1312 members of the general Australian public. Evaluation involved item-level and bivariate analyses, and factor analytic procedures. Items performed consistently across the three surveys. The resulting Self-Stigma of Depression Scale (SSDS) comprised 16 items representing subscales of Shame, Self-Blame, Social Inadequacy, and Help-Seeking Inhibition. Construct validity, internal consistency and test-retest reliability were satisfactory. The SSDS distinguishes self-stigma from perceptions of stigma by others, yields in-depth information about self-stigma of depression, and possesses good psychometric properties. It is a promising tool for the measurement of self-stigma and is likely to be useful in further understanding self-stigma and evaluating stigma interventions. Copyright © 2010 John Wiley & Sons, Ltd.

  5. Development of renormalization group analysis of turbulence

    NASA Technical Reports Server (NTRS)

    Smith, L. M.

    1990-01-01

    The renormalization group (RG) procedure for nonlinear, dissipative systems is now quite standard, and its applications to the problem of hydrodynamic turbulence are becoming well known. In summary, the RG method isolates self similar behavior and provides a systematic procedure to describe scale invariant dynamics in terms of large scale variables only. The parameterization of the small scales in a self consistent manner has important implications for sub-grid modeling. This paper develops the homogeneous, isotropic turbulence and addresses the meaning and consequence of epsilon-expansion. The theory is then extended to include a weak mean flow and application of the RG method to a sequence of models is shown to converge to the Navier-Stokes equations.

  6. Noniterative accurate algorithm for the exact exchange potential of density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2007-10-15

    An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature.

  7. A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems

    NASA Astrophysics Data System (ADS)

    Gambolati, Giuseppe; Teatini, Pietro

    1996-01-01

    A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
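
    The block iterative strategy, a Gauss-Seidel sweep over blocks that can be over-relaxed with a factor ω, can be sketched on a toy two-block linear system (a hypothetical 4×4 matrix standing in for a two-aquifer discretization; not the authors' finite element model):

```python
def solve2(M, v):
    # direct solve of one 2x2 diagonal block
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * v[0] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# 4x4 Laplacian-like system partitioned into two 2x2 blocks ("layers")
A11 = [[4.0, -1.0], [-1.0, 4.0]]; A12 = [[-1.0, 0.0], [0.0, -1.0]]
A21 = [[-1.0, 0.0], [0.0, -1.0]]; A22 = [[4.0, -1.0], [-1.0, 4.0]]
b1, b2 = [1.0, 2.0], [3.0, 4.0]

def block_sor(omega, tol=1e-10):
    # omega = 1 is plain block Gauss-Seidel; omega > 1 over-relaxes
    x1, x2 = [0.0, 0.0], [0.0, 0.0]
    for it in range(1, 1000):
        # sweep block 1 using the latest block-2 values, then block 2
        y1 = solve2(A11, [bi - ai for bi, ai in zip(b1, matvec(A12, x2))])
        x1 = [(1 - omega) * a + omega * c for a, c in zip(x1, y1)]
        y2 = solve2(A22, [bi - ai for bi, ai in zip(b2, matvec(A21, x1))])
        x2 = [(1 - omega) * a + omega * c for a, c in zip(x2, y2)]
        r1 = [bi - ai - ci for bi, ai, ci in
              zip(b1, matvec(A11, x1), matvec(A12, x2))]
        r2 = [bi - ai - ci for bi, ai, ci in
              zip(b2, matvec(A21, x1), matvec(A22, x2))]
        if max(map(abs, r1 + r2)) < tol:
            return (x1, x2), it
    return (x1, x2), it

sol_gs, n_gs = block_sor(1.0)     # block Gauss-Seidel
sol_sor, n_sor = block_sor(1.03)  # over-relaxed, near the optimal factor
print(n_gs, n_sor)
```

    In the aquifer model the "blocks" are the individual aquifers and aquitards suggested by the geology, each block solve is itself a finite element solve, and the optimum ω gives the acceleration reported above.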

  8. Interferometric Imaging Directly with Closure Phases and Closure Amplitudes

    NASA Astrophysics Data System (ADS)

    Chael, Andrew A.; Johnson, Michael D.; Bouman, Katherine L.; Blackburn, Lindy L.; Akiyama, Kazunori; Narayan, Ramesh

    2018-04-01

    Interferometric imaging now achieves angular resolutions as fine as ∼10 μas, probing scales that are inaccessible to single telescopes. Traditional synthesis imaging methods require calibrated visibilities; however, interferometric calibration is challenging, especially at high frequencies. Nevertheless, most studies present only a single image of their data after a process of “self-calibration,” an iterative procedure where the initial image and calibration assumptions can significantly influence the final image. We present a method for efficient interferometric imaging directly using only closure amplitudes and closure phases, which are immune to station-based calibration errors. Closure-only imaging provides results that are as noncommittal as possible and allows for reconstructing an image independently from separate amplitude and phase self-calibration. While closure-only imaging eliminates some image information (e.g., the total image flux density and the image centroid), this information can be recovered through a small number of additional constraints. We demonstrate that closure-only imaging can produce high-fidelity results, even for sparse arrays such as the Event Horizon Telescope, and that the resulting images are independent of the level of systematic amplitude error. We apply closure imaging to VLBA and ALMA data and show that it is capable of matching or exceeding the performance of traditional self-calibration and CLEAN for these data sets.
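
    The immunity of closure quantities to station-based errors follows from a phase-cancellation identity: multiplying the three visibilities around a baseline triangle makes every station gain phase appear once with each sign. A small numerical check (synthetic visibilities and gains, drawn at random):

```python
import math, cmath, random

random.seed(1)
# true visibilities on a triangle of baselines (1,2), (2,3), (3,1)
V12, V23, V31 = [cmath.rect(random.uniform(0.5, 2.0), random.uniform(-3, 3))
                 for _ in range(3)]
# station-based complex gain errors corrupt each baseline multiplicatively:
# observed V_ij = g_i * conj(g_j) * V_ij
g = [cmath.rect(random.uniform(0.5, 2.0), random.uniform(-3, 3))
     for _ in range(3)]
obs12 = g[0] * g[1].conjugate() * V12
obs23 = g[1] * g[2].conjugate() * V23
obs31 = g[2] * g[0].conjugate() * V31

def closure_phase(a, b, c):
    # arg(a*b*c): each station phase enters once with + and once with -,
    # so all station phases cancel around the triangle
    return cmath.phase(a * b * c)

true_cp = closure_phase(V12, V23, V31)
obs_cp = closure_phase(obs12, obs23, obs31)
diff = (obs_cp - true_cp + math.pi) % (2 * math.pi) - math.pi
print(abs(diff))   # ~0: closure phase is immune to station gains
```

    Closure amplitudes play the analogous role for the gain moduli on quadrangles of four stations; imaging directly on these quantities is what removes the dependence on self-calibration assumptions.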

  9. Signalling networks and dynamics of allosteric transitions in bacterial chaperonin GroEL: implications for iterative annealing of misfolded proteins.

    PubMed

    Thirumalai, D; Hyeon, Changbong

    2018-06-19

    Signal transmission at the molecular level in many biological complexes occurs through allosteric transitions. Allostery describes the responses of a complex to binding of ligands at sites that are spatially well separated from the binding region. We describe the structural perturbation method, based on phonon propagation in solids, which can be used to determine the signal-transmitting allostery wiring diagram (AWD) in large but finite-sized biological complexes. Application to the bacterial chaperonin GroEL-GroES complex shows that the AWD determined from structures also drives the allosteric transitions dynamically. From both a structural and dynamical perspective these transitions are largely determined by formation and rupture of salt-bridges. The molecular description of allostery in GroEL provides insights into its function, which is quantitatively described by the iterative annealing mechanism. Remarkably, in this complex molecular machine, a deep connection is established between the structures, reaction cycle during which GroEL undergoes a sequence of allosteric transitions, and function, in a self-consistent manner.This article is part of a discussion meeting issue 'Allostery and molecular machines'. © 2018 The Author(s).

  10. High-performance finite-difference time-domain simulations of C-Mod and ITER RF antennas

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Smithe, David N.

    2015-12-01

    Finite-difference time-domain methods have, in recent years, developed powerful capabilities for modeling realistic ICRF behavior in fusion plasmas [1, 2, 3, 4]. When coupled with the power of modern high-performance computing platforms, such techniques allow the behavior of antenna near and far fields, and the flow of RF power, to be studied in realistic experimental scenarios at previously inaccessible levels of resolution. In this talk, we present results and 3D animations from high-performance FDTD simulations on the Titan Cray XK7 supercomputer, modeling both Alcator C-Mod's field-aligned ICRF antenna and the ITER antenna module. Much of this work focuses on scans over edge density, and tailored edge density profiles, to study dispersion and the physics of slow wave excitation in the immediate vicinity of the antenna hardware and SOL. An understanding of the role of the lower-hybrid resonance in low-density scenarios is emerging, and possible implications of this for the NSTX launcher and power balance are also discussed. In addition, we discuss ongoing work centered on using these simulations to estimate sputtering and impurity production, as driven by the self-consistent sheath potentials at antenna surfaces.

  11. High-performance finite-difference time-domain simulations of C-Mod and ITER RF antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, Thomas G., E-mail: tgjenkins@txcorp.com; Smithe, David N., E-mail: smithe@txcorp.com

    Finite-difference time-domain methods have, in recent years, developed powerful capabilities for modeling realistic ICRF behavior in fusion plasmas [1, 2, 3, 4]. When coupled with the power of modern high-performance computing platforms, such techniques allow the behavior of antenna near and far fields, and the flow of RF power, to be studied in realistic experimental scenarios at previously inaccessible levels of resolution. In this talk, we present results and 3D animations from high-performance FDTD simulations on the Titan Cray XK7 supercomputer, modeling both Alcator C-Mod’s field-aligned ICRF antenna and the ITER antenna module. Much of this work focuses on scans over edge density, and tailored edge density profiles, to study dispersion and the physics of slow wave excitation in the immediate vicinity of the antenna hardware and SOL. An understanding of the role of the lower-hybrid resonance in low-density scenarios is emerging, and possible implications of this for the NSTX launcher and power balance are also discussed. In addition, we discuss ongoing work centered on using these simulations to estimate sputtering and impurity production, as driven by the self-consistent sheath potentials at antenna surfaces.

  12. Microencapsulation of Polyfunctional Amines for Self-Healing of Epoxy-Based Composites

    DTIC Science & Technology

    2008-01-01

    MICROENCAPSULATION OF POLYFUNCTIONAL AMINES FOR SELF-HEALING OF EPOXY-BASED COMPOSITES David A. McIlroy, Ben J. Blaiszik, Paul V. Braun… microcapsules containing an amine hardener (DEH-52, Dow Chemical) for use as the hardener in a 2-part epoxy healing system consisting of epoxy… microscope. Scanning electron microscopy was performed on a Philips XL30 ESEM-FEG instrument. Microencapsulation Procedure. 10 g of a 2:1 v/v

  13. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
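
    The MG idea the abstract relies on, smoothing on the fine grid plus a coarse-grid correction, can be sketched with a two-grid V-cycle for the 1D Poisson problem -u'' = 1 on [0, 1] (weighted-Jacobi smoothing and an exact tridiagonal coarse solve; an illustration, not the pressure-correction scheme itself):

```python
N = 15                       # interior fine-grid points, h = 1/(N + 1)
h = 1.0 / (N + 1)
f = [1.0] * N                # right-hand side of -u'' = f

def residual(u):
    r = []
    for i in range(N):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < N - 1 else 0.0
        r.append(f[i] - (2 * u[i] - left - right) / h ** 2)
    return r

def jacobi(u, rhs, hloc, sweeps, omega=2.0 / 3.0):
    # weighted Jacobi: damps the oscillatory error the coarse grid cannot see
    n = len(u)
    for _ in range(sweeps):
        new = []
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new.append((1 - omega) * u[i]
                       + omega * 0.5 * (left + right + hloc ** 2 * rhs[i]))
        u = new
    return u

def thomas(n, hloc, rhs):
    # exact tridiagonal solve of (2x_i - x_{i-1} - x_{i+1})/hloc^2 = rhs_i
    a, bd, c = -1 / hloc ** 2, 2 / hloc ** 2, -1 / hloc ** 2
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / bd, rhs[0] / bd
    for i in range(1, n):
        m = bd - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def vcycle(u):
    u = jacobi(u, f, h, 3)                                  # pre-smooth
    r = residual(u)
    rc = [(r[2 * i] + 2 * r[2 * i + 1] + r[2 * i + 2]) / 4  # full weighting
          for i in range(7)]
    ec = thomas(7, 2 * h, rc)                               # coarse solve
    e = [0.0] * N                                           # interpolate back
    for i in range(7):
        e[2 * i + 1] = ec[i]
    for i in range(0, N, 2):
        left = e[i - 1] if i > 0 else 0.0
        right = e[i + 1] if i < N - 1 else 0.0
        e[i] = 0.5 * (left + right)
    u = [a + b for a, b in zip(u, e)]
    return jacobi(u, f, h, 3)                               # post-smooth

u = [0.0] * N
cycles = 0
while max(map(abs, residual(u))) > 1e-8:
    u = vcycle(u)
    cycles += 1
print(cycles)   # a handful of V-cycles instead of hundreds of Jacobi sweeps
```

    The FMG/FAS algorithm of the abstract nests such cycles over a hierarchy of grids and carries the full (nonlinear) approximation between levels, but the smoothing/restriction/correction/prolongation skeleton is the same.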

  14. Fast solution of elliptic partial differential equations using linear combinations of plane waves.

    PubMed

    Pérez-Jordá, José M

    2016-02-01

    Given an arbitrary elliptic partial differential equation (PDE), a procedure for obtaining its solution is proposed based on the method of Ritz: the solution is written as a linear combination of plane waves and the coefficients are obtained by variational minimization. The PDE to be solved is cast as a system of linear equations Ax=b, where the matrix A is not sparse, which prevents the straightforward application of standard iterative methods. This lack of sparsity can be circumvented by means of a recursive bisection approach based on the fast Fourier transform, which makes it possible to implement fast versions of some stationary iterative methods (such as Gauss-Seidel) consuming O(N log N) memory and executing an iteration in O(N log² N) time, N being the number of plane waves used. In a similar way, fast versions of Krylov subspace methods and multigrid methods can also be implemented. These procedures are tested on Poisson's equation expressed in adaptive coordinates. It is found that the best results are obtained with the GMRES method using a multigrid preconditioner with Gauss-Seidel relaxation steps.
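
    Why plane waves are attractive for elliptic PDEs can be seen in the simplest case: in a plane-wave (here sine) basis the Laplacian is diagonal, so for constant coefficients each Ritz coefficient decouples. A toy 1D Poisson solve -u'' = x(π - x) on [0, π] with homogeneous boundaries (hypothetical right-hand side and truncation; the paper's adaptive-coordinate case is what makes A non-diagonal and motivates the fast iterative machinery):

```python
import math

f = lambda x: x * (math.pi - x)      # right-hand side of -u'' = f
K, M = 25, 2000                      # basis size, quadrature points

def sine_coeff(k):
    # b_k = (2/pi) * int_0^pi f(x) sin(kx) dx, composite trapezoid rule
    hq = math.pi / M
    s = sum(f(i * hq) * math.sin(k * i * hq) for i in range(1, M))
    return 2.0 / math.pi * hq * s

# the Laplacian sends sin(kx) to k^2 sin(kx), so each coefficient is b_k/k^2
coeffs = [sine_coeff(k) for k in range(1, K + 1)]
u = lambda x: sum(coeffs[k - 1] / k ** 2 * math.sin(k * x)
                  for k in range(1, K + 1))

# closed-form solution of the same boundary-value problem, for comparison
exact = lambda x: x ** 4 / 12 - math.pi * x ** 3 / 6 + math.pi ** 3 * x / 12
err = abs(u(math.pi / 2) - exact(math.pi / 2))
print(err)   # small truncation + quadrature error
```

    With variable coefficients (or adaptive coordinates) the modes couple and A becomes dense, which is exactly the regime where the FFT-based Gauss-Seidel, Krylov, and multigrid variants of the paper are needed.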

  15. Novel approach in k0-NAA for highly concentrated REE Samples.

    PubMed

    Abdollahi Neisiani, M; Latifi, M; Chaouki, J; Chilian, C

    2018-04-01

    The present paper presents a new approach to k0-NAA for accurate quantification with short turnaround analysis times for rare earth elements (REEs) in high-content mineral matrices. REE k0 and Q0 values, spectral interferences and nuclear interferences were experimentally evaluated and improved with Alfa Aesar Specpure Plasma Standard 1000 mg/kg mono-rare-earth solutions. The new iterative gamma-ray self-attenuation and neutron self-shielding methods were investigated with powder standards prepared from 100 mg of 99.9% Alfa Aesar mono rare earth oxide diluted with silica oxide. The overall performance of the new k0-NAA method for REEs was validated using a certified reference material (CRM) from the Canadian Certified Reference Materials Project (REE-2) with REE content ranging from 7.2 mg/kg for Yb to 9610 mg/kg for Ce. The REE concentrations were determined with uncertainty below 7% (at the 95% confidence level) and showed good consistency with the CRM certified concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Experiments on Learning by Back Propagation.

    ERIC Educational Resources Information Center

    Plaut, David C.; And Others

    This paper describes further research on a learning procedure for layered networks of deterministic, neuron-like units, described by Rumelhart et al. The units, the way they are connected, the learning procedure, and the extension to iterative networks are presented. In one experiment, a network learns a set of filters, enabling it to discriminate…

  17. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    PubMed

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.
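
    The fixed-point structure that any SCF procedure iterates, with or without diagonalization, can be sketched on a toy mean-field model. The scalar occupation equation, its parameters, and the mixing (damping) factor below are illustrative assumptions, far simpler than the four-component relativistic scheme of the paper.

```python
import math

# Schematic self-consistent-field (SCF) loop with linear mixing ("damping").
# Toy problem: a level's occupation n feeds back into its own energy, so n
# must satisfy n = 1 / (1 + exp(e0 + U*n - mu)). Parameters are illustrative.
e0, U, mu, alpha = -1.0, 2.0, 0.0, 0.5   # alpha = mixing (damping) factor

def occupation(n):
    return 1.0 / (1.0 + math.exp(e0 + U * n - mu))

n = 0.1                                  # initial guess
for it in range(200):
    n_new = occupation(n)
    if abs(n_new - n) < 1e-10:           # self-consistency reached
        break
    n = (1 - alpha) * n + alpha * n_new  # damped update stabilizes the loop

# For these parameters the self-consistent occupation is exactly 0.5.
```

    The damped update is the scalar analogue of the density-matrix mixing that real SCF codes use to keep the iteration stable when the bare fixed-point map is close to non-contractive.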

  18. Review of Data Integrity Models in Multi-Level Security Environments

    DTIC Science & Technology

    2011-02-01

    2: (E-1 extension) Only executions described in a (User, TP, (CDIs)) relation are allowed • E-3: Users must be authenticated before allowing TP... authentication and verification procedures for upgrading the integrity of certain objects. The mechanism used to manage access to objects is primarily...that is, the self-consistency of interdependent data and the consistency of real-world environment data. The prevention of authorised users from making

  19. How good are the Garvey-Kelson predictions of nuclear masses?

    NASA Astrophysics Data System (ADS)

    Morales, Irving O.; López Vieyra, J. C.; Hirsch, J. G.; Frank, A.

    2009-09-01

    The Garvey-Kelson relations are used in an iterative process to predict nuclear masses in the neighborhood of nuclei with measured masses. Average errors in the predicted masses for the first three iteration shells are smaller than those obtained with the best nuclear mass models. Their quality is comparable with that of the Audi-Wapstra extrapolations, offering a simple and reproducible procedure for short-range mass predictions. A systematic study of the way the error grows as a function of the iteration and of the distance to the known-masses region shows that a correlation exists between the error and the residual neutron-proton interaction, produced mainly by the implicit assumption that V varies smoothly along the nuclear landscape.
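
    The shell-by-shell extrapolation idea can be sketched as follows. Note that the three-mass relation in the code is a simplified stand-in, exact only for a separable (smoothly varying) mass surface; the actual Garvey-Kelson relations combine six neighbouring masses.

```python
# Sketch of shell-by-shell iterative mass extrapolation in the spirit of the
# Garvey-Kelson procedure. The three-mass relation used here,
#   M(Z, N) ~ M(Z-1, N) + M(Z, N-1) - M(Z-1, N-1),
# is a simplified stand-in (exact for a separable mass surface), not the
# actual six-mass Garvey-Kelson relations.
def true_mass(Z, N):                     # synthetic separable "mass surface"
    return 8.0 * Z + 0.02 * Z * Z + 7.5 * N + 0.015 * N * N

# "Measured" region: all nuclei with Z + N <= 4 on a 5 x 5 (Z, N) grid.
known = {(Z, N): true_mass(Z, N)
         for Z in range(5) for N in range(5) if Z + N <= 4}

for shell in range(3):                   # extend the known region outward
    for Z in range(5):
        for N in range(5):
            if (Z, N) in known:
                continue
            needed = [(Z - 1, N), (Z, N - 1), (Z - 1, N - 1)]
            if all(p in known for p in needed):
                known[(Z, N)] = (known[(Z - 1, N)] + known[(Z, N - 1)]
                                 - known[(Z - 1, N - 1)])

# With a separable surface the stand-in relation reproduces every mass exactly;
# for real masses the error grows with distance from the measured region.
err = max(abs(m - true_mass(*zn)) for zn, m in known.items())
```

    The growth of error with iteration shell that the abstract analyzes corresponds here to how far a predicted cell sits from the seeded region; with a non-separable surface each filled shell would inherit and amplify the residual interaction term.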

  20. Networking Theories by Iterative Unpacking

    ERIC Educational Resources Information Center

    Koichu, Boris

    2014-01-01

    An iterative unpacking strategy consists of sequencing empirically-based theoretical developments so that at each step of theorizing one theory serves as an overarching conceptual framework, in which another theory, either existing or emerging, is embedded in order to elaborate on the chosen element(s) of the overarching theory. The strategy is…

  1. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    DTIC Science & Technology

    1990-11-01

    (Q + aa′)⁻¹ = Q⁻¹ − Q⁻¹aa′Q⁻¹/(1 + a′Q⁻¹a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and… 2. The First-Order Moving Average Model… 3. Some Approaches to the Iterative…the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and

  2. Iterative methods for plasma sheath calculations: Application to spherical probe

    NASA Technical Reports Server (NTRS)

    Parker, L. W.; Sullivan, E. C.

    1973-01-01

    The computer cost of a Poisson-Vlasov iteration procedure for the numerical solution of a steady-state collisionless plasma-sheath problem depends on: (1) the nature of the chosen iterative algorithm, (2) the position of the outer boundary of the grid, and (3) the nature of the boundary condition applied to simulate a condition at infinity (as in three-dimensional probe or satellite-wake problems). Two iterative algorithms, in conjunction with three types of boundary conditions, are analyzed theoretically and applied to the computation of current-voltage characteristics of a spherical electrostatic probe. The first algorithm is commonly used by physicists; its computer costs depend primarily on the boundary conditions and are only slightly affected by the mesh interval. The second algorithm is not commonly used; its costs depend primarily on the mesh interval and only slightly on the boundary conditions.
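
    The Poisson iteration at the heart of such sheath calculations can be sketched in one dimension. The Boltzmann-electron model, grid, boundary values, and mixing factor below are illustrative assumptions, far simpler than the spherical Poisson-Vlasov problem of the record.

```python
import math

# Picard (lagged-source) iteration for a 1-D planar sheath, a toy stand-in
# for a Poisson-Vlasov procedure. In normalized units the model is
#   phi'' = exp(phi) - 1      (Boltzmann electrons, uniform ion background)
# with phi(0) = -3 (probe surface) and phi(L) = 0 (plasma). Each iteration
# solves the linear Poisson equation with the nonlinear source frozen at the
# previous iterate, then mixes old and new potentials.
n, L = 49, 2.0
h = L / (n + 1)
phi_left, phi_right = -3.0, 0.0
phi = [phi_left * (1.0 - (i + 1) * h / L) for i in range(n)]  # linear guess

def poisson_solve(rhs):
    """Thomas algorithm for phi[i-1] - 2*phi[i] + phi[i+1] = rhs[i]."""
    m = len(rhs)
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = -0.5, rhs[0] / -2.0
    for i in range(1, m):
        denom = -2.0 - cp[i - 1]
        cp[i] = 1.0 / denom
        dp[i] = (rhs[i] - dp[i - 1]) / denom
    out = [0.0] * m
    out[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out

omega, diff = 0.7, 1.0                   # under-relaxation factor
for it in range(500):
    rhs = [h * h * (math.exp(p) - 1.0) for p in phi]
    rhs[0] -= phi_left                   # fold Dirichlet data into the RHS
    rhs[-1] -= phi_right
    phi_new = poisson_solve(rhs)
    diff = max(abs(a - b) for a, b in zip(phi_new, phi))
    phi = [(1 - omega) * p + omega * q for p, q in zip(phi, phi_new)]
    if diff < 1e-10:
        break
```

    In a full probe calculation the frozen exponential source would be replaced by ion densities from a Vlasov (orbit) computation, and the choice of outer-boundary condition, point (3) in the abstract, would enter through phi_right.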

  3. Aerodynamic optimization by simultaneously updating flow variables and design parameters

    NASA Technical Reports Server (NTRS)

    Rizk, M. H.

    1990-01-01

    The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
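
    The idea of advancing the design parameters while the flow solution is still converging can be illustrated on a deliberately simple model problem. The scalar "flow" iteration, the objective, and the step size below are illustrative stand-ins, not the wind-tunnel or propeller applications of the record.

```python
# Toy illustration of the simultaneous-update idea: rather than fully
# converging the flow solution for every new design parameter (a costly
# inner-outer loop), one sweep of the flow iteration and one design update
# are interleaved. The linear "flow" iteration u <- 0.5*u + p has converged
# state u* = 2p; the design objective is J = (u - 1)^2.
u, p, eta = 0.0, 0.0, 0.1
for it in range(300):
    u = 0.5 * u + p                   # one sweep of the flow iteration
    dJ_dp = 2.0 * (u - 1.0) * 2.0     # chain rule with sensitivity du*/dp = 2
    p -= eta * dJ_dp                  # design update using the unconverged u

# Both the state and the design approach the optimum (u, p) = (1, 0.5)
# together, without ever nesting a fully converged flow solve inside the
# optimization loop.
```

    The coupled update spirals into the optimum at a linear rate; the saving over the nested approach is that each "flow solve" costs one sweep instead of a full inner convergence loop.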

  4. Iterative refinement of structure-based sequence alignments by Seed Extension

    PubMed Central

    Kim, Changhoon; Tai, Chin-Hsien; Lee, Byungkook

    2009-01-01

    Background Accurate sequence alignment is required in many bioinformatics applications but, when sequence similarity is low, it is difficult to obtain accurate alignments based on sequence similarity alone. The accuracy improves when the structures are available, but current structure-based sequence alignment procedures still mis-align substantial numbers of residues. In order to correct such errors, we previously explored the possibility of replacing the residue-based dynamic programming algorithm in structure alignment procedures with the Seed Extension algorithm, which does not use a gap penalty. Here, we describe a new procedure called RSE (Refinement with Seed Extension) that iteratively refines a structure-based sequence alignment. Results RSE uses SE (Seed Extension) in its core, which is an algorithm that we reported recently for obtaining a sequence alignment from two superimposed structures. The RSE procedure was evaluated by comparing the correctly aligned fractions of residues before and after the refinement of the structure-based sequence alignments produced by popular programs. CE, DaliLite, FAST, LOCK2, MATRAS, MATT, TM-align, SHEBA and VAST were included in this analysis and the NCBI's CDD root node set was used as the reference alignments. RSE improved the average accuracy of sequence alignments for all programs tested when no shift error was allowed. The amount of improvement varied depending on the program. The average improvements were small for DaliLite and MATRAS but about 5% for CE and VAST. More substantial improvements have been seen in many individual cases. The additional computation times required for the refinements were negligible compared to the times taken by the structure alignment programs. Conclusion RSE is a computationally inexpensive way of improving the accuracy of a structure-based sequence alignment. 
It can be used as a standalone procedure following a regular structure-based sequence alignment or to replace the traditional iterative refinement procedures based on residue-level dynamic programming algorithm in many structure alignment programs. PMID:19589133

  5. Upper wide-angle viewing system for ITER

    DOE PAGES

    Lasnier, C. J.; McLean, A. G.; Gattuso, A.; ...

    2016-08-15

    The Upper Wide Angle Viewing System (UWAVS) will be installed on five upper ports of ITER. This paper presents the major requirements, gives an overview of the preliminary design with reasons for some design choices, examines self-emitted IR light from UWAVS optics and its effect on accuracy, and shows calculations of signal-to-noise ratios for the two-color temperature output as a function of integration time and divertor temperature. Accurate temperature output requires correction for vacuum window absorption vs. wavelength and for self-emitted IR, which requires good measurement of the temperature of the optical components. The anticipated signal-to-noise ratio using presently available IR cameras is adequate for the required 500 Hz frame rate.

  6. Transformation of two and three-dimensional regions by elliptic systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1991-01-01

    A linear elliptic system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed, but it is not as reliable as nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure, where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure, where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.
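
    The simplest member of the family the record describes, a linear Laplace grid-generation system with no control functions, can be sketched as follows; the domain shape and grid size are illustrative choices.

```python
import math

# Minimal sketch of linear elliptic grid generation: interior grid nodes
# satisfy the discrete Laplace equation (each node is the average of its four
# neighbours) while the boundary point distribution is prescribed - here a
# unit square whose bottom edge is bulged upward.
n = 11                                    # nodes per side
x = [[j / (n - 1) for j in range(n)] for i in range(n)]
y = [[i / (n - 1) for j in range(n)] for i in range(n)]
for j in range(n):                        # curve the bottom boundary
    y[0][j] = 0.15 * math.sin(math.pi * j / (n - 1))

for sweep in range(2000):                 # Gauss-Seidel relaxation
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            x[i][j] = 0.25 * (x[i-1][j] + x[i+1][j] + x[i][j-1] + x[i][j+1])
            y[i][j] = 0.25 * (y[i-1][j] + y[i+1][j] + y[i][j-1] + y[i][j+1])

# Residual of the discrete Laplace equation after relaxation.
res = max(abs(y[i][j] - 0.25 * (y[i-1][j] + y[i+1][j] + y[i][j-1] + y[i][j+1]))
          for i in range(1, n - 1) for j in range(1, n - 1))
```

    The discrete maximum principle keeps every interior node inside the range of the boundary values, which is the mechanism behind the guaranteed convergence the abstract mentions; avoiding folded cells for strongly curved boundaries is where the nonlinear methods and control functions come in.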

  7. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE PAGES

    Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...

    2018-04-06

    Thresholding filters operating in a sparse domain are highly effective at removing Gaussian random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude, non-Gaussian noise, i.e., the erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. The random and erratic noise components are distinguished by a data-adaptive parameter in the presented method: random noise is described by the mean square, while erratic noise is downweighted through a damped weight. Different from conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Tests with several data sets demonstrate that the proposed filter can successfully attenuate erratic noise without damaging the useful signal when compared with conventional denoising approaches based on the LS criterion.
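
    The Huber-to-weighted-LS transformation underlying such filters can be sketched in its simplest setting: robust estimation of a single location parameter by iteratively reweighted least squares (IRLS). The data values and transition point are illustrative, and the weight formula is the standard Huber/IRLS weight rather than the paper's specific damped weight.

```python
# Iteratively reweighted least squares (IRLS) for the Huber criterion: a
# minimal sketch of how a robust (Huber) misfit is minimized by solving a
# sequence of ordinary weighted LS problems. The weights, not the data,
# absorb the "erratic" outlier.
data = [1.0, 1.2, 0.9, 1.1, 1.0, 50.0]   # 50.0 plays the erratic noise
delta = 1.0                               # Huber transition point

ls_mean = sum(data) / len(data)           # plain LS estimate, pulled by outlier
mu = ls_mean                              # IRLS starting point
for it in range(100):
    # Huber weights: full weight for small residuals, damped for large ones
    w = [1.0 if abs(x - mu) <= delta else delta / abs(x - mu) for x in data]
    mu_new = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    if abs(mu_new - mu) < 1e-12:
        break
    mu = mu_new
```

    Each pass is a plain weighted average, i.e., a linear LS problem, yet the converged estimate stays near the bulk of the data while the LS mean is dragged toward the outlier; this is the transformation that lets conventional LS solvers handle the nonlinear robust objective.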

  8. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Qiang; Du, Qizhen; Gong, Xufei

    Thresholding filters operating in a sparse domain are highly effective at removing Gaussian random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude, non-Gaussian noise, i.e., the erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. The random and erratic noise components are distinguished by a data-adaptive parameter in the presented method: random noise is described by the mean square, while erratic noise is downweighted through a damped weight. Different from conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Tests with several data sets demonstrate that the proposed filter can successfully attenuate erratic noise without damaging the useful signal when compared with conventional denoising approaches based on the LS criterion.

  9. Citizen Science Air Monitor (CSAM) Operating Procedures

    EPA Science Inventory

    The Citizen Science Air Monitor (CSAM) is an air monitoring system designed for measuring nitrogen dioxide (NO2) and particulate matter (PM) pollutants simultaneously. This self-contained system consists of a CairPol CairClip NO2 sensor, a Thermo Scientific personal DataRAM PM2.5...

  10. Laser simulation applying Fox-Li iteration: investigation of reason for non-convergence

    NASA Astrophysics Data System (ADS)

    Paxton, Alan H.; Yang, Chi

    2017-02-01

    Fox-Li iteration is often used to numerically simulate lasers. If a solution is found, the complex field amplitude is a good indication of the laser mode. The case of a semiconductor laser, for which the medium possesses a self-focusing nonlinearity, was investigated. For a case of interest, the iterations did not yield a converged solution, so another approach was needed to explore the properties of the laser mode. The laser was treated (unphysically) as a regenerative amplifier. As the input to the amplifier, we required a smooth complex field distribution that matched the laser resonator. To obtain such a field, we found what the solution for the laser field would be if the strength of the self-focusing nonlinearity were α = 0. This was used as the input to the laser, treated as an amplifier. Because the beam deteriorated as it propagated over multiple passes through the resonator and the gain medium (for α = 2.7), we concluded that a mode with good beam quality could not exist in the laser.

  11. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to an in-situ pumping experiment. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of recharges and parameters are adjusted and the iterative procedures repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. 
This indicates that the iterative EOF-based approach can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
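
    The core EOF-extraction step can be sketched with power iteration on a covariance matrix. The synthetic three-well record, its spatial pattern vector, and the noise terms below are illustrative assumptions standing in for the groundwater storage hydrographs.

```python
import math

# Sketch of extracting the leading empirical orthogonal function (EOF) by
# power iteration on a covariance matrix. A synthetic three-well record with
# one dominant spatial pattern stands in for the storage hydrographs.
T = 200
pattern = [1.0, 0.6, -0.8]                # dominant spatial pattern (assumed)
series = [[pattern[k] * math.sin(0.1 * t) + 0.01 * math.cos(2.0 * (k + 1) * t)
           for k in range(3)] for t in range(T)]

means = [sum(row[k] for row in series) / T for k in range(3)]
C = [[sum((row[k] - means[k]) * (row[l] - means[l]) for row in series) / T
      for l in range(3)] for k in range(3)]

v = [1.0, 1.0, 1.0]                       # power iteration for the leading EOF
for _ in range(100):
    w = [sum(C[k][l] * v[l] for l in range(3)) for k in range(3)]
    norm = math.sqrt(sum(c * c for c in w))
    v = [c / norm for c in w]

# Alignment of the recovered EOF with the true spatial pattern.
pnorm = math.sqrt(sum(p * p for p in pattern))
cosine = abs(sum(vk * pk for vk, pk in zip(v, pattern))) / pnorm
```

    The recovered leading EOF aligns with the dominant spatial pattern; in the calibration method above, the corresponding EOF amplitudes and expansion coefficients provide the initial guess and the correction vectors for the recharges.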

  12. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. 
These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the superresolution iterations. A quantitative evaluation of the performance of these algorithms in restoring and superresolving imagery data captured by diffraction-limited sensing operations is also presented.

  13. Relationships between Contextual and Task Performance and Interrater Agreement: Are There Any?

    PubMed

    Díaz-Vilela, Luis F; Delgado Rodríguez, Naira; Isla-Díaz, Rosa; Díaz-Cabrera, Dolores; Hernández-Fernaud, Estefanía; Rosales-Sánchez, Christian

    2015-01-01

    Work performance is one of the most important dependent variables in Work and Organizational Psychology. The main objective of this paper was to explore the relationships between citizenship performance and task performance measures obtained from different appraisers, and their consistency, through a seldom-used methodology: intraclass correlation coefficients. Participants were 135 public employees, the total staff of a local government department. Jobs were clustered into job families through a work analysis based on standard questionnaires. A task description technique was used to develop a performance appraisal questionnaire for each job family, with three versions: self-, supervisor-, and peer-evaluation, in addition to a measure of citizenship performance. Only when self-appraisal bias was controlled did significant correlations appear between task performance ratings. However, intraclass correlation analyses show that only self- (contextual and task) performance measures are consistent, while interrater agreement disappears. These results provide some interesting clues about the procedure of appraisal instrument development, the role of appraisers, and the importance of choosing adequate consistency analysis methods.

  14. Relationships between Contextual and Task Performance and Interrater Agreement: Are There Any?

    PubMed Central

    Díaz-Cabrera, Dolores; Hernández-Fernaud, Estefanía; Rosales-Sánchez, Christian

    2015-01-01

    Work performance is one of the most important dependent variables in Work and Organizational Psychology. The main objective of this paper was to explore the relationships between citizenship performance and task performance measures obtained from different appraisers, and their consistency, through a seldom-used methodology: intraclass correlation coefficients. Participants were 135 public employees, the total staff of a local government department. Jobs were clustered into job families through a work analysis based on standard questionnaires. A task description technique was used to develop a performance appraisal questionnaire for each job family, with three versions: self-, supervisor-, and peer-evaluation, in addition to a measure of citizenship performance. Only when self-appraisal bias was controlled did significant correlations appear between task performance ratings. However, intraclass correlation analyses show that only self- (contextual and task) performance measures are consistent, while interrater agreement disappears. These results provide some interesting clues about the procedure of appraisal instrument development, the role of appraisers, and the importance of choosing adequate consistency analysis methods. PMID:26473956

  15. A hybrid multiview stereo algorithm for modeling urban scenes.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting of first segmenting the initial mesh-based surface using a multilabel Markov Random Field-based model and second sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  16. PRogram In Support of Moms (PRISM): Development and Beta Testing.

    PubMed

    Byatt, Nancy; Pbert, Lori; Hosein, Safiyah; Swartz, Holly A; Weinreb, Linda; Allison, Jeroan; Ziedonis, Douglas

    2016-08-01

    Most women with perinatal depression do not receive depression treatment. The authors describe the development and beta testing of a new program, PRogram In Support of Moms (PRISM), to improve treatment of perinatal depression in obstetric practices. A multidisciplinary work group of seven perinatal and behavioral health professionals was convened to design, refine, and beta-test PRISM in an obstetric practice. Iterative feedback and problem solving facilitated development of PRISM components, which include provider training and a toolkit, screening procedures, implementation assistance, and access to immediate psychiatric consultation. Beta testing with 50 patients over two months demonstrated feasibility and suggested that PRISM may improve provider screening rates and self-efficacy to address depression. On the basis of lessons learned, PRISM will be enhanced to integrate proactive patient engagement and monitoring into obstetric practices. PRISM may help overcome patient-, provider-, and system-level barriers to managing perinatal depression in obstetric settings.

  17. Stepwise Iterative Fourier Transform: The SIFT

    NASA Technical Reports Server (NTRS)

    Benignus, V. A.; Benignus, G.

    1975-01-01

    A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably reduce data in the nature of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, but with increased interpretational problems. The procedure was shown to be a powerful technique for analysis of periodicities, in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure nevertheless requires some background of knowledge concerning characteristics of the biological processes under study.

  18. Phase-field simulation of microstructure formation in technical castings - A self-consistent homoenthalpic approach to the micro-macro problem

    NASA Astrophysics Data System (ADS)

    Böttger, B.; Eiken, J.; Apel, M.

    2009-10-01

    Microstructure simulation of technical casting processes suffers from the strong interdependency between latent heat release due to local microstructure formation and heat diffusion on the macroscopic scale: local microstructure formation depends on the macroscopic heat fluxes and, in turn, the macroscopic temperature solution depends on the latent heat release, and therefore on the microstructure formation, in all parts of the casting. A self-consistent homoenthalpic approximation to this micro-macro problem is proposed, based on the assumption of a common enthalpy-temperature relation for the whole casting, which is used to describe latent heat production on the macroscale. This enthalpy-temperature relation is obtained iteratively by phase-field simulations on the microscale, thus taking into account the specific morphological impact on the latent heat production. The new approach is discussed and compared to other approximations for coupling the macroscopic heat flux to complex microstructure models. Simulations are performed for the binary alloy Al-3at%Cu, using a multiphase-field solidification model coupled to a thermodynamic database. Microstructure formation is simulated for several positions in a simple model plate casting, using a one-dimensional macroscopic temperature solver which can be directly coupled to the microscopic phase-field simulation tool.

  19. Hierarchical Approach to 'Atomistic' 3-D MOSFET Simulation

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Brown, Andrew R.; Davies, John H.; Saini, Subhash

    1999-01-01

    We present a hierarchical approach to the 'atomistic' simulation of aggressively scaled sub-0.1 micron MOSFET's. These devices are so small that their characteristics depend on the precise location of dopant atoms within them, not just on their average density. A full-scale three-dimensional drift-diffusion atomistic simulation approach is first described and used to verify more economical, but restricted, options. To reduce processor time and memory requirements at high drain voltage, we have developed a self-consistent option based on a solution of the current continuity equation restricted to a thin slab of the channel. This is coupled to the solution of the Poisson equation in the whole simulation domain in the Gummel iteration cycles. The accuracy of this approach is investigated in comparison to the full self-consistent solution. At low drain voltage, a single solution of the nonlinear Poisson equation is sufficient to extract the current with satisfactory accuracy. In this case, the current is calculated by solving the current continuity equation in a drift approximation only, also in a thin slab containing the MOSFET channel. The regions of applicability for the different components of this hierarchical approach are illustrated in example simulations covering the random dopant-induced threshold voltage fluctuations, threshold voltage lowering, threshold voltage asymmetry, and drain current fluctuations.

  20. Comparison between iteration schemes for three-dimensional coordinate-transformed saturated-unsaturated flow model

    NASA Astrophysics Data System (ADS)

    An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu

    2012-11-01

    Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient on an individual-iteration basis; however, it converges more slowly than the Newton method. The Newton method converges faster, but is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element models (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, the two iteration methods might exhibit different behavior in a coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM, because the Newton method applied to this model requires the evaluation of a 19-point stencil matrix, and the formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by differencing the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for the coordinate-transformed FDM; however, it involves the additional cost of this approximation at each Krylov iteration. In this paper, we evaluate the efficiency and robustness of three iteration methods—the Picard, Newton, and Newton-Krylov methods—for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
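    The trade-offs among the three iteration families can be illustrated on a scalar model problem (u = cos u, not Richards' equation), together with the matrix-free Jacobian-vector product that is the heart of Newton-Krylov methods. A sketch with hypothetical tolerances:

```python
import math

# Picard (fixed-point) vs Newton on u = cos(u): cheap-but-slow vs fast-but-richer.
def picard(u0, tol=1e-12, max_iter=200):
    u, n = u0, 0
    while abs(math.cos(u) - u) > tol and n < max_iter:
        u = math.cos(u)                      # fixed-point (Picard) update
        n += 1
    return u, n

def newton(u0, tol=1e-12, max_iter=200):
    u, n = u0, 0
    while abs(math.cos(u) - u) > tol and n < max_iter:
        F, dF = u - math.cos(u), 1.0 + math.sin(u)
        u -= F / dF                          # Newton update needs the derivative
        n += 1
    return u, n

# Newton-Krylov's key trick: approximate the Jacobian-vector product J v by a
# finite difference of the residual, so the Jacobian matrix is never formed.
def jac_vec(F, u, v, eps=1e-7):
    Fu_pert = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(Fu_pert, F(u))]

u_p, n_p = picard(0.0)
u_n, n_n = newton(0.0)
jv = jac_vec(lambda u: [u[0] ** 2, u[1] ** 2], [1.0, 2.0], [1.0, 1.0])
```

Both iterations reach the same root, but Newton needs far fewer iterations, mirroring the cost-per-iteration vs iteration-count trade-off discussed above.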

  1. A new least-squares transport equation compatible with voids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, J. B.; Morel, J. E.

    2013-07-01

    We define a new least-squares transport equation that is applicable in voids, can be solved using source iteration with diffusion-synthetic acceleration, and requires only the solution of an independent set of second-order self-adjoint equations for each direction during each source iteration. We derive the equation, discretize it using the S_n method in conjunction with a linear-continuous finite-element method in space, and computationally demonstrate several of its properties.

  2. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described. This is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
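    The GPI structure (a few policy-evaluation sweeps followed by a greedy improvement step) can be sketched on a toy two-state MDP. This is a minimal illustration of the iteration family only, not the paper's algorithm with approximation errors; the MDP and all parameters are hypothetical:

```python
# Generalized policy iteration on a toy 2-state, 2-action MDP.
GAMMA = 0.9
# P[s][a] = list of (probability, next_state, reward)
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 2.0)], 1: [(1.0, 1, 0.0)]},
}

def q_value(V, s, a):
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])

def gpi(eval_sweeps=3, iters=50):
    V = [0.0, 0.0]
    pi = [0, 0]
    for _ in range(iters):
        for _ in range(eval_sweeps):        # partial policy evaluation
            V = [q_value(V, s, pi[s]) for s in (0, 1)]
        # greedy policy improvement
        pi = [max((0, 1), key=lambda a: q_value(V, s, a)) for s in (0, 1)]
    return V, pi

V, pi = gpi()
```

With eval_sweeps=1 this degenerates to value iteration and with many sweeps to classical policy iteration, which is exactly the sense in which GPI generalizes both.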

  3. Is This a Meaningful Learning Experience? Interactive Critical Self-Inquiry as Investigation

    ERIC Educational Resources Information Center

    Allard, Andrea C.; Gallant, Andrea

    2012-01-01

    What conditions enable educators to engage in meaningful learning experiences with peers and beginning practitioners? This article documents a self-study on our actions-in-practice in a peer mentoring project. The investigation involved an iterative process to improve our knowledge as teacher educators, reflective practitioners, and researchers.…

  4. Improved pressure-velocity coupling algorithm based on minimization of global residual norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatwani, A.U.; Turan, A.

    1991-01-01

    In this paper an improved pressure-velocity coupling algorithm is proposed, based on the minimization of the global residual norm. The procedure is applied to the SIMPLE and SIMPLEC algorithms to automatically select the pressure underrelaxation factor that minimizes the global residual norm at each iteration level. Test computations for three-dimensional turbulent, isothermal flow in a toroidal vortex combustor indicate that velocity underrelaxation factors as high as 0.7 can be used to obtain a converged solution in 300 iterations.
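    For a linear model problem the residual-norm-minimizing relaxation factor has a closed form: the 2-norm of r - omega*A*d is minimized by omega = (r . Ad) / (Ad . Ad). The sketch below applies this to a small SPD system with a Jacobi-like correction direction, as a hypothetical stand-in for the pressure-correction step (not the authors' SIMPLE implementation):

```python
# Residual-minimizing relaxation on a small linear system A x = b.
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = [0.0, 0.0]

for _ in range(50):
    r = [bi - axi for bi, axi in zip(b, mat_vec(A, x))]   # global residual
    if dot(r, r) < 1e-30:                                 # already converged
        break
    d = [ri / A[i][i] for i, ri in enumerate(r)]          # Jacobi-like correction
    Ad = mat_vec(A, d)
    omega = dot(r, Ad) / dot(Ad, Ad)                      # optimal relaxation
    x = [xi + omega * di for xi, di in zip(x, d)]

r_final = [bi - axi for bi, axi in zip(b, mat_vec(A, x))]
```

By construction the residual norm cannot increase at any step, which is the appeal of choosing the relaxation factor this way rather than fixing it a priori.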

  5. SimCenter Hawaii Technology Enabled Learning and Intervention Systems

    DTIC Science & Technology

    2008-01-01

    manikin training in acquiring triage skills and self-efficacy. Phase II includes the development of the VR training scenarios, which includes iterative...Task A5. Skills acquisition relative to self-efficacy study. See Appendix F, Mass Casualty Triage Training using Human Patient Simulators Improves Speed and Accuracy of First...

  6. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    PubMed

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message-passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point in which the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During the iterations, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
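    The two-factor update is easy to state concretely in the ordinary sum-product semiring: rescale the two factors along their shared variable so that their product is unchanged while their overlapping marginals become equal. A minimal sketch with two hand-picked factors (the factor tables are hypothetical):

```python
import math

# Two factors sharing variable y: f(x, y) and g(y, z), all entries positive.
f = [[1.0, 2.0], [3.0, 4.0]]     # f[x][y]
g = [[5.0, 1.0], [2.0, 2.0]]     # g[y][z]

def marg_f(f):                   # marginal over x -> function of y
    return [sum(col) for col in zip(*f)]

def marg_g(g):                   # marginal over z -> function of y
    return [sum(row) for row in g]

def partition(f, g):
    return sum(f[x][y] * g[y][z]
               for x in range(2) for y in range(2) for z in range(2))

Z0 = partition(f, g)
for _ in range(30):              # repeat the update until a fixed point
    mf, mg = marg_f(f), marg_g(g)
    c = [math.sqrt(mg[y] / mf[y]) for y in range(2)]
    # scale f up and g down by c(y): the product f*g is pointwise unchanged
    f = [[f[x][y] * c[y] for y in range(2)] for x in range(2)]
    g = [[g[y][z] / c[y] for z in range(2)] for y in range(2)]

mf, mg = marg_f(f), marg_g(g)
```

With only two factors a single update already equalizes the marginals; with many factors the same local move is cycled over all overlapping pairs, which is the marginal-consistency fixed point described above.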

  7. Formation and termination of runaway beams in ITER disruptions

    NASA Astrophysics Data System (ADS)

    Martín-Solís, J. R.; Loarte, A.; Lehnen, M.

    2017-06-01

    A self-consistent analysis of the relevant physics regarding the formation and termination of runaway beams during disruptions mitigated by Ar and Ne injection is presented for selected ITER scenarios, with the aim of improving our understanding of the physics underlying the runaway heat loads onto the plasma-facing components (PFCs) and identifying open issues for developing and assessing disruption mitigation schemes for ITER. This is carried out by means of simplified models that still retain sufficient detail of the key physical processes, including: (a) the expected dominant runaway generation mechanisms (avalanche and primary runaway seeds: Dreicer and hot-tail runaway generation, tritium decay, and Compton scattering of γ rays emitted by the activated wall), (b) effects associated with the shape of the plasma and runaway current density profiles, and (c) corrections to the runaway dynamics to account for collisions of the runaways with partially stripped impurity ions, which are found to have strong effects leading to low runaway current generation and low energy conversion during current termination for disruptions mitigated by noble-gas injection (particularly Ne injection) for the shortest current quench times compatible with acceptable forces on the ITER vessel and in-vessel components (τ_res ~ 22 ms). For long current quench times (τ_res ~ 66 ms), runaway beams of up to ~10 MA can be generated during the disruption current quench and, if the termination of the runaway current is slow enough, runaway generation by the avalanche mechanism can play an important role, substantially increasing the energy deposited by the runaways onto the PFCs, up to a few hundred MJ. Mixed impurity (Ar or Ne) plus deuterium injection proves to be effective in controlling the formation of the runaway current during the current quench, even for the longest current quench times, as well as in decreasing the energy deposited by the runaways during current termination.

  8. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins.

    PubMed

    Harper, Angela F; Leuthaeuser, Janelle B; Babbitt, Patricia C; Morris, John H; Ferrin, Thomas E; Poole, Leslie B; Fetrow, Jacquelyn S

    2017-02-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially-MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method's novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. 
The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences.

  9. An Atlas of Peroxiredoxins Created Using an Active Site Profile-Based Approach to Functionally Relevant Clustering of Proteins

    PubMed Central

    Babbitt, Patricia C.; Ferrin, Thomas E.

    2017-01-01

    Peroxiredoxins (Prxs or Prdxs) are a large protein superfamily of antioxidant enzymes that rapidly detoxify damaging peroxides and/or affect signal transduction and, thus, have roles in proliferation, differentiation, and apoptosis. Prx superfamily members are widespread across phylogeny and multiple methods have been developed to classify them. Here we present an updated atlas of the Prx superfamily identified using a novel method called MISST (Multi-level Iterative Sequence Searching Technique). MISST is an iterative search process developed to be both agglomerative, to add sequences containing similar functional site features, and divisive, to split groups when functional site features suggest distinct functionally-relevant clusters. Superfamily members need not be identified initially—MISST begins with a minimal representative set of known structures and searches GenBank iteratively. Further, the method’s novelty lies in the manner in which isofunctional groups are selected; rather than use a single or shifting threshold to identify clusters, the groups are deemed isofunctional when they pass a self-identification criterion, such that the group identifies itself and nothing else in a search of GenBank. The method was preliminarily validated on the Prxs, as the Prxs presented challenges of both agglomeration and division. For example, previous sequence analysis clustered the Prx functional families Prx1 and Prx6 into one group. Subsequent expert analysis clearly identified Prx6 as a distinct functionally relevant group. The MISST process distinguishes these two closely related, though functionally distinct, families. Through MISST search iterations, over 38,000 Prx sequences were identified, which the method divided into six isofunctional clusters, consistent with previous expert analysis. The results represent the most complete computational functional analysis of proteins comprising the Prx superfamily. 
The feasibility of this novel method is demonstrated by the Prx superfamily results, laying the foundation for potential functionally relevant clustering of the universe of protein sequences. PMID:28187133

  10. Video-Based Modeling: Differential Effects due to Treatment Protocol

    ERIC Educational Resources Information Center

    Mason, Rose A.; Ganz, Jennifer B.; Parker, Richard I.; Boles, Margot B.; Davis, Heather S.; Rispoli, Mandy J.

    2013-01-01

    Identifying evidence-based practices for individuals with disabilities requires specification of procedural implementation. Video-based modeling (VBM), consisting of both video self-modeling and video modeling with others as model (VMO), is one class of interventions that has frequently been explored in the literature. However, current information…

  11. Reduction of Adolescent Drug Abuse Through Post-Hypnotic Cue Association

    ERIC Educational Resources Information Center

    Martin, Roger D.

    1974-01-01

    Six adolescents, all females, who were involved in a variety of drug misuse were self-referrals for treatment. Treatment consisted of an initial comprehensive psychological examination, three intensive sessions of hypnosis and a procedure to develop cue association in situations where the girls felt tense. Results were favorable. (Author)

  12. Consumer Health Education. Breast Cancer.

    ERIC Educational Resources Information Center

    Arkansas Univ., Fayetteville, Cooperative Extension Service.

    This short booklet is designed to be used by health educators when teaching women about breast cancer and its early detection and the procedure for breast self-examination. It includes the following: (1) A one-page teaching plan consisting of objectives, subject matter, methods (including titles of films and printed materials), target audience,…

  13. Evaluation of inter-laminar shear strength of GFRP composed of bonded glass/polyimide tapes and cyanate-ester/epoxy blended resin for ITER TF coils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hemmi, T.; Matsui, K.; Koizumi, N.

    2014-01-27

    The insulation system of the ITER TF coils consists of multi-layer glass/polyimide tapes impregnated with a cyanate-ester/epoxy resin. The ITER TF coils are required to withstand an irradiation dose of 10 MGy from gamma rays and neutrons, since the coils are exposed to a fast-neutron (>0.1 MeV) fluence of 10^22 n/m^2 during ITER operation. Cyanate-ester/epoxy blended resins and bonded glass/polyimide tapes were developed as insulation materials to realize the required radiation hardness for the insulation of the ITER TF coils. To evaluate the radiation hardness of the developed insulation materials, the inter-laminar shear strength (ILSS) of glass-fiber-reinforced plastics (GFRP) fabricated using the developed insulation materials is measured, as one of the most important mechanical properties, before and after irradiation in the JRR-3M fission reactor. As a result, it is demonstrated that the GFRPs using the developed insulation materials have sufficient performance for application to the ITER TF coil insulation.

  14. Convergence characteristics of nonlinear vortex-lattice methods for configuration aerodynamics

    NASA Technical Reports Server (NTRS)

    Seginer, A.; Rusak, Z.; Wasserstrom, E.

    1983-01-01

    For nonlinear panel methods there is no proof of the existence and uniqueness of solutions. The convergence characteristics of an iterative, nonlinear vortex-lattice method are, therefore, carefully investigated. The effects of several parameters, including (1) the surface-paneling method, (2) the integration method for the trajectories of the wake vortices, (3) vortex-grid refinement, and (4) the initial conditions for the first iteration on the computed aerodynamic coefficients and on the flow-field details are presented. The convergence of the iterative-solution procedure is usually rapid. The solution converges with grid refinement to a constant value, but the final value is not unique and varies with the wing surface-paneling and wake-discretization methods within some range in the vicinity of the experimental result.

  15. Fast iterative censoring CFAR algorithm for ship detection from SAR images

    NASA Astrophysics Data System (ADS)

    Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng

    2017-11-01

    Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated using an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
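    The integral-image (summed-area table) trick mentioned above reduces any rectangular-window sum to four lookups, independent of window size, which is what makes the local clutter statistics fast to estimate. A minimal sketch on a toy 3x3 "image":

```python
# Integral image: ii[r][c] holds the sum of all pixels above and to the left.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        for c in range(w):
            ii[r + 1][c + 1] = (img[r][c] + ii[r][c + 1]
                                + ii[r + 1][c] - ii[r][c])
    return ii

def window_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1][c0:c1] (half-open ranges) via four lookups."""
    return ii[r1][c1] - ii[r0][c1] - ii[r1][c0] + ii[r0][c0]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
s = window_sum(ii, 1, 1, 3, 3)   # sum of the bottom-right 2x2 block
```

Local means (and, with a second table of squared pixels, local variances) over any sliding window then cost O(1) per position, which is the basis of the fast parameter estimation claimed in the abstract.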

  16. Synthesis of 3-aminopropyl glycosides of linear β-(1 → 3)-D-glucooligosaccharides.

    PubMed

    Yashunsky, Dmitry V; Tsvetkov, Yury E; Grachev, Alexey A; Chizhov, Alexander O; Nifantiev, Nikolay E

    2016-01-01

    3-Aminopropyl glycosides of a series of linear β-(1 → 3)-linked D-glucooligosaccharides containing from 3 to 13 monosaccharide units were efficiently prepared. The synthetic scheme featured highly regioselective glycosylation of 4,6-O-benzylidene-protected 2,3-diol glycosyl acceptors with a disaccharide thioglycoside donor bearing chloroacetyl groups at O-2' and -3' as a temporary protection of the diol system. Iteration of the deprotection and glycosylation steps afforded the series of the title oligoglucosides differing in length by two monosaccharide units. A novel procedure for selective removal of acetyl groups in the presence of benzoyl ones, consisting of a brief treatment with a large excess of hydrazine hydrate, has been proposed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Variational study on the vibrational level structure and vibrational level mixing of highly vibrationally excited S₀ D₂CO.

    PubMed

    Rashev, Svetoslav; Moule, David C; Rashev, Vladimir

    2012-11-01

    We perform converged high-precision variational calculations to determine the frequencies of a large number of vibrational levels in S₀ D₂CO, extending from low to very high excess vibrational energies. For the calculations we use our specific vibrational method (recently employed for studies on H₂CO), consisting of a combination of a search/selection algorithm and a Lanczos iteration procedure. Using the same method we perform large-scale converged calculations on the vibrational level spectral structure and fragmentation at selected highly excited overtone states, up to excess vibrational energies of ~17,000 cm⁻¹, in order to study the characteristics of intramolecular vibrational redistribution (IVR), vibrational level density and mode selectivity. Copyright © 2012 Elsevier B.V. All rights reserved.
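    The Lanczos iteration at the core of such variational methods builds an orthonormal Krylov basis in which a symmetric matrix becomes tridiagonal, so its extreme eigenvalues can be approximated cheaply. A bare-bones pure-Python sketch on a small test matrix (not the authors' code; the matrix and starting vector are hypothetical):

```python
import math

def mat_vec(A, v):
    return [sum(a * vi for a, vi in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lanczos(A, v0, m):
    """m steps of Lanczos: returns basis vectors V and tridiagonal
    coefficients (alphas on the diagonal, betas off the diagonal)."""
    norm = math.sqrt(dot(v0, v0))
    V = [[vi / norm for vi in v0]]
    alphas, betas = [], []
    for j in range(m):
        w = mat_vec(A, V[j])
        alpha = dot(w, V[j])
        w = [wi - alpha * vi for wi, vi in zip(w, V[j])]
        if j > 0:                       # three-term recurrence
            w = [wi - betas[-1] * vi for wi, vi in zip(w, V[j - 1])]
        alphas.append(alpha)
        beta = math.sqrt(dot(w, w))
        if j < m - 1:
            betas.append(beta)
            V.append([wi / beta for wi in w])
    return V, alphas, betas

A = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 2.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 2.0]]
V, alphas, betas = lanczos(A, [1.0, 0.0, 0.0, 0.0], 3)
```

In a real vibrational calculation A is the (huge, sparse) Hamiltonian in the selected basis; only matrix-vector products are ever needed, which is why Lanczos pairs naturally with the search/selection step.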

  18. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization; they have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions that can be found fast enough and are sufficiently accurate for the purpose. In this paper we present an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
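    A compact genetic-algorithm sketch for a toy single-vehicle (TSP-like) routing instance: permutation encoding, elitist selection, order crossover, and swap mutation. The city set and all parameters are hypothetical, and this is an illustration of the method class, not the paper's experiments:

```python
import random

random.seed(1)
CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 1), (3, 7), (7, 0)]

def tour_length(t):
    return sum(((CITIES[t[i]][0] - CITIES[t[(i + 1) % len(t)]][0]) ** 2
                + (CITIES[t[i]][1] - CITIES[t[(i + 1) % len(t)]][1]) ** 2) ** 0.5
               for i in range(len(t)))

def order_crossover(p1, p2):
    """Copy a slice of p1, fill the rest in p2's order (keeps a valid tour)."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [c for c in p2 if c not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(t, rate=0.2):
    t = t[:]
    if random.random() < rate:
        i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]           # swap two cities
    return t

pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(40)]
best0 = min(tour_length(t) for t in pop)
for _ in range(100):
    pop.sort(key=tour_length)
    elite = pop[:10]                      # elitism: best tours survive
    pop = elite + [mutate(order_crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
best = min(tour_length(t) for t in pop)
```

Elitism guarantees the best tour never gets worse between generations; a full VRP version would add capacity constraints and a route-splitting step on top of the same encoding.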

  19. Self-consistent modeling of CFETR baseline scenarios for steady-state operation

    NASA Astrophysics Data System (ADS)

    Chen, Jiale; Jian, Xiang; Chan, Vincent S.; Li, Zeyu; Deng, Zhao; Li, Guoqiang; Guo, Wenfeng; Shi, Nan; Chen, Xi; CFETR Physics Team

    2017-07-01

    Integrated modeling for core plasma is performed to increase confidence in the proposed baseline scenario in the 0D analysis for the China Fusion Engineering Test Reactor (CFETR). The steady-state scenarios are obtained through the consistent iterative calculation of equilibrium, transport, auxiliary heating and current drives (H&CD). Three combinations of H&CD schemes (NB + EC, NB + EC + LH, and EC + LH) are used to sustain the scenarios with q_min > 2 and fusion power of ~70-150 MW. The predicted power is within the target range for CFETR Phase I, although the confinement based on physics models is lower than that assumed in 0D analysis. Ideal MHD stability analysis shows that the scenarios are stable against n = 1-10 ideal modes, where n is the toroidal mode number. Optimization of RF current drive for the RF-only scenario is also presented. The simulation workflow for core plasma in this work provides a solid basis for a more extensive research and development effort for the physics design of CFETR.

  20. GRAPE- TWO-DIMENSIONAL GRIDS ABOUT AIRFOILS AND OTHER SHAPES BY THE USE OF POISSON'S EQUATION

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.

    1994-01-01

    The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids, including those about airfoils. In a grid used for computing aerodynamic flow over an airfoil, or any other body shape, the surface of the body is usually treated as an inner boundary and often cannot be easily represented as an analytic function. The GRAPE computer program was developed to incorporate a method for generating two-dimensional finite-difference grids about airfoils and other shapes by the use of the Poisson differential equation. GRAPE can be used with any boundary shape, even one specified by tabulated points and including a limited number of sharp corners. The GRAPE program has been developed to be numerically stable and computationally fast. GRAPE can provide the aerodynamic analyst with an efficient and consistent means of grid generation. The GRAPE procedure generates a grid between an inner and an outer boundary by utilizing an iterative procedure to solve the Poisson differential equation subject to geometrical restraints. In this method, the inhomogeneous terms of the equation are automatically chosen such that two important effects are imposed on the grid. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. Along with the iterative solution to Poisson's equation, a technique of coarse-fine sequencing is employed to accelerate numerical convergence. GRAPE program control cards and input data are entered via the NAMELIST feature. Each variable has a default value such that user supplied data is kept to a minimum. Basic input data consists of the boundary specification, mesh point spacings on the boundaries, and mesh line angles at the boundaries. Output consists of a dataset containing the grid data and, if requested, a plot of the generated mesh. 
The GRAPE program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 135K (octal) of 60 bit words. For plotted output the commercially available DISSPLA graphics software package is required. The GRAPE program was developed in 1980.
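    The elliptic grid-generation idea can be sketched in its simplest special case. GRAPE solves Poisson equations with automatically chosen inhomogeneous terms to control spacing and intersection angles; the toy below keeps only the homogeneous (Laplace-like) averaging step, smoothing an algebraic O-grid between a circular inner boundary and a circular outer boundary by Gauss-Seidel sweeps. All dimensions and boundaries are hypothetical:

```python
import math

NI, NJ = 17, 9          # points along the boundary and between boundaries

# Inner boundary: circle of radius 1; outer boundary: circle of radius 4.
inner = [(math.cos(2 * math.pi * i / (NI - 1)),
          math.sin(2 * math.pi * i / (NI - 1))) for i in range(NI)]
outer = [(4 * math.cos(2 * math.pi * i / (NI - 1)),
          4 * math.sin(2 * math.pi * i / (NI - 1))) for i in range(NI)]

# Algebraic initial grid: linear interpolation between the two boundaries.
x = [[inner[i][0] + (outer[i][0] - inner[i][0]) * j / (NJ - 1)
      for j in range(NJ)] for i in range(NI)]
y = [[inner[i][1] + (outer[i][1] - inner[i][1]) * j / (NJ - 1)
      for j in range(NJ)] for i in range(NI)]

def sweep(u):
    """One Gauss-Seidel sweep of Laplacian smoothing; returns max change.
    Boundary points (j = 0, j = NJ-1 and the i-edges) stay fixed."""
    change = 0.0
    for i in range(1, NI - 1):
        for j in range(1, NJ - 1):
            new = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1])
            change = max(change, abs(new - u[i][j]))
            u[i][j] = new
    return change

for _ in range(2000):
    cx, cy = sweep(x), sweep(y)
    if max(cx, cy) < 1e-9:
        break
```

GRAPE's inhomogeneous source terms would be added to the right-hand side of exactly this iteration to enforce the boundary spacing and angle controls described above; with them set to zero the grid simply relaxes toward a smooth interior.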

  1. Assessment of Anatomical Knowledge and Core Trauma Competency Vascular Skills.

    PubMed

    Granite, Guinevere; Pugh, Kristy; Chen, Hegang; Longinaker, Nyaradzo; Garofalo, Evan; Shackelford, Stacy; Shalin, Valerie; Puche, Adam; Pasley, Jason; Sarani, Babak; Henry, Sharon; Bowyer, Mark; Mackenzie, Colin

    2018-03-01

    Surgical residents express confidence in performing specific vascular exposures before training, but such self-reported confidence did not correlate with ratings by co-located evaluators. This study reports residents' self-confidence evaluated before and after Advanced Surgical Skills for Exposure in Trauma (ASSET) cadaver-based training, and again 12-18 mo later. We hypothesize that residents will better judge their own skill after ASSET than before, when compared with evaluator ratings. Forty PGY2-7 surgical residents performed four procedures: axillary artery (AA), brachial artery (BA), femoral artery exposure and control (FA), and lower extremity fasciotomy (FAS) at the three evaluations. Using 5-point Likert scales, surgeons self-assessed their confidence in anatomical understanding and procedure performance after each procedure, and evaluators rated each surgeon accordingly. For all three evaluations, residents consistently rated their anatomical understanding (p < 0.04) and surgical performance (p < 0.03) higher than evaluators did for both FA and FAS. Residents rated their anatomical understanding and surgical performance higher (p < 0.005) than evaluators for BA after training and up to 18 mo later. Only for the third AA evaluation were there no rating differences. Residents overrate their anatomical understanding and performance abilities for BA, FA, and FAS even after performing the procedures and being debriefed three times in 18 mo.

  2. Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?

    NASA Astrophysics Data System (ADS)

    Swartjes, Ivo; Theune, Mariët

    We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.

  3. Developing a patient-centered outcome measure for complementary and alternative medicine therapies II: Refining content validity through cognitive interviews

    PubMed Central

    2011-01-01

    Background Available measures of patient-reported outcomes for complementary and alternative medicine (CAM) inadequately capture the range of patient-reported treatment effects. The Self-Assessment of Change questionnaire was developed to measure multi-dimensional shifts in well-being for CAM users. With content derived from patient narratives, items were subsequently focused through interviews on a new cohort of participants. Here we present the development of the final version in which the content and format is refined through cognitive interviews. Methods We conducted cognitive interviews across five iterations of questionnaire refinement with a culturally diverse sample of 28 CAM users. In each iteration, participant critiques were used to revise the questionnaire, which was then re-tested in subsequent rounds of cognitive interviews. Following all five iterations, transcripts of cognitive interviews were systematically coded and analyzed to examine participants' understanding of the format and content of the final questionnaire. Based on this data, we established summary descriptions and selected exemplar quotations for each word pair on the final questionnaire. Results The final version of the Self-Assessment of Change questionnaire (SAC) includes 16 word pairs, nine of which remained unchanged from the original draft. Participants consistently said that these stable word pairs represented opposite ends of the same domain of experience and the meanings of these terms were stable across the participant pool. Five pairs underwent revision and two word pairs were added. Four word pairs were eliminated for redundancy or because participants did not agree on the meaning of the terms. Cognitive interviews indicate that participants understood the format of the questionnaire and considered each word pair to represent opposite poles of a shared domain of experience. 
Conclusions We have placed lay language and direct experience at the center of questionnaire revision and refinement. In so doing, we provide an innovative model for the development of truly patient-centered outcome measures. Although this instrument was designed and tested in a CAM-specific population, it may be useful in assessing multi-dimensional shifts in well-being across a broader patient population. PMID:22206409

  4. From virtual clustering analysis to self-consistent clustering analysis: a mathematical study

    NASA Astrophysics Data System (ADS)

    Tang, Shaoqiang; Zhang, Lei; Liu, Wing Kam

    2018-03-01

    In this paper, we propose a new homogenization algorithm, virtual clustering analysis (VCA), and provide a mathematical framework for the recently proposed self-consistent clustering analysis (SCA) (Liu et al. in Comput Methods Appl Mech Eng 306:319-341, 2016). In the mathematical theory, we clarify the key assumptions and ideas of VCA and SCA, and derive the continuous and discrete Lippmann-Schwinger equations. Based on a key postulation of "once response similarly, always response similarly", clustering is performed in an offline stage by machine learning techniques (k-means and SOM), and facilitates a substantial reduction of computational complexity in an online predictive stage. The clear mathematical setup allows, for the first time, a convergence study of clustering refinement in one space dimension. Convergence is proved rigorously and is found from numerical investigations to be of second order. Furthermore, we propose to suitably enlarge the domain in VCA so that the boundary terms may be neglected in the Lippmann-Schwinger equation, by virtue of Saint-Venant's principle. In contrast, these terms were not obtained in the original SCA paper, and we find that they may well be responsible for the numerical dependency on the choice of reference material property. Since VCA enhances accuracy by overcoming the modeling error and reduces numerical cost by avoiding the outer-loop iteration that SCA requires to attain material-property consistency, its efficiency is expected to be even higher than that of the recently proposed SCA algorithm.

  5. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Due to the very small time steps required (on the order of the inverse plasma frequency) and the fine mesh, the computational requirements can be very high, and they increase drastically with increasing plasma density and size of the calculation domain. Thus, small computational domains and/or reduced dimensionality are usually used. In recent years, the available central processing unit (CPU) power has strongly increased. Together with massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in the direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) systems of future fusion devices like the international fusion experiment ITER and the demonstration reactor DEMO. For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons, which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source.
The presentation first gives a brief overview of the current status of the ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced as well as the coupling to codes describing the whole source (PIC codes or fluid codes). Presented and discussed are different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion as well as selected code results. The main focus of future calculations will be the meniscus formation and identifying measures for reducing the co-extracted electrons, in particular for deuterium operation. The recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 size of the ITER NBI source) are presented.
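For orientation, the core PIC loop the review discusses (deposit charge on a mesh, solve for the self-consistent field, push the particles) can be sketched in 1-D. This is a normalized toy model, not ONIX or any of the production codes mentioned above:

```python
# Minimal normalized 1-D electrostatic PIC sketch (illustrative toy):
# nearest-grid-point charge deposition, FFT Poisson solve, explicit push,
# periodic boundaries, uniform neutralizing ion background.
import numpy as np

rng = np.random.default_rng(1)
L, ng, npart, dt = 2 * np.pi, 64, 2000, 0.1
dx = L / ng
x = rng.uniform(0, L, npart)            # particle positions
v = 0.1 * rng.standard_normal(npart)    # particle velocities
q_over_m = -1.0                         # electron macro-particles
weight = L / npart                      # macro-particle charge magnitude

def efield_at_particles(x):
    """Deposit charge, solve div E = rho spectrally, gather E back."""
    idx = np.rint(x / dx).astype(int) % ng
    rho = 1.0 - weight * np.bincount(idx, minlength=ng) / dx  # +1 ion background
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])  # ik E_k = rho_k; k = 0 mode dropped
    return np.real(np.fft.ifft(E_k))[idx]

for _ in range(100):                    # time loop, dt ~ inverse plasma frequency
    v += q_over_m * efield_at_particles(x) * dt
    x = (x + v * dt) % L
```

Even this toy shows why the method is expensive: the time step is tied to the plasma frequency and the mesh to the Debye length, so realistic densities and domain sizes multiply the cost quickly.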

  6. A Reliability-Based Particle Filter for Humanoid Robot Self-Localization in RoboCup Standard Platform League

    PubMed Central

    Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.

    2013-01-01

    This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098
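The classical particle filter that serves as the baseline for the new selection strategy can be sketched for a toy 1-D localization problem. The landmark layout and noise levels below are invented for illustration; the paper's reliability-based selection is not reproduced here:

```python
# Bootstrap particle filter sketch: weight particles by the likelihood of
# noisy distance measurements to two known landmarks, resample, add jitter.
import numpy as np

rng = np.random.default_rng(2)
landmarks = np.array([0.0, 5.0])
true_pos, meas_sigma, n = 2.0, 0.2, 500
particles = rng.uniform(0.0, 10.0, n)         # initial belief: uniform over field

for _ in range(10):                           # stationary robot, repeated looks
    z = np.abs(landmarks - true_pos) + meas_sigma * rng.standard_normal(2)
    expected = np.abs(landmarks[None, :] - particles[:, None])
    w = np.exp(-0.5 * np.sum(((z - expected) / meas_sigma) ** 2, axis=1))
    w /= w.sum()
    u = (np.arange(n) + rng.uniform()) / n    # systematic resampling
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), n - 1)
    particles = particles[idx]
    particles = particles + 0.05 * rng.standard_normal(n)  # roughening jitter

estimate = particles.mean()
```

The resampling step is where selection strategies differ; its cost per iteration is exactly what matters on a CPU-constrained platform such as the Nao.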

  7. How children perceive fractals: Hierarchical self-similarity and cognitive development

    PubMed Central

    Martins, Maurício Dias; Laaha, Sabine; Freiberger, Eva Maria; Choi, Soonja; Fitch, W. Tecumseh

    2014-01-01

The ability to understand and generate hierarchical structures is a crucial component of human cognition, available in language, music, mathematics and problem solving. Recursion is a particularly useful mechanism for generating complex hierarchies by means of self-embedding rules. In the visual domain, fractals are recursive structures in which simple transformation rules generate hierarchies of infinite depth. Research on how children acquire these rules can provide valuable insight into the cognitive requirements and learning constraints of recursion. Here, we used fractals to investigate the acquisition of recursion in the visual domain, and probed for correlations with grammar comprehension and general intelligence. We compared second (n = 26) and fourth graders (n = 26) in their ability to represent two types of rules for generating hierarchical structures: Recursive rules, on the one hand, which generate new hierarchical levels; and iterative rules, on the other hand, which merely insert items within hierarchies without generating new levels. We found that the majority of fourth graders, but not second graders, were able to represent both recursive and iterative rules. This difference was partially accounted for by second graders’ impairment in detecting hierarchical mistakes, and correlated with between-grade differences in grammar comprehension tasks. Empirically, recursion and iteration also differed in at least one crucial aspect: While the ability to learn recursive rules seemed to depend on the previous acquisition of simple iterative representations, the opposite was not true, i.e., children were able to acquire iterative rules before they acquired recursive representations. These results suggest that the acquisition of recursion in vision follows learning constraints similar to the acquisition of recursion in language, and that both domains share cognitive resources involved in hierarchical processing. PMID:24955884

  8. Nomarski differential interference contrast microscopy for surface slope measurements: an examination of techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartman, J.S.; Gordon, R.L.; Lessor, D.L.

    1981-08-01

    Alternate measurement and data analysis procedures are discussed and compared for the application of reflective Nomarski differential interference contrast microscopy for the determination of surface slopes. The discussion includes the interpretation of a previously reported iterative procedure using the results of a detailed optical model and the presentation of a new procedure based on measured image intensity extrema. Surface slope determinations from these procedures are presented and compared with results from a previously reported curve fit analysis of image intensity data. The accuracy and advantages of the different procedures are discussed.

  9. Ultrametric properties of the attractor spaces for random iterated linear function systems

    NASA Astrophysics Data System (ADS)

    Buchovets, A. G.; Moskalev, P. V.

    2018-03-01

We investigate attractors of random iterated linear function systems as independent spaces embedded in ordinary Euclidean space. Introducing, on the set of attractor points, a metric that satisfies the strengthened triangle inequality makes this space ultrametric. The properties of disconnectedness and hierarchical self-similarity inherent in ultrametric spaces then make it possible to define the attractor as a fractal. We note that a rigorous proof of these properties in the case of an ordinary Euclidean space is very difficult.
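An attractor of a random iterated linear function system can be generated with the chaos game. The three-map Sierpinski system below is a standard textbook example, not one taken from the paper:

```python
# Chaos-game sketch of a random iterated linear function system: one of three
# contractive affine maps f_k(p) = (p + v_k)/2 is applied at random each step;
# after a transient, the orbit lies on the attractor (the Sierpinski triangle),
# whose hierarchical self-similarity and disconnectedness are visible in the
# point set (e.g. the central "hole" stays empty).
import numpy as np

rng = np.random.default_rng(3)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.3, 0.3])                       # arbitrary starting point
points = []
for step in range(5000):
    p = 0.5 * (p + vertices[rng.integers(3)])  # random contractive map
    if step >= 100:                            # discard the transient
        points.append(p.copy())
points = np.array(points)
```

Because every map is a contraction with ratio 1/2, the orbit approaches the attractor geometrically fast, which is why a short transient suffices.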

  10. Iterative spectral methods and spectral solutions to compressible flows

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Zang, T. A.

    1982-01-01

A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one-dimensional problems and two-dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.

  11. A comparative study of coarse-graining methods for polymeric fluids: Mori-Zwanzig vs. iterative Boltzmann inversion vs. stochastic parametric optimization

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em

    2016-07-01

    We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
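The IBI method named above has a compact update rule, V_{i+1}(r) = V_i(r) + kT ln(g_i(r)/g_target(r)). A real IBI run re-simulates the CG system at every iteration to obtain g_i(r); the sketch below replaces that simulation with the dilute-limit closure g(r) = exp(-V(r)/kT), an assumption made purely so the fixed point can be checked cheaply:

```python
# Sketch of the iterative Boltzmann inversion (IBI) update
#   V_{i+1}(r) = V_i(r) + kT * ln( g_i(r) / g_target(r) ).
# The per-iteration CG simulation is replaced by the dilute-limit closure
# g(r) = exp(-V(r)/kT); at the fixed point g_i = g_target, so V = V_true.
import numpy as np

kT = 1.0
r = np.linspace(0.9, 3.0, 100)
V_true = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)   # target from an LJ potential
g_target = np.exp(-V_true / kT)

V = np.zeros_like(r)                                # start from zero potential
for _ in range(20):
    g = np.exp(-V / kT)                             # stand-in for the CG simulation
    V = V + kT * np.log(g / g_target)               # the IBI correction

max_err = np.max(np.abs(V - V_true))
```

At finite density the closure fails and the iteration genuinely needs many simulate-and-correct cycles, which is the computational price of the reverse route relative to the parameter-free MZ projection.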

  12. A comparative study of coarse-graining methods for polymeric fluids: Mori-Zwanzig vs. iterative Boltzmann inversion vs. stochastic parametric optimization.

    PubMed

    Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em

    2016-07-28

    We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.

  13. A modified interval symmetric single step procedure ISS-5D for simultaneous inclusion of polynomial zeros

    NASA Astrophysics Data System (ADS)

    Sham, Atiyah W. M.; Monsi, Mansor; Hassan, Nasruddin; Suleiman, Mohamed

    2013-04-01

The aim of this paper is to present a new modified interval symmetric single-step procedure, ISS-5D, which extends the previous procedure ISS1. The ISS-5D method produces successively smaller intervals that are guaranteed to still contain the zeros. The efficiency of the method is measured in terms of CPU time and the number of iterations. The procedure is run on five test polynomials, and the results obtained are shown in this paper.

  14. CGHnormaliter: an iterative strategy to enhance normalization of array CGH data with imbalanced aberrations

    PubMed Central

    van Houte, Bart PP; Binsl, Thomas W; Hettling, Hannes; Pirovano, Walter; Heringa, Jaap

    2009-01-01

Background Array comparative genomic hybridization (aCGH) is a popular technique for detection of genomic copy number imbalances. These play a critical role in the onset of various types of cancer. In the analysis of aCGH data, normalization is deemed a critical pre-processing step. In general, aCGH normalization approaches are similar to those used for gene expression data, although the two data types differ inherently. A particular problem with aCGH data is that imbalanced copy numbers lead to improper normalization using conventional methods. Results In this study we present a novel method, called CGHnormaliter, which addresses this issue by means of an iterative normalization procedure. First, provisory balanced copy numbers are identified and subsequently used for normalization. These two steps are then iterated to refine the normalization. We tested our method on three well-studied tumor-related aCGH datasets with experimentally confirmed copy numbers. Results were compared to a conventional normalization approach and two more recent state-of-the-art aCGH normalization strategies. Our findings show that, compared to these three methods, CGHnormaliter yields a higher specificity and precision in terms of identifying the 'true' copy numbers. Conclusion We demonstrate that the normalization of aCGH data can be significantly enhanced using an iterative procedure that effectively eliminates the effect of imbalanced copy numbers. This also leads to a more reliable assessment of aberrations. An R-package containing the implementation of CGHnormaliter is available at . PMID:19709427
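The two alternating steps (call provisionally balanced probes, then re-normalize using those probes only) can be sketched as follows. This is a deliberately simplified scalar-shift caricature of the CGHnormaliter idea with invented simulation parameters, not the published algorithm:

```python
# With imbalanced aberrations (here 60% of probes are gains), a plain global
# median normalization lands in the gained probes; iterating
# "call balanced probes -> re-centre on them" recovers the true offset.
import numpy as np

rng = np.random.default_rng(4)
n_probes = 1000
balanced = rng.random(n_probes) < 0.4                 # only 40% of probes balanced
log2 = (np.where(balanced, 0.0, 0.58)                 # single-copy gains at +0.58
        + 0.05 * rng.standard_normal(n_probes)
        + 0.2)                                        # true normalization offset

naive_shift = np.median(log2)                         # conventional: biased by gains

shift = 0.0
for _ in range(10):
    centred = log2 - shift
    called_balanced = np.abs(centred) < 0.29          # crude call: within half a gain
    shift += np.median(centred[called_balanced])      # refine on balanced probes only

normalized = log2 - shift
```

The iteration matters precisely when balanced probes are a minority: the first pass already excludes most aberrant probes from the centring statistic, and subsequent passes sharpen the call.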

  15. Sensing resonant objects in the presence of noise and clutter using iterative, single-channel acoustic time reversal

    NASA Astrophysics Data System (ADS)

    Waters, Zachary John

    The presence of noise and coherent returns from clutter often confounds efforts to acoustically detect and identify target objects buried in inhomogeneous media. Using iterative time reversal with a single channel transducer, returns from resonant targets are enhanced, yielding convergence to a narrowband waveform characteristic of the dominant mode in a target's elastic scattering response. The procedure consists of exciting the target with a broadband acoustic pulse, sampling the return using a finite time window, reversing the signal in time, and using this reversed signal as the source waveform for the next interrogation. Scaled laboratory experiments (0.4-2 MHz) are performed employing a piston transducer and spherical targets suspended in the free field and buried in a sediment phantom. In conjunction with numerical simulations, these experiments provide an inexpensive and highly controlled means with which to examine the efficacy of the technique. Signal-to-noise enhancement of target echoes is demonstrated. The methodology reported provides a means to extract both time and frequency information for surface waves that propagate on an elastic target. Methods developed in the laboratory are then applied in medium scale (20-200 kHz) pond experiments for the detection of a steel shell buried in sandy sediment.
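The interrogate-window-reverse-retransmit loop described above can be sketched in the frequency domain by modelling the target as a two-resonance filter. This is an illustrative stand-in, not the experimental configuration:

```python
# Single-channel iterative time reversal sketch: the echo from a two-resonance
# target filter is time-reversed and used as the next source waveform. Since
# time reversal preserves spectral magnitude, the m-th iterate's spectrum goes
# like |H|^m, so it converges to a narrowband signal at the dominant resonance.
import numpy as np

fs, n = 1000.0, 4096
f = np.fft.rfftfreq(n, 1 / fs)

def lorentzian(f0, width):
    return 1.0 / (1.0 + ((f - f0) / width) ** 2)

H = lorentzian(100.0, 5.0) + 0.6 * lorentzian(200.0, 5.0)  # dominant mode: 100 Hz

s = np.zeros(n)
s[0] = 1.0                                       # broadband interrogation pulse
for _ in range(8):
    echo = np.fft.irfft(H * np.fft.rfft(s), n)   # target response (zero-phase toy)
    s = echo[::-1]                               # time reversal
    s /= np.max(np.abs(s))                       # renormalize source level

peak_freq = f[np.argmax(np.abs(np.fft.rfft(s)))]
```

After eight iterations the 200 Hz mode is suppressed by roughly (0.6)^8 relative to the 100 Hz mode, which is the signal-to-noise enhancement mechanism the abstract describes.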

  16. Low Quality of Basic Caregiving Environments in Child Care: Actual Reality or Artifact of Scoring?

    ERIC Educational Resources Information Center

    Norris, Deborah J.; Guss, Shannon

    2016-01-01

    Quality Rating Improvement Systems (QRIS) frequently include the Infant-Toddler Environment Rating Scale-Revised (ITERS-R) as part of rating and improving child care quality. However, studies utilizing the ITERS-R consistently report low quality, especially for basic caregiving items. This research examined whether the low scores reflected the…

  17. Using an Iterative Mixed-Methods Research Design to Investigate Schools Facing Exceptionally Challenging Circumstances within Trinidad and Tobago

    ERIC Educational Resources Information Center

    De Lisle, Jerome; Seunarinesingh, Krishna; Mohammed, Rhoda; Lee-Piggott, Rinnelle

    2017-01-01

    In this study, methodology and theory were linked to explicate the nature of education practice within schools facing exceptionally challenging circumstances (SFECC) in Trinidad and Tobago. The research design was an iterative quan>QUAL-quan>qual multi-method research programme, consisting of 3 independent projects linked together by overall…

  18. Co-Mentoring: The Iterative Process of Learning about Self and "Becoming" Leaders

    ERIC Educational Resources Information Center

    Allison, Valerie A.; Ramirez, Laurie A.

    2016-01-01

    Two pre-tenured faculty members at dissimilar institutions found themselves in similar positions--both were assigned to administrative positions that they did not seek. This self-study is an investigation of their processes of becoming leaders and how they aligned and/or conflicted with their espoused beliefs. A review of the literature that…

  19. A Simple Classroom Simulation of Heat Energy Diffusing through a Metal Bar

    ERIC Educational Resources Information Center

    Kinsler, Mark; Kinzel, Evelyn

    2007-01-01

    We present an iterative procedure that does not rely on calculus to model heat flow through a uniform bar of metal and thus avoids the use of the partial differential equation typically needed to describe heat diffusion. The procedure is based on first principles and can be done with students at the blackboard. It results in a plot that…
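The kind of calculus-free iteration described can be written as a forward-Euler update in which each interior segment gains a fixed fraction of its temperature difference with its neighbours. The segment count and transfer fraction below are assumptions for illustration:

```python
# Iterative blackboard-style heat-flow model: repeatedly move a fixed fraction
# of the neighbour temperature differences (a forward-Euler discretization of
# heat diffusion, doable by hand-arithmetic with no calculus).
import numpy as np

n_seg, alpha, steps = 10, 0.2, 500
T = np.zeros(n_seg)
T[0] = 100.0                        # hot end held at 100; cold end held at 0
for _ in range(steps):
    new = T.copy()
    for i in range(1, n_seg - 1):
        new[i] = T[i] + alpha * (T[i - 1] - 2 * T[i] + T[i + 1])
    T = new

# The steady state approaches the straight-line profile T_i = 100*(1 - i/(n_seg-1)).
```

Keeping alpha below 0.5 keeps the update stable, which is the one numerical caveat worth mentioning to students.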

  20. Self-consistent calculation of the Sommerfeld enhancement

    DOE PAGES

    Blum, Kfir; Sato, Ryosuke; Slatyer, Tracy R.

    2016-06-08

    A calculation of the Sommerfeld enhancement is presented and applied to the problem of s-wave non-relativistic dark matter annihilation. The difference from previous computations in the literature is that the effect of the underlying short-range scattering process is consistently included together with the long-range force in the effective QM Schrödinger problem. Our procedure satisfies partial-wave unitarity where previous calculations fail. We provide analytic results for some potentials of phenomenological relevance.

  1. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    NASA Astrophysics Data System (ADS)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.

    2011-07-01

The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimensions 1.9 × 0.9 m2 by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (~1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison, at BATMAN, of the source performance of an ITER-relevant extraction system with chamfered 14 mm diameter apertures against the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density, consistent with ion trajectory calculations, or in the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  2. An exploration of inter-organisational partnership assessment tools in the context of Australian Aboriginal-mainstream partnerships: a scoping review of the literature.

    PubMed

    Tsou, Christina; Haynes, Emma; Warner, Wayne D; Gray, Gordon; Thompson, Sandra C

    2015-04-23

The need for better partnerships between Aboriginal organisations and mainstream agencies demands attention to the process and relational elements of these partnerships, and to improving partnership functioning through transformative or iterative evaluation procedures. This paper presents the findings of a literature review which examines the usefulness of existing partnership tools in the Australian Aboriginal-mainstream partnership (AMP) context. Three sets of best practice principles for successful AMP were selected based on the authors' knowledge and experience. Items in each set of principles were separated into process and relational elements and used to guide the analysis of partnership assessment tools. The review and analysis of partnership assessment tools were conducted in three distinct but related parts: part 1 - identify and select reviews of partnership tools; part 2 - identify and select partnership self-assessment tools; part 3 - analyse the selected tools using AMP principles. The focus on relational and process elements in the partnership tools reviewed is consistent with the focus of Australian AMP principles by reconciliation advocates; however, historical context, lived experience, cultural context and approaches of Australian Aboriginal people represent key deficiencies in the tools reviewed. The overall assessment indicated that the New York Partnership Self-Assessment Tool and the VicHealth Partnership Analysis Tools reflect the greatest number of AMP principles, followed by the Nuffield Partnership Assessment Tool. The New York PSAT has the strongest alignment with the relational elements, while the VicHealth and Nuffield tools showed the greatest alignment with the process elements in the chosen AMP principles. Partnership tools offer opportunities for providing evidence-based support to partnership development.
The multiplicity of existing tools and the reported uniqueness of each partnership mean that the development of a generic partnership analysis for AMP may not be a viable option for future effort.

  3. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE PAGES

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    2017-11-09

Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  4. Automated main-chain model building by template matching and iterative fragment extension.

    PubMed

    Terwilliger, Thomas C

    2003-01-01

An algorithm for the automated macromolecular model building of polypeptide backbones is described. The procedure is hierarchical. In the initial stages, many overlapping polypeptide fragments are built. In subsequent stages, the fragments are extended and then connected. Identification of the locations of helical and beta-strand regions is carried out by FFT-based template matching. Fragment libraries of helices and beta-strands from refined protein structures are then positioned at the potential locations of helices and strands and the longest segments that fit the electron-density map are chosen. The helices and strands are then extended using fragment libraries consisting of sequences three amino acids long derived from refined protein structures. The resulting segments of polypeptide chain are then connected by choosing those which overlap at two or more C(alpha) positions. The fully automated procedure has been implemented in RESOLVE and is capable of model building at resolutions as low as 3.5 Å. The algorithm is useful for building a preliminary main-chain model that can serve as a basis for refinement and side-chain addition.

  5. A nonlinear dynamic finite element approach for simulating muscular hydrostats.

    PubMed

    Vavourakis, V; Kazakidi, A; Tsakiris, D P; Ekaterinaris, J A

    2014-01-01

    An implicit nonlinear finite element model for simulating biological muscle mechanics is developed. The numerical method is suitable for dynamic simulations of three-dimensional, nonlinear, nearly incompressible, hyperelastic materials that undergo large deformations. These features characterise biological muscles, which consist of fibres and connective tissues. It can be assumed that the stress distribution inside the muscles is the superposition of stresses along the fibres and the connective tissues. The mechanical behaviour of the surrounding tissues is determined by adopting a Mooney-Rivlin constitutive model, while the mechanical description of fibres is considered to be the sum of active and passive stresses. Due to the nonlinear nature of the problem, evaluation of the Jacobian matrix is carried out in order to subsequently utilise the standard Newton-Raphson iterative procedure and to carry out time integration with an implicit scheme. The proposed methodology is implemented into our in-house, open source, finite element software, which is validated by comparing numerical results with experimental measurements and other numerical results. Finally, the numerical procedure is utilised to simulate primitive octopus arm manoeuvres, such as bending and reaching.
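The Newton-Raphson loop mentioned above follows the usual pattern of linearizing the residual with the Jacobian and iterating until the update is small. A scalar toy residual (a hypothetical cubic spring, not the muscle model) shows the iteration:

```python
# Scalar Newton-Raphson sketch of the linearize-and-iterate pattern:
# residual R(u) = k*u + c*u^3 - F, Jacobian J(u) = dR/du = k + 3*c*u^2.
k, c, F = 2.0, 0.5, 10.0

def residual(u):
    return k * u + c * u ** 3 - F

def jacobian(u):
    return k + 3.0 * c * u ** 2

u = 0.0
for _ in range(50):
    du = -residual(u) / jacobian(u)   # solve J*du = -R (one unknown here)
    u += du
    if abs(du) < 1e-12:               # convergence check on the update size
        break
```

In the finite element setting u is the nodal displacement vector, R is assembled from element stresses, and the scalar division becomes a sparse linear solve; the implicit time integrator wraps this loop around every time step.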

  6. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L 2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization.more » A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO 2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.« less

  7. AUTOMOTIVE DIESEL MAINTENANCE 2. UNIT XV, UNDERSTANDING DC GENERATOR PRINCIPLES (PART II).

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 25-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF MAINTENANCE PROCEDURES FOR DIRECT CURRENT GENERATORS USED ON DIESEL POWERED EQUIPMENT. TOPICS ARE SPECIAL GENERATOR CIRCUITS, GENERATOR TESTING, AND GENERATOR POLARITY. THE MODULE CONSISTS OF A SELF-INSTRUCTIONAL PROGRAMED TRAINING FILM "DC GENERATORS II--GENERATOR…

  8. Academic Program Review: Guidelines and Procedures.

    ERIC Educational Resources Information Center

    State Univ. of New York, Delhi. Agricultural and Technical Coll.

    The Academic Program Review system at the State University Agricultural and Technical College at Delhi consists of two phases: preparation of a self-study report by specialized faculty providing instruction in the particular program, and review of the report and program operation by a visiting panel of experts in the field or academic discipline.…

  9. Nonlinear optical response in narrow graphene nanoribbons

    NASA Astrophysics Data System (ADS)

    Karimi, Farhad; Knezevic, Irena

    We present an iterative method to calculate the nonlinear optical response of armchair graphene nanoribbons (aGNRs) and zigzag graphene nanoribbons (zGNRs) while including the effects of dissipation. In contrast to methods that calculate the nonlinear response in the ballistic (dissipation-free) regime, here we obtain the nonlinear response of an electronic system to an external electromagnetic field while interacting with a dissipative environment (to second order). We use a self-consistent-field approach within a Markovian master-equation formalism (SCF-MMEF) coupled with full-wave electromagnetic equations, and we solve the master equation iteratively to obtain the higher-order response functions. We employ the SCF-MMEF to calculate the nonlinear conductance and susceptibility, as well as to calculate the dependence of the plasmon dispersion and plasmon propagation length on the intensity of the electromagnetic field in GNRs. The electron scattering mechanisms included in this work are scattering with intrinsic phonons, ionized impurities, surface optical phonons, and line-edge roughness. Unlike in wide GNRs, where ionized-impurity scattering dominates dissipation, in ultra-narrow nanoribbons on polar substrates optical-phonon scattering and ionized-impurity scattering are equally prominent. Support by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0008712.

  10. Recursive Factorization of the Inverse Overlap Matrix in Linear-Scaling Quantum Molecular Dynamics Simulations.

    PubMed

    Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N

    2016-07-12

    We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
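    The iterative refinement of Z can be illustrated with the classic Newton-Schulz recursion for the inverse matrix square root. This is a dense toy sketch: the paper's production scheme adds numerically thresholded sparse algebra and reuses Z propagated from earlier MD steps, none of which is reproduced here.

```python
import numpy as np

def refine_inv_sqrt(S, Z, maxit=50, tol=1e-12):
    """Refine Z toward S^{-1/2} with the Newton-Schulz recursion
    Z <- Z + 0.5 * Z @ (I - S @ Z @ Z), which converges quadratically
    whenever the starting guess satisfies ||I - S Z^2|| < 1."""
    I = np.eye(S.shape[0])
    for _ in range(maxit):
        R = I - S @ Z @ Z
        if np.linalg.norm(R) < tol:
            break
        Z = Z + 0.5 * Z @ R
    return Z

# Overlap-like SPD matrix: identity plus weak symmetric couplings.
rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((6, 6))
S = np.eye(6) + (A + A.T)

# Scaled-identity starting guess; an MD code would instead reuse Z
# propagated from previous time steps, as the abstract describes.
Z0 = np.eye(6) / np.sqrt(np.linalg.norm(S, 2))
Z = refine_inv_sqrt(S, Z0)
```

    In an MD loop one would pass the previous step's Z as the starting guess, so only a few refinement sweeps are needed per step.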

  11. The influence of cooperation and defection on social decision making in depression: A study of the iterated Prisoner's Dilemma Game.

    PubMed

    Sorgi, Kristen M; van 't Wout, Mascha

    2016-12-30

    This study evaluated the influence of self-reported levels of depression on interpersonal strategic decision making when interacting with partners who differed in their predetermined tendency to cooperate in three separate computerized iterated Prisoner's Dilemma Games (iPDGs). Across 29 participants, cooperation was lowest when interacting with a predominantly defecting partner and highest when interacting with a predominantly cooperating partner. Greater depression severity was related to steadier and continued cooperation over trials with the cooperating partner, seeming to reflect a prosocial response tendency when interacting with this partner. With the unbiased partner, depression severity was associated with a more volatile response pattern in reaction to cooperation and defection by this partner. Severity of depression did not influence cooperation with a defecting partner or expectations about partner cooperation reported before the task began. Taken together, these data appear to show that in predominantly positive interactions, as in the cooperating partner condition, depression is associated with less volatile, more consistent cooperation. When such clear feedback is absent, as in the unbiased partner condition, depression is associated with more volatile behavior. Nonetheless, participants were generally able to adapt their behavior accordingly in this dynamic interpersonal decision-making context. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Gyrokinetic simulation of edge blobs and divertor heat-load footprint

    NASA Astrophysics Data System (ADS)

    Chang, C. S.; Ku, S.; Hager, R.; Churchill, M.; D'Azevedo, E.; Worley, P.

    2015-11-01

    Gyrokinetic study of the divertor heat-load width Lq has been performed using the edge gyrokinetic code XGC1. Both neoclassical and electrostatic turbulence physics are self-consistently included in the simulation, with a fully nonlinear Fokker-Planck collision operator and neutral recycling. Gyrokinetic ions and drift-kinetic electrons constitute the plasma in realistic magnetic separatrix geometry. The electron density fluctuations from nonlinear turbulence form blobs, similar to those seen in experiments. DIII-D and NSTX geometries have been used to represent today's conventional and tight-aspect-ratio tokamaks. XGC1 shows that ion neoclassical orbit dynamics dominates over the blob physics in setting Lq in the sample DIII-D and NSTX plasmas, re-discovering the experimentally observed 1/Ip-type scaling. The magnitude of Lq is also in the right ballpark in comparison with experimental data. However, in an ITER standard plasma, XGC1 shows that the negligible neoclassical orbit excursion effect allows the blob dynamics to dominate Lq. In contrast to the Lq of about 1 mm (when mapped back to the outboard midplane) predicted by simple-minded extrapolation from present-day data, XGC1 shows that Lq in ITER is about 1 cm, somewhat smaller than the average blob size. Supported by US DOE and the INCITE program.

  13. [Path analysis of the Influence of Hospital Ethical Climate Perceived by Nurses on Supervisor Trust and Organizational Effectiveness].

    PubMed

    Noh, Yoon Goo; Jung, Myun Sook

    2016-12-01

    The purpose of this study was to analyze the paths of influence that a hospital's ethical climate exerts on nurses' organizational commitment and organizational citizenship behavior, with supervisor trust as the mediating factor, and to verify the compatibility of the models in hospital nurses. The sample consisted of 374 nurses recruited from four hospitals in 3 cities in Korea. The measurements included the Ethical Climate Questionnaire, Supervisor Trust Questionnaire, Organizational Commitment Questionnaire and Organizational Citizenship Behavior Questionnaire. The Ethical Climate Questionnaire consisted of six factors: benevolence, personal morality, company rules and procedures, laws and professional codes, self-interest, and efficiency. Data were analysed using SPSS version 18.0 and AMOS version 18.0. Supervisor trust was explained by benevolence and self-interest (29.8%). Organizational commitment was explained by benevolence, supervisor trust, personal morality, and rules and procedures (40.4%). Organizational citizenship behavior was explained by supervisor trust, laws and codes, and benevolence (21.8%). Findings indicate that managers need to develop a positive hospital ethical climate in order to improve nurses' trust in supervisors, organizational commitment and organizational citizenship behavior.

  14. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining

    PubMed Central

    Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-01-01

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946

  15. Self-Calibrated In-Process Photogrammetry for Large Raw Part Measurement and Alignment before Machining.

    PubMed

    Mendikute, Alberto; Yagüe-Fabra, José A; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai

    2017-09-09

    Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras.

  16. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in current clinical neurology and cognitive neuroscience research. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. With such a weight, each iteration has a better chance of rectifying the local source-location bias present in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the results were consistent with the source areas involved in visual processing reported in previous studies.
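    The reweighting scheme can be sketched on a toy underdetermined system. This is a hedged reconstruction from the abstract: the lead-field matrix, the neighborhood structure, and the max-over-neighborhood weight are all illustrative assumptions, not the published CMOSS formulas.

```python
import numpy as np

def reweighted_sparse_solve(L, b, neighbors, iters=25, eps=1e-12):
    """FOCUSS-style iterative reweighting for the underdetermined system
    L x = b.  Plain FOCUSS takes w_i = |x_i| from the previous iterate;
    here each weight also consults the point's neighbors (a maximum over
    the neighborhood, used as an illustrative stand-in for the exact
    CMOSS weight rule, which the abstract does not spell out)."""
    n = L.shape[1]
    x = np.ones(n)
    for _ in range(iters):
        w = np.array([max(abs(x[j]) for j in [i] + neighbors[i])
                      for i in range(n)])
        W = np.diag(w + eps)
        x = W @ np.linalg.pinv(L @ W) @ b  # weighted minimum-norm solution
    return x

# Toy 1-D "source space": 6 points on a line, 3 sensors (made-up lead field).
L = np.array([[1.0, 0.0, 2.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0, 0.0, 2.0]])
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
x_true = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
b = L @ x_true
x = reweighted_sparse_solve(L, b, neighbors)
```

    The iterate concentrates onto the true source location while still fitting the measurements exactly.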

  17. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.

    PubMed

    Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris

    2017-09-18

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high-quality three-dimensional images. Our approach is based on a joint estimation of the alignment errors and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
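    The joint estimation loop can be caricatured in one dimension, with the "object" taken as the mean of the currently aligned copies and each copy's shift re-estimated against it by cross-correlation. This is only a toy analogue of the alternating structure, not the authors' tomographic reconstruction-reprojection code:

```python
import numpy as np

def joint_align(stack, iters=5):
    """Alternate between (1) forming the object as the mean of the
    currently aligned copies and (2) re-estimating each copy's circular
    integer shift by FFT cross-correlation against that object."""
    n_img, n = stack.shape
    shifts = np.zeros(n_img, dtype=int)
    for _ in range(iters):
        obj = np.mean([np.roll(stack[k], -shifts[k]) for k in range(n_img)],
                      axis=0)
        F_obj = np.conj(np.fft.fft(obj))
        for k in range(n_img):
            xc = np.real(np.fft.ifft(np.fft.fft(stack[k]) * F_obj))
            shifts[k] = int(np.argmax(xc))
    return shifts, obj

# Four circularly shifted copies of a Gaussian bump (known true shifts).
n = 64
base = np.exp(-((np.arange(n) - 32.0) ** 2) / (2.0 * 2.0 ** 2))
true_shifts = np.array([0, 3, 59, 7])  # 59 == -5 mod 64
stack = np.stack([np.roll(base, s) for s in true_shifts])
shifts, obj = joint_align(stack)
```

    The recovered shifts carry the usual global-offset ambiguity, so only relative shifts are meaningful.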

  18. Exploiting parallel computing with limited program changes using a network of microcomputers

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.; Sobieszczanski-Sobieski, J.

    1985-01-01

    Network computing and multiprocessor computers are two discernible trends in parallel processing. The computational behavior of an iterative distributed process in which some subtasks are completed later than others because of an imbalance in computational requirements is of significant interest. The effects of asynchronous processing were studied. A small existing program was converted to perform finite element analysis by distributing substructure analysis over a network of four Apple IIe microcomputers connected to a shared disk, simulating a parallel computer. The substructure analysis uses an iterative, fully stressed, structural resizing procedure. A framework of beams divided into three substructures is used as the finite element model. The effects of asynchronous processing on the convergence of the design variables are determined by not resizing particular substructures on various iterations.

  19. An iterative requirements specification procedure for decision support systems.

    PubMed

    Brookes, C H

    1987-08-01

    Requirements specification is a key element in a DSS development project because it not only determines what is to be done, it also drives the evolution process. A procedure for requirements elicitation is described that is based on the decomposition of the DSS design task into a number of functions, subfunctions, and operators. It is postulated that the procedure facilitates the building of a DSS that is complete and that integrates MIS, modelling, and expert system components. Examples are drawn from the health administration field.

  20. Predictive Power Estimation Algorithm (PPEA) - A New Algorithm to Reduce Overfitting for Genomic Biomarker Discovery

    PubMed Central

    Liu, Jiangang; Jolly, Robert A.; Smith, Aaron T.; Searfoss, George H.; Goldstein, Keith M.; Uversky, Vladimir N.; Dunker, Keith; Li, Shuyu; Thomas, Craig E.; Wei, Tao

    2011-01-01

    Toxicogenomics promises to aid in predicting adverse effects, understanding the mechanisms of drug action or toxicity, and uncovering unexpected or secondary pharmacology. However, modeling adverse effects using high-dimensional and high-noise genomic data is prone to over-fitting. Models constructed from such data sets often consist of a large number of genes with no obvious functional relevance to the biological effect the model intends to predict, which can make it challenging to interpret the modeling results. To address these issues, we developed a novel algorithm, the Predictive Power Estimation Algorithm (PPEA), which estimates the predictive power of each individual transcript through an iterative two-way bootstrapping procedure. By repeatedly enforcing that the sample number is larger than the transcript number in each iteration of modeling and testing, PPEA reduces the potential risk of overfitting. We show with three different case studies that: (1) PPEA can quickly derive a reliable rank order of the predictive power of individual transcripts in a relatively small number of iterations, (2) the top-ranked transcripts tend to be functionally related to the phenotype they are intended to predict, (3) using only the most predictive top-ranked transcripts greatly facilitates the development of multiplex assays such as qRT-PCR as biomarkers, and (4) more importantly, we were able to demonstrate that a small number of genes identified from the top-ranked transcripts are highly predictive of phenotype, as their expression changes distinguished adverse from nonadverse effects of compounds in completely independent tests. Thus, we believe that the PPEA model effectively addresses the over-fitting problem and can be used to facilitate genomic biomarker discovery for predictive toxicology and drug responses. PMID:21935387
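    The two-way bootstrapping idea can be sketched schematically. Everything below (the univariate correlation score, the subset size, the function name) is an illustrative assumption reconstructed from the abstract, not the published PPEA implementation:

```python
import numpy as np

def rank_predictive_power(X, y, n_iter=300, subset=3, seed=0):
    """Iterative two-way bootstrap in the spirit of PPEA: each iteration
    draws (1) a bootstrap of the samples and (2) a random transcript
    subset smaller than the sample count, then credits each chosen
    transcript with its absolute correlation to the outcome on the
    out-of-bag samples.  Keeping subset << n_samples per iteration is
    what guards against overfitting.  (Illustrative scoring rule only.)"""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    score = np.zeros(p)
    hits = np.zeros(p)
    for _ in range(n_iter):
        boot = rng.integers(0, n, size=n)          # bootstrap draw
        oob = np.setdiff1d(np.arange(n), boot)     # held-out samples
        if oob.size < 3:
            continue
        cols = rng.choice(p, size=subset, replace=False)
        for j in cols:
            r = np.corrcoef(X[oob, j], y[oob])[0, 1]
            score[j] += abs(r)
            hits[j] += 1
    return score / np.maximum(hits, 1)

# Synthetic data: 40 samples, 10 "transcripts", outcome driven by column 0.
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 10))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(40)
power = rank_predictive_power(X, y)
```

    Repeating many small fits yields a stable rank order in which the truly predictive transcript rises to the top.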

  1. Restoration of multichannel microwave radiometric images

    NASA Technical Reports Server (NTRS)

    Chin, R. T.; Yeh, C. L.; Olson, W. S.

    1983-01-01

    A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. The method is based on the Gerchberg-Papoulis algorithm, which utilizes incomplete information and partial constraints. The procedure is described in terms of orthogonal projection operators that project iteratively onto two prescribed subspaces. Some of its properties and limitations are also presented. The selection of appropriate constraints is emphasized for a practical application: multichannel microwave images, each having a different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.
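    The two alternating projections can be sketched for a single channel: re-impose the measured diffraction-limited spectrum, then a spatial-domain constraint. The specific constraint (nonnegativity) and the passband below are illustrative assumptions; as the abstract notes, constraint selection is application-dependent:

```python
import numpy as np

def gp_restore(g, passband, iters=200):
    """Gerchberg-Papoulis-style restoration: project iteratively onto
    (1) the set of images whose spectrum matches the measured data inside
    the diffraction-limited passband and (2) the set of nonnegative
    images.  Both projections are orthogonal, as in the POCS framework."""
    G = np.fft.fft2(g)
    f = g.copy()
    for _ in range(iters):
        F = np.fft.fft2(f)
        F[passband] = G[passband]  # enforce measured low frequencies
        f = np.real(np.fft.ifft2(F))
        f = np.clip(f, 0.0, None)  # enforce nonnegativity
    return f

# Ground truth: a bright square; the "measured" image is its band-limited copy.
n = 32
img = np.zeros((n, n))
img[12:20, 12:20] = 1.0
k = np.abs(np.fft.fftfreq(n))
passband = (k[:, None] <= 0.25) & (k[None, :] <= 0.25)
g = np.real(np.fft.ifft2(np.fft.fft2(img) * passband))
restored = gp_restore(g, passband)
```

    Both constraint sets are convex and contain the true image, so each sweep cannot increase the restoration error, and clipping the ringing artifacts strictly reduces it.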

  2. Implementation of a nonlinear concrete cracking algorithm in NASTRAN

    NASA Technical Reports Server (NTRS)

    Herting, D. N.; Herendeen, D. L.; Hoesly, R. L.; Chang, H.

    1976-01-01

    A computer code for the analysis of reinforced concrete structures was developed using NASTRAN as a basis. Nonlinear iteration procedures were developed for obtaining solutions with a wide variety of loading sequences. A direct-access file system was used to save results at each load step so that the solution module could be restarted for further analysis. A multi-nested looping capability was implemented to control the iterations and change the loads. The basis for the analysis is a set of multi-layer plate elements which allow local definition of materials and cracking properties.

  3. 40 CFR 230.5 - General procedures to be followed.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Evaluation and Testing (§ 230.61). (j) Identify appropriate and practicable changes to the project plan to... of illustration. The actual process followed may be iterative, with the results of one step leading...

  4. 40 CFR 230.5 - General procedures to be followed.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Evaluation and Testing (§ 230.61). (j) Identify appropriate and practicable changes to the project plan to... of illustration. The actual process followed may be iterative, with the results of one step leading...

  5. VIMOS Instrument Control Software Design: an Object Oriented Approach

    NASA Astrophysics Data System (ADS)

    Brau-Nogué, Sylvie; Lucuix, Christian

    2002-12-01

    The Franco-Italian VIMOS instrument is a VIsible imaging Multi-Object Spectrograph with outstanding multiplex capabilities, allowing spectra of more than 800 objects to be taken simultaneously, or integral-field spectroscopy in a 54x54 arcsec area. VIMOS is being installed at the Nasmyth focus of the third Unit Telescope of the European Southern Observatory Very Large Telescope (VLT) at Mount Paranal in Chile. This paper describes the analysis, design, and implementation of the VIMOS Instrument Control System, using UML notation. Our control group followed an object-oriented software process while keeping in mind the ESO VLT standard control concepts, for which a complete software library is available. Rather than applying a waterfall lifecycle, the ICS project used iterative development, a lifecycle consisting of several iterations. Each iteration consisted of capturing and evaluating the requirements, visual modeling for analysis and design, implementation, testing, and deployment. Depending on the project phase, iterations focused more or less on specific activities. The result is an object model (the design model), including use-case realizations. An implementation view and a deployment view complement this product. An extract of the VIMOS ICS UML model is presented, and some implementation, integration, and test issues are discussed.

  6. Highly efficient and exact method for parallelization of grid-based algorithms and its implementation in DelPhi

    PubMed Central

    Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil

    2012-01-01

    The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and, in general, is more efficient compared with other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize the iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to linear and nonlinear equations. The approach is implemented in the DelPhi program, a finite-difference Poisson-Boltzmann equation solver used to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution severalfold faster in the parallelized DelPhi than in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
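    The data dependence that distinguishes the two methods is easy to see in a scalar sketch: Gauss-Seidel consumes values already updated in the current sweep, while Jacobi reads only the previous iterate (an illustration of the textbook methods only, not the paper's parallelization):

```python
import numpy as np

def jacobi(A, b, iters):
    """Jacobi: every update reads only the previous iterate."""
    x = np.zeros_like(b)
    D = np.diag(A)
    for _ in range(iters):
        x = (b - (A @ x - D * x)) / D
    return x

def gauss_seidel(A, b, iters):
    """Gauss-Seidel: x[i] immediately uses neighbors updated in this
    sweep, which is the data dependence that hinders parallel execution."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant test system (both methods converge on it).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x_gs = gauss_seidel(A, b, 15)
x_j = jacobi(A, b, 15)
x_ref = np.linalg.solve(A, b)
```

    After the same number of sweeps the Gauss-Seidel iterate is markedly closer to the direct solution, consistent with the efficiency claim above.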

  7. Perceived sources of change in trainees' self-efficacy beliefs.

    PubMed

    Lent, Robert W; Cinamon, Rachel Gali; Bryan, Nicole A; Jezzi, Matthew M; Martin, Helena M; Lim, Robert

    2009-09-01

    Thought-listing procedures were used to examine the perceived incidence, size, direction, and bases of change in the session-level self-efficacy of therapists in training. Ninety-eight Master's-level trainees completed a cognitive assessment task immediately after each session with a client in their first practicum. Participants typically reported modest-sized, positive changes in their therapeutic self-efficacy at each session. Seven perceived sources of change in self-efficacy were identified. Some of these sources (e.g., trainees' performance evaluations, affective reactions) were consistent with general self-efficacy theory; others reflected the interpersonal performance context of therapy (e.g., perceptions of the therapeutic relationship and client behavior). Implications of the findings for training and future research on therapist development are considered. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  8. The ITER bolometer diagnostic: Status and plans

    NASA Astrophysics Data System (ADS)

    Meister, H.; Giannone, L.; Horton, L. D.; Raupp, G.; Zeidner, W.; Grunda, G.; Kalvin, S.; Fischer, U.; Serikov, A.; Stickel, S.; Reichle, R.

    2008-10-01

    A consortium consisting of four EURATOM Associations has been set up to develop the project plan for the full development of the ITER bolometer diagnostic and to continue urgent R&D activities. An overview of the current status is given, including detector development, line-of-sight optimization, and performance analysis, as well as the design of the diagnostic components and their integration in ITER. This is complemented by a presentation of the plans for future activities required to successfully implement the bolometer diagnostic, ranging from detector development through diagnostic design and prototype testing to remote-handling (RH) tools for calibration.

  9. Fractal nematic colloids

    NASA Astrophysics Data System (ADS)

    Hashemi, S. M.; Jagodič, U.; Mozaffari, M. R.; Ejtehadi, M. R.; Muševič, I.; Ravnik, M.

    2017-01-01

    Fractals are remarkable examples of self-similarity, where a structure or dynamic pattern is repeated over multiple spatial or time scales. However, little is known about how fractal stimuli such as fractal surfaces interact with their local environment if it exhibits order. Here we show geometry-induced formation of fractal defect states in Koch nematic colloids, exhibiting fractal self-similarity better than 90% over three orders of magnitude in length scale, from micrometres to nanometres. We produce polymer Koch-shaped hollow colloidal prisms of three successive fractal iterations by direct laser writing, and characterize their coupling with the nematic by polarization microscopy and numerical modelling. Explicit generation of topological defect pairs is found, with the number of defects following an exponential-law dependence and reaching a few hundred already at fractal iteration four. This work demonstrates a route for the generation of fractal topological defect states in responsive soft matter.

  10. Fostering Self-Regulated Learning in a Blended Environment Using Group Awareness and Peer Assistance as External Scaffolds

    ERIC Educational Resources Information Center

    Lin, J-W.; Lai, Y-C.; Lai, Y-C.; Chang, L-C.

    2016-01-01

    Most systems for training self-regulated learning (SRL) behaviour focus on the provision of a learner-centred environment. Such systems repeat the training process and place learners alone to experience that process iteratively. According to the relevant literature, external scaffolds are more promising for effective SRL training. In this work,…

  11. Estimating the Minimum Number of Judges Required for Test-Centred Standard Setting on Written Assessments. Do Discussion and Iteration Have an Influence?

    ERIC Educational Resources Information Center

    Fowell, S. L.; Fewtrell, R.; McLaughlin, P. J.

    2008-01-01

    Absolute standard setting procedures are recommended for assessment in medical education. Absolute, test-centred standard setting procedures were introduced for written assessments in the Liverpool MBChB in 2001. The modified Angoff and Ebel methods have been used for short answer question-based and extended matching question-based papers,…

  12. Low-authority control synthesis for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1982-01-01

    The control of vibrations of large space structures by distributed sensors and actuators is studied. A procedure is developed for calculating the feedback loop gains required to achieve specified amounts of damping. For moderate damping (Low Authority Control) the procedure is purely algebraic, but it can be applied iteratively when larger amounts of damping are required and is generalized for arbitrary time invariant systems.

  13. Parent Caregiver Self-Efficacy and Child Reactions to Pediatric Cancer Treatment Procedures

    PubMed Central

    Peterson, Amy M.; Harper, Felicity W. K.; Albrecht, Terrance L.; Taub, Jeffrey W.; Orom, Heather; Phipps, Sean; Penner, Louis A.

    2014-01-01

    This study examined how parents’ sense of self-efficacy specific to caregiving for their child during cancer treatment procedures affected children’s distress and cooperation during procedures. Potential correlates of caregiver self-efficacy (ie, demographics, child clinical characteristics, parent dispositional attributes, and social support) were also examined. Participants were 119 children undergoing cancer treatment procedures and their parents. Parents’ self-efficacy about 6 procedure-specific caregiver tasks was measured. Parents, children, nurses, and observers rated child distress and parents, nurses and observers rated child cooperation during procedures. Higher parent self-efficacy about keeping children calm during procedures predicted lower child distress and higher child cooperation during procedures. Parent dispositional attributes (eg, enduring positive mood, empathy) and social support predicted self-efficacy. Parent caregiver self-efficacy influences child distress and cooperation during procedures and is associated with certain parent attributes. Findings suggest the utility of identifying parents who would benefit from targeted interventions to increase self-efficacy about caregiving during treatment procedures. PMID:24378818

  14. Interdisciplinary Development of an Improved Emergency Department Procedural Work Surface Through Iterative Design and Use Testing in Simulated and Clinical Environments.

    PubMed

    Zhang, Xiao C; Bermudez, Ana M; Reddy, Pranav M; Sarpatwari, Ravi R; Chheng, Darin B; Mezoian, Taylor J; Schwartz, Victoria R; Simmons, Quinneil J; Jay, Gregory D; Kobayashi, Leo

    2017-03-01

    A stable and readily accessible work surface for bedside medical procedures represents a valuable tool for acute care providers. In emergency department (ED) settings, the design and implementation of traditional Mayo stands and related surface devices often limit their availability, portability, and usability, which can lead to suboptimal clinical practice conditions that may affect the safe and effective performance of medical procedures and delivery of patient care. We designed and built a novel, open-source, portable, bedside procedural surface through an iterative development process with use testing in simulated and live clinical environments. The procedural surface development project was conducted between October 2014 and June 2016 at an academic referral hospital and its affiliated simulation facility. An interdisciplinary team of emergency physicians, mechanical engineers, medical students, and design students sought to construct a prototype bedside procedural surface out of off-the-shelf hardware during a collaborative university course on health care design. After determination of end-user needs and core design requirements, multiple prototypes were fabricated and iteratively modified, with early variants featuring undermattress stabilizing supports or ratcheting clamp mechanisms. Versions 1 through 4 underwent 2 hands-on usability-testing simulation sessions; version 5 was presented at a design critique held jointly by a panel of clinical and industrial design faculty for expert feedback. Responding to select feedback elements over several surface versions, investigators arrived at a near-final prototype design for fabrication and use testing in a live clinical setting. This experimental procedural surface (version 8) was constructed and then deployed for controlled usability testing against the standard Mayo stands in use at the study site ED. 
Clinical providers working in the ED who opted to participate in the study were provided with the prototype surface and just-in-time training on its use when performing bedside procedures. Subjects completed the validated 10-item System Usability Scale postshift for the surface that they had used. The study protocol was approved by the institutional review board. Multiple prototypes and recursive design revisions resulted in a fully functional, portable, and durable bedside procedural surface that featured a stainless steel tray and intuitive hook-and-lock mechanisms for attachment to ED stretcher bed rails. Forty-two control and 40 experimental group subjects participated and completed questionnaires. The median System Usability Scale score (out of 100; higher scores associated with better usability) was 72.5 (interquartile range [IQR] 51.3 to 86.3) for the Mayo stand; the experimental surface was scored at 93.8 (IQR 84.4 to 97.5) for a difference in medians of 17.5 (95% confidence interval 10 to 27.5). Subjects reported several usability challenges with the Mayo stand; the experimental surface was reviewed as easy to use, simple, and functional. In accordance with experimental live environment deployment, questionnaire responses, and end-user suggestions, the project team finalized the design specification for the experimental procedural surface for open dissemination. An iterative, interdisciplinary approach was used to generate, evaluate, revise, and finalize the design specification for a new procedural surface that met all core end-user requirements. The final surface design was evaluated favorably on a validated usability tool against Mayo stands when use tested in simulated and live clinical settings. Copyright © 2016 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  15. Beamforming Based Full-Duplex for Millimeter-Wave Communication

    PubMed Central

    Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen

    2016-01-01

    In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
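The zero-forcing (ZF) idea behind one of the closed-form solutions can be illustrated in a few lines. This is a minimal single-stream sketch, not the paper's algorithm: the transmit beamformer is taken as the maximum-ratio (MRT) direction projected onto the null space of the self-interference (SI) channel vector, so the residual SI after beamforming is exactly zero. All channel values below are made up for illustration.

```python
def inner(a, b):
    """Hermitian inner product <a, b> = sum conj(a_i) * b_i."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

def normalize(v):
    n = abs(inner(v, v)) ** 0.5
    return [x / n for x in v]

def zf_beamformer(h, g):
    """Project the desired-channel vector h off the SI direction g, then normalize."""
    alpha = inner(g, h) / inner(g, g)
    w = [hi - alpha * gi for hi, gi in zip(h, g)]
    return normalize(w)

# 4-antenna example with made-up complex channel gains.
h = [1 + 2j, 0.5 - 1j, -1 + 0.3j, 2 + 0j]    # desired channel
g = [0.2 + 0.1j, 1 - 0.5j, 0.4 + 1j, -0.3j]  # self-interference channel

w = zf_beamformer(h, g)
si_leakage = abs(inner(g, w))    # residual SI after beamforming (~0 by construction)
signal_gain = abs(inner(h, w))   # retained desired-signal gain
```

The trade-off the paper quantifies is visible even here: nulling the SI direction sacrifices whatever component of the desired channel lies along it, which is why the closed-form schemes fall short of the upper bound in some channel conditions.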

  16. Finite-beta equilibria for Wendelstein 7-X configurations using the Princeton Iterative Equilibrium Solver code

    NASA Astrophysics Data System (ADS)

    Arndt, S.; Merkel, P.; Monticello, D. A.; Reiman, A. H.

    1999-04-01

Fixed- and free-boundary equilibria for Wendelstein 7-X (W7-X) [W. Lotz et al., Plasma Physics and Controlled Nuclear Fusion Research 1990 (Proc. 13th Int. Conf. Washington, DC, 1990), (International Atomic Energy Agency, Vienna, 1991), Vol. 2, p. 603] configurations are calculated using the Princeton Iterative Equilibrium Solver (PIES) [A. H. Reiman et al., Comput. Phys. Commun. 43, 157 (1986)] to deal with magnetic islands and stochastic regions. Usually, these W7-X configurations require a large number of iterations for PIES convergence. Here, two methods have been successfully tested in an attempt to decrease the number of iterations needed for convergence. First, periodic sequences of different blending parameters are used. Second, the initial guess is vastly improved by using results of the Variational Moments Equilibrium Code (VMEC) [S. P. Hirshman et al., Phys. Fluids 26, 3553 (1983)]. Use of these two methods has allowed verification of the Hamada condition, and a tendency toward "self-healing" of islands has been observed.

  17. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialties. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally.
This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from the conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialties are achievable and have the potential to be implemented more broadly.

  18. Self-Consistent Sources Extensions of Modified Differential-Difference KP Equation

    NASA Astrophysics Data System (ADS)

    Gegenhasi; Li, Ya-Qian; Zhang, Duo-Duo

    2018-04-01

    In this paper, we investigate a modified differential-difference KP equation which is shown to have a continuum limit into the mKP equation. It is also shown that the solution of the modified differential-difference KP equation is related to the solution of the differential-difference KP equation through a Miura transformation. We first present the Grammian solution to the modified differential-difference KP equation, and then produce a coupled modified differential-difference KP system by applying the source generation procedure. The explicit N-soliton solution of the resulting coupled modified differential-difference system is expressed in compact forms by using the Grammian determinant and Casorati determinant. We also construct and solve another form of the self-consistent sources extension of the modified differential-difference KP equation, which constitutes a Bäcklund transformation for the differential-difference KP equation with self-consistent sources. Supported by the National Natural Science Foundation of China under Grant Nos. 11601247 and 11605096, the Natural Science Foundation of Inner Mongolia Autonomous Region under Grant Nos. 2016MS0115 and 2015MS0116 and the Innovation Fund Programme of Inner Mongolia University No. 20161115

  19. A new design approach to innovative spectrometers. Case study: TROPOLITE

    NASA Astrophysics Data System (ADS)

    Volatier, Jean-Baptiste; Baümer, Stefan; Kruizinga, Bob; Vink, Rob

    2014-05-01

Designing a novel optical system is a nested iterative process. The optimization loop, from a starting point to the final system, is already mostly automated. However, this loop is part of a wider loop which is not. This wider loop starts with an optical specification and ends with a manufacturability assessment. When designing a new spectrometer with emphasis on weight and cost, numerous iterations between the optical and mechanical designers are inevitable. The optical designer must then be able to reliably produce optical designs based on new input gained from multidisciplinary studies. This paper presents a procedure that can automatically generate new starting points based on any kind of input or new constraint that might arise. These starting points can then be handed over to a generic optimization routine, making the design task extremely efficient. The optical designer's job is then not to design optical systems, but to meta-design a procedure that produces optical systems, paving the way for system-level optimization. We present here this procedure and its application to the design of TROPOLITE, a lightweight push-broom imaging spectrometer.

  20. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  1. Two-dimensional imaging of two types of radicals by the CW-EPR method

    NASA Astrophysics Data System (ADS)

    Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech

    2008-01-01

The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information that is necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of density distribution for each radical obtained by our procedure have proved that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
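The basic deconvolution step for a single radical species can be sketched with a hypothetical one-dimensional toy (not the authors' implementation): the gradient-on spectrum is modeled as a circular convolution of the gradient-off lineshape with the spin-density projection, and the projection is recovered by division in Fourier space. The data are synthetic and noiseless, so no regularization is applied; real EPR spectra would need it.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def convolve_circular(a, b):
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

# Synthetic gradient-off lineshape (narrow symmetric line) and a two-peak
# spin-density projection along one rotation direction.
lineshape = [0.0] * 16
lineshape[0], lineshape[1], lineshape[15] = 1.0, 0.6, 0.6
projection = [0.0] * 16
projection[4], projection[10] = 2.0, 1.0

measured = convolve_circular(lineshape, projection)   # gradient-on spectrum

# Deconvolution: elementwise Fourier division (noiseless, so this is exact).
L, M = dft(lineshape), dft(measured)
recovered = [x.real for x in idft([m / l for m, l in zip(M, L)])]
```

With two radical species the measured projection is a sum of two such convolutions with different lineshapes, which is why a simple division no longer suffices and the iterative fitting of the abstract is needed.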

  2. A semi-direct procedure using a local relaxation factor and its application to an internal flow problem

    NASA Technical Reports Server (NTRS)

    Chang, S. C.

    1984-01-01

    Generally, fast direct solvers are not directly applicable to a nonseparable elliptic partial differential equation. This limitation, however, is circumvented by a semi-direct procedure, i.e., an iterative procedure using fast direct solvers. An efficient semi-direct procedure which is easy to implement and applicable to a variety of boundary conditions is presented. The current procedure also possesses other highly desirable properties, i.e.: (1) the convergence rate does not decrease with an increase of grid cell aspect ratio, and (2) the convergence rate is estimated using the coefficients of the partial differential equation being solved.
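The idea of a semi-direct procedure can be sketched in one dimension. This is a hypothetical illustration, not the paper's scheme: the variable-coefficient ("nonseparable") problem -(a(x) u')' = f is solved by Richardson iteration in which each step applies a direct solver for a constant-coefficient surrogate -a_bar u''; the Thomas algorithm stands in for a fast direct solver.

```python
def thomas(lower, diag, upper, rhs):
    """Direct tridiagonal solve (Thomas algorithm); lower[0], upper[-1] unused."""
    n = len(diag)
    c, d = [0.0] * n, rhs[:]
    c[0] = upper[0] / diag[0]
    d[0] = d[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / m
        d[i] = (d[i] - lower[i] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):
        d[i] -= c[i] * d[i + 1]
    return d

n = 50
h = 1.0 / (n + 1)
a_half = [1.0 + (i + 0.5) * h for i in range(n + 1)]   # a(x) = 1 + x at cell faces
f = [1.0] * n
a_bar = sum(a_half) / len(a_half)                      # constant surrogate coefficient

def apply_L(u):
    """Variable-coefficient operator -(a u')' on the interior grid (Dirichlet BCs)."""
    out = []
    for i in range(n):
        um = u[i - 1] if i > 0 else 0.0
        up = u[i + 1] if i < n - 1 else 0.0
        out.append((-a_half[i] * um + (a_half[i] + a_half[i + 1]) * u[i]
                    - a_half[i + 1] * up) / h ** 2)
    return out

# "Fast direct solver" for the separable surrogate -a_bar u'':
lo = [-a_bar / h ** 2] * n
di = [2.0 * a_bar / h ** 2] * n
up = [-a_bar / h ** 2] * n

u, iters = [0.0] * n, 0
while True:
    r = [fi - Li for fi, Li in zip(f, apply_L(u))]
    if max(abs(ri) for ri in r) < 1e-10 or iters >= 100:
        break
    u = [ui + ei for ui, ei in zip(u, thomas(lo, di, up, r))]  # one direct solve per step
    iters += 1

# Reference: direct solve of the full variable-coefficient system.
lo_v = [-a_half[i] / h ** 2 for i in range(n)]
di_v = [(a_half[i] + a_half[i + 1]) / h ** 2 for i in range(n)]
up_v = [-a_half[i + 1] / h ** 2 for i in range(n)]
u_ref = thomas(lo_v, di_v, up_v, f)
max_diff = max(abs(x - y) for x, y in zip(u, u_ref))
```

Because a(x) here varies only between 1 and 2, the error contracts by roughly a factor of 3 per iteration, so a few dozen direct solves reach tight tolerances; the contraction rate, like the convergence properties claimed in the abstract, is set by the coefficient variation rather than the grid aspect ratio.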

  3. Robust determination of the chemical potential in the pole expansion and selected inversion method for solving Kohn-Sham density functional theory

    NASA Astrophysics Data System (ADS)

    Jia, Weile; Lin, Lin

    2017-10-01

Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization-type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike in diagonalization-type methods, the chemical potential often cannot be directly read off from the result of a single step of evaluation of the Fermi operator. Hence multiple evaluations need to be performed sequentially to compute the chemical potential to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential converges along the SCF iteration. Instead of evaluating the Fermi operator multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.
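The monotonicity that makes rigorous bound updating possible can be shown in a toy setting. This sketch is not PEXSI itself: exact eigenvalues stand in for what would be one evaluation of the Fermi operator, and a plain bisection maintains the lower/upper bounds on the chemical potential; all numbers are made up.

```python
import math

def electron_count(mu, eigs, beta=40.0):
    """Number of electrons at chemical potential mu (Fermi-Dirac occupations)."""
    return sum(1.0 / (1.0 + math.exp(beta * (e - mu))) for e in eigs)

def find_mu(eigs, n_elec, beta=40.0, tol=1e-8):
    """Bisection that maintains rigorous lower/upper bounds on mu, exploiting the
    monotonicity of the electron count in mu; in PEXSI each count evaluation would
    be one (parallelizable) application of the Fermi operator."""
    mu_lo, mu_hi = min(eigs) - 1.0, max(eigs) + 1.0
    while mu_hi - mu_lo > 1e-12:
        mu = 0.5 * (mu_lo + mu_hi)
        n = electron_count(mu, eigs, beta)
        if abs(n - n_elec) < tol:
            return mu
        if n < n_elec:
            mu_lo = mu          # too few electrons: the true mu lies above
        else:
            mu_hi = mu          # too many electrons: the true mu lies below
    return 0.5 * (mu_lo + mu_hi)

# Toy "band structure": 8 levels and 4 electrons, so mu sits in the gap.
eigs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
mu = find_mu(eigs, n_elec=4.0)
```

The paper's two-level strategy amounts to evaluating the count at several candidate mu values in parallel rather than one bisection point at a time, so the interval shrinks by more than a factor of two per SCF step at the cost of one wall-clock evaluation.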

  4. An iterative method for hydrodynamic interactions in Brownian dynamics simulations of polymer dynamics

    NASA Astrophysics Data System (ADS)

    Miao, Linling; Young, Charles D.; Sing, Charles E.

    2017-07-01

Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2)-O(N^2.25), and explicit solvent methods scale as O(N); however, both incur a significant constant per-time-step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
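The self-consistency loop at the heart of the method can be caricatured in one scalar degree of freedom. This is a drastically simplified sketch, not the authors' algorithm: a made-up configuration-dependent mobility stands in for the HI matrix, the dynamics are run with a frozen pre-averaged mobility, and the loop iterates until the mobility used and the mobility observed along the trajectory agree. All functional forms and parameters are invented for illustration.

```python
import math
import random

def mobility(x):
    # Made-up configuration-dependent mobility standing in for the HI matrix.
    return 1.0 / (1.0 + x * x)

def trajectory_average(mu_bar, rng, steps=20000, dt=0.01):
    """Overdamped dynamics in a harmonic well, run with a frozen averaged mobility
    mu_bar (cheap noise!); returns the trajectory average of the true
    configuration-dependent mobility."""
    x, acc = 0.0, 0.0
    for _ in range(steps):
        x += -mu_bar * x * dt + rng.gauss(0.0, math.sqrt(2.0 * mu_bar * dt))
        acc += mobility(x)
    return acc / steps

rng = random.Random(0)
mu_bar = 1.0                       # initial guess for the averaged mobility
history = [mu_bar]
for _ in range(4):                 # self-consistency sweeps
    mu_bar = trajectory_average(mu_bar, rng)
    history.append(mu_bar)
```

Because the frozen mobility only rescales time and not the equilibrium distribution in this toy, the loop settles after essentially one sweep; in the full method the averaged matrix also changes the sampled conformations, which is why a few iterations are needed.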

  5. Robust determination of the chemical potential in the pole expansion and selected inversion method for solving Kohn-Sham density functional theory.

    PubMed

    Jia, Weile; Lin, Lin

    2017-10-14

Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization-type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike in diagonalization-type methods, the chemical potential often cannot be directly read off from the result of a single step of evaluation of the Fermi operator. Hence multiple evaluations need to be performed sequentially to compute the chemical potential to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential converges along the SCF iteration. Instead of evaluating the Fermi operator multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.

  6. Developing "My Asthma Diary": a process exemplar of a patient-driven arts-based knowledge translation tool.

    PubMed

    Archibald, Mandy M; Hartling, Lisa; Ali, Samina; Caine, Vera; Scott, Shannon D

    2018-06-05

Although it is well established that family-centered education is critical to managing childhood asthma, the information needs of parents of children with asthma are not being met through current educational approaches. Patient-driven educational materials that leverage the power of storytelling and the arts show promise in communicating health information and assisting in illness self-management. However, such arts-based knowledge translation approaches are in their infancy, and little is known about how to develop such tools for parents. This paper reports on the development of "My Asthma Diary" - an innovative knowledge translation tool based on rigorous research evidence and tailored to parents' asthma-related information needs. We used a multi-stage process to develop four eBook prototypes of "My Asthma Diary." We conducted formative research on parents' information needs and identified high quality research evidence on childhood asthma, and used these data to inform the development of the asthma eBooks. We established interdisciplinary consulting teams with health researchers, practitioners, and artists to help iteratively create the knowledge translation tools. We describe the iterative, transdisciplinary process of developing asthma eBooks which incorporates: (I) parents' preferences and information needs on childhood asthma, (II) quality evidence on childhood asthma and its management, and (III) the engaging and informative powers of storytelling and visual art as methods to communicate complex health information to parents. We identified four dominant methodological and procedural challenges encountered during this process: (I) working within an inter-disciplinary team, (II) quantity and ordering of information, (III) creating a composite narrative, and (IV) balancing actual and ideal management scenarios.
We describe a replicable and rigorous multi-staged approach to developing a patient-driven, creative knowledge translation tool, which can be adapted for use with different populations and contexts. We identified specific procedural and methodological challenges that others conducting comparable work should consider, particularly as creative, patient-driven knowledge translation strategies continue to emerge across health disciplines.

  7. Experiment of low resistance joints for the ITER correction coil.

    PubMed

    Liu, Huajun; Wu, Yu; Wu, Weiyue; Liu, Bo; Shi, Yi; Guo, Shuai

    2013-01-01

A test method was designed and performed to measure the joint resistance of the ITER correction coil (CC) at liquid helium (LHe) temperature. A 10 kA superconducting transformer was manufactured to supply current to the joints. The transformer consisted of two concentric layer-wound superconducting solenoids. NbTi superconducting wire was wound in the primary coil and the ITER CC conductor was wound in the secondary coil. The primary and secondary coils were both immersed in liquid helium in a cryostat with a 300 mm useful bore diameter. Two ITER CC joints were assembled in the secondary loop and tested. The current of the secondary loop was ramped to 9 kA in several steps. The two joint resistances were measured to be 1.2 nΩ and 1.65 nΩ, respectively.

  8. Improvement of tritium accountancy technology for ITER fuel cycle safety enhancement

    NASA Astrophysics Data System (ADS)

    O'hira, S.; Hayashi, T.; Nakamura, H.; Kobayashi, K.; Tadokoro, T.; Nakamura, H.; Itoh, T.; Yamanishi, T.; Kawamura, Y.; Iwai, Y.; Arita, T.; Maruyama, T.; Kakuta, T.; Konishi, S.; Enoeda, M.; Yamada, M.; Suzuki, T.; Nishi, M.; Nagashima, T.; Ohta, M.

    2000-03-01

In order to improve the safe handling and control of tritium for the ITER fuel cycle, effective in situ tritium accounting methods have been developed at the Tritium Process Laboratory of the Japan Atomic Energy Research Institute under one of the ITER-EDA R&D tasks. The remote and multilocation analysis of process gases by laser Raman spectroscopy, developed and tested here, could provide a measurement of hydrogen isotope gases with a detection limit of 0.3 kPa and analytical periods of 120 s. An in situ tritium inventory measurement by application of a 'self-assaying' storage bed with 25 g tritium capacity could provide a measurement with the required detection limit of less than 1%, and a design proof of a bed with 100 g tritium capacity.

  9. Applications of self-control procedures by children: a review.

    PubMed Central

    O'Leary, S G; Dubey, D R

    1979-01-01

    Self-control procedures as used by children to affect their own behavior were reviewed. Particular emphasis was placed on self-instruction, self-determined criteria, self-assessment, and self-reinforcement. Self-punishment, comprehensive programs, and innovative self-control procedures (distraction and restatement of contingencies) were also evaluated. Basic effectiveness, comparisons with similar externally imposed interventions, maintenance, and the augmental value of the procedures were assessed. Important problems for future research were identified. PMID:389917

  10. PADF RF localization experiments with multi-agent caged-MAV platforms

    NASA Astrophysics Data System (ADS)

    Barber, Christopher; Gates, Miguel; Selmic, Rastko; Al-Issa, Huthaifa; Ordonez, Raul; Mitra, Atindra

    2011-06-01

This paper provides a summary of preliminary RF direction finding results generated within an AFOSR funded testbed facility recently developed at Louisiana Tech University. This facility, denoted as the Louisiana Tech University Micro-Aerial Vehicle/Wireless Sensor Network (MAVSeN) Laboratory, has recently acquired a number of state-of-the-art MAV platforms that enable us to analyze, design, and test some of our recent results in the area of multiplatform position-adaptive direction finding (PADF) [1] [2] for localization of RF emitters in challenging embedded multipath environments. Discussions within the segmented sections of this paper include a description of the MAVSeN Laboratory and the preliminary results from the implementation of mobile platforms with the PADF algorithm. This novel approach to multi-platform RF direction finding is based on the investigation of iterative path-loss based (i.e. path loss exponent) metric estimates that are measured across multiple platforms in order to develop a control law that robotically/intelligently adapts (i.e. self-adjusts) the location of each distributed/cooperative platform. The body of this paper provides a summary of our recent results on PADF and includes a discussion on state-of-the-art Sensor Mote Technologies as applied towards the development of a sensor-integrated caged-MAV platform for PADF applications. Also, a discussion of recent experimental results that incorporate sample approaches to real-time single-platform data pruning is included as part of a discussion on potential approaches to refining a basic PADF technique in order to integrate and perform distributed self-sensitivity and self-consistency analysis as part of a PADF technique with distributed robotic/intelligent features. These techniques are extracted in analytical form from a parallel study denoted as "PADF RF Localization Criteria for Multi-Model Scattering Environments".
The focus here is on developing and reporting specific approaches to self-sensitivity and self-consistency within this experimental PADF framework via the exploitation of specific single-agent caged-MAV trajectories that are unique to this experiment set.

  11. Computer method for identification of boiler transfer functions

    NASA Technical Reports Server (NTRS)

    Miles, J. H.

    1972-01-01

An iterative computer-aided procedure was developed for identifying boiler transfer functions from frequency response data. The method obtains satisfactory transfer functions for both high and low vapor-exit-quality data.
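The record does not specify the fitting scheme, but the general task — fitting a transfer function to frequency response data — can be sketched with a simplified Levy-style linear least-squares fit of a first-order model G(s) = K/(1 + tau*s). The plant parameters and frequency grid below are made up, and the data are noiseless so the fit recovers them exactly.

```python
def fit_first_order(freqs, response):
    """Least-squares fit of G(s) = K / (1 + tau*s) to frequency-response samples,
    by linearizing H*(1 + tau*j*w) = K into K - tau*(j*w*H) = H and splitting
    real/imaginary parts (a simplified Levy-style fit)."""
    rows, rhs = [], []
    for w, H in zip(freqs, response):
        jwH = 1j * w * H
        rows.append((1.0, -jwH.real)); rhs.append(H.real)
        rows.append((0.0, -jwH.imag)); rhs.append(H.imag)
    # Solve the 2x2 normal equations A^T A p = A^T b for p = (K, tau).
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * y for r, y in zip(rows, rhs))
    b2 = sum(r[1] * y for r, y in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    K = (a22 * b1 - a12 * b2) / det
    tau = (a11 * b2 - a12 * b1) / det
    return K, tau

# Synthetic "measured" frequency response of a known first-order plant.
K_true, tau_true = 3.0, 5.0
freqs = [0.01 * 2 ** k for k in range(12)]          # logarithmic frequency sweep
resp = [K_true / (1 + tau_true * 1j * w) for w in freqs]
K_est, tau_est = fit_first_order(freqs, resp)
```

With noisy data or higher-order models this linearization biases the fit at high frequencies, which is one reason iterative refinement (reweighting the equations and refitting) is the standard follow-up step.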

  12. Modeling regional freight flow assignment through intermodal terminals

    DOT National Transportation Integrated Search

    2005-03-01

    An analytical model is developed to assign regional freight across a multimodal highway and railway network using geographic information systems. As part of the regional planning process, the model is an iterative procedure that assigns multimodal fr...

  13. Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1973-01-01

Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
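The two-stage strategy — Sturm-sequence isolation followed by inverse iteration — can be sketched for the special case of a symmetric tridiagonal standard problem (B = I); the banded generalized case of the paper follows the same pattern with a band factorization. This is an illustrative sketch, not the published program.

```python
def sturm_count(diag, off, sigma):
    """Number of eigenvalues of a symmetric tridiagonal matrix below sigma,
    from the pivot signs of the LDL^T factorization of A - sigma*I."""
    count, d = 0, 1.0
    for i in range(len(diag)):
        d = (diag[i] - sigma) - (off[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:
            d = -1e-30            # standard tiny perturbation to avoid breakdown
        if d < 0.0:
            count += 1
    return count

def kth_eigenvalue(diag, off, k, tol=1e-12):
    """Isolate the k-th smallest eigenvalue (k = 1, 2, ...) by bisection on the
    Sturm count, starting from a Gershgorin enclosure of the spectrum."""
    r = max(abs(a) for a in diag) + 2 * max(abs(b) for b in off)
    lo, hi = -r, r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(diag, off, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def tridiag_solve(diag, off, rhs):
    """Thomas algorithm for a symmetric tridiagonal system."""
    n = len(diag)
    c, d = [0.0] * n, rhs[:]
    c[0] = off[0] / diag[0]
    d[0] = d[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - off[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = off[i] / den
        d[i] = (d[i] - off[i - 1] * d[i - 1]) / den
    for i in range(n - 2, -1, -1):
        d[i] -= c[i] * d[i + 1]
    return d

def inverse_iteration(diag, off, sigma, sweeps=3):
    """A few inverse-iteration sweeps at shift sigma; the tiny offset keeps
    A - sigma*I numerically invertible."""
    shifted = [a - sigma - 1e-10 for a in diag]
    v = [1.0] * len(diag)
    for _ in range(sweeps):
        v = tridiag_solve(shifted, off, v)
        nrm = sum(x * x for x in v) ** 0.5
        v = [x / nrm for x in v]
    return v

# Example: the 4x4 tridiagonal [2, -1] stencil; smallest root is 2 - 2*cos(pi/5).
diag, off = [2.0, 2.0, 2.0, 2.0], [-1.0, -1.0, -1.0]
lam = kth_eigenvalue(diag, off, 1)
vec = inverse_iteration(diag, off, lam)
Av = [diag[i] * vec[i]
      + (off[i - 1] * vec[i - 1] if i > 0 else 0.0)
      + (off[i] * vec[i + 1] if i < 3 else 0.0)
      for i in range(4)]
residual = max(abs(a - lam * v) for a, v in zip(Av, vec))
```

Because the shift lands essentially on the eigenvalue, a single near-singular solve already aligns the vector with the eigendirection; the growth of the solution is what makes inverse iteration so effective once a root has been isolated.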

  14. Systems and methods for predicting materials properties

    DOEpatents

    Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano

    2007-11-06

    Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.

  15. [Tissular expansion in giant congenital nevi treatment].

    PubMed

    Nguyen Van Nuoi, V; Francois-Fiquet, C; Diner, P; Sergent, B; Zazurca, F; Franchi, G; Buis, J; Vazquez, M-P; Picard, A; Kadlub, N

    2014-08-01

Surgical management of giant melanotic naevi remains a surgical challenge. Tissue expansion provides tissue of the same quality for the repair of defects. The aim of this study is to review tissular expansion for giant melanotic naevi. We conducted a retrospective study from 2000 to 2012. All children who underwent tissular expansion for giant congenital naevi were included. Epidemiological data, surgical procedure, complication rate and results were analysed. Thirty-three patients were included; they underwent 61 procedures with 79 tissular-expansion prostheses. Previous surgery, mostly simple excision, had been performed before tissular expansion. Complete naevus excision was performed in 63.3% of the cases. Complications occurred in 45% of the cases; however, 50% of them were minor. Iterative surgery increased the complication rate. Tissular expansion is a valuable option for giant congenital naevus. However, the complication rate remained high, especially when iterative surgery is needed. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  16. Iterative-Transform Phase Retrieval Using Adaptive Diversity

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.

    2007-01-01

    A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure).
An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. The amount of aberration recovered varies as the diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map. However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps.
During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
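
The inner loop described in the abstract can be sketched in a few lines. The following toy implementation is only illustrative: the uniform pupil amplitude, the uniform-weight (rather than weighted) average, and the omission of the adaptive outer loop are all simplifying assumptions, not the flight algorithm.

```python
import numpy as np

def retrieve_phase(images, diversity, n_iter=30):
    # Inner loop: one iterative-transform pass per defocus image, then an
    # average of the individual phase estimates (the article's algorithm
    # uses a weighted average and adds the adaptive outer loop).
    phase = np.zeros(images[0].shape)   # initial phase estimate
    amp = np.ones(images[0].shape)      # assumed uniform pupil amplitude
    for _ in range(n_iter):
        estimates = []
        for intensity, div in zip(images, diversity):
            pupil = amp * np.exp(1j * (phase + div))
            focal = np.fft.fft2(pupil)
            # Impose the measured focal-plane intensity, keep the phase.
            focal = np.sqrt(intensity) * np.exp(1j * np.angle(focal))
            back = np.fft.ifft2(focal)
            estimates.append(np.angle(back) - div)   # strip the diversity term
        phase = np.mean(estimates, axis=0)
    return phase
```

In the full algorithm the outer loop would then fit and off-load the multi-wavelength part of this estimate into the diversity terms before the next inner-loop pass.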

  17. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Du; Yang, Weitao

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  18. Robust and Efficient Spin Purification for Determinantal Configuration Interaction.

    PubMed

    Fales, B Scott; Hohenstein, Edward G; Levine, Benjamin G

    2017-09-12

    The limited precision of floating point arithmetic can lead to the qualitative and even catastrophic failure of quantum chemical algorithms, especially when high accuracy solutions are sought. For example, numerical errors accumulated while solving for determinantal configuration interaction wave functions via Davidson diagonalization may lead to spin contamination in the trial subspace. This spin contamination may cause the procedure to converge to roots with undesired ⟨Ŝ²⟩, wasting computer time in the best case and leading to incorrect conclusions in the worst. In hopes of finding a suitable remedy, we investigate five purification schemes for ensuring that the eigenvectors have the desired ⟨Ŝ²⟩. These schemes are based on projection, penalty, and iterative approaches. All of these schemes rely on a direct, graphics processing unit-accelerated algorithm for calculating the Ŝ²c matrix-vector product. We assess the computational cost and convergence behavior of these methods by application to several benchmark systems and find that the first-order spin penalty method is the optimal choice, though first-order and Löwdin projection approaches also provide fast convergence to the desired spin state. Finally, to demonstrate the utility of these approaches, we computed the lowest several excited states of an open-shell silver cluster (Ag₁₉) using the state-averaged complete active space self-consistent field method, where spin purification was required to ensure spin stability of the CI vector coefficients. Several low-lying states with significant multiply excited character are predicted, suggesting the value of a multireference approach for modeling plasmonic nanomaterials.
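
The penalty idea is easy to demonstrate on a toy pair of commuting matrices standing in for H and Ŝ². The sketch below uses a quadratic penalty for simplicity (the paper's preferred first-order variant differs in detail), and all matrices and spin labels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Toy commuting "Hamiltonian" and "S^2" operators sharing a random eigenbasis.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
h_vals = rng.normal(size=n)
s2_vals = np.array([0.0, 2.0, 0.0, 6.0, 2.0, 2.0, 6.0, 0.0])  # <S^2> labels
H = Q @ np.diag(h_vals) @ Q.T
S2 = Q @ np.diag(s2_vals) @ Q.T

def lowest_state_with_spin(H, S2, target, mu=100.0):
    # Quadratic spin penalty: undesired-spin roots are shifted up by
    # mu * (<S^2> - target)^2, so the lowest penalized root has target spin.
    P = S2 - target * np.eye(len(H))
    w, v = np.linalg.eigh(H + mu * (P @ P))
    return w[0], v[:, 0]

e0, c0 = lowest_state_with_spin(H, S2, target=2.0)
```

In a real CI code the same shift would be applied inside the Davidson matrix-vector product rather than by dense diagonalization.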

  19. Accurate and efficient calculation of excitation energies with the active-space particle-particle random phase approximation

    DOE PAGES

    Zhang, Du; Yang, Weitao

    2016-10-13

    An efficient method for calculating excitation energies based on the particle-particle random phase approximation (ppRPA) is presented. Neglecting the contributions from the high-lying virtual states and the low-lying core states leads to the significantly smaller active-space ppRPA matrix while keeping the error to within 0.05 eV from the corresponding full ppRPA excitation energies. The resulting computational cost is significantly reduced and becomes less than the construction of the non-local Fock exchange potential matrix in the self-consistent-field (SCF) procedure. With only a modest number of active orbitals, the original ppRPA singlet-triplet (ST) gaps as well as the low-lying single and double excitation energies can be accurately reproduced at much reduced computational costs, up to 100 times faster than the iterative Davidson diagonalization of the original full ppRPA matrix. For high-lying Rydberg excitations where the Davidson algorithm fails, the computational savings of active-space ppRPA with respect to the direct diagonalization is even more dramatic. The virtues of the underlying full ppRPA combined with the significantly lower computational cost of the active-space approach will significantly expand the applicability of the ppRPA method to calculate excitation energies at a cost of O(K^{4}), with a prefactor much smaller than a single SCF Hartree-Fock (HF)/hybrid functional calculation, thus opening up new possibilities for the quantum mechanical study of excited state electronic structure of large systems.

  20. Defense Advanced Research Projects Agency (DARPA) Network Archive (DNA)

    DTIC Science & Technology

    2008-12-01

    …therefore decided for an iterative development process even within such a small project. The first iteration consisted of conducting specific…

  1. AUTOMOTIVE DIESEL MAINTENANCE 1. UNIT XX, CUMMINS DIESEL ENGINE, MAINTENANCE SUMMARY.

    ERIC Educational Resources Information Center

    Minnesota State Dept. of Education, St. Paul. Div. of Vocational and Technical Education.

    THIS MODULE OF A 30-MODULE COURSE IS DESIGNED TO PROVIDE A SUMMARY OF THE REASONS AND PROCEDURES FOR DIESEL ENGINE MAINTENANCE. TOPICS ARE WHAT ENGINE BREAK-IN MEANS, ENGINE BREAK-IN, TORQUING BEARINGS (TEMPLATE METHOD), AND THE NEED FOR MAINTENANCE. THE MODULE CONSISTS OF A SELF-INSTRUCTIONAL BRANCH PROGRAMED TRAINING FILM "CUMMINS DIESEL ENGINE…

  2. Allocating Study Time Appropriately: Spontaneous and Instructed Performance.

    ERIC Educational Resources Information Center

    Dufresne, Annette; And Others

    Two aspects of allocation of study time were examined among 48 third- and 48 fifth-grade children. Aspects examined were: (1) allocation of more time to more difficult material; and (2) allocation of sufficient time to meet a recall goal. Under a self-terminated procedure, children studied two booklets, one of which consisted of easy or highly…

  3. Enthalpy-based multiple-relaxation-time lattice Boltzmann method for solid-liquid phase-change heat transfer in metal foams.

    PubMed

    Liu, Qing; He, Ya-Ling; Li, Qing

    2017-08-01

    In this paper, an enthalpy-based multiple-relaxation-time (MRT) lattice Boltzmann (LB) method is developed for solid-liquid phase-change heat transfer in metal foams under the local thermal nonequilibrium (LTNE) condition. The enthalpy-based MRT-LB method consists of three different MRT-LB models: one for flow field based on the generalized non-Darcy model, and the other two for phase-change material (PCM) and metal-foam temperature fields described by the LTNE model. The moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is simultaneously obtained when the energy equations of PCM and metal foam are solved. The present method has several distinctive features. First, as compared with previous studies, the present method avoids the iteration procedure; thus it retains the inherent merits of the standard LB method and is superior to the iteration method in terms of accuracy and computational efficiency. Second, a volumetric LB scheme instead of the bounce-back scheme is employed to realize the no-slip velocity condition in the interface and solid phase regions, which is consistent with the actual situation. Last but not least, the MRT collision model is employed, and with additional degrees of freedom, it has the ability to reduce the numerical diffusion across the phase interface induced by solid-liquid phase change. Numerical tests demonstrate that the present method can serve as an accurate and efficient numerical tool for studying metal-foam enhanced solid-liquid phase-change heat transfer in latent heat storage. Finally, comparisons and discussions are made to offer useful information for practical applications of the present method.
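
The iteration-free enthalpy update that the abstract contrasts with iterative interface tracking can be illustrated with a scalar finite-difference analogue (not the authors' lattice Boltzmann scheme); all parameters below are invented, nondimensional values:

```python
import numpy as np

# 1D explicit enthalpy-method sketch for melting at T_m = 0.
nx = 50
dx, dt = 1.0 / nx, 1e-4
k, rho_c, L = 1.0, 1.0, 1.0     # conductivity, heat capacity, latent heat
H = np.full(nx, -0.5)           # enthalpy field: start solid at T = -0.5

def temperature_and_fraction(H):
    # T and the liquid fraction f follow algebraically from the enthalpy --
    # no interface iteration is needed at any step.
    f = np.clip(H / L, 0.0, 1.0)                      # mushy zone: 0 <= H <= L
    T = np.where(H < 0.0, H / rho_c,
                 np.where(H > L, (H - L) / rho_c, 0.0))
    return T, f

for _ in range(2000):
    T, f = temperature_and_fraction(H)
    Tg = np.concatenate(([1.0], T, [T[-1]]))          # hot left wall, insulated right
    H += dt * k * (Tg[2:] - 2.0 * Tg[1:-1] + Tg[:-2]) / dx**2

T, f = temperature_and_fraction(H)
```

The melt front is simply the region where f passes from 1 to 0; it is tracked implicitly through the enthalpy, which is the property the MRT-LB method exploits.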

  4. Enthalpy-based multiple-relaxation-time lattice Boltzmann method for solid-liquid phase-change heat transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Liu, Qing; He, Ya-Ling; Li, Qing

    2017-08-01

    In this paper, an enthalpy-based multiple-relaxation-time (MRT) lattice Boltzmann (LB) method is developed for solid-liquid phase-change heat transfer in metal foams under the local thermal nonequilibrium (LTNE) condition. The enthalpy-based MRT-LB method consists of three different MRT-LB models: one for flow field based on the generalized non-Darcy model, and the other two for phase-change material (PCM) and metal-foam temperature fields described by the LTNE model. The moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is simultaneously obtained when the energy equations of PCM and metal foam are solved. The present method has several distinctive features. First, as compared with previous studies, the present method avoids the iteration procedure; thus it retains the inherent merits of the standard LB method and is superior to the iteration method in terms of accuracy and computational efficiency. Second, a volumetric LB scheme instead of the bounce-back scheme is employed to realize the no-slip velocity condition in the interface and solid phase regions, which is consistent with the actual situation. Last but not least, the MRT collision model is employed, and with additional degrees of freedom, it has the ability to reduce the numerical diffusion across the phase interface induced by solid-liquid phase change. Numerical tests demonstrate that the present method can serve as an accurate and efficient numerical tool for studying metal-foam enhanced solid-liquid phase-change heat transfer in latent heat storage. Finally, comparisons and discussions are made to offer useful information for practical applications of the present method.

  5. Computer program for solving laminar, transitional, or turbulent compressible boundary-layer equations for two-dimensional and axisymmetric flow

    NASA Technical Reports Server (NTRS)

    Harris, J. E.; Blanchard, D. K.

    1982-01-01

    A numerical algorithm and computer program are presented for solving the laminar, transitional, or turbulent two dimensional or axisymmetric compressible boundary-layer equations for perfect-gas flows. The governing equations are solved by an iterative three-point implicit finite-difference procedure. The software, program VGBLP, is a modification of the approaches presented in NASA TR R-368 and NASA TM X-2458. The major modifications are: (1) replacement of the fourth-order Runge-Kutta integration technique with a finite-difference procedure for numerically solving the equations required to initiate the parabolic marching procedure; (2) introduction of the Blottner variable-grid scheme; (3) implementation of an iteration scheme allowing the coupled system of equations to be converged to a specified accuracy level; and (4) inclusion of an iteration scheme for variable-entropy calculations. These modifications to the approach presented in NASA TR R-368 and NASA TM X-2458 yield a software package with high computational efficiency and flexibility. Turbulence-closure options include either two-layer eddy-viscosity or mixing-length models. Eddy conductivity is modeled as a function of eddy viscosity through a static turbulent Prandtl number formulation. Several options are provided for specifying the static turbulent Prandtl number. The transitional boundary layer is treated through a streamwise intermittency function which modifies the turbulence-closure model. This model is based on the probability distribution of turbulent spots and ranges from zero to unity for laminar and turbulent flow, respectively. Several test cases are presented as guides for potential users of the software.

  6. Inferring the demographic history from DNA sequences: An importance sampling approach based on non-homogeneous processes.

    PubMed

    Ait Kaci Azzou, S; Larribe, F; Froda, S

    2016-10-01

    In Ait Kaci Azzou et al. (2015) we introduced an Importance Sampling (IS) approach for estimating the demographic history of a sample of DNA sequences, the skywis plot. More precisely, we proposed a new nonparametric estimate of a population size that changes over time. We showed on simulated data that the skywis plot can work well in typical situations where the effective population size does not undergo very steep changes. In this paper, we introduce an iterative procedure which extends the previous method and gives good estimates under such rapid variations. In the iterative calibrated skywis plot we approximate the effective population size by a piecewise constant function, whose values are re-estimated at each step. These piecewise constant functions are used to generate the waiting times of non homogeneous Poisson processes related to a coalescent process with mutation under a variable population size model. Moreover, the present IS procedure is based on a modified version of the Stephens and Donnelly (2000) proposal distribution. Finally, we apply the iterative calibrated skywis plot method to a simulated data set from a rapidly expanding exponential model, and we show that the method based on this new IS strategy correctly reconstructs the demographic history. Copyright © 2016. Published by Elsevier Inc.

  7. An efficient strongly coupled immersed boundary method for deforming bodies

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Colonius, Tim

    2016-11-01

    Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
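
The convergence contrast between the two strategies is easy to reproduce on a toy strongly coupled pair of scalar equations; the equations and the 0.8 coupling constant below are invented for illustration, not taken from the abstract's formulation:

```python
import numpy as np

# Toy coupled system standing in for the fluid (u) / structure (x) blocks.
def residual(z):
    u, x = z
    return np.array([u - np.cos(x), x - 0.8 * np.sin(u)])

def block_gauss_seidel(tol=1e-12, max_iter=500):
    u = x = 0.0
    for it in range(1, max_iter + 1):
        u = np.cos(x)            # "solve" the fluid block with x frozen
        x = 0.8 * np.sin(u)      # "solve" the solid block with the new u
        if np.linalg.norm(residual(np.array([u, x]))) < tol:
            break
    return np.array([u, x]), it

def newton(tol=1e-12, max_iter=50):
    z = np.zeros(2)
    for it in range(1, max_iter + 1):
        u, x = z
        J = np.array([[1.0, np.sin(x)],                  # analytic Jacobian
                      [-0.8 * np.cos(u), 1.0]])
        z = z - np.linalg.solve(J, residual(z))
        if np.linalg.norm(residual(z)) < tol:
            break
    return z, it

z_gs, it_gs = block_gauss_seidel()
z_nt, it_nt = newton()
```

Both converge to the same root, with Newton needing far fewer iterations; the authors' contribution is obtaining that Newton-like behavior without forming large Jacobians, via a block LU factorization of the linearized system.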

  8. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
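
The least-squares step at the heart of such algorithms is compact. The sketch below solves the per-pixel normal equations with the phase-shift amounts assumed known; the paper's algorithm alternates this step with a re-estimate of the unknown shifts, which is omitted here:

```python
import numpy as np

def lsq_phase(frames, deltas):
    # Model per pixel: I_k = A + B*cos(phi + delta_k)
    #                      = A + (B cos phi) cos(delta_k) - (B sin phi) sin(delta_k)
    K = len(deltas)
    M = np.column_stack([np.ones(K), np.cos(deltas), -np.sin(deltas)])
    I = np.stack([f.ravel() for f in frames])          # K x npix intensity matrix
    coef, *_ = np.linalg.lstsq(M, I, rcond=None)       # rows: A, B cos phi, B sin phi
    phi = np.arctan2(coef[2], coef[1])
    return phi.reshape(frames[0].shape)
```

With noise-free frames and known shifts, the wrapped phase is recovered exactly; the iterative scheme extends this to shifts that are themselves unknown.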

  9. A block iterative finite element algorithm for numerical solution of the steady-state, compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Cooke, C. H.

    1976-01-01

    An iterative method for numerically solving the time independent Navier-Stokes equations for viscous compressible flows is presented. The method is based upon partial application of the Gauss-Seidel principle in block form to the systems of nonlinear algebraic equations which arise in construction of finite element (Galerkin) models approximating solutions of fluid dynamic problems. The C deg-cubic element on triangles is employed for function approximation. Computational results for a free shear flow at Re = 1,000 indicate significant achievement of economy in iterative convergence rate over finite element and finite difference models which employ the customary time dependent equations and asymptotic time marching procedure to steady solution. Numerical results are in excellent agreement with those obtained for the same test problem employing time marching finite element and finite difference solution techniques.

  10. Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection

    DOE PAGES

    Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...

    2017-09-18

    As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of the alignment errors and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
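
The joint estimate-and-refine loop can be sketched in one dimension, with circular shifts standing in for the per-projection alignment errors and a simple mean of the re-aligned observations standing in for the tomographic reconstruction; the signal and shift values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 12
signal = np.convolve(rng.normal(size=n), np.ones(5) / 5.0, mode="same")
true_shifts = rng.integers(-5, 6, size=m)
obs = np.stack([np.roll(signal, s) for s in true_shifts])   # misaligned data

shifts = np.zeros(m, dtype=int)
recon = obs[0].copy()                    # crude starting "reconstruction"
for _ in range(5):                       # joint refinement loop
    F = np.conj(np.fft.fft(recon))
    for i in range(m):
        # Estimated shift = peak of the circular cross-correlation between
        # each observation and the current reconstruction ("reprojection").
        xcorr = np.fft.ifft(np.fft.fft(obs[i]) * F).real
        k = int(np.argmax(xcorr))
        shifts[i] = (k + n // 2) % n - n // 2     # wrap to a signed shift
    recon = np.mean([np.roll(o, -s) for o, s in zip(obs, shifts)], axis=0)
```

The shifts are recovered only up to a common offset (here, relative to the first observation), which is the analogue of the global reference ambiguity in tomographic alignment.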

  11. Single-shot dual-wavelength in-line and off-axis hybrid digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Fengpeng; Wang, Dayong; Rong, Lu; Wang, Yunxin; Zhao, Jie

    2018-02-01

    We propose an in-line and off-axis hybrid holographic real-time imaging technique. The in-line and off-axis digital holograms are generated simultaneously by two lasers with different wavelengths, and they are recorded using a color camera with a single shot. The reconstruction is carried out using an iterative algorithm in which the initial input is designed to include the intensity of the in-line hologram and the approximate phase distribution obtained from the off-axis hologram. In this way, the complex field in the object plane output by the iterative procedure can produce higher-quality amplitude and phase images than traditional iterative phase retrieval. The performance of the technique has been demonstrated by acquiring the amplitude and phase images of a green lacewing's wing and a living moon jellyfish.

  12. Developing a Conceptually Equivalent Type 2 Diabetes Risk Score for Indian Gujaratis in the UK

    PubMed Central

    Patel, Naina; Stone, Margaret; Barber, Shaun; Gray, Laura; Davies, Melanie; Khunti, Kamlesh

    2016-01-01

    Aims. To apply and assess the suitability of a model consisting of commonly used cross-cultural translation methods to achieve a conceptually equivalent Gujarati language version of the Leicester self-assessment type 2 diabetes risk score. Methods. Implementation of the model involved multiple stages, including pretesting of the translated risk score by conducting semistructured interviews with a purposive sample of volunteers. Interviews were conducted on an iterative basis to enable findings to inform translation revisions and to elicit volunteers' ability to self-complete and understand the risk score. Results. The pretest stage was an essential component involving recruitment of a diverse sample of 18 Gujarati volunteers, many of whom gave detailed suggestions for improving the instructions for the calculation of the risk score and BMI table. Volunteers found the standard and level of Gujarati accessible and helpful in understanding the concept of risk, although many of the volunteers struggled to calculate their BMI. Conclusions. This is the first time that a multicomponent translation model has been applied to the translation of a type 2 diabetes risk score into another language. This project provides an invaluable opportunity to share learning about the transferability of this model for translation of self-completed risk scores in other health conditions. PMID:27703985

  13. Temporally extended self-awareness and affective engagement in three-year-olds.

    PubMed

    Zocchi, Silvia; Borasio, Francesca; Rivolta, Davide; Rositano, Luana; Scotti, Ilaria; Liccione, Davide

    2018-01-01

    The aim of the current study was to analyze the role of affective engagement during social interaction in the emergence of a temporally extended self (TES). A Delayed Self Recognition task was administered in two different social contexts: in the presence of the mother ("Mother condition") or in the presence of an unfamiliar person ("Experimenter condition"). The same sample of 71 three-year-olds was tested twice, once in each condition. Results showed higher self-recognition scores in the "Mother condition". These findings are consistent with developing-self theories that emphasize the impact of reciprocal social interaction on the emergence of self-awareness, and support a conception of the Self as a dialogic entity. We interpret this link as evidence that, when completing the procedure with their mother, children are aware of her attention, which corresponds to a familiar mode of self-perception, as well as to a peculiar affective consciousness of Self. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

    Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy is presented to solve the equations, with partitioning of the temporal discretization.

  15. Computer-Aided Design Of Turbine Blades And Vanes

    NASA Technical Reports Server (NTRS)

    Hsu, Wayne Q.

    1988-01-01

    Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.

  16. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt.17, 107003 (2012)JBOPFO1083-366810.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt.19, 077002 (2014)JBOPFO1083-366810.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
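
The two-step idea (coarse lookup-table guess, then iterative refinement) transfers directly to generic curve fitting. The exponential model, parameter grids, and values below are invented stand-ins, unrelated to the paper's two-layered tissue model:

```python
import numpy as np

def model(a, b, x):
    return a * np.exp(-b * x)    # invented two-parameter "spectrum" model

x = np.linspace(0.0, 2.0, 40)
y = model(1.7, 0.9, x)           # synthetic noise-free "measurement"

# Step 1: coarse lookup table over a parameter grid -> initial guess.
a_grid = np.linspace(0.5, 3.0, 26)
b_grid = np.linspace(0.1, 2.0, 20)
table = np.array([[np.sum((model(a, b, x) - y) ** 2) for b in b_grid]
                  for a in a_grid])
ia, ib = np.unravel_index(np.argmin(table), table.shape)
p = np.array([a_grid[ia], b_grid[ib]])

# Step 2: iterative (Gauss-Newton) refinement starting from the table guess.
for _ in range(30):
    r = model(p[0], p[1], x) - y
    J = np.column_stack([np.exp(-p[1] * x),               # d model / d a
                         -p[0] * x * np.exp(-p[1] * x)])  # d model / d b
    p = p - np.linalg.solve(J.T @ J, J.T @ r)
```

The table supplies a guess close enough that the local iteration converges quickly and avoids distant local minima, which mirrors the role of the lookup table in the paper's procedure.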

  17. Dielectric response of periodic systems from quantum Monte Carlo calculations.

    PubMed

    Umari, P; Willamson, A J; Galli, Giulia; Marzari, Nicola

    2005-11-11

    We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.

  18. Computational method for the correction of proximity effect in electron-beam lithography (Poster Paper)

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas

    1992-07-01

    Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate these problems by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
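A steepest-descent deconvolution of this kind can be sketched in one dimension: minimize the squared mismatch between the blurred dose and the target pattern by gradient descent. The Gaussian point spread function, grid size, and step size below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# 1-D toy dose-correction problem: find a dose profile d whose convolution
# with the point spread function (PSF) best reproduces the target pattern t.
n = 64
x = np.arange(n)
psf = np.exp(-((x - n // 2) ** 2) / 8.0)   # illustrative Gaussian PSF
psf /= psf.sum()
# Circulant convolution matrix: row i is the PSF centered at position i.
K = np.array([np.roll(psf, i - n // 2) for i in range(n)])

t = np.zeros(n)
t[20:30] = 1.0    # desired exposure pattern: one 10-pixel feature

d = t.copy()      # start from the uncorrected (uniform-in-feature) dose
alpha = 1.0       # step size; stable here because the PSF sums to 1
for _ in range(2000):
    residual = K @ d - t
    d -= alpha * (K.T @ residual)   # steepest-descent update on ||K d - t||^2
```

Each iteration only needs convolutions with the PSF, avoiding the large simultaneous linear system of the 'self-consistent' approach; spatial-frequency components beyond the PSF bandwidth converge slowly, which is the usual limitation of such deconvolution.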

  19. Three dimensional fluid-kinetic model of a magnetically guided plasma jet

    NASA Astrophysics Data System (ADS)

    Ramos, Jesús J.; Merino, Mario; Ahedo, Eduardo

    2018-06-01

    A fluid-kinetic model of the collisionless plasma flow in a convergent-divergent magnetic nozzle is presented. The model combines the leading-order Vlasov equation and the fluid continuity and perpendicular momentum equation for magnetized electrons, and the fluid equations for cold ions, which must be solved iteratively to determine the self-consistent plasma response in a three-dimensional magnetic field. The kinetic electron solution identifies three electron populations and provides the plasma density and pressure tensor. The far downstream asymptotic behavior shows the anisotropic cooling of the electron populations. The fluid equations determine the electric potential and the fluid velocities. In the small ion-sound gyroradius case, the solution is constructed one magnetic line at a time. In the large ion-sound gyroradius case, ion detachment from magnetic lines makes the problem fully three-dimensional.
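The iterative determination of a self-consistent plasma response can be illustrated with a zero-dimensional caricature: at one point in the divergent nozzle, cold-ion continuity and energy conservation give the ion density, Boltzmann electrons give the electron density, and the potential is iterated to quasineutrality. The Mach number `M0` and area ratio `R` are made-up values, and the actual model solves full three-dimensional fluid-kinetic equations rather than this scalar balance.

```python
import math

# Normalized units (T_e = 1, throat density = 1).  Hypothetical values:
M0 = 1.5   # supersonic ion Mach number at the nozzle throat
R = 2.0    # magnetic-tube area expansion ratio at the downstream point

phi = 0.0  # electric potential relative to the throat
for _ in range(50):
    # Cold-ion continuity + energy conservation along the magnetic line:
    n_ion = M0 / (R * math.sqrt(M0**2 - 2.0 * phi))
    # Boltzmann electrons: quasineutrality fixes the updated potential.
    phi = math.log(n_ion)
```

The fixed-point map is contracting for `phi <= 0`, so the iteration converges to the negative downstream potential that accelerates the ions, mirroring (very loosely) the line-by-line construction used in the small ion-sound gyroradius case.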

  20. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    PubMed

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
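The payoff of a Cholesky factorization of the two-electron integrals is that the Coulomb and exchange matrices can be assembled from three-index vectors alone, without ever forming the four-index tensor. The sketch below uses random symmetric stand-ins for the Cholesky vectors and density matrix — not real electron-repulsion integrals — to show the contractions.

```python
import numpy as np

rng = np.random.default_rng(0)
nbf, naux = 6, 10                      # basis functions, Cholesky vectors
B = rng.standard_normal((naux, nbf, nbf))
B = 0.5 * (B + B.transpose(0, 2, 1))   # each vector B[L] symmetric in (p, q)
D = rng.standard_normal((nbf, nbf))
D = 0.5 * (D + D.T)                    # symmetric stand-in density matrix

# Coulomb:  J_pq = sum_L B^L_pq * (sum_rs B^L_rs D_rs)
J = np.einsum('Lpq,Lrs,rs->pq', B, B, D)
# Exchange: K_pq = sum_L sum_rs B^L_pr D_rs B^L_qs
K = np.einsum('Lpr,rs,Lqs->pq', B, D, B)

# Reference values from the explicitly reconstructed four-index tensor,
# (pq|rs) = sum_L B^L_pq B^L_rs, for comparison only.
eri = np.einsum('Lpq,Lrs->pqrs', B, B)
J_ref = np.einsum('pqrs,rs->pq', eri, D)
K_ref = np.einsum('prqs,rs->pq', eri, D)
```

Because each Cholesky vector is a small matrix, the vectors can be distributed across nodes and contracted locally in every SCF iteration, which is the storage and communication pattern the abstract describes.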
